


IoT forensic science uses technical methods to solve problems related to the investigation of incidents involving IoT devices. Some of the technical ways that IoT forensic science solves problems include:

  1. Data Extraction and Analysis: IoT forensic science uses advanced software tools to extract data from IoT devices, such as logs, sensor readings, and network traffic. The data is then analyzed to identify relevant information, such as timestamps, geolocation, and device identifiers, which can be used to reconstruct events leading up to an incident.

  2. Reverse Engineering: IoT forensic science uses reverse engineering techniques to understand the underlying functionality of IoT devices. This involves analyzing the hardware and software components of the device to identify vulnerabilities, backdoors, and other features that may be relevant to an investigation.

  3. Forensic Imaging: IoT forensic science uses forensic imaging techniques to preserve the state of IoT devices and ensure that the data collected is admissible in court. This involves creating a complete copy of the device's storage and memory, which can then be analyzed without altering the original data.

  4. Cryptography and Data Security: IoT forensic science uses cryptography and data security techniques to ensure the integrity and confidentiality of data collected from IoT devices. This includes the use of encryption, digital signatures, and other security measures to protect data during storage, analysis, and transmission.

  5. Machine Learning: IoT forensic science uses machine learning algorithms to automate the analysis of large amounts of data generated by IoT devices. This can help investigators identify patterns and anomalies that may be relevant to an investigation.

IoT forensic science uses many more (and more advanced) technical methods to solve problems related to the investigation of incidents involving IoT devices. By leveraging these techniques, investigators can collect, analyze, and present digital evidence from IoT devices that can be used to reconstruct events and support legal proceedings.
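As a concrete illustration of points 1 and 5 above, here is a minimal, hypothetical sketch in C: a few extracted log records (hard-coded for the example) are sorted by timestamp to reconstruct a timeline, and readings far from the mean are flagged for review. Real forensic tooling would parse device-specific log formats and use far more robust statistics.

```c
/* Hypothetical sketch: reconstruct a timeline from extracted IoT log records
 * and flag readings that deviate strongly from the mean. */
#include <math.h>
#include <stdio.h>
#include <stdlib.h>

struct event {
    long ts;            /* Unix timestamp extracted from the device log */
    double value;       /* sensor reading associated with the event */
    const char *device; /* device identifier */
};

static int by_time(const void *a, const void *b) {
    long d = ((const struct event *)a)->ts - ((const struct event *)b)->ts;
    return (d > 0) - (d < 0);
}

int main(void) {
    struct event log[] = {                   /* made-up extracted records */
        {1700000300, 21.0, "thermostat-1"},
        {1700000100, 20.5, "thermostat-1"},
        {1700000200, 95.0, "thermostat-1"},  /* suspicious spike */
    };
    size_t n = sizeof log / sizeof log[0];

    qsort(log, n, sizeof log[0], by_time);   /* reconstruct the timeline */

    double mean = 0.0;
    for (size_t i = 0; i < n; i++) mean += log[i].value / n;

    for (size_t i = 0; i < n; i++)
        printf("%ld %-12s %6.1f %s\n", log[i].ts, log[i].device, log[i].value,
               fabs(log[i].value - mean) > 20.0 ? "<-- flag for review" : "");
    return 0;
}
```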

Read more…

by Carsten Gregersen

With how fast the IoT industry is growing, it’s paramount your business isn’t left behind.

IoT technology has brought a ton of benefits and makes systems more efficient and easier to manage. As a result, it’s no surprise that more businesses are adopting IoT solutions. On top of that, businesses starting new projects have the slight advantage of buying all new technology and, therefore, not having to deal with legacy systems. 

On the other hand, if you have an already operational legacy system and you want to implement IoT, you may think you have to buy entirely new technology to get it online, right? Not necessarily. After all, if your legacy systems are still functional and your staff is comfortable with them, why should you waste all of that time and money?

Legacy systems can still bend to your will and be used for adopting IoT. Sticking rather than twisting can help your business save money on your IoT project.

In this blog, we’ll go over the steps you would need to follow for integrating IoT technology into your legacy systems and the different options you have to get this done.

1. Analyze Your Current Systems

First things first, take a look at your current systems and take note of their purpose, the way they work, the type of data that they collect, and the way they could benefit by communicating with each other.

This step is important because it will allow you to plan out IoT integration more efficiently. When analyzing your current systems, make sure you focus on these key aspects:

  • Automation – How automation is currently accomplished and what else could be automated
  • Efficiency – Which aspects are routinely tedious or slow and could become more efficient
  • Data – How data is collected, stored, and processed, and how it could be used better
  • Money – How much key processes cost, so you know which ones could be done more cheaply with IoT
  • Computing – How data is processed, whether in the cloud, at the edge, or in a hybrid setup

Following these steps will help you know your project in and out and apply IoT in the areas that truly matter.

2. Plan for IoT Integration

In order to integrate IoT into your legacy systems, you must get everything in order. 

To successfully integrate IoT into your system, you will need strong planning, design, and implementation phases. Steps you will need to follow to achieve this include:

  • Decide what IoT hardware is going to be needed
  • Set a budget taking software, hardware, and maintenance into account
  • Decide on a communication protocol
  • Develop software tools for interacting with the system
  • Decide on a security strategy

This process can be daunting if you don't know how IoT works, but by following the right tutorials and developing with the right tools, your IoT project can be easily realized.

Nabto has tools that can not only help you set up an IoT project but also add legacy systems and newer IoT devices to it.

Here are several ways in which we can help get your legacy systems IoT ready. 

  • You can integrate the Nabto SDK to add IoT remote control access to your devices.
  • Use the Nabto application to move data from one network to another – otherwise known as TCP tunneling.
  • Add secure remote access to your existing solutions. 
  • Build mobile apps for remote control of embedded devices using our IoT app solution.

3. Add IoT Sensors to Existing Hardware

IoT has the capability to automate, control, and make systems more efficient. Therefore, interconnecting your legacy systems to allow for communication is a great idea.

There’s a high chance your legacy systems don’t currently have the ability to sense or communicate data. However, adding new IoT sensors can give them these capabilities.

IoT sensors are small devices that can detect when something changes. Then, they capture and send information to a main computer over the internet to be processed or to execute commands. These could measure (but are not limited to):

  • Temperature
  • Humidity
  • Pressure
  • Orientation (gyroscope)
  • Acceleration (accelerometer)

These sensors are cheap and easy to install; therefore, adding them to your existing legacy systems can be the simplest and quickest way to get them communicating over the internet.

Set up which inputs the sensor should respond to and under what conditions, and what it should do with the collected data. You could be surprised by the benefits that making a simple device to collect data can have for your project!
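As a rough sketch of that idea, the loop below polls a retrofitted sensor and only acts when a configured condition is met; read_temperature() is a hypothetical stand-in for whatever ADC or I2C driver the sensor actually uses.

```c
/* Hypothetical sketch: poll a retrofitted sensor and report only when a
 * configured condition is met. */
#include <stdio.h>
#include <unistd.h>

/* Placeholder for the real sensor driver call (an assumption, not a real API). */
static double read_temperature(void) {
    return 21.5;
}

int main(void) {
    const double alarm_threshold_c = 60.0;   /* condition the sensor reacts to */
    const unsigned int interval_s = 10;      /* sampling interval */

    for (;;) {
        double t = read_temperature();
        if (t > alarm_threshold_c) {
            /* In a real deployment this would publish to a gateway or the cloud. */
            printf("ALERT: temperature %.1f C exceeds %.1f C\n",
                   t, alarm_threshold_c);
        }
        sleep(interval_s);
    }
    return 0;
}
```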

4. Connect Existing PLCs to the Internet

If you already have an automated system managed by a PLC (Programmable Logic Controller), your devices already share data with each other. Therefore, the next step is to get them online.

With access to the internet, these systems can be controlled remotely from anywhere in the world. Data can be accessed, modified, and analyzed more easily. On top of that, updates can be pushed globally at any time.

Given that some PLCs utilize proprietary protocols and unconventional ways of making devices communicate with each other, an IoT gateway is the best way to bring the PLC to the internet.

An IoT gateway is a device that acts as a bridge between IoT devices and the cloud and allows for communication between them. This lets you add IoT capabilities to a PLC without having to restructure it or change it too much.
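To make the idea concrete, here is a minimal sketch of such a gateway loop under a few assumptions of mine: the PLC exposes its proprietary protocol as raw frames on a serial device (hypothetical path /dev/ttyS0), and the cloud side is a plain TCP endpoint at a made-up address. A production gateway would add serial-port configuration, TLS, reconnection logic, and real protocol parsing.

```c
/* Hypothetical gateway sketch: read raw frames from a PLC serial link and
 * forward them, lightly wrapped, to a cloud endpoint over TCP. */
#include <arpa/inet.h>
#include <fcntl.h>
#include <netinet/in.h>
#include <stdio.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void) {
    /* 1. Open the serial link to the PLC (termios/baud setup omitted here). */
    int plc = open("/dev/ttyS0", O_RDONLY | O_NOCTTY);   /* assumed device path */
    if (plc < 0) { perror("open serial"); return 1; }

    /* 2. Connect to the cloud side (made-up address and port). */
    int cloud = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(9000);
    inet_pton(AF_INET, "203.0.113.10", &addr.sin_addr);
    if (connect(cloud, (struct sockaddr *)&addr, sizeof addr) < 0) {
        perror("connect"); return 1;
    }

    /* 3. Translate: wrap each raw frame and push it upstream. Real code would
     *    decode the proprietary protocol and hex-encode binary payloads. */
    char frame[256], msg[320];
    for (;;) {
        ssize_t n = read(plc, frame, sizeof frame - 1);
        if (n <= 0) break;
        frame[n] = '\0';
        int len = snprintf(msg, sizeof msg, "{\"plc_frame\":\"%s\"}\n", frame);
        write(cloud, msg, (size_t)len);
    }

    close(plc);
    close(cloud);
    return 0;
}
```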

5. Connect Legacy Systems Using an I/O Port

A lot of the time, a legacy system has some kind of interface for data input/output. Sometimes this was implemented for debugging when the product was developed. At other times, it exists so that service organizations can interface with products in the field and help customers with setup or debugging problems.

These debug ports are often real serial ports, such as RS-485 or RS-232. That being said, they can also be more raw interfaces such as UART, SPI, or I2C. What's more, the majority of the time the protocol on top of the serial connection is proprietary.

This kind of interface is great. It allows a "black box" to be created: a physical interface matching the legacy system, with firmware running on the black box that translates "internet" requests into the proprietary protocol of the legacy system. In addition, this new system can serve as a design for newer, internet-accessible versions of the product simply by adopting the black box into the internal legacy design.

Bottom Line

Getting your legacy systems to work in IoT is not as much of a challenge as you might have initially thought.

Following some fairly simple strategies can let you set them up relatively quickly. However, don't forget the planning phase for your IoT strategy and deciding how it will be implemented in your own legacy system. This will streamline the process even further and let you take full advantage of all the benefits that IoT brings to your project.

Originally posted here.

Read more…

Written by: Mirko Grabel

Edge computing brings a number of benefits to the Internet of Things: reduced latency, improved resiliency and availability, lower costs, and local data storage (to assist with regulatory compliance), to name a few. In my last blog post I examined some of these benefits as a means of defining exactly where the edge is. Now let's take a closer look at how edge computing benefits play out in real-world IoT use cases.

Benefit No. 1: Reduced latency

Many applications have strict latency requirements, but when it comes to safety and security applications, latency can be a matter of life or death. Consider, for example, an autonomous vehicle applying brakes or roadside signs warning drivers of upcoming hazards. By the time data is sent to the cloud and analyzed, and a response is returned to the car or sign, lives can be endangered. But let’s crunch some numbers just for fun.

Say a Department of Transportation in Florida is considering a cloud service to host the apps for its roadside signs. One of the vendors on the DoT's shortlist is a cloud in California. The DoT's latency requirement is less than 15 ms. Light propagation in fiber takes about 5 μs/km. The distance from the U.S. east coast to the west coast is about 5,000 km. Do the math and the resulting round-trip latency is 50 ms. It's pure physics. If the DoT requires a real-time response, it must move the compute closer to the devices.
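The arithmetic is simple enough to sketch in a few lines of C (5 μs/km and 5,000 km are the figures from the example above):

```c
/* Back-of-the-envelope fiber latency for the roadside-sign example. */
#include <stdio.h>

int main(void) {
    const double distance_km     = 5000.0;  /* east coast to west coast */
    const double fiber_us_per_km = 5.0;     /* ~5 microseconds per km in fiber */

    double one_way_ms    = distance_km * fiber_us_per_km / 1000.0;
    double round_trip_ms = 2.0 * one_way_ms;

    printf("one-way: %.0f ms, round trip: %.0f ms\n", one_way_ms, round_trip_ms);
    /* Prints: one-way: 25 ms, round trip: 50 ms -- well over the 15 ms budget. */
    return 0;
}
```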

Benefit No. 2: Improved resiliency/availability

Critical infrastructure requires the highest level of availability and resiliency to ensure safety and continuity of services. Consider a refinery gas leakage detection system. It must be able to operate without Internet access. If the system goes offline and there’s a leakage, that’s an issue. Compute must be done at the edge. In this case, the edge may be on the system itself.

While it’s not a life-threatening use case, retail operations can also benefit from the availability provided by edge compute. Retailers want their Point of Sale (PoS) systems to be available 100% of the time to service customers. But some retail stores are in remote locations with unreliable WAN connections. Moving the PoS systems onto their edge compute enables retailers to maintain high availability.

Benefit No. 3: Reduced costs

Bandwidth is almost infinite, but it comes at a cost. Edge computing allows organizations to reduce bandwidth costs by processing data before it crosses the WAN. This benefit applies to any use case, but here are two example use cases where it is very evident: video surveillance and preventive maintenance. For example, a single city-deployed HD video camera may generate 1,296 GB a month. Streaming that data over LTE easily becomes cost prohibitive. Adding edge compute to pre-aggregate the data significantly reduces those costs.

Manufacturers use edge computing for preventive maintenance of remote machinery. Sensors are used to monitor temperatures and vibrations. The freshness of this data is critical, as the slightest variation can indicate a problem. To ensure that issues are caught as early as possible, the application requires high-resolution data (for example, 1,000 samples per second). Rather than sending all of this data over the Internet to be analyzed, edge compute is used to filter the data, and only averages, anomalies, and threshold violations are sent to the cloud.
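A minimal sketch of that filtering step might look like the following, assuming a hypothetical read_vibration() driver call and a 1,000-sample window: only the window summary, threshold violations, and crude statistical anomalies would leave the device.

```c
/* Hypothetical edge pre-aggregation: summarize a high-rate sensor window and
 * forward only averages, threshold violations, and simple anomalies. */
#include <math.h>
#include <stdio.h>

#define WINDOW 1000        /* e.g. 1,000 samples per second */
#define LIMIT  0.5         /* threshold-violation level (arbitrary units) */

static double read_vibration(void) { return 0.02; }  /* stand-in for real driver */

int main(void) {
    double sample[WINDOW];
    double sum = 0.0;

    for (int i = 0; i < WINDOW; i++) {        /* collect one window locally */
        sample[i] = read_vibration();
        sum += sample[i];
    }
    double mean = sum / WINDOW;

    double var = 0.0;                         /* standard deviation for anomaly check */
    for (int i = 0; i < WINDOW; i++)
        var += (sample[i] - mean) * (sample[i] - mean);
    double sd = sqrt(var / WINDOW);

    printf("summary: mean=%.4f sd=%.4f\n", mean, sd);  /* the only routine upload */
    for (int i = 0; i < WINDOW; i++) {
        if (sample[i] > LIMIT)
            printf("threshold violation: sample=%d value=%.4f\n", i, sample[i]);
        else if (sd > 0.0 && fabs(sample[i] - mean) > 3.0 * sd)
            printf("anomaly: sample=%d value=%.4f\n", i, sample[i]);
    }
    return 0;
}
```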

Benefit No. 4: Comply with government regulations

Countries are increasingly instituting privacy and data retention laws. The European Union’s General Data Protection Regulation (GDPR) is a prime example. Any organization that has data belonging to an EU citizen is required to meet the GDPR’s requirements, which includes an obligation to report leaks of personal data. Edge computing can help these organizations comply with GDPR. For example, instead of storing and backhauling surveillance video, a smart city can evaluate the footage at the edge and only backhaul the meta data.

Canada's Water Act: National Hydrometric Program is another edge computing use case that delivers regulatory compliance benefits. As part of the program, about 3,000 measurement stations have been implemented nationwide. Any missing data requires justification, and storing data at the edge ensures data retention.

Bonus Benefit: “Because I want to…”

Finally, some users simply prefer to have full control. By implementing compute at the edge rather than the cloud, users have greater flexibility. We have seen this in manufacturing. Technicians want to have full control over the machinery. Edge computing gives them this control as well as independence from IT. The technicians know the machinery best and security and availability remain top of mind.

Summary

By reducing latency and costs, improving resiliency and availability, and keeping data local, edge computing opens up a new world of IoT use cases. Those described here are just the beginning. It will be exciting to see where edge computing turns up next.

Originally posted here.

Read more…

On-chip UHD SS–MSCs as a device-unitized power source. Credit: Professor Sang-Young Lee, UNIST

by Ulsan National Institute of Science and Technology

A tiny microsupercapacitor (MSC) that is as small as the width of a person's fingerprint and can be integrated directly with an electronic chip has been developed. It has attracted major attention as a novel technology for the Internet of Things (IoT) era, since it can operate independently when applied to individual electronic components.

 

Through the study, Professor Sang-Young Lee and his research team in the School of Energy and Chemical Engineering at UNIST have unveiled a new class of ultrahigh areal number density solid-state MSCs (UHD SS–MSCs) on a chip via electrohydrodynamic (EHD) jet printing. According to the research team, this is the first study to exploit EHD jet printing in the MSCs.

A supercapacitor (SC), also known as an ultracapacitor, can store much more energy than ordinary capacitors. The benefits of supercapacitors include high power delivery and longer cycle life compared to lithium-based secondary batteries. In particular, they can be produced as small as the width of a person's fingerprint via semiconductor manufacturing processes, and thus are also applicable to wearables and Internet of Things (IoT) devices.

However, because the heat produced in the manufacturing process may degrade the electrical characteristics of the supercapacitor, it has been difficult to connect supercapacitors directly to electronic components. In addition, the fabrication method that combines supercapacitors with electronic components via inkjet printing also has the disadvantage of lower precision.

The research team solved this issue using EHD jet printing, a high-resolution patterning technique used in microelectronics. EHD jet printing uses electrode and electrolyte inks for printing, similar to conventional inkjet printing, yet it can control the printed liquid with an electric field.

"We were able to produce up to 54.9 unit cells per square centimeter (cm2) via electro-hydrodynamic jet printing technique, and thus the output of 65.9 volts (V) was achieved in the same area," says Kwonhyung Lee (Combined M.S/Ph.D. of Energy and Chemical Engineering, UNIST), the first author of the study.

The team also succeeded in fabricating 36 unit cells on a chip (area = 8.0 mm × 8.2 mm, 54.9 cells cm−2) with an areal operating voltage (65.9 V cm−2) that lies far beyond those of previously reported MSCs fabricated by printing techniques. Besides, upon exposure to high temperature (80 degrees C), these cells maintained normal cyclic voltammetry (CV) profiles, proving that they can withstand the excessive heat generated during the operation of actual electronic components. In addition, these cells can provide customized power supplies, as they can be connected either in series or in parallel.

"In this study, we have demonstrated on-chip UHD SS–MSCs fabricated via EHD jet printing," says Professor Lee. "The on-chip UHD SS–MSCs presented here hold great promise as a new platform technology for miniaturized monolithic power sources with customized design and tunable electrochemical properties."

Originally posted here.

 
Read more…

A scientist from Russia has developed a new neural network architecture and tested its learning ability on the recognition of handwritten digits. The intelligence of the network was amplified by chaos, and the classification accuracy reached 96.3%. The network can be used in microcontrollers with a small amount of RAM and embedded in such household items as shoes or refrigerators, making them 'smart.' The study was published in Electronics.

Today, the search for new neural networks that can operate on microcontrollers with a small amount of random access memory (RAM) is of particular importance. For comparison, in ordinary modern computers, random access memory is measured in gigabytes. Although microcontrollers possess significantly less processing power than laptops and smartphones, they are smaller and can be interfaced with household items. Smart doors, refrigerators, shoes, glasses, kettles and coffee makers create the foundation for so-called ambient intelligence. The term denotes an environment of interconnected smart devices.

An example of ambient intelligence is a smart home. The devices with limited memory are not able to store a large number of keys for secure data transfer and arrays of neural network settings. It prevents the introduction of artificial intelligence into Internet of Things devices, as they lack the required computing power. However, artificial intelligence would allow smart devices to spend less time on analysis and decision-making, better understand a user and assist them in a friendly manner. Therefore, many new opportunities can arise in the creation of environmental intelligence, for example, in the field of health care.

Andrei Velichko from Petrozavodsk State University, Russia, has created a new neural network architecture that allows efficient use of small volumes of RAM and opens opportunities for the introduction of low-power devices to the Internet of Things. The network, called LogNNet, is a feed-forward neural network in which the signals are directed exclusively from input to output. It uses deterministic chaotic filters on the incoming signals. The system randomly mixes the input information, but at the same time extracts valuable data from the information that is invisible initially. A similar mechanism is used by reservoir neural networks. To generate chaos, a simple logistic map equation is applied, where the next value is calculated based on the previous one. The equation is commonly used in population biology and as an example of a simple equation for calculating a sequence of chaotic values. In this way, the simple equation gives the processor access to an effectively endless supply of pseudo-random numbers that can be computed on the fly, and the network architecture uses them and consumes less RAM.
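To show the core trick, here is a tiny sketch in C of the logistic-map idea (a simplified illustration, not the published LogNNet architecture): a chaotic sequence is regenerated on the fly and used as the mixing "weights" that project an input vector into a much smaller feature vector, so no weight matrix has to be stored in RAM.

```c
/* Sketch of chaotic mixing with the logistic map x_{n+1} = r*x_n*(1-x_n). */
#include <stdio.h>

#define IN_DIM  784   /* e.g. a 28x28 MNIST image flattened to a vector */
#define MIX_DIM  25   /* size of the mixed feature vector */

int main(void) {
    double input[IN_DIM];
    for (int i = 0; i < IN_DIM; i++)
        input[i] = (i % 10) / 10.0;          /* dummy input data */

    double features[MIX_DIM] = {0};
    double x = 0.1;                          /* seed of the chaotic sequence */
    for (int j = 0; j < MIX_DIM; j++) {
        for (int i = 0; i < IN_DIM; i++) {
            x = 4.0 * x * (1.0 - x);         /* logistic map, r = 4 (chaotic regime) */
            features[j] += (2.0 * x - 1.0) * input[i];   /* regenerated "weight" */
        }
    }

    /* The compact feature vector would now feed a small trained output layer. */
    for (int j = 0; j < MIX_DIM; j++)
        printf("%8.3f%s", features[j], (j % 5 == 4) ? "\n" : " ");
    return 0;
}
```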


The scientist tested his neural network on handwritten digit recognition from the MNIST database, which is considered the standard for training neural networks to recognize images. The database contains more than 70,000 handwritten digits. Sixty thousand of these digits are intended for training the neural network, and another 10,000 for network testing. The more neurons and chaos in the network, the better it recognized images. The maximum accuracy achieved by the network is 96.3%, while the developed architecture uses no more than 29 KB of RAM. In addition, LogNNet demonstrated promising results using very small RAM sizes, in the range of 1-2 KB. A miniature controller such as the ATmega328, which can be embedded into a smart door or even a smart insole, has approximately the same amount of memory.

"Thanks to this development, new opportunities for the Internet of Things are opening up, as any device equipped with a low-power miniature controller can be powered with artificial intelligence. In this way, a path is opened for intelligent processing of information on peripheral devices without sending data to cloud services, and it improves the operation of, for example, a smart home. This is an important contribution to the development of IoT technologies, which are actively researched by the scientists of Petrozavodsk State University. In addition, the research outlines an alternative way to investigate the influence of chaos on artificial intelligence," said Andrei Velichko.

Originally posted HERE.

by Russian Science Foundation

Image Credit: Andrei Velichko

 

 

 

 

Read more…

Theoretical Embedded Linux Requirements

Hardware

SoC

A System on Chip (SoC) is essentially an integrated circuit that takes a single platform and integrates an entire computer system onto it. It combines the power of the CPU with the other components it needs to perform and execute its functions. It is in charge of using the other hardware and running your software. The main advantages of an SoC include lower latency and power savings.

It is made of various building blocks:

  • Core + Caches + MMU – An SoC has a processor at its core which defines its functions. Normally, an SoC has multiple processor cores. A "real" processor, e.g. an ARM Cortex-A9, is the main thing kept in mind while choosing an SoC, possibly assisted by a SIMD co-processor such as NEON.
  • Internal RAM – IRAM is composed of very high-speed SRAM located alongside the CPU. It acts similarly to a CPU cache and is generally very small. It is used in the first phase of the boot sequence.
  • Peripherals – These can be a simple ADC, a DSP, or a Graphics Processing Unit which is connected via some bus to the core. A low-power/real-time co-processor helps the main core with real-time tasks or handles low-power states. Examples of such IP cores are USB, PCI-E, SGX, etc.

External RAM

An SoC uses RAM to store temporary data during and after bootstrap. It is the memory an embedded system uses during regular operation.

Non-Volatile Memory

In an embedded system or single-board computer, this is typically the SD card. In other cases, it can be NAND, NOR, or SPI data flash memory. It is the source from which the SoC reads data, and it stores all the software components needed for the system to work.

External Peripherals

An SoC must have external interfaces for standard communication protocols such as USB, Ethernet, and HDMI. It also includes wireless technology protocols such as Wi-Fi and Bluetooth.

Software


First of all, we introduce the boot chain which is the series of actions that happens when an SoC is powered up.

Boot ROM: This is a piece of code stored in ROM which is executed by the booting core when it is powered on. This code contains instructions for configuring the SoC to allow it to execute applications. The configuration performed by the Boot ROM includes initialization of the core's registers and stack pointer, enablement of caches and line buffers, programming of the interrupt service routines, and clock configuration.

Boot ROM also implements a Boot Assist Module (BAM) for downloading an application image from external memories using interfaces like Ethernet, SD/MMC, USB, CAN, UART, etc.

1st stage bootloader

The first-stage bootloader performs the following:

  • Setup the memory segments and stack used by the bootloader code
  • Reset the disk system
  • Display a string “Loading OS…”
  • Find the 2nd stage boot loader in the FAT directory
  • Read the 2nd stage boot loader image into memory at 1000:0000
  • Transfer control to the second-stage bootloader

The Boot ROM copies the first-stage bootloader into the SoC's internal RAM, so it must be tiny enough to fit that memory, usually well under 100 kB. It initializes the external RAM and the SoC's external memory interface, as well as other peripherals that may be of interest (e.g. it disables watchdog timers). Once done, it executes the next stage, which, depending on the context, could be called MLO, SPL, or something else.

2nd stage bootloader

This is the main bootloader and can be 10 times bigger than the 1st stage. It completes the initialization of the relevant peripherals:

  • Copy the boot sector to a local memory area
  • Find kernel image in the FAT directory
  • Read kernel image in memory at 2000:0000
  • Reset the disk system
  • Enable the A20 line
  • Setup interrupt descriptor table at 0000:0000
  • Setup the global descriptor table at 0000:0800
  • Load the descriptor tables into the CPU
  • Switch to protected mode
  • Clear the prefetch queue
  • Setup protected mode memory segments and stack for use by the kernel code
  • Transfer control to the kernel code using a long jump

Linux Kernel

The Linux kernel is the main component of a Linux OS and is the core interface between hardware and processes. It communicates between the hardware and processes, managing resources as efficiently as possible. The kernel performs the following jobs:

  • Memory management: Keep track of memory, how much is used to store what, and where
  • Process management: Determine which processes can use the processor, when, and for how long
  • Device drivers: Act as an interpreter between the hardware and the processes
  • System calls and security: Receive requests for the service from processes

To put the kernel in context, a Linux machine can be thought of as having 3 layers:

  • The hardware: The physical machine—the base of the system, made up of memory (RAM) and the processor (CPU), as well as input/output (I/O) devices such as storage, networking, and graphics.
  • The Linux kernel: The core of the OS. It is a software residing in memory that tells the CPU what to do.
  • User processes: These are the running programs that the kernel manages. User processes are what collectively make up user space. The kernel allows processes and servers to communicate with each other.

Init and rootfs – init is the first non-Kernel task to be run, and has PID 1. It initializes everything needed to use the system. In production embedded systems, it also starts the main application. In such systems, it is either BusyBox or a custom-crafted application.
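For illustration, a custom-crafted init of the kind mentioned above can be surprisingly small. The sketch below is a minimal, hypothetical example: it mounts the usual pseudo-filesystems, starts a made-up main application (/usr/bin/main-app is an assumed path), and then reaps orphaned children forever, which is PID 1's job.

```c
/* Minimal hypothetical init (PID 1) for a production embedded system. */
#include <sys/mount.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    /* Mount the pseudo-filesystems most userspace software expects. */
    mount("proc",  "/proc", "proc",  0, NULL);
    mount("sysfs", "/sys",  "sysfs", 0, NULL);

    /* Start the product's main application (path is an assumption). */
    pid_t app = fork();
    if (app == 0) {
        execl("/usr/bin/main-app", "main-app", (char *)NULL);
        _exit(1);                    /* only reached if exec fails */
    }

    /* PID 1 must reap orphaned children for the lifetime of the system. */
    for (;;) {
        int status;
        pid_t pid = wait(&status);
        if (pid == app) {
            /* A real init might restart the main application here. */
        } else if (pid < 0) {
            sleep(1);                /* no children to reap at the moment */
        }
    }
}
```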

View original post here

Read more…


CLICK HERE TO DOWNLOAD

This complete guide is a 212-page eBook and is a must-read for business leaders, product managers and engineers who want to implement, scale and optimize their business with IoT communications.

Whether you want to attempt initial entry into the IoT-sphere, or expand existing deployments, this book can help with your goals, providing deep understanding into all aspects of IoT.


Read more…

Edge Products Are Now Managed At The Cloud

Now more than ever, there are billions of edge products in the world. But without proper cloud computing, making the most of electronic devices that run on Linux or any other OS would not be possible.

And so, a question most people keep asking is: which is the best Software-as-a-Service platform that can effectively manage edge devices through cloud computing? Well, while edge device management may not be something new, the fact that the cloud computing space is not fully exploited means there is a lot left to do in the cloud space.

Remote product management is especially necessary for the 21st century and beyond. Because of the increasing number of devices connected to the Internet of Things (IoT), a reliable SaaS platform should therefore help with fixing software glitches from anywhere in the world. From smart homes, stereo speakers, and cars to personal computers, any product that is connected to the internet needs real-time protection from hacking threats such as unlawful access to business or personal data.

Data being the most vital asset is constantly at risk, especially if individuals using edge products do not connect to trusted, reliable, and secure edge device management platforms.

Bridges the Gap Between Complicated Software And End Users

Cloud computing is the new frontier through which SaaS platforms help manage edge devices in real time. But something even more noteworthy is the increasing amount of complicated software that now runs edge devices at homes and in workplaces.

Edge device management, therefore, ensures everything runs smoothly. From fixing bugs, running debugging commands to real-time software patch deployment, cloud management of edge products bridges a gap between end-users and complicated software that is becoming the norm these days.

Even more importantly, going beyond physical firewall barriers is a major necessity in the remote management of edge devices. A reliable Software-as-a-Service platform therefore ensures that data encryption for edge devices is not only hack-proof but also that data is accessed only by the right people. Moreover, the deployment of secure routers and access tools is especially critical in cloud computing when managing edge devices. And so, developers behind successful SaaS platforms conduct regular security checks over the cloud and design and implement solutions for edge products.

Reliable IT Infrastructure Is Necessary

Software-as-a-service platforms that manage edge devices focus on having a reliable IT infrastructure and centralized systems through which they can conduct cloud computing. It is all about remotely managing edge devices with the help of an IT infrastructure that eliminates challenges such as connectivity latency.

Originally posted here

Read more…

Industrial IoT Revolution

Why the Nvidia Jetson Nano is responsible for the biggest industrial IoT revolution these days

 

It feels like yesterday when the Raspberry Pi Foundation released its first-in-line Single Board Computer (SBC) to the market. Back in 2012, Raspberry Pi wasn't alone in the growing SBC market; however, it was the first to make a community-based product that brings the hardware and the software ecosystem into beautiful harmony on the internet. Before those days, embedded Linux-based SBCs and SOMs were the domain of Linux kernel and embedded hardware experts: there were no easy-to-use tools, no ready Linux-based distros, and, most importantly, none of the enormous amount of questions and answers across the internet on anything related.

Today, 8 years later, the "2012 revolution" happens again

This time, it took a year to understand the impact of the new 'kid' in the market, but now there are a few indications that definitely build the route toward a revolution.

The Raspberry Pi was the first to make embedded Linux easy while keeping the advantages of reliability and flexibility in terms of fitting different kinds of industry applications. It's almost impossible to ignore the variety of industries where the Raspberry Pi sits at the heart of products, saving time-to-market and costs. The power of this magical board lies on the software side: the Raspberry Pi Foundation and their community worked hard across the years to improve and share their knowledge, and at the same time, without noticing or targeting it, they brought Pi development to an extremely "serverless" level.

The Nvidia Jetson Nano

Let's stop talking about the Raspberry Pi and focus on today's industry needs to understand better why the new kid in town is here to change the market of IoT and smart products forever.

 
Why do we need to thank Nvidia and the Jetson Nano?

The market is moving forward. AI, robotics, amazing-looking app GUIs, image processing, and long data calculations have all become the new standard for smart edge products.

If a few years ago it was enough to connect your product to the cloud and get something valuable out of it, today product managers and developers compete in a much tougher industry era. This time, the Raspberry Pi can't be the technology hero again: its resources are limited, and the ecosystem is starting to look for a better-fit solution.

 

NVIDIA Jetson devices in Upswift.io device management platform

The Jetson Nano is the first SBC to understand the necessary combination that will drive new products to use it. It's the first SBC designed with powerful industrial use cases in mind, while not forgetting the prototyping stage and the harmony that gave the Raspberry Pi its success. It's the first solution to bring the whole package for developers and hardware engineers with a "SaaS" feel: the OS is already polished thanks to Ubuntu, there are plenty of software instructions from Nvidia and open-source, ready-to-use tools custom made for the Jetson family, and the hardware engineers are free to go with the System On Module (SOM), which is connected to a carrier board that includes all the necessary outputs and inputs to make the development stage even faster.

The Jetson Nano combination basically provides the first complete infrastructure for producing a "2020" product with complex software while working on a minimal budget and time-to-market. The Jetson Nano enables developers and product managers to imagine further without compromises, bringing tough software missions to the edge easily.

Originally posted here

Read more…

Industrial Prototyping for IoT


ADLINK is a global leader in edge computing driving data-to-decision applications across industries. The company recently introduced I-Pi SMARC for Industrial IoT prototyping.

  • ADLINK I-Pi SMARC consists of a simple carrier paired with a SMARC Computer on Module.
  • SMARC modules are available from the entry-level Rockchip PX30 to the top-of-the-line Intel Apollo Lake.
  • SMARC modules are specifically designed for typical industrial embedded applications that require long life, high MTBF, and strict revision control.
  • Use popular off-the-shelf sensors and create prototypes or proofs of concept on short notice.

Additional information can be found here

 

Read more…

Helium Expands to Europe

Helium, the company behind one of the world’s first peer-to-peer wireless networks, is announcing the introduction of Helium Tabs, its first branded IoT tracking device that runs on The People’s Network. In addition, after launching its network in 1,000 cities in North America within one year, the company is expanding to Europe to address growing market demand with Helium Hotspots shipping to the region starting July 2020. 

Since its launch in June 2019, Helium quickly grew its footprint with Hotspots covering more than 700,000 square miles across North America. Helium is now expanding to Europe to allow for seamless use of connected devices across borders. Powered by entrepreneurs looking to own a piece of the people-powered network, Helium’s open-source blockchain technology incentivizes individuals to deploy Hotspots and earn Helium (HNT), a new cryptocurrency, for simultaneously building the network and enabling IoT devices to send data to the Internet. When connected with other nearby Hotspots, this acts as the backbone of the network. 

“We’re excited to launch Helium Tabs at a time where we’ve seen incredible growth of The People’s Network across North America,” said Amir Haleem, Helium’s CEO and co-founder. “We could not have accomplished what we have done, in such a short amount of time, without the support of our partners and our incredible community. We look forward to launching The People’s Network in Europe and eventually bringing Helium Tabs and other third-party IoT devices to consumers there.”  

Introducing Helium Tabs that Run on The People’s Network
Unlike other tracking devices, Tabs uses LongFi technology, which combines the LoRaWAN wireless protocol with the Helium blockchain and provides network coverage up to 10 miles away from a single Hotspot. This is a game-changer compared to WiFi- and Bluetooth-enabled tracking devices, which only work up to 100 feet from a network source. What's more, due to Helium's unique blockchain-based rewards system, Hotspot owners will be rewarded with Helium (HNT) each time a Tab connects to the network.

In addition to its increased growth with partners and customers, Helium has also seen accelerated expansion of its Helium Patrons program, which was introduced in late 2019. All three combined have helped to strengthen its network. 

Patrons are entrepreneurial customers who purchase 15 or more Hotspots to help blanket their cities with coverage and enable customers who use the network. In return, they receive discounts, priority shipping, network tools, and Helium support. Currently, the program has more than 70 Patrons throughout North America and is expanding to Europe.

Key brands that use the Helium Network include: 

  • Nestlé ReadyRefresh, a beverage delivery service company
  • Agulus, an agricultural tech company
  • Conserv, a collections-focused environmental monitoring platform

Helium Tabs will initially be available to existing Hotspot owners for $49. The Helium Hotspot is now available for purchase online in Europe for €450.

Read more…

I recently joined the Embedded Online Conference thinking I was going to gain new insights on embedded and IoT techniques. But I was pleasantly surprised to see a huge variety of sessions with a focus on modern software development practices. It is becoming more and more important to gain familiarity with a more modern software approach, even when you’re programming a constrained microcontroller or an FPGA.

Historically, there has been a large separation between application developers and those writing code for constrained embedded devices. But things are now changing. The embedded world is intersecting with the world of IoT, data science, and ML, and the deeper cooperation between software and hardware communities is driving innovation. The Embedded Online Conference, artfully organised by Jacob Beningo, represented exactly this cross-section, projecting light on some of the most interesting areas in the embedded world - machine learning on microcontrollers, using test-driven development to reduce bugs and programming an FPGA in Python are all things that a few years ago had little to do with the IoT and embedded industry.

This blog is the first part of a series discussing these new and exciting changes in the embedded industry. In this article, we will focus on machine learning techniques for low-power and cost-sensitive IoT and embedded Arm-based devices.

Think like a machine learning developer

Considered for many years an academic dead end of limited practical use, machine learning has gained a lot of renewed traction in recent years and it has now become one of the most interesting trends in the IoT space. TinyML is the buzzword of the moment. And this was a hot topic at the Embedded Online Conference. However, for embedded developers, this buzzword can sometimes add an element of uncertainty.

The thought of developing IoT applications with the addition of machine learning can seem quite daunting. During Pete Warden’s session about the past, present and future of embedded ML, he described the embedded and machine learning worlds to be very fragmented; there are so many hardware variants, RTOS’s, toolchains and sensors meaning the ability to compile and run a simple ‘hello world’ program can take developers a long time. In the new world of machine learning, there’s a constant churn of new models, which often use different types of mathematical operations. Plus, exporting ML models to a development board or other targets is often more difficult than it should be.

Despite some of these challenges, change is coming. Machine learning on constrained IoT and embedded devices is being made easier by new development platforms, models that work out-of-the-box with these platforms, plus the expertise and increased resources from organisations like Arm and communities like tinyML. Here are a few must-watch talks to help in your embedded ML development: 

  • New to the tinyML space is Edge Impulse, a start-up that provides a solution for collecting device data, building a model based around it and deploying it to make sense of the data directly on the device. CTO at Edge Impulse, Jan Jongboom talks about how to combine a traditional signal processing pipeline with a machine learning model to detect anomalies and recognize different gestures. All of this has now been made even easier by the announced collaboration with Arduino, which simplifies even further the journey to train a neural network and deploy it on your device.
  • Arm recently announced new machine learning IP that not only has the capabilities to deliver a huge uplift in performance for low-power ML applications, but will also help solve many issues developers are facing today in terms of fragmented toolchains. The new Cortex-M55 processor and Ethos-U55 microNPU will be supported by a unified development flow for DSP and ML workloads, integrating optimizations for machine learning frameworks. Watch this talk to learn how to get started writing optimized code for these new processors.
  • An early adopter implementing object detection with ML on a Cortex-M is the OpenMV camera - a low-cost module for machine vision algorithms. During the conference, embedded software engineer, Lorenzo Rizzello walks you through how to get started with ML models and deploying them to the OpenMV camera to detect objects and the environment around the device.

Putting these machine learning technologies in the hands of embedded developers opens up new opportunities. I’m excited to see and hear what will come of all this amazing work and how it will improve development standards and transform embedded devices of the future.

If you missed the conference and would like to catch the talks mentioned above*, visit www.embeddedonlineconference.com

*This blog only features a small collection of all the amazing speakers and talks delivered at the Conference!

Part 2 of my review can be viewed by clicking here

Read more…

Drivers are the backbone of the trucking industry, and driver retention is one of the biggest concerns. Are you facing issues like driver shortage or driver retention? Advanced technology offers various solutions to the transportation industry for such issues. Real-time monitoring, time saved on shifts, better communication between drivers and managers, etc. simplify the work of drivers. Job dissatisfaction and operational inefficiencies are the major reasons why drivers leave the job. But this can be reduced; let's see how.

How to improve the job satisfaction of drivers?
The trucking industry relies on drivers to navigate complex routes and transport sensitive goods safely and on time. It is already a hectic job, and hence it is necessary that the drivers are satisfied with their work. Otherwise, they will leave, which hampers business profits. What efforts can be taken to retain drivers?

Simplifying the Driver’s Work using Technology
Drivers are constantly on the road. Traffic, bad weather conditions, etc. can be very frustrating for them. However, using GPS systems, such conditions can be tracked proactively. The drivers can be directed to other routes with less traffic, or some shifts can be canceled if the routes show bad weather ahead. The drivers will not be annoyed by waiting for long hours and can instead rest if the shifts are canceled due to bad weather.

Automation and digitalization save time and efforts of the drivers. There are mobile apps which keep track of the load on the trucks. It saves the drivers of manual checking of load and also sends messages to the owners about the load. Also, in case of any theft or adding illegal loads to the truck, the owners can get instant messages right on their smartphones.

One such eminent example is the mobile app AppWeigh. It uses a Bluetooth-enabled weight sensor to keep track of the load on your truck. It is a budget-friendly app which combines sensor and Bluetooth technology. Throughout the shipment, the sensors detect the pressure on the tyres and clearly display the weight through AppWeigh on the smartphone of owners or fleet managers. The drivers don't need to keep a manual watch on the weight when the load reaches a certain destination; it is automatically sent to the owners.

Using IoT in Transportation for Better Communication and Time Management
Open communication with the drivers not only ensures transparency but also makes the drivers feel like true partners in the business. It encourages and engages them. The Internet of Things (IoT) in the transportation industry is playing a crucial role in connecting technology with people for more accurate results. It connects tools like sensors, RFID systems, GPS systems, smartphones, etc. to each other to gather vital data and communicate it to drivers and owners. Using this data, they can make informed decisions for improving various processes in fleet management. With such transparency, manual errors by drivers can be avoided and small issues can be discussed proactively before they turn into bigger problems.

Technology saves the drivers from keeping manual records of loads, timings, etc. as everything is automatically recorded. It reduces the stress of the drivers and ensures loyalty to the owners.

Making the Driver Health a Priority
Drivers' health is the most critical topic when it comes to driver retention. A trucking company can offer health benefits like health insurance plans, nutrition programs, free health screenings, etc. to drivers. Such benefits are an investment in your drivers. Also, the incorporation of smart cameras can reduce the risk of accidents. When drivers feel that their safety and lives are cared for, your company's reputation improves. They themselves will ask other drivers to join your company.

Giving Performance Incentives and Engaging Drivers
When drivers are appreciated and rewarded for their good work, it inspires them to do better and also be stable with your trucking industry. Financial incentive systems can be used to reward the most productive and safest drivers. Technology can be used to evaluate the drivers’ performance fairly. Real-time coaching and user-friendly solutions to any issues will help the drivers to progress faster and feel supported. The drivers who work hard to improve their performance can be awarded with the performance incentives. This will also encourage and engage fellow drivers. 

 

Conclusion
Smart technologies are providing highly efficient solutions to the transportation industry. Along with driver retention, these technologies help in real-time visibility of processes, maintaining vehicle health, improving warehouse and yard management, etc., which enormously boosts business profits. IoT in the transportation industry provides robust security services to drivers as well as the freight. Reliable data that owners get from smart technical solutions lets them take the right decisions to maintain their workforce. It enhances driver satisfaction and retention rates.

Read more…