

Nearly every embedded software developer working in the IoT space is now building secure devices. Developers have mostly focused on how to handle secure applications and the underlying microcontroller technologies, such as how to use Arm's TrustZone or leverage multicore processors. A looming problem that many companies and teams are overlooking is that figuring out how to develop secure applications is just the first step. There are three stages to secure product lifecycle management, and in today's post, we will review what is involved in each stage.

As a quick overview, the stages, which can be seen in the diagram below, are:

  • Development
  • Test and Production Deployment
  • Maintenance and In-field Servicing

Let us look at each of these stages in a little more detail. 

Stage #1 – Development

Development is probably the area developers are most familiar with, yet it is also the area they are having to adapt to the most. Many developers have designed and built systems without ever having to take security into account. Development involves a lot more than just deciding which components to isolate and how to separate the software into secure and non-secure regions.

For example, during the development phase, developers now need to learn how to work in an environment where a secure bootloader is in place. They need to consider how to handle firmware fallbacks, whether they are allowed and, if so, under what conditions. Firmware images may need to be compressed in addition to being authenticated.

While the development stage has become more complicated, developers should be able to extrapolate from their past experience and develop secure firmware successfully.

Stage #2 – Test and Production Deployment

The area that developers will probably struggle with the most is the test and production deployment stage. Testing secure software requires additional steps to authenticate the debug hardware so that the developer can access secure memory regions to test and debug their code. Even more importantly, care must be taken when installing that secure software onto a product during production.

There are several ways this can be done, but one method is to use a secure flashing device like SEGGER's Flasher Secure. These devices follow a multistep process that validates a user ID before allowing the secure firmware to be installed on the target. They also limit how many devices, and which devices, the firmware can be installed on, which helps protect a team's intellectual property and prevents unauthorized production of a product.


Stage #3 – Maintenance and In-field Servicing

Finally, there is the maintenance and in-field servicing stage which is a partial continuation of the development phase. Once a product has been deployed into the field, it needs to be securely updated. Updates can be done manually in-field, or they can be done using an over-the-air update process. This involves a device being able to contact a secure firmware server that can compress and encrypt the image and transport it to the device. Once the device has received the image, it must decrypt, decompress and validate the contents of the image. If everything looks good, the image can then be loaded as the primary firmware for the device.
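To make that maintenance flow a little more concrete, here is a minimal sketch of the receive-and-validate sequence a device might run after downloading an update. It only illustrates the control flow described above; the decrypt, decompress, signature-check, and flash routines are placeholders standing in for whatever crypto, compression, and bootloader support your platform actually provides.

    #include <cstdint>
    #include <cstdio>
    #include <vector>

    // Placeholder routines: in a real product these wrap the platform's
    // crypto/compression libraries and the secure bootloader's image slots.
    static bool decrypt_image(const std::vector<uint8_t>& in, std::vector<uint8_t>& out)    { out = in; return true; }
    static bool decompress_image(const std::vector<uint8_t>& in, std::vector<uint8_t>& out) { out = in; return true; }
    static bool signature_is_valid(const std::vector<uint8_t>& image)                       { return !image.empty(); }
    static bool write_to_staging_slot(const std::vector<uint8_t>& image)                    { return !image.empty(); }
    static void mark_staging_slot_as_primary()                                              { std::puts("new image marked as primary"); }

    // The flow from this section: decrypt, decompress, validate, then hand over
    // to the secure bootloader. Any failure leaves the current firmware untouched.
    bool apply_ota_update(const std::vector<uint8_t>& downloaded)
    {
        std::vector<uint8_t> decrypted, plain;
        if (!decrypt_image(downloaded, decrypted)) return false;  // wrong key or corrupted transfer
        if (!decompress_image(decrypted, plain))   return false;  // corrupted payload
        if (!signature_is_valid(plain))            return false;  // not signed with our key: reject
        if (!write_to_staging_slot(plain))         return false;  // flash write failed
        mark_staging_slot_as_primary();                           // bootloader runs the new image on next reset
        return true;
    }

    int main()
    {
        std::vector<uint8_t> fake_download(1024, 0xFF);           // stand-in for a downloaded image
        std::printf("update %s\n", apply_ota_update(fake_download) ? "accepted" : "rejected");
    }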

Conclusions

 There is much more to designing and deploying a secure device than simply developing a secure application. The entire process is broken up into three main stages that we have looked at in greater detail today. Unfortunately, we have only just scratched the surface!

Originally posted here.

Read more…

In this blog, we’ll discuss how users of Edge Impulse and Nordic can actuate and stream classification results over BLE using Nordic’s UART Service (NUS). This makes it easy to integrate embedded machine learning into your next generation IoT applications. Seamless integration with nRF Cloud is also possible since nRF Cloud has native support for a BLE terminal. 

We’ve extended the Edge Impulse example functionality already available for the nRF52840 DK and nRF5340 DK by adding the ability to actuate and stream classification outputs. The extended example is available for download on GitHub and offers a uniform experience on both hardware platforms.

Using nRF Toolbox 

After following the instructions in the example’s readme, download the nRF Toolbox mobile application (available on both iOS and Android) and connect to the nRF52840 DK or nRF5340 DK, which will be discovered as “Edge Impulse”. Once connected, set up the interface as shown below so that you can query information about the device and its available sensors, and start or stop the inferencing process. Save the preset configuration so that you can load it again later. Fill in the text of the various commands using the same convention as the Edge Impulse AT command set; for example, sending AT+RUNIMPULSE starts the inferencing process on the device.

Figure 1. Setting up the Edge Impulse AT Command set

Once each AT command has been mapped to an icon, hit the icon you need. Hitting the ‘play’ button causes the device to start acquiring data and perform inference every couple of seconds. The results can be viewed in the “Logs” menu as shown below.

Figure 2. Classification Output over BLE in the Logs View
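The firmware side of this interaction lives in the GitHub example linked above; the fragment below is only a simplified sketch of the idea, with a made-up nus_send_string() helper standing in for whatever function the real project uses to write to the NUS TX characteristic.

    #include <cstdio>
    #include <string>

    // Stand-in for writing to the Nordic UART Service (NUS) TX characteristic;
    // the name and signature here are assumptions, not the real project's API.
    static void nus_send_string(const std::string& s) { std::printf("NUS TX: %s", s.c_str()); }

    static bool inferencing_running = false;

    // Map an incoming AT command (typed into nRF Toolbox and delivered over NUS)
    // to an action. Only AT+RUNIMPULSE, mentioned above, is handled here.
    static void handle_nus_command(const std::string& cmd)
    {
        if (cmd == "AT+RUNIMPULSE") {
            inferencing_running = true;
            nus_send_string("Inferencing started\r\n");
        } else {
            nus_send_string("Unknown command\r\n");
        }
    }

    // Called after each inference; streams label/score pairs back over NUS so
    // they appear in the nRF Toolbox "Logs" view or the nRF Cloud BLE terminal.
    static void report_classification(const char* label, float score)
    {
        if (!inferencing_running) return;
        char line[64];
        std::snprintf(line, sizeof(line), "%s: %.5f\r\n", label, score);
        nus_send_string(line);
    }

    int main()
    {
        handle_nus_command("AT+RUNIMPULSE");
        report_classification("label_0", 0.92f);   // placeholder label and score
    }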

Using nRF Cloud

Using the nRF Connect for Cloud mobile app for iOS and Android, you can turn your smartphone into a BLE gateway. This allows users to connect their BLE NUS devices running Edge Impulse to nRF Cloud and easily send the inferencing results to the cloud. It’s as easy as setting up the BLE gateway through the app, connecting to the “Edge Impulse” device, and watching the same results being displayed in the “Terminal over BLE” window shown below!

Figure 3. Classification Output Shown in nRF Cloud

Summary

Edge Impulse is supercharging IoT with embedded machine learning and we’ve discussed a couple of ways you can easily send conclusions to either the smartphone or to the cloud by leveraging the Nordic UART Service. We look forward to seeing how you’ll leverage Edge Impulse, Nordic and BLE to create your next gen IoT application.  

 

Article originally written for the Edge Impulse blog by Zin Thein Kyaw, Senior User Success Engineer at Edge Impulse.

Read more…

By Adam Dunkels

When you have to install thousands of IoT devices, you need to make device installation impressively fast. Here is how to do it.

Every single IoT device out there has to be installed by someone.

Installation is the activity that requires the most attention during that device’s lifetime.

This is particularly true for large scale IoT deployments.

We at Thingsquare have been involved in many IoT products and projects. Many of these have involved large scale IoT deployments with hundreds or thousands of devices per deployment site.

In this article, we look at why installation is so important for large IoT deployments – and list six installation tactics that make installation impressively fast and highly useful:

  1. Take photos
  2. Make it easy to identify devices
  3. Record the location of every device
  4. Keep a log of who did what
  5. Develop an installation checklist, and turn it into an app
  6. Measure everything

And these tactics are useful even if you only have a handful of devices per site, but thousands or tens of thousands of devices in total.

Why Installation Tactics are Important in Large IoT Deployments

Installation is a necessary step of an IoT device’s life.

Someone – maybe your customers, your users, or a team of technicians working for you – will be responsible for the installation. The installer turns your device from a piece of hardware into a living thing: a valuable producer of information for your business.

But most of all, installation is an inevitable part of the IoT device life cycle.

The life cycle of an IoT device can be divided into four stages:

  1. Produce the device, at the factory (usually with a device programming tool).
  2. Install the device.
  3. Use the device. This is where the device generates the value that we created it for. The device may then be either re-installed at a new location, or we:
  4. Retire the device.

Two stages in the list involve the installation activity: the Install stage itself, and the Use stage, where a device may be re-installed at a new location.

So installation is inevitable – and important. We need to plan to deal with it.

Installation is the Most Time-Consuming Activity

Most devices should spend most of their lifetime in the Use stage of their life cycle.

But a device’s lifetime is different from the attention time that we need to spend on them.

Devices usually don’t need much attention in their Use stage. At this stage, they should mostly sit there and generate valuable information.

By contrast, for the people who work with the devices, most of their attention and time will be spent in the Install stage. Since these are people whose salaries you are paying, you want them to be as efficient as possible.

How To Make Installation Impressively Fast - and Useful

At Thingsquare, we have deployed thousands of devices together with our customers, and our customers have deployed many hundreds of thousands of devices with their customers.

These are our top six tactics to make installation fast – and useful:

1. Take Photos

After installation, you will need to maintain and troubleshoot the system. This is a normal part of the Use stage.

Photos are a goldmine of information. Particularly if it is difficult to get to the location afterward.

Make sure you take plenty of photos of each device as they are installed. In fact, you should include multiple photos in your installation checklist – more about this below.

We have been involved in several deployments where we have needed to remotely troubleshoot devices after they were installed. Having a bunch of photos of how and where the devices were installed helps tremendously.

The photos don’t need to be great. Having a low-quality photo beats having no photo, every time.

 

2. Make it Easy to Identify Devices

When dealing with hundreds of devices, you need to make sure that you know exactly which devices you installed, and where.

You therefore need to make it easy to identify each device. Device identification can be done in several ways, and we recommend using more than one way to identify the devices. This will reduce the risk of manual errors.

The two ways we typically use are:

  • A printed unique ID number on the device, which you can take a photo of
  • Automatic secure device identification via Bluetooth – this is something the Thingsquare IoT platform supports out of the box

Being certain about where devices were installed will make maintenance and troubleshooting much easier – particularly if it is difficult to visit the installation site.

3. Record the Location of Every Device

When devices are installed, make sure to record their location.

The easiest way to do this is to record the GPS coordinates of each device as it is being deployed, preferably with the installation app, which can do this automatically – see below.

For indoor installations, exact GPS locations may be unreliable. But even for those devices, having a coarse-grained GPS location is useful.

The location is useful both when analyzing the data that the devices produce, and when troubleshooting problems in the network.

 

4. Keep a Log of Who Did What

In large deployments, there will be many people involved.

Being able to trace the installation actions, as well as who took what action, is enormously useful. Sometimes just knowing the steps that were taken when installing each device is important. And sometimes you need to talk to the person who did the installation.

5. Develop an Installation Checklist - and Turn it into an App

Determine what steps are needed to install each device, and develop a step-by-step checklist for the process.

Then turn this checklist into an app that installation personnel can run on their own phones.

Each step of the checklist should be really easy to understand, to avoid mistakes along the way. And it should be easy to go back and forth between the steps, if needed.

Ideally, the app should run on both Android and iOS, because you would like everyone to be able to use it on their own phones.

Here is an example checklist, that we developed for a sensor device in a retail IoT deployment:

  • Check that sensor has battery installed
  • Attach sensor to appliance
  • Make sure that the sensor is online
  • Check that the sensor has a strong signal
  • Check that the GPS location is correct
  • Move hand in front of sensor, to make sure sensor correctly detects movement
  • Be still, to make sure sensor correctly detects no movement
  • Enter description of sensor placement (e.g. “on top of the appliance”)
  • Enter description of appliance
  • Take a photo of the sensor
  • Take a photo of the appliance
  • Take a photo of the appliance and the two beside it
  • Take a photo of the appliance and the four beside it
 

6. Measure Everything

Since installation costs money, we want it to be efficient.

And the best way to make a process more efficient is to measure it, and then improve it.

Since we have an installation checklist app, measuring installation time is easy – just build it into the app.

Once we know how much time each step in the installation process needs, we are ready to revise the process and improve it. We should focus on the most time-consuming step first and measure the successive improvements to make sure we get the most bang for the buck.

Conclusions

Every IoT device needs to be installed, and making the installation process efficient saves attention time for everyone involved – and ultimately money.

At Thingsquare, we have deployed thousands of devices together with our customers, and our customers have deployed many hundreds of thousands of devices with their customers.

We use our experience to solve hard problems in the IoT space, such as how to best install large IoT systems – get in touch with us to learn more!

Originally posted here.

Read more…

by Stephanie Overby

What's next for edge computing, and how should it shape your strategy? Experts weigh in on edge trends and talk workloads, cloud partnerships, security, and related issues


All year, industry analysts have been predicting that edge computing – and complementary 5G network offerings – will see significant growth, as major cloud vendors deploy more edge servers in local markets and telecom providers push ahead with 5G deployments.

The global pandemic has not significantly altered these predictions. In fact, according to IDC’s worldwide IT predictions for 2021, COVID-19’s impact on workforce and operational practices will be the dominant accelerator for 80 percent of edge-driven investments and business model change across most industries over the next few years.

First, what exactly do we mean by edge? Here’s how Rosa Guntrip, senior principal marketing manager, cloud platforms at Red Hat, defines it: “Edge computing refers to the concept of bringing computing services closer to service consumers or data sources. Fueled by emerging use cases like IoT, AR/VR, robotics, machine learning, and telco network functions that require service provisioning closer to users, edge computing helps solve the key challenges of bandwidth, latency, resiliency, and data sovereignty. It complements the hybrid computing model where centralized computing can be used for compute-intensive workloads while edge computing helps address the requirements of workloads that require processing in near real time.”

Moving data infrastructure, applications, and data resources to the edge can enable faster response to business needs, increased flexibility, greater business scaling, and more effective long-term resilience.

“Edge computing is more important than ever and is becoming a primary consideration for organizations defining new cloud-based products or services that exploit local processing, storage, and security capabilities at the edge of the network through the billions of smart objects known as edge devices,” says Craig Wright, managing director with business transformation and outsourcing advisory firm Pace Harmon.

“In 2021 this will be an increasing consideration as autonomous vehicles become more common, as new post-COVID-19 ways of working require more distributed compute and data processing power without incurring debilitating latency, and as 5G adoption stimulates a whole new generation of augmented reality, real-time application solutions, and gaming experiences on mobile devices,” Wright adds.

8 key edge computing trends in 2021


Noting the steady maturation of edge computing capabilities, Forrester analysts said, “It’s time to step up investment in edge computing,” in their recent Predictions 2020: Edge Computing report. As edge computing emerges as ever more important to business strategy and operations, here are eight trends IT leaders will want to keep an eye on in the year ahead.

1. Edge meets more AI/ML


Until recently, pre-processing of data via near-edge technologies or gateways had its share of challenges due to the increased complexity of data solutions, especially in use cases with a high volume of events or limited connectivity, explains David Williams, managing principal of advisory at digital business consultancy AHEAD. “Now, AI/ML-optimized hardware, container-packaged analytics applications, frameworks such as TensorFlow Lite and tinyML, and open standards such as the Open Neural Network Exchange (ONNX) are encouraging machine learning interoperability and making on-device machine learning and data analytics at the edge a reality.” 

Machine learning at the edge will enable faster decision-making. “Moreover, the amalgamation of edge and AI will further drive real-time personalization,” predicts Mukesh Ranjan, practice director with management consultancy and research firm Everest Group.

“But without proper thresholds in place, anomalies can slowly become standards,” notes Greg Jones, CTO of IoT solutions provider Kajeet. “Advanced policy controls will enable greater confidence in the actions made as a result of the data collected and interpreted from the edge.” 

 

2. Cloud and edge providers explore partnerships


IDC predicts a quarter of organizations will improve business agility by integrating edge data with applications built on cloud platforms by 2024. That will require partnerships across cloud and communications service providers, with some pairing up already beginning between wireless carriers and the major public cloud providers.

According to IDC research, the systems that organizations can leverage to enable real-time analytics are already starting to expand beyond traditional data centers and deployment locations. Devices and computing platforms closer to end customers and/or co-located with real-world assets will become an increasingly critical component of this IT portfolio. This edge computing strategy will be part of a larger computing fabric that also includes public cloud services and on-premises locations.

In this scenario, edge provides immediacy and cloud supports big data computing.

 

3. Edge management takes center stage


“As edge computing becomes as ubiquitous as cloud computing, there will be increased demand for scalability and centralized management,” says Wright of Pace Harmon. IT leaders deploying applications at scale will need to invest in tools to “harness step change in their capabilities so that edge computing solutions and data can be custom-developed right from the processor level and deployed consistently and easily just like any other mainstream compute or storage platform,” Wright says.

The traditional approach to data center or cloud monitoring won’t work at the edge, notes Williams of AHEAD. “Because of the rather volatile nature of edge technologies, organizations should shift from monitoring the health of devices or the applications they run to instead monitor the digital experience of their users,” Williams says. “This user-centric approach to monitoring takes into consideration all of the components that can impact user or customer experience while avoiding the blind spots that often lie between infrastructure and the user.”

As Stu Miniman, director of market insights on the Red Hat cloud platforms team, recently noted, “If there is any remaining argument that hybrid or multi-cloud is a reality, the growth of edge solidifies this truth: When we think about where data and applications live, they will be in many places.”

“The discussion of edge is very different if you are talking to a telco company, one of the public cloud providers, or a typical enterprise,” Miniman adds. “When it comes to Kubernetes and the cloud-native ecosystem, there are many technology-driven solutions competing for mindshare and customer interest. While telecom giants are already extending their NFV solutions into the edge discussion, there are many options for enterprises. Edge becomes part of the overall distributed nature of hybrid environments, so users should work closely with their vendors to make sure the edge does not become an island of technology with a specialized skill set.“

 

4. IT and operational technology begin to converge


Resiliency is perhaps the business term of the year, thanks to a pandemic that revealed most organizations’ weaknesses in this area. IoT-enabled devices (and other connected equipment) drive the adoption of edge solutions where infrastructure and applications are being placed within operations facilities. This approach will be “critical for real-time inference using AI models and digital twins, which can detect changes in operating conditions and automate remediation,” IDC’s research says.

IDC predicts that the number of new operational processes deployed on edge infrastructure will grow from less than 20 percent today to more than 90 percent in 2024 as IT and operational technology converge. Organizations will begin to prioritize not just extracting insight from their new sources of data, but integrating that intelligence into processes and workflows using edge capabilities.

Mobile edge computing (MEC) will be a key enabler of supply chain resilience in 2021, according to Pace Harmon’s Wright. “Through MEC, the ecosystem of supply chain enablers has the ability to deploy artificial intelligence and machine learning to access near real-time insights into consumption data and predictive analytics as well as visibility into the most granular elements of highly complex demand and supply chains,” Wright says. “For organizations to compete and prosper, IT leaders will need to deliver MEC-based solutions that enable an end-to-end view across the supply chain available 24/7 – from the point of manufacture or service  throughout its distribution.”

 

5. Edge eases connected ecosystem adoption


Edge not only enables and enhances the use of IoT, but it also makes it easier for organizations to participate in the connected ecosystem with minimized network latency and bandwidth issues, says Manali Bhaumik, lead analyst at technology research and advisory firm ISG. “Enterprises can leverage edge computing’s scalability to quickly expand to other profitable businesses without incurring huge infrastructure costs,” Bhaumik says. “Enterprises can now move into profitable and fast-streaming markets with the power of edge and easy data processing.”

 

6. COVID-19 drives innovation at the edge


“There’s nothing like a pandemic to take the hype out of technology effectiveness,” says Jason Mann, vice president of IoT at SAS. Take IoT technologies such as computer vision enabled by edge computing: “From social distancing to thermal imaging, safety device assurance and operational changes such as daily cleaning and sanitation activities, computer vision is an essential technology to accelerate solutions that turn raw IoT data (from video/cameras) into actionable insights,” Mann says. Retailers, for example, can use computer vision solutions to identify when people are violating the store’s social distance policy.

 

7. Private 5G adoption increases


“Use cases such as factory floor automation, augmented and virtual reality within field service management, and autonomous vehicles will drive the adoption of private 5G networks,” says Ranjan of Everest Group. Expect more maturity in this area in the year ahead, Ranjan says.

 

8. Edge improves data security


“Data efficiency is improved at the edge compared with the cloud, reducing internet and data costs,” says ISG’s Bhaumik. “The additional layer of security at the edge enhances the user experience.” Edge computing is also not dependent on a single point of application or storage, Bhaumik says. “Rather, it distributes processes across a vast range of devices.”

As organizations adopt DevSecOps and take a “design for security” approach, edge is becoming a major consideration for the CSO to enable secure cloud-based solutions, says Pace Harmon’s Wright. “This is particularly important where cloud architectures alone may not deliver enough resiliency or inherent security to assure the continuity of services required by autonomous solutions, by virtual or augmented reality experiences, or big data transaction processing,” Wright says. “However, IT leaders should be aware of the rate of change and relative lack of maturity of edge management and monitoring systems; consequently, an edge-based security component or solution for today will likely need to be revisited in 18 to 24 months’ time.”

Originally posted here.

Read more…

By Natallia Babrovich

My experience shows that most of the visits to doctors are likely to become virtual in the future. Let’s see how IoT solutions make the healthcare environment more convenient for patients and medical staff.

What are IoT and IoMT?

My colleague Alex Grizhnevich, IoT consultant at ScienceSoft, defines the Internet of Things as a network of physical devices with sensors, actuators, software, and network connectivity that enables them to gather and transmit data and fulfill users' tasks. Today, IoT is becoming a key component of the digital transformation of healthcare, so we can distinguish a separate group of initiatives, the so-called IoHT (Internet of Health Things) or IoMT (Internet of Medical Things).

Popular IoMT Use Cases

IoT-based patient care

Medication intake tracking

IoT-based medication tracking allows doctors to monitor the impact of a prescribed medication’s dosage on a patient’s condition. In their turn, patients can control medication intake, e.g., by using in-app reminders, and note in the app how their symptoms change for their doctor’s further analysis. The patient app can be connected to smart devices (e.g., a smart pill bottle) for easier management of multiple medications.

Remote health monitoring

Among examples of employing IoT in healthcare, this use case is especially viable for chronic disease management. Patients can use connected medical devices or body-worn biosensors to allow doctors or nurses to check their vitals (blood pressure, glucose level, heart rate, etc.) via doctor/nurse-facing apps. Health professionals can monitor this data 24/7 and study app-generated reports to get insights into health trends. Patients who show signs of deteriorating health are scheduled for in-person visits.

IoT- and RFID-based medical asset monitoring

Medical inventory and equipment tracking

All medical tools and durable assets (beds, medical equipment) are equipped with RFID (radio frequency identification) tags. Fixed RFID readers (e.g., on the walls) collect the info about the location of assets. Medical staff can view it using a mobile or web application with a map.

Drug tracking

RFID-enabled drug tracking helps pharmacies and hospitals verify the authenticity of medication packages and timely spot medication shortages.

Smart hospital space

Cloud-connected ward sensors (e.g., a light switch, door and window contacts) and ambient sensors (e.g., hydrometers, noise detectors) allow patients to control their environment for a comfortable hospital stay.

Advantages of using IoT technology in healthcare

Patient-centric care

Medical IoT helps turn patients into active participants of the treatment process, thus improving care outcomes. Besides, IoMT helps increase patient satisfaction with care delivery, from communication with medical staff to physical comfort (smart lighting, climate control, etc.).

Reduced care-related costs

Non-critical patients can stay at home and use cloud-connected medical IoT devices, which gather, track and send health data to the medical facility. And with the help of telehealth technology, patients can schedule e-visits with nurses and doctors without traveling to the hospital.

Reduced readmissions

Patient apps connected to biosensors help ensure compliance with a discharge plan, enable prompt detection of health state deviations, and provide an opportunity to timely contact a health professional remotely.

Challenges of IoMT and how to address them

Potential health data security breaches

The connected nature of IoT brings about information security challenges for healthcare providers and patients.

Tip from ScienceSoft

We recommend implementing HIPAA-compliant IoMT solutions and conducting vulnerability assessments and penetration testing regularly to ensure the highest level of protection.

Integration difficulties

Every medical facility has its unique set of applications to be integrated with an IoMT solution (e.g., EHR, EMR). Some of these applications may be heavily customized or outdated.

Tip from ScienceSoft

Develop an integration strategy from the start of your IoMT project, including the scope and nature of custom integrations.

Enhance care delivery with IoMT

According to my estimates, the use of IoT technology in healthcare will continue to rise during the next decade, driven by the impact of the COVID situation and the growing demand for remote care. If you need help with creating and implementing a fitting IoMT solution, you’re welcome to turn to ScienceSoft’s healthcare IT team.

Originally posted here.

Read more…

OMG! Three 32-bit processor cores each running at 300 MHz, each with its own floating-point unit (FPU), and each with more memory than you can shake a stick at!

In a recent column on Recreating Retro-Futuristic 21-Segment Victorian Displays, I noted that I’ve habitually got a number of hobby projects on the go. I also joked that, “one day, I hope to actually finish one or more of the little rascals.” Unfortunately, I’m laughing through my tears because some of my projects do appear to be never-ending.

For example, shortly after the internet first impinged on the popular consciousness with the launch of the Mosaic web browser in 1993, a number of humorous memes started to bounce around. It was one of these that sparked my Pedagogical and Phantasmagorical Inamorata Prognostication Engine project, which has been a “work in progress” for more than 20 years as I pen these words.

Feast your orbs on the Prognostication Engine (Image source: Max Maxfield)

As you can see in the image to the right, the Prognostication Engine has grown in the telling. The main body of the engine is housed in a wooden radio cabinet from 1929. My chum, Carpenter Bob, created the smaller section on the top, with great attention to detail like the matching hand-carved rosettes.

The purported purpose of this bodacious beauty is to forecast the disposition of my wife (Gina the Gorgeous) when I’m poised to leave the office and head for home at the end of the day (hence the “Prognostication” portion of the engine’s moniker). Paradoxically, should Gina ever discover the true nature of the engine, I won’t actually need it to predict her mood of the moment.

As we see, the control panels are festooned with antique knobs, toggle switches, pushbuttons, and analog meters. The knobs are mounted on motorized potentiometers, so if an unauthorized user attempts to modify their settings, they will automatically return to their original values under program control. The switches and pushbuttons are each accompanied by two LEDs, while each knob is equipped with a ring of 16 LEDs, resulting in 116 LEDs in all. Then there are 64 LEDs in the furnace and 140 LEDs in the rings surrounding the bases of the large vacuum tubes mounted on the top of the beast.

I was just reflecting on how much technology has changed over the past couple of decades. For example, today’s “smart LEDs” like WS2812Bs (a.k.a. NeoPixels) can be daisy-chained together, allowing multiple devices to be controlled using a single pin on the microcontroller. It probably goes without saying (but I’ll say it anyway) that all of the LEDs in the current incarnation of the engine are tricolor devices in the form of NeoPixels, but this was not always the case.
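For anyone who hasn’t played with these parts, a minimal Arduino sketch using the Adafruit_NeoPixel library shows the single-pin daisy-chain idea; the data pin is an arbitrary choice and the 116-pixel count simply echoes the front-panel total mentioned above, so treat this as an illustration rather than the engine’s actual animation code.

    #include <Adafruit_NeoPixel.h>

    const int DATA_PIN   = 6;     // one microcontroller pin drives the whole chain
    const int NUM_PIXELS = 116;   // front-panel LED count from the text above

    Adafruit_NeoPixel strip(NUM_PIXELS, DATA_PIN, NEO_GRB + NEO_KHZ800);

    void setup() {
      strip.begin();   // initialize the chain
      strip.show();    // start with all pixels off
    }

    void loop() {
      // Sweep a single dim red pixel along the chain: one wire, 116 LEDs.
      for (int i = 0; i < NUM_PIXELS; i++) {
        strip.clear();
        strip.setPixelColor(i, strip.Color(64, 0, 0));
        strip.show();
        delay(20);
      }
    }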

An early prototype of a shift register capable of driving only 13 tricolor LEDs (Image source: Max Maxfield)

The tricolor LEDs I was planning on using 20 years ago each required three pins to be controlled. The solution at that time would have been to implement a huge external shift register. The image to the left shows an early shift register prototype sufficient to drive only 13 “old school” devices.

And, of course, developments in computing have been even more staggering. When I commenced this project, I was using a PIC microcontroller that I programmed in BASIC. After the Arduino first hit the scene circa 2005, I migrated to using Arduino Unos, followed by Arduino Megas, that I programmed in C/C++.

One of the reasons I like the Arduino Mega is its generous pin count: 54 digital input/output (I/O) pins, of which 15 can be used as pulse-width modulated (PWM) outputs, plus 16 analog inputs and 4 UARTs. On the other hand, the Mega is only an 8-bit machine running at 16 MHz, it offers only 256 KB of Flash (program) memory and 8 KB of SRAM, and it doesn’t have hardware support for floating-point operations.

The thing is that the Prognostication Engine has a lot of things going on. In addition to reading the states of all the switches and pushbuttons and potentiometers, it has to control the motors behind the knobs and drive the analog meters. Currently, the LEDs are being driven with simple test patterns, but these are going to be upgraded to support much more sophisticated animation and fading effects. The engine is also constantly performing calculations of an astronomical and astrological nature, determining things like the dates of forthcoming full moons and blue moons.

In the fullness of time, the engine is going to be connected to the internet so it can monitor things like the weather. It’s also going to have its own environmental sensors (temperature, humidity, barometric pressure) and proximity detection sensors. Furthermore, the engine will also boast a suite of sound effects such that flicking a simple switch, for example, may result in myriad sounds of mechanical mechanisms performing their magic. At some stage, I’m even hoping to add things like artificial intelligence (AI) and facial recognition.

The current state of computational play (Image source: Max Maxfield)

Sad to relate, my existing computing solution is not capable of handling all the tasks I wish the engine to perform. The image to the right shows the current state of computational play. As we see, there is one Arduino Mega in the lower cabinet controlling the 116 LEDs on the front panel. Meanwhile, there are two Megas in the upper cabinet, with one controlling the LEDs in the furnace and the other controlling the LEDs associated with the large vacuum tubes.

Up until a couple of years ago, I was vaguely planning on adding more and more Megas. I was also cogitating and ruminating as to how I was going to get these little rascals to talk to each other so that everyone knew (a) what we were trying to do and (b) what everyone else was actually doing.

Unfortunately, the whole computational architecture was becoming unwieldy, so I started to look for another solution. You can only imagine my surprise and delight when I was first introduced to the original ShieldBuddy TC275 from the folks at Hitex (see Everybody Needs a ShieldBuddy). This little beauty, which has an Arduino Mega footprint, features the Aurix TC275 processor from Infineon. The TC275 boasts three 32-bit cores, all running at 200 MHz, each with its own floating-point unit (FPU), and all sharing 4 Mbytes of Flash and 500 Kbytes of RAM (this is a bit of a simplification, but it will suffice for now).

Processors like the Aurix are typically to be found only in state-of-the-art embedded systems and they rarely make it into the maker world. To be honest, when I first saw the ShieldBuddy TC275, I thought to myself, “Life can’t get any better than this!” Well, I was wrong, because the guys and gals at Hitex have just announced the ShieldBuddy TC375, which features an Aurix TC375 processor!

O.M.G! I just took delivery of one of these bodacious beauties, and I’m so excited that I was moved to make this video.

I don’t know where to start. As before, we have three 32-bit cores, each with its own FPU. This time, however, the cores run at 300 MHz. Although each core runs independently, they can communicate and coordinate between themselves using techniques like shared memory and software interrupts. With regard to memory, the easiest way to summarize this is as follows: The TC375 processor has:

  • 6MB Flash ROM
  • 384 KB Data flash

And each of the three cores has:

  • 240 KB Data Scratch-Pad RAM (DSPR)
  • 64 KB Instruction Scratch-Pad RAM (PSPR)
  • 32 KB Instruction Cache (ICACHE)
  • 16 KB Data Cache (DCACHE)
  • 64 KB DLMU RAM

Actually, there’s a lot more to this than meets the eye. For example, the main SRAMs (the DSPRs) associated with each of the cores appear at two locations in the memory map. In the case of Core 0, for example, the first location in its DSPR is located at address 0xD0000000 where it is considered to be local (i.e., it appears directly on Core 0’s local internal bus) and can be accessed quickly. However, this DSPR is also visible to Cores 1 and 2 at 0x70000000 via the main on-chip system bus, which allows them to read and write to this memory freely, but at a lower speed than Core 0. Similarly, Cores 1 and 2 access their own memories locally and each other’s memories globally.

Meet the ShieldBuddy TC375 (Image source: Hitex)

As with the original ShieldBuddy TC275, if you are a professional programmer, you’ll be delighted to hear that the main ShieldBuddy TC375 toolchain is the Eclipse-based “FreeEntryToolchain” from HighTec/PLS/Infineon. This is a full-on C/C++ development environment with source-level debugger and suchlike.

By comparison, if you are a novice programmer like your humble narrator, you’ll be overjoyed to hear that the ShieldBuddy TC375 can be programmed via the Arduino’s integrated development environment (IDE). As far as I’m concerned, this programming model is where things start to get very clever indeed.

An Arduino sketch (program) always contains two functions: setup(), which runs only one time, and loop(), which runs over and over again (the system automatically inserts a main() function while your back is turned). If you take an existing sketch and compile it for the ShieldBuddy, then it will run on Core 0 by default. You can achieve the same effect by renaming your setup() and loop() functions to be setup0() and loop0(), respectively.

Similarly, you can create setup1() and loop1() functions, which will automatically be compiled to run on Core 1, and you can create setup2() and loop2() functions, which will automatically be compiled to run on Core 2. Any of your “homegrown” functions will be compiled in such a way as to run on whichever of the cores need to use them. I know that, like Pooh, I’m a bear of little brain, but even I can wrap my poor old noggin around this usage model.
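Here’s a minimal sketch of that usage model, written purely from the description above; the pin numbers and blink rates are arbitrary placeholders, so treat it as an illustration rather than tested ShieldBuddy code.

    // Each setupN()/loopN() pair is compiled to run on its own TriCore core,
    // per the ShieldBuddy usage model described above. Pins are placeholders.
    const int LED0 = 13;   // driven by Core 0
    const int LED1 = 12;   // driven by Core 1
    const int LED2 = 11;   // driven by Core 2

    void setup0() { pinMode(LED0, OUTPUT); }                                       // runs once on Core 0
    void loop0()  { static bool on0; digitalWrite(LED0, on0 = !on0); delay(100); } // fast blink on Core 0

    void setup1() { pinMode(LED1, OUTPUT); }                                       // runs once on Core 1
    void loop1()  { static bool on1; digitalWrite(LED1, on1 = !on1); delay(250); } // medium blink on Core 1

    void setup2() { pinMode(LED2, OUTPUT); }                                       // runs once on Core 2
    void loop2()  { static bool on2; digitalWrite(LED2, on2 = !on2); delay(500); } // slow blink on Core 2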

There’s much, much more to this incredible board than I can cover here, but if you are interested in learning more, then may I recommend that you visit this portion of the Hitex site where you will find all sorts of goodies, including the ShieldBuddy Forum and the ShieldBuddy TC375 User Manual.

Now, if you will forgive me, I must away because I have to go and gloat over “my precious” (my ShieldBuddy TC375) and commence preparations to upgrade the Prognostication Engine by removing all of its existing processors and replacing them with a single ShieldBuddy TC375.

Actually, I just had a parting thought, which is that the Prognostication Engine’s role in life is to help me predict the future but — when I started out on this project — I would never have predicted that technology would develop so fast that I would one day have a triple-core 300 MHz processor driving “the beast.” How about you? What are your thoughts on all of this?

Originally posted here.

Read more…

IoT Sustainability, Data At The Edge.

Recently I've written quite a bit about IoT, and one thing you may have picked up on is that the Internet of Things is made up of a lot of very large numbers.

For starters, the number of connected things is measured in the tens of billions, approaching hundreds of billions. Then, behind that very large number is an even bigger one: the amount of data these billions of devices are predicted to generate.

As FutureIoT pointed out, IDC forecasts that the amount of data generated by IoT devices by 2025 will be in excess of 79.4 zettabytes (ZB).

How Much Is a Zettabyte?!

A zettabyte is a very large number indeed, but how big? How can you get your head around it? Does this help...?

A zettabyte is 1,000,000,000,000,000,000,000 bytes. Hmm, that's still not very easy to visualise.

So let's think of it in terms of London buses. Let's imagine a byte is represented as a human on a bus. A London bus can take 80 people, so you'd need 993 quintillion buses to accommodate 79.4 zettahumans.

I tried to calculate how long a line of 993 quintillion buses would be. Relating it to the distance to the Moon, Mars, or the Sun wasn't doing it justice; the only comparable scale is the size of the Milky Way. Even then, our 79.4 zettahumans lined up in London buses would stretch across the entire Milky Way ... and a fair bit further!

Sustainability Of Cloud Storage For 993 Quintillion Buses Of Data

Everything we do has an impact on the planet. Just by reading this article, you're generating 0.2 grams of Carbon Dioxide (CO2) emissions per second ... so I'll try to keep this short.

Data from Stanford Magazine suggests that every 100 gigabytes of data stored in the Cloud could generate 0.2 tons of CO2 per year. On that basis, storing 79.4 zettabytes of data in the Cloud could be responsible for the production of 158.8 billion tons of greenhouse gases.


 

Putting that number into context, using USA Today numbers, the total emissions for China, USA, India, Russia, Japan and Germany accounted for a little over 21 billion tons in 2019.

So if we just go ahead and let all the IoT devices stream data to the Cloud, those billions of little gadgets would indirectly generate more than seven times the air pollution of the six most industrialized countries combined.
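If you'd like to check my back-of-the-envelope sums, the little program below reproduces them; the bus length and Milky Way diameter are rough assumptions on my part (about 11 metres and roughly 100,000 light years respectively).

    #include <cstdio>

    int main() {
        const double bytes          = 79.4e21;   // 79.4 zettabytes
        const double people_per_bus = 80.0;
        const double bus_length_m   = 11.0;      // rough length of a London bus (assumption)
        const double milky_way_m    = 9.46e20;   // ~100,000 light years in metres (assumption)

        const double buses = bytes / people_per_bus;                // ~993 quintillion
        const double line_vs_galaxy = buses * bus_length_m / milky_way_m;

        const double tons_co2 = (bytes / 1e11) * 0.2;               // 0.2 t CO2 per 100 GB per year
        const double vs_top_emitters = tons_co2 / 21e9;             // vs ~21 billion tons in 2019

        std::printf("buses needed:            %.0f quintillion\n", buses / 1e18);
        std::printf("bus queue vs Milky Way:  %.1f x its diameter\n", line_vs_galaxy);
        std::printf("cloud storage CO2:       %.1f billion tons/year\n", tons_co2 / 1e9);
        std::printf("vs six biggest emitters: %.1f x\n", vs_top_emitters);
    }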

Save The Planet, Store Data At The Edge

As mentioned in a previous article, not all data generated by IoT devices needs to be stored in the Cloud.

I spoke with ObjectBox, experts in edge data storage, and they say their users cut their Cloud data storage by 60% on average. So how does that work, then?

First, what does The Edge mean?

The term "Edge" refers to the edge of the network, in other words the last piece of equipment or thing connected to the network closest to the point of usage.

Let me illustrate with a rather over-simplified diagram.


 

How Can Edge Data Storage Improve Sustainability?

In an article about computer vision and AI on the edge, I talked about how vast amounts of network data could be saved if the cameras themselves could detect what an important event was and just send that event over the network, not the entire video stream.

In that example, only the key events and metadata, like the identification marks of a vehicle running a stop light, needed to be transmitted across the network. However, it is important to keep the raw content at the edge, so it can be used for post-processing, for further training of the AI, or even retrieved at a later date, e.g. by law enforcement.

Another example could be sensors used to detect gas leaks, seismic activity, fires or broken glass. These sensors are capturing volumes of data each second, but they only want to alert someone when something happens - detection of abnormal gas levels, a tremor, fire or smashed window.

Those alerts are the primary purpose of those devices, but the data between those events can also hold significant value. In this instance, keeping it locally at the edge, yet having it available as and when needed, is an ideal way to reduce network traffic, reduce Cloud storage, and save the planet (well, at least a little bit).
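Here's a minimal sketch of that keep-it-local, send-only-events pattern; the threshold, buffer size, and send_alert() uplink are all placeholder assumptions rather than any particular vendor's API.

    #include <cstdio>
    #include <deque>

    // Placeholder for whatever uplink the device actually has (LoRaWAN, cellular, etc.).
    static void send_alert(double reading) { std::printf("ALERT sent to cloud: %.2f\n", reading); }

    // Keep raw readings at the edge in a bounded local buffer; only events
    // above a threshold ever leave the device.
    class EdgeFilter {
    public:
        EdgeFilter(double threshold, size_t capacity) : threshold_(threshold), capacity_(capacity) {}

        void on_reading(double value) {
            local_history_.push_back(value);           // raw data stays at the edge...
            if (local_history_.size() > capacity_) local_history_.pop_front();
            if (value > threshold_) send_alert(value); // ...only the event crosses the network
        }

        size_t stored_locally() const { return local_history_.size(); }

    private:
        double threshold_;
        size_t capacity_;
        std::deque<double> local_history_;
    };

    int main() {
        EdgeFilter gas_sensor(50.0, 86400);            // e.g. one day of 1 Hz samples
        for (double v : {12.0, 13.5, 14.0, 72.3, 13.8}) gas_sensor.on_reading(v);
        std::printf("readings kept at the edge: %zu\n", gas_sensor.stored_locally());
    }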

Accessible Data At The Edge

Keeping your data at the edge is a great way to save costs and increase performance, but you still want to be able to get access to it, when you need it.

ObjectBox have created not just one of the most efficient ways to store data at the edge, but they've also built a sophisticated and powerful method to synchronise data between edge devices and the Cloud, and between edge devices themselves.

Synchronise Data At The Edge - Fog Computing.

Fog Computing (which is computing that happens between the Cloud and the Edge) requires data to be exchanged with devices connected to the edge, but without going all the way to/from the servers in the Cloud. 

In the article on making smarter, safer cities, I talked about how by having AI-equipped cameras share data between themselves they could become smarter, more efficient. 

A solution like that could be using ObjectBox's synchronisation capabilities to efficiently discover and collect relevant video footage from various cameras to help either identify objects or even train the artificial intelligence algorithms running on the AI-equipped cameras at the edge.

Storing Data At The Edge Can Save A Bus Load Of CO2

Edge computing has a lot of benefits to offer; in this article I've just looked at one that is often overlooked - the cost of transferring data. I've also not really delved into the broader benefits of ObjectBox's technology: for example, from their open source benchmarks, ObjectBox seems to offer a ten times performance benefit compared to other solutions out there, and it is being used by more than 300,000 developers.

The team behind ObjectBox also built technologies currently used by internet heavy-weights like Twitter, Viber and Snapchat, so they seem to be doing something right, and if they can really cut down network traffic by 60%, they could be one of the sustainable technology companies to watch.

Originally posted here.

Read more…

Edge Impulse has joined 1% for the Planet, pledging to donate 1% of our revenue to support nonprofit organizations focused on the environment. To complement this effort, we launched the ElephantEdge competition, aiming to create the world’s best elephant tracking device to protect elephant populations that would otherwise be impacted by poaching. In a similar vein, this blog will detail how Lacuna Space, Edge Impulse, a microcontroller, and LoRaWAN can promote the conservation of endangered species by monitoring bird calls in remote areas.

Over the past years, The Things Network has worked to democratize the Internet of Things, building a global, crowdsourced LoRaWAN network carried by the thousands of users operating their own gateways worldwide. Thanks to Lacuna Space’s satellite constellation, the network coverage goes one step further. Lacuna Space uses LEO (Low-Earth Orbit) satellites to provide LoRaWAN coverage at any point around the globe. Messages received by the satellites are then routed to ground stations and forwarded to LoRaWAN service providers such as TTN. This technology can benefit several industries and applications: tracking a vessel not only in harbors but across the oceans, or monitoring endangered species in remote areas. All that with only 25 mW of power (the ISM band limit) to send a message to the satellite. This is truly amazing!

Most of these devices are typically simple, just sending a single temperature value, or other sensor reading, to the satellite - but with machine learning we can track much more: what devices hear, see, or feel. In this blog post we'll take you through the process of deploying a bird sound classification project using an Arduino Nano 33 BLE Sense board and a Lacuna Space LS200 development kit. The inferencing results are then sent to a TTN application.

Note: Access to the Lacuna Space program and dev kit is limited to a closed group at the moment. Get in touch with Lacuna Space for hardware and software access. The technical details to configure your Arduino sketch and TTN application are available in our GitHub repository.

 

Our bird sound model classifies house sparrow and rose-ringed parakeet species with a 92% accuracy. You can clone our public project or make your own classification model following our different tutorials such as Recognize sounds from audio or Continuous Motion Recognition.


Once you have trained your model, head to the Deployment section, select the Arduino library and Build it.


Import the library within the Arduino IDE, and open the microphone continuous example sketch. We made a few modifications to this example sketch to interact with the LS200 dev kit: we added a new UART link and we transmit classification results only if the prediction score is above 0.8.
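The complete modified sketch is in the GitHub repository mentioned above; the fragment below is just a rough sketch of that thresholding idea, assuming the Arduino library exported from your own Edge Impulse project (the header name is project-specific) and using Serial1 as a stand-in for the extra UART link to the LS200.

    #include <your_project_inferencing.h>   // placeholder: the library exported from your Edge Impulse project

    const float CONFIDENCE_THRESHOLD = 0.8f;   // only forward confident predictions, as described above

    void setup() {
      Serial1.begin(115200);   // extra UART link toward the Lacuna LS200 dev kit (baud rate is an assumption)
    }

    // Called after run_classifier_continuous() has filled in 'result'
    // (see the microphone_continuous example for the full audio pipeline).
    void forward_confident_results(const ei_impulse_result_t& result) {
      for (size_t ix = 0; ix < EI_CLASSIFIER_LABEL_COUNT; ix++) {
        if (result.classification[ix].value >= CONFIDENCE_THRESHOLD) {
          // Send a compact "label:score" line for the LS200 to wrap into a LoRaWAN uplink.
          Serial1.print(result.classification[ix].label);
          Serial1.print(":");
          Serial1.println(result.classification[ix].value, 5);
        }
      }
    }

    void loop() {
      // ...audio sampling and run_classifier_continuous() as in the stock example,
      // then: forward_confident_results(result);
    }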

Connect with the Lacuna Space dashboard by following the instructions in our application’s GitHub ReadMe. Using a web tracker, you can determine the next good time a Lacuna Space satellite will be flying over your location. You can then receive the signal through your The Things Network application and view the bird call classification results:

    {
        "housesparrow": "0.91406",
        "redringedparakeet": "0.05078",
        "noise": "0.03125",
        "satellite": true
    }

No Lacuna Space development kit yet? No problem! You can already start building and verifying your ML models on the Arduino Nano 33 BLE Sense or one of our other development kits, test it out with your local LoRaWAN network (by pairing it with a LoRa radio or LoRa module) and switch over to the Lacuna satellites when you get your kit.

Originally posted on the Edge Impulse blog by Aurelien Lequertier - Lead User Success Engineer at Edge Impulse, Jenny Plunkett - User Success Engineer at Edge Impulse, & Raul James - Embedded Software Engineer at Edge Impulse

Read more…

The possibilities of what you can do with digital twin technology are only as limited as your imagination

Today, forward-thinking companies across industries are implementing digital twin technology in increasingly fascinating and ground-breaking ways. With Internet of Things (IoT) technology improving every day and more and more compute power readily available to organizations of all sizes, the possibilities of what you can do with digital twin technology are only as limited as your imagination.

What Is a Digital Twin?

A digital twin is a virtual representation of a physical asset that is practically indistinguishable from its physical counterpart. It is made possible thanks to IoT sensors that gather data from the physical world and send it to be virtually reconstructed. This data includes design and engineering details that describe the asset’s geometry, materials, components, and behavior or performance.

When combined with analytics, digital twin data can unlock hidden value for an organization and provide insights about how to improve operations, increase efficiency or discover and resolve problems before the real-world asset is affected.

These 4 Steps Are Critical for Digital Twin Success:

Involve the Entire Product Value Chain

It’s critical to involve stakeholders across the product value chain in your design and implementation. Each department faces diverse business challenges in their day-to-day operations, and a digital twin provides ready solutions to problems such as the inability to coordinate across end-to-end supply chain processes, minimal or no cross-functional collaboration, the inability to make data-driven decisions, or clouded visibility across the supply chain. Decision-makers at each level of the value chain have extensive knowledge on critical and practical challenges. Including their inputs will ensure a better and more efficient design of the digital twin and ensure more valuable and relevant insights.

Establish Well-Documented Practices

Standardized and well-documented design practices help organizations communicate ideas across departments, or across the globe, and make it easier for multiple users of the digital twin to build or alter the model without destroying existing components or repeating work. Best-in-class modelling practices increase transparency while simplifying and streamlining collaborative work.

Include Data From Multiple Sources

Data from multiple sources—both internal and external—is an essential part of creating realistic and helpful simulations. 3D modeling and geometry is sufficient to show how parts fit together and how a product works, but more input is required to model how various faults or errors might occur somewhere in the product’s lifecycle. Because many errors and problems can be nearly impossible to accurately predict by humans alone, a digital twin needs a vast amount of data and a robust analytics program to be able to run algorithms to make accurate forecasts and prevent downtime.

Ensure Long Access Lifecycles 

Digital twins implemented using proprietary design software have a risk of locking owners into a single vendor, which ties the long-term viability of the digital twin to the longevity of the supplier’s product. This risk is especially significant for assets with long lifecycles such as buildings, industrial machinery, airplanes, etc., since the lifecycles of these assets are usually much longer than software lifecycles. This proprietary dependency only becomes riskier and less sustainable over time. To overcome these risks, IT architects and digital twin owners need to carefully set terms with software vendors to ensure data compatibility is maintained and vendor lock-in can be avoided.

Common Pitfalls to Digital Twin Implementation

Digital twin implementation requires an extraordinary investment of time, capital, and engineering might, and as with any project of this scale, there are several common pitfalls to implementation success.

Pitfall 1: Using the Same Platform for Different Applications

Although it’s tempting to try and repurpose a digital twin platform, doing so can lead to incorrect data at best and catastrophic mistakes at worst. Each digital twin is completely unique to a part or machine, therefore assets with unique operating conditions and configurations cannot share digital twin platforms.

Pitfall 2: Going Too Big, Too Fast

In the long run, a digital twin replica of your entire production line or building is possible and could provide incredible insights, but it is a mistake to try and deploy digital twins for all of your pieces of equipment or programs all at once. Not only is doing too much, too fast costly, but it might cause you to rush and miss critical data and configurations along the way. Rather than rushing to do it all at once, perfect a few critical pieces of machinery first and work your way up from there.

Pitfall 3: Inability to Source Quality Data

Data collected in the field is subject to quality errors due to human mistakes or duplicate entries. The insights your digital twin provides you are only as valuable as the data it runs off of. Therefore, it is imperative to standardize data collection practices across your organization and to regularly cleanse your data to remove duplicate and erroneous entries.

Pitfall 4: Lack of Device Communication Standards

If your IoT devices do not speak a common language, miscommunications can muddy your processes and compromise your digital twin initiative. Build an IT framework that allows your IoT devices to communicate with one another seamlessly to ensure success.

Pitfall 5: Failing to Get User Buy-In

As mentioned earlier in this eBook, a successful digital twin strategy includes users from across your product value chain. It is critical that your users understand and appreciate the value your digital twin brings to them individually and to your organization as a whole. Skepticism, low confidence, or outright resistance can lead to poor user participation, which can undermine all of your efforts.

The Challenge of Measuring Digital Twin Success

Each digital twin is unique and completely separate in its function and end-goal from others on the market, which can make measuring success challenging. Depending on the level of the twin implemented, businesses need to create KPIs for each individual digital twin as it relates to larger organizational goals.

The configuration of a digital twin is determined by the type of input data, the number of data sources and the defined metrics, and this configuration determines the value an organization can extract from the twin. A twin with a richer configuration can therefore yield better predictions than one with a simpler configuration. The reality is that success is relative, and it is rarely meaningful to compare the effectiveness of two different digital twins side by side.

Conclusion

It’s possible — probable even — that in the future all people, enterprises, and even cities will have a digital twin. With the enormous growth predicted in the digital twin market in the coming years, it’s evident that the technology is here to stay. The possible applications of digital twins are truly limitless, and as IoT technology becomes more advanced and widely accessible, we’re likely to see many more innovative and disruptive use cases.

However, a technology with this much potential must be carefully and thoughtfully implemented in order to ensure its business value and long-term viability. Before embracing a digital twin, an organization must first audit its maturity, standardize processes, and prepare its culture and staff for this radical change in operations. Is your organization ready?

Originally posted here.

Read more…

Skoltech researchers and their colleagues from Russia and Germany have designed an on-chip printed "electronic nose" that serves as a proof of concept for this kind of low-cost, sensitive device for portable electronics and healthcare. The paper was published in the journal ACS Applied Materials & Interfaces.

The rapidly growing fields of Internet of Things (IoT) and advanced medical diagnostics require small, cost-effective, low-powered yet reasonably sensitive and selective gas-analytical systems like so-called "electronic noses." These systems can be used for noninvasive diagnostics of human breath, such as diagnosing chronic obstructive pulmonary disease (COPD) with a compact sensor system also designed at Skoltech. Some of these sensors work a lot like actual noses—say, yours—by using an array of sensors to better detect the complex signal of a gaseous compound.

One approach to creating these sensors is additive manufacturing, which has achieved enough power and precision to produce even the most intricate devices. Skoltech senior research scientist Fedor Fedorov, Professor Albert Nasibulin, research scientist Dmitry Rupasov and their collaborators created a multisensor "electronic nose" by printing nanocrystalline films of eight different metal oxides (manganese, cerium, zirconium, zinc, chromium, cobalt, tin, and titanium) onto a multielectrode chip. The Skoltech team came up with the idea for this project.

"For this work, we used microplotter printing and true solution inks. There are a few things that make it valuable. First, the resolution of the printing is close to the distance between electrodes on the chip which is optimized for more convenient measurements. We show these technologies are compatible. Second, we managed to use several different oxides which enables more orthogonal signal from the chip resulting in improved selectivity. We can also speculate that this technology is reproducible and easy to be implemented in industry to obtain chips with similar characteristics, and that is really important for the 'e-nose' industry," Fedorov explained.

In subsequent experiments, the device was able to sniff out the difference between several alcohol vapors (methanol, ethanol, isopropanol, and n-butanol), which are chemically very similar and hard to tell apart, at low concentrations in the air. Since methanol is extremely toxic, detecting it in beverages and differentiating it from ethanol can even save lives. To process the data, the team used linear discriminant analysis (LDA), a pattern recognition algorithm, though other machine learning algorithms could also be used for this task.
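As a rough illustration of how an LDA classifier separates this kind of multichannel sensor data, here is a minimal scikit-learn sketch; the synthetic eight-channel responses below merely stand in for real chip measurements and are not the authors' data or code.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-in for chip responses: 8 metal-oxide channels per sample,
# four vapor classes (methanol, ethanol, isopropanol, n-butanol).
n_per_class, n_channels = 50, 8
classes = ["methanol", "ethanol", "isopropanol", "n-butanol"]
X = np.vstack([rng.normal(loc=i, scale=0.8, size=(n_per_class, n_channels))
               for i in range(len(classes))])
y = np.repeat(classes, n_per_class)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

lda = LinearDiscriminantAnalysis(n_components=2)
lda.fit(X_train, y_train)
print("held-out accuracy:", lda.score(X_test, y_test))

# The 2D projection is what a scatter plot of the vapor clusters would be drawn from.
projected = lda.transform(X_test)
```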

So far the device operates at rather high temperatures of 200-400 degrees Celsius, but the researchers believe that new quasi-2-D materials such as MXenes, graphene and so on could be used to increase the sensitivity of the array and ultimately allow it to operate at room temperature. The team will continue working in this direction, optimizing the materials used to lower power consumption.

Originally posted here.

Read more…

The benefits of IoT data are widely touted. Enhanced operational visibility, reduced costs, improved efficiencies and increased productivity have driven organizations to take major strides towards digital transformation. With countless promising business opportunities, it’s no surprise that IoT is expanding rapidly and relentlessly. It is estimated that there will be 75.4 billion IoT devices by 2025. As IoT grows, so do the volumes of IoT data that need to be collected, analyzed and stored. Unfortunately, significant barriers exist that can limit or block access to this data altogether.

Successful IoT data acquisition starts and ends with reliable and scalable IoT connectivity. Selecting the right communications technology is paramount to the long-term success of your IoT project and various factors must be considered from the beginning to build a functional wireless infrastructure that can support and manage the influx of IoT data today and in the future.

Here are five IoT architecture must-haves for unlocking IoT data at scale.

1. Network Ownership

For many businesses, IoT data is one of their greatest assets, if not the most valuable. This intensifies the demand to protect the flow of data at all costs. To gain maximum data authority and architecture control, organizations across industrial verticals are increasingly adopting privately managed networks.

Beyond the undeniable benefits of data security and privacy, private networks give users more control over their deployment, with the flexibility to tailor coverage to the specific needs of their campus-style network. On a public network, users risk not having the reliable connectivity needed for indoor, underground and remote critical IoT applications. And since a private network is owned and operated by the user, there are no costly monthly access fees, data plans or subscription charges imposed by public operators, lowering the overall total cost of ownership. Private networks also provide full control over network availability and uptime to ensure users have reliable access to their data at all times.

2. Minimal Infrastructure Requirements

Since the number of end devices is largely determined by your IoT use cases, choosing a wireless technology that requires minimal supporting infrastructure, such as base stations and repeaters, as well as minimal configuration and optimization, is crucial to cost-effectively scaling your IoT network.

Wireless solutions with long range and excellent penetration capability, such as next-gen low-power wide-area networks, require fewer base stations to cover vast, structurally dense industrial or commercial campuses. Likewise, a robust radio link and large network capacity allow an individual base station to effectively support massive numbers of sensors without compromising performance, ensuring a continuous flow of IoT data today and in the future.

3. Network and Device Management

As IoT initiatives move beyond proofs-of-concept, businesses need an effective and secure approach to operate, control and expand their IoT network with minimal costs and complexity.

As IoT deployments scale to hundreds or even thousands of geographically dispersed nodes, a manual approach to connecting, configuring and troubleshooting devices is inefficient and expensive. Likewise, by leaving devices completely unattended, users risk losing business-critical IoT data when it’s needed the most. A network and device management platform provides a single-pane, top-down view of all network traffic, registered nodes and their status for streamlined network monitoring and troubleshooting. Likewise, it acts as the bridge between the edge network and users’ downstream data servers and enterprise applications so users can streamline management of their entire IoT project from device to dashboard.

4. Legacy System Integration

Most traditional assets, machines, and facilities were not designed for IoT connectivity, creating huge data silos. This leaves companies with two choices: building entirely new, greenfield plants with native IoT technologies or updating brownfield facilities for IoT connectivity. Highly integrable, plug-and-play IoT connectivity is key to streamlining the costs and complexity of an IoT deployment. Businesses need a solution that can bridge the gap between legacy OT and IT systems to unlock new layers of data that were previously inaccessible. Wireless IoT connectivity must be able to easily retrofit existing assets and equipment without complex hardware modifications and production downtime. Likewise, it must enable straightforward data transfer to the existing IT infrastructure and business applications for data management, visualization and machine learning.

5. Interoperability

Each IoT system is a mashup of diverse components and technologies. This makes interoperability a prerequisite for IoT scalability, to avoid being saddled with an obsolete system that fails to keep pace with new innovation later on. By designing an interoperable architecture from the beginning, you can avoid fragmentation and reduce the integration costs of your IoT project in the long run. 

Today, technology standards exist to foster horizontal interoperability by fueling global cross-vendor support through robust, transparent and consistent technology specifications. For example, a standards-based wireless protocol allows you to benefit from a growing portfolio of off-the-shelf hardware across industry domains. When it comes to vertical interoperability, versatile APIs and open messaging protocols act as the glue connecting the edge network with a multitude of value-generating backend applications. Leveraging these open interfaces, you can also scale your deployment across locations and seamlessly aggregate IoT data across premises.
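As a small, hedged example of that glue, the sketch below shows a backend process consuming edge telemetry over MQTT, an open messaging protocol, using the paho-mqtt client; the broker address and topic are placeholders, and the constructor shown assumes paho-mqtt 1.x.

```python
import json
import paho.mqtt.client as mqtt

BROKER, TOPIC = "broker.example.com", "site-a/+/telemetry"  # placeholders

def on_message(client, userdata, msg):
    # Hand the decoded payload to downstream analytics or visualization.
    payload = json.loads(msg.payload)
    print(msg.topic, payload)

# paho-mqtt 1.x constructor; version 2.x additionally expects a CallbackAPIVersion argument.
client = mqtt.Client()
client.on_message = on_message
client.connect(BROKER, 1883)
client.subscribe(TOPIC)
client.loop_forever()
```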

IoT data is the lifeblood of business intelligence and competitive differentiation, and IoT connectivity is the key to ensuring reliable and secure access to this data. When it comes to building a future-proof wireless architecture, it’s important to consider not only existing requirements but also those that might pop up down the road. A wireless solution that offers data ownership, minimal infrastructure requirements, and built-in network management, integration and interoperability will not only ensure access to IoT data today but also provide cost-effective support for the influx of data and devices in the future.

Originally posted here.

Read more…

by Ariane Elena Fuchs

Solar power, wind energy, micro cogeneration power plants: energy from renewable sources has become indispensable, but it makes power generation and distribution far more complex. How the Internet of Things is helping make energy management sustainable.

It feels like Groundhog Day yet again – in 2020 it fell on August 22. That was the point when the demand for raw materials exceeded the Earth’s supply and its capacity to regenerate these natural resources. All reserves consumed from that date onward cannot be regenerated within the current year. In other words, humanity is living beyond its means, consuming around 50 percent more resources than the Earth provides naturally.

To conserve these precious resources and reduce climate-damaging CO2 emissions, the energy we need must come from renewable sources such as wind, sun and water. This is the only way to reduce both greenhouse gases and our fossil fuel use. Fortunately, a start has been made: in 2019, renewable energies – predominantly wind and solar – already covered almost 43 percent of Germany’s electricity requirements, and the trend is rising.

DECENTRALIZING ENERGY PRODUCTION

This also means, however, that the traditional energy management model – a few power plants supplying many consumers – is outdated. After all, the phasing out of large nuclear and coal-fired power plants doesn’t just have consequences for Germany’s CO2 balance. Shifting electricity production to wind, solar and smaller cogeneration plants reverses the previous pattern of energy generation and distribution, from a highly centralized to an increasingly decentralized structure. Instead of just a few large power plants sending electricity to the grid, there are now many smaller energy sources such as solar panels and wind turbines, which makes managing the grid – including distributing the electricity optimally – far more complex. It’s up to the energy sector to wrangle this challenging transformation. As the country’s energy becomes more sustainable, it also becomes harder to organize, since generation from wind and sun cannot be planned in advance as easily as coal and nuclear power can. What’s more, thousands of wind turbines and solar panels feed electricity into the grid, and there is a particular lack of real-time information about how much electricity is being generated at any given moment.

KEY TECHNOLOGY IOT: FROM ENERGY FLOW TO DATA STREAM

This is where the Internet of Things comes into play: IoT can supply exactly this data from every power generator and send it to a central location. Once there, it can be evaluated before ideally enabling the power grid to be controlled automatically. The result is an IoT ecosystem. In order to operate wind farms more efficiently and reliably, a project team is currently developing an IoT-supported system that bundles and processes all relevant parameters and readings at a wind farm. They can then reconstruct the current operating and maintenance status of individual turbines. This information can be used to detect whether certain components are about to wear out and replace them before a turbine fails.

POTENTIAL FOR NEW BUSINESS MODELS

According to a recent Gartner study, the Internet of Things (IoT) is becoming a key technology for monitoring and orchestrating the complex energy and water ecosystem. In addition, consumers want more control over energy prices and more environmentally friendly power products. With the introduction of smart metering, data from so-called prosumers is becoming increasingly important. These energy-producing consumers operate, for example, the photovoltaic systems on their own roofs. IoT sensors collect the necessary power generation information; although the sensors are used only locally and for specific purposes, they provide energy companies with a great deal of data. To tap the potential of this information for the expansion of renewable energy, it must be combined and evaluated intelligently. According to Gartner, IoT has the potential to change the energy value chain in four key areas: it enables new business models, optimizes asset management, automates operations and digitalizes the entire value chain from energy source to kWh.

ENERGY TRANSITION REQUIRES TECHNOLOGICAL CHANGE

Installing smaller power-generating systems will soon no longer pose the greatest challenge for operators. In the near future, coherently linking, integrating and controlling them will be the order of the day. The energy transition is therefore spurring technological change on a grand scale. For example, smart grids will only function properly and increase overall capacity when data on generation, consumption and networks is available in real-time. The Internet of Things enables the necessary fast data processing, even from the smallest consumers and prosumers on the grid. With the help of the Internet of Things, more and more household appliances can communicate with the Internet. These devices are then in turn connected to a smart meter gateway, i.e. a hub for the intelligent management of consumers, producers and storage locations at private households and commercial enterprises. To be able to use the true potential of this information, however, all the data must flow together into a common data platform, so that it can be analyzed intelligently.

FROM INDIVIDUAL APPLICATIONS TO AN ECOSYSTEM

For transmitting data from the Internet of Things, Germany has national fixed-line and mobile networks available. New technology such as the 5G mobile standard allows data to be securely and reliably transferred to the cloud, either directly via the 5G network or via a 5G campus network. Software for data analytics and AI tailored to energy firms is now available – including monitoring, analysis, forecasting and optimization tools. The analyzed data can be accessed via web browsers and in-house data centers. Taken together, it all provides the energy sector with a comprehensive IoT ecosystem for the future.

Originally posted here.

Read more…

by Philipp Richert

New digital and IoT use cases are becoming more and more important. When it comes to the adoption of these new technologies, there are several different maturity levels, depending on the domain. Within the retail industry, and specifically food retail, we are currently seeing the emergence of a host of IoT use cases.

Two forces are driving this: a technology push, in which suppliers in the retail domain have technologies available to build retail IoT use cases within a connected store; and a market pull by their customers, who are boosting the demand for such use cases.

Figure: Retail IoT use cases – driven by a technology push and a market pull

However, we also need to ask the following questions: What are IoT use cases good for? And what are they aiming at? We currently see three different fields of application:

  • Increasing efficiency and optimizing processes
  • Increasing customer satisfaction
  • Increasing revenues with new business models

No matter what is most important for your organization or whatever your focus, it is crucial to set up a process that provides guidance for identifying the right use cases. In the following section, we share some insights on how retailers can best design this process. We collated these insights together with the team from the Food Tech Campus.

How to identify the right retail IoT use cases

When identifying the right use cases for their stores, retailers should make sure to look into all phases within the entire innovation process: from problem description and idea collation to solution concept and implementation. Within this process, it is also essential to consider the so-called innovator’s trilemma and ensure that use cases are:

  • Desirable ones that your customer really needs
  • Technically feasible
  • Profitable for your sustainable business development

Before we can actually start identifying retail IoT use cases, we need to define search fields so that we can work on one topic with greater dedication and focus. We must then open up the problem space in order to extract the most relevant problems and pain points. Starting with prioritized and selected pain points, we then open up the solution space in order to define several solution concepts. Once these have been validated, the result should be a well-defined problem statement that concisely describes one singular pain point.

In the following, we want to take a deep dive into the different phases of the process while giving concrete examples, tips and our top-rated tools. Enjoy!

Search fields

Retailers possess expertise and face challenges at various stages along their complex process chains. It helps here to focus on a specific target group in order to avoid distraction. Target groups are typically users or customers in a defined environment. A good example would be to focus your search on processes that happen inside a store location and are relevant to the customer (e.g., the food shopper).

Understand and observe problems

User research, observation and listening are keys to a well-defined problem statement that allows for further ideation. Embedding yourself in various situations and conducting interviews with all the stakeholders visiting or operating a store should be the first steps. Join employees around the store for a day or two and support them during their everyday tasks. Empathize, look for any friction and ask questions. Take your key findings into workshops and spend some time isolating specific causes. Use personas based on your user research and make use of frameworks and canvas templates in order to structure your findings. Use working titles to name the specific problem statements. One example might be: Long queueing as a major nuisance for customers.

Synthesize findings

Are your findings somehow connected? Single-purpose processes and their owners within a store environment are prone to isolated views. Creating a common problem space increases the chances of adoption of any solution later. So it is worth taking the time to map out all findings and take a look at projects in the past and their outcome. In our example, queueing is linked to staff planning, lack of communication and unpredictable customer behavior.

Prioritize problems and pain points

Ask users or stakeholders to give their view on defined problem statements and let them vote. Challenge their view and make them empathize and broaden their view towards a more holistic benefit. Once the quality of a problem statement has been assessed, evaluate the economic implications. In our example, this could mean that queueing affects most employees in the store, directly or indirectly. This problem might be solved through technology and should be further explored.

The result of a well-structured problem statement list should consist of a few new insights that might result in quick gains; one or two major known pain points, where the solution might be viable and feasible; and a list with additional topics that exist but are not too pressing at the moment.

Define opportunity areas

Map technologies and problems together. Are there any strategic goals that these problem statements might be assigned to? Have things changed in terms of technical feasibility (e.g., has the cost of a technology dropped over the past three years?). Can problems be validated within a larger setup easily or are we talking about singular use cases? All these considerations should lead towards the most attractive problem to solve. Again, in our example, this might be: Queuing is a major problem in most locations, satisfying our customers should be our main goal, existing solutions are too expensive or inflexible.

Figure: Retail IoT use cases – opening up the problem space and the solution space


Ideate and explore use cases

When conducting an ideation session, it is very helpful to bring in trends that are relevant to the defined problem areas so as to help boost creativity. In our example, for instance, this might be technology trends such as frictionless checkout for retail, hybrid checkout concepts, bring your own device (BYOD) and sensor approaches. It is always important to keep the following in mind: What do these trends mean for the customer journey in-store and how can they be integrated in (legacy) environments?

Define solutions concepts

In the process of further defining the solution concepts, it is essential to evaluate the market potential and to consider customer and user feedback. Depending on the solution, it might be necessary to ask the various stakeholders – from store managers to personnel to customers – in order to get a clearer picture. When talking to customers or users, it is also helpful to bring along scribbles, pictures or prototypes in order to increase immersion. The insights gathered in this way help to validate assumptions and to pilot the concept accordingly.

Set metrics and KPIs to prove success

Defining data-based metrics and KPIs is essential for a successful solution. When setting up metrics and KPIs, you need to consider two aspects:

  • Use existing data – e.g., checkout frequency – to demonstrate the impact of the new solution. This offers a very inexpensive way of validating the business potential of the solution early on.
  • Use new data – e.g., measured waiting time – from the solution and evaluate it on a regular basis. This helps you understand whether you are collecting the right data and derive measures that improve your solution (a minimal sketch of both kinds of KPI follows this list).
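As a minimal, illustrative sketch of both kinds of KPI, the snippet below derives checkout frequency (existing data) and measured waiting time (new data) from a hypothetical event log; the file and column names are assumptions made purely for illustration.

```python
import pandas as pd

# Hypothetical checkout event log; file and column names are illustrative only.
events = pd.read_csv("checkout_events.csv", parse_dates=["queue_entry", "checkout_start"])

# Existing data: checkout frequency per hour as a baseline KPI.
checkouts_per_hour = events.set_index("checkout_start").resample("H").size()
print(checkouts_per_hour.describe())

# New data from the solution: measured waiting time per customer.
wait_minutes = (events["checkout_start"] - events["queue_entry"]).dt.total_seconds() / 60
print("average wait (min):", round(wait_minutes.mean(), 1))
print("share waiting over 5 min:", round((wait_minutes > 5).mean(), 2))
```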

Prototype for quick insights

In terms of technology, practically everything is feasible today. However, the value proposition of a use case (in terms of business and users) can remain unclear and requires testing. Instead of building a technical prototype, it can be helpful to evaluate the value proposition of the solution with humans (empathy prototyping). This could be a person triggering an alarm based on the information at hand instead of an automatic action. Insights and lessons learnt from this phase can be used alongside the technical realization (proof-of-concept) in order to tweak specific features of the solution.

Initiate a PoC for technical feasibility

When it comes to technical feasibility, a clear picture of the objectives and key results (OKRs) for the PoC is essential. This helps to set the boundaries for a lean process with respect to the installation of hardware, an efficient timeline and minimum costs. Furthermore, a well-defined test setup fosters short testing timespans that often yield all needed results.

How IoT platforms can help build retail IoT use cases

The strong trend towards digitization opens up new use cases for the (food) retail industry. In order to make the most of this trend and to build on IoT, it is crucial first of all to determine which use cases to start with. Every retailer has a different focus and different needs for their stores.

In the course of our retail projects, we have identified some of the recurring use cases that food retailers are currently implementing. We have also learnt a lot about how they can best leverage IoT in order to build a connected store. We share these insights in our white paper “The connected retail store.”

Originally posted here.

Read more…

By Sanjay Tripathi, Lauren Luellwitz, and Kevin Egge

There are petabytes of data generated by the intelligent, interconnected and autonomous systems of Industry 4.0. When combined with artificial intelligence tools that provide actionable insight, this data has the potential to improve every function within a plant: operations, engineering, quality, reliability and maintenance.

The maintenance function, while crucial to the smooth functioning of a plant, has until recently not seen much innovation. Many among us have experienced equipment downtime, process drifts, massive hits to yield, and declines in product reliability because maintenance was performed poorly or late. Yet Enterprise Asset Management (EAM) systems – the ERP systems that help maintain assets – remained systems of record that typically generated work orders and recorded the maintenance performed. Even as production processes became mind-numbingly complex, EAM systems remained much the same.

IBM Maximo 8.0, or Maximo Application Suite, is one example of a system that combines artificial intelligence (AI), big data and cloud computing technologies with domain expertise from operational technology (OT) to simplify maintenance and deliver production resilience.

Maximo 8.0 leverages AI to visually inspect gas pipelines, rail tracks, bridges and tunnels; AI guides technicians as they conduct complex repairs; and it provides maintenance supervisors with real-time visibility into the health and safety of their technicians. Domain expertise is incorporated in the form of data used to train AI models. These capabilities improve the ability to avoid unscheduled downtime, improve first-time-fix rates, and reduce safety incidents.

Maintenance records residing in Maximo are combined with real-time operational data from production assets and their associated asset model to better predict when maintenance is required. In this example, asset models embody domain expertise. These models characterize how a production asset such as a power generator or catalytic converter should perform in the context of where it is installed in the process.

The Maximo application itself is encapsulated (containerized) using Red Hat’s OpenShift technology. Containerization allows the application to be easily deployed on-premises, on private clouds or hybrid clouds. This flexibility in deployment benefits IT organizations that need to continually evolve their infrastructure, which is almost every organization.

Maximo 8.0 is available as a suite that includes both core and advanced capabilities, with a single software entitlement providing access to all of them. The entitlement covers the core EAM functionality of work and resource scheduling, asset management, industry-specific customizations, EHS guidelines, and mobile functionality. It also covers advanced functionality such as Maximo Monitor, which automatically detects anomalies in how an asset is performing; Maximo Health, which measures equipment health; Maximo Predict, which, as the name suggests, predicts when maintenance is required; and Maximo Assist, which helps technicians conduct repairs.

Originally posted here.

Read more…

by Olivier Pauzet

Over the past year, we have seen the Industrial IoT (IIoT) take an important step forward, crossing the chasm that previously separated IIoT early adopters from the majority of companies.

New solutions like Octave, Sierra Wireless’ edge-to-cloud solution for connecting industrial assets, have greatly simplified the IIoT, making it possible now for practically any company to securely extract, transmit, and act on data from bio-waste collectors, liquid fertilizer tanks, water purifiers, hot water heaters and other industrial equipment.

So, what IIoT trends will these 2020 developments lead to in 2021? I expect that they will drive greater adoption of the IIoT next year, as manufacturing, utility, healthcare, and other organizations further realize that they can help their previously silent industrial assets speak using the APIs integrated in new IoT solutions. At the same time, I expect we will start to see the development of some revolutionary IIoT applications that use 5G’s Ultra-Reliable, Low-Latency Communications (URLLC) capabilities to change the way our factories, electric grid, and healthcare systems operate.

In 2021, Industrial Equipment APIs Will Give Quiet Equipment A Voice

Cloud APIs have transformed the tech industry, and with it, our digital economy. By enabling SaaS and other cloud-based applications to easily and securely talk to each other, cloud APIs have vastly expanded the value of these applications to users. These APIs have also spawned billion-dollar companies like Stripe, Tableau, and Twilio, whose API-focused business models have transformed the online payments, data visualization, and customer service markets.

2021 will be the year industrial companies begin seeing their markets transformed by APIs, as more of these companies begin using industrial equipment APIs built into new IIoT solutions to enable their industrial assets to talk to the cloud.

Using new edge-to-cloud solutions like Octave, with built-in industrial equipment APIs for Modbus and other industrial communications protocols, these companies will be able to securely connect these assets to the cloud almost as easily as if the equipment were a cloud-based application.

In fact, by simply plugging a low-cost IoT gateway with these IIoT APIs into their industrial equipment, they will be able to deploy IIoT applications that allow them to remotely monitor, maintain, and control this equipment. Then, using these applications, they can lower equipment downtime, reduce maintenance costs, launch new Equipment-as-a-Service business models, and innovate faster.
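What such an integration can look like at the protocol level is sketched below with the open-source pymodbus library; this is a generic Modbus TCP illustration rather than Octave's actual API, and the IP address, unit ID, register addresses, and scaling are made-up placeholders.

```python
from pymodbus.client import ModbusTcpClient  # pymodbus 3.x import path

# Generic Modbus TCP read; address, unit ID, and register layout are placeholders.
client = ModbusTcpClient("192.168.1.50", port=502)
client.connect()

result = client.read_holding_registers(address=0, count=2, slave=1)
if not result.isError():
    raw_temp, raw_pressure = result.registers
    # Assumed scaling: registers hold tenths of an engineering unit.
    print("temperature:", raw_temp / 10.0, "pressure:", raw_pressure / 10.0)

client.close()
```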

Industrial companies have been trying to connect their assets to the cloud for years, but have been stymied by the complexity, time, and expense involved in doing so. In 2021, industrial equipment APIs will provide these companies with a way to simply, quickly, and cheaply connect this equipment to the cloud. By giving a voice to billions of pieces of industrial equipment, these Industrial IoT APIs will help bring about the productivity, sustainability, and other benefits Industry 4.0 has long promised.

In 2021 Manufacturing, Utility and Healthcare Will Drive Growth of the Industrial IoT

Until recently, the consumer sector, and especially the smart home market, has led the way in adopting the IoT, as the success of the Google Nest smart thermostat, the Amazon Echo smart speaker, the Ring smart doorbell, and the Philips Hue smart lights demonstrates. However, another IIoT trend we can expect to see in 2021 is the industrial sector starting to catch up to the consumer market, with the manufacturing, utility, and healthcare markets leading the way.

For example, new IIoT solutions now make it possible for Original Equipment Manufacturers (OEMs) and other manufacturing companies to simply plug their equipment into the IIoT and begin acting on data from this equipment almost immediately. This has lowered the time to value for IIoT applications to the point where companies can begin reaping financial benefits greater than the total cost for their IIoT application in a few short months.

At this point, manufacturers who don’t have a plan to integrate the IIoT into their assets are, to put it bluntly, leaving money on the table – money their competitors will happily snap up with their own new connected industrial equipment offerings if they do not.

Like manufacturing companies, utilities will ramp up their use of the IIoT in 2021, as they seek to improve their operational efficiency, customer engagement, reliability, and sustainability. For example, utilities will increasingly use the IIoT to perform remote diagnostics and predictive maintenance on their grid infrastructure, reducing this equipment’s downtime while also lowering maintenance costs. In addition, a growing number of utilities will use the IIoT to collect and analyze data on their wind, solar and other renewable energy generation portfolios, allowing them to reduce greenhouse gas emissions while still balancing energy supply and demand on the grid.

Along with manufacturing and utilities, healthcare is the third market sector I expect to lead the way in adopting the IIoT in 2021. The COVID-19 pandemic has demonstrated to healthcare providers how connectivity – such as Internet-based telemedicine solutions – can improve patient outcomes while reducing their costs. In 2021 they will increase their use of the IIoT as they work to extend this connectivity to patient monitors, scanners and other medical devices. With the Internet of Medical Things (IoMT), healthcare providers will be better able to prepare patient treatments, remotely monitor and respond to changes in their patients’ conditions, and generate healthcare treatment documents.

Revolutionary Ultra-Reliable, Low-Latency 5G Applications Will Begin to Be Developed

There is a lot of buzz regarding 5G New Radio (NR) in the IIoT market. However, having been designed to co-exist with 4G LTE, most of 5G NR’s impact in this market is still evolutionary, not revolutionary. Companies are beginning to adopt 5G to wring better performance out of their existing IIoT applications, or to future-proof their connectivity strategies. But they are doing this while continuing to use LTE, as well as Low Power Wide Area (LPWA) 5G technologies, like LTE-M and NB-IoT, for now.

In 2021, however, I think we will begin to see companies developing revolutionary new IIoT application proofs of concept designed to take advantage of 5G NR’s Ultra-Reliable, Low-Latency Communications (URLLC) capabilities. These URLLC applications – including smart Automated Guided Vehicles (AGVs) for manufacturing, self-healing energy grids for utilities and remote surgery for healthcare – are simply not possible with existing wireless technologies.

Thanks to its ability to deliver ultra-high reliability and latencies as low as one millisecond, 5G NR enables companies to finally build URLLC applications – especially when 5G NR is used in conjunction with new edge computing technologies.

It will be a long time before any of these URLLC application proofs of concept are commercialized. But as far as 5G Wave 5+ is concerned, next year is when we will first begin to see this wave forming out at sea. And when it does eventually reach shore, it will have a revolutionary impact on our connected economy.

Originally posted here.

Read more…

When analyzing whether a machine learning model works well, we rely on accuracy numbers, F1 scores and confusion matrices - but they don't give any insight into why a machine learning model misclassifies data. Is it because the data looks very similar, because data is mislabeled, or because preprocessing parameters were chosen incorrectly? To answer these questions we have now added the feature explorer to all neural network blocks in Edge Impulse. The feature explorer shows your complete dataset in one 3D graph, and shows you whether each sample was classified correctly or incorrectly.


Showing exactly which data samples are misclassified in the feature explorer.

If you haven't used the feature explorer before: it's one of the most interesting options in Edge Impulse. The axes are the output of the signal processing step (we rely heavily on signal processing to extract interesting features beforehand, making for smaller and more reliable ML models), and they let you quickly validate whether your data separates nicely. In addition, the feature explorer is integrated in Live classification, where you can compare incoming test data directly with your training set.
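For readers who want to poke at the same idea locally, here is a rough approximation of a feature-explorer view built with matplotlib: three processed features per sample, colored by whether the prediction matched the label. The arrays are randomly generated stand-ins for your own exported features and predictions, and this is not how Edge Impulse implements its explorer.

```python
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D  # noqa: F401  (registers the 3D projection)

# Randomly generated stand-ins for exported features, labels, and predictions.
rng = np.random.default_rng(1)
features = rng.normal(size=(200, 3))
y_true = rng.integers(0, 3, size=200)
y_pred = np.where(rng.random(200) < 0.9, y_true, (y_true + 1) % 3)  # roughly 10% errors

correct = y_true == y_pred
fig = plt.figure()
ax = fig.add_subplot(projection="3d")
ax.scatter(*features[correct].T, c="tab:green", s=12, label="correct")
ax.scatter(*features[~correct].T, c="tab:red", s=20, label="misclassified")
ax.set_xlabel("feature 1")
ax.set_ylabel("feature 2")
ax.set_zlabel("feature 3")
ax.legend()
plt.show()
```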


Redesign of the neural network pages.

This work is part of a redesign of our neural network pages. These pages are now more compact, giving you full insight into both your neural network architecture and the training performance, an easy way to compare models with different optimization options (like comparing an int8 quantized model against an unoptimized model), and accurate on-device performance metrics for a wide variety of targets.

Next steps

Currently the feature explorer shows the performance of your training set, but over the next weeks we'll also integrate the feature explorer and the new confusion matrix into the Model testing page in Edge Impulse. This will give you direct insight into the performance of your test set in the same way, so keep an eye out for that!

Want to try the new feature explorer out? Just head to any neural network block in your Edge Impulse project and retrain. Don't have a project yet? Follow one of our tutorials on building embedded machine learning models on real sensor data; it takes 30 minutes and you can even use your phone as a sensor.

Article originally written by Jan Jongboom, the CTO and co-founder of Edge Impulse. He loves pretty pictures, colors, and insight in his ML models.

Read more…

When I think about the things that held the planet together in 2020, it was digital experiences delivered over wireless connectivity that made remote things local.

While heroes like doctors, nurses, first responders, teachers, and other essential personnel bore the brunt of the COVID-19 response, billions of people around the world found themselves cut off from society. In order to keep people safe, we were physically isolated from each other. Far beyond the six feet of social distancing, most of humanity weathered the storm from their homes.

And then little by little, old things we took for granted, combined with new things many had never heard of, pulled the world together. Let’s take a look at the technologies and trends that made the biggest impact in 2020 and where they’re headed in 2021:

The Internet

The global Internet infrastructure from which everything else is built is an undeniable hero of the pandemic. This highly-distributed network designed to withstand a nuclear attack performed admirably as usage by people, machines, critical infrastructure, hospitals, and businesses skyrocketed. Like the air we breathe, this primary facilitator of connected, digital experiences is indispensable to our modern society. Unfortunately, the Internet is also home to a growing cyberwar and security will be the biggest concern as we move into 2021 and beyond. It goes without saying that the Internet is one of the world’s most critical utilities along with water, electricity, and the farm-to-table supply chain of food.

Wireless Connectivity

People are mobile and they stay connected through their smartphones, tablets, in cars and airplanes, on laptops, and other devices. Just like the Internet, the cellular infrastructure has remained exceptionally resilient to enable communications and digital experiences delivered via native apps and the web. Indoor wireless connectivity continues to be dominated by WiFi at home and all those empty offices. Moving into 2021, the continued rollout of 5G around the world will give cellular endpoints dramatic increases in data capacity and WiFi-like speeds. Additionally, private 5G networks will challenge WiFi as a formidable indoor option, but WiFi 6E with increased capacity and speed won’t give up without a fight. All of these developments are good for consumers who need to stay connected from anywhere like never before.

Web Conferencing

With many people stuck at home in 2020, web conferencing technology took the place of traveling to other locations to meet people or receive education. This technology isn’t new and includes familiar players like GoToMeeting, Skype, WebEx, Google Hangouts/Meet, BlueJeans, FaceTime, and others. Before COVID, these platforms enjoyed success, but most people preferred to fly on airplanes to meet customers and attend conferences while students hopped on the bus to go to school. In 2020, “necessity is the mother of invention” took hold and the use of Zoom and Teams skyrocketed as airplanes sat on the ground while business offices and schools remained empty. These two platforms further increased their stickiness by increasing the number of visible people and adding features like breakout rooms to meet the demands of businesses, virtual conference organizers, and school teachers. Despite the rollout of the vaccine, COVID won’t be extinguished overnight and these platforms will remain strong through the first half of 2021 as organizations rethink where and when people work and learn. There are way too many players in this space, so look for some consolidation.

E-Commerce

“Stay at home” orders and closed businesses gave e-commerce platforms a dramatic boost in 2020 as they took the place of shopping at stores or going to malls. Amazon soared to even higher heights, Walmart upped its game, Etsy brought the artsy, and thousands of Shopify sites delivered the goods. Speaking of delivery, the empty city streets became home to fleets of FedEx, Amazon, UPS, and DHL trucks bringing packages to your front doorstep. Many retail employees traded in working at customer-facing stores for working in distribution centers, as long as they could outperform robots. Even though people are looking forward to hanging out at malls in 2021, the e-commerce, distribution center, delivery truck trinity is here to stay. This ball was already in motion and got a rocket boost from COVID. This market will stay hot in the first half of 2021 and then cool a bit in the second half.

Ghost Kitchens

The COVID pandemic really took a toll on restaurants in 2020, with many of them going out of business permanently. Those that survived had to pivot to digital and other ways of doing business. High-end steakhouses started making burgers on grills in the parking lot, while takeout pizzerias discovered they finally had the best business model. Having a drive-thru lane was definitely one of the keys to success in a world without waiters, busboys, and hosts. “Front of house” was shut down, but the “back of house” still had a pulse. Adding mobile web and native apps that allowed customers to easily order from operating “ghost kitchens” and pay with credit cards or Apple/Google/Samsung Pay enabled many restaurants to survive. A combination of curbside pickup and delivery from the likes of DoorDash, Uber Eats, Postmates, Instacart and Grubhub made this business model work. A surge in digital marketing also took place where many restaurants learned the importance of maintaining a relationship with their loyal customers via connected mobile devices. For the most part, 2021 has restaurateurs hoping for 100% in-person dining, but a new business model that looks a lot like catering + digital + physical delivery is something that has legs.

The Internet of Things

At its very essence, IoT is all about remotely knowing the state of a device or environmental system along with being able to remotely control some of those machines. COVID forced people to work, learn, and meet remotely and this same trend applied to the industrial world. The need to remotely operate industrial equipment or an entire “lights out” factory became an urgent imperative in order to keep workers safe. This is yet another case where the pandemic dramatically accelerated digital transformation. Connecting everything via APIs, modeling entities as digital twins, and having software bots bring everything to life with analytics has become an ROI game-changer for companies trying to survive in a free-falling economy. Despite massive employee layoffs and furloughs, jobs and tasks still have to be accomplished, and business leaders will look to IoT-fueled automation to keep their companies running and drive economic gains in 2021.

Streaming Entertainment

Closed movie theaters, football stadiums, bowling alleys, and other sources of entertainment left most people sitting at home watching TV in 2020. This turned into a dream come true for streaming entertainment companies like Netflix, Apple TV+, Disney+, HBO Max, Hulu, Amazon Prime Video, Youtube TV, and others. That said, Quibi and Facebook Watch didn’t make it. The idea of binge-watching shows during the weekend turned into binge-watching every season of every show almost every day. Delivering all these streams over the Internet via apps has made it easy to get hooked. Multiplayer video games fall in this category as well and represent an even larger market than the film industry. Gamers socially distanced as they played each other from their locked-down homes. The rise of cloud gaming combined with the rollout of low-latency 5G and Edge computing will give gamers true mobility in 2021. On the other hand, the video streaming market has too many players and looks ripe for consolidation in 2021 as people escape the living room once the vaccine is broadly deployed.

Healthcare

With doctors and nurses working around the clock as hospitals and clinics were stretched to the limit, it became increasingly difficult for non-COVID patients to receive the healthcare they needed. This unfortunate situation gave tele-medicine the shot in the arm (no pun intended) it needed. The combination of healthcare professionals delivering healthcare digitally over widespread connectivity helped those in need. This was especially important in rural areas that lacked the healthcare capacity of cities. Concurrently, the Internet of Things is making deeper inroads into delivering the health of a person to healthcare professionals via wearable technology. Connected healthcare has a bright future that will accelerate in 2021 as high-bandwidth 5G provides coverage to more of the population to facilitate virtual visits to the doctor from anywhere.

Working and Living

As companies and governments told their employees to work from home, it gave people time to rethink their living and working situation. Lots of people living in previously hip, urban, high-rise buildings found themselves residing in not-so-cool, hollowed-out ghost towns comprised of boarded-up windows and closed bars and cafés. Others began to question why they were living in areas with expensive real estate and high taxes when they no longer had to be close to the office. This led to a 2020 COVID exodus out of pricey apartments/condos downtown to cheaper homes in distant suburbs as well as the move from pricey areas like Silicon Valley to cheaper destinations like Texas. Since you were stuck in your home, having a larger house with a home office, fast broadband, and a back yard became the most important thing. Looking ahead to 2021, a hybrid model of work-from-home plus occasionally going into the office is here to stay as employees will no longer tolerate sitting in traffic two hours a day just to sit in a cubicle in a skyscraper. The digital transformation of how and where we work has truly accelerated.

Data and Advanced Analytics

Data has shown itself to be one of the world’s most important assets during the time of COVID. Petabytes of data have continuously streamed in from all over the world letting us know the number of cases, the growth or decline of infections, hospitalizations, contact-tracing, free ICU beds, temperature checks, deaths, and hotspots of infection. Some of this data is reported manually while much of it comes automatically from machines. Capturing, storing, organizing, modeling and analyzing this big data has elevated the importance of cloud and edge computing, global-scale databases, advanced analytics software, and machine learning. This is a trend that was already taking place in business and now has a giant spotlight on it due to its global importance. There’s no stopping the data + advanced analytics juggernaut in 2021 and beyond.

Conclusion

2020 was one of the worst years in human history and the loss of life was just heartbreaking. People, businesses, and our education system had to become resourceful to survive. This resourcefulness amplified the importance of delivering connected, digital experiences to make previously remote things into local ones. Cheers to 2021 and the hope for a brighter day for all of humanity.

Read more…

By Michele Pelino

The COVID-19 pandemic drove businesses and employees to become more reliant on technology for both professional and personal purposes. In 2021, demand for new internet-of-things (IoT) applications, technologies, and solutions will be driven by connected healthcare, smart offices, remote asset monitoring, and location services, all powered by a growing diversity of networking technologies.

In 2021, we predict that:

  • Network connectivity chaos will reign. Technology leaders will be inundated by an array of wireless connectivity options. Forrester expects that implementation of 5G and Wi-Fi technologies will decline from 2020 levels as organizations sort through market chaos. For long-distance connectivity, low-earth-orbit satellites now provide a complementary option, with more than 400 Starlink satellites delivering satellite connectivity today. We expect interest in satellite and other lower-power networking technologies to increase by 20% in the coming year.
  • Connected device makers will double down on healthcare use cases. Many people stayed at home in 2020, leaving chronic conditions unmanaged, cancers undetected, and preventable conditions unnoticed. In 2021, proactive engagement using wearables and sensors to detect patients’ health at home will surge. Consumer interest in digital health devices will accelerate as individuals appreciate the convenience of at-home monitoring, insight into their health, and the reduced cost of connected health devices.
  • Smart office initiatives will drive employee-experience transformation. In 2021, some firms will ditch expensive corporate real estate driven by the COVID-19 crisis. However, we expect at least 80% of firms to develop comprehensive on-premises return-to-work office strategies that include IoT applications to enhance employee safety and improve resource efficiency such as smart lighting, energy and environmental monitoring, or sensor-enabled space utilization and activity monitoring in high traffic areas.*
  • The near ubiquity of connected machines will finally disrupt traditional business. Manufacturers, distributors, utilities, and pharma firms switched to remote operations in 2020 and began connecting previously disconnected assets. This connected-asset approach increased reliance on remote experts to address repairs without protracted downtime and expensive travel. In 2021, field service firms and industrial OEMs will rush to keep up with customer demand for more connected assets and machines.
  • Consumer and employee location data will be core to convenience. The COVID-19 pandemic elevated the importance location plays in delivering convenient customer and employee experiences. In 2021, brands must utilize location to generate convenience for consumers or employees with virtual queues, curbside pickup, and checking in for reservations. They will depend on technology partners to help use location data, as well as a third-party source of location trusted and controlled by consumers.

* Proactive firms, including Atea, have extended IoT investments to enhance employee experience and productivity by enabling employees to access a mobile app that uses data collected from light-fixture sensors to locate open desks and conference rooms. Employees can modify light and temperature settings according to personal preferences, and the system adjusts light color and intensity to better align with employees’ circadian rhythms to aid in concentration and energy levels. See the Forrester report “Rethink Your Smart Office Strategy.”

Originally posted HERE.

Read more…

New solar performance monitoring system has potential to become IoT of photovoltaics. Credit: Pexels

A new system for measuring solar performance over the long term in scalable photovoltaic systems, developed by Arizona State University researchers, represents a breakthrough in the cost and longevity of interconnected power delivery.

When solar cells are developed, they are "current-voltage" tested in the lab before they are deployed in panels and systems outdoors. Once installed outdoors, they aren't usually tested again unless the system undergoes major issues. The new test system, Suns-Voc, measures the system's voltage as a function of light intensity in the outdoor setting, enabling real-time measurements of performance and detailed diagnostics.

"Inside the lab, however, everything is controlled," explained Alexander Killam, an ASU electrical engineering doctoral student and graduate research associate. "Our research has developed a way to use Suns-Voc to measure solar panels' degradation once they are outdoors in the real world and affected by weather, temperature and humidity," he said.

Current photovoltaic modules are rated to last 25 years at 80 percent efficiency. The goal is to expand that time frame to 50 years or longer.

"This system of monitoring will give photovoltaic manufacturers and big utility installations the kind of data necessary to adjust designs to increase efficiency and lifespans," said Killam, the lead author of "Monitoring of Photovoltaic System Performance Using Outdoor Suns-Voc," for Joule.

For example, most techniques for measuring outdoor solar efficiency require the system to be disconnected from the power delivery mechanism. The new approach can take measurements automatically each day during sunrise and sunset without interfering with power delivery.

"When we were developing photovoltaics 20 years ago, panels were expensive," said Stuart Bowden, an associate research professor who heads the silicon section of ASU's Solar Power Laboratory. "Now they are cheap enough that we don't have to worry about the cost of the panels. We are more interested in how they maintain their performance in different environments.

"A banker in Miami underwriting a photovoltaic system wants to know in dollars and cents how the system will perform in Miami and not in Phoenix, Arizona."

"The weather effects on photovoltaic systems in Arizona will be vastly different than those in Wisconsin or Louisiana," said Joseph Karas, co-author and materials science doctoral graduate now at the National Renewable Energy Lab. "The ability to collect data from a variety of climates and locations will support the development of universally effective solar cells and systems."

The research team was able to test its approach at ASU's Research Park facility, where the Solar Lab is primarily solar powered. For its next step, the lab is negotiating with a power plant in California that is looking to add a megawatt of silicon photovoltaics to its power profile.

The system, which can monitor reliability and lifespan remotely for larger, interconnected systems, will be a major breakthrough for the power industry.

"Most residential solar rooftop systems aren't owned by the homeowner, they are owned by a utility company or broker with a vested interest in monitoring photovoltaic efficiency," said Andre' Augusto, head of Silicon Heterojunction Research at ASU's Solar Power Laboratory and a co-author of the paper.

"Likewise, as developers of malls or even planned residential communities begin to incorporate solar power into their construction projects, the interest in monitoring at scale will increase, " Augusto said.

According to Bowden, it's all about the data, especially when it can be monitored automatically and remotely—data for the bankers, data for developers, and data for the utility providers.

If Bill Gates' smart city, planned about 30 miles from Phoenix in Buckeye, Ariz., uses the team's measurement technology, "It could become the IoT of Photovoltaics," said Bowden.

Originally posted HERE.


Figure 1: Solution architecture with AWS IoT Core

Critical and high-value assets are always on the move, and this holds across practically every industry vertical that relies on supply chain and logistics operations. Naturally, enterprises seek to track their assets in transit with the shipment carrier in ways best suited to their requirements. The end goal is often greater visibility and control of assets while in transit, along with opportunities to optimize business operations based on insight-driven decisions.

For assets in transit, proactive shipment monitoring improves the reliability of the shipment's integrity through real-time updates about the shipment's location, transit status, and conditions such as temperature and humidity (for perishable shipments). All this information enables quick issue identification and remediation by the respective stakeholders, which helps minimize losses and reduce insurance claims. The result is further cost optimization for the enterprise while delivering a delightful purchase experience to its customers.

A solution to address such requirements is an IoT (Internet of Things) solution that combines tracker devices (hardware), cloud apps (software platform), enterprise systems integrations (with SCM, ERP, and similar systems), and professional services and support for field installation and continuous data insights. For most enterprises, this kind of IoT implementation is complex and non-core; building and deploying such a solution, especially one that is secure and scalable, demands a significant investment of capital, time, and expertise.

In this post, we'll discuss an IoT solution built using AWS IoT Core that can be delivered affordably and be ready to use within days or weeks. The solution leverages GPS-enabled tracker hardware with condition-monitoring sensors for temperature, humidity, shock impact, and ambient light. The device can be used to track entire containers, pallets holding multiple cartons, or even individual item boxes, depending on the requirements. The shock-impact sensor indicates asset mishandling based on threshold limits, and the light sensor can indicate potentially unauthorized use or asset theft. The device requires a cellular connectivity service to communicate sensor data to the cloud per pre-configured rules.

Using AWS IoT device SDKs and API integrations, the tracker devices are first connected and authenticated. The data they generate is published to a cloud app powered by AWS IoT Core in real time or at preset time intervals. The data is sent as JSON-format message payloads over the MQTT protocol supported by AWS IoT Core and is presented on the front-end dashboard UI in a rich, interactive manner on a map interface, with sensor-specific information available within a couple of taps or clicks.
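As a device-side illustration, the sketch below publishes one JSON sensor payload to AWS IoT Core over MQTT using the AWS IoT Device SDK for Python; the endpoint, certificate paths, client ID, topic name, and payload values are placeholders assumed for this example, not details of the actual solution.

```python
# Minimal device-side sketch: publish a JSON sensor payload to AWS IoT Core over MQTT.
# Endpoint, credential file paths, client ID, and topic are placeholders.
import json
import time

from AWSIoTPythonSDK.MQTTLib import AWSIoTMQTTClient

client = AWSIoTMQTTClient("tracker-001")
client.configureEndpoint("your-endpoint-ats.iot.us-east-1.amazonaws.com", 8883)
client.configureCredentials("root-ca.pem", "tracker-001.private.key", "tracker-001.cert.pem")
client.connect()

payload = {
    "deviceId": "tracker-001",
    "timestamp": int(time.time()),
    "location": {"lat": 40.7128, "lon": -74.0060},
    "temperatureC": 4.2,
    "humidityPct": 61.0,
    "shockG": 0.3,
    "ambientLightLux": 2,
}

# QoS 1 asks the broker to acknowledge delivery of the telemetry message.
client.publish("shipments/tracker-001/telemetry", json.dumps(payload), 1)
client.disconnect()
```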

These sensor data messages are also forwarded to back-end systems such as AWS IoT Analytics, where the data is typically saved in a time-series data store for later analysis and insights. API integrations can also be built so the cloud app works with enterprise applications such as transport management systems and warehouse management systems to realize autonomous supply chain operations. Business rules define this movement of data and the operations-specific logic; they are handled via the AWS IoT Rules Engine, which can also transform device data before forwarding it to a different application.
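For example, a rule that forwards out-of-range temperature readings to an AWS IoT Analytics channel might be created with boto3 roughly as follows; the rule name, topic filter, threshold, channel name, and IAM role ARN are assumptions for illustration only.

```python
# Sketch: create an AWS IoT rule that forwards high-temperature telemetry
# to an AWS IoT Analytics channel. All names and the role ARN are placeholders.
import boto3

iot = boto3.client("iot")

iot.create_topic_rule(
    ruleName="ForwardHotShipments",
    topicRulePayload={
        # Select readings above an assumed 8 degree C threshold from any tracker's topic.
        "sql": "SELECT * FROM 'shipments/+/telemetry' WHERE temperatureC > 8",
        "awsIotSqlVersion": "2016-03-23",
        "actions": [
            {
                "iotAnalytics": {
                    "channelName": "shipment_telemetry_channel",
                    "roleArn": "arn:aws:iam::123456789012:role/iot-analytics-forward",
                }
            }
        ],
    },
)
```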

However, not every data point a sensor picks up needs to be sent to the cloud app unless such a mandate exists, often due to regulatory compliance requirements in verticals like healthcare and pharmaceuticals. The dashboard UI on the cloud provides a simple interface to set the acceptable minimum and maximum sensor readings. Any breach of this range immediately triggers an alert to the team responsible for monitoring the shipment, which can then contact the shipment carrier to take corrective action. Such ranges can be configured separately for each shipment, in seconds, according to its monitoring requirements.
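To give a feel for the range-check logic, here is a minimal, hypothetical sketch of how per-shipment limits might be evaluated against an incoming reading; the configuration shape and the notify stub are assumptions, not the cloud app's actual code.

```python
# Hypothetical range-check logic for per-shipment sensor limits.
# The configuration shape and the notify() stub are illustrative assumptions.

SHIPMENT_LIMITS = {
    # shipment ID -> {sensor name: (min, max)}
    "SHP-1001": {"temperatureC": (2.0, 8.0), "humidityPct": (30.0, 70.0)},
}


def notify(shipment_id, sensor, value, low, high):
    # In a real deployment this could publish to SNS, email, or the dashboard.
    print(f"ALERT {shipment_id}: {sensor}={value} outside [{low}, {high}]")


def check_reading(shipment_id, reading):
    """Compare one telemetry reading against the shipment's configured ranges."""
    limits = SHIPMENT_LIMITS.get(shipment_id, {})
    for sensor, (low, high) in limits.items():
        value = reading.get(sensor)
        if value is not None and not (low <= value <= high):
            notify(shipment_id, sensor, value, low, high)


check_reading("SHP-1001", {"temperatureC": 9.4, "humidityPct": 55.0})
```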

The secure bidirectional messaging between the tracker device and the cloud app is enabled by AWS IoT Core's Device Gateway, which scales automatically to process millions of messages in either direction while ensuring the low latency required by mission-critical applications.
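To sketch the downlink direction of that bidirectional channel, a tracker could subscribe to a command topic and react to messages pushed by the cloud app; the topic name and callback below are illustrative assumptions.

```python
# Sketch of the downlink direction through the Device Gateway: the tracker
# subscribes to a command topic and handles messages pushed by the cloud app.
# Endpoint, credential file paths, client ID, and topic are placeholders.
from AWSIoTPythonSDK.MQTTLib import AWSIoTMQTTClient


def on_command(client, userdata, message):
    # For example, the cloud app could push updated alert thresholds to the device.
    print("Command on", message.topic, ":", message.payload.decode())


subscriber = AWSIoTMQTTClient("tracker-001")
subscriber.configureEndpoint("your-endpoint-ats.iot.us-east-1.amazonaws.com", 8883)
subscriber.configureCredentials("root-ca.pem", "tracker-001.private.key", "tracker-001.cert.pem")
subscriber.connect()
subscriber.subscribe("shipments/tracker-001/commands", 1, on_command)
```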

This makes the purpose-built shipment monitoring solution completely configurable, and hence scalable, while still being quickly deployable, without the capital expense and significant resource time required to custom-build such a solution from scratch.

Summary

The intelligent shipment monitoring solution enables enterprises to have greater control over the movement of their assets while having enough data and insights over time to optimize business operations as required.

With AWS IoT Core and AWS IoT Analytics, such a data-driven approach to supply chain operations delivers transformational benefits such as reduced losses, greater cost control, and improved customer satisfaction, which can add up to a sustainable competitive advantage in the marketplace.

Originally posted HERE.
