Connectivity

Advancements in Software-Defined Networking (SDN), Network Function Virtualization (NFV), and IoT are transforming the networking landscape and enabling new possibilities for connectivity, scalability, and management. Let's walk through some of the ways:

SDN and Network Virtualization: SDN separates the network's control plane from the underlying data (forwarding) plane, enabling centralized control and programmability. NFV, on the other hand, virtualizes network functions, allowing them to run on commodity hardware. The advancements in SDN and NFV have led to increased flexibility, scalability, and agility in network management. Network administrators can dynamically allocate resources, configure policies, and optimize traffic flow based on application requirements.
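To make that programmability concrete, the sketch below pushes a simple flow rule to an SDN controller over a northbound REST interface. It is a minimal illustration in Python: the controller URL, endpoint path, and rule schema are hypothetical placeholders rather than the API of any particular controller.

    import json
    import requests  # assumes the requests package is installed

    CONTROLLER = "https://sdn-controller.example.local:8443"  # hypothetical controller address

    # A hypothetical flow rule: steer a UDP video stream out of a chosen port.
    flow_rule = {
        "switch_id": "of:0000000000000001",
        "match": {"ip_proto": "udp", "udp_dst": 5004},
        "actions": [{"output_port": 3}],
        "priority": 200,
    }

    # Push the rule through the controller's (assumed) northbound API.
    resp = requests.post(
        f"{CONTROLLER}/api/v1/flows",
        data=json.dumps(flow_rule),
        headers={"Content-Type": "application/json"},
        timeout=10,
    )
    resp.raise_for_status()
    print("Flow rule accepted:", resp.status_code)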

Network Slicing: Network slicing is an emerging concept that leverages SDN and NFV to create virtual networks with customized characteristics and capabilities. It enables the simultaneous support of multiple logical networks on a shared physical infrastructure, each tailored to specific use cases or industries. Network slicing is particularly relevant for IoT deployments where diverse applications with different connectivity, latency, and security requirements coexist.

Edge Computing and Fog Computing: As IoT devices generate vast amounts of data, processing data at the network edge becomes crucial for real-time analytics and low-latency applications. SDN and NFV enable the deployment of computing resources closer to the edge, known as edge computing or fog computing. This distributed architecture improves response times, reduces bandwidth requirements, and enhances overall system performance.

Intent-Based Networking: Intent-Based Networking (IBN) is an approach that leverages SDN and automation to simplify network management. IBN allows administrators to define high-level business policies and intent, and the network infrastructure automatically translates and enforces those policies. This abstraction layer enables efficient network operations, reduces manual configuration efforts, and improves network security and compliance.

Network Security and Threat Detection: IoT devices increase the attack surface of networks, making security a critical concern. SDN and NFV advancements have facilitated the development of innovative security solutions. Network traffic can be monitored and analyzed in real-time, leveraging machine learning and AI algorithms to detect anomalies, identify threats, and take proactive security measures.
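As a hedged illustration of this kind of traffic analysis, the Python sketch below trains scikit-learn's IsolationForest on per-flow features and flags outliers. The feature set, contamination rate, and sample data are assumptions chosen for illustration, not a production detection pipeline.

    import numpy as np
    from sklearn.ensemble import IsolationForest  # assumes scikit-learn is installed

    # Toy per-flow features: [packets per second, mean packet size, distinct destination ports]
    rng = np.random.default_rng(0)
    normal_traffic = rng.normal(loc=[50, 500, 3], scale=[10, 80, 1], size=(1000, 3))

    # Fit on traffic assumed to be mostly benign.
    detector = IsolationForest(contamination=0.01, random_state=0).fit(normal_traffic)

    # Score new flows; -1 marks a suspected anomaly (e.g. a port scan or flood).
    new_flows = np.array([[52, 480, 3], [900, 60, 250]])
    print(detector.predict(new_flows))  # expected output like [ 1 -1 ]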

Network Orchestration and Service Chaining: SDN and NFV technologies enable dynamic network orchestration and service chaining. Orchestration platforms automate the deployment, configuration, and scaling of network functions, allowing for rapid provisioning and service delivery. Service chaining facilitates the seamless chaining of multiple virtual network functions to create end-to-end service paths based on specific application requirements.

Telemetry and Analytics: SDN and NFV enable the collection and analysis of network telemetry data, providing insights into network performance, traffic patterns, and resource utilization. Advanced analytics techniques, such as machine learning, help optimize network operations, predict failures, and enhance quality of service for IoT applications.

Advancements in SDN, NFV, and IoT are improving scalability, agility, security, and management capabilities. They are driving the evolution of connectivity and enabling innovative applications across industries such as smart cities, industrial automation, healthcare, transportation, and more.

Read more…

5G URLLC Characteristics

5G URLLC (Ultra-Reliable Low Latency Communications) is a communication service category within the 5th generation of wireless technology. URLLC is designed to provide extremely reliable and low-latency communication for critical applications and services that require real-time responsiveness and high availability.

Here are some key characteristics and features of 5G URLLC:

Ultra-Reliable: URLLC aims to deliver highly dependable communication with extremely low failure rates. It is particularly suited for mission-critical applications where reliability is paramount, such as industrial automation, autonomous vehicles, remote surgery, and public safety.

Low Latency: URLLC focuses on achieving ultra-low communication latency, which refers to the time it takes for data to travel between the source and destination. By minimizing latency, URLLC enables real-time and near real-time applications that demand immediate responsiveness, such as real-time control systems and virtual reality.

Network Slicing: URLLC supports network slicing, which involves creating separate virtual networks within the 5G infrastructure. Network slicing allows the allocation of dedicated resources and tailored network configurations for specific URLLC use cases, ensuring guaranteed performance and isolation from other types of traffic.

Quality of Service (QoS): URLLC emphasizes stringent quality-of-service requirements, ensuring that critical applications receive the necessary network resources and priority to maintain reliability and low latency. QoS mechanisms prioritize URLLC traffic over other types of traffic to meet the stringent performance demands of critical applications.

Edge Computing: URLLC often leverages edge computing capabilities, where computational resources and data processing are performed closer to the edge of the network, reducing communication latency. By placing computing resources closer to the devices and applications, URLLC can achieve even lower latency and improved real-time responsiveness.

5G URLLC plays a vital role in enabling mission-critical and latency-sensitive applications that require high reliability and real-time communication in the era of 5G networks.

Read more…

Voice-Enabled IoT Applications

The Internet of Things (IoT) has transformed the way we interact with technology. With the rise of voice assistants such as Alexa, Siri, and Google Assistant, voice-enabled IoT applications have become increasingly popular in recent years. Voice-enabled IoT applications have the potential to revolutionize the way we interact with our homes, workplaces, and even our cars. In this article, we will explore the benefits and challenges of voice-enabled IoT applications and their potential for the future.

Voice-enabled IoT applications allow users to control various smart devices using their voice. These devices include smart speakers, smart TVs, smart thermostats, and smart lights, to name a few. By using voice commands, users can turn on the lights, adjust the temperature, play music, and even order food without having to touch any buttons or screens. This hands-free approach has made voice-enabled IoT applications popular among users of all ages, from children to seniors.
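To show roughly how a spoken command becomes a device action once speech has been transcribed to text, here is a minimal keyword-based intent parser in Python. Real voice assistants use far more sophisticated natural language understanding; the device names and actions below are invented for illustration.

    # Map already-transcribed voice commands to smart-device actions (toy example).
    DEVICES = {"lights": "living-room-light", "thermostat": "hallway-thermostat"}

    def handle_command(text: str) -> str:
        text = text.lower()
        if "light" in text:
            action = "on" if "on" in text.split() else "off"
            return f"{DEVICES['lights']} -> turn {action}"
        if "temperature" in text or "thermostat" in text:
            digits = [w for w in text.split() if w.isdigit()]
            if digits:
                return f"{DEVICES['thermostat']} -> set to {digits[0]} degrees"
        return "Sorry, I did not understand that."

    print(handle_command("Turn on the lights"))               # living-room-light -> turn on
    print(handle_command("Set the temperature to 21 degrees"))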

One of the significant benefits of voice-enabled IoT applications is their convenience. With voice commands, users can control their smart devices while they are doing other tasks, such as cooking, cleaning, or exercising. This allows for a more seamless and efficient experience, without having to interrupt the task at hand. Additionally, voice-enabled IoT applications can be customized to suit individual preferences, allowing for a more personalized experience.

Another significant benefit of voice-enabled IoT applications is their potential for accessibility. For people with disabilities, voice-enabled IoT applications can provide an easier and more natural way to interact with their devices. By using their voice, people with limited mobility or vision can control their devices without having to rely on buttons or screens. This can improve their quality of life and independence.

However, there are also challenges associated with voice-enabled IoT applications. One of the significant challenges is privacy and security. As voice-enabled IoT applications are always listening for voice commands, they can potentially record and store sensitive information. Therefore, it is crucial for developers to implement strong security measures to protect users' privacy and prevent unauthorized access.

Another challenge is the potential for misinterpretation of voice commands. Accidental triggers or misinterpretation of voice commands can result in unintended actions, which can be frustrating for users. Additionally, voice-enabled IoT applications can struggle to understand certain accents, dialects, or languages, which can limit their accessibility to non-native speakers.

Despite these challenges, the potential for voice-enabled IoT applications is vast. In addition to smart homes, voice-enabled IoT applications can be used in a wide range of industries, including healthcare, retail, and transportation. In healthcare, voice-enabled IoT applications can be used to monitor patients' health conditions and provide real-time feedback. In retail, voice-enabled IoT applications can provide personalized shopping experiences and assist with inventory management. In transportation, voice-enabled IoT applications can be used to provide real-time traffic updates and navigation.

In conclusion, voice-enabled IoT applications have become increasingly popular in recent years, providing a more convenient and accessible way for users to interact with their devices. While there are challenges associated with voice-enabled IoT applications, their potential for revolutionizing various industries is vast. As technology continues to evolve, the future of voice-enabled IoT applications is sure to be exciting and full of potential.

Read more…


Wearable devices, such as smartwatches, fitness trackers, and health monitors, have become increasingly popular in recent years. These devices are designed to be worn on the body and can measure various physiological parameters, such as heart rate, blood pressure, and body temperature. Wearable devices can also track physical activity, sleep patterns, and even detect falls and accidents.

Body sensor networks (BSNs) take the concept of wearables to the next level. BSNs consist of a network of wearable sensors that can communicate with each other and with other devices. BSNs can provide real-time monitoring of multiple physiological parameters, making them useful for a range of applications, including medical monitoring, sports performance monitoring, and military applications.

Smart portable devices, such as smartphones and tablets, are also an essential component of the IoT ecosystem. These devices are not worn on the body, but they are portable and connected to the internet, allowing for seamless communication and data transfer. Smart portable devices can be used for a wide range of applications, such as mobile health, mobile banking, and mobile commerce.

The development of wearables, BSNs, and smart portable devices requires a unique set of skills and expertise, including embedded engineering. Embedded engineers are responsible for designing and implementing the hardware and software components that make these devices possible. Embedded engineers must have a deep understanding of electronics, sensors, microcontrollers, and wireless communication protocols.

One of the significant challenges of developing wearables, BSNs, and smart portable devices is power consumption. These devices are designed to be small, lightweight, and portable, which means that they have limited battery capacity. Therefore, embedded engineers must design devices that can operate efficiently with minimal power consumption. This requires careful consideration of power management strategies, such as sleep modes and low-power communication protocols.
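A common pattern for battery-powered devices is a short wake-sample-transmit cycle followed by deep sleep. The sketch below shows that duty cycle in MicroPython-flavoured Python; the machine.deepsleep() call and the sensor and radio helpers are assumptions about the target board rather than any specific product's firmware.

    import machine  # MicroPython module; assumed available on the target board

    SLEEP_MS = 60_000  # wake once per minute

    def read_sensor():
        # Placeholder: read a value from an attached sensor (e.g. over ADC or I2C).
        return 23.5

    def transmit(value):
        # Placeholder: send the reading in a short low-power radio or Wi-Fi burst.
        print("sending", value)

    # Wake, do the minimum amount of work, then deep-sleep to conserve the battery.
    transmit(read_sensor())
    machine.deepsleep(SLEEP_MS)  # the device powers down and resets when it wakes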

Another challenge of developing wearables, BSNs, and smart portable devices is data management. These devices generate large volumes of data that need to be collected, processed, and stored. The data generated by these devices can be highly sensitive and may need to be protected from unauthorized access. Therefore, embedded engineers must design devices that can perform efficient data processing and storage while providing robust security features.

The communication protocols used by wearables, BSNs, and smart portable devices also present a significant challenge for embedded engineers. These devices use wireless communication protocols, such as Bluetooth and Wi-Fi, to communicate with other devices and the internet. However, the communication range of these protocols is limited, which can make it challenging to establish and maintain reliable connections. Embedded engineers must design devices that can operate efficiently in environments with limited communication range and intermittent connectivity.
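One common way to cope with intermittent connectivity is to buffer readings locally and retry transmission with exponential backoff. The plain-Python sketch below illustrates the pattern; send_over_radio() is a stand-in for whatever Bluetooth or Wi-Fi transport the device actually uses, and the retry limits are arbitrary.

    import random
    import time
    from collections import deque

    buffer = deque(maxlen=500)  # oldest readings are dropped if storage runs out

    def send_over_radio(reading) -> bool:
        # Placeholder transport: randomly fails to mimic a flaky wireless link.
        return random.random() > 0.3

    def flush_buffer(max_attempts=5):
        delay = 1.0
        while buffer:
            reading = buffer[0]
            for _attempt in range(max_attempts):
                if send_over_radio(reading):
                    buffer.popleft()             # confirmed sent; remove from the local queue
                    delay = 1.0
                    break
                time.sleep(delay)                # back off before retrying
                delay = min(delay * 2, 30)       # exponential backoff, capped at 30 s
            else:
                return  # give up for now; keep the data buffered for the next pass

    buffer.extend({"seq": i, "temp_c": 22 + i * 0.1} for i in range(3))
    flush_buffer()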

Finally, the user interface and user experience of wearables, BSNs, and smart portable devices are critical for their success. These devices must be easy to use and intuitive, with a user interface that is designed for small screens and limited input methods. Embedded engineers must work closely with user experience designers to ensure that the devices are user-friendly and provide a seamless user experience.

Read more…

Wireless Sensor Networks and IoT

We all know how IoT has revolutionized the way we interact with the world. IoT devices are now ubiquitous, from smart homes to industrial applications. A significant portion of these devices are Wireless Sensor Networks (WSNs), which are a key component of IoT systems. However, designing and implementing WSNs presents several challenges for embedded engineers. In this article, we discuss some of the significant challenges that embedded engineers face when working with WSNs.

WSNs are a network of small, low-cost, low-power, and wirelessly connected sensor nodes that can sense, process, and transmit data. These networks can be used in a wide range of applications such as environmental monitoring, healthcare, industrial automation, and smart cities. WSNs are typically composed of a large number of nodes, which communicate with each other to gather and exchange data. The nodes are equipped with sensors, microprocessors, transceivers, and power sources. The nodes can also be stationary or mobile, depending on the application.

One of the significant challenges of designing WSNs is the limited resources of the nodes. WSNs are designed to be low-cost, low-power, and small, which means that the nodes have limited processing power, memory, and energy. This constraint limits the functionality and performance of the nodes. Embedded engineers must design WSNs that can operate efficiently with limited resources. The nodes should be able to perform their tasks while consuming minimal power to maximize their lifetime.

Another challenge of WSNs is the limited communication range. The nodes communicate with each other using wireless radio signals. However, the range of the radio signals is limited, especially in indoor environments where the signals are attenuated by walls and other obstacles. The communication range also depends on the transmission power of the nodes, which is limited to conserve energy. Therefore, embedded engineers must design WSNs that can operate reliably in environments with limited communication range.

WSNs also present a significant challenge for embedded engineers in terms of data management. WSNs generate large volumes of data that need to be collected, processed, and stored. However, the nodes have limited storage capacity, and transferring data to a centralized location may not be practical due to the limited communication range. Therefore, embedded engineers must design WSNs that can perform distributed data processing and storage. The nodes should be able to process and store data locally and transmit only the relevant information to a centralized location.
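The sketch below shows one simple form of such in-node processing: a node aggregates a small window of samples locally and reports to the sink only when the averaged value drifts beyond a threshold. The window size, threshold, and node name are illustrative assumptions.

    from statistics import mean

    REPORT_THRESHOLD = 0.5   # change (in degrees) considered worth reporting (assumed)
    WINDOW = 10              # samples aggregated locally before deciding

    class SensorNode:
        def __init__(self, node_id):
            self.node_id = node_id
            self.samples = []
            self.last_reported = None

        def on_sample(self, value):
            self.samples.append(value)
            if len(self.samples) < WINDOW:
                return None  # keep aggregating locally; no radio traffic yet
            avg = mean(self.samples)
            self.samples.clear()
            if self.last_reported is None or abs(avg - self.last_reported) >= REPORT_THRESHOLD:
                self.last_reported = avg
                return {"node": self.node_id, "avg_temp": round(avg, 2)}  # send to sink
            return None  # change too small; stay quiet to save energy

    node = SensorNode("n07")
    for t in [20.1, 20.2, 20.1, 20.0, 20.2, 20.1, 20.3, 20.2, 20.1, 20.2]:
        message = node.on_sample(t)
        if message:
            print("transmit:", message)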

Security is another significant challenge for WSNs. The nodes in WSNs are typically deployed in open and unprotected environments, making them vulnerable to physical and cyber-attacks. The nodes may also contain sensitive data, making them an attractive target for attackers. Embedded engineers must design WSNs with robust security features that can protect the nodes and the data they contain from unauthorized access.

The deployment and maintenance of WSNs present challenges for embedded engineers. WSNs are often deployed in harsh and remote environments, making it difficult to access and maintain the nodes. The nodes may also need to be replaced periodically due to the limited lifetime of the power sources. Therefore, embedded engineers must design WSNs that are easy to deploy, maintain, and replace. The nodes should be designed for easy installation and removal, and the network should be self-healing to recover from node failures automatically.

Final thought: WSNs present significant challenges for embedded engineers, including limited resources, communication range, data management, security, and deployment and maintenance. Addressing these challenges requires innovative design approaches that maximize the performance and efficiency of WSNs while minimizing their cost and complexity. Embedded engineers must design WSNs that can operate efficiently with limited resources, perform distributed data processing and storage, provide robust security features, and be easy to deploy and maintain.

Read more…

Neural networks have recently become a hot topic of discussion. But one question still needs answering: how will they affect our world today and tomorrow?

The global neural network market is expected to grow at a compound annual growth rate (CAGR) of 26.7% from 2021 to 2030, which means new areas of application may appear soon. The Internet of Things (IoT) is today's most fascinating and sought-after technological solution for business. Around 61% of companies utilize IoT platforms, and we can anticipate the integration of neural networks into enterprise IoT solutions. This anticipation raises many questions: what does such a combination deliver, and how should you prepare for it? Can we optimize the IoT ecosystem using neural networks, and who should pursue such solutions?

What is a neural network, and how does it benefit enterprise IoT?

 

An artificial neural network (ANN) is a network of artificial neurons that strives to simulate the analytical mechanisms of the human brain. This type of artificial intelligence comprises algorithms that can "learn" from their own experience and improve themselves, which is very different from classical algorithms programmed to solve only specific tasks. Thus, over time, the neural network remains relevant and keeps improving.
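To make the idea of learning from experience concrete, here is a deliberately tiny example: a single artificial neuron trained with NumPy to fit a noisy linear relationship. It is only a toy sketch of the principle, not a model you would deploy in an enterprise IoT system.

    import numpy as np

    rng = np.random.default_rng(1)
    # Synthetic "experience": inputs x and noisy targets following y = 3x + 2.
    x = rng.uniform(0, 1, size=(200, 1))
    y = 3 * x + 2 + rng.normal(0, 0.05, size=(200, 1))

    w, b, lr = 0.0, 0.0, 0.1
    for _ in range(2000):               # each pass nudges the parameters to reduce error
        pred = w * x + b
        err = pred - y
        w -= lr * (err * x).mean()      # gradient step for the weight
        b -= lr * err.mean()            # gradient step for the bias

    print(f"learned w={w:.2f}, b={b:.2f}")  # approaches w=3, b=2 as it keeps learning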

With proper implementation, the enterprise Internet of Things (EIoT) and ANNs can offer a business what it values most: precise analytics and forecasts. The two are not really comparable: enterprise IoT is a system that needs software for data analysis, whereas an ANN is a component that needs a large amount of data to be operational. Together they naturally handle the analytical tasks, so high-level business goals such as reducing costs, automating processes, and finding new revenue sources are achieved more effectively.

In the Internet of Things ecosystem, neural networks help in two areas above all:

  • Data acquisition via ANN-based machine vision
  • Advanced data analysis

While applying ANNs to big data analytics requires significant investment, neural network image processing can actually decrease the cost of an IoT solution. Either way, neural networks improve enterprise IoT solutions, enhance their value, and speed up global adoption.

Which solutions within enterprise IoT can be enhanced using neural networks?

 

IoT-based visual control

 

The IoT ecosystem begins with data collection, and data quality impacts the accuracy of the ultimate prediction. If you implement visual control in your production processes, neural networks can boost product quality by superseding outdated algorithms, and they will also optimize the EIoT solution itself. Conventional machine vision systems are expensive because they require the highest-resolution cameras to catch minor defects in a product, and they come with complex, specialized software that cannot respond to immediate changes.

Neural networks within machine vision systems can:

  • Diminish camera requirements
  • Self-learn on your data
  • Automate high-speed operations

Industrial cameras use large-format global-shutter sensors with high sensitivity and resolution to produce the highest-quality images. Nevertheless, a well-trained ANN learns to identify images over time, which reduces the technical requirements for the camera and ultimately cuts the final cost of the enterprise IoT implementation. You cannot compromise image quality when detecting small components such as parts on circuit boards, but it is manageable for printing production, completeness checking, or food packaging.

Neural networks are trained on massive amounts of data to identify objects in images. This enables you to customize the EIoT solution and train the ANN on your own images so that it operates specifically with your product.

For example, convolutional neural networks are actively used in the healthcare industry to analyze X-rays and CT scans, and such custom systems are more precise than conventional ones. The ability to process information at high speed permits the automation of production processes: when a problem or defect is caught, the neural network promptly reports it to the operator or launches an intelligent reaction, such as automated sorting. This allows real-time detection and rejection of defective production.
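A minimal Keras sketch of such a convolutional classifier is shown below: it takes small grayscale images of a product and predicts defective versus not defective. The image size, layer sizes, and the commented-out train_images/train_labels arrays are placeholders; a real inspection model would be tuned to the camera and product at hand.

    import tensorflow as tf  # assumes TensorFlow/Keras is installed

    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(64, 64, 1)),        # grayscale inspection image
        tf.keras.layers.Conv2D(16, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(32, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dense(1, activation="sigmoid"),  # probability of "defective"
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

    # Training would use labelled images collected from the line (placeholders here):
    # model.fit(train_images, train_labels, epochs=10, validation_split=0.2)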

This is also a good example of how ANNs are used for edge and fog computing. According to PSA, a neural network running in a machine vision system lowered the number of defects by 90% in half a year, while production costs decreased by 30%. Prospective areas for ANNs in IoT visual control include quality assurance, sorting, production, collecting, marking, traffic control, and ADAS.

Big data advanced analytics for enterprise IoT:

 

Today, neural networks allow businesses to capture advantages such as predictive maintenance, new revenue streams, and better asset management. This is possible through deep neural networks (DNNs) and deep learning (DL), which use multiple layers for data processing. They uncover hidden trends and valuable information in large datasets by employing classification, clustering, and regression, resulting in effective business solutions and better business applications.

Compared to traditional models, DL copes well with the attributes typical of IoT data:

  1. Accounts for the time at which measurements are taken
  2. Resists the high noise of enterprise IoT data
  3. Conducts accurate real-time analysis
  4. Handles heterogeneous and inconsistent data
  5. Processes large volumes of data

In practice, this means you don't require intermediate solutions to deliver and sort the data in the cloud or to analyze it in real time. For example, full-cycle metallurgical enterprises can run a single solution to analyze the variable, unstructured data from metal mining, smelting, and final product manufacturing. Airplanes generate about 800 TB of data per hour, which conventional analytical systems cannot process effectively.

Today, DNN models are successful in the following enterprise IoT applications. 

Healthcare:

AI-based IoT systems now make it possible to predict disease, and the technology keeps improving. For instance, one recent neural-network-based system can detect the risk of heart attack with up to 94.8% accuracy. DNNs are also helpful in disease detection: a spectrogram of a person's voice, captured by IoT devices, can reveal voice pathologies after DNN processing. In general, the accuracy of ANN-based IoT health-monitoring systems is estimated to be above 85%.

Power consumption:

DL systems in the enterprise Internet of Things have delivered results in power demand prediction, power price forecasting, consumption analysis, anomaly and power theft detection, and leak detection. Smart meter data analysis lets you calculate consumption, spot unusual electricity usage, and forecast demand with more than 95% accuracy, helping you adjust energy consumption.
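As a small, hedged example of what smart-meter analysis can look like, the pandas sketch below flags readings that stray more than three standard deviations from a one-day rolling average. The window length, threshold, and simulated data are assumptions; real systems would typically combine such checks with the deep-learning models discussed above.

    import numpy as np
    import pandas as pd

    rng = np.random.default_rng(7)
    # Simulated 15-minute smart-meter readings (kWh) for one week, with one injected spike.
    readings = pd.Series(rng.normal(0.4, 0.05, 7 * 96))
    readings.iloc[300] = 2.5  # anomaly, e.g. theft or a faulty appliance

    window = 96  # one day of 15-minute intervals
    rolling_mean = readings.rolling(window, min_periods=window).mean()
    rolling_std = readings.rolling(window, min_periods=window).std()

    anomalies = readings[(readings - rolling_mean).abs() > 3 * rolling_std]
    print(anomalies)  # the spike at index 300 should appear here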

Manufacturing:

Neural networks help manufacturers make proper use of the most in-demand IoT service: predictive equipment maintenance. It has proven to be a workable practice for mechanical and electrical systems, providing accurate real-time status monitoring and predicting remaining useful life. Another good example is recognizing employee activity by taking sensor readings and applying in-depth analysis.

Transportation & Logistics:

Deep learning has made smart transportation systems possible. It offers better traffic congestion management by processing travel time, speed, weather, and parking occupancy forecasts. Analytical reports based on vehicle data help to discover dangerous driving and potential issues before a failure happens.

The industries above all generate heterogeneous data, so the potential of ANN analytics within EIoT will be unlocked across many complex systems.

When to consider ANN for enterprise IoT:

 

Research in the field of ANNs is still very active, and we cannot foretell all the advantages or pitfalls these solutions will bring. What is clear is that neural networks find correlations, patterns, and trends better than other algorithms. The IoT ecosystem's data will only become more extensive, complex, and diverse over time, so the development of neural networks is the future of IoT.

For now, we can look into the following features of neural networks for enterprise IoT:

  • They suit the IoT ecosystem architecture, replacing alternative solutions with significant advantages.
  • They are essential for industrial image processing.
  • Advanced ANN-based data analytics delivers the high-level business value of enterprise IoT solutions: it improves productivity and accuracy, boosts sales, and supports informed business decisions.
  • Training an ANN requires time and money, but the result is fully customizable.
  • It is not a cheap solution, but the advantages are invaluable if the IoT ecosystem is implemented correctly.

Therefore, if a neural network is offered as one of the options for realizing your idea within the IoT ecosystem, give it a chance. It may well become a must-have in the coming years.

Read more…

Against the backdrop of digital technology and the industrial revolution, the Internet of Things has become the most influential and disruptive of all the latest technologies. As an advanced technology, IoT is showing a palpable difference in how businesses operate. 

Although the Fourth Industrial Revolution is still in its infancy, early adopters of this advanced technology are edging out the competition with their competitive advantage. 

Businesses eager to become a part of this disruptive technology are jostling against each other to implement IoT solutions. Yet, they are unaware of the steps in effective implementation and the challenges they might face during the process. 

This is a complete guide – the only one you'll need – that focuses on delivering effective and uncomplicated IoT implementation.

 

Key Elements of IoT

There are three main elements of IoT technology:

  • Connectivity:

IoT devices are connected to the internet and have a URI (Uniform Resource Identifier), allowing them to relay data to the connected network. The devices can be connected among themselves, to a centralized server, a cloud, or a network of servers.

  • Data Communication:

IoT devices continuously share data with other devices in the network or the server. 

  • Interaction

IoT devices do not simply gather data; they transmit it to their endpoints or server. There is no point in collecting data if it is not put to good use. The collected data is used to deliver smart IoT solutions in automation, make real-time business decisions, formulate strategies, or monitor processes.

How Does IoT work?

IoT devices have URIs and come with embedded sensors. With these sensors, the devices sense their environment and gather information; the devices could be air conditioners, smartwatches, cars, and so on. All the devices then feed their collected data into an IoT platform or gateway.

The IoT platform then performs analytics on the data from various sources and derives useful information as required.
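As a rough sketch of that device-to-platform data flow, the Python snippet below publishes one sensor reading over MQTT, a protocol commonly used between IoT devices and gateways or platforms. The broker address, topic, and payload fields are illustrative assumptions, and the constructor follows the paho-mqtt 1.x style.

    import json
    import time
    import paho.mqtt.client as mqtt  # assumes the paho-mqtt package (1.x API style)

    BROKER = "gateway.example.local"   # hypothetical gateway or platform endpoint
    TOPIC = "site1/hvac/telemetry"     # hypothetical topic naming scheme

    client = mqtt.Client(client_id="ac-unit-42")
    client.connect(BROKER, 1883)
    client.loop_start()

    reading = {"device_id": "ac-unit-42", "temperature_c": 23.5, "ts": time.time()}
    client.publish(TOPIC, json.dumps(reading), qos=1)  # the platform's analytics consume this

    client.loop_stop()
    client.disconnect()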

What are the Layers in IoT Architecture?

Although there isn’t a standard IoT structure that’s universally accepted, the 4-layer architecture is considered to be the basic form. The four layers include perception, network, middleware, and application.

  • Perception:

Perception is the first or the physical layer of IoT architecture. All the sensors, edge devices, and actuators gather useful information based on the project needs in this layer. The purpose of this layer is to gather data and transfer it to the next layer. 

  • Network:

It is the connecting layer between perception and application. This layer gathers information from the perception layer and transmits the data to other devices or servers.

  • Middleware

The middleware layer offers storage and processing capabilities. It stores the incoming data and applies appropriate analytics based on requirements. 

  • Application

The user interacts with the application layer, which is responsible for delivering specific services to the end user.

Implementation Requirements

Effective and seamless implementation of IoT depends on specific tools, such as:

  • High-Level Security 

Security is one of the fundamental IoT implementation requirements. Since the IoT devices gather real-time sensitive data about the environment, it is critical to put in place high-level security measures that ensure that sensitive information stays protected and confidential.  

  • Asset Management

Asset management includes the software, hardware, and processes that ensure that the devices are registered, upgraded, secured, and well-managed. 

  • Cloud Computing

Massive amounts of structured and unstructured data are gathered, processed, and stored in the cloud. The cloud acts as a centralized repository of resources that allows the data to be accessed easily. Cloud computing ensures seamless communication between various IoT devices.

  • Data Analytics

With advanced algorithms, large amounts of data are processed and analyzed from the cloud platform. As a result, you can derive trends based on the analytics, and corrective action can be taken. 

What are the IoT Implementation Steps?

Knowing the appropriate IoT implementation steps will help your business align your goals and expectations against the solution. You can also ensure the entire process is time-bound, cost-efficient, and satisfies all your business needs. 


Set Business Objectives 

IoT implementation should serve your business goals and objectives. Unfortunately, not every entrepreneur is an accomplished technician or computer-savvy. You can hire experts if you lack the practical know-how regarding IoT, the components needed, and specialist knowledge. 

Think of what you will accomplish with IoT, such as improving customer experience, eliminating operational inconsistencies, reducing costs, etc. With a clear understanding of IoT technology, you should be able to align your business needs to IoT applications. 

Hardware Components and Tools

Selecting the necessary tools, components, hardware, and software systems needed for the implementation is the next critical step. First, you must choose the tools and technology, keeping in mind connectivity and interoperability. 

You should also select the right IoT platform that acts as a centralized repository for collecting and controlling all aspects of the network and devices. You can choose to have a custom-made platform or get one from suppliers. 

Some of the major components you require for implementation include:

  • Sensors
  • Gateways
  • Communication protocols
  • IoT platforms
  • Analytics and data management software

Implementation

Before initiating the implementation process, it is recommended that you put together a team of IoT experts and professionals with selected use case experience and knowledge. Make sure that the team comprises experts from operations and IT with a specific skill set in IoT. 

A typical team should include experts with skills in mechanical engineering, embedded system design, electrical and industrial design, and front-end/back-end development.

Prototyping

Before giving the go-ahead, the team must develop an Internet of Things implementation prototype. 

A prototype will help you experiment and identify fault lines, connectivity, and compatibility issues. After testing the prototype, you can include modified design ideas. 

Integrate with Advanced technologies

After the sensors gather useful data, you can add layers of other technologies such as analytics, edge computing, and machine learning. 

The amount of unstructured data collected by the sensors far exceeds the structured data. Machine learning, deep-learning neural systems, and cognitive computing technologies can be applied to both structured and unstructured data to drive improvements.

Take Security Measures

Security is one of the top concerns of most businesses. Because IoT depends predominantly on the internet to function, it is prone to security attacks. However, secure communication protocols, endpoint security, encryption, and access control management can minimize security breaches.

Although there are no standardized IoT implementation steps, most projects follow these processes. But the exact sequence of IoT implementation depends on your project’s specific needs.

Challenges in IoT Implementation

Every new technology comes with its own set of implementation challenges. 


When you keep these challenges of IoT implementation in mind, you’ll be better equipped to handle them. 

  • Lack of Network Security

When your entire system is dependent on the network connectivity for functioning, you are just adding another layer of security concern to deal with. 

Unless you have a robust network security system, you are bound to face issues such as hacking into the servers or devices. Unfortunately, the IoT hacking statistics are rising, with over 1.5 million security breaches reported in 2021 alone. 

  • Data Retention and Storage 

IoT devices continually gather data, and over time the data becomes unwieldy to handle. Such massive amounts of data need high-capacity storage units and advanced IoT analytics technologies. 

  • Lack of Compatibility 

IoT implementation involves several sensors, devices, and tools, and a successful implementation largely depends on the seamless integration between these systems. In addition, since there are no standards for devices or protocols, there could be major compatibility issues during implementation. 

IoT is the latest technology delivering promising results. Yet, like any technology, without proper implementation your business can't hope to leverage its immense benefits.

Taking chances with IoT implementation is not a smart business move, as your productivity, security, customer experience, and future depend on proper and effective implementation. The only way to harness this technology would be to seek a reliable IoT app development company that can take your initiatives towards success.

Read more…

The construction industry is among the many under pressure for optimisation and sustainable growth, driven by the development of smart urban cities. Construction is expected to account for about USD 12.9 trillion of global output by 2022 and is predicted to grow globally by 3.1% by 2030. Demand and spending are growing rapidly in an attempt to meet the housing needs of the fast-growing global population.

 

However, the industry faces unique challenges on a global scale. Despite continual stable growth, underperformance, constant project rework, labour shortages and a lack of adopted digital solutions cause production delays and a worrying 1% growth in productivity.

The construction industry is one of the slowest growing sectors for IoT adoption and digitalization. To put this in numbers:

  • Only 18% of the construction companies use mobile apps for project data and collaboration.
  • Nearly 50% of the companies in the field spend 1% or less on technology
  • 95% of all data captured in construction goes unused
  • 28% of the UK construction firms point out lack of on-site information as the biggest challenge for productivity

And yet

  • 70% of the contractors trust that the advance of technology and software solutions, in particular, can improve their work.

To sum up the data, the construction industry is open to digitalization and IoT, but the advancement is slow and difficult, due to the specific needs of the field. When we talk about technology implementation on the construction site, IoT can help with project data collection, environment condition monitoring, equipment tracking and remote management, as well as safety monitoring with wearables.

However, the implementation of IoT for construction sites calls for careful planning and calculation of costs, as well as trust in the technology. Here is where private LTE networks can come into play and help construction companies take initial steps into advancing their digitalization.

What is Private LTE?

Long-Term Evolution (LTE) is a broadband technology that allows companies to vertically scale solutions for easier management and improved latency, range, speed and cost. LTE is a connectivity standard used for cases with multiple devices across multiple bands and for global technologies. For construction sites, this means LTE can connect all devices on-site, including heavy machinery, mobile devices, trackers, sensors and anything else that requires a stable, uninterrupted connection.

LTE requires companies to connect to an MNO and depend on local infrastructure to run the network, much like Wi-Fi. Private LTE, on the other hand, allows companies to create and operate an independent wireless network that covers all their business facilities. Private LTE is often used to reduce congestion, add a layer of security and reduce costs in locations with no existing infrastructure, a category that construction sites typically fall into.

Why Private LTE for construction?

When it comes to the particular needs of construction companies, private LTE offers the following benefits over public LTE and Wi-Fi.

  • Network ownership and autonomy

Private LTE can be seen as creating a connectivity island on the construction site, where all company devices and machinery can be monitored and controlled by the company network team. Owning the network increases flexibility because businesses do not need to rely on local providers for making changes, creating additional secure networks or moving devices from one network to another.

Construction sites do not often come with suitable networks in place, so putting your own one up whenever needed is just as important as being able to take it down quickly. With private LTE, construction companies can do it as they see fit.

  • Cost

Private LTE can optimise the cost of running a construction site, not just by providing stable connectivity for IoT implementation, but also from a pure network-operations standpoint. For example, Wi-Fi is often not sufficient for serving large construction sites and may require a number of repeaters to cover the area, which increases the running cost. Private LTE can run on a single tower and can be combined with CBRS for further cost reduction. This makes it ideal for locations that would incur high infrastructure installation costs.

  • Control and security

With private LTE, companies can control network access, preventing unwanted users or outside network interference. This is critical for securing the project data and device access. Private LTE allows for setting up specific levels of security access for different on-site members.

Network ownership also allows teams to use real-time data to make timely decisions on consumption, device control and management, as well as react in case of emergency. This can further help increase the on-site safety of the team.

  • Performance

Compared to public LTE or Wi-Fi, private LTE networks are simply better performing when it comes to hundreds of devices. Because the network is private, it allows the usage of connectivity management platforms for control of individual SIM cards (connected devices), traffic optimisation and control. Public LTE and Wi-Fi networks are often not equipped to handle multiple devices on the site, let alone underground projects, where there are barriers to the network. Uninterrupted performance is also key for real-time data and employee safety at high-risk construction sites.

Private LTE is a technology widely applicable to manufacturing, mining, cargo and freight, as well as to utilities, hospitals and smart cities in general. It is considered a stepping stone for 5G implementation because of its capacity, and it is widely regarded as the gateway to future-proofing network access.

While the implementation of NB-IoT is progressing, we at JT IoT have already developed solutions suitable for future IoT connectivity. To learn more about private LTE, watch our deep dive into the topic with Pod Group. 

Originally posted here.

Read more…

By Bee Hayes-Thakore

The Android Ready SE Alliance, announced by Google on March 25th, paves the path for tamper-resistant, hardware-backed security services. Kigen is bringing the first secure iSIM OS, along with our GSMA-certified eSIM OS and personalization services, to support fast adoption of emerging security services across smartphones, tablets, WearOS, Android Auto Embedded and Android TV.

Google has been advancing their investment in how tamper-resistant secure hardware modules can protect not only Android and its functionality, but also protect third-party apps and secure sensitive transactions. The latest Android smartphone devices enable tamper-resistant key storage for Android apps using StrongBox. StrongBox is an implementation of the hardware-backed Keystore that resides in a hardware security module.

To accelerate adoption of new Android use cases with stronger security, Google announced the formation of the Android Ready SE Alliance. Secure Element (SE) vendors are joining hands with Google to create a set of open-source, validated, and ready-to-use SE Applets. On March 25th, Google launched the General Availability (GA) version of StrongBox for SE.


Hardware-based security modules are becoming a mainstay of the mobile world. Juniper Research's latest eSIM research, eSIMs: Sector Analysis, Emerging Opportunities & Market Forecasts 2021-2025, independently assessed eSIM adoption and demand in the consumer, industrial, and public sectors, and predicts that the consumer sector will account for 94% of global eSIM installations by 2025. It anticipates that established adoption of eSIM frameworks from consumer device vendors such as Google will accelerate the growth of eSIMs in consumer devices ahead of the industrial and public sectors.


Consumer sector will account for 94% of global eSIM installations by 2025

Juniper Research, 2021.

Expanding the secure architecture of trust to consumer wearables, smart TV and smart car

What's more, a major development is that this is now not just for smartphones and tablets, but also applies to WearOS, Android Auto Embedded and Android TV. These less traditional form factors have huge potential beyond being purely companion devices to smartphones or tablets. With the power, size and performance benefits offered by Kigen's iSIM OS, OEMs and chipset vendors can consider the full scope of the vast Android ecosystem to deliver new services.

This means new secure services and innovations around:

🔐 Digital keys (car, home, office)

🛂 Mobile Driver’s License (mDL), National ID, ePassports

🏧 eMoney solutions (for example, Wallet)

How is Kigen supporting Google’s Android Ready SE Alliance?

The alliance was created to make discrete tamper-resistant, hardware-backed security the lowest common denominator for the Android ecosystem. A major goal of this alliance is to enable consistent, interoperable, and demonstrably secure applets across the Android ecosystem.

Kigen believes that enabling the broadest choice and interoperability is fundamental to the architecture of digital trust. Our secure, standards-compliant eSIM and iSIM OS, and secure personalization services are available to all chipset or device partners in the Android Ready SE Alliance to leverage the benefits of iSIM for customer-centric innovations for billions of Android users quickly.

Vincent Korstanje, CEO of Kigen

Kigen’s support for the Android Ready SE Alliance will allow our industry partners to easily leapfrog to the enhanced security and power efficiency benefits of iSIM technology or choose a seamless transition from embedded SIM so they can focus on their innovation.

We are delighted to partner with Kigen to further strengthen the security of Android through StrongBox via Secure Element (SE). We look forward to widespread adoption by our OEM partners and developers and the entire Android ecosystem.

Sudhi Herle, Director of Android Platform Security 

In the near term, the Google team is prioritizing and delivering the following Applets in conjunction with corresponding Android feature releases:

  • Mobile driver’s license and Identity Credentials
  • Digital car keys

Kigen brings the ability to bridge physical embedded security hardware to a fully integrated form factor. Our Kigen standards-compliant eSIM OS (version 2.2 eUICC OS) is available to support chipsets and device makers now. This announcement is the start of what will bring a whole host of new and exciting trusted services, offering a better experience for users on Android.

Kigen’s eSIM (eUICC) OS brings


The smallest operating system, allowing OEMs to select compact, cost-effective hardware to run it on.

Kigen OS offers the highest level of logical security when employed on any SIM form factor, including a secure enclave.

On top of Kigen OS, we have a broad portfolio of Java Card™ Applets to support your needs for the Android Ready SE Alliance.

Kigen's Integrated SIM or iSIM (iUICC) OS furthers these advantages


Integrated at the heart of the device and securely personalized, iSIM brings significant size and battery life benefits to cellular IoT devices. iSIM can act as a root of trust for payment, identity, and critical infrastructure applications.

Kigen's iSIM is flexible enough to support dual-SIM capability through a single profile or remote SIM provisioning mechanisms, with the latter enabling out-of-the-box connectivity and secure, remote profile management.

For smartphones, set-top boxes, Android Auto applications, in-car displays, Chromecast or Google Assistant-enabled devices, iSIM can offer significant benefits for incorporating artificial intelligence at the edge.

Kigen’s secure personalization services to support fast adoption

SIM vendors have in-house capabilities for data generation but the eSIM and iSIM value chains redistribute many roles and responsibilities among new stakeholders for the personalization of operator credentials along different stages of production or over-the-air when devices are deployed.

Kigen can offer data generation as a service to vendors new to the ecosystem.

Partner with us to provide cellular chipset and module makers with the strongest security and performance for integrated SIM, helping to accelerate these new use cases.

Security considerations for eSIM and iSIM enabled secure connected services

Designing a secure connected product requires considerable thought and planning and there really is no ‘one-size-fits-all’ solution. How security should be implemented draws upon a multitude of factors, including:

  • What data is being stored or transmitted between the device and other connected apps?
  • Are there regulatory requirements for the device? (e.g. PCI DSS, HIPAA, FDA)
  • What are the hardware or design limitations that will affect security implementation?
  • Will the devices be manufactured in a site accredited by all of the necessary industry bodies?
  • What is the expected lifespan of the device?

End-to-end ecosystem and services thinking needs to be a design consideration from the very early stages, especially when considering the strain on battery consumption in devices such as wearables, smartwatches and fitness devices, as well as portable devices that are part of connected consumer vehicles.

Originally posted here.

Read more…

Selecting an IoT Platform

The past several years have seen a huge growth in the number of companies offering IoT Platforms. The market research firm IoT Analytics reported 613 companies offering IoT platforms in 2021! This is a mind-blowing number. The IoT platforms vary widely in capabilities but typically focus on one or more of the building blocks of IoT systems – physical devices, internet connectivity, and digital services. In one way or another, they provide software (or in some cases hardware too) that gives companies a head-start when building IoT systems. There are so many companies offering platforms that it is nearly impossible to keep up with all of them.

 Charting the right path, avoiding pitfalls, maximizing your success.

If you are getting into IoT and not familiar with IoT platforms, you might be asking yourself questions like: What makes up an IoT platform? What advantages could one have for my company? How do I select an IoT platform?

Let’s tackle these questions one by one. 

What makes up an IoT platform?
 

Features

True IoT platforms typically provide the following features:

  • Digital services running in the cloud that physical devices connect to
  • Software that runs on devices that communicates with the digital services
  • A framework or schema for data messaging and remote command & control of devices
  • Security infrastructure to handle device registration, authentication, security credential management
  • Tools and methods for updating device firmware over-the-air (OTA)
  • Web dashboards for viewing the state of devices and interacting with the system

IoT  platforms may or may not also provide other features, including:

  • Analytics tools and dashboards
  • Digital twins or shadows
  • Application deployment orchestration
  • Machine learning orchestration
  • Rules engines
  • Fleet management tools
  • Integrations to other services
  • Gateway or hub support for bridging devices to the cloud
  • Cellular network plans for devices
  • Web or mobile application interfaces and templates

Example Elements of an IoT Platform

Types of IoT Platforms

IoT platforms are not all the same. Their features and target use-cases vary a lot. However, at a high level, they can be grouped into two main categories.

Platform as a Service (PaaS) – Offered by the big cloud service providers

PaaS platforms provide building blocks to do most things an IoT system needs, but it is up to you to write the custom code that connects it all together. With a PaaS provider, you don't have to worry about underlying server hardware, but you have to compose their services into a working architecture and manage the deployment of applications that use their services. This is more work but allows more flexibility and the opportunity to customize the system to your needs. Ongoing costs of a PaaS IoT platform are typically lower than a SaaS, but expertise is required to ensure correct usage patterns to avoid larger costs. The big cloud providers all offer PaaS IoT platforms. This includes Amazon Web Services (AWS), Microsoft Azure, and Google Cloud Platform (GCP).

Software as a Service (SaaS) – Offered by numerous software vendors, large and small

With a SaaS provider, you get access to use the software application they deploy and manage for you. Or you can license it and deploy it yourself. SaaS platforms typically provide some configurability and integrations with other systems. There is much less work on the cloud side as this is mostly taken care of for you. However, you are limited to the features that the IoT platform provider offers. You may need to invest more in bridging the platform to your other systems. Depending on your use case, a SaaS may provide more advanced features out-of-the-box than a PaaS. Ongoing costs are likely to be higher with SaaS IoT platforms. Examples of SaaS IoT platform providers include Pelion, Losant, Friendly Technologies, Software AG, Blynk, Particle, ThingsBoard, and Golioth.

What advantages could they have for my company?

Benefits of IoT Platforms – There are a lot.

The goal of IoT platforms is to provide a foundation for product-makers to build IoT solutions on top of.  IoT platforms take care of all the fundamental features that all solutions need (e.g. “the plumbing”), so you can focus on adding value with the differentiating features that you add on top. Users of IoT platforms get a huge benefit from economies of scale – especially if using the most popular platforms. This translates into improved security, more robust services, and lower costs. For these reasons, we always recommend using an IoT platform.

How do I select an IoT platform?

The Big Question – Should you use a PaaS or SaaS?

At SpinDance, we believe in lean and agile business principles. This usually translates into taking a staged approach and focusing on different priorities in each stage. IoT is a journey, not a destination. We have seen the most success when companies tackle each of their challenges in stages, don't try to do too much too quickly, and don't lock themselves into long-term decisions too early. Choosing whether to use a PaaS or SaaS depends on the stage you are in along your IoT journey.

Our Answer – It depends on your stage in your IoT journey. 


If you are just starting on your IoT journey…

In the disconnected stage your main goals are to learn what technology can do for you, develop a vision for your new product or service with that knowledge, and evaluate your vision based on customer input. At this stage, you shouldn’t be worried too much about scale or efficiency. You need to nail down the problem you want to solve and the solution you propose to solve it with. Ash Maurya, entrepreneur and author of Running Lean, says that “Building a successful product is fundamentally about risk mitigation.” To evaluate and reduce your risk, you need to test your assumptions.

We often recommend building Proof of Concepts and Prototypes in this stage. These experiments are crucial to help you quickly validate the feasibility, desirability, and viability of your plans. They also help rally your organization and potential customers around new possibilities.

SaaS IoT platforms offer their greatest advantage in this stage. They can help you get devices connected and data flowing quickly because they typically have more features ready out-of-the-box. However, since your knowledge about the future is limited at this stage, we recommend you avoid long-term commitments so you don't get stuck with a solution that doesn't work for you down the road.

 If you are working on your first connected product…

In the connecting stage, you should have some confidence in your problem-market fit and you should have a better idea of what benefits IoT can bring to your business. Now you need to build a system to deal with the rigors of production. You also need to adapt your organization to support your new product or service.

We recommend shifting your focus to creating robust experiences for your customers spanning across the physical devices and digital interfaces they interact with. You need to consider the other parts of the system such as mobile applications, web applications, database storage, operations dashboards, etc that you’ll need for your customers and your internal teams to interact with the system.

PaaS IoT platforms start to have a strong advantage in this stage. More often than not, we see a company’s needs outgrow the features provided by a SaaS, so there is a need to augment the capabilities of the SaaS platform or bridge it to your other systems. For example, if a SaaS IoT platform does not provide long-term data storage, you will need to create a bridge that pulls data from the platform’s service and puts it into a database that you control in the cloud. Maintaining and monitoring this bridge is non-trivial, which may lead you to consolidate everything into your existing cloud. For reasons like this, we typically recommend PaaS platforms at this stage.
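
To make that concrete, here is a minimal sketch of such a bridge in Python. The platform REST endpoint, API token, and response shape are hypothetical; a real bridge would also need paging, retries, scheduling, and monitoring:

```python
# Minimal "bridge" sketch: pull recent telemetry from a SaaS IoT platform's
# REST API (hypothetical endpoint and response shape) and copy it into a
# database you control. Real platforms differ in auth, paging, and schema.
import sqlite3
import requests

PLATFORM_URL = "https://api.example-iot-platform.com/v1/telemetry"  # hypothetical
API_TOKEN = "REPLACE_ME"

def fetch_recent_telemetry():
    resp = requests.get(
        PLATFORM_URL,
        headers={"Authorization": f"Bearer {API_TOKEN}"},
        params={"limit": 500},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["items"]  # assumed response shape

def store(rows, db_path="telemetry.db"):
    con = sqlite3.connect(db_path)
    con.execute(
        "CREATE TABLE IF NOT EXISTS telemetry "
        "(device_id TEXT, ts TEXT, metric TEXT, value REAL)"
    )
    con.executemany(
        "INSERT INTO telemetry VALUES (?, ?, ?, ?)",
        [(r["deviceId"], r["timestamp"], r["metric"], r["value"]) for r in rows],
    )
    con.commit()
    con.close()

if __name__ == "__main__":
    store(fetch_recent_telemetry())
```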

If you already have connected products out in the market…

The connected or accelerating stages are all about maximizing the benefit of IoT, taking advantage of the valuable data you are likely getting, and aligning your costs to revenue. You should be focused on scaling up your system while you improve your connected customer relationships and build up new processes and skills. These are not insignificant tasks. It takes in-house expertise. Your team needs to understand your systems, be able to improve efficiencies and optimize costs. You’ve got to get data to the right place when you need it, and it has to drive reliable actions across all your infrastructure.

PaaS IoT platforms offer the most advantage at this stage. You have more control of your systems and are not locked into a specific software platform. You have the ability to customize and have tighter integration with your existing systems. This lets you adapt and evolve to meet the needs of your customers over time.

Which production architecture works for you?

Considering the needs of your production system likely go beyond the needs of your prototypes and minimum viable product (MVP), it is best to think about what additional features you will need to augment the capabilities of your selected IoT platform. The diagrams below show the difference between augmenting a SaaS platform versus a PaaS platform.

An IoT System Built Around a SaaS Platform

An IoT System Built on a PaaS Platform

What else should be considered when choosing an IoT Platform?

When selecting an IoT platform, you are also choosing an ecosystem to join. This has ramifications that go beyond just the platform. Consider the following questions:

  • What device types are already supported / how easy is it to support the devices I need?
  • How close does the platform fit my use-case?
  • How easy is it to get started and use?
  • What skills do I need on my team to utilize the platform?
  • Will my team get the support we need to succeed?
  • Is the service reliable / highly available / trustworthy?
  • What additional features and services will I have to develop?
  • What systems do I need to integrate with? How easy is that?
  • What will my ongoing costs be for the IoT platform, as well as for the other systems I need to maintain?
  • What happens if I want to change to a different IoT Platform?
  • Am I building the skills and knowledge we need inside my organization to succeed in the future?

Jumpstarting your IoT Systems with Starter Components

Building a system based on a PaaS platform offers a lot of flexibility and control. But you are faced with configuring and deploying your own applications to get your system running. There are a lot of reasons why you don’t want to create things from scratch. You need a head start. You need to follow good patterns and industry best practices. So, what should you do?

We believe that starter components (a.k.a. solution templates or solution implementations) offer a great jumpstart to standing up a robust system. The big cloud companies know this and offer templates for various use-cases. These can be used in any stage of the IoT Journey. For example, AWS offers a Smart Product Solution implementation with capabilities to connect devices and to process and analyze telemetry data within a scalable framework. A particularly useful feature is that it is based on the AWS Cloud Development Kit (CDK), which means it can be programmatically deployed in minutes. Microsoft Azure has similar solution examples that can also be deployed and tested relatively quickly.
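
For a sense of what “programmatically deployed” looks like, here is a minimal CDK (Python) sketch of the kind of resources a starter stack provisions. It is illustrative only, not the actual AWS Smart Product Solution, and the bucket and thing names are examples:

```python
# Minimal AWS CDK (Python) sketch of the kind of stack a starter template
# provisions: an S3 bucket for firmware/artifacts and a registered IoT thing.
# Illustrative only; real templates create fleets, rules, and analytics too.
from aws_cdk import App, Stack, aws_s3 as s3, aws_iot as iot
from constructs import Construct

class StarterIoTStack(Stack):
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)

        # Bucket for firmware images and processed telemetry exports
        s3.Bucket(self, "ArtifactBucket", versioned=True)

        # A single registered device as a placeholder for a fleet
        iot.CfnThing(self, "DemoThing", thing_name="demo-device-001")

app = App()
StarterIoTStack(app, "StarterIoTStack")
app.synth()
```

Running cdk deploy against a stack like this stands up the resources in minutes, which is what makes starter templates attractive for early experiments.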

Additionally, there are a lot of benefits from working with a solution provider that has experience with IoT systems and can offer good guidance and support. SpinDance recently collaborated with our partner TwistThink to build Auris Cloud, a set of customizable IoT components that capture our combined years of experience working on IoT systems. Auris components are customizable to meet the needs of many different types of use cases and are deployable on AWS. Things like security, performance, and scalability are baked into the system. Auris can be optimized for different performance and cost models, integrated with other systems, and deployed as an application that you control. We believe this approach offers a great trade-off between fully custom and off-the-shelf solutions.

Summary

At SpinDance, we don’t recommend you try to build an IoT system from scratch. There are great solutions available from both SaaS and PaaS providers. They offer massive benefits in enabling you to build secure and scalable IoT solutions. However, we recommend you consider your organization’s goals and the stage you are in before locking yourself into an IoT platform. Be sure to start with your customer needs and build backward. Prototype and get things right before scaling. A SaaS IoT platform can be great for building proof of concepts or prototyping but may not work for you long term. For maximum customization, flexibility, and tighter integration with your other cloud applications we recommend a PaaS IoT platform. And for the lowest risks and maximum benefits, we recommend using pre-built components that can be customized to your needs.

Read more…

by Carsten Gregersen

With how fast the IoT industry is growing, it’s paramount your business isn’t left behind.

IoT technology has brought a ton of benefits and makes systems more efficient and easier to manage. As a result, it’s no surprise that more businesses are adopting IoT solutions. On top of that, businesses starting new projects have the slight advantage of buying all new technology and, therefore, not having to deal with legacy systems. 

On the other hand, if you have an already operational legacy system and you want to implement IoT, you may think you have to buy entirely new technology to get it online, right? Not necessarily. After all, if your legacy systems are still functional and your staff is comfortable with them, why should you waste all of that time and money?

Legacy systems can still bend to your will and be used for adopting IoT. Sticking rather than twisting can help your business save money on your IoT project.

In this blog, we’ll go over the steps you need to follow to integrate IoT technology into your legacy systems and the different options you have for getting it done.

1. Analyze Your Current Systems

First things first, take a look at your current systems and note their purpose, the way they work, the type of data they collect, and the ways they could benefit from communicating with each other.

This step is important because it will allow you to plan out IoT integration more efficiently. When analyzing your current systems, make sure you focus on these key aspects:

  • Automation – See how automation is currently accomplished and what other aspects should be automated.
  • Efficiency – What aspects are routinely tedious or slow and could become more efficient?
  • Data – How is data captured, stored, and processed, and how could it be used better?
  • Money – How much do current processes cost, and which of them could be done more cheaply with IoT?
  • Computing – How is data processed today: in the cloud, at the edge, or in a hybrid of the two?

Following these steps will help you know your project in and out and apply IoT in the areas that truly matter.

2. Plan for IoT Integration

In order to integrate IoT into your legacy systems, you must get everything in order. 

In order to successfully integrate IoT into your system, you will need strong planning, design, and implementation phases. The steps to achieve this include:

  • Decide what IoT hardware is going to be needed
  • Set a budget taking software, hardware, and maintenance into account
  • Decide on a communication protocol
  • Develop software tools for interacting with the system
  • Decide on a security strategy

This process can be daunting if you don’t know how IoT works, but by following the right tutorials and developing with the right tools, your IoT project becomes readily achievable.

Nabto has tools that can not only help you set up an IoT project but also add legacy systems and newer IoT devices to it.

Here are several ways in which we can help get your legacy systems IoT ready. 

  • You can integrate the Nabto SDK to add IoT remote control access to your devices.
  • Use the Nabto application to move data from one network to another – otherwise known as TCP tunneling.
  • Add secure remote access to your existing solutions. 
  • Build mobile apps for remote control of embedded devices using our IoT app solution.

3. Add IoT Sensors to Existing Hardware

IoT has the capability to automate, control, and make systems more efficient. Therefore, interconnecting your legacy systems to allow for communication is a great idea.

There’s a high chance your legacy systems don’t currently have the ability to sense or communicate data. However, adding new IoT sensors can give them these capabilities.

IoT sensors are small devices that can detect when something changes. They capture that information and send it to a main computer over the internet, where it can be processed or used to trigger commands. They could measure, among other things:

  • Temperature
  • Humidity
  • Pressure
  • Orientation (gyroscope)
  • Acceleration (accelerometer)

These sensors are cheap and easy to install; therefore, adding them to your existing legacy systems can be the simplest and quickest way to get them communicating over the internet.

Set up which inputs the sensor should respond to, under what conditions, and what it should do with the collected data. You might be surprised by the benefits that a simple data-collecting device can bring to your project!
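
As a rough illustration, the sketch below reads a temperature value and publishes it to an MQTT broker. The broker address, topic, alarm threshold, and the sensor-read stub are all placeholders you would replace with your own:

```python
# Minimal sketch: read a value from a retrofit sensor and publish it to a
# broker over MQTT. The read_temperature() stub stands in for whatever
# sensor interface you actually have (I2C, 1-Wire, vendor SDK, etc.).
import json
import time
import random

import paho.mqtt.publish as publish

BROKER = "broker.example.com"   # hypothetical broker address
TOPIC = "factory/line1/press3/temperature"
ALARM_THRESHOLD_C = 85.0

def read_temperature() -> float:
    # Placeholder: substitute a real driver call for your sensor.
    return 20.0 + random.random() * 80.0

while True:
    temp_c = read_temperature()
    payload = {"ts": time.time(), "temperature_c": temp_c,
               "alarm": temp_c > ALARM_THRESHOLD_C}
    publish.single(TOPIC, json.dumps(payload), hostname=BROKER)
    time.sleep(60)
```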

4. Connect Existing PLCs to the Internet

If you already have an automated system managed by a PLC (Programmable Logic Controller), your devices already share data with each other. Therefore, the next step is to get them online.

With access to the internet, these systems can be controlled remotely from anywhere in the world. Data can be accessed, modified, and analyzed more easily. On top of that, updates can be pushed globally at any time.

Given that some PLCs use proprietary protocols and unusual ways of making devices communicate with each other, an IoT gateway is the best way to bring the PLC to the internet.

An IoT gateway is a device that acts as a bridge between IoT devices and the cloud and allows for communication between them. This lets you add IoT capabilities to a PLC without having to restructure it or change it too much.
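
The sketch below illustrates the gateway idea under simple assumptions: it polls two holding registers from a PLC over Modbus/TCP and republishes the values as MQTT. The IP address, register map, scaling, broker, and topic are all hypothetical and must be mapped to your PLC program:

```python
# Minimal gateway sketch: poll holding registers from a PLC over Modbus/TCP
# and republish them northbound as MQTT. Addresses and scaling are assumed.
import json
import time

import paho.mqtt.publish as publish
from pymodbus.client import ModbusTcpClient   # pymodbus 3.x import path

PLC_IP = "192.0.2.10"
BROKER = "broker.example.com"
TOPIC = "site/pumpstation1/plc1"

client = ModbusTcpClient(PLC_IP)
client.connect()

try:
    while True:
        rr = client.read_holding_registers(0, count=2)   # e.g. pressure, flow
        if not rr.isError():
            pressure_psi = rr.registers[0] / 10.0         # assumed scaling
            flow_gpm = rr.registers[1]
            publish.single(
                TOPIC,
                json.dumps({"ts": time.time(),
                            "pressure_psi": pressure_psi,
                            "flow_gpm": flow_gpm}),
                hostname=BROKER,
            )
        time.sleep(5)
finally:
    client.close()
```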

5. Connect Legacy Systems Using an I/O Port

A lot of the time, a legacy system has some kind of interface for data input/output. Sometimes this was implemented for debugging while the product was being developed. At other times, it exists so service organizations can interface with products in the field and help customers with setup and/or debugging problems.

These debug ports are often similar to standard serial ports, such as RS-485 or RS-232. That said, they can also be raw UART, SPI, or I2C. What’s more, the majority of the time the protocol on top of the serial connection is proprietary.

This kind of interface is great. It allows a “black box” to be created, with a physical interface matching the legacy system and firmware running on the black box that translates “internet” requests into the legacy system’s proprietary protocol. In addition, this new setup can serve as a design for newer internet-accessible versions of the system simply by folding the black box into the internal legacy design.
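
A minimal sketch of that translation layer might look like the following, assuming a made-up STX/ETX framed protocol over the debug UART; the real framing, commands, and checksum would come from the legacy product’s documentation (or reverse engineering):

```python
# Sketch of the "black box" translation layer: frame a request in the legacy
# device's (hypothetical) proprietary serial protocol, send it over the debug
# UART, and parse the reply. A network-facing handler (MQTT, HTTP, or a
# vendor tunnel) would call query_legacy() on behalf of remote clients.
import serial  # pyserial

PORT = "/dev/ttyUSB0"
BAUD = 9600

def checksum(payload: bytes) -> int:
    return sum(payload) & 0xFF

def query_legacy(command: bytes) -> bytes:
    """Send one framed command and return the raw reply payload."""
    frame = b"\x02" + command + bytes([checksum(command)]) + b"\x03"  # assumed STX/ETX framing
    with serial.Serial(PORT, BAUD, timeout=1) as port:
        port.write(frame)
        reply = port.read_until(b"\x03")          # read until end-of-frame byte
    if len(reply) < 3 or reply[-1] != 0x03:
        raise IOError("no or malformed reply from legacy device")
    return reply[1:-2]                            # strip STX, checksum, ETX

# Example: ask the legacy controller for its (hypothetical) status register.
if __name__ == "__main__":
    print(query_legacy(b"STAT?"))
```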

Bottom Line

Getting your legacy systems to work in IoT is not as much of a challenge as you might have initially thought.

Following some fairly simple strategies can get them set up relatively quickly. However, don’t skip the planning phase: decide on your IoT strategy and how it will be implemented in your own legacy system. This will streamline the process even further and let you take full advantage of all the benefits that IoT brings to your project.

Originally posted here.

Read more…

Today the world is obsessed with the IoT, as if this is a new concept. We've been building the IoT for decades, but it was only recently some marketing "genius" came up with the new buzz-acronym.

Before there was an IoT, before there was an Internet, many of us were busy networking. For the Internet itself was a (brilliant) extension of what was already going on in the industry.

My first experience with networking was in 1971 at the University of Maryland. The school had a new computer, a $10 million Univac 1108 mainframe. This was a massive beast that occupied most of the first floor of a building. A dual-processor machine, it was transistorized, though the control console did have some ICs. Rows of big tape drives mirrored the layman's idea of computers in those days. Many dishwasher-sized disk drives were placed around the floor and printers, card readers and other equipment were crammed into every corner. Two Fastrand drum memories, each consisting of a pair of six-foot long counterrotating drums, stored a whopping 90 MB each. Through a window you could watch the heads bounce around.

The machine was networked. It had a 300 baud modem with which it could contact computers at other universities. A primitive email system let users create mail which was queued till nightfall. Then, when demands on the machine were small, it would call the appropriate remote computer and forward mail. The system operated somewhat like today's "hot potato" packets, where the message might get delivered to the easiest machine available, which would then attempt further forwarding. It could take a week to get an email, but at least one saved the $0.08 stamp that the USPS charged.

The system was too slow to be useful. After college I lost my email account but didn't miss it at all.

By the late 70s many of us had our own computers. Mine was a home-made CP/M machine with a Z80 processor and a small TV set as a low-res monitor. Around this time Compuserve came along and I, like so many others, got an account with them. Among other features, users had email addresses. Pretty soon it was common to dial into their machines over a 300 baud modem and exchange email and files. Eventually Compuserve became so ubiquitous that millions were connected, and at my tools business during the 1980s it was common to provide support via this email. The CP/M machine gave way to a succession of PCs, and modems ramped up to 57K baud.

My tools business expanded rapidly and soon we had a number of employees. Sneakernet was getting less efficient so we installed an Arcnet network using Windows 3.11. That morphed into Ethernet connections, though the cursing from networking problems multiplied about as fast as the data transfers. Windows was just terrible at maintaining reliable connectivity.

In 1992 Mike Lee, a friend from my Boys Night Out beer/politics/sailing/great friends group, which still meets weekly (though lately virtually), came by the office with his laptop. "You have GOT to see this," he intoned, and he showed me the world-wide web. There wasn't much to see as there were few sites. But the promise was shockingly clear. I was stunned.

The tools business had been doing well. Within a month we spent $100k on computers, modems and the like and had a new business: Softaid Internet Services. SIS was one of Maryland's first ISPs and grew quickly to several thousand customers. We had a T1 connection to MAE-EAST in the DC area which gave us a 1.5 Mb/s link… for $5000/month. Though a few customers had ISDN connections to us, most were dialup, and our modem shelf grew to over 100 units with many big fans keeping the things cool.

The computers all ran BSD Unix, which was my first intro to that OS.

I was only a few months back from a failed attempt to singlehand my sailboat across the Atlantic and had written a book-length account of that trip. I hastily created a web page of that book to learn about using the web. It is still online and has been read several million times in the intervening years. We put up a site for the tools business which eventually became our prime marketing arm.

The SIS customers were sometimes, well, "interesting." There was the one who claimed to be a computer expert, but who tried to use the mouse by waving it around over the desk. Many had no idea how to connect a modem. Others complained about our service because it dropped out when mom would pick up the phone to make a call over the modem's beeping. A lot of handholding and training was required.

The logs showed a shocking (to me at the time) amount of porn consumption. Over lunch an industry pundit explained how porn drove all media, from the earliest introduction of printing hundreds of years earlier.

The woman who ran the ISP was from India. She was delightful and had a wonderful marriage. She later told me it had been arranged; they met on their wedding day. She came from a remote and poor village and had had no exposure to computers, or electricity, till emigrating to the USA.

Meanwhile many of our tools customers were building networking equipment. We worked closely with many of them and often had big routers, switches and the like onsite that our engineers were working on. We worked on a lot of what we'd now call IoT gear: sensors et al connected to the net via a profusion of interfaces.

I sold both the tools and Internet businesses in 1997, but by then the web and Internet were old stories.

Today, like so many of us, I have a fast (250 Mb/s) and cheap connection into the house with four wireless links and multiple computers chattering to each other. Where in 1992 the web was incredibly novel and truly lacking in useful functionality, now I can't imagine being deprived of it. Remember travel agents? Ordering things over the phone (a phone that had a physical wire connecting it to Ma Bell)? Using 15 volumes of an encyclopedia? Physically mailing stuff to each other?

As one gets older the years spin by like microseconds, but it is amazing to stop and consider just how much this world has changed. My great grandfather lived on a farm in a world that changed slowly; he finally got electricity in his last year of life. His daughter didn't have access to a telephone till later in life, and my dad designed spacecraft on vellum and starched linen using a slide rule. My son once saw a typewriter and asked me what it was; I mumbled that it was a predecessor of Microsoft Word.

That he understood. I didn't have the heart to try and explain carbon paper.

Originally posted HERE.

Read more…

In my last post, I explored how OTA updates are typically performed using Amazon Web Services and FreeRTOS. OTA updates are critically important to developers with connected devices. In today’s post, we are going to explore several best practices developers should keep in mind with implementing their OTA solution. Most of these will be generic although I will point out a few AWS specific best practices.

Best Practice #1 – Name your S3 bucket with afr-ota

There is a little trick with creating S3 buckets that I was completely oblivious to for a long time. Thankfully, when I checked in with some colleagues about it, they also had not been aware of it, so I’m not sure how long this has been supported, but it can save an embedded developer from having to wade through too many AWS policies and simplify the process a little bit.

Anyone who has attempted to create an OTA update with AWS and FreeRTOS knows that you have to set up several permissions to allow an OTA update job to access the S3 bucket. Well, if you name your S3 bucket so that it begins with “afr-ota”, then the bucket will automatically be covered by the AWS managed policy AmazonFreeRTOSOTAUpdate. (See Create an OTA Update service role for more details.) It’s a small help, but a good best practice worth knowing.
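
For reference, here is a quick boto3 sketch that creates such a bucket. The bucket name and region are examples (S3 bucket names must be globally unique), and enabling versioning is generally recommended so earlier firmware images remain retrievable:

```python
# Quick sketch: create the OTA firmware bucket with the "afr-ota" prefix so
# the AmazonFreeRTOSOTAUpdate managed policy covers it, per the tip above.
import boto3

s3 = boto3.client("s3", region_name="us-east-1")
s3.create_bucket(Bucket="afr-ota-acme-thermostat-firmware")  # example name

# Versioning keeps previous firmware images retrievable.
s3.put_bucket_versioning(
    Bucket="afr-ota-acme-thermostat-firmware",
    VersioningConfiguration={"Status": "Enabled"},
)
```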

Best Practice #2 – Encrypt your firmware updates

Embedded software must be one of the most expensive things to develop that mankind has ever invented! It’s time-consuming to create and test and can consume a large percentage of the development budget. Software, though, also drives most features in a product and can dramatically differentiate one product from another. That software is intellectual property that is worth protecting through encryption.

Encrypting a firmware image provides several benefits. First, it can convert your firmware binary into a form that seems random or meaningless. This is desired because a developer shouldn’t want their binary image to be easily studied, investigated, or reverse engineered. It makes it harder for someone to steal intellectual property and harder for someone interested in attacking the system to understand it. Second, encrypting the image means that the sender must have a key or credential of some sort that matches the device that will decrypt the image. This can be looked at as a simple way of helping to authenticate the source, although more than encryption is needed to fully authenticate the image and verify its integrity, such as signing it.
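
As a hedged illustration of the encryption step only, the sketch below encrypts an image with AES-256-GCM using Python’s cryptography package; key management and image signing are separate concerns, and the file names and AAD string are placeholders:

```python
# Sketch of encrypting a firmware image with AES-256-GCM. Key management
# (HSM, KMS, per-device keys) and image signing are not shown here.
from cryptography.hazmat.primitives.ciphers.aead import AESGCM
import os

def encrypt_firmware(in_path: str, out_path: str, key: bytes) -> None:
    nonce = os.urandom(12)                       # 96-bit nonce, unique per image
    with open(in_path, "rb") as f:
        plaintext = f.read()
    ciphertext = AESGCM(key).encrypt(nonce, plaintext, b"fw-v1.8")  # version string as AAD
    with open(out_path, "wb") as f:
        f.write(nonce + ciphertext)              # GCM tag is appended by encrypt()

key = AESGCM.generate_key(bit_length=256)        # in practice, provisioned securely
encrypt_firmware("app_v1_8.bin", "app_v1_8.enc", key)
```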

Best Practice #3 – Do not support firmware rollbacks

There is often a debate as to whether firmware rollbacks should be supported in a system or not. My recommendation for a best practice is that firmware rollbacks be disabled. The argument for rollbacks is often that if something goes wrong with a firmware update then the user can rollback to an older version that was working. This seems like a good idea at first, but it can be a vulnerability source in a system. For example, let’s say that version 1.7 had a bug in the system that allowed remote attackers to access the system. A new firmware version, 1.8, fixes this flaw. A customer updates their firmware to version 1.8, but an attacker knows that if they can force the system back to 1.7, they can own the system. Firmware rollbacks seem like a convenient and good idea, in fact I’m sure in the past I used to recommend them as a best practice. However, in today’s connected world where we perform OTA updates, firmware rollbacks are a vulnerability so disable them to protect your users.

Best Practice #4 – Secure your bootloader

Updating firmware Over-the-Air requires several components to ensure that it is done securely and successfully. Often the focus is on getting the new image to the device and getting it decrypted. However, just like in traditional firmware updates, the bootloader is still a critical piece to the update process and in OTA updates, the bootloader can’t just be your traditional flavor but must be secure.

There are quite a few methods that can be used with the onboard bootloader, but no matter the method used, the bootloader must be secure. Secure bootloaders need to be capable of verifying the authenticity and integrity of the firmware before it is ever loaded. Some systems will use the application code to verify and install the firmware into a new application slot while others fully rely on the bootloader. In either case, the secure bootloader needs to be able to verify the authenticity and integrity of the firmware prior to accepting the new firmware image.
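
The verification logic itself is small. The following Python sketch shows the idea using an ECDSA signature check; on a real MCU this runs in the bootloader in C against keys held in protected storage, so treat it purely as an illustration of the decision being made:

```python
# Illustrative verification step a secure update path performs before
# accepting an image: check an ECDSA signature over the image with the
# vendor's public key. Authenticity and integrity both hinge on this check.
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.exceptions import InvalidSignature

def image_is_trusted(image: bytes, signature: bytes, pubkey_pem: bytes) -> bool:
    public_key = serialization.load_pem_public_key(pubkey_pem)
    try:
        public_key.verify(signature, image, ec.ECDSA(hashes.SHA256()))
        return True                     # authenticity and integrity both hold
    except InvalidSignature:
        return False                    # reject: do not install or boot this image

# Usage sketch: only hand the image to the installer/boot stage if it verifies.
# if image_is_trusted(new_image, sig, vendor_pubkey_pem):
#     install(new_image)
```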

It’s also a good idea to ensure that the bootloader is built into a chain of trust and cannot be easily modified or updated. The secure bootloader is a critical component in a chain-of-trust that is necessary to keep a system secure.

Best Practice #5 – Build a Chain-of-Trust

A chain-of-trust is a sequence of events that occurs while booting the device and ensures each link in the chain is trusted software. For example, I’ve been working with the Cypress PSoC 64 secure MCUs recently, and these parts come shipped from the factory with a hardware-based root-of-trust to authenticate that the MCU came from a secure source. That Root-of-Trust (RoT) is then transferred to a developer, who programs a secure bootloader and security policies onto the device. During the boot sequence, the RoT verifies the integrity and authenticity of the bootloader, which then verifies the integrity and authenticity of any second-stage bootloader or software, which in turn verifies the authenticity and integrity of the application. The application then verifies the authenticity and integrity of its data, keys, operational parameters, and so on.

This sequence creates a Chain-of-Trust which is needed and used by firmware OTA updates. When a new firmware request is made, the application must decrypt the image and verify that the authenticity and integrity of the new firmware are intact. That new firmware can then only be used if the Chain-of-Trust can successfully make its way through each link in the chain. The bottom line: a developer and the end user know that when the system boots successfully, the new firmware is legitimate.

Conclusions

OTA updates are a critical infrastructure component of nearly every embedded IoT device. Sure, there are systems out there that, once deployed, will never be updated; however, those are probably a small percentage of systems. OTA updates are the go-to mechanism for updating firmware in the field. We’ve examined several best practices that developers and companies should consider when they start to design their connected systems. In fact, the bonus best practice for today is that if you are building a connected device, make sure you explore your OTA update solution sooner rather than later. Otherwise, you may find that building the Chain-of-Trust necessary in today’s deployments will be far more expensive and time-consuming to implement.

Originally posted here.

Read more…

Wi-Fi, NB-IoT, Bluetooth, LoRaWAN… This webinar will help you to choose the appropriate connectivity protocol for your IoT application.

Connectivity is cool! The cornucopia of connectivity choices available to us today would make engineers gasp in awe and disbelief just a few short decades ago.

I was just pondering this point and – as usual – random thoughts started to bounce around my poor old noggin. Take the topic of interoperability, for example (for the purposes of these discussions, we will take “interoperability” to mean “the ability of computer systems or software to exchange and make use of information”).

Don’t get me started on the subject of the Endian Wars. Instead, let’s consider the 7-bit American Standard Code for Information Interchange (ASCII) that we know and love. The currently used ASCII standard of 96 printing characters and 32 control characters was first defined in 1968. For machines that supported ASCII, this greatly facilitated their ability to exchange information.

For reasons of their own, the folks at IBM decided to go their own way by developing a proprietary 8-bit code called the Extended Binary Coded Decimal Interchange Code (EBCDIC). This code was first used on the IBM 360 computer, which was presented to the market in 1964. Just for giggles and grins, IBM eventually introduced 57 different variants of EBCDIC targeted at different countries (a “standard” that came in 57 different flavors!). This obviously didn’t help IBM machines in different countries make use of each other’s files. Even worse, different types of IBM computers found it difficult to talk to each other, let alone to machines from other manufacturers.

There’s an old joke that goes, “Standards are great – everyone should have one.” The problem is that almost everybody did. Sometime around late-1980 or early 1981, for example, I was working at International Computers (ICL) in Manchester, England. I recall being invited to what I was told was going to be a milestone event. This turned out to be a demonstration in which a mainframe computer was connected to a much smaller computer (akin to one of the first PCs) via a proprietary wired network. With great flourish and fanfare, the presenter created and saved a simple ASCII text file on the mainframe, then – to the amazement of all present – opened and edited the same file on the small computer.

This may sound like no big deal to the young folks of today, but it was an event of such significance at that time that journalists from the national papers came up on the train from London to witness this august occasion with their own eyes so that they could report back to the unwashed masses.

Now, of course, we have a wide variety of wired standards, from simple (short range) protocols like I2C and SPI, to sophisticated (longer range) offerings like Ethernet. And, of course, we have a cornucopia of wireless standards like Wi-Fi, NB-IoT, Bluetooth, and LoRaWAN. In some respects, this is almost an embarrassment of riches … there are so many options … how can we be expected to choose the most appropriate connectivity protocol for our IoT applications?

Well, I’m glad you asked, because I will be hosting a one-hour webinar on this very topic on Tuesday 28 September 2021, starting at 8:00 a.m. Pacific Time (11:00 a.m. Eastern Time).

Presented by IoT Central and sponsored by ARM, yours truly will be joined in this webinar by Samuele Falconer (Principal Product Manager at u-blox), Omer Cheema (Head of the Wi-Fi Business Unit at Renesas Semiconductor), Wienke Giezeman (Co-Founder and CEO at The Things Industries), and Thomas Cuyckens (System Architect at Qorvo).

If you are at all interested in connectivity for your cunning IoT creations, then may I make so bold as to suggest you Register Now before all of the good virtual seats are taken. I’m so enthused by this event that I’m prepared to pledge on my honor that – if you fail to learn something new – I will be very surprised (I was going to say that I would return the price of your admission but, since this event is free, that would have been a tad pointless).

So, what say you? Can I dare to hope to see you there? Register Now

Read more…

4 key questions to ask tech vendors

Posted by Terri Hiskey

Without mindful and strategic investments, a company’s supply chain could become wedged in its own proverbial Suez Canal, ground to a halt by outside forces and its inflexible, complex systems.

 

It’s a dramatic image, but one that became reality for many companies in the last year. Supply chain failures aren’t typically such high-profile events as the Suez Canal blockage, but rather death by a thousand inefficiencies, each slowing business operations and affecting the customer experience.

Delay by delay and spreadsheet by spreadsheet, companies are at risk of falling behind more nimble, cloud-enabled competitors. And as we emerge from the pandemic with a new understanding of how important adaptable, integrated supply chains are, company leaders have critical choices to make.

The Hannover Messe conference (held online from April 12-16) gives manufacturing and supply chain executives around the world a chance to hear perspectives from industry leaders and explore the latest manufacturing and supply chain technologies available.

Technology holds great promise. But if executives don’t ask key strategic questions to supply chain software vendors, they could unknowingly introduce a range of operational and strategic obstacles into their company’s future.

If you’re attending Hannover Messe, here are a few critical questions to ask:

Are advanced technologies like machine learning, IoT, and blockchain integrated into your supply chain applications and business processes, or are they addressed separately?

It’s important to go beyond the marketing. Is the vendor actually promoting pilots of advanced technologies that are simply customized use cases for small parts of an overall business process hosted on a separate platform? If so, it may be up to your company to figure out how to integrate it with the rest of that vendor’s applications and to maintain those integrations.

To avoid this situation, seek solutions that have been purpose-built to leverage advanced technologies across use cases that address the problems you hope to solve. It’s also critical that these solutions come with built-in connections to ensure easy integration across your enterprise and to third party applications.

Are your applications or solutions written specifically for the cloud?

If a vendor’s solution for a key process (like integrated business planning or plan to produce, for example) includes applications developed over time by a range of internal development teams, partners, and acquired companies, what you’re likely to end up with is a range of disjointed applications and processes with varying user interfaces and no common data model. Look for a cloud solution that helps connect and streamline your business processes seamlessly.

Update schedules for the various applications could also be disjointed and complicated, so customers can be tempted to skip updates. But some upgrades may be forced, causing disruption in key areas of your business at various times.

And if some of the applications in the solution were written for the on-premises world, business processes will likely need customization, making them hard-wired and inflexible. The convenience of cloud solutions is that they can take frequent updates more easily, resulting in greater value driven by the latest innovations.

Are your supply chain applications fully integrated—and can they be integrated with other key applications like ERP or CX?

A lack of integration between and among applications within the supply chain and beyond means that end users don’t have visibility into the company’s operations—and that directly affects the quality and speed of business decisions. When market disruptions or new opportunities occur, unintegrated systems make it harder to shift operations—or even come to an agreement on what shift should happen.

And because many key business processes span multiple areas—like manufacturing forecast to plan, order to cash, and procure to pay—integration also increases efficiency. If applications are not integrated across these entire processes, business users resort to pulling data from the various systems and then often spend time debating whose data is right.

Of course, all of these issues increase operational costs and make it harder for a company to adapt to change. They also keep the IT department busy with maintenance tasks rather than focusing on more strategic projects.

Do you rely heavily on partners to deliver functionality in your supply chain solutions?

Ask for clarity on which products within the solution belong to the vendor and which were developed by partners. Is there a single SLA for the entire solution? Will the two organizations’ development teams work together on a roadmap that aligns the technologies? Will their priority be on making a better solution together or on enhancements to their own technology? Will they focus on enabling data to flow easily across the supply chain solution, as well as to other systems like ERP? Will they be able to overcome technical issues that arise and streamline customer support?

It’s critical for supply chain decision-makers to gain insight into these crucial questions. If the vendor is unable to meet these foundational needs, the customer will face constant obstacles in their supply chain operations.

Originally posted here.

Read more…

By Ricardo Buranello

What Is the Concept of a Virtual Factory?

For a decade, the first Friday in October has been designated as National Manufacturing Day. This day begins a month-long events schedule at manufacturing companies nationwide to attract talent to modern manufacturing careers.

For some period, manufacturing went out of fashion. Young tech talents preferred software and financial services career opportunities. This preference has changed in recent years. The advent of digital technologies and robotization brought some glamour back.

The connected factory is democratizing another innovation — the virtual factory. Without critical asset connection at the IoT edge, the virtual factory couldn’t have been realized by anything other than brand-new factories and technology implementations.

There are technologies that enable decades-old assets to communicate. Such technologies allow us to join machine data with physical environment and operational conditions data. Benefits of virtual factory technologies like digital twin are within reach for greenfield and legacy implementations.

Digital twin technologies can be used for predictive maintenance and scenario planning analysis. At its core, the digital twin is about access to real-time operational data to predict and manage the asset’s life cycle. It leverages relevant life cycle management information inside and outside the factory. The possibilities of bringing various data types together for advanced analysis are promising.

I used to see a distinction between IoT-enabled greenfield technology in new factories and legacy technology in older ones. Data flowed seamlessly from IoT-enabled machines to enterprise systems or the cloud for advanced analytics in new factories’ connected assets. In older factories, while data wanted to move to the enterprise systems or the cloud, it hit countless walls. Innovative factories were creating IoT technologies in proof of concepts (POCs) on legacy equipment, but this wasn’t the norm.

No matter the age of the factory or equipment, everything looks alike. When manufacturing companies invest in machines, the expectation is this asset will be used for a decade or more. We had to invent something inclusive to new and legacy machines and systems.

We had to create something to allow decades-old equipment from diverse brands and types (PLCs, CNCs, robots, etc.) to communicate with one another. We had to think in terms of how to make legacy machines talk to legacy systems. Connecting was not enough. We had to make it accessible to experienced developers and technicians not specialized in systems integration.

If plant managers and leaders have clear and consumable data, they can use it for analysis and measurement. Surfacing and routing data has enabled innovative use cases in processes controlled by aged equipment. Prescriptive and predictive maintenance reduce downtime and allow access to data. This access enables remote operation and improved safety on the plant floor. Each line flows better, improving supply chain orchestration and worker productivity.

Open protocols aren’t optimized for connecting to each machine. You need tools and optimized drivers to connect to the machines, cut latency time and get the data to where it needs to be in the appropriate format to save costs. These tools include:

  • Machine data collection
  • Data transformation and visualization
  • Device management
  • Edge logic
  • Embedded security
  • Enterprise integration

This digital copy of the entire factory floor brings more promise for improving productivity, quality, and throughput, reducing downtime, and providing access to more data and visibility. It enables factories to make small changes in the way machines and processes operate to achieve improvements.

Plants are trying to get and use data to improve overall equipment effectiveness (OEE). OEE applications can calculate how many good and bad parts were produced compared to the machine’s capacity. This analysis can go much deeper: factories can visualize how the machine works down to its sub-processes, synchronize each movement to the millisecond, and change timing to increase operational efficiency.
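
The basic arithmetic is straightforward. The sketch below computes OEE as availability x performance x quality for one shift, with all the input numbers invented for illustration:

```python
# Back-of-the-envelope OEE calculation: availability x performance x quality.
# All figures below are assumed example values for a single 8-hour shift.
planned_time_min = 480          # scheduled production time
downtime_min = 45               # unplanned stops
ideal_cycle_time_s = 30         # ideal seconds per part
total_parts = 800
good_parts = 770

run_time_min = planned_time_min - downtime_min
availability = run_time_min / planned_time_min
performance = (ideal_cycle_time_s * total_parts) / (run_time_min * 60)
quality = good_parts / total_parts

oee = availability * performance * quality
print(f"Availability {availability:.1%}, Performance {performance:.1%}, "
      f"Quality {quality:.1%}, OEE {oee:.1%}")
```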

The technology is here. It is mature. It’s no longer a question of whether you want to use it — you have it to get to what’s next. I think this makes it a fascinating time for smart manufacturing.

Originally posted here.

Read more…

By Jacqi Levy

The Internet of Things (IoT) is transforming every facet of our buildings – how we inhabit them, how we manage them, and even how we build them. There is a vast ecosystem around today’s buildings, and no part of that ecosystem is untouched.

In this blog series, I plan to examine the trends being driven by IoT across the buildings ecosystem. Since the lifecycle of a building begins with design and construction, let’s start there. Here are four ways that the IoT is radically transforming building design and construction.

Building information modeling

Building information modeling (BIM) is a process that provides an intelligent, 3D model of a building. Typically, BIM is used to model a building’s structure and systems during design and construction, so that changes to one set of plans can be updated simultaneously in all other impacted plans. Taken a step further, however, BIM can also become a catalyst for smart buildings projects.

Once a building is up and running, data from IoT sensors can be pulled into the BIM. You can use that data to model things like energy usage patterns, temperature trends or people movement throughout a building. The output from these models can then be analyzed to improve future buildings projects. Beyond its impact on design and construction, BIM also has important implications for the management of building operations.

Green building

The construction industry is a huge driver of landfill waste – up to 40% of all solid waste in the US comes from building projects. This unfortunate fact has ignited a wave of interest in sustainable architecture and construction. But the green building movement has become about much more than keeping building materials out of landfills. It is influencing the design and engineering of building systems themselves, allowing buildings to reduce their impact on the environment through energy management.

Today’s green buildings are being engineered to do things like shut down unnecessary systems automatically when the building is unoccupied, or open and close louvers automatically to let in optimal levels of natural light. In a previous post, I talk about 3 examples of the IoT in green buildings, but these are just some of the cool ways that the construction industry is learning to be more sustainable with help from the IoT.

Intelligent prefab

Using prefabricated building components can be faster and more cost effective than traditional building methods, and it has an added benefit of creating less construction waste. However, using prefab for large commercial buildings projects can be very complex to coordinate. The IoT is helping to solve this problem.

Using RFID sensors, individual prefab parts can be tracked throughout the supply chain. A recent example is the construction of the Leadenhall Building in London. Since the building occupies a relatively small footprint but required large prefabricated components, it was a logistically complex task to coordinate the installation. RFID data was used to help mitigate the effects of any downstream delays in construction. In addition, the data was then fed into the BIM once parts were installed, allowing for real-time rendering of the building in progress, as well as the establishment of project controls and KPIs.

Construction management

Time is money, so any delays on a construction project can be costly. So how do you prevent your critical heavy equipment from going down and backing up all the other trades on site? With the IoT!

Heavy construction equipment is being outfitted with sensors, which can be remotely monitored for key indicators of potential maintenance issues like temperature fluctuations, excessive vibrations, etc. When abnormal patterns are detected, alerts can trigger maintenance workers to intervene early, before critical equipment fails. Performing predictive maintenance in this way can save time and money, as well as prevent unnecessary delays in construction projects.
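
As a simple illustration of the pattern (not any particular vendor’s product), the sketch below flags readings that drift well outside a rolling baseline; the window size, threshold, and simulated vibration data are all assumptions:

```python
# Illustrative check an equipment-monitoring service might run: flag readings
# that drift well outside their recent baseline so a technician is alerted
# before the machine fails. Thresholds and window size are assumptions.
from collections import deque
from statistics import mean, stdev

WINDOW = 120          # recent samples kept as the baseline
SIGMA_LIMIT = 3.0     # alert when a reading is > 3 standard deviations out

history = deque(maxlen=WINDOW)

def check_reading(value: float) -> bool:
    """Return True if the new reading looks anomalous vs. the rolling baseline."""
    anomalous = False
    if len(history) >= 30:                       # need a minimal baseline first
        mu, sigma = mean(history), stdev(history)
        if sigma > 0 and abs(value - mu) > SIGMA_LIMIT * sigma:
            anomalous = True
    history.append(value)
    return anomalous

if __name__ == "__main__":
    import random
    # Simulated vibration readings (mm/s) ending with an abnormal spike.
    readings = [2.0 + random.gauss(0, 0.1) for _ in range(200)] + [6.5]
    for r in readings:
        if check_reading(r):
            print(f"ALERT: abnormal vibration {r:.2f} mm/s - schedule inspection")
```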

Originally posted here.

Read more…

By Ashley Ferguson

Thanks to the introduction of connected products, digital services, and increased customer expectations, enterprise IoT spending has consistently trended upward. The global IoT market is projected to reach $1.4 trillion USD by 2027. The pressure to build IoT solutions and get a return on those investments has teams on a frantic search for IoT engineers to secure in-house IoT expertise. However, due to the complexity of IoT solutions, finding all of that expertise in a single engineer is a difficult or impossible proposition.

So how do you adjust your search for an IoT engineer? The first step is to acknowledge that IoT solution development requires the fusion of multiple disciplines. Even simple IoT applications require hardware and software engineering, knowledge of protocols and connectivity, web development skills, and analytics. Certainly, there are many engineers with IoT knowledge, but complete IoT solutions require a team of partners with diverse skills. This often requires utilizing external sources to supplement the expertise gaps.

THE ANATOMY OF AN IoT SOLUTION

IoT solutions provide enterprises with opportunities for innovation through new product offerings and cost savings through refined operations. An IoT solution is an integrated bundle of technologies that help users answer a question or solve a specific problem by receiving data from devices connected to the internet. One of the most common IoT use cases is asset tracking solutions for enterprises who want to monitor trucks, equipment, inventory, or other items with IoT. The anatomy of an asset tracking IoT solution includes the following:

[Image: anatomy of an asset tracking IoT solution]

This is a simple asset tracking example. For more complex solutions including remote monitoring or predictive maintenance, enterprises must also consider installation, increased bandwidth, post-development support, and UX/UI for the design of the interface for customers or others who will use the solution. Enterprise IoT solutions require an ecosystem of partners, components, and tools to be brought to market successfully.

Consider the design of your desired connected solution. Do you know where you will need to augment skills and services?

If you are in the early stages of IoT concept development and at the center of a buy vs. build debate, it may be a worthwhile exercise to assess your existing team’s skills and how they correspond with the IoT solution you are trying to build.

IoT SKILLS ASSESSMENT

  • Hardware
  • Firmware
  • Connectivity
  • Programming
  • Cloud
  • Data Science
  • Presentation
  • Technical Support and Maintenance
  • Security
  • Organizational Alignment

MAKING TIME FOR IoT APPLICATION DEVELOPMENT

The time it will take your organization to build a solution is dependent on the complexity of the application. One way to estimate the time and cost of IoT application development is with Indeema’s IoT Cost Calculator. This tool can help roughly estimate the hours required and the cost associated with the IoT solution your team is interested in building. In MachNation’s independent comparison of the Losant Enterprise IoT Platform and Azure, it was determined that developers could build an IoT solution in 30 hours using Losant and in 74-94 hours using Microsoft Azure.

As you consider IoT application development, consider the makeup of your team. Is your team prepared to dedicate hours to the development of a new solution, or will it be a side project? Enterprise IT teams are often in place to maintain existing operating systems and to ensure networks are running smoothly. In the event that an IT team is tapped to even partially build an IoT solution, there is a great chance that the IT team will need to invite partners to build or provide part of the stack.

HOW THE IoT JOB GETS DONE

Successful enterprises recognize early on that some of these skills will need to be augmented through additional people, through an ecosystem, or with software. It will require more than one ‘IoT engineer’ for the job. According to the results of a McKinsey survey, “the preferences of IoT leaders suggest a greater willingness to draw capabilities from an ecosystem of technology partners, rather than rely on homegrown capabilities.”

IoT architecture alone is intricate. Losant, an IoT application enablement platform, is designed with many of the IoT-specific components already in place. Losant enables users to build applications in a low-to-no code environment and scale them up to millions of devices. Losant is one piece in the wider scope of an IoT solution. In order to build a complete solution, an enterprise needs hardware, software, connectivity, and integration. For those components, our team relies on additional partners from the IoT ecosystem.

The IoT ecosystem, also known as the IoT landscape, refers to the network of IoT suppliers (hardware, devices, software platforms, sensors, connectivity, software, systems integrators, data scientists, data analytics) whose combined services help enterprises create complete IoT solutions. At Losant, we’ve built an IoT ecosystem with reliable experienced partners. When IoT customers need custom hardware, connectivity, system integrators, dev shops, or other experts with proven IoT expertise, we can tap one of our partners to help in their areas of expertise.

SECURE, SCALABLE, SEAMLESS IoT

Creating secure, scalable, and seamless IoT solutions for your environment begins by starting small. Starting small gives your enterprise the ability to establish its ecosystem. Teams can begin with a small investment and apply learnings to subsequent projects. Many IoT success stories begin with enterprises setting out to solve one problem. The simple beginnings have enabled them to now reap the benefits of the data harvest in their environments.

Originally posted here.

Read more…

By Tony Pisani

For midstream oil and gas operators, data flow can be as important as product flow. The operator’s job is to safely move oil and natural gas from its extraction point (upstream), to where it’s converted to fuels (midstream), to customer delivery locations (downstream). During this process, pump stations, meter stations, storage sites, interconnection points, and block valves generate a substantial volume and variety of data that can lead to increased efficiency and safety.

“Just one pipeline pump station might have 6 Programmable Logic Controllers (PLCs), 12 flow computers, and 30 field instruments, and each one is a source of valuable operational information,” said Mike Walden, IT and SCADA Director for New Frontier Technologies, a Cisco IoT Design-In Partner that implements OT and IT systems for industrial applications. Until recently, data collection from pipelines was so expensive that most operators only collected the bare minimum data required to comply with industry regulations. That data included pump discharge pressure, for instance, but not pump bearing temperature, which helps predict future equipment failures.

A turnkey solution to modernize midstream operations

Now midstream operators are modernizing their pipelines with Industrial Internet of Things (IIoT) solutions. Cisco and New Frontier Technologies have teamed up to offer a solution combining the Cisco 1100 Series Industrial Integrated Services Router, Cisco Edge Intelligence, and New Frontier’s know-how. Deployed at edge locations like pump stations, the solution extracts data sent from pipeline equipment over legacy protocols and transforms it at the edge into a format that analytics and other enterprise applications understand. The transformation also minimizes bandwidth usage.

Mike Walden views the Cisco IR1101 as a game-changer for midstream operators. He shared with me that “Before the Cisco IR1101, our customers needed four separate devices to transmit edge data to a cloud server—a router at the pump station, an edge device to do protocol conversion from the old to the new, a network switch, and maybe a firewall to encrypt messages…With the Cisco IR1101, we can meet all of those requirements with one physical device.”

Collect more data, at almost no extra cost

Using this IIoT solution, midstream operators can for the first time:

  • Collect all available field data instead of just the data on a polling list. If the maintenance team requests a new type of data, the operations team can meet the request using the built-in protocol translators in Edge Intelligence. “Collecting a new type of data takes almost no extra work,” Mike said. “It makes the operations team look like heroes.”
  • Collect data more frequently, helping to spot anomalies. Recording pump discharge pressure more frequently, for example, makes it easier to detect leaks. Interest in predicting (rather than responding to) equipment failure is also growing. The life of pump seals, for example, depends on both the pressure that seals experience over their lifetime and the peak pressures. “If you only collect pump pressure every 30 minutes, you probably missed the spike,” Mike explained. “If you do see the spike and replace the seal before it fails, you can prevent a very costly unexpected outage – saving far more than the cost of a new seal.” (See the sketch after this list.)
  • Protect sensitive data with end-to-end security. Security is built into the IR1101, with secure boot, VPN, certificate-based authentication, and TLS encryption.
  • Give IT and OT their own interfaces so they don’t have to rely on the other team. The IT team has an interface to set up network templates to make sure device configuration is secure and consistent. Field engineers have their own interface to extract, transform, and deliver industrial data from Modbus, OPC-UA, EIP/CIP, or MQTT devices.
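
The pressure-spike point above can be illustrated with a small edge-side sketch: sample locally at a high rate, act immediately on a spike, and forward only a compact summary upstream. The sampling rate, threshold, and the pressure-read stub are assumptions for illustration:

```python
# Edge-side sketch: sample pump discharge pressure at a high rate locally,
# act on spikes immediately, and forward a compact summary each interval.
import time
import random

SPIKE_LIMIT_PSI = 950.0
REPORT_EVERY_S = 60
SAMPLE_EVERY_S = 0.5

def read_pressure_psi() -> float:
    # Placeholder for a real flow-computer / PLC read at the pump station.
    return 800.0 + random.gauss(0, 20)

window, window_start = [], time.time()
while True:
    p = read_pressure_psi()
    window.append(p)
    if p > SPIKE_LIMIT_PSI:
        print(f"SPIKE {p:.0f} psi - flag seal for inspection")   # immediate local action
    if time.time() - window_start >= REPORT_EVERY_S:
        summary = {"min": min(window), "max": max(window),
                   "avg": sum(window) / len(window), "n": len(window)}
        print("forward summary upstream:", summary)              # e.g. via MQTT or an edge agent
        window, window_start = [], time.time()
    time.sleep(SAMPLE_EVERY_S)
```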

As Mike summed it up, “It’s finally simple to deploy a secure industrial network that makes all field data available to enterprise applications—in less time and using less bandwidth.”

Originally posted here.

Read more…

By GE Digital

“The End of Cloud Computing.” “The Edge Will Eat The cloud.” “Edge Computing—The End of Cloud Computing as We Know It.”  

Such headlines grab attention, but don’t necessarily reflect reality—especially in Industrial Internet of Things (IoT) deployments. To be sure, edge computing is rapidly emerging as a powerful force in turning industrial machines into intelligent machines, but to paraphrase Mark Twain: “The reports of the death of cloud are greatly exaggerated.” 

The Tipping Point: Edge Computing Hits Mainstream

We’ve all heard the stats—billions and billions of IoT devices, generating inconceivable amounts of big data volumes, with trillions and trillions of U.S. dollars to be invested in IoT over the next several years. Why? Because industrials have squeezed every ounce of productivity and efficiency out of operations over the past couple of decades, and are now looking to digital strategies to improve production, performance, and profit. 

The Industrial Internet of Things (IIoT) represents a world where human intelligence and machine intelligence—what GE Digital calls minds and machines—connect to deliver new value for industrial companies. 

In this new landscape, organizations use data, advanced analytics, and machine learning to drive digital industrial transformation. This can lead to reduced maintenance costs, improved asset utilization, and new business model innovations that further monetize industrial machines and the data they create. 

Despite the “cloud is dead” headlines, GE believes the cloud is still very important in delivering on the promise of IIoT, powering compute-intense workloads to manage massive amounts of data generated by machines. However, there’s no question that edge computing is quickly becoming a critical factor in the total IIoT equation.

What is edge computing? 

The “edge” of a network generally refers to technology located adjacent to the machine you are analyzing or actuating, such as a gas turbine, a jet engine, or a magnetic resonance (MR) scanner.

Until recently, edge computing has been limited to collecting, aggregating, and forwarding data to the cloud. But what if instead of collecting data for transmission to the cloud, industrial companies could turn massive amounts of data into actionable intelligence, available right at the edge? Now they can. 

This is not just valuable to industrial organizations, but absolutely essential.

Edge computing vs. Cloud computing 

Cloud and edge are not at war … it’s not an either/or scenario. Think of your two hands. You go about your day using one or the other or both depending on the task. The same is true in Industrial Internet workloads. If the left hand is edge computing and the right hand is cloud computing, there will be times when the left hand is dominant for a given task, instances where the right hand is dominant, and some cases where both hands are needed together. 

Edge computing takes the leading position in scenarios involving low latency, constrained bandwidth, real-time or near-real-time actuation, and intermittent or no connectivity. Cloud plays the more prominent role for compute-heavy tasks such as machine learning, digital twins, and cross-plant control.

The point is you need both options working in tandem to provide design choices across edge to cloud that best meet business and operational goals.
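
As a rough illustration of that “two hands” split, here is a minimal sketch of how such placement rules might be encoded. The workload fields and the 100 ms latency cut-off are illustrative assumptions, not part of Predix or any other product API.

```python
# Sketch of a workload-placement rule of thumb following the "two hands"
# analogy. Field names and the 100 ms latency cut-off are illustrative
# assumptions, not part of any real platform API.
from dataclasses import dataclass

@dataclass
class Workload:
    name: str
    max_latency_ms: int          # tightest tolerable response time
    needs_local_actuation: bool  # must drive the machine directly
    reliable_connectivity: bool  # can we count on a link to the cloud?
    compute_heavy: bool          # model training, digital twins, cross-plant analysis

def place(w: Workload) -> str:
    """Return 'edge', 'cloud', or 'both' based on the scenarios described above."""
    edge_bound = (w.needs_local_actuation
                  or w.max_latency_ms < 100
                  or not w.reliable_connectivity)
    if edge_bound and w.compute_heavy:
        return "both"            # act locally, train and analyze in the cloud
    if edge_bound:
        return "edge"
    return "cloud" if w.compute_heavy else "both"

print(place(Workload("turbine valve control", 20, True, False, False)))    # edge
print(place(Workload("fleet cost analytics", 60_000, False, True, True)))  # cloud
```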

Edge Computing and Cloud Computing: Balance in Action 

Let’s look at a couple of illustrations. In an industrial context, examples of intelligent edge machines abound—pumps, motors, sensors, blowout preventers and more benefit from the growing capabilities of edge computing for real-time analytics and actuation. 

Take locomotives. These modern 200-ton digital machines carry more than 200 sensors and can process one billion instructions per second. Today, on-board applications can not only collect data locally and respond to changes in that data, but also perform meaningful localized analytics. GE Transportation’s Evolution Series Tier 4 Locomotive uses on-board edge computing to analyze data and apply algorithms that help it run smarter and more efficiently. This reduces operating costs and improves safety and uptime.

Sending all that data created by the locomotive to the cloud for processing, analyzing, and actuation isn’t useful, practical, or cost-effective. 
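
A minimal sketch of what that kind of on-board analytics can look like: a rolling-window check over a local sensor stream that triggers an immediate local action instead of a round trip to the cloud. The sensor, window size, and temperature limit are illustrative assumptions, not details of the Evolution Series locomotive.

```python
# Sketch of on-board (edge) analytics in the spirit of the locomotive example:
# a rolling-window average over a local sensor stream triggers an immediate
# local action rather than a round trip to the cloud. The sensor, window size,
# and temperature limit are illustrative assumptions.
from collections import deque

class RollingMonitor:
    def __init__(self, window=10, limit_c=100.0):
        self.samples = deque(maxlen=window)   # only the most recent readings
        self.limit_c = limit_c                # assumed traction-motor temperature limit

    def update(self, temperature_c):
        """Ingest one reading; act locally if the rolling average exceeds the limit."""
        self.samples.append(temperature_c)
        average = sum(self.samples) / len(self.samples)
        if average > self.limit_c:
            self.derate()
        return average

    def derate(self):
        # Placeholder for a local control action taken at the edge.
        print("Edge action: reducing traction power to protect the motor")

monitor = RollingMonitor()
for reading in [92, 95, 99, 103, 108, 112, 115, 117, 118, 120]:
    monitor.update(reading)
```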

Now let’s switch gears (pun intended) and talk about another mode of transportation—trucking. Here’s an example where edge plays an important yet minor role, while cloud assumes the more dominant position. In this example, the company has 1,000 trucks under management. Sensors on each truck track the performance of vehicle systems such as the engine, transmission, electrical system, and battery.

But in this case, instead of real-time analytics and actuation on the machine (as in our locomotive example), the data is ingested, stored, and forwarded to the cloud, where time-series data and analytics are used to track the performance of vehicle components. The fleet operator then uses a fleet management solution for scheduled maintenance and cost analysis, gaining insights such as the cost over time per part type or the median cost over time. The company can use this data to improve the uptime of its vehicles, lower repair costs, and make its vehicles safer to operate.
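
Below is a minimal store-and-forward sketch of the pattern just described, assuming a generic HTTP time-series ingestion endpoint. The URL, payload shape, and buffering scheme are hypothetical; a real deployment would use the ingestion API of its chosen fleet-management or time-series platform.

```python
# Store-and-forward sketch for the trucking scenario: readings are buffered on
# the vehicle gateway and uploaded to a cloud time-series endpoint in batches
# when connectivity allows. The endpoint URL and payload shape are hypothetical.
import json
import time
import urllib.request

BUFFER = []                                            # local store; a file or embedded DB in practice
INGEST_URL = "https://example.com/timeseries/ingest"   # hypothetical cloud endpoint

def record(truck_id, sensor, value):
    """Append one timestamped reading to the local buffer."""
    BUFFER.append({"truck": truck_id, "sensor": sensor,
                   "value": value, "ts": time.time()})

def forward_batch():
    """Try to upload everything buffered; keep the data if the upload fails."""
    if not BUFFER:
        return
    request = urllib.request.Request(
        INGEST_URL,
        data=json.dumps(BUFFER).encode(),
        headers={"Content-Type": "application/json"},
    )
    try:
        with urllib.request.urlopen(request, timeout=10):
            BUFFER.clear()                             # drop data only after a successful upload
    except OSError:
        pass                                           # offline or rejected: retry on the next cycle

record("truck-042", "battery_voltage", 12.4)
record("truck-042", "engine_temp_c", 88.0)
forward_batch()
```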

What’s next in edge computing 

While edge computing isn’t a new concept, innovation is now beginning to deliver on the promise—unlocking untapped value from the data being created by machines. 

GE has been at the forefront of bridging minds and machines. Predix Platform supports a consistent execution environment across cloud and edge devices, helping industrials achieve new levels of performance, production, and profit.

Originally posted here.

Read more…