


Written by: Mirko Grabel

Edge computing brings a number of benefits to the Internet of Things: reduced latency, improved resiliency and availability, lower costs, and local data storage (to assist with regulatory compliance), to name a few. In my last blog post I examined some of these benefits as a means of defining exactly where the edge is. Now let’s take a closer look at how edge computing benefits play out in real-world IoT use cases.

Benefit No. 1: Reduced latency

Many applications have strict latency requirements, but when it comes to safety and security applications, latency can be a matter of life or death. Consider, for example, an autonomous vehicle applying brakes or roadside signs warning drivers of upcoming hazards. By the time data is sent to the cloud and analyzed, and a response is returned to the car or sign, lives can be endangered. But let’s crunch some numbers just for fun.

Say a Department of Transportation in Florida is considering a cloud service to host the apps for its roadside signs. One of the vendors on the DoT’s shortlist runs its cloud in California. The DoT’s latency requirement is less than 15 ms. The speed of light in fiber works out to about 5 μs/km, and the distance from the U.S. east coast to the west coast is about 5,000 km. Do the math: 5,000 km × 5 μs/km is 25 ms each way, for a round-trip latency of 50 ms. It’s pure physics. If the DoT requires a real-time response, it must move the compute closer to the devices.
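The arithmetic above can be checked in a few lines of Python, using the rough figures quoted in the text:

```python
# Back-of-the-envelope latency check using the figures quoted above.
SPEED_IN_FIBER_US_PER_KM = 5      # light in fiber: ~5 microseconds per km
DISTANCE_KM = 5_000               # rough US east-coast-to-west-coast distance
LATENCY_BUDGET_MS = 15            # the DoT's requirement

one_way_ms = DISTANCE_KM * SPEED_IN_FIBER_US_PER_KM / 1_000
round_trip_ms = 2 * one_way_ms    # 25 ms each way -> 50 ms round trip

print(round_trip_ms)              # 50.0 -- more than 3x the 15 ms budget
```

No amount of server-side optimization can recover that 35 ms; only moving the compute closer can.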

Benefit No. 2: Improved resiliency/availability

Critical infrastructure requires the highest level of availability and resiliency to ensure safety and continuity of services. Consider a refinery gas leakage detection system. It must be able to operate without Internet access. If the system goes offline and there’s a leakage, that’s an issue. Compute must be done at the edge. In this case, the edge may be on the system itself.

While it’s not a life-threatening use case, retail operations can also benefit from the availability provided by edge compute. Retailers want their Point of Sale (PoS) systems to be available 100% of the time to service customers. But some retail stores are in remote locations with unreliable WAN connections. Moving the PoS systems onto their edge compute enables retailers to maintain high availability.

Benefit No. 3: Reduced costs

Bandwidth may be almost infinite, but it comes at a cost. Edge computing allows organizations to reduce bandwidth costs by processing data before it crosses the WAN. This benefit applies to almost any use case, but here are two examples where it is especially evident: video surveillance and preventive maintenance. A single city-deployed HD video camera may generate 1,296 GB a month. Streaming that data over LTE easily becomes cost-prohibitive. Adding edge compute to pre-aggregate the data significantly reduces those costs.
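As a rough sketch of the savings, here is the camera example in numbers. The per-gigabyte LTE price and the 1% reduction factor are illustrative assumptions, not figures from the text:

```python
# Illustrative monthly cost of backhauling one HD camera over LTE,
# with and without edge pre-aggregation. Prices are assumptions.
GB_PER_MONTH = 1296            # one city-deployed HD camera, as cited above
LTE_PRICE_PER_GB = 8.0         # hypothetical LTE data price, USD/GB
EDGE_REDUCTION = 0.01          # assume edge analytics keeps ~1% (metadata, key clips)

raw_cost = GB_PER_MONTH * LTE_PRICE_PER_GB
edge_cost = raw_cost * EDGE_REDUCTION
print(round(raw_cost), round(edge_cost))   # 10368 104
```

Even if the assumed price is off by an order of magnitude, the ratio between the two bills stays the same.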

Manufacturers use edge computing for preventive maintenance of remote machinery. Sensors monitor temperatures and vibrations, and the freshness of this data is critical, as the slightest variation can indicate a problem. To catch issues as early as possible, the application requires high-resolution data (for example, 1,000 samples per second). Rather than sending all of this data over the Internet to be analyzed, edge compute filters it so that only averages, anomalies, and threshold violations are sent to the cloud.
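A minimal sketch of that edge-side filter might look as follows; the threshold values and field names are hypothetical:

```python
# Reduce one window of high-resolution sensor samples to the summary the
# cloud actually needs: the average, threshold violations, and anomalies.
from statistics import mean, stdev

VIBRATION_LIMIT = 4.0    # hypothetical alarm threshold
SIGMA_LIMIT = 3.0        # flag readings more than 3 standard deviations out

def summarize_window(samples):
    avg, sd = mean(samples), stdev(samples)
    return {
        "avg": avg,
        "violations": [s for s in samples if s > VIBRATION_LIMIT],
        "anomalies": [s for s in samples if abs(s - avg) > SIGMA_LIMIT * sd],
    }

# One second of (simulated) readings containing a single spike:
report = summarize_window([1.0, 1.1, 0.9, 1.0, 5.2])
# Only `report` crosses the WAN, not the 1,000 raw samples per second.
```

The cloud still sees every alarm-worthy event, but the WAN carries a few hundred bytes per window instead of the full stream.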

Benefit No. 4: Comply with government regulations

Countries are increasingly instituting privacy and data retention laws. The European Union’s General Data Protection Regulation (GDPR) is a prime example. Any organization that holds data belonging to an EU citizen is required to meet the GDPR’s requirements, which include an obligation to report leaks of personal data. Edge computing can help these organizations comply with the GDPR. For example, instead of storing and backhauling surveillance video, a smart city can evaluate the footage at the edge and backhaul only the metadata.

Canada’s Water Act: National Hydrometric Program is another edge computing use case that delivers regulatory compliance benefits. As part of the program, about 3,000 measurement stations have been deployed nationwide, and any missing data requires justification. Storing data at the edge ensures retention even when connectivity fails.

Bonus Benefit: “Because I want to…”

Finally, some users simply prefer to have full control. By implementing compute at the edge rather than in the cloud, users gain greater flexibility. We have seen this in manufacturing: technicians want full control over the machinery, and edge computing gives them this control as well as independence from IT. The technicians know the machinery best, and security and availability remain top of mind.

Summary

By reducing latency and costs, improving resiliency and availability, and keeping data local, edge computing opens up a new world of IoT use cases. Those described here are just the beginning. It will be exciting to see where we see edge computing turn up next. 

Originally posted: here

Read more…

To form proper networks that share data, the Internet of Things (IoT) needs reliable communications and connectivity. To meet demand, there is a wide range of connectivity technologies that operators, as well as developers, can choose from.

IoT Connectivity Groups

IoT connectivity technologies currently fall into two groups. The first is cellular-based, and the second is unlicensed LPWAN. The first group is built on licensed spectrum, which offers a more consistent and higher-quality infrastructure. It supports higher data rates, but at the cost of shorter battery life and more expensive hardware. However, this is becoming less of a concern, as the hardware is getting cheaper.

Cellular-Based IoT

Because acquiring licensed spectrum is expensive, cellular-based IoT is offered only by large operators, who have access to the licensed spectrum as well as the costly hardware. Cellular IoT connectivity itself comes in two types: narrowband IoT (NB-IoT) and Category M1 IoT (Cat-M1).

Although both are based on cellular standards, there is one big difference between the two: NB-IoT has a smaller bandwidth than Cat-M1 (about 10x smaller) and thus transmits at lower power. Both still offer very long range, with NB-IoT reaching up to 100 km.

Cellular-standard IoT connectivity is more reliable, and device operational lifetimes are longer than with unlicensed LPWAN. When it comes to choosing between the two cellular types, most operators prefer NB-IoT over Cat-M1, because Cat-M1 provides higher data rates that are not usually necessary, and its higher costs deter operators from choosing it.

Cat-M1 is mostly chosen by large-scale operators because it provides mobility support, which suits transportation and traffic-control networks. It can also be useful in emergency-response situations, as it offers voice data transfer.

The hardware (module) used for cellular IoT is relatively expensive compared to LPWAN: around $10, versus roughly $2 for an LPWAN module. However, this cost has been dropping rapidly because of high demand.

Unlicensed LPWAN

As for the unlicensed LPWANs, they are used by those who cannot justify the cost of cellular-based IoT. They are designed for customized IoT networks and offer lower data rates, but with longer battery life and long transmission range, and they can be deployed easily. At the moment, there are two main types of unlicensed LPWAN: LoRa (Long Range) and SigFox.

Both types are designed for devices with a lower price, longer battery life, and long range. Their coverage range can be up to 10 km, and their connectivity cost is as low as $2 per module, sometimes even lower. This makes them ideal for local areas.

Weightless LPWAN

Although there are many LPWAN variants, Weightless is considered one of the most popular. The Weightless Special Interest Group (SIG) currently offers three different protocols: Weightless-N, Weightless-W, and Weightless-P. The three work differently, each with its own modality.

Weightless-W

First off, we have the Weightless-W open standard, designed to operate in TV white space (TVWS): the inactive or unoccupied spectrum between actively used channels in the UHF and VHF bands, spanning roughly 470 MHz to 790 MHz. For those who don’t know, this is similar to what Neul was developing before it was acquired by Huawei. Using TVWS is attractive because it taps ultra-high-frequency spectrum, but it has one downside: in theory it seems perfect, yet in practice it is difficult, because the rules and regulations for using TVWS for IoT vary greatly.

In addition, the end nodes of this model do not work as intended. They are designed to operate in a small part of the spectrum, and it is difficult to design an antenna that can cover such a wide band. This is why TVWS can be difficult to deploy. Weightless-W is considered a good option in:

  • The smart oil and gas sector.

Weightless-N

Second up, we have the ultra-narrowband system, Weightless-N. This model is similar to SigFox, and the two have a lot in common. Its best quality is that it is made up of different networks rather than being an end-to-end closed system. Weightless-N uses the same differential binary phase-shift keying (DBPSK) digital modulation scheme as SigFox.
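For illustration, here is a toy sketch of the differential encoding idea behind DBPSK in Python. The ±1 values stand in for the two carrier phases; a real modem operates on RF waveforms, so this shows only the bit-level logic:

```python
# Toy DBPSK (differential BPSK) encoder/decoder. Information is carried
# in the *change* of phase between successive symbols, so the receiver
# needs no absolute phase reference.

def dbpsk_encode(bits):
    symbols = []
    state = 0                      # implicit reference symbol
    for b in bits:
        state ^= b                 # a 1-bit flips the phase, a 0-bit keeps it
        symbols.append(1 if state == 0 else -1)
    return symbols

def dbpsk_decode(symbols):
    bits = []
    prev = 1                       # phase of the implicit reference symbol
    for s in symbols:
        bits.append(0 if s == prev else 1)
        prev = s
    return bits

payload = [1, 0, 1, 1, 0]
assert dbpsk_decode(dbpsk_encode(payload)) == payload
```

Because only phase transitions matter, a constant phase offset on the channel leaves the decoded bits unchanged, which is what makes the scheme attractive for cheap ultra-narrowband radios.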

The Weightless-N line is operated by Nwave, a popular IoT hardware and software developer. However, while this model is well suited to sensor-based networks, temperature readings, tank-level monitoring, and more, it has some problems. For instance, Nwave hardware requires a TCXO, a temperature-compensated crystal oscillator.

In addition, it has an unbalanced link budget. This is a drawback because there is much more sensitivity going up to the base station than coming back down.

Weightless-P

Finally, we have the Weightless-P, the latest model of the group, launched some time after the other two. What people love most about it is its two-way capability, along with its 12.5 kHz channel. Weightless-P does not require a TCXO, which sets it apart from Weightless-N and -W.

The main company behind Weightless-P is Ubiik. The only downside of this model is that it is not ideal for wide-area networks, as it offers a range of around 2 km. However, Weightless-P is still ideal for:

  • Private networks.
  • More sophisticated use cases.
  • Areas where both uplink data and downlink control are important.

Capacity

Because the Weightless protocols are based on software-defined radio (SDR), the base station for narrowband signals is much more complex: it ends up creating thousands of small binary phase-shift keying channels. Although this yields more capacity, it also becomes a burden on your wallet.

In addition, since the Weightless-N end nodes require a TCXO, they are more expensive. The TCXO guards against the frequency becoming unstable when the temperature changes at the end node.

Range

As for ranges, Weightless-N and -W have a range of around 5 km in urban environments, while Weightless-P can go up to 2 km.

Comparison

Weightless and SigFox

If we take the technology into consideration, Weightless-N and SigFox are quite similar. They differ, however, in their go-to-market approach. Because Weightless is a standard, it requires another company to build an IoT network on top of it, whereas SigFox is a different type of solution, offered end-to-end by a single company.

Weightless and LoRa

In terms of technology, Weightless and LoRa/LoRaWAN are different. However, the functionality of Weightless-N and LoRaWAN is similar, because both are uplink-centric systems. Weightless is also sometimes considered a very good alternative when LoRa is not feasible for the user.

Weightless and Symphony Link

The Symphony Link and Weightless-P standards are more similar to each other; for instance, both focus on private networks. However, Symphony Link has much better range performance because it uses LoRa modulation rather than minimum-shift keying (MSK).

Originally posted here

Read more…

Arm DevSummit 2020 debuted this week (October 6 – 8) as an online virtual conference focused on engineers and providing them with insights into the Arm ecosystem. The summit lasted three days over which Arm painted an interesting technology story about the current and future state of computing and where developers fit within that story. I’ve been attending Arm Techcon for more than half a decade now (which has become Arm DevSummit) and as I perused content, there were several take-a-ways I noticed for developers working on microcontroller based embedded systems. In this post, we will examine these key take-a-ways and I’ll point you to some of the sessions that I also think may pique your interest.

(For those of you who aren’t yet aware, you can register (for free) until October 21st and still watch the conference materials until November 28th. Click here to register)

Take-A-Way #1 – Expect Big Things from NVIDIA’s Acquisition of Arm

As many readers probably already know, NVIDIA is in the process of acquiring Arm. This acquisition has the potential to be one of the focal points that I think will lead to a technological revolution in computing technologies, particularly around artificial intelligence but that will also impact nearly every embedded system at the edge and beyond. While many of us have probably wondered what plans NVIDIA CEO Jensen Huang may have for Arm, the Keynotes for October 6th include a fireside chat between Jensen Huang and Arm CEO Simon Segars. Listening to this conversation is well worth the time and will help give developers some insights into the future but also assurances that the Arm business model will not be dramatically upended.

Take-A-Way #2 – Machine Learning for MCUs is Accelerating

It is sometimes difficult at a conference to get a feel for what is real and what is a little more smoke and mirrors. Sometimes, announcements are real, but they just take several years to filter their way into the market and affect how developers build systems. Machine learning is one of those technologies that I find there is a lot of interest around but that developers also aren’t quite sure what to do with yet, at least in the microcontroller space. When we hear machine learning, we think artificial intelligence, big datasets and more processing power than will fit on an MCU.

There were several interesting talks at DevSummit around machine learning such as:

Some of these were foundational, providing embedded developers with the fundamentals to get started while others provided hands-on explorations of machine learning with development boards. The take-a-way that I gather here is that the effort to bring machine learning capabilities to microcontrollers so that they can be leveraged in industry use cases is accelerating. Lots of effort is being placed in ML algorithms, tools, frameworks and even the hardware. There were several talks that mentioned Arm’s Cortex-M55 architecture that will include Helium technology to help accelerate machine learning and DSP processing capabilities.

Take-A-Way #3 – The Constant Need for Reinvention

In my last take-a-way, I alluded to the fact that things are accelerating. Acceleration is not just happening in the technologies that we use to build systems, though. The very application domains that we can apply these technologies to are dramatically expanding. Not only can we start to deploy security and ML technologies at the edge, but also in domains such as space and medical systems. There were several interesting talks about how technologies are being used around the world to solve interesting and unique problems such as protecting vulnerable ecosystems, mapping the sea floor, fighting against diseases and so much more.

By carefully watching and listening, you’ll notice that many speakers have been involved in many different types of products over their careers and that they are constantly having to reinvent their skill sets, capabilities and even their interests! This is what makes working in embedded systems so interesting! It is constantly changing and evolving and as engineers we don’t get to sit idly behind a desk. Just as Arm, NVIDIA and many of the other ecosystem partners and speakers show us, technology is rapidly changing but so are the problem domains that we can apply these technologies to.

Take-A-Way #4 – Mbed and Keil are Evolving

There are also interesting changes coming to the Arm toolchains and tools like Mbed and Keil MDK. In Reinhard Keil’s talk, “Introduction to an Open Approach for Low-Power IoT Development”, developers got an insight into the changes coming to Mbed and Keil, with the core focus being on IoT development. The talk focused on the endpoint and discussed how Mbed and Keil MDK are being moved to an online platform designed to help developers move through product development faster, from prototyping to production. Keil Studio Online is currently in early access and will be released early next year.

(If you are interested in endpoints and AI, you might also want to check out this article on “How Do We Accelerate Endpoint AI Innovation? Put Developers First”)

Conclusions

Arm DevSummit had a lot to offer developers this year and without the need to travel to California to participate. (Although I greatly missed catching up with friends and colleagues in person). If you haven’t already, I would recommend checking out the DevSummit and watching a few of the talks I mentioned. There certainly were a lot more talks and I’m still in the process of sifting through everything. Hopefully there will be a few sessions that will inspire you and give you a feel for where the industry is headed and how you will need to pivot your own skills in the coming years.

Originally posted here

Read more…

SSE Airtricity employees Derek Conty, left, Francie Byrne, middle, and Ryan Doran, right, install solar panels on the roof of Kinsale Community School in Kinsale, Ireland. The installation is part of a project with Microsoft to demonstrate the feasibility of distributed power purchase agreements. Credit: Naoise Culhane

by John Roach

Solar panels being installed on the roofs of dozens of schools throughout Dublin, Ireland, reflect a novel front in the fight against global climate change, according to a senior software engineer and a sustainability lead at Microsoft.

The technology company partnered with SSE Airtricity, Ireland's largest provider of 100% green energy and part of the FTSE-listed SSE Group, to install and manage the internet-connected solar panels, which are connected via Azure IoT to Microsoft Azure, a cloud computing platform.

The software tools aggregate and analyze real-time data on energy generated by the solar panels, demonstrating a mechanism for Microsoft and other corporations to achieve sustainability goals and reduce the carbon footprint of the electric power grid.

"We need to decarbonize the global economy to avoid catastrophic climate change," said Conor Kelly, the software engineer who is leading the distributed solar energy project for Microsoft Azure IoT. "The first thing we can do, and the easiest thing we can do, is focus on electricity."

Microsoft's $1.1 million contribution to the project builds on the company's ongoing investment in renewable energy technologies to offset carbon emissions from the operation of its datacenters.

A typical approach to powering datacenters with renewable energy is for companies such as Microsoft to sign so-called power purchase agreements with energy companies. The agreements provide financial guarantees needed to build industrial-scale wind and solar farms and connections to the power grid.

The new project demonstrates the feasibility of agreements to install solar panels on rooftops distributed across towns with existing grid connections and use internet of things, or IoT, technologies to aggregate the accumulated energy production for carbon offset accounting.

"It utilizes existing assets that are sitting there unmonetized, which are roofs of buildings that absorb sunlight all day," Kelly said.

New Business Model

The project is also a proof-of-concept, or blueprint, for how energy providers can adapt as the falling price of solar panels enables distributed electric power generation throughout the existing electric power grid.

Traditionally, suppliers purchase power from central power plants and industrial-scale wind and solar farms and sell it to consumers on the distribution grid. Now, energy providers like SSE Airtricity provide renewable energy solutions that allow end consumers to generate power, from sustainable sources, using the existing grid connection on their premises.

"The more forward-thinking energy providers that we are working with, like SSE Airtricity, identify this as an opportunity and an industry-changing shift in how energy will be generated and consumed," Kelly noted.

The opportunity comes in the ability to finance the installation of solar panels and batteries at homes, schools, businesses and other buildings throughout a community and leverage IoT technology to efficiently perform a range of services from energy trading to carbon offset accounting.

Kelly and his team with Azure IoT are working with SSE Airtricity to develop the tools and machine learning models necessary to unlock this opportunity.

"Instead of having utility scale solar farms located outside of cities, you could have a solar farm at the distribution level, spread across a number of locations," said Fergal Ahern, a business energy solutions manager and renewable energy expert with SSE Airtricity.

For the distributed power purchase agreement, SSE Airtricity uses Azure IoT to aggregate the generation of all the solar panels installed across 27 schools around the provinces of Leinster, Munster and Connacht and run it through a machine learning model to determine the carbon emissions that the solar panels avoid.

The schools use the electricity generated by the solar panels, which reduces their utility bills; Microsoft receives the renewable energy credits for the generated electricity, which the company applies to its carbon neutrality commitments.

The panels are expected to produce enough energy annually to power the equivalent of 68 Irish homes for a year and abate more than 2.1 million kilograms, which is equivalent to 4.6 million pounds, of carbon dioxide emissions over the 15 years of the agreement, according to Kelly.

"This is additional renewable energy that wouldn't have otherwise happened," he said. "Every little bit counts when it comes to meeting our sustainability targets and combatting climate change."

Every little bit counts

Victory Luke, a 16-year-old student at Collinstown Park Community College in Dublin, has lived by the "every little bit counts" mantra since she participated in a "Generation Green" sustainability workshop in 2019 organized by the Sustainable Energy Authority of Ireland, SSE Airtricity and Microsoft.

The workshop was part of an education program surrounding the installation of solar panels and batteries at her school along with a retrofit of the lighting system with LEDs. Digital screens show the school's energy use in real time, allowing students to see the impact of the energy efficiency upgrades.

Luke said the workshop captured her interest on climate change issues. She started reading more about sustainability and environmental conservation and agreed to share her newfound knowledge with the younger students at her school.

"I was going around and talking to them about energy efficiency, sharing tips and tricks like if you are going to boil a kettle, only boil as much water as you need, not too much," she explained.

That June, the Sustainable Energy Authority of Ireland invited her to give a speech at the Global Conference on Energy Efficiency in Dublin, which was organized by the International Energy Agency, an organization that works with governments and industry to shape sustainable energy policy.

"It kind of felt surreal because I honestly felt like I wasn't adequate enough to be speaking about these things," she said, noting that the conference attendees included government ministers, CEOs and energy experts from around the world.

At the time, she added, the global climate strike movement and its youth leaders were making international headlines, which made her advocacy at school feel even smaller. "Then I kind of realized that it is those smaller things that make the big difference," she said.

SSE Airtricity and Microsoft plan to replicate the educational program that inspired Luke and her classmates at dozens of the schools around Ireland that are participating in the project.

"When you've got solar at a school and you can physically point at the installation and a screen that monitors the power being generated, it brings sustainability into daily school life," Ahern said.

Proof of concept for policymakers

The project's education campaign extends to renewable energy policymakers, Kelly noted. He explained that renewable energy credits—a market incentive for corporations to support renewable energy projects—are currently unavailable for distributed power purchase agreements.

For this project, Microsoft will receive genuine renewable energy credits from a wind farm that SSE Airtricity also operates, he added.

"And," he said, "we are hoping to use this project as an example of what regulation should look like, to say, 'You need to award renewable energy credits to distributed generation because they would allow corporates to scale-up this type of project.'"

For her part, Luke supports steps by multinational corporations such as Microsoft to invest in renewable energy projects that address global climate change.

"It is a good thing to see," she said. "Once one person does something, other people are going to follow."

Originally posted HERE

Read more…

An edge device is the network component that is responsible for connecting a local area network to an external or wide area network, which can be accessed from anywhere. Edge devices offer several new services and improved outcomes for IoT deployments across all markets. Smart services that rely on high volumes of data and local analysis can be deployed in a wide range of environments.

An edge device provides local data to an external network. If the local and external networks use different protocols, it also translates the information, making the connection between both network boundaries. Edge devices can analyze diagnostics and populate data automatically; however, a secure connection between the field network and the cloud is necessary. In the event of a lost internet connection or a cloud outage, the edge device stores data until the connection is re-established, so no process information is lost. Local data storage is optional, and not all edge devices offer it; it depends on the application and the services to be implemented at the plant.

How does an edge device work?

An edge device has a very straightforward working principle: it communicates between two different networks, translates one protocol into another, and creates a secure connection with the cloud.

An edge device can be configured via local access or over the internet or cloud. In general, an edge device is plug-and-play: its setup is simple and does not require much time.
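The store-and-forward behaviour described above can be sketched as follows; `send_to_cloud` is a stand-in for a real uplink such as an MQTT publish, and the class and field names are illustrative:

```python
# Minimal store-and-forward sketch: buffer readings locally while the
# cloud is unreachable, then flush the backlog in order on reconnect.
from collections import deque

class EdgeBuffer:
    def __init__(self, send_to_cloud, capacity=10_000):
        self._send = send_to_cloud
        self._backlog = deque(maxlen=capacity)   # oldest data dropped if full
        self.online = True

    def push(self, reading):
        if self.online:
            try:
                self._send(reading)
                return
            except ConnectionError:
                self.online = False              # link just went down
        self._backlog.append(reading)            # keep locally until it returns

    def reconnect(self):
        self.online = True
        while self._backlog:                     # flush in original order
            self._send(self._backlog.popleft())

sent = []
buf = EdgeBuffer(sent.append)
buf.online = False                 # simulate a WAN outage
buf.push({"temp": 21.5})           # reading is buffered, not lost
buf.reconnect()                    # backlog is delivered on reconnect
```

A bounded buffer (`maxlen`) is a deliberate choice here: on a device with finite storage, dropping the oldest data is usually preferable to crashing during a long outage.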

Why should I use an edge device?

Depending on the services required in the plant, edge devices are a crucial point for collecting information and automatically creating a digital twin of your device in the cloud.

Edge devices are an essential part of IoT solutions, since they connect the information from a network to a cloud solution. They do not affect the network; they only collect data from it and never interfere with communication between the control system and the field devices. By using an edge device to collect information, the user does not need to touch the control system. Edge collection is one-way communication: nothing is written back into the network, and data is acquired with the highest possible security.

Edge device requirements

Edge devices must meet certain requirements under all conditions in order to perform in different scenarios. These may include storage, networking, latency, and more.

Low latency

Sensor data is collected in near real time by an edge server. For services like image recognition and visual monitoring, edge servers are located in very close proximity to the device, meeting low-latency requirements. Edge deployments need to ensure that these services are not lost through poor development practice or inadequate processing resources at the edge. Maintaining data quality and security at the edge while enabling low latency is a challenge that needs to be addressed.

Network independence

IoT services are indifferent to the data communication topology. The user requires data through the most effective means possible, which in many cases will be mobile networks; in some scenarios, however, Wi-Fi or local mesh networking may be the most effective mechanism for collecting data while ensuring latency requirements are met.


Data security

Users require data at the edge to be kept as secure as when it is stored and used elsewhere. These challenges must be addressed because the edge presents a larger vector and scope for attacks. Data authentication and user access control are as important at the edge as they are on the device or at the core. Additionally, the physical security of edge infrastructure needs to be considered, as it is likely to be housed in less secure environments than dedicated data centers.

Data Quality

Data quality at the edge is a key requirement for guaranteed operation in demanding environments. To maintain data quality at the edge, applications must ensure that data is authenticated, replicated as needed, and assigned to the correct classes and types of data category.

Flexibility in future enhancements

Additional sensors can be added and managed at the edge as requirements change. Sensors such as accelerometers, cameras, and GPS, can be added to equipment, with seamless integration and control at the edge.

Local storage

Local storage is essential where connectivity is unreliable: in the event of a lost internet connection or a cloud outage, the edge device stores data until the connection is re-established, so no process information is lost. Local data storage is optional, and not all edge devices offer it; it depends on the application and the services to be implemented at the plant.

Originally posted here

Read more…

Impact of IoT in Inventory

The Internet of Things (IoT) has revolutionized many industries, including inventory management. IoT is a concept where devices are interconnected via the internet. It is expected that by 2020, there will be 26 billion devices connected worldwide. These connections are important because they allow data sharing, which can then trigger actions that make life and business more efficient. Since inventory is a significant portion of a company’s assets, inventory data is vital to the accounting department for the company’s asset management and annual report.

In inventory solutions based on IoT and RFID, each individual inventory item receives an RFID tag. Each tag has a unique identification number (ID) that encodes information about the inventory item, e.g. a model, a batch number, etc. These tags are scanned by an RF reader. Upon scanning, the reader extracts the IDs and transmits them to the cloud for processing. Along with the tag’s ID, the cloud receives the location and the time of the reading. This data is used to update inventory records, allowing users to monitor the inventory from anywhere, in real time.
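To make that flow concrete, here is a minimal Python sketch of a cloud-side handler for one tag read. The function and field names (`process_read`, `last_seen`, etc.) are illustrative assumptions, not a real vendor API:

```python
from datetime import datetime, timezone

# In-memory stand-in for the cloud inventory store: tag ID -> record.
inventory = {}

def process_read(tag_id, reader_location, read_time=None):
    """Handle one RFID read: update the item's last-seen location and
    timestamp, and keep a movement history for supply-chain tracing."""
    record = inventory.setdefault(tag_id, {"history": []})
    record["location"] = reader_location
    record["last_seen"] = read_time or datetime.now(timezone.utc)
    record["history"].append((record["last_seen"], reader_location))
    return record
```

Each read overwrites the current location while appending to the history, so dashboards can show both real-time position and the item's path through the supply chain.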

Industrial IoT

The role of IoT in inventory management is to receive data and turn it into meaningful insights about inventory items’ location and status, giving users a corresponding output. For example, based on the data and the inventory management solution architecture, we can forecast the amount of raw materials needed for the upcoming production cycle. The system can also send an alert if any individual inventory item is lost.

Moreover, IoT-based inventory management solutions can be integrated with other systems, e.g. ERP, and share data with other departments.

RFID in Industrial IoT

An RFID system consists of three main components: a tag, an antenna, and a reader.

Tags: An RFID tag carries information about a specific object. It can be attached to any surface, including raw materials, finished goods, packages, etc.

RFID antennas: An RFID antenna relays the radio signals that supply power and data for the tags’ operation.

RFID readers: An RFID reader uses radio signals to read from and write to the tags. The reader receives the data stored in the tag and transmits it to the cloud.

Benefits of IoT in inventory management

The benefits of IoT in the supply chain are among the most exciting we can observe: IoT creates unparalleled transparency that increases efficiency.

Inventory tracking

The major benefit of IoT in inventory management is asset tracking: instead of using barcodes to scan and record data, items carry RFID tags that can be registered wirelessly. This makes it possible to obtain accurate data and track items from any point in the supply chain.

With RFID and IoT, managers don’t have to spend time on manual tracking and reporting on spreadsheets. Each item is tracked and the data about it is recorded automatically. Automated asset tracking and reporting save time and reduce the probability of human error.

Inventory optimization

With real-time data about the quantity and location of inventory, manufacturers can reduce the amount of inventory on hand while still meeting the needs of customers at the end of the supply chain.

Combining data about the amount of available inventory with machine learning makes it possible to forecast the required inventory, allowing manufacturers to reduce lead time.
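The forecast itself can be arbitrarily sophisticated, but even the classic reorder-point formula captures the idea. This Python sketch is a simple stand-in for the machine-learning forecast, not an actual production model:

```python
def reorder_point(daily_demand, lead_time_days, safety_stock=0):
    """Classic reorder point: expected demand over the supplier lead
    time plus a safety stock buffer. When on-hand inventory drops to
    this level, a replenishment order should be placed."""
    avg_daily = sum(daily_demand) / len(daily_demand)
    return avg_daily * lead_time_days + safety_stock
```

With real-time RFID counts feeding `daily_demand`, the reorder trigger stays current automatically instead of relying on periodic manual stocktakes.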

Remote tracking

Remote product tracking makes it easy to keep an eye on production and business. Knowing production and transit times allows you to tweak orders to suit lead times and respond to fluctuating demand. It shows which suppliers are meeting production and shipping criteria and which need monitoring to achieve the required outcome.

It gives visibility into the flow of raw materials, work-in-progress, and finished goods by providing updates about the status and location of items, so that inventory managers can see when an individual item enters or leaves a specific location.

Bottlenecks in the operations

With real-time data about location and quantity, manufacturers can reveal bottlenecks in the process and pinpoint machines with lower utilization rates. For instance, if part of the inventory tends to pile up in front of a machine, a manufacturer can infer that the machine is a bottleneck and needs to be seen to.

The Outcomes

The data collected by an IoT inventory management system is more accurate and up to date. By reducing delays and manual steps, the manufacturing process can enhance accuracy and reduce waste. An IoT-based inventory management solution offers complete visibility into inventory by providing real-time information fetched from RFID tags. It helps track the exact location of raw materials, work-in-progress, and finished goods. As a result, manufacturers can balance the amount of on-hand inventory, increase machine utilization, reduce lead time, and thus avoid the costs of less effective methods. This is all about optimizing inventory and ensuring that anything ordered can be sold through whatever channel necessary.

Originally posted here

Read more…


 

CLICK HERE TO DOWNLOAD

This complete guide is a 212-page eBook and is a must read for business leaders, product managers and engineers who want to implement, scale and optimize their business with IoT communications.

Whether you want to attempt initial entry into the IoT-sphere, or expand existing deployments, this book can help with your goals, providing deep understanding into all aspects of IoT.

CLICK HERE TO DOWNLOAD

Read more…

 

max0492-01-arduino-breakout-board-1024x885.jpg

When I work on a development project, I’ve become a big fan of using development boards that have the Arduino headers on them. The vast number of shields that easily connect to these headers is phenomenal. The one problem that I’ve always had though was that there is always a need to use a breadboard to test a circuit or integrate a sensor that just isn’t in an Arduino header format. The result is a wiring mess that can result in loose or missing connections.

I was recently talking with Max Maxfield and he pointed me to a really cool adapter board designed to remove these wiring jumpers to a breadboard. Max wrote about this board here but I’m so excited about this that I thought I’d add my two cents as well.

The BreadShield, which can be purchased at https://www.crowdsupply.com/loser/breadshield, adapts the Arduino headers to a linear set of header pins designed to be plugged into a breadboard. You can see in the image below that this completely removes the extra jumper wires that one would normally require.

max0492-03-arduino-breakout-board-1024x675.jpg

When I heard about these, I purchased three assembled units for about $28, which saves me the time of assembling the adapters myself. DIY assembly runs about $15 for a set of three boards. Either way, a great price to remove a bunch of wires from the workbench.

Now I’m still waiting for mine to arrive, but from the image, you can see that the one challenge to using these adapters might be adapting the height of your breadboard to your hardware stack. While this could be an issue, I keep various length spacers around the office so that I can adapt board heights and undoubtedly there will be a length that will ensure these line up properly.

You can view the original post here

Read more…

In-Circuit Emulators

Does anyone remember in-circuit emulators (ICEs)?

Around 1975 Intel came out with the 8080 microprocessor. This was a big step up from the 8008, for the 8080 had a 64k address space, a reasonable ISA, and an honest stack pointer (the 8008 had a hardware stack a mere 7 levels deep). They soon released the MDS 800, a complete computer based on the 8080, with twin 8" floppy drives. An optional ICE was available; this was, as I recall, a two-board set that was inserted in the MDS. A ribbon cable from those boards went to a small pod that could be plugged into the 8080 CPU socket of a system an engineer was developing.

The idea was that the MDS could act as the CPU of the device under test (DUT). It was rather like today's JTAG debuggers in that one could run code on the DUT, set breakpoints, collect trace data, and generally debug the hardware and software. For there was no JTAG then.

We had been developing microprocessor-based products using the 8008, but quickly transitioned to the 8080 for the increased computational power and address space. I begged my boss for the money for an MDS, which was $20k (about $100k in today's dollars), and to my surprise he let us order one. Despite slow floppies that stored only 80 KB each this tool greatly accelerated our work.

Before long ICEs were the standard platform for embedded work. Remember: this was before PCs so there were no standard desktop computers. The ICE was the computer, the IDE (such as it was) and the debugger.

In the mid-80s I was consulting and designed a, uh, "data gathering" system for our friends in Langley, VA, using multiple NSC-800 CPUs. There were few tools available for this part so I created a custom ICE that let me debug the code. Then a light bulb went on: why not sell the thing? There was practically no market for NSC-800 tools so I came up with versions for the Z80 and 8085 and slapped a $695 label on it. Most ICEs at the time cost many thousands so sales spiked.

Back then we still drew schematics on large D-size (17" x 22") vellum with a pencil. I laid out the PCBs on mylar with black tape for the tracks, as was the norm at the time.

This ICE is perhaps the design I'm most proud of in my career. It was only 17 ICs but was the epitome of an embedded system. Software replaced the usual gobs of hardware. On a breakpoint, for instance, the hardware switched from using the DUT stack to a stack on the emulator, but since the user's stack pointer could point anywhere, and the RAM in the ICE was only a few KB, the hardware masked off the upper address bits and lots of convoluted code reconstructed the user environment.

At the time ICEs advertised their breakpoints; most supported no more than a few as comparators watched the address bus for the breakpoint. My ICE used a 64k by one bit memory that mirrored the user bus. Need a breakpoint at, say, address 0x1234? The emulator set that bit in the memory true. Thus, the thing had 65K breakpoints. One of my dumbest mistakes was to not patent that, as all ICE vendors eventually copied the approach.
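The bitmap trick is easy to sketch in modern terms. This Python class is an illustration of the idea, not the original hardware: one bit per address, so a 64 KB address space needs only 8 KB of breakpoint memory, and every single address can carry a breakpoint.

```python
class BreakpointMap:
    """One bit per address, mirroring the target's address space.
    A breakpoint check is a single memory lookup, regardless of how
    many breakpoints are set."""

    def __init__(self, address_bits=16):
        # 2^16 addresses -> 65,536 bits -> 8,192 bytes of map memory.
        self.bits = bytearray((1 << address_bits) // 8)

    def set(self, addr):
        self.bits[addr >> 3] |= 1 << (addr & 7)

    def clear(self, addr):
        self.bits[addr >> 3] &= 0xFF & ~(1 << (addr & 7))

    def hit(self, addr):
        # In the hardware this lookup happened on every bus cycle.
        return bool(self.bits[addr >> 3] & (1 << (addr & 7)))
```

Contrast this with comparator-based designs: each extra breakpoint there costs another comparator, while here the cost is fixed no matter how many breakpoints are armed.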

The trouble with tools is support. An ICE replaces the DUT CPU, and interfaces with all sorts of unknown target hardware. Though the low clock rates of the Z80 meant we initially had few problems, as we expanded the product line support consumed more and more time. Eventually I learned it was equally easy to sell a six-thousand-dollar product as a six-hundred-dollar version, so those simple first emulators were replaced by much more complex many-hundred chip versions with vast numbers of features.

But the market was changing. By the mid-90s SMT CPUs were common. These were challenging to connect to. Clock rate soared making every connection a Maxwell Law nightmare. I sold the business in 1997 and went on to other endeavors. Eventually the ICE market disappeared.

One regret from all those years is that I didn't save any of the emulator's firmware or schematics. In this business everything is ephemeral. We should make an effort to preserve some of that history.

You can view the original post on TEM here

Read more…

Industrial Prototyping for IoT

I-Pi SMARC.jpg

ADLINK is a global leader in edge computing driving data-to-decision applications across industries. The company recently introduced I-Pi SMARC for Industrial IoT prototyping.

  • ADLINK I-Pi SMARC consists of a simple carrier board paired with a SMARC computer-on-module.
  • SMARC modules are available from the entry-level Rockchip PX30 to the top-of-the-line Intel Apollo Lake.
  • SMARC modules are specifically designed for typical industrial embedded applications that require long life, high MTBF, and strict revision control.
  • Use popular off-the-shelf sensors to create prototypes or proofs of concept on short notice.

Additional information can be found here

 

Read more…

By: Kelly McNelis

We have faced unprecedented disruption from the many challenges of COVID-19, and PTC’s LiveWorx was no exception. The definitive digital transformation event went virtual this year, and despite the transition from physical to digital, LiveWorx delivered.

Of the many insightful virtual keynotes, one that caught everyone’s attention was ‘Digital Transformation: The Technology & Support You Need to Succeed,’ presented by PTC’s Executive Vice President (EVP) of Products, Kevin Wrenn, and PTC’s EVP and Chief Customer Officer, Eduarda Camacho.

Their keynote focused on how companies should be prioritizing the use of best-in-class technology that will meet their changing needs during times of disruption and accelerated digital transformation. Wrenn and Camacho highlighted five of our customers through interactive case studies on how they are using PTC technology to capitalize on digital transformation to thrive in an era of disruption.


Below is a summary of the five customers and their stories that were highlighted during the keynote.

1. Royal Enfield (Mass Customization)

Royal Enfield is an Indian motorcycle company that has been manufacturing motorbikes since 1901. The company has British roots, and its main customer base is located in India and Europe. Riders of Royal Enfield want bikes that are distinctly their own, so the company worked to better manage the complexities of mass customization and respond to market demands.

Royal Enfield is a long time PTC customer, but they were on old versions of PTC technology. They first upgraded Creo and Windchill to the latest releases so they could leverage the new capabilities. They then moved on to transform their processes for platform and variant designs, introduced simulation much earlier by using Creo Simulation Live, and leveraged generative design by bringing AI into engineering and applying it to engine and chassis complex custom forged components. Finally, they retrained and retooled their engineering staff to fully leverage the power of new processes and technologies.

The entire Royal Enfield team now has digital capabilities that accelerate new product designs, variants, and accessories for personalization; as a result, they are able to deliver a much-shortened design cycle. Royal Enfield is continuing their digital transformation trend, and will invest in new ways to create value while leveraging augmented reality with PTC's Vuforia suite.

2. VCST (Manufacturing Efficiency, Quality, and Innovation)

VCST is part of the BMT Group and is a world-class automotive supplier of precision-machined powertrain and brake components. Their problem was the high cost of their production facility in Belgium: they either needed to improve the plant's cost efficiency or face the potential of shutting down the facility and relocating it to another region. VCST decided to implement ThingWorx so that anyone can have instant visibility into asset status and performance. VCST is also digitizing maintenance requests and the ability to inquire about spare parts, improving overall efficiency in support of their cost-reduction goals.

Additionally, VCST has a goal of zero customer complaints; if any quality problem reaches a customer, they can be required to perform 100% inspection until the problem is solved. Moreover, as cars have become quieter with electrification, noise from the gears has become an issue, putting pressure on VCST to innovate and reduce gear noise.

VCST has again relied on ThingWorx and Windchill to collect and share data for joint collaborative analysis to innovate and reduce gear noise. VCST also plans to use Vuforia Expert Capture and Vuforia Chalk to train maintenance workers to further improve their efficiency and cost effectiveness. The company is not done with their digital transformation, and they have plans to implement Creo and Windchill to enable end-to-end digital thread connectivity to the factory.

3. BID Group Holdings (Connected Product)

BID Group Holdings operates in the wood-processing industry and is one of the largest integrated suppliers and the North American leader in the field. The purpose of BID Group is to deliver a complete range of innovative equipment, digital technologies, turnkey installations, and aftermarket services to its customers. BID Group decided to focus on its areas of expertise and rely on the combined capabilities and scale of PTC, Microsoft, and Rockwell Automation to deliver SaaS-type solutions to its own industry.

Leveraging this combined power, the BID Group developed a digital strategy for service to improve mill efficiency and profitability. The solution is named OPER8 and was built on the ThingWorx platform. This allowed BID Group to provide their customers an out of the box solution with efficient time-to-value and low costs of ownership. BID Group is continuing to work with PTC and Rockwell Automation, to develop additional solutions that will reduce downtime of OPER8 with a predictive analytics module by using ThingWorx Analytics and LogixAI.

4. Hitachi (Service Optimization)

Hitachi operates an extensive service division that ensures its customers’ data systems remain up and running. The challenge was not only to meet customers' uptime Service Level Agreements, but to do so without wrecking the cost structure. Hitachi decided to implement PTC’s Servigistics Service Parts Management software to ensure the right parts are available when and where they are needed for service. With Servigistics, Hitachi was able to meet its availability targets while staying cost effective and delighting its customers.

Hitachi runs on the cloud, which allows them to upgrade to current releases more often, take advantage of new functionality, and avoid unexpected costs.

PTC has driven engagement and support for Hitachi through the PTC Community, and encourages all customers to utilize this platform. The network of collaborative spaces is a gathering place for PTC customers and partners to showcase their work, inspire each other, and share ideas and best practices in order to expand the value of their PTC solutions and services.

5. COVID-19 Response 

COVID-19 has put significant strain on the world’s hospitals and healthcare infrastructure, and hospitalization rates for COVID brought into question hospitals' capacity to handle cases. Many countries began considering the value field hospitals could bring in safely caring for patients and easing admissions at ‘regular’ hospitals. The complication, however, is that field hospitals have essentially none of the isolation or air filtration capability required for treating COVID patients and protecting healthcare workers.

As a result, the US Army Corps of Engineers put out specifications for self-contained isolation units (SCIUs): fully functioning hospital rooms that can be transported or built onsite. But assembly needed to happen fast, and a group of companies (including PTC) led by The Innovation Machine rallied to help design and define the SCIUs.

With buy-in from numerous companies, a common platform was needed for collaboration. PTC felt compelled to react, and many PTC customers and partners joined in to help create a collaboration platform, with cloud-based Windchill as the foundation. But PTC didn’t just provide software: it also contributed digital thread and design advice to help the group solve some of the major challenges. The resulting design, the product of many companies coming together, has led to deployments across various US state governments, agencies, and FEMA.

Final Thoughts

All of the above customers approached digital transformation as a business imperative. They all had sizeable challenges that needed to be solved and took leadership positions to implement plans that leveraged digital transformation technologies combined with new processes.

PTC will continue to innovate across the digital transformation portfolio and is committed to ensuring that customer success offerings capture value faster and provide the best outcomes.

Original Post Link: https://www.ptc.com/en/product-lifecycle-report/liveworx-digital-transformation–technology-and-support-you-need-to-succeed

Author Bio: Kelly is a corporate communications specialist at PTC. Her responsibilities include drafting and approving content for PTC’s external and social media presence and supporting communications for the Chief Strategy Officer. Kelly has previous experience as a communications specialist working to create and implement materials for the Executive Vice President of the Products Organization and senior management team members.

 

Read more…

Helium Expands to Europe

Helium, the company behind one of the world’s first peer-to-peer wireless networks, is announcing the introduction of Helium Tabs, its first branded IoT tracking device that runs on The People’s Network. In addition, after launching its network in 1,000 cities in North America within one year, the company is expanding to Europe to address growing market demand with Helium Hotspots shipping to the region starting July 2020. 

Since its launch in June 2019, Helium quickly grew its footprint with Hotspots covering more than 700,000 square miles across North America. Helium is now expanding to Europe to allow for seamless use of connected devices across borders. Powered by entrepreneurs looking to own a piece of the people-powered network, Helium’s open-source blockchain technology incentivizes individuals to deploy Hotspots and earn Helium (HNT), a new cryptocurrency, for simultaneously building the network and enabling IoT devices to send data to the Internet. When connected with other nearby Hotspots, this acts as the backbone of the network. 

“We’re excited to launch Helium Tabs at a time where we’ve seen incredible growth of The People’s Network across North America,” said Amir Haleem, Helium’s CEO and co-founder. “We could not have accomplished what we have done, in such a short amount of time, without the support of our partners and our incredible community. We look forward to launching The People’s Network in Europe and eventually bringing Helium Tabs and other third-party IoT devices to consumers there.”  

Introducing Helium Tabs that Run on The People’s Network
Unlike other tracking devices, Tabs uses LongFi technology, which combines the LoRaWAN wireless protocol with the Helium blockchain and provides network coverage up to 10 miles from a single Hotspot. This is a game-changer compared to WiFi- and Bluetooth-enabled tracking devices, which only work up to 100 feet from a network source. What’s more, due to Helium’s unique blockchain-based rewards system, Hotspot owners are rewarded with Helium (HNT) each time a Tab connects to their network. 

In addition to its increased growth with partners and customers, Helium has also seen accelerated expansion of its Helium Patrons program, which was introduced in late 2019. All three combined have helped to strengthen its network. 

Patrons are entrepreneurial customers who purchase 15 or more Hotspots to help blanket their cities with coverage and enable customers who use the network. In return, they receive discounts, priority shipping, network tools, and Helium support. Currently, the program has more than 70 Patrons throughout North America and is expanding to Europe. 

Key brands that use the Helium Network include: 

  • Nestle, ReadyRefresh, a beverage delivery service company
  • Agulus, an agricultural tech company
  • Conserv, a collections-focused environmental monitoring platform

Helium Tabs will initially be available to existing Hotspot owners for $49. The Helium Hotspot is now available for purchase online in Europe for €450.

Read more…

This blog is the second part of a series covering the insights I uncovered at the 2020 Embedded Online Conference. 

Last week, I wrote about the fascinating intersection of the embedded and IoT world with data science and machine learning, and the deeper co-operation I am experiencing between software and hardware developers. This intersection is driving a new wave of intelligence on small and cost-sensitive devices.

Today, I’d like to share with you my excitement about how far we have come in the FPGA world: what used to be something only a few individuals in the world could do is on the verge of becoming far more accessible.

I’m a hardware guy and I started my career writing in VHDL at university. I then started working on designing digital circuits with Verilog and C and used Python only as a way of automating some of the most tedious daily tasks. More recently, I have started to appreciate the power of abstraction and simplicity that is achievable through the use of higher-level languages, such as Python, Go, and Java. And I dream of a reality in which I’m able to use these languages to program even the most constrained embedded platforms.

At the Embedded Online Conference, Clive Maxfield talked about FPGAs. He mentions that “in a world of 22 million software developers, there are only around a million core embedded programmers and even fewer FPGA engineers.” But things are changing. As an industry, we are moving toward a world in which taking advantage of the capabilities of a reconfigurable hardware device, such as an FPGA, is becoming easier.

  • What the FAQ is an FPGA, by Max the Magnificent, starts with what an FPGA is and the beauties of parallelism in hardware – something that took me quite some time to grasp when I first started writing in HDL (hardware description languages). This is not only the case for an FPGA, but it also holds true in any digital circuit. The cool thing about an FPGA is the fact that at any point you can just reprogram the whole board to operate in a different hardware configuration, allowing you to accelerate a completely new set of software functions. What I find extremely interesting is the new tendency to abstract away even further, by creating HLS (high-level synthesis) representations that allow a wider set of software developers to start experimenting with programmable logic.
  • The concept of extending the way FPGAs can be programmed to an even wider audience is taken to the next level by Adam Taylor. He talks about PYNQ, an open-source project that allows you to program Xilinx boards in Python. This is extremely interesting as it opens up the world of FPGAs to even more software engineers. Adam demonstrates how you can program an FPGA to accelerate machine learning operations using the PYNQ framework, from creating and training a neural network model to running it on Arm-based Xilinx FPGA with custom hardware accelerator blocks in the FPGA fabric.

FPGAs have always had the stigma of being hard to work with. The idea of programming an FPGA in Python was something no one had even imagined a few years ago. But today, thanks to many efforts across our industry, embedded technologies, including FPGAs, are being made more accessible, allowing more developers to participate, experiment, and drive innovation.

I’m excited that more computing technologies are being put in the hands of more developers, improving development standards, driving innovation, and transforming our industry for the better.

If you missed the conference and would like to catch the talks mentioned above*, visit www.embeddedonlineconference.com

Part 3 of my review can be viewed by clicking here

In case you missed the previous post in this blog series, here it is:

*This blog only features a small collection of all the amazing speakers and talks delivered at the Conference! 

Read more…

Recovering from a system failure or a software glitch can be no easy task. The longer a fault persists, the harder it can be to identify and recover from. An external watchdog is an important and critical tool in the embedded systems engineer's toolbox. There are five tips that should be taken into account when designing a watchdog system.

Tip #1 – Monitor a heartbeat

The simplest function that an external watchdog can have is to monitor a heartbeat that is produced by the primary application processor.  Monitoring of the heartbeat should serve two distinct purposes.  First, the microcontroller should only generate the heartbeat after functional checks have been performed on the software to ensure that it is functioning.  Second, the heartbeat should be able to reveal if the real-time response of the system has been jeopardized.

Monitoring the heartbeat for software functionality and real-time response can be done using a simple, “dumb” external watchdog. The external watchdog should allow assigning a heartbeat period along with a window within which the heartbeat must appear. The purpose of the heartbeat window is to allow the watchdog to detect that the real-time response of the system has been compromised. In the event that either the functional or the real-time checks fail, the watchdog attempts to recover the system through a reset of the application processor.
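The windowed check is simple to express. Here is a Python sketch of the logic; in a real design this would run as firmware on the watchdog device itself, with hardware timestamps rather than a list:

```python
def heartbeat_ok(timestamps, min_period, max_period):
    """Windowed heartbeat check: each beat must arrive no sooner than
    min_period and no later than max_period after the previous one.
    A too-early beat can indicate a runaway loop stuck toggling the
    heartbeat pin; a too-late beat indicates a hang or missed deadline."""
    for prev, curr in zip(timestamps, timestamps[1:]):
        delta = curr - prev
        if not (min_period <= delta <= max_period):
            return False  # out of window: reset the application processor
    return True
```

Note that a plain timeout-only watchdog would accept the runaway-loop case; the lower bound of the window is what catches it.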

Tip #2 – Use a low capability MCU

External watchdogs that can monitor a heartbeat are relatively low cost, but they severely limit the capabilities and recovery possibilities of the watchdog system. A low-capability microcontroller can cost nearly the same as an external watchdog timer, so why not add some intelligence to the watchdog and use a microcontroller? The microcontroller firmware can fulfill the windowed heartbeat monitoring with the addition of so much more. A “smart” watchdog like this is sometimes referred to as a supervisor or safety watchdog and has been used for many years in industries such as automotive. Generally a microcontroller watchdog has been reserved for safety-critical applications, but given today's development tools and hardware costs it can be cost effective in other applications as well.

Tip #3 – Supervise critical system functions

The decision to use a small microcontroller as a watchdog opens nearly endless possibilities of how the watchdog can be used.  One of the first roles of a smart watchdog is usually to supervise critical system functions such as a system current or sensor state.  One example of how a watchdog could supervise a current would be to take an independent measurement and then provide that value to the application processor.  The application processor could then compare its own reading to that of the watchdog.  If there were disagreement between the two then the system would execute a fault tree that was deemed to be appropriate for the application.
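As an illustration of that cross-check, here is a hedged Python sketch; the function name, tolerance, and fault handling are hypothetical, and a real implementation would live in the watchdog firmware and feed an application-specific fault tree:

```python
def readings_agree(app_reading, watchdog_reading, tolerance):
    """Compare the application processor's measurement against the
    watchdog's independent measurement of the same quantity (e.g. a
    system current). Disagreement beyond the tolerance means one of
    the two measurement paths is faulty."""
    return abs(app_reading - watchdog_reading) <= tolerance

def supervise(app_reading, watchdog_reading, tolerance, fault_tree):
    """Run the cross-check and execute the application-specific
    fault tree on disagreement."""
    if not readings_agree(app_reading, watchdog_reading, tolerance):
        fault_tree()
```

The value of the scheme is the independence of the two paths: a drifting sense resistor or a corrupted ADC driver on either side shows up as disagreement, which neither processor could detect alone.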

Tip #4 – Observe a communication channel

Sometimes an embedded system can appear to the watchdog and the application processor to be operating as expected, yet appear non-responsive to an external observer. In such cases it can be useful to tie the smart watchdog to a communication channel such as a UART. When the watchdog is connected to a communication channel, it can monitor not only channel traffic but also commands that are specific to the watchdog. A great example of this is a watchdog designed for a small satellite that monitors radio communications between the flight computer and the ground station. If the flight computer becomes non-responsive to the radio, a command can be sent to the watchdog, which then executes it and resets the flight computer.

Tip #5 – Consider an externally timed reset function

The question of who is watching the watchdog is undoubtedly on the minds of many engineers when using a microcontroller as a watchdog. Using a microcontroller to implement extra features adds complexity and a new software element to the system. In the event that the watchdog itself goes off into the weeds, how is it going to recover? One option is to use the external watchdog timer discussed earlier: the smart watchdog generates a heartbeat to keep itself from being reset by the timer. Another option is to have the application processor act as the watchdog for the watchdog. Careful thought needs to be given to the best way to ensure both processors remain functioning as intended.

Conclusion

The purpose of the smart watchdog is to monitor the system and the primary microcontroller to ensure that both operate as expected.  During the design of a system watchdog it can be very tempting to let the number of supported features creep.  Developers need to keep in mind that as the complexity of the smart watchdog increases, so does the probability that the watchdog itself will contain failure modes and bugs.  Keeping the watchdog simple and limited to the minimum necessary feature set will ensure that it can be exhaustively tested and proven to work.

Originally Posted here


5 Tips for Expanding your Embedded Skills

As embedded systems engineers, we work in a field that is constantly changing. Not only does change come quickly, but the amount of work and the range of skills we need in order to do our jobs successfully is constantly expanding. A firmware engineer used to need to know the microcontroller hardware and assembly language. Today, they need to know the hardware, several languages, machine learning, security, and a dozen other topics. In today’s post, we are going to look at five ways to expand your skillset and stay ahead of the game.

Tip #1 – Take an online course

Taking an online course is a great way to enhance and add to your skillset. If anyone tries to tell you that you don’t need additional coursework, don’t let them fool you. I’ve often been called an expert in embedded systems, but just like everyone else, I need to take courses to learn and maintain my skillset. In fact, just this week I took a course on Test Driven Development taught by James Grenning, the expert in TDD. I’ve been playing with TDD on and off for several years, but despite that familiarity, working with an expert in a subject will dramatically improve your skills. I was able to pick James’ brain on TDD, enhance my skills, and walk away with several action items to work on over the next several months.

Start by identifying an area of your own skillset that is deficient, rusty, or simply ready to be taken to the next level. Then find an expert on that topic and take an online, interactive, or self-paced course with them. (I won’t mention my own courses that you can find here … ooopps!  )

Tip #2 – Read a book

Books can be a great way to enhance your skills. There are dozens of books on embedded system design that can easily be found at any bookstore or online, though some are better than others. I’ve started to write up reviews of the books I’ve read in order to provide you with recommendations. This effort is just in its infancy and can be found at: https://www.beningo.com/?s=book (I’ll be adding a category to the blog in the near future).

You might also want to check out Jack Ganssle’s book reviews, which you can find at: http://www.ganssle.com/bkreviews.htm

Books that I am currently working through myself that I’ve been finding to be fantastic so far include:

  • TinyML
  • Clean Code
  • The Object-Oriented Thought Process

Tip #3 – Watch a webinar

Webinars are a great way to get a high-level understanding of a new skill or topic. I don’t think a day goes by where I don’t get an advertisement for a webinar in my inbox. Unfortunately, not all webinars are created equal. I’ve come across many that sound fantastic, only to discover later that they are totally marketing focused with little real technical information. I produce anywhere from 8 to 12 webinars per year and always try to include high-level theory, some low-level details, and then a practical example through a demonstration. It doesn’t always work out that way, and every now and then they undoubtedly flirt with being marketing rather than technical, but I always try to make sure that developers get what they need and know where to go to dive deeper.

Over the coming months keep a close eye on webinars as a potential source to enhance your skills. I know that I’ll be attending several on Bluetooth Mesh networking (hoping they aren’t pure marketing pitches), and I will also be pulling together several of my own.

Tip #4 – Build something for fun

There is no better way to learn a new skill than to do something! I’ve always found that people who attend my webinars, courses, and so on learn more when there are demonstrations and hands-on materials. It’s great to read about machine learning or continuous integration servers, but unless you set one up, it’s just theory. We all know that the devil is in the details, and applying a skill is what sharpens it.

I highly recommend that developers build something for fun. More than a decade ago, when I wanted to learn how to design and lay out PCBs and work with USB firmware, I decided to develop a USB-controlled light bar. I went through an accelerated development schedule: designed the schematics and PCB, had the board fabricated, and then hand-soldered the parts. I wrote all the firmware and eventually had a working device. I learned so much building that simple light bar and even used it as an example during interviews when I was looking for a new job (this was before I started my business).

Even today, I will still pick a project when I want to learn something. When I was evaluating MicroPython, I built an internet-connected weather station. It forced me to work through many details and solve problems that I otherwise might not have considered if I hadn’t dived into the deep end.

Tip #5 – Find a mentor

The times that I’ve accelerated my understanding of something the most have usually been under the guidance of a mentor or coach: someone who has mastered the skill you are trying to learn, has made every mistake, and can share their wisdom. It’s certainly possible to learn and advance without a mentor, but having feedback and the ability to ask a question and get an educated response can dramatically shorten the time involved. That’s one of the reasons why I often host interactive webinars and even offer coaching and trusted-advisor services to my clients. It’s just extremely helpful!

Conclusions

No matter how good you are at developing embedded software, hardware, and systems, if you don’t take the time to update your skills, then within just a few years you’ll find that everyone else is passing you by. You’ll be less efficient and find yourself struggling. Continuing education is critical for engineers to ensure that they stay up to date on the latest practices and contribute to their products’ success.

Originally posted here
