
Can AI Replace Firmware?

Scott Rosenthal and I go back about a thousand years; we've worked together, helped midwife the embedded field into being, had some amazing sailing adventures, and recently took a jaunt to the Azores just for the heck of it. Our sons are both big data people; their physics PhDs were perfect entrees into that field, and both now work in the field of artificial intelligence.

At lunch recently we were talking about embedded systems and AI, and Scott posed a thought that has been rattling around in my head since. Could AI replace firmware?

Firmware is a huge problem for our industry. It's hideously expensive. Only highly-skilled people can create it, and there are too few of us.

What if an AI engine of some sort could be dumped into a microcontroller and the "software" then created by training that AI? If that were possible - and that's a big "if" - then it might be possible to achieve what was hoped for when COBOL was invented: programmers would no longer be needed as domain experts could do the work. That didn't pan out for COBOL; the industry learned that accountants couldn't code. Though the language was much more friendly than the assembly it replaced, it still required serious development skills.

But with AI, could a domain expert train an inference engine?

Consider a robot: a "home economics" major could create scenarios of stacking dishes from a dishwasher. Maybe these would be in the form of videos, which would then be fed to the AI engine as it tuned the weighting coefficients to achieve what the home ec expert deems worthy goals.

My first objection to this idea was that these sorts of systems have physical constraints. With firmware I'd write code to sample limit switches so the motors would turn off if they hit an end-of-motion extreme. During training, an AI-based system would try to drive the motors into all kinds of crazy positions, banging destructively into the stops. But think how a child learns: a parent encourages experimentation but prevents the youngster from self-harm. Maybe that's the role of the future developer training an AI. Or perhaps the training will be done on a simulator of some sort where nothing can go horribly wrong.
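To make that concrete, the firmware version of this safeguard is only a few lines: poll the limit switches each pass through the control loop and cut the drive the moment an end stop is reached. A minimal sketch, assuming made-up register addresses and bit positions for a generic microcontroller:

```c
#include <stdint.h>
#include <stdbool.h>

/* Hypothetical memory-mapped I/O addresses for a generic MCU; the real
 * addresses and bit positions come from the part's datasheet.            */
#define GPIO_IN        (*(volatile uint32_t *)0x40020010u)
#define MOTOR_PWM      (*(volatile uint32_t *)0x40030004u)

#define LIMIT_MIN_BIT  (1u << 0)       /* end-of-travel switch, lower stop */
#define LIMIT_MAX_BIT  (1u << 1)       /* end-of-travel switch, upper stop */

/* Called from the control loop: never drive the motor past a stop. */
void motor_set(int32_t demand)
{
    uint32_t sw = GPIO_IN;
    bool at_min = (sw & LIMIT_MIN_BIT) != 0;
    bool at_max = (sw & LIMIT_MAX_BIT) != 0;

    if ((demand < 0 && at_min) || (demand > 0 && at_max))
        demand = 0;                    /* hard constraint: don't bang the stops */

    MOTOR_PWM = (uint32_t)(demand < 0 ? -demand : demand);
    /* direction-bit handling omitted for brevity */
}
```

The point is that the constraint is explicit and auditable; a trained network encodes it, at best, implicitly in its weights.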

Taking this further, a domain expert could define the desired inputs and outputs, and then a poorly-paid person could do the actual training. CEOs will love that. With that model a strange parallel emerges to computation a century ago: before the computer age, "computers" were people doing simple math to create tables of logs, trig, ballistics, etc. A roomful of them labored at a problem. They weren't particularly skilled, didn't make much, but did the rote work under the direction of one master. Maybe AI trainers will be somewhat like that.

Like we outsource clothing manufacturing to Bangladesh, I could see training, basically grunt work, being sent overseas as well.

I'm not wild about this idea as it means we'd have an IoT of idiots: billions of AI-powered machines where no one really knows how they work. They've been well-trained but what happens when there's a corner case?

And most of the AI literature I read suggests that inference successes of 97% or so are the norm. That might be fine for classifying faces, but a 3% failure rate of a safety-critical system is a disaster. And the same rate for less-critical systems like factory controllers would also be completely unacceptable.

But the idea is intriguing.

Original post can be viewed here

Feel free to email me with comments.


Read more…

Theoretical Embedded Linux requirements

Hardware

SoC

A System on Chip (SoC) is an integrated circuit that places an entire computer system on a single chip. It combines the CPU with the other components the system needs to perform its functions, and it is in charge of driving the rest of the hardware and running your software. The main advantages of an SoC are lower latency and lower power consumption.

It is made of various building blocks:

  • Core + Caches + MMU – An SoC has a processor at its heart, which largely defines its capabilities and is usually the main thing kept in mind while choosing an SoC. Normally an SoC has multiple cores of a “real” application processor, e.g. an ARM Cortex-A9, possibly assisted by a SIMD co-processor such as NEON.
  • Internal RAM – IRAM is composed of very high-speed SRAM located alongside the CPU. It acts similarly to a CPU cache and is generally very small. It is used in the first phase of the boot sequence.
  • Peripherals – These can be a simple ADC, a DSP, or a Graphics Processing Unit, connected to the core via some bus. A low-power/real-time co-processor may help the main core with real-time tasks or handle low-power states. Examples of such IP cores are USB, PCI-E, SGX, etc.
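To give a feel for what "connected to the core via some bus" means in software terms, an on-chip peripheral shows up to the programmer as a block of memory-mapped registers. A minimal sketch in C, using a made-up ADC register layout rather than any particular SoC's reference manual:

```c
#include <stdint.h>

/* Hypothetical register block for an on-SoC ADC; the base address and bit
 * meanings stand in for whatever the reference manual actually defines.  */
struct adc_regs {
    volatile uint32_t ctrl;            /* bit 0: enable, bit 1: start conversion */
    volatile uint32_t status;          /* bit 0: conversion complete             */
    volatile uint32_t data;            /* 12-bit conversion result               */
};

#define ADC0 ((struct adc_regs *)0x40012000u)

uint16_t adc_read_blocking(void)
{
    ADC0->ctrl |= 0x3u;                         /* enable and start            */
    while ((ADC0->status & 0x1u) == 0)          /* spin until the result is in */
        ;
    return (uint16_t)(ADC0->data & 0x0FFFu);
}
```

Real drivers add interrupts, DMA, and calibration, but the register-poking pattern is the same.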

External RAM

An SoC uses RAM to store temporary data during and after bootstrap. It is the memory an embedded system uses during regular operation.

Non-Volatile Memory

In an embedded system or single-board computer this is typically an SD card; in other cases it can be NAND, NOR, or SPI data flash memory. It is where the SoC reads and stores all the software components needed for the system to work.

External Peripherals

An SoC must have external interfaces for standard communication protocols such as USB, Ethernet, and HDMI. It may also include wireless protocols such as Wi-Fi and Bluetooth.

Software


First of all, we introduce the boot chain: the series of actions that happen when an SoC is powered up.

Boot ROM: This is a piece of code stored in ROM and executed by the booting core when it is powered on. It contains instructions for configuring the SoC so that it can execute applications. The configuration performed by the Boot ROM includes initializing the core’s registers and stack pointer, enabling caches and line buffers, programming the interrupt service routines, and configuring the clocks.

Boot ROM also implements a Boot Assist Module (BAM) for downloading an application image from external memories using interfaces like Ethernet, SD/MMC, USB, CAN, UART, etc.
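In rough outline, the ROM code boils down to: bring up clocks and caches, read the boot-mode pins, use the BAM to pull a first-stage image into internal RAM, and jump to it. The sketch below is purely illustrative; the function names and addresses are invented, and real Boot ROMs are vendor-specific and closed:

```c
#include <stdint.h>

/* Every function and address below is a hypothetical placeholder; real
 * Boot ROMs live in mask ROM and are usually closed source.              */
extern void     clocks_init(void);              /* PLLs, core/bus dividers   */
extern void     caches_and_buffers_enable(void);
extern uint32_t boot_pins_read(void);           /* strapping: SD, UART, USB… */
extern int      bam_load_image(uint32_t source, void *dst, uint32_t max_len);

#define IRAM_BASE  0x00900000u                  /* on-chip SRAM, made-up address */
#define IRAM_SIZE  (128u * 1024u)

void boot_rom_main(void)
{
    clocks_init();                     /* minimal clock tree so SRAM and I/O work */
    caches_and_buffers_enable();

    uint32_t source = boot_pins_read();

    /* Boot Assist Module: pull the first-stage image from SD/MMC, UART,
     * USB, CAN, etc. into internal RAM, then jump to its entry point.     */
    if (bam_load_image(source, (void *)IRAM_BASE, IRAM_SIZE) == 0) {
        void (*entry)(void) = (void (*)(void))IRAM_BASE;
        entry();                       /* never returns on success          */
    }

    for (;;) ;                         /* no bootable image found: halt     */
}
```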

1st stage bootloader

The first-stage bootloader performs the following:

  • Set up the memory segments and the stack used by the bootloader code
  • Reset the disk system
  • Display the string “Loading OS…”
  • Find the 2nd-stage bootloader in the FAT directory
  • Read the 2nd-stage bootloader image into memory at 1000:0000
  • Transfer control to the second-stage bootloader

It is copied by the Boot ROM into the SoC’s internal RAM, so it must be tiny enough to fit that memory, usually well under 100 kB; depending on the context, this stage is itself called the MLO, SPL, or something else. It initializes the external RAM and the SoC’s external memory interface, as well as other peripherals that may be of interest (for example, it disables watchdog timers). Once done, it loads and executes the next stage.
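As a hedged illustration of those duties, an SPL-style first stage reduces to: disable the watchdog, initialize the DDR controller, copy the much larger second-stage bootloader from storage into external RAM, and jump to it. The helper names and the SD-card layout below are invented for the sketch:

```c
#include <stdint.h>

/* Hypothetical helpers; the real ones are SoC- and board-specific. */
extern void watchdog_disable(void);
extern void ddr_controller_init(void);          /* external RAM timing/training */
extern int  sd_read(uint32_t lba, void *dst, uint32_t sectors);

#define DRAM_LOAD_ADDR     0x80000000u          /* where the main bootloader runs */
#define BOOTLOADER_LBA     2048u                /* made-up SD-card layout         */
#define BOOTLOADER_SECTORS 1024u

void spl_main(void)
{
    watchdog_disable();                /* don't get reset while bringing up DRAM */
    ddr_controller_init();             /* external RAM is unusable before this   */

    /* Copy the (much larger) second-stage bootloader into external RAM.   */
    if (sd_read(BOOTLOADER_LBA, (void *)DRAM_LOAD_ADDR, BOOTLOADER_SECTORS) == 0) {
        void (*second_stage)(void) = (void (*)(void))DRAM_LOAD_ADDR;
        second_stage();
    }

    for (;;) ;                         /* load failed: hang so a debugger can see */
}
```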

2nd stage bootloader

This is the main bootloader and can be ten times bigger than the 1st stage; it completes the initialization of the relevant peripherals.

  • Copy the boot sector to a local memory area
  • Find the kernel image in the FAT directory
  • Read the kernel image into memory at 2000:0000
  • Reset the disk system
  • Enable the A20 line
  • Set up the interrupt descriptor table at 0000:0000
  • Set up the global descriptor table at 0000:0800
  • Load the descriptor tables into the CPU
  • Switch to protected mode
  • Clear the prefetch queue
  • Set up protected-mode memory segments and the stack for use by the kernel code
  • Transfer control to the kernel code using a long jump

Linux Kernel

The Linux kernel is the main component of a Linux OS and is the core interface between the hardware and the system's processes. It communicates between the two, managing resources as efficiently as possible. The kernel performs the following jobs:

  • Memory management: Keep track of memory, how much is used to store what, and where
  • Process management: Determine which processes can use the processor, when, and for how long
  • Device drivers: Act as an interpreter between the hardware and the processes
  • System calls and security: Receive requests for service from processes

To put the kernel in context, a Linux machine can be thought of as having three layers:

  • The hardware: The physical machine—the base of the system, made up of memory (RAM) and the processor (CPU), as well as input/output (I/O) devices such as storage, networking, and graphics.
  • The Linux kernel: The core of the OS. It is a software residing in memory that tells the CPU what to do.
  • User processes: These are the running programs that the kernel manages. User processes are what collectively make up user space. The kernel allows processes and servers to communicate with each other.
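The division of labour is easiest to see from a user process: it never touches hardware directly, it only asks the kernel. A small C example using standard POSIX calls, where each open(), write(), and close() is a system call that crosses from user space into the kernel:

```c
#include <fcntl.h>
#include <unistd.h>
#include <string.h>

int main(void)
{
    /* open(), write() and close() are system calls: the process traps into
     * the kernel, which drives the storage hardware on its behalf and
     * enforces permissions along the way.                                  */
    int fd = open("/tmp/hello.txt", O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (fd < 0)
        return 1;

    const char msg[] = "hello from user space\n";
    write(fd, msg, strlen(msg));
    close(fd);
    return 0;
}
```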

Init and rootfs – init is the first non-kernel task to be run, and it has PID 1. It initializes everything needed to use the system. In production embedded systems, it also starts the main application. In such systems, it is either BusyBox or a custom-crafted application.
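To give a feel for what that first task does, here is a toy PID-1 sketch along the lines of a stripped-down embedded init: mount the pseudo-filesystems, start the product's main application, and then reap children (restarting the application if it dies) forever. The application path is hypothetical; real systems use BusyBox init, systemd, or something custom:

```c
#include <unistd.h>
#include <sys/types.h>
#include <sys/mount.h>
#include <sys/wait.h>

/* Hypothetical application path; a real product would use its own. */
#define MAIN_APP "/usr/bin/main-app"

static pid_t start_app(void)
{
    pid_t pid = fork();
    if (pid == 0) {
        execl(MAIN_APP, "main-app", (char *)NULL);
        _exit(127);                       /* exec failed                    */
    }
    return pid;
}

int main(void)
{
    /* PID 1: set up the bare minimum the rest of user space needs.         */
    mount("proc",  "/proc", "proc",  0, NULL);
    mount("sysfs", "/sys",  "sysfs", 0, NULL);

    pid_t app = start_app();

    /* init must never exit; reap orphans and restart the app if it dies.   */
    for (;;) {
        int status;
        pid_t pid = wait(&status);
        if (pid == app)
            app = start_app();
    }
}
```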

View original post here

Read more…

This blog is the second part of a series covering the insights I uncovered at the 2020 Embedded Online Conference. 

Last week, I wrote about the fascinating intersection of the embedded and IoT world with data science and machine learning, and the deeper co-operation I am experiencing between software and hardware developers. This intersection is driving a new wave of intelligence on small and cost-sensitive devices.

Today, I’d like to share with you my excitement about how far we have come in the FPGA world: what only a few individuals in the world used to be able to do is on the verge of becoming much more accessible.

I’m a hardware guy and I started my career writing in VHDL at university. I then started working on designing digital circuits with Verilog and C and used Python only as a way of automating some of the most tedious daily tasks. More recently, I have started to appreciate the power of abstraction and simplicity that is achievable through the use of higher-level languages, such as Python, Go, and Java. And I dream of a reality in which I’m able to use these languages to program even the most constrained embedded platforms.

At the Embedded Online Conference, Clive Maxfield talked about FPGAs. He mentioned that “in a world of 22 million software developers, there are only around a million core embedded programmers and even fewer FPGA engineers.” But things are changing. As an industry, we are moving towards a world in which taking advantage of the capabilities of a reconfigurable hardware device, such as an FPGA, is becoming easier.

  • What the FAQ is an FPGA, by Max the Magnificent, starts with what an FPGA is and the beauties of parallelism in hardware – something that took me quite some time to grasp when I first started writing in HDL (hardware description languages). This is not only the case for an FPGA, but it also holds true in any digital circuit. The cool thing about an FPGA is the fact that at any point you can just reprogram the whole board to operate in a different hardware configuration, allowing you to accelerate a completely new set of software functions. What I find extremely interesting is the new tendency to abstract away even further, by creating HLS (high-level synthesis) representations that allow a wider set of software developers to start experimenting with programmable logic.
  • The concept of extending the way FPGAs can be programmed to an even wider audience is taken to the next level by Adam Taylor. He talks about PYNQ, an open-source project that allows you to program Xilinx boards in Python. This is extremely interesting as it opens up the world of FPGAs to even more software engineers. Adam demonstrates how you can program an FPGA to accelerate machine learning operations using the PYNQ framework, from creating and training a neural network model to running it on Arm-based Xilinx FPGA with custom hardware accelerator blocks in the FPGA fabric.

FPGAs have always had the stigma of being hard to work with. The idea of programming an FPGA in Python was something that no one had even imagined a few years ago. But, today, thanks to the many efforts all around our industry, embedded technologies, including FPGAs, are being made more accessible, allowing more developers to participate, experiment, and drive innovation.

I’m excited that more computing technologies are being put in the hands of more developers, improving development standards, driving innovation, and transforming our industry for the better.

If you missed the conference and would like to catch the talks mentioned above*, visit www.embeddedonlineconference.com

Part 3 of my review can be viewed by clicking here


*This blog only features a small collection of all the amazing speakers and talks delivered at the Conference! 

Read more…

I recently joined the Embedded Online Conference thinking I was going to gain new insights on embedded and IoT techniques. But I was pleasantly surprised to see a huge variety of sessions with a focus on modern software development practices. It is becoming more and more important to gain familiarity with a more modern software approach, even when you’re programming a constrained microcontroller or an FPGA.

Historically, there has been a large separation between application developers and those writing code for constrained embedded devices. But things are now changing: the embedded world is intersecting with the world of IoT, data science, and ML, and the deeper co-operation between the software and hardware communities is driving innovation. The Embedded Online Conference, artfully organised by Jacob Beningo, represented exactly this cross-section, shining a light on some of the most interesting areas in the embedded world - machine learning on microcontrollers, using test-driven development to reduce bugs, and programming an FPGA in Python are all things that, a few years ago, had little to do with the IoT and embedded industry.

This blog is the first part of a series discussing these new and exciting changes in the embedded industry. In this article, we will focus on machine learning techniques for low-power and cost-sensitive IoT and embedded Arm-based devices.

Think like a machine learning developer

Considered for many years an academic dead end of limited practical use, machine learning has gained a lot of renewed traction recently and has now become one of the most interesting trends in the IoT space. TinyML is the buzzword of the moment, and it was a hot topic at the Embedded Online Conference. However, for embedded developers, this buzzword can sometimes add an element of uncertainty.

The thought of developing IoT applications with the addition of machine learning can seem quite daunting. During Pete Warden’s session about the past, present and future of embedded ML, he described the embedded and machine learning worlds as very fragmented: there are so many hardware variants, RTOSes, toolchains and sensors that simply compiling and running a ‘hello world’ program can take developers a long time. In the new world of machine learning, there’s a constant churn of new models, which often use different types of mathematical operations. Plus, exporting ML models to a development board or other targets is often more difficult than it should be.

Despite some of these challenges, change is coming. Machine learning on constrained IoT and embedded devices is being made easier by new development platforms, models that work out-of-the-box with these platforms, plus the expertise and increased resources from organisations like Arm and communities like tinyML. Here are a few must-watch talks to help in your embedded ML development: 

  • New to the tinyML space is Edge Impulse, a start-up that provides a solution for collecting device data, building a model based around it, and deploying it to make sense of the data directly on the device. Jan Jongboom, CTO at Edge Impulse, talks about how to use a traditional signal-processing pipeline to detect anomalies, together with a machine learning model to detect different gestures. All of this has now been made even easier by the announced collaboration with Arduino, which simplifies even further the journey to train a neural network and deploy it on your device.
  • Arm recently announced new machine learning IP that not only has the capabilities to deliver a huge uplift in performance for low-power ML applications, but will also help solve many issues developers are facing today in terms of fragmented toolchains. The new Cortex-M55 processor and Ethos-U55 microNPU will be supported by a unified development flow for DSP and ML workloads, integrating optimizations for machine learning frameworks. Watch this talk to learn how to get started writing optimized code for these new processors.
  • An early adopter implementing object detection with ML on a Cortex-M is the OpenMV camera - a low-cost module for machine vision algorithms. During the conference, embedded software engineer, Lorenzo Rizzello walks you through how to get started with ML models and deploying them to the OpenMV camera to detect objects and the environment around the device.

Putting these machine learning technologies in the hands of embedded developers opens up new opportunities. I’m excited to see and hear what will come of all this amazing work and how it will improve development standards and transform embedded devices of the future.

If you missed the conference and would like to catch the talks mentioned above*, visit www.embeddedonlineconference.com

*This blog only features a small collection of all the amazing speakers and talks delivered at the Conference!

Part 2 of my review can be viewed by clicking here

Read more…

It's Not All Linux

In the comments section of my 2020 embedded salary survey, quite a few respondents felt that much of the embedded world is being subsumed by canned solutions. Will OSes like Linux and cheap, powerful boards like the Raspberry Pi and Arduino replace traditional engineering? Has that already happened?

A number of people complained their colleagues no longer understand low-level embedded things like DMA, chip selects, diddling I/O registers, and the like. They feel these platforms isolate the engineer from those details.

Part of me says yeah! That's sort of what we want. Reuse and abstraction mean the developer can focus on the application rather than bringing up a proprietary board. Customers want solutions and don't care about implementation details. We see these abstractions working brilliantly when we buy a TCP/IP stack, often the better part of 100K lines of complex code. Who wants to craft those drivers?

Another part of me says "save me from these sorts of products." It is fun to design a board. To write the BSP and toss bits at peripheral registers. Many of us got a rush the first time we made an LED blink or a motor spin. I still find that fulfilling.

So what's the truth? Is the future all Linux and Pis?

The answer is a resounding "no." A search for "MCU" on Digi-Key gets 89,149 part numbers. Sure, many of these are dups with varying packages and the like, but that's still a ton of controllers.

Limiting that search to 8-bitters nets 30,574 parts. I've yet to see Linux run on a PIC or other tiny device.

Or filter to Cortex-M devices only. You still get 16,265 chips. None of those run Linux, Windows, BSD, or any other general-purpose OS. These are all designed into proprietary boards. Those engineers are working on the bare metal... and having a ton of fun.

The bigger the embedded world gets the more applications are found. Consider machine learning. That's for big iron, for Amazon Web Services, right? Well, partly. Eta Compute and other companies are moving ML to the edge with smallish MCUs running at low clock rates with limited memory. Power consumption rules, and 2 GB of RAM at 1 GHz just doesn't cut it when harvesting tiny amounts of energy.

Then there's cost. If you can reduce the cost of a product made in the millions by just a buck the business prospers. Who wants a ten dollar CPU when a $0.50 microcontroller will do?

Though I relish low-level engineering, our job is to get products to market as efficiently as possible. Writing drivers for a timer is sort of silly when you realize that thousands of engineers using the same part are doing the same thing. Sure, semi vendors often deliver code to handle all of this, but in my experience most of that is either crap or uses the peripherals in the most limited ways. A few exceptions exist, such as Renesas's Synergy; they go so far as to guarantee that code. My fiddling with it leaves me impressed, though the learning curve is steep. But that sort of abstraction surely must be a part of this industry going forward. Just as we don't write protocol stacks and RTOSes any more, canned code will become more common.
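To make the duplication concrete, here is roughly what that timer driver looks like: a dozen lines of register fiddling that thousands of us have written against one datasheet or another. The register layout and clock frequency below are made up for illustration:

```c
#include <stdint.h>

/* Hypothetical 32-bit down-counting timer block; every vendor has one,
 * each with its own names, which is exactly why this code keeps getting
 * rewritten.                                                             */
struct timer_regs {
    volatile uint32_t load;            /* reload value                     */
    volatile uint32_t value;           /* current count (read-only)        */
    volatile uint32_t ctrl;            /* bit 0: enable, bit 1: periodic   */
    volatile uint32_t intclr;          /* write any value to clear the IRQ */
};

#define TIMER0        ((struct timer_regs *)0x40004000u)
#define TIMER_CLK_HZ  24000000u        /* assumed peripheral clock         */

/* Program TIMER0 to interrupt every `period_ms` milliseconds. */
void timer0_start_periodic(uint32_t period_ms)
{
    TIMER0->ctrl = 0;                           /* stop while reconfiguring */
    TIMER0->load = (TIMER_CLK_HZ / 1000u) * period_ms;
    TIMER0->intclr = 1;                         /* discard any stale IRQ    */
    TIMER0->ctrl = 0x3u;                        /* periodic mode + enable   */
}
```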

Linux and canned boards have important roles in this business. But an awful lot of us will still work on proprietary systems.

View original post here

For novel ideas about building embedded systems (both hardware and firmware), join the 35,000 engineers who subscribe to The Embedded Muse, a free biweekly newsletter. The Muse has no hype and no vendor PR. Click here to subscribe

Read more…
Embedded systems are maybe the most complex part of an integrated IoT solution. Looking at my company's experience, I can say that most programmers who come to build IoT systems need additional experience if they want to work with hardware. Customers who want to hire IoT developers also need a basic understanding of what skills their future contractors must have.
Read more…

Embedded systems have become part and parcel of electronic equipment such as mobile phones, routers, modems, washing machines, microwave ovens, remote controls, RFID tags, PDAs, etc. They are low-power units used to perform some specific function of the device. For example, embedded systems are used in home automation, with wired or wireless networking, to control or regulate lights, security devices, sensors, and audio/visual systems, to sense climate changes, and for monitoring. They are also used as networked thermostats in HVAC (Heating, Ventilation and Air Conditioning) systems. Furthermore, in the coming years embedded systems will be the mainstay for the deployment of many IoT solutions, especially within Industrial IoT applications. The leading players in embedded systems are engaged in hardware and software development, and are looking forward to bringing these transformations into their products to take advantage of the thriving IoT market.

The chief areas that are going to be transformed are microcontrollers, microprocessors and Real-Time Operating Systems (RTOS), followed by networking and memory devices, open-source communities and developers. By 2020, enormous growth is expected in the market for embedded systems; it is predicted to grow at a CAGR of 22.5% to reach $226 billion by then. On the other hand, the IoT will bring a host of challenges for developers of embedded systems, as they need to develop devices that allow flawless and uninterrupted connectivity. To help them meet the challenges posed by the internet of things, a real-time operating system (RTOS) must be designed that delivers flawless connectivity, scalability, modularity, and safety.

What does the internet of things (IoT) mean for an embedded developer?

As IoT solutions are present across several industries, they offer a wonderful opportunity for embedded system developers too. For an embedded developer, it is about much more than just connecting devices to the internet: the internet of things for embedded systems is about gathering, monitoring, and analyzing large amounts of disparate data from different sources and summarizing it into useful, actionable information that enhances the way services and devices are used today.

Hope you find this post helpful. If you did, share it with your colleagues and friends. For any query related to this post or a career in IoT, you can comment below.

Read more…
