


AI and ML are widely discussed topics, and they are inspiring many entrepreneurs to invest in products built on them. That investment creates a real opportunity for newcomers to learn these concepts and land a full-time job in the field.

To help you get there, this post covers the key concepts to learn before getting started with AI and machine learning. But first, a quick overview of what artificial intelligence and machine learning are.

About AI

AI stands for artificial intelligence, a set of techniques aimed at automating tasks that currently require human effort, ideally with high accuracy. Two of its main branches are machine learning and deep learning, and both are used to predict an outcome based on the data available to them. Many large organizations have applied these techniques successfully for years in finance, gaming, technology, and image processing. In a typical machine learning product, you train a model on a large amount of data; once trained, the model can make decisions on its own. Deep learning models generally need far more data than classical machine learning models, which can work well with comparatively little.

Below are the top concepts to learn before diving into other AI/ML topics.

1) Binary numbers

A binary number is a base-2 number made up of only zeros and ones. Virtually all electronic systems, traditional and modern, work with binary at the lowest level; they understand nothing else. For that reason, it is important for a software engineer to understand how numbers are converted to and from binary.

Along with binary, it is worth learning the other common numbering systems: hexadecimal, octal, and decimal. Decimal is the one humans find easiest, which is why software that interacts with people presents values in decimal on the front end, while the hardware underneath still processes everything in binary.
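
As a quick illustration, Python's built-in conversion functions make it easy to hop between these systems (the value 42 below is just an arbitrary example):

```python
# Python's built-in helpers for moving between number systems.
n = 42
print(bin(n))             # '0b101010' -> binary
print(hex(n))             # '0x2a'     -> hexadecimal
print(oct(n))             # '0o52'     -> octal
print(int("101010", 2))   # 42         -> binary string back to decimal
```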

2) Programming

Most machine learning and deep learning applications are written in Python. Although Python offers many ready-made libraries and frameworks, you still need basic programming skills to apply them to your own project. A common path is to learn the fundamentals in C first and then move on to Python.
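
As an illustration of leaning on a ready-made library rather than writing an algorithm from scratch, here is a minimal sketch (it assumes scikit-learn is installed; the dataset and model are examples only, not a recommendation for any particular project):

```python
# Train and evaluate a simple classifier using an off-the-shelf library.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)                        # toy example dataset
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("test accuracy:", model.score(X_test, y_test))
```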

3) Probability in maths

Math is less critical for day-to-day programming, but it matters a great deal in machine learning and deep learning, especially probability. Deep learning involves large amounts of data, and reasoning about that data, from weighing evidence to ranking likely matches in a search, relies on probability.
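
As a small, self-contained example of the kind of probability reasoning that shows up throughout ML, here is Bayes' rule applied to made-up numbers (every value below is illustrative):

```python
# Bayes' rule: P(defect | alarm) = P(alarm | defect) * P(defect) / P(alarm)
p_defect = 0.02              # assumed prior: 2% of parts are defective
p_alarm_given_defect = 0.95  # assumed sensor hit rate on defective parts
p_alarm_given_ok = 0.10      # assumed false-alarm rate on good parts

p_alarm = (p_alarm_given_defect * p_defect
           + p_alarm_given_ok * (1 - p_defect))
p_defect_given_alarm = p_alarm_given_defect * p_defect / p_alarm
print(f"P(defect | alarm) = {p_defect_given_alarm:.3f}")   # ~0.162
```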

So, these are the top three concepts to learn before building your first AI/ML application and going after a job in the field. Do share your thoughts on this subject.

Read more…
An AI-based approach increases accuracy and can even make the impossible possible.
 
What is an Outlier?
 
Put simply, an outlier is a piece of data or observation that differs drastically from a given norm.
 
In the image above, the red fish is an outlier: it clearly differs by color, but also by size, shape, and, most obviously, direction. As such, the detection of outliers in data falls into two categories: univariate and multivariate.
  • Univariate: considering a single variable
  • Multivariate: considering multiple variables
 
Outlier Detection in Industrial IoT
 
In Industrial IoT, outlier detection can be instrumental in use cases such as understanding the health of a machine. Instead of looking at the characteristics of a fish, as above, we look at the characteristics of a machine through data such as sensor readings.
 
The goal is to learn what normal operation looks like, so that outliers can be flagged as abnormal activity indicative of a future problem.
 
Statistical Approach to Outlier Detection
[Figure: the normal distribution (bell curve)]
Statistical and probability-based approaches date back centuries. You may recall the bell curve: the values of your dataset plot to a distribution, and in the simplest case you calculate the mean and standard deviation of that distribution. You can then mark the points that lie x standard deviations from the mean, and anything that falls beyond them is an outlier.
 
A simple example to explore with this approach is outside air temperature. Looking at the daily low temperature in Boston for the month of January from 2008 to 2018, we find an average of about 23 degrees F with a standard deviation of about 9.62 degrees. Plotting two standard deviations around the mean gives the following.
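
A minimal sketch of that two-standard-deviation rule in Python (the readings below are made-up stand-ins rather than the actual Boston data):

```python
# Flag any reading more than 2 standard deviations from the mean.
import statistics

readings = [25, 31, 18, 22, 9, 27, 35, 20, -5, 24]   # hypothetical daily lows, degrees F

mean = statistics.mean(readings)
std = statistics.stdev(readings)
lower, upper = mean - 2 * std, mean + 2 * std

outliers = [t for t in readings if t < lower or t > upper]
print(f"normal range: {lower:.1f} to {upper:.1f} F, outliers: {outliers}")
```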
 
 
[Chart: Boston January daily lows, 2008-2018, with lines marking 2 standard deviations above and below the mean]
 
 
Interpreting the chart above, any temperature above the gray line or below the yellow one can be considered outside the range of normal, in other words, an outlier.
 
Why do we need AI?
If we just showed that you can determine outliers using simple statistics, then why do we need AI at all? The answer depends on the type of outlier analysis.
 
Why AI for Univariate Analysis?
In the example above, we successfully analyzed outliers in weather looking at a single variable: temperature.
 
So, why complicate things by introducing AI to the equation? The answer has to do with the distribution of your data. You can run univariate analysis using statistical measures, but for the results to be accurate, the distribution of your data must be "normal". In other words, it needs to fit the shape of a bell curve (like the left image below).
 
However, in the real world, and specifically in industrial use cases, sensor data is rarely perfectly normal (like the right image below).
[Figure: a normal distribution (left) next to a non-normal distribution (right)]
As a result, statistical analysis on a non-normal dataset produces more false positives and false negatives.
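
One quick way to see whether the normality assumption even holds is to run a statistical normality test before trusting mean/standard-deviation thresholds. The sketch below uses SciPy's D'Agostino-Pearson test on a deliberately skewed synthetic sample (all data here is made up):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
sensor_data = rng.exponential(scale=5.0, size=500)   # skewed, non-normal sample

stat, p_value = stats.normaltest(sensor_data)
if p_value < 0.05:
    print("data looks non-normal; 2-sigma thresholds may mislead")
else:
    print("no strong evidence against normality; statistical thresholds are reasonable")
```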
 
The Need for AI
AI-based methods, on the other hand, do not require a normal distribution; they find patterns in the data that yield much higher accuracy. In the case of the weather in Boston, getting the forecast slightly wrong does not have a huge impact. However, in industries such as rail, oil and gas, and industrial equipment, trust in the accuracy of your results has a long-lasting impact, and that level of accuracy is what AI makes achievable.
 
Why AI for Multivariate Analysis?
The case for AI in multivariate analysis is more straightforward. When we look at a single variable, we can easily plot the results on a plane, as with the temperature chart or the normal and non-normal distribution charts above.
 
However, if we are analyzing multiple signals, such as the current, voltage, and wattage of a motor, vibration across three axes, or the return and discharge temperatures of an HVAC system, plotting and analyzing with statistics has its limits. Simply visualizing the data becomes impossible for a human as we move from a single plane to hyperplanes, as shown below.
 
[Figure: hyperplane arrangements in higher-dimensional space]
 
The Need for AI
For multivariate analysis, visual inspection starts to go beyond human capabilities, while technical analysis goes beyond what statistics can handle. Instead, AI can be used to find patterns in the underlying data, learn normal operation, and adequately monitor for outliers. In other words, for multivariate analysis AI starts to make the impossible possible.
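
To make that concrete, here is one illustrative multivariate approach (a sketch using scikit-learn's Isolation Forest, not a description of Elipsa's product): the model learns the joint behaviour of several sensor channels from normal data and flags readings that do not fit those patterns.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)
# Columns: current (A), voltage (V), vibration (mm/s) -- synthetic stand-ins.
normal_data = rng.normal(loc=[10.0, 230.0, 1.0], scale=[0.5, 2.0, 0.1], size=(500, 3))

model = IsolationForest(contamination=0.01, random_state=0).fit(normal_data)

new_readings = np.array([
    [10.1, 229.5, 1.05],   # a typical operating point
    [14.0, 210.0, 3.50],   # an abnormal combination of values
])
print(model.predict(new_readings))   # 1 = normal, -1 = outlier
```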
 
Summary
Statistics and probability have been around far longer than anyone reading this post. However, not all data is created equal, and in the world of Industrial IoT, statistical techniques have crucial limitations.
 
AI-based techniques go beyond these limitations, helping to reduce false positives and false negatives and often making robust analysis possible for the first time.
 
At Elipsa, we build simple, fast and flexible AI for IoT. Get free access to our Community Edition to start integrating machine learning into your applications.
 
Read more…

This blog is the second part of a series covering the insights I uncovered at the 2020 Embedded Online Conference. 

Last week, I wrote about the fascinating intersection of the embedded and IoT world with data science and machine learning, and the deeper co-operation I am experiencing between software and hardware developers. This intersection is driving a new wave of intelligence on small and cost-sensitive devices.

Today, I’d like to share my excitement about how far we have come in the FPGA world: something that only a handful of specialists used to be able to do is on the verge of becoming far more accessible.

I’m a hardware guy and I started my career writing in VHDL at university. I then started working on designing digital circuits with Verilog and C and used Python only as a way of automating some of the most tedious daily tasks. More recently, I have started to appreciate the power of abstraction and simplicity that is achievable through the use of higher-level languages, such as Python, Go, and Java. And I dream of a reality in which I’m able to use these languages to program even the most constrained embedded platforms.

At the Embedded Online Conference, Clive Maxfield talked about FPGAs. He mentioned that “in a world of 22 million software developers, there are only around a million core embedded programmers and even fewer FPGA engineers.” But things are changing. As an industry, we are moving towards a world in which taking advantage of the capabilities of a reconfigurable hardware device, such as an FPGA, is becoming easier.

  • What the FAQ is an FPGA, by Max the Magnificent, starts with what an FPGA is and the beauty of parallelism in hardware – something that took me quite some time to grasp when I first started writing in HDLs (hardware description languages). This is not only the case for an FPGA; it holds true for any digital circuit. The cool thing about an FPGA is that at any point you can reprogram the whole device to take on a different hardware configuration, allowing you to accelerate a completely new set of software functions. What I find extremely interesting is the growing tendency to abstract away even further, by creating HLS (high-level synthesis) representations that allow a wider set of software developers to start experimenting with programmable logic.
  • The concept of extending the way FPGAs can be programmed to an even wider audience is taken to the next level by Adam Taylor. He talks about PYNQ, an open-source project that allows you to program Xilinx boards in Python. This is extremely interesting, as it opens up the world of FPGAs to even more software engineers. Adam demonstrates how you can program an FPGA to accelerate machine learning operations using the PYNQ framework, from creating and training a neural network model to running it on an Arm-based Xilinx FPGA with custom hardware accelerator blocks in the FPGA fabric (see the short sketch after this list).
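
For a flavour of what that looks like, here is a minimal PYNQ sketch (it assumes a Xilinx board running the PYNQ image and a bitstream file named "base.bit"; the file name is illustrative):

```python
from pynq import Overlay

overlay = Overlay("base.bit")   # program the FPGA fabric with the design
print(list(overlay.ip_dict))    # IP blocks in the design, now reachable from Python
```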

FPGAs have always carried the stigma of being difficult to work with. The idea of programming an FPGA in Python was something few people had even imagined a few years ago. But today, thanks to the many efforts across our industry, embedded technologies, including FPGAs, are being made more accessible, allowing more developers to participate, experiment, and drive innovation.

I’m excited that more computing technologies are being put in the hands of more developers, improving development standards, driving innovation, and transforming our industry for the better.

If you missed the conference and would like to catch the talks mentioned above*, visit www.embeddedonlineconference.com

Part 3 of my review can be viewed by clicking here

In case you missed the previous post in this blog series, here it is:

*This blog only features a small collection of all the amazing speakers and talks delivered at the Conference! 

Read more…

I recently joined the Embedded Online Conference thinking I was going to gain new insights on embedded and IoT techniques. But I was pleasantly surprised to see a huge variety of sessions with a focus on modern software development practices. It is becoming more and more important to gain familiarity with a more modern software approach, even when you’re programming a constrained microcontroller or an FPGA.

Historically, there has been a large separation between application developers and those writing code for constrained embedded devices. But things are now changing. The embedded world is intersecting with the world of IoT, data science, and ML, and the deeper co-operation between software and hardware communities is driving innovation. The Embedded Online Conference, artfully organised by Jacob Beningo, represented exactly this cross-section, shedding light on some of the most interesting areas in the embedded world - machine learning on microcontrollers, using test-driven development to reduce bugs, and programming an FPGA in Python are all things that, a few years ago, had little to do with the IoT and embedded industry.

This blog is the first part of a series discussing these new and exciting changes in the embedded industry. In this article, we will focus on machine learning techniques for low-power and cost-sensitive IoT and embedded Arm-based devices.

Think like a machine learning developer

Considered for many years an academic dead end of limited practical use, machine learning has gained a lot of renewed traction recently and has now become one of the most interesting trends in the IoT space. TinyML is the buzzword of the moment, and it was a hot topic at the Embedded Online Conference. However, for embedded developers, this buzzword can sometimes add an element of uncertainty.

The thought of developing IoT applications with the addition of machine learning can seem quite daunting. During Pete Warden’s session about the past, present and future of embedded ML, he described the embedded and machine learning worlds as very fragmented: there are so many hardware variants, RTOSes, toolchains and sensors that simply compiling and running a ‘hello world’ program can take developers a long time. In the new world of machine learning, there’s a constant churn of new models, which often use different types of mathematical operations. Plus, exporting ML models to a development board or other targets is often more difficult than it should be.

Despite some of these challenges, change is coming. Machine learning on constrained IoT and embedded devices is being made easier by new development platforms, models that work out-of-the-box with these platforms, plus the expertise and increased resources from organisations like Arm and communities like tinyML. Here are a few must-watch talks to help in your embedded ML development: 

  • New to the tinyML space is Edge Impulse, a start-up that provides a solution for collecting device data, building a model around it, and deploying it to make sense of the data directly on the device. Jan Jongboom, CTO at Edge Impulse, talks about how to combine a traditional signal processing pipeline and anomaly detection with a machine learning model to detect different gestures. All of this has been made even easier by the announced collaboration with Arduino, which further simplifies the journey of training a neural network and deploying it on your device (a sketch of checking such an exported model appears after this list).
  • Arm recently announced new machine learning IP that not only has the capabilities to deliver a huge uplift in performance for low-power ML applications, but will also help solve many issues developers are facing today in terms of fragmented toolchains. The new Cortex-M55 processor and Ethos-U55 microNPU will be supported by a unified development flow for DSP and ML workloads, integrating optimizations for machine learning frameworks. Watch this talk to learn how to get started writing optimized code for these new processors.
  • An early adopter implementing object detection with ML on a Cortex-M is the OpenMV camera, a low-cost module for machine vision algorithms. During the conference, embedded software engineer Lorenzo Rizzello walks you through getting started with ML models and deploying them to the OpenMV camera to detect objects in the environment around the device.
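
As a small, hedged example of the model-deployment side of these workflows, the sketch below sanity-checks a converted TensorFlow Lite model on a PC before it is flashed to a board ("gesture_model.tflite" is a placeholder name, and the dummy input is just zeros of the expected shape):

```python
import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="gesture_model.tflite")
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Feed one dummy input with the shape and dtype the model expects.
dummy = np.zeros(input_details[0]["shape"], dtype=input_details[0]["dtype"])
interpreter.set_tensor(input_details[0]["index"], dummy)
interpreter.invoke()

print(interpreter.get_tensor(output_details[0]["index"]))   # raw model scores
```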

Putting these machine learning technologies in the hands of embedded developers opens up new opportunities. I’m excited to see and hear what will come of all this amazing work and how it will improve development standards and transform embedded devices of the future.

If you missed the conference and would like to catch the talks mentioned above*, visit www.embeddedonlineconference.com

*This blog only features a small collection of all the amazing speakers and talks delivered at the Conference!

Part 2 of my review can be viewed by clicking here

Read more…
