
Wireless Sensor Networks and IoT

We all know how IoT has revolutionized the way we interact with the world. IoT devices are now ubiquitous, from smart homes to industrial applications. A significant portion of these deployments are Wireless Sensor Networks (WSNs), a key component of many IoT systems. However, designing and implementing WSNs presents several challenges for embedded engineers. In this article, we discuss some of the significant challenges that embedded engineers face when working with WSNs.

A WSN is a network of small, low-cost, low-power, wirelessly connected sensor nodes that can sense, process, and transmit data. These networks can be used in a wide range of applications such as environmental monitoring, healthcare, industrial automation, and smart cities. WSNs are typically composed of a large number of nodes, which communicate with each other to gather and exchange data. The nodes are equipped with sensors, microprocessors, transceivers, and power sources, and can be stationary or mobile, depending on the application.

One of the significant challenges of designing WSNs is the limited resources of the nodes. WSNs are designed to be low-cost, low-power, and small, which means that the nodes have limited processing power, memory, and energy. This constraint limits the functionality and performance of the nodes. Embedded engineers must design WSNs that can operate efficiently with limited resources. The nodes should be able to perform their tasks while consuming minimal power to maximize their lifetime.

Another challenge of WSNs is the limited communication range. The nodes communicate with each other using wireless radio signals. However, the range of the radio signals is limited, especially in indoor environments where the signals are attenuated by walls and other obstacles. The communication range also depends on the transmission power of the nodes, which is limited to conserve energy. Therefore, embedded engineers must design WSNs that can operate reliably in environments with limited communication range.

WSNs also present a significant challenge for embedded engineers in terms of data management. WSNs generate large volumes of data that need to be collected, processed, and stored. However, the nodes have limited storage capacity, and transferring data to a centralized location may not be practical due to the limited communication range. Therefore, embedded engineers must design WSNs that can perform distributed data processing and storage. The nodes should be able to process and store data locally and transmit only the relevant information to a centralized location.
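
To make that last point concrete, here is a minimal sketch (C-style C++; the driver functions are placeholder hooks, not tied to any particular WSN stack) of a node that processes readings locally and transmits only significant changes:

#include <cmath>

// Placeholder driver hooks; a real platform would supply these.
extern float read_temperature();        // sample the local sensor
extern void  radio_send(float value);   // transmit one reading upstream

const float REPORT_THRESHOLD = 0.5f;    // e.g., degrees C
static float last_reported = -1000.0f;  // force an initial report

void sample_task() {
    float t = read_temperature();
    // Local processing: only spend radio energy on meaningful changes.
    if (std::fabs(t - last_reported) >= REPORT_THRESHOLD) {
        radio_send(t);
        last_reported = t;
    }
}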

Security is another significant challenge for WSNs. The nodes in WSNs are typically deployed in open and unprotected environments, making them vulnerable to physical and cyber-attacks. The nodes may also contain sensitive data, making them an attractive target for attackers. Embedded engineers must design WSNs with robust security features that can protect the nodes and the data they contain from unauthorized access.

The deployment and maintenance of WSNs present challenges for embedded engineers. WSNs are often deployed in harsh and remote environments, making it difficult to access and maintain the nodes. The nodes may also need to be replaced periodically due to the limited lifetime of the power sources. Therefore, embedded engineers must design WSNs that are easy to deploy, maintain, and replace. The nodes should be designed for easy installation and removal, and the network should be self-healing to recover from node failures automatically.

Final thought: WSNs present significant challenges for embedded engineers, including limited resources, communication range, data management, security, and deployment and maintenance. Addressing these challenges requires innovative design approaches that can maximize the performance and efficiency of WSNs while minimizing their cost and complexity. Embedded engineers must design WSNs that can operate efficiently with limited resources, perform distributed data processing and storage, provide robust security features, and be easy to deploy and maintain.

Read more…

Mesh Networking Application of E104-BT10

 

The E104-BT10 is a Bluetooth SIG Mesh-based networking module that supports the SIG Mesh Generic OnOff model, the HSL (lighting) model, and a transparent data transmission mode. Users can use it to quickly build a mesh network for smart homes, parking lots, warehouse logistics, building lighting control, and other scenarios.

 

The following picture shows the concept of a mesh network.


Application scenario introduction

 

  Scene one

  The following block diagram shows the E104-BT10 in a smart home application.


  In the figure above, the E104-BT10 is applied to a smart home. The advantage is that information can travel from any node to any other node in the network. A gateway device can be added at any node to transmit the network's status information to the cloud, or to let the cloud send instructions to control home appliances; the entire smart home network needs only one gateway. In addition, a mobile phone's Bluetooth can connect to any node to control and monitor the whole network.

  Compared with a traditional Wi-Fi solution, the E104-BT10's Bluetooth mesh reduces the burden on the home router and lowers the hardware cost of each smart home appliance.

  Scene two

  The following scenario shows the use of E104-BT10 in building automation.


  In the picture above, we use the E104-BT10 in building automation. Each “small white point” in the figure represents an E104-BT10, and the dotted lines represent message transmission paths. A control message can be sent from any E104-BT10 node, and the entire network responds in a very short time. Where a message cannot be delivered directly, modules within the network automatically relay it.

  Scene three

  The following scene shows the use of the E104-BT10 in an intelligent parking lot.


  In the picture above, we apply the E104-BT10 to an intelligent parking lot. A single network can contain up to 10,922 E104-BT10 nodes, and since the signal is Bluetooth, it is suitable for all kinds of large underground parking lots.

  Detailed application: install a detection device on each parking space, with an E104-BT10 handling the information exchange between devices so that information can be exchanged quickly. Users can also install an E104-BT10 on the terminal monitoring device to collect parking space information. The same network can likewise control fire-extinguishing pipe valves and underground light switches; as long as a device is on the network, its messages are relayed.

 

Summary

  The E104-BT10 is suitable for a variety of complex networking environments. When a single wireless device has insufficient signal coverage, it can be used to build a mesh network. The E104-BT10 has the following advantages: multi-hop data relaying, up to 10,922 nodes per network, removal of a single node does not affect overall communication, low data latency, one-time provisioning that lasts the lifetime of the network, and fast network joining.

 

Read more…
An AI-based approach increases accuracy and can even make the impossible possible.
 
What is an Outlier?
 
Put simply, an outlier is a piece of data or observation that differs drastically from a given norm.
 
In the image above, the red fish is an outlier. It clearly differs by color, but also by size, shape, and, most obviously, direction. Accordingly, outlier detection falls into two categories: univariate and multivariate.
  • Univariate: considering a single variable
  • Multivariate: considering multiple variables
 
Outlier Detection in Industrial IoT
 
In Industrial IoT, outlier detection can be instrumental in use cases such as understanding the health of your machine. Instead of looking at the characteristics of a fish as above, we are looking at the characteristics of a machine via data such as sensor readings.
 
The goal is to learn what normal operation looks like where outliers are abnormal activity indicative of a future problem.
 
Statistical Approach to Outlier Detection
Statistical and probability-based approaches date back centuries. You may recall the bell curve: the values of your dataset plot to a distribution. In simplest terms, you calculate the mean and standard deviation of that distribution. You can then plot the location of x standard deviations from the mean, and anything that falls beyond that is an outlier.
 
A simple example to explore using this approach is outside air temperature. Looking at the low temperature in Boston for the month of January from 2008-2018 we find an average temperature of ~23 degrees F with a standard deviation of ~9.62 degrees. Plotting out 2 standard deviations results in the following.
 
 
 (Chart: Boston January low temperatures with bands at ±2 standard deviations)
 
 
Interpreting the chart above, any temperature above the gray line or below the yellow can be considered outside the range of normal...or an outlier.
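
As a rough illustration of the arithmetic (a minimal sketch in plain C++; the mean and standard deviation are the Boston figures quoted above, while the sample readings are invented):

#include <cmath>
#include <iostream>
#include <vector>

int main() {
    const double mean = 23.0;    // average January low, degrees F
    const double sd   = 9.62;    // standard deviation, degrees F
    const double k    = 2.0;     // how many standard deviations count as "normal"

    std::vector<double> lows = {21.0, 30.5, -1.0, 25.2, 48.3};
    for (double t : lows) {
        bool outlier = std::fabs(t - mean) > k * sd;   // beyond +/- 2 sd?
        std::cout << t << (outlier ? "  <- outlier" : "") << "\n";
    }
    return 0;
}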
 
Why do we need AI?
If we just showed that you can determine outliers using simple statistics, then why do we need AI at all? The answer depends on the type of outlier analysis.
 
Why AI for Univariate Analysis?
In the example above, we successfully analyzed outliers in weather looking at a single variable: temperature.
 
So, why should we complicate things by introducing AI to the equation? The answer has to do with the distribution of your data. You can run univariate analysis using statistical measures, but in order for the results to be accurate, it is assumed that the distribution of your data is "normal". In other words, it needs to fit to the shape of a bell curve (like the left image below).
 
However, in the real world, and specifically in industrial use cases, the resulting sensor data is not perfectly normal (like the right image below).
 (Image: examples of normal vs. non-normal distributions — source: Joos Korstanje, Towards Data Science)
As a result, statistical analysis on a non-normal dataset would result in more false positives and false negatives.
 
The Need for AI
AI-based methods, on the other hand, do not require a normal distribution and find patterns in the data, resulting in much higher accuracy. In the case of the weather in Boston, getting the forecast slightly wrong does not have a huge impact. However, in industries such as rail, oil and gas, and industrial equipment, trust in the accuracy of your results has a long-lasting impact. An impact that can only be achieved with AI.
 
Why AI for Multivariate Analysis?
The case for AI in a multivariate analysis is a bit more straightforward. Effectively, when we are looking at a single variable, we can easily plot the results on a plane, as in the temperature chart or the normal and non-normal distribution charts above.
 
However, if we are analyzing multiple points, such as the current, voltage, and wattage of a motor, or vibration over 3 axes, or the return and discharge temperatures of an HVAC system, plotting and analyzing with statistics has its limitations. Just visualizing the plot becomes impossible for a human as we go from a single plane to hyperplanes, as shown below.
 
(Image: hyperplane arrangements — source: MSRI)
 
The Need for AI
For multivariate analysis, visual inspection starts to go beyond human capabilities while technical analysis goes beyond statistical capabilities. Instead, AI can be utilized to find patterns in the underlying data in order to learn normal operation and adequately monitor for outliers. In other words, for multivariate analysis AI starts to make the impossible possible.
 
Summary
Statistics and probability have been around far longer than anyone reading this post. However, not all data is created equal, and in the world of industrial IoT, statistical techniques have crucial limitations.
 
AI-based techniques go beyond these limitations, helping to reduce false positives/negatives and often making robust analysis possible for the first time.
 
At Elipsa, we build simple, fast and flexible AI for IoT. Get free access to our Community Edition to start integrating machine learning into your applications.
 
Read more…

OMG! Three 32-bit processor cores each running at 300 MHz, each with its own floating-point unit (FPU), and each with more memory than you can throw a stick at!

In a recent column on Recreating Retro-Futuristic 21-Segment Victorian Displays, I noted that I’ve habitually got a number of hobby projects on the go. I also joked that, “one day, I hope to actually finish one or more of the little rascals.” Unfortunately, I’m laughing through my tears because some of my projects do appear to be never-ending.

For example, shortly after the internet first impinged on the popular consciousness with the launch of the Mosaic web browser in 1993, a number of humorous memes started to bounce around. It was one of these that sparked my Pedagogical and Phantasmagorical Inamorata Prognostication Engine project, which has been a “work in progress” for more than 20 years as I pen these words.

Feast your orbs on the Prognostication Engine (Click image to see a larger version — Image source: Max Maxfield)

As you can see in the image to the right, the Prognostication Engine has grown in the telling. The main body of the engine is housed in a wooden radio cabinet from 1929. My chum, Carpenter Bob, created the smaller section on the top, with great attention to detail like the matching hand-carved rosettes.

The purported purpose of this bodacious beauty is to forecast the disposition of my wife (Gina the Gorgeous) when I’m poised to leave the office and head for home at the end of the day (hence the “Prognostication” portion of the engine’s moniker). Paradoxically, should Gina ever discover the true nature of the engine, I won’t actually need it to predict her mood of the moment.

As we see, the control panels are festooned with antique knobs, toggle switches, pushbuttons, and analog meters. The knobs are mounted on motorized potentiometers, so if an unauthorized user attempts to modify their settings, they will automatically return to their original values under program control. The switches and pushbuttons are each accompanied by two LEDs, while each knob is equipped with a ring of 16 LEDs, resulting in 116 LEDs in all. Then there are 64 LEDs in the furnace and 140 LEDs in the rings surrounding the bases of the large vacuum tubes mounted on the top of the beast.

I was just reflecting on how much technology has changed over the past couple of decades. For example, today’s “smart LEDs” like WS2812Bs (a.k.a. NeoPixels) can be daisy-chained together, allowing multiple devices to be controlled using a single pin on the microcontroller. It probably goes without saying (but I’ll say it anyway) that all of the LEDs in the current incarnation of the engine are tricolor devices in the form of NeoPixels, but this was not always the case.

An early prototype of a shift register capable of driving only 13 tricolor LEDs (Click image to see a larger version — Image source: Max Maxfield)

The tricolor LEDs I was planning on using 20 years ago each required three pins to be controlled. The solution at that time would have been to implement a huge external shift register. The image to the left shows an early shift register prototype sufficient to drive only 13 “old school” devices.

And, of course, developments in computing have been even more staggering. When I commenced this project, I was using a PIC microcontroller that I programmed in BASIC. After the Arduino first hit the scene circa 2005, I migrated to using Arduino Unos, followed by Arduino Megas, that I programmed in C/C++.

One of the reasons I like the Arduino Mega is its high pin count, boasting 54 digital input/output (I/O) pins (of which 15 can be used as pulse-width modulated (PWM) outputs), along with 16 analog inputs and 4 UARTs. On the other hand, the Mega is only an 8-bit machine running at 16 MHz; it offers only 256 KB of Flash (program) memory and 8 KB of SRAM, and it doesn’t have hardware support for floating-point operations.

The thing is that the Prognostication Engine has a lot of things going on. In addition to reading the states of all the switches and pushbuttons and potentiometers, it has to control the motors behind the knobs and drive the analog meters. Currently, the LEDs are being driven with simple test patterns, but these are going to be upgraded to support much more sophisticated animation and fading effects. The engine is also constantly performing calculations of an astronomical and astrological nature, determining things like the dates of forthcoming full moons and blue moons.

In the fullness of time, the engine is going to be connected to the internet so it can monitor things like the weather. It’s also going to have its own environmental sensors (temperature, humidity, barometric pressure) and proximity detection sensors. Furthermore, the engine will also boast a suite of sound effects such that flicking a simple switch, for example, may result in myriad sounds of mechanical mechanisms performing their magic. At some stage, I’m even hoping to add things like artificial intelligence (AI) and facial recognition.

The current state of computational play (Click image to see a larger version — Image source: Max Maxfield)

Sad to relate, my existing computing solution is not capable of handling all the tasks I wish the engine to perform. The image to the right shows the current state of computational play. As we see, there is one Arduino Mega in the lower cabinet controlling the 116 LEDs on the front panel. Meanwhile, there are two Megas in the upper cabinet, with one controlling the LEDs in the furnace and the other controlling the LEDs associated with the large vacuum tubes.

Up until a couple of years ago, I was vaguely planning on adding more and more Megas. I was also cogitating and ruminating as to how I was going to get these little rascals to talk to each other so that everyone knew (a) what we were trying to do and (b) what everyone else was actually doing.

Unfortunately, the whole computational architecture was becoming unwieldy, so I started to look for another solution. You can only imagine my surprise and delight when I was first introduced to the original ShieldBuddy TC275 from the folks at Hitex (see Everybody Needs a ShieldBuddy). This little beauty, which has an Arduino Mega footprint, features the Aurix TC275 processor from Infineon. The TC275 boasts three 32-bit cores, all running at 200 MHz, each with its own floating-point unit (FPU), and all sharing 4 Mbytes of Flash and 500 Kbytes of RAM (this is a bit of a simplification, but it will suffice for now).

Processors like the Aurix are typically to be found only in state-of-the-art embedded systems and they rarely make it into the maker world. To be honest, when I first saw the ShieldBuddy TC275, I thought to myself, “Life can’t get any better than this!” Well, I was wrong, because the guys and gals at Hitex have just announced the ShieldBuddy TC375, which features an Aurix TC375 processor!

O.M.G! I just took delivery of one of these bodacious beauties, and I’m so excited that I was moved to make this video.

I don’t know where to start. As before, we have three 32-bit cores, each with its own FPU. This time, however, the cores run at 300 MHz. Although each core runs independently, they can communicate and coordinate between themselves using techniques like shared memory and software interrupts. With regard to memory, the easiest way to summarize this is as follows: The TC375 processor has:

  • 6 MB Flash ROM
  • 384 KB Data Flash

And each of the three cores has:

  • 240 KB Data Scratch-Pad RAM (DSPR)
  • 64 KB Instruction Scratch-Pad RAM (PSPR)
  • 32 KB Instruction Cache (ICACHE)
  • 16 KB Data Cache (DCACHE)
  • 64 KB DLMU RAM

Actually, there’s a lot more to this than meets the eye. For example, the main SRAMs (the DSPRs) associated with each of the cores appear at two locations in the memory map. In the case of Core 0, for example, the first location in its DSPR is located at address 0xD0000000 where it is considered to be local (i.e., it appears directly on Core 0’s local internal bus) and can be accessed quickly. However, this DSPR is also visible to Cores 1 and 2 at 0x70000000 via the main on-chip system bus, which allows them to read and write to this memory freely, but at a lower speed than Core 0. Similarly, Cores 1 and 2 access their own memories locally and each other’s memories globally.
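
Purely as an illustration of that dual mapping (a hedged sketch using raw pointers to the two addresses quoted above; real code would let the toolchain place variables rather than hard-coding addresses):

#include <stdint.h>

// Core 0's DSPR as seen from two places in the memory map.
volatile uint32_t* const core0_local  = (volatile uint32_t*)0xD0000000; // Core 0's fast local view
volatile uint32_t* const core0_global = (volatile uint32_t*)0x70000000; // same RAM via the system bus

void writer_on_core0() {
    *core0_local = 42;       // quick access over Core 0's local internal bus
}

uint32_t reader_on_core1() {
    return *core0_global;    // the same word, read (more slowly) by Core 1 or 2
}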

Meet the ShieldBuddy TC375 (Click image to see a larger version — Image source: Hitex)

As with the original ShieldBuddy TC275, if you are a professional programmer, you’ll be delighted to hear that the main ShieldBuddy TC375 toolchain is the Eclipse-based “FreeEntryToolchain” from HighTec/PLS/Infineon. This is a full-on C/C++ development environment with a source-level debugger and suchlike.

By comparison, if you are a novice programmer like your humble narrator, you’ll be overjoyed to hear that the ShieldBuddy TC375 can be programmed via the Arduino’s integrated development environment (IDE). As far as I’m concerned, this programming model is where things start to get very clever indeed.

An Arduino sketch (program) always contains two functions: setup(), which runs only one time, and loop(), which runs over and over again (the system automatically inserts a main() function while your back is turned). If you take an existing sketch and compile it for the ShieldBuddy, then it will run on Core 0 by default. You can achieve the same effect by renaming your setup() and loop() functions to be setup0() and loop0(), respectively.

Similarly, you can create setup1() and loop1() functions, which will automatically be compiled to run on Core 1, and you can create setup2() and loop2() functions, which will automatically be compiled to run on Core 2. Any of your “homegrown” functions will be compiled in such a way as to run on whichever of the cores need to use them. I know that, like Pooh, I’m a bear of little brain, but even I can wrap my poor old noggin around this usage model.
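
A minimal sketch of that usage model (the setupN()/loopN() names come from the description above; the pin number and the per-core workloads are placeholders):

void setup0() {                     // runs once on Core 0
  Serial.begin(115200);
}

void loop0() {                      // Core 0: reporting
  Serial.println("Core 0 alive");
  delay(1000);
}

void setup1() {                     // runs once on Core 1
  pinMode(13, OUTPUT);
}

void loop1() {                      // Core 1: blink an LED independently
  static bool on = false;
  on = !on;
  digitalWrite(13, on);
  delay(250);
}

void setup2() {}                    // runs once on Core 2

void loop2() {                      // Core 2: floating-point work on its own FPU
  volatile float x = sinf(micros() * 1.0e-6f);
  (void)x;                          // keep the compiler from optimizing it away
}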

There’s much, much more to this incredible board than I can cover here, but if you are interested in learning more, then may I recommend that you visit this portion of the Hitex site where you will find all sorts of goodies, including the ShieldBuddy Forum and the ShieldBuddy TC375 User Manual.

Now, if you will forgive me, I must away because I have to go and gloat over “my precious” (my ShieldBuddy TC375) and commence preparations to upgrade the Prognostication Engine by removing all of its existing processors and replacing them with a single ShieldBuddy TC375.

Actually, I just had a parting thought, which is that the Prognostication Engine’s role in life is to help me predict the future but — when I started out on this project — I would never have predicted that technology would develop so fast that I would one day have a triple-core 300 MHz processor driving “the beast.” How about you? What are your thoughts on all of this?

Originally posted here.

Read more…

Edge Impulse has joined 1% for the Planet, pledging to donate 1% of our revenue to support nonprofit organizations focused on the environment. To complement this effort, we launched the ElephantEdge competition, aiming to create the world’s best elephant tracking device to protect elephant populations that would otherwise be impacted by poaching. In a similar vein, this blog will detail how Lacuna Space, Edge Impulse, a microcontroller, and LoRaWAN can promote the conservation of endangered species by monitoring bird calls in remote areas.

Over the past years, The Things Network has worked on the democratization of the Internet of Things, building a global, crowdsourced LoRaWAN network carried by the thousands of users operating their own gateways worldwide. Thanks to Lacuna Space's satellite constellation, the network coverage goes one step further. Lacuna Space uses LEO (Low-Earth Orbit) satellites to provide LoRaWAN coverage at any point around the globe. Messages received by satellites are then routed to ground stations and forwarded to LoRaWAN service providers such as TTN. This technology can benefit several industries and applications: tracking a vessel not only in harbors but across the oceans, or monitoring endangered species in remote areas. All that with only 25 mW of power (the ISM band limit) to send a message to the satellite. This is truly amazing!

Most of these devices are typically simple, just sending a single temperature value, or other sensor reading, to the satellite - but with machine learning we can track much more: what devices hear, see, or feel. In this blog post we'll take you through the process of deploying a bird sound classification project using an Arduino Nano 33 BLE Sense board and a Lacuna Space LS200 development kit. The inferencing results are then sent to a TTN application.

Note: Access to the Lacuna Space program and dev kit is limited to a closed group at the moment. Get in touch with Lacuna Space for hardware and software access. The technical details to configure your Arduino sketch and TTN application are available in our GitHub repository.

 

Our bird sound model classifies house sparrow and rose-ringed parakeet species with 92% accuracy. You can clone our public project or build your own classification model following our tutorials, such as Recognize sounds from audio or Continuous Motion Recognition.


Once you have trained your model, head to the Deployment section, select the Arduino library and Build it.


Import the library within the Arduino IDE, and open the microphone continuous example sketch. We made a few modifications to this example sketch to interact with the LS200 dev kit: we added a new UART link and we transmit classification results only if the prediction score is above 0.8.
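
The repository has the exact changes; purely as a rough sketch of the idea (names follow Edge Impulse Arduino SDK conventions, and Serial1 stands in for the UART wired to the LS200, so check both against the real code):

const float CONFIDENCE_THRESHOLD = 0.8f;   // only forward confident results

// Called after each inference; forwards high-scoring labels over the UART.
void forward_result(const ei_impulse_result_t &result) {
  for (size_t i = 0; i < EI_CLASSIFIER_LABEL_COUNT; i++) {
    if (result.classification[i].value >= CONFIDENCE_THRESHOLD) {
      Serial1.print(result.classification[i].label);
      Serial1.print(':');
      Serial1.println(result.classification[i].value, 5);
    }
  }
}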

Connect with the Lacuna Space dashboard by following the instructions in our application’s GitHub ReadMe. Using a web tracker, you can determine the next good time a Lacuna Space satellite will be flying over your location; you can then receive the signal through your The Things Network application and view the inferencing results of the bird call classification:

    {
        "housesparrow": "0.91406",
        "redringedparakeet": "0.05078",
        "noise": "0.03125",
        "satellite": true
    }

No Lacuna Space development kit yet? No problem! You can already start building and verifying your ML models on the Arduino Nano 33 BLE Sense or one of our other development kits, test it out with your local LoRaWAN network (by pairing it with a LoRa radio or LoRa module) and switch over to the Lacuna satellites when you get your kit.

Originally posted on the Edge Impulse blog by Aurelien Lequertier - Lead User Success Engineer at Edge Impulse, Jenny Plunkett - User Success Engineer at Edge Impulse, & Raul James - Embedded Software Engineer at Edge Impulse

Read more…

 

Question 1 : So, let’s start with the obvious question. What is DevOps and why is it inevitable for today’s businesses to adopt?

Answer : DevOps, at the end of the day, if you look at it from a higher level, is really the automation of agile, a better way to perform application design, development, and deployment. This is far superior to the waterfall method, which I actually started working with and was even teaching in college back in the '80s and '90s. So, the idea is that we’re going to, in essence, continuously improve the software through an agile methodology where we get together to deal with events, versus some sort of a schedule or sequence for how software is delivered.

So, DevOps really is the ability to automate that. And so, it’s the idea that we can actually code applications, say an application that runs on Linux, we can hit a button and it automatically goes through testing, including penetration testing, security testing, stability testing, performance testing. And then moves into a continuous integration process, then moves into a continuous deployment process and then is pushed out to a particular staging area and then it’s pushed out from the staging area to a production server.

The goal of DevOps is really to remove the humans from that process, even though we haven’t done that completely yet. It is, in essence, to create a repeatable process, leveraging a number of toolsets that work together to streamline the modification and delivery of software in a way that’s going to be better quality each time the software is delivered. There are some cultural issues around DevOps as well, by the way, that are just as important: the ability to, in essence, understand that work is going to be integrated and iterative, the ability to deal with feedback directly from the testers and the operators, and the ability to flatten the organization and have a very open and interactive organization moving forward. And that’s the other side of the coin.

So people have a tendency to look at DevOps as just a toolchain with lots of cool tools, continuous integration, continuous testing, those sorts of things working together, but it’s really a combination of a toolchain, a process, and a cultural change that’s probably more important than any of the technological changes.

Question 2 : One interesting point you mentioned was about agile. As we all know, agile is a very commonly adopted methodology in the software industry, and a lot of companies are implementing agile successfully. So, as we talk about DevOps, I know it’s an extension, but how does it complement agile from a practical implementation standpoint?

Answer : Again, DevOps is really very much the automation of agile. Agile is going to take a cultural change, an organizational change, in order to make it effective. And ultimately, we’re leveraging a toolchain within DevOps as a way to automate everything that occurs in an agile environment. So, if we’re getting together on a daily basis to form a scrum and we’re talking about what needs to be changed, then typically the DevOps toolchain is where those changes are going to occur.

Read more…

Say Hello to the Seeeduino XIAO

 

As small as a postage stamp, the Seeeduino XIAO boasts a 32-bit Arm Cortex-M0+ processor running at 48 MHz with 256 KB of flash memory and 32 KB of SRAM.

A couple of months ago, I penned a column, The Worm Turns, in which I revealed that — although I’d been bravely fighting my urges — my will had crumbled and I had decided to create a display comprising a 12 x 12 = 144 array of ping pong balls, each illuminated with a tricolor WS2812B LED (a.k.a. a NeoPixel).


The author proudly presenting his 12 x 12 ping pong ball array (Click image to see a larger version — Image source: Max Maxfield)

First, I found a pack of 144 ping pong balls on Amazon for only $11. I ordered two cartons because I knew I would need some spares. Of course, this immediately tempted me to increase the size of my array to 15 x 15 = 225 ping pong balls, but I’d already ordered 150 NeoPixels in the form of five meters of 30 pixels/meter strips from Adafruit, so I decided to stick with the original plan, which we will call “Plan A” so no one gets confused.

Thank goodness I restrained myself, because the 12 x 12 array is proving to be a lot more work than I expected — a 15 x 15 array would have brought me to my knees.

The next step was to build a 2-ball prototype because I wanted to see whether it was best to attach the NeoPixel to the outside of the ball (the fast-and-easy option) or inside the ball (the slow-and-painful alternative). Although you can’t see it from the picture or from this video, there is a slight but noticeable difference in the real-world, and one method is indeed better than the other — can you guess which one?


A prototype using two ping pong balls (Click image to see a larger version — Image source: Max Maxfield)

Have you ever tried to drill 3/8” holes into 144 ping pong balls? Me neither. Over the years, I’ve learned a thing or two, and one of the things I’ve learned is that drilling holes in ping pong balls always ends in tears. Thus, I ended up cutting these holes using a small pair of curved nail scissors (there’s one long evening I’ll never see again).

The reason for using the strips is that this is the cheapest way to purchase NeoPixels with associated capacitors in the easiest-to-use form. Unfortunately, the ball-to-ball spacing (43 mm) on the board is greater than the pixel-to-pixel spacing (33 mm) on the strip. This means chopping the strip into segments, attaching each segment to its associated ping pong ball, and then connecting adjacent segments together using three wires. So, 144 x 3 = 432 short wires to strip and solder. Do you have any idea how long this takes? I do!


The Seeeduino XIAO is the size of a small postage stamp (Click image to see a larger version — Image source: Seeed Studio)

Now, you may have noticed that I was driving my 2-ball prototype with an Arduino Uno, but this is too large to be used in my array. In the past, I would have been tempted to use an Arduino Nano, which is reasonably small and not-too-expensive. On the other hand, the fact that this is an 8-bit processor running at only 16 MHz with only 32 KB of flash memory and only 2 KB of SRAM would limit the effects I could achieve.

Sometimes (rarely) the fates decide to roll the dice in one’s favor. In this case, while I was pondering which processor to employ, the folks from Seeed Studio contacted me to tell me about their Seeeduino XIAO.

OMG! This little rapscallion — which is only the size of a small postage stamp and costs only $5 — is awesome! In addition to a 32-bit Arm Cortex-M0+ processor running at 48 MHz, this bodacious beauty boasts 256 KB of flash memory and 32 KB of SRAM.

As an aside, it’s important to note that the Seeeduino XIAO’s programming connector is USB Type-C, which means you’re going to need a USB-A to USB Type-C cable.


The Seeeduino XIAO’s 11 input/output pins pack a punch (Click image to see a larger version — Image source: Seeed Studio)

In addition to its power and ground pins, the Seeeduino XIAO has 11 data pins, each of which can act as an analog input or a digital input/output (I/O). Furthermore, one of these pins can be driven by an internal digital-to-analog converter (DAC) and act as a true analog output, while the other pins can be used to provide I2C, SPI, and UART interfaces.
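
By way of a quick illustration (a hedged sketch; it assumes the XIAO’s Arduino core and that A0 is the DAC-capable pin, so check the pinout before copying):

const int DAC_PIN    = A0;     // assumed DAC-capable pin
const int SENSOR_PIN = A1;     // any other analog-capable pin

void setup() {
  analogWriteResolution(10);   // the SAMD21 DAC is 10 bits wide
}

void loop() {
  int reading = analogRead(SENSOR_PIN);   // 0..1023
  analogWrite(DAC_PIN, reading);          // echo it as a true analog voltage
  delay(10);
}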

Sad to relate, there is one small fly in the soup or a large elephant in the room (I’m feeling generous today, so I’ll let you pick the metaphor you prefer). The problem is that, although it can be powered with the same 5 V supply as the NeoPixels, the Seeeduino XIAO’s I/O pins use a 3.3 V interface, but the NeoPixels require 5 V data signals, so we need some way to convert between the two.

In the past, I would probably have used a full-up bidirectional logic level converter, like the 4-channel BOB (breakout board) from SparkFun, but I only need a single unidirectional signal, so this seemed like overkill.

Happily, I recently ran across an awesome hack on Hackaday.com that provides a simple solution requiring only a single general-purpose 1N4001 diode.


A cheap-and-cheerful voltage level converter hack (Click image to see a larger version — Image source: Max Maxfield)

The way this works is rather clever. From the NeoPixel’s data sheet we learn that a logic 1 is considered to be 0.7 * Vcc. Since we are powering our NeoPixels with 5 V, this means a logic 1 will be 0.7 * 5 = 3.5 V, which is higher than the XIAO’s 3.3 V digital output. Bummer!

Actually, if the truth be told, there is some “wriggle room” here, and the 3.3 V signal from the XIAO might work, but are we the sort of people for whom “might” is good enough? Of course we aren’t!

The solution is to add a “sacrificial NeoPixel” at the beginning of the chain, and to power this pixel via our 1N4001 diode. Since the 1N4001 has a forward voltage drop of 0.7 V, the first NeoPixel will see a Vcc of 5 – 0.7 = 4.3 V. Remember that the NeoPixel considers a logic 1 to be 0.7 * Vcc, so this first NeoPixel will accept anything above 0.7 * 4.3 = 3.01 V as being a logic 1. Meanwhile, the next NeoPixel in the chain will see the 4.3 V data signal coming out of the first NeoPixel as being a valid logic 1. Pretty clever, eh?
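
In software, the only wrinkle is remembering that pixel 0 is the sacrificial one. A minimal sketch (assuming the Adafruit_NeoPixel library and a placeholder data pin):

#include <Adafruit_NeoPixel.h>

const int PIN_DATA   = 7;          // placeholder data pin
const int NUM_PIXELS = 1 + 144;    // sacrificial pixel + the 12 x 12 array

Adafruit_NeoPixel strip(NUM_PIXELS, PIN_DATA, NEO_GRB + NEO_KHZ800);

void setup() {
  strip.begin();
  strip.setPixelColor(0, 0);       // the sacrificial pixel stays dark
  strip.show();
}

void loop() {
  for (int i = 1; i < NUM_PIXELS; i++) {   // the real array starts at index 1
    strip.setPixelColor(i, strip.Color(0, 32, 0));
  }
  strip.show();
  delay(500);
}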

I’m currently about half of the way through wiring everything up. I cannot wait to see my array light up for the first time. Once everything is up and running, I will return to regale you with more details. Until that frabjous day, I will delight to hear your comments, questions, and suggestions.

Originally posted HERE.

Read more…

Ever wanted the power of the all-new Raspberry Pi 4 Single Board Computer, but in a smaller form factor, with more options to expand the I/Os and their functions? Well, the Raspberry Pi Compute Module 4 (a.k.a. CM4) has got you covered! In this article, we’ll take a deep dive into the all-new CM4 and see what’s new and how different the latest iteration is from its predecessor, the CM3.

Introduction - The System on Module Architecture

The CM4 can be described as a ‘stripped-down’ version of the Raspberry Pi 4 Model B, containing the same processor, memory, eMMC flash memory, and built-in power regulation circuitry. The CM4 looks almost like a breakout board with two connectors underneath, hence the name “System on Module (SoM)”. However, what differentiates the CM4 (and all Compute Modules, for that matter) from the regular Raspberry Pi 4 is that the CM4 does not come equipped with any hardware I/O ports such as USB, Ethernet, and HDMI, but instead offers access to all the useful I/O pins of the CPU, which can be used to connect whatever external peripherals the designers include in their circuit designs. This offers the ultimate freedom to designers and developers to use the computing power of the Raspberry Pi 4 while reducing overall cost by including only what’s necessary in their designs.

 

What’s New In The CM4?

The key difference with the CM4, at first glance, is the form factor of the module. The previous versions, including the CM3, were designed to the DDR2-SODIMM (mechanically compatible) form factor, which looked like a laptop RAM stick. The successor, the CM4, comes in a smaller form factor with 2x 100-pin high-density connectors which can be ‘popped onto’ the receiving board.


Key Features

The CM4 comes in 32 different variants with varying Flash and RAM options and optional wireless connectivity. As with its predecessors, there is also a CM4Lite version, which does not come with built-in eMMC memory, reducing the cost of the module to a minimum of $25. However, all the variants of the CM4 are equipped with the following key features:

 
  • Broadcom BCM2711, Quad Core Cortex-A72 (Arm V8) 64-bit System on Chip, running at 1.5 GHz

  • 1/2/4/8GB LPDDR4 RAM options

  • 0(CM4Lite)/8/16/32GB of eMMC storage options (up to 100 MB/s bandwidth)

  • Smaller footprint of 55mm x 40mm x 4.7mm (w x l x h)

  • Supports H.265 (4Kp60 decode) and H.264 (1080p60 decode, 1080p30 encode); OpenGL ES 3.0 graphics

  • Radio Module

  • 2.4/5GHz IEEE 802.11 b/g/n/ac Wireless (optional)

  • Bluetooth 5.0 BLE

  • On-board selector to switch between PCB trace antenna and external antenna

  • On-board Gigabit Ethernet PHY supporting IEEE 1588 standard

  • 1x PCI Express Gen2.0 lane (5Gbps)

  • 2x HDMI 2.0 ports (up to 4K@60fps)

  • 1x USB 2.0 port (480 Mbps)

  • 28x GPIO pins, supporting either 1.8V or 3.3V logic levels, along with the following peripheral options:

  • 2x PWM channels

  • 3x GPCLK

  • 6x UART (Serial)

  • 6x I2C

  • 5x SPI

  • 1x SDIO interface

  • 1x DPI

  • 1x PCM

  • MIPI DSI (Serial Display)

  • 1x 2-lane MIPI DSI display port

  • 1x 4-lane MIPI DSI display port

  • MIPI CSI-2 (Serial Camera)

  • 1x 2-lane MIPI CSI camera port

  • 1x 4-lane MIPI CSI camera port

  • 1x +5V Power Supply Input (on-board regulator circuitry available)

 

The Applications - DIY? Industrial?

The CM4 can be integrated into end products that are designed and prototyped using the full-size Raspberry Pi 4 SBC. This allows the removal of unused ports, peripherals, and components, which reduces the overall cost and complexity. Application ideas are therefore virtually limitless, ranging all the way from DIY projects such as the PiBoy to industrial IoT designs such as integrated home automation systems, small-scale hosting servers, data exchange hubs, and portable electronics which require the processing power offered by the CM4, all while maintaining a smaller form factor and lower power consumption. Compute Module clusters such as the Turing Pi 2, which harnesses the power of multiple Compute Modules, are also an option with this powerful, yet small, System on Module, the Raspberry Pi CM4.

 

How Can I Use Upswift Solutions On My Compute Module 4 Based Design?

Upswift offers hassle-free management solutions for all Linux-based embedded systems (CM4 included), providing a one-click solution to monitor, control, and manage all your connected devices from one place.

Originally posted HERE.

Read more…

Everybody Needs a ShieldBuddy

 

Arduino Mega footprint; three 32-bit cores all running at 200 MHz; 4 Mbytes of Flash and 500 Kbytes of RAM; works with the Arduino IDE; what’s not to love?

I tend to have a lot of hobby projects on the go at any particular time. Occasionally, I even manage to finish one. More rarely, one actually works.


 Awesome Audio Reactive Artifact (Click image to see a larger version — Image source: Max Maxfield)

I also have a soft spot for 8-bit microprocessors and microcontrollers. Thus, many of my hobby projects are based on the Arduino Nano, Uno, or Mega platforms.

Take my Awesome Audio Reactive Artifact, for example. This little rascal is currently powered using an Arduino Uno, which is driving 145 tricolor NeoPixels. In turn, these NeoPixels are mounted under 31 defunct vacuum tubes (see also Awesome Audio-Reactive Artifact meets BirmingHAMfest).

The Awesome Audio Reactive Artifact also includes an ADMP401-based MEMS Microphone breakout board (BOB), which costs $10.95 from the guys and gals at SparkFun. In turn, this feeds a cheap-and-cheerful MSGEQ7 audio spectrum analyzer chip, which relieves the Arduino of a lot of processing pain (see also Using MSGEQ7s In Audio-Reactive Projects).
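
For the curious, reading the MSGEQ7 is a simple strobe-and-read affair. A rough sketch (the pin choices are placeholders; the 36 µs settling delay comes from the MSGEQ7 datasheet):

const int PIN_RESET  = 4;     // placeholder pins
const int PIN_STROBE = 5;
const int PIN_OUT    = A0;
int bands[7];                 // 63 Hz ... 16 kHz band magnitudes

void setup() {
  pinMode(PIN_RESET, OUTPUT);
  pinMode(PIN_STROBE, OUTPUT);
  digitalWrite(PIN_RESET, LOW);
  digitalWrite(PIN_STROBE, HIGH);
}

void loop() {
  digitalWrite(PIN_RESET, HIGH);        // reset the band multiplexer
  delayMicroseconds(1);
  digitalWrite(PIN_RESET, LOW);
  for (int i = 0; i < 7; i++) {
    digitalWrite(PIN_STROBE, LOW);      // select the next band
    delayMicroseconds(36);              // let the output settle
    bands[i] = analogRead(PIN_OUT);
    digitalWrite(PIN_STROBE, HIGH);
  }
}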

 

Countdown Timer (Click image to see a larger version — Image source: Max Maxfield)

Sad to relate, 8-bit Arduinos sometimes run out of steam. Consider my Countdown Timer, for example, whose task it is to display the years (YY), months (MM), days (DD), hours (HH), minutes (MM), and seconds (SS) to my 100th birthday (see also Yes! My Countdown Timer is Alive!).

This little scamp employs 12 Lixie displays, each of which contains 20 NeoPixels, which gives us 240 NeoPixels in all. As the sophistication of the effects I was trying to implement increased, so did my processing requirements. Thus, I decided to use a Teensy 3.6, which features a 32-bit 180 MHz ARM Cortex-M4 processor with a floating-point unit. Furthermore, the Teensy 3.6 boasts 1 Mbyte of Flash memory for code, along with 256 Kbytes of RAM for dynamic data and variables.

Prognostication Engine (Click image to see a larger version — Image source: Max Maxfield)

All of which brings us to the pièce de résistance in the form of my Pedagogical and Phantasmagorical Prognostication Engine (see also The Color of Prognostication). This bodacious beauty sports two knife switches, eight toggle switches, ten pushbutton switches, five motorized potentiometers, six analog meters, and a variety of sensors (temperature, barometric pressure, humidity, proximity). All of this requires a bunch of analog and digital general-purpose input/output (GPIO) pins.

Furthermore, in addition to a wealth of weird, wonderful, and wide-ranging sound effects, the engine is equipped with 354 NeoPixels. These could potentially be daisy-chained from a single pin, although I ended up partitioning them into five strands. More importantly, the various effects require a lot of processing and memory.

When things finally started to come together on this project, I was initially thinking of using an Arduino Mega to power the beast, mainly because it has 54 digital pins and 16 analog inputs. On the downside, we have to remember that this is only an 8-bit processor gamely running at 16 MHz with a scant 256 Kbytes of Flash memory and 8 Kbytes of RAM. Furthermore, the Mega doesn’t have a floating-point unit (FPU), which means that if you need to use floating-point operations, this will really impact the performance of your programs.

The tri-core ShieldBuddy (Click image to see a larger version — Image source: Hitex)

But turn that frown upside down into a smile, because the boffins at Hitex (hitex.com) have taken the Arduino Mega form factor and slapped an awesome Infineon Aurix TC275 processor down on it.

These processors are typically found only in state-of-the-art embedded systems. They rarely make it into the maker world (like the somewhat disheveled scientist who is desperately in need of a haircut says in the movie Independence Day: “They don’t let us out very often”).

The result is called the ShieldBuddy. As you can see in this video, I just took delivery of my first ShieldBuddy, and I’m really rather excited (I say “first” because I have no doubt this is going to be one of many).

So, what makes the ShieldBuddy so special? Well, how about the fact that the TC275 boasts three independent 32-bit cores, all running at 200 MHz, each with its own FPU, and all sharing 4 Mbytes of Flash and 500 Kbytes of RAM (actually, this is a bit of a simplification, but it will suffice for now). 

There’s no need for you to be embarrassed — I’m squealing in excitement alongside you. Now, if you are a professional programmer, you’ll be delighted to hear that the main ShieldBuddy toolchain is the Eclipse-based “FreeEntryToolchain” from HighTec/PLS/Infineon. This is a full-on C/C++ development environment with a source-level debugger and suchlike. 

But how about if — like me — you aren’t used to awesomely powerful (and correspondingly complicated) Eclipse-based toolchains? Well, there’s no need to worry, because the guys and gals at Hitex also have a solution for the Arduino’s integrated development environment (IDE). 

Sit up straight and pay attention, because this is where things start to get really clever. In addition to any functions you create yourself, an Arduino sketch (program) always contains two functions: setup(), which runs only one time, and loop(), which runs over and over again. 

Now, remember that the ShieldBuddy has three processor cores, which we might call Core 0, Core 1, and Core 2. Well, you can take your existing sketches and compile/upload them for the ShieldBuddy, and — by default — they will run on Core 0. 

You could achieve the same effect by renaming your setup() function to be setup0(), and renaming your loop() function to be loop0(), which explicitly tells the compiler to target these functions at Core 0. 

The point is that you can also create setup1() and loop1() functions, which will automatically be compiled to run on Core 1, and you can create setup2() and loop2() functions, which will automatically be compiled to run on Core 2. Any of your remaining functions will be compiled in such a way as to run on whichever of the cores need to use them. 

Although each core runs independently, they can communicate between themselves using techniques like shared memory. Also, you can use interrupts to coordinate and communicate between cores. 
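
As a minimal illustration of the shared-memory idea (not Hitex’s actual API; just a simple volatile flag written by one core and polled by another):

volatile bool newDataReady = false;   // both variables live in shared memory
volatile int  sharedValue  = 0;

void setup0() {}                      // Core 0: producer

void loop0() {
  sharedValue  = analogRead(A0);      // publish a fresh reading
  newDataReady = true;                // then raise the flag
  delay(100);
}

void setup1() {}                      // Core 1: consumer

void loop1() {
  if (newDataReady) {
    newDataReady = false;
    int value = sharedValue;          // consume the published reading
    (void)value;                      // ...do something useful with it here
  }
}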

And things just keep on getting better and better, because it turns out that a NeoPixel library is already available for the ShieldBuddy. 

I’m just about to start experimenting with this little beauty. All I can say is that everybody needs a ShieldBuddy and my ShieldBuddy is my new best friend (sorry Little Steve). How about you? Could any of your projects benefit from the awesome processing power provided by the ShieldBuddy? 

Originally posted HERE.

Read more…

On-chip UHD SS–MSCs as a device-unitized power source. Credit: Professor Sang-Young Lee, UNIST

by Ulsan National Institute of Science and Technology

A tiny microsupercapacitor (MSC) that is as small as the width of a person's fingerprint and can be integrated directly with an electronic chip has been developed. This has attracted major attention as a novel technology to lead the era of Internet of Things (IoT) since it can be driven independently when applied to individual electronic components.

 

Through the study, Professor Sang-Young Lee and his research team in the School of Energy and Chemical Engineering at UNIST have unveiled a new class of ultrahigh areal number density solid-state MSCs (UHD SS–MSCs) on a chip via electrohydrodynamic (EHD) jet printing. According to the research team, this is the first study to exploit EHD jet printing for MSCs.

A supercapacitor (SC), also known as an ultracapacitor, can store much more energy than ordinary capacitors. The benefits of supercapacitors include higher power delivery and longer cycle life compared to lithium-based secondary batteries. In particular, they can be produced as small as the width of a person's fingerprint via a semiconductor manufacturing process, and thus are also applicable to wearables and Internet of Things (IoT) devices.

However, because the heat produced in the manufacturing process may degrade the electrical characteristics of the supercapacitor, it has been difficult to connect supercapacitors directly to electronic components. In addition, the fabrication method that combines supercapacitors with electronic components via the inkjet printing technique has the disadvantage of lower precision.

The research team solved this issue using EHD jet printing, a high-resolution patterning technique used in microelectronics. EHD jet printing uses electrode and electrolyte inks for printing, similar to conventional inkjet printing, yet it controls the printed liquid with an electric field.

"We were able to produce up to 54.9 unit cells per square centimeter (cm2) via electro-hydrodynamic jet printing technique, and thus the output of 65.9 volts (V) was achieved in the same area," says Kwonhyung Lee (Combined M.S/Ph.D. of Energy and Chemical Engineering, UNIST), the first author of the study.

The team also succeeded in fabricating 36 unit cells on a chip (area = 8.0 mm × 8.2 mm, 54.9 cells cm−2) with an areal operating voltage (65.9 V cm−2) that lies far beyond those of previously reported MSCs fabricated by printing techniques. Besides, upon exposure to high temperature (80 degrees C), these cells maintained normal cyclic voltammetry (CV) profiles, proving they can withstand the excessive heat generated during the operation of actual electronic components. In addition, these cells can provide customized power supplies, as they can be connected either in series or in parallel.

"In this study, we have demonstrated on-chip UHD SS–MSCs fabricated via EHD jet printing," says Professor Lee. "The on-chip UHD SS–MSCs presented here hold great promise as a new platform technology for miniaturized monolithic power sources with customized design and tunable electrochemical properties."

Originally posted HERE.

 
Read more…

PYNQ is great for accelerating Python applications in programmable logic. Let's take a look at how we can use it with the OpenMV camera.

Things used in this project

Hardware:

  • Avnet Ultra96-V2 (Can also use V1 or V3)
  • OpenMV Cam M7
  • Avnet Ultra96 (Can use V1 or V2)

Software:

  • Xilinx PYNQ Framework

Introduction

Image processing is required for a range of applications from vision guided robotics to machine vision in industrial applications.

In this project we are going to look at how we can fuse the OpenMV camera with the Ultra96 running PYNQ. This will allow our PYNQ application to offload some image processing to the camera. Doing so will provide a higher-performance system and open up the Ultra96 running PYNQ to the OpenMV ecosystem.

 

What Is the OpenMV Camera 

The OpenMV camera is a low-cost machine vision camera which is developed using Python. Thanks to this architecture, we can offload some of the image processing to the camera itself, meaning the image frames received by our Ultra96 already have faces identified, eyes tracked, or Sobel filtering applied; it all depends on how we set up the OpenMV Camera.

As the OpenMV camera has been designed to be extensible, it provides 10 external I/O pins which can be used to drive external sensors. These pins support a range of interfaces, from UART to SPI, I2C, and PWM. Of course, the PWM is very useful for driving servos.

One very useful feature of the OpenMV camera is its LEDs: mine (an OpenMV M7) provides a tri-colour LED, which can be used to output red, green, and blue, plus a separate IR LED. As the sensor is IR-sensitive, this can be useful for low-light performance.

OpenMV Camera

How Does the OpenMV Camera Work

The OpenMV Cam uses MicroPython to control the imager and output frames over the USB link. MicroPython is intended for use on microcontrollers and is based on Python 3.4. To use the OpenMV camera, we first need to write a MicroPython script which configures the camera for the given algorithm we wish to implement. We then execute this script by uploading and running it over the USB link.

This means we need some OpenMV APIs and libraries on a host machine to communicate with the OpenMV Camera.

To develop the script, we want to be able to ensure it works, which is where the OpenMV IDE comes into its own: it allows us to develop and test the script which we later use in our Ultra96 application.

We can develop this script using a Windows, Mac, or Linux desktop.

 

Creating the OpenMV Script using the OpenMV IDE

To get started with the OpenMV IDE, we first need to download and install it. Once it is installed, the next step is to connect our OpenMV camera to it using the USB link and run a script on it.

To get started, we can run the provided hello world example, which configures the camera to output a standard RGB image at QVGA resolution. On the right-hand side of the IDE, you will be able to see the images output from the camera.

 

We can use this IDE to develop scripts for the OpenMV camera such as the one below which detects and identifies circles in the captured image.

Note the frame rate is lower when the camera is connected to the IDE.

 

We can use the scripts developed here in our Ultra96 PYNQ implementation. Let's take a look at how we set up the Ultra96 and PYNQ.

Setting Up the Ultra96 PYNQ Image

The first thing we need to do, if we have not already done so, is to download the PYNQ image and create a PYNQ SD card so we can run the PYNQ framework on the Ultra96.

As we want to use the Xilinx image processing overlay we should download the Ultra96 PYNQ v2.3 image.

Once you have this image, creating an SD card is very simple: extract the ISO image from the compressed file and write it to an SD card using a program such as Etcher or Win32 Disk Imager.

With an SD card available, we can then boot the Ultra96 and connect to the PYNQ framework using one of the following:

  • Use a USB Ethernet connection over the MicroUSB (upstream USB connection).
  • Connect via WiFi.
  • Use the Ultra96 as a single-board computer and connect a monitor, keyboard and mouse.

For this project I used the USB Ethernet connection.

The next thing to do is to ensure we have the necessary overlays to be able to accelerate image processing functions into the programmable logic. To do this, we need to install the PYNQ computer vision overlay.

Downloading the Image Processing Overlay

Installing this overlay is very straightforward. Open a browser window and connect to 192.168.3.1 (the USB Ethernet address). This will open a login page for the Jupyter notebooks; the password is xilinx.

 

Upon logging in you will see the following folders and scripts.

 

Click on New and select Terminal; this will open a new terminal in a browser window. To download and install the PYNQ computer vision overlay we enter the following command:

sudo pip3 install --upgrade git+https://github.com/Xilinx/PYNQ-ComputerVision.git
 

Once the installation completes, if you look back at the Jupyter home page you will see a new directory called pynqOpenCV.

 

Using these Jupyter notebooks we can test the image processing performance when we accelerate OpenCV functions into the programmable logic.

 

As can be seen in the image above, the hardware-accelerated implementation typically greatly outperforms the same algorithm running in software.

Of course, we can also call this overlay from our own Jupyter notebooks, as sketched below.
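A hedged sketch of what such a notebook cell can look like follows. The xv2 module path and the idea that it mirrors the cv2 signature follow the pattern used in the PYNQ-ComputerVision example notebooks; treat the exact module and function names as assumptions and confirm them against the notebooks in the pynqOpenCV directory.

import numpy as np
import cv2                                         # software OpenCV for comparison
import pynq_cv.overlays.xv2Filter2DDilate as xv2   # assumed module name; importing loads the overlay

img = cv2.imread("frame.jpg", cv2.IMREAD_GRAYSCALE)
kernel = np.ones((3, 3), np.float32) / 9.0         # simple box-blur kernel

sw = cv2.filter2D(img, -1, kernel)                 # runs on the ARM cores
hw = xv2.filter2D(img, -1, kernel)                 # runs in the programmable logic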

 

Setting Up the OpenMV Camera in PYNQ

The next step is to configure the Ultra96 PYNQ instance so it can control the OpenMV camera using its APIs. We can obtain these by cloning the OpenMV git repository, using the command below in a terminal window on the Ultra96.

git clone https://github.com/openmv/openmv
 

Once this is downloaded we need to move the file pyopenmv.py from openmv/tools to /usr/lib/python3.6.
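From a terminal on the Ultra96 this is a single command, for example:

sudo mv openmv/tools/pyopenmv.py /usr/lib/python3.6/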

This will allow us to control the OpenMV camera from within our Jupyter applications.

To be able to do this we need to know which serial port the OpenMV camera enumerates as. This will generally be ttyACM0 or ttyACM1; we can find out by listing the /dev directory.
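For example:

ls /dev/ttyACM*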

 

Now we are ready to begin working with the OpenMV camera in our applications. Let's take a look at how we set it up in our Jupyter scripts.

 

Initial Test of OpenMV Camera

The first thing we need to do in a new Jupyter notebook is import the necessary packages. This includes pyopenmv, which we just installed.

We will also be importing numpy, as the image is returned as a numpy array, so that we can display it using numpy functionality.

import pyopenmv
import time
import sys
import numpy as np

The first thing we need to do is define the script we developed in the IDE. For "first light" with PYNQ and the OpenMV camera we will use the hello-world script to obtain a simple image.

script = """
# Hello World Example
#
# Welcome to the OpenMV IDE! Click on the green run arrow button below to run the script!

import sensor, image, time
import pyb

sensor.reset()                      # Reset and initialize the sensor.
sensor.set_pixformat(sensor.RGB565) # Set pixel format to RGB565 (or GRAYSCALE)
sensor.set_framesize(sensor.QVGA)   # Set frame size to QVGA (320x240)
sensor.skip_frames(time = 2000)     # Wait for settings to take effect.
clock = time.clock()                # Create a clock object to track the FPS.

red_led = pyb.LED(1)
red_led.off()
red_led.on()

while(True):
    clock.tick()
    img = sensor.snapshot()         # Take a picture and return the image.
"""

Once the script is defined, the next thing we need to do is connect to the OpenMV camera and download the script to it.

 

portname = "/dev/ttyACM0"
connected = False

pyopenmv.disconnect()
for i in range(10):
    try:
        # Open the CDC port with a short timeout while connecting.
        pyopenmv.init(portname, baudrate=921600, timeout=0.050)
        connected = True
        break
    except Exception as e:
        connected = False
        time.sleep(0.100)  # was a bare sleep(); time is imported above

if not connected:
    print("Failed to connect to OpenMV's serial port.\n"
          "Please install OpenMV's udev rules first:\n"
          "sudo cp openmv/udev/50-openmv.rules /etc/udev/rules.d/\n"
          "sudo udevadm control --reload-rules\n\n")
    sys.exit(1)

# Set a higher timeout after connecting for lengthy transfers.
pyopenmv.set_timeout(1*2) # SD cards can cause big hiccups.
pyopenmv.stop_script()
pyopenmv.enable_fb(True)
pyopenmv.exec_script(script)

Finally, once the script has been downloaded and is executing, we want to read out the frame buffer. The cell below reads out the frame buffer and saves it as a JPEG file in the PYNQ file system.

 

import numpy as np
from PIL import Image
from matplotlib import pyplot as plt

running = True
while running:
    fb = pyopenmv.fb_dump()
    if fb is not None:
        img = Image.fromarray(fb[2], 'RGB')
        img.save("frame.jpg")  # save the frame to the PYNQ file system
        plt.imshow(img)        # display the frame in the notebook
        plt.show()
        time.sleep(0.100)

 

When I ran this script, the first-light image below was received, showing me working in my office.

 

Having achieved this, the next step is to start working with more advanced scripts in the PYNQ Jupyter notebook. Using the same approach as above we can redefine the script for different processing, for example Canny edge detection:

script = """
import sensor, image, time

sensor.reset()                         # Initialize the camera sensor.
sensor.set_pixformat(sensor.GRAYSCALE) # or sensor.RGB565
sensor.set_framesize(sensor.QQVGA)     # or sensor.QVGA (or others)
sensor.skip_frames(time = 2000)        # Let new settings take effect.
sensor.set_gainceiling(8)

clock = time.clock()                   # Tracks FPS.

while(True):
    clock.tick()                       # Track elapsed milliseconds between snapshots().
    img = sensor.snapshot()            # Take a picture and return the image.

    # Use Canny edge detector
    img.find_edges(image.EDGE_CANNY, threshold=(50, 80))

    # Faster simpler edge detection
    #img.find_edges(image.EDGE_SIMPLE, threshold=(100, 255))

    print(clock.fps()) # Note: the OpenMV Cam runs about half as fast while connected to the IDE.
"""

Running the Canny edge detection script while imaging a MiniZed board produced the result below.

 

Alternatively, we can extract keypoints from an image and use them to track the object in subsequent images:

script = """
import sensor, time, image

# Reset sensor
sensor.reset()

# Sensor settings
sensor.set_contrast(3)
sensor.set_gainceiling(16)
sensor.set_framesize(sensor.VGA)
sensor.set_windowing((320, 240))
sensor.set_pixformat(sensor.GRAYSCALE)
sensor.skip_frames(time = 2000)
sensor.set_auto_gain(False, value=100)

def draw_keypoints(img, kpts):
    if kpts:
        print(kpts)
        img.draw_keypoints(kpts)
        img = sensor.snapshot()
        time.sleep(1000)

kpts1 = None
# NOTE: uncomment to load a keypoints descriptor from file
#kpts1 = image.load_descriptor("/desc.orb")
#img = sensor.snapshot()
#draw_keypoints(img, kpts1)

clock = time.clock()
while (True):
    clock.tick()
    img = sensor.snapshot()
    if kpts1 is None:
        # NOTE: By default find_keypoints returns multi-scale keypoints extracted from an image pyramid.
        kpts1 = img.find_keypoints(max_keypoints=150, threshold=10, scale_factor=1.2)
        draw_keypoints(img, kpts1)
    else:
        # NOTE: When extracting keypoints to match the first descriptor, we use normalized=True to extract
        # keypoints from the first scale only, which will match one of the scales in the first descriptor.
        kpts2 = img.find_keypoints(max_keypoints=150, threshold=10, normalized=True)
        if kpts2:
            match = image.match_descriptor(kpts1, kpts2, threshold=85)
            if match.count() > 10:
                # If we have at least n "good matches"
                # Draw bounding rectangle and cross.
                img.draw_rectangle(match.rect())
                img.draw_cross(match.cx(), match.cy(), size=10)
            print(kpts2, "matched:%d dt:%d"%(match.count(), match.theta()))
            # NOTE: uncomment if you want to draw the keypoints
            #img.draw_keypoints(kpts2, size=KEYPOINTS_SIZE, matched=True)

    # Draw FPS
    img.draw_string(0, 0, "FPS:%.2f"%(clock.fps()))
"""

Circle Detection

 

import sensor, image, time

sensor.reset()
sensor.set_pixformat(sensor.RGB565) # grayscale is faster
sensor.set_framesize(sensor.QQVGA)
sensor.skip_frames(time = 2000)
clock = time.clock()

while(True):
    clock.tick()
    img = sensor.snapshot().lens_corr(1.8)

    # Circle objects have four values: x, y, r (radius), and magnitude. The
    # magnitude is the strength of the detection of the circle. Higher is
    # better...
    # `threshold` controls how many circles are found. Increase its value
    # to decrease the number of circles detected...
    # `x_margin`, `y_margin`, and `r_margin` control the merging of similar
    # circles in the x, y, and r (radius) directions.
    # r_min, r_max, and r_step control what radii of circles are tested.
    # Shrinking the number of tested circle radii yields a big performance boost.
    for c in img.find_circles(threshold = 2000, x_margin = 10, y_margin = 10, r_margin = 10,
            r_min = 2, r_max = 100, r_step = 2):
        img.draw_circle(c.x(), c.y(), c.r(), color = (255, 0, 0))
        print(c)

    print("FPS %f" % clock.fps())

 

 

 

This ability to offload processing either to the OpenMV camera or to the Ultra96 programmable logic running PYNQ gives the system designer maximum flexibility.

 

Wrap Up

Coupling the OpenMV camera with the PYNQ computer vision libraries, along with other overlays such as the Kalman filter and base overlays, lets us implement algorithms for vision-guided robotics. Using the base overlay and its input/output processors also enables us to communicate with the lower-level drives, interfaces and other sensors required to implement such a solution.

Originally posted here.

 

Read more…

Will We Ever Get Quantum Computers?

In a recent issue of IEEE Spectrum, Mikhail Dyakonov makes a pretty compelling argument that quantum computing (QC) isn't going to fly anytime soon. Now, I'm no expert on QC, and there sure is a lot of money being thrown at the problem by some very smart people, but having watched from the sidelines QC seems a lot like fusion research. Every year more claims are made, more venture capital gets burned, but we don't seem to get closer to useful systems.

Consider D-Wave Systems. They've been trying to build a QC for twenty years, and indeed do have products more or less on the market, including, it's claimed, one with 1024 q-bits. But there's a lot of controversy about whether their machines are quantum computers at all, or whether they offer any speedup over classical machines. One would think that if a 1K q-bit machine really did work the press would be all abuzz, and we'd be hearing constantly of new incredible results. Instead, the machines seem to disappear into research labs.

Mr. Dyakonov notes that optimistic people expect useful QCs in the next 5-10 years; those less sanguine expect 20-30 years, a prediction that hasn't changed in two decades. He thinks a window of many decades to never is more realistic. Experts think that a useful machine, one that can do the sort of calculations your laptop is capable of, will require between 1000 and 100,000 q-bits. To me, this level of uncertainty suggests that there is a profound lack of knowledge about how these machines will work and what they will be able to do.

According to the author, a 1000 q-bit machine can be in 2^1000 states (a classical machine with N transistors can be in only 2^N states), which is about 10^300, or more than the number of sub-atomic particles in the universe. At 100,000 q-bits we're talking 10^30,000, a mind-boggling number.

Because of noise, expect errors. Some theorize that those errors can be eliminated by adding q-bits, on the order of 1000 to 100,000 additional per q-bit. So a useful machine will need at least millions, or perhaps many orders of magnitude more, of these squirrelly microdots that are tamed only by keeping them at 10 millikelvin.

A related article in Spectrum mentions that a committee of prestigious researchers, tasked with assessing the probability of success with QC, concluded that:

"[I]t is highly unexpected" that anyone will be able to build a quantum computer that could compromise public-key cryptosystems (a task that quantum computers are, in theory, especially suitable for tackling) in the coming decade. And while less-capable "noisy intermediate-scale quantum computers" will be built within that time frame, "there are at present no known algorithms/applications that could make effective use of this class of machine," the committee says."

I don't have a dog in this fight, but am relieved that useful QC seems to be no closer than The Distant Shore (to quote Jan de Hartog, one of my favorite writers). If it were feasible to easily break encryption schemes, banking and other systems could collapse. I imagine Blockchain would fail as hash algorithms became reversible. The resulting disruption would not be healthy for our society.

On the other hand, Bruce Schneier's article in the March issue of IEEE Computing Edge suggests that QC won't break all forms of encryption, though he does think a lot of our current infrastructure will be vulnerable. The moral: if and when QC becomes practical, expect chaos.

I was once afraid of quantum computing, as it involves mechanisms that I'll never understand. But then I realized those machines will have an API. Just as one doesn't need to know how a computer works to program in Python, we'll be insulated from the quantum horrors by layers of abstraction.

Originally posted here

Read more…

A scientist from Russia has developed a new neural network architecture and tested its learning ability on the recognition of handwritten digits. The intelligence of the network was amplified by chaos, and the classification accuracy reached 96.3%. The network can be used in microcontrollers with a small amount of RAM and embedded in such household items as shoes or refrigerators, making them 'smart.' The study was published in Electronics.

Today, the search for new neural networks that can operate on microcontrollers with a small amount of random access memory (RAM) is of particular importance. For comparison, in ordinary modern computers random access memory is measured in gigabytes. Although microcontrollers possess significantly less processing power than laptops and smartphones, they are smaller and can be interfaced with household items. Smart doors, refrigerators, shoes, glasses, kettles and coffee makers create the foundation for so-called ambient intelligence. The term denotes an environment of interconnected smart devices.

An example of ambient intelligence is a smart home. Devices with limited memory are not able to store a large number of keys for secure data transfer, or arrays of neural network weights. This prevents the introduction of artificial intelligence into Internet of Things devices, as they lack the required computing power. However, artificial intelligence would allow smart devices to spend less time on analysis and decision-making, better understand a user and assist them in a friendly manner. Therefore, many new opportunities can arise in the creation of ambient intelligence, for example in the field of health care.

Andrei Velichko from Petrozavodsk State University, Russia, has created a new neural network architecture that allows efficient use of small volumes of RAM and opens opportunities for introducing low-power devices to the Internet of Things. The network, called LogNNet, is a feed-forward neural network in which the signals are directed exclusively from input to output. It uses deterministic chaotic filters for the incoming signals. The system randomly mixes the input information, but at the same time extracts valuable data that is invisible initially; a similar mechanism is used by reservoir neural networks. To generate chaos, a simple logistic map equation is applied, where the next value is calculated from the previous one. The equation is commonly used in population biology and as an example of a simple equation for producing a sequence of chaotic values. In this way, the simple equation yields an effectively endless sequence of chaotic values that the processor can recompute on demand, so the network architecture can use them while consuming less RAM.
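To make the idea concrete, here is a small Python sketch of the underlying trick (my illustration, not the author's code): the logistic map x[n+1] = r * x[n] * (1 - x[n]) regenerates a reproducible stream of chaotic values on demand, so a weight matrix mixed from it never has to be stored in RAM.

def logistic_weights(r=3.9, x0=0.5):
    # Yield an endless, reproducible stream of chaotic values in (0, 1).
    x = x0
    while True:
        x = r * x * (1.0 - x)
        yield x

def chaotic_project(inputs, n_outputs, r=3.9, x0=0.5):
    # Mix the input vector through logistic-map "weights" computed on the fly.
    gen = logistic_weights(r, x0)
    return [sum(v * next(gen) for v in inputs) for _ in range(n_outputs)]

features = chaotic_project([0.1, 0.7, 0.3], n_outputs=4)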


The scientist tested his neural network on handwritten digit recognition using the MNIST database, which is considered the standard for training neural networks to recognize images. The database contains more than 70,000 handwritten digits; sixty thousand of these are intended for training the neural network, and another 10,000 for testing it. The more neurons and chaos in the network, the better it recognized images. The maximum accuracy achieved by the network is 96.3%, while the architecture uses no more than 29 KB of RAM. In addition, LogNNet demonstrated promising results at very small RAM sizes, in the range of 1-2 KB. A miniature controller such as the ATmega328, which can be embedded into a smart door or even a smart insole, has approximately this amount of memory.

"Thanks to this development, new opportunities for the Internet of Things are opening up, as any device equipped with a low-power miniature controller can be powered with artificial intelligence. In this way, a path is opened for intelligent processing of information on peripheral devices without sending data to cloud services, and it improves the operation of, for example, a smart home. This is an important contribution to the development of IoT technologies, which are actively researched by the scientists of Petrozavodsk State University. In addition, the research outlines an alternative way to investigate the influence of chaos on artificial intelligence," said Andrei Velichko.

Originally posted HERE.

by Russian Science Foundation

Image Credit: Andrei Velichko

 

 

 

 

Read more…


 

CLICK HERE TO DOWNLOAD

This complete guide is a 212-page eBook and a must-read for business leaders, product managers and engineers who want to implement, scale and optimize their business with IoT communications.

Whether you want to attempt initial entry into the IoT sphere or expand existing deployments, this book can help with your goals, providing a deep understanding of all aspects of IoT.

CLICK HERE TO DOWNLOAD

Read more…

 


When I work on a development project, I've become a big fan of using development boards that have Arduino headers on them. The vast number of shields that easily connect to these headers is phenomenal. The one problem I've always had, though, is that I often need a breadboard to test a circuit or integrate a sensor that just isn't available in an Arduino shield format. The result is a wiring mess that can lead to loose or missing connections.

I was recently talking with Max Maxfield and he pointed me to a really cool adapter board designed to remove these wiring jumpers to a breadboard. Max wrote about this board here, but I'm so excited about it that I thought I'd add my two cents as well.

The BreadShield, which can be purchased at https://www.crowdsupply.com/loser/breadshield, adapts the Arduino headers to a linear set of header pins designed to be plugged into a breadboard. As you can see in the image below, this completely removes the extra jumpers that one would normally require.


When I heard about these, I purchased three assembled units for about $28, which saves me the time of assembling the adapters myself. DIY assembly runs about $15 for a set of three boards. Either way, a great price to remove a bunch of wires from the workbench.

Now I’m still waiting for mine to arrive, but from the image, you can see that the one challenge to using these adapters might be adapting the height of your breadboard to your hardware stack. While this could be an issue, I keep various length spacers around the office so that I can adapt board heights and undoubtedly there will be a length that will ensure these line up properly.

You can view the original post here.

Read more…

In-Circuit Emulators

Does anyone remember in-circuit emulators (ICEs)?

Around 1975 Intel came out with the 8080 microprocessor. This was a big step up from the 8008, for the 8080 had a 64k address space, a reasonable ISA, and an honest stack pointer (the 8008 had a hardware stack a mere 7 levels deep). They soon released the MDS 800, a complete computer based on the 8080, with twin 8" floppy drives. An optional ICE was available; this was, as I recall, a two-board set that was inserted in the MDS. A ribbon cable from those boards went to a small pod that could be plugged into the 8080 CPU socket of a system an engineer was developing.

The idea was that the MDS could act as the CPU of the device under test (DUT). It was rather like today's JTAG debuggers in that one could run code on the DUT, set breakpoints, collect trace data, and generally debug the hardware and software. For there was no JTAG then.

We had been developing microprocessor-based products using the 8008, but quickly transitioned to the 8080 for the increased computational power and address space. I begged my boss for the money for an MDS, which was $20k (about $100k in today's dollars), and to my surprise he let us order one. Despite slow floppies that stored only 80 KB each this tool greatly accelerated our work.

Before long ICEs were the standard platform for embedded work. Remember: this was before PCs so there were no standard desktop computers. The ICE was the computer, the IDE (such as it was) and the debugger.

In the mid-80s I was consulting and designed a, uh, "data gathering" system for our friends in Langley, VA, using multiple NSC-800 CPUs. There were few tools available for this part so I created a custom ICE that let me debug the code. Then a light bulb went on: why not sell the thing? There was practically no market for NSC-800 tools so I came up with versions for the Z80 and 8085 and slapped a $695 label on it. Most ICEs at the time cost many thousands so sales spiked.

Back then we still drew schematics on large D-size (17" x 22") vellum with a pencil. I laid out the PCBs on mylar with black tape for the tracks, as was the norm at the time.

This ICE is perhaps the design I'm most proud of in my career. It was only 17 ICs but was the epitome of an embedded system. Software replaced the usual gobs of hardware. On a breakpoint, for instance, the hardware switched from using the DUT stack to a stack on the emulator, but since the user's stack pointer could point anywhere, and the RAM in the ICE was only a few KB, the hardware masked off the upper address bits and lots of convoluted code reconstructed the user environment.

At the time ICEs advertised their breakpoints; most supported no more than a few as comparators watched the address bus for the breakpoint. My ICE used a 64k by one bit memory that mirrored the user bus. Need a breakpoint at, say, address 0x1234? The emulator set that bit in the memory true. Thus, the thing had 65K breakpoints. One of my dumbest mistakes was to not patent that, as all ICE vendors eventually copied the approach.
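The bitmap scheme is easy to picture in code. Here is a toy Python model of that 64k-by-one-bit breakpoint memory (mine, not the original firmware; in the real ICE the test was a hardware lookup on every bus cycle):

bp_bitmap = bytearray(65536 // 8)   # one bit per address in the 64k space

def set_breakpoint(addr):
    bp_bitmap[addr >> 3] |= 1 << (addr & 7)

def is_breakpoint(addr):
    return bool(bp_bitmap[addr >> 3] & (1 << (addr & 7)))

set_breakpoint(0x1234)
assert is_breakpoint(0x1234)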

The trouble with tools is support. An ICE replaces the DUT CPU, and interfaces with all sorts of unknown target hardware. Though the low clock rates of the Z80 meant we initially had few problems, as we expanded the product line support consumed more and more time. Eventually I learned it was equally easy to sell a six-thousand-dollar product as a six-hundred-dollar version, so those simple first emulators were replaced by much more complex many-hundred chip versions with vast numbers of features.

But the market was changing. By the mid-90s SMT CPUs were common, and these were challenging to connect to. Clock rates soared, making every connection a Maxwell's-laws nightmare. I sold the business in 1997 and went on to other endeavors. Eventually the ICE market disappeared.

One regret from all those years is that I didn't save any of the emulator's firmware or schematics. In this business everything is ephemeral. We should make an effort to preserve some of that history.

You can view the original post on TEM here

Read more…

Industrial Prototyping for IoT


ADLINK is a global leader in edge computing driving data-to-decision applications across industries. The company recently introduced I-Pi SMARC for Industrial IoT prototyping.

  • ADLINK I-Pi SMARC consists of a simple carrier paired with a SMARC computer-on-module.
  • SMARC modules are available from the entry-level Rockchip PX30 to the top-of-the-line Intel Apollo Lake.
  • SMARC modules are specifically designed for typical industrial embedded applications that require long life, high MTBF and strict revision control.
  • Use popular off-the-shelf sensors to create prototypes or proofs of concept at short notice.

Additional information can be found here.

 

Read more…

Helium Expands to Europe

Helium, the company behind one of the world’s first peer-to-peer wireless networks, is announcing the introduction of Helium Tabs, its first branded IoT tracking device that runs on The People’s Network. In addition, after launching its network in 1,000 cities in North America within one year, the company is expanding to Europe to address growing market demand with Helium Hotspots shipping to the region starting July 2020. 

Since its launch in June 2019, Helium quickly grew its footprint, with Hotspots covering more than 700,000 square miles across North America. Helium is now expanding to Europe to allow for seamless use of connected devices across borders. Powered by entrepreneurs looking to own a piece of the people-powered network, Helium's open-source blockchain technology incentivizes individuals to deploy Hotspots and earn Helium (HNT), a new cryptocurrency, for simultaneously building the network and enabling IoT devices to send data to the Internet. When connected with other nearby Hotspots, these devices form the backbone of the network.

“We’re excited to launch Helium Tabs at a time where we’ve seen incredible growth of The People’s Network across North America,” said Amir Haleem, Helium’s CEO and co-founder. “We could not have accomplished what we have done, in such a short amount of time, without the support of our partners and our incredible community. We look forward to launching The People’s Network in Europe and eventually bringing Helium Tabs and other third-party IoT devices to consumers there.”  

Introducing Helium Tabs that Run on The People’s Network
Unlike other tracking devices, Tabs uses LongFi technology, which combines the LoRaWAN wireless protocol with the Helium blockchain and provides network coverage up to 10 miles away from a single Hotspot. This is a game-changer compared to WiFi- and Bluetooth-enabled tracking devices, which only work up to 100 feet from a network source. What's more, due to Helium's unique blockchain-based rewards system, Hotspot owners are rewarded with Helium (HNT) each time a Tab connects to the network.

In addition to its growth with partners and customers, Helium has also seen accelerated expansion of its Helium Patrons program, introduced in late 2019. Together, these have helped to strengthen the network.

Patrons are entrepreneurial customers who purchase 15 or more Hotspots to help blanket their cities with coverage and enable customers who use the network. In return, they receive discounts, priority shipping, network tools, and Helium support. Currently the program has more than 70 Patrons throughout North America and is expanding to Europe.

Key brands that use the Helium Network include: 

  • Nestlé ReadyRefresh, a beverage delivery service company
  • Agulus, an agricultural tech company
  • Conserv, a collections-focused environmental monitoring platform

Helium Tabs will initially be available to existing Hotspot owners for $49. The Helium Hotspot is now available for purchase online in Europe for €450.

Read more…