


When analyzing whether a machine learning model works well, we rely on accuracy numbers, F1 scores and confusion matrices - but they don't give any insight into why a machine learning model misclassifies data. Is it because the data looks very similar, is it because data is mislabeled, or is it because preprocessing parameters are chosen incorrectly? To answer these questions we have now added the feature explorer to all neural network blocks in Edge Impulse. The feature explorer shows your complete dataset in one 3D graph, and shows you whether each sample was classified correctly or incorrectly.


Showing exactly which data samples are misclassified in the feature explorer.

If you haven't used the feature explorer before: it's one of the most interesting options in Edge Impulse. The axes are the output of the signal processing step (we heavily rely on signal processing to extract interesting features beforehand, making for smaller and more reliable ML models), and they let you quickly validate whether your data separates nicely. In addition, the feature explorer is integrated into Live classification, where you can compare incoming test data directly with your training set.


Redesign of the neural network pages.

This work has been part of a redesign of our neural network pages. These pages are now more compact, giving you full insight into both your neural network architecture and the training performance - and giving you an easy way to compare models with different optimization options (like comparing an int8 quantized model against an unoptimized model) and showing accurate on-device performance metrics for a wide variety of targets.

Next steps

Currently the feature explorer shows the performance of your training set, but over the next weeks we'll also integrate the feature explorer and the new confusion matrix into the Model testing page in Edge Impulse. This will give you direct insight into the performance of your test set in the same way, so keep an eye out for that!

Want to try the new feature explorer out? Just head to any neural network block in your Edge Impulse project and retrain. Don't have a project yet?! Follow one of our tutorials on building embedded machine learning models on real sensor data; it takes 30 minutes and you can even use your phone as a sensor.

Article originally written by Jan Jongboom, the CTO and co-founder of Edge Impulse. He loves pretty pictures, colors, and insight into his ML models.

Read more…

When I think about the things that held the planet together in 2020, it was digital experiences delivered over wireless connectivity that made remote things local.

While heroes like doctors, nurses, first responders, teachers, and other essential personnel bore the brunt of the COVID-19 response, billions of people around the world found themselves cut off from society. In order to keep people safe, we were physically isolated from each other. Far beyond the six feet of social distancing, most of humanity weathered the storm from their homes.

And then little by little, old things we took for granted, combined with new things many had never heard of, pulled the world together. Let’s take a look at the technologies and trends that made the biggest impact in 2020 and where they’re headed in 2021:

The Internet

The global Internet infrastructure from which everything else is built is an undeniable hero of the pandemic. This highly-distributed network designed to withstand a nuclear attack performed admirably as usage by people, machines, critical infrastructure, hospitals, and businesses skyrocketed. Like the air we breathe, this primary facilitator of connected, digital experiences is indispensable to our modern society. Unfortunately, the Internet is also home to a growing cyberwar and security will be the biggest concern as we move into 2021 and beyond. It goes without saying that the Internet is one of the world’s most critical utilities along with water, electricity, and the farm-to-table supply chain of food.

Wireless Connectivity

People are mobile and they stay connected through their smartphones, tablets, in cars and airplanes, on laptops, and other devices. Just like the Internet, the cellular infrastructure has remained exceptionally resilient to enable communications and digital experiences delivered via native apps and the web. Indoor wireless connectivity continues to be dominated by WiFi at home and all those empty offices. Moving into 2021, the continued rollout of 5G around the world will give cellular endpoints dramatic increases in data capacity and WiFi-like speeds. Additionally, private 5G networks will challenge WiFi as a formidable indoor option, but WiFi 6E with increased capacity and speed won’t give up without a fight. All of these developments are good for consumers who need to stay connected from anywhere like never before.

Web Conferencing

With many people stuck at home in 2020, web conferencing technology took the place of traveling to other locations to meet people or receive education. This technology isn't new and includes familiar players like GoToMeeting, Skype, WebEx, Google Hangouts/Meet, BlueJeans, FaceTime, and others. Before COVID, these platforms enjoyed success, but most people preferred to fly on airplanes to meet customers and attend conferences while students hopped on the bus to go to school. In 2020, “necessity is the mother of invention” took hold and the use of Zoom and Teams skyrocketed as airplanes sat on the ground while business offices and schools remained empty. These two platforms further increased their stickiness by increasing the number of visible people and adding features like breakout rooms to meet the demands of businesses, virtual conference organizers, and school teachers. Despite the rollout of the vaccine, COVID won't be extinguished overnight and these platforms will remain strong through the first half of 2021 as organizations rethink where and when people work and learn. There are way too many players in this space, so look for some consolidation.

E-Commerce

“Stay at home” orders and closed businesses gave e-commerce platforms a dramatic boost in 2020 as they took the place of shopping at stores or going to malls. Amazon soared to even higher heights, Walmart upped their game, Etsy brought the artsy, and thousands of Shopify sites delivered the goods. Speaking of delivery, the empty city streets became home to fleets of FedEx, Amazon, UPS, and DHL trucks bringing packages to your front doorstep. Many retail employees traded in working at customer-facing stores for working in distribution centers as long as they could outperform robots. Even though people are looking forward to hanging out at malls in 2021, the e-commerce, distribution center, delivery truck trinity is here to stay. This ball was already in motion and got a rocket boost from COVID. This market will stay hot in the first half of 2021 and then cool a bit in the second half.

Ghost Kitchens

The COVID pandemic really took a toll on restaurants in 2020, with many of them going out of business permanently. Those that survived had to pivot to digital and other ways of doing business. High-end steakhouses started making burgers on grills in the parking lot, while takeout pizzerias discovered they finally had the best business model. Having a drive-thru lane was definitely one of the keys to success in a world without waiters, busboys, and hosts. “Front of house” was shut down, but the “back of house” still had a pulse. Adding mobile web and native apps that allowed customers to easily order from operating “ghost kitchens” and pay with credit cards or Apple/Google/Samsung Pay enabled many restaurants to survive. A combination of curbside pickup and delivery from the likes of DoorDash, Uber Eats, Postmates, Instacart and Grubhub made this business model work. A surge in digital marketing also took place where many restaurants learned the importance of maintaining a relationship with their loyal customers via connected mobile devices. For the most part, 2021 has restaurateurs hoping for 100% in-person dining, but a new business model that looks a lot like catering + digital + physical delivery is something that has legs.

The Internet of Things

At its very essence, IoT is all about remotely knowing the state of a device or environmental system along with being able to remotely control some of those machines. COVID forced people to work, learn, and meet remotely and this same trend applied to the industrial world. The need to remotely operate industrial equipment or an entire “lights out” factory became an urgent imperative in order to keep workers safe. This is yet another case where the pandemic dramatically accelerated digital transformation. Connecting everything via APIs, modeling entities as digital twins, and having software bots bring everything to life with analytics has become an ROI game-changer for companies trying to survive in a free-falling economy. Despite massive employee layoffs and furloughs, jobs and tasks still have to be accomplished, and business leaders will look to IoT-fueled automation to keep their companies running and drive economic gains in 2021.

Streaming Entertainment

Closed movie theaters, football stadiums, bowling alleys, and other sources of entertainment left most people sitting at home watching TV in 2020. This turned into a dream come true for streaming entertainment companies like Netflix, Apple TV+, Disney+, HBO Max, Hulu, Amazon Prime Video, YouTube TV, and others. That said, Quibi and Facebook Watch didn’t make it. The idea of binge-watching shows during the weekend turned into binge-watching every season of every show almost every day. Delivering all these streams over the Internet via apps has made it easy to get hooked. Multiplayer video games fall in this category as well and represent an even larger market than the film industry. Gamers socially distanced as they played each other from their locked-down homes. The rise of cloud gaming combined with the rollout of low-latency 5G and Edge computing will give gamers true mobility in 2021. On the other hand, the video streaming market has too many players and looks ripe for consolidation in 2021 as people escape the living room once the vaccine is broadly deployed.

Healthcare

With doctors and nurses working around the clock as hospitals and clinics were stretched to the limit, it became increasingly difficult for non-COVID patients to receive the healthcare they needed. This unfortunate situation gave tele-medicine the shot in the arm (no pun intended) it needed. The combination of healthcare professionals delivering healthcare digitally over widespread connectivity helped those in need. This was especially important in rural areas that lacked the healthcare capacity of cities. Concurrently, the Internet of Things is making deeper inroads into delivering the health of a person to healthcare professionals via wearable technology. Connected healthcare has a bright future that will accelerate in 2021 as high-bandwidth 5G provides coverage to more of the population to facilitate virtual visits to the doctor from anywhere.

Working and Living

As companies and governments told their employees to work from home, it gave people time to rethink their living and working situation. Lots of people living in previously hip, urban, high-rise buildings found themselves residing in not-so-cool, hollowed-out ghost towns comprised of boarded-up windows and closed bars and cafés. Others began to question why they were living in areas with expensive real estate and high taxes when they no longer had to be close to the office. This led to a 2020 COVID exodus out of pricey apartments/condos downtown to cheaper homes in distant suburbs as well as the move from pricey areas like Silicon Valley to cheaper destinations like Texas. Since you were stuck in your home, having a larger house with a home office, fast broadband, and a back yard became the most important thing. Looking ahead to 2021, a hybrid model of work-from-home plus occasionally going into the office is here to stay as employees will no longer tolerate sitting in traffic two hours a day just to sit in a cubicle in a skyscraper. The digital transformation of how and where we work has truly accelerated.

Data and Advanced Analytics

Data has shown itself to be one of the world’s most important assets during the time of COVID. Petabytes of data have continuously streamed in from all over the world, letting us know the number of cases, the growth or decline of infections, hospitalizations, contact-tracing, free ICU beds, temperature checks, deaths, and hotspots of infection. Some of this data has been reported manually while lots of other sources are fully automated from machines. Capturing, storing, organizing, modeling and analyzing this big data has elevated the importance of cloud and edge computing, global-scale databases, advanced analytics software, and machine learning. This is a trend that was already taking place in business and now has a giant spotlight on it due to its global importance. There’s no stopping the data + advanced analytics juggernaut in 2021 and beyond.

Conclusion

2020 was one of the worst years in human history and the loss of life was just heartbreaking. People, businesses, and our education system had to become resourceful to survive. This resourcefulness amplified the importance of delivering connected, digital experiences to make previously remote things into local ones. Cheers to 2021 and the hope for a brighter day for all of humanity.

Read more…

Written by: Mirko Grabel

Edge computing brings a number of benefits to the Internet of Things. Reduced latency, improved resiliency and availability, lower costs, and local data storage (to assist with regulatory compliance) to name a few. In my last blog post I examined some of these benefits as a means of defining exactly where the edge is. Now let’s take a closer look at how edge computing benefits play out in real-world IoT use cases.

Benefit No. 1: Reduced latency

Many applications have strict latency requirements, but when it comes to safety and security applications, latency can be a matter of life or death. Consider, for example, an autonomous vehicle applying brakes or roadside signs warning drivers of upcoming hazards. By the time data is sent to the cloud and analyzed, and a response is returned to the car or sign, lives can be endangered. But let’s crunch some numbers just for fun.

Say a Department of Transportation in Florida is considering a cloud service to host the apps for its roadside signs. One of the vendors on the DoT's shortlist is a cloud in California. The DoT's latency requirement is less than 15 ms. The speed of light in fiber corresponds to a delay of about 5 μs/km, and the distance from the U.S. east coast to the west coast is about 5,000 km. Do the math and the resulting round-trip latency is 50 ms, well over the 15 ms budget. It's pure physics. If the DoT requires a real-time response, it must move the compute closer to the devices.
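To make the arithmetic explicit, here's a minimal sketch of that calculation (propagation delay only; real deployments add routing, queuing and processing delays on top):

# Back-of-the-envelope propagation latency over fiber, ignoring routing,
# queuing and processing delays (which only make things worse).
FIBER_DELAY_US_PER_KM = 5  # roughly 5 microseconds per kilometre in optical fiber

def round_trip_latency_ms(distance_km):
    """One-way propagation delay, doubled, converted to milliseconds."""
    return 2 * distance_km * FIBER_DELAY_US_PER_KM / 1000.0

print(round_trip_latency_ms(5000))  # coast to coast: 50.0 ms, well over a 15 ms budget
print(round_trip_latency_ms(1000))  # a regional site: 10.0 ms, inside the budget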

Benefit No. 2: Improved resiliency/availability

Critical infrastructure requires the highest level of availability and resiliency to ensure safety and continuity of services. Consider a refinery gas leakage detection system. It must be able to operate without Internet access. If the system goes offline and there’s a leakage, that’s an issue. Compute must be done at the edge. In this case, the edge may be on the system itself.

While it’s not a life-threatening use case, retail operations can also benefit from the availability provided by edge compute. Retailers want their Point of Sale (PoS) systems to be available 100% of the time to service customers. But some retail stores are in remote locations with unreliable WAN connections. Moving the PoS systems onto their edge compute enables retailers to maintain high availability.

Benefit No. 3: Reduced costs

Bandwidth is almost infinite, but it comes at a cost. Edge computing allows organizations to reduce bandwidth costs by processing data before it crosses the WAN. This benefit applies to any use case, but here are two example use cases where this is very evident: video surveillance and preventive maintenance. For example, a single city-deployed HD video camera may generate 1,296 GB a month. Streaming that data over LTE easily becomes cost prohibitive. Adding edge compute to pre-aggregate the data significantly reduces those costs.

Manufacturers use edge computing for preventive maintenance of remote machinery. Sensors are used to monitor temperatures and vibrations. The currency of this data is critical, as the slightest variation can indicate a problem. To ensure that issues are caught as early as possible, the application requires high-resolution data (for example, 1,000 samples per second). Rather than sending all of this data over the Internet to be analyzed, edge compute is used to filter the data and only averages, anomalies and threshold violations are sent to the cloud.
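As a rough illustration of that edge-side filtering (the window size and threshold below are made-up values, not from any particular deployment), an edge node might reduce each one-second window of raw samples to a small summary:

from statistics import mean

WINDOW = 1000       # one second of samples at 1,000 samples per second (illustrative)
THRESHOLD = 4.0     # hypothetical alarm level for vibration, in g

def summarize_window(samples):
    """Reduce a window of raw readings to the data worth sending over the WAN."""
    violations = [s for s in samples if abs(s) > THRESHOLD]
    return {
        "avg": mean(samples),           # average for trending
        "peak": max(samples, key=abs),  # largest excursion, an anomaly indicator
        "violations": violations,       # usually empty, so the payload stays tiny
    }

# Instead of streaming all WINDOW raw values, the edge node sends one small summary.
window = [0.1, -0.2, 0.3, 0.05] * 249 + [4.5, 0.0, 0.1, -0.1]
print(summarize_window(window))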

Benefit No. 4: Comply with government regulations

Countries are increasingly instituting privacy and data retention laws. The European Union’s General Data Protection Regulation (GDPR) is a prime example. Any organization that holds data belonging to an EU citizen is required to meet the GDPR’s requirements, which include an obligation to report leaks of personal data. Edge computing can help these organizations comply with GDPR. For example, instead of storing and backhauling surveillance video, a smart city can evaluate the footage at the edge and only backhaul the metadata.

Canada’s Water Act: National Hydrometric Program is another edge computing use case that delivers regulatory compliance benefits. As part of the program, about 3,000 measurement stations have been implemented nationwide. Any missing data requires justification, and storing data at the edge ensures that measurements are retained even if connectivity to a central system is interrupted.

Bonus Benefit: “Because I want to…”

Finally, some users simply prefer to have full control. By implementing compute at the edge rather than the cloud, users have greater flexibility. We have seen this in manufacturing. Technicians want to have full control over the machinery. Edge computing gives them this control as well as independence from IT. The technicians know the machinery best and security and availability remain top of mind.

Summary

By reducing latency and costs, improving resiliency and availability, and keeping data local, edge computing opens up a new world of IoT use cases. Those described here are just the beginning. It will be exciting to see where edge computing turns up next.

Originally posted here

Read more…

It’s been a long time since I performed Karnaugh map minimizations by hand. As a result, on my first pass, I missed a couple of obvious optimizations.

I’m sorry about the title of this blog, but I’m feeling a little wackadoodle at the moment. I think the problem is that I’m giddy with excitement at the thought of the forthcoming Thanksgiving holiday.

So, here’s the deal. Starting sometime in 2021, I’m going to be writing a series of columns for Practical Electronics magazine in the UK teaching digital logic fundamentals to absolute beginners.

This will have a hands-on component with an accompanying circuit board. We’re going to start by constructing some simple logic gates at the transistor level, then use primitive logic gates in 7400-series ICs to construct more sophisticated functions, and work our way up to… but I fear I can say no more at the moment.

After we’ve created some really simple combinatorial functions — like a 2:1 multiplexer — by hand, we’re going to introduce things like Boolean algebra, DeMorgan transforms, and Karnaugh maps, and then we are going to use what we’ve learned to implement more complex combinatorial functions, culminating in a BCD to 7-segment decoder, before we progress to sequential circuits.

I was sketching out some notes this past weekend. Prior to the BCD to 7-segment decoder, we’ll already have tackled a BCD to decimal decoder, so a lot of the groundwork will have been laid. We’ll start by explaining how the segments in the 7-segment display are identified using the letters ‘a’ through ‘g’ and showing the combinations of segments we use to create the decimal digits 0 through 9.

Using a 7-segment display to represent the decimal digits 0 through 9 (Image source: Max Maxfield)

Next, we will create the truth table. We’ll be using a common cathode 7-segment display, which means active-high outputs from our decoder because this is easier for newbies to wrap their brains around.

Truth table for BCD to 7-segment decoder with active-high outputs (Image source: Max Maxfield)

Observe the input combinations shown in red in the truth table. We’ll point out that, in our case, we aren’t planning on using these input combinations, which means we don’t care what the corresponding outputs are because we will never actually see them (we’re using ‘X’ characters to represent the “don’t care” values). In turn, this means we can use these don’t care values in our Karnaugh maps to aid us in our logic minimization and optimization.

The funny thing is that it’s been a long time since I performed Karnaugh map minimizations by hand. As a result, on my first pass, I missed a couple of obvious optimizations. Just for giggles and grins, I’ve shown the populated maps below. Before you look at my solutions, why don’t you take a couple of minutes to perform your own minimizations to see how much you remember?

Use these populated maps to perform your own minimizations and optimizations (Image source: Max Maxfield)

I should point out that I’m a bit rusty at this sort of thing, so you might want to check that I’ve correctly captured the truth table and accurately populated these maps before you leap into the fray with gusto and abandon.

Remember that we’re dealing with absolute beginners here, so — even though I will have recently introduced them to Karnaugh map techniques — I think it would be a good idea to commence this portion of the discussions by walking them through the process for segment ‘a’ step-by-step, as illustrated below.

Karnaugh map minimizations for 7-segment display (Image source: Max Maxfield)

Next, I extracted the Boolean equations corresponding to the Karnaugh map minimizations. As shown below, I’ve color-coded any product terms that appear multiple times. I don’t recall seeing this done before, but I think it could be a useful aid for beginners. Once again, I’d be interested to hear your thoughts about this.

Boolean equations for 7-segment display (Image source: Max Maxfield)

Actually, I’d love to hear your thoughts on anything I’ve shown here. Do you think the way I’ve drawn the diagrams is conducive to beginners understanding what’s going on? Can you spot anything I’ve missed or could do better? I can’t wait for you to see what we have planned with regards to the circuit board and the “hands-on” part of this forthcoming series (I will, of course, be reporting back further in the future). Until then, as always, I welcome your comments, questions, and suggestions.

Originally posted HERE.

Read more…

PYNQ is great for accelerating Python applications in programmable logic. Let's take a look at how we can use it with the OpenMV camera.

Things used in this project

Hardware:

  • Avnet Ultra96-V2 (the Ultra96-V1 can also be used)
  • OpenMV Cam M7

Software:

  • Xilinx PYNQ Framework

Introduction

Image processing is required for a range of applications from vision guided robotics to machine vision in industrial applications.

In this project we are going to look at how we can fuse the OpenMV camera with the Ultra96 running PYNQ. This will allow our PYNQ application to offload some image processing to the camera. Doing so will provide a higher performance system and open up the Ultra96 running PYNQ to the OpenMV ecosystem.

 

What Is the OpenMV Camera 

The OpenMV camera is a low-cost machine vision camera which is programmed using Python. Thanks to this architecture we can offload some of the image processing to the camera itself, meaning the image frames received by our Ultra96 can already have faces identified, eyes tracked or Sobel filtering applied; it all depends on how we set up the OpenMV camera.

As the OpenMV camera has been designed to be extensible, it provides 10 external I/O pins which can be used to drive external sensors. These pins support a range of interfaces, from UART to SPI, I2C and PWM. Of course, the PWM is very useful for driving servos.

One very useful feature of the OpenMV camera is its LEDs: mine (an OpenMV Cam M7) provides a tri-colour LED which can be used to output red, green and blue, plus a separate IR LED. As the sensor is IR sensitive, this can be useful for low-light performance.

OpenMV Camera

How Does the OpenMV Camera Work

The OpenMV Cam uses MicroPython to control the imager and output frames over the USB link. MicroPython is intended for use on microcontrollers and is based on Python 3.4. To use the OpenMV camera we first need to generate a MicroPython script which configures the camera for the given algorithm we wish to implement. We then execute this script by uploading and running it over the USB link.

This means we need some OpenMV APIs and libraries on a host machine to communicate with the OpenMV Camera.

To develop the script we want to be able to ensure it works, which is where the OpenMV IDE comes into its own: it allows us to develop and test the script which we later use in our Ultra96 application.

We can develop this script using a Windows, Mac or Linux desktop.

 

Creating the OpenMV Script using the OpenMV IDE

To get started with the OpenMV IDE we first need to download and install it. Once it is installed, the next step is to connect our OpenMV camera to it using the USB link and run a script on it.

To get started we can run the provided hello world example, which configures the camera to output a standard RGB image at QVGA resolution. On the right-hand side of the IDE you will be able to see the images output from the camera.

 

We can use this IDE to develop scripts for the OpenMV camera such as the one below which detects and identifies circles in the captured image.

Note the frame rate is lower when the camera is connected to the IDE.

 

We can use the scripts developed here in our Ultra96 PYNQ implementation. Let's take a look at how we set up the Ultra96 and PYNQ.

Setting Up the Ultra96 PYNQ Image

The first thing we need to do, if we have not already done it, is to download and create a PYNQ SD card so we can run the PYNQ framework on the Ultra96.

As we want to use the Xilinx image processing overlay we should download the Ultra96 PYNQ v2.3 image.

Once you have this image, creating an SD card is very simple: extract the image from the compressed file and write it to an SD card using a program such as Etcher or Win32 Disk Imager.

With an SD card available, we can then boot the Ultra96 and connect to the PYNQ framework in one of the following ways:

  • Use a USB Ethernet connection over the MicroUSB (upstream USB connection).
  • Connect via WiFi.
  • Use the Ultra96 as a single-board computer and connect a monitor, keyboard and mouse.

For this project I used the USB Ethernet connection.

The next thing to do is to ensure we have the necessary overlays to be able to accelerate image processing functions into the programmable logic. To do this we need to install the PYNQ computer vision overlay.

Downloading the Image Processing Overlay

Installing this overlay is very straightforward. Open a browser window and connect to the web address 192.168.3.1 (the USB Ethernet address). This will open a login page for the Jupyter notebooks; the password is xilinx.

 

Upon log in you will see the following folders and scripts

 

Click on New and select Terminal; this will open a new terminal in a browser window. To download and use the PYNQ computer vision overlays we enter the following command:

sudo pip3 install --upgrade git+https://github.com/Xilinx/PYNQ-ComputerVision.git
 

Once these are downloaded, if you look back at the Jupyter home page you will see a new directory called pynqOpenCV.

 

Using these Jupyter notebooks we can test the image processing performance when we accelerate OpenCV functions into the programmable logic.

 

Typically, as can be seen in the image above, the hardware acceleration greatly outperforms implementing the algorithm in software.

Of course, we can call this overlay from our own Jupyter notebooks.

 

Setting Up the OpenMV Camera in PYNQ

The next step is to configure the Ultra96 PYNQ instance to be able to control the OpenMV camera using its APIs. We can obtain these by downloading the OpenMV git repo using the command below in a terminal window on the Ultra96.

git clone https://github.com/openmv/openmv
 

Once this is downloaded we need to move the file pyopenmv.py from openmv/tools to /usr/lib/python3.6. This will allow us to control the OpenMV camera from within our Jupyter applications.

To be able to do this we need to know which serial port the OpenMV camera enumerates as. This will generally be ttyACM0 or ttyACM1; we can find this out by doing an ls of the /dev directory.
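If you prefer to stay inside the notebook rather than open a terminal, a quick equivalent of that ls is:

import glob

# List candidate USB CDC serial devices; the OpenMV camera normally appears as one of these.
print(glob.glob("/dev/ttyACM*"))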

 

Now we are ready to begin working with the OpenMV camera in our applications. Let's take a look at how we set it up in our Jupyter scripts.

 

Initial Test of OpenMV Camera

The first thing we need to do in a new Jupyter notebook is to import the necessary packages. This includes pyopenmv, which we just installed.

We will also be importing numpy, as the image is returned as a numpy array, so that we can display it using numpy functionality.

import pyopenmv
import time
import sys
import numpy as np

Next we need to define the script we developed in the IDE. For "first light" with PYNQ and the OpenMV camera, we will use the hello world script to obtain a simple image.

script = """

# Hello World Example

#

# Welcome to the OpenMV IDE! Click on the green run arrow button below to run the script!

import sensor, image, time

import pyb

sensor.reset()                      # Reset and initialize the sensor.

sensor.set_pixformat(sensor.RGB565) # Set pixel format to RGB565 (or GRAYSCALE)

sensor.set_framesize(sensor.QVGA)   # Set frame size to QVGA (320x240)

sensor.skip_frames(time = 2000)     # Wait for settings take effect.

clock = time.clock()                # Create a clock object to track the FPS.

red_led = pyb.LED(1)

red_led.off()

red_led.on()

while(True):

   clock.tick() 

   img = sensor.snapshot()         # Take a picture and return the image.

"""

Once the script is defined the next thing we need to do is connect to the OpenMV camera and download the script.

 

portname = "/dev/ttyACM0"
connected = False

pyopenmv.disconnect()
for i in range(10):
    try:
        # opens CDC port.
        # Set small timeout when connecting
        pyopenmv.init(portname, baudrate=921600, timeout=0.050)
        connected = True
        break
    except Exception as e:
        connected = False
        time.sleep(0.100)

if not connected:
    print("Failed to connect to OpenMV's serial port.\n"
          "Please install OpenMV's udev rules first:\n"
          "sudo cp openmv/udev/50-openmv.rules /etc/udev/rules.d/\n"
          "sudo udevadm control --reload-rules\n\n")
    sys.exit(1)

# Set higher timeout after connecting for lengthy transfers.
pyopenmv.set_timeout(1*2)  # SD Cards can cause big hiccups.
pyopenmv.stop_script()
pyopenmv.enable_fb(True)
pyopenmv.exec_script(script)

Finally, once the script has been downloaded and is executing, we want to be able to read out the frame buffer. The cell below reads out the frame buffer and saves it as a JPEG file in the PYNQ file system.

 

running = True

import numpy as np
from PIL import Image
from matplotlib import pyplot as plt

while running:
    fb = pyopenmv.fb_dump()
    if fb is not None:
        img = Image.fromarray(fb[2], 'RGB')
        img.save("frame.jpg")           # save the latest frame to the PYNQ file system
        img = Image.open("frame.jpg")   # re-open the saved frame
        plt.imshow(np.asarray(img))     # display the frame in the notebook
        plt.show()
        time.sleep(0.100)

 

When I ran this script the first light image below was received of me working in my office.

 

Having achieved this, the next step is to start working with more advanced scripts in the PYNQ Jupyter notebook. Using the same approach as above, we can redefine the script for different kinds of processing, including:

script = """

import sensor, image, time

sensor.reset() # Initialize the camera sensor.

sensor.set_pixformat(sensor.GRAYSCALE) # or sensor.RGB565

sensor.set_framesize(sensor.QQVGA) # or sensor.QVGA (or others)

sensor.skip_frames(time = 2000) # Let new settings take affect.

sensor.set_gainceiling(8)

clock = time.clock() # Tracks FPS.

while(True):

   clock.tick() # Track elapsed milliseconds between snapshots().

   img = sensor.snapshot() # Take a picture and return the image.

   # Use Canny edge detector

   img.find_edges(image.EDGE_CANNY, threshold=(50, 80))

   # Faster simpler edge detection

   #img.find_edges(image.EDGE_SIMPLE, threshold=(100, 255))

   print(clock.fps()) # Note: Your OpenMV Cam runs about half as fast while

"""

For Canny edge detection when imaging a MiniZed Board

 

Alternatively we can also extract key points from images for tracking in subsequent images.

script = """

import sensor, time, image

# Reset sensor

sensor.reset()

# Sensor settings

sensor.set_contrast(3)

sensor.set_gainceiling(16)

sensor.set_framesize(sensor.VGA)

sensor.set_windowing((320, 240))

sensor.set_pixformat(sensor.GRAYSCALE)

sensor.skip_frames(time = 2000)

sensor.set_auto_gain(False, value=100)

def draw_keypoints(img, kpts):

   if kpts:

       print(kpts)

       img.draw_keypoints(kpts)

       img = sensor.snapshot()

       time.sleep(1000)

kpts1 = None

# NOTE: uncomment to load a keypoints descriptor from file

#kpts1 = image.load_descriptor("/desc.orb")

#img = sensor.snapshot()

#draw_keypoints(img, kpts1)

clock = time.clock()

while (True):

   clock.tick()

   img = sensor.snapshot()

   if (kpts1 == None):

       # NOTE: By default find_keypoints returns multi-scale keypoints extracted from an image pyramid.

       kpts1 = img.find_keypoints(max_keypoints=150, threshold=10, scale_factor=1.2)

       draw_keypoints(img, kpts1)

   else:

       # NOTE: When extracting keypoints to match the first descriptor, we use normalized=True to extract

       # keypoints from the first scale only, which will match one of the scales in the first descriptor.

       kpts2 = img.find_keypoints(max_keypoints=150, threshold=10, normalized=True)

       if (kpts2):

           match = image.match_descriptor(kpts1, kpts2, threshold=85)

           if (match.count()>10):

               # If we have at least n "good matches"

               # Draw bounding rectangle and cross.

               img.draw_rectangle(match.rect())

               img.draw_cross(match.cx(), match.cy(), size=10)

           print(kpts2, "matched:%d dt:%d"%(match.count(), match.theta()))

           # NOTE: uncomment if you want to draw the keypoints

           #img.draw_keypoints(kpts2, size=KEYPOINTS_SIZE, matched=True)

   # Draw FPS

   img.draw_string(0, 0, "FPS:%.2f"%(clock.fps()))

"""

Circle Detection

 

import sensor, image, time

sensor.reset()
sensor.set_pixformat(sensor.RGB565) # grayscale is faster
sensor.set_framesize(sensor.QQVGA)
sensor.skip_frames(time = 2000)
clock = time.clock()

while(True):
    clock.tick()
    img = sensor.snapshot().lens_corr(1.8)

    # Circle objects have four values: x, y, r (radius), and magnitude. The
    # magnitude is the strength of the detection of the circle. Higher is
    # better...

    # `threshold` controls how many circles are found. Increase its value
    # to decrease the number of circles detected...

    # `x_margin`, `y_margin`, and `r_margin` control the merging of similar
    # circles in the x, y, and r (radius) directions.

    # r_min, r_max, and r_step control what radiuses of circles are tested.
    # Shrinking the number of tested circle radiuses yields a big performance boost.

    for c in img.find_circles(threshold = 2000, x_margin = 10, y_margin = 10, r_margin = 10,
            r_min = 2, r_max = 100, r_step = 2):
        img.draw_circle(c.x(), c.y(), c.r(), color = (255, 0, 0))
        print(c)

    print("FPS %f" % clock.fps())

 

 

 

This ability to offload processing to either the OpenMV camera or the Ultra96 programmable logic running PYNQ provides the system designer with maximum flexibility.

 

Wrap Up

The ability to use the OpenMV camera, coupled with the PYNQ computer vision libraries and other overlays such as the Kalman filter and base overlays, lets us implement algorithms that enable vision-guided robotics. Using the base overlay and the input/output processors also enables us to communicate with lower-level drives, interfaces and other sensors required to implement such a solution.

Originally posted here.

 

Read more…

Arm DevSummit 2020 debuted this week (October 6 – 8) as an online virtual conference focused on engineers and providing them with insights into the Arm ecosystem. The summit lasted three days over which Arm painted an interesting technology story about the current and future state of computing and where developers fit within that story. I’ve been attending Arm Techcon for more than half a decade now (which has become Arm DevSummit) and as I perused content, there were several take-a-ways I noticed for developers working on microcontroller based embedded systems. In this post, we will examine these key take-a-ways and I’ll point you to some of the sessions that I also think may pique your interest.

(For those of you that aren’t yet aware, you can register until October 21st (for free) and still watch the conference materials until November 28th. Click here to register)

Take-A-Way #1 – Expect Big Things from NVIDIAs Acquisition of Arm

As many readers probably already know, NVIDIA is in the process of acquiring Arm. This acquisition has the potential to be one of the focal points that I think will lead to a technological revolution in computing technologies, particularly around artificial intelligence but that will also impact nearly every embedded system at the edge and beyond. While many of us have probably wondered what plans NVIDIA CEO Jensen Huang may have for Arm, the Keynotes for October 6th include a fireside chat between Jensen Huang and Arm CEO Simon Segars. Listening to this conversation is well worth the time and will help give developers some insights into the future but also assurances that the Arm business model will not be dramatically upended.

Take-A-Way #2 – Machine Learning for MCU’s is Accelerating

It is sometimes difficult at a conference to get a feel for what is real and what is a little more smoke and mirrors. Sometimes, announcements are real, but they just take several years to filter their way into the market and affect how developers build systems. Machine learning is one of those technologies that I find there is a lot of interest around but that developers also aren’t quite sure what to do with yet, at least in the microcontroller space. When we hear machine learning, we think artificial intelligence, big datasets and more processing power than will fit on an MCU.

There were several interesting talks at DevSummit around machine learning.

Some of these were foundational, providing embedded developers with the fundamentals to get started while others provided hands-on explorations of machine learning with development boards. The take-a-way that I gather here is that the effort to bring machine learning capabilities to microcontrollers so that they can be leveraged in industry use cases is accelerating. Lots of effort is being placed in ML algorithms, tools, frameworks and even the hardware. There were several talks that mentioned Arm’s Cortex-M55 architecture that will include Helium technology to help accelerate machine learning and DSP processing capabilities.

Take-A-Way #3 – The Constant Need for Reinvention

In my last take-a-way, I alluded to the fact that things are accelerating. Acceleration is not just happening though in the technologies that we use to build systems. The very set of application domains that we can apply these technologies to is dramatically expanding. Not only can we start to deploy security and ML technologies at the edge but in domains such as space and medical systems. There were several interesting talks about how technologies are being used around the world to solve interesting and unique problems such as protecting vulnerable ecosystems, mapping the sea floor, fighting against diseases and so much more.

By carefully watching and listening, you’ll notice that many speakers have been involved in many different types of products over their careers and that they are constantly having to reinvent their skill sets, capabilities and even their interests! This is what makes working in embedded systems so interesting! It is constantly changing and evolving and as engineers we don’t get to sit idly behind a desk. Just as Arm, NVIDIA and many of the other ecosystem partners and speakers show us, technology is rapidly changing but so are the problem domains that we can apply these technologies to.

Take-A-Way #4 – Mbed and Keil are Evolving

There are also interesting changes coming to the Arm toolchains and tools like Mbed and Keil MDK. In Reinhard Keil’s talk, “Introduction to an Open Approach for Low-Power IoT Development“, developers got an insight into the changes that are coming to Mbed and Keil, with the core focus being on IoT development. The talk focused on the endpoint and discussed how Mbed and Keil MDK are being moved to an online platform designed to help developers move through product development faster, from prototyping to production. Keil Studio Online is currently in early access and will be released early next year.

(If you are interested in endpoints and AI, you might also want to check-out this article on “How Do We Accelerate Endpoint AI Innovation? Put Developers First“)

Conclusions

Arm DevSummit had a lot to offer developers this year and without the need to travel to California to participate. (Although I greatly missed catching up with friends and colleagues in person). If you haven’t already, I would recommend checking out the DevSummit and watching a few of the talks I mentioned. There certainly were a lot more talks and I’m still in the process of sifting through everything. Hopefully there will be a few sessions that will inspire you and give you a feel for where the industry is headed and how you will need to pivot your own skills in the coming years.

Originally posted here

Read more…

Will We Ever Get Quantum Computers?

In a recent issue of IEEE Spectrum, Mikhail Dyakonov makes a pretty compelling argument that quantum computing (QC) isn't going to fly anytime soon. Now, I'm no expert on QC, and there sure is a lot of money being thrown at the problem by some very smart people, but having watched from the sidelines QC seems a lot like fusion research. Every year more claims are made, more venture capital gets burned, but we don't seem to get closer to useful systems.

Consider D-Wave Systems. They've been trying to build a QC for twenty years, and indeed do have products more or less on the market, including, it's claimed, one of 1024 q-bits. But there's a lot of controversy about whether their machines are either quantum computers at all, or if they offer any speedup over classical machines. One would think that if a 1K q-bit machine really did work the press would be all abuzz, and we'd be hearing constantly of new incredible results. Instead, the machines seem to disappear into research labs.

Mr. Dyakonov notes that optimistic people expect useful QCs in the next 5-10 years; those less sanguine expect 20-30 years, a prediction that hasn't changed in two decades. He thinks a window of many decades to never is more realistic. Experts think that a useful machine, one that can do the sort of calculations your laptop is capable of, will require between 1000 and 100,000 q-bits. To me, this level of uncertainty suggests that there is a profound lack of knowledge about how these machines will work and what they will be able to do.

According to the author, a 1000 q-bit machine can be in 2^1000 states (a classical machine with N transistors can be in only 2^N states), which is about 10^300, or more than the number of sub-atomic particles in the universe. At 100,000 q-bits we're talking 10^30,000, a mind-boggling number.
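A quick sanity check on those exponents (a minimal sketch that just counts decimal digits):

from math import log10

# The number of decimal digits in 2**n is floor(n * log10(2)) + 1.
for n in (1000, 100_000):
    digits = int(n * log10(2)) + 1
    print(f"2^{n} has about {digits} decimal digits")  # roughly 10^301 and 10^30103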

Because of noise, expect errors. Some theorize that those errors can be eliminated by adding q-bits, on the order of 1000 to 100,000 additional per q-bit. So a useful machine will need at least millions, or perhaps many orders of magnitude more, of these squirrelly microdots that are tamed only by keeping them at 10 millikelvin.

A related article in Spectrum mentions that a committee of prestigious researchers, tasked with assessing the probability of success with QC, concluded that:

"[I]t is highly unexpected" that anyone will be able to build a quantum computer that could compromise public-key cryptosystems (a task that quantum computers are, in theory, especially suitable for tackling) in the coming decade. And while less-capable "noisy intermediate-scale quantum computers" will be built within that time frame, "there are at present no known algorithms/applications that could make effective use of this class of machine," the committee says."

I don't have a dog in this fight, but am relieved that useful QC seems to be no closer than The Distant Shore (to quote Jan de Hartog, one of my favorite writers). If it were feasible to easily break encryption schemes, banking and other systems could collapse. I imagine Blockchain would fail as hash algorithms became reversible. The resulting disruption would not be healthy for our society.

On the other hand, Bruce Schneier's article in the March issue of IEEE Computing Edge suggests that QC won't break all forms of encryption, though he does think a lot of our current infrastructure will be vulnerable. The moral: if and when QC becomes practical, expect chaos.

I was once afraid of quantum computing, as it involves mechanisms that I'll never understand. But then I realized those machines will have an API. Just as one doesn't need to know how a computer works to program in Python, we'll be insulated from the quantum horrors by layers of abstraction.

Originally posted here

Read more…

A scientist from Russia has developed a new neural network architecture and tested its learning ability on the recognition of handwritten digits. The intelligence of the network was amplified by chaos, and the classification accuracy reached 96.3%. The network can be used in microcontrollers with a small amount of RAM and embedded in such household items as shoes or refrigerators, making them 'smart.' The study was published in Electronics.

Today, the search for new neural networks that can operate on microcontrollers with a small amount of random access memory (RAM) is of particular importance. For comparison, in ordinary modern computers, random access memory is measured in gigabytes. Although microcontrollers possess significantly less processing power than laptops and smartphones, they are smaller and can be interfaced with household items. Smart doors, refrigerators, shoes, glasses, kettles and coffee makers create the foundation for so-called ambient intelligence. The term denotes an environment of interconnected smart devices.

An example of ambient intelligence is a smart home. The devices with limited memory are not able to store a large number of keys for secure data transfer and arrays of neural network settings. It prevents the introduction of artificial intelligence into Internet of Things devices, as they lack the required computing power. However, artificial intelligence would allow smart devices to spend less time on analysis and decision-making, better understand a user and assist them in a friendly manner. Therefore, many new opportunities can arise in the creation of environmental intelligence, for example, in the field of health care.

Andrei Velichko from Petrozavodsk State University, Russia, has created a new neural network architecture that allows efficient use of small volumes of RAM and opens up opportunities for the introduction of low-power devices to the Internet of Things. The network, called LogNNet, is a feed-forward neural network in which the signals are directed exclusively from input to output. It uses deterministic chaotic filters for the incoming signals. The system randomly mixes the input information, but at the same time extracts valuable data from the information that is invisible initially. A similar mechanism is used by reservoir neural networks. To generate chaos, a simple logistic mapping equation is applied, where the next value is calculated based on the previous one. The equation is commonly used in population biology and as an example of a simple equation for calculating a sequence of chaotic values. In this way, the simple equation takes the place of storing a large set of random numbers, which the processor can recalculate as needed, and so the network architecture consumes less RAM.
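As a rough illustration of the general idea (a sketch only, not the actual LogNNet architecture from the paper), the logistic map x_{n+1} = r·x_n·(1−x_n) can regenerate a fixed pseudo-random projection on the fly instead of storing a weight matrix, which is where the RAM saving comes from:

import numpy as np

def logistic_sequence(n, r=3.99, x0=0.5):
    """Generate n chaotic values from the logistic map x_{k+1} = r * x_k * (1 - x_k)."""
    values, x = [], x0
    for _ in range(n):
        x = r * x * (1.0 - x)
        values.append(x)
    return np.array(values)

def chaotic_projection(input_vec, out_dim, r=3.99, x0=0.5):
    """Mix an input vector into a smaller feature vector using weights that are
    recomputed from the map every time instead of being stored in RAM."""
    in_dim = len(input_vec)
    weights = logistic_sequence(in_dim * out_dim, r, x0).reshape(out_dim, in_dim)
    return weights @ input_vec

# A flattened 28x28 MNIST-style image (784 values) squeezed into 25 features.
image = np.random.rand(784)
print(chaotic_projection(image, out_dim=25).shape)  # (25,)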


The scientist tested his neural network on handwritten digit recognition from the MNIST database, which is considered the standard for training neural networks to recognize images. The database contains more than 70,000 handwritten digits. Sixty thousand of these digits are intended for training the neural network, and another 10,000 for network testing. The more neurons and chaos in the network, the better it recognized images. The maximum accuracy achieved by the network is 96.3%, while the developed architecture uses no more than 29 KB of RAM. In addition, LogNNet demonstrated promising results using very small RAM sizes, in the range of 1-2 KB. A miniature controller such as the ATmega328, which can be embedded into a smart door or even a smart insole, has approximately the same amount of memory.

"Thanks to this development, new opportunities for the Internet of Things are opening up, as any device equipped with a low-power miniature controller can be powered with artificial intelligence. In this way, a path is opened for intelligent processing of information on peripheral devices without sending data to cloud services, and it improves the operation of, for example, a smart home. This is an important contribution to the development of IoT technologies, which are actively researched by the scientists of Petrozavodsk State University. In addition, the research outlines an alternative way to investigate the influence of chaos on artificial intelligence," said Andrei Velichko.

Originally posted HERE.

by Russian Science Foundation

Image Credit: Andrei Velichko

 

 

 

 

Read more…

Impact of IoT in Inventory

The Internet of Things (IoT) has revolutionized many industries, including inventory management. IoT is a concept where devices are interconnected via the internet. It is expected that by 2020, there will be 26 billion devices connected worldwide. These connections are important because they allow data sharing, which in turn can drive actions that make life and business more efficient. Since inventory is a significant portion of a company’s assets, inventory data is vital for an accounting department, for the company’s asset management and for its annual report.

In inventory solutions based on IoT and RFID, each individual inventory item receives an RFID tag. Each tag has a unique identification number (ID) that contains information about an inventory item, e.g. a model, a batch number, etc. These tags are scanned by an RFID reader. Upon scanning, a reader extracts the tags' IDs and transmits them to the cloud for processing. Along with the tag's ID, the cloud receives the location and the time of reading. This data is used for updates about inventory items, allowing users to monitor the inventory from anywhere, in real time.

Industrial IoT

The role of IoT in inventory management is to receive data and turn it into meaningful insights about inventory items' location and status, and to give users a corresponding output. For example, based on the data and the inventory management solution architecture, we can forecast the number of raw materials needed for the upcoming production cycle. The system can also send an alert if any individual inventory item is lost.

Moreover, IoT based inventory management solutions can be integrated with other systems, i.e. ERP and share data with other departments.

RFID in Industrial IoT

An RFID system consists of three main components: tags, antennas, and readers.

Tags: An RFID tag carries information about a specific object. It can be attached to any surface, including raw materials, finished goods, packages, etc.

RFID antennas: An RFID antenna receives signals to supply power and data for tags’ operation

RFID readers: An RFID reader uses radio signals to read from and write to the tags. The reader receives the data stored in the tag and transmits it to the cloud; a minimal sketch of what such a read event might look like follows below.
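To make the data flow concrete, here is a hypothetical sketch of the read event a reader-side gateway might assemble and hand to the cloud; the field names and the publish_to_cloud stand-in are illustrative assumptions, not part of any particular product:

import json
import time

def build_read_event(tag_id, reader_id, location):
    """Package one tag read into a payload for the inventory backend (hypothetical schema)."""
    return {
        "tag_id": tag_id,          # unique ID stored on the RFID tag
        "reader_id": reader_id,    # which reader/antenna saw the tag
        "location": location,      # e.g. "warehouse-3/dock-2"
        "timestamp": time.time(),  # time of reading
    }

def publish_to_cloud(event):
    """Stand-in for an MQTT or HTTPS publish call to the cloud platform."""
    print("PUBLISH", json.dumps(event))

publish_to_cloud(build_read_event("E200-3412-0123", "reader-07", "warehouse-3/dock-2"))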

Benefits of IoT in inventory management

The benefits of IoT in the supply chain are among the most exciting physical manifestations we can observe. IoT in the supply chain creates unparalleled transparency that increases efficiency.

Inventory tracking

The major benefit of IoT in inventory management is asset tracking: instead of using barcodes to scan and record data, items carry RFID tags which can be registered wirelessly. It is possible to accurately obtain data and track items from any point in the supply chain.

With RFID and IoT, managers don’t have to spend time on manual tracking and reporting on spreadsheets. Each item is tracked and the data about it is recorded automatically. Automated asset tracking and reporting save time and reduce the probability of human error.

Inventory optimization

With real-time data about the quantity and the location of inventory, manufacturers can reduce the amount of inventory on hand while meeting the needs of customers at the end of the supply chain.

Combining the data about the amount of available inventory with machine learning makes it possible to forecast the required inventory, which allows manufacturers to reduce lead time.

Remote tracking

Remote product tracking makes it easy to keep an eye on production and business. Knowing production and transit times allows you to better tweak orders to suit lead times and to respond to fluctuating demand. It shows which suppliers are meeting production and shipping criteria and which need monitoring to achieve the required outcome.

It gives visibility into the flow of raw materials, work-in-progress and finished goods by providing updates about the status and location of the items so that inventory managers see when an individual item enters or leaves a specific location.

Bottlenecks in the operations

With the real-time data about the location and the quantity, manufacturers can reveal bottlenecks in the process and pinpoint the machine with lower utilization rates. For instance, if part of the inventory tends to pile up in front of a machine, a manufacturer assumes that the machine is underutilized and needs to be seen to.

The Outcomes

The data collected by an IoT-based inventory management system is more accurate and up-to-date. By reducing delays and manual errors, the manufacturing process can enhance accuracy and reduce wastage. An IoT-based inventory management solution offers complete visibility on inventory by providing real-time information fetched by RFID tags. It helps to track the exact location of raw materials, work-in-progress and finished goods. As a result, manufacturers can balance the amount of on-hand inventory, increase the utilization of machines, reduce lead time, and thus avoid the costs bound to less effective methods. This is all about optimizing inventory and ensuring anything ordered can be sold through whatever channel necessary.

Originally posted here

Read more…

By: Tom Jeltes, Eindhoven University of Technology

The Internet of Things (IoT) consists of billions of sensors and other devices connected to each other via the internet, all of which need to be protected against hackers with malicious intent. A low-cost and energy-efficient solution for securing IoT devices uses the unique characteristics of the built-in memory chips. Ph.D. candidate Lieneke Kusters investigated how to make optimal use of the chip's digital fingerprint to generate a security key.

The higher the number of devices connected to each other via the Internet of Things, the greater the risk that malicious hackers might gain access to important information, or even take over entire systems. Quite apart from all kinds of privacy issues, it's not hard to imagine that someone who, for example, has control over temperature sensors in a chemical or nuclear plant could cause serious damage.

To prevent problems like these from occurring, each IoT device needs to be able, as it were, to show an identity document—"authentication," in professional terms. Normally, this is done with a kind of password, which is sent in encrypted form to the party that is communicating with the device. The security key needed for that has to be stored in the IoT device one way or another, Lieneke Kusters explains. "But these are often small and cheap devices that aren't supposed to use much energy. To safely store a key in these devices, you need extra hardware with a constant power supply. That's not very practical."

Digital fingerprint

There is a different way: namely by deducing the security key from a unique physical characteristic of the memory chip (Static Random-Access Memory, or SRAM) that can be found in practically every IoT device. Depending on the random circumstances during the chip's manufacturing process, the memory locations have a random default value of 0 or 1.

"That binary code which you can read out when activating the chip, constitutes a kind of digital fingerprint of the device," says Kusters, who gained her doctorate at the Information and Communication Theory Laboratory at the TU/e department of Electrical Engineering. This fingerprint is known as a Physical Unclonable Function (PUF). "The Eindhoven-based company Intrinsic ID sells digital security based on SRAM-PUFs. I collaborated with them for my doctoral research, during which I focused on how to generate, in a reliable way, a key from that digital fingerprint that is as long as possible. The longer, the safer."

The major advantage of security keys based on SRAM-PUFs is that the key exists only at the moment when authentication is required. "The device restarts itself to read out the SRAM-PUF and in doing so creates the key, which subsequently gets erased immediately after use. That makes it all but impossible for an attacker to steal the key."

Noise and reliability

But that's not the entire story, because some bits of the SRAM do not always have the same value during activation, Kusters explains. Ten to fifteen percent of the bits turn out not to be determined, which makes the digital fingerprint a bit fuzzy. How do you use that fuzzy fingerprint to make a key of the highest possible complexity that nevertheless still fits into the receiving lock—practically—each time?

"What you want to prevent is that the generated key won't be recognized by the receiving party as a consequence of the 'noise' in the SRAM-PUF," Kusters explains. "It's alright if that happens one in a million times perhaps, preferably less often." The probability of error is smaller with a shorter key, but such a key is also easier to guess for people with bad intentions. "I've searched for the longest reliable key, given a certain amount of noise in the measurement. It helps if you store extra information about the SRAM-PUF, but that must not be of use to a potential attacker. My thesis is an analysis of how you can reach the optimal result in different situations with that extra information."

Originally posted here.


 
Read more…

Can AI Replace Firmware?

Scott Rosenthal and I go back about a thousand years; we've worked together, helped midwife the embedded field into being, had some amazing sailing adventures, and recently took a jaunt to the Azores just for the heck of it. Our sons are both big data people; their physics PhDs were perfect entrees into that field, and both now work in the field of artificial intelligence.

At lunch recently we were talking about embedded systems and AI, and Scott posed a thought that has been rattling around in my head since. Could AI replace firmware?

Firmware is a huge problem for our industry. It's hideously expensive. Only highly-skilled people can create it, and there are too few of us.

What if an AI engine of some sort could be dumped into a microcontroller and the "software" then created by training that AI? If that were possible - and that's a big "if" - then it might be possible to achieve what was hoped for when COBOL was invented: programmers would no longer be needed as domain experts could do the work. That didn't pan out for COBOL; the industry learned that accountants couldn't code. Though the language was much more friendly than the assembly it replaced, it still required serious development skills.

But with AI, could a domain expert train an inference engine?

Consider a robot: a "home economics" major could create scenarios of stacking dishes from a dishwasher. Maybe these would be in the form of videos, which were then fed to the AI engine as it tuned the weighting coefficients to achieve what the home ec expert deems worthy goals.

My first objection to this idea was that these sorts of systems have physical constraints. With firmware I'd write code to sample limit switches so the motors would turn off if at an end-of-motion extreme. During training an AI-based system would try and drive the motors into all kinds of crazy positions, banging destructively into stops. But think how a child learns: a parent encourages experimentation but prevents the youngster from self-harm. Maybe that's the role of the future developer training an AI. Or perhaps the training will be done on a simulator of some sort where nothing can go horribly wrong.

Taking this further, a domain expert could define the desired inputs and outputs, and then a poorly-paid person could do the actual training. CEOs will love that. With that model a strange parallel emerges to computation a century ago: before the computer age, "computers" were people doing simple math to create tables of logs, trig, ballistics, etc. A room full of them labored at a problem together. They weren't particularly skilled, didn't make much, but did the rote work under the direction of one master. Maybe AI trainers will be somewhat like that.

Like we outsource clothing manufacturing to Bangladesh, I could see training, basically grunt work, being sent overseas as well.

I'm not wild about this idea as it means we'd have an IoT of idiots: billions of AI-powered machines where no one really knows how they work. They've been well-trained but what happens when there's a corner case?

And most of the AI literature I read suggests that inference successes of 97% or so are the norm. That might be fine for classifying faces, but a 3% failure rate of a safety-critical system is a disaster. And the same rate for less-critical systems like factory controllers would also be completely unacceptable.

But the idea is intriguing.

Original post can be viewed here

Feel free to email me with comments.


Read more…

Theoretical Embedded Linux requirements

Hardware

SoC

A System on Chip (SoC) is essentially an integrated circuit that integrates an entire computer system onto a single platform. It combines the CPU with the other components it needs to perform and execute its functions. It is in charge of driving the other hardware and running your software. The main advantages of an SoC include lower latency and power savings.

It is made of various building blocks:

  • Core + Caches + MMU – An SoC has a processor at its core which defines its functions; normally, an SoC has multiple processor cores. The "real" application processor, e.g. an ARM Cortex-A9, is the main thing to keep in mind while choosing an SoC, possibly assisted by a SIMD co-processor such as NEON.
  • Internal RAM – IRAM is composed of very high-speed SRAM located alongside the CPU. It acts similarly to a CPU cache and is generally very small. It is used in the first phase of the boot sequence.
  • Peripherals – These can be a simple ADC, a DSP, or a Graphics Processing Unit, connected to the core via some bus. A low-power/real-time co-processor helps the main core with real-time tasks or handles low-power states. Examples of such IP cores are USB, PCI-E, SGX, etc.

External RAM

An SoC uses RAM to store temporary data during and after bootstrap. It is the memory an embedded system uses during regular operation.

Non-Volatile Memory

In an embedded system or single-board computer, this is typically an SD card. In other cases, it can be NAND, NOR, or SPI data flash memory. It is the source from which the SoC reads data, and it stores all the software components needed for the system to work.

External Peripherals

An SoC must have external interfaces for standard communication protocols such as USB, Ethernet, and HDMI. It may also include wireless technologies such as Wi-Fi and Bluetooth.

Software

Second-Article-01-1024x576.jpg

First of all, we introduce the boot chain which is the series of actions that happens when an SoC is powered up.

Boot ROM: It is a piece of code stored in ROM which is executed by the booting core when it is powered on. This code contains instructions for configuring the SoC to allow it to execute applications. The configuration performed by the Boot ROM includes initialization of the core's registers and stack pointer, enabling of caches and line buffers, programming of the interrupt service routines, and clock configuration.

Boot ROM also implements a Boot Assist Module (BAM) for downloading an application image from external memories using interfaces like Ethernet, SD/MMC, USB, CAN, UART, etc.

1st stage bootloader

The first-stage bootloader performs the following:

  • Setup the memory segments and stack used by the bootloader code
  • Reset the disk system
  • Display a string “Loading OS…”
  • Find the 2nd stage boot loader in the FAT directory
  • Read the 2nd stage boot loader image into memory at 1000:0000
  • Transfer control to the second-stage bootloader

The Boot ROM copies the first-stage bootloader into the SoC's internal RAM, so it must be tiny enough to fit that memory, usually well under 100 kB. It initializes the external RAM and the SoC's external memory interface, as well as other peripherals that may be of interest (e.g. disabling watchdog timers). Once done, it executes the next stage which, depending on the context, may be called MLO, SPL, or something else.

2nd stage bootloader

This is the main bootloader and can be ten times bigger than the first stage; it completes the initialization of the relevant peripherals.

  • Copy the boot sector to a local memory area
  • Find kernel image in the FAT directory
  • Read kernel image in memory at 2000:0000
  • Reset the disk system
  • Enable the A20 line
  • Setup interrupt descriptor table at 0000:0000
  • Setup the global descriptor table at 0000:0800
  • Load the descriptor tables into the CPU
  • Switch to protected mode
  • Clear the prefetch queue
  • Setup protected mode memory segments and stack for use by the kernel code
  • Transfer control to the kernel code using a long jump

Linux Kernel

The Linux kernel is the main component of a Linux OS and is the core interface between hardware and processes. It communicates between the hardware and the processes, managing resources as efficiently as possible. The kernel performs the following jobs:

  • Memory management: Keep track of memory, how much is used to store what, and where
  • Process management: Determine which processes can use the processor, when, and for how long
  • Device drivers: Act as an interpreter between the hardware and the processes
  • System calls and security: Receive requests for the service from processes

To put the kernel in context, a Linux machine can be interpreted as having three layers:

  • The hardware: The physical machine—the base of the system, made up of memory (RAM) and the processor (CPU), as well as input/output (I/O) devices such as storage, networking, and graphics.
  • The Linux kernel: The core of the OS. It is a software residing in memory that tells the CPU what to do.
  • User processes: These are the running programs that the kernel manages. User processes are what collectively make up user space. The kernel allows processes and servers to communicate with each other.

Init and rootfs – init is the first non-kernel task to be run, and has PID 1. It initializes everything needed to use the system. In production embedded systems, it also starts the main application. In such systems, it is either BusyBox or a custom-crafted application.

View original post here

Read more…

7811924256?profile=RESIZE_400x

 

CLICK HERE TO DOWNLOAD

This complete guide is a 212-page eBook and is a must read for business leaders, product managers and engineers who want to implement, scale and optimize their business with IoT communications.

Whether you want to attempt initial entry into the IoT-sphere, or expand existing deployments, this book can help with your goals, providing deep understanding into all aspects of IoT.

CLICK HERE TO DOWNLOAD

Read more…

Edge Products Are Now Managed At The Cloud

Now more than ever, there are billions of edge products in the world. But without proper cloud computing, making the most of electronic devices that run on Linux or any other OS would not be possible.

And so, a question most people keep asking is: which is the best Software-as-a-Service platform for effectively managing edge devices through cloud computing? Well, while edge device management may not be a new idea, the fact that the cloud computing space is not fully exploited means there is a lot left to do in the cloud space.

Remote product management is especially necessary for the 21st century and beyond. Because of the increasing number of devices connected to the Internet of Things (IoT), a reliable SaaS platform should help with fixing software glitches from anywhere in the world. From smart homes, stereo speakers, and cars to personal computers, any product that is connected to the internet needs real-time protection from hacking threats such as unlawful access to business or personal data.

Data being the most vital asset is constantly at risk, especially if individuals using edge products do not connect to trusted, reliable, and secure edge device management platforms.

Bridges the Gap Between Complicated Software And End Users

Cloud computing is the new frontier through which SaaS platforms help manage edge devices in real time. But something even more noteworthy is the increasing amount of complicated software that now runs edge devices at home and in the workplace.

Edge device management, therefore, ensures everything runs smoothly. From fixing bugs and running debugging commands to real-time software patch deployment, cloud management of edge products bridges the gap between end users and the complicated software that is becoming the norm these days.

Even more importantly, going beyond physical firewall barriers is a major necessity in remote management of edge devices. A reliable Software-as-a-Service platform therefore ensures that data encryption for edge devices is not only hard to break but also that data is accessed only by the right people. Moreover, deployment of secure routers and access tools is especially critical in cloud computing when managing edge devices. And so, the developers behind successful SaaS platforms conduct regular security checks over the cloud, and design and implement solutions for edge products.

Reliable IT Infrastructure Is Necessary

Software-as-a-service platforms that manage edge devices focus on having a reliable IT infrastructure and centralized systems through which they can conduct cloud computing. It is all about remotely managing edge devices with the help of an IT infrastructure that eliminates challenges such as connectivity latency.

Originally posted here

Read more…

Introducing Profiler, by Auptimizer: Select the best AI model for your target device — no deployment required.

Profiler is a simulator for profiling the performance of Machine Learning (ML) model scripts. Profiler can be used during both the training and inference stages of the development pipeline. It is particularly useful for evaluating script performance and resource requirements for models and scripts being deployed to edge devices. Profiler is part of Auptimizer. You can get Profiler from the Auptimizer GitHub page or via pip install auptimizer.

The cost of training machine learning models in the cloud has dropped dramatically over the past few years. While this drop has pushed model development to the cloud, there are still important reasons for training, adapting, and deploying models to devices. Performance and security are the big two, but cost savings are also an important consideration, as the cost of transferring and storing data, and of building models for millions of devices, tends to add up. Unsurprisingly, machine learning for edge devices, or Edge AI as it is more commonly known, continues to become mainstream even as cloud compute becomes cheaper.

Developing models for the edge opens up interesting problems for practitioners.

  1. Model selection now involves taking into consideration the resource requirements of these models.
  2. The training-testing cycle becomes longer due to having a device in the loop because the model now needs to be deployed on the device to test its performance. This problem is only magnified when there are multiple target devices.

Currently, there are three ways to shorten the model selection/deployment cycle:

  • The use of device-specific simulators that run on the development machine and preclude the need for deployment to the device. Caveat: Simulators are usually not generalizable across devices.
  • The use of profilers that are native to the target device. Caveat: They need the model to be deployed to the target device for measurement.
  • The use of measures like FLOPS or Multiply-Add (MAC) operations to give approximate measures of resource usage. Caveat: The model itself is only one (sometimes insignificant) part of the entire pipeline (which also includes data loading, augmentation, feature engineering, etc.)

In practice, if you want to pick a model that will run efficiently on your target devices but do not have access to a dedicated simulator, you have to test each model by deploying on all of the target devices.

Profiler helps alleviate these issues. Profiler allows you to simulate, on your development machine, how your training or inference script will perform on a target device. With Profiler, you can understand CPU- and memory-usage as well as run-time for your model script on the target device.

How Profiler works

Profiler encapsulates the model script, its requirements, and corresponding data into a Docker container. It uses user-inputs on compute-, memory-, and framework-constraints to build a corresponding Docker image so the script can run independently and without external dependencies. This image can then easily be scaled and ported to ease future development and deployment. As the model script is executed within the container, Profiler tracks and records various resource utilization statistics including Average CPU Utilization, Memory Usage, Network I/O, and Block I/O. The logger also supports setting the Sample Time to control how frequently Profiler samples utilization statistics from the Docker container.
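
The snippet below is a rough sketch of the kind of container-based sampling loop this describes, written with the Docker SDK for Python. It is an illustration of the concept rather than Profiler's actual implementation, and the image, command, and sample interval are placeholders.

```python
# Sketch: run a script in a container and sample its resource usage.
# Illustrates the idea behind container-based profiling; it is NOT the
# Auptimizer Profiler implementation. Requires the Docker SDK for Python
# (pip install docker); image, command, and sample interval are placeholders.
import time

import docker

def profile_container(image: str, command: str, sample_time: float = 1.0):
    """Start a container and record memory/CPU snapshots until it exits."""
    client = docker.from_env()
    container = client.containers.run(image, command, detach=True)
    samples = []
    try:
        while True:
            container.reload()                     # refresh container status
            if container.status != "running":
                break
            stats = container.stats(stream=False)  # one snapshot of usage stats
            samples.append({
                "mem_bytes": stats.get("memory_stats", {}).get("usage", 0),
                "cpu_total_ns": stats.get("cpu_stats", {})
                                     .get("cpu_usage", {})
                                     .get("total_usage", 0),
            })
            time.sleep(sample_time)
    finally:
        container.remove(force=True)
    return samples

if __name__ == "__main__":
    # Placeholder workload; substitute the container image for your model script.
    for sample in profile_container("python:3.10-slim",
                                    "python -c 'import time; time.sleep(5)'"):
        print(sample)
```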

Get Profiler: Click here

How Profiler helps

Our results show that Profiler can help users build a good estimate of model runtime and memory usage for many popular image/video recognition models. We conducted over 300 experiments across a variety of models (InceptionV3, SqueezeNet, Resnet18, MobileNetV2–0.25x, -0.5x, -0.75x, -1.0x, 3D-SqueezeNet, 3D-ShuffleNetV2–0.25x, -0.5x, -1.0x, -1.5x, -2.0x, 3D-MobileNetV2–0.25x, -0.5x, -0.75x, -1.0x, -2.0x) on three different devices — LG G6 and Samsung S8 phones, and NVIDIA Jetson Nano. You can find the full set of experimental results and more information on how to conduct similar experiments on your devices here.

The addition of Profiler brings Auptimizer closer to the vision of a tool that helps machine learning scientists and engineers build models for edge devices. The hyperparameter optimization (HPO) capabilities of Auptimizer help speed up model discovery. Profiler helps with choosing the right model for deployment. It is particularly useful in the following two scenarios:

  1. Deciding between models — The ranking of the run-times and memory usages of the model scripts measured using Profiler on the development machine is indicative of their ranking on the target device. For instance, if Model1 is faster than Model2 when measured using Profiler on the development machine, Model1 will be faster than Model2 on the device. This ranking is valid only when the CPUs are running at full utilization.
  2. Predicting model script performance on the device — A simple linear relationship relates the run-times and memory usage measured using Profiler on the development machine with the usage measured using a native profiling tool on the target device. In other words, if a model runs in time x when measured using Profiler, it will run approximately in time (a*x+b) on the target device (where a and b can be discovered by profiling a few models on the device with a native profiling tool; see the sketch after this list). The strength of this relationship depends on the architectural similarity between the models but, in general, models designed for the same task are architecturally similar as they are composed of the same set of layers. This makes Profiler a useful tool for selecting the best-suited model.
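
A minimal sketch of that calibration step follows: fit a and b from a few reference models profiled both ways, then predict the on-device run-time for a new model. The timing numbers below are invented for illustration.

```python
# Sketch: calibrate Profiler measurements against on-device measurements
# by fitting run_device ≈ a * run_profiler + b. All numbers are made up.
import numpy as np

# Run-times (seconds) for a few reference models, measured both ways.
profiler_times = np.array([1.8, 3.5, 5.1, 9.7])    # on the development machine
device_times = np.array([4.2, 7.9, 11.6, 21.8])    # with a native on-device profiler

a, b = np.polyfit(profiler_times, device_times, 1)  # least-squares line fit

def predict_device_time(profiler_time: float) -> float:
    """Estimate on-device run-time from a Profiler measurement."""
    return a * profiler_time + b

if __name__ == "__main__":
    print(f"a={a:.2f}, b={b:.2f}")
    print("predicted device run-time for a 6.0 s Profiler run:",
          round(predict_device_time(6.0), 1), "s")
```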

Looking forward

Profiler continues to evolve. So far, we have tested its efficacy on select mobile- and edge-platforms for running popular image and video recognition models for inference, but there is much more to explore. Profiler might have limitations for certain models or devices and can potentially result in inconsistencies between Profiler outputs and on-device measurements. Our experiment page provides more information on how to best set up your experiment using Profiler and how to interpret potential inconsistencies in results. The exact use case varies from user to user but we believe that Profiler is relevant to anyone deploying models on devices. We hope that Profiler’s estimation capability can enable leaner and faster model development for resource-constrained devices. We’d love to hear (via github) if you use Profiler during deployment.

Originally posted here


Authors: Samarth Tripathi, Junyao Guo, Vera Serdiukova, Unmesh Kurup, and Mohak Shah — Advanced AI, LG Electronics USA

Read more…

Industrial IoT Revolution

Why the Nvidia Jetson Nano is responsible for the biggest industrial IoT revolution these days

 
c1f0a2_ecaa338269684f82b2661b550075f528~mv2.webp
 
 

It feels like yesterday when the Raspberry Pi Foundation released its first-in-line Single Board Computer (SBC) to the market. Back in 2012, Raspberry Pi wasn't alone in the growing SBC market; however, it was the first to make a community-based product that brought the hardware and the software ecosystem into beautiful harmony on the internet. Before those days, embedded Linux-based SBCs and SOMs were a place for Linux kernel and embedded hardware experts, with no easy-to-use tools, no ready-made Linux distros, and, most importantly, without the enormous number of questions and answers across the internet on anything related.

Today, 8 years later, the "2012 revolution" happens again

This time, it took a year to understand the impact of the new 'kid' in the market, but now there are a few indications that definitely build the route to a revolution.

The Raspberry Pi was the first to make embedded Linux easy while keeping the advantages of reliability and flexibility in fitting different kinds of industry applications. It's almost impossible to ignore the variety of industries where the Raspberry Pi sits at the heart of products to save time-to-market and costs. The power of this magical board lies on the software side: the Raspberry Pi Foundation and their community worked hard across the years to improve and share their knowledge, and at the same time, without noticing or targeting it, they brought Pi development to an extremely "serverless" level.

The Nvidia Jetson Nano

Let's stop talking about the Raspberry Pi and focus on today's industry needs to understand better why the new kid in the town is here to change the market of IoT and smart products forever.

 
c1f0a2_2ca55bc3cd744a10a05bc244c4e092c1~mv2.webp
 
 Why do we need to thank Nvidia and the Jetson Nano?
 

The market is moving forward. AI, robotics, amazing-looking screen GUIs, image processing, and long data calculations have all become the new standard for smart edge products.

If a few years ago it was enough to connect your product to the cloud and get something valuable back, today product managers and developers compete in a much tougher industry era. This time, the Raspberry Pi can't be the technology hero again: its resources are limited, and the ecosystem is starting to look toward a better-fit solution.

 
c1f0a2_b46f958fa9b543af88a6ad38b2afce82~mv2.webp
 
 

NVIDIA Jetson devices in Upswift.io device management platform

The Jetson Nano is the first SBC to understand the necessary combination that will drive new products to use it. It's the first SBC designed with powerful industrial use cases in mind, without forgetting the prototyping stage and the harmony that gave the Raspberry Pi its success. It's the first solution to bring the whole package for developers and hardware engineers with a "SaaS" feel: the OS is already polished thanks to Ubuntu, there are plenty of software instructions from Nvidia and open-source, ready-to-use tools custom-made for the Jetson family, and the hardware engineers are free to go with the System on Module (SOM) connected to a carrier board which includes all the necessary inputs and outputs to make the development stage even faster.

The Jetson Nano combination basically provides the first complete infrastructure for producing a "2020" product with complex software while working within a minimal budget and time-to-market. The Jetson Nano enables developers and product managers to imagine further without compromises, bringing tough software missions to the edge easily.

Originally posted here

Read more…

by Dan Carroll, Carnegie Mellon University, Department of Civil and Environmental Engineering

7451650263?profile=RESIZE_400x
Credit: Pixabay/CC0 Public Domain
 
Across the U.S., there has been some criticism of the cost and efficacy of emissions inspection and maintenance (I/M) programs administered at the state and county level. In response, Engineering and Public Policy (EPP) Ph.D. student Prithvi Acharya and his advisor, Civil and Environmental Engineering's Scott Matthews, teamed up with EPP's Paul Fischbeck. They have created a new method for identifying over-emitting vehicles using remote data transmission and machine learning that would be both less expensive and more effective than current I/M programs.
 

Most states in America require passenger vehicles to undergo periodic emissions inspections to preserve air quality by ensuring that a vehicle's exhaust emissions do not exceed standards set at the time the vehicle was manufactured. What some may not know is that the metrics through which emissions are gauged nowadays are usually measured by the car itself through on-board diagnostics (OBD) systems that process all of the vehicle's data. Effectively, these emissions tests are checking whether a vehicle's "check engine light" is on. While over-emitting identified by this system is 87 percent likely to be true, it also has a 50 percent false pass rate of over-emitters when compared to tailpipe testing of actual emissions.

With cars as smart devices increasingly becoming integrated into the Internet of Things (IoT), there's no longer any reason for state and county administrations to force drivers to come in for regular I/M checkups when all the necessary data is stored on their vehicle's OBD. In an attempt to eliminate these unnecessary costs and improve the effectiveness of I/M programs, Acharya, Matthews, and Fischbeck published their recent study in IEEE Transactions on Intelligent Transportation Systems.

Their new method entails sending data directly from the vehicle to a cloud server managed by the state or county within which the driver lives, eliminating the need for them to come in for regular inspections. Instead, the data would be run through machine learning algorithms that identify trends in the data and codes prevalent among over-emitting vehicles. This means that most drivers would never need to report to an inspection site unless their vehicle's data indicates that it's likely over-emitting, at which point they could be contacted to come in for further inspection and maintenance.
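
A toy version of that idea is sketched below: train a classifier on features derived from OBD data and flag only those vehicles whose predicted probability of over-emitting crosses a threshold, so most drivers never need to visit an inspection site. The features, data, and threshold are invented for illustration and are not the study's actual model.

```python
# Sketch: flag likely over-emitting vehicles from OBD-derived features.
# A toy stand-in for the study's approach; feature names, data, and the
# decision threshold are invented for illustration.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical features per vehicle: [age_years, mileage_k, num_trouble_codes,
# avg_fuel_trim_pct]; label 1 = confirmed over-emitter in tailpipe testing.
X_train = np.array([
    [3, 40, 0, 1.0],
    [12, 160, 3, 8.5],
    [7, 90, 1, 3.2],
    [15, 210, 5, 12.0],
    [2, 25, 0, 0.5],
    [10, 130, 2, 6.1],
])
y_train = np.array([0, 1, 0, 1, 0, 1])

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)

def should_inspect(features, threshold: float = 0.5) -> bool:
    """Call the vehicle in only if the predicted over-emitting probability is high."""
    prob = model.predict_proba(np.array([features]))[0, 1]
    return prob >= threshold

if __name__ == "__main__":
    print(should_inspect([11, 145, 2, 7.0]))   # likely over-emitter -> inspect
    print(should_inspect([4, 50, 0, 1.2]))     # likely clean -> no visit needed
```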

Not only has the team's work shown that a significant amount of time and cost could be saved through smarter emissions inspection programs, but their study has also shown that these methods are more effective. Their model for identifying vehicles likely to be over-emitting was 24 percent more accurate than current OBD systems. This makes it cheaper, less demanding, and more efficient at reducing vehicle emissions.

This study could have major implications for leaders and residents within the 31 states and countless counties across the U.S. where I/M programs are currently in place. As these initiatives face criticism from proponents of both environmental deregulation and fiscal austerity, this team has presented a novel system that promises both significant reductions to cost and demonstrably improved effectiveness in reducing vehicle emissions. Their study may well redefine the testing paradigm for how vehicle emissions are regulated and reduced in America.

 
Originally posted here on Tech Xplore
 
Read more…

Embedded Linux or RTOS: For IoT

by Tirichlabs

Embedded Linux utilizes the Linux kernel for an embedded device, but it is quite different from the standard Linux OS. Its application to embedded systems is motivated by the availability of device support, file systems, network connectivity, and UI support. It is a customized version of Linux for embedded systems, consequently having a much smaller size and minimal features, and it requires less processing power. Based on embedded system requirements, the Linux kernel is modified and optimized. Such an embedded Linux can only run device-specific, purpose-built applications.

A Real-Time Operating System (RTOS) with minimal code is used for applications where a small and fixed processing time is required. An RTOS is a time-sharing system based on clock interrupts that implements priority sequences to execute processes. When a high-priority interrupt is generated by the system, the running low-priority processes are stopped and the interrupt is served. A real-time operating system requires less operational memory and synchronizes processes in such a way that they can communicate with each other, so resources can be used efficiently without wasting time.

 

COMPARISON

Size

The major difference between Embedded Linux and an RTOS is their size. An RTOS running on an AVR requires approximately 4.4 kilobytes of ROM. Embedded Linux, on the other hand, is considerably larger. The kernel can be stripped of components that are not required, and even with that, the footprint is generally measured in megabytes.

Embedded Linux's RAM requirement is on the order of a few megabytes. In practical applications, it requires more than that because other tasks run on top of the Linux kernel. An RTOS has much smaller memory requirements than Linux: a very simple setup, running two tasks, a scheduler, a queue for communication, and a semaphore on an 8-bit architecture, would use in the vicinity of 200 bytes.

Scheduler

The scheduler in a real-time system is important to ensure that tasks complete in a fixed time. Compared to a regular scheduler for a general-purpose system, it is not the scheduler's main task to ensure 'fair' distribution of CPU time. A common technique is simply to let the task with the highest priority run before all tasks with lower priority. This works fine for a soft real-time system, but for hard real time, the system must provide a better guarantee.

RTOS scheduler

An RTOS uses a highest-priority-first scheduler, meaning that the task with the highest priority is always running. This is achieved with a preemptive scheduler that, at a tick interrupt, decides whether the currently running task is allowed to continue executing or needs to be switched for another task based on priority. Tasks having the same priority are given a "fair" share of processing time. This scheduler allows us to achieve soft real time, but it is difficult to achieve hard real time without any kind of deadline-based scheduling.

For this purpose, there is a choice between a preemptive and a cooperative scheduler. In preemptive mode, a task can be preempted, unlike in cooperative mode, where it is up to all tasks to give away the CPU "often" enough so that higher-priority tasks get to run. A typical RTOS real-time kernel achieves scheduler latencies from zero to a few microseconds.
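
The toy simulation below illustrates that tick-driven, highest-priority-first behaviour: at every tick the scheduler picks the ready task with the highest priority, preempting whatever was running. It is a conceptual sketch in Python, not code from any real RTOS kernel, and the task set is invented.

```python
# Sketch: tick-driven, preemptive highest-priority-first scheduling.
# Conceptual illustration of the RTOS behaviour described above,
# not code from any real kernel. Higher number = higher priority.
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    priority: int
    release_tick: int     # when the task becomes ready
    remaining: int        # ticks of work left

tasks = [
    Task("logger",  priority=1, release_tick=0, remaining=6),
    Task("sensor",  priority=3, release_tick=2, remaining=3),
    Task("control", priority=5, release_tick=4, remaining=3),
]

def run(tasks, total_ticks=14):
    for tick in range(total_ticks):
        ready = [t for t in tasks if t.release_tick <= tick and t.remaining > 0]
        if not ready:
            print(f"tick {tick:2d}: idle")
            continue
        current = max(ready, key=lambda t: t.priority)   # preempts lower priorities
        current.remaining -= 1
        print(f"tick {tick:2d}: running {current.name}")

if __name__ == "__main__":
    run(tasks)
```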

Embedded Linux scheduler

In Embedded Linux, there are more schedulers to choose from. The modularity of Embedded Linux allows different parts of the system to be changed; a simple insmod gives the possibility of changing the scheduler. There are a couple of schedulers designed for different purposes.

First of all, it has a basic highest-priority-first scheduler that uses the priority of a task and schedules it first. Embedded Linux also implements Earliest Deadline First (EDF), which makes use of the periodic nature of real-time tasks. Assuming that the deadline for every task is the moment it is next due to run, one can implement a fast EDF. In theory it is optimal, since it can schedule tasks up to 100% CPU usage; in practice this is not quite reached because of some overhead. When there are no real-time tasks that can run, the usual Linux kernel runs as the idle process, which can lead to starvation of regular Linux work and thus effectively disable it. But since the point of a real-time system is to run the real-time tasks, this is not a big problem for the system. Typical latencies of a real-time Linux scheduler are on the order of tens to hundreds of microseconds.

CPU resource

Embedded Linux requires a significant amount of CPU resources: perhaps >200 MIPS on a 32-bit processor, ideally with an MMU, plus 4 MB of ROM and 16 MB of RAM, and boot may take several seconds.

An RTOS, on the other hand, runs in less than 10 KB, on microcontrollers from 8-bit up, and boots in milliseconds.

IoT Implementation of OS

An RTOS is often preferred for extremely low-power applications, such as sensors that run for months on batteries. The low-power nature often precludes direct IP connectivity, so a gateway provides Internet connectivity: it speaks the low-power protocol to the sensors and translates it to IP. Linux may already have an existing protocol implementation to fulfill the gateway's requirements.

The basic requirement of an IoT device is network connectivity, typically in the form of IP via a web server. An RTOS can offer IP connectivity, but it risks being buggy unless you examine it carefully. For example, RTOSes usually do not isolate the user of the IP stack from the IP stack itself. Network connectivity requires potentially dealing with low-speed or congested links, which can lead to obscure and hard-to-debug buffer-handling issues when the stack is intermingled with other code. Embedded Linux, on the other hand, leverages hardware separation and a widely used IP stack that has probably already been exposed to the corner cases.

Security is essential in IoT devices, which are often exposed to the open Internet. A compromise of the Internet interface leaves the device open to intruders, and information or control of the device can be hijacked. Developers can leverage native embedded Linux features—multiuser support, SELinux, and containers—to contain and limit the damage.

Linux certainly is a robust and secure OS, and it has matured as an embedded operating system. Yet one of its drawbacks is its memory footprint compared to a real-time operating system. Even though it can be trimmed down by removing tools and system services that are not required in embedded systems, it is still a large piece of software. It simply cannot run on 8- or 16-bit MCUs and requires more onboard RAM for the Linux kernel. For example, ARM Cortex-M based MCUs typically have only a few hundred kilobytes of RAM, and Linux cannot run on these chips.

A common engineering solution for networked systems is to use two processors in the device. In this arrangement, an 8 or 16-bit MCU is used for the sensor or actuator, while a 32-bit processor is used for the network interface which runs an RTOS. Sales of 32-bit MCUs have exploded in the last several years, and have become the largest segment of the MCU market.

ORIGINALLY POSTED HERE ON TIRICH LABS

Read more…

 

max0492-01-arduino-breakout-board-1024x885.jpg

When I work on a development project, I’ve become a big fan of using development boards that have the Arduino headers on them. The vast number of shields that easily connect to these headers is phenomenal. The one problem that I’ve always had though was that there is always a need to use a breadboard to test a circuit or integrate a sensor that just isn’t in an Arduino header format. The result is a wiring mess that can result in loose or missing connections.

I was recently talking with Max Maxfield and he pointed me to a really cool adapter board designed to remove these wiring jumpers to a breadboard. Max wrote about this board here but I’m so excited about this that I thought I’d add my two cents as well.

The BreadShield, which can be purchased at https://www.crowdsupply.com/loser/breadshield, adapts the Arduino headers to a linear set of header pins designed to be plugged into a breadboard. You can see in the image below that this replaces all the extra jumpers that one would normally require, which has the potential to remove quite a few wires from the workbench.

max0492-03-arduino-breakout-board-1024x675.jpg

When I heard about these, I purchased three assembled units for about $28, which saves me the time of having to assemble the adapters myself. DIY assembly runs about $15 for a set of three boards. Either way, a great price to remove a bunch of wires from the workbench.

Now I’m still waiting for mine to arrive, but from the image, you can see that the one challenge to using these adapters might be adapting the height of your breadboard to your hardware stack. While this could be an issue, I keep various length spacers around the office so that I can adapt board heights and undoubtedly there will be a length that will ensure these line up properly.

You can view the original post here

Read more…