
The Core Costs of Data in IoT

Data is a critical resource in IoT that enables organizations to gain insights into their operations, optimize processes, and improve customer experience. It is important to understand the cost of managing and processing data, as it can be significant. Too often, organizations have more data than they know how to use effectively. Here are some of the major areas of cost:

First, data storage is a major cost. IoT devices generate large amounts of data, and this data needs to be stored in a secure and reliable way. Storing data in the cloud or on remote servers can be expensive, as it requires a robust and scalable infrastructure to support the large amounts of data generated by IoT devices. Additionally, data must be backed up to ensure data integrity and security, which adds to the cost.

Second, data processing and analysis require significant computational resources. Processing large amounts of data generated by IoT devices requires high-performance hardware and software, which can be expensive to acquire and maintain. Additionally, hiring data scientists and other experts to interpret and analyze the data adds to the cost.

Third, data transmission over networks can be costly. IoT devices generate data that needs to be transmitted over networks to be stored and processed. Depending on the location of IoT devices and the network infrastructure, the cost of network connectivity can vary widely.

Finally, data security is a major concern in IoT, and implementing robust security measures can add to the cost. This includes implementing encryption protocols to ensure data confidentiality, as well as implementing measures to prevent unauthorized access to IoT devices and data.

Managing and processing data requires significant resources, including storage, processing and analysis, network connectivity, and security. While data is a valuable resource that can provide significant value, the cost of managing and processing data must be carefully evaluated to ensure that the benefits outweigh the expenses.

Read more…

Adaptive systems and models at runtime refer to the ability of a system or model to dynamically adjust its behavior or parameters based on changing conditions and feedback during runtime. This allows the system or model to better adapt to its environment, improve its performance, and enhance its overall effectiveness.

Some technical details about adaptive systems and models at runtime include:

  1. Feedback loops: Adaptive systems and models rely on feedback loops to gather data and adjust their behavior. These feedback loops can be either explicit or implicit, and they typically involve collecting data from sensors or other sources, analyzing the data, and using it to make decisions about how to adjust the system or model.

  2. Machine learning algorithms: Machine learning algorithms are often used in adaptive systems and models to analyze feedback data and make predictions about future behavior. These algorithms can be supervised, unsupervised, or reinforcement learning-based, depending on the type of feedback data available and the desired outcomes.

  3. Parameter tuning: In adaptive systems and models, parameters are often adjusted dynamically to optimize performance. This can involve changing things like thresholds, time constants, or weighting factors based on feedback data.

  4. Self-organizing systems: Some adaptive systems and models are designed to be self-organizing, meaning that they can reconfigure themselves in response to changing conditions without requiring external input. Self-organizing systems typically use decentralized decision-making and distributed control to achieve their goals.

  5. Context awareness: Adaptive systems and models often incorporate context awareness, meaning that they can adapt their behavior based on situational factors like time of day, location, or user preferences. This requires the use of sensors and other data sources to gather information about the environment in real-time.

Overall, adaptive systems and models at runtime are complex and dynamic, requiring sophisticated algorithms and techniques to function effectively. However, the benefits of these systems can be significant, including improved performance, increased flexibility, and better overall outcomes.
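
To make the feedback loop and parameter tuning described above concrete, here is a minimal Python sketch of a threshold that adapts itself at runtime. The class name and the feedback signal are purely illustrative and not taken from any particular framework.

```python
# Minimal sketch of a runtime feedback loop that tunes one parameter.
class AdaptiveThreshold:
    def __init__(self, threshold: float = 0.5, learning_rate: float = 0.05):
        self.threshold = threshold          # parameter tuned at runtime
        self.learning_rate = learning_rate  # how aggressively we adapt

    def decide(self, sensor_value: float) -> bool:
        """Decision step: raise an alert if the reading exceeds the threshold."""
        return sensor_value > self.threshold

    def feedback(self, was_false_alarm: bool) -> None:
        """Feedback step: desensitize after false alarms, sensitize otherwise."""
        if was_false_alarm:
            self.threshold += self.learning_rate
        else:
            self.threshold -= self.learning_rate
        self.threshold = min(max(self.threshold, 0.0), 1.0)  # keep in a sane range


controller = AdaptiveThreshold()
for reading, operator_flagged_false_alarm in [(0.7, True), (0.65, True), (0.4, False)]:
    alert = controller.decide(reading)
    controller.feedback(operator_flagged_false_alarm)
    print(alert, round(controller.threshold, 2))
```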

Read more…

With the advent of the Internet of Things, Big Data is becoming more and more important. After all, when you have devices that are constantly collecting data, you need somewhere to store it all. But the Internet of Things is not just changing the way we store data; it’s changing the way we collect and use it as well. In this blog post, we will explore how the Internet of Things is transforming Big Data. From new data sources to new ways of analyzing data, the Internet of Things is changing the Big Data landscape in a big way.

 

 

How is the Internet of Things transforming Big Data?

The Internet of Things is transforming Big Data in a number of ways. One way is by making it possible to collect more data than ever before. This is because devices that are connected to the Internet can generate a huge amount of data. This data can be used to help businesses and organizations make better decisions.

Another way the Internet of Things is transforming Big Data is by making it easier to process and analyze this data. This is because there are now many tools and technologies that can help with this. One example is machine learning, which can be used to find patterns in data.

The Internet of Things is also changing the way we think about Big Data. This is because it’s not just about collecting large amounts of data – it’s also about understanding how this data can be used to improve our lives and businesses.

The Benefits of the Internet of Things for Big Data

The internet of things offers a number of benefits for big data:

  1. It allows a greater volume of data to be collected and stored.
  2. It provides a more diverse range of data types, which can be used to create more accurate and comprehensive models.
  3. It enables real-time data collection and analysis, which can help organizations make better decisions and take action more quickly.
  4. It can improve the accuracy of predictions by using historical data to train predictive models.
  5. Finally, it can help reduce the cost of storing and processing big data.

The Challenges of the Internet of Things for Big Data

The internet of things is transforming big data in a number of ways. One challenge is the sheer volume of data that is generated by devices and sensors. Another challenge is the variety of data formats, which can make it difficult to derive insights. Additionally, the real-time nature of data from the internet of things presents challenges for traditional big data infrastructure.

Conclusion

The Internet of Things is bringing a new level of connectivity to the world, and with it, a huge influx of data. This data is transforming how businesses operate, giving them new insights into their customers and operations.

The Internet of Things is also changing how we interact with the world around us, making our lives more convenient and efficient. With so much potential, it's no wonder that the Internet of Things is one of the most talked-about topics in the tech world today.

Read more…
An AI-based approach increases accuracy and can even make the impossible possible.
 
What is an Outlier?
 
Put simply, an outlier is a piece of data or observation that differs drastically from a given norm.
 
In the image above, the red fish is an outlier, clearly differing by color, but also by size, shape, and, most obviously, direction. As such, detecting outliers in data falls into two categories: univariate and multivariate.
  • Univariate: considering a single variable
  • Multivariate: considering multiple variables
 
Outlier Detection in Industrial IoT
 
In Industrial IoT, outlier detection can be instrumental in use cases such as understanding the health of your machines. Instead of looking at the characteristics of a fish like above, we are looking at the characteristics of a machine via data such as sensor readings.
 
The goal is to learn what normal operation looks like, so that outliers represent abnormal activity indicative of a future problem.
 
Statistical Approach to Outlier Detection
Statistical/probability-based approaches date back centuries. You may recall the bell curve: the values of your dataset plot to a distribution. In simplest terms, you calculate the mean and standard deviation of that distribution. You can then plot the location of x standard deviations from the mean, and anything that falls beyond that is an outlier.
 
A simple example to explore using this approach is outside air temperature. Looking at the low temperature in Boston for the month of January from 2008-2018 we find an average temperature of ~23 degrees F with a standard deviation of ~9.62 degrees. Plotting out 2 standard deviations results in the following.
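
As a rough illustration of this method, the sketch below computes the mean and standard deviation of a set of daily lows and flags anything beyond two standard deviations. The temperature values are made up; only the method mirrors the Boston example.

```python
# Hedged sketch of the statistical approach: mean +/- 2 standard deviations.
import statistics

def outlier_bounds(values, num_std=2):
    mean = statistics.mean(values)
    std = statistics.stdev(values)
    return mean - num_std * std, mean + num_std * std

# Hypothetical daily low temperatures (deg F); real data would come from a weather feed.
january_lows = [21, 25, 30, 18, 12, 27, 23, 35, 19, 22]
low, high = outlier_bounds(january_lows)

outliers = [t for t in january_lows if t < low or t > high]
print(f"normal range: {low:.1f} to {high:.1f} F, outliers: {outliers}")
```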
 
 
[Chart: Boston January low temperatures with bands at two standard deviations above and below the mean]
Interpreting the chart above, any temperature above the gray line or below the yellow can be considered outside the range of normal...or an outlier.
 
Why do we need AI?
If we just showed that you can determine outliers using simple statistics, then why do we need AI at all? The answer depends on the type of outlier analysis.
 
Why AI for Univariate Analysis?
In the example above, we successfully analyzed outliers in weather looking at a single variable: temperature.
 
So, why should we complicate things by introducing AI to the equation? The answer has to do with the distribution of your data. You can run univariate analysis using statistical measures, but in order for the results to be accurate, it is assumed that the distribution of your data is "normal". In other words, it needs to fit the shape of a bell curve (like the left image below).
 
However, in the real world, and specifically in industrial use cases, the resulting sensor data is not perfectly normal (like the right image below).
[Image: a normal distribution (left) and a non-normal distribution (right)]
As a result, statistical analysis on a non-normal dataset would result in more false positives and false negatives.
 
The Need for AI
AI-based methods, on the other hand, do not require a normal distribution and find patterns in the data, resulting in much higher accuracy. In the case of the weather in Boston, getting the forecast slightly wrong does not have a huge impact. However, in industries such as rail, oil and gas, and industrial equipment, trust in the accuracy of your results has a long-lasting impact, and that level of accuracy can only be achieved with AI.
 
Why AI for Multivariate Analysis?
The case for AI in multivariate analysis is a bit more straightforward. When we are looking at a single variable, we can easily plot the results on a plane, such as the temperature chart or the normal and non-normal distribution charts above.
 
However, if we are analyzing multiple points, such as the current, voltage, and wattage of a motor, vibration over three axes, or the return and discharge temperatures of an HVAC system, plotting and analyzing with statistics has its limitations. Just visualizing the plot becomes impossible for a human as we go from a single plane to hyperplanes, as shown below.
 
[Image: hyperplane arrangements in higher dimensions]
 
The Need for AI
For multivariate analysis, visual inspection starts to go beyond human capabilities while technical analysis goes beyond statistical capabilities. Instead, AI can be utilized to find patterns in the underlying data in order to learn normal operation and adequately monitor for outliers. In other words, for multivariate analysis AI starts to make the impossible possible.
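
As one hedged illustration of what an AI-based approach can look like here, the sketch below trains an Isolation Forest (scikit-learn) on simulated "normal" motor readings of current, voltage, and power, then scores new readings. All numbers are synthetic, and the Isolation Forest is just one of several possible model choices.

```python
# Hedged sketch: multivariate outlier detection with an Isolation Forest.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)
# Simulated "normal operation": current (A), voltage (V), power (W)
normal = np.column_stack([
    rng.normal(10, 0.5, 500),
    rng.normal(230, 2.0, 500),
    rng.normal(2300, 120, 500),
])

model = IsolationForest(contamination=0.01, random_state=0).fit(normal)

new_readings = np.array([
    [10.1, 231.0, 2310.0],   # looks like normal operation
    [14.5, 215.0, 3100.0],   # abnormal combination of values
])
print(model.predict(new_readings))   # 1 = normal, -1 = outlier
```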
 
Summary
Statistics and probability have been around far longer than anyone reading this post. However, not all data is created equal, and in the world of industrial IoT, statistical techniques have crucial limitations.
 
AI-based techniques go beyond these limitations, helping to reduce false positives and negatives and often making robust analysis possible for the first time.
 
At Elipsa, we build simple, fast and flexible AI for IoT. Get free access to our Community Edition to start integrating machine learning into your applications.
 
Read more…

For object detection projects, labeling your images with their corresponding bounding boxes and names is a tedious and time-consuming task, often requiring a human to label each image by hand. The Edge Impulse Studio has already dramatically decreased the amount of time it takes to get from raw images to a fully labeled dataset with the Data Acquisition Labeling Queue feature directly in your web browser. To make this process even faster, the Edge Impulse Studio is getting a new feature: AI-Assisted Labeling.

Automatically label common objects with YOLOv5.

 

To get started, create a “Classify multiple objects” images project via the Edge Impulse Studio new project wizard or open your existing object detection project. Upload your object detection images to your Edge Impulse project’s training and testing sets. Then, from the Data Acquisition tab, select “Labeling queue.” 

 

1. Using YOLOv5

By utilizing an existing library of pre-trained object detection models from YOLOv5 (trained with the COCO dataset), common objects in your images can quickly be identified and labeled in seconds without needing to write any code!

To label your objects with YOLOv5 classification, click the Label suggestions dropdown and select “Classify using YOLOv5.” If your object is more specific than what is auto-labeled by YOLOv5, e.g. “coffee” instead of the generic “cup” class, you can modify the auto-labels to the left of your image. These modifications will automatically apply to future images in your labeling queue.
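
No code is needed for the feature itself, but for readers curious what the underlying idea looks like, here is a hedged sketch that runs a COCO-pretrained YOLOv5 model over an image and keeps its detections as label suggestions. It assumes PyTorch is installed and pulls the model from the PyTorch Hub; the image file name is a placeholder.

```python
# Hedged sketch of the concept behind AI-assisted labeling: propose boxes with YOLOv5.
import torch

model = torch.hub.load("ultralytics/yolov5", "yolov5s", pretrained=True)

results = model("my_image.jpg")           # path, URL, or numpy array
suggestions = results.pandas().xyxy[0]    # columns: xmin, ymin, xmax, ymax, confidence, class, name

# Keep only confident suggestions; a human can still rename e.g. "cup" -> "coffee".
for _, box in suggestions[suggestions.confidence > 0.5].iterrows():
    print(box["name"], box[["xmin", "ymin", "xmax", "ymax"]].tolist())
```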


 

Click Save labels to move on to your next raw image, and see your fully labeled dataset ready for training in minutes!

 

2. Using your own model

You can also use your own trained model to predict and label your new images. From an existing (trained) Edge Impulse object detection project, upload new unlabeled images from the Data Acquisition tab. Then, from the “Labeling queue”, click the Label suggestions dropdown and select “Classify using <your project name>”:

[Screenshot: selecting your own trained project under the Label suggestions dropdown]

 

You can also upload a few samples to a new object detection project, train a model, then upload more samples to the Data Acquisition tab and use the AI-Assisted Labeling feature for the rest of your dataset. Classifying using your own trained model is especially useful for objects that are not in YOLOv5, such as industrial objects.

Click Save labels to move on to your next raw image, and see your fully labeled dataset ready for training in minutes using your own pre-trained model!

 

3. Using object tracking

If you have objects that are a similar size or common between images, you can also track your objects between frames within the Edge Impulse Labeling Queue, reducing the amount of time needed to re-label and re-draw bounding boxes over your entire dataset.

Draw your bounding boxes and label your images, then, after clicking Save labels, the objects will be tracked from frame to frame:

Track and auto-label your objects between frames.

 

Now that your object detection project contains a fully labeled dataset, learn how to train and deploy your model to your edge device: check out our tutorial!

 

Originally posted on the Edge Impulse blog by Jenny Plunkett - Senior Developer Relations Engineer.

Read more…

Connected devices are emerging as a modern way for grocers to decrease losses from food spoilage and energy waste. With such bottom-line advantages, it is not surprising that some of the biggest business giants are putting internet of things (IoT) techniques to work to enhance their operating results.

According to Talk Business, Walmart uses IoT for tasks like tracking food temperature and equipment energy output. IoT apps help monitor refrigeration units for products such as milk and ice cream, and report back to a support team if sensors detect equipment trouble, so problems can be fixed without serious malfunctions and with minimal downtime.

IoT solutions are used broadly across Walmart's massive store footprint: the connected devices send a total of 1.5 billion messages each day. Throughout the grocery business, IoT is leveraged to enhance food safety and decrease excessive energy consumption. IoT solutions allow food retailers to reduce food spoilage by 40% and achieve a net energy saving of 30%.

It was forecast in 2018 that grocers lose around $70 million per year due to food spoilage, with large chains losing hundreds of millions. Hence most grocers have started implementing sustainability-focused IoT technology to avoid waste and increase their business profits.


How IoT Helps Offer a Safer Shopping Experience to Shoppers

People prepared to stock up on food; however, the pandemic raised numerous challenges for retailers. New retail trends are an answer to these challenges, from moving products to managing stock and much more.

It also helps to ensure safe and healthy deliveries to customers' doorsteps whenever they need them. A Harvard study shows that grocery shopping is a higher-risk activity than traveling on an airplane during the COVID-19 pandemic. With the COVID-19 pandemic raging, retail stores need to provide an efficient shopping experience; they must look for ways to reduce exposure and the risk of infection as customers venture to the store for food.


Most grocers are turning toward modern technology such as IoT to help retailers and supermarkets offer safe service and meet their bottom lines. By placing internet of things devices throughout the store, such as smart grocery carts and baskets, grocers can make experiences more efficient and safer. Let's check how IoT is helping retail businesses overcome today's challenging scenarios and stop food spoilage.

Smart Stock Monitoring

Retailers keep warehouses full of goods to ensure that they don't run out when there is high demand. By integrating IoT-enabled sensors, retailers can easily detect the weight on shelves at warehouses and stores. This also helps them identify popular items; keeping track of items helps retailers restock them and prevent overstocking a particular product.

Guaranteeing Timely Deliveries

One report shows that 66% of customers anticipated they would increase online shopping in 2020. Undoubtedly, online shopping is the new norm; most people prefer to order their daily essentials using a grocery delivery mobile app. It therefore becomes vital for brands to ensure timely delivery. It is a critical factor in customer satisfaction, especially when there is no traditional consumer engagement like a friendly salesperson.

By integrating IoT-enabled devices into containers and shipments, retailers can quickly obtain insight into shipments. They can even track real-time updates to keep their customers informed of the approximate delivery time. This is critical when you want to deliver excellent customer experiences in the eCommerce market.

The data collected using IoT-enabled devices can also help drive the supply chain effectively by empowering retailers with route optimization to ensure fast delivery. IoT can play a crucial role in recognizing warehouse delays, and it helps to optimize delivery operations for better and quicker service.

Manage Store Capacity

With the new COVID-19 safety guidelines to follow, IoT helps retailers ensure their customers' safety by supporting social distancing rules. For example, retailers can place IoT sensors at entrances and exits to efficiently monitor traffic and grocery carts. The sensors provide accurate and up-to-date details, enabling retailers to manage capacity efficiently, ensure safety, and eliminate the need for store "bouncers" at the exit and entrance.

Contact Monitoring

Retailers can offer a safe and unique shopping experience by benefiting from IoT, which helps them with contact monitoring and social distancing as well. Retailers can provide shoppers with IoT-enabled wearables paired with the shopper's mobile phone through their branded app. These help shoppers detect whether they are too close to another shopper, alert them through their phone, and record the incident.

Combating COVID-19: How IoT is Helping Retailers

Preventing food spoilage, saving energy, and reducing waste are good practices for grocery stores, helping them to increase their profit margins. Due to the coronavirus pandemic, digital resilience has grown drastically.

Many brands and retailers, however, put a pause on initiatives during COVID-19. But to ensure their survival and profit margins, they need to start with new strategies and techniques. Here are a few IoT use cases that retailers are considering these days:

Video Analytics

Although supermarkets have used video surveillance technologies for many years, some brands are repurposing these systems to enhance their inventory management practices. Cameras monitor consumer behavior and help retailers prevent theft.

As customers' purchase preferences constantly change, it becomes essential for grocery stores to start stocking more perishable goods. A 2018 survey shows that more than 60% of retailers added refrigerators to store fresh products at their stores and meet customers' growing demand.

And by monitoring customers' purchasing patterns, grocery stores can gauge how much extra produce they need to acquire and how unexpected surges and falls in the market will affect their margins.

Autonomous Cleaning Robots

To promote social distancing, grocery stores are taking all essential precautions. They have implemented rigorous cleaning schedules to reduce the risk of COVID-19 spread. Retailers are focusing on sanitizing and disinfecting frequently touched surfaces using autonomous cleaning robots.

Robots can be controlled using IoT-based devices and help sanitize various areas, including doors, shopping carts, countertops, etc. All these tasks demand a reasonable amount of time and employees' attention. Performing them with the help of autonomous cleaning robots can save employees time and energy.

Contactless Checkout

Contactless checkout has become increasingly popular over the past few years. It has helped supermarkets reduce the need for dedicated cashiers and increase customers' shopping speed. During the COVID-19 pandemic, self-service environments have allowed customers to acquire essential food, cleaning products, and other day-to-day essentials without having to interact with supermarket staff directly.

What's Next for Smart Supermarkets?

IoT technologies help supermarkets tackle new challenges efficiently. Most stores receive products from different locations via a distribution center, which makes them lose traceability. With IoT integration, it has become easier for them to track every business activity and provide an internet-of-shopping experience to customers.

Modern IoT technology makes it possible for grocery stores to track every activity at each stage. This is crucial for the health and safety of customers, while also helping to handle top purchase priorities. It helps grocery stores survive and maintain healthy purchasing trends. IoT initiatives provide the shopping 2.0 infrastructure, such as smart shelves, carts, cashless checkout, and other options that change the primary service experience.

Read more…

The head is surely the most complex group of organs in the human body, but also the most delicate. The assessment and prevention of risks in the workplace remains the first priority approach to avoid accidents or reduce the number of serious injuries to the head. This is why wearing a hard hat in an industrial working environment is often required by law and helps to avoid serious accidents.

This article will give you an overview of how to detect whether the wearing of a helmet is respected by all workers, using a machine learning object detection model.

For this project, we have been using:

  • Edge Impulse Studio to acquire some custom data, visualize the data, train the machine learning model, and validate the inference results.
  • Part of this public dataset from Roboflow, where the images containing the smallest bounding boxes have been removed.
  • Part of the Flickr-Faces-HQ (FFHQ) dataset (under Creative Commons BY 2.0 license) to rebalance the classes in our dataset.
  • Google Colab to convert the YOLOv5 PyTorch format from the public dataset to the Edge Impulse ingestion format.
  • A Raspberry Pi, NVIDIA Jetson Nano, or any Intel-based MacBook to deploy the inference model.

Before we get started, here are some insights into the benefits and drawbacks of using a public dataset versus collecting your own.

Using a public dataset is a nice way to start developing your application quickly, validate your idea, and check the first results. But you can often be disappointed with the results when testing on your own data and in real conditions. As such, for very specific applications, you might spend much more time trying to tweak an open dataset than you would collecting your own. Also, remember to always make sure that the license suits your needs when using a dataset you found online.

On the other hand, collecting your own dataset can take a lot of time; it is a repetitive and often tedious task. But it gives you the possibility to collect data that will be as close as possible to your real-life application, with the same lighting conditions, the same camera, or the same angle, for example. Therefore, your accuracy in your real conditions will be much higher.

Using only custom data can indeed work well in your environment but it might not give the same accuracy in another environment, thus generalization is harder.

The dataset which has been used for this project is a mix of open data, supplemented by custom data.

First iteration, using only the public datasets

At first, we tried to train our model using only a small portion of this public dataset: 176 items in the training set and 57 items in the test set, keeping only the images containing a bounding box bigger than 130 pixels (we will see why later).


If you go through the public dataset, you can see that it is strongly lacking "head" data samples. The dataset is therefore considered imbalanced.

Several techniques exist to rebalance a dataset; here, we will add new images from Flickr-Faces-HQ (FFHQ). These images do not have bounding boxes, but drawing them can be done easily in the Edge Impulse Studio. You can directly import them using the uploader portal. Once your data has been uploaded, just draw boxes around the heads and give each one a label, as below:

[Image: drawing bounding boxes around heads in the Edge Impulse Studio labeling queue]

Now that the dataset is more balanced, with both images and bounding boxes of hard hats and heads, we can create an impulse, which is a mix of digital signal processing (DSP) blocks and training blocks:

[Image: impulse configuration with an image DSP block and an object detection learning block]

In this particular object detection use case, the DSP block will resize an image to fit the 320x320 pixels needed for the training block and extract meaningful features for the Neural Network. Although the extracted features don’t show a clear separation between the classes, we can start distinguishing some clusters:

[Image: feature explorer showing partially separated clusters for the two classes]

To train the model, we selected the Object Detection training block, which fine tunes a pre-trained object detection model on your data. It gives a good performance even with relatively small image datasets. This object detection learning block relies on MobileNetV2 SSD FPN-Lite 320x320.    

According to Daniel Situnayake, co-author of the TinyML book and founding TinyML engineer at Edge Impulse, this model “works much better for larger objects—if the object takes up more space in the frame it’s more likely to be correctly classified.” This is one of the reasons why we removed the images containing the smallest bounding boxes in our import script.
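
The import script itself is not reproduced here, but the filtering step it performs can be sketched roughly as follows. It assumes standard YOLO-format label files (class, x-center, y-center, width, height, all normalized) and the 130-pixel cutoff mentioned above; the actual upload to Edge Impulse is omitted.

```python
# Hedged sketch: keep only images whose YOLO labels contain a box of at least ~130 px.
from pathlib import Path
from PIL import Image

MIN_BOX_PX = 130

def has_large_enough_box(label_file: Path, image_file: Path) -> bool:
    img_w, img_h = Image.open(image_file).size
    for line in label_file.read_text().splitlines():
        # YOLO format: class x_center y_center width height (all normalized 0..1)
        _, _, _, w, h = map(float, line.split())
        if max(w * img_w, h * img_h) >= MIN_BOX_PX:
            return True
    return False

kept = [
    img for img in Path("dataset/images").glob("*.jpg")
    if has_large_enough_box(Path("dataset/labels") / (img.stem + ".txt"), img)
]
print(f"keeping {len(kept)} images")
```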

After training the model, we obtained 61.6% accuracy on the training set and 57% accuracy on the testing set. You might also note a huge accuracy difference between the quantized version and the float32 version. However, during Linux deployment, the default model uses the unoptimized version, so we will focus on the float32 version only in this article.


This accuracy is not satisfying, and it tends to have trouble detecting the right objects in real conditions:

[GIF: the first model having trouble detecting the right objects in real conditions]

Second iteration, adding custom data

On the second iteration of this project, we went through the process of collecting some of our own data. A very useful and handy way to collect custom data is with a mobile phone. You can also perform this step with the same camera you will be using in your factory or on your construction site; this will be even closer to the real conditions and therefore work best for your use case. In our case, we used a white hard hat when collecting data. If your company uses yellow ones, for example, consider collecting your data with the same hard hats.

Once the data has been acquired, go through the labeling process again and retrain your model. 


We obtain a model that is slightly more accurate when looking at the training performances. However, in real conditions, the model works far better than the previous one.


Finally, to deploy your model on your Raspberry Pi, NVIDIA Jetson Nano, or Intel-based MacBook, just follow the instructions provided in the links. The command-line interface `edge-impulse-linux-runner` will create a lightweight web interface where you can see the results.


Note that the inference runs locally and you do not need any internet connection to detect your objects. Last but not least, the trained models and the inference SDK are open source. You can use them, modify them, and integrate them into a broader application matching your specific needs, such as stopping a machine when a head is detected for more than 10 seconds.

This project has been publicly released; feel free to have a look at it on Edge Impulse Studio, clone the project, and go through every step to get a better understanding: https://studio.edgeimpulse.com/public/34898/latest

The essence of this use case is that Edge Impulse allows you, with very little effort, to develop industry-grade solutions in the health and safety context. These can now be embedded in bigger industrial control and automation systems with a consistent and stringent focus on machine operations linked to H&S compliance measures. Pre-trained models, which can later be easily retrained in the final industrial context as a “calibration” step, make this a customizable solution for your next project.

Originally posted on the Edge Impulse blog by Louis Moreau - User Success Engineer at Edge Impulse & Mihajlo Raljic - Sales EMEA at Edge Impulse

Read more…

By Ricardo Buranello

What Is the Concept of a Virtual Factory?

For a decade, the first Friday in October has been designated as National Manufacturing Day. This day begins a month-long events schedule at manufacturing companies nationwide to attract talent to modern manufacturing careers.

For some period, manufacturing went out of fashion. Young tech talents preferred software and financial services career opportunities. This preference has changed in recent years. The advent of digital technologies and robotization brought some glamour back.

The connected factory is democratizing another innovation — the virtual factory. Without critical asset connection at the IoT edge, the virtual factory couldn’t have been realized by anything other than brand-new factories and technology implementations.

There are technologies that enable decades-old assets to communicate. Such technologies allow us to join machine data with physical environment and operational conditions data. Benefits of virtual factory technologies like digital twin are within reach for greenfield and legacy implementations.

Digital twin technologies can be used for predictive maintenance and scenario planning analysis. At its core, the digital twin is about access to real-time operational data to predict and manage the asset’s life cycle. It leverages relevant life cycle management information inside and outside the factory. The possibilities of bringing various data types together for advanced analysis are promising.

I used to see a distinction between IoT-enabled greenfield technology in new factories and legacy technology in older ones. Data flowed seamlessly from IoT-enabled machines to enterprise systems or the cloud for advanced analytics in new factories’ connected assets. In older factories, while data wanted to move to the enterprise systems or the cloud, it hit countless walls. Innovative factories were creating IoT technologies in proof of concepts (POCs) on legacy equipment, but this wasn’t the norm.

No matter the age of the factory or equipment, everything looks alike. When manufacturing companies invest in machines, the expectation is this asset will be used for a decade or more. We had to invent something inclusive to new and legacy machines and systems.

We had to create something to allow decades-old equipment from diverse brands and types (PLCs, CNCs, robots, etc.) to communicate with one another. We had to think about how to make legacy machines talk to legacy systems. Connecting was not enough. We also had to make it accessible to experienced developers and technicians not specialized in systems integration.

If plant managers and leaders have clear and consumable data, they can use it for analysis and measurement. Surfacing and routing data has enabled innovative use cases in processes controlled by aged equipment. Prescriptive and predictive maintenance reduce downtime and allow access to data. This access enables remote operation and improved safety on the plant floor. Each line flows better, improving supply chain orchestration and worker productivity.

Open protocols aren’t optimized for connecting to each machine. You need tools and optimized drivers to connect to the machines, cut latency time and get the data to where it needs to be in the appropriate format to save costs. These tools include:

  • Machine data collection
  • Data transformation and visualization
  • Device management
  • Edge logic
  • Embedded security
  • Enterprise integration

This digital copy of the entire factory floor promises to improve productivity, quality, and throughput, reduce downtime, and lend access to more data and visibility. It enables factories to make small changes in the way machines and processes operate to achieve improvements.

Plants are trying to get and use data to improve overall equipment effectiveness. OEE applications can calculate how many good and bad parts were produced compared to the machine’s capacity. This analysis can go much deeper. Factories can visualize how the machine works down to sub-processes. They can synchronize each movement to the millisecond and change timing to increase operational efficiency.
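
For reference, the standard formula behind those OEE applications is availability multiplied by performance multiplied by quality. A minimal sketch with made-up shift numbers:

```python
# Minimal sketch of an OEE (Overall Equipment Effectiveness) calculation.
def oee(planned_minutes, runtime_minutes, ideal_rate_per_min, total_parts, good_parts):
    availability = runtime_minutes / planned_minutes
    performance = total_parts / (ideal_rate_per_min * runtime_minutes)
    quality = good_parts / total_parts
    return availability * performance * quality

score = oee(planned_minutes=480,      # one 8-hour shift
            runtime_minutes=430,      # minus breakdowns and changeovers
            ideal_rate_per_min=1.2,   # machine capacity in parts per minute
            total_parts=470,
            good_parts=455)
print(f"OEE: {score:.1%}")
```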

The technology is here. It is mature. It’s no longer a question of whether you want to use it — you have it to get to what’s next. I think this makes it a fascinating time for smart manufacturing.

Originally posted here.

Read more…

By Tony Pisani

For midstream oil and gas operators, data flow can be as important as product flow. The operator’s job is to safely move oil and natural gas from its extraction point (upstream), to where it’s converted to fuels (midstream), to customer delivery locations (downstream). During this process, pump stations, meter stations, storage sites, interconnection points, and block valves generate a substantial volume and variety of data that can lead to increased efficiency and safety.

“Just one pipeline pump station might have 6 Programmable Logic Controllers (PLCs), 12 flow computers, and 30 field instruments, and each one is a source of valuable operational information,” said Mike Walden, IT and SCADA Director for New Frontier Technologies, a Cisco IoT Design-In Partner that implements OT and IT systems for industrial applications. Until recently, data collection from pipelines was so expensive that most operators only collected the bare minimum data required to comply with industry regulations. That data included pump discharge pressure, for instance, but not pump bearing temperature, which helps predict future equipment failures.

A turnkey solution to modernize midstream operations

Now midstream operators are modernizing their pipelines with Industrial Internet of Things (IIoT) solutions. Cisco and New Frontier Technologies have teamed up to offer a solution combining the Cisco 1100 Series Industrial Integrated Services Router, Cisco Edge Intelligence, and New Frontier’s know-how. Deployed at edge locations like pump stations, the solution extracts data sent by pipeline equipment over legacy protocols and transforms it at the edge into a format that analytics and other enterprise applications understand. The transformation also minimizes bandwidth usage.

Mike Walden views the Cisco IR1101 as a game-changer for midstream operators. He shared with me that “Before the Cisco IR1101, our customers needed four separate devices to transmit edge data to a cloud server—a router at the pump station, an edge device to do protocol conversion from the old to the new, a network switch, and maybe a firewall to encrypt messages…With the Cisco IR1101, we can meet all of those requirements with one physical device.”

Collect more data, at almost no extra cost

Using this IIoT solution, midstream operators can for the first time:

  • Collect all available field data instead of just the data on a polling list. If the maintenance team requests a new type of data, the operations team can meet the request using the built-in protocol translators in Edge Intelligence. “Collecting a new type of data takes almost no extra work,” Mike said. “It makes the operations team look like heroes.”
  • Collect data more frequently, helping to spot anomalies. Recording pump discharge pressure more frequently, for example, makes it easier to detect leaks. Interest in predicting (rather than responding to) equipment failure is also growing. The life of pump seals, for example, depends on both the pressure that seals experience over their lifetime and the peak pressures. “If you only collect pump pressure every 30 minutes, you probably missed the spike,” Mike explained. “If you do see the spike and replace the seal before it fails, you can prevent a very costly unexpected outage – saving far more than the cost of a new seal.”
  • Protect sensitive data with end-to-end security. Security is built into the IR1101, with secure boot, VPN, certificate-based authentication, and TLS encryption.
  • Give IT and OT their own interfaces so they don’t have to rely on the other team. The IT team has an interface to set up network templates to make sure device configuration is secure and consistent. Field engineers have their own interface to extract, transform, and deliver industrial data from Modbus, OPC-UA, EIP/CIP, or MQTT devices.
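
As a rough, vendor-neutral sketch of the "extract over a legacy protocol, transform at the edge" idea (this is not the Cisco or New Frontier product), the snippet below reads two holding registers from a hypothetical pump-station PLC over Modbus TCP and reshapes them into JSON. Register addresses, scaling factors, and the exact pymodbus call signatures are assumptions and vary by device and library version.

```python
# Hedged sketch: poll a PLC over Modbus TCP and transform raw registers into JSON.
# Assumes pymodbus 3.x; addresses and scaling are placeholders for a real device map.
import json
import time
from pymodbus.client import ModbusTcpClient

client = ModbusTcpClient("192.168.10.20", port=502)   # hypothetical PLC address
client.connect()

rr = client.read_holding_registers(address=0, count=2)   # e.g. discharge pressure, bearing temp
discharge_pressure_psi = rr.registers[0] / 10.0           # assumed scaling of raw counts
bearing_temp_f = rr.registers[1] / 10.0

payload = json.dumps({
    "site": "pump-station-7",
    "timestamp": time.time(),
    "discharge_pressure_psi": discharge_pressure_psi,
    "bearing_temp_f": bearing_temp_f,
})
print(payload)   # in practice this would be published northbound, e.g. over MQTT

client.close()
```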

As Mike summed it up, “It’s finally simple to deploy a secure industrial network that makes all field data available to enterprise applications—in less time and using less bandwidth.”

Originally posted here.

Read more…


By Arm Blueprint staff
 

TinyML focuses on the optimization of machine learning (ML) workloads so that they can be processed on microcontrollers no bigger than a grain of rice and consuming only a few milliwatts of power.

TinyML gives tiny devices intelligence. We mean tiny in every sense of the word: as tiny as a grain of rice and consuming tiny amounts of power. Supported by Arm, Google, Qualcomm and others, tinyML has the potential to transform the Internet of Things (IoT), where billions of tiny devices, based on Arm chips, are already being used to provide greater insight and efficiency in sectors including consumer, medical, automotive and industrial.

Why target microcontrollers with tinyML?

Microcontrollers such as the Arm Cortex-M family are an ideal platform for ML because they’re already used everywhere. They perform real-time calculations quickly and efficiently, so they’re reliable and responsive, and because they use very little power, can be deployed in places where replacing the battery is difficult or inconvenient. Perhaps even more importantly, they’re cheap enough to be used just about anywhere. The market analyst IDC reports that 28.1 billion microcontrollers were sold in 2018, and forecasts that annual shipment volume will grow to 38.2 billion by 2023.

TinyML on microcontrollers gives us new techniques for analyzing and making sense of the massive amount of data generated by the IoT. In particular, deep learning methods can be used to process information and make sense of the data from sensors that do things like detect sounds, capture images, and track motion.

Advanced pattern recognition in a very compact format

Looking at the math involved in machine learning, data scientists found they could reduce complexity by making certain changes, such as replacing floating-point calculations with simple 8-bit operations. These changes created machine learning models that work much more efficiently and require far fewer processing and memory resources.
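
One common way to apply that float-to-8-bit idea in practice is post-training quantization with TensorFlow Lite. The sketch below assumes a saved TensorFlow model and uses random data as a stand-in for the real representative dataset used to calibrate the 8-bit ranges.

```python
# Hedged sketch: post-training int8 quantization with TensorFlow Lite.
import numpy as np
import tensorflow as tf

def representative_data():
    # A few hundred real input samples are normally used for calibration.
    for _ in range(100):
        yield [np.random.rand(1, 96, 96, 1).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_saved_model("my_keyword_model")  # placeholder path
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_data
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8

open("model_int8.tflite", "wb").write(converter.convert())
```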

TinyML technology is evolving rapidly thanks to new technology and an engaged base of committed developers. Only a few years ago, we were celebrating our ability to run a speech-recognition model capable of waking the system if it detects certain words on a constrained Arm Cortex-M3 microcontroller using just 15 kilobytes (KB) of code and 22KB of data.

Since then, Arm has launched new machine learning (ML) processors, the Ethos-U55 and Ethos-U65, microNPUs specifically designed to accelerate ML inference in embedded and IoT devices.

The Ethos-U55, combined with the AI-capable Cortex-M55 processor, will provide a significant uplift in ML performance and improvement in energy efficiency over the already impressive examples we are seeing today.

TinyML takes endpoint devices to the next level

The potential use cases of tinyML are almost unlimited. Developers are already working with tinyML to explore all sorts of new ideas: responsive traffic lights that change signaling to reduce congestion, industrial machines that can predict when they’ll need service, sensors that can monitor crops for the presence of damaging insects, in-store shelves that can request restocking when inventory gets low, healthcare monitors that track vitals while maintaining privacy. The list goes on.

TinyML can make endpoint devices more consistent and reliable, since there’s less need to rely on busy, crowded internet connections to send data back and forth to the cloud. Reducing or even eliminating interactions with the cloud has major benefits including reduced energy use, significantly reduced latency in processing data and security benefits, since data that doesn’t travel is far less exposed to attack. 

It’s worth noting that these tinyML models, which perform inference on the microcontroller, aren’t intended to replace the more sophisticated inference that currently happens in the cloud. What they do instead is bring specific capabilities down from the cloud to the endpoint device. That way, developers can save cloud interactions for if and when they’re needed.

TinyML also gives developers a powerful new set of tools for solving problems. ML makes it possible to detect complex events that rule-based systems struggle to identify, so endpoint AI devices can start contributing in new ways. Also, since ML makes it possible to control devices with words or gestures, instead of buttons or a smartphone, endpoint devices can be built more rugged and deployable in more challenging operating environments. 

TinyML gaining momentum with an expanding ecosystem

Industry players have been quick to recognize the value of tinyML and have moved rapidly to create a supportive ecosystem. Developers at every level, from enthusiastic hobbyists to experienced professionals, can now access tools that make it easy to get started. All that’s needed is a laptop, an open-source software library and a USB cable to connect the laptop to one of several inexpensive development boards priced as low as a few dollars.

In fact, at the start of 2021, Raspberry Pi released its very first microcontroller board, one of the most affordable development boards available on the market at just $4. Named Raspberry Pi Pico, it’s powered by the RP2040 SoC, a surprisingly powerful dual-core Arm Cortex-M0+ processor. The RP2040 MCU is able to run TensorFlow Lite Micro and we’re expecting to see a wide range of ML use cases for this board over the coming months.

Arm is a strong proponent of tinyML because our microcontroller architectures are so central to the IoT, and because we see the potential of on-device inference. Arm’s collaboration with Google is making it even easier for developers to deploy endpoint machine learning in power-conscious environments.

The combination of Arm CMSIS-NN libraries with Google’s TensorFlow Lite Micro (TFLu) framework, allows data scientists and software developers to take advantage of Arm’s hardware optimizations without needing to become experts in embedded programming.

On top of this, Arm is investing in new tools derived from Keil MDK to help developers get from prototype to production when deploying ML applications.

TinyML would not be possible without a number of early influencers. These include Pete Warden, a “founding father” of tinyML and a technical lead of TensorFlow Lite Micro at Google; Arm Innovator Kwabena Agyeman, who developed OpenMV, a project dedicated to low-cost, extensible, Python-powered machine-vision modules that support machine learning algorithms; and Arm Innovator Daniel Situnayake, a founding tinyML engineer and developer from Edge Impulse, a company that offers a full tinyML pipeline covering data collection, model training and model optimization. Arm partners such as Cartesiam.ai, a company that offers NanoEdge AI, a tool that creates software models on the endpoint based on the sensor behavior observed in real conditions, have also been pushing the possibilities of tinyML to another level.

Arm is also a partner of the TinyML Foundation, an open community that coordinates meet-ups to help people connect, share ideas, and get involved. There are many localised tinyML meet-ups covering the UK, Israel, and Seattle, to name a few, as well as a global series of tinyML Summits. For more information, visit the tinyML Foundation website.

Originally posted here.

Read more…

Once again, I’m jumping up and down in excitement because I’m going to be hosting a panel discussion as part of a webinar series — Fast and Fearless: The Future of IoT Software Development — being held under the august auspices of IotCentral.io

At this event, the second of a four-part series, we will be focusing on “AI and IoT Innovation” (see also What the FAQ are AI, ANNs, ML, DL, and DNNs? and What the FAQ are the IoT, IIoT, IoHT, and AIoT?).


Panel members Karl Fezer (upper left), Wei Xiao (upper right), Nikhil Bhaskaran (lower left), and Tina Shyuan (bottom right)

As we all know, the IoT is transforming the software landscape. What used to be a relatively straightforward embedded software stack has been revolutionized by the IoT, with developers now having to juggle specialized workloads, security, artificial intelligence (AI) and machine learning (ML), real-time connectivity, managing devices that have been deployed into the field… the list goes on.

In this webinar — which will be held on Tuesday 29 June 2021 from 10:00 a.m. to 11:00 a.m. CDT — I will be joined by four industry luminaries to discuss how to juggle the additional complexities that machine learning adds to IoT development, why on-device machine learning is more important now than ever, and what the combination of AI and IoT looks like for developers in the future.

The luminaries in question (and whom I will be questioning) are Karl Fezer (AI Ecosystem Evangelist at Arm), Wei Xiao (Principal Engineer, Sr. Strategic Alliances Manager at Nvidia), Nikhil Bhaskaran (Founder of Shunya OS), and Tina Shyuan (Director of Product Marketing at Qeexo).

So, what say you? Dare I hope that we will have the pleasure of your company and that you will be able to join us to (a) tease your auditory input systems with our discussions and (b) join our question-and-answer free-for-all frenzy at the end? If so, may I suggest that you Register Now before all of the good virtual seats are taken, metaphorically speaking, of course.

>> Click here to register

Read more…

By Sachin Kotasthane

In his book, 21 Lessons for the 21st Century, the historian Yuval Noah Harari highlights the complex challenges mankind will face on account of technological challenges intertwined with issues such as nationalism, religion, culture, and calamities. In the current industrial world hit by a worldwide pandemic, we see this complexity translate in technology, systems, organizations, and at the workplace.

In my previous article, Humane IIoT, I discussed the people-centric strategies that enterprises need to adopt while onboarding industrial IoT initiatives in the workforce. In this article, I will share thoughts on how new-age technologies such as AI, ML, big data, and of course industrial IoT can be used for effective management of complex workforce problems in a factory, thereby changing the way people work and interact, especially in this COVID-stricken world.

Workforce related problems in production can be categorized into:

  1. Time complexity
  2. Effort complexity
  3. Behavioral complexity

Problems in any of the above categories have a significant impact on the workforce, resulting in a detrimental effect on the outcome, whether of the product or the organization. The complexity of these problems can be attributed to the fact that solutions to such issues cannot be found using just engineering or technology fixes, as there is no single root cause but rather a combination of factors and scenarios. Let us, therefore, explore a few and seek probable workforce solutions.

Figure 1: Workforce Challenges and Proposed Strategies in Production

  1. Addressing Time Complexity

    Any workforce-related issue that has a detrimental effect on the operational time, due to contributing factors from different factory systems and processes, can be classified as a time complex problem.

    Though classical paper-based schedules, lists, and punch sheets have largely been replaced with IT-systems such as MES, APS, and SRM, the increasing demands for flexibility in manufacturing operations and trends such as batch-size-one, warrant the need for new methodologies to solve these complex problems.

    • Worker attendance

      Anyone who has experienced, at close quarters, a typical day in the life of a factory supervisor, will be conversant with the anxiety that comes just before the start of a production shift. Not knowing who will report absent, until just before the shift starts, is one complex issue every line manager would want to get addressed. While planned absenteeism can be handled to some degree, it is the last-minute sick or emergency-pager text messages, or the transport delays, that make the planning of daily production complex.

      What if there were a solution that gave a count close to the confirmed hands for the shift at least half an hour or an hour in advance? It turns out that organizations are experimenting with a combination of GPS, RFID, and employee tracking that interacts with resource planning systems to automate the shift planning activity.

      While some legal and privacy issues still need to be addressed, it would not be long before we see people being assigned to workplaces, even before they enter the factory floor.

      While making sure every line manager has accurate information about the confirmed hands for the shift, it is equally important that the health and well-being of employees is monitored during this pandemic. Technologies such as radar and millimeter-wave sensors can enable live tracking of workers around the shop floor and make sure that social distancing norms are observed.

    • Resource mapping

      While resource skill-mapping and certification are mostly HR function prerogatives, not having the right resource at the workstation during exigencies such as absenteeism or extra workload is a complex problem. Precious time is lost in locating such resources, or worse still, millions are spent in overtime.

      What if there were a tool that analyzed the current workload for a resource with the identified skillset code(s) and gave an accurate estimate of the resource’s availability? This could further be used by shop managers to plan manpower for a shift, keeping them as lean as possible.

      Today, IT teams of OEMs are seen working with software vendors to build such analytical tools that consume data from disparate systems—such as production work orders from MES and swiping details from time systems—to create real-time job profiles. These results are fed to the HR systems to give managers the insights needed to make resource decisions within minutes.
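
A hedged sketch of the kind of analysis such a tool performs is shown below: it joins hypothetical MES work orders with time-system swipes to find the least-loaded, present workers for a given skill code. Every column name and value is illustrative, not taken from any specific MES or HR system.

```python
# Hedged sketch: join MES work orders with time-system swipes to rank available workers.
import pandas as pd

work_orders = pd.DataFrame({
    "worker_id": ["W1", "W2", "W1", "W3"],
    "skill_code": ["WELD-2", "WELD-2", "ASSY-1", "WELD-2"],
    "planned_minutes": [120, 240, 60, 180],
})

swipes = pd.DataFrame({
    "worker_id": ["W1", "W2"],   # W3 has not swiped in for this shift
    "present": [True, True],
})

workload = (work_orders.groupby(["worker_id", "skill_code"], as_index=False)
                       ["planned_minutes"].sum())
available = workload.merge(swipes, on="worker_id", how="inner")

# Least-loaded present welders first: candidates to cover an absence.
welders = available[available.skill_code == "WELD-2"].sort_values("planned_minutes")
print(welders)
```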

  2. Addressing Effort Complexity

    Just as time complexities result in increased  production time, problems in this category result in an increase in effort by the workforce to complete the same quantity of work. As the effort required is proportionate to the fatigue and long-term well-being of the workforce, seeking workforce solutions to reduce effort would be appreciated. Complexity arises when organizations try to create a method out-of-madness from a variety of factors such as changing workforce profiles, production sequences, logistical and process constraints, and demand fluctuations.

    Thankfully, solutions for this category of problems can be found in new technologies that augment existing systems to get insights and predictions, the results of which can reduce the effort, thereby channeling it more productively. Add to this the demand fluctuations of the current pandemic, and real-time operational visibility, coupled with advanced analytics, will ensure that shift production targets are met.

    • Intelligent exoskeletons

      Exoskeletons, as we know, are powered bodysuits designed to safeguard and support the user in performing tasks, while increasing overall human efficiency to do the respective tasks. These are deployed in strain-inducing postures or to lift objects that would otherwise be tiring after a few repetitions. Exoskeletons are the new-age answer to reducing user fatigue in areas requiring human skill and dexterity, which otherwise would require a complex robot and cost a bomb.

      However, the complexity that mars exoskeleton use is making the same suit adaptable to a variety of postures, user body types, and jobs at the same workstation. It would help if the exoskeleton could sense the user, set the posture, and adapt itself to the next operation automatically.

      Taking a leaf out of Marvel’s Iron Man, whose suit complements his posture and is controlled by JARVIS, manufacturers can now hope to create intelligent exoskeletons that are always connected to factory systems and user profiles. These suits will adapt and respond to assistive needs without any intervention, freeing the user to focus completely on the main job at hand.

      Given the ongoing COVID situation, workers and management would also be safer if these suits were equipped with sensors and technologies such as radar or millimeter wave to help enforce social distancing, measure body temperature, and so on.

    • Highlighting likely deviations

      The world over, quality teams on factory floors work with checklists that the quality inspector verifies for every product that comes to the inspection station. While such repetitive tasks are best suited to robots, when humans execute them, especially tasks that involve the visual, auditory, tactile, and olfactory senses, mistakes and misses are bound to occur. This results in costly rework and recalls.

      Manufacturers have tried to address this complexity by carrying out rotation of manpower. But this, too, has met with limited success, given the available manpower and ever-increasing workloads.

      Fortunately, predictive quality integrated with feed-forward techniques and some smart visual tracking can be used to highlight the area or zone on the product that is prone to quality slips, based on data captured from previous operations. The inspector can then be guided to pay more attention to these areas in the checklist.

  3. Addressing Behavioral Complexity

    Problems of this category usually manifest as quality issues, but the root cause can often be traced to workforce behavior or profile. Traditionally, organizations have addressed such problems through experienced supervisors who, as people managers, were expected to read these signs, anticipate issues, and align the manpower.

    However, with constantly changing manpower and product variants, these are now complex new-age problems requiring new-age solutions.

    • Heat-mapping workload

      Time-and-motion studies map user movements around the machine against the time each activity takes, matching work to the available cycle time either by redistributing tasks or by increasing the manpower at that station. Time-consuming and cumbersome as this is, the complexity increases when workload balancing must be done for teams working on a single product at a workstation. The movements of multiple resources during different sequences are difficult to track, and different users cannot be expected to follow the same footsteps every time.

      Solving this needs a solution that monitors human motion unobtrusively, links it to the product work content at the workstation, and generates recommendations to balance the workload and even out the ‘congestion.’ New industrial applications such as short-range radar and visual feeds can be used to create heat maps of the workforce as they work on the product. These can be superimposed on the digital twin of the process to identify the zones of ‘congestion,’ and the result can be fed to the line-planning function to implement corrective measures such as work redistribution or partial outsourcing of the operation.
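      As a simplified illustration of just the heat-mapping step, the sketch below bins (x, y) position samples into a grid and flags the cell with the highest dwell count as the ‘congestion’ zone. The grid size, cell dimensions, and sample values are assumptions for illustration; sensor fusion and the digital-twin overlay are out of scope here.

        #include <algorithm>
        #include <array>
        #include <iostream>
        #include <vector>

        constexpr int    GRID   = 10;    // 10 x 10 cells covering the workstation area (assumed)
        constexpr double CELL_M = 0.5;   // each cell is 0.5 m x 0.5 m (assumed)

        struct Sample { double x, y; };  // worker position in metres, one sample per time step

        int main() {
            // Position samples as they might come from short-range radar or a visual feed.
            std::vector<Sample> samples = {{1.2, 0.7}, {1.3, 0.8}, {1.2, 0.75}, {3.9, 2.1}};

            std::array<std::array<int, GRID>, GRID> heat{};   // dwell-time counts per cell
            for (const auto& s : samples) {
                int cx = std::min(GRID - 1, static_cast<int>(s.x / CELL_M));
                int cy = std::min(GRID - 1, static_cast<int>(s.y / CELL_M));
                ++heat[cy][cx];
            }

            // The hottest cell marks the congestion zone to feed back to line planning.
            int bestX = 0, bestY = 0;
            for (int y = 0; y < GRID; ++y)
                for (int x = 0; x < GRID; ++x)
                    if (heat[y][x] > heat[bestY][bestX]) { bestX = x; bestY = y; }

            std::cout << "Congestion around (" << bestX * CELL_M << " m, " << bestY * CELL_M
                      << " m) with dwell count " << heat[bestY][bestX] << "\n";
        }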

    • Aging workforce (loss of tribal knowledge)

      With new technology coming to the shop-floor, the skills of the current workforce quickly become outdated. Also, with any new hire comes the critical task of training and knowledge sharing from experienced hands. As organizations already face a shortage of manpower, releasing more hands to impart training to a larger workforce audience, possibly at different locations, becomes an even more daunting task.

      Fully aware of the difficulty of, and reluctance toward, documentation, organizations are increasingly adopting AR-based workforce training mapped to relevant learning and memory needs. These AR solutions capture the minutest actions executed by an expert on the shop-floor and can be played back by a novice in situ as a step-by-step guide. Such tools simplify the knowledge-transfer process and also increase worker productivity while reducing costs.

      Further, in extraordinary situations such as the one we face at present, technologies such as AR offer effective and personalized support to field personnel without the need to fly specialists to multiple sites. This keeps them safe while keeping expertise accessible.

Key takeaways and Actionable Insights

The shape of the future workforce will be the result of complex, changing, and competing forces. Technology, globalization, demographics, social values, and the changing personal expectations of the workforce will continue to transform and disrupt the way businesses operate, increasing complexity and radically changing where and when the future workforce operates and how work is done. While the need to constantly reskill and upskill the workforce will be enormous, using new-age techniques and technologies to enhance the effectiveness and efficiency of the existing workforce will come into the spotlight.


Figure 2: The Future IIoT Workforce

Organizations will increasingly be required to:

  1. Deploy data farming to dive deep and extract vast amounts of information and process insights embedded in production systems. Tapping into large reservoirs of ‘tribal knowledge’ and digitizing it for ingestion to data lakes is another task that organizations will have to consider.
  2. Augment existing operations systems such as SCADA, DCS, MES, and CMMS with new digital platforms, AI, AR/VR, big data, and machine learning to underpin and grow the world of work. While there will be no dearth of resources skilled in one or more of these new technologies, organizations will need to ‘acqui-hire’ talent and intellectual property, using specialists to integrate with existing systems and gain meaningful, actionable insights.
  3. Address privacy and data security concerns of the workforce, through the smart use of technologies such as radar and video feeds.

Nonetheless, digital enablement will need to be optimally used to tackle the new normal that the COVID pandemic has set forth in manufacturing—fluctuating demands, modular and flexible assembly lines, reduced workforce, etc.

Originally posted here.

Read more…

IoT in Mining

Flowchart of IoT in Mining

by Vaishali Ramesh

Introduction – Internet of Things in Mining

The Internet of Things (IoT) is the extension of Internet connectivity into physical devices and everyday objects. Embedded with electronics, Internet connectivity, and other forms of hardware, these devices can communicate and interact with others over the Internet, and they can be remotely monitored and controlled. In the mining industry, IoT is used as a means of achieving cost and productivity optimization, improving safety measures, and advancing artificial intelligence capabilities.

IoT in the Mining Industry

Considering the numerous incentives it brings, many large mining companies are planning and evaluating ways to start their digital journeys and apply digitalization to day-to-day mining operations. For instance:

  • Cost optimization and improved productivity through sensors on mining equipment and systems that monitor the equipment and its performance. Mining companies are using these large chunks of data – 'big data' – to discover more cost-efficient ways of running operations and to reduce overall operational downtime.
  • Ensure the safety of people and equipment by monitoring ventilation and toxicity levels inside underground mines with the help of IoT on a real-time basis. It enables faster and more efficient evacuations or safety drills.
  • Moving from preventive to predictive maintenance
  • Improved and faster decision-making. The mining industry faces emergencies almost every hour, with a high degree of unpredictability. IoT helps balance such situations and supports the right decisions when several factors are active at the same time, shifting everyday operations toward algorithms.

IoT & Artificial Intelligence (AI) application in Mining industry

Another benefit of IoT in the mining industry is its role as the underlying system facilitating the use of Artificial Intelligence (AI). From exploration to processing and transportation, AI enhances the power of IoT solutions as a means of streamlining operations, reducing costs, and improving safety within the mining industry.

Using vast amounts of data inputs, such as drilling reports and geological surveys, AI and machine learning can make predictions and provide recommendations on exploration, resulting in a more efficient process with higher-yield results.

AI-powered predictive models also enable mining companies to improve their metals processing methods through more accurate and less environmentally damaging techniques. AI can be used for the automation of trucks and drills, which offers significant cost and safety benefits.

Challenges for IoT in Mining 

Although IoT brings clear benefits to the mining industry, its implementation in mining operations has faced many challenges in the past:

  • Limited or unreliable connectivity especially in underground mine sites
  • Remote locations may struggle to pick up 3G/4G signals
  • Declining ore grades require digging deeper in many mines, which can further hinder the rollout of IoT systems

Mining companies have overcome the challenge of connectivity by implementing more reliable connectivity methods and data-processing strategies to collect, transfer, and present mission-critical data for analysis. Satellite communications can play a critical role in transferring data back to control centers to provide a complete picture of mission-critical metrics. Mining companies have worked with trusted IoT satellite connectivity specialists such as Inmarsat and their partner ecosystems to ensure they extract and analyze their data effectively.

 

Cybersecurity will be another major challenge for IoT-powered mines over the coming years

As mining operations become more connected, they will also become more vulnerable to hacking, which will require additional investment in security systems.

 

Following a 2016 data breach at Goldcorp that disproved the previous industry mentality that miners are not typical targets, ten mining companies established the Mining and Metals Information Sharing and Analysis Centre (MM-ISAC) in April 2017 to share cyber threats among peers.

In March 2019, one of the largest aluminum producers in the world, Norsk Hydro, suffered an extensive cyber-attack, which led to the company isolating all plants and operations as well as switching to manual operations and procedures. Several of its plants suffered temporary production stoppages as a result. Mining companies have realized the importance of digital security and are investing in new security technologies.

Digitalization of Mining Industry - Road Ahead

Many mining companies have realized the benefits of digitalization in their mines and have taken steps to implement them. The four themes expected to be central to the digitalization of the mining industry over the next decade are listed below:

[Figures: the four themes central to mining digitalization, and a graph plotting each digital technology's complexity against its expected implementation period]

The above graph demonstrates the complexity of each digital technology and the implementation period expected for its widespread adoption. Various factors, such as the complexity and scalability of the technologies involved, determine the adoption rate of specific technologies and the pace of the overall digital transformation of the mining industry.

The world can expect to witness prominent developments from the mining industry to make it more sustainable. Mining also has unfavorable impacts on communities, ecosystems, and other surroundings; with the intention of minimizing them, the power of data is being harnessed through IoT deployments. Overall, IoT helps the mining industry extract resources within the time frame and footprint that are essential.

Originally posted here.

Read more…

In this blog, we’ll discuss how users of Edge Impulse and Nordic can actuate and stream classification results over BLE using Nordic’s UART Service (NUS). This makes it easy to integrate embedded machine learning into your next generation IoT applications. Seamless integration with nRF Cloud is also possible since nRF Cloud has native support for a BLE terminal. 

We’ve extended the Edge Impulse example functionality already available for the nRF52840 DK and nRF5340 DK by adding the ability to actuate and stream classification outputs. The extended example is available for download on GitHub and offers a uniform experience on both hardware platforms. 

Using nRF Toolbox 

After following the instructions in the example’s readme, download the nRF Toolbox mobile application (available on both iOS and Android) and connect to the nRF52840 DK or the nRF5340 DK, which will be discovered as “Edge Impulse”. Once connected, set up the interface as follows so that you can get information about the device and its available sensors, and start or stop the inferencing process. Save the preset configuration so that you can load it again for future use. Fill in the text of the various commands using the same convention as the Edge Impulse AT command set. For example, sending AT+RUNIMPULSE starts the inferencing process on the device. 

Figure 1. Setting up the Edge Impulse AT Command set

Once each AT command has been mapped to an icon, hit the appropriate icon. Hitting the ‘play’ button causes the device to start acquiring data and perform inference every couple of seconds. The results can be viewed in the “Logs” menu as shown below.
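For readers curious about what happens on the device side, the fragment below is a hedged illustration, in plain C++ rather than the actual Nordic firmware from the example, of how a text line such as AT+RUNIMPULSE received over the UART service could be dispatched to start or stop inferencing. AT+RUNIMPULSE is the command named above; the stop command name and the simulated input are placeholders.

    #include <iostream>
    #include <string>
    #include <vector>

    static bool inferencing_running = false;

    // Dispatch one text command received over the BLE UART (NUS) link.
    void handle_at_command(const std::string& cmd) {
        if (cmd == "AT+RUNIMPULSE") {             // command described in this article
            inferencing_running = true;           // begin sampling and classifying
        } else if (cmd == "AT+STOPIMPULSE") {     // placeholder name for a stop command
            inferencing_running = false;
        } else {
            std::cout << "Unknown command: " << cmd << "\n";
            return;
        }
        std::cout << cmd << " -> inferencing_running = "
                  << std::boolalpha << inferencing_running << "\n";
    }

    int main() {
        // Simulate two lines arriving from the nRF Toolbox UART terminal.
        std::vector<std::string> lines = {"AT+RUNIMPULSE", "AT+STOPIMPULSE"};
        for (const auto& line : lines) handle_at_command(line);
    }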

Figure 2. Classification Output over BLE in the Logs View

Using nRF Cloud

Using the nRF Connect for Cloud mobile app for iOS and Android, you can turn your smartphone into a BLE gateway. This allows users to connect their BLE NUS devices running Edge Impulse to nRF Cloud and send the inferencing conclusions to the cloud. It’s as easy as setting up the BLE gateway through the app, connecting to the “Edge Impulse” device, and watching the same results being displayed in the “Terminal over BLE” window shown below!

Figure 3. Classification Output Shown in nRF Cloud

Summary

Edge Impulse is supercharging IoT with embedded machine learning and we’ve discussed a couple of ways you can easily send conclusions to either the smartphone or to the cloud by leveraging the Nordic UART Service. We look forward to seeing how you’ll leverage Edge Impulse, Nordic and BLE to create your next gen IoT application.  

 

Article originally written for the Edge Impulse blog by Zin Thein Kyaw, Senior User Success Engineer at Edge Impulse.

Read more…

by Evelyn Münster

IoT systems are complex data products: they consist of digital and physical components, networks, communications, processes, data, and artificial intelligence (AI). User interfaces (UIs) are meant to make this level of complexity understandable for the user. However, building a data product that can explain data and models to users in a way that they can understand is an unexpectedly difficult challenge. That is because data products are not your run-of-the-mill software product.

In fact, 85% of all big data and AI projects fail. Why? I can say from experience that it is not the technology but rather the design that is to blame.

So how do you create a valuable data product? The answer lies in a new type of user experience (UX) design. With data products, UX designers are confronted with several additional layers that are not usually found in conventional software products: it’s a relatively complex system, unfamiliar to most users, and comprises data and data visualization as well as AI in some cases. Last but not least, it presents an entirely different set of user problems and tasks than customary software products.

Let’s take things one step at a time. My many years in data product design have taught me that it is possible to create great data products, as long as you keep a few things in mind before you begin.

As a prelude to the UX design process, make sure you and your team answer the following nine questions:

1. Which problem does my product solve for the user?

The user must be able to understand the purpose of your data product in a matter of minutes. Assigning it to one of the five categories of specific data-product tasks can help: actionable insights, performance feedback loops, root cause analysis, knowledge creation, and trust building.

2. What does the system look like?

Do not expect users to already know how to interpret the data properly. They need to be able to construct a fairly accurate mental model of the system behind the data.

3. What is the level of data quality?

The UI must reflect the quality of the data. A good UI leads the user to trust the product.

4. What is the user’s proficiency level in graphicacy and numeracy?

Conduct user testing to make sure that your audience will be able to read and interpret the data and visuals correctly.

5. What level of detail do I need?

Aggregated data is often too abstract to explain things or to build user trust. A good way to counter this challenge is to use details that explain. Then again, too much detail can also be overwhelming.

6. Are we dealing with probabilities?

Probabilities are tricky and require explanations. The common practice of cutting out all uncertainties makes the UI deceptively simple – and dangerous.

7. Do we have a data visualization expert on the design team?

UX design applied to data visualization requires a special skillset that covers the entire process, from data analysis to data storytelling. It is always a good idea to have an expert on the team or, alternatively, have someone to reach out to when required.

8. How do we get user feedback?

As soon as the first prototype is ready, you should collect feedback through user testing. The prototype should present content in the most realistic and consistent way possible, especially when it comes to data and figures.

9. Can the user interface boost our marketing and sales?

If the user interface clearly communicates what the data product does and what the process is like, then it could take on a new function: sell your products.

To sum up: we must acknowledge that data products are an unexplored territory. They are not just another software product or dashboard, which is why, in order to create a valuable data product, we will need a specific strategy, new workflows, and a particular set of skills: Data UX Design.

Originally posted HERE 

Read more…

by Stephanie Overby

What's next for edge computing, and how should it shape your strategy? Experts weigh in on edge trends and talk workloads, cloud partnerships, security, and related issues


All year, industry analysts have been predicting that edge computing – and complementary 5G network offerings – will see significant growth, as major cloud vendors deploy more edge servers in local markets and telecom providers push ahead with 5G deployments.

The global pandemic has not significantly altered these predictions. In fact, according to IDC’s worldwide IT predictions for 2021, COVID-19’s impact on workforce and operational practices will be the dominant accelerator for 80 percent of edge-driven investments and business model change across most industries over the next few years.

First, what exactly do we mean by edge? Here’s how Rosa Guntrip, senior principal marketing manager, cloud platforms at Red Hat, defines it: “Edge computing refers to the concept of bringing computing services closer to service consumers or data sources. Fueled by emerging use cases like IoT, AR/VR, robotics, machine learning, and telco network functions that require service provisioning closer to users, edge computing helps solve the key challenges of bandwidth, latency, resiliency, and data sovereignty. It complements the hybrid computing model where centralized computing can be used for compute-intensive workloads while edge computing helps address the requirements of workloads that require processing in near real time.”

Moving data infrastructure, applications, and data resources to the edge can enable faster response to business needs, increased flexibility, greater business scaling, and more effective long-term resilience.

“Edge computing is more important than ever and is becoming a primary consideration for organizations defining new cloud-based products or services that exploit local processing, storage, and security capabilities at the edge of the network through the billions of smart objects known as edge devices,” says Craig Wright, managing director with business transformation and outsourcing advisory firm Pace Harmon.

“In 2021 this will be an increasing consideration as autonomous vehicles become more common, as new post-COVID-19 ways of working require more distributed compute and data processing power without incurring debilitating latency, and as 5G adoption stimulates a whole new generation of augmented reality, real-time application solutions, and gaming experiences on mobile devices,” Wright adds.

8 key edge computing trends in 2021


Noting the steady maturation of edge computing capabilities, Forrester analysts said, “It’s time to step up investment in edge computing,” in their recent Predictions 2020: Edge Computing report. As edge computing emerges as ever more important to business strategy and operations, here are eight trends IT leaders will want to keep an eye on in the year ahead.

1. Edge meets more AI/ML


Until recently, pre-processing of data via near-edge technologies or gateways had its share of challenges due to the increased complexity of data solutions, especially in use cases with a high volume of events or limited connectivity, explains David Williams, managing principal of advisory at digital business consultancy AHEAD. “Now, AI/ML-optimized hardware, container-packaged analytics applications, frameworks such as TensorFlow Lite and tinyML, and open standards such as the Open Neural Network Exchange (ONNX) are encouraging machine learning interoperability and making on-device machine learning and data analytics at the edge a reality.” 

Machine learning at the edge will enable faster decision-making. “Moreover, the amalgamation of edge and AI will further drive real-time personalization,” predicts Mukesh Ranjan, practice director with management consultancy and research firm Everest Group.

“But without proper thresholds in place, anomalies can slowly become standards,” notes Greg Jones, CTO of IoT solutions provider Kajeet. “Advanced policy controls will enable greater confidence in the actions made as a result of the data collected and interpreted from the edge.” 

 

2. Cloud and edge providers explore partnerships


IDC predicts a quarter of organizations will improve business agility by integrating edge data with applications built on cloud platforms by 2024. That will require partnerships across cloud and communications service providers, with some pairing up already beginning between wireless carriers and the major public cloud providers.

According to IDC research, the systems that organizations can leverage to enable real-time analytics are already starting to expand beyond traditional data centers and deployment locations. Devices and computing platforms closer to end customers and/or co-located with real-world assets will become an increasingly critical component of this IT portfolio. This edge computing strategy will be part of a larger computing fabric that also includes public cloud services and on-premises locations.

In this scenario, edge provides immediacy and cloud supports big data computing.

 

3. Edge management takes center stage


“As edge computing becomes as ubiquitous as cloud computing, there will be increased demand for scalability and centralized management,” says Wright of Pace Harmon. IT leaders deploying applications at scale will need to invest in tools to “harness step change in their capabilities so that edge computing solutions and data can be custom-developed right from the processor level and deployed consistently and easily just like any other mainstream compute or storage platform,” Wright says.

The traditional approach to data center or cloud monitoring won’t work at the edge, notes Williams of AHEAD. “Because of the rather volatile nature of edge technologies, organizations should shift from monitoring the health of devices or the applications they run to instead monitor the digital experience of their users,” Williams says. “This user-centric approach to monitoring takes into consideration all of the components that can impact user or customer experience while avoiding the blind spots that often lie between infrastructure and the user.”

As Stu Miniman, director of market insights on the Red Hat cloud platforms team, recently noted, “If there is any remaining argument that hybrid or multi-cloud is a reality, the growth of edge solidifies this truth: When we think about where data and applications live, they will be in many places.”

“The discussion of edge is very different if you are talking to a telco company, one of the public cloud providers, or a typical enterprise,” Miniman adds. “When it comes to Kubernetes and the cloud-native ecosystem, there are many technology-driven solutions competing for mindshare and customer interest. While telecom giants are already extending their NFV solutions into the edge discussion, there are many options for enterprises. Edge becomes part of the overall distributed nature of hybrid environments, so users should work closely with their vendors to make sure the edge does not become an island of technology with a specialized skill set.“

 

4. IT and operational technology begin to converge


Resiliency is perhaps the business term of the year, thanks to a pandemic that revealed most organizations’ weaknesses in this area. IoT-enabled devices (and other connected equipment) drive the adoption of edge solutions where infrastructure and applications are being placed within operations facilities. This approach will be “critical for real-time inference using AI models and digital twins, which can detect changes in operating conditions and automate remediation,” IDC’s research says.

IDC predicts that the number of new operational processes deployed on edge infrastructure will grow from less than 20 percent today to more than 90 percent in 2024 as IT and operational technology converge. Organizations will begin to prioritize not just extracting insight from their new sources of data, but integrating that intelligence into processes and workflows using edge capabilities.

Mobile edge computing (MEC) will be a key enabler of supply chain resilience in 2021, according to Pace Harmon’s Wright. “Through MEC, the ecosystem of supply chain enablers has the ability to deploy artificial intelligence and machine learning to access near real-time insights into consumption data and predictive analytics as well as visibility into the most granular elements of highly complex demand and supply chains,” Wright says. “For organizations to compete and prosper, IT leaders will need to deliver MEC-based solutions that enable an end-to-end view across the supply chain available 24/7 – from the point of manufacture or service  throughout its distribution.”

 

5. Edge eases connected ecosystem adoption


Edge not only enables and enhances the use of IoT, but it also makes it easier for organizations to participate in the connected ecosystem with minimized network latency and bandwidth issues, says Manali Bhaumik, lead analyst at technology research and advisory firm ISG. “Enterprises can leverage edge computing’s scalability to quickly expand to other profitable businesses without incurring huge infrastructure costs,” Bhaumik says. “Enterprises can now move into profitable and fast-streaming markets with the power of edge and easy data processing.”

 

6. COVID-19 drives innovation at the edge


“There’s nothing like a pandemic to take the hype out of technology effectiveness,” says Jason Mann, vice president of IoT at SAS. Take IoT technologies such as computer vision enabled by edge computing: “From social distancing to thermal imaging, safety device assurance and operational changes such as daily cleaning and sanitation activities, computer vision is an essential technology to accelerate solutions that turn raw IoT data (from video/cameras) into actionable insights,” Mann says. Retailers, for example, can use computer vision solutions to identify when people are violating the store’s social distance policy.

 

7. Private 5G adoption increases


“Use cases such as factory floor automation, augmented and virtual reality within field service management, and autonomous vehicles will drive the adoption of private 5G networks,” says Ranjan of Everest Group. Expect more maturity in this area in the year ahead, Ranjan says.

 

8. Edge improves data security


“Data efficiency is improved at the edge compared with the cloud, reducing internet and data costs,” says ISG’s Bhaumik. “The additional layer of security at the edge enhances the user experience.” Edge computing is also not dependent on a single point of application or storage, Bhaumik says. “Rather, it distributes processes across a vast range of devices.”

As organizations adopt DevSecOps and take a “design for security” approach, edge is becoming a major consideration for the CSO to enable secure cloud-based solutions, says Pace Harmon’s Wright. “This is particularly important where cloud architectures alone may not deliver enough resiliency or inherent security to assure the continuity of services required by autonomous solutions, by virtual or augmented reality experiences, or big data transaction processing,” Wright says. “However, IT leaders should be aware of the rate of change and relative lack of maturity of edge management and monitoring systems; consequently, an edge-based security component or solution for today will likely need to be revisited in 18 to 24 months’ time.”

Originally posted here.

Read more…

IoT Sustainability, Data At The Edge.

Recently I've written quite a bit about IoT, and one thing you may have picked up on is that the Internet of Things is made up of a lot of very large numbers.

For starters, the number of connected things is measured in the tens of billions, approaching hundreds of billions. Then, behind that very large number is an even bigger number: the amount of data these billions of devices are predicted to generate.

As FutureIoT pointed out, IDC forecasts that the amount of data generated by IoT devices by 2025 will be in excess of 79.4 zettabytes (ZB).

How Much Is A Zettabyte?!

A zettabyte is a very large number indeed, but how big? How can you get your head around it? Does this help...?

A zettabyte is 1,000,000,000,000,000,000,000 bytes. Hmm, that's still not very easy to visualise.

So let's think of it in terms of London buses. Let's imagine a byte is represented as a human on a bus; a London bus can take 80 people, so you'd need 993 quintillion buses to accommodate 79.4 zettahumans.

I tried to calculate how long 993 quintillion buses would be. Relating it to the distance to the Moon, Mars or the Sun wasn't doing it justice; the only comparable scale is the size of the Milky Way. Even with that, our 79.4 zettahumans lined up in London buses would stretch across the entire Milky Way ... and a fair bit further!
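For anyone who wants to check the arithmetic, here is the back-of-the-envelope version. The bus length (~8.4 m) and the Milky Way's diameter (~10^21 m) are round assumed figures, not part of the original sources:

    \[
      \frac{79.4 \times 10^{21}\ \text{bytes}}{80\ \text{bytes per bus}}
        \approx 9.9 \times 10^{20} \approx 993\ \text{quintillion buses}
    \]
    \[
      9.9 \times 10^{20}\ \text{buses} \times 8.4\ \text{m per bus}
        \approx 8.3 \times 10^{21}\ \text{m}
        \approx 8 \times \text{the Milky Way's diameter } (\approx 10^{21}\ \text{m})
    \]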

Sustainability Of Cloud Storage For 993 Quintillion Buses Of Data

Everything we do has an impact on the planet. Just by reading this article, you're generating 0.2 grams of Carbon Dioxide (CO2) emissions per second ... so I'll try to keep this short.

Stanford Magazine suggests that every 100 gigabytes of data stored in the Cloud could generate 0.2 tons of CO2 per year. At that rate, storing 79.4 zettabytes of data in the Cloud could be responsible for the production of 158.8 billion tons of greenhouse gases.
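The arithmetic behind that figure, taking the Stanford Magazine estimate at face value:

    \[
      \frac{79.4 \times 10^{21}\ \text{bytes}}{100 \times 10^{9}\ \text{bytes}}
        \times 0.2\ \text{tons of CO}_2\text{ per year}
        \approx 1.588 \times 10^{11}\ \text{tons} = 158.8\ \text{billion tons per year}
    \]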


 

Putting that number into context, using USA Today numbers, the total emissions for China, USA, India, Russia, Japan and Germany accounted for a little over 21 billion tons in 2019.

So if we just go ahead and let all the IoT devices stream data to the Cloud, those billions of little gadgets would indirectly generate more than seven times the air pollution of the six most industrialized countries combined.

Save The Planet, Store Data At The Edge

As mentioned in a previous article, not all data generated by IoT devices needs to be stored in the Cloud.

Speaking with data-storage specialists ObjectBox, they say their users on average cut their Cloud data storage by 60%. So how does that work, then?

First, what does The Edge mean?

The term "Edge" refers to the edge of the network, in other words the last piece of equipment or thing connected to the network closest to the point of usage.

Let me illustrate with a rather over-simplified diagram.

[Diagram: a simplified view of edge devices relative to the network and the Cloud]

 

How Can Edge Data Storage Improve Sustainability?

In an article about computer vision and AI on the edge, I talked about how vast amounts of network data could be saved if the cameras themselves could detect what an important event was and just send that event over the network, not the entire video stream.

In that example, only the key events and meta data, like the identification marks of a vehicle crossing a stop light, needed to be transmitted across the network. However, it is important to keep the raw content at the edge, so it can be used for post processing, for further learning of the AI or even to be retrieved at a later date, e.g. by law-enforcement.

Another example could be sensors used to detect gas leaks, seismic activity, fires or broken glass. These sensors are capturing volumes of data each second, but they only want to alert someone when something happens - detection of abnormal gas levels, a tremor, fire or smashed window.

Those alerts are the primary purpose of those devices, but the data in between those events can also hold significant value. In this instance, keeping it locally at the edge while having it available as and when needed is an ideal way to reduce network traffic, reduce Cloud storage and save the planet (well, at least a little bit).
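As a rough sketch of that pattern, the snippet below (generic C++, with the threshold, local capacity, and uplink call as placeholder assumptions) keeps every raw reading in a local ring buffer at the edge and only pushes an event upstream when a reading crosses the alert threshold:

    #include <cstddef>
    #include <deque>
    #include <iostream>

    constexpr double      GAS_ALERT_PPM  = 50.0;    // assumed alert threshold
    constexpr std::size_t LOCAL_CAPACITY = 86400;   // e.g. one reading per second for a day

    std::deque<double> localStore;                  // raw readings stay at the edge

    void storeLocally(double ppm) {
        if (localStore.size() == LOCAL_CAPACITY) localStore.pop_front();  // ring-buffer style
        localStore.push_back(ppm);
    }

    void sendEventToCloud(double ppm) {             // stand-in for the real uplink call
        std::cout << "ALERT: gas level " << ppm << " ppm\n";
    }

    void onReading(double ppm) {
        storeLocally(ppm);                          // every raw reading is kept locally
        if (ppm > GAS_ALERT_PPM)                    // only the event crosses the network
            sendEventToCloud(ppm);
    }

    int main() {
        for (double ppm : {12.0, 14.5, 13.9, 72.3, 15.1}) onReading(ppm);
        std::cout << localStore.size() << " raw readings retained at the edge\n";
    }

The same idea scales up: the cheaper the event is relative to the raw stream, the bigger the saving in network traffic and Cloud storage.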

Accessible Data At The Edge

Keeping your data at the edge is a great way to save costs and increase performance, but you still want to be able to get access to it, when you need it.

ObjectBox have created not just one of the most efficient ways to store data at the edge, but they've also built a sophisticated and powerful method to synchronise data between edge devices, the Cloud and other edge devices.

Synchronise Data At The Edge - Fog Computing.

Fog Computing (which is computing that happens between the Cloud and the Edge) requires data to be exchanged with devices connected to the edge, but without going all the way to/from the servers in the Cloud. 

In the article on making smarter, safer cities, I talked about how by having AI-equipped cameras share data between themselves they could become smarter, more efficient. 

A solution like that could be using ObjectBox's synchronisation capabilities to efficiently discover and collect relevant video footage from various cameras to help either identify objects or even train the artificial intelligence algorithms running on the AI-equipped cameras at the edge.

Storing Data At The Edge Can Save A Bus Load Of CO2

Edge computing has a lot of benefits to offer; in this article I've just looked at one that is often overlooked - the cost of transferring data. I've also not really delved into the broader benefits of ObjectBox's technology; for example, from their open source benchmarks, ObjectBox seems to offer a ten times performance benefit compared to other solutions out there, and is being used by more than 300,000 developers.  

The team behind ObjectBox also built technologies currently used by internet heavy-weights like Twitter, Viber and Snapchat, so they seem to be doing something right, and if they can really cut down network traffic by 60%, they could be one of the sustainable technology companies to watch.  

Originally posted here.

Read more…

Edge Impulse has joined 1% for the Planet, pledging to donate 1% of our revenue to support nonprofit organizations focused on the environment. To complement this effort we launched the ElephantEdge competition, aiming to create the world's best elephant tracking device to protect elephant populations that would otherwise be impacted by poaching. In a similar vein, this blog will detail how Lacuna Space, Edge Impulse, a microcontroller, and LoRaWAN can promote the conservation of endangered species by monitoring bird calls in remote areas.

Over the past years, The Things Network has worked on the democratization of the Internet of Things, building a global, crowdsourced LoRaWAN network carried by the thousands of users operating their own gateways worldwide. Thanks to Lacuna Space's satellite constellation, the network coverage goes one step further. Lacuna Space uses LEO (Low Earth Orbit) satellites to provide LoRaWAN coverage at any point around the globe. Messages received by satellites are then routed to ground stations and forwarded to LoRaWAN service providers such as TTN. This technology can benefit several industries and applications: tracking a vessel not only in harbors but across the oceans, or monitoring endangered species in remote areas. All that with only 25 mW of power (the ISM band limit) to send a message to the satellite. This is truly amazing!

Most of these devices are typically simple, just sending a single temperature value, or other sensor reading, to the satellite - but with machine learning we can track much more: what devices hear, see, or feel. In this blog post we'll take you through the process of deploying a bird sound classification project using an Arduino Nano 33 BLE Sense board and a Lacuna Space LS200 development kit. The inferencing results are then sent to a TTN application.

Note: Access to the Lacuna Space program and dev kit is limited to a closed group at the moment. Get in touch with Lacuna Space for hardware and software access. The technical details to configure your Arduino sketch and TTN application are available in our GitHub repository.

 

Our bird sound model classifies house sparrow and rose-ringed parakeet species with a 92% accuracy. You can clone our public project or make your own classification model following our different tutorials such as Recognize sounds from audio or Continuous Motion Recognition.


Once you have trained your model, head to the Deployment section, select the Arduino library and Build it.


Import the library within the Arduino IDE, and open the microphone continuous example sketch. We made a few modifications to this example sketch to interact with the LS200 dev kit: we added a new UART link and we transmit classification results only if the prediction score is above 0.8.
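A hedged sketch of that modification is shown below. It follows the structure of the typical Edge Impulse Arduino example; the generated inferencing library header name, the UART wiring, and the baud rate are assumptions, and the exact sketch lives in the GitHub repository mentioned above.

    // Fragment to merge into the microphone continuous example. ei_impulse_result_t and
    // EI_CLASSIFIER_LABEL_COUNT come from the Edge Impulse generated library, e.g.:
    //   #include <your_project_inferencing.h>   // header name depends on your project

    const float CONFIDENCE_THRESHOLD = 0.8f;

    // In the example's setup(), open the extra UART link to the LS200 dev kit:
    //   Serial1.begin(9600);   // baud rate assumed

    // After each classification run, forward only confident results over that link.
    void forward_result(const ei_impulse_result_t &result) {
      for (size_t ix = 0; ix < EI_CLASSIFIER_LABEL_COUNT; ix++) {
        if (result.classification[ix].value >= CONFIDENCE_THRESHOLD) {
          // Send "label:score" so the LS200 firmware can build the LoRaWAN payload.
          Serial1.print(result.classification[ix].label);
          Serial1.print(':');
          Serial1.println(result.classification[ix].value, 4);
        }
      }
    }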

Connect with the Lacuna Space dashboard by following the instructions in our application’s GitHub readme. Using a web tracker, you can determine the next good time a Lacuna Space satellite will be flying over your location; then you can receive the signal through your The Things Network application and view the inferencing results of the bird call classification:

    {
        "housesparrow": "0.91406",
        "redringedparakeet": "0.05078",
        "noise": "0.03125",
        "satellite": true
    }

No Lacuna Space development kit yet? No problem! You can already start building and verifying your ML models on the Arduino Nano 33 BLE Sense or one of our other development kits, test it out with your local LoRaWAN network (by pairing it with a LoRa radio or LoRa module) and switch over to the Lacuna satellites when you get your kit.

Originally posted on the Edge Impulse blog by Aurelien Lequertier - Lead User Success Engineer at Edge Impulse, Jenny Plunkett - User Success Engineer at Edge Impulse, & Raul James - Embedded Software Engineer at Edge Impulse

Read more…