Univariate: considering a single variable
Multivariate: considering multiple variables
For object detection projects, labeling your images with their corresponding bounding boxes and names is a tedious and time-consuming task, often requiring a human to label each image by hand. The Edge Impulse Studio has already dramatically decreased the amount of time it takes to get from raw images to a fully labeled dataset with the Data Acquisition Labeling Queue feature directly in your web browser. To make this process even faster, the Edge Impulse Studio is getting a new feature: AI-Assisted Labeling.
Automatically label common objects with YOLOv5.
To get started, create a “Classify multiple objects” images project via the Edge Impulse Studio new project wizard or open your existing object detection project. Upload your object detection images to your Edge Impulse project’s training and testing sets. Then, from the Data Acquisition tab, select “Labeling queue.”
1. Using YOLOv5
By utilizing an existing library of pre-trained object detection models from YOLOv5 (trained with the COCO dataset), common objects in your images can quickly be identified and labeled in seconds without needing to write any code!
To label your objects with YOLOv5 classification, click the Label suggestions dropdown and select “Classify using YOLOv5.” If your object is more specific than what is auto-labeled by YOLOv5, e.g. “coffee” instead of the generic “cup” class, you can modify the auto-labels to the left of your image. These modifications will automatically apply to future images in your labeling queue.
Click Save labels to move on to your next raw image, and see your fully labeled dataset ready for training in minutes!
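The "modify once, apply to future images" behavior described above can be sketched as a thin mapping layer over a detector's raw class names. This is an illustrative sketch only; the class and method names are hypothetical and are not Edge Impulse Studio internals.

```python
# Sketch of the label-override behavior: once a user renames an
# auto-suggested class (e.g. "cup" -> "coffee"), the override is
# remembered and applied to every later suggestion in the queue.

class LabelSuggester:
    def __init__(self):
        self.overrides = {}  # detector class name -> user's preferred label

    def rename(self, detected, preferred):
        """Record a user correction so future suggestions use it."""
        self.overrides[detected] = preferred

    def suggest(self, detections):
        """Map raw detector class names to final label suggestions."""
        return [self.overrides.get(name, name) for name in detections]

queue = LabelSuggester()
print(queue.suggest(["cup", "person"]))  # raw YOLOv5 classes pass through
queue.rename("cup", "coffee")            # user edits the auto-label once
print(queue.suggest(["cup", "cup"]))     # the override now applies to all
```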
2. Using your own model
You can also use your own trained model to predict and label your new images. From an existing (trained) Edge Impulse object detection project, upload new unlabeled images from the Data Acquisition tab. Then, from the “Labeling queue”, click the Label suggestions dropdown and select “Classify using <your project name>”:
You can also upload a few samples to a new object detection project, train a model, then upload more samples to the Data Acquisition tab and use the AI-Assisted Labeling feature for the rest of your dataset. Classifying with your own trained model is especially useful for objects that are not in YOLOv5, such as industrial objects.

Click Save labels to move on to your next raw image, and see your fully labeled dataset ready for training in minutes using your own pre-trained model!
3. Using object tracking
If you have objects that are of a similar size or appear across images, you can also track them between frames within the Edge Impulse Labeling Queue, reducing the time needed to re-label and re-draw bounding boxes over your entire dataset.
Draw your bounding boxes and label your images, then, after clicking Save labels, the objects will be tracked from frame to frame:
Track and auto-label your objects between frames.
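The frame-to-frame tracking idea can be illustrated with a minimal intersection-over-union (IoU) matcher: each box in a new frame inherits the label of the best-overlapping box from the previous frame. This is a sketch of the general technique under simplified assumptions, not Edge Impulse's actual tracker.

```python
# Propagate labels between frames by greedy IoU matching.

def iou(a, b):
    """IoU of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def propagate_labels(prev, new_boxes, threshold=0.5):
    """prev: list of (box, label) from the last frame.
    Returns (box, label-or-None) for each box in the new frame."""
    out = []
    for nb in new_boxes:
        best = max(prev, key=lambda p: iou(p[0], nb), default=None)
        if best and iou(best[0], nb) >= threshold:
            out.append((nb, best[1]))       # carried over from last frame
        else:
            out.append((nb, None))          # new object: label it manually
    return out
```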
Now that your object detection project contains a fully labeled dataset, learn how to train and deploy your model to your edge device: check out our tutorial!
Originally posted on the Edge Impulse blog by Jenny Plunkett - Senior Developer Relations Engineer.
Connected devices are emerging as a modern way for grocers to decrease losses from food spoilage and energy waste. Given the bottom-line advantages, it is not surprising that some of the biggest retail giants are putting internet of things (IoT) techniques to work to improve their operating results.
According to Talk Business, Walmart uses IoT for tasks such as tracking food temperature and equipment energy output. IoT applications monitor the refrigeration units that hold products such as milk and ice cream, and report back to a support team when sensors indicate equipment trouble, so problems can be fixed before serious malfunctions occur and with minimal downtime.
IoT solutions are used broadly across Walmart's massive store footprint; the connected devices send a total of 1.5 billion messages each day. Throughout the grocery business, IoT is leveraged to enhance food safety and reduce excessive energy consumption: IoT solutions allow food retailers to cut food spoilage by 40% and achieve net energy savings of 30%.
A 2018 forecast estimated that grocers lose around $70 million per year to food spoilage, with large chains losing hundreds of millions. Hence, most grocers have started implementing sustainability-focused IoT technology to avoid waste and increase their profits.
Explore How IoT Helps to Offer Safer Shopping Experience to Shoppers
As people stocked up on food, the pandemic raised numerous challenges for retailers. Emerging retail technology trends offer an answer to those challenges, from moving product to managing stock and much more.
IoT also helps ensure safe, healthy deliveries to customers' doorsteps whenever they need them. A Harvard study found that grocery shopping was a higher-risk activity than traveling on an airplane during the COVID-19 pandemic. With the pandemic raging, retail stores need to provide an efficient shopping experience and must look for ways to reduce exposure and the risk of infection as customers venture into the store for food.
Most grocers are turning to modern technology such as IoT to offer safe service and protect the bottom line. By placing internet of things devices throughout the store, such as smart grocery carts and baskets, grocers can make shopping experiences safer and more efficient. Let's look at how IoT is helping retail businesses overcome today's challenges and stop food spoilage.
Smart Stock Monitoring
Retailers keep warehouses full of goods to ensure they don't run out when demand is high. By integrating IoT-enabled sensors, retailers can detect the weight of goods on shelves at warehouses and stores. This also helps them identify popular items; keeping track of items helps retailers restock them in time and avoid overstocking a particular product.
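The shelf-monitoring logic described above can be sketched in a few lines: a load-cell weight reading divided by a per-item weight estimates the stock level, and a reorder is flagged below a threshold. The function names, item weights, and threshold are hypothetical.

```python
# Estimate stock from a shelf load-cell reading and flag restocks.

def stock_level(shelf_weight_g, item_weight_g):
    """Estimate the item count from a shelf weight reading."""
    return int(shelf_weight_g // item_weight_g)

def needs_restock(shelf_weight_g, item_weight_g, min_items=5):
    """True when the estimated count drops below the reorder point."""
    return stock_level(shelf_weight_g, item_weight_g) < min_items

# e.g. 500 g items, 4.5 kg on the shelf -> 9 items left, no restock yet;
# 1.8 kg on the shelf -> 3 items left, restock needed.
```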
Guaranteeing Timely Deliveries
One report shows that 66% of customers anticipated increasing their online shopping in 2020. Online shopping is undoubtedly the new norm; most people prefer to order daily essentials through a grocery delivery mobile app. That makes timely delivery vital for brands: it is a critical factor in customer satisfaction, especially in the absence of traditional consumer engagement like a friendly salesperson.
And by integrating IoT-enabled devices into containers and shipments, retailers can quickly obtain insight into shipments. They can even track real-time updates to keep their customers up to date on the approximate delivery time. It's critical, especially when you want to achieve excellent customer experiences in the eCommerce market.
Moreover, data collected from IoT-enabled devices can help drive the supply chain effectively by empowering retailers with route optimization for fast delivery. IoT can play a crucial role in recognizing warehouse delays, and it helps optimize delivery operations for better, quicker service.
Manage Store Capacity
With new COVID-19 safety guidelines to follow, IoT helps retailers keep their customers safe by supporting social distancing rules. For example, retailers can place IoT sensors at entrances and exits to monitor traffic and grocery carts. The sensors provide accurate, up-to-date details that enable retailers to manage capacity efficiently, ensure safety, and eliminate the need for store "bouncers" at the doors.
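The entrance/exit capacity logic can be sketched as a simple occupancy counter fed by "in"/"out" sensor events, admitting shoppers only while the count stays under a limit. This is purely illustrative; the class name and event format are made up.

```python
# Track store occupancy from entrance/exit sensor events.

class OccupancyMonitor:
    def __init__(self, capacity):
        self.capacity = capacity
        self.count = 0

    def on_event(self, event):
        """Handle an 'in' or 'out' event from a door sensor."""
        if event == "in":
            self.count += 1
        elif event == "out":
            self.count = max(0, self.count - 1)  # never go negative

    def may_enter(self):
        """True while the store is under its capacity limit."""
        return self.count < self.capacity
```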
Retailers can offer a safe and distinctive shopping experience by benefiting from IoT, which helps with contact monitoring and social distancing as well. Retailers can provide shoppers with IoT-enabled wearables paired with the shopper's mobile phone through their branded app. These help shoppers detect when they are too close to another shopper, alert them through their phone, and record the incident.
Combating COVID-19: How Is IoT Helping Retailers?
Preventing food spoilage, saving energy, and reducing waste are good practices that help grocery stores increase their profit margins. The coronavirus pandemic has drastically accelerated the push for digital resilience.
Many brands and retailers, however, paused such initiatives during COVID-19. To ensure their survival and protect their profit margins, they need new strategies and techniques. Here are a few IoT use cases that retailers are considering these days:
Although supermarkets have used video surveillance technologies for many years, some brands are repurposing these systems to enhance their inventory management practices. Cameras can monitor consumer behavior and help retailers prevent theft.
As customers' purchase preferences change constantly, it has become essential for grocery stores to stock more perishable goods. A 2018 survey showed that more than 60% of retailers had added refrigeration at their stores to keep products fresh and meet customers' growing demand.
And by monitoring customers' purchasing patterns, grocery stores can gauge how much extra produce they need to acquire and how unexpected surges and falls in the market will affect their margins.
Autonomous Cleaning Robots
To promote social distancing, grocery stores are taking all essential precautions. They have implemented rigorous cleaning schedules to reduce the risk of COVID-19 spread, focusing on sanitizing and disinfecting frequently touched surfaces with autonomous cleaning robots.
Robots controlled through IoT-based devices can help sanitize surfaces such as doors, shopping carts, and countertops. These tasks would otherwise demand considerable time and attention from employees, so delegating them to autonomous cleaning robots saves staff time and energy.
Contactless checkout has become increasingly popular over the past few years, helping supermarkets reduce their reliance on cashiers and speed up shopping. During the COVID-19 pandemic, self-service environments have given customers a way to acquire food, cleaning products, and other day-to-day essentials without having to interact directly with supermarket staff.
What's Next for Smart Supermarkets?
IoT technologies help supermarkets tackle new challenges efficiently. Most stores receive products that have passed through distribution centers from many different locations, which makes traceability hard to maintain. With IoT integration, it has become easier to track every business activity and provide an "internet of shopping" experience to customers.
Modern IoT technology makes it possible for grocery stores to track activity at every stage. This is crucial for the health and safety of customers, and it helps stores handle their top purchase priorities and maintain healthy purchasing trends. IoT initiatives provide the "shopping 2.0" infrastructure, such as smart shelves, smart carts, cashless checkout, and other options, that is changing the basic service experience.
The head is surely the most complex group of organs in the human body, but also the most delicate. The assessment and prevention of risks in the workplace remains the first priority approach to avoid accidents or reduce the number of serious injuries to the head. This is why wearing a hard hat in an industrial working environment is often required by law and helps to avoid serious accidents.
This article will give you an overview of how to detect that the wearing of a helmet is well respected by all workers using a machine learning object detection model.
For this project, we have been using:
- Edge Impulse Studio to acquire custom data, visualize the data, train the machine learning model, and validate the inference results.
- Part of this public dataset from Roboflow, where the images containing the smallest bounding boxes have been removed.
- Part of the Flickr-Faces-HQ (FFHQ) dataset (under Creative Commons BY 2.0 license) to rebalance the classes in our dataset.
- Google Colab to convert the YOLOv5 PyTorch format from the public dataset to the Edge Impulse ingestion format.
- A Raspberry Pi, an NVIDIA Jetson Nano, or any Intel-based MacBook to deploy the inference model.
Before we get started, here are some insights into the benefits and drawbacks of using a public dataset versus collecting your own.
Using a public dataset is a quick way to start developing your application, validate your idea, and check first results. But the results are often disappointing when you test on your own data and in real conditions. For very specific applications, you might spend more time trying to tweak an open dataset than you would collecting your own. Also, always make sure the license suits your needs when using a dataset you found online.
On the other hand, collecting your own dataset can take a lot of time; it is a repetitive and often tedious task. But it lets you collect data that is as close as possible to your real-life application, with the same lighting conditions, the same camera, and the same angles, for example. As a result, your accuracy in real conditions will be much higher.
Using only custom data can indeed work well in your environment, but it might not give the same accuracy in another environment; generalization is harder.
The dataset which has been used for this project is a mix of open data, supplemented by custom data.
First iteration, using only the public datasets
At first, we tried to train our model using only a small portion of this public dataset: 176 items in the training set and 57 items in the test set, keeping only images containing bounding boxes bigger than 130 pixels. We will see why later.
If you go through the public dataset, you can see that it is strongly lacking in "head" samples. The dataset is therefore imbalanced.
Several techniques exist to rebalance a dataset. Here, we will add new images from Flickr-Faces-HQ (FFHQ). These images do not have bounding boxes, but drawing them is easy in the Edge Impulse Studio. You can import the images directly using the uploader portal. Once your data has been uploaded, just draw boxes around the heads and give them a label, as below:
Now that the dataset is more balanced, with both images and bounding boxes of hard hats and heads, we can create an impulse, which is a mix of digital signal processing (DSP) blocks and training blocks:
In this particular object detection use case, the DSP block will resize an image to fit the 320x320 pixels needed for the training block and extract meaningful features for the Neural Network. Although the extracted features don’t show a clear separation between the classes, we can start distinguishing some clusters:
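When the DSP block resizes an image to the 320x320 input the training block expects, the bounding-box coordinates must be scaled by the same factors. The sketch below shows only the coordinate math for a plain squash resize, as an illustration of the idea; it is not the Studio's implementation, which also offers fit and crop modes.

```python
# Scale a bounding box from the source resolution to a square model input.

def scale_box(box, src_w, src_h, dst=320):
    """Scale (x, y, w, h) pixel coords from src_w x src_h to dst x dst."""
    sx, sy = dst / src_w, dst / src_h
    x, y, w, h = box
    return (x * sx, y * sy, w * sx, h * sy)

# e.g. a box on a 640x480 photo maps onto the 320x320 model input:
print(scale_box((160, 120, 320, 240), 640, 480))
```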
To train the model, we selected the Object Detection training block, which fine tunes a pre-trained object detection model on your data. It gives a good performance even with relatively small image datasets. This object detection learning block relies on MobileNetV2 SSD FPN-Lite 320x320.
According to Daniel Situnayake, co-author of the TinyML book and founding TinyML engineer at Edge Impulse, this model “works much better for larger objects—if the object takes up more space in the frame it’s more likely to be correctly classified.” This is one of the reasons why we removed the images containing the smallest bounding boxes in our import script.
After training the model, we obtained 61.6% accuracy on the training set and 57% accuracy on the testing set. You might also note a large accuracy difference between the quantized version and the float32 version. However, during the Linux deployment, the default model uses the unoptimized version, so we will focus on the float32 version only in this article.
This accuracy is not satisfying, and it tends to have trouble detecting the right objects in real conditions:
Second iteration, adding custom data
On the second iteration of this project, we went through the process of collecting some of our own data. A very handy way to collect custom data is with a mobile phone. You can also perform this step with the same camera you will use in your factory or on your construction site; the data will be even closer to real conditions and therefore work better for your use case. In our case, we used a white hard hat when collecting data. If your company uses yellow ones, for example, consider collecting your data with the same hard hats.
Once the data has been acquired, go through the labeling process again and retrain your model.
We obtained a model that is only slightly more accurate on the training performance figures. In real conditions, however, it works far better than the previous one.
Finally, to deploy your model on your Raspberry Pi, NVIDIA Jetson Nano, or Intel-based MacBook, just follow the instructions provided in the links. The command-line interface `edge-impulse-linux-runner` will create a lightweight web interface where you can see the results.
Note that the inference runs locally and you do not need any internet connection to detect your objects. Last but not least, the trained models and the inference SDK are open source. You can use them, modify them, and integrate them into a broader application matching your specific needs, such as stopping a machine when a head is detected for more than 10 seconds.
This project has been publicly released, so feel free to have a look at it in the Edge Impulse Studio, clone the project, and go through every step to get a better understanding: https://studio.edgeimpulse.com/public/34898/latest
The essence of this use case is that Edge Impulse lets you develop industry-grade solutions in the health and safety context with very little effort. These can then be embedded in bigger industrial control and automation systems, with a consistent and stringent focus on machine operations linked to H&S-compliant measures. Pre-trained models, which can later be easily retrained in the final industrial context as a "calibration" step, make this a customizable solution for your next project.
Originally posted on the Edge Impulse blog by Louis Moreau - User Success Engineer at Edge Impulse & Mihajlo Raljic - Sales EMEA at Edge Impulse
By Ricardo Buranello
What Is the Concept of a Virtual Factory?
For a decade, the first Friday in October has been designated as National Manufacturing Day. This day begins a month-long events schedule at manufacturing companies nationwide to attract talent to modern manufacturing careers.
For some period, manufacturing went out of fashion. Young tech talents preferred software and financial services career opportunities. This preference has changed in recent years. The advent of digital technologies and robotization brought some glamour back.
The connected factory is democratizing another innovation — the virtual factory. Without critical asset connection at the IoT edge, the virtual factory couldn’t have been realized by anything other than brand-new factories and technology implementations.
There are technologies that enable decades-old assets to communicate. Such technologies allow us to join machine data with physical environment and operational conditions data. Benefits of virtual factory technologies like digital twin are within reach for greenfield and legacy implementations.
Digital twin technologies can be used for predictive maintenance and scenario planning analysis. At its core, the digital twin is about access to real-time operational data to predict and manage the asset’s life cycle. It leverages relevant life cycle management information inside and outside the factory. The possibilities of bringing various data types together for advanced analysis are promising.
I used to see a distinction between IoT-enabled greenfield technology in new factories and legacy technology in older ones. Data flowed seamlessly from IoT-enabled machines to enterprise systems or the cloud for advanced analytics in new factories’ connected assets. In older factories, while data wanted to move to the enterprise systems or the cloud, it hit countless walls. Innovative factories were creating IoT technologies in proof of concepts (POCs) on legacy equipment, but this wasn’t the norm.
No matter the age of the factory or equipment, the expectations look alike: when manufacturing companies invest in machines, they expect the asset to be used for a decade or more. We had to invent something inclusive of both new and legacy machines and systems.
We had to create something to allow decades-old equipment of diverse brands and types (PLCs, CNCs, robots, etc.) to communicate with one another. We had to think about how to make legacy machines talk to legacy systems. Connecting was not enough: we had to make it accessible to experienced developers and technicians not specialized in systems integration.
If plant managers and leaders have clear and consumable data, they can use it for analysis and measurement. Surfacing and routing data has enabled innovative use cases in processes controlled by aged equipment. Prescriptive and predictive maintenance reduce downtime and allow access to data. This access enables remote operation and improved safety on the plant floor. Each line flows better, improving supply chain orchestration and worker productivity.
Open protocols aren’t optimized for connecting to each machine. You need tools and optimized drivers to connect to the machines, cut latency time and get the data to where it needs to be in the appropriate format to save costs. These tools include:
- Machine data collection
- Data transformation and visualization
- Device management
- Edge logic
- Embedded security
- Enterprise integration
Plants are trying to get and use data to improve overall equipment effectiveness. OEE applications can calculate how many good and bad parts were produced compared to the machine’s capacity. This analysis can go much deeper. Factories can visualize how the machine works down to sub-processes. They can synchronize each movement to the millisecond and change timing to increase operational efficiency.
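The OEE arithmetic described above is the standard availability x performance x quality product. The sketch below shows that calculation with made-up figures; it is an illustration, not any vendor's application.

```python
# Overall equipment effectiveness (OEE) = availability x performance x quality.

def oee(runtime_min, planned_min, parts_made, ideal_rate_per_min, good_parts):
    availability = runtime_min / planned_min                      # uptime share
    performance = parts_made / (runtime_min * ideal_rate_per_min)  # speed share
    quality = good_parts / parts_made                             # good-part share
    return availability * performance * quality

# Example: 400 of 480 planned minutes running, 600 parts produced at an
# ideal rate of 2 parts/min, 570 good parts.
score = oee(400, 480, 600, 2.0, 570)
print(f"OEE: {score:.1%}")  # prints "OEE: 59.4%"
```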
The technology is here. It is mature. It's no longer a question of whether you want to use it; you need it to get to what's next. I think this makes it a fascinating time for smart manufacturing.
Originally posted here.
By Tony Pisani
For midstream oil and gas operators, data flow can be as important as product flow. The operator’s job is to safely move oil and natural gas from its extraction point (upstream), to where it’s converted to fuels (midstream), to customer delivery locations (downstream). During this process, pump stations, meter stations, storage sites, interconnection points, and block valves generate a substantial volume and variety of data that can lead to increased efficiency and safety.
“Just one pipeline pump station might have 6 Programmable Logic Controllers (PLCs), 12 flow computers, and 30 field instruments, and each one is a source of valuable operational information,” said Mike Walden, IT and SCADA Director for New Frontier Technologies, a Cisco IoT Design-In Partner that implements OT and IT systems for industrial applications. Until recently, data collection from pipelines was so expensive that most operators only collected the bare minimum data required to comply with industry regulations. That data included pump discharge pressure, for instance, but not pump bearing temperature, which helps predict future equipment failures.
A turnkey solution to modernize midstream operations
Now midstream operators are modernizing their pipelines with Industrial Internet of Things (IIoT) solutions. Cisco and New Frontier Technologies have teamed up to offer a solution combining the Cisco 1100 Series Industrial Integrated Services Router, Cisco Edge Intelligence, and New Frontier's know-how. Deployed at edge locations like pump stations, the solution extracts the data that pipeline equipment sends via legacy protocols and transforms it at the edge into a format that analytics and other enterprise applications understand. The transformation also minimizes bandwidth usage.
Mike Walden views the Cisco IR1101 as a game-changer for midstream operators. He shared with me that “Before the Cisco IR1101, our customers needed four separate devices to transmit edge data to a cloud server—a router at the pump station, an edge device to do protocol conversion from the old to the new, a network switch, and maybe a firewall to encrypt messages…With the Cisco IR1101, we can meet all of those requirements with one physical device.”
Collect more data, at almost no extra cost
Using this IIoT solution, midstream operators can for the first time:
- Collect all available field data instead of just the data on a polling list. If the maintenance team requests a new type of data, the operations team can meet the request using the built-in protocol translators in Edge Intelligence. “Collecting a new type of data takes almost no extra work,” Mike said. “It makes the operations team look like heroes.”
- Collect data more frequently, helping to spot anomalies. Recording pump discharge pressure more frequently, for example, makes it easier to detect leaks. Interest in predicting (rather than responding to) equipment failure is also growing. The life of pump seals, for example, depends on both the pressure that seals experience over their lifetime and the peak pressures. “If you only collect pump pressure every 30 minutes, you probably missed the spike,” Mike explained. “If you do see the spike and replace the seal before it fails, you can prevent a very costly unexpected outage – saving far more than the cost of a new seal.”
- Protect sensitive data with end-to-end security. Security is built into the IR1101, with secure boot, VPN, certificate-based authentication, and TLS encryption.
- Give IT and OT their own interfaces so they don’t have to rely on the other team. The IT team has an interface to set up network templates to make sure device configuration is secure and consistent. Field engineers have their own interface to extract, transform, and deliver industrial data from Modbus, OPC-UA, EIP/CIP, or MQTT devices.
As Mike summed it up, “It’s finally simple to deploy a secure industrial network that makes all field data available to enterprise applications—in less time and using less bandwidth.”
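The point about sampling frequency in the pump-seal example can be made concrete with a simple threshold scan over pressure readings: a brief spike shows up in one-minute samples but disappears entirely if you only look at every 30th reading. The values and limit here are made up for illustration.

```python
# Spot over-limit pressure spikes in a series of readings.

def find_spikes(samples, limit):
    """Return the indices of readings that exceed the safe limit."""
    return [i for i, p in enumerate(samples) if p > limit]

# One-minute pressure samples (psi) with a brief spike at minute 17:
pressures = [300] * 60
pressures[17] = 525

print(find_spikes(pressures, 500))        # [17] -> spike caught
print(find_spikes(pressures[::30], 500))  # []   -> 30-minute polling misses it
```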
Originally posted here.
TinyML focuses on optimizing machine learning (ML) workloads so that they can be processed on microcontrollers no bigger than a grain of rice and consuming only milliwatts of power.
Once again, I’m jumping up and down in excitement because I’m going to be hosting a panel discussion as part of a webinar series — Fast and Fearless: The Future of IoT Software Development — being held under the august auspices of IotCentral.io
At this event, the second of a four-part series, we will be focusing on “AI and IoT Innovation” (see also What the FAQ are AI, ANNs, ML, DL, and DNNs? and What the FAQ are the IoT, IIoT, IoHT, and AIoT?).
Panel members Karl Fezer (upper left), Wei Xiao (upper right), Nikhil Bhaskaran (lower left), and Tina Shyuan (bottom right) (Click image to see a larger version)
As we all know, the IoT is transforming the software landscape. What used to be a relatively straightforward embedded software stack has been revolutionized by the IoT, with developers now having to juggle specialized workloads, security, artificial intelligence (AI) and machine learning (ML), real-time connectivity, managing devices that have been deployed into the field… the list goes on.
In this webinar — which will be held on Tuesday 29 June 2021 from 10:00 a.m. to 11:00 a.m. CDT — I will be joined by four industry luminaries to discuss how to juggle the additional complexities that machine learning adds to IoT development, why on-device machine learning is more important now than ever, and what the combination of AI and IoT looks like for developers in the future.
The luminaries in question (and whom I will be questioning) are Karl Fezer (AI Ecosystem Evangelist at Arm), Wei Xiao (Principal Engineer, Sr. Strategic Alliances Manager at Nvidia), Nikhil Bhaskaran (Founder of Shunya OS), and Tina Shyuan (Director of Product Marketing at Qeexo).
So, what say you? Dare I hope that we will have the pleasure of your company and that you will be able to join us to (a) tease your auditory input systems with our discussions and (b) join our question-and-answer free-for-all frenzy at the end? If so, may I suggest that you Register Now before all of the good virtual seats are taken, metaphorically speaking, of course.
By Sachin Kotasthane
In his book, 21 Lessons for the 21st Century, the historian Yuval Noah Harari highlights the complex challenges mankind will face on account of technological challenges intertwined with issues such as nationalism, religion, culture, and calamities. In the current industrial world hit by a worldwide pandemic, we see this complexity translate in technology, systems, organizations, and at the workplace.
While in my previous article, Humane IIoT, I discussed the people-centric strategies that enterprises need to adopt when onboarding the workforce to industrial IoT initiatives, in this article I will share thoughts on how new-age technologies such as AI, ML, big data, and of course industrial IoT can be used to manage complex workforce problems in a factory, changing the way people work and interact, especially in this COVID-stricken world.
Workforce related problems in production can be categorized into:
- Time complexity
- Effort complexity
- Behavioral complexity
Problems in any of these categories have a significant impact on the workforce, with a detrimental effect on the outcome, whether for the product or the organization. These problems are complex because they cannot be solved with engineering or technology fixes alone: there is no single root cause, but rather a combination of factors and scenarios. Let us, therefore, explore a few and seek probable solutions.
Figure 1: Workforce Challenges and Proposed Strategies in Production
Addressing Time Complexity
Any workforce-related issue that has a detrimental effect on the operational time, due to contributing factors from different factory systems and processes, can be classified as a time complex problem.
Though classical paper-based schedules, lists, and punch sheets have largely been replaced with IT-systems such as MES, APS, and SRM, the increasing demands for flexibility in manufacturing operations and trends such as batch-size-one, warrant the need for new methodologies to solve these complex problems.
- Worker attendance
Anyone who has experienced, at close quarters, a typical day in the life of a factory supervisor, will be conversant with the anxiety that comes just before the start of a production shift. Not knowing who will report absent, until just before the shift starts, is one complex issue every line manager would want to get addressed. While planned absenteeism can be handled to some degree, it is the last-minute sick or emergency-pager text messages, or the transport delays, that make the planning of daily production complex.
What if there were a solution that gave a near-accurate count of the confirmed hands for a shift at least half an hour to an hour in advance? It turns out that organizations are experimenting with a combination of GPS, RFID, and employee tracking that interacts with resource-planning systems to automate the shift planning activity.
While some legal and privacy issues still need to be addressed, it would not be long before we see people being assigned to workplaces, even before they enter the factory floor.
While making sure every line manager has accurate information about the confirmed hands for the shift, it is equally important to monitor the health and well-being of employees during this pandemic. Technologies such as radar and millimeter-wave sensors enable live tracking of workers around the shop floor and help ensure that social distancing norms are well observed.
- Resource mapping
While resource skill-mapping and certification are mostly HR function prerogatives, not having the right resource at the workstation during exigencies such as absenteeism or extra workload is a complex problem. Precious time is lost in locating such resources, or worse still, millions are spent in overtime.
What if there were a tool that analyzed the current workload for a resource with the identified skillset code(s) and gave an accurate estimate of the resource’s availability? This could further be used by shop managers to plan manpower for a shift, keeping them as lean as possible.
Today, IT teams of OEMs are seen working with software vendors to build such analytical tools that consume data from disparate systems—such as production work orders from MES and swiping details from time systems—to create real-time job profiles. These results are fed to the HR systems to give managers the insights needed to make resource decisions within minutes.
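A minimal sketch of such an analytical tool might join MES work orders with time-system swipe records to estimate spare capacity per skill code. The record shapes, field names, and shift length below are assumptions for illustration, not the schema of any real MES or HR system.

```python
# Hypothetical records: MES work orders and time-system swipe-ins.
work_orders = [
    {"worker": "W01", "skill": "WELD-2", "hours_assigned": 7.5},
    {"worker": "W02", "skill": "WELD-2", "hours_assigned": 2.0},
    {"worker": "W03", "skill": "PAINT-1", "hours_assigned": 8.0},
]
swipes = {"W01", "W02"}  # workers who have swiped in for this shift

SHIFT_HOURS = 8.0  # assumed shift length

def available_hours(skill):
    """Estimate spare hours for a skill code among workers present on site."""
    spare = 0.0
    for order in work_orders:
        if order["skill"] == skill and order["worker"] in swipes:
            spare += max(0.0, SHIFT_HOURS - order["hours_assigned"])
    return spare

print(available_hours("WELD-2"))  # 0.5 h (W01) + 6.0 h (W02) = 6.5
```

A shop manager could query such a function per skill code to plan a lean shift; in practice the inputs would stream from live MES and time systems rather than static lists.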
Addressing Effort Complexity
Just as time complexities result in increased production time, problems in this category increase the effort the workforce must expend to complete the same quantity of work. As effort is proportional to fatigue and the long-term well-being of the workforce, solutions that reduce effort are always welcome. Complexity arises when organizations try to create method out of madness from a variety of factors such as changing workforce profiles, production sequences, logistical and process constraints, and demand fluctuations.
Thankfully, solutions for this category of problems can be found in new technologies that augment existing systems with insights and predictions, reducing effort and channeling it more productively. Given the demand fluctuations of the current pandemic, real-time operational visibility coupled with advanced analytics will help ensure that shift production targets are met.
- Intelligent exoskeletons
Exoskeletons, as we know, are powered bodysuits designed to safeguard and support the user in performing tasks, while increasing overall human efficiency to do the respective tasks. These are deployed in strain-inducing postures or to lift objects that would otherwise be tiring after a few repetitions. Exoskeletons are the new-age answer to reducing user fatigue in areas requiring human skill and dexterity, which otherwise would require a complex robot and cost a bomb.
However, the complexity that mars exoskeleton users is making the same suit adaptable for a variety of postures, user body types, and jobs at the same workstation. It would help if the exoskeleton could sense the user, set the posture, and adapt itself to the next operation automatically.
Taking a leaf out of Marvel’s Iron Man, whose suit, controlled by JARVIS, complements his posture, manufacturers can now hope to create intelligent exoskeletons that are always connected to factory systems and user profiles. These suits will adapt and respond to assistive needs without any intervention, freeing the user to focus completely on the main job at hand.
Given the ongoing COVID situation, workers and management would be safer if these suits were also equipped with sensors and technologies such as radar or millimeter wave to help observe social distancing, measure body temperature, and so on.
- Highlighting likely deviations
The world over, quality teams on factory floors work with checklists that a quality inspector verifies for every product that arrives at the inspection station. While such repetitive tasks are best suited for robots, when humans execute them, especially tasks that involve visual, audio, touch, and olfactory senses, mistakes and misses are bound to occur. This results in costly reworks and recalls.
Manufacturers have tried to address this complexity by carrying out rotation of manpower. But this, too, has met with limited success, given the available manpower and ever-increasing workloads.
Fortunately, predictive quality integrated with feed-forwards techniques and some smart tracking with visuals can be used to highlight the area or zone on the product that is prone to quality slips based on data captured from previous operations. The inspector can then be guided to pay more attention to these areas in the checklist.
Addressing Behavioral Complexity
Problems of this category usually manifest as a quality issue, but the root cause can often be traced to the workforce behavior or profile. Traditionally, organizations have addressed such problems through experienced supervisors, who as people managers were expected to read these signs, anticipate and align the manpower.
However, with constantly changing manpower and product variants, these are now complex new-age problems requiring new-age solutions.
- Heat-mapping workload
Time and motion studies at the workplace map the user movements around the machine with the time each activity takes for completion, matching the available cycle-time, either by work distribution or by increasing the manpower at that station. Time-consuming and cumbersome as it is, the complexity increases when workload balancing is to be done for teams working on a single product at the workstation. Movements of multiple resources during different sequences are difficult to track, and the different users cannot be expected to follow the same footsteps every time.
Solving this issue needs a solution that will monitor human motion unobtrusively, link those to the product work content at the workstation, generate recommendations to balance the workload and even out the ‘congestion.’ New industrial applications such as short-range radar and visual feeds can be used to create heat maps of the workforce as they work on the product. This can be superimposed on the digital twin of the process to identify the zone where there is ‘congestion.’ This can be fed to the line-planning function to implement corrective measures such as work distribution or partial outsourcing of the operation.
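The congestion-detection step described above can be sketched as a simple binning of position samples into zones. The coordinates, zone size, and congestion threshold below are invented for illustration; real deployments would feed radar or camera tracks into a digital twin rather than a flat grid.

```python
from collections import Counter

# Hypothetical (x, y) position samples from radar/visual feeds, in metres.
positions = [(1.2, 0.4), (1.5, 0.7), (1.1, 0.9), (4.8, 2.2), (1.4, 0.2)]

ZONE_SIZE = 2.0       # each zone is a 2 m x 2 m cell (assumption)
CONGESTION_LIMIT = 3  # samples at or above this flag the zone (assumption)

def heat_map(samples):
    """Bin position samples into grid zones, returning a zone -> count map."""
    return Counter((int(x // ZONE_SIZE), int(y // ZONE_SIZE)) for x, y in samples)

def congested_zones(samples):
    """List zones whose sample count meets the congestion threshold."""
    return [zone for zone, n in heat_map(samples).items() if n >= CONGESTION_LIMIT]

print(congested_zones(positions))  # zone (0, 0) holds 4 of the 5 samples
```

The flagged zones would then be handed to the line-planning function to trigger work redistribution, mirroring the corrective loop described in the text.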
- Aging workforce (loss of tribal knowledge)
With new technology coming to the shop-floor, skills of the current workforce get outdated quickly. Also, with any new hire comes the critical task of training and knowledge sharing from experienced hands. As organizations already face a shortage of manpower, releasing more hands to impart training to a larger workforce audience, possibly at different locations, becomes an even more daunting task.
Fully realizing the difficulties and reluctance to document, organizations are increasingly adopting AR-based workforce trainings that map to relevant learning and memory needs. These AR solutions capture the minutest of the actions executed by the expert on the shop-floor and can be played back by the novice in-situ as a step-by-step guide. Such tools simplify the knowledge transfer process and also increase worker productivity while reducing costs.
Further, in extraordinary situations such as the one we face at present, technologies such as AR offer effective, personalized support to field personnel without the need to fly specialists to multiple sites. This keeps them safe while still remaining accessible.
Key takeaways and Actionable Insights
The shape of the future workforce will be the result of complex, changing, and competing forces. Technology, globalization, demographics, social values, and the changing personal expectations of the workforce will continue to transform and disrupt the way businesses operate, increasing complexity and radically changing the where, when, and how of future work. While the need to constantly reskill and upskill the workforce will be humongous, new-age techniques and technologies that enhance the effectiveness and efficiency of the existing workforce will take the spotlight.
Figure 2: The Future IIoT Workforce
Organizations will increasingly be required to:
- Deploy data farming to dive deep and extract vast amounts of information and process insights embedded in production systems. Tapping into large reservoirs of ‘tribal knowledge’ and digitizing it for ingestion to data lakes is another task that organizations will have to consider.
- Augment existing operations systems such as SCADA, DCS, MES, and CMMS with new digital technology platforms (AI, AR/VR, big data, and machine learning) to underpin and grow the world of work. While there will be no dearth of resources in one or more of the new technologies, organizations will need to ‘acqui-hire’ talent and intellectual property, using specialists to integrate with existing systems and gain meaningful actionable insights.
- Address privacy and data security concerns of the workforce, through the smart use of technologies such as radar and video feeds.
Nonetheless, digital enablement will need to be optimally used to tackle the new normal that the COVID pandemic has set forth in manufacturing—fluctuating demands, modular and flexible assembly lines, reduced workforce, etc.
Originally posted here.
Flowchart of IoT in Mining
by Vaishali Ramesh
Introduction – Internet of Things in Mining
The Internet of things (IoT) is the extension of Internet connectivity into physical devices and everyday objects. Embedded with electronics, Internet connectivity, and other forms of hardware; these devices can communicate and interact with others over the Internet, and they can be remotely monitored and controlled. In the mining industry, IoT is used as a means of achieving cost and productivity optimization, improving safety measures and developing their artificial intelligence needs.
IoT in the Mining Industry
Considering the numerous incentives it brings, many large mining companies are planning and evaluating ways to start their digital journey and digitalize day-to-day mining operations. For instance:
- Cost optimization & improved productivity through the implementation of sensors on mining equipment and systems that monitor the equipment and its performance. Mining companies are using these large chunks of data – 'big data' to discover more cost-efficient ways of running operations and also reduce overall operational downtime.
- Ensure the safety of people and equipment by monitoring ventilation and toxicity levels inside underground mines with the help of IoT on a real-time basis. It enables faster and more efficient evacuations or safety drills.
- Moving from preventive to predictive maintenance
- Improved and faster decision-making. The mining industry faces emergencies almost every hour, with a high degree of unpredictability. IoT helps balance situations and make the right decisions when several aspects are active at the same time, shifting everyday operations to algorithms.
IoT & Artificial Intelligence (AI) application in Mining industry
Another benefit of IoT in the mining industry is its role as the underlying system facilitating the use of Artificial Intelligence (AI). From exploration to processing and transportation, AI enhances the power of IoT solutions as a means of streamlining operations, reducing costs, and improving safety within the mining industry.
Using vast amounts of data inputs, such as drilling reports and geological surveys, AI and machine learning can make predictions and provide recommendations on exploration, resulting in a more efficient process with higher-yield results.
AI-powered predictive models also enable mining companies to improve their metals processing methods through more accurate and less environmentally damaging techniques. AI can be used for the automation of trucks and drills, which offers significant cost and safety benefits.
Challenges for IoT in Mining
Although there are benefits of IoT in the mining industry, implementation of IoT in mining operations has faced many challenges in the past.
- Limited or unreliable connectivity especially in underground mine sites
- Remote locations may struggle to pick up 3G/4G signals
- Declining ore grade has increased the requirements to dig deeper in many mines, which may increase hindrances in the rollout of IoT systems
Mining companies have overcome the challenge of connectivity by implementing more reliable connectivity methods and data-processing strategies to collect, transfer, and present mission-critical data for analysis. Satellite communications can play a critical role in transferring data back to control centers to provide a complete picture of mission-critical metrics. Mining companies have worked with trusted IoT satellite connectivity specialists such as Inmarsat and their partner ecosystems to ensure they extract and analyze their data effectively.
Cybersecurity will be another major challenge for IoT-powered mines over the coming years. As mining operations become more connected, they will also become more vulnerable to hacking, which will require additional investment in security systems.
Following a 2016 data breach at Goldcorp that disproved the previous industry mentality that miners are not typical targets, ten mining companies established the Mining and Metals Information Sharing and Analysis Centre (MM-ISAC) in April 2017 to share cyber threats among peers.
In March 2019, one of the largest aluminum producers in the world, Norsk Hydro, suffered an extensive cyber-attack, which led to the company isolating all plants and operations as well as switching to manual operations and procedures. Several of its plants suffered temporary production stoppages as a result. Mining companies have realized the importance of digital security and are investing in new security technologies.
Digitalization of Mining Industry - Road Ahead
Many mining companies have realized the benefits of digitalization in their mines and have taken steps to implement them. Four themes are expected to be central to the digitalization of the mining industry over the next decade.
The graph above demonstrates the complexity of each digital technology and its implementation period for widespread adoption. Various factors, such as the complexity and scalability of the technologies involved, affect the adoption rate of specific technologies and the overall digital transformation of the mining industry.
The world can expect to witness prominent developments that make the mining industry more sustainable. Mining also has unfavorable impacts on communities, ecosystems, and other surroundings, and with the intention of minimizing them, the power of data is being harnessed through different IoT solutions. Overall, IoT helps the mining industry extract resources within the time frame and footprint that are essential.
Originally posted here.
In this blog, we’ll discuss how users of Edge Impulse and Nordic can actuate and stream classification results over BLE using Nordic’s UART Service (NUS). This makes it easy to integrate embedded machine learning into your next generation IoT applications. Seamless integration with nRF Cloud is also possible since nRF Cloud has native support for a BLE terminal.
We’ve extended the Edge Impulse example functionality already available for the nRF52840 DK and nRF5340 DK by adding the ability to actuate and stream classification outputs. The extended example is available for download on GitHub, and offers a uniform experience on both hardware platforms.
Using nRF Toolbox
After following the instructions in the example’s readme, download the nRF Toolbox mobile application (available on both iOS and Android) and connect to the nRF52840 DK or the nRF5340 DK, which will be discovered as “Edge Impulse”. Once connected, set up the interface so that you can get information about the device and its available sensors and start or stop the inferencing process. Save the preset configuration so that you can load it again for future use. Fill out the text of the various commands to match the Edge Impulse AT command set; for example, sending AT+RUNIMPULSE starts the inferencing process on the device.
Figure 1. Setting up the Edge Impulse AT Command set
Once the appropriate AT commands have been mapped to icons, hit the appropriate icon. Hitting the ‘play’ button causes the device to start acquiring data and perform inference every couple of seconds. The results can be viewed in the “Logs” menu as shown below.
Figure 2. Classification Output over BLE in the Logs View
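Because the classification results arrive over NUS as plain text lines, a host application could parse them into label/score pairs. The "label: score" line format below is an assumption modeled on typical Edge Impulse serial output, not a documented contract, and the sample log is invented for illustration.

```python
import re

# Example log lines as they might appear in the "Logs" view (assumed format).
log = """\
Predictions (DSP: 5 ms., Classification: 1 ms.)
idle: 0.0312
snake: 0.0104
wave: 0.9584
"""

LINE = re.compile(r"^\s*([A-Za-z_][\w-]*):\s*([0-9.]+)\s*$")

def parse_scores(text):
    """Extract a {label: score} dict from NUS terminal output."""
    scores = {}
    for line in text.splitlines():
        m = LINE.match(line)
        if m:
            scores[m.group(1)] = float(m.group(2))
    return scores

scores = parse_scores(log)
print(max(scores, key=scores.get))  # wave
```

A gateway could run a parser like this on each notification to forward only the winning label to the cloud, rather than the raw text stream.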
Using nRF Cloud
Using the nRF Connect for Cloud mobile app for iOS and Android, you can turn your smartphone into a BLE gateway. This allows users to connect their BLE NUS devices running Edge Impulse to nRF Cloud and send their inferencing conclusions to the cloud. It’s as easy as setting up the BLE gateway through the app, connecting to the “Edge Impulse” device, and watching the same results being displayed in the “Terminal over BLE” window shown below!
Figure 3. Classification Output Shown in nRF Cloud
Edge Impulse is supercharging IoT with embedded machine learning and we’ve discussed a couple of ways you can easily send conclusions to either the smartphone or to the cloud by leveraging the Nordic UART Service. We look forward to seeing how you’ll leverage Edge Impulse, Nordic and BLE to create your next gen IoT application.
Article originally written for the Edge Impulse blog by Zin Thein Kyaw, Senior User Success Engineer at Edge Impulse.
by Evelyn Münster
IoT systems are complex data products: they consist of digital and physical components, networks, communications, processes, data, and artificial intelligence (AI). User interfaces (UIs) are meant to make this level of complexity understandable for the user. However, building a data product that can explain data and models to users in a way that they can understand is an unexpectedly difficult challenge. That is because data products are not your run-of-the-mill software product.
In fact, 85% of all big data and AI projects fail. Why? I can say from experience that it is not the technology but rather the design that is to blame.
So how do you create a valuable data product? The answer lies in a new type of user experience (UX) design. With data products, UX designers are confronted with several additional layers that are not usually found in conventional software products: it’s a relatively complex system, unfamiliar to most users, and comprises data and data visualization as well as AI in some cases. Last but not least, it presents an entirely different set of user problems and tasks than customary software products.
Let’s take things one step at a time. My many years in data product design have taught me that it is possible to create great data products, as long as you keep a few things in mind before you begin.
As a prelude to the UX design process, make sure you and your team answer the following nine questions:
1. Which problem does my product solve for the user?
The user must be able to understand the purpose of your data product in a matter of minutes. It can help to assign the product to one of the five categories of data product tasks: actionable insights, performance feedback loop, root cause analysis, knowledge creation, and trust building.
2. What does the system look like?
Do not expect users to already know how to interpret the data properly. They need to be able to construct a fairly accurate mental model of the system behind the data.
3. What is the level of data quality?
The UI must reflect the quality of the data. A good UI leads the user to trust the product.
4. What is the user’s proficiency level in graphicacy and numeracy?
Conduct user testing to make sure that your audience will be able to read and interpret the data and visuals correctly.
5. What level of detail do I need?
Aggregated data is often too abstract to explain the system or to build user trust. A good way to counter this challenge is to use details that explain things. Then again, too much detail can be overwhelming.
6. Are we dealing with probabilities?
Probabilities are tricky and require explanations. The common practice of cutting out all uncertainties makes the UI deceptively simple – and dangerous.
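One concrete way to avoid the deceptively simple point value is to render a probability together with its uncertainty. The function below is a hypothetical sketch, assuming the model exposes a symmetric error margin; the numbers are invented.

```python
def describe_probability(p, margin):
    """Render a probability with its uncertainty rather than a bare point value.

    p and margin are fractions, e.g. a model score of 0.72 with a
    symmetric error margin of 0.08 (both assumed inputs).
    """
    low = max(0.0, p - margin)   # clamp the interval to [0, 1]
    high = min(1.0, p + margin)
    return f"{low:.0%}-{high:.0%} likely"

# A bare "72%" hides how sure the model really is; a range does not:
print(describe_probability(0.72, 0.08))  # 64%-80% likely
```

Showing "64%-80% likely" instead of "72%" is a small UI change, but it keeps the user from over-trusting a single number.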
7. Do we have a data visualization expert on the design team?
UX design applied to data visualization requires a special skillset that covers the entire process, from data analysis to data storytelling. It is always a good idea to have an expert on the team or, alternatively, have someone to reach out to when required.
8. How do we get user feedback?
As soon as the first prototype is ready, you should collect feedback through user testing. The prototype should present content in the most realistic and consistent way possible, especially when it comes to data and figures.
9. Can the user interface boost our marketing and sales?
If the user interface clearly communicates what the data product does and what the process is like, then it could take on a new function: selling your product.
To sum up: we must acknowledge that data products are an unexplored territory. They are not just another software product or dashboard, which is why, in order to create a valuable data product, we will need a specific strategy, new workflows, and a particular set of skills: Data UX Design.
Originally posted here.
by Stephanie Overby
What's next for edge computing, and how should it shape your strategy? Experts weigh in on edge trends and talk workloads, cloud partnerships, security, and related issues
All year, industry analysts have been predicting that edge computing – and complementary 5G network offerings – will see significant growth, with major cloud vendors deploying more edge servers in local markets and telecom providers pushing ahead with 5G deployments.
The global pandemic has not significantly altered these predictions. In fact, according to IDC’s worldwide IT predictions for 2021, COVID-19’s impact on workforce and operational practices will be the dominant accelerator for 80 percent of edge-driven investments and business model change across most industries over the next few years.
First, what exactly do we mean by edge? Here’s how Rosa Guntrip, senior principal marketing manager, cloud platforms at Red Hat, defines it: “Edge computing refers to the concept of bringing computing services closer to service consumers or data sources. Fueled by emerging use cases like IoT, AR/VR, robotics, machine learning, and telco network functions that require service provisioning closer to users, edge computing helps solve the key challenges of bandwidth, latency, resiliency, and data sovereignty. It complements the hybrid computing model where centralized computing can be used for compute-intensive workloads while edge computing helps address the requirements of workloads that require processing in near real time.”
Moving data infrastructure, applications, and data resources to the edge can enable faster response to business needs, increased flexibility, greater business scaling, and more effective long-term resilience.
“Edge computing is more important than ever and is becoming a primary consideration for organizations defining new cloud-based products or services that exploit local processing, storage, and security capabilities at the edge of the network through the billions of smart objects known as edge devices,” says Craig Wright, managing director with business transformation and outsourcing advisory firm Pace Harmon.
“In 2021 this will be an increasing consideration as autonomous vehicles become more common, as new post-COVID-19 ways of working require more distributed compute and data processing power without incurring debilitating latency, and as 5G adoption stimulates a whole new generation of augmented reality, real-time application solutions, and gaming experiences on mobile devices,” Wright adds.
8 key edge computing trends in 2021
Noting the steady maturation of edge computing capabilities, Forrester analysts said, “It’s time to step up investment in edge computing,” in their recent Predictions 2020: Edge Computing report. As edge computing emerges as ever more important to business strategy and operations, here are eight trends IT leaders will want to keep an eye on in the year ahead.
1. Edge meets more AI/ML
Until recently, pre-processing of data via near-edge technologies or gateways had its share of challenges due to the increased complexity of data solutions, especially in use cases with a high volume of events or limited connectivity, explains David Williams, managing principal of advisory at digital business consultancy AHEAD. “Now, AI/ML-optimized hardware, container-packaged analytics applications, frameworks such as TensorFlow Lite and tinyML, and open standards such as the Open Neural Network Exchange (ONNX) are encouraging machine learning interoperability and making on-device machine learning and data analytics at the edge a reality.”
Machine learning at the edge will enable faster decision-making. “Moreover, the amalgamation of edge and AI will further drive real-time personalization,” predicts Mukesh Ranjan, practice director with management consultancy and research firm Everest Group.
“But without proper thresholds in place, anomalies can slowly become standards,” notes Greg Jones, CTO of IoT solutions provider Kajeet. “Advanced policy controls will enable greater confidence in the actions made as a result of the data collected and interpreted from the edge.”
2. Cloud and edge providers explore partnerships
IDC predicts a quarter of organizations will improve business agility by integrating edge data with applications built on cloud platforms by 2024. That will require partnerships across cloud and communications service providers, and some wireless carriers are already pairing up with the major public cloud providers.
According to IDC research, the systems that organizations can leverage to enable real-time analytics are already starting to expand beyond traditional data centers and deployment locations. Devices and computing platforms closer to end customers and/or co-located with real-world assets will become an increasingly critical component of this IT portfolio. This edge computing strategy will be part of a larger computing fabric that also includes public cloud services and on-premises locations.
In this scenario, edge provides immediacy and cloud supports big data computing.
3. Edge management takes center stage
“As edge computing becomes as ubiquitous as cloud computing, there will be increased demand for scalability and centralized management,” says Wright of Pace Harmon. IT leaders deploying applications at scale will need to invest in tools to “harness step change in their capabilities so that edge computing solutions and data can be custom-developed right from the processor level and deployed consistently and easily just like any other mainstream compute or storage platform,” Wright says.
The traditional approach to data center or cloud monitoring won’t work at the edge, notes Williams of AHEAD. “Because of the rather volatile nature of edge technologies, organizations should shift from monitoring the health of devices or the applications they run to instead monitor the digital experience of their users,” Williams says. “This user-centric approach to monitoring takes into consideration all of the components that can impact user or customer experience while avoiding the blind spots that often lie between infrastructure and the user.”
As Stu Miniman, director of market insights on the Red Hat cloud platforms team, recently noted, “If there is any remaining argument that hybrid or multi-cloud is a reality, the growth of edge solidifies this truth: When we think about where data and applications live, they will be in many places.”
“The discussion of edge is very different if you are talking to a telco company, one of the public cloud providers, or a typical enterprise,” Miniman adds. “When it comes to Kubernetes and the cloud-native ecosystem, there are many technology-driven solutions competing for mindshare and customer interest. While telecom giants are already extending their NFV solutions into the edge discussion, there are many options for enterprises. Edge becomes part of the overall distributed nature of hybrid environments, so users should work closely with their vendors to make sure the edge does not become an island of technology with a specialized skill set.”
4. IT and operational technology begin to converge
Resiliency is perhaps the business term of the year, thanks to a pandemic that revealed most organizations’ weaknesses in this area. IoT-enabled devices (and other connected equipment) drive the adoption of edge solutions where infrastructure and applications are being placed within operations facilities. This approach will be “critical for real-time inference using AI models and digital twins, which can detect changes in operating conditions and automate remediation,” IDC’s research says.
IDC predicts that the number of new operational processes deployed on edge infrastructure will grow from less than 20 percent today to more than 90 percent in 2024 as IT and operational technology converge. Organizations will begin to prioritize not just extracting insight from their new sources of data, but integrating that intelligence into processes and workflows using edge capabilities.
Mobile edge computing (MEC) will be a key enabler of supply chain resilience in 2021, according to Pace Harmon’s Wright. “Through MEC, the ecosystem of supply chain enablers has the ability to deploy artificial intelligence and machine learning to access near real-time insights into consumption data and predictive analytics as well as visibility into the most granular elements of highly complex demand and supply chains,” Wright says. “For organizations to compete and prosper, IT leaders will need to deliver MEC-based solutions that enable an end-to-end view across the supply chain available 24/7 – from the point of manufacture or service throughout its distribution.”
5. Edge eases connected ecosystem adoption
Edge not only enables and enhances the use of IoT, but it also makes it easier for organizations to participate in the connected ecosystem with minimized network latency and bandwidth issues, says Manali Bhaumik, lead analyst at technology research and advisory firm ISG. “Enterprises can leverage edge computing’s scalability to quickly expand to other profitable businesses without incurring huge infrastructure costs,” Bhaumik says. “Enterprises can now move into profitable and fast-streaming markets with the power of edge and easy data processing.”
6. COVID-19 drives innovation at the edge
“There’s nothing like a pandemic to take the hype out of technology effectiveness,” says Jason Mann, vice president of IoT at SAS. Take IoT technologies such as computer vision enabled by edge computing: “From social distancing to thermal imaging, safety device assurance and operational changes such as daily cleaning and sanitation activities, computer vision is an essential technology to accelerate solutions that turn raw IoT data (from video/cameras) into actionable insights,” Mann says. Retailers, for example, can use computer vision solutions to identify when people are violating the store’s social distance policy.
7. Private 5G adoption increases
“Use cases such as factory floor automation, augmented and virtual reality within field service management, and autonomous vehicles will drive the adoption of private 5G networks,” says Ranjan of Everest Group. Expect more maturity in this area in the year ahead, Ranjan says.
8. Edge improves data security
“Data efficiency is improved at the edge compared with the cloud, reducing internet and data costs,” says ISG’s Bhaumik. “The additional layer of security at the edge enhances the user experience.” Edge computing is also not dependent on a single point of application or storage, Bhaumik says. “Rather, it distributes processes across a vast range of devices.”
As organizations adopt DevSecOps and take a “design for security” approach, edge is becoming a major consideration for the CSO to enable secure cloud-based solutions, says Pace Harmon’s Wright. “This is particularly important where cloud architectures alone may not deliver enough resiliency or inherent security to assure the continuity of services required by autonomous solutions, by virtual or augmented reality experiences, or big data transaction processing,” Wright says. “However, IT leaders should be aware of the rate of change and relative lack of maturity of edge management and monitoring systems; consequently, an edge-based security component or solution for today will likely need to be revisited in 18 to 24 months’ time.”
Originally posted here.
IoT Sustainability, Data At The Edge.
Recently I've written quite a bit about IoT, and one thing you may have picked up on is that the Internet of Things is made up of a lot of very large numbers.
For starters, the number of connected things is measured in the tens of billions, heading towards hundreds of billions. Then, behind that very large number is an even bigger one: the amount of data these billions of devices are predicted to generate.
As FutureIoT pointed out, IDC forecasted that the amount of data generated by IoT devices by 2025 is expected to be in excess of 79.4 zettabytes (ZB).
How Much Is a Zettabyte?!
A zettabyte is a very large number indeed, but how big? How can you get your head around it? Does this help...?
A zettabyte is 1,000,000,000,000,000,000,000 bytes. Hmm, that's still not very easy to visualise.
So let's think of it in terms of London buses. Let's imagine a byte is represented as a person on a bus; a London bus can take 80 people, so you'd need 993 quintillion buses to accommodate 79.4 zettahumans.
I tried to calculate how long 993 quintillion buses would be. Relating it to the distance to the Moon, Mars or the Sun wasn't doing it justice; the only comparable scale is the size of the Milky Way. Even then, our 79.4 zettahumans lined up in London buses would stretch across the entire Milky Way ... and a fair bit further!
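The bus arithmetic above is easy to sanity-check; a quick sketch, assuming one byte per passenger and 80 passengers per bus:

```python
ZETTABYTE = 10**21    # bytes in a zettabyte
QUINTILLION = 10**18

iot_bytes = 79.4 * ZETTABYTE   # IDC's forecast for 2025
passengers_per_bus = 80

buses = iot_bytes / passengers_per_bus
print(buses / QUINTILLION)     # ≈ 992.5, i.e. the "993 quintillion buses" above
```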
Sustainability Of Cloud Storage For 993 Quintillion Buses Of Data
Everything we do has an impact on the planet. Just by reading this article, you're generating 0.2 grams of Carbon Dioxide (CO2) emissions per second ... so I'll try to keep this short.
Stanford Magazine has suggested that every 100 gigabytes of data stored in the Cloud could generate 0.2 tons of CO2 per year. On that basis, storing 79.4 zettabytes of data in the Cloud could be responsible for the production of 158.8 billion tons of greenhouse gases.
Putting that number into context, using USA Today numbers, the total emissions for China, USA, India, Russia, Japan and Germany accounted for a little over 21 billion tons in 2019.
So if we just go ahead and let all the IoT devices stream data to the Cloud, those billions of little gadgets would indirectly generate more than seven times the emissions of the six most industrialized countries combined.
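Those CO2 figures can be checked the same way. A minimal sketch, using the article's numbers (0.2 tons of CO2 per 100 GB per year, 1 ZB = 10^12 GB, and roughly 21 billion tons for the six biggest emitters in 2019):

```python
GB_PER_ZETTABYTE = 10**12

iot_data_gb = 79.4 * GB_PER_ZETTABYTE
co2_tons = iot_data_gb / 100 * 0.2       # 0.2 tons CO2 per 100 GB per year

top_six_emitters_tons = 21e9             # China, USA, India, Russia, Japan, Germany (2019)

print(co2_tons / 1e9)                    # ≈ 158.8 billion tons
print(co2_tons / top_six_emitters_tons)  # ≈ 7.6, i.e. "more than seven times"
```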
Save The Planet, Store Data At The Edge
As mentioned in a previous article, not all data generated by IoT devices needs to be stored in the Cloud.
Speaking with an expert in data storage, ObjectBox, they say their users on average cut their Cloud data storage by 60%. So how does that work, then?
First, what does The Edge mean?
The term "Edge" refers to the edge of the network, in other words the last piece of equipment or thing connected to the network closest to the point of usage.
Let me illustrate with a rather over-simplified diagram.
How Can Edge Data Storage Improve Sustainability?
In an article about computer vision and AI on the edge, I talked about how vast amounts of network data could be saved if the cameras themselves could detect what an important event was, and to just send that event over the network, not the entire video stream.
In that example, only the key events and meta data, like the identification marks of a vehicle crossing a stop light, needed to be transmitted across the network. However, it is important to keep the raw content at the edge, so it can be used for post processing, for further learning of the AI or even to be retrieved at a later date, e.g. by law-enforcement.
Another example could be sensors used to detect gas leaks, seismic activity, fires or broken glass. These sensors are capturing volumes of data each second, but they only want to alert someone when something happens - detection of abnormal gas levels, a tremor, fire or smashed window.
Those alerts are the primary purpose of those devices, but the data in between those events can also hold significant value. In this instance, keeping it locally at the edge, but having it as and when needed is an ideal way to reduce network traffic, reduce Cloud storage and save the planet (well, at least a little bit).
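The pattern both examples describe (store everything locally, transmit only events) can be sketched in a few lines; the threshold and field names here are invented for illustration, not taken from any particular product:

```python
ALERT_THRESHOLD_PPM = 300   # hypothetical abnormal gas level

local_store = []            # raw readings stay at the edge for later retrieval
uplink = []                 # only this crosses the network to the Cloud

def on_sensor_reading(ppm):
    local_store.append(ppm)               # keep everything locally
    if ppm > ALERT_THRESHOLD_PPM:         # alert only on abnormal events
        uplink.append({"event": "gas_alert", "ppm": ppm})

for reading in [12, 15, 14, 350, 16]:
    on_sensor_reading(reading)

print(len(local_store), len(uplink))  # 5 readings kept locally, 1 alert sent
```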
Accessible Data At The Edge
Keeping your data at the edge is a great way to save costs and increase performance, but you still want to be able to get access to it, when you need it.
ObjectBox have created not just one of the most efficient ways to store data at the edge, but they've also built a sophisticated and powerful method to synchronise data between edge devices, the Cloud and other edge devices.
Synchronise Data At The Edge - Fog Computing.
Fog Computing (which is computing that happens between the Cloud and the Edge) requires data to be exchanged with devices connected to the edge, but without going all the way to/from the servers in the Cloud.
In the article on making smarter, safer cities, I talked about how by having AI-equipped cameras share data between themselves they could become smarter, more efficient.
A solution like that could be using ObjectBox's synchronisation capabilities to efficiently discover and collect relevant video footage from various cameras to help either identify objects or even train the artificial intelligence algorithms running on the AI-equipped cameras at the edge.
Storing Data At The Edge Can Save A Bus Load Of CO2
Edge computing has a lot of benefits to offer; in this article I've just looked at one that is often overlooked - the cost of transferring data. I've also not really delved into the broader benefits of ObjectBox's technology: for example, judging from their open source benchmarks, ObjectBox seems to offer a tenfold performance benefit compared to other solutions out there, and is being used by more than 300,000 developers.
The team behind ObjectBox also built technologies currently used by internet heavy-weights like Twitter, Viber and Snapchat, so they seem to be doing something right, and if they can really cut down network traffic by 60%, they could be one of the sustainable technology companies to watch.
Originally posted here.
Edge Impulse has joined 1% for Planet, pledging to donate 1% of our revenue to support nonprofit organizations focused on the environment. To complement this effort we launched the ElephantEdge competition, aiming to create the world’s best elephant tracking device to protect elephant populations that would otherwise be impacted by poaching. In this similar vein, this blog will detail how Lacuna Space, Edge Impulse, a microcontroller and LoraWAN can promote the conservation of endangered species by monitoring bird calls in remote areas.
Over the past years, The Things Network has worked on the democratization of the Internet of Things, building a global, crowdsourced LoRaWAN network carried by the thousands of users operating their own gateways worldwide. Thanks to Lacuna Space's satellite constellation, the network coverage goes one step further. Lacuna Space uses LEO (Low-Earth Orbit) satellites to provide LoRaWAN coverage at any point around the globe. Messages received by satellites are then routed to ground stations and forwarded to LoRaWAN service providers such as TTN. This technology can benefit several industries and applications: tracking a vessel not only in harbors but across the oceans, or monitoring endangered species in remote areas. All that with only 25 mW of power (the ISM band limit) to send a message to the satellite. This is truly amazing!
Most of these devices are typically simple, just sending a single temperature value, or other sensor reading, to the satellite - but with machine learning we can track much more: what devices hear, see, or feel. In this blog post we'll take you through the process of deploying a bird sound classification project using an Arduino Nano 33 BLE Sense board and a Lacuna Space LS200 development kit. The inferencing results are then sent to a TTN application.
Note: Access to the Lacuna Space program and dev kit is limited to a closed group at the moment. Get in touch with Lacuna Space for hardware and software access. The technical details to configure your Arduino sketch and TTN application are available in our GitHub repository.
Our bird sound model classifies house sparrow and rose-ringed parakeet species with a 92% accuracy. You can clone our public project or make your own classification model following our different tutorials such as Recognize sounds from audio or Continuous Motion Recognition.
Once you have trained your model, head to the Deployment section, select the Arduino library and Build it.
Import the library within the Arduino IDE, and open the microphone continuous example sketch. We made a few modifications to this example sketch to interact with the LS200 dev kit: we added a new UART link and we transmit classification results only if the prediction score is above 0.8.
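The modified sketch itself is Arduino C++, but the gating logic it adds - forward a result over the UART link only when the top prediction scores above 0.8 - amounts to something like this (a Python sketch for illustration; the function and label names are invented):

```python
CONFIDENCE_THRESHOLD = 0.8  # matches the modified example sketch

def maybe_transmit(results, send):
    """Forward only the top class, and only if its score is high enough."""
    label, score = max(results.items(), key=lambda kv: kv[1])
    if score >= CONFIDENCE_THRESHOLD:
        send(f"{label}:{score:.2f}")  # would go over UART to the LS200 kit

sent = []
maybe_transmit({"house_sparrow": 0.91, "parakeet": 0.07, "noise": 0.02}, sent.append)
maybe_transmit({"house_sparrow": 0.45, "parakeet": 0.40, "noise": 0.15}, sent.append)
print(sent)  # only the confident classification is transmitted
```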
Connect with the Lacuna Space dashboard by following the instructions in our application’s GitHub README. Using a web tracker, you can determine the next good time a Lacuna Space satellite will be flying over your location; you can then receive the signal through your The Things Network application and view the inferencing results of the bird call classification:
No Lacuna Space development kit yet? No problem! You can already start building and verifying your ML models on the Arduino Nano 33 BLE Sense or one of our other development kits, test it out with your local LoRaWAN network (by pairing it with a LoRa radio or LoRa module) and switch over to the Lacuna satellites when you get your kit.
Originally posted on the Edge Impulse blog by Aurelien Lequertier - Lead User Success Engineer at Edge Impulse, Jenny Plunkett - User Success Engineer at Edge Impulse, & Raul James - Embedded Software Engineer at Edge Impulse
In 2020 we saw retailers hard hit by the economic effects of the COVID-19 pandemic, with dozens of retailers—Neiman Marcus, J.C. Penney, and Brooks Brothers to name a few—declaring bankruptcy. During the unprecedented chaos of lockdowns and social distancing, consumers accelerated their shift to online shopping. Retailers like Target and Best Buy saw online sales double while Amazon’s e-commerce sales grew 39 percent. Retailers also navigated supply chain disruptions due to COVID-19, climate change events, trade tensions, and cybersecurity events.
After the last twelve tumultuous months, what will 2021 bring for the retail industry? I spoke with Microsoft Azure IoT partners to understand how they are planning for 2021 and compiled insights about five retail trends. One theme we’re seeing is a focus on efficiency. Retailers will look to pre-configured digital platforms that leverage cloud-based technologies including the Internet of Things (IoT), artificial intelligence (AI), and edge computing to meet their business goals.
Empowering frontline workers with real-time data
In 2021, retailers will increase efficiency by empowering frontline workers with real-time data. Retail employees will be able to respond more quickly to customers and expand their roles to manage curbside pickups, returns, and frictionless kiosks.
In H&M Mitte Garten in Berlin, H&M empowered employee ambassadors with fashionable bracelets connected to the Azure cloud. Ambassadors were able to receive real-time requests via their bracelets when customers needed help in fitting rooms or at a cash desk. The ambassadors also received visual merchandising instructions and promotional updates.
Through the app built on Microsoft partner Turnpike’s wearable SaaS platform leveraging Azure IoT Hub, these frontline workers could also communicate with their peers or their management team during or after store hours. With the real-time data from the connected bracelets, H&M ambassadors were empowered to deliver best-in-class service.
Carl Norberg, Founder, Turnpike explained, “We realized that by connecting store IoT sensors, POS systems, and AI cameras, store staff can be empowered to interact at the right place at the right time.”
Leveraging live stream video to innovate omnichannel
Livestreaming has been exploding in China as influencers sell through their social media channels. Forbes recently projected that nearly 40 percent of China’s population will have viewed livestreams during 2020. Retailers in the West are starting to leverage live stream technology to create innovative omnichannel solutions.
For example, Kjell & Company, one of Scandinavia’s leading consumer electronics retailers, is using a solution from Bambuser and Ombori called Omni-queue built on top of the Ombori Grid. Omni-queue enables store employees to handle a seamless combination of physical and online visitors within the same queue using one-to-one live stream video for online visitors.
Kjell & Company ensures e-commerce customers receive the same level of technical expertise and personalized service they would receive in one of their physical locations. Omni-queue also enables its store employees to be utilized highly efficiently with advanced routing and knowledge matching.
Maryam Ghahremani, CEO of Bambuser explains, “Live video shopping is the future, and we are so excited to see how Kjell & Company has found a use for our one-to-one solution.” Martin Knutson, CTO of Kjell & Company added, “With physical store locations heavily affected due to the pandemic, offering a new and innovative way for customers to ask questions—especially about electronics—will be key to Kjell’s continued success in moving customers online.”
Augmenting omnichannel with dark stores and micro-fulfillment centers
In 2021, retailers will continue experimenting with dark stores—traditional retail stores that have been converted to local fulfillment centers—and micro-fulfillment centers. These supply chain innovations will increase efficiency by bringing products closer to customers.
Microsoft partner Attabotics, a 3D robotics supply chain company, works with an American luxury department store retailer to reduce costs and delivery time using a micro-fulfillment center. Attabotics’ unique use of both horizontal and vertical space reduces warehouse needs by 85 percent. Attabotics’ structure and robotic shuttles leveraged Microsoft Azure Edge Zones, Azure IoT Central, and Azure Sphere.
The luxury retailer leverages the micro-fulfillment center to package and ship multiple beauty products together. As a result, customers experience faster delivery times. The retailer also reduces costs related to packaging, delivery, and warehouse space.
Scott Gravelle, Founder, CEO, and CTO of Attabotics explained, “Commerce is at a crossroads, and for retailers and brands to thrive, they need to adapt and take advantage of new technologies to effectively meet consumers’ growing demands. Supply chains have not traditionally been set up for e-commerce. We will see supply chain innovations in automation and modulation take off in 2021 as they bring a wider variety of products closer to the consumer and streamline the picking and shipping to support e-commerce.”
Helping keep warehouse workers safe
What will this look like? Cognizant’s recent work with an athletic apparel retailer offers a blueprint. During the peak holiday season, the retailer needed to protect its expanding warehouse workforce while minimizing absenteeism. To implement physical distancing and other safety measures, the retailer leveraged Cognizant’s Safe Buildings solution built with Azure IoT Edge and IoT Hub services.
With this solution, employees maintain physical distancing using smart wristbands. When two smart wristbands are within a pre-defined distance of each other for more than a pre-defined time, the workers’ bands buzz to reinforce safe behaviors. This approach drove nearly 98 percent distancing compliance in the initial pilot. As the retailer plans to scale up its workforce at other locations, additional safety modules are being considered:
- Touchless temperature checks.
- Occupancy sensors that communicate capacity information to the management team for compliance records.
- Air quality sensors that provide environmental data so the facility team can help ensure optimal conditions for workers’ health.
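As a rough illustration of the wristband buzz logic described above (the actual pre-defined distance and time values aren't stated, so the thresholds here are invented):

```python
MIN_DISTANCE_M = 2.0     # hypothetical pre-defined distance
MAX_SECONDS_CLOSE = 10   # hypothetical pre-defined time

def should_buzz(samples):
    """samples: (seconds, metres) pairs for the gap between two wristbands.
    Buzz once the pair has been too close for too long."""
    close_since = None
    for t, d in samples:
        if d < MIN_DISTANCE_M:
            if close_since is None:
                close_since = t              # start of the too-close interval
            if t - close_since >= MAX_SECONDS_CLOSE:
                return True
        else:
            close_since = None               # distance restored, reset the timer
    return False

print(should_buzz([(0, 1.0), (5, 1.2), (10, 1.1)]))  # close for 10 s: buzz
print(should_buzz([(0, 1.0), (5, 3.0), (10, 1.0)]))  # the gap reset the timer: no buzz
```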
“For organizations to thrive during and post-pandemic, enterprise-grade workplace safety cannot be compromised. Real-time visibility of threats is providing essential businesses an edge in minimizing risks proactively while building employee trust and empowering productivity in a safer workplace,” said Rajiv Mukherjee, Cognizant’s IoT Practice Director for Retail and Consumer Goods.
Optimizing inventory management with real-time edge data
In 2021, retailers will ramp up the adoption of intelligent edge solutions to optimize inventory management with real-time data. Most retailers have complex inventory management systems. However, no matter how good the systems are, there can still be data gaps due to grocery pick-up services, theft, and sweethearting. The key to addressing these gaps is to combine real-time data from applications running on edge cameras and other edge devices in the physical store with backend enterprise resource planning (ERP) data.
Seattle Goodwill worked with Avanade to implement a new Microsoft-based Dynamics platform across its 24 stores. The new system provided almost real-time visibility into the movement of goods from the warehouses to the stores.
Rasmus Hyltegård, Director of Advanced Analytics at Avanade explained, “To ensure inventory moves quickly off the shelves, retailers can combine real-time inventory insights from Avanade’s smart inventory accelerator with other solutions across the customer journey to meet customer expectations.” Hyltegård continued, “Customers can check online to find the products they want, find the stores with product in stock, and gain insight into which stores have the shortest queues, which is important during the pandemic and beyond. Once a customer is in the store, digital signage allows for endless aisle support.”
The new year 2021 holds a wealth of opportunities for retailers. We foresee retail leaders reimagining their businesses by investing in platforms that integrate IoT, AI, and edge computing technologies. Retailers will focus on increasing efficiencies to reduce costs. Modular platforms supported by an ecosystem of strong partner solutions will empower frontline workers with data, augment omnichannel fulfillment with dark stores and micro-fulfillment, leverage livestream video to enhance omnichannel, prioritize warehouse worker safety, and optimize inventory management with real-time data.
Originally posted here.
Technological fragmentation is not just one of the biggest barriers to IoT adoption, but it also complicates the goal of securing connected devices and related services. With IoT-related cyberattacks on the rise, organizations must become more adept at managing cyber-risk or face potential reputational and legal consequences. This article summarizes best practices for enterprise and industrial IoT projects.
Key takeaways from this article include the following:
- Data security remains a central technology hurdle related to IoT deployments.
- IoT security best practices also can help organizations curb the risk of broader digital transformation initiatives.
- Securing IoT projects requires a comprehensive view that encompasses the entire life cycle of connected devices and relevant supply chains.
Fragmentation and security have long been two of the most significant barriers to Internet of Things adoption. The two challenges are also closely related.
Despite the Internet of Things (IoT) moniker, which implies a synthesis of connected devices, IoT technologies vary considerably based on their intended use. Organizations deploying IoT thus rely on an array of connectivity types, standards and hardware. As a result, even a simple IoT device can pose many security vulnerabilities, including weak authentication, insecure cloud integration, and outdated firmware and software.
For many organizations with active or planned IoT deployments, security concerns have hampered digital ambitions. An IoT World Today August 2020 survey revealed data security as the top technology hurdle for IoT deployments, selected by 46% of respondents.
Fortunately, IoT security best practices can help organizations reduce the risks facing their deployments and broader digital transformation initiatives. These same best practices can also reduce legal liability and protect an organization’s reputation.
But to be effective, an IoT-focused security strategy requires a broad view that encompasses the entire life cycle of an organization’s connected devices and projects in addition to relevant supply chains.
Know What You Have and What You Need
Asset management is a cornerstone of effective cyber defence. Organizations should identify which processes and systems need protection. They should also strive to assess the risk cyber attacks pose to assets and their broader operations.
In terms of enterprise and industrial IoT deployments, asset awareness is frequently spotty. It can be challenging given the array of industry verticals and the lack of comprehensive tools to track assets across those verticals. But asset awareness also demands a contextual understanding of the computing environment, including the interplay among devices, personnel, data and systems, as the National Institute of Standards and Technology (NIST) has observed.
There are two fundamental questions when creating an asset inventory: What is on my network? And what are these assets doing on my network?
Answering the latter requires tracking endpoints’ behaviours and their intended purpose from a business or operational perspective. From a networking perspective, asset management should involve more than counting networking nodes; it should focus on data protection and building intrinsic security into business processes.
Relevant considerations include the following:
- Compliance with relevant security and privacy laws and standards.
- Interval of security assessments.
- Optimal access of personnel to facilities, information and technology, whether remote or in-person.
- Data protection for sensitive information, including strong encryption for data at rest and data in transit.
- Degree of security automation versus manual controls, as well as physical security controls to ensure worker safety.
IoT device makers and application developers should also implement a vulnerability disclosure program, including public contact information for security researchers and plans for responding to disclosed vulnerabilities. Bug bounty programs are another option.
Organizations that have accurately assessed current cybersecurity readiness need to set relevant goals and create a comprehensive governance program to manage and enforce operational and regulatory policies and requirements. Governance programs also ensure that appropriate security controls are in place. Organizations need to have a plan to implement controls and determine accountability for that enforcement. Another consideration is determining when security policies need to be revised.
An effective governance plan is vital for engineering security into architecture and processes, as well as for safeguarding legacy devices with relatively weak security controls. Devising an effective risk management strategy for enterprise and industrial IoT devices is a complex endeavour, potentially involving a series of stakeholders and entities. Organizations that find it difficult to assess the cybersecurity of their IoT project should consider third-party assessments.
Many tools are available to help organizations evaluate cyber-risk and defences. These include the National Vulnerability Database and the Security and Privacy Controls for Information Systems and Organizations document, both from the National Institute of Standards and Technology. Another resource is the list of 20 Critical Security Controls for Effective Cyber Defense. For studying the threat landscape, MITRE ATT&CK is one of the most popular frameworks for cataloguing adversary tactics and techniques.
At this stage of the process, another vital consideration is the degree of cybersecurity savviness and support within your business. Three out of ten organizations deploying IoT cite lack of support for cybersecurity as a hurdle, according to August 2020 research from IoT World Today. Security awareness is also frequently a challenge. Many cyberattacks against organizations — including those with an IoT element — involve phishing, like the 2015 attack against Ukraine’s electric grid.
IoT Security Best Practices
Internet of Things projects demand a secure foundation. That starts with asset awareness and extends into responding to real and simulated cyberattacks.
Step 1: Know what you have.
Building an IoT security program starts with achieving a comprehensive understanding of which systems need to be protected.
Step 2: Deploy safeguards.
Shielding devices from cyber-risk requires a thorough approach. This step involves cyber-hygiene, effective asset control and the use of other security controls.
Step 3: Identify threats.
Spotting anomalies can help mitigate attacks. Defenders should hone their skills through wargaming.
Step 4: Respond effectively.
Cyberattacks are inevitable, but each incident should provide lessons that feed back into step 1.
Exploiting human gullibility is one of the most common cybercriminal strategies. While cybersecurity training can help individuals recognize suspected malicious activities, such programs tend not to be entirely effective. “It only takes one user and one click to introduce an exploit into a network,” wrote Forrester analyst Chase Cunningham in the book “Cyber Warfare.” Recent studies have found that, even after receiving cybersecurity training, employees continue to click on phishing links about 3% of the time.
Security teams should work to earn the support of colleagues, while also factoring in the human element, according to David Coher, former head of reliability and cybersecurity for a major electric utility. “You can do what you can in terms of educating folks, whether it’s as a company IT department or as a consumer product manufacturer,” he said. But it is essential to put controls in place that can withstand user error and occasionally sloppy cybersecurity hygiene.
At the same time, organizations should also look to pool cybersecurity expertise inside and outside the business. “Designing the controls that are necessary to withstand user error requires understanding what users do and why they do it,” Coher said. “That means pulling together users from throughout your organization’s user chain — internal and external, vendors and customers, and counterparts.”
Those counterparts are easier to engage in some industries than others. Utilities, for example, have a strong track record in this regard, because of the limited market competition between them. Collaboration “can be more challenging in other industries, but no less necessary,” Coher added.
Deploy Appropriate Safeguards
Protecting an organization from cyberattacks demands a clear framework that is sensitive to business needs. While regulated industries are obligated to comply with specific cybersecurity-related requirements, consumer-facing organizations tend to have more generic requirements for privacy protections, data breach notifications and so forth. That said, all types of organizations deploying IoT have leeway in selecting a guiding philosophy for their cybersecurity efforts.
A basic security principle is to minimize networked or vulnerable systems’ attack surface — for instance, closing unused network ports and eliminating IoT device communication over the open internet. Generally speaking, building security into the architecture of IoT deployments and reducing attackers’ options to sabotage a system is more reliable than adding layers of defence to an unsecured architecture. Organizations deploying IoT projects should consider intrinsic security functionality such as embedded processors with cryptographic support.
But it is not practical to remove all risk from an IT system. For that reason, one of the most popular options is defence-in-depth, a military-rooted concept espousing the use of multiple layers of security. The basic idea is that if one countermeasure fails, additional security layers are available.
While the core principle of implementing multiple layers of security remains popular, defence in depth is also tied to the concept of perimeter-based defence, which is increasingly falling out of favour. “The defence-in-depth approach to cyber defence was formulated on the basis that everything outside of an organization’s perimeter should be considered ‘untrusted’ while everything internal should be inherently ‘trusted,’” said Andrew Rafla, a Deloitte Risk & Financial Advisory principal. “Organizations would layer a set of boundary security controls such that anyone trying to access the trusted side from the untrusted side had to traverse a set of detection and prevention controls to gain access to the internal network.”
Several trends have chipped away at the perimeter-based model. As a result, “modern enterprises no longer have defined perimeters,” Rafla said. “Gone are the days of inherently trusting any connection based on where the source originates.” Trends ranging from the proliferation of IoT devices and mobile applications to the popularity of cloud computing have fueled interest in cybersecurity models such as zero trust. “At its core, zero trust commits to ‘never trusting, always verifying’ as it relates to access control,” Rafla said. “Within the context of zero trust, security boundaries are created at a lower level in the stack, and risk-based access control decisions are made based on contextual information of the user, device, workload or network attempting to gain access.”
Zero trust’s roots stretch back to the 1970s when a handful of computer scientists theorized on the most effective access control methods for networks. “Every program and every privileged user of the system should operate using the least amount of privilege necessary to complete the job,” one of those researchers, Jerome Saltzer, concluded in 1974.
While the concept of least privilege sought to limit trust among internal computing network users, zero trust extends the principle to devices, networks, workloads and external users. The recent surge in remote working has accelerated interest in the zero-trust model. “Many businesses have changed their paradigm for security as a result of COVID-19,” said Jason Haward-Grau, a leader in KPMG’s cybersecurity practice. “Many organizations are experiencing a surge to the cloud because businesses have concluded they cannot rely on a physically domiciled system in a set location.”
Based on data from Deloitte, 37.4% of businesses accelerated their zero-trust adoption plans in response to the pandemic. In contrast, more than one-third, or 35.2%, of those embracing zero trust stated that the pandemic had not changed the speed of their organization’s zero-trust adoption.
“I suspect that many of the respondents that said their organization’s zero-trust adoption efforts were unchanged by the pandemic were already embracing zero trust and were continuing with efforts as planned,” Rafla said. “In many cases, the need to support a completely remote workforce in a secure and scalable way has provided a tangible use case to start pursuing zero-trust adoption.”
A growing number of organizations are beginning to blend aspects of zero trust and traditional perimeter-based controls through a model known as secure access service edge (SASE), according to Rafla. “In this model, traditional perimeter-based controls of the defence-in-depth approach are converged and delivered through a cloud-based subscription service,” he said. “This provides a more consistent, resilient, scalable and seamless user experience regardless of where the target application a user is trying to access may be hosted. User access can be tightly controlled, and all traffic passes through multiple layers of cloud-based detection and prevention controls.”
Regardless of the framework, organizations should have policies in place for access control and identity management, especially for passwords. As Forrester’s Cunningham noted in “Cyber Warfare,” the password, “the single most prolific means of authentication for enterprises, users, and almost any system on the planet,” is the lynchpin of failed security in cyberspace; almost everything uses a password at some stage. Numerous password repositories have been breached, and passwords are frequently recycled, making the password a common security weakness for user accounts as well as IoT devices.
A significant number of consumer-grade IoT devices have also had their default passwords posted online. Weak passwords used in IoT devices also fueled the growth of the Mirai botnet, which led to widespread internet outages in 2016. More recently, unsecured passwords on IoT devices in enterprise settings have reportedly attracted state-sponsored actors’ attention.
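One basic mitigation for the default-password problem is to reject factory credentials at provisioning time. The sketch below is illustrative only; the short credential list is a hypothetical stand-in for a maintained feed of known default and breached credentials:

```python
# A tiny sample of factory-default credentials of the kind the Mirai
# botnet scanned for. A real deployment would pull from a maintained
# feed and enforce unique per-device credentials at provisioning time.
DEFAULT_CREDENTIALS = {
    ("admin", "admin"),
    ("root", "root"),
    ("admin", "1234"),
    ("user", "user"),
}

def credential_is_weak(username: str, password: str) -> bool:
    """Reject known factory defaults and trivially short passwords."""
    if (username, password) in DEFAULT_CREDENTIALS:
        return True
    return len(password) < 12
```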
IoT devices and related systems also need an effective mechanism for device management, including tasks such as patching, connectivity management, device logging, device configuration, software and firmware updates and device provisioning. Device management capabilities also extend to access control modifications and include remediation of compromised devices. It is vital to ensure that device management processes themselves are secure and that a system is in place for verifying the integrity of software updates, which should be regular and not interfere with device functionality.
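Verifying the integrity of a software update before applying it can be sketched with a keyed digest. This is a stdlib-only illustration: production update systems typically use asymmetric signatures (e.g. Ed25519) so that devices hold only a public key, but the verify-before-flash flow is the same:

```python
import hashlib
import hmac

def sign_firmware(image: bytes, key: bytes) -> str:
    """Vendor side: produce an integrity tag for a firmware image.
    HMAC keeps this sketch stdlib-only; real systems normally use
    asymmetric signatures so devices never hold the signing key."""
    return hmac.new(key, image, hashlib.sha256).hexdigest()

def verify_firmware(image: bytes, tag: str, key: bytes) -> bool:
    """Device side: constant-time check before flashing the update."""
    expected = hmac.new(key, image, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)
```

A device that rejects an image with a bad tag simply keeps running its current firmware, so a corrupted or tampered download cannot take effect.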
Organizations must additionally address the life span of devices and the cadence of software updates. Many environments allow IT pros to identify a specific end-of-life period and remove or replace expired hardware. In such cases, there should be a plan for device disposal or transfer of ownership. In other contexts, such as in industrial environments, legacy workstations don’t have a defined expiration date and run out-of-date software. These systems should be segmented on the network. Often, such industrial systems cannot be easily patched like IT systems are, requiring security professionals to perform a comprehensive security audit on the system before taking additional steps.
Identify Threats and Anomalies
In recent years, attacks have become so common that the cybersecurity community has shifted its approach from preventing breaches to assuming a breach has already happened. The threat landscape has evolved to the point that cyberattacks against most organizations are inevitable.
“You hear it everywhere: It’s a matter of when, not if, something happens,” said Dan Frank, a principal at Deloitte specializing in privacy and data protection. Matters have only become more precarious in 2020. The FBI has reported a three- to four-fold increase in cybersecurity complaints after the advent of COVID-19.
Advanced defenders have taken a more aggressive stance known as threat hunting, which focuses on proactively identifying breaches. Another popular strategy is to study adversary behaviour and tactics to classify attack types. Models such as the MITRE ATT&CK framework and the Common Vulnerability Scoring System (CVSS) are popular for assessing adversary tactics and vulnerabilities.
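One practical use of a framework like MITRE ATT&CK is tagging detector alerts with technique IDs so analysts can reason about adversary behaviour consistently. The technique IDs below are real ATT&CK identifiers, but the alert names and which alerts map to which techniques are hypothetical and deployment-specific:

```python
# Hypothetical mapping of detector alert types to MITRE ATT&CK
# technique IDs. The IDs are real techniques; the alert names are
# made up for illustration.
ALERT_TO_ATTACK = {
    "suspicious_powershell": "T1059.001",  # Command and Scripting Interpreter: PowerShell
    "credential_dump": "T1003",            # OS Credential Dumping
    "lateral_smb_login": "T1021.002",      # Remote Services: SMB/Windows Admin Shares
}

def classify(alert_type: str) -> str:
    """Tag an alert with its ATT&CK technique, or mark it unmapped."""
    return ALERT_TO_ATTACK.get(alert_type, "unmapped")
```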
While approaches to analyzing vulnerabilities and potential attacks vary according to an organization’s maturity, situational awareness is a prerequisite at any stage. The U.S. Army Field Manual defines the term like this: “Knowledge and understanding of the current situation which promotes timely, relevant and accurate assessment of friendly, enemy and other operations within the battlespace to facilitate decision making.”
In cybersecurity as in warfare, situational awareness requires a clear perception of the elements in an environment and their potential to cause future events. In some cases, the possibility of a future cyber attack can be averted by merely patching software with known vulnerabilities.
Intrusion detection systems can automate some degree of monitoring of networks and operating systems. Intrusion detection systems based on malware signatures can also identify common attacks. They are, however, not effective at recognizing so-called zero-day malware, which has not yet been catalogued by security researchers. Signature-based intrusion detection is also ineffective against custom attacks (e.g., a disgruntled employee who knows just enough Python or PowerShell to be dangerous). Sophisticated threat actors who slip through defences to gain network access can become insiders, with permission to view sensitive networks and files. In such cases, situational awareness is a prerequisite to mitigating damage.
Another strategy for intrusion detection is to focus on context and anomalies rather than malware signatures. Such systems could use machine learning to learn legitimate commands, use of messaging protocols and so forth. While this strategy overcomes the reliance on malware signatures, it can potentially trigger false alarms. Such a system can also detect so-called slow-rate attacks, a type of denial-of-service attack that gradually consumes network bandwidth and is more difficult to detect than a volumetric attack.
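The anomaly-based approach above can be sketched with a frequency baseline: learn which commands are normal on a channel, then flag rare or unseen ones. This is a deliberately minimal illustration; the class name and threshold are invented, and a real system would also model timing, sequence, and payload size. Note how the `min_probability` threshold is exactly where the false-alarm trade-off mentioned above shows up:

```python
from collections import Counter

class CommandAnomalyDetector:
    """Learns a baseline distribution of commands observed on a device
    or protocol channel, then flags commands that are rare or unseen."""

    def __init__(self, min_probability: float = 0.01):
        self.counts = Counter()
        self.total = 0
        # Raising this catches more attacks but triggers more false alarms.
        self.min_probability = min_probability

    def train(self, command: str) -> None:
        """Record one legitimate command observation."""
        self.counts[command] += 1
        self.total += 1

    def is_anomalous(self, command: str) -> bool:
        """Flag commands whose observed frequency is below the threshold."""
        if self.total == 0:
            return True  # no baseline yet: treat everything as suspect
        return self.counts[command] / self.total < self.min_probability
```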
Respond Effectively to Cyber-Incidents
The foundation for successful cyber-incident response lies in having concrete security policies, architecture and processes. “Once you have a breach, it’s kind of too late,” said Deloitte’s Frank. “It’s what you do before that matters.”
That said, the goal of warding off all cyber-incidents, which range from violations of security policies and laws to data breaches, is not realistic. It is thus essential to implement short- and long-term plans for managing cybersecurity emergencies. Organizations should have contingency plans for addressing possible attacks, practise responding to them through wargaming exercises to improve their ability to mitigate cyberattacks, and develop effective, coordinated escalation measures for successful breaches.
There are several aspects of the zero trust model that enhance organizations’ ability to respond and recover from cyber events. “Network and micro-segmentation, for example, is a concept by which trust zones are created by organizations around certain classes or types of assets, restricting the blast radius of potentially destructive cyberattacks and limiting the ability for an attacker to move laterally within the environment,” Rafla said. Also, efforts to automate and orchestrate zero trust principles can enhance the efficiency of security operations, speeding efforts to mitigate attacks. “Repetitive and manual tasks can now be automated and proactive actions to isolate and remediate security threats can be orchestrated through integrated controls,” Rafla added.
Response to cyber-incidents involves coordinating multiple stakeholders beyond the security team. “Every business function could be impacted — marketing, customer relations, legal compliance, information technology, etc.,” Frank said.
A six-step model for cyber-incident response from the SANS Institute contains the following steps:
- Preparation: Preparing the team to react to events ranging from cyberattacks to hardware failure and power outages.
- Identification: Determining if an operational anomaly should be classified as a cybersecurity incident, and how to respond to it.
- Containment: Segmenting compromised devices on the network long enough to limit damage in the event of a confirmed cybersecurity incident. Longer-term containment measures involve hardening affected systems so they can return to normal operations.
- Eradication: Removing or restoring compromised systems. If a security team detects malware on an IoT device, for instance, this phase could involve reimaging its hardware to prevent reinfection.
- Recovery: Integrating previously compromised systems back into production and ensuring they operate normally after that. In addition to addressing the security event directly, recovery can involve crisis communications with external stakeholders such as customers or regulators.
- Lessons Learned: Documenting and reviewing the factors that led to the cyber-incident and taking steps to avoid future problems. Feedback from this step should create a feedback loop providing insights that support future preparation, identification, etc.
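The phases above can be modeled as a simple state machine so that incident-tracking tooling can enforce the order of operations, with lessons learned feeding back into preparation as the list describes. The class below is a sketch for illustration, not SANS tooling:

```python
# Phase names follow the SANS six-step model listed above.
PHASES = ["preparation", "identification", "containment",
          "eradication", "recovery", "lessons_learned"]

class Incident:
    """Tracks a single incident through the response lifecycle."""

    def __init__(self, name: str):
        self.name = name
        self.phase = "preparation"

    def advance(self) -> str:
        """Move to the next phase; lessons learned loops back to
        preparation for the next event."""
        i = PHASES.index(self.phase)
        self.phase = PHASES[(i + 1) % len(PHASES)]
        return self.phase
```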
While the bulk of the SANS model focuses on cybersecurity operations, the last step should be a multidisciplinary process. Investing in cybersecurity liability insurance to offset risks identified after ongoing cyber-incident response requires support from upper management and the legal team. Ensuring compliance with the evolving regulatory landscape also demands feedback from the legal department.
A central practice that can prove helpful is documentation — not just for security incidents, but as part of ongoing cybersecurity assessment and strategy. Organizations with mature security documentation tend to be better positioned to deal with breaches.
“If you fully document your program — your policies, procedures, standards and training — that might put you in a more favourable position after a breach,” Frank explained. “If you have all that information summarized and ready, in the event of an investigation by a regulatory authority after an incident, it shows the organization has robust programs in place.”
Documenting security events and controls can help organizations become more proactive and more capable of embracing automation and machine learning tools. As they collect data, they should repeatedly ask how to make the most of it. KPMG’s Haward-Grau said cybersecurity teams should consider the following questions:
- What data should we focus on?
- What can we do to improve our operational decision making?
- How do we reduce our time and costs efficiently and effectively, given the nature of the reality in which we’re operating?
Ultimately, answering those questions may involve using machine learning or artificial intelligence technology, Haward-Grau said. “If your business is using machine learning or AI, you have to digitally enable them so that they can do what they want to do,” he said.
Finally, documenting security events and practices as they relate to IoT devices and beyond can be useful in evaluating the effectiveness of cybersecurity spending and provide valuable feedback for digital transformation programs. “Security is a foundational requirement that needs to be ingrained holistically in architecture and processes and governed by policies,” said Chander Damodaran, chief architect at Brillio, a digital consultancy firm. “Security should be a common denominator.”
Recent legislation requires businesses to assume responsibility for protecting Internet of Things (IoT) devices. “Security by Design” approaches are essential, since successful applications deploy millions of units and analysts predict billions of devices will be deployed in the next five to ten years. The cost of fixing compromised devices later could overwhelm a business.
Security risks can never be eliminated: there is no single solution for all concerns, and the cost of countering every possible threat vector is prohibitive. The best we can do is minimize the risk and design devices and processes to be easily updatable.
It is best to assess damage potential and implement security methods accordingly. For example, for temperature and humidity sensors used in environmental monitoring, data protection needs are not as stringent as devices transmitting credit card information. The first may require anonymization for privacy, and the second may require encryption to prevent unauthorized access.
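For the low-sensitivity case above, anonymization can be as simple as replacing device identifiers with keyed pseudonyms before data leaves the premises; credit-card-grade data would additionally need real encryption in transit and at rest, which is not shown here. The key in this sketch is a placeholder:

```python
import hashlib
import hmac

# Placeholder: a real deployment would hold a per-deployment secret in a
# key store, not in source code.
SECRET = b"per-deployment-pseudonymization-key"

def pseudonymize(device_id: str) -> str:
    """Replace an identifiable device ID with a stable pseudonym.
    A keyed hash detaches readings from the physical device while
    keeping readings from the same device linkable for analytics."""
    return hmac.new(SECRET, device_id.encode(), hashlib.sha256).hexdigest()[:16]
```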
Senders and receivers must authenticate. IoT devices must transmit to the correct servers and ensure they receive messages from the correct servers.
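Mutual authentication of this kind is commonly implemented with mutual TLS: the device verifies the server against a pinned CA, and the server requires a client certificate so only provisioned devices can connect. The sketch below uses Python's `ssl` module; the file paths are placeholders for credentials issued at provisioning time:

```python
import ssl

def harden(ctx: ssl.SSLContext) -> ssl.SSLContext:
    """Shared policy: refuse legacy protocol versions."""
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2
    return ctx

def device_context(ca_file: str, cert_file: str, key_file: str) -> ssl.SSLContext:
    """Device side: verify the server against a pinned CA and present
    the device's own certificate."""
    ctx = ssl.create_default_context(ssl.Purpose.SERVER_AUTH, cafile=ca_file)
    ctx.load_cert_chain(certfile=cert_file, keyfile=key_file)
    return harden(ctx)

def server_context(ca_file: str, cert_file: str, key_file: str) -> ssl.SSLContext:
    """Server side: require a client certificate, so only provisioned
    devices can connect."""
    ctx = ssl.create_default_context(ssl.Purpose.CLIENT_AUTH, cafile=ca_file)
    ctx.load_cert_chain(certfile=cert_file, keyfile=key_file)
    ctx.verify_mode = ssl.CERT_REQUIRED
    return harden(ctx)
```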
Mission-critical applications, such as vehicle crash notification or medical alerts, may fail if the connection is not reliable. Lack of communication itself is a lack of security.
Connectivity errors can make good data unreliable, and actions on the content may be erroneous. It is best to select connectivity providers with strong security practices—e.g., whitelisting access and traffic segregation to prevent unauthorized communication.
IoT Security: 360-Degree Approach
Finally, only authorized recipients should access the information. In particular, privacy laws require extra care in accessing the information on individuals.
Developers should implement security best practices at all points in the chain. While traditional IT security protects servers with access controls, intrusion detection and the like, the farther from the servers that best practices are implemented, the less impact a breach of a remote IoT device has on the overall application.
For example, compromised sensors might send bad data, and servers might take incorrect actions despite data filtering. Gateways thus offer an ideal location for security: they have the compute capacity for encryption and can implement over-the-air (OTA) updates for security fixes.
Servers often automate responses on data content. Simplistic and automated responses to bad data could cascade into much greater difficulty. If devices transmit excessively, servers could overload and fail to provide timely responses to transmissions—retry algorithms resulting from network unavailability often create data storms.
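A standard defence against the retry-driven data storms described above is exponential backoff with jitter, so a fleet of devices that lost connectivity at the same moment does not retry in lockstep when the network returns. The function below is an illustrative "full jitter" sketch:

```python
import random

def backoff_delay(attempt, base=1.0, cap=300.0, rng=None):
    """Full-jitter exponential backoff: wait a random time in
    [0, min(cap, base * 2**attempt)] seconds before retry number
    `attempt`. Randomizing the whole interval spreads a fleet's
    retries out instead of synchronizing them."""
    rng = rng or random.Random()
    ceiling = min(cap, base * (2 ** attempt))
    return rng.uniform(0, ceiling)
```

A device would sleep for `backoff_delay(attempt)` seconds after each failed transmission, resetting `attempt` to zero on success.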
IoT devices often use electrical power rather than batteries, and compromised units could continue to operate for years. Implementing over-the-air (OTA) functions for remotely disabling devices could be critical.
When a breach requires device firmware updates, OTA support is vital when devices are inaccessible or large numbers of units must be modified rapidly. All devices should support OTA, even if it increases costs—for example, adding memory for managing multiple “images” of firmware for updates.
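The memory for multiple firmware "images" mentioned above typically supports an A/B update scheme: the new image is staged into the inactive slot and only made active after it verifies, so a corrupted download can never brick the device. The class below is a simplified sketch of that design:

```python
import hashlib

class DualImageDevice:
    """Sketch of A/B (dual-image) OTA updating. The device always keeps
    one known-good image; updates are staged into the spare slot."""

    def __init__(self, active_image: bytes):
        self.slots = {"A": active_image, "B": b""}
        self.active = "A"

    def inactive(self) -> str:
        return "B" if self.active == "A" else "A"

    def apply_update(self, image: bytes, expected_sha256: str) -> bool:
        """Stage an update and switch slots only if it verifies."""
        slot = self.inactive()
        self.slots[slot] = image  # stage into the spare slot
        if hashlib.sha256(image).hexdigest() != expected_sha256:
            return False          # keep booting the known-good slot
        self.active = slot        # switch only after verification
        return True
```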
In summary, the IoT security best practices of authentication, encryption, remote device disablement and OTA security fixes, along with traditional IT server protection, offer the best chance of minimizing the risk of attacks on IoT applications.
The benefits of IoT data are widely touted. Enhanced operational visibility, reduced costs, improved efficiencies and increased productivity have driven organizations to take major strides towards digital transformation. With countless promising business opportunities, it’s no surprise that IoT is expanding rapidly and relentlessly. It is estimated that there will be 75.4 billion IoT devices by 2025. As IoT grows, so do the volumes of IoT data that need to be collected, analyzed and stored. Unfortunately, significant barriers exist that can limit or block access to this data altogether.
Successful IoT data acquisition starts and ends with reliable and scalable IoT connectivity. Selecting the right communications technology is paramount to the long-term success of your IoT project and various factors must be considered from the beginning to build a functional wireless infrastructure that can support and manage the influx of IoT data today and in the future.
Here are five IoT architecture must-haves for unlocking IoT data at scale.
1. Network Ownership
For many businesses, IoT data is one of their greatest assets, if not the most valuable. This intensifies the demand to protect the flow of data at all costs. With maximum data authority and architecture control, the adoption of privately managed networks is becoming prevalent across industrial verticals.
Beyond the undeniable benefits of data security and privacy, private networks give users more control over their deployment, with the flexibility to tailor coverage to the specific needs of their campus-style network. On a public network, users risk not having the reliable connectivity needed for indoor, underground and remote critical IoT applications. And since a private network is owned and operated by the user, there are no monthly access fees, data plans or subscription costs imposed by public operators, lowering the total cost of ownership. Private networks also provide full control over network availability and uptime to ensure users have reliable access to their data at all times.
2. Minimal Infrastructure Requirements
Since the number of end devices is largely dictated by your IoT use cases, choosing a wireless technology that requires minimal supporting infrastructure, such as base stations and repeaters, and minimal configuration and optimization is crucial to cost-effectively scaling your IoT network.
Wireless solutions with long range and excellent penetration capability, such as next-gen low-power wide area networks, require fewer base stations to cover vast, structurally dense industrial or commercial campuses. Likewise, a robust radio link and large network capacity allow an individual base station to effectively support massive numbers of sensors without compromising performance, ensuring a continuous flow of IoT data today and in the future.
3. Network and Device Management
As IoT initiatives move beyond proofs-of-concept, businesses need an effective and secure approach to operate, control and expand their IoT network with minimal costs and complexity.
As IoT deployments scale to hundreds or even thousands of geographically dispersed nodes, a manual approach to connecting, configuring and troubleshooting devices is inefficient and expensive. Likewise, by leaving devices completely unattended, users risk losing business-critical IoT data when it’s needed the most. A network and device management platform provides a single-pane, top-down view of all network traffic, registered nodes and their status for streamlined network monitoring and troubleshooting. Likewise, it acts as the bridge between the edge network and users’ downstream data servers and enterprise applications so users can streamline management of their entire IoT project from device to dashboard.
4. Legacy System Integration
Most traditional assets, machines, and facilities were not designed for IoT connectivity, creating huge data silos. This leaves companies with two choices: building entirely new, greenfield plants with native IoT technologies or updating brownfield facilities for IoT connectivity. Highly integrable, plug-and-play IoT connectivity is key to streamlining the costs and complexity of an IoT deployment. Businesses need a solution that can bridge the gap between legacy OT and IT systems to unlock new layers of data that were previously inaccessible. Wireless IoT connectivity must be able to easily retrofit existing assets and equipment without complex hardware modifications and production downtime. Likewise, it must enable straightforward data transfer to the existing IT infrastructure and business applications for data management, visualization and machine learning.
5. Interoperability
Each IoT system is a mashup of diverse components and technologies. This makes interoperability a prerequisite for IoT scalability, to avoid being saddled with an obsolete system that fails to keep pace with new innovation later on. By designing an interoperable architecture from the beginning, you can avoid fragmentation and reduce the integration costs of your IoT project in the long run.
Today, technology standards exist to foster horizontal interoperability by fueling global cross-vendor support through robust, transparent and consistent technology specifications. For example, a standard-based wireless protocol allows you to benefit from a growing portfolio of off-the-shelf hardware across industry domains. When it comes to vertical interoperability, versatile APIs and open messaging protocols act as the glue to connect the edge network with a multitude of value-deriving backend applications. Leveraging these open interfaces, you can also scale your deployment across locations and seamlessly aggregate IoT data across premises.
IoT data is the lifeblood of business intelligence and competitive differentiation, and IoT connectivity is key to ensuring reliable and secure access to this data. When it comes to building a future-proof wireless architecture, it’s important to consider not only existing requirements, but also those that might pop up down the road. A wireless solution that offers data ownership, minimal infrastructure requirements, built-in network management, and integration and interoperability will not only ensure access to IoT data today, but provide cost-effective support for the influx of data and devices in the future.