
Adaptive systems and models at runtime refer to the ability of a system or model to dynamically adjust its behavior or parameters based on changing conditions and feedback during runtime. This allows the system or model to better adapt to its environment, improve its performance, and enhance its overall effectiveness.

Some technical details about adaptive systems and models at runtime include:

  1. Feedback loops: Adaptive systems and models rely on feedback loops to gather data and adjust their behavior. These feedback loops can be either explicit or implicit, and they typically involve collecting data from sensors or other sources, analyzing the data, and using it to make decisions about how to adjust the system or model.

  2. Machine learning algorithms: Machine learning algorithms are often used in adaptive systems and models to analyze feedback data and make predictions about future behavior. These algorithms can be supervised, unsupervised, or reinforcement learning-based, depending on the type of feedback data available and the desired outcomes.

  3. Parameter tuning: In adaptive systems and models, parameters are often adjusted dynamically to optimize performance. This can involve changing things like thresholds, time constants, or weighting factors based on feedback data.

  4. Self-organizing systems: Some adaptive systems and models are designed to be self-organizing, meaning that they can reconfigure themselves in response to changing conditions without requiring external input. Self-organizing systems typically use decentralized decision-making and distributed control to achieve their goals.

  5. Context awareness: Adaptive systems and models often incorporate context awareness, meaning that they can adapt their behavior based on situational factors like time of day, location, or user preferences. This requires the use of sensors and other data sources to gather information about the environment in real-time.
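The feedback-loop and parameter-tuning ideas above can be sketched in a few lines. The following is an illustrative example, not code from any specific framework: a threshold that retunes its own mean and variance estimates at runtime from incoming readings.

```python
# Illustrative sketch: an adaptive threshold that retunes itself at runtime
# using an exponentially weighted moving average (EWMA) of sensor feedback.
# All names and constants here are hypothetical.

class AdaptiveThreshold:
    def __init__(self, initial=50.0, alpha=0.1, margin=3.0):
        self.mean = initial      # running estimate of "normal"
        self.var = 0.0           # running variance estimate
        self.alpha = alpha       # learning rate of the feedback loop
        self.margin = margin     # number of deviations tolerated

    def update(self, reading):
        """Feedback loop: fold each new reading into the parameters."""
        diff = reading - self.mean
        self.mean += self.alpha * diff
        self.var = (1 - self.alpha) * (self.var + self.alpha * diff * diff)

    def is_anomalous(self, reading):
        """Decision step: compare a reading against the adapted threshold."""
        std = self.var ** 0.5
        return abs(reading - self.mean) > self.margin * max(std, 1e-9)

threshold = AdaptiveThreshold(initial=20.0)
for value in [20.1, 19.8, 20.3, 20.0, 19.9]:
    threshold.update(value)
print(threshold.is_anomalous(35.0))   # prints True
```

Because the mean and variance are re-estimated on every update, the same code adapts to drifting conditions without any externally supplied thresholds.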

Overall, adaptive systems and models at runtime are complex and dynamic, requiring sophisticated algorithms and techniques to function effectively. However, the benefits of these systems can be significant, including improved performance, increased flexibility, and better overall outcomes.

Read more…

IoT forensic science uses technical methods to solve problems related to the investigation of incidents involving IoT devices. Some of the technical ways that IoT forensic science solves problems include:

  1. Data Extraction and Analysis: IoT forensic science uses advanced software tools to extract data from IoT devices, such as logs, sensor readings, and network traffic. The data is then analyzed to identify relevant information, such as timestamps, geolocation, and device identifiers, which can be used to reconstruct events leading up to an incident.

  2. Reverse Engineering: IoT forensic science uses reverse engineering techniques to understand the underlying functionality of IoT devices. This involves analyzing the hardware and software components of the device to identify vulnerabilities, backdoors, and other features that may be relevant to an investigation.

  3. Forensic Imaging: IoT forensic science uses forensic imaging techniques to preserve the state of IoT devices and ensure that the data collected is admissible in court. This involves creating a complete copy of the device's storage and memory, which can then be analyzed without altering the original data.

  4. Cryptography and Data Security: IoT forensic science uses cryptography and data security techniques to ensure the integrity and confidentiality of data collected from IoT devices. This includes the use of encryption, digital signatures, and other security measures to protect data during storage, analysis, and transmission.

  5. Machine Learning: IoT forensic science uses machine learning algorithms to automate the analysis of large amounts of data generated by IoT devices. This can help investigators identify patterns and anomalies that may be relevant to an investigation.

IoT forensic science draws on many more, and more advanced, technical methods to investigate incidents involving IoT devices. By leveraging these techniques, investigators can collect, analyze, and present digital evidence from IoT devices that can be used to reconstruct events and support legal proceedings.

Read more…

Voice-Enabled IoT Applications

The Internet of Things (IoT) has transformed the way we interact with technology. With the rise of voice assistants such as Alexa, Siri, and Google Assistant, voice-enabled IoT applications have become increasingly popular in recent years. Voice-enabled IoT applications have the potential to revolutionize the way we interact with our homes, workplaces, and even our cars. In this article, we will explore the benefits and challenges of voice-enabled IoT applications and their potential for the future.

Voice-enabled IoT applications allow users to control various smart devices using their voice. These devices include smart speakers, smart TVs, smart thermostats, and smart lights, to name a few. By using voice commands, users can turn on the lights, adjust the temperature, play music, and even order food without having to touch any buttons or screens. This hands-free approach has made voice-enabled IoT applications popular among users of all ages, from children to seniors.
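To make the flow concrete, here is a toy sketch of how a transcribed voice command might be mapped to a smart-device action. The phrases and device names are hypothetical; real assistants use natural-language-understanding models rather than exact string matching.

```python
# Illustrative sketch: mapping transcribed voice commands to device actions.
# Command phrases and device names are made-up examples.

def handle_command(text):
    text = text.lower().strip()
    if "turn on the lights" in text:
        return {"device": "lights", "action": "on"}
    if "turn off the lights" in text:
        return {"device": "lights", "action": "off"}
    if "set temperature to" in text:
        degrees = int(text.rsplit(" ", 1)[-1])   # last token is the value
        return {"device": "thermostat", "action": "set", "value": degrees}
    return {"action": "unknown"}

print(handle_command("Turn on the lights"))
print(handle_command("Set temperature to 72"))
```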

One of the significant benefits of voice-enabled IoT applications is their convenience. With voice commands, users can control their smart devices while they are doing other tasks, such as cooking, cleaning, or exercising. This allows for a more seamless and efficient experience, without having to interrupt the task at hand. Additionally, voice-enabled IoT applications can be customized to suit individual preferences, allowing for a more personalized experience.

Another significant benefit of voice-enabled IoT applications is their potential for accessibility. For people with disabilities, voice-enabled IoT applications can provide an easier and more natural way to interact with their devices. By using their voice, people with limited mobility or vision can control their devices without having to rely on buttons or screens. This can improve their quality of life and independence.

However, there are also challenges associated with voice-enabled IoT applications. One of the significant challenges is privacy and security. As voice-enabled IoT applications are always listening for voice commands, they can potentially record and store sensitive information. Therefore, it is crucial for developers to implement strong security measures to protect users' privacy and prevent unauthorized access.

Another challenge is the potential for misinterpretation of voice commands. Accidental triggers or misinterpretation of voice commands can result in unintended actions, which can be frustrating for users. Additionally, voice-enabled IoT applications can struggle to understand certain accents, dialects, or languages, which can limit their accessibility to non-native speakers.

Despite these challenges, the potential for voice-enabled IoT applications is vast. In addition to smart homes, voice-enabled IoT applications can be used in a wide range of industries, including healthcare, retail, and transportation. In healthcare, voice-enabled IoT applications can be used to monitor patients' health conditions and provide real-time feedback. In retail, voice-enabled IoT applications can provide personalized shopping experiences and assist with inventory management. In transportation, voice-enabled IoT applications can be used to provide real-time traffic updates and navigation.

In conclusion, voice-enabled IoT applications have become increasingly popular in recent years, providing a more convenient and accessible way for users to interact with their devices. While there are challenges associated with them, their potential for revolutionizing various industries is vast. As technology continues to evolve, the future of voice-enabled IoT applications is sure to be exciting and full of potential.

Read more…

Wireless Sensor Networks and IoT

We all know how IoT has revolutionized the way we interact with the world. IoT devices are now ubiquitous, from smart homes to industrial applications. A significant portion of these devices are Wireless Sensor Networks (WSNs), which are a key component of IoT systems. However, designing and implementing WSNs presents several challenges for embedded engineers. In this article, we discuss some of the significant challenges that embedded engineers face when working with WSNs.

WSNs are a network of small, low-cost, low-power, and wirelessly connected sensor nodes that can sense, process, and transmit data. These networks can be used in a wide range of applications such as environmental monitoring, healthcare, industrial automation, and smart cities. WSNs are typically composed of a large number of nodes, which communicate with each other to gather and exchange data. The nodes are equipped with sensors, microprocessors, transceivers, and power sources. The nodes can also be stationary or mobile, depending on the application.

One of the significant challenges of designing WSNs is the limited resources of the nodes. WSNs are designed to be low-cost, low-power, and small, which means that the nodes have limited processing power, memory, and energy. This constraint limits the functionality and performance of the nodes. Embedded engineers must design WSNs that can operate efficiently with limited resources. The nodes should be able to perform their tasks while consuming minimal power to maximize their lifetime.

Another challenge of WSNs is the limited communication range. The nodes communicate with each other using wireless radio signals. However, the range of the radio signals is limited, especially in indoor environments where the signals are attenuated by walls and other obstacles. The communication range also depends on the transmission power of the nodes, which is limited to conserve energy. Therefore, embedded engineers must design WSNs that can operate reliably in environments with limited communication range.

WSNs also present a significant challenge for embedded engineers in terms of data management. WSNs generate large volumes of data that need to be collected, processed, and stored. However, the nodes have limited storage capacity, and transferring data to a centralized location may not be practical due to the limited communication range. Therefore, embedded engineers must design WSNs that can perform distributed data processing and storage. The nodes should be able to process and store data locally and transmit only the relevant information to a centralized location.
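The local-processing idea above can be sketched briefly: aggregate on the node and transmit only summaries or out-of-range alerts, saving radio energy. Names and thresholds here are hypothetical.

```python
# Illustrative sketch of node-local processing in a WSN: only a compact
# summary and the anomalous raw values leave the node.

def process_locally(readings, low=10.0, high=40.0):
    """Summarize a window of sensor readings on the node itself."""
    alerts = [r for r in readings if r < low or r > high]
    return {
        "count": len(readings),
        "mean": sum(readings) / len(readings),
        "alerts": alerts,   # only out-of-range raw values are transmitted
    }

window = [21.2, 22.0, 21.8, 55.3, 21.9]   # one out-of-range spike
packet = process_locally(window)
print(packet["alerts"])   # [55.3]
```

Sending this small packet instead of every raw reading is one way to work within the limited storage, bandwidth, and energy budgets described above.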

Security is another significant challenge for WSNs. The nodes in WSNs are typically deployed in open and unprotected environments, making them vulnerable to physical and cyber-attacks. The nodes may also contain sensitive data, making them an attractive target for attackers. Embedded engineers must design WSNs with robust security features that can protect the nodes and the data they contain from unauthorized access.

The deployment and maintenance of WSNs present challenges for embedded engineers. WSNs are often deployed in harsh and remote environments, making it difficult to access and maintain the nodes. The nodes may also need to be replaced periodically due to the limited lifetime of the power sources. Therefore, embedded engineers must design WSNs that are easy to deploy, maintain, and replace. The nodes should be designed for easy installation and removal, and the network should be self-healing to recover from node failures automatically.

Final thought: WSNs present significant challenges for embedded engineers, including limited resources, communication range, data management, security, and deployment and maintenance. Addressing these challenges requires innovative design approaches that maximize the performance and efficiency of WSNs while minimizing their cost and complexity. Embedded engineers must design WSNs that operate efficiently with limited resources, perform distributed data processing and storage, provide robust security features, and are easy to deploy and maintain.

Read more…

IoT is disrupting almost every industry sector including communications. As power consumption has become a challenge for IoT devices, cellular IoT has introduced some standards that are cutting-edge. Let’s take a look at those standards and their device categories.

Remember the days when the “E” icon on the notification bar of our phones used to make us excited? 

Well, if we compare that to today, technology has skyrocketed like anything. It was just a matter of time before that E icon turned to 4G LTE.

Today, there are billions of devices that run on the 4G network providing lightning-fast internet to the users. And it does not end here. The wave of 5G is ready to take on the world. Though some countries have already deployed 5G, it is yet to conquer the entire world.

Now, IoT is not a buzzword anymore. It is an awesome technology that connects various internet-enabled devices and is known to everybody. The use of IoT allows devices to share data at a faster pace. But, there is one challenge!

As these devices are connected to cellular networks like 3G and 4G LTE, they consume a lot of power. In a way, it is acceptable, but not if the devices are sending a small amount of data occasionally. So what’s the solution here? Cellular IoT!

Cellular IoT deals with some of the best IoT standards and devices that make existing cellular technology fit for low-powered devices. If you are interested to know how, read ahead and find out!

Why are IoT LTE devices necessary?

Well, the need for IoT devices comes into the picture when we analyze applications like predictive maintenance, asset tracking, fleet management, inventory management, remote service, etc.

All these applications are backed by powerful yet sensitive devices that transmit data to ensure that all your business processes are running fine. LTE is the technology that helps them. IoT devices under LTE can be classified based on the LTE standards!

LTE-M/ Cat-M1:

This standard covers devices that run under the bandwidth of 1.4 MHz. Most of the devices under the standard are smart meters, fleet management devices, and asset tracking devices.

Cat-1:

The operating bandwidth of Cat-1 devices is 20 MHz which allows for devices like ATMs, POS terminals, and wearables to operate.

Cat-4:

The devices under Cat-4 have the maximum download and upload speed, which makes them ideal for applications like autonomous vehicles, real-time video, and in-car infotainment.

NB-IoT/ Cat-NB1:

The IoT LTE devices under NB-IoT tolerate the highest latency, which makes them well suited for applications like parking sensors, street lighting, industrial monitors, and more.

What are the various IoT LTE devices categories?

Well, if we talk about the device categories, IoT LTE devices can be classified into four categories based on cellular IoT standards. The newest of these four standards are LTE-M and NB-IoT.


Let’s read ahead and find out about the IoT LTE device categories!

 

1. LTE-M/ Cat-M1

Let’s begin with the LTE-M standard. The LTE-M standard is an excellent discovery that is ideal for devices that require less power and less bandwidth. Here are some key pointers related to the device categories of LTE-M!

  • Devices based on the LTE-M standard have upload and download speeds of 1 Mbps.
  • Latency for LTE-M devices is 10-15 milliseconds, which is enough to ensure that the required data is transmitted at regular intervals.
  • The bandwidth of LTE-M is sufficient for devices to serve applications currently running on 2G and 3G.
  • The best feature of the LTE-M standard is seamless handoff, which makes it ideal for applications like asset tracking and fleet management where devices are on the move.
  • Cat-M1 was created as an integral part of Release 13 of the 3GPP’s LTE standards.

2. Cat-1

Apart from the above-described device categories, Cat-1 is a category that is a part of Release 8 of the 3GPP standard. Though it is a part of the old technology, it is still widely used across the globe. Here are some features of the Cat-1!

  • The Cat-1 standard is made for IoT device categories that have low and medium bandwidth needs.
  • Cat-1 devices are faster than LTE-M ones: the upload speed is 5 Mbps and the download speed is 10 Mbps.
  • Cat-1 also offers lower latency, at just 50-100 milliseconds.
  • The Cat-1 standard uses a large bandwidth of 20 MHz in full duplex. The full-duplex capability allows for smooth handoff, making it ideal for wearables, ATMs, POS terminals, etc.

3. Cat-4

The Cat-4 standard is built to support demanding applications like autonomous cars. Devices in this standard are far faster than Cat-1, with upload speeds of 50 Mbps and download speeds of 150 Mbps.

The best advantage of the Cat-4 standard is that it supports in-car infotainment, in-car hotspots, and video surveillance.

4. NB-IoT/ Cat-NB1

After the LTE-M, there is NB-IoT or Cat-NB1 standard. Just like LTE-M, there are many aspects that make it a bit different and unique. Here are some key pointers about the devices supporting the NB1 standard.

  • This low-cost technology uses DSSS modulation rather than LTE's spread-spectrum technology to ensure connectivity.
  • The cost factor of the technology is not the only USP. The devices that come under Cat-NB1 have less power consumption, offer excellent in-building coverage, and have longer battery life.
  • Upload and download speeds in the NB-IoT device category are lower than LTE-M's: 66 kbps up and 26 kbps down, in half-duplex mode.
  • NB-IoT latency is also higher than LTE-M's, ranging from 1.6 to 10 seconds. Though that seems like a lot, it is well suited to small, intermittent data transmissions.
  • NB-IoT is also part of Release 13 of the 3GPP’s LTE standard. It is an LPWAN technology that works on a licensed spectrum.
  • The devices that come under this standard include smart gas meters, street lights, parking sensors, etc.

Other than these device and standard categories, there are two more standards:

5. Cat-0

As there is a need for low-cost devices and processes, Cat-0 lays the groundwork for them by eliminating the Cat-1 features that require a high data rate. On top of that, Cat-0 is slowly paving the way for Cat-M by replacing 2G.

6. EC-GSM

This standard does not have as much buzz as LTE-M and NB-IoT, but it has been tested by brands like Ericsson and Intel for practicality and modularity.

Why Do We Need To Care?

Well, if you are a cellular carrier service provider, you have to care about it. There are many factors that need to be considered while choosing the IoT LTE device category. Here is a brief elaboration of some of the critical ones!


1. Power consumption:

Of all the IoT LTE devices listed above, those under Cat-4 consume the most power, followed by the devices under Cat-1. Cat-M1 and NB-IoT devices have the lowest power consumption.

2. Battery life:

Battery life is the key factor if devices are placed in remote locations like agricultural fields. If you are choosing LTE IoT devices for such cases, go for devices under the Cat-M1 and NB-IoT standards.

3. Cost:

If cost is your concern, then again, Cat-M1 and NB-IoT are the ideal picks for you. They are best for high-volume device applications; devices under Cat-1 and Cat-4 are pricier.

4. Adoption:

LTE-M and NB-IoT are quickly being adopted by carrier service providers across the globe.

5. Latency:

Latency is the highest in NB-IoT, which makes it ideal for applications that do not need to send continuous data. LTE-M is a bit faster than NB-IoT. Cat-4 is the fastest, which makes it ideal for video applications.
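The trade-offs above can be sketched as a simple lookup that shortlists a category from rough requirements. The figures are the approximate ones quoted in this article; a real selection would consult carrier and chipset specifications.

```python
# Illustrative sketch: shortlisting an LTE device category from rough
# requirements. Numbers are approximate figures from this article;
# the Cat-4 latency is a hypothetical placeholder.

CATEGORIES = {
    "Cat-M1": {"down_mbps": 1.0,   "power": "low",    "latency_ms": 15},
    "Cat-1":  {"down_mbps": 10.0,  "power": "medium", "latency_ms": 100},
    "Cat-4":  {"down_mbps": 150.0, "power": "high",   "latency_ms": 50},
    "NB-IoT": {"down_mbps": 0.026, "power": "low",    "latency_ms": 10000},
}

def shortlist(min_down_mbps=0.0, max_latency_ms=float("inf"), low_power=False):
    """Return the categories that satisfy every stated requirement."""
    return [
        name for name, spec in CATEGORIES.items()
        if spec["down_mbps"] >= min_down_mbps
        and spec["latency_ms"] <= max_latency_ms
        and (not low_power or spec["power"] == "low")
    ]

# A battery-powered asset tracker: low power, modest speed, tolerant latency.
print(shortlist(min_down_mbps=0.5, low_power=True))   # ['Cat-M1']
```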

Conclusion

So, now we are clear about which types of IoT devices fall under each LTE standard. LTE-M and NB-IoT are being adopted quickly because they are low cost, consume less power, and offer maximum battery life. To make an informed choice, analyze each aspect closely. As of now, carrier companies lean toward NB-IoT and LTE-M because they serve a vast range of applications while staying balanced in all aspects.

Read more…

The concepts of AI and ML have been widely discussed, and they are inspiring many entrepreneurs to invest in such products. This wave of investment brings a huge opportunity for young people to learn these concepts and land a full-time job in the field.

To help you get there, we present the most important concepts to learn before getting started with AI and machine learning. But first, let's review what artificial intelligence and machine learning are.

About AI

AI stands for artificial intelligence, technology that is expected to replace much of today's manual workforce with greater accuracy. The two main branches of artificial intelligence are machine learning and deep learning. Both are used to predict an outcome based on the data available. Many big organizations have successfully applied these concepts over the years in the finance, gaming, technology, and image-processing industries. In a typical machine learning product, you train a model with a large amount of data; once the model is fully trained, it can make decisions on its own. A deep learning model is trained on a huge amount of data, while a machine learning model typically uses comparatively less data but can be equally effective.

Below are the key concepts that you should learn before diving into other AI/ML topics.

1) Binary conversion

A binary number is a base-2 number composed only of zeros and ones. Advanced and traditional electronic systems alike understand only binary numbers; they cannot process any other representation directly. For that reason, it is important for a software engineer to learn and understand binary conversion.

Along with binary, you can also learn other numbering systems such as hexadecimal, decimal, and octal. Of these, decimal is the easiest to understand. Most software that involves human interaction presents numbers in decimal, while the underlying hardware works in binary, so being comfortable converting between the two is valuable.
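As a quick sketch, Python's built-ins cover the conversions discussed above:

```python
# Converting between decimal, binary, and other bases with Python built-ins.

decimal_value = 42
binary_string = bin(decimal_value)   # decimal -> binary text
print(binary_string)                 # 0b101010

round_trip = int("101010", 2)        # binary text -> decimal
print(round_trip)                    # 42

print(hex(255), oct(64))             # other bases: 0xff 0o100
```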

2) Programming

Most machine learning and deep learning apps are written in the Python programming language. Although Python offers many ready-made libraries and frameworks, you need basic programming skills to apply them to your project. You can start with the C language and then move on to Python.

3) Probability in maths

Math concepts are less critical for general programming, but they are very helpful in machine learning and deep learning, especially probability. In deep learning you deal with a lot of data, and searching through it efficiently often relies on tools built on probabilistic techniques.

So, these are the top three concepts to learn before developing your first AI/ML app and landing a great job in the field. Do share your thoughts on this subject.

Read more…
An AI based approach increases accuracy and can even make the impossible possible.
 
What is an Outlier?
 
Put simply, an outlier is a piece of data or observation that differs drastically from a given norm.
 
In the image above, the red fish is an outlier: it clearly differs by color, but also by size, shape, and, most obviously, direction. As such, the analysis of detecting outliers in data falls into two categories: univariate and multivariate.
  • Univariate: considering a single variable
  • Multivariate: considering multiple variables
 
Outlier Detection in Industrial IoT
 
In Industrial IoT use cases, outlier detection can be instrumental in specific use cases such as understanding the health of your machine. Instead of looking at characteristics of a fish like above, we are looking at characteristics of a machine via data such as sensor readings.
 
The goal is to learn what normal operation looks like where outliers are abnormal activity indicative of a future problem.
 
Statistical Approach to Outlier Detection
Statistics - Normal Distribution 
Statistical and probability-based approaches date back centuries. You may recall the bell curve: the values of your dataset form a distribution. In simplest terms, you calculate the mean and standard deviation of that distribution, plot the location of x standard deviations from the mean, and treat anything that falls beyond that as an outlier.
 
A simple example to explore using this approach is outside air temperature. Looking at the low temperature in Boston for the month of January from 2008-2018 we find an average temperature of ~23 degrees F with a standard deviation of ~9.62 degrees. Plotting out 2 standard deviations results in the following.
 
 
[Chart: Boston January low temperatures, with lines at ±2 standard deviations from the mean]
 
 
Interpreting the chart above, any temperature above the gray line or below the yellow can be considered outside the range of normal...or an outlier.
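The statistical approach described above fits in a few lines. The temperatures below are made-up January lows for illustration, not the actual Boston dataset:

```python
# Illustrative sketch of z-score-style outlier detection: flag anything
# beyond 2 standard deviations from the mean. Data is made up.
import statistics

temps = [23, 25, 18, 30, 21, 24, 19, 22, -5, 26]   # one suspicious reading

mean = statistics.mean(temps)
std = statistics.pstdev(temps)                      # population std dev
low, high = mean - 2 * std, mean + 2 * std

outliers = [t for t in temps if t < low or t > high]
print(outliers)   # [-5]
```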
 
Why do we need AI?
If we just showed that you can determine outliers using simple statistics, then why do we need AI at all? The answer depends on the type of outlier analysis.
 
Why AI for Univariate Analysis?
In the example above, we successfully analyzed outliers in weather looking at a single variable: temperature.
 
So, why should we complicate things by introducing AI to the equation? The answer has to do with the distribution of your data. You can run univariate analysis using statistical measures, but in order for the results to be accurate, it is assumed that the distribution of your data is "normal". In other words, it needs to fit to the shape of a bell curve (like the left image below).
 
However, in the real world, and specifically in industrial use cases, the resulting sensor data is not perfectly normal (like the right image below).
[Chart: a normal distribution (left) vs. a non-normal distribution (right)]
As a result, statistical analysis on a non-normal dataset would result in more false positives and false negatives.
 
The Need for AI
AI-based methods, on the other hand, do not require a normal distribution and find patterns in the data, yielding much higher accuracy. In the case of the weather in Boston, getting the forecast slightly wrong does not have a huge impact. However, in industries such as rail, oil and gas, and industrial equipment, trust in the accuracy of your results has a long-lasting impact, and that level of accuracy calls for AI.
 
Why AI for Multivariate Analysis?
The case for AI in multivariate analysis is more straightforward. When we are looking at a single variable, we can easily plot the results on a plane, as in the temperature chart or the normal and non-normal distribution charts above.
 
However, if we are analyzing multiple points, such as the current, voltage, and wattage of a motor, vibration over 3 axes, or the return and discharge temperatures of an HVAC system, plotting and analyzing with statistics has its limitations. Just visualizing the plot becomes impossible for a human as we go from a single plane to hyperplanes.
 
[Diagram: hyperplane arrangements in higher dimensions]
 
The Need for AI
For multivariate analysis, visual inspection starts to go beyond human capabilities while technical analysis goes beyond statistical capabilities. Instead, AI can be utilized to find patterns in the underlying data in order to learn normal operation and adequately monitor for outliers. In other words, for multivariate analysis AI starts to make the impossible possible.
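As a minimal, distribution-free illustration of learning "normal" from multivariate data, consider scoring a new point by its distance to the k-th nearest neighbour among normal training points. This is a toy stand-in for the AI methods discussed above, with made-up sensor values; production systems use far richer models.

```python
# Toy multivariate outlier score: distance to the k-th nearest neighbour
# among points recorded during normal operation. No normality assumption.
import math

def knn_outlier_score(point, train, k=3):
    """Larger score = farther from learned normal operation."""
    dists = sorted(math.dist(point, p) for p in train)
    return dists[k - 1]

# Hypothetical (current, voltage) readings during normal motor operation.
normal = [(5.0, 230.0), (5.1, 229.5), (4.9, 230.5), (5.2, 230.2), (5.0, 229.8)]

print(knn_outlier_score((5.05, 230.0), normal))  # small: near the normal cluster
print(knn_outlier_score((9.0, 210.0), normal))   # large: likely an outlier
```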
 
Summary
Statistics and probability have been around far longer than anyone reading this post. However, not all data is created equal, and in the world of industrial IoT, statistical techniques have crucial limitations.
 
AI-based techniques go beyond these limitations, helping to reduce false positives and negatives and often making robust analysis possible for the first time.
 
At Elipsa, we build simple, fast and flexible AI for IoT. Get free access to our Community Edition to start integrating machine learning into your applications.
 
Read more…

By Bee Hayes-Thakore

The Android Ready SE Alliance, announced by Google on March 25th, paves the way for tamper-resistant, hardware-backed security services. Kigen is bringing the first secure iSIM OS, along with our GSMA-certified eSIM OS and personalization services, to support fast adoption of emerging security services across smartphones, tablets, WearOS, Android Auto Embedded and Android TV.

Google has been advancing its investment in how tamper-resistant secure hardware modules can protect not only Android and its functionality but also third-party apps and sensitive transactions. The latest Android smartphones enable tamper-resistant key storage for Android apps using StrongBox, an implementation of the hardware-backed Keystore that resides in a hardware security module.

To accelerate adoption of new Android use cases with stronger security, Google announced the formation of the Android Ready SE Alliance. Secure Element (SE) vendors are joining hands with Google to create a set of open-source, validated, and ready-to-use SE Applets. On March 25th, Google launched the General Availability (GA) version of StrongBox for SE.


Hardware based security modules are becoming a mainstay of the mobile world. Juniper Research’s latest eSIM research, eSIMs: Sector Analysis, Emerging Opportunities & Market Forecasts 2021-2025, independently assessed eSIM adoption and demand in the consumer sector, industrial sector, and public sector, and predicts that the consumer sector will account for 94% of global eSIM installations by 2025. It anticipates that established adoption of eSIM frameworks from consumer device vendors such as Google, will accelerate the growth of eSIMs in consumer devices ahead of the industrial and public sectors.


Consumer sector will account for 94% of global eSIM installations by 2025

Juniper Research, 2021.

Expanding the secure architecture of trust to consumer wearables, smart TV and smart car

What's more, a major development is that this is no longer just for smartphones and tablets but also applies to WearOS, Android Auto Embedded and Android TV. These less traditional form factors have huge potential beyond being purely companion devices to smartphones or tablets. With the power, size and performance benefits offered by Kigen's iSIM OS, OEMs and chipset vendors can consider the full scope of the vast Android ecosystem to deliver new services.

This means new secure services and innovations around:

🔐 Digital keys (car, home, office)

🛂 Mobile Driver’s License (mDL), National ID, ePassports

🏧 eMoney solutions (for example, Wallet)

How is Kigen supporting Google’s Android Ready SE Alliance?

The alliance was created to make discrete, tamper-resistant, hardware-backed security the lowest common denominator for the Android ecosystem. A major goal of this alliance is to enable consistent, interoperable, and demonstrably secure applets across the Android ecosystem.

Kigen believes that enabling the broadest choice and interoperability is fundamental to the architecture of digital trust. Our secure, standards-compliant eSIM and iSIM OS, and secure personalization services are available to all chipset or device partners in the Android Ready SE Alliance to leverage the benefits of iSIM for customer-centric innovations for billions of Android users quickly.

Vincent Korstanje, CEO of Kigen

Kigen’s support for the Android Ready SE Alliance will allow our industry partners to easily leapfrog to the enhanced security and power efficiency benefits of iSIM technology or choose a seamless transition from embedded SIM so they can focus on their innovation.

We are delighted to partner with Kigen to further strengthen the security of Android through StrongBox via Secure Element (SE). We look forward to widespread adoption by our OEM partners and developers and the entire Android ecosystem.

Sudhi Herle, Director of Android Platform Security 

In the near term, the Google team is prioritizing and delivering the following Applets in conjunction with corresponding Android feature releases:

  • Mobile driver’s license and Identity Credentials
  • Digital car keys

Kigen brings the ability to bridge physical embedded security hardware to a fully integrated form factor. Our standards-compliant Kigen eSIM OS (version 2.2 eUICC OS) is available to support chipsets and device makers now. This announcement is the start of a whole host of new and exciting trusted services offering a better experience for Android users.

Kigen’s eSIM (eUICC) OS brings


The smallest operating system, allowing OEMs to select compact, cost-effective hardware to run it on.

Kigen OS offers the highest level of logical security when employed on any SIM form factor, including a secure enclave.

On top of Kigen OS, we have a broad portfolio of Java Card™ Applets to support your needs for the Android SE Ready Alliance.

Kigen’s Integrated SIM or iSIM (iUICC) OS furthers this advantage


Integrated at the heart of the device and securely personalized, iSIM brings significant size and battery life benefits to cellular IoT devices. iSIM can act as a root of trust for payment, identity, and critical infrastructure applications.

Kigen’s iSIM is flexible enough to support dual SIM capability through a single profile or through remote SIM provisioning mechanisms, with the latter enabling out-of-the-box connectivity and secure, remote profile management.

For smartphones, set-top boxes, Android Auto applications, car displays, Chromecast, or Google Assistant-enabled devices, iSIM can offer significant benefits for incorporating artificial intelligence at the edge.

Kigen’s secure personalization services to support fast adoption

SIM vendors have in-house capabilities for data generation, but the eSIM and iSIM value chains redistribute many roles and responsibilities among new stakeholders for the personalization of operator credentials, whether at different stages of production or over-the-air once devices are deployed.

Kigen can offer data generation as a service to vendors new to the ecosystem.

Partner with us to provide cellular chipset and module makers with the strongest security and performance for integrated SIM, accelerating these new use cases.

Security considerations for eSIM and iSIM enabled secure connected services

Designing a secure connected product requires considerable thought and planning and there really is no ‘one-size-fits-all’ solution. How security should be implemented draws upon a multitude of factors, including:

  • What data is being stored or transmitted between the device and other connected apps?
  • Are there regulatory requirements for the device? (i.e. PCI DSS, HIPAA, FDA, etc.)
  • What are the hardware or design limitations that will affect security implementation?
  • Will the devices be manufactured in a site accredited by all of the necessary industry bodies?
  • What is the expected lifespan of the device?

End-to-end ecosystem and services thinking needs to be a design consideration from the very early stage especially when considering the strain on battery consumption in devices such as wearables, smart watches and fitness devices as well as portable devices that are part of the connected consumer vehicles.

Originally posted here.

Read more…

As the app development industry continues to be a driving force in the global market and IoT becomes increasingly prevalent, more and more businesses are turning to React Native, and for good reason: it improves app efficiency and developer productivity, and the resulting code is stable.

Just a recap on technology concepts:
React JS is an open-source JavaScript library created by Facebook. It allows the inclusion of interactive elements and stores all the data required to build stable user interfaces for mobile or web applications. IoT, or the Internet of Things, on the other hand, is an ecosystem that connects a variety of devices over the internet. It gives each of these machines/devices unique identifiers, or UIDs, which ease data transfer. React Developer Tools can help you work with both.

Here are some other benefits of using React Native for the development of IoT apps.
1. Stability of code: React Native’s information flow moves downward regardless of changes or updates to the structure, so the programmer only needs to adjust the component state before making changes; once that is done, only the specific segments that changed are updated. For IoT apps, this helps developers write a solid codebase and ensure it executes seamlessly.
2. Extensive collection of tools: A crucial advantage React Native offers in this context is its collection of tools aimed at building top-notch front ends. The library of tools developers gain access to with React Native is not only free but also comes in handy at various points throughout the development process.
3. Individual components: React Native is built from individual components, which means issues with one component do not affect the other components in the app. For IoT apps, this segregation means the receipt of data and the processing of that data remain distinct, which comes in handy when the developer wants to endow the IoT app with advanced controls and functionalities.
4. Structural advantage: Since React Native is based on a composition model, the code a developer writes for a React Native app is inherently organized. What does that have to do with IoT? Since IoT often relies on older, larger, and more complicated models, React Native’s composition structure offers scope to streamline the application development process.

Now let us also take a look at some of the limitations of React Native in this context.
1. High level of development expertise: There is no doubt that React Native is among the foremost options for UI frameworks. Unfortunately, using it also demands extensive expertise, especially when developing intricate apps or complex functionalities, or when requests must switch from native code to JavaScript and vice versa.
2. Abstraction layer: To make React Native’s functionalities work seamlessly with native apps, native OS platforms get an abstraction layer. Given the role this layer plays, any errors in it can surface as errors across the entire app.
3. Third-party library dependency: While React Native enables the development of top-notch mobile apps, developers often find themselves needing many third-party libraries for native platforms when using it to develop an app.

No doubt React Native comes with its own set of limitations, as practically any framework would. That said, this open-source UI development framework offers an impressive array of benefits for anyone seeking to develop robust and dynamic apps for a variety of platforms. So go ahead and start looking for a trusted service provider for React Native mobile app development.

Read more…

In my last post, I explored how OTA updates are typically performed using Amazon Web Services and FreeRTOS. OTA updates are critically important to developers with connected devices. In today’s post, we are going to explore several best practices developers should keep in mind with implementing their OTA solution. Most of these will be generic although I will point out a few AWS specific best practices.

Best Practice #1 – Name your S3 bucket with afr-ota

There is a little trick with creating S3 buckets that I was completely oblivious to for a long time. Thankfully, when I checked in with some colleagues about it, they also had not been aware of it, so I’m not sure how long this has been supported. It can save an embedded developer from having to wade through too many AWS policies and simplify the process a little bit.

Anyone who has attempted to create an OTA update with AWS and FreeRTOS knows that you have to set up several permissions to allow an OTA Update Job to access the S3 bucket. Well, if you name your S3 bucket so that it begins with “afr-ota”, then the S3 bucket will automatically have the AWS managed policy AmazonFreeRTOSOTAUpdate attached to it. (See Create an OTA Update service role for more details.) It’s a small help, but a good best practice worth knowing.

Best Practice #2 – Encrypt your firmware updates

Embedded software must be one of the most expensive things to develop that mankind has ever invented! It is time consuming to create and test and can consume a large percentage of the development budget. Software, though, also drives most features in a product and can dramatically differentiate a product. That software is intellectual property that is worth protecting through encryption.

Encrypting a firmware image provides several benefits. First, it converts your firmware binary into a form that appears random or meaningless. This is desirable because a developer shouldn’t want their binary image to be easily studied, investigated, or reverse engineered. It makes it harder for someone to steal intellectual property and more difficult to understand for someone who may be interested in attacking the system. Second, encrypting the image means that the sender must have a key or credential of some sort that matches the device that will decrypt the image. This can be looked at as a simple means of helping to authenticate the source, although more should be done than encryption alone to fully authenticate and verify integrity, such as signing the image.
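As a minimal sketch of the authentication side of this idea, the snippet below verifies a firmware image against an HMAC-SHA256 tag using only Python's standard library. This is an illustration, not a production design: real OTA pipelines typically use asymmetric signatures (e.g. ECDSA) plus encryption of the image, and the shared key here is a stand-in for properly provisioned credentials.

```python
import hashlib
import hmac

def verify_firmware(image: bytes, tag: bytes, key: bytes) -> bool:
    """Recompute the HMAC-SHA256 tag over the image and compare in constant time."""
    expected = hmac.new(key, image, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)

# Hypothetical values for illustration only.
key = b"device-shared-key"            # provisioned securely at manufacture
image = b"\x00\x01\x02firmware-blob"  # the (encrypted) firmware payload
tag = hmac.new(key, image, hashlib.sha256).digest()  # produced by the build server

assert verify_firmware(image, tag, key)              # untampered image passes
assert not verify_firmware(image + b"!", tag, key)   # modified image fails
```

The constant-time comparison (`hmac.compare_digest`) matters even in this toy form: a naive `==` on digests can leak timing information to an attacker probing the update endpoint.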

Best Practice #3 – Do not support firmware rollbacks

There is often a debate as to whether firmware rollbacks should be supported in a system or not. My recommendation for a best practice is that firmware rollbacks be disabled. The argument for rollbacks is often that if something goes wrong with a firmware update then the user can rollback to an older version that was working. This seems like a good idea at first, but it can be a vulnerability source in a system. For example, let’s say that version 1.7 had a bug in the system that allowed remote attackers to access the system. A new firmware version, 1.8, fixes this flaw. A customer updates their firmware to version 1.8, but an attacker knows that if they can force the system back to 1.7, they can own the system. Firmware rollbacks seem like a convenient and good idea, in fact I’m sure in the past I used to recommend them as a best practice. However, in today’s connected world where we perform OTA updates, firmware rollbacks are a vulnerability so disable them to protect your users.
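The anti-rollback rule from the 1.7/1.8 example above can be sketched in a few lines. On a real device the current version would live in a monotonic counter in secure storage, not a Python variable, but the comparison logic is the same:

```python
def accept_update(current_version: tuple, new_version: tuple) -> bool:
    """Accept only images that are strictly newer than what is running (anti-rollback)."""
    return new_version > current_version

assert accept_update((1, 8), (1, 9))       # upgrade allowed
assert not accept_update((1, 8), (1, 7))   # rollback to vulnerable 1.7 refused
assert not accept_update((1, 8), (1, 8))   # re-install of the same version refused
```

Python tuple comparison is lexicographic, so `(1, 10) > (1, 9)` behaves correctly where naive string comparison of "1.10" vs "1.9" would not.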

Best Practice #4 – Secure your bootloader

Updating firmware Over-the-Air requires several components to ensure that it is done securely and successfully. Often the focus is on getting the new image to the device and getting it decrypted. However, just like in traditional firmware updates, the bootloader is still a critical piece to the update process and in OTA updates, the bootloader can’t just be your traditional flavor but must be secure.

There are quite a few methods that can be used with the onboard bootloader, but no matter the method used, the bootloader must be secure. Secure bootloaders need to be capable of verifying the authenticity and integrity of the firmware before it is ever loaded. Some systems will use the application code to verify and install the firmware into a new application slot while others fully rely on the bootloader. In either case, the secure bootloader needs to be able to verify the authenticity and integrity of the firmware prior to accepting the new firmware image.

It’s also a good idea to ensure that the bootloader is built into a chain of trust and cannot be easily modified or updated. The secure bootloader is a critical component in a chain-of-trust that is necessary to keep a system secure.

Best Practice #5 – Build a Chain-of-Trust

A chain-of-trust is a sequence of events that occur while booting the device that ensures each link in the chain is trusted software. For example, I’ve been working with the Cypress PSoC 64 secure MCUs recently, and these parts ship from the factory with a hardware-based root-of-trust to authenticate that the MCU came from a secure source. That root-of-trust (RoT) is then transferred to a developer, who programs a secure bootloader and security policies onto the device. During the boot sequence, the RoT verifies the integrity and authenticity of the bootloader, which then verifies the integrity and authenticity of any second-stage bootloader or software, which in turn verifies the authenticity and integrity of the application. The application then verifies the authenticity and integrity of its data, keys, operational parameters, and so on.

This sequence creates a chain-of-trust which is needed and used by firmware OTA updates. When the new firmware request is made, the application must decrypt the image and verify that the authenticity and integrity of the new firmware are intact. That new firmware can then only be used if the chain-of-trust can successfully make its way through each link in the chain. The bottom line: a developer and the end user know that when the system boots successfully, the new firmware is legitimate.
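The link-by-link verification above can be sketched roughly as follows, with SHA-256 digests standing in for the full signature verification a real secure boot flow performs. The stage contents and the idea of each stage pinning the digest of the next are illustrative assumptions:

```python
import hashlib

def sha256(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

# Hypothetical stage contents for illustration.
bootloader  = b"secure bootloader code"
application = b"application code"
app_config  = b"keys and operational parameters"

# Each link in the chain pins the expected digest of the next link:
# the hardware RoT pins the bootloader, the bootloader pins the app, etc.
trust_anchors = {
    "bootloader": sha256(bootloader),
    "application": sha256(application),
    "config": sha256(app_config),
}

def boot(stages: dict) -> bool:
    """Walk the chain in order; refuse to boot if any link fails verification."""
    for name, expected in trust_anchors.items():
        if sha256(stages[name]) != expected:
            return False
    return True

assert boot({"bootloader": bootloader, "application": application, "config": app_config})
assert not boot({"bootloader": bootloader, "application": b"tampered", "config": app_config})
```

A single failed link aborts the whole boot, which is exactly the property that makes a tampered OTA image unusable even if it reaches flash.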

Conclusions

OTA updates are a critical infrastructure component to nearly every embedded IoT device. Sure, there are systems out there that once deployed will never update, however, those are probably a small percentage of systems. OTA updates are the go-to mechanism to update firmware in the field. We’ve examined several best practices that developers and companies should consider when they start to design their connected systems. In fact, the bonus best practice for today is that if you are building a connected device, make sure you explore your OTA update solution sooner rather than later. Otherwise, you may find that building that Chain-Of-Trust necessary in today’s deployments will be far more expensive and time consuming to implement.

Originally posted here.

Read more…

4 key questions to ask tech vendors

Posted by Terri Hiskey

Without mindful and strategic investments, a company’s supply chain could become wedged in its own proverbial Suez Canal, ground to a halt by outside forces and its inflexible, complex systems.

 

It’s a dramatic image, but one that became reality for many companies in the last year. Supply chain failures aren’t typically such high-profile events as the Suez Canal blockage, but rather death by a thousand inefficiencies, each slowing business operations and affecting the customer experience.

Delay by delay and spreadsheet by spreadsheet, companies are at risk of falling behind more nimble, cloud-enabled competitors. And as we emerge from the pandemic with a new understanding of how important adaptable, integrated supply chains are, company leaders have critical choices to make.

The Hannover Messe conference (held online from April 12-16) gives manufacturing and supply chain executives around the world a chance to hear perspectives from industry leaders and explore the latest manufacturing and supply chain technologies available.

Technology holds great promise. But if executives don’t ask key strategic questions to supply chain software vendors, they could unknowingly introduce a range of operational and strategic obstacles into their company’s future.

If you’re attending Hannover Messe, here are a few critical questions to ask:

Are advanced technologies like machine learning, IoT, and blockchain integrated into your supply chain applications and business processes, or are they addressed separately?

It’s important to go beyond the marketing. Is the vendor actually promoting pilots of advanced technologies that are simply customized use cases for small parts of an overall business process hosted on a separate platform? If so, it may be up to your company to figure out how to integrate it with the rest of that vendor’s applications and to maintain those integrations.

To avoid this situation, seek solutions that have been purpose-built to leverage advanced technologies across use cases that address the problems you hope to solve. It’s also critical that these solutions come with built-in connections to ensure easy integration across your enterprise and to third party applications.

Are your applications or solutions written specifically for the cloud?

If a vendor’s solution for a key process (like integrated business planning or plan to produce, for example) includes applications developed over time by a range of internal development teams, partners, and acquired companies, what you’re likely to end up with is a range of disjointed applications and processes with varying user interfaces and no common data model. Look for a cloud solution that helps connect and streamline your business processes seamlessly.

Update schedules for the various applications could also be disjointed and complicated, so customers can be tempted to skip updates. But some upgrades may be forced, causing disruption in key areas of your business at various times.

And if some of the applications in the solution were written for the on-premises world, business processes will likely need customization, making them hard-wired and inflexible. The convenience of cloud solutions is that they can take frequent updates more easily, resulting in greater value driven by the latest innovations.

Are your supply chain applications fully integrated—and can they be integrated with other key applications like ERP or CX?

A lack of integration between and among applications within the supply chain and beyond means that end users don’t have visibility into the company’s operations—and that directly affects the quality and speed of business decisions. When market disruptions or new opportunities occur, unintegrated systems make it harder to shift operations—or even come to an agreement on what shift should happen.

And because many key business processes span multiple areas—like manufacturing forecast to plan, order to cash, and procure to pay—integration also increases efficiency. If applications are not integrated across these entire processes, business users resort to pulling data from the various systems and then often spend time debating whose data is right.

Of course, all of these issues increase operational costs and make it harder for a company to adapt to change. They also keep the IT department busy with maintenance tasks rather than focusing on more strategic projects.

Do you rely heavily on partners to deliver functionality in your supply chain solutions?

Ask for clarity on which products within the solution belong to the vendor and which were developed by partners. Is there a single SLA for the entire solution? Will the two organizations’ development teams work together on a roadmap that aligns the technologies? Will their priority be on making a better solution together or on enhancements to their own technology? Will they focus on enabling data to flow easily across the supply chain solution, as well as to other systems like ERP? Will they be able to overcome technical issues that arise and streamline customer support?

It’s critical for supply chain decision-makers to gain insight into these crucial questions. If the vendor is unable to meet these foundational needs, the customer will face constant obstacles in their supply chain operations.

Originally posted here.

Read more…

The demand for Computer Numerical Control (CNC) equipment is gradually increasing and is expected to see huge growth over the coming years, at an annual growth rate of more than six percent. CNC machining plays a major role in modern manufacturing and helps create a diverse range of products across several industries, from agriculture, automotive, and aerospace to semiconductors and circuit boards.

Nowadays, machining has developed rapidly in terms of processing complexity, precision, machine scale, and automation level. CNC machine tools play a vital role in improving processing quality and efficiency, and IoT-enabled CNC machine monitoring solutions create machine-to-machine interaction, resulting in automated operations and less manual intervention.

IoT sensors embedded on CNC machines can measure various parameters and send them to a platform from which the state and operation of the machines can be fully supervised. Furthermore, CNC machines can scrutinize the data collected from the sensors to proactively replace tools, change the degrees of freedom, or perform any other action.

ADVANTAGES:

An enterprise can leverage the following advantages from the convergence of Industry 4.0 and CNC.

Predictive Maintenance:

CNC machine operators and handlers embrace the Industrial IoT, which allows them to interconnect with their CNC machines in many ways through smartphones or tablets. Operators can therefore monitor the condition of machines remotely, at all times, using Faststream’s IoT-based CNC machine monitoring.

This remote, real-time monitoring helps the machine operator schedule a CNC for a checkup or repair.

Operators can also configure their CNC machines to send alerts or notifications whenever the machines deem themselves due for tuning or maintenance. In other words, the machine will raise red flags about complications such as a rise in temperature, increased vibration, or tool damage.
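A threshold-based red-flag check of this kind might look like the sketch below. The metric names and limits are hypothetical illustrations, not Faststream's actual parameters:

```python
# Hypothetical alert thresholds; real limits come from the machine's datasheet.
LIMITS = {"temperature_c": 85.0, "vibration_mm_s": 4.5}

def check_reading(reading: dict) -> list:
    """Return the list of red flags raised by one sensor reading."""
    alerts = []
    for metric, limit in LIMITS.items():
        if reading.get(metric, 0) > limit:
            alerts.append(f"{metric} exceeded {limit}")
    if reading.get("tool_worn"):
        alerts.append("tool damage suspected")
    return alerts

# A nominal reading raises nothing; an overheated, vibrating machine with a
# worn tool raises three flags.
assert check_reading({"temperature_c": 60, "vibration_mm_s": 2.0}) == []
assert len(check_reading({"temperature_c": 92, "vibration_mm_s": 5.1, "tool_worn": True})) == 3
```

In practice such rules run on the monitoring platform (or at the edge) against a stream of readings, and each returned alert becomes a push notification to the operator's phone or tablet.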

Reducing Downtime and Efficient Machine Monitoring:

Digital transformation of CNC machines has broad scope and is not restricted to remote control and scheduled maintenance. Our IoT system’s real-time alert features reduce machine downtime and raise overall equipment effectiveness: alerts received from machines can drive predictive measures before the unexpected breakdown of tools or any other element of a CNC machine.

Faststream Technologies delivered similar solutions to its clients by deploying its IoT energy-management solution for their CNC machines. Before implementing these solutions, one client was struggling with unexpected breakdowns of its machines. Faststream’s IoT solution gave them clear insight into the running hours of their CNC machines, which in turn gave them an accurate picture of how well they were maintaining their production run-time.

Downtime-reduction solutions can be applied across a chain of CNC machines not only to improve their processing but also to boost machine synchronization in industrial settings and achieve operational excellence.

Less manual effort and Worker Safety:

For larger deployments, Industrial IoT technology can also be implemented to reduce manual effort, or in other words, to mitigate the possibility of worker injury during factory operations.

This is where machine-to-machine synchronization and interrelation come into the picture. The synergy between machines results in greater interoperation between various electromechanical devices, which leads to automated operations in a manufacturing unit.

Many companies are already working on smart robots and machines that can perform pre-programmed tasks and respond to the needs of CNC machines, taking the extra strain of quality operation off the manual workforce. These robots can perform confined, delicate work such as opening and closing the cover of a CNC machine or replacing a tool whenever sharpness is required.

Apart from lowering injuries in the workshop, our Industry 4.0 CNC solution also helps lower material wastage and improve the efficiency of CNC machines, enabling exact parts to be produced in a shorter time frame.

CONCLUSION

CNC machines are electromechanical devices that operate tools across a range of axes with high accuracy to produce parts as commanded by a computer program. They run faster than non-automated machines and can generate objects with high accuracy from any type of design.

Read more…

By Ashley Ferguson

Thanks to the introduction of connected products and digital services, and to rising customer expectations, enterprise IoT spend has trended consistently upward. The global IoT market is projected to reach $1.4 trillion USD by 2027. The pressure to build IoT solutions and get a return on those investments has teams on a frantic search for IoT engineers to secure in-house IoT expertise. However, due to the complexity of IoT solutions, finding all of that expertise in a single engineer is a difficult or impossible proposition.

So how do you adjust your search for an IoT engineer? The first step is to acknowledge that IoT solution development requires the fusion of multiple disciplines. Even simple IoT applications require hardware and software engineering, knowledge of protocols and connectivity, web development skills, and analytics. Certainly, there are many engineers with IoT knowledge, but complete IoT solutions require a team of partners with diverse skills. This often requires utilizing external sources to supplement the expertise gaps.

THE ANATOMY OF AN IoT SOLUTION

IoT solutions provide enterprises with opportunities for innovation through new product offerings and cost savings through refined operations. An IoT solution is an integrated bundle of technologies that help users answer a question or solve a specific problem by receiving data from devices connected to the internet. One of the most common IoT use cases is asset tracking solutions for enterprises who want to monitor trucks, equipment, inventory, or other items with IoT. The anatomy of an asset tracking IoT solution includes the following:


This is a simple asset tracking example. For more complex solutions including remote monitoring or predictive maintenance, enterprises must also consider installation, increased bandwidth, post-development support, and UX/UI for the design of the interface for customers or others who will use the solution. Enterprise IoT solutions require an ecosystem of partners, components, and tools to be brought to market successfully.
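To make the anatomy concrete, here is a sketch of the kind of JSON telemetry message an asset tracker might report to its platform. The field names and device ID are illustrative assumptions, not any particular platform's schema:

```python
import json
import time

def telemetry(device_id: str, lat: float, lon: float, battery_pct: int) -> str:
    """Serialize one asset-tracker report as the JSON a platform would ingest."""
    return json.dumps({
        "device_id": device_id,
        "ts": int(time.time()),           # epoch seconds, stamped on-device
        "location": {"lat": lat, "lon": lon},
        "battery_pct": battery_pct,
    })

msg = telemetry("truck-042", 39.10, -84.51, 87)
decoded = json.loads(msg)
assert decoded["device_id"] == "truck-042"
assert decoded["location"]["lon"] == -84.51
```

Every other layer of the solution exists to move, store, or visualize messages like this one: connectivity carries it, the platform ingests it, and the dashboard renders it.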

Consider the design of your desired connected solution. Do you know where you will need to augment skills and services?

If you are in the early stages of IoT concept development and at the center of a buy vs. build debate, it may be a worthwhile exercise to assess your existing team’s skills and how they correspond with the IoT solution you are trying to build.

IoT SKILLS ASSESSMENT

  • Hardware
  • Firmware
  • Connectivity
  • Programming
  • Cloud
  • Data Science
  • Presentation
  • Technical Support and Maintenance
  • Security
  • Organizational Alignment

MAKING TIME FOR IoT APPLICATION DEVELOPMENT

The time it will take your organization to build a solution is dependent on the complexity of the application. One way to estimate the time and cost of IoT application development is with Indeema’s IoT Cost Calculator. This tool can help roughly estimate the hours required and the cost associated with the IoT solution your team is interested in building. In MachNation’s independent comparison of the Losant Enterprise IoT Platform and Azure, it was determined that developers could build an IoT solution in 30 hours using Losant and in 74-94 hours using Microsoft Azure.

As you consider IoT application development, consider the makeup of your team. Is your team prepared to dedicate hours to the development of a new solution, or will it be a side project? Enterprise IT teams are often in place to maintain existing operating systems and to ensure networks are running smoothly. In the event that an IT team is tapped to even partially build an IoT solution, there is a great chance that the IT team will need to invite partners to build or provide part of the stack.

HOW THE IoT JOB GETS DONE

Successful enterprises recognize early on that some of these skills will need to be augmented through additional people, through an ecosystem, or with software. It will require more than one ‘IoT engineer’ for the job. According to the results of a McKinsey survey, “the preferences of IoT leaders suggest a greater willingness to draw capabilities from an ecosystem of technology partners, rather than rely on homegrown capabilities.”

IoT architecture alone is intricate. Losant, an IoT application enablement platform, is designed with many of the IoT-specific components already in place. Losant enables users to build applications in a low-to-no code environment and scale them up to millions of devices. Losant is one piece in the wider scope of an IoT solution. In order to build a complete solution, an enterprise needs hardware, software, connectivity, and integration. For those components, our team relies on additional partners from the IoT ecosystem.

The IoT ecosystem, also known as the IoT landscape, refers to the network of IoT suppliers (hardware, devices, software platforms, sensors, connectivity, software, systems integrators, data scientists, data analytics) whose combined services help enterprises create complete IoT solutions. At Losant, we’ve built an IoT ecosystem with reliable experienced partners. When IoT customers need custom hardware, connectivity, system integrators, dev shops, or other experts with proven IoT expertise, we can tap one of our partners to help in their areas of expertise.

SECURE, SCALABLE, SEAMLESS IoT

Creating secure, scalable, and seamless IoT solutions for your environment begins by starting small. Starting small gives your enterprise the ability to establish its ecosystem. Teams can begin with a small investment and apply learnings to subsequent projects. Many IoT success stories begin with enterprises setting out to solve one problem. The simple beginnings have enabled them to now reap the benefits of the data harvest in their environments.

Originally posted here.

Read more…

TinyML focuses on optimizing machine learning (ML) workloads so that they can be processed on microcontrollers no bigger than a grain of rice and consuming only milliwatts of power.

By Arm Blueprint staff
 


TinyML gives tiny devices intelligence. We mean tiny in every sense of the word: as tiny as a grain of rice and consuming tiny amounts of power. Supported by Arm, Google, Qualcomm and others, tinyML has the potential to transform the Internet of Things (IoT), where billions of tiny devices, based on Arm chips, are already being used to provide greater insight and efficiency in sectors including consumer, medical, automotive and industrial.

Why target microcontrollers with tinyML?

Microcontrollers such as the Arm Cortex-M family are an ideal platform for ML because they’re already used everywhere. They perform real-time calculations quickly and efficiently, so they’re reliable and responsive, and because they use very little power, can be deployed in places where replacing the battery is difficult or inconvenient. Perhaps even more importantly, they’re cheap enough to be used just about anywhere. The market analyst IDC reports that 28.1 billion microcontrollers were sold in 2018, and forecasts that annual shipment volume will grow to 38.2 billion by 2023.

TinyML on microcontrollers gives us new techniques for analyzing and making sense of the massive amount of data generated by the IoT. In particular, deep learning methods can be used to process information and make sense of the data from sensors that do things like detect sounds, capture images, and track motion.

Advanced pattern recognition in a very compact format

Looking at the math involved in machine learning, data scientists found they could reduce complexity by making certain changes, such as replacing floating-point calculations with simple 8-bit operations. These changes created machine learning models that work much more efficiently and require far fewer processing and memory resources.
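As a rough illustration of that idea, here is a minimal pure-Python sketch of symmetric linear quantization: float weights are mapped to 8-bit integers with a single scale factor and can be approximately recovered afterwards. The function names and example weights are illustrative, not taken from any particular framework.

```python
# Minimal sketch of symmetric 8-bit quantization, the kind of
# simplification described above. All names and values are illustrative.

def quantize(weights, num_bits=8):
    """Map float weights onto signed integers with a single linear scale."""
    qmax = 2 ** (num_bits - 1) - 1            # 127 for int8
    scale = max(abs(w) for w in weights) / qmax or 1.0
    return [round(w / scale) for w in weights], scale

def dequantize(quantized, scale):
    """Recover approximate float values from the integer representation."""
    return [q * scale for q in quantized]

weights = [0.82, -0.41, 0.05, -0.93, 0.30]
q, scale = quantize(weights)
approx = dequantize(q, scale)
# Each weight now fits in one byte instead of four, and the reconstruction
# error is bounded by half a quantization step (scale / 2).
```

The storage saving is immediate (one byte per weight instead of four), and on microcontrollers the bigger win is that the multiply-accumulate work can run on cheap integer hardware.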

TinyML technology is evolving rapidly thanks to new technology and an engaged base of committed developers. Only a few years ago, we were celebrating our ability to run a speech-recognition model, capable of waking the system when it detects certain words, on a constrained Arm Cortex-M3 microcontroller using just 15 kilobytes (KB) of code and 22KB of data.

Since then, Arm has launched new machine learning (ML) processors, the Ethos-U55 and Ethos-U65, microNPUs specifically designed to accelerate ML inference in embedded and IoT devices.

The Ethos-U55, combined with the AI-capable Cortex-M55 processor, will provide a significant uplift in ML performance and improvement in energy efficiency over the already impressive examples we are seeing today.

TinyML takes endpoint devices to the next level

The potential use cases of tinyML are almost unlimited. Developers are already working with tinyML to explore all sorts of new ideas: responsive traffic lights that change signaling to reduce congestion, industrial machines that can predict when they’ll need service, sensors that can monitor crops for the presence of damaging insects, in-store shelves that can request restocking when inventory gets low, healthcare monitors that track vitals while maintaining privacy. The list goes on.

TinyML can make endpoint devices more consistent and reliable, since there’s less need to rely on busy, crowded internet connections to send data back and forth to the cloud. Reducing or even eliminating interactions with the cloud has major benefits including reduced energy use, significantly reduced latency in processing data and security benefits, since data that doesn’t travel is far less exposed to attack. 

It’s worth noting that these tinyML models, which perform inference on the microcontroller, aren’t intended to replace the more sophisticated inference that currently happens in the cloud. What they do instead is bring specific capabilities down from the cloud to the endpoint device. That way, developers can save cloud interactions for if and when they’re needed.

TinyML also gives developers a powerful new set of tools for solving problems. ML makes it possible to detect complex events that rule-based systems struggle to identify, so endpoint AI devices can start contributing in new ways. Also, since ML makes it possible to control devices with words or gestures, instead of buttons or a smartphone, endpoint devices can be built more rugged and deployable in more challenging operating environments. 

TinyML gaining momentum with an expanding ecosystem

Industry players have been quick to recognize the value of tinyML and have moved rapidly to create a supportive ecosystem. Developers at every level, from enthusiastic hobbyists to experienced professionals, can now access tools that make it easy to get started. All that’s needed is a laptop, an open-source software library and a USB cable to connect the laptop to one of several inexpensive development boards priced as low as a few dollars.

In fact, at the start of 2021, Raspberry Pi released its very first microcontroller board, one of the most affordable development boards available in the market at just $4. Named Raspberry Pi Pico, it’s powered by the RP2040 SoC, a surprisingly powerful dual-core Arm Cortex-M0+ processor. The RP2040 MCU is able to run TensorFlow Lite Micro and we’re expecting to see a wide range of ML use cases for this board over the coming months.

Arm is a strong proponent of tinyML because our microcontroller architectures are so central to the IoT, and because we see the potential of on-device inference. Arm’s collaboration with Google is making it even easier for developers to deploy endpoint machine learning in power-conscious environments.

The combination of Arm CMSIS-NN libraries with Google’s TensorFlow Lite Micro (TFLu) framework allows data scientists and software developers to take advantage of Arm’s hardware optimizations without needing to become experts in embedded programming.
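To make the benefit concrete, the following pure-Python sketch mimics the integer arithmetic that optimized int8 kernels of this kind perform for a single fully-connected output: 8-bit inputs and weights are multiplied, accumulated in a wide accumulator, then rescaled and saturated back to the int8 range. This is an illustration of the underlying math only, not the CMSIS-NN or TFLu API; the function name and scale values are assumptions.

```python
# Sketch of the int8 arithmetic behind an optimized fully-connected kernel.
# Not the CMSIS-NN API -- an illustration of the underlying math only.

def fc_int8(inputs_q, weights_q, bias_q, in_scale, w_scale, out_scale):
    acc = bias_q                              # wide (32-bit) accumulator
    for x, w in zip(inputs_q, weights_q):
        acc += x * w                          # int8 x int8 products
    real = acc * (in_scale * w_scale)         # back to real-valued units
    q = round(real / out_scale)               # requantize to the output scale
    return max(-128, min(127, q))             # saturate to the int8 range

# One output neuron with three illustrative int8 inputs and weights.
out = fc_int8([12, -7, 33], [90, 4, -25], bias_q=10,
              in_scale=0.02, w_scale=0.01, out_scale=0.05)
```

Everything inside the loop is integer work, which is exactly what Cortex-M integer and SIMD instructions accelerate; the floating-point rescale at the end is itself replaced by a fixed-point multiply in real kernels.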

On top of this, Arm is investing in new tools derived from Keil MDK to help developers get from prototype to production when deploying ML applications.

TinyML would not be possible without a number of early influencers: Pete Warden, a “founding father” of tinyML and technical lead of TensorFlow Lite Micro at Google; Arm Innovator Kwabena Agyeman, who developed OpenMV, a project dedicated to low-cost, extensible, Python-powered machine-vision modules that support machine learning algorithms; and Arm Innovator Daniel Situnayake, a founding tinyML engineer and developer at Edge Impulse, a company that offers a full tinyML pipeline covering data collection, model training, and model optimization. Arm partners such as Cartesiam.ai, a company that offers NanoEdge AI, a tool that creates software models on the endpoint based on the sensor behavior observed in real conditions, have also been pushing the possibilities of tinyML to another level.

Arm is also a partner of the TinyML Foundation, an open community that coordinates meet-ups to help people connect, share ideas, and get involved. There are many localised tinyML meet-ups covering the UK, Israel, and Seattle, to name a few, as well as a global series of tinyML Summits. For more information, visit the tinyML Foundation website.

Originally posted here.

Read more…

Once again, I’m jumping up and down in excitement because I’m going to be hosting a panel discussion as part of a webinar series — Fast and Fearless: The Future of IoT Software Development — being held under the august auspices of IoTCentral.io.

At this event, the second of a four-part series, we will be focusing on “AI and IoT Innovation” (see also What the FAQ are AI, ANNs, ML, DL, and DNNs? and What the FAQ are the IoT, IIoT, IoHT, and AIoT?).


Panel members Karl Fezer (upper left), Wei Xiao (upper right), Nikhil Bhaskaran (lower left), and Tina Shyuan (bottom right) (Click image to see a larger version)

As we all know, the IoT is transforming the software landscape. What used to be a relatively straightforward embedded software stack has been revolutionized by the IoT, with developers now having to juggle specialized workloads, security, artificial intelligence (AI) and machine learning (ML), real-time connectivity, managing devices that have been deployed into the field… the list goes on.

In this webinar — which will be held on Tuesday 29 June 2021 from 10:00 a.m. to 11:00 a.m. CDT — I will be joined by four industry luminaries to discuss how to juggle the additional complexities that machine learning adds to IoT development, why on-device machine learning is more important now than ever, and what the combination of AI and IoT looks like for developers in the future.

The luminaries in question (and whom I will be questioning) are Karl Fezer (AI Ecosystem Evangelist at Arm), Wei Xiao (Principal Engineer, Sr. Strategic Alliances Manager at Nvidia), Nikhil Bhaskaran (Founder of Shunya OS), and Tina Shyuan (Director of Product Marketing at Qeexo).

So, what say you? Dare I hope that we will have the pleasure of your company and that you will be able to join us to (a) tease your auditory input systems with our discussions and (b) join our question-and-answer free-for-all frenzy at the end? If so, may I suggest that you Register Now before all of the good virtual seats are taken, metaphorically speaking, of course.

>> Click here to register

Read more…

WEBINAR SERIES:
 
Fast and Fearless - The Future of IoT Software Development

SUMMARY

The IoT is transforming the software landscape. What was a relatively straightforward embedded software stack has been revolutionized by the IoT: developers now juggle specialized workloads, security, machine learning, real-time connectivity, managing devices in the field - the list goes on.

How can our industry give developers the tools and platforms to prototype ‘fearlessly’ and navigate the IoT’s varied components? How can developers move to production quickly, capitalizing on innovation opportunities in emerging IoT markets?

This webinar series will take you through the fundamental steps, tools, and opportunities for simplifying IoT development. Each webinar will be a panel discussion with industry experts who will share their experience and development tips on the topics below.

 

Part One of Four: The IoT Software Developer Experience

Date: Tuesday, May 11, 2021

Webinar Recording Available Here
 

Part Two of Four: AI and IoT Innovation

Date: Tuesday, June 29, 2021

Time: 8:00 am PDT/ 3:00 pm UTC

Duration: 60 minutes

Click Here to Register for Part Two
 

Part Three of Four: Making the Most of IoT Connectivity

Date: Tuesday, September 28, 2021

Time: 8:00 am PDT/ 3:00 pm UTC

Duration: 60 minutes

Click Here to Register for Part Three
 

Part Four of Four: IoT Security Solidified and Simplified

Date: Tuesday, November 16, 2021

Time: 8:00 am PDT/ 3:00 pm UTC

Duration: 60 minutes

Click Here to Register for Part Four
 
Read more…

Happy Friday (or whatever day it is when you find yourself reading this). I’m currently bouncing off the walls in excitement because I’ve been invited to host a panel discussion as part of a webinar series — Fast and Fearless: The Future of IoT Software Development — being held under the august auspices of IoTCentral.io


Panel members Joe Alderson (upper left), Pamela Cortez (upper right), Katherine Scott (lower left), and Ihor Dvoretskyi (bottom right)

At this event, the first of a 4-part series, we will be focusing on “The IoT Software Developer Experience.”

As we all know, the IoT is transforming the software landscape. What used to be a relatively straightforward embedded software stack has been revolutionized by the IoT, with developers now having to juggle specialized workloads, security, machine learning, real-time connectivity, managing devices that have been deployed into the field… the list goes on.

In this webinar — which will be held on Tuesday 11 May 2021 from 10:00 a.m. to 11:00 a.m. CDT — I will be joined by four industry luminaries to discuss the development challenges engineers are facing today, how the industry is helping to make IoT development easier, an overview of development processes (including cloud-based continuous integration (CI) workflows and low-code development), and what the future looks like for developers who are building for the IoT. 

The luminaries in question (and whom I will be questioning) are Joe Alderson (Director of Embedded Tools and User Experience at Arm), Pamela Cortez (IoT Developer Advocate and Sr. Program Manager at Microsoft Azure IoT), Katherine Scott (Developer Advocate at Open Robotics), and Ihor Dvoretskyi (Developer Advocate at Cloud Native Computing Foundation).

So, what say you? Dare I hope that we will have the pleasure of your company and that you will be able to join us to (a) tease your auditory input systems with our discussions and (b) join our question-and-answer free-for-all at the end?

Recording available:

Read more…

Before getting to the point of which language is best for IoT development, let us first understand what exactly IoT is and why it is important.

What is the Internet of Things (IoT)? 

The Internet of Things (IoT) epitomizes the trend of formerly autonomous devices becoming progressively connected to the Internet. IoT refers to various "things" that can communicate with each other to accomplish more than they could working alone. Devices that incorporate a microchip and data-communication capabilities are IoT devices.

The “Internet” refers to the ability of devices to communicate with each other. In many IoT systems, communication between things does not actually travel over the Internet: things may use Internet protocols to talk to one another, or they may use proprietary protocols. Nonetheless, in most systems a connection to the Internet is present at some point. Common examples using the Internet involve devices communicating with one of the following:

  • A mobile phone
  • A gateway device
  • An embedded cellular connection

This is true even if the IoT devices themselves do not use an Internet connection but the user's mobile device does. The Internet of Things produces data about connected objects, analyzes it, and makes decisions; in straightforward terms, the Internet of Things is smarter than the plain web. Security cameras, sensors, vehicles, buildings, and software are some examples of things that can exchange data with one another.

The top six programming languages best suited for IoT are:

1. C/C++

C and C++ are among the most popular languages for IoT projects, used for an assortment of purposes. For example, developers may use C with IoT boards, or C++ in embedded IoT systems. Because both languages have relatively low energy consumption and advanced flexibility, developers can use them to effectively code embedded systems that interface with the underlying hardware.


  • As you might have guessed, many "things" would not exist without quite possibly the most important programming language: C. It is essentially the starting point and the most popular language for embedded devices. C has been used consistently even though other languages may rank much higher, and it is probably the oldest language still in wide use today. Despite the many languages that have come along since, there are still plenty of projects that use C, and some use only C. There is a good reason for this: performance. Other languages rely on interpretation at runtime, which means that either bytecode or the actual code you write is being interpreted while your program runs. C, on the other hand, compiles to machine code. This means that C programs are generally much faster than their equivalents in other languages.
  • C++
    C++ extends the processing power of C, which makes it ideal as a preprocessing catalyst for C and helps it support higher-level programming abstractions. Although C++ is an intricate language and developers can make mistakes with it, it remains a programmers' favorite. The language shows its strength in Linux projects and the embedded programming space with its capacity for abstractions and object layers. C++ is an improved version of the C language commonly used for object-oriented programming; it was designed to run large-scale applications, a limitation of C. C++ is widely used in embedded systems, GUI-based applications, web browsers, and operating systems, with applications across industries such as healthcare, finance, and more.

2. JAVA

Java is one of the most widely used programming languages in IoT development. Its object-oriented design and "write once, run anywhere" portability mean that code written and tested on a desktop can be moved to almost any device that runs a Java virtual machine, which suits the heterogeneous hardware found in IoT projects. Java also benefits from mature tooling and a large developer community, which makes large IoT applications easier to build and maintain. That said, Java requires explicit libraries to work with specific hardware. Nevertheless, it is amongst the most preferred tools used by developers today for IoT development.

3. PYTHON

The vast majority of web applications use Python as their programming language, and Python has been extremely popular among IoT developers because it is easy to learn, adaptable, and quick, and its power lets specialists work with data-heavy applications. It is a flexible language that can be adapted to almost any type of device, and a relatively simple language for any developer to learn, which makes it well suited to IoT application development. The syntax is simple and readable, which makes developing an IoT application straightforward, and Python is also well regarded for maintaining complex codebases. It is a good choice for handling complex data and is now viewed as a stable language, useful for all small to medium-sized projects. Its processing power is moderate, and it is gaining prevalence in IoT systems. A general-purpose language, Python works well for backend web development, data analysis, artificial intelligence, and scientific computing; developers also use it to build productivity tools, games, and desktop applications. It is one of the fastest-growing languages in embedded computing.
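As a small, hedged example of the kind of glue code Python is typically used for in IoT work, the sketch below reads a simulated sensor and serializes the reading as JSON for an upstream service. The device ID, field names, and fixed reading are illustrative assumptions, and the stub would be replaced by a real driver call.

```python
import json
import time

def read_temperature():
    # Stand-in for a real driver call (e.g. over I2C or a vendor SDK);
    # returns a fixed value so the sketch is self-contained.
    return 21.7

def build_payload(device_id):
    # Package one reading with context for an upstream service.
    return json.dumps({
        "device": device_id,
        "metric": "temperature_c",
        "value": read_temperature(),
        "ts": int(time.time()),
    })

payload = build_payload("sensor-42")
```

The readable syntax and batteries-included standard library are what make this kind of data-collection script quick to write and easy to maintain.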

4. JAVASCRIPT

JavaScript is another of the widely used programming languages making IoT-controlled devices a reality. When combined with Node.js, it works wonderfully for building both public and private IoT networks, and the Tessel and Espruino microcontrollers use this programming language, which makes it a suitable choice when working with low-power or resource-constrained microcontrollers. JavaScript has a gentle learning curve, and even students with no experience can begin working on IoT development projects without spending years mastering it. It is used in web development and HTML pages, which is an advantage because code written in this language can easily be adapted for an IoT application; it is likewise one of the recommended languages for an IoT application development organization. Developers will not need to learn a new language to write code for sensors. JavaScript runs on Node.js, which is a good choice for gathering data and sending it to the hub.

5. SWIFT

Swift is the programming language used for creating applications for macOS and Apple's iOS devices. If you want your central home hub to communicate with iPhones and iPads, Swift is the way to go. Swift is gaining popularity as a programming language, surpassing its predecessor, Objective-C. To achieve its goal of becoming the leader of IoT at home, Apple is building libraries that can handle a large part of the work, making it easier for developers to focus on the task at hand. A general-purpose, multi-paradigm, compiled programming language, Swift is built using a modern approach to safety, performance, and software design patterns. Swift is an open-source language and a great choice for developing powerful HomeKit solutions. HomeKit is a framework by Apple Inc. for communicating with and controlling connected accessories in a user's home. Swift will be available for CloudKit too; CloudKit keeps your applications' data connected and up to date across iOS and macOS.

Being a powerful, open-source, platform-compatible development solution capable of running applications both on the device and in the cloud, Swift is an obvious choice for IoT products.

6. PHP

PHP is an open-source, interpreted, object-oriented, server-side scripting language. A PHP script can execute faster than scripts written in many other languages. It is cross-platform, which means a PHP application developed on one operating system can easily be run on others. Apart from that, PHP code can easily be embedded within HTML tags and scripts. Developers are adding PHP to their stack of languages; its main target is juggling microservices on the server, and it can turn the humblest thing on the web into a full web server. With the help of PHP, applications are developed using GPS data from IoT devices.

PHP is not a hard language to understand, though it is a bit more difficult than HTML and CSS. Being a logic-based language with rationale-driven commands and statements, PHP takes some effort to master; it is recommended to learn PHP after HTML/CSS and JavaScript.

How IoT applications benefit your business

  1. IoT applications create more business opportunities by improving business modules and the quality of the service provided.
  2. They improve asset utilization by monitoring equipment through sensors and applying preventive maintenance for continuous availability.
  3. IoT applications can easily connect with cameras and sensors to monitor equipment and avoid physical hazards.
  4. They increase business productivity by offering training to employees and improving their work efficiency.
  5. By improving business modules, asset utilization, equipment monitoring, and employee training, IoT applications also lower your overall business cost.

Significance of Internet of Things Technology

IoT is viewed as the great frontier that can improve practically all activities in our lives. Most devices that have never previously been connected to the Internet can be networked so that they respond in the same way as smart devices. By 2020, the world was set to be IoT-ready. Here are the benefits that accompany this technology.

Technology is now integral to our lives, reinventing the enjoyment of every activity, and the Internet of Things takes a significant share in making that possible. In a world dominated by digital technology, the IoT assumes a prominent part in our lives. It has created an environment that joins numerous systems to deliver smart performance in every task. The expansion of the IoT has driven a new generation of phones, homes, and other embedded applications that are connected to the Internet, and they have seamlessly integrated human communication in ways we never anticipated. These devices can derive meaningful information using commands based on data analysis, share the data on the cloud, and analyze it securely to deliver the required output. Many organizations are changing rapidly, in numerous ways, because of the IoT.

The IoT is making numerous changes in our lives. It is connecting millions of devices that were previously isolated, exponentially expanding the value of big data and streamlining many everyday tasks. Many organizations around the world, such as Eddie Stobart Transport and Logistics Company, Amazon, Dell, Aviva, John Deere Company, and Walt Disney Land, are using Internet of Things technology to monitor various activities and advance their existing systems.

  • The IoT promotes efficient resource utilization.
  • It minimizes human effort in many aspects of life.
  • Adopting the IoT reduces production costs and increases returns.
  • It enables faster and more precise analytical decisions.
  • It supports real-time marketing of products.
  • It provides a superior customer experience.
  • It ensures high-quality data and secure processing.

Conclusion

Every one of the programming languages listed above has its strengths and weaknesses, so organizations need to thoroughly examine the characteristics of each language and find out which of them matches the requirements of the project they will undertake. The availability of a development environment, tools, and libraries may be another factor to consider. Engineers may want to pick an open-source language, as such languages offer strong community support and a wide range of tools.

Author Bio: 

Sidharth Jain, proud founder of Graffersid, a web and mobile app development company based in India. Graffersid has a team of designers and dedicated developers, offers Laravel developers for hire, and is trusted by start-ups from YC, Harvard, Google Incubation, and BluChilli. He understands how to solve problems using technology and contributes his knowledge to leading blogging sites.

Read more…

By Sachin Kotasthane

In his book, 21 Lessons for the 21st Century, the historian Yuval Noah Harari highlights the complex challenges mankind will face on account of technological challenges intertwined with issues such as nationalism, religion, culture, and calamities. In the current industrial world hit by a worldwide pandemic, we see this complexity translate in technology, systems, organizations, and at the workplace.

While in my previous article, Humane IIoT, I discussed the people-centric strategies that enterprises need to adopt while onboarding the workforce to industrial IoT initiatives, in this article I will share thoughts on how new-age technologies such as AI, ML, big data, and of course industrial IoT can be used to effectively manage complex workforce problems in a factory, thereby changing the way people work and interact, especially in this COVID-stricken world.

Workforce related problems in production can be categorized into:

  1. Time complexity
  2. Effort complexity
  3. Behavioral complexity

Problems categorized in any of the above have a significant impact on the workforce, resulting in a detrimental effect on the outcome, whether of the product or the organization. The complexity of these problems can be attributed to the fact that solutions to such issues cannot be found using just engineering or technology fixes, as there is no single root cause but rather a combination of factors and scenarios. Let us, therefore, explore a few and seek probable workforce solutions.

Figure 1: Workforce Challenges and Proposed Strategies in Production

  1. Addressing Time Complexity

    Any workforce-related issue that has a detrimental effect on the operational time, due to contributing factors from different factory systems and processes, can be classified as a time complex problem.

    Though classical paper-based schedules, lists, and punch sheets have largely been replaced with IT-systems such as MES, APS, and SRM, the increasing demands for flexibility in manufacturing operations and trends such as batch-size-one, warrant the need for new methodologies to solve these complex problems.

    • Worker attendance

      Anyone who has experienced, at close quarters, a typical day in the life of a factory supervisor, will be conversant with the anxiety that comes just before the start of a production shift. Not knowing who will report absent, until just before the shift starts, is one complex issue every line manager would want to get addressed. While planned absenteeism can be handled to some degree, it is the last-minute sick or emergency-pager text messages, or the transport delays, that make the planning of daily production complex.

      What if there were a solution to get the count that is almost close to the confirmed hands for the shift, an hour or half, at the least, in advance? It turns out that organizations are experimenting with a combination of GPS, RFID, and employee tracking that interacts with resource planning systems, trying to automate the shift planning activity.

      While some legal and privacy issues still need to be addressed, it would not be long before we see people being assigned to workplaces, even before they enter the factory floor.

      While making sure every line manager has accurate information about the confirmed hands for the shift, it is equally important to monitor the health and well-being of employees during this pandemic. The use of technologies such as radar and millimeter-wave sensors would enable live tracking of workers around the shop floor and ensure that social-distancing norms are well observed.

    • Resource mapping

      While resource skill-mapping and certification are mostly HR function prerogatives, not having the right resource at the workstation during exigencies such as absenteeism or extra workload is a complex problem. Precious time is lost in locating such resources, or worst still, millions spent in overtime.

      What if there were a tool that analyzed the current workload for a resource with the identified skillset code(s) and gave an accurate estimate of the resource’s availability? This could further be used by shop managers to plan manpower for a shift, keeping them as lean as possible.

      Today, IT teams of OEMs are seen working with software vendors to build such analytical tools that consume data from disparate systems—such as production work orders from MES and swiping details from time systems—to create real-time job profiles. These results are fed to the HR systems to give managers the insights needed to make resource decisions within minutes.
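A toy sketch of the data fusion described above, with all worker IDs, skill codes, and demand numbers invented for illustration: check-in events seen before the shift are joined with an HR skill register to estimate the confirmed hands available per skill, and the gap against the shift's planned demand.

```python
from collections import Counter

# Illustrative HR skill register: worker id -> certified skill codes.
skills = {
    "w1": {"welding"},
    "w2": {"welding", "assembly"},
    "w3": {"assembly"},
}

# Badge/GPS check-in events observed before the shift starts.
checked_in = ["w1", "w3"]

def confirmed_hands(checked_in, skills):
    """Count confirmed workers per skill from pre-shift check-ins."""
    counts = Counter()
    for worker in checked_in:
        for skill in skills.get(worker, ()):
            counts[skill] += 1
    return counts

hands = confirmed_hands(checked_in, skills)
demand = {"welding": 2, "assembly": 1}      # the shift's planned demand
gaps = {s: need - hands.get(s, 0) for s, need in demand.items()}
# 'gaps' tells the line manager, before the shift starts, where to
# reassign people or call in extra hands.
```

In a real deployment, the register would come from the HR system and the check-ins from time or tracking systems, but the join itself is this simple.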

  2. Addressing Effort Complexity

    Just as time complexities result in increased production time, problems in this category result in increased workforce effort to complete the same quantity of work. As the effort required is proportional to the fatigue and long-term well-being of the workforce, solutions that reduce effort are especially valuable. Complexity arises when organizations try to create method out of madness from a variety of factors such as changing workforce profiles, production sequences, logistical and process constraints, and demand fluctuations.

    Thankfully, solutions for this category of problems can be found in new technologies that augment existing systems with insights and predictions, reducing effort and channeling it more productively. Added to this, amid the demand fluctuations of the current pandemic, real-time operational visibility coupled with advanced analytics will help ensure that shift production targets are met.

    • Intelligent exoskeletons

      Exoskeletons, as we know, are powered bodysuits designed to safeguard and support the user in performing tasks while increasing overall human efficiency at those tasks. They are deployed for strain-inducing postures or for lifting objects that would otherwise be tiring after a few repetitions. Exoskeletons are the new-age answer to reducing user fatigue in areas requiring human skill and dexterity, which would otherwise require a complex robot and cost a bomb.

      However, the complexity that mars exoskeleton adoption is making the same suit adaptable to a variety of postures, user body types, and jobs at the same workstation. It would help if the exoskeleton could sense the user, set the posture, and adapt itself to the next operation automatically.

      Taking a leaf out of Marvel’s Iron Man, whose suit, controlled by JARVIS, complements his every posture, manufacturers can now hope to create intelligent exoskeletons that are always connected to factory systems and user profiles. These suits will adapt and respond to assistive needs without any intervention, freeing the user to focus completely on the main job at hand.

      Given the ongoing COVID situation, equipping these suits with sensors and technologies such as radar and millimeter wave to help observe social distancing, measure body temperature, and so on would keep both workers and management safe.

    • Highlighting likely deviations

      The world over, quality teams on factory floors work with checklists that the quality inspector verifies for every product that arrives at the inspection station. While this repetitive task is best suited to robots, when humans execute such repetitive tasks, especially those involving the visual, audio, touch, and olfactory senses, mistakes and misses are bound to occur, resulting in costly rework and recalls.

      Manufacturers have tried to address this complexity by rotating manpower, but this too has met with limited success, given the available manpower and ever-increasing workloads.

      Fortunately, predictive quality integrated with feed-forward techniques and some smart visual tracking can highlight the area or zone on the product that is prone to quality slips, based on data captured from previous operations. The inspector can then be guided to pay more attention to these areas in the checklist.
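The feed-forward idea can be reduced to a simple rule: zones that accumulated deviations at upstream operations get flagged for closer inspection downstream. The zone names, deviation counts, and threshold below are assumed for illustration:

```python
def zones_to_flag(upstream_deviations, threshold=2):
    """Flag product zones for extra inspection attention.

    upstream_deviations: dict of zone name -> number of recorded
    process deviations for this unit at earlier operations
    (hypothetical data shape fed forward from upstream stations).
    Returns an alphabetically sorted list of zones at or above
    the deviation threshold.
    """
    return sorted(z for z, n in upstream_deviations.items() if n >= threshold)

# Example unit: the left door saw 3 upstream deviations, the trunk 2
deviations = {"door-left": 3, "hood": 1, "trunk": 2}
print(zones_to_flag(deviations))  # → ['door-left', 'trunk']
```

A fuller implementation would weight deviations by severity and learn the threshold from historical slip data rather than fixing it by hand.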

  3. Addressing Behavioral Complexity

    Problems in this category usually manifest as quality issues, but the root cause can often be traced to workforce behavior or profile. Traditionally, organizations have addressed such problems through experienced supervisors who, as people managers, were expected to read these signs, anticipate issues, and align the manpower.

    However, with constantly changing manpower and product variants, these are now complex new-age problems requiring new-age solutions.

    • Heat-mapping workload

      Time and motion studies at the workplace map user movements around the machine against the time each activity takes, matching the available cycle time either by redistributing work or by increasing the manpower at that station. Time-consuming and cumbersome as this is, the complexity increases when workload balancing must be done for teams working on a single product at the workstation. Movements of multiple resources during different sequences are difficult to track, and different users cannot be expected to follow the same footsteps every time.

      Solving this calls for a solution that monitors human motion unobtrusively, links it to the product work content at the workstation, and generates recommendations to balance the workload and even out the ‘congestion.’ New industrial applications such as short-range radar and visual feeds can be used to create heat maps of the workforce as they work on the product. Superimposed on the digital twin of the process, these heat maps identify the zones of ‘congestion,’ which can then be fed to the line-planning function to implement corrective measures such as work redistribution or partial outsourcing of the operation.
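At its core, such a heat map is just a binning of position samples over the shift; the grid size, sample coordinates, and congestion threshold below are illustrative assumptions:

```python
from collections import Counter

def congestion_heatmap(samples, cell_size=1.0):
    """Bin (x, y) position samples into grid cells and count dwell.

    samples: iterable of (x, y) fixes in metres from a radar or
    visual tracking feed (assumed input shape).
    Returns a Counter mapping (col, row) grid cell -> sample count.
    """
    return Counter((int(x // cell_size), int(y // cell_size)) for x, y in samples)

def congested_cells(heatmap, threshold):
    """Cells whose dwell count meets the congestion threshold."""
    return [cell for cell, n in heatmap.most_common() if n >= threshold]

# Example: three fixes cluster in cell (0, 0); one stray fix elsewhere
samples = [(0.2, 0.4), (0.7, 0.1), (0.5, 0.9), (3.1, 2.8)]
hm = congestion_heatmap(samples)
print(congested_cells(hm, threshold=3))  # → [(0, 0)]
```

Mapping the flagged cells back onto the digital twin’s workstation layout would then point line planners at the specific operations to rebalance.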

    • Aging workforce (loss of tribal knowledge)

      With new technology coming to the shop-floor, skills of the current workforce get outdated quickly. Also, with any new hire comes the critical task of training and knowledge sharing from experienced hands. As organizations already face a shortage of manpower, releasing more hands to impart training to a larger workforce audience, possibly at different locations, becomes an even more daunting task.

      Fully realizing the difficulty of, and reluctance toward, documentation, organizations are increasingly adopting AR-based workforce training mapped to relevant learning and memory needs. These AR solutions capture the minutest actions executed by the expert on the shop-floor and can be played back by a novice in-situ as a step-by-step guide. Such tools simplify the knowledge transfer process and increase worker productivity while reducing costs.

      Further, in extraordinary situations such as the one we face at present, technologies such as AR offer effective, personalized support to field personnel without the need to fly specialists to multiple sites, keeping them safe while still accessible.

Key takeaways and Actionable Insights

The shape of the future workforce will be the result of complex, changing, and competing forces. Technology, globalization, demographics, social values, and the changing personal expectations of the workforce will continue to transform and disrupt the way businesses operate, increasing complexity and radically changing the where, when, and how of future work. While the need to constantly reskill and upskill the workforce will be enormous, using new-age techniques and technologies to enhance the effectiveness and efficiency of the existing workforce will come into the spotlight.

Figure 2: The Future IIoT Workforce

Organizations will increasingly be required to:

  1. Deploy data farming to dive deep and extract vast amounts of information and process insights embedded in production systems. Tapping into large reservoirs of ‘tribal knowledge’ and digitizing it for ingestion to data lakes is another task that organizations will have to consider.
  2. Augment existing operations systems such as SCADA, DCS, MES, and CMMS with new digital platforms, AI, AR/VR, big data, and machine learning to underpin and grow the world of work. While resources skilled in one or more of the new technologies will not be scarce, organizations will need to ‘acqui-hire’ talent and intellectual property, using specialists to integrate with existing systems and gain meaningful, actionable insights.
  3. Address privacy and data security concerns of the workforce, through the smart use of technologies such as radar and video feeds.

Nonetheless, digital enablement will need to be optimally used to tackle the new normal that the COVID pandemic has set forth in manufacturing—fluctuating demands, modular and flexible assembly lines, reduced workforce, etc.

Originally posted here.
