


Low-power microcontrollers are a suitable choice for object detection in various scenarios where energy efficiency and resource constraints are important considerations. Here are some key situations where low-power controllers are particularly advantageous:

IoT and Battery-powered Devices: Low-power microcontrollers are ideal for IoT devices and battery-powered applications. Their efficient power management and optimized hardware allow for extended battery life, making them well-suited for energy-constrained environments. Object detection in such devices can operate continuously without draining the battery quickly.

Embedded Systems: In resource-constrained embedded systems, where limited processing power and memory are available, low-power microcontrollers excel. They provide a balance between computational capabilities and power consumption, making them capable of running object detection algorithms with minimal resources.

Real-time Requirements: Real-time object detection applications demand quick and accurate processing of incoming data. Low-power microcontrollers designed for real-time processing can handle time-sensitive tasks efficiently. They offer fast response times, minimizing latency and ensuring real-time decision-making.

Edge Computing: Low-power microcontrollers are well-suited for edge computing scenarios, where data processing occurs close to the data source. Object detection at the edge reduces the need for sending large amounts of data to a remote server for analysis, enabling faster and more efficient decision-making at the device level.

Cost-sensitive Deployments: Low-power microcontrollers are generally more affordable compared to high-end processors. They are a cost-effective solution for object detection in applications where budget constraints exist, making them accessible for a wide range of projects and deployments.

Harsh Environments: Low-power microcontrollers often have enhanced ruggedness and can withstand harsh operating conditions. This makes them suitable for object detection in environments with temperature variations, vibrations, or other challenging conditions.

Scalability and Distributed Systems: Low-power microcontrollers offer scalability, enabling distributed systems with multiple connected devices. Object detection can be performed at each device, allowing for parallel processing and distributed decision-making, which is beneficial in large-scale deployments.

By leveraging low-power microcontrollers for object detection, developers can achieve energy efficiency, cost savings, real-time capabilities, and scalability in a variety of IoT, embedded, and edge computing applications. Careful consideration of the project requirements, power constraints, and processing needs will help determine if low-power microcontrollers are the right choice for a specific object detection implementation.
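To make the duty-cycling pattern concrete, here is a minimal sketch of a battery-friendly detection loop. It is only an outline under stated assumptions: capture_frame(), detect_objects(), raise_alert() and enter_low_power_sleep() are hypothetical placeholders for your camera driver, model runtime (for example, TensorFlow Lite Micro), alert channel and vendor power API.

```c
/* Minimal sketch of a duty-cycled object detection loop on a
 * battery-powered MCU. All four helpers are hypothetical placeholders;
 * wire them to your camera driver, model runtime, and sleep API. */
#include <stdbool.h>
#include <stdint.h>

#define FRAME_BYTES (96 * 96)   /* assumed small grayscale frame */

static uint8_t frame[FRAME_BYTES];

static void capture_frame(uint8_t *buf)        { (void)buf; }               /* camera driver stub */
static bool detect_objects(const uint8_t *buf) { (void)buf; return false; } /* model runtime stub */
static void raise_alert(void)                  { }                          /* radio/GPIO/log stub */
static void enter_low_power_sleep(uint32_t ms) { (void)ms; }                /* vendor sleep stub  */

int main(void)
{
    for (;;) {
        capture_frame(frame);          /* active phase: sensor + CPU awake */
        if (detect_objects(frame))
            raise_alert();
        enter_low_power_sleep(5000);   /* sleep between inferences to      */
    }                                  /* stretch battery life             */
}
```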

Read more…

Air Quality Monitoring

Air quality monitoring has become increasingly important over the years. Use cases for monitoring air quality exist both indoors and outdoors. Monitoring the air is also not just for human health: monitoring air quality with regard to temperature, humidity and more can be important for building maintenance, agriculture and any environment where the air affects its surroundings. Let’s walk through some of the core factors in smart air monitoring:

Accuracy: One of the most important factors of smart air quality monitoring is accuracy. It is important that the sensors used are able to detect even small changes in air quality. This means that the sensors need to be sensitive enough to detect even low levels of pollutants. Additionally, the sensors need to be reliable and consistent in their measurements.

Connectivity: Smart air quality monitoring systems need to be able to connect to the internet and transmit data in real-time. This is essential for providing up-to-date information about air quality to users. Additionally, it allows for the collection of large amounts of data, which can be used to identify trends and patterns in air quality.

Accessibility: Smart air quality monitoring systems need to be accessible to everyone, regardless of their technical ability. This means that they need to be easy to set up and use, with clear instructions provided. Additionally, they need to be affordable, so that they can be used by people at all income levels.

Integration: Smart air quality monitoring systems need to be able to integrate with other systems and devices. For example, they may need to be able to connect to smart home devices, such as thermostats, to automatically adjust settings based on air quality data. Additionally, they may need to integrate with public health systems to provide real-time data to medical professionals.

Battery Life: Smart air quality monitoring systems need to be able to operate for extended periods of time without needing to be recharged or replaced. This is especially important for outdoor sensors, which may be located in remote areas. Battery life can be extended by using low-power sensors and optimizing the power usage of the device. 

User Interface: Smart air quality monitoring systems need to have a user-friendly interface that allows users to quickly and easily access the information they need. This may include a mobile app or a web interface that displays air quality data in a clear and understandable format. Additionally, the interface should allow users to set up alerts when air quality reaches certain levels.

Data Visualization: Smart air quality monitoring systems need to be able to display data in a way that is easy to understand. This may include graphs, charts, and other visualizations that show trends over time. Additionally, the system should allow users to customize the way that data is displayed to best suit their needs.

Developers and engineers should consider these factors when planning and operating smart air quality monitoring systems so that those systems are effective.
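As a small illustration of the alerting idea above, the sketch below raises and clears an alert with hysteresis so that a noisy reading hovering near the limit does not flap between states. The PM2.5 thresholds are illustrative values, not regulatory ones.

```c
/* Sketch: threshold alerting with hysteresis for an air quality reading.
 * Thresholds are illustrative; replace printf with your alert channel. */
#include <stdbool.h>
#include <stdio.h>

#define ALERT_ON_UGM3  35.0   /* raise the alert above this PM2.5 level */
#define ALERT_OFF_UGM3 25.0   /* clear it only once below this level    */

static bool alert_active = false;

void process_reading(double pm25_ugm3)
{
    if (!alert_active && pm25_ugm3 > ALERT_ON_UGM3) {
        alert_active = true;
        printf("ALERT: PM2.5 at %.1f ug/m3\n", pm25_ugm3);
    } else if (alert_active && pm25_ugm3 < ALERT_OFF_UGM3) {
        alert_active = false;
        printf("Cleared: PM2.5 back to %.1f ug/m3\n", pm25_ugm3);
    }
}
```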

Read more…

IoT forensic science uses technical methods to solve problems related to the investigation of incidents involving IoT devices. Some of the technical ways that IoT forensic science solves problems include:

  1. Data Extraction and Analysis: IoT forensic science uses advanced software tools to extract data from IoT devices, such as logs, sensor readings, and network traffic. The data is then analyzed to identify relevant information, such as timestamps, geolocation, and device identifiers, which can be used to reconstruct events leading up to an incident.

  2. Reverse Engineering: IoT forensic science uses reverse engineering techniques to understand the underlying functionality of IoT devices. This involves analyzing the hardware and software components of the device to identify vulnerabilities, backdoors, and other features that may be relevant to an investigation.

  3. Forensic Imaging: IoT forensic science uses forensic imaging techniques to preserve the state of IoT devices and ensure that the data collected is admissible in court. This involves creating a complete copy of the device's storage and memory, which can then be analyzed without altering the original data.

  4. Cryptography and Data Security: IoT forensic science uses cryptography and data security techniques to ensure the integrity and confidentiality of data collected from IoT devices. This includes the use of encryption, digital signatures, and other security measures to protect data during storage, analysis, and transmission. (A minimal integrity-hashing sketch follows this list.)

  5. Machine Learning: IoT forensic science uses machine learning algorithms to automate the analysis of large amounts of data generated by IoT devices. This can help investigators identify patterns and anomalies that may be relevant to an investigation.
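To make the imaging and integrity points concrete, here is a small sketch that hashes a forensic image file with SHA-256 via OpenSSL's EVP API; comparing digests before and after analysis shows the copy was not altered. The image path is supplied by the caller.

```c
/* Sketch: SHA-256 a forensic image file so its integrity can be
 * verified later. Requires OpenSSL (link with -lcrypto). */
#include <stdio.h>
#include <openssl/evp.h>

int hash_image(const char *path, unsigned char out[EVP_MAX_MD_SIZE])
{
    FILE *f = fopen(path, "rb");
    if (!f) return -1;

    EVP_MD_CTX *ctx = EVP_MD_CTX_new();
    EVP_DigestInit_ex(ctx, EVP_sha256(), NULL);

    unsigned char buf[4096];
    size_t n;
    while ((n = fread(buf, 1, sizeof buf, f)) > 0)
        EVP_DigestUpdate(ctx, buf, n);          /* stream the whole image */

    unsigned int len = 0;
    EVP_DigestFinal_ex(ctx, out, &len);         /* 32-byte digest         */

    EVP_MD_CTX_free(ctx);
    fclose(f);
    return (int)len;
}
```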

IoT forensic science uses many more (and more advanced) technical methods to solve problems related to the investigation of incidents involving IoT devices. By leveraging these techniques, investigators can collect, analyze, and present digital evidence from IoT devices that can be used to reconstruct events and support legal proceedings.

Read more…

Voice-Enabled IoT Applications

The Internet of Things (IoT) has transformed the way we interact with technology. With the rise of voice assistants such as Alexa, Siri, and Google Assistant, voice-enabled IoT applications have become increasingly popular in recent years. Voice-enabled IoT applications have the potential to revolutionize the way we interact with our homes, workplaces, and even our cars. In this article, we will explore the benefits and challenges of voice-enabled IoT applications and their potential for the future.

Voice-enabled IoT applications allow users to control various smart devices using their voice. These devices include smart speakers, smart TVs, smart thermostats, and smart lights, to name a few. By using voice commands, users can turn on the lights, adjust the temperature, play music, and even order food without having to touch any buttons or screens. This hands-free approach has made voice-enabled IoT applications popular among users of all ages, from children to seniors.

One of the significant benefits of voice-enabled IoT applications is their convenience. With voice commands, users can control their smart devices while they are doing other tasks, such as cooking, cleaning, or exercising. This allows for a more seamless and efficient experience, without having to interrupt the task at hand. Additionally, voice-enabled IoT applications can be customized to suit individual preferences, allowing for a more personalized experience.

Another significant benefit of voice-enabled IoT applications is their potential for accessibility. For people with disabilities, voice-enabled IoT applications can provide an easier and more natural way to interact with their devices. By using their voice, people with limited mobility or vision can control their devices without having to rely on buttons or screens. This can improve their quality of life and independence.

However, there are also challenges associated with voice-enabled IoT applications. One of the significant challenges is privacy and security. As voice-enabled IoT applications are always listening for voice commands, they can potentially record and store sensitive information. Therefore, it is crucial for developers to implement strong security measures to protect users' privacy and prevent unauthorized access.

Another challenge is the potential for misinterpretation of voice commands. Accidental triggers or misinterpretation of voice commands can result in unintended actions, which can be frustrating for users. Additionally, voice-enabled IoT applications can struggle to understand certain accents, dialects, or languages, which can limit their accessibility to non-native speakers.

Despite these challenges, the potential for voice-enabled IoT applications is vast. In addition to smart homes, voice-enabled IoT applications can be used in a wide range of industries, including healthcare, retail, and transportation. In healthcare, voice-enabled IoT applications can be used to monitor patients' health conditions and provide real-time feedback. In retail, voice-enabled IoT applications can provide personalized shopping experiences and assist with inventory management. In transportation, voice-enabled IoT applications can be used to provide real-time traffic updates and navigation.

In conclusion, voice-enabled IoT applications have become increasingly popular in recent years, providing a more convenient and accessible way for users to interact with their devices. While there are challenges associated with voice-enabled IoT applications, their potential for revolutionizing various industries is vast. As technology continues to evolve, the future of voice-enabled IoT applications is sure to be exciting and full of potential.

Read more…


Wearable devices, such as smartwatches, fitness trackers, and health monitors, have become increasingly popular in recent years. These devices are designed to be worn on the body and can measure various physiological parameters, such as heart rate, blood pressure, and body temperature. Wearable devices can also track physical activity, sleep patterns, and even detect falls and accidents.

Body sensor networks (BSNs) take the concept of wearables to the next level. BSNs consist of a network of wearable sensors that can communicate with each other and with other devices. BSNs can provide real-time monitoring of multiple physiological parameters, making them useful for a range of applications, including medical monitoring, sports performance monitoring, and military applications.

Smart portable devices, such as smartphones and tablets, are also an essential component of the IoT ecosystem. These devices are not worn on the body, but they are portable and connected to the internet, allowing for seamless communication and data transfer. Smart portable devices can be used for a wide range of applications, such as mobile health, mobile banking, and mobile commerce.

The development of wearables, BSNs, and smart portable devices requires a unique set of skills and expertise, including embedded engineering. Embedded engineers are responsible for designing and implementing the hardware and software components that make these devices possible. Embedded engineers must have a deep understanding of electronics, sensors, microcontrollers, and wireless communication protocols.

One of the significant challenges of developing wearables, BSNs, and smart portable devices is power consumption. These devices are designed to be small, lightweight, and portable, which means that they have limited battery capacity. Therefore, embedded engineers must design devices that can operate efficiently with minimal power consumption. This requires careful consideration of power management strategies, such as sleep modes and low-power communication protocols.

Another challenge of developing wearables, BSNs, and smart portable devices is data management. These devices generate large volumes of data that need to be collected, processed, and stored. The data generated by these devices can be highly sensitive and may need to be protected from unauthorized access. Therefore, embedded engineers must design devices that can perform efficient data processing and storage while providing robust security features.

The communication protocols used by wearables, BSNs, and smart portable devices also present a significant challenge for embedded engineers. These devices use wireless communication protocols, such as Bluetooth and Wi-Fi, to communicate with other devices and the internet. However, the communication range of these protocols is limited, which can make it challenging to establish and maintain reliable connections. Embedded engineers must design devices that can operate efficiently in environments with limited communication range and intermittent connectivity.

Finally, the user interface and user experience of wearables, BSNs, and smart portable devices are critical for their success. These devices must be easy to use and intuitive, with a user interface that is designed for small screens and limited input methods. Embedded engineers must work closely with user experience designers to ensure that the devices are user-friendly and provide a seamless user experience.

Read more…

Wireless Sensor Networks and IoT

We all know how IoT has revolutionized the way we interact with the world. IoT devices are now ubiquitous, from smart homes to industrial applications. A significant portion of these devices are Wireless Sensor Networks (WSNs), which are a key component of IoT systems. However, designing and implementing WSNs presents several challenges for embedded engineers. In this article, we discuss some of the significant challenges that embedded engineers face when working with WSNs.

WSNs are a network of small, low-cost, low-power, and wirelessly connected sensor nodes that can sense, process, and transmit data. These networks can be used in a wide range of applications such as environmental monitoring, healthcare, industrial automation, and smart cities. WSNs are typically composed of a large number of nodes, which communicate with each other to gather and exchange data. The nodes are equipped with sensors, microprocessors, transceivers, and power sources. The nodes can also be stationary or mobile, depending on the application.

One of the significant challenges of designing WSNs is the limited resources of the nodes. WSNs are designed to be low-cost, low-power, and small, which means that the nodes have limited processing power, memory, and energy. This constraint limits the functionality and performance of the nodes. Embedded engineers must design WSNs that can operate efficiently with limited resources. The nodes should be able to perform their tasks while consuming minimal power to maximize their lifetime.

Another challenge of WSNs is the limited communication range. The nodes communicate with each other using wireless radio signals. However, the range of the radio signals is limited, especially in indoor environments where the signals are attenuated by walls and other obstacles. The communication range also depends on the transmission power of the nodes, which is limited to conserve energy. Therefore, embedded engineers must design WSNs that can operate reliably in environments with limited communication range.

WSNs also present a significant challenge for embedded engineers in terms of data management. WSNs generate large volumes of data that need to be collected, processed, and stored. However, the nodes have limited storage capacity, and transferring data to a centralized location may not be practical due to the limited communication range. Therefore, embedded engineers must design WSNs that can perform distributed data processing and storage. The nodes should be able to process and store data locally and transmit only the relevant information to a centralized location.

Security is another significant challenge for WSNs. The nodes in WSNs are typically deployed in open and unprotected environments, making them vulnerable to physical and cyber-attacks. The nodes may also contain sensitive data, making them an attractive target for attackers. Embedded engineers must design WSNs with robust security features that can protect the nodes and the data they contain from unauthorized access.

The deployment and maintenance of WSNs present challenges for embedded engineers. WSNs are often deployed in harsh and remote environments, making it difficult to access and maintain the nodes. The nodes may also need to be replaced periodically due to the limited lifetime of the power sources. Therefore, embedded engineers must design WSNs that are easy to deploy, maintain, and replace. The nodes should be designed for easy installation and removal, and the network should be self-healing to recover from node failures automatically.

Final thought: WSNs present significant challenges for embedded engineers, including limited resources, communication range, data management, security, and deployment and maintenance. Addressing these challenges requires innovative design approaches that can maximize the performance and efficiency of WSNs while minimizing their cost and complexity. Embedded engineers must design WSNs that can operate efficiently with limited resources, perform distributed data processing and storage, provide robust security features, and be easy to deploy and maintain.

Read more…

In battery-powered microcontroller applications, energy savings are critical. Reducing current consumption extends the time between battery charges or replacements. Microcontroller software design should follow these guidelines to reduce current consumption:

  • Use the appropriate energy mode
  • Utilize low-energy peripherals
  • Close unused modules/peripherals
  • Disable clocks to unused modules/peripherals
  • Reduce the clock frequency
  • Reduce the operating voltage
  • Optimize the code

1. Use the appropriate energy mode

The most effective way to save energy is to spend as little time as possible in active mode.

Five custom energy modes allow the microcontroller to operate in an energy-optimal state at any given time.

2. Utilize low-energy peripherals

All peripherals are designed with energy consumption in mind and are available across a range of energy modes. Whenever possible, select the appropriate peripheral and let it do the work while the CPU sleeps (or performs other tasks).

A few examples:

Use the RTC and sleep instead of busy-waiting in a loop

Transfer data between memory and the U(S)ART using DMA

Monitor sensors with the Low Energy Sensor Interface (LESENSE) instead of waking up to poll
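As a sketch of the first example, assuming a Silicon Labs EFM32 part and its emlib API (em_cmu.h, em_rtc.h, em_emu.h; exact peripheral names and clock plumbing vary by device family), a periodic task can sleep in EM2 and let the RTC wake it instead of spinning in a delay loop:

```c
/* Sketch: RTC wake-up instead of busy-waiting (emlib API assumed). */
#include "em_cmu.h"
#include "em_emu.h"
#include "em_rtc.h"

void RTC_IRQHandler(void)
{
  RTC_IntClear(RTC_IFC_COMP0);             /* acknowledge the wake-up   */
}

void sleep_one_second(void)
{
  CMU_ClockSelectSet(cmuClock_LFA, cmuSelect_LFXO);
  CMU_ClockEnable(cmuClock_CORELE, true);  /* low-energy clock domain   */
  CMU_ClockEnable(cmuClock_RTC, true);

  RTC_Init_TypeDef init = RTC_INIT_DEFAULT;
  RTC_CompareSet(0, 32768);                /* 1 s at a 32.768 kHz LFXO  */
  RTC_IntEnable(RTC_IEN_COMP0);
  NVIC_EnableIRQ(RTC_IRQn);
  RTC_Init(&init);

  EMU_EnterEM2(true);                      /* CPU sleeps; RTC wakes it  */
}
```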

3. Close unused modules/peripherals

In every microcontroller application, there are modules/peripherals that are not in use at any given time. Turn these off to save energy. This also applies to the CPU itself: if the core is idle (for example, waiting for data reception), you can turn it off and save energy. This is one of the main features of the different EFM32 energy modes. Remember to consider start and stop conditions when disabling peripherals. For example, if it is completely turned off, the ADC needs some time to warm up before a conversion can be initiated. Similarly, a USART transmission in progress should be allowed to complete so that the receiver's shift register is not left in an indeterminate state.

4. Disable clocks to unused modules/peripherals

Even if a module/peripheral is disabled (for example, TIMER0 is stopped), the circuits in the module will still consume energy if its clock is running. Therefore, it is important to turn off the clocks of all unused modules.
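A minimal sketch, again assuming the emlib API, with TIMER0 purely as an illustration:

```c
/* Sketch: stop an unused peripheral, then gate its clock. */
#include "em_cmu.h"
#include "em_timer.h"

void timer0_off(void)
{
  TIMER_Enable(TIMER0, false);              /* stop the counter first   */
  CMU_ClockEnable(cmuClock_TIMER0, false);  /* then gate its bus clock  */
}
```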

5. Reduce the clock frequency

Current consumption scales with clock frequency. Generally speaking, a task or peripheral should run at the lowest frequency that still meets its requirements.

For example, if a timer requests an interrupt every few milliseconds, it should be clocked at a few kHz instead of several MHz. This is easily achieved with the prescalers in the CMU. Similarly, one way to choose the CPU frequency is to set it just low enough that the CPU is never idle (with some headroom added). In many cases, however, it is better to finish the current task quickly and then enter the appropriate energy mode until a new task must be handled.
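For the timer case, a hedged emlib sketch: prescaling the timer clock by 1024 lets a roughly 10 ms interrupt run from a kHz-range tick instead of a MHz-range one (the numbers assume a 14 MHz peripheral clock and are illustrative):

```c
/* Sketch: run TIMER0 from a heavily prescaled clock. */
#include "em_cmu.h"
#include "em_timer.h"

void timer0_slow_tick(void)
{
  CMU_ClockEnable(cmuClock_TIMER0, true);

  TIMER_Init_TypeDef init = TIMER_INIT_DEFAULT;
  init.prescale = timerPrescale1024;        /* 14 MHz -> ~13.7 kHz      */
  TIMER_Init(TIMER0, &init);

  TIMER_TopSet(TIMER0, 137);                /* ~10 ms period            */
  TIMER_IntEnable(TIMER0, TIMER_IEN_OF);    /* interrupt on overflow    */
  NVIC_EnableIRQ(TIMER0_IRQn);
}
```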

6. Reduce the operating voltage

By reducing the operating voltage, energy consumption is reduced further. The Gecko series of microcontrollers can operate at low voltages.

The absolute minimum values are listed in each device's datasheet.

7. Optimize the code

Optimizing code usually reduces energy consumption by increasing the speed and efficiency of programs. A faster program spends less time in active mode, and in a more efficient program, each task executes fewer instructions. A simple way to optimize code is to build it in release mode with the highest optimization settings rather than in debug mode.

8. Energy modes

The EFR32 provides features that make it easier to configure low-power peripherals and switch between energy modes.
Let's take a look at the modes:

8.1 Run mode (EM0)

This is the default mode. In this mode, the CPU fetches and executes instructions from flash or RAM, all peripherals may be enabled, and the operating power consumption is only 63 μA/MHz.

8.2 Sleep mode (EM1)

In sleep mode, the CPU's clock is disabled. All peripherals, as well as RAM and flash memory, are available. Multiple operations can be executed automatically by using the Peripheral Reflex System (PRS) and DMA. For example, a timer can trigger an ADC conversion at regular intervals. When the conversion is complete, the result is moved to RAM by the DMA. When a given number of conversions have been performed, the DMA can request an interrupt to wake up the CPU. Sleep mode is entered by executing the "Wait for Interrupt (WFI)" or "Wait for Event (WFE)" instruction. Use the function EMU_EnterEM1() to enter sleep mode.

8.3 Deep sleep mode (EM2)

In deep sleep mode, no high-frequency oscillator is running, which means only asynchronous and low-frequency peripherals are available. This mode further increases energy efficiency while still allowing a range of activities, including:

  • The Low Energy Sensor Interface (LESENSE) monitoring sensors
  • The LCD driver driving an LCD display
  • The LEUART receiving or transmitting one byte of data
  • I2C performing an address match check
  • The RTC waking up the CPU after a programmed period
  • The Analog Comparator (ACMP) comparing a voltage to a programmed threshold
  • GPIO checking for transitions on an I/O line

Deep sleep mode is entered by first setting the sleep depth in the System Control Register (SCR) and then executing the "Wait for Interrupt (WFI)" or "Wait for Event (WFE)" instruction. Use the function EMU_EnterEM2() to enter deep sleep mode.

8.4 Stop mode (EM3)

Stop mode differs from deep sleep mode in that no oscillator (except the ULFRCO or AUXHFRCO) is running.
Modules/functions, if present on the device, can still be used in stop mode when the appropriate clock source remains active:

  • I2C address check
  • Supervision
  • GPIO interrupt
  • Pulse counter (PCNT)
  • Low Energy Timer (LETIMER)
  • Low Energy Sensor Interface (LESENSE)
  • Real-time counter and calendar (RTCC)
  • Analog comparator (ACMP)
  • Voltage monitoring (VMON)
  • Ultra-low energy timer/counter (CRYOTIMER)
  • Temperature sensor

Stop mode is the same as deep sleep mode, except that the low-frequency oscillators must be manually disabled.

8.5 Hibernate mode (EM4H)

This mode is called hibernate mode on the EFM32 and Wireless SoC Series 1 and is entered using dedicated control register logic. Write the sequence 0x2, 0x3, 0x2, 0x3, 0x2, 0x3, 0x2, 0x2, 0x2, 0x2, 0x2 to the EM4ENTRY bit field in the EMU_EM4CTRL register; the device enters hibernate mode when the EM4STATE bit is set, otherwise it enters shutdown mode as usual. In hibernate mode, most peripherals are turned off to reduce leakage power, though a few selected peripherals remain available. System memory and registers do not retain their values, but the GPIO pad state and RTCC RAM are retained. Waking up from EM4 hibernate requires a reset of the system and a return to EM0 activity. Wake-up from hibernate mode is possible from the same sources as shutdown mode (a power cycle, nRESET, and user-specified pin sources), as well as:

  • RTCC
  • CRYOTIMER
  • A temperature measurement outside the defined range (TEMPCHANGE)

8.6 Shutdown mode (EM4S)

The shutdown mode is the lowest energy state of the EFM32 Series 0, EFM32 or Wireless SoC Series 1 microcontrollers. The power is turned off to most of the device, including internal RAM, and all clocks are disabled. Only the recovery logic and, if explicitly enabled, the GPIO pad state are retained. Waking up from shutdown mode always requires a reset. When resetting from the RESETn pin or through one of a set of device-specific pins explicitly enabled for this purpose, the current draw in shutdown mode can be as low as 20 nA. Some devices offer alternatives to pin-based wake-up; however, waking from these sources requires a low-frequency oscillator to remain active, which increases the current draw.

Read more…

I remember when the Arduino Uno first came out circa 2010. Even though this 8-bit processor employs only a 16-MHz clock and offers only 32KB of Flash memory and 2KB of RAM, I still use these little rascals in a lot of my hobby projects to this day.

Of course, things have moved on since those days of yore. For example, one of the latest and greatest offerings from the folks at Arduino Pro is the Portenta X8.

Portenta X8 (Image source: Arduino)

Oh, my goodness gracious me! Have you seen this little beauty, which is described as being an “Industrial-grade, secure system-on-module (SOM) with outstanding computational density”?

What we are talking about here is something that’s only around the size of a stick of chewing gum (66.04 x 25.40 mm) while boasting nine processor cores and coming pre-loaded with the Linux operating system (OS).

According to the web, it has a quad-core Cortex-A53 at up to 1.8 GHz per core + a Cortex-M4 at up to 400 MHz, along with a dual-core Cortex-M7 at up to 480 MHz + another Cortex-M4 at up to 240 MHz.

Either I’m losing the ability to count, or the above adds up to only eight cores. Where’s the missing core?

Well, I am in a great position to learn the answer to this conundrum, because I’m going to be hosting a webinar on this little scamp — Arduino Portenta X8: Superpower Your Linux Applications with Real-Time Execution — tomorrow as I pen these words.

During this webinar, which will be presented by IoT Central, I will be chatting with Andrea Richetta, who is the Head of Customer Success at Arduino Pro, and who will be introducing the Portenta X8 and answering all of our questions.

This 1-hour webinar will commence at 10:00am USA Central Time (so that’s 8:00am Pacific Time and 11:00am Eastern Time). And, speaking of time, now would be a great time to register before all of the good seats are taken.

I’ll be the one in the Hawaiian shirt. Dare I hope to see you there? Register here.

Read more…

How does IoT help in retail? Continuous and seamless communication is now a reality between people, processes and things. IoT has been enabling retailers to connect with people and businesses and gain useful insight into product performance and into how people engage with those products.

Importance of IoT in Retail

  • It helps improve customer experience in new ways and helps brick and mortar shops compete with their online counterparts by engaging customers in different ways.
  • IoT can track customer preferences, analyze their habits and share relevant information with the marketing teams and help improve the product or brand features and design and keep the customer updated on new products, delivery status etc.
  • Using IoT, retailers can increase efficiency and profitability in various ways.
  • IoT can significantly improve the overall customer experience, like automated checkouts and integration with messaging platforms and order systems.
  • It helps increase efficiency in transportation and logistics by reducing the time to deliver goods to market or store. It helps in vehicle management, and tracking deliveries. This helps in reducing costs, improving the bottom line and increasing customer satisfaction.
  • Inventory management becomes easier with IoT. Tracking inventory is much easier and simpler from the stocking of goods to initiating a purchase.
  • It helps increase operational efficiency in warehouses, by optimizing temperature controls, improving maintenance, and managing the warehouse. 

Use Cases of IoT in Retail

  1. IoT is used in Facility management to ensure day-to-day areas are clean and can be used to monitor consumable supplies levels. It can be used to monitor store environments like temperature, lighting, ventilation and refrigeration. IoT can identify key areas that can provide a complete 360 degrees view of facility management.
  2. It can help in tracking the number of persons entering a facility. This is especially useful because of the pandemic situation, to ensure that no overcrowding takes place.
    Occupancy sensors provide vital data on store traffic patterns and also on the time spent in any particular area. This helps retailers with better planning and product placement strategies. This helps in guided selling with more effective display setups, layouts, and space management.
  3. IoT helps in a big way for Supply chain and logistics, by providing information on the stock levels. 
  4. IoT helps in asset tracking in items like shopping carts and baskets. Sensors can ensure that location data is available for all carts making retrieval easy. It can help lock carts if they are taken out of location.
  5. IoT devices can and are being used to personalize the user experience. Bluetooth beacons are used to send personalized real-time alerts to phones when the customer is near an aisle or a store. This can prompt a customer to enter the store or look at the aisle area and take advantage of offers, etc. IoT-based beacons help Target collect user data and send hyper-personalized content to customers.
  6. Smart shelves are another example of innovative IoT ideas. Maintaining shelves to refill products or ensure correct items are placed on the right shelves is a time-consuming task. Smart shelves automate these tasks easily. They can help save time and resolve manual errors.

Businesses should utilize new technologies to revolutionize the retail sector. Digitalization or digital transformation of brick-and-mortar stores is not a new concept. With every industry wanting to improve its services and facilities and trying to stay ahead of the competition, digitalization in the retail industry is playing a big role in this transformation. To summarize, digitalization enhances data collection, enables data-driven customer insights, gives a better customer experience, and increases profits and productivity. It encourages a digital culture.

Read more…

By Bee Hayes-Thakore

The Android Ready SE Alliance, announced by Google on March 25th, paves the path for tamper-resistant, hardware-backed security services. Kigen is bringing the first secure iSIM OS, along with our GSMA-certified eSIM OS and personalization services, to support fast adoption of emerging security services across smartphones, tablets, WearOS, Android Auto Embedded and Android TV.

Google has been advancing their investment in how tamper-resistant secure hardware modules can protect not only Android and its functionality, but also protect third-party apps and secure sensitive transactions. The latest android smartphone device features enable tamper-resistant key storage for Android Apps using StrongBox. StrongBox is an implementation of the hardware-backed Keystore that resides in a hardware security module.

To accelerate adoption of new Android use cases with stronger security, Google announced the formation of the Android Ready SE Alliance. Secure Element (SE) vendors are joining hands with Google to create a set of open-source, validated, and ready-to-use SE Applets. On March 25th, Google launched the General Availability (GA) version of StrongBox for SE.


Hardware based security modules are becoming a mainstay of the mobile world. Juniper Research’s latest eSIM research, eSIMs: Sector Analysis, Emerging Opportunities & Market Forecasts 2021-2025, independently assessed eSIM adoption and demand in the consumer sector, industrial sector, and public sector, and predicts that the consumer sector will account for 94% of global eSIM installations by 2025. It anticipates that established adoption of eSIM frameworks from consumer device vendors such as Google, will accelerate the growth of eSIMs in consumer devices ahead of the industrial and public sectors.


Consumer sector will account for 94% of global eSIM installations by 2025

Juniper Research, 2021.

Expanding the secure architecture of trust to consumer wearables, smart TV and smart car

What’s more? A major development is that now this is not just for smartphones and tablets, but also applicable to WearOS, Android Auto Embedded and Android TV. These less traditional form factors have huge potential beyond being purely companion devices to smartphones or tablets. With the power, size and performance benefits offered by Kigen’s iSIM OS, OEMs and chipset vendors can consider the full scope of the vast Android ecosystem to deliver new services.

This means new secure services and innovations around:

🔐 Digital keys (car, home, office)

🛂 Mobile Driver’s License (mDL), National ID, ePassports

🏧 eMoney solutions (for example, Wallet)

How is Kigen supporting Google’s Android Ready SE Alliance?

The alliance was created to make discrete tamper-resistant, hardware-backed security the lowest common denominator for the Android ecosystem. A major goal of this alliance is to enable consistent, interoperable, and demonstrably secure applets across the Android ecosystem.

Kigen believes that enabling the broadest choice and interoperability is fundamental to the architecture of digital trust. Our secure, standards-compliant eSIM and iSIM OS, and secure personalization services are available to all chipset or device partners in the Android Ready SE Alliance to leverage the benefits of iSIM for customer-centric innovations for billions of Android users quickly.

Vincent Korstanje, CEO of Kigen

Kigen’s support for the Android Ready SE Alliance will allow our industry partners to easily leapfrog to the enhanced security and power efficiency benefits of iSIM technology or choose a seamless transition from embedded SIM so they can focus on their innovation.

We are delighted to partner with Kigen to further strengthen the security of Android through StrongBox via Secure Element (SE). We look forward to widespread adoption by our OEM partners and developers and the entire Android ecosystem.

Sudhi Herle, Director of Android Platform Security 

In the near term, the Google team is prioritizing and delivering the following Applets in conjunction with corresponding Android feature releases:

  • Mobile driver’s license and Identity Credentials
  • Digital car keys

Kigen brings the ability to bridge the physical embedded security hardware to a fully integrated form factor. Our Kigen standards-compliant eSIM OS (version 2.2 eUICC OS) is available to support chipsets and device makers now. This announcement is a start to what will bring a whole host of new and exciting trusted services offering a better experience for users on Android.

Kigen’s eSIM (eUICC) OS brings


The smallest operating system, allowing OEMs to select compact, cost-effective hardware to run it on.

Kigen OS offers the highest level of logical security when employed on any SIM form factor, including a secure enclave.

On top of Kigen OS, we have a broad portfolio of Java Card™ Applets to support your needs for the Android SE Ready Alliance.

Kigen’s Integrated SIM or iSIM (iUICC) OS furthers this advantage


Integrated at the heart of the device and securely personalized, iSIM brings significant size and battery life benefits to cellular IoT devices. iSIM can act as a root of trust for payment, identity, and critical infrastructure applications.

Kigen’s iSIM is flexible enough to support dual-SIM capability through a single profile or remote SIM provisioning mechanisms, with the latter enabling out-of-the-box connectivity and secure, remote profile management.

For smartphones, set-top boxes, Android Auto applications, car displays, Chromecast or Google Assistant-enabled devices, iSIM can offer significant benefits for incorporating artificial intelligence at the edge.

Kigen’s secure personalization services to support fast adoption

SIM vendors have in-house capabilities for data generation but the eSIM and iSIM value chains redistribute many roles and responsibilities among new stakeholders for the personalization of operator credentials along different stages of production or over-the-air when devices are deployed.

Kigen can offer data generation as a service to vendors new to the ecosystem.

Partner with us to provide cellular chipset and module makers with the strongest security and performance for integrated SIM, accelerating these new use cases.

Security considerations for eSIM and iSIM enabled secure connected services

Designing a secure connected product requires considerable thought and planning and there really is no ‘one-size-fits-all’ solution. How security should be implemented draws upon a multitude of factors, including:

  • What data is being stored or transmitted between the device and other connected apps?
  • Are there regulatory requirements for the device? (e.g., PCI DSS, HIPAA, FDA)
  • What are the hardware or design limitations that will affect security implementation?
  • Will the devices be manufactured in a site accredited by all of the necessary industry bodies?
  • What is the expected lifespan of the device?

End-to-end ecosystem and services thinking needs to be a design consideration from the very early stages, especially when considering the strain on battery consumption in devices such as wearables, smartwatches and fitness devices, as well as portable devices that are part of connected consumer vehicles.

Originally posted here.

Read more…

by Carsten Gregersen

With how fast the IoT industry is growing, it’s paramount your business isn’t left behind.

IoT technology has brought a ton of benefits and makes systems more efficient and easier to manage. As a result, it’s no surprise that more businesses are adopting IoT solutions. On top of that, businesses starting new projects have the slight advantage of buying all new technology and, therefore, not having to deal with legacy systems. 

On the other hand, if you have an already operational legacy system and you want to implement IoT, you may think you have to buy entirely new technology to get it online, right? Not necessarily. After all, if your legacy systems are still functional and your staff is comfortable with them, why should you waste all of that time and money?

Legacy systems can still bend to your will and be used for adopting IoT. Sticking rather than twisting can help your business save money on your IoT project.

In this blog, we’ll go over the steps you would need to follow for integrating IoT technology into your legacy systems and the different options you have to get this done.

1. Analyze Your Current Systems

First things first, take a look at your current system and take note of their purpose, the way they work, the type of data that they collect, and the way they could benefit by communicating with each other.

This step is important because it will allow you to plan out IoT integration more efficiently. When analyzing your current systems make sure you focus on these key aspects:

  • Automation – See how automation is currently accomplished and what other aspects should be automated.
  • Efficiency – What aspects are routinely tedious or slow and could become more efficient?
  • Data – How it’s taken, stored, and processed, and how it could be used better
  • Money – Analyze how much some processes cost and keep them in mind to know what aspects could be done for cheaper with IoT
  • Computing – The way data is processed, whether it be cloud, edge, or hybrid.

Following these steps will help you know your project in and out and apply IoT in the areas that truly matter.

2. Plan for IoT Integration

In order to integrate IoT into your legacy systems, you must get everything in order. 

In order to successfully integrate IoT into your system, you will need strong planning, design, and implementation phases. Steps to follow include:

  • Decide what IoT hardware is going to be needed
  • Set a budget taking software, hardware, and maintenance into account
  • Decide on a communication protocol
  • Develop software tools for interacting with the system
  • Decide on a security strategy

This process can be daunting if you don't know how IoT works, but by following the right tutorials and developing with the right tools, your IoT project is easily realizable.

Nabto has tools that can not only help you set up an IoT project but also add legacy systems and newer IoT devices to it.

Here are several ways in which we can help get your legacy systems IoT ready. 

  • You can integrate the Nabto SDK to add IoT remote control access to your devices.
  • Use the Nabto application to move data from one network to another – otherwise known as TCP tunneling.
  • Add secure remote access to your existing solutions. 
  • Build mobile apps for remote control of embedded devices with our IoT app solution.

3. Add IoT Sensors to Existing Hardware

IoT has the capability to automate, control, and make systems more efficient. Therefore, interconnecting your legacy systems to allow for communication is a great idea.

There’s a high chance your legacy systems don’t currently have the ability to sense or communicate data. However, adding new IoT sensors can give them these capabilities.

IoT sensors are small devices that can detect when something changes. They capture information and send it to a main computer over the internet to be processed or to trigger commands. These sensors could measure (but are not limited to):

  • Temperature
  • Humidity
  • Pressure
  • Gyroscope
  • Accelerometer

These sensors are cheap and easy to install; therefore, adding them to your existing legacy systems can be the simplest and quickest way to get those systems communicating over the internet.

Set up which inputs the sensor should respond to and under what conditions, and what it should do with the collected data. You could be surprised by the benefits that making a simple device to collect data can have for your project!
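As a minimal sketch of that data path, the snippet below pushes a single reading to a collector over plain TCP. The host, port and JSON shape are placeholders; a real deployment would more likely use MQTT or HTTPS.

```c
/* Sketch: send one sensor reading to a hypothetical TCP collector. */
#include <stdio.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int send_reading(const char *host, int port, float temp_c)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) return -1;

    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_port   = htons(port);
    inet_pton(AF_INET, host, &addr.sin_addr);

    if (connect(fd, (struct sockaddr *)&addr, sizeof addr) < 0) {
        close(fd);
        return -1;
    }

    char msg[64];
    int n = snprintf(msg, sizeof msg, "{\"temp_c\": %.2f}\n", temp_c);
    write(fd, msg, n);                 /* fire-and-forget for brevity */
    close(fd);
    return 0;
}
```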

4. Connect Existing PLCs to the Internet

If you already have an automated system managed by a PLC (Programmable Logic Controller), devices already share data with each other. Therefore, the next step is to get them online.

With access to the internet, these systems can be controlled remotely from anywhere in the world. Data can be accessed, modified, and analyzed more easily. On top of that, updates can be pushed globally at any time.

Given that some PLCs utilize proprietary protocols and have a weird way of making devices communicate with each other, an IoT gateway is the best way to take the PLC to the internet.

An IoT gateway is a device that acts as a bridge between IoT devices and the cloud, and allows for communication between them. This allows you to implement IoT to a PLC without having to restructure it or change it too much.
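As an illustration, a small gateway process might poll the PLC's holding registers over Modbus/TCP and hand the values to an uplink. The sketch below assumes the open-source libmodbus library; the IP address, register offsets and counts are hypothetical.

```c
/* Sketch: poll a PLC via Modbus/TCP with libmodbus (link with
 * -lmodbus); replace printf with your cloud uplink. */
#include <stdio.h>
#include <stdint.h>
#include <modbus.h>

int poll_plc(const char *ip)
{
    modbus_t *ctx = modbus_new_tcp(ip, 502);   /* standard Modbus port */
    if (!ctx) return -1;
    if (modbus_connect(ctx) == -1) {
        modbus_free(ctx);
        return -1;
    }

    uint16_t regs[8];
    if (modbus_read_registers(ctx, 0, 8, regs) == 8) {
        for (int i = 0; i < 8; i++)
            printf("reg[%d] = %u\n", i, regs[i]);
    }

    modbus_close(ctx);
    modbus_free(ctx);
    return 0;
}
```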

5. Connect Legacy Systems Using an I/O Port

A lot of times a legacy system has some kind of interface for data input/output. Sometimes, this is implemented for debugging when the product was developed. However, at other times, this is done to make it possible for service organizations to be able to interface with products in the field and to help customers with setup and/or debug problems.

These debug ports are similar to real serial ports, such as RS-485 or RS-232. That said, they can also be raw UART, SPI, or I2C. What's more, the protocol on top of the serial connection is usually proprietary.

This kind of interface is great: it allows a "black box" to be created, with a physical interface matching the legacy system and firmware running on the box that translates "internet" requests into the proprietary protocol of the legacy system. In addition, this new system can serve as the design for newer internet-accessible versions of the product, simply by folding the black box into the internal legacy design.
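A bare-bones sketch of the serial side of such a black box, using standard POSIX calls: open the legacy debug UART raw and relay its bytes to an already-connected TCP socket. The device path and baud rate are assumptions, and translating the proprietary protocol is left to protocol-specific code.

```c
/* Sketch: raw UART-to-TCP relay for a legacy debug port. */
#include <fcntl.h>
#include <termios.h>
#include <unistd.h>

int open_legacy_uart(const char *dev)          /* e.g. "/dev/ttyUSB0" */
{
    int fd = open(dev, O_RDWR | O_NOCTTY);
    if (fd < 0) return -1;

    struct termios tio;
    tcgetattr(fd, &tio);
    cfmakeraw(&tio);                           /* raw 8N1, no echo    */
    cfsetspeed(&tio, B115200);                 /* assumed baud rate   */
    tcsetattr(fd, TCSANOW, &tio);
    return fd;
}

void bridge(int uart_fd, int tcp_fd)
{
    char buf[256];
    ssize_t n;
    while ((n = read(uart_fd, buf, sizeof buf)) > 0)
        write(tcp_fd, buf, n);                 /* relay legacy bytes  */
}
```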

Bottom Line

Getting your legacy systems to work in IoT is not as much of a challenge as you might have initially thought.

Following some fairly simple strategies can let you set them up relatively quickly. However, don't forget the planning phase for your IoT strategy and deciding how it's going to be implemented in your own legacy system. This will allow you to streamline the process even more and let you take full advantage of all the benefits that IoT brings to your project.

Originally posted here.

Read more…

The head is surely the most complex group of organs in the human body, but also the most delicate. The assessment and prevention of risks in the workplace remains the first priority approach to avoid accidents or reduce the number of serious injuries to the head. This is why wearing a hard hat in an industrial working environment is often required by law and helps to avoid serious accidents.

This article will give you an overview of how to detect that the wearing of a helmet is well respected by all workers using a machine learning object detection model.

For this project, we have been using:

  • Edge Impulse Studio to acquire some custom data, visualize the data, train the machine learning model and validate the inference results.
  • Part of this public dataset from Roboflow, where the images containing the smallest bounding boxes have been removed.
  • Part of the Flickr-Faces-HQ (FFHQ) dataset (under Creative Commons BY 2.0 license) to rebalance the classes in our dataset.
  • Google Colab to convert the Yolo v5 PyTorch format from the public dataset to Edge Impulse Ingestion format.
  • A Raspberry Pi, NVIDIA Jetson Nano, or any Intel-based MacBook to deploy the inference model.

Before we get started, here are some insights into the benefits and drawbacks of using a public dataset versus collecting your own.

Using a public dataset is a nice way to start developing your application quickly, validate your idea and check the first results. But you often get disappointed with the results when testing on your own data and in real conditions. As such, for very specific applications, you might spend much more time trying to tweak an open dataset than you would collecting your own. Also, remember to always make sure that the license suits your needs when using a dataset you found online.

On the other hand, collecting your own dataset can take a lot of time; it is a repetitive and often annoying task. But it gives you the possibility to collect data that will be as close as possible to your real-life application, with the same lighting conditions, the same camera or the same angle, for example. Therefore, your accuracy in your real conditions will be much higher.

Using only custom data can indeed work well in your environment, but it might not give the same accuracy in another environment; generalization is thus harder.

The dataset which has been used for this project is a mix of open data, supplemented by custom data.

First iteration, using only the public datasets

At first, we tried to train our model using only a small portion of this public dataset: 176 items in the training set and 57 items in the test set, where we took only images containing a bounding box bigger than 130 pixels; we will see why later.


If you go through the public dataset, you can see that the entire dataset is severely lacking “head” data samples. The dataset is therefore considered imbalanced.

Several techniques exist to rebalance a dataset, here, we will add new images from Flicker-Faces-HQ (FFHQ). These images do not have bounding boxes but drawing them can be done easily in the Edge Impulse Studio. You can directly import them using the uploader portal. Once your data has been uploaded, just draw boxes around the heads and give it a label as below: 


Now that the dataset is more balanced, with both images and bounding boxes of hard hats and heads, we can create an impulse, which is a mix of digital signal processing (DSP) blocks and training blocks:


In this particular object detection use case, the DSP block will resize an image to fit the 320x320 pixels needed for the training block and extract meaningful features for the Neural Network. Although the extracted features don’t show a clear separation between the classes, we can start distinguishing some clusters:


To train the model, we selected the Object Detection training block, which fine tunes a pre-trained object detection model on your data. It gives a good performance even with relatively small image datasets. This object detection learning block relies on MobileNetV2 SSD FPN-Lite 320x320.    

According to Daniel Situnayake, co-author of the TinyML book and founding TinyML engineer at Edge Impulse, this model “works much better for larger objects—if the object takes up more space in the frame it’s more likely to be correctly classified.” This was one of the reasons why we got rid of the images containing the smallest bounding boxes in our import script.

After training the model, we obtained 61.6% accuracy on the training set and 57% accuracy on the testing set. You might also note a huge accuracy difference between the quantized version and the float32 version. However, during Linux deployment, the default model is the unoptimized version, so we will focus only on the float32 version in this article.


This accuracy is not satisfying, and it tends to have trouble detecting the right objects in real conditions:


Second iteration, adding custom data

On the second iteration of this project, we have gone through the process of collecting some of our own data. A very useful and handy way to collect some custom data is using our mobile phone. You can also perform this step with the same camera you will be using in your factory or your construction site; this will be even closer to the real conditions and therefore work best for your use case. In our case, we have been using a white hard hat when collecting data. For example, if your company uses yellow ones, consider collecting your data with the same hard hats.

Once the data has been acquired, go through the labeling process again and retrain your model. 


We obtain a model that is slightly more accurate when looking at the training performance. However, in real conditions, the model works far better than the previous one.


Finally, to deploy your model on your Raspberry Pi, NVIDIA Jetson Nano or Intel-based MacBook, just follow the instructions provided in the links. The command line interface `edge-impulse-linux-runner` will create a lightweight web interface where you can see the results.


Note that the inference runs locally and you do not need any internet connection to detect your objects. Last but not least, the trained models and the inference SDK are open source. You can use them, modify them and integrate them into a broader application matching your specific needs, such as stopping a machine when a head is detected for more than 10 seconds.

This project has been publicly released, feel free to have a look at it on Edge Impulse studio, clone the project and go through every steps to get a better understanding: https://studio.edgeimpulse.com/public/34898/latest

The essence of this use case is that Edge Impulse allows you, with very little effort, to develop industry-grade solutions in the health and safety context. These can now be embedded in bigger industrial control and automation systems with a consistent and stringent focus on machine operations linked to H&S-compliant measures. Pre-trained models, which can later be easily retrained in the final industrial context as a “calibration” step, make this a customizable solution for your next project.

Originally posted on the Edge Impulse blog by Louis Moreau - User Success Engineer at Edge Impulse & Mihajlo Raljic - Sales EMEA at Edge Impulse

Read more…

Today the world is obsessed with the IoT, as if this is a new concept. We've been building the IoT for decades, but it was only recently some marketing "genius" came up with the new buzz-acronym.

Before there was an IoT, before there was an Internet, many of us were busy networking. For the Internet itself was a (brilliant) extension of what was already going on in the industry.

My first experience with networking was in 1971 at the University of Maryland. The school had a new computer, a $10 million Univac 1108 mainframe. This was a massive beast that occupied most of the first floor of a building. A dual-processor machine, it was transistorized, though the control console did have some ICs. Rows of big tape drives mirrored the layman's idea of computers in those days. Many dishwasher-sized disk drives were placed around the floor and printers, card readers and other equipment were crammed into every corner. Two Fastrand drum memories, each consisting of a pair of six-foot long counterrotating drums, stored a whopping 90 MB each. Through a window you could watch the heads bounce around.

The machine was networked. It had a 300 baud modem with which it could contact computers at other universities. A primitive email system let users create mail which was queued till nightfall. Then, when demands on the machine were small, it would call the appropriate remote computer and forward mail. The system operated somewhat like today's "hot potato" packets, where the message might get delivered to the easiest machine available, which would then attempt further forwarding. It could take a week to get an email, but at least one saved the $0.08 stamp that the USPS charged.

The system was too slow to be useful. After college I lost my email account but didn't miss it at all.

By the late 70s many of us had our own computers. Mine was a home-made CP/M machine with a Z80 processor and a small TV set as a low-res monitor. Around this time CompuServe came along and I, like so many others, got an account with them. Among other features, users had email addresses. Pretty soon it was common to dial into their machines over a 300 baud modem and exchange email and files. Eventually CompuServe became so ubiquitous that millions were connected, and at my tools business during the 1980s it was common to provide support via this email. The CP/M machine gave way to a succession of PCs, and modems ramped up to 57K baud.

My tools business expanded rapidly and soon we had a number of employees. Sneakernet was getting less efficient so we installed an Arcnet network using Windows 3.11. That morphed into Ethernet connections, though the cursing from networking problems multiplied about as fast as the data transfers. Windows was just terrible at maintaining reliable connectivity.

In 1992 Mike Lee, a friend from my Boys Night Out beer/politics/sailing/great friends group, which still meets weekly (though lately virtually) came by the office with his laptop. "You have GOT to see this" he intoned, and he showed me the world-wide web. There wasn't much to see as there were few sites. But the promise was shockingly clear. I was stunned.

The tools business had been doing well. Within a month we spent $100k on computers, modems and the like and had a new business: Softaid Internet Services. SIS was one of Maryland's first ISPs and grew quickly to several thousand customers. We had a T1 connection to MAE-EAST in the DC area which gave us a 1.5 Mb/s link… for $5000/month. Though a few customers had ISDN connections to us, most were dialup, and our modem shelf grew to over 100 units with many big fans keeping the things cool.

The computers all ran BSD Unix, which was my first intro to that OS.

I was only a few months back from a failed attempt to singlehand my sailboat across the Atlantic and had written a book-length account of that trip. I hastily created a web page of that book to learn about using the web. It is still online and has been read several million times in the intervening years. We put up a site for the tools business which eventually became our prime marketing arm.

The SIS customers were sometimes, well, "interesting." There was the one who claimed to be a computer expert, but who tried to use the mouse by waving it around over the desk. Many had no idea how to connect a modem. Others complained about our service because it dropped out when mom would pick up the phone to make a call over the modem's beeping. A lot of handholding and training was required.

The logs showed a shocking (to me at the time) amount of porn consumption. Over lunch an industry pundit explained how porn drove all media, from the earliest introduction of printing hundreds of years earlier.

The woman who ran the ISP was from India. She was delightful and had a wonderful marriage. She later told me it had been arranged; they met on their wedding day. She came from a remote and poor village and had had no exposure to computers, or electricity, till emigrating to the USA.

Meanwhile many of our tools customers were building networking equipment. We worked closely with many of them and often had big routers, switches and the like onsite that our engineers were working on. We worked on a lot of what we'd now call IoT gear: sensors et al connected to the net via a profusion of interfaces.

I sold both the tools and Internet businesses in 1997, but by then the web and Internet were old stories.

Today, like so many of us, I have a fast (250 Mb/s) and cheap connection into the house with four wireless links and multiple computers chattering to each other. Where in 1992 the web was incredibly novel and truly lacking in useful functionality, now I can't imagine being deprived of it. Remember travel agents? Ordering things over the phone (a phone that had a physical wire connecting it to Ma Bell)? Using 15 volumes of an encyclopedia? Physically mailing stuff to each other?

As one gets older the years spin by like microseconds, but it is amazing to stop and consider just how much this world has changed. My great grandfather lived on a farm in a world that changed slowly; he finally got electricity in his last year of life. His daughter didn't have access to a telephone till later in life, and my dad designed spacecraft on vellum and starched linen using a slide rule. My son once saw a typewriter and asked me what it was; I mumbled that it was a predecessor of Microsoft Word.

That he understood. I didn't have the heart to try and explain carbon paper.

Originally posted HERE.

Read more…

Only for specific jobs

Just a few decades ago, headsets were meant for use only with specific job functions – primarily B2B. They were used simply as extensions of communication devices, reserved for astronauts, mission control engineers, air traffic controllers, call center agents, fire fighters, etc., who all had mission-critical communication to convey while their hands had to deal with something more important than holding a communication device. In the B2C consumer space, you rarely saw anyone wearing headsets in public. The only devices you saw attached to one's ears were hearing aids.


Tale of two cities: Telephony and music

Most headsets were used for communication purposes, which is also referred to as 'Telephony' mode. As with most communications, this requires bi-directional audio. Except for serious audiophiles and audio professionals, headsets were not used for music consumption. Any type of half-duplex audio consumption was referred to as 'Music' mode.

Deskphones and speakerphones

Within the enterprise, a deskphone was the primary communication device for a long time. Speakerphones were becoming a common staple in meeting rooms, facilitating active collaboration amongst geographically distributed team members. So, there were ‘handsets’ but no ‘headsets’ quite yet. 


Mobile revolution: Communication and consumption

As the Internet and the browser were taking shape in the early ’90s, deskphones were getting untethered in the form of big and bulky cellular phones. At around the same time, a Body Area Network (BAN) wireless technology called Bluetooth was invented. Its original purpose was simply to replace the cords used for connecting a keyboard and mouse to the personal computer.


As cellular phones were slimming down and becoming more mainstream, scientists figured out how to use the Bluetooth radio for short-range full-duplex audio communications as well. Fueled by rapid cell-phone proliferation, along with enterprise executives' and professionals' need for convenient hands-free communication while mobile, monaural Bluetooth headsets started becoming a loyal companion to cell phones.

While headsets were used with various telephony devices for communications, portable analog music (Sony Walkman, anybody?) started giving way to portable digital music. Cue the iPod era. The portable music players primarily used simple wired speakers on a rope. These early ‘earbuds’ didn’t even have a microphone in them because they were meant solely for audio consumption – not for audio capture. 

The app economy, softphones and SaaS

The mobile revolution transformed simple communication devices into information exchange devices and then, more recently, into mini supercomputers with applications that take care of functions once served by numerous individual devices: a telephony device, camera, calculator, music player, etc. As narrowband networks gave way to broadband networks in both the wired and wireless worlds, 'communication' and 'media consumption' began to transform in a significant way as well.

Communication: Deskphones or ‘hard’-phones started being replaced by VoIP-based soft-phones. A new market segment called Unified Communications (UC) was born because of this hard- to soft-phone transition. UC has been a key growth driver for the enterprise headset market for the last several years, and it continues to show healthy growth. Enterprises could not part ways with circuit-switched telephony devices completely, but they started adopting packet-switched telephony services called soft-phones. So, UC communication device companies are effectively helping enterprises by being the bridge from ‘old’ to ‘new’ technology. UC has recently evolved into UC&C – where the second ‘C’ represents ‘Collaboration.’ Collaboration using audio and video (like Zoom or Teams calls) got a real shot in the arm because of the COVID-19-induced remote work scenario that has been playing out globally for the last year and a half.

Media consumption: ‘Static’ storage media (audio cassettes, VHS tapes, CDs, DVDs) and their corresponding media players, including portable digital music devices like iPods, were replaced by ‘streaming’ services in a swift fashion. 

Why did this transformation matter to the headset world?

Communication & collaboration by the enterprise users as well as media consumption by consumers collided head-on. Because of this, monaural headsets have almost become irrelevant. Nearly all headsets today are binaural or stereo, and have microphone(s) in them.

This is because the same device needs to serve the purposes of both: consuming half-duplex audio when listening to music, podcasts, or watching movies or webinars, and enabling full-duplex audio for a telephone conversation, a conference call, or video conference.

Fewer form factors… more smarts 

From: Very few companies building manifold headset form factors that catered to the needs of every diverse persona out there.

To: Quite a few companies (obviously, a handful of them a great deal more successful than the others) driving the headset space to effectively just two form factors:

  1. Tiny True Wireless Stereo (TWS) earbuds and
  2. Big binaural occluding cans!


Less hardware… more software

Such a trend has been in place for quite some time, impacting several industries, and headsets are no exception. Ever-more-sophisticated semiconductor components and the proliferation of miniaturized microelectromechanical systems (MEMS) components have taken the place of numerous bulkier hardware components.

What do modern headsets primarily do with regards to audio?

  1. Render received audio in the wearer’s ear
  2. Capture spoken audio from the wearer’s mouth
  3. Calculate anti-noise and render it in the wearer’s ear (in noise-cancelling headsets)

Sounds straightforward, right? It is not as simple as it sounds – at least for enterprise-grade professional headsets. Audio is processed in the digital domain in all modern headsets using sophisticated digital signal processing techniques. DSP algorithms running on the DSP cores of the processors are the most compute-intensive aspects of these devices. Capture/transmit/record audio DSP is relatively more complicated than render/receive/playback audio DSP. Depending on the acoustic design (headset boom, number of microphones, speaker/microphone placement), audio performance requirements, and other audio feature requirements, the DSP workload varies.
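To make item 3 concrete: noise cancellation boils down to generating a phase-inverted copy of the ambient signal so the two cancel acoustically. The numpy snippet below is a deliberately idealized illustration of that principle; a real headset must estimate the noise from its microphones and apply adaptive filtering within a sub-millisecond latency budget.

```python
import numpy as np

fs = 48_000                      # sample rate in Hz
t = np.arange(fs) / fs           # one second of timestamps
ambient = 0.5 * np.sin(2 * np.pi * 200 * t)    # a 200 Hz rumble at the ear

anti_noise = -ambient            # ideal anti-noise: same waveform, phase-inverted
residual = ambient + anti_noise  # perfect cancellation in this idealized case

print(f"residual RMS: {np.sqrt(np.mean(residual ** 2)):.6f}")  # ~0.000000
```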

Intelligence right at the edge!

Headsets are true edge devices. Most headset designs have severe constraints around several factors: cost, size, weight, power, MIPS, memory, etc.

Headsets are right at the horse’s mouth (pun intended) of massive trends and modern use cases like:

  • Wake word detection for virtual private assistants (VPAs)
  • Keyword detection for device control and various other data/analytics purposes
  • Modern user interface (UI) techniques like voice-as-UI, touch-as-UI, and gestures-as-UI
  • Transmit noise cancellation/suppression (TxNC or TxNS)
  • Adaptive active noise cancellation (ANC) mode selection
  • Real-time transcription assistance
  • Ambient noise identification
  • Speech synthesis, speaker identification, speaker authentication, etc.

Most importantly, note that there is immense end customer value for all these capabilities.

Until recently, even if one wanted to, very little could be done to support most of these advanced capabilities right in the headset. Only the features and functionalities that were addressable within the computational limits of the on-board DSP cores, using traditional DSP techniques, could be supported.

Enter edge compute, AutoML, tinyML, and MLOps revolutions…

Several DSP-only workloads of the past are rapidly transitioning to an efficient hybrid model of DSP+ML workloads. Quite a few ML-only capabilities that were not even possible using traditional DSP techniques are becoming possible now as well. All of this is happening within the same constraints that existed before.

Silicon as well as software innovations are behind such possibilities. Silicon innovations are relatively slow to be adopted into device architectures at the moment, but they will be over time. Software innovations extract more value out of existing silicon architectures while helping converge on more efficient hardware architecture designs for next-generation products.

Thanks to embedded machine learning, tasks and features that were close to impossible are becoming a reality now. Production-grade inference models with tiny program and data memory footprints, in addition to impressive performance, are possible today because of major advancements in AutoML and tinyML techniques. Building these models does not require massive amounts of data either. The ML framework and automated yet flexible process offered by platforms like those from Edge Impulse make the ML model creation process simple and efficient compared to traditional methods of building such models.

Microphones and sensors galore

All headsets feature at least one microphone, and many feature multiple, sometimes up to 16 of them! The field of ML for audio is vast, and it is continuing to expand. Much of the ML inferencing that was once possible only in cloud backends or on sophisticated compute-rich endpoints is now fully possible on most resource-constrained embedded IoT silicon.

Microphones themselves are sensors, but many other sensors like accelerometers, capacitive touch, passive infrared (PIR), ultrasonic, radar, and ultra-wideband (UWB) are making their way into headsets to meet and exceed customer expectations. Spatial audio, aka 3D audio, is one such application that utilizes several sensors to give the end-user an immersive audio experience. Sensor fusion is the concept of utilizing data from multiple sensors concurrently to arrive at intelligent decisions. Sensor fusion implementations that use modern ML techniques have been shown to have impressive performance metrics compared to traditional non-ML methods.
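As a toy illustration of sensor fusion, the sketch below combines a capacitive-touch reading with recent accelerometer variance to decide whether a headset is actually being worn, a call that neither sensor can make reliably on its own. The sensor fields and thresholds are illustrative assumptions, not any vendor's API.

```python
from dataclasses import dataclass

@dataclass
class SensorSample:
    capacitive: float  # normalized skin-contact level, 0..1 (assumed scale)
    accel_var: float   # recent accelerometer variance in g^2 (assumed scale)

def is_worn(s: SensorSample) -> bool:
    # A high capacitive reading alone could be a hand or a desk surface;
    # motion alone could be the headset swinging in a bag. Requiring both
    # makes the don/doff decision far more robust.
    skin_contact = s.capacitive > 0.6
    micro_motion = 1e-4 < s.accel_var < 0.05  # head-like micro-movement
    return skin_contact and micro_motion

print(is_worn(SensorSample(capacitive=0.8, accel_var=0.002)))  # True: worn
print(is_worn(SensorSample(capacitive=0.9, accel_var=0.0)))    # False: on a desk
```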

Transmit noise suppression (TxNS) has always been the holy grail of all premium enterprise headsets. It is an important aspect of enterprise collaboration. It takes a magical combination of physical acoustic design – which is more art than science – and optimally tuned, complex audio DSP algorithms implemented under severe MIPS, memory, latency, and other constraints. In recent years, some groundbreaking work has been done in utilizing recurrent neural network (RNN) techniques to improve TxNS performance to levels that were never seen before. Because of their complexity and high compute footprint, these techniques have been incorporated into devices with compute capabilities similar to mobile phone platforms. The challenge of bringing such solutions to resource-constrained embedded systems, such as enterprise headsets, while staying within the constraints laid out earlier, remains largely unsolved. Advancements in embedded silicon technology, combined with the tinyML/AutoML software innovations listed above, are helping address this and several other ML challenges.


Conclusion

Modern use cases that enable the hearables to become ‘smart’ are compelling. Cloud-based frameworks and tools necessary to build, iterate, optimize, and maintain high performance small footprint ML models to address these applications are readily available from entities like Edge Impulse. Any hearable entity that doesn’t take full advantage of this staggering advancement in technology will be at a competitive disadvantage.

Originally posted on the Edge Impulse blog by Arun Rajasekaran.

Read more…

In my last post, I explored how OTA updates are typically performed using Amazon Web Services and FreeRTOS. OTA updates are critically important to developers with connected devices. In today's post, we are going to explore several best practices developers should keep in mind when implementing their OTA solution. Most of these will be generic, although I will point out a few AWS-specific best practices.

Best Practice #1 – Name your S3 bucket with afr-ota

There is a little trick to creating S3 buckets that I was completely oblivious to for a long time. Thankfully, when I checked in with some colleagues about it, they also had not been aware of it, so I'm not sure how long this has been supported, but it can save an embedded developer from having to wade through too many AWS policies and simplifies the process a little bit.

Anyone who has attempted to create an OTA update with AWS and FreeRTOS knows that you have to set up several permissions to allow an OTA Update Job to access the S3 bucket. Well, if you name your S3 bucket so that it begins with “afr-ota”, then the S3 bucket will automatically have the AWS managed policy AmazonFreeRTOSOTAUpdate attached to it. (See Create an OTA Update service role for more details.) It's a small help, but a good best practice worth knowing.
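For instance, creating such a bucket with boto3 might look like the sketch below; the bucket name beyond the prefix and the region are assumptions to adapt.

```python
import boto3

s3 = boto3.client('s3', region_name='us-east-1')  # assumed region

# Starting the name with 'afr-ota' is what triggers the managed
# AmazonFreeRTOSOTAUpdate policy attachment described above.
s3.create_bucket(Bucket='afr-ota-my-device-firmware')

# OTA update jobs reference object versions, so enable versioning too.
s3.put_bucket_versioning(
    Bucket='afr-ota-my-device-firmware',
    VersioningConfiguration={'Status': 'Enabled'},
)
```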

Best Practice #2 – Encrypt your firmware updates

Embedded software must be one of the most expensive things to develop that mankind has ever invented! It's time-consuming to create and test and can consume a large percentage of the development budget. Software, though, also drives most features in a product and can dramatically differentiate a product. That software is intellectual property that is worth protecting through encryption.

Encrypting a firmware image provides several benefits. First, it converts your firmware binary into a form that seems random or meaningless. This is desirable because a developer shouldn't want their binary image to be easily studied, investigated, or reverse engineered. It makes it harder for someone to steal intellectual property and more difficult for someone interested in attacking the system to understand it. Second, encrypting the image means that the sender must have a key or credential of some sort that matches the device that will decrypt the image. This can be viewed as a simple way to help authenticate the source, although more should be done than just encryption to fully authenticate and verify integrity, such as signing the image.
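As a host-side illustration using Python's cryptography package, the snippet below encrypts a firmware image with AES-256-GCM, which provides confidentiality plus an integrity tag (though, per the point above, signing the image is still needed to fully authenticate the publisher). File names are placeholders.

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)  # provision and store this securely
nonce = os.urandom(12)                     # must never repeat for the same key

with open('firmware_v1_8.bin', 'rb') as f:  # placeholder file name
    plaintext = f.read()

# The associated data binds the ciphertext to a version label without
# encrypting it; tampering with either causes decryption to fail.
ciphertext = AESGCM(key).encrypt(nonce, plaintext, b'fw-v1.8')

with open('firmware_v1_8.enc', 'wb') as f:
    f.write(nonce + ciphertext)
```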

Best Practice #3 – Do not support firmware rollbacks

There is often a debate as to whether firmware rollbacks should be supported in a system or not. My recommendation for a best practice is that firmware rollbacks be disabled. The argument for rollbacks is often that if something goes wrong with a firmware update then the user can rollback to an older version that was working. This seems like a good idea at first, but it can be a vulnerability source in a system. For example, let’s say that version 1.7 had a bug in the system that allowed remote attackers to access the system. A new firmware version, 1.8, fixes this flaw. A customer updates their firmware to version 1.8, but an attacker knows that if they can force the system back to 1.7, they can own the system. Firmware rollbacks seem like a convenient and good idea, in fact I’m sure in the past I used to recommend them as a best practice. However, in today’s connected world where we perform OTA updates, firmware rollbacks are a vulnerability so disable them to protect your users.

Best Practice #4 – Secure your bootloader

Updating firmware Over-the-Air requires several components to ensure that it is done securely and successfully. Often the focus is on getting the new image to the device and getting it decrypted. However, just like in traditional firmware updates, the bootloader is still a critical piece to the update process and in OTA updates, the bootloader can’t just be your traditional flavor but must be secure.

There are quite a few methods that can be used with the onboard bootloader, but no matter the method used, the bootloader must be secure. Secure bootloaders need to be capable of verifying the authenticity and integrity of the firmware before it is ever loaded. Some systems will use the application code to verify and install the firmware into a new application slot while others fully rely on the bootloader. In either case, the secure bootloader needs to be able to verify the authenticity and integrity of the firmware prior to accepting the new firmware image.
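On the device this check is normally C code backed by a hardware crypto engine, but the decision a secure bootloader makes can be sketched with an Ed25519 signature check; the key handling and image layout here are simplified assumptions.

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey

def image_is_trusted(pubkey_bytes: bytes, image: bytes, signature: bytes) -> bool:
    """Return True only if the image was signed by the matching private key.

    In a real system the public key lives in immutable storage (ROM or
    OTP fuses) so an attacker cannot substitute their own key.
    """
    pubkey = Ed25519PublicKey.from_public_bytes(pubkey_bytes)
    try:
        pubkey.verify(signature, image)  # authenticity and integrity in one check
        return True
    except InvalidSignature:
        return False
```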

It’s also a good idea to ensure that the bootloader is built into a chain of trust and cannot be easily modified or updated. The secure bootloader is a critical component in a chain-of-trust that is necessary to keep a system secure.

Best Practice #5 – Build a Chain-of-Trust

A chain-of-trust is a sequence of events that occur while booting the device that ensures each link in the chain is trusted software. For example, I've been working with the Cypress PSoC 64 secure MCUs recently, and these parts come shipped from the factory with a hardware-based root-of-trust to authenticate that the MCU came from a secure source. That root-of-trust (RoT) is then transferred to a developer, who programs a secure bootloader and security policies onto the device. During the boot sequence, the RoT verifies the integrity and authenticity of the bootloader, which then verifies the integrity and authenticity of any second-stage bootloader or software, which in turn verifies the authenticity and integrity of the application. The application then verifies the authenticity and integrity of its data, keys, operational parameters, and so on.

This sequence creates a chain-of-trust which is needed and used by firmware OTA updates. When a new firmware request is made, the application must decrypt the image and verify that the authenticity and integrity of the new firmware are intact. That new firmware can then only be used if the chain-of-trust can successfully make its way through each link in the chain. The bottom line: a developer and the end user know, when the system boots successfully, that the new firmware is legitimate.

Conclusions

OTA updates are a critical infrastructure component to nearly every embedded IoT device. Sure, there are systems out there that once deployed will never update, however, those are probably a small percentage of systems. OTA updates are the go-to mechanism to update firmware in the field. We’ve examined several best practices that developers and companies should consider when they start to design their connected systems. In fact, the bonus best practice for today is that if you are building a connected device, make sure you explore your OTA update solution sooner rather than later. Otherwise, you may find that building that Chain-Of-Trust necessary in today’s deployments will be far more expensive and time consuming to implement.

Originally posted here.

Read more…

Would you like to live in a city where everything around you is digitalized? It’s always a better option to upgrade from a traditional way of living to a smart way of living. Every city should implement electric vehicle charging, smart parking, and an IoT-based smart waste management system for better living.

The evolution of IoT and sensors has advanced the concept of smart city technology. When it comes to keeping a city clean, deploying such systems is a smart move: smart waste management has become the new frontier for local authorities looking to reduce and recycle solid waste.

How is Smart Waste Management Making Cities Smarter?

In the olden days, trash was collected by sending trucks along scheduled routes every day, even if the bins were not full. This wastes both time and resources. Instead, deploying smart waste management along every scheduled route is the best way to achieve timely trash pickup with the right resource allocation. To understand the problem and the proposed smart waste management solution, read on.

Before getting into why implementing smart waste management is important, let's understand what exactly the problem is.

Defining the Problem of Collecting the Trash

Currently, the trash that people create is thrown into nearby trash cans. These cans are then emptied by municipal or private truck companies, which remove the waste at a scheduled time and transfer it for recycling.

This process is followed in every city, and while it may partially solve the waste issue, it leaves other critical problems, such as:

  • Overfilled bins are not attended to, while underfilled bins are collected before they need to be.
  • Overfilled bins may create unhygienic conditions.
  • Unoptimized truck routes may result in excess fuel usage and environmental pollution.
  • Collected trash is mixed together, which makes it difficult to sort during recycling.

Well, the best way to sort out all the above issues is to implement a smart waste management system.

Alleviate these problems with IoT-based smart waste management systems for smart cities

The right approach to waste management can prevent environmental issues and air pollution. It is necessary to maintain hygiene and to keep waste disposal carriers from being overloaded. IoT systems have already been implemented in many cities.

By 2027, the smart waste management market will reach $4.10B with a 15.1% CAGR globally.

Smart bins work with the help of a sensor attached to each bin. The sensor measures the fill level and communicates it to the trash collectors, with the data processed in the cloud. This optimizes the routes of the collection trucks so that neither time nor fuel is wasted.
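As a minimal sketch of the device side, the snippet below publishes a bin's fill level over MQTT using the paho-mqtt client (1.x-style constructor); the broker address, topic, and read_fill_percent() sensor wrapper are placeholders.

```python
import json
import random
import time

import paho.mqtt.client as mqtt

BROKER = 'broker.example.com'  # placeholder MQTT broker
TOPIC = 'city/bins/42/fill'    # placeholder topic for bin #42

def read_fill_percent() -> float:
    # Hypothetical sensor read; a real bin would convert an ultrasonic
    # distance measurement (lid to trash surface) into a percentage.
    return random.uniform(0, 100)

client = mqtt.Client()  # paho-mqtt 1.x-style construction (assumed)
client.connect(BROKER, 1883)
client.loop_start()

while True:
    payload = json.dumps({'fill_pct': round(read_fill_percent(), 1),
                          'ts': int(time.time())})
    client.publish(TOPIC, payload, qos=1)
    time.sleep(15 * 60)  # report every 15 minutes to conserve power
```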

The simple alternative to traditional waste collection is to implement smart waste management for a smart city. As smart IoT systems increasingly mitigate waste issues, more and more urban areas are willing to implement smart waste management programs for a clean and hygienic environment.

Improved Smart Waste Management for a Smart City

The amount of garbage that city dwellers produce is on target to reach six million tons in the next few years. Investing in a new IoT-based smart waste management system can help optimize waste collection. The points below explain how an IoT smart system can turn your city into a smart one.

1. Timely pickup of trash

IoT-based smart waste management signals the waste collection companies before the trash bins start overflowing. Once the trash cans are full, the collectors are alerted to reach the area and empty the bins.

2. Re-route the pickup

Solid waste volumes can differ daily or weekly. Trash cans are everywhere: in condos, commercial buildings, and public places. Smart waste management companies can attach a sensor to each trash can to measure its fill level. The IoT solution collects the data and routes the collectors to whichever smart bins need to be emptied first, as sketched below.
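A sketch of that prioritization, assuming the backend holds the latest fill report per bin; the threshold and data are illustrative.

```python
# Latest fill reports, as they might arrive from the bins' sensors
bins = [
    {'id': 'A12', 'fill_pct': 91},
    {'id': 'B07', 'fill_pct': 34},
    {'id': 'C33', 'fill_pct': 78},
    {'id': 'D81', 'fill_pct': 97},
]

PICKUP_THRESHOLD = 75  # illustrative: only visit bins at least this full

# Serve the fullest bins first; everything under the threshold waits
pickup_order = sorted(
    (b for b in bins if b['fill_pct'] >= PICKUP_THRESHOLD),
    key=lambda b: b['fill_pct'],
    reverse=True,
)
print([b['id'] for b in pickup_order])  # ['D81', 'A12', 'C33']
```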

3. Data Analysis

The connected sensors record when each bin fills up and when it was last emptied. Such a system demonstrates the value of an IoT-based smart waste management system: it helps in planning the distribution of dumpsters and eliminates inefficient ways of removing waste, as illustrated below.
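For instance, a few lines of pandas over the logged events can estimate how quickly each bin fills, which is exactly the signal needed to re-plan dumpster placement; the column names and readings are illustrative.

```python
import pandas as pd

# Illustrative log of fill-level readings per bin
events = pd.DataFrame({
    'bin_id':   ['A12', 'A12', 'B07', 'B07'],
    'ts':       pd.to_datetime(['2023-05-01 08:00', '2023-05-01 20:00',
                                '2023-05-01 08:00', '2023-05-01 20:00']),
    'fill_pct': [20, 80, 10, 22],
})

def fill_rate(group: pd.DataFrame) -> float:
    """Fill rate in percentage points per hour over the observed window."""
    hours = (group['ts'].max() - group['ts'].min()).total_seconds() / 3600
    return (group['fill_pct'].max() - group['fill_pct'].min()) / hours

rates = events.groupby('bin_id').apply(fill_rate)
print(rates)  # A12 fills ~5x faster than B07: it may need a bigger dumpster
```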

With the information above, you can see how implementing IoT-based smart waste management systems can improve the environment and make solid waste collection smarter.

Conclusion - Transform Your City into a Smart One

Smart waste management services benefit both cities and citizens. Companies can use smart sensors to increase efficiency and enhance customer satisfaction by preventing waste bins from overflowing. It is advisable to start implementing smart IoT systems in every city.

Read more…

Wi-Fi, NB-IoT, Bluetooth, LoRaWAN… This webinar will help you to choose the appropriate connectivity protocol for your IoT application.

Connectivity is cool! The cornucopia of connectivity choices available to us today would have made engineers gasp in awe and disbelief just a few short decades ago.

I was just pondering this point and – as usual – random thoughts started to bounce around my poor old noggin. Take the topic of interoperability, for example (for the purposes of these discussions, we will take “interoperability” to mean “the ability of computer systems or software to exchange and make use of information”).

Don’t get me started on the subject of the Endian Wars. Instead, let’s consider the 7-bit American Standard Code for Information Interchange (ASCII) that we know and love. The currently used ASCII standard of 96 printing characters and 32 control characters was first defined in 1968. For machines that supported ASCII, this greatly facilitated their ability to exchange information.

For reasons of their own, the folks at IBM decided to go their own way by developing a proprietary 8-bit code called the Extended Binary Coded Decimal Interchange Code (EBCDIC). This code was first used on the IBM 360 computer, which was presented to the market in 1964. Just for giggles and grins, IBM eventually introduced 57 different variants of EBCDIC targeted at different countries (a “standard” that came in 57 different flavors!). This obviously didn't help IBM machines in different countries make use of each other's files. Even worse, different types of IBM computers found it difficult to talk to each other, let alone with machines from other manufacturers.

There’s an old joke that goes, “Standards are great – everyone should have one.” The problem is that almost everybody did. Sometime around late 1980 or early 1981, for example, I was working at International Computers (ICL) in Manchester, England. I recall being invited to what I was told was going to be a milestone event. This turned out to be a demonstration in which a mainframe computer was connected to a much smaller computer (akin to one of the first PCs) via a proprietary wired network. With great flourish and fanfare, the presenter created and saved a simple ASCII text file on the mainframe, then – to the amazement of all present – opened and edited the same file on the small computer.

This may sound like no big deal to the young folks of today, but it was an event of such significance at that time that journalists from the national papers came up on the train from London to witness this august occasion with their own eyes so that they could report back to the unwashed masses.

Now, of course, we have a wide variety of wired standards, from simple (short range) protocols like I2C and SPI, to sophisticated (longer range) offerings like Ethernet. And, of course, we have a cornucopia of wireless standards like Wi-Fi, NB-IoT, Bluetooth, and LoRaWAN. In some respects, this is almost an embarrassment of riches … there are so many options … how can we be expected to choose the most appropriate connectivity protocol for our IoT applications?

Well, I’m glad you asked, because I will be hosting a one-hour webinar on this very topic on Tuesday 28 September 2021, starting at 8:00 a.m. Pacific Time (11:00 a.m. Eastern Time).

Presented by IoT Central and sponsored by ARM, yours truly will be joined in this webinar by Samuele Falconer (Principal Product Manager at u-blox), Omer Cheema (Head of the Wi-Fi Business Unit at Renesas Semiconductor), Wienke Giezeman (Co-Founder and CEO at The Things Industries), and Thomas Cuyckens (System Architect at Qorvo).

If you are at all interested in connectivity for your cunning IoT creations, then may I be so bold as to suggest you Register Now before all of the good virtual seats are taken. I’m so enthused by this event that I’m prepared to pledge on my honor that – if you fail to learn something new – I will be very surprised (I was going to say that I would return the price of your admission but, since this event is free, that would have been a tad pointless).

So, what say you? Can I dare to hope to see you there? Register Now

Read more…