


What is 5G NR (New Radio)?

by Gus Vos

Unless you have been living under a rock, you have been seeing and hearing a lot about 5G these days. In addition, if you are at all involved in the Internet of Things (IoT) or other initiatives at your organization that use cellular networking technologies, you have also likely heard about 5G New Radio, otherwise known as 5G NR, the new 5G radio access technology specification.

However, all the jargon, hype, and sometimes contradictory statements made by solution providers, the media, and analysts regarding 5G and 5G NR can make it difficult to understand what 5G NR actually is, how it works, what its advantages are, how it differs from other cellular radio access technologies, and perhaps most importantly, how your organization can use this new radio access technology.

In this blog, we will provide you with an overview of 5G NR, offering you answers to these and other basic 5G NR questions – with a particular focus on what these answers mean for those in the IoT industry.

We can’t promise to make you a 5G NR expert with this blog – but we can say that if you are confused about 5G NR before reading it, you will come away afterward with a better understanding of what 5G NR is, how it works, and how it might transform your industry.

What is the NR in 5G NR?

As its name implies, 5G New Radio or 5G NR is the new radio access technology specification found in the 5G standard. 

Set by the 3rd Generation Partnership Project (3GPP) telecommunications standards group, the 5G NR specification defines how 5G NR edge devices (smart phones, embedded modules, routers, and gateways) and 5G NR network infrastructure (base stations, small cells, and other Radio Access Network equipment) wirelessly transmit data. To put it another way, 5G NR describes how 5G NR edge devices and 5G NR network infrastructure use radio waves to talk to each other. 

5G NR is a very important part of 5G. After all, it describes how 5G solutions will use radio waves to wirelessly transmit data faster and with less latency than previous radio access technology specifications. However, while 5G NR is a very important part of the new 5G standard, it does not encompass everything related to 5G. 

For example, 5G includes a new core network architecture standard (appropriately named 5G Core Network or 5GCN) that specifies the architecture of the network that collects, processes, and routes data from edge devices and then sends this data to the cloud, other edge devices, or elsewhere. The 5GCN will improve 5G networks’ operational capacity, efficiency, and performance.

However, 5GCN is not a radio access technology like 5G NR, but rather a core network technology. In fact, networks using the 5GCN core network will be able to work with previous types of radio access technologies – like LTE. 

Is 5G NR one of 5G’s most important new technological advancements? Yes. But it is not the only technological advancement to be introduced by 5G.  

How does 5G NR work?

Like all radio access communications technology specifications, the 5G NR specification describes how edge devices and network infrastructure transmit data to each other using electromagnetic radio waves. Depending on its frequency (which determines its wavelength), a radio wave occupies a different part of the wireless spectrum.

Some of the waves that 5G NR uses have frequencies between 400 MHz and 6 GHz. These waves occupy what is called sub-6 spectrum (since their frequencies are all under 6 GHz).

This sub-6 spectrum is used by other cellular radio access technologies, like LTE, as well. In the past, using different cellular radio access technologies like this over the same spectrum would lead to unmanageable interference problems, with the different technologies’ radio waves interfering with each other.

One of 5G NR’s many advantages is that it’s solved this problem, using a technology called Dynamic Spectrum Sharing (DSS). This DSS technology allows 5G NR signals to use the same band of spectrum as LTE and other cellular technologies, like LTE-M and NB-IoT. This allows 5G NR networks to be rolled out without shutting down LTE or other networks that support existing LTE smart phones or IoT devices. You can learn more about DSS, and how it speeds the rollout of 5G NR while also extending the life of IoT devices, here.

One of 5G NR’s other major advancements is that it does not just use waves in the sub-6 spectrum to transmit data. The 5G NR specification also specifies how edge devices and network infrastructure can use radio waves in bands between 24 GHz and 52 GHz to transmit data.

These millimeter wave (mmWave) bands greatly expand the amount of spectrum available for wireless data communications. Lack of spectrum capacity has been a problem in the past, as there is a limited number of sub-6 spectrum bands available for organizations to use for cellular communications, and many of these bands are narrow. This lack of capacity and these narrow bands lead to network congestion, which limits the amount of data that can be transmitted over networks that use sub-6 spectrum.
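The "millimeter wave" label follows directly from the wavelength formula λ = c / f. A quick back-of-the-envelope sketch (the specific carrier frequencies are just illustrative examples):

```python
# Sketch: why the 24-52 GHz bands are called "millimeter wave" (lambda = c / f).
C = 299_792_458  # speed of light, m/s


def wavelength_mm(freq_hz):
    """Return the wavelength of a carrier frequency in millimetres."""
    return C / freq_hz * 1_000


# A 3.5 GHz sub-6 carrier has a wavelength of roughly 86 mm,
# while a 28 GHz mmWave carrier is only about 10.7 mm long --
# literally a few millimetres, hence the name.
```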

mmWave opens up a massive amount of new wireless spectrum, as well as much broader bands of wireless spectrum for cellular data transmission. This additional spectrum and these broader spectrum bands increase the capacity (amount of data) that can be transmitted over these bands, enabling 5G NR mmWave devices to achieve data speeds that are four or more times faster than devices that use just sub-6 spectrum. 

The additional wireless capacity provided by mmWave also reduces latency (the time between when a device sends a signal and when it receives a response). By reducing latency from 10 milliseconds with sub-6 devices to 3-4 milliseconds or lower with 5G NR mmWave devices, 5G enables new industrial automation, autonomous vehicle and immersive gaming use cases, as well as Virtual Reality (VR), Augmented Reality (AR), and similar Extended Reality (XR) use cases, all of which require very low latency.

On the other hand, these new mmWave devices and network infrastructure come with new technical requirements, as well as drawbacks associated with their use of mmWave spectrum. For example, mmWave devices use more power and generate more heat than sub-6 devices. In addition, mmWave signals have less range and do not penetrate walls and other physical objects as easily as sub-6 waves. 5G NR includes some technologies, such as beamforming and massive Multiple Input Multiple Output (MIMO), that lessen some of these range and obstacle penetration limitations – but they do not eliminate them.

To learn more about the implications of 5G NR mmWave on the design of IoT and other products, read our blog, Seven Tips For Designing 5G NR mmWave Products.

In addition, there has been a lot written on these two different “flavors” (sub-6 and mmWave) of 5G NR. If you are interested in learning more about the differences between sub-6 5G NR and mmWave 5G NR, and how together they enable both evolutionary and revolutionary changes for Fixed Wireless Access (FWA), mobile broadband, IoT and other wireless applications, read our previous blog A Closer Look at the Five Waves of 5G.

What is the difference between 5G NR and LTE?

Though sub-6 and mmWave are very different, both types of 5G NR provide data transfer speed, latency, and other performance improvements compared to LTE, the previous radio access technology specification used for cellular communications. 

For example, outside of its use of mmWave, 5G NR features other technical advancements designed to improve network performance, including:

• Flexible numerology, which enables 5G NR network infrastructure to set the spacing between subcarriers in a band of wireless spectrum at 15, 30, 60, 120, or 240 kHz, rather than only the 15 kHz spacing used by LTE. This flexible numerology is what allows 5G NR to use mmWave spectrum in the first place. It also improves the performance of 5G NR devices that use higher sub-6 spectrum, such as 3.5 GHz C-Band spectrum, since the network can adjust the subcarrier spacing to meet the particular spectrum and use case requirements of the data it is transmitting. For example, when low latency is required, the network can use wider subcarrier spacing, which shortens transmission slots and thereby improves the latency of the transmission.
• Beamforming, in which massive MIMO (multiple-input and multiple-output) antenna technologies are used to focus wireless signals and then sweep them across areas until they make a strong connection. Beamforming helps extend the range of networks that use mmWave and higher sub-6 spectrum.
• Selective Hybrid Automatic Repeat Request (HARQ), which allows 5G NR to break large data blocks into smaller blocks, so that when there is an error, the retransmission is smaller and results in higher data transfer speeds than LTE, which transfers data in larger blocks. 
• Faster Time Division Duplexing (TDD), which enables 5G NR networks to switch between uplink and downlink faster, reducing latency. 
• Pre-emptive scheduling, which lowers latency by allowing higher-priority data to overwrite or pre-empt lower-priority data, even if the lower-priority data is already being transmitted. 
• Shorter scheduling units that trim the minimum scheduling unit to just two symbols, improving latency.
• A new inactive state for devices. LTE devices had two states – idle and connected. 5G NR includes a new state – inactive – that reduces the time needed for an edge device to move in and out of its connected state (the state used for transmission), making the device more responsive. 
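The arithmetic behind flexible numerology is simple to sketch. Per the 3GPP specification, each step up in the numerology index μ doubles the subcarrier spacing and halves the slot duration, which is one of the ways wider spacing translates into lower latency:

```python
# Sketch of 5G NR flexible numerology (3GPP TS 38.211):
# each numerology index mu doubles the subcarrier spacing
# and halves the slot duration.


def subcarrier_spacing_khz(mu):
    """Subcarrier spacing for numerology mu: 15, 30, 60, 120, 240 kHz."""
    return 15 * (2 ** mu)


def slot_duration_ms(mu):
    """Slot duration in milliseconds; shorter slots mean lower latency."""
    return 1.0 / (2 ** mu)


# LTE corresponds to the fixed mu = 0 case (15 kHz, 1 ms slots);
# mmWave bands typically use mu = 3 (120 kHz), giving 0.125 ms slots.
```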

These and the other technical advancements made to 5G NR are complicated, but the result of these advancements is pretty simple – faster data speeds, lower latency, more spectrum agility, and otherwise better performance than LTE. 

Are LPWA radio access technology specifications, like NB-IoT and LTE-M, supported by 5G?

Though 5G features a new radio access technology, 5G NR, 5G supports other radio access technologies as well. This includes the Low Power Wide Area (LPWA) technologies, Narrowband IoT (NB-IoT), and Long Term Evolution for Machines (LTE-M). In fact, these LPWA standards are the standards that 5G uses to address one of its three main use cases – Massive, Machine-Type Communications (mMTC). 

Improvements have been and continue to be made to these 5G LPWA standards to address these mMTC use cases – improvements that further lower the cost of LPWA devices, reduce these devices’ power usage, and enable an even larger number of LPWA devices to connect to the network in a given area.

What are the use cases for 5G NR and 5G LPWA Radio Access Technologies?

Today, LTE supports three basic use cases:

• Voice: People today can use LTE to talk to each other using mobile devices. 
• Mobile broadband (MBB): People can use smartphones, tablets, mobile and other edge devices to view videos, play games, and use other applications that require broadband data speeds.
• IoT: People can use cellular modules, routers, and other gateways embedded in practically anything – a smart speaker, a dog collar, a commercial washing machine, a safety shoe, an industrial air purifier, a liquid fertilizer storage tank – to transmit data from the thing to the cloud or a private data center and back via the internet.  

5G NR, as well as 5G’s LPWA radio access technologies (NB-IoT and LTE-M) will continue to support these existing IoT and voice use cases. 

However, 5G also expands on the MBB use case with a new Enhanced Mobile Broadband (eMBB) use case. These eMBB use cases leverage 5G NR’s higher peak and average speeds and lower latency to enable smart phones and other devices to support high-definition cloud-based immersive video games, high quality video calls and new VR, AR, and other XR applications.

In addition, 5G NR also supports a new use case, called Ultra-Reliable, Low-Latency Communications (URLLC). 5G NR enables devices to create connections that are ultra-reliable with very low latency. With these new 5G NR capabilities, as well as 5G NR’s support for very fast handoffs and high mobility, organizations can now deploy new factory automation, smart city 2.0 and other next generation Industrial IoT (IIoT) applications, as well as Vehicle-to-everything (V2X) applications, such as autonomous vehicles. 

As we mentioned above, 5G will also support the new mMTC use case, which represents an enhancement of the existing IoT use case. However, in the case of mMTC, new use cases will be enabled by improvements to LTE-M and NB-IoT radio access technology standards, not 5G NR. Examples of these types of new mMTC use cases include large-scale deployments of small, low cost edge devices (like sensors) for smart city, smart logistics, smart grid, and similar applications.

But this is not all. 3GPP is looking at additional new use cases (and new technologies for these use cases), as discussed in this recent blog on Release 17 of the 5G standard. One of these new technologies is a new Reduced Capability (RedCap) device – sometimes referred to as NR Light – for IoT or MTC use cases that require faster data speeds than LPWA devices can provide, but also need devices that are less expensive than the 5G NR devices being deployed today.

3GPP is also examining standard changes to NR, LTE-M, and NB-IoT in 5G Release 17 that would make it possible for satellites to use these technologies for Non-Terrestrial Network (NTN) communications. This new NTN feature would help enable the deployment of satellites able to provide NR, LTE-M, and NB-IoT coverage in very remote areas, far away from cellular base stations.

What should you look for in a 5G NR module, router or gateway solution?

While all 5G NR edge devices use the 5G NR technology specification, they are not all created equal. In fact, the flexibility, performance, quality, security, and other capabilities of a 5G NR edge device can make the difference between a successful 5G NR application rollout and a failed one. 

As they evaluate 5G NR edge devices for their application, organizations should ask themselves the following questions:

• Is the edge device multi-mode? 
While Mobile Network Operators (MNOs) are rapidly expanding their 5G NR networks, there are still many areas where 5G NR coverage is not available. Multi-mode edge devices that can support LTE, or even 3G, help ensure that wherever the edge device is deployed, it will be able to connect to an MNO’s network – even if this connection does not provide the data speed, latency, or other performance needed to maximize the value of the 5G NR application.

In addition, many MNOs are rolling out non-standalone (NSA) 5G NR networks at first. These NSA 5G NR networks need an LTE connection in addition to a 5G NR connection to transmit data from and to 5G NR devices. If your edge device does not include support for LTE, it will not be able to use 5G NR on these NSA networks.

• How secure are the edge devices? 
Data is valuable and sensitive – and the data transmitted by 5G NR devices is no different. To limit the risk that this data is exposed, altered, or destroyed, organizations need to adopt a Defense in Depth approach to 5G NR cybersecurity, with layers of security implemented at the cloud, network, and edge device levels. 

At the edge device level, organizations should ensure their devices have security built-in with features such as HTTPS, secure socket, secure boot, and free unlimited firmware over-the-air (FOTA) updates. 
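As an illustration of what "secure socket" support means in practice, here is a minimal Python sketch of the TLS settings a device-side connection should enforce. This is a generic sketch using the standard library, not any particular module vendor's API:

```python
# Sketch: a device-to-cloud connection should always verify the server's
# certificate and hostname -- settings a secure edge device never disables.
import ssl


def make_device_tls_context():
    # create_default_context() enables certificate verification
    # (CERT_REQUIRED) and hostname checking by default.
    ctx = ssl.create_default_context()
    assert ctx.verify_mode == ssl.CERT_REQUIRED
    assert ctx.check_hostname
    return ctx
```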

Organizations will also want to use edge devices from trustworthy companies headquartered in countries with strict laws in place to protect customer data. In doing so, they ensure these companies are committed to working with them to prevent state or other malicious actors from gaining access to their 5G NR data.

• Are the 5G NR devices future-proof? 
Over time, organizations are likely to want to upgrade their applications. In addition, the 5G NR specification is not set in stone, and updates are made to it periodically. Organizations will want to ensure their 5G NR edge devices are future-proof, with capabilities that include over-the-air firmware updates, so they can upgrade their applications and take advantage of new 5G NR capabilities in the future.

• Can the 5G NR device do edge processing? 
While 5G NR increases the amount of data that can be transmitted over cellular wireless networks, in many cases organizations will want to filter, prioritize, or otherwise process some of their 5G NR application’s data at the edge. This edge processing can enable these organizations to lower their data transmission costs, improve application performance, and lower their devices’ energy use.

Organizations should therefore favor 5G NR edge devices that make it easy to process data at the edge.

Originally posted here.

Read more…

Introduction   

Over the years, there has been an extensive shift of digitalization that has called for new concepts and new technologies. Especially when it comes to improving human life and reducing effort in routine tasks, one thing that has gained immense popularity is the very idea of IoT.   

The Internet of Things (IoT) is a network of physical objects (vehicles, devices, buildings, and other items) embedded with sensors, software, electronics, and network connectivity to collect and exchange data. It is a network of inter-connected objects, or smart devices, that can exchange information using agreed-upon protocols and data schemas.

According to Statista, the total installed base of IoT (Internet of Things) connected devices worldwide is estimated to reach 30.9 billion units by 2025, a significant increase from the 13.8 billion units expected in 2021.

Common Challenges in IoT    

Do you know how IoT works? IoT devices can provide automated capabilities because they have built-in sensors and mini-computer processors; the sensors collect data, which is then often analyzed with the help of machine learning. Unfortunately, because these devices are connected to the internet, they are more vulnerable to hacking and malware.

Nevertheless, we are living in a digital world where your car will soon be a better driver than you, and your smart security systems will provide better protection against damage and theft for:

  • your residence,
  • your industrial sites,
  • and your commercial premises.

Your smart refrigerator will communicate over the internet and take responsibility for ordering your groceries. All these miracles can happen through automation and the advancement of embedded systems in the Internet of Things.

However, improving the performance and quality of such systems is a significant challenge. IoT devices generate a large variety and volume of data, which is difficult to test if the IoT testing service provider you have hired lacks the resources, tools, test environments, and test methods needed to ensure the quality, performance, speed, and scalability of such systems. Consequently, IoT testing services are the key to ensuring flawless performance and functionality of your IoT systems.

When it comes to testing IoT devices, organizations face the severe challenges described below:

Testing Across Several Cloud IoT Platforms    

Every IoT device has its own hardware and depends on software to operate. When it comes to integration, IoT devices require application software to issue commands to the devices and to analyze the data they collect. Also, each device comes with a different operating system, version, firmware, hardware, and software, and it may not be possible to test every combination of devices.

Before conducting testing on IoT devices, one needs to collect information from end-users about which software they use to run the IoT devices. Among the most widely used cloud IoT platforms that help connect different components of the IoT value chain are IBM Watson, Azure IoT, and AWS. To run IoT devices across all cloud IoT platforms, it is necessary to engage an experienced IoT testing service provider, or experts from a software testing company, who are well-versed in testing cloud IoT platforms and can ensure their practical usability.

One should understand the IoT environment and how devices generate data with a wide variety, velocity, and veracity. IoT devices produce data in structured or unstructured form and then send enormous amounts of it to the cloud. If you plan to get IoT testing services, you need to test your IoT application across various platforms, and testing should be performed in a real-time environment. If a device frequently receives firmware updates or new version upgrades, it is crucial to perform specific testing with all such factors in mind.
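One practical way to get a handle on the combinatorial problem described above is to enumerate the test matrix explicitly before deciding what to cover. A minimal Python sketch, with illustrative placeholder platform and firmware names (not a real project's inventory):

```python
# Sketch: enumerating platform/OS/firmware combinations for an IoT test matrix.
# All names below are illustrative placeholders.
from itertools import product

cloud_platforms = ["AWS IoT", "Azure IoT", "IBM Watson IoT"]
device_operating_systems = ["Linux", "RTOS"]
firmware_versions = ["1.0", "1.1", "2.0"]

# Every combination a full-coverage test plan would have to exercise:
test_matrix = list(product(cloud_platforms, device_operating_systems,
                           firmware_versions))
# 3 platforms x 2 OSes x 3 firmware versions = 18 combinations --
# and this grows multiplicatively with every new dimension added.
```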

Data Security Threats    

The volume of data gathered and communicated by connected devices is enormous. The more data devices generate, the more data leaks or other risks your system can face from outside entry.

It is vital to have IoT devices tested by the best IoT testing service provider; otherwise, your IoT device can become vulnerable to security threats. With QA experts or IoT testing services, you can quickly identify security bottlenecks in the system and address them as early as possible.

When performing IoT testing, it is necessary to test credentials, passwords, and data interfaces to ensure that there is no risk of security breaches. Today, IoT engineers implement layered security; with this process, they get multiple levels of protection for the system and shield it from potential attacks or data leaks.

Too Many IoT Communication Protocols

Nowadays, IoT devices use several distinct communication protocols, from Message Queuing Telemetry Transport (MQTT) and Constrained Application Protocol (CoAP) to Extensible Messaging and Presence Protocol (XMPP), to interact with controllers and with each other.

But the most popular protocol, one that ensures an IoT device will communicate and perform well even in high-latency and low-bandwidth situations, is MQTT.

However, due to the popularity of MQTT, it is crucial to ensure the security of this protocol, as it is open to attacks and doesn’t provide strong protection beyond the Transmission Control Protocol layer. Therefore, one should hire a diligent IoT testing service provider to ensure that testing is performed rigorously and that communication between controllers and disparate devices happens reliably and safely.
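MQTT routes messages by matching topic names against subscription filters, where `+` matches exactly one topic level and `#` matches all remaining levels. A minimal pure-Python matcher written to the rules in the MQTT specification (a sketch for illustration, not a real client library) looks like this:

```python
# Sketch: MQTT topic-filter matching per the MQTT specification.
# '+' matches exactly one topic level; '#' matches any remaining levels
# and must appear as the last level of the filter.


def topic_matches(topic_filter, topic):
    f_levels = topic_filter.split("/")
    t_levels = topic.split("/")
    for i, level in enumerate(f_levels):
        if level == "#":
            return True                    # matches the rest of the topic
        if i >= len(t_levels):
            return False                   # topic is shorter than the filter
        if level != "+" and level != t_levels[i]:
            return False                   # literal level mismatch
    return len(f_levels) == len(t_levels)  # no unmatched trailing levels


# topic_matches("sensors/+/temperature", "sensors/room1/temperature") -> True
# topic_matches("sensors/#", "sensors/room1/humidity")                -> True
# topic_matches("sensors/+", "sensors/room1/humidity")                -> False
```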

Lack of Standardization    

Due to the increasing number of connected devices, it becomes imperative to improve the standardization of IoT systems at different levels: platforms, standard business models, connectivity, and application.

Standardization for each IoT device should be uniform during testing. Otherwise, your users can face severe problems when connecting IoT devices to different systems.

For this, the IoT testing service provider should have detailed expertise in performing connected device testing based on the intended use or use case of the system. Also, there should be a uniform standardization for all levels of IoT systems before providing quality-based IoT products to end-users.    

Conclusion    

The IoT testing approach can vary based on the architecture or system involved. Therefore, businesses should focus on reliable IoT testing services and allow testers to take a Test-As-a-User (TAAS) approach instead of testing based only on the requirements.

Always choose a trustworthy IoT testing service provider for integration testing of IoT systems, one with a comprehensive strategy for discovering bugs in the system through integration testing.

Numerous challenges occur while implementing IoT testing, but it is an exciting job if the testing service provider is ready to offer you end-to-end functional and non-functional validation services for different implementations.    

The company should be certified to test IoT connected devices with a complicated mesh of devices, hardware, protocols, operating systems, firmware, etc. In addition, they should have industry best practices with IoT testing tools to address challenges that you face every day while using IoT systems.    

Read more…

It’s no secret that I love just about everything to do with what we now refer to as STEM; that is, science, technology, engineering, and math. When I was a kid, my parents gifted me with what was, at that time, a state-of-the-art educational electronics kit containing a collection of basic components (resistors, capacitors, inductors), a teensy loudspeaker, some small (6-volt) incandescent bulbs… that sort of thing. Everything was connected using a patch-board of springs (a bit like the 130-in-1 Electronic Playground from SparkFun).

The funny thing is, now that I come to look back on it, most electronics systems in the real world at that time weren’t all that much more sophisticated than my kit. In our house, for example, we had one small vacuum tube-based black-and-white television in the family room and one rotary-dial telephone that was hardwired to the wall in the hallway. We never even dreamed of color televisions and I would have laughed my socks off if you’d told me that the day would come when we’d have high-definition color televisions in almost every room in the house, smart phones so small you could carry them in your pocket and use them to take photos and videos and make calls around the world, smart devices that you could control with your voice and that would speak back to you… the list goes on.

Now, of course, we have the Internet of Things (IoT), which boasts more “things” than you can shake a stick at (according to Statista, there were ~22 billion IoT devices in 2018, there will be ~38 billion in 2025, and there are expected to be ~50 billion by 2030).

One of the decisions required when embarking on an IoT deployment pertains to connectivity. Some devices are hardwired, many use Bluetooth or Wi-Fi or some form of wireless mesh, and many more employ cellular technology as their connectivity solution of choice.

In order to connect to a cellular network, the IoT device must include some form of subscriber identity module (SIM). Over the years, the original SIMs (which originated circa 1991) evolved in various ways. A few years ago, the industry saw the introduction of embedded SIM (eSIM) technology. Now, the next-generation integrated SIM (iSIM) is poised to shake the IoT world once more.

“But what is iSIM,” I hear you cry. Well, I’m glad you asked because, by some strange quirk of fate, I’ve been invited to host a panel discussion — Accelerating Innovation on the IoT Edge with Integrated SIM (iSIM) — which is being held under the august auspices of IotCentral.io.

In this webinar — which will be held on Thursday 20 May 2021 from 10:00 a.m. to 11:00 a.m. CDT — I will be joined by four industry gurus to discuss how cellular IoT is changing and how to navigate through the cornucopia of SIM, eSIM, and iSIM options to decide what’s best for your product. As part of this, we will see quick-start tools and cool demos that can move you from concept to product. Also (and of particular interest to your humble narrator), we will experience the supercharge potential of TinyML and iSIM.


Panel members Loic Bonvarlet (upper left), Brian Partridge (upper right), Dr. Juan Nogueira (lower left), and Jan Jongboom (lower right)

The gurus in question (and whom I will be questioning) are Loic Bonvarlet, VP Product and Marketing at Kigen; Brian Partridge, Research Director for Infrastructure and Cloud Technologies at 451 Research; Dr. Juan Nogueira, Senior Director, Connectivity, Global Technology Team at FLEX; and Jan Jongboom, CTO and Co-Founder at Edge Impulse.

So, what say you? Dare I hope that we will have the pleasure of your company and that you will be able to join us to (a) tease your auditory input systems with our discussions and (b) join our question-and-answer free-for-all at the end?

 

Video recording available:

Read more…

By Bee Hayes-Thakore

The Android Ready SE Alliance, announced by Google on March 25th, paves the way for tamper-resistant, hardware-backed security services. Kigen is bringing the first secure iSIM OS, along with our GSMA-certified eSIM OS and personalization services, to support fast adoption of emerging security services across smartphones, tablets, WearOS, Android Auto Embedded, and Android TV.

Google has been advancing its investment in how tamper-resistant secure hardware modules can protect not only Android and its functionality, but also third-party apps and sensitive transactions. The latest Android smartphone device features enable tamper-resistant key storage for Android apps using StrongBox. StrongBox is an implementation of the hardware-backed Keystore that resides in a hardware security module.

To accelerate adoption of new Android use cases with stronger security, Google announced the formation of the Android Ready SE Alliance. Secure Element (SE) vendors are joining hands with Google to create a set of open-source, validated, and ready-to-use SE Applets. On March 25th, Google launched the General Availability (GA) version of StrongBox for SE.


Hardware based security modules are becoming a mainstay of the mobile world. Juniper Research’s latest eSIM research, eSIMs: Sector Analysis, Emerging Opportunities & Market Forecasts 2021-2025, independently assessed eSIM adoption and demand in the consumer sector, industrial sector, and public sector, and predicts that the consumer sector will account for 94% of global eSIM installations by 2025. It anticipates that established adoption of eSIM frameworks from consumer device vendors such as Google, will accelerate the growth of eSIMs in consumer devices ahead of the industrial and public sectors.


Consumer sector will account for 94% of global eSIM installations by 2025

Juniper Research, 2021.

Expanding the secure architecture of trust to consumer wearables, smart TV and smart car

What’s more, a major development is that this is now not just for smartphones and tablets, but also applicable to WearOS, Android Auto Embedded, and Android TV. These less traditional form factors have huge potential beyond being purely companion devices to smartphones or tablets. With the power, size, and performance benefits offered by Kigen’s iSIM OS, OEMs and chipset vendors can consider the full scope of the vast Android ecosystem to deliver new services.

This means new secure services and innovations around:

🔐 Digital keys (car, home, office)

🛂 Mobile Driver’s License (mDL), National ID, ePassports

🏧 eMoney solutions (for example, Wallet)

How is Kigen supporting Google’s Android Ready SE Alliance?

The alliance was created to make discrete tamper-resistant, hardware-backed security the lowest common denominator for the Android ecosystem. A major goal of this alliance is to enable consistent, interoperable, and demonstrably secure applets across the Android ecosystem.

Kigen believes that enabling the broadest choice and interoperability is fundamental to the architecture of digital trust. Our secure, standards-compliant eSIM and iSIM OS, and secure personalization services are available to all chipset or device partners in the Android Ready SE Alliance to leverage the benefits of iSIM for customer-centric innovations for billions of Android users quickly.

Vincent Korstanje, CEO of Kigen

Kigen’s support for the Android Ready SE Alliance will allow our industry partners to easily leapfrog to the enhanced security and power efficiency benefits of iSIM technology or choose a seamless transition from embedded SIM so they can focus on their innovation.

We are delighted to partner with Kigen to further strengthen the security of Android through StrongBox via Secure Element (SE). We look forward to widespread adoption by our OEM partners and developers and the entire Android ecosystem.

Sudhi Herle, Director of Android Platform Security 

In the near term, the Google team is prioritizing and delivering the following Applets in conjunction with corresponding Android feature releases:

  • Mobile driver’s license and Identity Credentials
  • Digital car keys

Kigen brings the ability to bridge the physical embedded security hardware to a fully integrated form factor. Our Kigen standards-compliant eSIM OS (version 2.2 eUICC OS) is available to support chipsets and device makers now. This announcement is the start of a whole host of new and exciting trusted services offering a better experience for users on Android.

What Kigen’s eSIM (eUICC) OS brings


The smallest operating system, allowing OEMs to select compact, cost-effective hardware to run it on.

Kigen OS offers the highest level of logical security when employed on any SIM form factor, including a secure enclave.

On top of Kigen OS, we have a broad portfolio of Java Card™ Applets to support your needs for the Android SE Ready Alliance.

Kigen’s Integrated SIM or iSIM (iUICC) OS furthers these advantages


Integrated at the heart of the device and securely personalized, iSIM brings significant size and battery life benefits to cellular IoT devices. iSIM can act as a root of trust for payment, identity, and critical infrastructure applications.

Kigen’s iSIM is flexible enough to support dual SIM capability through a single profile or through remote SIM provisioning mechanisms, with the latter enabling out-of-the-box connectivity plus secure and remote profile management.

For smartphones, set-top boxes, Android Auto applications, in-car displays, Chromecast or Google Assistant enabled devices, iSIM can offer significant benefits when incorporating artificial intelligence at the edge.

Kigen’s secure personalization services to support fast adoption

SIM vendors have in-house capabilities for data generation, but the eSIM and iSIM value chains redistribute many roles and responsibilities among new stakeholders for the personalization of operator credentials, whether at different stages of production or over-the-air once devices are deployed.

Kigen can offer data generation as a service to vendors new to the ecosystem.

Partner with us to provide cellular chipset and module makers with the strongest security and performance for integrated SIM, accelerating these new use cases.

Security considerations for eSIM and iSIM enabled secure connected services

Designing a secure connected product requires considerable thought and planning and there really is no ‘one-size-fits-all’ solution. How security should be implemented draws upon a multitude of factors, including:

  • What data is being stored or transmitted between the device and other connected apps?
  • Are there regulatory requirements for the device (e.g. PCI DSS, HIPAA, FDA)?
  • What are the hardware or design limitations that will affect security implementation?
  • Will the devices be manufactured in a site accredited by all of the necessary industry bodies?
  • What is the expected lifespan of the device?

End-to-end ecosystem and services thinking needs to be a design consideration from the very early stages, especially given the strain on battery consumption in devices such as wearables, smart watches and fitness devices, as well as portable devices that are part of connected consumer vehicles.

Originally posted here.

Read more…

In my last post, I explored how OTA updates are typically performed using Amazon Web Services and FreeRTOS. OTA updates are critically important to developers with connected devices. In today’s post, we are going to explore several best practices developers should keep in mind with implementing their OTA solution. Most of these will be generic although I will point out a few AWS specific best practices.

Best Practice #1 – Name your S3 bucket with afr-ota

There is a little trick with creating S3 buckets that I was completely oblivious to for a long time. When I checked in with some colleagues about it, they had not been aware of it either, so I’m not sure how long this has been supported, but it can save an embedded developer from having to wade through too many AWS policies and simplify the process a little bit.

Anyone who has attempted to create an OTA update with AWS and FreeRTOS knows that you have to set up several permissions to allow an OTA Update Job to access the S3 bucket. If you name your S3 bucket so that it begins with “afr-ota”, the bucket will automatically have the AWS managed policy AmazonFreeRTOSOTAUpdate attached to it. (See Create an OTA Update service role for more details.) It’s a small help, but a good best practice worth knowing.
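As a rough illustration of the naming convention, here is a small Python helper. The `ota_bucket_name` function and its length check are illustrative assumptions; actual bucket creation would still go through the AWS console or SDK.

```python
def ota_bucket_name(project: str) -> str:
    """Build an S3 bucket name with the "afr-ota" prefix so the
    AmazonFreeRTOSOTAUpdate managed policy is attached automatically."""
    name = f"afr-ota-{project.lower()}"
    # S3 bucket names must be 3-63 characters of lowercase letters,
    # digits, dots, and hyphens.
    if not 3 <= len(name) <= 63:
        raise ValueError(f"invalid bucket name length: {name!r}")
    return name

print(ota_bucket_name("SmartSensor-Fw"))  # afr-ota-smartsensor-fw
```

The only part AWS cares about is the “afr-ota” prefix; the rest of the name is up to you.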

Best Practice #2 – Encrypt your firmware updates

Embedded software must be one of the most expensive things mankind has ever invented! It is time-consuming to create and test, and can consume a large percentage of the development budget. Software also drives most features in a product and can dramatically differentiate one product from another. That software is intellectual property worth protecting through encryption.

Encrypting a firmware image provides several benefits. First, it converts your firmware binary into a form that appears random or meaningless. This is desirable because a developer shouldn’t want their binary image to be easily studied, investigated, or reverse engineered; it makes it harder for someone to steal intellectual property and more difficult for a would-be attacker to understand the system. Second, encrypting the image means that the sender must have a key or credential of some sort that matches the device that will decrypt the image. This can be viewed as a simple way of helping to authenticate the source, although more than encryption alone should be done to fully authenticate and verify integrity, such as signing the image.
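To illustrate the authentication point, here is a hedged Python sketch using a symmetric HMAC tag over the image. Real OTA pipelines typically use asymmetric signatures plus separate encryption, so the `sign_image`/`verify_image` helpers below are illustrative only, not any product’s actual scheme.

```python
import hashlib
import hmac
import secrets

TAG_LEN = 32  # SHA-256 digest size

def sign_image(image: bytes, key: bytes) -> bytes:
    """Append an HMAC-SHA256 tag so the receiving device can check
    integrity and that the sender held the matching key."""
    return image + hmac.new(key, image, hashlib.sha256).digest()

def verify_image(blob: bytes, key: bytes) -> bytes:
    """Split off the tag, recompute it, and reject tampered images."""
    image, tag = blob[:-TAG_LEN], blob[-TAG_LEN:]
    expected = hmac.new(key, image, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("firmware image failed authentication")
    return image

key = secrets.token_bytes(32)  # shared with the device at provisioning time
blob = sign_image(b"firmware v1.8", key)
assert verify_image(blob, key) == b"firmware v1.8"
```

Flipping a single bit in `blob` makes `verify_image` raise, which is exactly the behavior you want before a device ever boots a new image.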

Best Practice #3 – Do not support firmware rollbacks

There is often a debate as to whether firmware rollbacks should be supported in a system or not. My recommendation is that firmware rollbacks be disabled. The argument for rollbacks is usually that if something goes wrong with a firmware update, the user can roll back to an older version that was working. This seems like a good idea at first, but it can be a source of vulnerability. For example, let’s say that version 1.7 had a bug that allowed remote attackers to access the system. A new firmware version, 1.8, fixes this flaw. A customer updates their firmware to version 1.8, but an attacker knows that if they can force the system back to 1.7, they can own the system. Firmware rollbacks seem convenient; in fact, I’m sure I used to recommend them as a best practice. However, in today’s connected world where we perform OTA updates, firmware rollbacks are a vulnerability, so disable them to protect your users.
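The anti-rollback rule reduces to a single version comparison at update time. A minimal sketch, assuming versions are simple (major, minor) tuples:

```python
def accept_update(current: tuple, candidate: tuple) -> bool:
    """Reject any image at or below the running version, closing the
    downgrade-to-vulnerable-1.7 attack described above."""
    return candidate > current

assert accept_update((1, 7), (1, 8))       # normal upgrade
assert not accept_update((1, 8), (1, 7))   # rollback attempt blocked
assert not accept_update((1, 8), (1, 8))   # same-version re-install blocked
```

On real hardware the “current” value is usually a monotonic counter in fuses or protected flash, so an attacker cannot simply reset it.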

Best Practice #4 – Secure your bootloader

Updating firmware Over-the-Air requires several components to ensure that it is done securely and successfully. Often the focus is on getting the new image to the device and getting it decrypted. However, just like in traditional firmware updates, the bootloader is still a critical piece to the update process and in OTA updates, the bootloader can’t just be your traditional flavor but must be secure.

There are quite a few methods that can be used with the onboard bootloader, but no matter the method used, the bootloader must be secure. Secure bootloaders need to be capable of verifying the authenticity and integrity of the firmware before it is ever loaded. Some systems will use the application code to verify and install the firmware into a new application slot while others fully rely on the bootloader. In either case, the secure bootloader needs to be able to verify the authenticity and integrity of the firmware prior to accepting the new firmware image.

It’s also a good idea to ensure that the bootloader is built into a chain of trust and cannot be easily modified or updated. The secure bootloader is a critical component in a chain-of-trust that is necessary to keep a system secure.

Best Practice #5 – Build a Chain-of-Trust

A chain-of-trust is a sequence of events that occurs while booting the device and ensures each link in the chain is trusted software. For example, I’ve been working with the Cypress PSoC 64 secure MCUs recently, and these parts ship from the factory with a hardware-based root-of-trust to authenticate that the MCU came from a secure source. That Root-of-Trust (RoT) is then transferred to a developer, who programs a secure bootloader and security policies onto the device. During the boot sequence, the RoT verifies the integrity and authenticity of the bootloader, which then verifies the integrity and authenticity of any second-stage bootloader or software, which in turn verifies the authenticity and integrity of the application. The application then verifies the authenticity and integrity of its data, keys, operational parameters and so on.

This sequence creates a Chain-of-Trust which is needed and used by firmware OTA updates. When a new firmware request is made, the application must decrypt the image and verify that the authenticity and integrity of the new firmware are intact. That new firmware can then only be used if the Chain-of-Trust can successfully make its way through each link in the chain. The bottom line: a developer and the end user know that when the system boots successfully, the new firmware is legitimate.
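A chain-of-trust boot sequence can be sketched as each link checking a measurement of the next stage before handing over control. The stage names and the plain SHA-256 measurement below are illustrative assumptions; real systems verify cryptographic signatures, not bare hashes.

```python
import hashlib

def measure(image: bytes) -> str:
    """Hash a stage image; stands in for signature verification."""
    return hashlib.sha256(image).hexdigest()

def boot(stages, trusted):
    """Walk the chain: a stage runs only if its measurement matches
    the value the previous link trusts."""
    booted = []
    for name, image in stages:
        if measure(image) != trusted[name]:
            raise RuntimeError(f"chain of trust broken at {name!r}")
        booted.append(name)
    return booted

stages = [("bootloader", b"BL v2"),
          ("application", b"APP v1.8"),
          ("config", b"KEYS+PARAMS")]
trusted = {name: measure(image) for name, image in stages}

print(boot(stages, trusted))  # ['bootloader', 'application', 'config']

# A tampered application image halts the boot:
tampered = [("bootloader", b"BL v2"),
            ("application", b"APP v1.7-evil"),
            ("config", b"KEYS+PARAMS")]
try:
    boot(tampered, trusted)
except RuntimeError as err:
    print(err)  # chain of trust broken at 'application'
```

The point of the structure is that no later stage ever executes once an earlier measurement fails.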

Conclusions

OTA updates are a critical infrastructure component of nearly every embedded IoT device. Sure, there are systems out there that will never update once deployed; however, those are probably a small percentage of systems. OTA updates are the go-to mechanism for updating firmware in the field. We’ve examined several best practices that developers and companies should consider when they start to design their connected systems. In fact, the bonus best practice for today is that if you are building a connected device, make sure you explore your OTA update solution sooner rather than later. Otherwise, you may find that building the Chain-of-Trust necessary in today’s deployments is far more expensive and time-consuming to implement.

Originally posted here.

Read more…

IoT in Mining

Flowchart of IoT in Mining

by Vaishali Ramesh

Introduction – Internet of Things in Mining

The Internet of Things (IoT) is the extension of Internet connectivity into physical devices and everyday objects. Embedded with electronics, Internet connectivity, and other forms of hardware, these devices can communicate and interact with others over the Internet, and they can be remotely monitored and controlled. In the mining industry, IoT is used as a means of achieving cost and productivity optimization, improving safety measures, and developing artificial intelligence capabilities.

IoT in the Mining Industry

Considering the numerous incentives it brings, many large mining companies are planning and evaluating ways to start their digital journey and digitalize day-to-day mining operations. For instance:

  • Cost optimization & improved productivity through the implementation of sensors on mining equipment and systems that monitor the equipment and its performance. Mining companies are using these large chunks of data – 'big data' to discover more cost-efficient ways of running operations and also reduce overall operational downtime.
  • Ensure the safety of people and equipment by monitoring ventilation and toxicity levels inside underground mines with the help of IoT on a real-time basis. It enables faster and more efficient evacuations or safety drills.
  • Moving from preventive to predictive maintenance
  • Improved and faster decision-making. The mining industry faces emergencies almost every hour, with a high degree of unpredictability. IoT helps balance these situations and make the right decisions when several aspects are active at the same time, shifting everyday operations to algorithms.
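The real-time toxicity monitoring mentioned above boils down to comparing sensor readings against safety limits. A minimal sketch, with purely illustrative thresholds (real limits come from mine-safety regulations):

```python
# Illustrative thresholds only - real limits come from mine-safety regulations.
GAS_LIMITS_PPM = {"CO": 25, "CH4": 5000, "H2S": 10}

def check_readings(readings: dict) -> list:
    """Return the gases that exceed their limit so the platform can
    trigger ventilation alarms or an evacuation drill in real time."""
    return [gas for gas, ppm in readings.items()
            if ppm > GAS_LIMITS_PPM.get(gas, float("inf"))]

assert check_readings({"CO": 12, "CH4": 300}) == []
assert check_readings({"CO": 40, "H2S": 2}) == ["CO"]
```

In practice this check runs continuously on gateway hardware underground, where connectivity back to the surface cannot be assumed.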

IoT & Artificial Intelligence (AI) application in Mining industry

Another benefit of IoT in the mining industry is its role as the underlying system facilitating the use of Artificial Intelligence (AI). From exploration to processing and transportation, AI enhances the power of IoT solutions as a means of streamlining operations, reducing costs, and improving safety within the mining industry.

Using vast amounts of data inputs, such as drilling reports and geological surveys, AI and machine learning can make predictions and provide recommendations on exploration, resulting in a more efficient process with higher-yield results.

AI-powered predictive models also enable mining companies to improve their metals processing methods through more accurate and less environmentally damaging techniques. AI can be used for the automation of trucks and drills, which offers significant cost and safety benefits.

Challenges for IoT in Mining 

Although there are benefits of IoT in the mining industry, implementation of IoT in mining operations has faced many challenges in the past.

  • Limited or unreliable connectivity especially in underground mine sites
  • Remote locations may struggle to pick up 3G/4G signals
  • Declining ore grade has increased the requirements to dig deeper in many mines, which may increase hindrances in the rollout of IoT systems

Mining companies have overcome the challenge of connectivity by implementing more reliable connectivity methods and data-processing strategies to collect, transfer and present mission-critical data for analysis. Satellite communications can play a critical role in transferring data back to control centers to provide a complete picture of mission-critical metrics. Mining companies have worked with trusted IoT satellite connectivity specialists such as Inmarsat and their partner ecosystems to ensure they extract and analyze their data effectively.

 

Cybersecurity will be another major challenge for IoT-powered mines over the coming years

As mining operations become more connected, they will also become more vulnerable to hacking, which will require additional investment in security systems.

 

Following a 2016 data breach at Goldcorp that disproved the previous industry mentality that miners are not typical targets, 10 mining companies established the Mining and Metals Information Sharing and Analysis Centre (MM-ISAC) in April 2017 to share cyber threats among peers.

In March 2019, one of the largest aluminum producers in the world, Norsk Hydro, suffered an extensive cyber-attack, which led to the company isolating all plants and operations as well as switching to manual operations and procedures. Several of its plants suffered temporary production stoppages as a result. Mining companies have realized the importance of digital security and are investing in new security technologies.

Digitalization of Mining Industry - Road Ahead

Many mining companies have realized the benefits of digitalization in their mines and have taken steps to implement them. Four themes are expected to be central to the digitalization of the mining industry over the next decade; they are listed below:

[Figures: digital technologies in mining, plotted by complexity and expected implementation period for widespread adoption]

The graphs demonstrate the complexity of each digital technology and the implementation period expected for its widespread adoption. Various factors, such as the complexity and scalability of the technologies involved, affect the adoption rate of specific technologies and the pace of the overall digital transformation of the mining industry.

The world can expect to witness prominent developments from the mining industry to make it more sustainable. Mining also has some unfavorable impacts on communities, ecosystems, and other surroundings; with the intention of minimizing them, the power of data is being harnessed through different IoT solutions. Overall, IoT helps the mining industry shift towards resource extraction with the smaller footprint and tighter time frame that are essential.

Originally posted here.

Read more…

From a salt shaker with a built-in speaker to smart water devices that bring clean water to communities with weak infrastructure, connected devices are increasingly advancing into all areas of our lives. But more connectivity brings more possibilities for crippling issues that can impact product development, operations, and maintenance. IoT developers must consider how to plan for firmware architecture that leads to a better, stickier product.

Competition among connected device manufacturers is swelling in every corner of the industry, and clunky products won’t get the benefit of the doubt that developers might have enjoyed in the IoT’s nascent days. As users become more dependent on connected devices, they now expect those devices to consistently function well - and securely. There remains, of course, work to be done: a quick Google search reveals stories like the Fitbit firmware update that destroyed the device battery, or the Tesla key fobs that could be overwritten and hijacked until a patch was rolled out.

These stories underscore that the IoT ecosystem’s connected nature requires that hardware developers approach product development differently - and take firmware updates seriously. It used to be that developers could write static firmware for specific device use cases or commoditized products and, once released, have no further interaction or engagement with the product. That system no longer works. To have a successful product, IoT device manufacturers need to invest in design and in firmware development equally.

Whether it’s BLE on phones, LTE, or Zigbee and other mesh networks, IoT devices are connected, regularly transmitting sensitive and personal data to and from the cloud. The near-limitless reach of modern connected devices across all areas of our lives, paired with the high price point of most IoT devices, underscores that IoT developers must have a plan (and not an after-the-fact reaction) for firmware maintenance. Putting that plan in motion requires three considerations:

Device monitoring

Ubiquitous connectivity brings with it major challenges, but it also brings opportunities. Among other things, it allows automated device health monitoring. The typical process of releasing a product relies on users’ reporting a problem and requiring them to physically return the device to be evaluated, repaired, and returned. Simply put, this is a huge waste of money and time, and it also risks frustrating the customer to the point of losing them entirely. Using customers as your testers is simply a terrible business decision. (Maybe you could get away with it if you were the only game in town, but IoT device makers don’t have that luxury anymore). Automated device monitoring is the solution. By regularly analyzing the health of devices and flagging potential problems immediately, a monitoring system can help device makers catch and fix issues in hours that would have otherwise taken them weeks to root cause. Designing embedded systems with such capabilities gives critical observability into performance, stability, and overall health - either of a single device or of a fleet of millions. 

Repair

Shipping products that require an update or patch is inevitable for even the most talented and thorough teams. Just ask NASA. While no one can avoid updates entirely, it is possible to detect fleet-wide issues and solve them without burdening users. The key is to roll out updates incrementally, starting with a small number of devices and ramping up over time. This limits the impact of any new issues and insulates most of your users from the churn of getting a few bugfix releases in a row.  Another good option is to implement an A/B update system if you have enough flash memory. This allows your device to download an update in the background with no user impact and simply prompts the user to reboot once the update is ready. Fast and simple update flows like A/B updates are key to compliance, and prevent too much fragmentation across your fleet. Last but not least, it is important to pair regular updates with a monitoring system so you can quickly identify problems with the update, and rollouts can be paused or aborted altogether.
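The incremental-rollout idea can be sketched by hashing each device ID into a fixed bucket, so raising the rollout percentage widens the cohort without reshuffling devices that already updated. The `in_rollout` helper below is an illustrative assumption, not any particular vendor’s API.

```python
import hashlib

def in_rollout(device_id: str, percent: int, release: str) -> bool:
    """Deterministically place a device in the first `percent` of the
    fleet for this release. Raising `percent` widens the cohort without
    reshuffling the devices that have already updated."""
    digest = hashlib.sha256(f"{release}:{device_id}".encode()).digest()
    bucket = int.from_bytes(digest[:2], "big") % 100
    return bucket < percent

# Ramp from 1% to 10%: every device in the 1% cohort stays in the 10% cohort.
early = [f"device-{i}" for i in range(1000) if in_rollout(f"device-{i}", 1, "v1.8")]
assert all(in_rollout(d, 10, "v1.8") for d in early)
```

Keying the hash on the release string also reshuffles cohorts between releases, so the same devices are not always the guinea pigs.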

Building with security in mind

The ubiquity of IoT devices has accelerated customer demands for robust device security, with regulatory bodies becoming more serious (and punitive) about security requirements and standards in lockstep. For those building smart devices, I would offer these principles as table stakes for security: 

  1. Devices must be updateable. 
  2. Trusted boot is no longer optional. You need a chain of trust to control the firmware running on your device.
  3. Rotate secrets and don’t use a master secret. Whether that means a set of encryption keys or other secrets that make devices functional, they must be dynamically changed so that the compromise of one device does not lead to the compromise of others. 
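Principle 3 can be sketched as a provisioning store that gives every device an independent random key and rotates keys individually. The `KeyStore` class below is purely illustrative; a real back end would keep these keys in an HSM or secure database.

```python
import secrets

class KeyStore:
    """Back-end store of per-device secrets. Each device receives an
    independent random key at provisioning (no shared master secret),
    and any key can be rotated on its own."""

    def __init__(self):
        self._keys = {}

    def provision(self, device_id: str) -> bytes:
        self._keys[device_id] = secrets.token_bytes(32)
        return self._keys[device_id]

    def rotate(self, device_id: str) -> bytes:
        # Issue a fresh key; compromise of the old one no longer matters.
        self._keys[device_id] = secrets.token_bytes(32)
        return self._keys[device_id]

store = KeyStore()
key_a = store.provision("device-a")
key_b = store.provision("device-b")
assert key_a != key_b  # compromising one device reveals nothing about another
```

Because the keys are independent rather than derived from one master, leaking a single device’s key never cascades across the fleet.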

Software teams have long embraced iterative processes, and IoT device developers can learn much from them. Focusing on firmware architecture that is responsive, observable, and proactive lets device manufacturers ship a better product and create a happier customer base.

François Baldassari is the Founder and CEO of Memfault, a cloud-based observability platform for hardware devices. Prior to Memfault, François worked on developer infrastructure initiatives at Pebble and Oculus.

Read more…

Image Source: SEGGER.com

Nearly every embedded software developer working in the IoT space is now building secure devices. Developers have mostly focused on how to handle secure applications and on basic microcontroller technologies, such as how to use Arm’s TrustZone or leverage multicore processors. A looming problem many companies and teams overlook is that figuring out how to develop secure applications is just the first step. There are three stages to secure product lifecycle management, and in today’s post we will review what is involved in each stage.

As a quick overview, the stages, which can be seen in the diagram below, are:

  • Development
  • Test and Production Deployment
  • Maintenance and In-field Servicing

Let us look at each of these stages in a little more detail. 

Stage #1 – Development

Development is probably the area most developers are most familiar with, but at the same time the area they are having to adapt to the most. Many developers have designed and built systems without ever having to take security into account. Development involves a lot more than just deciding which components to isolate and how to separate the software into secure and non-secure regions.

For example, during the development phase, developers now need to learn how to develop in an environment where a secure bootloader is in place. They need to consider how to handle firmware fallbacks: whether they are allowed and, if so, under what conditions. Firmware images may need to be compressed on top of the need for authentication.

While the development stage has become more complicated, developers should not struggle too much to extrapolate their past experiences to developing secure firmware successfully.

Stage #2 – Test and Production Deployment

The area that developers will probably struggle with the most is the test and production deployment stage. Testing secure software requires additional steps to authenticate the debug hardware so that the developer can access secure memory regions to test and successfully debug their code. Even more importantly, care must be taken when installing that secure software onto a product during production.

There are several ways this can be done, but one method is to use a secure flashing device like SEGGER’s Flasher Secure. These devices follow a multistep process that involves validating a user ID before the secure firmware can be installed on the device. The devices themselves limit how many devices, and which devices, the firmware can be installed on, which helps protect a team’s intellectual property and prevents unauthorized production of a product.


Stage #3 – Maintenance and In-field Servicing

Finally, there is the maintenance and in-field servicing stage, which is a partial continuation of the development phase. Once a product has been deployed into the field, it needs to be securely updated. Updates can be done manually in the field, or they can be done using an over-the-air update process. This involves the device contacting a secure firmware server that can compress and encrypt the image and transport it to the device. Once the device has received the image, it must decrypt, decompress and validate its contents. If everything looks good, the image can then be loaded as the primary firmware for the device.

Conclusions

 There is much more to designing and deploying a secure device than simply developing a secure application. The entire process is broken up into three main stages that we have looked at in greater detail today. Unfortunately, we have only just scratched the surface!

Originally posted here.

Read more…

In this blog, we’ll discuss how users of Edge Impulse and Nordic can actuate and stream classification results over BLE using Nordic’s UART Service (NUS). This makes it easy to integrate embedded machine learning into your next generation IoT applications. Seamless integration with nRF Cloud is also possible since nRF Cloud has native support for a BLE terminal. 

We’ve extended the Edge Impulse example functionality already available for the nRF52840 DK and nRF5340 DK by adding the ability to actuate and stream classification outputs. The extended example is available for download on GitHub, and offers a uniform experience on both hardware platforms.

Using nRF Toolbox 

After following the instructions in the example’s readme, download the nRF Toolbox mobile application (available on both iOS and Android) and connect to the nRF52840 DK or the nRF5340 DK, which will be discovered as “Edge Impulse”. Once connected, set up the interface as follows so that you can get information about the device and its available sensors, and start/stop the inferencing process. Save the preset configuration so that you can load it again for future use. Fill out the text of the various commands using the same convention as the Edge Impulse AT command set. For example, sending AT+RUNIMPULSE starts the inferencing process on the device.
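On the firmware side, this AT-command handling amounts to a small dispatcher. In this sketch only AT+RUNIMPULSE comes from the text above; AT+STOPIMPULSE, the handler bodies, and the response strings are hypothetical.

```python
def start_inference() -> str:
    # Hypothetical handler body; real firmware would start sampling
    # and run the impulse here.
    return "Starting inferencing..."

def stop_inference() -> str:
    return "Inferencing stopped"

AT_COMMANDS = {
    "AT+RUNIMPULSE": start_inference,   # from the Edge Impulse AT command set
    "AT+STOPIMPULSE": stop_inference,   # hypothetical command name
}

def handle(line: str) -> str:
    """Look up the command received over the NUS link and run it."""
    handler = AT_COMMANDS.get(line.strip().upper())
    return handler() if handler else "ERR: unknown command"

assert handle("AT+RUNIMPULSE") == "Starting inferencing..."
```

Each nRF Toolbox icon simply sends one of these command strings over the BLE UART link, and the response text is what shows up in the Logs view.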

Figure 1. Setting up the Edge Impulse AT Command set

Once each AT command has been mapped to an icon, hit the appropriate icon. Hitting the ‘play’ button causes the device to start acquiring data and perform inference every couple of seconds. The results can be viewed in the “Logs” menu as shown below.

Figure 2. Classification Output over BLE in the Logs View

Using nRF Cloud

Using the nRF Connect for Cloud mobile app for iOS and Android, you can turn your smartphone into a BLE gateway. This allows users to easily connect their BLE NUS devices running Edge Impulse to the nRF Cloud as an easy way to send the inferencing conclusions to the cloud. It’s as easy as setting up the BLE gateway through the app, connecting to the “Edge Impulse” device and watching the same results being displayed in the “Terminal over BLE” window shown below!

Figure 3. Classification Output Shown in nRF Cloud

Summary

Edge Impulse is supercharging IoT with embedded machine learning and we’ve discussed a couple of ways you can easily send conclusions to either the smartphone or to the cloud by leveraging the Nordic UART Service. We look forward to seeing how you’ll leverage Edge Impulse, Nordic and BLE to create your next gen IoT application.  

 

Article originally written for the Edge Impulse blog by Zin Thein Kyaw, Senior User Success Engineer at Edge Impulse.

Read more…

By Adam Dunkels

When you have to install thousands of IoT devices, you need to make device installation impressively fast. Here is how to do it.

Every single IoT device out there has been installed by someone.

Installation is the activity that requires the most attention during a device’s lifetime.

This is particularly true for large scale IoT deployments.

We at Thingsquare have been involved in many IoT products and projects. Many of these have involved large scale IoT deployments with hundreds or thousands of devices per deployment site.

In this article, we look at why installation is so important for large IoT deployments – and share six installation tactics to make installation impressively fast and highly useful:

  1. Take photos
  2. Make it easy to identify devices
  3. Record the location of every device
  4. Keep a log of who did what
  5. Develop an installation checklist, and turn it into an app
  6. Measure everything

And these tactics are useful even if you only have a handful of devices per site, but thousands or tens of thousands of devices in total.

Why Installation Tactics are Important in Large IoT Deployments

Installation is a necessary step of an IoT device’s life.

Someone – maybe your customers, your users, or a team of technicians working for you – will be responsible for the installation. The installer turns your device from a piece of hardware into a living thing: a valuable producer of information for your business.

But most of all, installation is an inevitable part of the IoT device life cycle.

The life cycle of an IoT device can be divided into four stages:

  1. Produce the device, at the factory (usually with a device programming tool).
  2. Install the device.
  3. Use the device. This is where the device generates the value that we created it for. The device may then be either re-installed at a new location, or we:
  4. Retire the device.

Two stages in the list involve installation: Install, and Use (where a device may be re-installed).

So installation is inevitable – and important. We need to plan to deal with it.

Installation is the Most Time-Consuming Activity

Most devices should spend most of their lifetime in the Use stage of their life cycle.

But a device’s lifetime is different from the attention time that we need to spend on them.

Devices usually don’t need much attention in their Use stage. At this stage, they should mostly be sitting there and generate valuable information.

By contrast, for the people who work with the devices, most of their attention and time will be spent in the Install stage. Since these are people whose salaries you are paying, you want to be as efficient as possible.

How To Make Installation Impressively Fast - and Useful

At Thingsquare, we have deployed thousands of devices together with our customers, and our customers have deployed many hundreds of thousands of devices with their customers.

These are our top six tactics to make installation fast – and useful:

1. Take Photos

After installation, you will need to maintain and troubleshoot the system. This is a normal part of the Use stage.

Photos are a goldmine of information. Particularly if it is difficult to get to the location afterward.

Make sure you take plenty of photos of each device as they are installed. In fact, you should include multiple photos in your installation checklist – more about this below.

We have been involved in several deployments where we have needed to remotely troubleshoot systems long after installation. Having a set of photos of how and where the devices were installed helps tremendously.

The photos don’t need to be great. Having a low-quality photo beats having no photo, every time.

 

2. Make it Easy to Identify Devices

When dealing with hundreds of devices, you need to make sure that you know exactly which devices you installed, and where.

You therefore need to make it easy to identify each device. Devices can be identified in several ways, and we recommend using more than one. This will reduce the risk of manual errors.

The two ways we typically use are:

  • A printed unique ID number on the device, which you can take a photo of
  • Automatic secure device identification via Bluetooth – this is something the Thingsquare IoT platform supports out of the box

Being certain about where devices were installed will make maintenance and troubleshooting much easier – particularly if it is difficult to visit the installation site.
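As a sketch of the double-identification idea, the installer app can require that the photographed label and the Bluetooth-reported ID agree before an install is accepted. The function names and normalization rules below are my own illustration, not a Thingsquare API:

```python
def normalize(device_id: str) -> str:
    """Canonicalize an ID so '00-1A-C2' and '001ac2' compare equal."""
    return device_id.replace("-", "").replace(":", "").strip().upper()

def identities_match(printed_id: str, ble_id: str) -> bool:
    """Accept the install only if both identification methods agree."""
    return normalize(printed_id) == normalize(ble_id)
```

If the two IDs disagree, the app can refuse to proceed, catching a mis-scanned label on the spot instead of during troubleshooting months later.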

3. Record the Location of Every Device

When devices are installed, make sure to record their location.

The easiest way to do this is to record the GPS coordinates of each device as it is being deployed – preferably with the installation app, which can do this automatically (see below).

For indoor installations, exact GPS locations may be unreliable. But even for those devices, having a coarse-grained GPS location is useful.

The location is useful both when analyzing the data that the devices produce, and when troubleshooting problems in the network.

 

4. Keep a Log of Who Did What

In large deployments, there will be many people involved.

Being able to trace the installation actions, as well as who took what action, is enormously useful. Sometimes just knowing the steps that were taken when installing each device is important. And sometimes you need to talk to the person who did the installation.
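A minimal sketch of such a log, combining tactics 3 and 4: each entry records who did what, to which device, where, and when. The field names are my own illustration; a real deployment would persist this server-side:

```python
import time

installation_log = []

def record_action(device_id, installer, action, lat=None, lon=None):
    """Append one traceable installation action: who did what, where, when."""
    installation_log.append({
        "device": device_id,
        "installer": installer,
        "action": action,
        "lat": lat,
        "lon": lon,
        "timestamp": time.time(),
    })

def history_for(device_id):
    """All recorded actions for one device, in order."""
    return [e for e in installation_log if e["device"] == device_id]
```

When a device misbehaves, `history_for()` tells you both what was done to it and whom to call.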

5. Develop an Installation Checklist - and Turn it into an App

Determine what steps are needed to install each device, and turn them into a step-by-step checklist.

Then turn this checklist into an app that installation personnel can run on their own phones.

Each step of the checklist should be really easy to understand, to avoid mistakes along the way. It should also be easy to go back and forth between steps, if needed.

Ideally, the app should run on both Android and iOS, because you would like everyone to be able to use it on their own phones.

Here is an example checklist that we developed for a sensor device in a retail IoT deployment:

  • Check that sensor has battery installed
  • Attach sensor to appliance
  • Make sure that the sensor is online
  • Check that the sensor has a strong signal
  • Check that the GPS location is correct
  • Move hand in front of sensor, to make sure sensor correctly detects movement
  • Be still, to make sure sensor correctly detects no movement
  • Enter description of sensor placement (e.g. “on top of the appliance”)
  • Enter description of appliance
  • Take a photo of the sensor
  • Take a photo of the appliance
  • Take a photo of the appliance and the two beside it
  • Take a photo of the appliance and the four beside it
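One way to build such an app is to treat the checklist as data, so the same engine can render it on both Android and iOS and enforce the step order. A minimal sketch, using a few of the steps from the retail example above – the step "kinds" are my own illustration, not the actual Thingsquare app:

```python
CHECKLIST = [
    {"id": 1, "kind": "confirm", "text": "Check that sensor has battery installed"},
    {"id": 2, "kind": "confirm", "text": "Attach sensor to appliance"},
    {"id": 3, "kind": "confirm", "text": "Make sure that the sensor is online"},
    {"id": 4, "kind": "input",   "text": "Enter description of sensor placement"},
    {"id": 5, "kind": "photo",   "text": "Take a photo of the sensor"},
]

def next_step(completed_ids):
    """Return the first step not yet completed, or None when the install is done."""
    for step in CHECKLIST:
        if step["id"] not in completed_ids:
            return step
    return None
```

Because the checklist is data rather than code, revising the process (tactic 6 below is easier to act on) means editing a list, not shipping a new app.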
 

6. Measure Everything

Since installation costs money, we want it to be efficient.

And the best way to make a process more efficient is to measure it, and then improve it.

Since we have an installation checklist app, measuring installation time is easy – just build it into the app.

Once we know how much time each step in the installation process needs, we are ready to revise the process and improve it. We should focus on the most time-consuming step first and measure the successive improvements to make sure we get the most bang for the buck.
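Built into the checklist app, the measurement can be as simple as timestamping each step and aggregating per step name. A sketch, with illustrative function names:

```python
import time

step_durations = {}  # step name -> list of durations in seconds

def run_timed(step_name, perform):
    """Run one checklist step and record how long it took."""
    start = time.monotonic()
    perform()
    step_durations.setdefault(step_name, []).append(time.monotonic() - start)

def slowest_step():
    """The step to optimize first: the one with the highest average duration."""
    return max(step_durations,
               key=lambda name: sum(step_durations[name]) / len(step_durations[name]))
```

Re-running `slowest_step()` after each process revision shows whether the change actually paid off.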

Conclusions

Every IoT device needs to be installed, and making the installation process efficient saves attention time for everyone involved – and ultimately money.

At Thingsquare, we have deployed thousands of devices together with our customers, and our customers have deployed many hundreds of thousands of devices with their customers.

We use our experience to solve hard problems in the IoT space, such as how to best install large IoT systems – get in touch with us to learn more!

Originally posted here.

Read more…

by Stephanie Overby

What's next for edge computing, and how should it shape your strategy? Experts weigh in on edge trends and talk workloads, cloud partnerships, security, and related issues


All year, industry analysts have been predicting that edge computing – and complementary 5G network offerings – will see significant growth, as major cloud vendors deploy more edge servers in local markets and telecom providers push ahead with 5G deployments.

The global pandemic has not significantly altered these predictions. In fact, according to IDC’s worldwide IT predictions for 2021, COVID-19’s impact on workforce and operational practices will be the dominant accelerator for 80 percent of edge-driven investments and business model change across most industries over the next few years.

First, what exactly do we mean by edge? Here’s how Rosa Guntrip, senior principal marketing manager, cloud platforms at Red Hat, defines it: “Edge computing refers to the concept of bringing computing services closer to service consumers or data sources. Fueled by emerging use cases like IoT, AR/VR, robotics, machine learning, and telco network functions that require service provisioning closer to users, edge computing helps solve the key challenges of bandwidth, latency, resiliency, and data sovereignty. It complements the hybrid computing model where centralized computing can be used for compute-intensive workloads while edge computing helps address the requirements of workloads that require processing in near real time.”

Moving data infrastructure, applications, and data resources to the edge can enable faster response to business needs, increased flexibility, greater business scaling, and more effective long-term resilience.

“Edge computing is more important than ever and is becoming a primary consideration for organizations defining new cloud-based products or services that exploit local processing, storage, and security capabilities at the edge of the network through the billions of smart objects known as edge devices,” says Craig Wright, managing director with business transformation and outsourcing advisory firm Pace Harmon.

“In 2021 this will be an increasing consideration as autonomous vehicles become more common, as new post-COVID-19 ways of working require more distributed compute and data processing power without incurring debilitating latency, and as 5G adoption stimulates a whole new generation of augmented reality, real-time application solutions, and gaming experiences on mobile devices,” Wright adds.

8 key edge computing trends in 2021


Noting the steady maturation of edge computing capabilities, Forrester analysts said, “It’s time to step up investment in edge computing,” in their recent Predictions 2020: Edge Computing report. As edge computing emerges as ever more important to business strategy and operations, here are eight trends IT leaders will want to keep an eye on in the year ahead.

1. Edge meets more AI/ML


Until recently, pre-processing of data via near-edge technologies or gateways had its share of challenges due to the increased complexity of data solutions, especially in use cases with a high volume of events or limited connectivity, explains David Williams, managing principal of advisory at digital business consultancy AHEAD. “Now, AI/ML-optimized hardware, container-packaged analytics applications, frameworks such as TensorFlow Lite and tinyML, and open standards such as the Open Neural Network Exchange (ONNX) are encouraging machine learning interoperability and making on-device machine learning and data analytics at the edge a reality.” 

Machine learning at the edge will enable faster decision-making. “Moreover, the amalgamation of edge and AI will further drive real-time personalization,” predicts Mukesh Ranjan, practice director with management consultancy and research firm Everest Group.

“But without proper thresholds in place, anomalies can slowly become standards,” notes Greg Jones, CTO of IoT solutions provider Kajeet. “Advanced policy controls will enable greater confidence in the actions made as a result of the data collected and interpreted from the edge.” 

 

2. Cloud and edge providers explore partnerships


IDC predicts that by 2024, a quarter of organizations will improve business agility by integrating edge data with applications built on cloud platforms. That will require partnerships across cloud and communications service providers, with some wireless carriers and major public cloud providers already pairing up.

According to IDC research, the systems that organizations can leverage to enable real-time analytics are already starting to expand beyond traditional data centers and deployment locations. Devices and computing platforms closer to end customers and/or co-located with real-world assets will become an increasingly critical component of this IT portfolio. This edge computing strategy will be part of a larger computing fabric that also includes public cloud services and on-premises locations.

In this scenario, edge provides immediacy and cloud supports big data computing.

 

3. Edge management takes center stage


“As edge computing becomes as ubiquitous as cloud computing, there will be increased demand for scalability and centralized management,” says Wright of Pace Harmon. IT leaders deploying applications at scale will need to invest in tools to “harness step change in their capabilities so that edge computing solutions and data can be custom-developed right from the processor level and deployed consistently and easily just like any other mainstream compute or storage platform,” Wright says.

The traditional approach to data center or cloud monitoring won’t work at the edge, notes Williams of AHEAD. “Because of the rather volatile nature of edge technologies, organizations should shift from monitoring the health of devices or the applications they run to instead monitor the digital experience of their users,” Williams says. “This user-centric approach to monitoring takes into consideration all of the components that can impact user or customer experience while avoiding the blind spots that often lie between infrastructure and the user.”

As Stu Miniman, director of market insights on the Red Hat cloud platforms team, recently noted, “If there is any remaining argument that hybrid or multi-cloud is a reality, the growth of edge solidifies this truth: When we think about where data and applications live, they will be in many places.”

“The discussion of edge is very different if you are talking to a telco company, one of the public cloud providers, or a typical enterprise,” Miniman adds. “When it comes to Kubernetes and the cloud-native ecosystem, there are many technology-driven solutions competing for mindshare and customer interest. While telecom giants are already extending their NFV solutions into the edge discussion, there are many options for enterprises. Edge becomes part of the overall distributed nature of hybrid environments, so users should work closely with their vendors to make sure the edge does not become an island of technology with a specialized skill set.“

 

4. IT and operational technology begin to converge


Resiliency is perhaps the business term of the year, thanks to a pandemic that revealed most organizations’ weaknesses in this area. IoT-enabled devices (and other connected equipment) drive the adoption of edge solutions where infrastructure and applications are being placed within operations facilities. This approach will be “critical for real-time inference using AI models and digital twins, which can detect changes in operating conditions and automate remediation,” IDC’s research says.

IDC predicts that the number of new operational processes deployed on edge infrastructure will grow from less than 20 percent today to more than 90 percent in 2024 as IT and operational technology converge. Organizations will begin to prioritize not just extracting insight from their new sources of data, but integrating that intelligence into processes and workflows using edge capabilities.

Mobile edge computing (MEC) will be a key enabler of supply chain resilience in 2021, according to Pace Harmon’s Wright. “Through MEC, the ecosystem of supply chain enablers has the ability to deploy artificial intelligence and machine learning to access near real-time insights into consumption data and predictive analytics as well as visibility into the most granular elements of highly complex demand and supply chains,” Wright says. “For organizations to compete and prosper, IT leaders will need to deliver MEC-based solutions that enable an end-to-end view across the supply chain available 24/7 – from the point of manufacture or service throughout its distribution.”

 

5. Edge eases connected ecosystem adoption


Edge not only enables and enhances the use of IoT, but it also makes it easier for organizations to participate in the connected ecosystem with minimized network latency and bandwidth issues, says Manali Bhaumik, lead analyst at technology research and advisory firm ISG. “Enterprises can leverage edge computing’s scalability to quickly expand to other profitable businesses without incurring huge infrastructure costs,” Bhaumik says. “Enterprises can now move into profitable and fast-streaming markets with the power of edge and easy data processing.”

 

6. COVID-19 drives innovation at the edge


“There’s nothing like a pandemic to take the hype out of technology effectiveness,” says Jason Mann, vice president of IoT at SAS. Take IoT technologies such as computer vision enabled by edge computing: “From social distancing to thermal imaging, safety device assurance and operational changes such as daily cleaning and sanitation activities, computer vision is an essential technology to accelerate solutions that turn raw IoT data (from video/cameras) into actionable insights,” Mann says. Retailers, for example, can use computer vision solutions to identify when people are violating the store’s social distance policy.

 

7. Private 5G adoption increases


“Use cases such as factory floor automation, augmented and virtual reality within field service management, and autonomous vehicles will drive the adoption of private 5G networks,” says Ranjan of Everest Group. Expect more maturity in this area in the year ahead, Ranjan says.

 

8. Edge improves data security


“Data efficiency is improved at the edge compared with the cloud, reducing internet and data costs,” says ISG’s Bhaumik. “The additional layer of security at the edge enhances the user experience.” Edge computing is also not dependent on a single point of application or storage, Bhaumik says. “Rather, it distributes processes across a vast range of devices.”

As organizations adopt DevSecOps and take a “design for security” approach, edge is becoming a major consideration for the CSO to enable secure cloud-based solutions, says Pace Harmon’s Wright. “This is particularly important where cloud architectures alone may not deliver enough resiliency or inherent security to assure the continuity of services required by autonomous solutions, by virtual or augmented reality experiences, or big data transaction processing,” Wright says. “However, IT leaders should be aware of the rate of change and relative lack of maturity of edge management and monitoring systems; consequently, an edge-based security component or solution for today will likely need to be revisited in 18 to 24 months’ time.”

Originally posted here.

Read more…

By Natallia Babrovich

My experience shows that most of the visits to doctors are likely to become virtual in the future. Let’s see how IoT solutions make the healthcare environment more convenient for patients and medical staff.

What are IoT and IoMT?

My colleague Alex Grizhnevich, IoT consultant at ScienceSoft, defines the Internet of Things as a network of physical devices with sensors, actuators, software, and network connectivity that enable the devices to gather and transmit data and fulfill users' tasks. Today, IoT is becoming a key component of the digital transformation of healthcare, which is why we can distinguish a separate group of initiatives: the so-called IoHT (Internet of Health Things) or IoMT (Internet of Medical Things).

Popular IoMT Use Cases

IoT-based patient care

Medication intake tracking

IoT-based medication tracking allows doctors to monitor the impact of a prescribed medication’s dosage on a patient’s condition. In their turn, patients can control medication intake, e.g., by using in-app reminders, and can note in the app how their symptoms change for their doctor’s further analysis. The patient app can be connected to smart devices (e.g., a smart pill bottle) for easier management of multiple medications.

Remote health monitoring

Among examples of employing IoT in healthcare, this use case is especially viable for chronic disease management. Patients can use connected medical devices or body-worn biosensors to allow doctors or nurses to check their vitals (blood pressure, glucose level, heart rate, etc.) via doctor/nurse-facing apps. Health professionals can monitor this data 24/7 and study app-generated reports to get insights into health trends. Patients who show signs of deteriorating health are scheduled for in-person visits.

IoT- and RFID-based medical asset monitoring

Medical inventory and equipment tracking

All medical tools and durable assets (beds, medical equipment) are equipped with RFID (radio frequency identification) tags. Fixed RFID readers (e.g., on the walls) collect the info about the location of assets. Medical staff can view it using a mobile or web application with a map.
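The core of such a system can be sketched as folding the stream of reader events into a "last seen" location per tag. The reader IDs and room names below are invented for illustration:

```python
# Fixed RFID readers and the rooms they cover.
READER_LOCATION = {
    "reader-01": "Ward A",
    "reader-02": "Operating room 2",
}

last_seen = {}  # RFID tag id -> room name

def on_rfid_read(tag_id, reader_id):
    """Update an asset's location whenever a fixed reader sees its tag."""
    last_seen[tag_id] = READER_LOCATION.get(reader_id, "unknown")

def locate(tag_id):
    """Where was this asset last seen?"""
    return last_seen.get(tag_id, "never seen")
```

The map view in the staff-facing app is then just a rendering of `last_seen` on top of a floor plan.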

Drug tracking

RFID-enabled drug tracking helps pharmacies and hospitals verify the authenticity of medication packages and timely spot medication shortages.

Smart hospital space

Cloud-connected ward sensors (e.g., a light switch, door and window contacts) and ambient sensors (e.g., hydrometers, noise detectors) allow patients to control their environment for a comfortable hospital stay.

Advantages of using IoT technology in healthcare

Patient-centric care

Medical IoT helps turn patients into active participants of the treatment process, thus improving care outcomes. Besides, IoMT helps increase patient satisfaction with care delivery, from communication with medical staff to physical comfort (smart lighting, climate control, etc.).

Reduced care-related costs

Non-critical patients can stay at home and use cloud-connected medical IoT devices, which gather, track and send health data to the medical facility. And with the help of telehealth technology, patients can schedule e-visits with nurses and doctors without traveling to the hospital.

Reduced readmissions

Patient apps connected to biosensors help ensure compliance with a discharge plan, enable prompt detection of health state deviations, and provide an opportunity to timely contact a health professional remotely.

Challenges of IoMT and how to address them

Potential health data security breaches

The connected nature of IoT brings about information security challenges for healthcare providers and patients.

Tip from ScienceSoft

We recommend implementing HIPAA-compliant IoMT solutions and conducting vulnerability assessments and penetration testing regularly to ensure the highest level of protection.

Integration difficulties

Every medical facility has its unique set of applications to be integrated with an IoMT solution (e.g., EHR, EMR). Some of these applications may be heavily customized or outdated.

Tip from ScienceSoft

Develop the integration strategy from the start of your IoMT project, including the scope and nature of custom integrations.

Enhance care delivery with IoMT

According to my estimates, the use of IoT technology in healthcare will continue to rise during the next decade, driven by the impact of the COVID situation and the growing demand for remote care. If you need help with creating and implementing a fitting IoMT solution, you’re welcome to turn to ScienceSoft’s healthcare IT team.

Originally posted here.

Read more…

OMG! Three 32-bit processor cores each running at 300 MHz, each with its own floating-point unit (FPU), and each with more memory than you can throw a stick at!

In a recent column on Recreating Retro-Futuristic 21-Segment Victorian Displays, I noted that I’ve habitually got a number of hobby projects on the go. I also joked that, “one day, I hope to actually finish one or more of the little rascals.” Unfortunately, I’m laughing through my tears because some of my projects do appear to be never-ending.

For example, shortly after the internet first impinged on the popular consciousness with the launch of the Mosaic web browser in 1993, a number of humorous memes started to bounce around. It was one of these that sparked my Pedagogical and Phantasmagorical Inamorata Prognostication Engine project, which has been a “work in progress” for more than 20 years as I pen these words.

Feast your orbs on the Prognostication Engine (Image source: Max Maxfield)

As you can see in the image to the right, the Prognostication Engine has grown in the telling. The main body of the engine is housed in a wooden radio cabinet from 1929. My chum, Carpenter Bob, created the smaller section on the top, with great attention to detail like the matching hand-carved rosettes.

The purported purpose of this bodacious beauty is to forecast the disposition of my wife (Gina the Gorgeous) when I’m poised to leave the office and head for home at the end of the day (hence the “Prognostication” portion of the engine’s moniker). Paradoxically, should Gina ever discover the true nature of the engine, I won’t actually need it to predict her mood of the moment.

As we see, the control panels are festooned with antique knobs, toggle switches, pushbuttons, and analog meters. The knobs are mounted on motorized potentiometers, so if an unauthorized user attempts to modify their settings, they will automatically return to their original values under program control. The switches and pushbuttons are each accompanied by two LEDs, while each knob is equipped with a ring of 16 LEDs, resulting in 116 LEDs in all. Then there are 64 LEDs in the furnace and 140 LEDs in the rings surrounding the bases of the large vacuum tubes mounted on the top of the beast.

I was just reflecting on how much technology has changed over the past couple of decades. For example, today’s “smart LEDs” like WS2812Bs (a.k.a. NeoPixels) can be daisy-chained together, allowing multiple devices to be controlled using a single pin on the microcontroller. It probably goes without saying (but I’ll say it anyway) that all of the LEDs in the current incarnation of the engine are tricolor devices in the form of NeoPixels, but this was not always the case.

An early prototype of a shift register capable of driving only 13 tricolor LEDs (Image source: Max Maxfield)

The tricolor LEDs I was planning on using 20 years ago each required three pins to be controlled. The solution at that time would have been to implement a huge external shift register. The image to the left shows an early shift register prototype sufficient to drive only 13 “old school” devices.

And, of course, developments in computing have been even more staggering. When I commenced this project, I was using a PIC microcontroller that I programmed in BASIC. After the Arduino first hit the scene circa 2005, I migrated to using Arduino Unos, followed by Arduino Megas, that I programmed in C/C++.

One of the reasons I like the Arduino Mega is its high pin count: it boasts 54 digital input/output (I/O) pins (15 of which can be used as pulse-width modulated (PWM) outputs), 16 analog inputs, and 4 UARTs. On the other hand, the Mega is only an 8-bit machine running at 16 MHz, it offers only 256 KB of Flash (program) memory and 8 KB of SRAM, and it doesn’t have hardware support for floating-point operations.

The thing is that the Prognostication Engine has a lot of things going on. In addition to reading the states of all the switches and pushbuttons and potentiometers, it has to control the motors behind the knobs and drive the analog meters. Currently, the LEDs are being driven with simple test patterns, but these are going to be upgraded to support much more sophisticated animation and fading effects. The engine is also constantly performing calculations of an astronomical and astrological nature, determining things like the dates of forthcoming full moons and blue moons.

In the fullness of time, the engine is going to be connected to the internet so it can monitor things like the weather. It’s also going to have its own environmental sensors (temperature, humidity, barometric pressure) and proximity detection sensors. Furthermore, the engine will also boast a suite of sound effects such that flicking a simple switch, for example, may result in myriad sounds of mechanical mechanisms performing their magic. At some stage, I’m even hoping to add things like artificial intelligence (AI) and facial recognition.

The current state of computational play (Image source: Max Maxfield)

Sad to relate, my existing computing solution is not capable of handling all the tasks I wish the engine to perform. The image to the right shows the current state of computational play. As we see, there is one Arduino Mega in the lower cabinet controlling the 116 LEDs on the front panel. Meanwhile, there are two Megas in the upper cabinet, with one controlling the LEDs in the furnace and the other controlling the LEDs associated with the large vacuum tubes.

Up until a couple of years ago, I was vaguely planning on adding more and more Megas. I was also cogitating and ruminating as to how I was going to get these little rascals to talk to each other so that everyone knew (a) what we were trying to do and (b) what everyone else was actually doing.

Unfortunately, the whole computational architecture was becoming unwieldy, so I started to look for another solution. You can only imagine my surprise and delight when I was first introduced to the original ShieldBuddy TC275 from the folks at Hitex (see Everybody Needs a ShieldBuddy). This little beauty, which has an Arduino Mega footprint, features the Aurix TC275 processor from Infineon. The TC275 boasts three 32-bit cores, all running at 200 MHz, each with its own floating-point unit (FPU), and all sharing 4 Mbytes of Flash and 500 Kbytes of RAM (this is a bit of a simplification, but it will suffice for now).

Processors like the Aurix are typically to be found only in state-of-the-art embedded systems and they rarely make it into the maker world. To be honest, when I first saw the ShieldBuddy TC275, I thought to myself, “Life can’t get any better than this!” Well, I was wrong, because the guys and gals at Hitex have just announced the ShieldBuddy TC375, which features an Aurix TC375 processor!

O.M.G! I just took delivery of one of these bodacious beauties, and I’m so excited that I was moved to make this video.

I don’t know where to start. As before, we have three 32-bit cores, each with its own FPU. This time, however, the cores run at 300 MHz. Although each core runs independently, they can communicate and coordinate between themselves using techniques like shared memory and software interrupts. With regard to memory, the easiest way to summarize this is as follows: The TC375 processor has:

  • 6 MB Flash ROM
  • 384 KB data Flash

And each of the three cores has:

  • 240 KB Data Scratch-Pad RAM (DSPR)
  • 64 KB Program Scratch-Pad RAM (PSPR)
  • 32 KB Instruction Cache (ICACHE)
  • 16 KB Data Cache (DCACHE)
  • 64 KB DLMU RAM

Actually, there’s a lot more to this than meets the eye. For example, the main SRAMs (the DSPRs) associated with each of the cores appear at two locations in the memory map. In the case of Core 0, for example, the first location in its DSPR is located at address 0xD0000000 where it is considered to be local (i.e., it appears directly on Core 0’s local internal bus) and can be accessed quickly. However, this DSPR is also visible to Cores 1 and 2 at 0x70000000 via the main on-chip system bus, which allows them to read and write to this memory freely, but at a lower speed than Core 0. Similarly, Cores 1 and 2 access their own memories locally and each other’s memories globally.
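The dual mapping can be summarized in a few lines of address arithmetic. The Core 0 addresses below are from the text; the global bases for Cores 1 and 2 follow the usual Aurix TC3xx pattern and are my assumption – check them against the TC375 user manual before relying on them:

```python
LOCAL_DSPR_BASE = 0xD0000000   # each core sees its own DSPR here (fast, local bus)
GLOBAL_DSPR_BASE = {           # where the DSPRs appear on the main system bus
    0: 0x70000000,             # from the text
    1: 0x60000000,             # assumed from the TC3xx pattern
    2: 0x50000000,             # assumed from the TC3xx pattern
}

def global_address(core, local_addr):
    """Translate a core-local DSPR address into its slower system-bus alias,
    i.e., the address the *other* cores would use to reach the same byte."""
    offset = local_addr - LOCAL_DSPR_BASE
    return GLOBAL_DSPR_BASE[core] + offset
```

So a variable that Core 0 reaches quickly at 0xD0000100 is the very same memory that Cores 1 and 2 can read and write (more slowly) at 0x70000100.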

Meet the ShieldBuddy TC375 (Image source: Hitex)

As for the original ShieldBuddy TC275, if you are a professional programmer, you’ll be delighted to hear that the main ShieldBuddy TC375 toolchain is the Eclipse-based “FreeEntryToolchain” from HighTec/PLS/Infineon. This is a full-on C/C++ development environment with source-level debugger and suchlike.

By comparison, if you are a novice programmer like your humble narrator, you’ll be overjoyed to hear that the ShieldBuddy TC375 can be programmed via the Arduino’s integrated development environment (IDE). As far as I’m concerned, this programming model is where things start to get very clever indeed.

An Arduino sketch (program) always contains two functions: setup(), which runs only one time, and loop(), which runs over and over again (the system automatically inserts a main() function while your back is turned). If you take an existing sketch and compile it for the ShieldBuddy, then it will run on Core 0 by default. You can achieve the same effect by renaming your setup() and loop() functions to be setup0() and loop0(), respectively.

Similarly, you can create setup1() and loop1() functions, which will automatically be compiled to run on Core 1, and you can create setup2() and loop2() functions, which will automatically be compiled to run on Core 2. Any of your “homegrown” functions will be compiled in such a way as to run on whichever of the cores need to use them. I know that, like Pooh, I’m a bear of little brain, but even I can wrap my poor old noggin around this usage model.

There’s much, much more to this incredible board than I can cover here, but if you are interested in learning more, then may I recommend that you visit this portion of the Hitex site where you will find all sorts of goodies, including the ShieldBuddy Forum and the ShieldBuddy TC375 User Manual.

Now, if you will forgive me, I must away because I have to go and gloat over “my precious” (my ShieldBuddy TC375) and commence preparations to upgrade the Prognostication Engine by removing all of its existing processors and replacing them with a single ShieldBuddy TC375.

Actually, I just had a parting thought, which is that the Prognostication Engine’s role in life is to help me predict the future but — when I started out on this project — I would never have predicted that technology would develop so fast that I would one day have a triple-core 300 MHz processor driving “the beast.” How about you? What are your thoughts on all of this?

Originally posted here.

Read more…

IoT Sustainability, Data At The Edge.

Recently I've written quite a bit about IoT, and one thing you may have picked up on is that the Internet of Things is made up of a lot of very large numbers.

For starters, the number of connected things is measured in the tens of billions, approaching hundreds of billions. Then, behind that very large number is an even bigger one: the amount of data these billions of devices are predicted to generate.

As FutureIoT pointed out, IDC forecasts that IoT devices will generate in excess of 79.4 zettabytes (ZB) of data by 2025.

How Much Is A Zettabyte?

A zettabyte is a very large number indeed, but how big? How can you get your head around it? Does this help...?

A zettabyte is 1,000,000,000,000,000,000,000 bytes. Hmm, that's still not very easy to visualise.

So let's think of it in terms of London busses. Let's imagine a byte is represented as a person on a bus. A London bus can take 80 people, so you'd need 993 quintillion busses to accommodate 79.4 zettahumans.

I tried to calculate how long a line of 993 quintillion busses would be. Relating it to the distance to the moon, Mars or the Sun wasn't doing it justice; the only comparable scale is the size of the Milky Way. Even then, our 79.4 zettahumans lined up in London busses would stretch across the entire Milky Way ... and a fair bit further!
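For the sceptical, the arithmetic checks out (the 11-metre bus length is my assumption, roughly that of a London double-decker):

```python
# 79.4 ZB of data, one byte per passenger, 80 passengers per London bus.
ZETTABYTE = 10**21
passengers = 79.4 * ZETTABYTE      # bytes, i.e. "zettahumans"
busses = passengers / 80           # ~9.93e20, i.e. ~993 quintillion

BUS_LENGTH_M = 11                  # assumed length of a double-decker
MILKY_WAY_M = 9.5e20               # ~100,000 light years, in metres

line_length_m = busses * BUS_LENGTH_M
print(busses / 10**18)             # ~992.5 quintillion busses
print(line_length_m / MILKY_WAY_M) # ~11x the galaxy's diameter
```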

Sustainability Of Cloud Storage For 993 Quintillion Busses Of Data

Everything we do has an impact on the planet. Just by reading this article, you're generating 0.2 grams of Carbon Dioxide (CO2) emissions per second ... so I'll try to keep this short.

Data from Stanford Magazine suggests that every 100 gigabytes of data stored in the Cloud generates 0.2 tons of CO2 per year. Storing 79.4 zettabytes of data in the Cloud could therefore be responsible for the production of 158.8 billion tons of greenhouse gases.

To put that number into context: according to USA Today, the total emissions for China, the USA, India, Russia, Japan and Germany accounted for a little over 21 billion tons in 2019.

So if we just let all the IoT devices stream their data to the Cloud, those billions of little gadgets would indirectly generate more than seven times the emissions of the six most industrialized countries combined.
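The arithmetic behind those two figures, for anyone who wants to check it:

```python
# Stanford Magazine figure: 100 GB stored in the Cloud -> 0.2 tons CO2/year.
GB = 10**9
ZB = 10**21

stored_gb = 79.4 * ZB / GB            # 7.94e13 GB of IoT data
tons_co2 = stored_gb / 100 * 0.2      # ~158.8 billion tons per year

TOP_SIX_TONS = 21e9                   # China, USA, India, Russia, Japan,
                                      # Germany combined, 2019 (USA Today)
print(tons_co2 / 10**9)               # ~158.8 (billion tons)
print(tons_co2 / TOP_SIX_TONS)        # ~7.6x the six biggest emitters
```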

Save The Planet, Store Data At The Edge

As mentioned in a previous article, not all data generated by IoT devices needs to be stored in the Cloud.

Speaking with ObjectBox, an expert in data storage, they say their users cut their Cloud data storage by 60% on average. So how does that work, then?

First, what does The Edge mean?

The term "Edge" refers to the edge of the network: in other words, the last piece of equipment or thing connected to the network, closest to the point of use.

Let me illustrate with a rather over-simplified diagram.

[Diagram: an over-simplified view of the network, with the Cloud at one end and Edge devices, closest to the point of use, at the other]

How Can Edge Data Storage Improve Sustainability?

In an article about computer vision and AI on the edge, I talked about how vast amounts of network data could be saved if the cameras themselves could detect important events and send just those events over the network, not the entire video stream.

In that example, only the key events and metadata, like the identification marks of a vehicle crossing a stop light, needed to be transmitted across the network. However, it is important to keep the raw content at the edge, so it can be used for post-processing, for further training of the AI, or even to be retrieved at a later date, e.g. by law enforcement.

Another example could be sensors used to detect gas leaks, seismic activity, fires or broken glass. These sensors capture volumes of data every second, but they only need to alert someone when something happens: detection of abnormal gas levels, a tremor, a fire or a smashed window.

Those alerts are the primary purpose of those devices, but the data between those events can also hold significant value. In this instance, keeping it locally at the edge, but having it available as and when needed, is an ideal way to reduce network traffic, reduce Cloud storage and save the planet (well, at least a little bit).

Accessible Data At The Edge

Keeping your data at the edge is a great way to save costs and increase performance, but you still want to be able to get access to it, when you need it.

ObjectBox have created not just one of the most efficient ways to store data at the edge; they've also built a sophisticated and powerful method to synchronise data between edge devices and the Cloud, and between edge devices themselves.

Synchronise Data At The Edge - Fog Computing.

Fog Computing (which is computing that happens between the Cloud and the Edge) requires data to be exchanged with devices connected to the edge, but without going all the way to/from the servers in the Cloud. 

In the article on making smarter, safer cities, I talked about how by having AI-equipped cameras share data between themselves they could become smarter, more efficient. 

A solution like that could be using ObjectBox's synchronisation capabilities to efficiently discover and collect relevant video footage from various cameras to help either identify objects or even train the artificial intelligence algorithms running on the AI-equipped cameras at the edge.

Storing Data At The Edge Can Save A Bus Load Of CO2

Edge computing has a lot of benefits to offer; in this article I've just looked at one that is often overlooked: the cost of transferring data. I've also not really delved into the broader benefits of ObjectBox's technology. For example, judging from their open-source benchmarks, ObjectBox seems to offer a tenfold performance advantage over other solutions out there, and is being used by more than 300,000 developers.

The team behind ObjectBox also built technologies currently used by internet heavyweights like Twitter, Viber and Snapchat, so they seem to be doing something right, and if they can really cut down network traffic by 60%, they could be one of the sustainable technology companies to watch.

Originally posted here.

Read more…

Edge Impulse has joined 1% for the Planet, pledging to donate 1% of our revenue to support nonprofit organizations focused on the environment. To complement this effort, we launched the ElephantEdge competition, aiming to create the world's best elephant tracking device to protect elephant populations that would otherwise be impacted by poaching. In a similar vein, this blog will detail how Lacuna Space, Edge Impulse, a microcontroller and LoRaWAN can promote the conservation of endangered species by monitoring bird calls in remote areas.

Over the past years, The Things Network has worked on the democratization of the Internet of Things, building a global, crowdsourced LoRaWAN network carried by the thousands of users operating their own gateways worldwide. Thanks to Lacuna Space's satellite constellation, the network coverage goes one step further. Lacuna Space uses LEO (Low-Earth Orbit) satellites to provide LoRaWAN coverage at any point around the globe. Messages received by the satellites are then routed to ground stations and forwarded to LoRaWAN service providers such as TTN. This technology can benefit several industries and applications: tracking a vessel not only in harbors but across the oceans, or monitoring endangered species in remote areas. All that with only 25 mW of power (the ISM band limit) to send a message to the satellite. This is truly amazing!

Most of these devices are typically simple, just sending a single temperature value, or other sensor reading, to the satellite - but with machine learning we can track much more: what devices hear, see, or feel. In this blog post we'll take you through the process of deploying a bird sound classification project using an Arduino Nano 33 BLE Sense board and a Lacuna Space LS200 development kit. The inferencing results are then sent to a TTN application.

Note: Access to the Lacuna Space program and dev kit is limited to a closed group at the moment. Get in touch with Lacuna Space for hardware and software access. The technical details to configure your Arduino sketch and TTN application are available in our GitHub repository.

 

Our bird sound model classifies the house sparrow and rose-ringed parakeet species with 92% accuracy. You can clone our public project or make your own classification model following our different tutorials, such as Recognize sounds from audio or Continuous Motion Recognition.

Once you have trained your model, head to the Deployment section, select the Arduino library and Build it.

Import the library within the Arduino IDE, and open the microphone continuous example sketch. We made a few modifications to this example sketch to interact with the LS200 dev kit: we added a new UART link and we transmit classification results only if the prediction score is above 0.8.
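In Python pseudostyle (the real modifications live in the Arduino C++ sketch in our GitHub repository, and the function name here is made up), the gating logic amounts to:

```python
# Sketch of the thresholding added to the example: only send a
# classification over the UART link to the LS200 if its score clears 0.8.
THRESHOLD = 0.8

def payload_to_send(scores, threshold=THRESHOLD):
    """scores: dict of label -> prediction score from the classifier."""
    best_label = max(scores, key=scores.get)
    if scores[best_label] < threshold:
        return None  # below threshold: stay quiet and save airtime
    return {best_label: round(scores[best_label], 5)}

scores = {"housesparrow": 0.91406, "redringedparakeet": 0.05078, "noise": 0.03125}
print(payload_to_send(scores))  # {'housesparrow': 0.91406}
```

Transmitting only high-confidence results keeps the scarce satellite uplink free of noise classifications.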

Connect with the Lacuna Space dashboard by following the instructions in our application's GitHub ReadMe. Using a web tracker, you can determine the next good time a Lacuna Space satellite will be flying over your location; you can then receive the signal through your The Things Network application and view the inferencing results of the bird call classification:

    {
        "housesparrow": "0.91406",
        "redringedparakeet": "0.05078",
        "noise": "0.03125",
        "satellite": true
    }

No Lacuna Space development kit yet? No problem! You can already start building and verifying your ML models on the Arduino Nano 33 BLE Sense or one of our other development kits, test them out with your local LoRaWAN network (by pairing the board with a LoRa radio or LoRa module), and switch over to the Lacuna satellites when you get your kit.

Originally posted on the Edge Impulse blog by Aurelien Lequertier - Lead User Success Engineer at Edge Impulse, Jenny Plunkett - User Success Engineer at Edge Impulse, & Raul James - Embedded Software Engineer at Edge Impulse

Read more…

The possibilities of what you can do with digital twin technology are only as limited as your imagination

Today, forward-thinking companies across industries are implementing digital twin technology in increasingly fascinating and ground-breaking ways. With Internet of Things (IoT) technology improving every day and more and more compute power readily available to organizations of all sizes, the possibilities of what you can do with digital twin technology are only as limited as your imagination.

What Is a Digital Twin?

A digital twin is a virtual representation of a physical asset that is practically indistinguishable from its physical counterpart. It is made possible thanks to IoT sensors that gather data from the physical world and send it to be virtually reconstructed. This data includes design and engineering details that describe the asset’s geometry, materials, components, and behavior or performance.

When combined with analytics, digital twin data can unlock hidden value for an organization and provide insights about how to improve operations, increase efficiency or discover and resolve problems before the real-world asset is affected.

These 4 Steps Are Critical for Digital Twin Success:

Involve the Entire Product Value Chain

It’s critical to involve stakeholders across the product value chain in your design and implementation. Each department faces diverse business challenges in their day-to-day operations, and a digital twin provides ready solutions to problems such as the inability to coordinate across end-to-end supply chain processes, minimal or no cross-functional collaboration, the inability to make data-driven decisions, or clouded visibility across the supply chain. Decision-makers at each level of the value chain have extensive knowledge on critical and practical challenges. Including their inputs will ensure a better and more efficient design of the digital twin and ensure more valuable and relevant insights.

Establish Well-Documented Practices

Standardized and well-documented design practices help organizations communicate ideas across departments, or across the globe, and make it easier for multiple users of the digital twin to build or alter the model without destroying existing components or repeating work. Best-in-class modelling practices increase transparency while simplifying and streamlining collaborative work.

Include Data From Multiple Sources

Data from multiple sources—both internal and external—is an essential part of creating realistic and helpful simulations. 3D modeling and geometry are sufficient to show how parts fit together and how a product works, but more input is required to model how various faults or errors might occur over the product's lifecycle. Because many errors and problems can be nearly impossible for humans alone to predict accurately, a digital twin needs a vast amount of data and a robust analytics program to run the algorithms that make accurate forecasts and prevent downtime.

Ensure Long Access Lifecycles 

Digital twins implemented using proprietary design software have a risk of locking owners into a single vendor, which ties the long-term viability of the digital twin to the longevity of the supplier’s product. This risk is especially significant for assets with long lifecycles such as buildings, industrial machinery, airplanes, etc., since the lifecycles of these assets are usually much longer than software lifecycles. This proprietary dependency only becomes riskier and less sustainable over time. To overcome these risks, IT architects and digital twin owners need to carefully set terms with software vendors to ensure data compatibility is maintained and vendor lock-in can be avoided.

Common Pitfalls to Digital Twin Implementation

Digital twin implementation requires an extraordinary investment of time, capital, and engineering might, and as with any project of this scale, there are several common pitfalls to implementation success.

Pitfall 1: Using the Same Platform for Different Applications

Although it’s tempting to try and repurpose a digital twin platform, doing so can lead to incorrect data at best and catastrophic mistakes at worst. Each digital twin is completely unique to a part or machine; therefore, assets with unique operating conditions and configurations cannot share digital twin platforms.

Pitfall 2: Going Too Big, Too Fast

In the long run, a digital twin replica of your entire production line or building is possible and could provide incredible insights, but it is a mistake to try and deploy digital twins for all of your pieces of equipment or programs all at once. Not only is doing too much, too fast costly, but it might cause you to rush and miss critical data and configurations along the way. Rather than rushing to do it all at once, perfect a few critical pieces of machinery first and work your way up from there.

Pitfall 3: Inability to Source Quality Data

Data collected in the field is subject to quality errors due to human mistakes or duplicate entries. The insights your digital twin provides are only as valuable as the data it runs on. It is therefore imperative to standardize data collection practices across your organization and to regularly cleanse your data to remove duplicate and erroneous entries.

Pitfall 4: Lack of Device Communication Standards

If your IoT devices do not speak a common language, miscommunications can muddy your processes and compromise your digital twin initiative. Build an IT framework that allows your IoT devices to communicate with one another seamlessly to ensure success.

Pitfall 5: Failing to Get User Buy-In

As mentioned earlier, a successful digital twin strategy includes users from across your product value chain. It is critical that your users understand and appreciate the value your digital twin brings to them individually and to your organization as a whole. Lack of buy-in due to skepticism, lack of confidence, or resistance can lead to a lack of user participation, which can undermine all of your efforts.

The Challenge of Measuring Digital Twin Success

Each digital twin is unique and completely separate in its function and end-goal from others on the market, which can make measuring success challenging. Depending on the level of the twin implemented, businesses need to create KPIs for each individual digital twin as it relates to larger organizational goals.

The configuration of digital twins is determined by the type of input data, number of data sources and the defined metrics. The configuration determines the value an organization can extract from the digital twin. Therefore, a twin with a higher configuration can yield better predictions than can a twin with a lower configuration. The reality is that success can be relative, and it is impossible to compare the effectiveness of two different digital twins side by side.

Conclusion

It’s possible — probable even — that in the future all people, enterprises, and even cities will have a digital twin. With the enormous growth predicted in the digital twin market in the coming years, it’s evident that the technology is here to stay. The possible applications of digital twins are truly limitless, and as IoT technology becomes more advanced and widely accessible, we’re likely to see many more innovative and disruptive use cases.

However, a technology with this much potential must be carefully and thoughtfully implemented in order to ensure its business value and long-term viability. Before embracing a digital twin, an organization must first audit its maturity, standardize processes, and prepare its culture and staff for this radical change in operations. Is your organization ready?

Originally posted here.

Read more…

Skoltech researchers and their colleagues from Russia and Germany have designed an on-chip printed "electronic nose" that serves as a proof of concept for low-cost, sensitive devices of this kind to be used in portable electronics and healthcare. The paper was published in the journal ACS Applied Materials & Interfaces.

The rapidly growing fields of Internet of Things (IoT) and advanced medical diagnostics require small, cost-effective, low-powered yet reasonably sensitive and selective gas-analytical systems like so-called "electronic noses." These systems can be used for noninvasive diagnostics of human breath, such as diagnosing chronic obstructive pulmonary disease (COPD) with a compact sensor system also designed at Skoltech. Some of these sensors work a lot like actual noses—say, yours—by using an array of sensors to better detect the complex signal of a gaseous compound.

One approach to creating these sensors is by additive manufacturing technologies, which have achieved enough power and precision to be able to produce the most intricate devices. Skoltech senior research scientist Fedor Fedorov, Professor Albert Nasibulin, research scientist Dmitry Rupasov and their collaborators created a multisensor "electronic nose" by printing nanocrystalline films of eight different metal oxides onto a multielectrode chip (they were manganese, cerium, zirconium, zinc, chromium, cobalt, tin, and titanium). The Skoltech team came up with the idea for this project.

"For this work, we used microplotter printing and true solution inks. There are a few things that make it valuable. First, the resolution of the printing is close to the distance between electrodes on the chip which is optimized for more convenient measurements. We show these technologies are compatible. Second, we managed to use several different oxides which enables more orthogonal signal from the chip resulting in improved selectivity. We can also speculate that this technology is reproducible and easy to be implemented in industry to obtain chips with similar characteristics, and that is really important for the 'e-nose' industry," Fedorov explained.

In subsequent experiments, the device was able to sniff out the difference between different alcohol vapors (methanol, ethanol, isopropanol, and n-butanol), which are chemically very similar and hard to tell apart, at low concentrations in the air. Since methanol is extremely toxic, detecting it in beverages and differentiating between methanol and ethanol can even save lives. To process the data, the team used linear discriminant analysis (LDA), a pattern recognition algorithm, but other machine learning algorithms could also be used for this task.

So far the device operates at rather high temperatures of 200-400 degrees Celsius, but the researchers believe that new quasi-2-D materials such as MXenes, graphene and so on could be used to increase the sensitivity of the array and ultimately allow it to operate at room temperature. The team will continue working in this direction, optimizing the materials used to lower power consumption.

Originally posted here.

Read more…

The benefits of IoT data are widely touted. Enhanced operational visibility, reduced costs, improved efficiencies and increased productivity have driven organizations to take major strides towards digital transformation. With countless promising business opportunities, it’s no surprise that IoT is expanding rapidly and relentlessly. It is estimated that there will be 75.4 billion IoT devices by 2025. As IoT grows, so do the volumes of IoT data that need to be collected, analyzed and stored. Unfortunately, significant barriers exist that can limit or block access to this data altogether.

Successful IoT data acquisition starts and ends with reliable and scalable IoT connectivity. Selecting the right communications technology is paramount to the long-term success of your IoT project and various factors must be considered from the beginning to build a functional wireless infrastructure that can support and manage the influx of IoT data today and in the future.

Here are five IoT architecture must-haves for unlocking IoT data at scale.

1. Network Ownership

For many businesses, IoT data is one of their greatest assets, if not the most valuable. This intensifies the demand to protect the flow of data at all costs. Offering maximum data authority and architecture control, privately managed networks are seeing growing adoption across industrial verticals.

Beyond the undeniable benefits of data security and privacy, private networks give users more control over their deployment, with the flexibility to tailor coverage to the specific needs of their campus-style network. On a public network, users risk not having the reliable connectivity needed for indoor, underground and remote critical IoT applications. And since a private network is owned and operated by its users, they also avoid the monthly access fees, data plans and subscription costs imposed by public operators, lowering the overall total cost of ownership. Private networks also provide full control over network availability and uptime, ensuring users have reliable access to their data at all times.

2. Minimal Infrastructure Requirements

Since the number of end devices is often fixed by your IoT use cases, choosing a wireless technology that requires minimal supporting infrastructure, such as base stations and repeaters, as well as minimal configuration and optimization, is crucial to cost-effectively scaling your IoT network.

Wireless solutions with long range and excellent penetration capability, such as next-gen low-power wide area networks, require fewer base stations to cover vast, structurally dense industrial or commercial campuses. Likewise, a robust radio link and large network capacity allow an individual base station to effectively support massive numbers of sensors without compromising performance, ensuring a continuous flow of IoT data today and in the future.

3. Network and Device Management

As IoT initiatives move beyond proofs-of-concept, businesses need an effective and secure approach to operate, control and expand their IoT network with minimal costs and complexity.

As IoT deployments scale to hundreds or even thousands of geographically dispersed nodes, a manual approach to connecting, configuring and troubleshooting devices is inefficient and expensive. Likewise, by leaving devices completely unattended, users risk losing business-critical IoT data when it’s needed the most. A network and device management platform provides a single-pane, top-down view of all network traffic, registered nodes and their status for streamlined network monitoring and troubleshooting. Likewise, it acts as the bridge between the edge network and users’ downstream data servers and enterprise applications so users can streamline management of their entire IoT project from device to dashboard.

4. Legacy System Integration

Most traditional assets, machines, and facilities were not designed for IoT connectivity, creating huge data silos. This leaves companies with two choices: building entirely new, greenfield plants with native IoT technologies or updating brownfield facilities for IoT connectivity. Highly integrable, plug-and-play IoT connectivity is key to streamlining the costs and complexity of an IoT deployment. Businesses need a solution that can bridge the gap between legacy OT and IT systems to unlock new layers of data that were previously inaccessible. Wireless IoT connectivity must be able to easily retrofit existing assets and equipment without complex hardware modifications and production downtime. Likewise, it must enable straightforward data transfer to the existing IT infrastructure and business applications for data management, visualization and machine learning.

5. Interoperability

Each IoT system is a mashup of diverse components and technologies. This makes interoperability a prerequisite for IoT scalability, to avoid being saddled with an obsolete system that fails to keep pace with new innovation later on. By designing an interoperable architecture from the beginning, you can avoid fragmentation and reduce the integration costs of your IoT project in the long run. 

Today, technology standards exist to foster horizontal interoperability by fueling global cross-vendor support through robust, transparent and consistent technology specifications. For example, a standard-based wireless protocol allows you to benefit from a growing portfolio of off-the-shelf hardware across industry domains. When it comes to vertical interoperability, versatile APIs and open messaging protocols act as the glue to connect the edge network with a multitude of value-deriving backend applications. Leveraging these open interfaces, you can also scale your deployment across locations and seamlessly aggregate IoT data across premises.  

IoT data is the lifeblood of business intelligence and competitive differentiation and IoT connectivity is the crux to ensuring reliable and secure access to this data. When it comes to building a future-proof wireless architecture, it’s important to consider not only existing requirements, but also those that might pop up down the road. A wireless solution that offers data ownership, minimal infrastructure requirements, built-in network management and integration and interoperability will not only ensure access to IoT data today, but provide cost-effective support for the influx of data and devices in the future.

Originally posted here.

Read more…

by Ariane Elena Fuchs

Solar power, wind energy, micro cogeneration power plants: energy from renewable sources has become indispensable, but it makes power generation and distribution far more complex. How the Internet of Things is helping make energy management sustainable.

It feels like Groundhog Day yet again – in 2020 it happened on August 22. That was the point when the demand for raw materials exceeded the Earth's supply and its capacity to regenerate these natural resources. All reserves consumed from that date on cannot be regenerated in the current year. In other words, humanity is living beyond its means, consuming around 50 percent more energy than the Earth provides naturally.

To conserve these precious resources and reduce climate-damaging CO2 emissions, the energy we need must come from renewable sources such as wind, sun and water. This is the only way to reduce both greenhouse gases and our fossil fuel use. Fortunately, a start has been made: in 2019, renewable energies – predominantly from wind and sun – already covered almost 43 percent of Germany's energy requirements, and the trend is rising.

DECENTRALIZING ENERGY PRODUCTION

This also means, however, that the traditional energy management model – a few power plants supplying many consumers – is outdated. After all, the phasing out of large nuclear and coal-fired power plants doesn't just have consequences for Germany's CO2 balance. Shifting electricity production to wind, solar and smaller cogeneration plants reverses the previous pattern of energy generation and distribution, from a highly centralized to an increasingly decentralized organizational structure. Instead of just a few large power plants sending electricity to the grid, there are now many smaller energy sources such as solar panels and wind turbines, which makes managing it all – including the optimal distribution of the electricity – far more complex.

It's up to the energy sector to wrangle this challenging transformation. As the country's energy becomes more sustainable, it also becomes harder to organize, since energy generated from wind and sun cannot be planned in advance as easily as coal and nuclear power can. What's more, there are thousands of wind turbines and solar panels making electricity and feeding it into the grid, which makes managing the power network extremely difficult. In particular, there's a lack of real-time information about the amount of electricity being generated.

KEY TECHNOLOGY IOT: FROM ENERGY FLOW TO DATA STREAM

This is where the Internet of Things comes into play: IoT can supply exactly this data from every power generator and send it to a central location. Once there, it can be evaluated and, ideally, used to control the power grid automatically. The result is an IoT ecosystem. In order to operate wind farms more efficiently and reliably, a project team is currently developing an IoT-supported system that bundles and processes all relevant parameters and readings at a wind farm, from which they can reconstruct the current operating and maintenance status of individual turbines. This information can be used to detect whether certain components are about to wear out, and to replace them before a turbine fails.

POTENTIAL FOR NEW BUSINESS MODELS

According to a recent Gartner study, the Internet of Things (IoT) is becoming a key technology for monitoring and orchestrating the complex energy and water ecosystem. In addition, consumers want more control over energy prices and more environmentally friendly power products. With the introduction of smart metering, data from so-called prosumers is becoming increasingly important. These energy-producing consumers act as operators of the photovoltaic systems on their roofs. IoT sensors are used to collect the necessary power generation information. Although these sensors are only used locally and for specific purposes, they provide energy companies with a lot of data. In order to use the potential of this information for the expansion of renewable energy, it must be combined and evaluated intelligently. According to Gartner, IoT has the potential to change the energy value chain in four key areas: it enables new business models, optimizes asset management, automates operations and digitalizes the entire value chain from energy source to kWh.

ENERGY TRANSITION REQUIRES TECHNOLOGICAL CHANGE

Installing smaller power-generating systems will soon no longer pose the greatest challenge for operators. In the near future, coherently linking, integrating and controlling them will be the order of the day. The energy transition is therefore spurring technological change on a grand scale. Smart grids, for example, will only function properly and increase overall capacity when data on generation, consumption and networks is available in real time. The Internet of Things enables the necessary fast data processing, even for the smallest consumers and prosumers on the grid, and allows more and more household appliances to communicate with the Internet. These devices are in turn connected to a smart meter gateway, i.e. a hub for the intelligent management of consumers, producers and storage at private households and commercial enterprises. To unlock the true potential of this information, however, all the data must flow into a common data platform where it can be analyzed intelligently.
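As a rough illustration of the kind of roll-up such a common data platform performs, the following Python sketch aggregates hypothetical smart-meter messages into per-meter balances and a net load figure. The meter names, message format and kWh values are invented for the example; negative values stand for prosumer feed-in, e.g. from rooftop solar:

```python
from collections import defaultdict

# Hypothetical smart-meter messages for one time window.
# Positive kWh = consumption; negative kWh = feed-in from a prosumer.
messages = [
    {"meter": "household-1", "kwh": 1.2},
    {"meter": "household-2", "kwh": -0.8},  # rooftop PV feeding in
    {"meter": "household-1", "kwh": 0.9},
    {"meter": "household-2", "kwh": -1.1},
]

def aggregate(messages):
    """Roll per-meter readings up into totals a grid operator can act on."""
    totals = defaultdict(float)
    for msg in messages:
        totals[msg["meter"]] += msg["kwh"]
    net_load = sum(totals.values())  # > 0: grid must supply; < 0: surplus
    return dict(totals), net_load

totals, net_load = aggregate(messages)
print(totals)    # per-meter balance for the window
print(net_load)  # net consumption, here roughly 0.2 kWh
```

A production platform would of course ingest these messages continuously and over a secure transport, but the balancing logic – sum per producer/consumer, then net across the grid segment – is the core of it.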

FROM INDIVIDUAL APPLICATIONS TO AN ECOSYSTEM

For transmitting data from the Internet of Things, Germany has nationwide fixed-line and mobile networks available. New technology such as the 5G mobile standard allows data to be transferred securely and reliably to the cloud, either directly via the public 5G network or via a 5G campus network. Software for data analytics and AI tailored to energy companies is now available – including monitoring, analysis, forecasting and optimization tools. The analyzed data can be accessed via web browsers and in-house data centers. Taken together, this provides the energy sector with a comprehensive IoT ecosystem for the future.

Originally posted here.


by Philipp Richert

New digital and IoT use cases are becoming more and more important. When it comes to the adoption of these new technologies, there are several different maturity levels, depending on the domain. Within the retail industry, and specifically food retail, we are currently seeing the emergence of a host of IoT use cases.

Two forces are driving this: a technology push, in which suppliers in the retail domain have technologies available to build retail IoT use cases within a connected store; and a market pull by their customers, who are boosting the demand for such use cases.

[Figure: retail IoT use cases driven by a technology push from suppliers and a market pull from customers]

However, we also need to ask the following questions: What are IoT use cases good for? And what are they aiming at? We currently see three different fields of application:

  • Increasing efficiency and optimizing processes
  • Increasing customer satisfaction
  • Increasing revenues with new business models

No matter what is most important for your organization or whatever your focus, it is crucial to set up a process that provides guidance for identifying the right use cases. In the following section, we share some insights on how retailers can best design this process. We collated these insights together with the team from the Food Tech Campus.

How to identify the right retail IoT use cases

When identifying the right use cases for their stores, retailers should make sure to look into all phases within the entire innovation process: from problem description and idea collation to solution concept and implementation. Within this process, it is also essential to consider the so-called innovator’s trilemma and ensure that use cases are:

  • Desirable ones that your customer really needs
  • Technically feasible
  • Profitable for your sustainable business development

Before we can actually start identifying retail IoT use cases, we need to define search fields so that we can work on one topic with greater dedication and focus. We must then open up the problem space in order to extract the most relevant problems and pain points. Starting with prioritized and selected pain points, we then open up the solution space in order to define several solution concepts. Once these have been validated, the result should be a well-defined problem statement that concisely describes one singular pain point.

In the following, we want to take a deep dive into the different phases of the process while giving concrete examples, tips and our top-rated tools. Enjoy!

Search fields

Retailers possess expertise and face challenges at various stages along their complex process chains. It helps here to focus on a specific target group in order to avoid distraction. Target groups are typically users or customers in a defined environment. A good example would be to focus your search on processes that happen inside a store location and are relevant to the customer (e.g., the food shopper).

Understand and observe problems

User research, observation and listening are keys to a well-defined problem statement that allows for further ideation. Embedding yourself in various situations and conducting interviews with all the stakeholders visiting or operating a store should be the first steps. Join employees around the store for a day or two and support them during their everyday tasks. Empathize, look for any friction and ask questions. Take your key findings into workshops and spend some time isolating specific causes. Use personas based on your user research and make use of frameworks and canvas templates in order to structure your findings. Use working titles to name the specific problem statements. One example might be: Long queueing as a major nuisance for customers.

Synthesize findings

Are your findings somehow connected? Single-purpose processes and their owners within a store environment are prone to isolated views. Creating a common problem space increases the chances of adoption of any solution later. So it is worth taking the time to map out all findings and take a look at projects in the past and their outcome. In our example, queueing is linked to staff planning, lack of communication and unpredictable customer behavior.

Prioritize problems and pain points

Ask users or stakeholders to give their view on the defined problem statements and let them vote. Challenge their views and encourage them to empathize and consider the more holistic benefit. Once the quality of a problem statement has been assessed, evaluate its economic implications. In our example, this could mean that queueing affects most employees in the store, directly or indirectly. This problem might be solved through technology and should be explored further.

The result of a well-structured problem statement list should consist of a few new insights that might result in quick gains; one or two major known pain points, where the solution might be viable and feasible; and a list with additional topics that exist but are not too pressing at the moment.

Define opportunity areas

Map technologies and problems together. Are there any strategic goals that these problem statements might be assigned to? Have things changed in terms of technical feasibility (e.g., has the cost of a technology dropped over the past three years)? Can problems be validated within a larger setup easily, or are we talking about singular use cases? All these considerations should lead towards the most attractive problem to solve. Again, in our example, this might be: Queueing is a major problem in most locations; satisfying our customers should be our main goal; existing solutions are too expensive or inflexible.

[Figure: the problem space and solution space within the retail innovation process – from problem description and idea collation to solution concept and implementation]

Ideate and explore use cases

When conducting an ideation session, it is very helpful to bring in trends that are relevant to the defined problem areas so as to help boost creativity. In our example, for instance, this might be technology trends such as frictionless checkout for retail, hybrid checkout concepts, bring your own device (BYOD) and sensor approaches. It is always important to keep the following in mind: What do these trends mean for the customer journey in-store and how can they be integrated in (legacy) environments?

Define solution concepts

In the process of further defining the solution concepts, it is essential to evaluate the market potential and to consider customer and user feedback. Depending on the solution, it might be necessary to ask the various stakeholders – from store managers to personnel to customers – in order to get a clearer picture. When talking to customers or users, it is also helpful to bring along scribbles, pictures or prototypes in order to increase immersion. The insights gathered in this way help to validate assumptions and to pilot the concept accordingly.

Set metrics and KPIs to prove success

Defining data-based metrics and KPIs is essential for a successful solution. When setting up metrics and KPIs, you need to consider two aspects:

  • Use existing data – e.g., checkout frequency – in order to demonstrate the impact of the new solution. This offers a very inexpensive way of validating the business potential of the solution early on.
  • Use new data – e.g., measured waiting time – from the solution and evaluate it on a regular basis. This helps you understand whether you are collecting the right data and lets you derive measures that improve your solution.
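As a simple illustration of the second point, the Python sketch below computes an average queue waiting time from timestamped checkout events. The event format and values are invented for the example, not taken from any real solution:

```python
from datetime import datetime

# Hypothetical events: (time customer joined the queue, time checkout started).
events = [
    ("2024-05-01T10:00:00", "2024-05-01T10:04:30"),
    ("2024-05-01T10:01:00", "2024-05-01T10:03:00"),
    ("2024-05-01T10:02:00", "2024-05-01T10:09:00"),
]

def avg_wait_minutes(events):
    """Average queue waiting time in minutes: a KPI that can be tracked
    before and after a new checkout concept is rolled out."""
    waits = [
        (datetime.fromisoformat(end) - datetime.fromisoformat(start)).total_seconds() / 60
        for start, end in events
    ]
    return sum(waits) / len(waits)

print(round(avg_wait_minutes(events), 1))  # 4.5
```

Comparing this figure week over week – or between pilot and control stores – is what turns the raw sensor data into evidence that the solution works.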

Prototype for quick insights

In terms of technology, practically everything is feasible today. However, the value proposition of a use case (in terms of business and users) can remain unclear and requires testing. Instead of building a technical prototype, it can be helpful to evaluate the value proposition of the solution with humans (empathy prototyping). This could be a person triggering an alarm based on the information at hand instead of an automatic action. Insights and lessons learnt from this phase can be used alongside the technical realization (proof-of-concept) in order to tweak specific features of the solution.

Initiate a PoC for technical feasibility

When it comes to technical feasibility, a clear picture of the objectives and key results (OKRs) for the PoC is essential. This helps to set the boundaries for a lean process with respect to the installation of hardware, an efficient timeline and minimum costs. Furthermore, a well-defined test setup enables short testing periods that often yield all the results you need.

How IoT platforms can help build retail IoT use cases

The strong trend towards digitization within the retail industry opens up new use cases for the (food) retail industry. In order to make the most of this trend and to build on IoT, it is crucial first of all to determine which use cases to start with. Every retailer has a different focus and needs for their stores.

In the course of our retail projects, we have identified some of the recurring use cases that food retailers are currently implementing. We have also learnt a lot about how they can best leverage IoT in order to build a connected store. We share these insights in our white paper “The connected retail store.”

Originally posted here.
