
Against the backdrop of digital technology and the industrial revolution, the Internet of Things has become the most influential and disruptive of all the latest technologies. As an advanced technology, IoT is showing a palpable difference in how businesses operate. 

Although the Fourth Industrial Revolution is still in its infancy, early adopters of this advanced technology are edging out the competition with their competitive advantage. 

Businesses eager to become a part of this disruptive technology are jostling against each other to implement IoT solutions. Yet, they are unaware of the steps in effective implementation and the challenges they might face during the process. 

This is the only guide you'll need for delivering effective, uncomplicated IoT implementation.

 

Key Elements of IoT

There are three main elements of IoT technology:

  • Connectivity:

IoT devices are connected to the internet and have a Uniform Resource Identifier (URI) that lets them relay data to the connected network. The devices can be connected to each other, to a centralized server, to a cloud, or to a network of servers.

  • Data Communication:

IoT devices continuously share data with other devices in the network or the server. 

  • Interaction

IoT devices do not simply gather data. They transmit it to their endpoints or server. There is no point in collecting data if it is not put to good use. The collected data is used to deliver smart automation, make real-time business decisions, formulate strategies, or monitor processes.

How Does IoT work?

IoT devices have URIs and come with embedded sensors. With these sensors, the devices sense their environment and gather information; the devices could be air conditioners, smartwatches, cars, and so on. All of the devices then send their collected data to an IoT platform or gateway.

The IoT platform then performs analytics on the data from various sources and derives useful information as required.
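As an illustrative sketch of this flow (device names and fields here are invented, not from any particular platform), the following Python example simulates devices packaging sensor readings and a platform aggregating them into a simple insight:

```python
import json
import statistics

def device_reading(device_id, temperature):
    """Package a sensor reading the way a device might before upload."""
    return json.dumps({"device_id": device_id, "temperature": temperature})

def platform_ingest(payloads):
    """Collect payloads from many devices and derive a simple insight:
    the average temperature across the fleet."""
    readings = [json.loads(p) for p in payloads]
    temps = [r["temperature"] for r in readings]
    return {"devices": len(readings), "avg_temperature": statistics.mean(temps)}

payloads = [
    device_reading("ac-unit-1", 21.5),
    device_reading("ac-unit-2", 23.0),
    device_reading("car-7", 19.0),
]
print(platform_ingest(payloads))
```

In a real deployment, the JSON payloads would travel over a protocol such as MQTT or HTTPS rather than a Python list, but the gather-transmit-analyze shape is the same.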

What are the Layers in IoT Architecture?

Although there isn’t a standard IoT structure that’s universally accepted, the 4-layer architecture is considered to be the basic form. The four layers include perception, network, middleware, and application.

  • Perception:

Perception is the first, physical layer of IoT architecture. In this layer, sensors, edge devices, and actuators gather useful information based on the project's needs. The purpose of this layer is to gather data and transfer it to the next layer.

  • Network:

It is the connecting layer between perception and application. This layer gathers information from the perception layer and transmits the data to other devices or servers.

  • Middleware

The middleware layer offers storage and processing capabilities. It stores the incoming data and applies appropriate analytics based on requirements. 

  • Application

The user interacts with the application layer, which is responsible for delivering specific services to the end user.
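The four layers described above can be sketched as a simple pipeline. This is an illustrative model only, not any vendor's API; each function stands in for one layer, and the sensor names and values are invented:

```python
def perception_layer():
    # Physical layer: sensors and edge devices produce raw readings.
    return [{"sensor": "temp-01", "value": 22.0},
            {"sensor": "temp-02", "value": 24.0}]

def network_layer(readings):
    # Transport layer: forward data from devices toward the server
    # (here simply a pass-through).
    return list(readings)

def middleware_layer(readings):
    # Storage and processing: keep the data and compute a summary.
    store = list(readings)
    avg = sum(r["value"] for r in store) / len(store)
    return {"stored": store, "avg": avg}

def application_layer(processed):
    # User-facing layer: turn the analysis into something an end user sees.
    return f"Average temperature: {processed['avg']:.1f} °C"

print(application_layer(middleware_layer(network_layer(perception_layer()))))
# prints "Average temperature: 23.0 °C"
```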

Implementation Requirements

Effective and seamless implementation of IoT depends on specific tools, such as:

  • High-Level Security 

Security is one of the fundamental IoT implementation requirements. Since IoT devices gather sensitive real-time data about their environment, it is critical to put high-level security measures in place to keep that information protected and confidential.

  • Asset Management

Asset management includes the software, hardware, and processes that ensure that the devices are registered, upgraded, secured, and well-managed. 

  • Cloud Computing

Since massive amounts of structured and unstructured data are gathered and processed, the data is stored in the cloud. The cloud acts as a centralized repository of resources that allows the data to be accessed easily. Cloud computing also ensures seamless communication between various IoT devices.

  • Data Analytics

With advanced algorithms, large amounts of data are processed and analyzed from the cloud platform. As a result, you can derive trends based on the analytics, and corrective action can be taken. 
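As a minimal illustration of deriving a trend from collected data, a moving average smooths raw readings so the underlying direction stands out from noise (the readings below are invented):

```python
def moving_average(values, window=3):
    """Smooth noisy sensor data so trends stand out from fluctuations."""
    return [sum(values[i - window + 1:i + 1]) / window
            for i in range(window - 1, len(values))]

# Hypothetical hourly power-draw readings (kW) from a monitored machine.
power_draw = [9, 12, 15, 18, 21, 24]
trend = moving_average(power_draw)
print(trend)  # [12.0, 15.0, 18.0, 21.0] — a steadily rising draw
```

A rising trend like this is the kind of signal that triggers the "corrective action" mentioned above, such as scheduling maintenance before a failure.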

What are the IoT Implementation Steps?

Knowing the appropriate IoT implementation steps will help your business align its goals and expectations with the solution. You can also ensure the entire process is time-bound, cost-efficient, and satisfies all your business needs.


Set Business Objectives 

IoT implementation should serve your business goals and objectives. Unfortunately, not every entrepreneur is an accomplished technician or computer-savvy. You can hire experts if you lack the practical know-how regarding IoT, the components needed, and specialist knowledge. 

Think of what you will accomplish with IoT, such as improving customer experience, eliminating operational inconsistencies, reducing costs, etc. With a clear understanding of IoT technology, you should be able to align your business needs to IoT applications. 

Hardware Components and Tools

Selecting the necessary tools, components, hardware, and software systems needed for the implementation is the next critical step. First, you must choose the tools and technology, keeping in mind connectivity and interoperability. 

You should also select the right IoT platform that acts as a centralized repository for collecting and controlling all aspects of the network and devices. You can choose to have a custom-made platform or get one from suppliers. 

Some of the major components you require for implementation include:

  • Sensors
  • Gateways
  • Communication protocols
  • IoT platforms
  • Analytics and data management software

Implementation

Before initiating the implementation process, it is recommended that you put together a team of IoT experts and professionals with experience and knowledge in your selected use case. Make sure the team comprises experts from operations and IT with a specific skill set in IoT.

A typical team comprises experts with skills in mechanical engineering, embedded system design, electrical and industrial design, and front-end/back-end development.

Prototyping

Before giving the go-ahead, the team must develop an Internet of Things implementation prototype. 

A prototype will help you experiment and identify fault lines, connectivity, and compatibility issues. After testing the prototype, you can include modified design ideas. 

Integrate with Advanced technologies

After the sensors gather useful data, you can add layers of other technologies such as analytics, edge computing, and machine learning. 

The amount of unstructured data collected by the sensors far exceeds the structured data. Machine learning, deep-learning neural networks, and cognitive-computing technologies can be applied to both kinds of data to drive improvement.

Take Security Measures

Security is one of the top concerns of most businesses. With IoT depending predominantly on the internet for functioning, it is prone to security attacks. However, communication protocols, endpoint security, encryption, and access control management can minimize security breaches. 
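One of those measures, protecting message integrity, can be sketched with a keyed hash (HMAC) using Python's standard library. This is a simplified illustration only; a real deployment would layer it under TLS and provision a unique key per device:

```python
import hmac
import hashlib

# Assumed for illustration: in practice this key is provisioned
# securely and is unique to each device.
SECRET_KEY = b"per-device-secret"

def sign_payload(payload: bytes) -> str:
    """Attach a message authentication code so the server can detect tampering."""
    return hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()

def verify_payload(payload: bytes, signature: str) -> bool:
    expected = sign_payload(payload)
    # compare_digest avoids timing side channels during comparison.
    return hmac.compare_digest(expected, signature)

msg = b'{"device_id": "ac-unit-1", "temperature": 21.5}'
sig = sign_payload(msg)
print(verify_payload(msg, sig))          # True: payload is untampered
print(verify_payload(msg + b"x", sig))   # False: payload was altered
```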

Although there are no standardized IoT implementation steps, most projects follow these processes. But the exact sequence of IoT implementation depends on your project’s specific needs.

Challenges in IoT Implementation

Every new technology comes with its own set of implementation challenges. 


When you keep these challenges of IoT implementation in mind, you’ll be better equipped to handle them. 

  • Lack of Network Security

When your entire system is dependent on the network connectivity for functioning, you are just adding another layer of security concern to deal with. 

Unless you have a robust network security system, you are bound to face issues such as hacking into the servers or devices. Unfortunately, the IoT hacking statistics are rising, with over 1.5 million security breaches reported in 2021 alone. 

  • Data Retention and Storage 

IoT devices continually gather data, and over time the data becomes unwieldy to handle. Such massive amounts of data need high-capacity storage units and advanced IoT analytics technologies. 
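A common way to keep long-lived telemetry manageable is to roll raw samples up into coarser aggregates. The sketch below (with invented timestamps and values) compresses high-frequency readings into hourly averages, trading raw fidelity for far smaller long-term storage:

```python
from collections import defaultdict

def hourly_rollup(readings):
    """Compress raw (timestamp_seconds, value) samples into per-hour averages."""
    buckets = defaultdict(list)
    for ts, value in readings:
        buckets[ts // 3600].append(value)   # group samples by hour
    return {hour * 3600: sum(v) / len(v) for hour, v in sorted(buckets.items())}

raw = [(0, 10.0), (1800, 14.0), (3600, 20.0), (5400, 22.0)]
print(hourly_rollup(raw))  # {0: 12.0, 3600: 21.0}
```

Production time-series databases apply the same idea under names like downsampling or retention policies.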

  • Lack of Compatibility 

IoT implementation involves several sensors, devices, and tools, and a successful implementation largely depends on the seamless integration between these systems. In addition, since there are no standards for devices or protocols, there could be major compatibility issues during implementation. 

IoT is the latest technology delivering promising results. Yet, as with any technology, without proper implementation your business can't hope to leverage its immense benefits.

Taking chances with IoT implementation is not a smart business move, as your productivity, security, customer experience, and future depend on proper and effective implementation. The only way to harness this technology would be to seek a reliable IoT app development company that can take your initiatives towards success.


The construction industry is among the many under pressure for optimisation and sustainable growth, driven by the development of smart urban cities. Construction is expected to account for about USD 12.9 trillion of global output by 2022 and is predicted to grow globally by 3.1% by 2030. Its demand and spending are growing rapidly in an attempt to meet the housing needs of a fast-growing global population.

 

However, the industry faces unique challenges on a global scale. Despite continual stable growth, underperformance, constant project rework, labour shortages and a lack of adopted digital solutions cause production delays and a worrying 1% growth in productivity.

The construction industry is one of the slowest growing sectors for IoT adoption and digitalization. To put this in numbers:

  • Only 18% of construction companies use mobile apps for project data and collaboration.
  • Nearly 50% of companies in the field spend 1% or less on technology.
  • 95% of all data captured in construction goes unused.
  • 28% of UK construction firms point to a lack of on-site information as the biggest challenge for productivity.

And yet

  • 70% of the contractors trust that the advance of technology and software solutions, in particular, can improve their work.

To sum up the data, the construction industry is open to digitalization and IoT, but the advancement is slow and difficult, due to the specific needs of the field. When we talk about technology implementation on the construction site, IoT can help with project data collection, environment condition monitoring, equipment tracking and remote management, as well as safety monitoring with wearables.

However, the implementation of IoT for construction sites calls for careful planning and calculation of costs, as well as trust in the technology. Here is where private LTE networks can come into play and help construction companies take initial steps into advancing their digitalization.

What is Private LTE?

Long-Term Evolution, or LTE, is a broadband technology that allows companies to vertically scale solutions for easier management and improved latency, range, speed and costs. LTE is a connectivity standard suited to deployments with many devices across multiple bands, and it is used worldwide. For construction sites, this means LTE can connect all devices on-site, including heavy machinery, mobile devices, trackers, sensors and anything else that requires a stable, uninterrupted connection.

LTE requires companies to connect to an MNO and depend on local infrastructure to run the network, much like Wi-Fi. Private LTE, on the other hand, allows companies to create and operate an independent wireless network that covers all their business facilities. Private LTE is often used to reduce congestion, add a layer of security and reduce cost for locations with no existing infrastructure, a category construction sites fall into.

Why Private LTE for construction?

When it comes to the particular needs of construction companies, private LTE offers the following benefits over public LTE and Wi-Fi:

  • Network ownership and autonomy

Private LTE can be seen as creating a connectivity island on the construction site, where all company devices and machinery can be monitored and controlled by the company network team. Owning the network increases flexibility because businesses do not need to rely on local providers for making changes, creating additional secure networks or moving devices from one network to another.

Construction sites do not often come with suitable networks in place, so putting your own network up whenever needed is just as important as being able to take it down quickly. With private LTE, construction companies can do both as they see fit.

  • Cost

Private LTE can optimise the cost of running a construction site, not just by providing stable connectivity for IoT implementation but also from a pure network-operation standpoint. For example, Wi-Fi is often not sufficient for serving large construction sites and may require a number of repeaters to cover the area, which increases the running cost. Private LTE can run on a single tower and can be combined with CBRS for further cost reduction. This makes it ideal for locations that would otherwise incur high infrastructure installation costs.

  • Control and security

With private LTE, companies can control network access, preventing unwanted users or outside network interference. This is critical for securing the project data and device access. Private LTE allows for setting up specific levels of security access for different on-site members.

Network ownership also allows teams to use real-time data to make timely decisions on consumption, device control and management, as well as react in case of emergency. This can further help increase the on-site safety of the team.

  • Performance

Compared to public LTE or Wi-Fi, private LTE networks are simply better performing when it comes to hundreds of devices. Because the network is private, it allows the usage of connectivity management platforms for control of individual SIM cards (connected devices), traffic optimisation and control. Public LTE and Wi-Fi networks are often not equipped to handle multiple devices on the site, let alone underground projects, where there are barriers to the network. Uninterrupted performance is also key for real-time data and employee safety at high-risk construction sites.

Private LTE is a technology widely applicable to manufacturing, mining, cargo and freight, as well as to utilities, hospitals and smart cities in general. Because of its capacity, it is considered the stepping stone to 5G implementation and is widely seen as the gateway to future-proofing network access.

While the implementation of NB-IoT is progressing, we at JT IoT have already developed solutions suitable for future IoT connectivity. To learn more about private LTE, watch our deep dive into the topic with Pod Group. 

Originally posted here.


By Bee Hayes-Thakore

The Android Ready SE Alliance, announced by Google on March 25th, paves the path for tamper-resistant, hardware-backed security services. Kigen is bringing the first secure iSIM OS, along with our GSMA-certified eSIM OS and personalization services, to support fast adoption of emerging security services across smartphones, tablets, WearOS, Android Auto Embedded and Android TV.

Google has been advancing its investment in how tamper-resistant secure hardware modules can protect not only Android and its functionality, but also third-party apps and sensitive transactions. The latest Android smartphone features enable tamper-resistant key storage for Android apps using StrongBox, an implementation of the hardware-backed Keystore that resides in a hardware security module.

To accelerate adoption of new Android use cases with stronger security, Google announced the formation of the Android Ready SE Alliance. Secure Element (SE) vendors are joining hands with Google to create a set of open-source, validated, and ready-to-use SE Applets. On March 25th, Google launched the General Availability (GA) version of StrongBox for SE.


Hardware based security modules are becoming a mainstay of the mobile world. Juniper Research’s latest eSIM research, eSIMs: Sector Analysis, Emerging Opportunities & Market Forecasts 2021-2025, independently assessed eSIM adoption and demand in the consumer sector, industrial sector, and public sector, and predicts that the consumer sector will account for 94% of global eSIM installations by 2025. It anticipates that established adoption of eSIM frameworks from consumer device vendors such as Google, will accelerate the growth of eSIMs in consumer devices ahead of the industrial and public sectors.


Consumer sector will account for 94% of global eSIM installations by 2025

Juniper Research, 2021.

Expanding the secure architecture of trust to consumer wearables, smart TV and smart car

What's more, a major development is that this is no longer just for smartphones and tablets, but also applies to WearOS, Android Auto Embedded and Android TV. These less traditional form factors have huge potential beyond being purely companion devices to smartphones or tablets. With the power, size and performance benefits offered by Kigen's iSIM OS, OEMs and chipset vendors can consider the full scope of the vast Android ecosystem to deliver new services.

This means new secure services and innovations around:

🔐 Digital keys (car, home, office)

🛂 Mobile Driver’s License (mDL), National ID, ePassports

🏧 eMoney solutions (for example, Wallet)

How is Kigen supporting Google’s Android Ready SE Alliance?

The alliance was created to make discrete tamper-resistant, hardware-backed security the lowest common denominator for the Android ecosystem. A major goal of this alliance is to enable consistent, interoperable, and demonstrably secure applets across the Android ecosystem.

Kigen believes that enabling the broadest choice and interoperability is fundamental to the architecture of digital trust. Our secure, standards-compliant eSIM and iSIM OS, and secure personalization services are available to all chipset or device partners in the Android Ready SE Alliance to leverage the benefits of iSIM for customer-centric innovations for billions of Android users quickly.

Vincent Korstanje, CEO of Kigen

Kigen’s support for the Android Ready SE Alliance will allow our industry partners to easily leapfrog to the enhanced security and power efficiency benefits of iSIM technology or choose a seamless transition from embedded SIM so they can focus on their innovation.

We are delighted to partner with Kigen to further strengthen the security of Android through StrongBox via Secure Element (SE). We look forward to widespread adoption by our OEM partners and developers and the entire Android ecosystem.

Sudhi Herle, Director of Android Platform Security 

In the near term, the Google team is prioritizing and delivering the following Applets in conjunction with corresponding Android feature releases:

  • Mobile driver’s license and Identity Credentials
  • Digital car keys

Kigen brings the ability to bridge physical embedded security hardware to a fully integrated form factor. Our standards-compliant Kigen eSIM OS (version 2.2 eUICC OS) is available to support chipsets and device makers now. This announcement is the start of what will bring a whole host of new and exciting trusted services, offering a better experience for users on Android.

Kigen’s eSIM (eUICC) OS brings


The smallest operating system, allowing OEMs to select compact, cost-effective hardware to run it on.

Kigen OS offers the highest level of logical security when employed on any SIM form factor, including a secure enclave.

On top of Kigen OS, we have a broad portfolio of Java Card™ Applets to support your needs for the Android SE Ready Alliance.

Kigen's Integrated SIM, or iSIM (iUICC), OS furthers this advantage


Integrated at the heart of the device and securely personalized, iSIM brings significant size and battery-life benefits to cellular IoT devices. iSIM can act as a root of trust for payment, identity, and critical infrastructure applications.

Kigen's iSIM is flexible enough to support dual-SIM capability through a single profile or through remote SIM provisioning mechanisms, with the latter enabling out-of-the-box connectivity and secure, remote profile management.

For smartphones, set-top boxes, Android Auto applications, in-car displays, Chromecast or Google Assistant enabled devices, iSIM can offer significant benefits for incorporating artificial intelligence at the edge.

Kigen’s secure personalization services to support fast adoption

SIM vendors have in-house capabilities for data generation, but the eSIM and iSIM value chains redistribute many roles and responsibilities among new stakeholders for the personalization of operator credentials, whether along different stages of production or over-the-air once devices are deployed.

Kigen can offer data generation as a service to vendors new to the ecosystem.

Partner with us to provide cellular chipset and module makers with the strongest security and performance for integrated SIM, accelerating these new use cases.

Security considerations for eSIM and iSIM enabled secure connected services

Designing a secure connected product requires considerable thought and planning and there really is no ‘one-size-fits-all’ solution. How security should be implemented draws upon a multitude of factors, including:

  • What data is being stored or transmitted between the device and other connected apps?
  • Are there regulatory requirements for the device (e.g. PCI DSS, HIPAA, FDA)?
  • What are the hardware or design limitations that will affect security implementation?
  • Will the devices be manufactured in a site accredited by all of the necessary industry bodies?
  • What is the expected lifespan of the device?

End-to-end ecosystem and services thinking needs to be a design consideration from the very early stages, especially when considering the strain on battery consumption in devices such as wearables, smart watches and fitness devices, as well as portable devices that are part of connected consumer vehicles.

Originally posted here.


Selecting an IoT Platform

The past several years have seen a huge growth in the number of companies offering IoT Platforms. The market research firm IoT Analytics reported 613 companies offering IoT platforms in 2021! This is a mind-blowing number. The IoT platforms vary widely in capabilities but typically focus on one or more of the building blocks of IoT systems – physical devices, internet connectivity, and digital services. In one way or another, they provide software (or in some cases hardware too) that gives companies a head-start when building IoT systems. There are so many companies offering platforms that it is nearly impossible to keep up with all of them.

Charting the right path, avoiding pitfalls, maximizing your success.

If you are getting into IoT and are not familiar with IoT platforms, you might be asking yourself questions like: What makes up an IoT platform? What advantages could one have for my company? How do I select an IoT platform?

Let’s tackle these questions one by one. 

What makes up an IoT platform?
 

Features

True IoT platforms typically provide the following features:

  • Digital services running in the cloud that physical devices connect to
  • Software that runs on devices that communicates with the digital services
  • A framework or schema for data messaging and remote command & control of devices
  • Security infrastructure to handle device registration, authentication, security credential management
  • Tools and methods for updating device firmware over-the-air (OTA)
  • Web dashboards for viewing the state of devices and interacting with the system

IoT platforms may or may not also provide other features, including:

  • Analytics tools and dashboards
  • Digital twins or shadows
  • Application deployment orchestration
  • Machine learning orchestration
  • Rules engines
  • Fleet management tools
  • Integrations to other services
  • Gateway or hub support for bridging devices to the cloud
  • Cellular network plans for devices
  • Web or mobile application interfaces and templates
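To make the "framework or schema for data messaging and remote command & control" feature concrete, here is a minimal, hypothetical message envelope and dispatcher. The field names and handler behaviour are invented for illustration; real platforms (AWS IoT, Azure IoT Hub, and others) each define their own topics and payload shapes:

```python
import json

def make_telemetry(device_id, readings):
    """Wrap sensor readings in a typed envelope the platform can route."""
    return json.dumps({"type": "telemetry", "device": device_id, "data": readings})

def handle_message(raw, handlers):
    """Dispatch an incoming message to the right handler based on its type."""
    msg = json.loads(raw)
    return handlers[msg["type"]](msg)

# One handler per message type: data flowing up, commands flowing down.
handlers = {
    "telemetry": lambda m: f"stored {len(m['data'])} readings from {m['device']}",
    "command": lambda m: f"executing {m['command']} on {m['device']}",
}

print(handle_message(make_telemetry("pump-3", {"rpm": 1450}), handlers))
print(handle_message(json.dumps({"type": "command", "device": "pump-3",
                                 "command": "reboot"}), handlers))
```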

Example Elements of an IoT Platform

Types of IoT Platforms

IoT platforms are not all the same. Their features and target use-cases vary a lot. However, at a high level, they can be grouped into two main categories.

Platform as a Service (PaaS) – Offered by the big cloud service providers

PaaS platforms provide building blocks to do most things an IoT system needs, but it is up to you to write the custom code that connects it all together. With a PaaS provider, you don’t have to worry about underlying server hardware, but you have to compose their services into a working architecture and manage the deployment of applications that use their services. This is more work but allows more flexibility and the opportunity to customize the system to your needs. Ongoing costs of a PaaS IoT platform are typically lower than a SaaS, but expertise is required to ensure correct usage patterns to avoid larger costs. The big cloud providers all offer PaaS IoT platforms. This includes Amazon Web Services (AWS), Microsoft Azure, and Google Cloud Platform (GCP).

Software as a Service (SaaS) – Offered by numerous software vendors, large and small

With a SaaS provider, you get access to use the software application they deploy and manage for you. Or you can license it and deploy it yourself. SaaS platforms typically provide some configurability and integrations with other systems. There is much less work on the cloud side as this is mostly taken care of for you. However, you are limited to the features that the IoT platform provider offers. You may need to invest more in bridging the platform to your other systems. Depending on your use case, a SaaS may provide more advanced features out-of-the-box than a PaaS. Ongoing costs are likely to be higher with SaaS IoT platforms. Examples of SaaS IoT platform providers include Pelion, Losant, Friendly Technologies, Software AG, Blynk, Particle, ThingsBoard, and Golioth.

What advantages could they have for my company?

Benefits of IoT Platforms – There are a lot.

The goal of IoT platforms is to provide a foundation for product-makers to build IoT solutions on top of.  IoT platforms take care of all the fundamental features that all solutions need (e.g. “the plumbing”), so you can focus on adding value with the differentiating features that you add on top. Users of IoT platforms get a huge benefit from economies of scale – especially if using the most popular platforms. This translates into improved security, more robust services, and lower costs. For these reasons, we always recommend using an IoT platform.

How do I select an IoT platform?

The Big Question – Should you use a PaaS or SaaS?

At SpinDance, we believe in lean and agile business principles. This usually translates into taking a staged approach and focusing on different priorities in each stage. IoT is a journey, not a destination. We have seen the most success when companies tackle each of their challenges in stages, don’t try to do too much too quickly, and don’t lock themselves into long-term decisions too early. Choosing whether to use a PaaS or SaaS depends on the stage you are in along your IoT journey.

Our Answer – It depends on your stage in your IoT journey. 


If you are just starting on your IoT journey…

In the disconnected stage your main goals are to learn what technology can do for you, develop a vision for your new product or service with that knowledge, and evaluate your vision based on customer input. At this stage, you shouldn’t be worried too much about scale or efficiency. You need to nail down the problem you want to solve and the solution you propose to solve it with. Ash Maurya, entrepreneur and author of Running Lean, says that “Building a successful product is fundamentally about risk mitigation.” To evaluate and reduce your risk, you need to test your assumptions.

We often recommend building Proof of Concepts and Prototypes in this stage. These experiments are crucial to help you quickly validate the feasibility, desirability, and viability of your plans. They also help rally your organization and potential customers around new possibilities.

SaaS IoT platforms have their most advantage in this stage. They can help you get devices connected and data flowing quickly because they typically have more features ready out-of-the-box. However, since your knowledge about the future is limited at this stage, we recommend you avoid long-term commitments so you don’t get stuck with a solution that doesn’t work for you down the road.

 If you are working on your first connected product…

In the connecting stage, you should have some confidence in your problem-market fit and you should have a better idea of what benefits IoT can bring to your business. Now you need to build a system to deal with the rigors of production. You also need to adapt your organization to support your new product or service.

We recommend shifting your focus to creating robust experiences for your customers spanning across the physical devices and digital interfaces they interact with. You need to consider the other parts of the system such as mobile applications, web applications, database storage, operations dashboards, etc that you’ll need for your customers and your internal teams to interact with the system.

PaaS IoT platforms start to have a strong advantage in this stage. More often than not, we see the needs of the company outgrow the features provided by a SaaS. Therefore, there is a need to augment the capabilities of the SaaS platform or bridge it to your other systems. For example, if a SaaS IoT platform does not provide long-term data storage, you will need to create a bridge that pulls data from the platform’s service and puts it into a database that you control in the cloud. Maintaining and monitoring this bridge is non-trivial, which may lead you to want to consolidate everything into your existing cloud. For reasons like this, we typically recommend PaaS platforms at this stage.
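Such a bridge can be sketched as follows. The fetch function here is a stand-in for the vendor's export API, whose real endpoint and payload shape vary by platform; the copy into a company-controlled SQLite database simply illustrates the pattern:

```python
import sqlite3

def fetch_from_saas_platform():
    # Hypothetical stand-in for a call to the SaaS platform's export API.
    # A real bridge would page through an HTTP endpoint with credentials.
    return [
        {"device": "sensor-1", "ts": 1700000000, "value": 18.2},
        {"device": "sensor-2", "ts": 1700000060, "value": 19.7},
    ]

def bridge_to_own_db(conn):
    """Pull the latest records from the platform and copy them into a
    database the company controls, for long-term retention."""
    conn.execute(
        "CREATE TABLE IF NOT EXISTS telemetry (device TEXT, ts INTEGER, value REAL)"
    )
    rows = [(r["device"], r["ts"], r["value"]) for r in fetch_from_saas_platform()]
    conn.executemany("INSERT INTO telemetry VALUES (?, ?, ?)", rows)
    conn.commit()
    return len(rows)

conn = sqlite3.connect(":memory:")
print(bridge_to_own_db(conn))  # prints 2: records copied this run
```

In production this would run on a schedule, track a high-water mark so records are not copied twice, and alert when the platform API is unreachable.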

If you already have connected products out in the market…

The connected or accelerating stages are all about maximizing the benefit of IoT, taking advantage of the valuable data you are likely getting, and aligning your costs to revenue. You should be focused on scaling up your system while you improve your connected customer relationships and build up new processes and skills. These are not insignificant tasks. It takes in-house expertise. Your team needs to understand your systems, be able to improve efficiencies and optimize costs. You’ve got to get data to the right place when you need it, and it has to drive reliable actions across all your infrastructure.

PaaS IoT platforms offer the most advantage at this stage. You have more control of your systems and are not locked into a specific software platform. You have the ability to customize and have tighter integration with your existing systems. This lets you adapt and evolve to meet the needs of your customers over time.

Which production architecture works for you?

Considering that the needs of your production system will likely go beyond those of your prototypes and minimum viable product (MVP), it is best to think about what additional features you will need to augment the capabilities of your selected IoT platform. The diagrams below show the difference between augmenting a SaaS platform and building on a PaaS platform.

An IoT System Built Around a SaaS Platform

An IoT System Built on a PaaS Platform

What else should be considered when choosing an IoT Platform?

When selecting an IoT platform, you are also choosing an ecosystem to join. This has ramifications that go beyond just the platform. Consider the following questions:

  • What device types are already supported / how easy is it to support the devices I need?
  • How closely does the platform fit my use case?
  • How easy is it to get started and use?
  • What skills do I need on my team to utilize the platform?
  • Will my team get the support we need to succeed?
  • Is the service reliable / highly available / trustworthy?
  • What additional features and services will I have to develop?
  • What systems do I need to integrate with? How easy is that?
  • What will my ongoing costs be for the IoT platform, as well as for other systems I need to maintain?
  • What happens if I want to change to a different IoT Platform?
  • Am I building the skills and knowledge we need inside my organization to succeed in the future?

Jumpstarting your IoT Systems with Starter Components

Building a system based on a PaaS platform offers a lot of flexibility and control. But you are faced with configuring and deploying your own applications to get your system running. There are a lot of reasons why you don’t want to create things from scratch. You need a head start. You need to follow good patterns and industry best practices. So, what should you do?

We believe that starter components (a.k.a. solution templates, solution implementations, etc.) offer a great jumpstart to standing up a robust system. The big cloud companies know this and offer templates for various use cases. These can be used in any stage of the IoT Journey. For example, AWS has a Smart Product Solution implementation that features capabilities to connect devices, process and analyze telemetry data, and more within a scalable framework. A fundamentally great feature of this is that it is based on the AWS Cloud Development Kit (CDK), which means it can be programmatically deployed in minutes. Microsoft Azure has similar solution examples that can also be deployed and tested relatively quickly.

Additionally, there are a lot of benefits from working with a solution provider that has experience with IoT systems and can offer good guidance and support. SpinDance recently collaborated with our partner TwistThink to build Auris Cloud, a set of customizable IoT components that capture our combined years of experience working on IoT systems. Auris components are customizable to meet the needs of many different types of use cases and are deployable on AWS. Things like security, performance, and scalability are baked into the system. Auris can be optimized for different performance and cost models, integrated with other systems, and deployed as an application that you control. We believe this approach offers a great trade-off between fully custom and off-the-shelf solutions.

Summary

At SpinDance, we don’t recommend you try to build an IoT system from scratch. There are great solutions available from both SaaS and PaaS providers. They offer massive benefits in enabling you to build secure and scalable IoT solutions. However, we recommend you consider your organization’s goals and the stage you are in before locking yourself into an IoT platform. Be sure to start with your customer needs and build backward. Prototype and get things right before scaling. A SaaS IoT platform can be great for building proof of concepts or prototyping but may not work for you long term. For maximum customization, flexibility, and tighter integration with your other cloud applications we recommend a PaaS IoT platform. And for the lowest risks and maximum benefits, we recommend using pre-built components that can be customized to your needs.

Read more…

by Carsten Gregersen

With how fast the IoT industry is growing, it’s paramount your business isn’t left behind.

IoT technology has brought a ton of benefits and makes systems more efficient and easier to manage. As a result, it’s no surprise that more businesses are adopting IoT solutions. On top of that, businesses starting new projects have the slight advantage of buying all new technology and, therefore, not having to deal with legacy systems. 

On the other hand, if you have an already operational legacy system and you want to implement IoT, you may think you have to buy entirely new technology to get it online, right? Not necessarily. After all, if your legacy systems are still functional and your staff is comfortable with them, why should you waste all of that time and money?

Legacy systems can still bend to your will and be used for adopting IoT. Sticking rather than twisting can help your business save money on your IoT project.

In this blog, we’ll go over the steps you would need to follow for integrating IoT technology into your legacy systems and the different options you have to get this done.

1. Analyze Your Current Systems

First things first, take a look at your current systems and take note of their purpose, the way they work, the type of data they collect, and the way they could benefit from communicating with each other.

This step is important because it will allow you to plan out IoT integration more efficiently. When analyzing your current systems, make sure you focus on these key aspects:

  • Automation – See how automation is currently accomplished and what other aspects should be automated.
  • Efficiency – What aspects are routinely tedious or slow and could become more efficient?
  • Data – How is it collected, stored, and processed, and how could it be used better?
  • Money – How much do current processes cost, and which could be done more cheaply with IoT?
  • Computing – How is data processed: cloud, edge, or hybrid?

Following these steps will help you know your project in and out and apply IoT in the areas that truly matter.

2. Plan for IoT Integration

To integrate IoT into your legacy systems, you must first get everything in order.

Successful integration requires strong planning, design, and implementation phases. Steps you will need to follow to achieve this include:

  • Decide what IoT hardware is going to be needed
  • Set a budget taking software, hardware, and maintenance into account
  • Decide on a communication protocol
  • Develop software tools for interacting with the system
  • Decide on a security strategy

This process can be daunting if you don’t know how IoT works, but by following the right tutorials and developing with the right tools, your IoT project can be readily realized.

Nabto has tools that can not only help you set up an IoT project but also add legacy systems and newer IoT devices to it.

Here are several ways in which we can help get your legacy systems IoT ready. 

  • You can integrate the Nabto SDK to add IoT remote control access to your devices.
  • Use the Nabto application to move data from one network to another – otherwise known as TCP tunneling.
  • Add secure remote access to your existing solutions. 
  • Build mobile apps for remote control of embedded devices with our IoT app solution.

3. Implement IoT Sensors to Existing Hardware

IoT has the capability to automate, control, and make systems more efficient. Therefore, interconnecting your legacy systems to allow for communication is a great idea.

There’s a high chance your legacy systems don’t currently have the ability to sense or communicate data. However, adding new IoT sensors can give them these capabilities.

IoT sensors are small devices that can detect when something changes. They capture and send information over the internet to a main computer, which processes it or executes commands. These sensors can measure, among other things:

  • Temperature
  • Humidity
  • Pressure
  • Orientation (gyroscope)
  • Acceleration (accelerometer)

These sensors are cheap and easy to install; therefore, adding them to your existing legacy systems can be the simplest and quickest way to get them communicating over the internet.

Set up which inputs the sensor should respond to, under what conditions, and what it should do with the collected data. You could be surprised by how much a simple data-collecting device can benefit your project!
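As a rough sketch of what such a retrofit sensor node might report (the sensor read is simulated here, and the field names and threshold are made up for illustration):

```python
import json
import random

def read_temperature_c():
    """Stand-in for a real sensor driver (e.g. an I2C register read).
    Here we simply simulate a plausible reading."""
    return round(random.uniform(18.0, 30.0), 1)

def build_report(device_id, reading, alert_above=25.0):
    """Package a reading the way a retrofit sensor might report it:
    an identifier, the raw value, and a flag the backend can act on."""
    return json.dumps({
        "device_id": device_id,
        "temp_c": reading,
        "alert": reading > alert_above,
    })

# In a real node this payload would go out over HTTP, MQTT, or similar.
print(build_report("legacy-boiler-7", read_temperature_c()))
```

The point is that even a trivially small payload like this, emitted periodically, already gives a legacy machine a voice on the network.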

4. Connect Existing PLCs to the Internet

If you already have an automated system managed by a PLC (Programmable Logic Controller), your devices already share data with each other. Therefore, the next step is to get them online.

With access to the internet, these systems can be controlled remotely from anywhere in the world. Data can be accessed, modified, and analyzed more easily. On top of that, updates can be pushed globally at any time.

Given that some PLCs utilize proprietary protocols and idiosyncratic ways of making devices communicate with each other, an IoT gateway is the best way to take the PLC to the internet.

An IoT gateway is a device that acts as a bridge between IoT devices and the cloud and allows communication between them. This lets you implement IoT on a PLC without having to restructure it or change it too much.
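As a hedged sketch of the gateway's translation step (the register map and values are invented; real maps come from the PLC vendor's documentation, and the fieldbus read is stubbed out):

```python
import json

# Hypothetical register map for a PLC -- real maps are defined by the
# PLC's documentation and are often proprietary.
REGISTER_MAP = {0: "motor_rpm", 1: "line_speed", 2: "fault_code"}

def poll_plc_registers():
    """Stand-in for a fieldbus read (e.g. Modbus holding registers)."""
    return [1480, 112, 0]

def registers_to_message(raw):
    """Translate anonymous register values into a named, cloud-friendly
    JSON payload -- the core job of an IoT gateway in front of a PLC."""
    return json.dumps({REGISTER_MAP[i]: v for i, v in enumerate(raw)})

print(registers_to_message(poll_plc_registers()))
```

The gateway thus isolates the cloud side from the PLC's protocol: only this one component needs to speak both languages.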

5. Connect Legacy Systems Using an I/O Port

A lot of times a legacy system has some kind of interface for data input/output. Sometimes this was implemented for debugging when the product was developed. At other times, it exists so that service organizations can interface with products in the field and help customers with setup or debug problems.

These debug ports are similar to real serial ports, such as RS-485, RS-232, etc. That being said, they can also be more raw UART, SPI, or I2C interfaces. What’s more, the majority of the time the protocol on top of the serial connection is proprietary.

This kind of interface is great. It allows a “black box” to be created, with a physical interface matching the legacy system and firmware running on the black box that translates “internet” requests into the proprietary protocol of the legacy system. In addition, this new system can serve as a design for newer internet-accessible versions of the product simply by adopting the black box into the internal legacy design.
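A sketch of that translation step, assuming a completely made-up framing scheme (start byte, command id, 16-bit value, additive checksum) rather than any real vendor protocol:

```python
import json
import struct

START = 0x7E  # hypothetical frame delimiter -- real legacy protocols vary

def frame_command(cmd_id, value):
    """Build a frame for an invented legacy serial protocol:
    start byte, command id, big-endian 16-bit value, additive checksum."""
    body = struct.pack(">BBH", START, cmd_id, value)
    checksum = sum(body) & 0xFF
    return body + bytes([checksum])

def translate_request(json_request):
    """The 'black box' step: turn an internet-facing JSON request into
    a frame the legacy device understands over its debug port."""
    req = json.loads(json_request)
    commands = {"set_speed": 0x01, "stop": 0x02}  # illustrative command ids
    return frame_command(commands[req["cmd"]], req.get("value", 0))

frame = translate_request('{"cmd": "set_speed", "value": 1500}')
print(frame.hex())  # → 7e0105dc60
```

The firmware on the black box would write these bytes to the UART/SPI/I2C port and relay any response back as JSON.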

Bottom Line

Getting your legacy systems to work in IoT is not as much of a challenge as you might have initially thought.

Following some fairly simple strategies can let you set them up relatively quickly. However, don’t forget the planning phase for your IoT strategy and deciding how it will be implemented in your own legacy system. This will streamline the process even more and let you take full advantage of all the benefits that IoT brings to your project.

Originally posted here.

Read more…

Today the world is obsessed with the IoT, as if this is a new concept. We've been building the IoT for decades, but it was only recently some marketing "genius" came up with the new buzz-acronym.

Before there was an IoT, before there was an Internet, many of us were busy networking. For the Internet itself was a (brilliant) extension of what was already going on in the industry.

My first experience with networking was in 1971 at the University of Maryland. The school had a new computer, a $10 million Univac 1108 mainframe. This was a massive beast that occupied most of the first floor of a building. A dual-processor machine, it was transistorized, though the control console did have some ICs. Rows of big tape drives mirrored the layman's idea of computers in those days. Many dishwasher-sized disk drives were placed around the floor and printers, card readers and other equipment were crammed into every corner. Two Fastrand drum memories, each consisting of a pair of six-foot long counterrotating drums, stored a whopping 90 MB each. Through a window you could watch the heads bounce around.

The machine was networked. It had a 300 baud modem with which it could contact computers at other universities. A primitive email system let users create mail which was queued till nightfall. Then, when demands on the machine were small, it would call the appropriate remote computer and forward mail. The system operated somewhat like today's "hot potato" packets, where the message might get delivered to the easiest machine available, which would then attempt further forwarding. It could take a week to get an email, but at least one saved the $0.08 stamp that the USPS charged.

The system was too slow to be useful. After college I lost my email account but didn't miss it at all.

By the late 70s many of us had our own computers. Mine was a home-made CP/M machine with a Z80 processor and a small TV set as a low-res monitor. Around this time Compuserve came along and I, like so many others, got an account with them. Among other features, users had email addresses. Pretty soon it was common to dial into their machines over a 300 baud modem and exchange email and files. Eventually Compuserve became so ubiquitous that millions were connected, and at my tools business during the 1980s it was common to provide support via this email system. The CP/M machine gave way to a succession of PCs, and modems ramped up to 57K baud.

My tools business expanded rapidly and soon we had a number of employees. Sneakernet was getting less efficient so we installed an Arcnet network using Windows 3.11. That morphed into Ethernet connections, though the cursing from networking problems multiplied about as fast as the data transfers. Windows was just terrible at maintaining reliable connectivity.

In 1992 Mike Lee, a friend from my Boys Night Out beer/politics/sailing/great friends group, which still meets weekly (though lately virtually) came by the office with his laptop. "You have GOT to see this" he intoned, and he showed me the world-wide web. There wasn't much to see as there were few sites. But the promise was shockingly clear. I was stunned.

The tools business had been doing well. Within a month we spent $100k on computers, modems and the like and had a new business: Softaid Internet Services. SIS was one of Maryland's first ISPs and grew quickly to several thousand customers. We had a T1 connection to MAE-EAST in the DC area which gave us a 1.5 Mb/s link… for $5000/month. Though a few customers had ISDN connections to us, most were dialup, and our modem shelf grew to over 100 units with many big fans keeping the things cool.

The computers all ran BSD Unix, which was my first intro to that OS.

I was only a few months back from a failed attempt to singlehand my sailboat across the Atlantic and had written a book-length account of that trip. I hastily created a web page of that book to learn about using the web. It is still online and has been read several million times in the intervening years. We put up a site for the tools business which eventually became our prime marketing arm.

The SIS customers were sometimes, well, "interesting." There was the one who claimed to be a computer expert, but who tried to use the mouse by waving it around over the desk. Many had no idea how to connect a modem. Others complained about our service because it dropped out when mom would pick up the phone to make a call over the modem's beeping. A lot of handholding and training was required.

The logs showed a shocking (to me at the time) amount of porn consumption. Over lunch an industry pundit explained how porn drove all media, from the earliest introduction of printing hundreds of years earlier.

The woman who ran the ISP was from India. She was delightful and had a wonderful marriage. She later told me it had been arranged; they met on their wedding day. She came from a remote and poor village and had had no exposure to computers, or electricity, till emigrating to the USA.

Meanwhile many of our tools customers were building networking equipment. We worked closely with many of them and often had big routers, switches and the like onsite that our engineers were working on. We worked on a lot of what we'd now call IoT gear: sensors et al connected to the net via a profusion of interfaces.

I sold both the tools and Internet businesses in 1997, but by then the web and Internet were old stories.

Today, like so many of us, I have a fast (250 Mb/s) and cheap connection into the house with four wireless links and multiple computers chattering to each other. Where in 1992 the web was incredibly novel and truly lacking in useful functionality, now I can't imagine being deprived of it. Remember travel agents? Ordering things over the phone (a phone that had a physical wire connecting it to Ma Bell)? Using 15 volumes of an encyclopedia? Physically mailing stuff to each other?

As one gets older the years spin by like microseconds, but it is amazing to stop and consider just how much this world has changed. My great grandfather lived on a farm in a world that changed slowly; he finally got electricity in his last year of life. His daughter didn't have access to a telephone till later in life, and my dad designed spacecraft on vellum and starched linen using a slide rule. My son once saw a typewriter and asked me what it was; I mumbled that it was a predecessor of Microsoft Word.

That he understood. I didn't have the heart to try and explain carbon paper.

Originally posted HERE.

Read more…

In my last post, I explored how OTA updates are typically performed using Amazon Web Services and FreeRTOS. OTA updates are critically important to developers with connected devices. In today’s post, we are going to explore several best practices developers should keep in mind with implementing their OTA solution. Most of these will be generic although I will point out a few AWS specific best practices.

Best Practice #1 – Name your S3 bucket with afr-ota

There is a little trick with creating S3 buckets that I was completely oblivious to for a long time. Thankfully, when I checked in with some colleagues about it, they had not been aware of it either, so I’m not sure how long this has been supported, but it can save an embedded developer from having to wade through too many AWS policies and simplify the process a little bit.

Anyone who has attempted to create an OTA update with AWS and FreeRTOS knows that you have to set up several permissions to allow an OTA Update Job to access the S3 bucket. Well, if you name your S3 bucket so that it begins with “afr-ota”, then the S3 bucket will automatically have the AWS managed policy AmazonFreeRTOSOTAUpdate attached to it. (See Create an OTA Update service role for more details.) It’s a small help, but a good best practice worth knowing.

Best Practice #2 – Encrypt your firmware updates

Embedded software must be one of the most expensive things to develop that mankind has ever invented! It’s time consuming to create and test and can consume a large percentage of the development budget. Software, though, also drives most features in a product and can dramatically differentiate it. That software is intellectual property that is worth protecting through encryption.

Encrypting a firmware image provides several benefits. First, it can convert your firmware binary into a form that seems random or meaningless. This is desirable because a developer shouldn’t want their binary image to be easily studied, investigated, or reverse engineered. This makes it harder for someone to steal intellectual property and more difficult for a would-be attacker to understand the system. Second, encrypting the image means that the sender must have a key or credential of some sort that matches the device that will decrypt the image. This can be looked at as a simple means of helping to authenticate the source, although more should be done than just encryption to fully authenticate and verify integrity, such as signing the image.

Best Practice #3 – Do not support firmware rollbacks

There is often a debate as to whether firmware rollbacks should be supported in a system or not. My recommendation for a best practice is that firmware rollbacks be disabled. The argument for rollbacks is often that if something goes wrong with a firmware update then the user can rollback to an older version that was working. This seems like a good idea at first, but it can be a vulnerability source in a system. For example, let’s say that version 1.7 had a bug in the system that allowed remote attackers to access the system. A new firmware version, 1.8, fixes this flaw. A customer updates their firmware to version 1.8, but an attacker knows that if they can force the system back to 1.7, they can own the system. Firmware rollbacks seem like a convenient and good idea, in fact I’m sure in the past I used to recommend them as a best practice. However, in today’s connected world where we perform OTA updates, firmware rollbacks are a vulnerability so disable them to protect your users.
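The policy itself is simple to express. A minimal sketch (how and where the current version is stored differs per device; in practice it belongs in tamper-resistant storage such as a monotonic counter or fuses):

```python
def accept_update(current_version, candidate_version):
    """Anti-rollback policy: only accept strictly newer firmware.
    Versions are compared as (major, minor) tuples, which Python
    orders lexicographically."""
    return candidate_version > current_version

print(accept_update((1, 7), (1, 8)))  # True: upgrade allowed
print(accept_update((1, 8), (1, 7)))  # False: rollback to vulnerable 1.7 rejected
```

The check costs almost nothing; the hard part is protecting the stored current version so an attacker cannot reset it.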

Best Practice #4 – Secure your bootloader

Updating firmware Over-the-Air requires several components to ensure that it is done securely and successfully. Often the focus is on getting the new image to the device and getting it decrypted. However, just like in traditional firmware updates, the bootloader is still a critical piece to the update process and in OTA updates, the bootloader can’t just be your traditional flavor but must be secure.

There are quite a few methods that can be used with the onboard bootloader, but no matter the method used, the bootloader must be secure. Secure bootloaders need to be capable of verifying the authenticity and integrity of the firmware before it is ever loaded. Some systems will use the application code to verify and install the firmware into a new application slot while others fully rely on the bootloader. In either case, the secure bootloader needs to be able to verify the authenticity and integrity of the firmware prior to accepting the new firmware image.
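A sketch of the verification step at the heart of this practice. Real secure bootloaders verify an asymmetric signature (e.g. ECDSA) against a public key held in protected storage; an HMAC over the image stands in here purely to show the accept/reject flow, and the key is a made-up placeholder:

```python
import hashlib
import hmac

# Placeholder for key material that would live in protected storage
# on a real device -- never hard-coded like this in production.
DEVICE_KEY = b"hypothetical-per-device-key"

def sign_image(image):
    """Produce an authentication tag over the full firmware image."""
    return hmac.new(DEVICE_KEY, image, hashlib.sha256).digest()

def boot(image, tag):
    """Refuse to run any image whose tag does not verify.
    compare_digest avoids timing side channels in the comparison."""
    if not hmac.compare_digest(sign_image(image), tag):
        return "halt: untrusted image"
    return "boot: image verified"

firmware = b"\x00firmware-v1.8\xff"
tag = sign_image(firmware)
print(boot(firmware, tag))            # boot: image verified
print(boot(firmware + b"\x01", tag))  # halt: untrusted image
```

Note that even a one-byte change to the image causes the boot to be refused, which is exactly the integrity guarantee a secure bootloader must provide.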

It’s also a good idea to ensure that the bootloader is built into a chain of trust and cannot be easily modified or updated. The secure bootloader is a critical component in a chain-of-trust that is necessary to keep a system secure.

Best Practice #5 – Build a Chain-of-Trust

A chain-of-trust is a sequence of events that occur while booting the device that ensures each link in the chain is trusted software. For example, I’ve been working with the Cypress PSoC 64 secure MCUs recently and these parts come shipped from the factory with a hardware-based root-of-trust to authenticate that the MCU came from a secure source. That Root-of-Trust (RoT) is then transferred to a developer, who programs a secure bootloader and security policies onto the device. During the boot sequence, the RoT verifies the integrity and authenticity of the bootloader, which then verifies the integrity and authenticity of any second-stage bootloader or software, which in turn verifies the authenticity and integrity of the application. The application then verifies the authenticity and integrity of its data, keys, operational parameters and so on.

This sequence creates a Chain-of-Trust, which is needed and used by firmware OTA updates. When the new firmware request is made, the application must decrypt the image and verify that the authenticity and integrity of the new firmware are intact. That new firmware can then only be used if the Chain-of-Trust can successfully make its way through each link in the chain. The bottom line: a developer and the end user know that when the system boots successfully, the new firmware is legitimate.

Conclusions

OTA updates are a critical infrastructure component to nearly every embedded IoT device. Sure, there are systems out there that once deployed will never update; however, those are probably a small percentage of systems. OTA updates are the go-to mechanism to update firmware in the field. We’ve examined several best practices that developers and companies should consider when they start to design their connected systems. In fact, the bonus best practice for today is that if you are building a connected device, make sure you explore your OTA update solution sooner rather than later. Otherwise, you may find that building the Chain-of-Trust necessary in today’s deployments will be far more expensive and time consuming to implement.

Originally posted here.

Read more…

Wi-Fi, NB-IoT, Bluetooth, LoRaWAN… This webinar will help you to choose the appropriate connectivity protocol for your IoT application.

Connectivity is cool! The cornucopia of connectivity choices available to us today would make engineers gasp in awe and disbelief just a few short decades ago.

I was just pondering this point and – as usual – random thoughts started to bounce around my poor old noggin. Take the topic of interoperability, for example (for the purposes of these discussions, we will take “interoperability” to mean “the ability of computer systems or software to exchange and make use of information”).

Don’t get me started on the subject of the Endian Wars. Instead, let’s consider the 7-bit American Standard Code for Information Interchange (ASCII) that we know and love. The currently used ASCII standard of 96 printing characters and 32 control characters was first defined in 1968. For machines that supported ASCII, this greatly facilitated their ability to exchange information.

For reasons of their own, the folks at IBM decided to go their own way by developing a proprietary 8-bit code called the Extended Binary Coded Decimal Interchange Code (EBCDIC). This code was first used on the IBM 360 computer, which was presented to the market in 1964. Just for giggles and grins, IBM eventually introduced 57 different variants of EBCDIC targeted at different countries (a “standard” that came in 57 different flavors!). This obviously didn’t help IBM machines in different countries to make use of each other’s files. Even worse, different types of IBM computers found it difficult to talk to each other, let alone to machines from other manufacturers.

There’s an old joke that goes, “Standards are great – everyone should have one.” The problem is that almost everybody did. Sometime around late-1980 or early 1981, for example, I was working at International Computers (ICL) in Manchester, England. I recall being invited to what I was told was going to be a milestone event. This turned out to be a demonstration in which a mainframe computer was connected to a much smaller computer (akin to one of the first PCs) via a proprietary wired network. With great flourish and fanfare, the presenter created and saved a simple ASCII text file on the mainframe, then – to the amazement of all present – opened and edited the same file on the small computer.

This may sound like no big deal to the young folks of today, but it was an event of such significance at that time that journalists from the national papers came up on the train from London to witness this august occasion with their own eyes so that they could report back to the unwashed masses.

Now, of course, we have a wide variety of wired standards, from simple (short range) protocols like I2C and SPI, to sophisticated (longer range) offerings like Ethernet. And, of course, we have a cornucopia of wireless standards like Wi-Fi, NB-IoT, Bluetooth, and LoRaWAN. In some respects, this is almost an embarrassment of riches … there are so many options … how can we be expected to choose the most appropriate connectivity protocol for our IoT applications?

Well, I’m glad you asked, because I will be hosting a one-hour webinar on this very topic on Tuesday 28 September 2021, starting at 8:00 a.m. Pacific Time (11:00 a.m. Eastern Time).

Presented by IoT Central and sponsored by ARM, yours truly will be joined in this webinar by Samuele Falconer (Principal Product Manager at u-blox), Omer Cheema (Head of the Wi-Fi Business Unit at Renesas Semiconductor), Wienke Giezeman (Co-Founder and CEO at The Things Industries), and Thomas Cuyckens (System Architect at Qorvo).

If you are at all interested in connectivity for your cunning IoT creations, then may I make so bold as to suggest you Register Now before all of the good virtual seats are taken. I’m so enthused by this event that I’m prepared to pledge on my honor that – if you fail to learn something new – I will be very surprised (I was going to say that I would return the price of your admission but, since this event is free, that would have been a tad pointless).

So, what say you? Can I dare to hope to see you there? Register Now

Read more…

4 key questions to ask tech vendors

Posted by Terri Hiskey

Without mindful and strategic investments, a company’s supply chain could become wedged in its own proverbial Suez Canal, ground to a halt by outside forces and its inflexible, complex systems.

 

It’s a dramatic image, but one that became reality for many companies in the last year. Supply chain failures aren’t typically such high-profile events as the Suez Canal blockage, but rather death by a thousand inefficiencies, each slowing business operations and affecting the customer experience.

Delay by delay and spreadsheet by spreadsheet, companies are at risk of falling behind more nimble, cloud-enabled competitors. And as we emerge from the pandemic with a new understanding of how important adaptable, integrated supply chains are, company leaders have critical choices to make.

The Hannover Messe conference (held online from April 12-16) gives manufacturing and supply chain executives around the world a chance to hear perspectives from industry leaders and explore the latest manufacturing and supply chain technologies available.

Technology holds great promise. But if executives don’t ask key strategic questions to supply chain software vendors, they could unknowingly introduce a range of operational and strategic obstacles into their company’s future.

If you’re attending Hannover Messe, here are a few critical questions to ask:

Are advanced technologies like machine learning, IoT, and blockchain integrated into your supply chain applications and business processes, or are they addressed separately?

It’s important to go beyond the marketing. Is the vendor actually promoting pilots of advanced technologies that are simply customized use cases for small parts of an overall business process hosted on a separate platform? If so, it may be up to your company to figure out how to integrate it with the rest of that vendor’s applications and to maintain those integrations.

To avoid this situation, seek solutions that have been purpose-built to leverage advanced technologies across use cases that address the problems you hope to solve. It’s also critical that these solutions come with built-in connections to ensure easy integration across your enterprise and to third party applications.

Are your applications or solutions written specifically for the cloud?

If a vendor’s solution for a key process (like integrated business planning or plan to produce, for example) includes applications developed over time by a range of internal development teams, partners, and acquired companies, what you’re likely to end up with is a range of disjointed applications and processes with varying user interfaces and no common data model. Look for a cloud solution that helps connect and streamline your business processes seamlessly.

Update schedules for the various applications could also be disjointed and complicated, so customers can be tempted to skip updates. But some upgrades may be forced, causing disruption in key areas of your business at various times.

And if some of the applications in the solution were written for the on-premises world, business processes will likely need customization, making them hard-wired and inflexible. The convenience of cloud solutions is that they can take frequent updates more easily, resulting in greater value driven by the latest innovations.

Are your supply chain applications fully integrated—and can they be integrated with other key applications like ERP or CX?

A lack of integration between and among applications within the supply chain and beyond means that end users don’t have visibility into the company’s operations—and that directly affects the quality and speed of business decisions. When market disruptions or new opportunities occur, unintegrated systems make it harder to shift operations—or even come to an agreement on what shift should happen.

And because many key business processes span multiple areas—like manufacturing forecast to plan, order to cash, and procure to pay—integration also increases efficiency. If applications are not integrated across these entire processes, business users resort to pulling data from the various systems and then often spend time debating whose data is right.

Of course, all of these issues increase operational costs and make it harder for a company to adapt to change. They also keep the IT department busy with maintenance tasks rather than focusing on more strategic projects.

Do you rely heavily on partners to deliver functionality in your supply chain solutions?

Ask for clarity on which products within the solution belong to the vendor and which were developed by partners. Is there a single SLA for the entire solution? Will the two organizations’ development teams work together on a roadmap that aligns the technologies? Will their priority be on making a better solution together or on enhancements to their own technology? Will they focus on enabling data to flow easily across the supply chain solution, as well as to other systems like ERP? Will they be able to overcome technical issues that arise and streamline customer support?

It’s critical for supply chain decision-makers to gain insight into these crucial questions. If the vendor is unable to meet these foundational needs, the customer will face constant obstacles in their supply chain operations.

Originally posted here.

Read more…

By Ricardo Buranello

What Is the Concept of a Virtual Factory?

For a decade, the first Friday in October has been designated as National Manufacturing Day. This day kicks off a month-long schedule of events at manufacturing companies nationwide to attract talent to modern manufacturing careers.

For a time, manufacturing went out of fashion: young tech talent preferred career opportunities in software and financial services. That preference has changed in recent years, as the advent of digital technologies and robotization brought some glamour back.

The connected factory is democratizing another innovation: the virtual factory. Without critical asset connectivity at the IoT edge, the virtual factory could only be realized in brand-new factories with brand-new technology implementations.

There are now technologies that enable decades-old assets to communicate, allowing us to join machine data with data on the physical environment and operational conditions. The benefits of virtual factory technologies such as the digital twin are within reach for both greenfield and legacy implementations.

Digital twin technologies can be used for predictive maintenance and scenario planning analysis. At its core, the digital twin is about access to real-time operational data to predict and manage the asset’s life cycle. It leverages relevant life cycle management information inside and outside the factory. The possibilities of bringing various data types together for advanced analysis are promising.
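As a rough illustration of that idea, here is a minimal sketch of a digital twin that mirrors an asset's real-time operating data and extrapolates its remaining life. The linear wear model, asset names, and parameters are all assumptions for illustration, not a real asset-management API:

```python
from dataclasses import dataclass

@dataclass
class AssetTwin:
    """Minimal digital-twin sketch: mirrors an asset's operating state
    and estimates remaining life from accumulated stress."""
    asset_id: str
    rated_hours: float          # design life at nominal load (assumed)
    hours_run: float = 0.0
    wear: float = 0.0           # accumulated wear, 1.0 == end of life

    def update(self, hours: float, load_factor: float) -> None:
        # Heavier-than-nominal load wears the asset faster (assumed model).
        self.hours_run += hours
        self.wear += (hours / self.rated_hours) * max(load_factor, 1.0)

    def remaining_life_hours(self) -> float:
        if self.wear >= 1.0:
            return 0.0
        # Extrapolate from the average wear rate observed so far.
        rate = self.wear / self.hours_run if self.hours_run else 0.0
        return (1.0 - self.wear) / rate if rate else self.rated_hours

pump = AssetTwin("pump-07", rated_hours=20_000)
pump.update(hours=1_000, load_factor=1.2)   # ran above nominal load for 1,000 h
print(round(pump.remaining_life_hours()))
```

A real twin would ingest this operating data continuously from the edge rather than from manual updates, but the life-cycle logic is the same shape.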

I used to see a distinction between IoT-enabled greenfield technology in new factories and legacy technology in older ones. In new factories, data flowed seamlessly from IoT-enabled machines to enterprise systems or the cloud for advanced analytics. In older factories, data that should have moved to enterprise systems or the cloud hit countless walls. Innovative factories were running proofs of concept (POCs) with IoT technologies on legacy equipment, but this wasn’t the norm.

No matter the age of the factory or equipment, the expectations look alike: when manufacturing companies invest in machines, the assumption is that the asset will be used for a decade or more. We had to invent something inclusive of both new and legacy machines and systems.

We had to create something that allows decades-old equipment of diverse brands and types (PLCs, CNCs, robots, etc.) to communicate, and to think about how to make legacy machines talk to legacy systems. Connecting was not enough: we had to make it accessible to experienced developers and technicians who are not specialized in systems integration.

If plant managers and leaders have clear and consumable data, they can use it for analysis and measurement. Surfacing and routing data has enabled innovative use cases in processes controlled by aged equipment: prescriptive and predictive maintenance reduce downtime, and data access enables remote operation and improved safety on the plant floor. Each line flows better, improving supply chain orchestration and worker productivity.

Open protocols aren’t optimized for connecting to each machine. You need tools and optimized drivers to connect to the machines, cut latency time and get the data to where it needs to be in the appropriate format to save costs. These tools include:

  • Machine data collection
  • Data transformation and visualization
  • Device management
  • Edge logic
  • Embedded security
  • Enterprise integration
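To make the collection-and-transformation steps in that list concrete, here is a hedged sketch: raw register values from a hypothetical legacy controller are mapped to named engineering units and serialized for an enterprise application. The register addresses, scaling factors, and field names are all illustrative, not a specific driver's format:

```python
import json

# Hypothetical raw register map read from a legacy PLC (addresses assumed).
raw = {"40001": 1523, "40002": 88}   # e.g. Modbus-style holding registers

def transform(raw_registers: dict) -> dict:
    """Map raw register values to named, scaled engineering units."""
    return {
        "discharge_pressure_kpa": raw_registers["40001"] / 10,  # assumed scaling
        "bearing_temp_c": raw_registers["40002"],
    }

def to_enterprise_payload(machine_id: str, reading: dict) -> str:
    """Serialize to the JSON shape an enterprise application might expect."""
    return json.dumps({"machine": machine_id, **reading}, sort_keys=True)

payload = to_enterprise_payload("press-12", transform(raw))
print(payload)
```

In a production edge stack, the raw read, the transformation rules, and the delivery protocol would each come from the tool categories listed above rather than being hard-coded.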
This digital copy of the entire factory floor promises improved productivity, quality, and throughput, reduced downtime, and access to more data and visibility. It enables factories to make small changes in the way machines and processes operate to achieve improvements.

Plants are trying to get and use data to improve overall equipment effectiveness. OEE applications can calculate how many good and bad parts were produced compared to the machine’s capacity. This analysis can go much deeper. Factories can visualize how the machine works down to sub-processes. They can synchronize each movement to the millisecond and change timing to increase operational efficiency.
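The OEE calculation described above is simple to express. This sketch uses the standard availability × performance × quality formula with illustrative shift numbers (not data from any real plant):

```python
def oee(availability: float, performance: float, quality: float) -> float:
    """OEE is the product of the three standard factors."""
    return availability * performance * quality

# One shift's figures (illustrative numbers):
planned_min, downtime_min = 480, 48        # 8-hour shift, 48 min down
ideal_cycle_s, total_parts = 30, 800       # ideal 30 s per part
run_min = planned_min - downtime_min
availability = run_min / planned_min                      # 0.90
performance = (ideal_cycle_s * total_parts / 60) / run_min
good_parts = 760
quality = good_parts / total_parts                        # 0.95

print(f"OEE = {oee(availability, performance, quality):.1%}")
```

The deeper analysis the text mentions, down to sub-processes and millisecond timing, feeds the same three factors with finer-grained machine data.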

The technology is here. It is mature. It’s no longer a question of whether you want to use it — you need it to get to what’s next. I think this makes it a fascinating time for smart manufacturing.

Originally posted here.


By Jacqi Levy

The Internet of Things (IoT) is transforming every facet of our buildings – how we inhabit them, how we manage them, and even how we build them. There is a vast ecosystem around today’s buildings, and no part of that ecosystem is untouched.

In this blog series, I plan to examine the trends being driven by IoT across the buildings ecosystem. Since the lifecycle of a building begins with design and construction, let’s start there. Here are four ways that the IoT is radically transforming building design and construction.

Building information modeling

Building information modeling (BIM) is a process that provides an intelligent, 3D model of a building. Typically, BIM is used to model a building’s structure and systems during design and construction, so that changes to one set of plans can be updated simultaneously in all other impacted plans. Taken a step further, however, BIM can also become a catalyst for smart buildings projects.

Once a building is up and running, data from IoT sensors can be pulled into the BIM. You can use that data to model things like energy usage patterns, temperature trends or people movement throughout a building. The output from these models can then be analyzed to improve future buildings projects. Beyond its impact on design and construction, BIM also has important implications for the management of building operations.
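As a toy example of the kind of modeling described, the following sketch aggregates hypothetical zone-level energy readings to find each zone's peak-usage hour, the sort of output that could be fed back into a BIM. Zone names and values are invented for illustration:

```python
from collections import defaultdict

# Hypothetical IoT sensor feed a BIM integration might pull from
# building meters: (zone, hour-of-day, kWh) tuples.
readings = [
    ("lobby", 9, 4.1), ("lobby", 9, 3.9), ("lobby", 18, 1.2),
    ("office-3F", 9, 7.5), ("office-3F", 14, 8.2), ("office-3F", 18, 2.0),
]

# Sum energy per (zone, hour) bucket.
usage = defaultdict(float)
for zone, hour, kwh in readings:
    usage[(zone, hour)] += kwh

# Peak-usage hour per zone: a pattern future designs could account for.
zones = {zone for zone, _ in usage}
peaks = {z: max((h for zn, h in usage if zn == z), key=lambda h: usage[(z, h)])
         for z in zones}
print(peaks)
```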

Green building

The construction industry is a huge driver of landfill waste – up to 40% of all solid waste in the US comes from building projects. This unfortunate fact has ignited a wave of interest in sustainable architecture and construction. But the green building movement has become about much more than keeping building materials out of landfills. It is influencing the design and engineering of building systems themselves, allowing buildings to reduce their impact on the environment through energy management.

Today’s green buildings are being engineered to do things like shut down unnecessary systems automatically when the building is unoccupied, or open and close louvers automatically to let in optimal levels of natural light. In a previous post, I talk about 3 examples of the IoT in green buildings, but these are just some of the cool ways that the construction industry is learning to be more sustainable with help from the IoT.

Intelligent prefab

Using prefabricated building components can be faster and more cost effective than traditional building methods, and it has an added benefit of creating less construction waste. However, using prefab for large commercial buildings projects can be very complex to coordinate. The IoT is helping to solve this problem.

Using RFID sensors, individual prefab parts can be tracked throughout the supply chain. A recent example is the construction of the Leadenhall Building in London. Since the building occupies a relatively small footprint but required large prefabricated components, coordinating the installation was a logistically complex task. RFID data was used to help mitigate the effects of any downstream delays in construction. In addition, the data was then fed into the BIM once parts were installed, allowing for real-time rendering of the building in progress, as well as the establishment of project controls and KPIs.

Construction management

Time is money, so any delays on a construction project can be costly. So how do you prevent your critical heavy equipment from going down and backing up all the other trades on site? With the IoT!

Heavy construction equipment is being outfitted with sensors, which can be remotely monitored for key indicators of potential maintenance issues like temperature fluctuations, excessive vibrations, etc. When abnormal patterns are detected, alerts can trigger maintenance workers to intervene early, before critical equipment fails. Performing predictive maintenance in this way can save time and money, as well as prevent unnecessary delays in construction projects.
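A minimal sketch of that alerting logic might look like the following; the sensor names and threshold values are illustrative, not manufacturer specifications:

```python
# Hypothetical telemetry thresholds for a piece of heavy equipment.
THRESHOLDS = {"hydraulic_temp_c": 95.0, "vibration_mm_s": 11.0}

def check_reading(reading: dict) -> list[str]:
    """Return maintenance alerts for any indicator beyond its threshold."""
    return [
        f"ALERT {key}: {value} exceeds {THRESHOLDS[key]}"
        for key, value in reading.items()
        if key in THRESHOLDS and value > THRESHOLDS[key]
    ]

# One remote reading from an excavator's sensors (values invented).
alerts = check_reading({"hydraulic_temp_c": 98.5, "vibration_mm_s": 6.2})
print(alerts)
```

Real predictive-maintenance systems look for abnormal patterns over time rather than single threshold crossings, but the intervene-before-failure loop is the same.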

Originally posted here.


By Ashley Ferguson

Thanks to the introduction of connected products, digital services, and increased customer expectations, enterprise IoT spend has been trending consistently upward. The global IoT market is projected to reach $1.4 trillion USD by 2027. The pressure to build IoT solutions and get a return on those investments has teams on a frantic search for IoT engineers to secure in-house IoT expertise. However, due to the complexity of IoT solutions, finding all of this expertise in a single engineer is a difficult, if not impossible, proposition.

So how do you adjust your search for an IoT engineer? The first step is to acknowledge that IoT solution development requires the fusion of multiple disciplines. Even simple IoT applications require hardware and software engineering, knowledge of protocols and connectivity, web development skills, and analytics. Certainly, there are many engineers with IoT knowledge, but complete IoT solutions require a team of partners with diverse skills. This often means drawing on external sources to fill the expertise gaps.

THE ANATOMY OF AN IoT SOLUTION

IoT solutions provide enterprises with opportunities for innovation through new product offerings and cost savings through refined operations. An IoT solution is an integrated bundle of technologies that help users answer a question or solve a specific problem by receiving data from devices connected to the internet. One of the most common IoT use cases is asset tracking solutions for enterprises who want to monitor trucks, equipment, inventory, or other items with IoT. The anatomy of an asset tracking IoT solution includes the following:

[Figure: the anatomy of an asset tracking IoT solution]

This is a simple asset tracking example. For more complex solutions including remote monitoring or predictive maintenance, enterprises must also consider installation, increased bandwidth, post-development support, and UX/UI for the design of the interface for customers or others who will use the solution. Enterprise IoT solutions require an ecosystem of partners, components, and tools to be brought to market successfully.
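To make the asset tracking anatomy concrete, here is a hedged sketch of the telemetry message a tracker device might emit. The field names and message shape are assumptions for illustration, not any specific product's format:

```python
import json
import time

def build_report(device_id: str, lat: float, lon: float, battery_pct: int) -> str:
    """Serialize one position report as a tracker might publish it upstream."""
    return json.dumps({
        "device": device_id,
        "ts": int(time.time()),          # report time, Unix seconds
        "location": {"lat": lat, "lon": lon},
        "battery_pct": battery_pct,
    })

msg = build_report("truck-042", 39.10, -84.51, 87)
print(msg)
```

In a complete solution, messages like this would travel over a cellular or LPWAN link to a platform that stores, visualizes, and alerts on them, which is where the rest of the ecosystem comes in.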

Consider the design of your desired connected solution. Do you know where you will need to augment skills and services?

If you are in the early stages of IoT concept development and at the center of a buy vs. build debate, it may be a worthwhile exercise to assess your existing team’s skills and how they correspond with the IoT solution you are trying to build.

IoT SKILLS ASSESSMENT

  • Hardware
  • Firmware
  • Connectivity
  • Programming
  • Cloud
  • Data Science
  • Presentation
  • Technical Support and Maintenance
  • Security
  • Organizational Alignment

MAKING TIME FOR IoT APPLICATION DEVELOPMENT

The time it will take your organization to build a solution is dependent on the complexity of the application. One way to estimate the time and cost of IoT application development is with Indeema’s IoT Cost Calculator. This tool can help roughly estimate the hours required and the cost associated with the IoT solution your team is interested in building. In MachNation’s independent comparison of the Losant Enterprise IoT Platform and Azure, it was determined that developers could build an IoT solution in 30 hours using Losant and in 74-94 hours using Microsoft Azure.

As you consider IoT application development, consider the makeup of your team. Is your team prepared to dedicate hours to the development of a new solution, or will it be a side project? Enterprise IT teams are often in place to maintain existing operating systems and to ensure networks are running smoothly. In the event that an IT team is tapped to even partially build an IoT solution, there is a great chance that the IT team will need to invite partners to build or provide part of the stack.

HOW THE IoT JOB GETS DONE

Successful enterprises recognize early on that some of these skills will need to be augmented through additional people, through an ecosystem, or with software. It will require more than one ‘IoT engineer’ for the job. According to the results of a McKinsey survey, “the preferences of IoT leaders suggest a greater willingness to draw capabilities from an ecosystem of technology partners, rather than rely on homegrown capabilities.”

IoT architecture alone is intricate. Losant, an IoT application enablement platform, is designed with many of the IoT-specific components already in place. Losant enables users to build applications in a low-to-no code environment and scale them up to millions of devices. Losant is one piece in the wider scope of an IoT solution. In order to build a complete solution, an enterprise needs hardware, software, connectivity, and integration. For those components, our team relies on additional partners from the IoT ecosystem.

The IoT ecosystem, also known as the IoT landscape, refers to the network of IoT suppliers (hardware, devices, software platforms, sensors, connectivity, software, systems integrators, data scientists, data analytics) whose combined services help enterprises create complete IoT solutions. At Losant, we’ve built an IoT ecosystem with reliable experienced partners. When IoT customers need custom hardware, connectivity, system integrators, dev shops, or other experts with proven IoT expertise, we can tap one of our partners to help in their areas of expertise.

SECURE, SCALABLE, SEAMLESS IoT

Creating secure, scalable, and seamless IoT solutions for your environment begins with starting small. Starting small gives your enterprise the ability to establish its ecosystem. Teams can begin with a small investment and apply learnings to subsequent projects. Many IoT success stories begin with enterprises setting out to solve one problem. These simple beginnings have enabled them to reap the benefits of the data harvested in their environments.

Originally posted here.


By Tony Pisani

For midstream oil and gas operators, data flow can be as important as product flow. The operator’s job is to safely move oil and natural gas from its extraction point (upstream), to where it’s converted to fuels (midstream), to customer delivery locations (downstream). During this process, pump stations, meter stations, storage sites, interconnection points, and block valves generate a substantial volume and variety of data that can lead to increased efficiency and safety.

“Just one pipeline pump station might have 6 Programmable Logic Controllers (PLCs), 12 flow computers, and 30 field instruments, and each one is a source of valuable operational information,” said Mike Walden, IT and SCADA Director for New Frontier Technologies, a Cisco IoT Design-In Partner that implements OT and IT systems for industrial applications. Until recently, data collection from pipelines was so expensive that most operators only collected the bare minimum data required to comply with industry regulations. That data included pump discharge pressure, for instance, but not pump bearing temperature, which helps predict future equipment failures.

A turnkey solution to modernize midstream operations

Now midstream operators are modernizing their pipelines with Industrial Internet of Things (IIoT) solutions. Cisco and New Frontier Technologies have teamed up to offer a solution combining the Cisco 1100 Series Industrial Integrated Services Router, Cisco Edge Intelligence, and New Frontier’s know-how. Deployed at edge locations like pump stations, the solution extracts data sent by pipeline equipment over legacy protocols and transforms it at the edge into a format that analytics and other enterprise applications understand. The transformation also minimizes bandwidth usage.

Mike Walden views the Cisco IR1101 as a game-changer for midstream operators. He shared with me that “Before the Cisco IR1101, our customers needed four separate devices to transmit edge data to a cloud server—a router at the pump station, an edge device to do protocol conversion from the old to the new, a network switch, and maybe a firewall to encrypt messages…With the Cisco IR1101, we can meet all of those requirements with one physical device.”

Collect more data, at almost no extra cost

Using this IIoT solution, midstream operators can for the first time:

  • Collect all available field data instead of just the data on a polling list. If the maintenance team requests a new type of data, the operations team can meet the request using the built-in protocol translators in Edge Intelligence. “Collecting a new type of data takes almost no extra work,” Mike said. “It makes the operations team look like heroes.”
  • Collect data more frequently, helping to spot anomalies. Recording pump discharge pressure more frequently, for example, makes it easier to detect leaks. Interest in predicting (rather than responding to) equipment failure is also growing. The life of pump seals, for example, depends on both the pressure that seals experience over their lifetime and the peak pressures. “If you only collect pump pressure every 30 minutes, you probably missed the spike,” Mike explained. “If you do see the spike and replace the seal before it fails, you can prevent a very costly unexpected outage – saving far more than the cost of a new seal.”
  • Protect sensitive data with end-to-end security. Security is built into the IR1101, with secure boot, VPN, certificate-based authentication, and TLS encryption.
  • Give IT and OT their own interfaces so they don’t have to rely on the other team. The IT team has an interface to set up network templates to make sure device configuration is secure and consistent. Field engineers have their own interface to extract, transform, and deliver industrial data from Modbus, OPC-UA, EIP/CIP, or MQTT devices.
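The sampling-interval point above can be shown numerically. In this sketch, a simulated one-hour pressure trace contains a short spike; polling every 30 minutes misses it entirely, while minute-level polling catches it. The values are illustrative, not real pump data:

```python
# Simulated one-hour pressure trace (kPa), one sample per minute,
# with a short spike at minute 17.
trace = [300.0] * 60
trace[17] = 410.0                      # brief transient

def max_seen(samples: list[float], every_n_minutes: int) -> float:
    """Highest value observed when polling every N minutes."""
    return max(samples[::every_n_minutes])

print(max_seen(trace, 30))  # coarse polling: misses the spike
print(max_seen(trace, 1))   # minute-level polling: catches it
```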

As Mike summed it up, “It’s finally simple to deploy a secure industrial network that makes all field data available to enterprise applications—in less time and using less bandwidth.”

Originally posted here.


By GE Digital

“The End of Cloud Computing.” “The Edge Will Eat The cloud.” “Edge Computing—The End of Cloud Computing as We Know It.”  

Such headlines grab attention, but don’t necessarily reflect reality—especially in Industrial Internet of Things (IoT) deployments. To be sure, edge computing is rapidly emerging as a powerful force in turning industrial machines into intelligent machines, but to paraphrase Mark Twain: “The reports of the death of cloud are greatly exaggerated.” 

The Tipping Point: Edge Computing Hits Mainstream

We’ve all heard the stats—billions and billions of IoT devices, generating inconceivable amounts of big data volumes, with trillions and trillions of U.S. dollars to be invested in IoT over the next several years. Why? Because industrials have squeezed every ounce of productivity and efficiency out of operations over the past couple of decades, and are now looking to digital strategies to improve production, performance, and profit. 

The Industrial Internet of Things (IIoT) represents a world where human intelligence and machine intelligence—what GE Digital calls minds and machines—connect to deliver new value for industrial companies. 

In this new landscape, organizations use data, advanced analytics, and machine learning to drive digital industrial transformation. This can lead to reduced maintenance costs, improved asset utilization, and new business model innovations that further monetize industrial machines and the data they create. 

Despite the “cloud is dead” headlines, GE believes the cloud is still very important in delivering on the promise of IIoT, powering compute-intense workloads to manage massive amounts of data generated by machines. However, there’s no question that edge computing is quickly becoming a critical factor in the total IIoT equation.

What is edge computing? 

The “edge” of a network generally refers to technology located adjacent to the machine you are analyzing or actuating, such as a gas turbine, a jet engine, or a magnetic resonance (MR) scanner. 

Until recently, edge computing has been limited to collecting, aggregating, and forwarding data to the cloud. But what if instead of collecting data for transmission to the cloud, industrial companies could turn massive amounts of data into actionable intelligence, available right at the edge? Now they can. 

This is not just valuable to industrial organizations, but absolutely essential.

Edge computing vs. Cloud computing 

Cloud and edge are not at war … it’s not an either/or scenario. Think of your two hands. You go about your day using one or the other or both depending on the task. The same is true in Industrial Internet workloads. If the left hand is edge computing and the right hand is cloud computing, there will be times when the left hand is dominant for a given task, instances where the right hand is dominant, and some cases where both hands are needed together. 

Scenarios in which edge computing takes the leading position include low-latency requirements, bandwidth constraints, real-time or near-real-time actuation, and intermittent or no connectivity. Scenarios where cloud plays the more prominent role include compute-heavy tasks, machine learning, digital twins, and cross-plant control. 

The point is you need both options working in tandem to provide design choices across edge to cloud that best meet business and operational goals.
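The "two hands" division of labor can be sketched as a simple placement rule. The criteria below paraphrase the scenarios just described, and the latency threshold is an assumption for illustration:

```python
def place_workload(latency_ms_budget: float, reliable_connectivity: bool,
                   compute_heavy: bool) -> str:
    """Toy placement rule for an Industrial Internet workload."""
    if latency_ms_budget < 100 or not reliable_connectivity:
        return "edge"       # real-time actuation, or intermittent links
    if compute_heavy:
        return "cloud"      # ML training, digital twins, cross-plant control
    return "either"         # both hands can do the job

print(place_workload(10, True, False))     # e.g. a valve actuation loop
print(place_workload(5000, True, True))    # e.g. fleet-wide model training
```

Real deployments weigh more factors (cost, data gravity, regulation), but the point stands: the decision is per-workload, not cloud-versus-edge once and for all.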

Edge Computing and Cloud Computing: Balance in Action 

Let’s look at a couple of illustrations. In an industrial context, examples of intelligent edge machines abound—pumps, motors, sensors, blowout preventers and more benefit from the growing capabilities of edge computing for real-time analytics and actuation. 

Take locomotives. These modern 200-ton digital machines carry more than 200 sensors and can process one billion instructions per second. Today, applications can not only collect data locally and respond to changes in that data, but also perform meaningful localized analytics. GE Transportation’s Evolution Series Tier 4 Locomotive uses on-board edge computing to analyze data and apply algorithms for running smarter and more efficiently. This improves operational costs, safety, and uptime. 

Sending all that data created by the locomotive to the cloud for processing, analyzing, and actuation isn’t useful, practical, or cost-effective. 

Now let’s switch gears (pun intended) and talk about another mode of transportation—trucking. Here’s an example where edge plays an important yet minor role, while cloud assumes the more dominant position. In this example, the company has 1,000 trucks under management. Sensors on each truck track the performance of vehicle systems such as the engine, transmission, electrical system, and battery. 

But in this case, instead of real-time analytics and actuation on the machine (like our locomotive example), the data is ingested, then stored and forwarded to the cloud, where time series data and analytics are used to track the performance of vehicle components. The fleet operator then leverages a fleet management solution for scheduled maintenance and cost analysis, gaining insights such as the cost over time per part type or the median cost over time. The company can use this data to improve uptime of its vehicles, lower repair costs, and improve the safe operation of the vehicle.
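A minimal sketch of that cloud-side analysis, computing the median cost per part type from hypothetical maintenance records (the records and costs are invented):

```python
from collections import defaultdict
from statistics import median

# Hypothetical maintenance records forwarded from the fleet:
# (part_type, repair_cost_usd) pairs accumulated over time.
records = [
    ("battery", 210.0), ("battery", 190.0), ("battery", 260.0),
    ("transmission", 2400.0), ("transmission", 1800.0),
]

by_part = defaultdict(list)
for part, cost in records:
    by_part[part].append(cost)

# Median cost per part type, one of the insights the text describes.
median_costs = {part: median(costs) for part, costs in by_part.items()}
print(median_costs)
```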

What’s next in edge computing 

While edge computing isn’t a new concept, innovation is now beginning to deliver on the promise—unlocking untapped value from the data being created by machines. 

GE has been at the forefront of bridging minds and machines. Predix Platform supports a consistent execution environment across cloud and edge devices, helping industrials achieve new levels of performance, production, and profit.

Originally posted here.


Computer vision is fundamental to capturing real-world data within the IoT. Arm technology provides a secure ecosystem for smart cameras in business, industrial, and home applications.

By Mohamed Awad, VP IoT & Embedded, Arm

Computer vision leverages artificial intelligence (AI) to enable devices such as smart cameras to interpret and understand what is happening in an image. Recreating a sensor as powerful as the human eye with technology opens up a wide and varied range of use cases for computers to perform tasks that previously required human sight – so it’s no wonder that computer vision is quickly becoming one of the most important ways to capture and act on real-world data within the Internet of Things (IoT).

Smart cameras now use computer vision in a range of business and industrial applications, from counting cars in parking lots to monitoring footfall in retail stores or spotting defects on a production line. And in the home, smart cameras can tell us when a package has been delivered, whether the dog escaped from the back yard or when our baby is awake.

Across the business and consumer worlds, the adoption of smart camera technology is growing exponentially. In its 2020 report “Cameras and Computing for Surveillance and Security”, market research and strategy consulting company Yole Développement estimates that for surveillance alone, there are approximately one billion cameras across the world. That number of installations is expected to double by 2024.

This technology features key advancements in security, heterogeneous computing, image processing and cloud services – enabling future computer vision products that are more capable than ever.

Smart camera security is top priority for computer vision

IoT security is a key priority and challenge for the technology industry. It’s important that all IoT devices are secure from exploitation by malicious actors, but it’s even more critical when that device captures and stores image data about people, places and high-value assets.

Unauthorized access to smart cameras tasked with watching over factories, hospitals, schools or homes would not only be a significant breach of privacy, it could also lead to untold harm—from plotting crimes to the leaking of confidential information. Compromising a smart camera could also provide a gateway, giving a malicious actor access to other devices within the network – from door, heating and lighting controls to control over an entire smart factory floor.

We need to be able to trust smart cameras to maintain security for us all, not open up new avenues for exploitation. Arm has embraced the importance of security in IoT devices for many years through its product portfolio offerings such as Arm TrustZone for both Cortex-A and Cortex-M.

In the future, smart camera chips based on the Armv9 architecture will add further security enhancements for computer vision products through the Arm Confidential Compute Architecture (CCA).

Further to this, Arm promotes common standards of security best practice such as PSA Certified and PARSEC. These are designed to ensure that all future smart camera deployments have built-in security, from the point the image sensor first records the scene to storage, whether that data is stored locally or in the cloud by using advanced security and data encryption techniques.

Endpoint AI powers computer vision in smart camera devices


The combination of image sensor technology and endpoint AI is enabling smart cameras to infer increasingly complex insights from the vast amounts of computer vision data they capture. New machine learning capabilities within smart camera devices meet a diverse range of use cases – such as detecting individual people or animals, recognizing specific objects and reading license plates. All of these applications for computer vision require ML algorithms running on the endpoint device itself, rather than sending data to the cloud for inference. It’s all about moving compute closer to data.

For example, a smart camera employed at a busy intersection could use computer vision to determine the number and type of vehicles waiting at a red signal at various hours throughout the day. By processing its own data and inferring meaning using ML, the smart camera could automatically adjust its signal timings to reduce congestion and limit the build-up of emissions, without human involvement.
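A toy sketch of that control loop, with entirely hypothetical timing constants; the vehicle count would come from the camera's on-device ML model:

```python
# Toy controller for the adaptive-signal example above. The vehicle count
# is supplied as an input (in practice, the output of on-camera inference).
# base, per_vehicle and max_green are hypothetical tuning parameters.

def green_time_seconds(vehicle_count: int, base: int = 20,
                       per_vehicle: int = 2, max_green: int = 90) -> int:
    """Lengthen the green phase as the detected queue grows, up to a cap."""
    return min(max_green, base + per_vehicle * vehicle_count)

# Quiet intersection vs. rush-hour queue:
light = green_time_seconds(3)
heavy = green_time_seconds(60)
```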

Arm’s investment in AI for applications in endpoints and beyond is demonstrated through its range of Ethos machine learning processors: highly scalable and efficient NPUs capable of supporting a range of 0.1 to 10 TOP/s through many-core technologies. Software also plays a vital role in ML and this is why Arm continues to support the open-source community through the Arm NN SDK and TensorFlow Lite for Microcontrollers (TFLM) open-source frameworks.

These frameworks run existing neural network workloads on power-efficient Arm Cortex-A CPUs, Mali GPUs and Ethos NPUs, supported by the Arm Compute Library and CMSIS-NN – a collection of low-level machine learning functions optimized for Cortex-A, Cortex-M and Mali GPU architectures.

The Armv9 architecture supports enhanced AI capabilities, too, by providing accessible vector arithmetic (individual arrays of data that can be computed in parallel) via Scalable Vector Extension 2 (SVE2). This enables scaling of the hardware vector length without having to rewrite or recompile code. In the future, extensions for matrix multiplication (a key element in enhancing ML) will push the AI envelope further.

Smart cameras connected in the cloud

Cloud and edge computing is also helping to expedite the adoption of smart cameras. Traditional CCTV architectures saw camera data stored on-premises via a Network Video Recorder (NVR) or a Digital Video Recorder (DVR). This model had numerous limitations, from the vast amount of storage required to the limited number of physical connections on each NVR.

Moving to a cloud-native model simplifies the rollout of smart cameras enormously: any number of cameras can be provisioned and managed via a configuration file downloaded to the device. There’s also a virtuous cycle at play: Data from smart cameras can now be used to train the models in the cloud for specific use cases so that cameras become even smarter. And the smarter they become, the less data they need to send upstream.
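As an illustration of that provisioning model (the field names here are hypothetical, not from any particular cloud service), a device might parse its downloaded configuration like this:

```python
# Hypothetical sketch of config-driven provisioning: the camera downloads a
# JSON file and applies it locally. Field names are illustrative only.

import json

def apply_config(raw_json: str) -> dict:
    """Parse a downloaded provisioning file into camera settings."""
    config = json.loads(raw_json)
    return {
        "camera_id": config["camera_id"],
        "upload_url": config["upload_url"],
        "ml_model": config.get("ml_model", "default-detector"),
        "fps": int(config.get("fps", 15)),
    }

downloaded = '{"camera_id": "cam-042", "upload_url": "https://example.invalid/ingest", "fps": 30}'
settings = apply_config(downloaded)
```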

The use of cloud computing also enables automation of processes via AI sensor fusion by combining computer vision data from multiple smart cameras. Taking our earlier example of the smart camera placed at a road intersection, cloud AI algorithms could combine data from multiple cameras to constantly adjust traffic light timings holistically across an entire city, keeping traffic moving.

Arm enables the required processing continuum from cloud to endpoint. Cortex-M microcontrollers and Cortex-A processors power smart cameras, with Cortex-A processors also powering edge gateways. Cloud and edge servers harness the capabilities of the Neoverse platform.

New hardware and software demands on smart cameras


The compute needs for computer vision devices continue to grow year over year, with ultra-high resolution video capture (8K 60fps) and 64-bit (Armv8-A) processing marking the current standard for high-end smart camera products.

As a result, the system-on-chip (SoC) within next-generation smart cameras will need to embrace heterogeneous architectures, combining CPUs, GPUs and NPUs alongside dedicated hardware for functions like computer vision, image processing, and video encoding and decoding.

Storage, too, is a key concern: While endpoint AI can reduce storage requirements by processing images locally on the camera, many use cases will require that data be retained somewhere for safety and security – whether on the device, in edge servers or in the cloud.

To ensure efficient storage of high-resolution computer vision data, newer video encoding and decoding standards such as H.265 and AV1 are becoming the norm.

New use cases driving continuous innovation

Overall, the demands from the new use cases are driving the need for continuous improvement in computing and imaging technologies across the board.

When we think about image-capturing devices such as CCTV cameras today, we should no longer imagine grainy images of barely recognizable faces passing by a camera. Advancements in computer vision – more efficient and powerful compute coupled with the intelligence of AI and machine learning – are making smart cameras not just image sensors but image interpreters. This bridge between the analog and digital worlds is opening up new classes of applications and use cases that were unimaginable a few years ago.

Originally posted here.

Read more…

TinyML focuses on optimizing machine learning (ML) workloads so that they can be processed on microcontrollers no bigger than a grain of rice and consuming only milliwatts of power.

By Arm Blueprint staff
 

TinyML gives tiny devices intelligence. We mean tiny in every sense of the word: as tiny as a grain of rice and consuming tiny amounts of power. Supported by Arm, Google, Qualcomm and others, tinyML has the potential to transform the Internet of Things (IoT), where billions of tiny devices, based on Arm chips, are already being used to provide greater insight and efficiency in sectors including consumer, medical, automotive and industrial.

Why target microcontrollers with tinyML?

Microcontrollers such as the Arm Cortex-M family are an ideal platform for ML because they’re already used everywhere. They perform real-time calculations quickly and efficiently, so they’re reliable and responsive, and because they use very little power, can be deployed in places where replacing the battery is difficult or inconvenient. Perhaps even more importantly, they’re cheap enough to be used just about anywhere. The market analyst IDC reports that 28.1 billion microcontrollers were sold in 2018, and forecasts that annual shipment volume will grow to 38.2 billion by 2023.

TinyML on microcontrollers gives us new techniques for analyzing and making sense of the massive amount of data generated by the IoT. In particular, deep learning methods can be used to process information and make sense of the data from sensors that do things like detect sounds, capture images, and track motion.

Advanced pattern recognition in a very compact format

Looking at the math involved in machine learning, data scientists found they could reduce complexity by making certain changes, such as replacing floating-point calculations with simple 8-bit operations. These changes created machine learning models that work much more efficiently and require far fewer processing and memory resources.
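The 8-bit idea can be sketched in a few lines. This is a simplified, hypothetical illustration of affine quantization, where each float weight maps to an int8 value via a scale and zero point; production toolchains such as TensorFlow Lite apply the same idea per-tensor or per-channel:

```python
# Simplified illustration of 8-bit affine quantization: each float weight
# is mapped to a signed 8-bit integer via a scale and zero point.
# (Assumes the weights are not all identical, so the range is non-zero.)

def quantize(weights, num_bits=8):
    """Map floats onto signed num_bits integers; return values, scale, zero point."""
    qmin, qmax = -(2 ** (num_bits - 1)), 2 ** (num_bits - 1) - 1
    lo, hi = min(weights), max(weights)
    scale = (hi - lo) / (qmax - qmin)
    zero_point = round(qmin - lo / scale)
    q = [max(qmin, min(qmax, round(w / scale) + zero_point)) for w in weights]
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    """Recover approximate floats from the quantized representation."""
    return [(v - zero_point) * scale for v in q]

weights = [-0.52, -0.05, 0.0, 0.31, 0.48]
q, scale, zp = quantize(weights)
approx = dequantize(q, scale, zp)
```

Each weight now occupies one byte instead of four, and the reconstruction error is bounded by the scale, which is why the accuracy loss is usually small.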

TinyML technology is evolving rapidly thanks to new technology and an engaged base of committed developers. Only a few years ago, we were celebrating our ability to run a speech-recognition model capable of waking the system if it detects certain words on a constrained Arm Cortex-M3 microcontroller using just 15 kilobytes (KB) of code and 22KB of data.

Since then, Arm has launched new machine learning (ML) processors, the Ethos-U55 and Ethos-U65 microNPUs, specifically designed to accelerate ML inference in embedded and IoT devices.

The Ethos-U55, combined with the AI-capable Cortex-M55 processor, will provide a significant uplift in ML performance and improvement in energy efficiency over the already impressive examples we are seeing today.

TinyML takes endpoint devices to the next level

The potential use cases of tinyML are almost unlimited. Developers are already working with tinyML to explore all sorts of new ideas: responsive traffic lights that change signaling to reduce congestion, industrial machines that can predict when they’ll need service, sensors that can monitor crops for the presence of damaging insects, in-store shelves that can request restocking when inventory gets low, healthcare monitors that track vitals while maintaining privacy. The list goes on.

TinyML can make endpoint devices more consistent and reliable, since there’s less need to rely on busy, crowded internet connections to send data back and forth to the cloud. Reducing or even eliminating interactions with the cloud has major benefits including reduced energy use, significantly reduced latency in processing data and security benefits, since data that doesn’t travel is far less exposed to attack. 

It’s worth noting that these tinyML models, which perform inference on the microcontroller, aren’t intended to replace the more sophisticated inference that currently happens in the cloud. What they do instead is bring specific capabilities down from the cloud to the endpoint device. That way, developers can save cloud interactions for if and when they’re needed. 

TinyML also gives developers a powerful new set of tools for solving problems. ML makes it possible to detect complex events that rule-based systems struggle to identify, so endpoint AI devices can start contributing in new ways. Also, since ML makes it possible to control devices with words or gestures, instead of buttons or a smartphone, endpoint devices can be built more rugged and deployable in more challenging operating environments. 

TinyML gaining momentum with an expanding ecosystem

Industry players have been quick to recognize the value of tinyML and have moved rapidly to create a supportive ecosystem. Developers at every level, from enthusiastic hobbyists to experienced professionals, can now access tools that make it easy to get started. All that’s needed is a laptop, an open-source software library and a USB cable to connect the laptop to one of several inexpensive development boards priced as low as a few dollars.

In fact, at the start of 2021, Raspberry Pi released its very first microcontroller board, one of the most affordable development boards on the market at just $4. Named Raspberry Pi Pico, it’s powered by the RP2040 SoC, a surprisingly powerful dual-core Arm Cortex-M0+ processor. The RP2040 MCU is able to run TensorFlow Lite Micro and we’re expecting to see a wide range of ML use cases for this board over the coming months.

Arm is a strong proponent of tinyML because our microcontroller architectures are so central to the IoT, and because we see the potential of on-device inference. Arm’s collaboration with Google is making it even easier for developers to deploy endpoint machine learning in power-conscious environments.

The combination of Arm CMSIS-NN libraries with Google’s TensorFlow Lite Micro (TFLM) framework allows data scientists and software developers to take advantage of Arm’s hardware optimizations without needing to become experts in embedded programming.

On top of this, Arm is investing in new tools derived from Keil MDK to help developers get from prototype to production when deploying ML applications.

TinyML would not be possible without a number of early influencers. Among them: Pete Warden, a “founding father” of tinyML and a technical lead of TensorFlow Lite Micro at Google; Arm Innovator Kwabena Agyeman, who developed OpenMV, a project dedicated to low-cost, extensible, Python-powered machine-vision modules that support machine learning algorithms; and Arm Innovator Daniel Situnayake, a founding tinyML engineer and developer at Edge Impulse, a company that offers a full tinyML pipeline covering data collection, model training and model optimization. Arm partners such as Cartesiam.ai – whose NanoEdge AI tool creates software models on the endpoint based on the sensor behavior observed in real conditions – have also been pushing the possibilities of tinyML to another level. 

Arm is also a partner of the TinyML Foundation, an open community that coordinates meet-ups to help people connect, share ideas, and get involved. There are many local tinyML meet-ups covering the UK, Israel and Seattle, to name a few, as well as a global series of tinyML Summits. For more information, visit the tinyML Foundation website.

Originally posted here.

Read more…

What is 5G NR (New Radio)?

by Gus Vos

Unless you have been living under a rock, you have been seeing and hearing a lot about 5G these days. In addition, if you are at all involved in Internet of Things (IoT) or other initiatives at your organization that use cellular networking technologies, you have also likely heard about 5G New Radio, otherwise known as 5G NR, the new 5G radio access technology specification.

However, all the jargon, hype, and sometimes contradictory statements made by solution providers, the media, and analysts regarding 5G and 5G NR can make it difficult to understand what 5G NR actually is, how it works, what its advantages are, to what extent it is different than other cellular radio access technologies, and perhaps most importantly, how your organization can use this new radio access technology.

In this blog, we will provide you with an overview on 5G NR, offering you answers to these and other basic 5G NR questions – with a particular focus on what these answers mean for those in the IoT industry. 

We can’t promise to make you a 5G NR expert with this blog – but we can say that if you are confused about 5G NR before reading it, you will come away afterward with a better understanding of what 5G NR is, how it works, and how it might transform your industry.

What is the NR in 5G NR?

As its name implies, 5G New Radio or 5G NR is the new radio access technology specification found in the 5G standard. 

Set by the 3rd Generation Partnership Project (3GPP) telecommunications standards group, the 5G NR specification defines how 5G NR edge devices (smart phones, embedded modules, routers, and gateways) and 5G NR network infrastructure (base stations, small cells, and other Radio Access Network equipment) wirelessly transmit data. To put it another way, 5G NR describes how 5G NR edge devices and 5G NR network infrastructure use radio waves to talk to each other. 

5G NR is a very important part of 5G. After all, it describes how 5G solutions will use radio waves to wirelessly transmit data faster and with less latency than previous radio access technology specifications. However, while 5G NR is a very important part of the new 5G standard, it does not encompass everything related to 5G. 

For example, 5G includes a new core network architecture standard (appropriately named 5G Core Network or 5GCN) that specifies the architecture of the network that collects, processes, and routes data from edge devices and then sends this data to the cloud, other edge devices, or elsewhere. The 5GCN will improve 5G networks’ operational capacity, efficiency, and performance.

However, 5GCN is not a radio access technology like 5G NR, but rather a core network technology. In fact, networks using the 5GCN core network will be able to work with previous types of radio access technologies – like LTE. 

Is 5G NR one of 5G’s most important new technological advancements? Yes. But it is not the only technological advancement to be introduced by 5G.  

How does 5G NR work?

Like all radio access technology specifications, the 5G NR specification describes how edge devices and network infrastructure transmit data to each other using electromagnetic radio waves. Depending on their frequency (how rapidly the waves oscillate, which determines their wavelength), these waves occupy different parts of the wireless spectrum.

Some of the waves that 5G NR uses have frequencies between 400 MHz and 6 GHz. These waves occupy what is called sub-6 spectrum (since their frequencies are all under 6 GHz).

This sub-6 spectrum is used by other cellular radio access technologies, like LTE, as well. In the past, using different cellular radio access technologies over the same spectrum would lead to unmanageable interference problems, with the different technologies’ radio waves interfering with each other. 

One of 5G NR’s many advantages is that it solves this problem using a technology called Dynamic Spectrum Sharing (DSS). DSS allows 5G NR signals to use the same band of spectrum as LTE and other cellular technologies, like LTE-M and NB-IoT. This allows 5G NR networks to be rolled out without shutting down LTE or other networks that support existing LTE smart phones or IoT devices. You can learn more about DSS, and how it speeds the rollout of 5G NR while also extending the life of IoT devices, here.

One of 5G NR’s other major advancements is that it does not just use waves in the sub-6 spectrum to transmit data. The 5G NR specification also specifies how edge devices and network infrastructure can use radio waves in bands between 24 GHz and 52 GHz to transmit data.

These millimeter wave (mmWave) bands greatly expand the amount of spectrum available for wireless data communications. The lack of spectrum capacity has been a problem in the past, as there is a limited number of bands of sub-6 spectrum available for organizations to use for cellular communications, and many of these bands are small. Lack of available capacity and narrow spectrum bands led to network congestion, which limits the amount of data that can be transmitted over networks that use sub-6 spectrum. 

mmWave opens up a massive amount of new wireless spectrum, as well as much broader bands of wireless spectrum for cellular data transmission. This additional spectrum and these broader spectrum bands increase the capacity (amount of data) that can be transmitted over these bands, enabling 5G NR mmWave devices to achieve data speeds that are four or more times faster than devices that use just sub-6 spectrum. 

The additional wireless capacity provided by mmWave also reduces latency (the time between when a device sends a signal and when it receives a response). By reducing latency from 10 milliseconds with sub-6 devices to 3-4 milliseconds or lower with 5G NR mmWave devices, 5G enables new industrial automation, autonomous vehicle and immersive gaming use cases, as well as Virtual Reality (VR), Augmented Reality (AR), and similar Extended Reality (XR) use cases, all of which require very low latency. 

On the other hand, these new mmWave devices and network infrastructure come with new technical requirements, as well as drawbacks associated with their use of mmWave spectrum. For example, mmWave devices use more power and generate more heat than sub-6 devices. In addition, mmWave signals have less range and do not penetrate walls and other physical objects as easily as sub-6 waves. 5G NR includes some technologies, such as beamforming and massive Multiple Input Multiple Output (MIMO) that lessen some of these range and obstacle penetration limitations – but they do not eliminate them. 

To learn more about the implications of 5G NR mmWave on the design of IoT and other products, read our blog, Seven Tips For Designing 5G NR mmWave Products.

In addition, there has been a lot written on these two different “flavors” (sub-6 and mmWave) of 5G NR. If you are interested in learning more about the differences between sub-6 5G NR and mmWave 5G NR, and how together they enable both evolutionary and revolutionary changes for Fixed Wireless Access (FWA), mobile broadband, IoT and other wireless applications, read our previous blog A Closer Look at the Five Waves of 5G.

What is the difference between 5G NR and LTE?

Though sub-6 and mmWave are very different, both types of 5G NR provide data transfer speed, latency, and other performance improvements compared to LTE, the previous radio access technology specification used for cellular communications. 

For example, outside of its use of mmWave, 5G NR features other technical advancements designed to improve network performance, including:

• Flexible numerology, which enables 5G NR network infrastructure to set the spacing between subcarriers in a band of wireless spectrum at 15, 30, 60, 120 and 240 kHz, rather than only use 15 kHz spacing, like LTE. This flexible numerology is what allows 5G NR to use mmWave spectrum in the first place. It also improves the performance of 5G NR devices that use higher sub-6 spectrum, such as 3.5 GHz C-Band spectrum, since the network can adjust the subcarrier spacing to meet the particular spectrum and use case requirements of the data it is transmitting. For example, when low latency is required, the network can use wider subcarrier spacing to help improve the latency of the transmission.
• Beamforming, in which massive MIMO (multiple-input and multiple-output) antenna technologies are used to focus wireless signals and then sweep them across areas until they make a strong connection. Beamforming helps extend the range of networks that use mmWave and higher sub-6 spectrum.  
• Selective Hybrid Automatic Repeat Request (HARQ), which allows 5G NR to break large data blocks into smaller blocks, so that when there is an error, the retransmission is smaller and results in higher data transfer speeds than LTE, which transfers data in larger blocks. 
• Faster Time Division Duplexing (TDD), which enables 5G NR networks to switch between uplink and downlink faster, reducing latency. 
• Pre-emptive scheduling, which lowers latency by allowing higher-priority data to overwrite or pre-empt lower-priority data, even if the lower-priority data is already being transmitted. 
• Shorter scheduling units that trim the minimum scheduling unit to just two symbols, improving latency.
• A new inactive state for devices. LTE devices had two states – idle and connected. 5G NR includes a new state – inactive – that reduces the time needed for an edge device to move in and out of its connected state (the state used for transmission), making the device more responsive. 
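To make the flexible numerology figures in the first bullet concrete: each numerology µ scales the 15 kHz subcarrier spacing by a factor of 2^µ, and the OFDM symbol duration shrinks in proportion (per 3GPP TS 38.211; the figures below are the useful symbol time, ignoring the cyclic prefix), which is one reason wider spacing helps latency:

```python
# 5G NR numerology mu sets the subcarrier spacing; wider spacing means
# shorter OFDM symbols. Spacings per 3GPP TS 38.211; durations are the
# useful symbol time only (cyclic prefix overhead is ignored here).

def subcarrier_spacing_khz(mu: int) -> int:
    """15 kHz scaled by 2**mu: 15, 30, 60, 120, 240 kHz for mu = 0..4."""
    return 15 * 2 ** mu

def symbol_duration_us(mu: int) -> float:
    """Useful OFDM symbol duration is the reciprocal of the spacing."""
    return 1_000.0 / subcarrier_spacing_khz(mu)

for mu in range(5):
    print(f"mu={mu}: {subcarrier_spacing_khz(mu)} kHz spacing, "
          f"{symbol_duration_us(mu):.2f} us per symbol")
```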

These and the other technical advancements made to 5G NR are complicated, but the result of these advancements is pretty simple – faster data speeds, lower latency, more spectrum agility, and otherwise better performance than LTE. 

Are LPWA radio access technology specifications, like NB-IoT and LTE-M, supported by 5G?

Though 5G features a new radio access technology, 5G NR, 5G supports other radio access technologies as well. This includes the Low Power Wide Area (LPWA) technologies Narrowband IoT (NB-IoT) and Long Term Evolution for Machines (LTE-M). In fact, these LPWA standards are the standards that 5G uses to address one of its three main use cases – Massive Machine-Type Communications (mMTC). 

Improvements have been and continue to be made to these 5G LPWA standards to address these mMTC use cases – improvements that further lower the cost of LPWA devices, reduce these devices’ power usage, and enable an even larger number of LPWA devices to connect to the network in a given area.

What are the use cases for 5G NR and 5G LPWA Radio Access Technologies?

Today, LTE supports three basic use cases:

• Voice: People today can use LTE to talk to each other using mobile devices. 
• Mobile broadband (MBB): People can use smartphones, tablets, mobile and other edge devices to view videos, play games, and use other applications that require broadband data speeds.
• IoT: People can use cellular modules, routers, and other gateways embedded in practically anything – a smart speaker, a dog collar, a commercial washing machine, a safety shoe, an industrial air purifier, a liquid fertilizer storage tank – to transmit data from the thing to the cloud or a private data center and back via the internet.  

5G NR, as well as 5G’s LPWA radio access technologies (NB-IoT and LTE-M) will continue to support these existing IoT and voice use cases. 

However, 5G also expands on the MBB use case with a new Enhanced Mobile Broadband (eMBB) use case. These eMBB use cases leverage 5G NR’s higher peak and average speeds and lower latency to enable smart phones and other devices to support high-definition cloud-based immersive video games, high quality video calls and new VR, AR, and other XR applications.

In addition, 5G NR also supports a new use case, called Ultra-Reliable, Low-Latency Communications (URLLC). 5G NR enables devices to create connections that are ultra-reliable with very low latency. With these new 5G NR capabilities, as well as 5G NR’s support for very fast handoffs and high mobility, organizations can now deploy new factory automation, smart city 2.0 and other next generation Industrial IoT (IIoT) applications, as well as Vehicle-to-everything (V2X) applications, such as autonomous vehicles. 

As we mentioned above, 5G will also support the new mMTC use case, which represents an enhancement of the existing IoT use case. However, in the case of mMTC, new use cases will be enabled by improvements to LTE-M and NB-IoT radio access technology standards, not 5G NR. Examples of these types of new mMTC use cases include large-scale deployments of small, low cost edge devices (like sensors) for smart city, smart logistics, smart grid, and similar applications.

But this is not all. 3GPP is looking at additional new use cases (and new technologies for these use cases), as discussed in this recent blog on Release 17 of the 5G standard. One of these new technologies is a new Reduced Capability (RedCap) device – sometimes referred to as NR Light – for IoT or MTC use cases that require faster data speeds than LPWA devices can provide, but also need devices that are less expensive than the 5G NR devices being deployed today.

3GPP is also examining standard changes to NR, LTE-M, and NB-IoT in 5G Release 17 that would make it possible for satellites to use these technologies for Non-Terrestrial Network (NTN) communications. This new NTN feature would help enable the deployment of satellites able to provide NR, LTE-M, and NB-IoT coverage in very remote areas, far away from cellular base stations.

What should you look for in a 5G NR module, router or gateway solution?

While all 5G NR edge devices use the 5G NR technology specification, they are not all created equal. In fact, the flexibility, performance, quality, security, and other capabilities of a 5G NR edge device can make the difference between a successful 5G NR application rollout and a failed one. 

As they evaluate 5G NR edge devices for their application, organizations should ask themselves the following questions:

• Is the edge device multi-mode? 
While Mobile Network Operators (MNOs) are rapidly expanding their 5G NR networks, there are still many areas where 5G NR coverage is not available. Multi-mode edge devices that can support LTE, or even 3G, help ensure that wherever the edge device is deployed, it will be able to connect to an MNO’s network – even if this connection does not provide the data speed, latency, or other performance needed to maximize the value of the 5G NR application. 

In addition, many MNOs are rolling out non-standalone (NSA) 5G NR networks at first. These NSA 5G NR networks need an LTE connection in addition to a 5G NR connection to transmit data from and to 5G NR devices. If your edge device does not include support for LTE, it will not be able to use 5G NR on these NSA networks. 

• How secure are the edge devices? 
Data is valuable and sensitive – and the data transmitted by 5G NR devices is no different. To limit the risk that this data is exposed, altered, or destroyed, organizations need to adopt a Defense in Depth approach to 5G NR cybersecurity, with layers of security implemented at the cloud, network, and edge device levels. 

At the edge device level, organizations should ensure their devices have security built-in with features such as HTTPS, secure socket, secure boot, and free unlimited firmware over-the-air (FOTA) updates. 

Organizations will also want to use edge devices from trustworthy companies that are headquartered in countries that have strict laws in place to protect customer data. In doing so you will ensure these companies are committed to working with you to prevent state or other malicious actors from gaining access to your 5G NR data.

• Are the 5G NR devices future-proof? 
Over time, organizations are likely to want to upgrade their applications. In addition, the 5G NR specification is not set in stone, and updates to it are made periodically. Organizations will want to ensure their 5G NR edge devices are futureproof, with capabilities that include the ability to update them with new firmware over the air, so they can upgrade their applications and take advantage of new 5G NR capabilities in the future. 

• Can the 5G NR device do edge processing? 
While 5G NR increases the amount of data that can be transmitted over cellular wireless networks, in many cases organizations will want to filter, prioritize, or otherwise process some of their 5G NR application’s data at the edge. Edge processing can lower data transmission costs, improve application performance, and reduce devices’ energy use – so look for 5G NR edge devices that make on-device data processing easy. 
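A minimal sketch of this kind of edge-side filtering (the 0.5-unit threshold is hypothetical): only readings that changed meaningfully since the last transmitted value are forwarded, shrinking the volume of data sent upstream.

```python
# Edge-side filtering sketch: forward a sensor reading only when it differs
# from the last transmitted value by at least a (hypothetical) threshold,
# reducing the amount of data sent over the cellular link.

def filter_readings(readings, threshold=0.5):
    """Keep a reading only if it differs from the last kept one by >= threshold."""
    kept = []
    last = None
    for value in readings:
        if last is None or abs(value - last) >= threshold:
            kept.append(value)
            last = value
    return kept

samples = [20.0, 20.1, 20.2, 21.0, 21.1, 23.5]
to_send = filter_readings(samples)  # only the meaningful changes
```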

Originally posted here.

Read more…

WEBINAR SERIES:
 
Fast and Fearless - The Future of IoT Software Development

SUMMARY

The IoT is transforming the software landscape. What was once a relatively straightforward embedded software stack has been revolutionized by the IoT: developers now juggle specialized workloads, security, machine learning, real-time connectivity, managing devices in the field – the list goes on.

How can our industry help developers prototype ‘fearlessly’, with tools and platforms that let them navigate the IoT’s varied components? How can developers move to production quickly, capitalizing on innovation opportunities in emerging IoT markets? 

This webinar series will take you through the fundamental steps, tools and opportunities for simplifying IoT development. Each webinar will be a panel discussion with industry experts who will share their experience and development tips on the below topics.

 

Part One of Four: The IoT Software Developer Experience

Date: Tuesday, May 11, 2021

Webinar Recording Available Here
 

Part Two of Four: AI and IoT Innovation

Date: Tuesday, June 29, 2021

Time: 8:00 am PDT/ 3:00 pm UTC

Duration: 60 minutes

Click Here to Register for Part Two
 

Part Three of Four: Making the Most of IoT Connectivity

Date: Tuesday, September 28, 2021

Time: 8:00 am PDT/ 3:00 pm UTC

Duration: 60 minutes

Click Here to Register for Part Three
 

Part Four of Four: IoT Security Solidified and Simplified

Date: Tuesday, November 16, 2021

Time: 8:00 am PDT/ 3:00 pm UTC

Duration: 60 minutes

Click Here to Register for Part Four
 
Read more…

It’s no secret that I love just about everything to do with what we now refer to as STEM; that is, science, technology, engineering, and math. When I was a kid, my parents gifted me with what was, at that time, a state-of-the-art educational electronics kit containing a collection of basic components (resistors, capacitors, inductors), a teensy loudspeaker, some small (6-volt) incandescent bulbs… that sort of thing. Everything was connected using a patch-board of springs (a bit like the 130-in-1 Electronic Playground from SparkFun).

The funny thing is, now that I come to look back on it, most electronics systems in the real world at that time weren’t all that much more sophisticated than my kit. In our house, for example, we had one small vacuum tube-based black-and-white television in the family room and one rotary-dial telephone that was hardwired to the wall in the hallway. We never even dreamed of color televisions and I would have laughed my socks off if you’d told me that the day would come when we’d have high-definition color televisions in almost every room in the house, smart phones so small you could carry them in your pocket and use them to take photos and videos and make calls around the world, smart devices that you could control with your voice and that would speak back to you… the list goes on.

Now, of course, we have the Internet of Things (IoT), which boasts more “things” than you can shake a stick at (according to Statista, there were ~22 billion IoT devices in 2018, there will be ~38 billion in 2025, and there are expected to be ~50 billion by 2030).

One of the decisions required when embarking on an IoT deployment pertains to connectivity. Some devices are hardwired, many use Bluetooth or Wi-Fi or some form of wireless mesh, and many more employ cellular technology as their connectivity solution of choice.

In order to connect to a cellular network, the IoT device must include some form of subscriber identity module (SIM). Over the years, the original SIMs (which originated circa 1991) evolved in various ways. A few years ago, the industry saw the introduction of embedded SIM (eSIM) technology. Now, the next-generation integrated SIM (iSIM) is poised to shake the IoT world once more.

“But what is iSIM?” I hear you cry. Well, I’m glad you asked because, by some strange quirk of fate, I’ve been invited to host a panel discussion — Accelerating Innovation on the IoT Edge with Integrated SIM (iSIM) — which is being held under the august auspices of IotCentral.io.

In this webinar — which will be held on Thursday 20 May 2021 from 10:00 a.m. to 11:00 a.m. CDT — I will be joined by four industry gurus to discuss how cellular IoT is changing and how to navigate through the cornucopia of SIM, eSIM, and iSIM options to decide what’s best for your product. As part of this, we will see quick-start tools and cool demos that can move you from concept to product. Also (and of particular interest to your humble narrator), we will experience the supercharge potential of TinyML and iSIM.


Panel members Loic Bonvarlet (upper left), Brian Partridge (upper right),

Dr. Juan Nogueira (lower left), and Jan Jongboom (bottom right)

The gurus in question (and whom I will be questioning) are Loic Bonvarlet, VP Product and Marketing at Kigen; Brian Partridge, Research Director for Infrastructure and Cloud Technologies at 451 Research; Dr. Juan Nogueira, Senior Director, Connectivity, Global Technology Team at FLEX; and Jan Jongboom, CTO and Co-Founder at Edge Impulse.

So, what say you? Dare I hope that we will have the pleasure of your company and that you will be able to join us to (a) tease your auditory input systems with our discussions and (b) join our question-and-answer free-for-all at the end?

 

Video recording available:

Read more…

By Sachin Kotasthane

In his book, 21 Lessons for the 21st Century, the historian Yuval Noah Harari highlights the complex challenges mankind will face on account of technological disruption intertwined with issues such as nationalism, religion, culture, and calamities. In the current industrial world hit by a worldwide pandemic, we see this complexity translate into technology, systems, organizations, and the workplace.

While in my previous article, Humane IIoT, I discussed the people-centric strategies that enterprises need to adopt while onboarding industrial IoT initiatives in the workforce, in this article I will share thoughts on how new-age technologies such as AI, ML, big data, and of course industrial IoT can be used to manage complex workforce problems in a factory, thereby changing the way people work and interact, especially in this COVID-stricken world.

Workforce-related problems in production can be categorized into:

  1. Time complexity
  2. Effort complexity
  3. Behavioral complexity

Problems in any of the above categories have a significant impact on the workforce, with a detrimental effect on the outcome for the product or the organization. Their complexity stems from the fact that solutions cannot be found using engineering or technology fixes alone: there is no single root cause, but rather a combination of factors and scenarios. Let us, therefore, explore a few and seek probable workforce solutions.

Figure 1: Workforce Challenges and Proposed Strategies in Production

  1. Addressing Time Complexity

    Any workforce-related issue that has a detrimental effect on the operational time, due to contributing factors from different factory systems and processes, can be classified as a time complex problem.

    Though classical paper-based schedules, lists, and punch sheets have largely been replaced with IT-systems such as MES, APS, and SRM, the increasing demands for flexibility in manufacturing operations and trends such as batch-size-one, warrant the need for new methodologies to solve these complex problems.

    • Worker attendance

      Anyone who has experienced, at close quarters, a typical day in the life of a factory supervisor will be conversant with the anxiety that comes just before the start of a production shift. Not knowing who will report absent until just before the shift starts is one complex issue every line manager would want addressed. While planned absenteeism can be handled to some degree, it is the last-minute sick or emergency-pager text messages, or the transport delays, that make the planning of daily production complex.

      What if there were a solution that gave a near-accurate count of the confirmed hands for the shift at least half an hour to an hour in advance? It turns out that organizations are experimenting with a combination of GPS, RFID, and employee tracking that interacts with resource planning systems to automate the shift-planning activity.

      While some legal and privacy issues still need to be addressed, it would not be long before we see people being assigned to workplaces, even before they enter the factory floor.

      While making sure every line manager has accurate information about the confirmed hands for the shift, it is equally important that the health and well-being of employees are monitored during this pandemic. Technologies such as radar and millimeter-wave sensors enable live tracking of workers around the shop-floor and help ensure that social distancing norms are well-observed.
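As a hedged sketch of how such a shift-planning aid might work, the toy Python below splits a roster into confirmed hands and at-risk workers using a feed of ETA events (as might come from badge swipes, GPS, or a transport feed). Every name and data shape here is an illustrative assumption, not a real resource-planning API.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

# Hypothetical event record; a real system would feed these from
# GPS geofencing, RFID badge swipes, or transport telemetry.
@dataclass
class CheckinEvent:
    worker_id: str   # badge / employee ID
    eta: datetime    # estimated arrival time

def confirmed_hands(roster, events, shift_start):
    """Split the roster into workers confirmed to arrive by shift
    start and those at risk (late ETA or no signal at all)."""
    eta_by_worker = {e.worker_id: e.eta for e in events}
    confirmed = [w for w in roster
                 if w in eta_by_worker and eta_by_worker[w] <= shift_start]
    at_risk = [w for w in roster if w not in confirmed]
    return confirmed, at_risk

shift = datetime(2021, 6, 1, 6, 0)
roster = ["w1", "w2", "w3"]
events = [
    CheckinEvent("w1", shift - timedelta(minutes=40)),  # on time
    CheckinEvent("w2", shift + timedelta(minutes=15)),  # transport delay
]                                                       # w3: no signal yet
confirmed, at_risk = confirmed_hands(roster, events, shift)
# confirmed -> ["w1"]; at_risk -> ["w2", "w3"]
```

A production version would feed `at_risk` back into the planning system so replacements can be lined up before the shift starts.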

    • Resource mapping

      While resource skill-mapping and certification are mostly HR prerogatives, not having the right resource at the workstation during exigencies such as absenteeism or extra workload is a complex problem. Precious time is lost in locating such resources, or worse still, millions are spent on overtime.

      What if there were a tool that analyzed the current workload for a resource with the identified skillset code(s) and gave an accurate estimate of the resource’s availability? This could further be used by shop managers to plan manpower for a shift, keeping them as lean as possible.

      Today, IT teams of OEMs are seen working with software vendors to build such analytical tools that consume data from disparate systems—such as production work orders from MES and swiping details from time systems—to create real-time job profiles. These results are fed to the HR systems to give managers the insights needed to make resource decisions within minutes.
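A minimal sketch of such an availability tool, assuming invented shapes for the HR skill-code extract and the booked-hours figure derived from MES work orders and time-system swipes (real schemas will differ):

```python
def free_resources(skills, workload_hours, needed_skill, shift_hours=8.0):
    """Workers holding `needed_skill`, sorted by remaining capacity:
    shift length minus hours already booked against work orders."""
    candidates = []
    for worker, skill_codes in skills.items():
        remaining = shift_hours - workload_hours.get(worker, 0.0)
        if needed_skill in skill_codes and remaining > 0:
            candidates.append((worker, remaining))
    return sorted(candidates, key=lambda wr: wr[1], reverse=True)

# Invented extracts: skill codes per worker, plus hours already booked.
skills = {"anna": {"WELD-2"}, "ben": {"WELD-2", "PAINT-1"}, "cara": {"PAINT-1"}}
booked = {"anna": 7.5, "ben": 3.0, "cara": 6.0}
print(free_resources(skills, booked, "WELD-2"))
# -> [('ben', 5.0), ('anna', 0.5)]
```

The point of the design is the join: neither system alone can answer "who with skill X is free right now," but the merged view can, within minutes rather than phone calls.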

  2. Addressing Effort Complexity

    Just as time complexities result in increased production time, problems in this category increase the effort the workforce must expend to complete the same quantity of work. As effort is proportionate to fatigue and the long-term well-being of the workforce, solutions that reduce effort would be appreciated. Complexity arises when organizations try to create method out of madness from a variety of factors such as changing workforce profiles, production sequences, logistical and process constraints, and demand fluctuations.

    Thankfully, solutions for this category of problems can be found in new technologies that augment existing systems with insights and predictions, the results of which can reduce effort and channel it more productively. Given the demand fluctuations of the current pandemic, real-time operational visibility coupled with advanced analytics will help ensure that shift production targets are met.

    • Intelligent exoskeletons

      Exoskeletons, as we know, are powered bodysuits designed to safeguard and support the user in performing tasks, while increasing overall human efficiency to do the respective tasks. These are deployed in strain-inducing postures or to lift objects that would otherwise be tiring after a few repetitions. Exoskeletons are the new-age answer to reducing user fatigue in areas requiring human skill and dexterity, which otherwise would require a complex robot and cost a bomb.

      However, the complexity that mars exoskeleton users is making the same suit adaptable for a variety of postures, user body types, and jobs at the same workstation. It would help if the exoskeleton could sense the user, set the posture, and adapt itself to the next operation automatically.

      Taking a leaf out of Marvel’s Iron Man, whose suit, controlled by JARVIS, complements his posture, manufacturers can now hope to create intelligent exoskeletons that are always connected to factory systems and user profiles. These suits will adapt and respond to assistive needs without any intervention, freeing users to focus completely on the main job at hand.

      Given the ongoing COVID situation, the lives of workers and management would be safer if these suits were equipped with sensors and technologies such as radar or millimeter wave to help observe social distancing, measure body temperature, etc.

    • Highlighting likely deviations

      The world over, quality teams on factory floors work with checklists that the quality inspector verifies for every product that arrives at the inspection station. While such repetitive tasks are best suited to robots, when humans execute them, especially tasks involving the visual, audio, touch, and olfactory senses, mistakes and misses are bound to occur. This results in costly rework and recalls.

      Manufacturers have tried to address this complexity by carrying out rotation of manpower. But this, too, has met with limited success, given the available manpower and ever-increasing workloads.

      Fortunately, predictive quality integrated with feed-forward techniques and smart visual tracking can be used to highlight the area or zone on the product that is prone to quality slips, based on data captured from previous operations. The inspector can then be guided to pay more attention to these areas in the checklist.
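One way to sketch the feed-forward idea: rank product zones by defect frequency recorded at upstream stations and surface the top offenders on the inspector's checklist. The `(zone, operation)` log shape below is an assumption for illustration, not a real quality-system schema.

```python
from collections import Counter

def zones_to_watch(defect_log, top_n=2):
    """defect_log: (zone, upstream_operation) tuples recorded at earlier
    stations; returns the top_n zones most prone to quality slips."""
    counts = Counter(zone for zone, _operation in defect_log)
    return [zone for zone, _count in counts.most_common(top_n)]

# Mocked history fed forward from weld, paint, and seal stations.
log = [
    ("door-left", "weld"), ("door-left", "paint"), ("door-left", "weld"),
    ("roof", "weld"), ("roof", "seal"),
    ("trunk", "seal"),
]
print(zones_to_watch(log))   # -> ['door-left', 'roof']
```

A real deployment would weight recent defects more heavily and key the log to the specific unit's upstream measurements, but the checklist-highlighting principle is the same.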

  3. Addressing Behavioral Complexity

    Problems of this category usually manifest as a quality issue, but the root cause can often be traced to the workforce behavior or profile. Traditionally, organizations have addressed such problems through experienced supervisors, who as people managers were expected to read these signs, anticipate and align the manpower.

    However, with constantly changing manpower and product variants, these are now complex new-age problems requiring new-age solutions.

    • Heat-mapping workload

      Time and motion studies at the workplace map user movements around the machine against the time each activity takes, matching the available cycle time either by redistributing work or by increasing the manpower at that station. Time-consuming and cumbersome as this is, the complexity increases when workload balancing must be done for teams working on a single product at the workstation. Movements of multiple resources during different sequences are difficult to track, and different users cannot be expected to follow the same footsteps every time.

      Solving this issue calls for a solution that monitors human motion unobtrusively, links it to the product work content at the workstation, and generates recommendations to balance the workload and even out the ‘congestion.’ New industrial applications such as short-range radar and visual feeds can be used to create heat maps of the workforce as they work on the product. These can be superimposed on the digital twin of the process to identify the zone where there is ‘congestion,’ and the result fed to the line-planning function to implement corrective measures such as work distribution or partial outsourcing of the operation.
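The heat-mapping step can be sketched as simple grid bucketing of position samples; here the radar or vision feed is mocked as plain `(x, y)` tuples in metres, an assumption for illustration.

```python
from collections import Counter

def congestion_map(samples, cell_size=1.0):
    """Bucket (x, y) position samples (metres) into grid cells and
    count dwell samples per cell, keyed by (col, row)."""
    return Counter((int(x // cell_size), int(y // cell_size))
                   for x, y in samples)

def hottest_cell(samples, cell_size=1.0):
    """The grid cell with the most dwell samples, i.e. the congestion
    zone to hand to line planning; None if there is no data."""
    heat = congestion_map(samples, cell_size)
    return heat.most_common(1)[0][0] if heat else None

# Mocked feed: worker positions sampled around a single workstation.
samples = [(0.2, 0.3), (0.4, 0.8), (0.9, 0.1), (2.5, 2.5), (2.6, 2.4)]
print(hottest_cell(samples))   # -> (0, 0)
```

Superimposing the resulting counts on the station's digital twin would then show which physical zone the 'congestion' corresponds to.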

    • Aging workforce (loss of tribal knowledge)

      With new technology coming to the shop-floor, skills of the current workforce get outdated quickly. Also, with any new hire comes the critical task of training and knowledge sharing from experienced hands. As organizations already face a shortage of manpower, releasing more hands to impart training to a larger workforce audience, possibly at different locations, becomes an even more daunting task.

      Fully realizing the difficulties and reluctance to document, organizations are increasingly adopting AR-based workforce trainings that map to relevant learning and memory needs. These AR solutions capture the minutest of the actions executed by the expert on the shop-floor and can be played back by the novice in-situ as a step-by-step guide. Such tools simplify the knowledge transfer process and also increase worker productivity while reducing costs.

      Further, in extraordinary situations such as the one we face at present, technologies such as AR offer effective, personalized support to field personnel without the need to fly specialists to multiple sites. This keeps the specialists safe, yet still accessible.

Key takeaways and Actionable Insights

The shape of the future workforce will be the result of complex, changing, and competing forces. Technology, globalization, demographics, social values, and the changing personal expectations of the workforce will continue to transform and disrupt the way businesses operate, increasing complexity and radically changing the where, when, and how of future work. While the need to constantly reskill and upskill the workforce will be humongous, new-age techniques and technologies that enhance the effectiveness and efficiency of the existing workforce will take the spotlight.


Figure 2: The Future IIoT Workforce

Organizations will increasingly be required to:

  1. Deploy data farming to dive deep and extract vast amounts of information and process insights embedded in production systems. Tapping into large reservoirs of ‘tribal knowledge’ and digitizing it for ingestion to data lakes is another task that organizations will have to consider.
  2. Augment existing operations systems such as SCADA, DCS, MES, and CMMS with new digital platforms, AI, AR/VR, big data, and machine learning to underpin and grow the world of work. While there will be no dearth of resources skilled in one or more of the new technologies, organizations will need to ‘acqui-hire’ talent and intellectual property, or engage specialists, to integrate these with existing systems and gain meaningful, actionable insights.
  3. Address privacy and data security concerns of the workforce, through the smart use of technologies such as radar and video feeds.

Nonetheless, digital enablement will need to be optimally used to tackle the new normal that the COVID pandemic has set forth in manufacturing—fluctuating demands, modular and flexible assembly lines, reduced workforce, etc.

Originally posted here.

Read more…