
How IoT Tools Are Mining Manufacturing's Gold

IIoT will allow assets to perform more cost-effectively – so the better the data, the greater the savings.

Ricardo Buranello

The IoT is enabling advances across multiple market sectors, but it is the Industrial IoT (IIoT) that is having the most impact. It is already the biggest IoT vertical and covers multiple types of projects across industry, from simple data collection to more complex projects incorporating just-in-time manufacturing and predictive quality control.

The biggest benefit of the IIoT is how it is creating innovative solutions to help manufacturers achieve their business objectives by delivering better services and products to their customers. There are three principal reasons for implementing an IIoT application – to make money, to save money, or to stay compliant – and sometimes all three can be delivered. Certainly, at Telit, we would not counsel anyone to consider investing in an IIoT project unless it meets one or more of those three objectives.

Data is the New Gold

A properly implemented IIoT should enable manufacturers to collect data from every step in the process. Every machine can and should produce data, and the processing of that data should deliver invaluable information that helps create more efficient processes and factories. Look back 10-15 years, and there was a big shift in production, with manufacturing operations leaving the U.S. and Europe for China because labor cost was the most important consideration.

The IIoT is set to have the same effect as labor costs; data is the new gold. Information from the IIoT will make manufacturers’ assets perform in a more cost-effective manner – so the better the data, the greater the improvements.

Let’s look at some examples of the transformational effect of the IIoT. One of the largest car vendors in the world implemented a replacement IIoT solution that significantly reduced latency in their systems. This reduction was so significant that in just one plant it created 3,000 more minutes of uptime. The plant produces at a rate of about $30,000 per minute, so that’s an extra $90 million.
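The arithmetic behind that headline figure is worth making explicit; a quick back-of-the-envelope check using the numbers quoted above:

```python
# Back-of-the-envelope check of the uptime figure quoted above.
extra_uptime_minutes = 3_000    # uptime recovered by the latency reduction
revenue_per_minute = 30_000     # plant output, in USD per minute
extra_revenue = extra_uptime_minutes * revenue_per_minute
print(f"${extra_revenue:,}")    # $90,000,000
```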

Additionally, integrating the solution operator by operator, line by line and shift by shift, there is now a continuous link between what is being produced and how it is being produced, increasing productivity and quality control. Based on the data gathered, the manufacturer achieved significant reductions in both set-up time and line downtime.

Global names like Mitsubishi and Honda rely on the IIoT to remotely connect sophisticated machinery with technicians and engineers who constantly check manufacturing performance levels, ensure preventative maintenance, and quickly react to any issues that may affect production. Chip giants utilize the IIoT to maintain top-level cybersecurity to protect their IP from hackers. Multinational pharmaceutical companies use the IIoT to audit every step in the manufacture of their products to ensure full compliance with regulations and laws.

The IIoT isn’t limited to high-end manufacturing. Anything can be connected. In Brazil, the IIoT is used to transmit data about the condition of the sewer network and to send alerts to maintenance crews when cleaning is required. The IIoT can also be used to explain unusual behavior.

At a manufacturing plant in Mexico, an application measuring the productivity of each machine showed that one machine was producing less at night than during the morning and afternoon shifts. Upon investigation, it was revealed that the operator on the evening shift was regularly leaving the machine – to chat with his girlfriend.

Manufacturers are embracing the technology and investing, and without needing to hire an army of software engineers to rewrite protocols. There are experts in the IoT space that can deliver guaranteed connectivity across all systems – reducing the implementation time to a couple of days.

The IIoT is changing the face of manufacturing, from predictive maintenance and supply chain management to condition monitoring. Yet only a fraction of the market potential has been explored so far. If you look at the Fortune 500, there isn’t one company that doesn’t have an IIoT application, but in most the technology is yet to permeate the whole organization.

There are huge untapped possibilities, and work to be done to achieve the true revolution that the IIoT promises. This applies not only to the actual manufacturing processes, but throughout the supply chain, leveraging connectivity for better traceability and quality control. The IIoT can, and will, touch, impact, and improve every step.

 

Ricardo Stefanato Buranello is the Global VP of IoT Factory Solutions at Telit and has over 14 years of experience in the M2M/IoT industry. Buranello is responsible for Telit’s global factory solutions business, a leading provider of industrial solutions for remote connectivity, edge logic automation, and OT/IT integration.

 

Read more…

By Akhileshsingh Saithwar

LLDP (Link Layer Discovery Protocol) is a link-layer protocol used by network devices to identify their neighbors and advertise their capabilities.

If you want to integrate the LLDP protocol into your Linux/embedded system, there are two main open-source implementations: lldpd and openlldp. When I needed to integrate LLDP into my network device, I studied both codebases. I am writing this article in the hope that it will be useful for others who also want to use an open-source LLDP implementation in their systems or network devices.
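For context on what both implementations actually exchange on the wire: an LLDPDU is a sequence of TLVs (type-length-value fields), each with a 7-bit type and a 9-bit length. A minimal, illustrative sketch of the encoding in Python – this is not code from either project, and the MAC address and interface name are made up:

```python
import struct

def encode_tlv(tlv_type: int, value: bytes) -> bytes:
    """Encode one LLDP TLV: a 7-bit type and 9-bit length header, then the value."""
    header = (tlv_type << 9) | len(value)
    return struct.pack("!H", header) + value

# The three mandatory TLVs that open every LLDPDU, plus the End-of-LLDPDU TLV.
chassis_id = encode_tlv(1, b"\x04" + bytes.fromhex("001122334455"))  # subtype 4 = MAC address
port_id    = encode_tlv(2, b"\x05" + b"eth0")                        # subtype 5 = interface name
ttl        = encode_tlv(3, struct.pack("!H", 120))                   # time-to-live in seconds
end        = encode_tlv(0, b"")

lldpdu = chassis_id + port_id + ttl + end
```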

Below are the key points that should be considered when selecting an LLDP open-source implementation.

1. License

The license is an important point to consider when you want to integrate open-source code into your application. lldpd is published under the ISC License, whereas openlldp is published under the GPL-2.0 License. The key difference is that the ISC License is more permissive than GPL-2.0.

If you use GPL-2.0-licensed code in your application, you need to publish your changes back to the community. With the ISC License, this is not required. Note that a full treatment of licensing requirements is beyond the scope of this article – please understand each license before using it in your project.

2. Active Community Support

When choosing open-source code, we should also make sure that development of that code is active. Development and support are more active in lldpd than in openlldp: at the time of writing, openlldp has a total of 8 release tags while lldpd has 54. This is an indication of how quickly bugs are fixed and new versions are released in lldpd.

3. Supported Protocols

There are other protocols similar to LLDP for discovering network devices, for example EDP and CDP. When selecting an LLDP implementation, one should check that it supports these other protocols as well, so that network devices speaking them are also discovered. Although I have not verified the lists myself, according to their documentation lldpd supports EDP, CDP, FDP, and SONMP, while openlldp supports EDP, CDP, EVB, MED, DCBX, and VDP.

4. Custom Interface Support

In most cases LLDP runs on a standard Ethernet interface, but in some specific cases it may be necessary to run LLDP on non-Ethernet interfaces, such as serial or I2C. In that case, it is very helpful if the open-source code supports other interfaces. Although neither implementation supports custom interfaces out of the box, lldpd at least has documentation on how to add them. Adding custom interfaces to openlldp is likely to require more time to understand and implement than with lldpd.

5. Multiple Neighbour Support

This is one of the most important features when selecting an LLDP implementation. Multiple neighbor support is needed if you have to capture more than one LLDP-enabled neighbor (network device) on the same interface. In my understanding this is a very basic feature that should be supported in all LLDP code, so I was surprised to find that it is not available in openlldp. Multiple neighbor support is available in lldpd.

6. Daemon Configuration Tool

A daemon configuration tool helps to configure LLDP parameters, get status, and enable/disable interfaces. Both lldpd and openlldp have their own configuration tools: lldpd provides lldpcli/lldpctl, and openlldp provides lldptool.

7. LLDP Statistics

Both lldpd and openlldp support the display of interface and neighbor statistics through their configuration tools. The statistics include Total Frames Out, Total Error Frames Out, Total Age-Out Frames, Total Discarded Frames, Total Frames In, Total Frames In Errors, Total Discarded Error Frames, Total TLVs In Errors, Total TLVs Accepted, etc.

8. Custom TLV Support

Both lldpd and openlldp support reception and transmission of custom TLVs. The custom TLVs can be set or read using their configuration tools.
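Custom TLVs use the organizationally specific TLV type (127), whose value carries a 3-byte OUI, a subtype byte, and the organization-defined payload. A minimal round-trip sketch – the OUI and payload below are made up for illustration, and this is not code from either project:

```python
import struct

def encode_org_tlv(oui: bytes, subtype: int, info: bytes) -> bytes:
    """Encode an organizationally specific LLDP TLV (type 127).

    Value layout: 3-byte OUI + 1-byte subtype + organization-defined info.
    """
    if len(oui) != 3:
        raise ValueError("OUI must be exactly 3 bytes")
    value = oui + bytes([subtype]) + info
    header = (127 << 9) | len(value)
    return struct.pack("!H", header) + value

def decode_org_tlv(tlv: bytes):
    """Split an org-specific TLV back into (oui, subtype, info)."""
    header, = struct.unpack("!H", tlv[:2])
    if header >> 9 != 127:
        raise ValueError("not an organizationally specific TLV")
    value = tlv[2:2 + (header & 0x1FF)]
    return value[:3], value[3], value[4:]

# Round-trip with a made-up OUI and payload.
tlv = encode_org_tlv(b"\x00\x12\x0f", 1, b"hello")
oui, subtype, info = decode_org_tlv(tlv)
```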

9. SNMP Agent

Both lldpd and openlldp support an SNMP agent.

Comparison table

Based on the points above, the table below summarizes the comparison, so you can decide whether lldpd or openlldp should be used in your system or network device.

Feature                       | lldpd                          | openlldp
License                       | ISC                            | GPL-2.0
Community activity            | More active (54 release tags)  | Less active (8 release tags)
Additional protocols          | EDP, CDP, FDP, SONMP           | EDP, CDP, EVB, MED, DCBX, VDP
Custom interface support      | No (adding them is documented) | No
Multiple neighbor support     | Yes                            | No
Configuration tool            | lldpcli/lldpctl                | lldptool
Interface/neighbor statistics | Yes                            | Yes
Custom TLV support            | Yes                            | Yes
SNMP agent                    | Yes                            | Yes

Conclusion

In my opinion, it is better to choose lldpd over openlldp, considering the license, features, and community support: the licensing of lldpd is more permissive, it has more features, and its community support is more active. So unless your client directs you to use a specific open-source LLDP package, go for lldpd.

eInfochips has in-depth expertise in firmware design for embedded systems development. We offer end-to-end support for firmware development, from system requirements to testing for quality and environment.

Originally posted here.

Read more…

by Evelyn Münster

IoT systems are complex data products: they consist of digital and physical components, networks, communications, processes, data, and artificial intelligence (AI). User interfaces (UIs) are meant to make this level of complexity understandable for the user. However, building a data product that can explain data and models to users in a way that they can understand is an unexpectedly difficult challenge. That is because data products are not your run-of-the-mill software product.

In fact, 85% of all big data and AI projects fail. Why? I can say from experience that it is not the technology but rather the design that is to blame.

So how do you create a valuable data product? The answer lies in a new type of user experience (UX) design. With data products, UX designers are confronted with several additional layers that are not usually found in conventional software products: it’s a relatively complex system, unfamiliar to most users, and comprises data and data visualization as well as AI in some cases. Last but not least, it presents an entirely different set of user problems and tasks than customary software products.

Let’s take things one step at a time. My many years in data product design have taught me that it is possible to create great data products, as long as you keep a few things in mind before you begin.

As a prelude to the UX design process, make sure you and your team answer the following nine questions:

1. Which problem does my product solve for the user?

The user must be able to understand the purpose of your data product in a matter of minutes. It can help to assign your product to one of the five categories of tasks that data products perform: actionable insights, performance feedback loop, root cause analysis, knowledge creation, and trust building.

2. What does the system look like?

Do not expect users to already know how to interpret the data properly. They need to be able to construct a fairly accurate mental model of the system behind the data.

3. What is the level of data quality?

The UI must reflect the quality of the data. A good UI leads the user to trust the product.

4. What is the user’s proficiency level in graphicacy and numeracy?

Conduct user testing to make sure that your audience will be able to read and interpret the data and visuals correctly.

5. What level of detail do I need?

Aggregated data is often too abstract to explain the system or to build user trust. A good way to counter this challenge is to use details that explain things. Then again, too much detail can also be overwhelming.

6. Are we dealing with probabilities?

Probabilities are tricky and require explanations. The common practice of cutting out all uncertainties makes the UI deceptively simple – and dangerous.
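One hedged illustration of not cutting out the uncertainty: render a prediction together with an approximate 95% interval rather than a bare point value. The function name, formatting, and the ±2σ convention below are illustrative assumptions, not a prescribed UI pattern:

```python
def format_prediction(mean: float, stddev: float, unit: str = "") -> str:
    """Render a prediction with its uncertainty instead of a bare point value.

    Shows a ~95% interval (mean +/- 2 standard deviations) so the UI does
    not hide how confident the model actually is.
    """
    lo, hi = mean - 2 * stddev, mean + 2 * stddev
    return f"{mean:.1f}{unit} (likely between {lo:.1f}{unit} and {hi:.1f}{unit})"

# A confident and an uncertain forecast render very differently:
print(format_prediction(21.5, 0.2, "°C"))   # narrow band -> trustworthy
print(format_prediction(21.5, 3.0, "°C"))   # wide band -> handle with care
```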

7. Do we have a data visualization expert on the design team?

UX design applied to data visualization requires a special skillset that covers the entire process, from data analysis to data storytelling. It is always a good idea to have an expert on the team or, alternatively, have someone to reach out to when required.

8. How do we get user feedback?

As soon as the first prototype is ready, you should collect feedback through user testing. The prototype should present content in the most realistic and consistent way possible, especially when it comes to data and figures.

9. Can the user interface boost our marketing and sales?

If the user interface clearly communicates what the data product does and what the process is like, then it can take on a new function: selling your products.

To sum up: we must acknowledge that data products are an unexplored territory. They are not just another software product or dashboard, which is why, in order to create a valuable data product, we will need a specific strategy, new workflows, and a particular set of skills: Data UX Design.

Originally posted HERE 

Read more…

By Adam Dunkels

When you have to install thousands of IoT devices, you need to make device installation impressively fast. Here is how to do it.

Every single IoT device out there has to be installed by someone.

Installation is the activity that requires the most attention during that device’s lifetime.

This is particularly true for large scale IoT deployments.

We at Thingsquare have been involved in many IoT products and projects. Many of these have involved large scale IoT deployments with hundreds or thousands of devices per deployment site.

In this article, we look at why installation is so important for large IoT deployments – and present six installation tactics that make it impressively fast and highly useful:

  1. Take photos
  2. Make it easy to identify devices
  3. Record the location of every device
  4. Keep a log of who did what
  5. Develop an installation checklist, and turn it into an app
  6. Measure everything

And these tactics are useful even if you only have a handful of devices per site, but thousands or tens of thousands of devices in total.

Why Installation Tactics are Important in Large IoT Deployments

Installation is a necessary step of an IoT device’s life.

Someone – maybe your customers, your users, or a team of technicians working for you – will be responsible for the installation. The installer turns your device from a piece of hardware into a living thing: a valuable producer of information for your business.

But most of all, installation is an inevitable part of the IoT device life cycle.

The life cycle of an IoT device can be divided into four stages:

  1. Produce the device, at the factory (usually with a device programming tool).
  2. Install the device.
  3. Use the device. This is where the device generates the value that we created it for. The device may then be either re-installed at a new location, or we:
  4. Retire the device.

Installation appears in two stages of this cycle: in the Install stage itself, and again during Use, whenever a device is re-installed at a new location.

So installation is inevitable – and important. We need to plan to deal with it.

Installation is the Most Time-Consuming Activity

Most devices should spend most of their lifetime in the Use stage of their life cycle.

But a device’s lifetime is different from the attention time that we need to spend on them.

Devices usually don’t need much attention in their Use stage. At this stage, they should mostly be sitting there, generating valuable information.

By contrast, for the people who work with the devices, most of their attention and time will be spent in the Install stage. Since you are paying those people’s salaries, you want installation to be as efficient as possible.

How To Make Installation Impressively Fast - and Useful

At Thingsquare, we have deployed thousands of devices together with our customers, and our customers have deployed many hundreds of thousands of devices with their customers.

These are our top six tactics to make installation fast – and useful:

1. Take Photos

After installation, you will need to maintain and troubleshoot the system. This is a normal part of the Use stage.

Photos are a goldmine of information. Particularly if it is difficult to get to the location afterward.

Make sure you take plenty of photos of each device as they are installed. In fact, you should include multiple photos in your installation checklist – more about this below.

We have been involved in several deployments where we have needed to remotely troubleshoot installations after they were installed. Having a bunch of photos of how and where the devices were installed helps tremendously.

The photos don’t need to be great. Having a low-quality photo beats having no photo, every time.

 

2. Make it Easy to Identify Devices

When dealing with hundreds of devices, you need to make sure that you know exactly which devices you installed, and where.

You therefore need to make it easy to identify each device. Device identification can be done in several ways, and we recommend using more than one way to identify the devices. This will reduce the risk of manual errors.

The two ways we typically use are:

  • A printed unique ID number on the device, which you can take a photo of
  • Automatic secure device identification via Bluetooth – this is something the Thingsquare IoT platform supports out of the box

Being certain about where devices were installed will make maintenance and troubleshooting much easier – particularly if it is difficult to visit the installation site.
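A sketch of how two identification methods can cross-check each other – the function and its normalization rules below are hypothetical, not Thingsquare's actual implementation:

```python
def verify_device_identity(printed_id: str, scanned_id: str) -> bool:
    """Cross-check the printed ID against the ID reported over Bluetooth.

    Normalizing both sides catches common manual errors: stray whitespace,
    separators, and upper/lower-case differences.
    """
    def normalize(s: str) -> str:
        return "".join(c for c in s.upper() if c.isalnum())
    return normalize(printed_id) == normalize(scanned_id)

assert verify_device_identity("00-1A-2B-3C", "001a2b3c")       # match
assert not verify_device_identity("00-1A-2B-3C", "001a2b3d")   # mismatch
```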

3. Record the Location of Every Device

When devices are installed, make sure to record their location.

The easiest way to do this is to record the GPS coordinates of each device as it is being deployed – preferably with the installation app, which can do this automatically (see below).

For indoor installations, exact GPS locations may be unreliable. But even for those devices, having a coarse-grained GPS location is useful.

The location is useful both when analyzing the data that the devices produce, and when troubleshooting problems in the network.
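A minimal sketch of what a recorded location might look like, including a flag for coarse indoor fixes; the class, field names, 50 m cutoff, and example coordinates are illustrative assumptions:

```python
from dataclasses import dataclass

@dataclass
class InstallLocation:
    device_id: str
    latitude: float
    longitude: float
    accuracy_m: float   # GPS-reported accuracy, in meters

    @property
    def coarse(self) -> bool:
        """Flag weak (e.g. indoor) fixes so later analysis treats them with care."""
        return self.accuracy_m > 50

# An indoor install with a weak fix still records a usable coarse location.
loc = InstallLocation("sensor-17", 59.3293, 18.0686, accuracy_m=120.0)
```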

 

4. Keep a Log of Who Did What

In large deployments, there will be many people involved.

Being able to trace the installation actions, as well as who took what action, is enormously useful. Sometimes just knowing the steps that were taken when installing each device is important. And sometimes you need to talk to the person who did the installation.
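A sketch of such an installation audit log – timestamped entries recording who did what to which device. The entry structure and names are illustrative, not a specific product's format:

```python
import time

def log_action(log: list, user: str, device_id: str, action: str) -> None:
    """Append one traceable installation action to the audit log."""
    log.append({
        "ts": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        "user": user,
        "device": device_id,
        "action": action,
    })

audit_log = []
log_action(audit_log, "alice", "sensor-17", "mounted on appliance")
log_action(audit_log, "alice", "sensor-17", "verified online")

# Later: find out who worked on a given device.
installers = {e["user"] for e in audit_log if e["device"] == "sensor-17"}
```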

5. Develop an Installation Checklist - and Turn it into an App

Determine what steps are needed to install each device, and develop a step-by-step checklist for each step.

Then turn this checklist into an app that installation personnel can run on their own phones.

Each step of each checklist should be really easy to understand, to avoid mistakes along the way. And it should be easy to go back and forth between the steps, if needed.

Ideally, the app should run on both Android and iOS, because you would like everyone to be able to use it on their own phones.

Here is an example checklist that we developed for a sensor device in a retail IoT deployment:

  • Check that sensor has battery installed
  • Attach sensor to appliance
  • Make sure that the sensor is online
  • Check that the sensor has a strong signal
  • Check that the GPS location is correct
  • Move hand in front of sensor, to make sure sensor correctly detects movement
  • Be still, to make sure sensor correctly detects no movement
  • Enter description of sensor placement (e.g. “on top of the appliance”)
  • Enter description of appliance
  • Take a photo of the sensor
  • Take a photo of the appliance
  • Take a photo of the appliance and the two beside it
  • Take a photo of the appliance and the four beside it
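A checklist like the one above can be expressed as data that a simple app steps through, with the back-and-forth navigation described earlier. A minimal sketch – the class and the condensed step wording are illustrative:

```python
# The retail checklist above, expressed as data that a simple app steps through.
CHECKLIST = [
    "Check that sensor has battery installed",
    "Attach sensor to appliance",
    "Make sure that the sensor is online",
    "Check that the sensor has a strong signal",
    "Check that the GPS location is correct",
    "Verify motion detection (move hand in front of sensor)",
    "Verify no-motion detection (be still)",
    "Enter description of sensor placement",
    "Enter description of appliance",
    "Take photos of sensor, appliance, and surrounding appliances",
]

class ChecklistRun:
    """Steps through a checklist, letting the installer go back and forth."""

    def __init__(self, steps):
        self.steps = steps
        self.index = 0

    @property
    def current(self):
        return self.steps[self.index]

    def next(self):
        if self.index < len(self.steps) - 1:
            self.index += 1

    def back(self):
        if self.index > 0:
            self.index -= 1

run = ChecklistRun(CHECKLIST)
run.next(); run.next(); run.back()   # installer advances twice, steps back once
```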
 

6. Measure Everything

Since installation costs money, we want it to be efficient.

And the best way to make a process more efficient is to measure it, and then improve it.

Since we have an installation checklist app, measuring installation time is easy – just build it into the app.

Once we know how much time each step in the installation process needs, we are ready to revise the process and improve it. We should focus on the most time-consuming step first and measure the successive improvements to make sure we get the most bang for the buck.
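Building the measurement into the app can be as simple as timing each step as it runs. A sketch with placeholder step names, where the work is simulated with sleeps:

```python
import time

def run_step(timings: dict, step_name: str, step_fn) -> None:
    """Run one installation step and record how long it took, in seconds."""
    start = time.monotonic()
    step_fn()
    timings[step_name] = timings.get(step_name, 0.0) + (time.monotonic() - start)

timings = {}
run_step(timings, "attach sensor", lambda: time.sleep(0.01))   # simulated work
run_step(timings, "take photos", lambda: time.sleep(0.02))     # simulated work

# Focus improvement effort on the most time-consuming step first.
slowest = max(timings, key=timings.get)
```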

Conclusions

Every IoT device needs to be installed, and making the installation process efficient saves attention time for everyone involved – and ultimately money.

At Thingsquare, we have deployed thousands of devices together with our customers, and our customers have deployed many hundreds of thousands of devices with their customers.

We use our experience to solve hard problems in the IoT space, such as how to best install large IoT systems – get in touch with us to learn more!

Originally posted here.

Read more…

“Productivity isn’t everything, but in the long run, it is almost everything.” This well-known quote is attributed to Paul Krugman, the well-known American economist and winner of a Nobel Memorial Prize in Economic Sciences for his contributions to New Trade Theory and New Economic Geography.

In economic terms, a common definition of productivity cites it as the ratio between the volume of outputs and the volume of inputs. It measures the efficiency of production inputs – labor and capital – used to produce a given level of output.

 
For countries and companies alike, productivity gain is a fundamental goal. For countries, productivity leads to higher real income, which contributes to higher living standards and better social services.

For companies, productivity is a key driver of sustainable profits and competitiveness over time. The global economy, with open markets and wide competition, pushes companies for constant productivity gains. Companies that fail in the race for productivity are the perfect candidates for extinction in the near future.

 

Productivity can be boosted in a few different ways, most notably through the innovation of new products or through new business models that guarantee higher scalability and demand. One example is how Starbucks built a sustainable business model with high levels of productivity through the deployment of strong, intangible assets such as a unique brand and efficient business processes.

Another example is Apple, a company that executed its strategy to perfection, creating a legion of fans that constantly run to buy the company’s new products, and sometimes even camp overnight outside an Apple store to get a device before it sells out. Apple succeeded not only in designing some of the most desired smartphones and PCs on the market but also in creating a business platform that generates incremental service and software revenue on top of its products. In 2020, about 15% of Apple’s revenue came from services, leveraged by its platform strategy.

Another important factor in productivity is the innovation inside. That is, how to produce more with fewer resources. While in the past few decades industrial efficiency was boosted by moving factories to low labor cost economies, this recipe is getting exhausted. The cost increase in Asian countries, driven by higher salaries, geopolitical risks and the increase in automation levels is changing the balance of this equation.

In an environment of hyper-competition and open markets, technology is rapidly reshaping manufacturing. The companies that survive in this new paradigm will be those that adopt data-driven models, innovate on their products and services, and embrace the challenge of producing more with less. I believe IoT and Industry 4.0 will be the drivers of this transformation.

Start With Management

Everything starts with management. Managers need to embrace innovation and constant improvement. Processes need to be quantified, and efficiency ratios for each of the individual processes need to be measured. For example, overall equipment effectiveness (OEE) needs to be calculated per machine, line, operator, sector and plant. Such KPIs are important to enable managers to make real-time decisions.
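As a concrete example of such a KPI, OEE is conventionally calculated as the product of three ratios:

```python
def oee(availability: float, performance: float, quality: float) -> float:
    """Overall Equipment Effectiveness = Availability x Performance x Quality.

    Each factor is a ratio in [0, 1]:
      availability = run time / planned production time
      performance  = actual output rate / ideal output rate
      quality      = good units / total units produced
    """
    return availability * performance * quality

# Example: 90% availability, 95% performance, 99% quality.
print(f"OEE = {oee(0.90, 0.95, 0.99):.1%}")   # OEE = 84.6%
```

Computing this per machine, line, operator, sector, and plant turns the OEE definition directly into the real-time KPIs described above.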

Include Machines

If data-driven management is the goal, then it’s time to think about execution. The ability to collect data from a variety of different machines and from a variety of different vendors is a big challenge. Industrial machines in general don’t have a common protocol and as such, collecting the data in a highly efficient manner can be challenging and daunting.

Beyond connecting machines themselves, machine data needs to be efficiently integrated across different IT systems and software, such as manufacturing execution systems (MES), enterprise resource planning (ERP) software and a variety of database applications. On top of that, there comes the challenge of building and integrating higher-level functionality, such as edge logic for real-time actions, data visualization for operators and managers, data analytics, cloud computing, machine learning and the list goes on. The complexity and associated challenges of machine and data integration cause many companies to fail along the way.

Avoid The Custom Code Trap

Many companies fail in the execution, and one of the reasons is because it is not a simple task. As IIoT is a relatively new concept, the market is not fully matured. Many companies create their own internal team and start to code. The problem is companies may not be prepared – they often lack the right level of skills, people, and expertise. It's not impossible to execute internally, but oftentimes focusing on your core business and finding the best technology tools for your needs in the market is the more efficient choice.

If you're looking at outside teams, a good way to avoid high development costs and operations risk is to find an integrated platform that merges data collection, edge computing and information technology/operational technology (IT/OT) integration. The more vertically integrated, the faster the deployment and the less likely you will need "Band-Aids" to integrate systems. This will provide more flexibility and optimize performance while reducing the cost and risks of the project.

It’s also important to remember that innovation and productivity is more than a task. It is a journey. Processes need to constantly evolve, and your IIoT platform must provide the ability to be flexible when you need to change machines, systems, metrics and processes.

In the end, productivity excellence is a blend of management, creativity and technology. It means pushing people out of their comfort zone and augmenting possibilities with technology. Not easy, but certainly needed.

 

Read more…

by Stephanie Overby

What's next for edge computing, and how should it shape your strategy? Experts weigh in on edge trends and talk workloads, cloud partnerships, security, and related issues


All year, industry analysts have been predicting that edge computing – and complementary 5G network offerings – will see significant growth, as major cloud vendors deploy more edge servers in local markets and telecom providers push ahead with 5G deployments.

The global pandemic has not significantly altered these predictions. In fact, according to IDC’s worldwide IT predictions for 2021, COVID-19’s impact on workforce and operational practices will be the dominant accelerator for 80 percent of edge-driven investments and business model change across most industries over the next few years.

First, what exactly do we mean by edge? Here’s how Rosa Guntrip, senior principal marketing manager, cloud platforms at Red Hat, defines it: “Edge computing refers to the concept of bringing computing services closer to service consumers or data sources. Fueled by emerging use cases like IoT, AR/VR, robotics, machine learning, and telco network functions that require service provisioning closer to users, edge computing helps solve the key challenges of bandwidth, latency, resiliency, and data sovereignty. It complements the hybrid computing model where centralized computing can be used for compute-intensive workloads while edge computing helps address the requirements of workloads that require processing in near real time.”

Moving data infrastructure, applications, and data resources to the edge can enable faster response to business needs, increased flexibility, greater business scaling, and more effective long-term resilience.

“Edge computing is more important than ever and is becoming a primary consideration for organizations defining new cloud-based products or services that exploit local processing, storage, and security capabilities at the edge of the network through the billions of smart objects known as edge devices,” says Craig Wright, managing director with business transformation and outsourcing advisory firm Pace Harmon.

“In 2021 this will be an increasing consideration as autonomous vehicles become more common, as new post-COVID-19 ways of working require more distributed compute and data processing power without incurring debilitating latency, and as 5G adoption stimulates a whole new generation of augmented reality, real-time application solutions, and gaming experiences on mobile devices,” Wright adds.

8 key edge computing trends in 2021


Noting the steady maturation of edge computing capabilities, Forrester analysts said, “It’s time to step up investment in edge computing,” in their recent Predictions 2020: Edge Computing report. As edge computing emerges as ever more important to business strategy and operations, here are eight trends IT leaders will want to keep an eye on in the year ahead.

1. Edge meets more AI/ML


Until recently, pre-processing of data via near-edge technologies or gateways had its share of challenges due to the increased complexity of data solutions, especially in use cases with a high volume of events or limited connectivity, explains David Williams, managing principal of advisory at digital business consultancy AHEAD. “Now, AI/ML-optimized hardware, container-packaged analytics applications, frameworks such as TensorFlow Lite and tinyML, and open standards such as the Open Neural Network Exchange (ONNX) are encouraging machine learning interoperability and making on-device machine learning and data analytics at the edge a reality.” 

Machine learning at the edge will enable faster decision-making. “Moreover, the amalgamation of edge and AI will further drive real-time personalization,” predicts Mukesh Ranjan, practice director with management consultancy and research firm Everest Group.

“But without proper thresholds in place, anomalies can slowly become standards,” notes Greg Jones, CTO of IoT solutions provider Kajeet. “Advanced policy controls will enable greater confidence in the actions made as a result of the data collected and interpreted from the edge.” 
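One illustrative way to implement such a policy control is an adaptive threshold that refuses to learn from readings it has flagged, so repeated anomalies cannot gradually redefine "normal." This is a sketch under assumed parameters, not Kajeet's implementation:

```python
class AnomalyGate:
    """Adaptive threshold that refuses to learn from flagged readings.

    The baseline is an exponentially weighted moving average, but it is only
    updated with readings inside the allowed band - so repeated anomalies
    cannot slowly drag the 'normal' level toward themselves.
    """

    def __init__(self, baseline: float, band: float, alpha: float = 0.1):
        self.baseline = baseline
        self.band = band      # maximum allowed deviation from the baseline
        self.alpha = alpha    # EWMA smoothing factor

    def check(self, reading: float) -> bool:
        """Return True if anomalous; update the baseline only for normal readings."""
        if abs(reading - self.baseline) > self.band:
            return True
        self.baseline += self.alpha * (reading - self.baseline)
        return False

gate = AnomalyGate(baseline=20.0, band=5.0)
assert not gate.check(21.0)   # normal: baseline nudges toward 21
assert gate.check(40.0)       # anomaly: flagged, baseline untouched
```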

 

2. Cloud and edge providers explore partnerships


IDC predicts a quarter of organizations will improve business agility by integrating edge data with applications built on cloud platforms by 2024. That will require partnerships across cloud and communications service providers, with some pairing up already beginning between wireless carriers and the major public cloud providers.

According to IDC research, the systems that organizations can leverage to enable real-time analytics are already starting to expand beyond traditional data centers and deployment locations. Devices and computing platforms closer to end customers and/or co-located with real-world assets will become an increasingly critical component of this IT portfolio. This edge computing strategy will be part of a larger computing fabric that also includes public cloud services and on-premises locations.

In this scenario, edge provides immediacy and cloud supports big data computing.

 

3. Edge management takes center stage


“As edge computing becomes as ubiquitous as cloud computing, there will be increased demand for scalability and centralized management,” says Wright of Pace Harmon. IT leaders deploying applications at scale will need to invest in tools to “harness step change in their capabilities so that edge computing solutions and data can be custom-developed right from the processor level and deployed consistently and easily just like any other mainstream compute or storage platform,” Wright says.

The traditional approach to data center or cloud monitoring won’t work at the edge, notes Williams of AHEAD. “Because of the rather volatile nature of edge technologies, organizations should shift from monitoring the health of devices or the applications they run to instead monitor the digital experience of their users,” Williams says. “This user-centric approach to monitoring takes into consideration all of the components that can impact user or customer experience while avoiding the blind spots that often lie between infrastructure and the user.”

As Stu Miniman, director of market insights on the Red Hat cloud platforms team, recently noted, “If there is any remaining argument that hybrid or multi-cloud is a reality, the growth of edge solidifies this truth: When we think about where data and applications live, they will be in many places.”

“The discussion of edge is very different if you are talking to a telco company, one of the public cloud providers, or a typical enterprise,” Miniman adds. “When it comes to Kubernetes and the cloud-native ecosystem, there are many technology-driven solutions competing for mindshare and customer interest. While telecom giants are already extending their NFV solutions into the edge discussion, there are many options for enterprises. Edge becomes part of the overall distributed nature of hybrid environments, so users should work closely with their vendors to make sure the edge does not become an island of technology with a specialized skill set.”

 

4. IT and operational technology begin to converge


Resiliency is perhaps the business term of the year, thanks to a pandemic that revealed most organizations’ weaknesses in this area. IoT-enabled devices (and other connected equipment) drive the adoption of edge solutions where infrastructure and applications are being placed within operations facilities. This approach will be “critical for real-time inference using AI models and digital twins, which can detect changes in operating conditions and automate remediation,” IDC’s research says.

IDC predicts that the number of new operational processes deployed on edge infrastructure will grow from less than 20 percent today to more than 90 percent in 2024 as IT and operational technology converge. Organizations will begin to prioritize not just extracting insight from their new sources of data, but integrating that intelligence into processes and workflows using edge capabilities.

Mobile edge computing (MEC) will be a key enabler of supply chain resilience in 2021, according to Pace Harmon’s Wright. “Through MEC, the ecosystem of supply chain enablers has the ability to deploy artificial intelligence and machine learning to access near real-time insights into consumption data and predictive analytics as well as visibility into the most granular elements of highly complex demand and supply chains,” Wright says. “For organizations to compete and prosper, IT leaders will need to deliver MEC-based solutions that enable an end-to-end view across the supply chain available 24/7 – from the point of manufacture or service  throughout its distribution.”

 

5. Edge eases connected ecosystem adoption


Edge not only enables and enhances the use of IoT, but it also makes it easier for organizations to participate in the connected ecosystem with minimized network latency and bandwidth issues, says Manali Bhaumik, lead analyst at technology research and advisory firm ISG. “Enterprises can leverage edge computing’s scalability to quickly expand to other profitable businesses without incurring huge infrastructure costs,” Bhaumik says. “Enterprises can now move into profitable and fast-streaming markets with the power of edge and easy data processing.”

 

6. COVID-19 drives innovation at the edge


“There’s nothing like a pandemic to take the hype out of technology effectiveness,” says Jason Mann, vice president of IoT at SAS. Take IoT technologies such as computer vision enabled by edge computing: “From social distancing to thermal imaging, safety device assurance and operational changes such as daily cleaning and sanitation activities, computer vision is an essential technology to accelerate solutions that turn raw IoT data (from video/cameras) into actionable insights,” Mann says. Retailers, for example, can use computer vision solutions to identify when people are violating the store’s social distance policy.

 

7. Private 5G adoption increases


“Use cases such as factory floor automation, augmented and virtual reality within field service management, and autonomous vehicles will drive the adoption of private 5G networks,” says Ranjan of Everest Group. Expect more maturity in this area in the year ahead, Ranjan says.

 

8. Edge improves data security


“Data efficiency is improved at the edge compared with the cloud, reducing internet and data costs,” says ISG’s Bhaumik. “The additional layer of security at the edge enhances the user experience.” Edge computing is also not dependent on a single point of application or storage, Bhaumik says. “Rather, it distributes processes across a vast range of devices.”

As organizations adopt DevSecOps and take a “design for security” approach, edge is becoming a major consideration for the CSO to enable secure cloud-based solutions, says Pace Harmon’s Wright. “This is particularly important where cloud architectures alone may not deliver enough resiliency or inherent security to assure the continuity of services required by autonomous solutions, by virtual or augmented reality experiences, or big data transaction processing,” Wright says. “However, IT leaders should be aware of the rate of change and relative lack of maturity of edge management and monitoring systems; consequently, an edge-based security component or solution for today will likely need to be revisited in 18 to 24 months’ time.”

Originally posted here.


By Natallia Babrovich

My experience shows that most doctor visits are likely to become virtual in the future. Let’s see how IoT solutions make the healthcare environment more convenient for patients and medical staff.

What are IoT and IoMT?

My colleague Alex Grizhnevich, IoT consultant at ScienceSoft, defines the Internet of Things as a network of physical devices with sensors, actuators, software, and network connectivity that enables the devices to gather and transmit data and fulfill users' tasks. Today, IoT is becoming a key component of the digital transformation of healthcare, so we can distinguish a separate group of initiatives, the so-called IoHT (Internet of Health Things) or IoMT (Internet of Medical Things).

Popular IoMT Use Cases

IoT-based patient care

Medication intake tracking

IoT-based medication tracking allows doctors to monitor the impact of a prescribed medication’s dosage on a patient’s condition. Patients, in turn, can control their medication intake, e.g., by using in-app reminders, and note in the app how their symptoms change for their doctor’s further analysis. The patient app can be connected to smart devices (e.g., a smart pill bottle) for easier management of multiple medications.

Remote health monitoring

Among examples of employing IoT in healthcare, this use case is especially viable for chronic disease management. Patients can use connected medical devices or body-worn biosensors to allow doctors or nurses to check their vitals (blood pressure, glucose level, heart rate, etc.) via doctor/nurse-facing apps. Health professionals can monitor this data 24/7 and study app-generated reports to get insights into health trends. Patients who show signs of deteriorating health are scheduled for in-person visits.

IoT- and RFID-based medical asset monitoring

Medical inventory and equipment tracking

All medical tools and durable assets (beds, medical equipment) are equipped with RFID (radio frequency identification) tags. Fixed RFID readers (e.g., mounted on walls) collect information about the location of assets, which medical staff can view on a map in a mobile or web application.

Drug tracking

RFID-enabled drug tracking helps pharmacies and hospitals verify the authenticity of medication packages and spot medication shortages in a timely manner.

Smart hospital space

Cloud-connected ward sensors (e.g., a light switch, door and window contacts) and ambient sensors (e.g., hygrometers, noise detectors) allow patients to control their environment for a comfortable hospital stay.

Advantages of using IoT technology in healthcare

Patient-centric care

Medical IoT helps turn patients into active participants in the treatment process, thus improving care outcomes. Besides, IoMT helps increase patient satisfaction with care delivery, from communication with medical staff to physical comfort (smart lighting, climate control, etc.).

Reduced care-related costs

Non-critical patients can stay at home and use cloud-connected medical IoT devices, which gather, track and send health data to the medical facility. And with the help of telehealth technology, patients can schedule e-visits with nurses and doctors without traveling to the hospital.

Reduced readmissions

Patient apps connected to biosensors help ensure compliance with a discharge plan, enable prompt detection of deviations in a patient’s health state, and make it possible to contact a health professional remotely and without delay.

Challenges of IoMT and how to address them

Potential health data security breaches

The connected nature of IoT brings about information security challenges for healthcare providers and patients.

Tip from ScienceSoft

We recommend implementing HIPAA-compliant IoMT solutions and conducting vulnerability assessments and penetration testing regularly to ensure the highest level of protection.

Integration difficulties

Every medical facility has its unique set of applications to be integrated with an IoMT solution (e.g., EHR, EMR). Some of these applications may be heavily customized or outdated.

Tip from ScienceSoft

Develop an integration strategy from the start of your IoMT project, defining the scope and nature of the custom integrations required.

Enhance care delivery with IoMT

According to my estimates, the use of IoT technology in healthcare will continue to rise over the next decade, driven by the impact of the COVID-19 pandemic and the growing demand for remote care. If you need help with creating and implementing a fitting IoMT solution, you’re welcome to turn to ScienceSoft’s healthcare IT team.

Originally posted here.


Over the years, IoT has made its way into complex consumer markets and made millions of lives easier and smarter. Without a doubt, the industry holds enormous potential for upcoming entrepreneurs to introduce innovative solutions. In fact, the number of IoT start-ups grew by 27% from 2019 to mid-2020.

 

While many of these IoT projects have made the cut, others are struggling to realize the intended ROI. Tempting as it is, this remains a highly challenging space in which to operate and create sustainable companies.

 

While a new organization is a collaborative effort of many people, it is the leaders who hold the vision strong and spearhead the transformation. For those setting out on their tech start-up journeys, here’s what you can learn from the best.

 

Targeting the right KPIs 

The rule for achieving your KPIs is simple – never ignore them. Start-ups that planned around their KPIs were able to meet them quickly and seamlessly. From product ideation to distributing budgets across marketing, development, customer acquisition and retention, the complete lifecycle should be evaluated periodically through metrics such as Customer Acquisition Cost (CAC), Customer Retention Rate (CRR) and Life-Time Value (LTV).

Customer Acquisition Cost (CAC) is what a business spends, on average, to win a new customer. Customer Retention Rate (CRR) is the share of customers a business is able to retain over a given period of time. A high retention rate is a clear sign of a successful product and a fully satisfied customer, while a high attrition rate means the opposite. Life-Time Value (LTV) is the net value of a customer to the business. When these metrics are evaluated in relation to each other, such as the LTV/CAC ratio, the overall capital efficiency of a company can be estimated.

IoT enables you to take a step further in tracking KPIs:

  • Track end usage using IoT and deduce usage analytics.
  • Take user feedback at the point of usage using IoT and deduce user experience in real-time.
  • Use remote device management to monitor the health of your IoT solution and run diagnostics to find and fix issues. This helps keep MTTR (Mean Time To Repair) as low as possible.

These advanced indicators can directly help you reduce expenses and increase revenues by improving the customer experience.



Why are these important? Entrepreneurs who stayed focused on meeting these KPIs have seen a 10x increase in business efficiency. This is an important takeaway for budding entrepreneurs who have to justify their investments periodically. Since CAC has increased by 50% over the past few years, not ignoring performance KPIs is the foremost lesson for every new leader.

 

Rapid adaptation to change

IoT is not the same as it was 5 years ago. In fact, it may not be the ‘new technology on the block’ anymore. It is continuously evolving, and start-ups have no choice but to keep experimenting with newer builds and processes – for example, upgrading their IoT products to embrace new technologies such as edge computing or to bring anonymity to data transfers. Likewise, project owners can improve their development process by formally collaborating with other companies. The ecosystem is a mesh, and more hands help to simplify it. So whether it is outsourcing resourcing requirements to a partner or outsourcing end-to-end product development, start-ups must weigh their choices and utilize the available expertise optimally.

 

A few entrepreneurs have been able to resolve this complexity by treading the middle path. They sensed that the risk of not embracing change is greater than the risk of failing. Budding entrepreneurs must therefore understand that experimentation doesn’t have to replace existing processes. It can be an additional vertical committed to embracing contemporary product offerings or technologies.


Despite the world being restricted indoors due to COVID-19, the following tech entrepreneurs have brilliantly led their workforces and achieved impressive results.

 

Yevgeny Dibrov
CEO Armis Security

Armis Security ventured into and mastered a market that most companies are scared to try – IoT cybersecurity. Led by the hugely ambitious Yevgeny Dibrov, Armis is a security platform that discovers devices across the network, analyzes their behavior and identifies risks. For an industry plagued with cybersecurity threats, Armis is a huge reassurance. The company has a line-up of customers across sectors such as healthcare, automotive, finance and manufacturing.

As the start-up approaches its fifth anniversary, CEO Yevgeny Dibrov says: “As companies accelerate their digital transformation initiatives, securely enable employees to work from home long-term, and adopt 5G, we are seeing an explosion of connected devices. At the same time, this uptick has increased the risk profile for businesses, especially around ransomware attacks, which is driving even more demand for our industry-leading agentless device security platform.”

 

Daniel Price
CEO - Ioterra Inc.

When most start-ups were swept up in the hype of IoT, Ioterra foresaw the complications and immediately seized the opportunity to resolve a huge gap in the IoT ecosystem – the challenge of quickly sourcing reliable IoT service partners and the other resources needed for a successful IoT initiative. Unlike other technology markets, IoT is a rare space whose sourcing complications span services as well as solutions from all walks of technology – hardware, software and wireless communications. Besides delaying projects, sourcing difficulties lead to cost overheads. An IoT consultant himself, Daniel, along with his team, created a digital marketplace that enables project owners to seek sourcing assistance based on their business model, type and sector.

Daniel says, “Startups are advised to ensure a minimum of 12-18 months of runway. The most important reasoning behind this thinking is that you would invariably pivot 2-3 times before you get it right and you need to survive until then. Unless you watch the KPIs regularly and quickly pivot adapting to what you see on the ground, you cannot build a growing startup”.

 

Amir Haleem
CEO - Helium

Technologies from all sectors and markets have started to embrace Web 3.0, and Helium is IoT’s big bet. It is a platform that empowers businesses to develop connectivity for devices and sensors over a peer-to-peer wireless network. CEO Amir Haleem, who has always been ambitious about wireless coverage for low-power IoT devices, aims to bring more projects onto the platform.

He says: “We’ve worked hard to bring native geo-location to everything that connects to the network. This opens up all sorts of interesting use cases that haven’t been seen yet, which have otherwise been impossible to build.”

 

What’s common to all of them? The ethos to grow

Ultimately, no start-up can grow without the mindset to win. Although most tech leaders ensure a learning culture within the organization, the motivation is mostly missing at the employee level. This largely happens when leaders don’t communicate their vision to the workforce and keep them restricted to task assignments. The ethos to grow has to reflect at the individual level, and that’s the hack to organizational success that many don’t get right.

Moreover, missing KPIs and not retrospecting on those failures together with your teams is a big flaw. In a start-up environment, where the team structure is mostly lean, entrepreneurs must share quarterly progress with everyone. Besides keeping everyone aligned on the expected outcomes, such sessions surface innovative ideas for achieving the results more efficiently. Upcoming entrepreneurs should therefore ensure a work culture that acknowledges creative input.

Motivated employees with a growth mindset, diligent tracking of KPIs and quick adaptability to change lay a solid foundation for success.


By: Kiva Allgood, Head of IoT for Ericsson

Recently, I had the pleasure of participating in PTC’s LiveWorx conference as it went virtual, adding further credence to its reputation as the definitive event for digital transformation. I joined PTC’s Chief Technology Officer Steve Dertien for a presentation on how to unleash the power of industrial IoT (IIoT) and cellular connectivity.

A lot has changed in business over the past few months. With a massive remote migration the foremost priority, many business initiatives were put on the back burner. IIoT wasn’t one of them. The realm has remained a key strategic objective; in fact, considering how it can close distances and extend what industrial enterprises are able to monitor, control and accomplish, it’s more important than ever.

Ericsson and PTC formed a partnership specifically to help industrial enterprises accelerate digital transformation. Ericsson unlocks the full value of global cellular IoT connectivity and provides on-premise solutions. PTC offers an industrial IoT platform, ready to configure and deploy, with flexible connectivity and capabilities to build IoT solutions without manual coding.

This can enable enterprises to speed up cellular IoT deployments, realize the advantages of Industry 4.0 and better compete. Further, they can create a foundation for 5G, introducing such future benefits as network slicing, edge computing and high reliability, low-latency communications.

It all sounds great, I know, but if you’re like most folks, you probably have a few basic questions on your mind. Here are a few of the ones that I typically receive and appreciate the most.

Why cellular?

You’re connected already, via wire or Wi-Fi, so why is cellular necessary? You need reliable, global and dedicated connectivity that’s flexible to deploy. If you think about a product and its lifecycle, it may be manufactured in one location, land in another, then ultimately move again. If you can gather secure insight from it – regardless of where it was manufactured, bought or sold – you can improve operational efficiency, product capabilities, identify new business opportunities and much more.

What cellular can do especially well is effectively capture all that value by combining global connectivity with a private network. Then, through software like PTC’s, you can glean an array of information that’ll leave you wondering how else you can use the technology, regardless of whether the data is on or off the manufacturing floor. For instance, by applying virtual or augmented reality (VR/AR), you can find product defects before they leave the factory or end up in other products.

That alone can eliminate waste, save money from production to shipping, protect your reputation and much more.

According to analysts at ABI Research, we’ll see 4.3 billion wireless connections in smart factories by 2030, leading to a $1 trillion smart manufacturing market. For those that embrace Industry 4.0, private cellular has the potential to improve gross margins by 5-13% for factory and warehouse operations. What’s more, manufacturers can expect a 10x return on their investment.

You just need to be able to turn data into actionable intelligence throughout the product’s lifecycle and across your global enterprise, both securely and reliably – and that’s what cellular delivers.

Where do I start?

People don’t often ask for cellular or a dedicated private network specifically. They come to us with questions about things like how they can improve production cycle times or reduce costs by a certain percentage. That’s exactly where you should begin, too.

I come from the manufacturing space where for years I lived quality control, throughput and output. When someone would introduce a new idea, we’d vet it with a powerful but simple question: How will this make or save us money? If it couldn’t do either, we weren’t interested.

Look at your products and processes the same way when it comes to venturing into IIoT and digital transformation. Find the pain points. Identify defects, bottlenecks and possible improvements. Seek out how to further connect your business and the opportunities that could present. Data is indeed the new oil; it’s the intelligence that’ll help you understand where you need to go and what you need to do to move forward or create a new business.

What should I look for?

To get off on the right foot, be sure to engage the right partners. Realize this is a very complex area; no single provider can offer a solution that’ll address every need in one. You need partners with an ecosystem of their own best-of-breed partners; that’s why we work with companies like PTC. We have expertise in specific areas, focus on what we do best and work closely together to ensure we approach IIoT right.

We are building on an established foundation we created together. Both organizations have invested a lot of time, money, R&D cycles and processes in developing our individual and collective offerings. As a result, not only will we be working together into the future, but customers are also assured they’ll remain at the forefront of innovation.

That future-proofing is what you need to look for as well. You need wireless connectivity for applications involving asset tracking, predictive maintenance, digital twins, human-robot workflow integration and more. While Industry 4.0 is a priority, you want to lay a foundation for fast adoption of 5G, too.

There are other considerations to keep in mind down the road, such as your workforce. Employees may not want to be “machines” themselves, but they will want to be a robotics engineer or use AR or VR for artificial intelligence analysis. The future of work is changing, too, and IIoT offers a way to keep employees engaged.

Originally posted HERE

CLICK HERE to view Kiva Allgood's LiveWorx presentation, “Unleashing the Power of Industrial IoT and Cellular Connectivity.”


OMG! Three 32-bit processor cores each running at 300 MHz, each with its own floating-point unit (FPU), and each with more memory than you can shake a stick at!

In a recent column on Recreating Retro-Futuristic 21-Segment Victorian Displays, I noted that I’ve habitually got a number of hobby projects on the go. I also joked that, “one day, I hope to actually finish one or more of the little rascals.” Unfortunately, I’m laughing through my tears because some of my projects do appear to be never-ending.

For example, shortly after the internet first impinged on the popular consciousness with the launch of the Mosaic web browser in 1993, a number of humorous memes started to bounce around. It was one of these that sparked my Pedagogical and Phantasmagorical Inamorata Prognostication Engine project, which has been a “work in progress” for more than 20 years as I pen these words.

Feast your orbs on the Prognostication Engine (Click image to see a larger version — Image source: Max Maxfield)

As you can see in the image to the right, the Prognostication Engine has grown in the telling. The main body of the engine is housed in a wooden radio cabinet from 1929. My chum, Carpenter Bob, created the smaller section on the top, with great attention to detail like the matching hand-carved rosettes.

The purported purpose of this bodacious beauty is to forecast the disposition of my wife (Gina the Gorgeous) when I’m poised to leave the office and head for home at the end of the day (hence the “Prognostication” portion of the engine’s moniker). Paradoxically, should Gina ever discover the true nature of the engine, I won’t actually need it to predict her mood of the moment.

As we see, the control panels are festooned with antique knobs, toggle switches, pushbuttons, and analog meters. The knobs are mounted on motorized potentiometers, so if an unauthorized user attempts to modify their settings, they will automatically return to their original values under program control. The switches and pushbuttons are each accompanied by two LEDs, while each knob is equipped with a ring of 16 LEDs, resulting in 116 LEDs in all. Then there are 64 LEDs in the furnace and 140 LEDs in the rings surrounding the bases of the large vacuum tubes mounted on the top of the beast.

I was just reflecting on how much technology has changed over the past couple of decades. For example, today’s “smart LEDs” like WS2812Bs (a.k.a. NeoPixels) can be daisy-chained together, allowing multiple devices to be controlled using a single pin on the microcontroller. It probably goes without saying (but I’ll say it anyway) that all of the LEDs in the current incarnation of the engine are tricolor devices in the form of NeoPixels, but this was not always the case.

An early prototype of a shift register capable of driving only 13 tricolor LEDs (Click image to see a larger version — Image source: Max Maxfield)

The tricolor LEDs I was planning on using 20 years ago each required three pins to be controlled. The solution at that time would have been to implement a huge external shift register. The image to the left shows an early shift register prototype sufficient to drive only 13 “old school” devices.

And, of course, developments in computing have been even more staggering. When I commenced this project, I was using a PIC microcontroller that I programmed in BASIC. After the Arduino first hit the scene circa 2005, I migrated to using Arduino Unos, followed by Arduino Megas, that I programmed in C/C++.

One of the reasons I like the Arduino Mega is its high pin count: it boasts 54 digital input/output (I/O) pins, of which 15 can be used as pulse-width modulated (PWM) outputs, along with 16 analog inputs and 4 UARTs. On the other hand, the Mega is only an 8-bit machine running at 16 MHz, it offers only 256 KB of Flash (program) memory and 8 KB of SRAM, and it doesn’t have hardware support for floating-point operations.

The thing is that the Prognostication Engine has a lot of things going on. In addition to reading the states of all the switches and pushbuttons and potentiometers, it has to control the motors behind the knobs and drive the analog meters. Currently, the LEDs are being driven with simple test patterns, but these are going to be upgraded to support much more sophisticated animation and fading effects. The engine is also constantly performing calculations of an astronomical and astrological nature, determining things like the dates of forthcoming full moons and blue moons.

In the fullness of time, the engine is going to be connected to the internet so it can monitor things like the weather. It’s also going to have its own environmental sensors (temperature, humidity, barometric pressure) and proximity detection sensors. Furthermore, the engine will also boast a suite of sound effects such that flicking a simple switch, for example, may result in myriad sounds of mechanical mechanisms performing their magic. At some stage, I’m even hoping to add things like artificial intelligence (AI) and facial recognition.

The current state of computational play (Click image to see a larger version — Image source: Max Maxfield)

Sad to relate, my existing computing solution is not capable of handling all the tasks I wish the engine to perform. The image to the right shows the current state of computational play. As we see, there is one Arduino Mega in the lower cabinet controlling the 116 LEDs on the front panel. Meanwhile, there are two Megas in the upper cabinet, with one controlling the LEDs in the furnace and the other controlling the LEDs associated with the large vacuum tubes.

Up until a couple of years ago, I was vaguely planning on adding more and more Megas. I was also cogitating and ruminating as to how I was going to get these little rascals to talk to each other so that everyone knew (a) what we were trying to do and (b) what everyone else was actually doing.

Unfortunately, the whole computational architecture was becoming unwieldy, so I started to look for another solution. You can only imagine my surprise and delight when I was first introduced to the original ShieldBuddy TC275 from the folks at Hitex (see Everybody Needs a ShieldBuddy). This little beauty, which has an Arduino Mega footprint, features the Aurix TC275 processor from Infineon. The TC275 boasts three 32-bit cores, all running at 200 MHz, each with its own floating-point unit (FPU), and all sharing 4 Mbytes of Flash and 500 Kbytes of RAM (this is a bit of a simplification, but it will suffice for now).

Processors like the Aurix are typically to be found only in state-of-the-art embedded systems and they rarely make it into the maker world. To be honest, when I first saw the ShieldBuddy TC275, I thought to myself, “Life can’t get any better than this!” Well, I was wrong, because the guys and gals at Hitex have just announced the ShieldBuddy TC375, which features an Aurix TC375 processor!

O.M.G! I just took delivery of one of these bodacious beauties, and I’m so excited that I was moved to make this video.

I don’t know where to start. As before, we have three 32-bit cores, each with its own FPU. This time, however, the cores run at 300 MHz. Although each core runs independently, the cores can communicate and coordinate with each other using techniques like shared memory and software interrupts. As for memory, the easiest summary is that the TC375 processor has:

  • 6 MB Flash ROM
  • 384 KB data flash

And each of the three cores has:

  • 240 KB Data Scratch-Pad RAM (DSPR)
  • 64 KB Program Scratch-Pad RAM (PSPR)
  • 32 KB Instruction Cache (ICACHE)
  • 16 KB Data Cache (DCACHE)
  • 64 KB DLMU RAM

Actually, there’s a lot more to this than meets the eye. For example, the main SRAMs (the DSPRs) associated with each of the cores appear at two locations in the memory map. In the case of Core 0, for example, the first location in its DSPR is located at address 0xD0000000 where it is considered to be local (i.e., it appears directly on Core 0’s local internal bus) and can be accessed quickly. However, this DSPR is also visible to Cores 1 and 2 at 0x70000000 via the main on-chip system bus, which allows them to read and write to this memory freely, but at a lower speed than Core 0. Similarly, Cores 1 and 2 access their own memories locally and each other’s memories globally.

Meet the ShieldBuddy TC375 (Click image to see a larger version — Image source: Hitex)

As for the original ShieldBuddy TC275, if you are a professional programmer, you’ll be delighted to hear that the main ShieldBuddy TC375 toolchain is the Eclipse-based “FreeEntryToolchain” from HighTec/PLS/Infineon. This is a full-on C/C++ development environment with source-level debugger and suchlike.

By comparison, if you are a novice programmer like your humble narrator, you’ll be overjoyed to hear that the ShieldBuddy TC375 can be programmed via the Arduino’s integrated development environment (IDE). As far as I’m concerned, this programming model is where things start to get very clever indeed.

An Arduino sketch (program) always contains two functions: setup(), which runs only one time, and loop(), which runs over and over again (the system automatically inserts a main() function while your back is turned). If you take an existing sketch and compile it for the ShieldBuddy, then it will run on Core 0 by default. You can achieve the same effect by renaming your setup() and loop() functions to be setup0() and loop0(), respectively.

Similarly, you can create setup1() and loop1() functions, which will automatically be compiled to run on Core 1, and you can create setup2() and loop2() functions, which will automatically be compiled to run on Core 2. Any of your “homegrown” functions will be compiled in such a way as to run on whichever of the cores needs to use them. I know that, like Pooh, I’m a bear of little brain, but even I can wrap my poor old noggin around this usage model.

There’s much, much more to this incredible board than I can cover here, but if you are interested in learning more, then may I recommend that you visit this portion of the Hitex site where you will find all sorts of goodies, including the ShieldBuddy Forum and the ShieldBuddy TC375 User Manual.

Now, if you will forgive me, I must away because I have to go and gloat over “my precious” (my ShieldBuddy TC375) and commence preparations to upgrade the Prognostication Engine by removing all of its existing processors and replacing them with a single ShieldBuddy TC375.

Actually, I just had a parting thought, which is that the Prognostication Engine’s role in life is to help me predict the future but — when I started out on this project — I would never have predicted that technology would develop so fast that I would one day have a triple-core 300 MHz processor driving “the beast.” How about you? What are your thoughts on all of this?

Originally posted here.

Read more…

IoT Sustainability, Data At The Edge.

Recently I've written quite a bit about the IoT, and one thing you may have picked up on is that the Internet of Things is made up of some very large numbers.

For starters, the number of connected things is measured in the tens of billions, heading toward hundreds of billions. Behind that very large number is an even bigger one: the amount of data these billions of devices are predicted to generate.

As FutureIoT pointed out, IDC forecasts that IoT devices will generate more than 79.4 zettabytes (ZB) of data by 2025.

How Much Is A Zettabyte?

A zettabyte is a very large number indeed, but how big? How can you get your head around it? Does this help...?

A zettabyte is 1,000,000,000,000,000,000,000 bytes. Hmm, that's still not very easy to visualise.

So let's think of it in terms of London buses. Imagine each byte is a passenger on a bus; a London bus can take 80 people, so you'd need 993 quintillion buses to accommodate 79.4 zettahumans.

I tried to work out how long a queue of 993 quintillion buses would be. Relating it to the distance to the Moon, Mars or the Sun wasn't doing it justice; the only comparable scale is the Milky Way itself. Even then, our 79.4 zettahumans lined up in London buses would stretch across the entire Milky Way ... and a fair bit further!

Sustainability Of Cloud Storage For 993 Quintillion Buses Of Data

Everything we do has an impact on the planet. Just by reading this article, you're generating 0.2 grams of Carbon Dioxide (CO2) emissions per second ... so I'll try to keep this short.

Stanford Magazine suggests that every 100 gigabytes of data stored in the Cloud could generate 0.2 tons of CO2 per year. On that basis, storing 79.4 zettabytes of data in the Cloud could be responsible for the production of 158.8 billion tons of greenhouse gases.


Putting that number into context, using USA Today numbers, the combined emissions of China, the USA, India, Russia, Japan and Germany came to a little over 21 billion tons in 2019.

So if we just let all the IoT devices stream their data to the Cloud, those billions of little gadgets would indirectly generate more than seven times the emissions of the six most industrialized countries combined.

Save The Planet, Store Data At The Edge

As mentioned in a previous article, not all data generated by IoT devices needs to be stored in the Cloud.

Speaking with ObjectBox, a company specializing in edge data storage, they say their users cut their Cloud data storage by 60% on average. So how does that work, then?

First, what does The Edge mean?

The term "Edge" refers to the edge of the network; in other words, the last piece of equipment or thing connected to the network, closest to the point of usage.

Let me illustrate with a rather over-simplified diagram.


How Can Edge Data Storage Improve Sustainability?

In an article about computer vision and AI on the edge, I talked about how vast amounts of network data could be saved if the cameras themselves could detect important events and send just those events over the network, not the entire video stream.

In that example, only the key events and meta data, like the identification marks of a vehicle crossing a stop light, needed to be transmitted across the network. However, it is important to keep the raw content at the edge, so it can be used for post processing, for further learning of the AI or even to be retrieved at a later date, e.g. by law-enforcement.

Another example could be sensors used to detect gas leaks, seismic activity, fires or broken glass. These sensors capture volumes of data every second, but they only need to alert someone when something happens: detection of abnormal gas levels, a tremor, a fire or a smashed window.

Those alerts are the primary purpose of those devices, but the data in between those events can also hold significant value. In this instance, keeping it locally at the edge, but having it available as and when needed, is an ideal way to reduce network traffic, reduce Cloud storage and save the planet (well, at least a little bit).

Accessible Data At The Edge

Keeping your data at the edge is a great way to save costs and increase performance, but you still want to be able to get access to it, when you need it.

ObjectBox have created not just one of the most efficient ways to store data at the edge, but they've also built a sophisticated and powerful method to synchronise data between edge devices, the Cloud and other edge devices.

Synchronise Data At The Edge - Fog Computing.

Fog Computing (which is computing that happens between the Cloud and the Edge) requires data to be exchanged with devices connected to the edge, but without going all the way to/from the servers in the Cloud. 

In the article on making smarter, safer cities, I talked about how by having AI-equipped cameras share data between themselves they could become smarter, more efficient. 

A solution like that could be using ObjectBox's synchronisation capabilities to efficiently discover and collect relevant video footage from various cameras to help either identify objects or even train the artificial intelligence algorithms running on the AI-equipped cameras at the edge.

Storing Data At The Edge Can Save A Bus Load Of CO2

Edge computing has a lot of benefits to offer; in this article I've just looked at one that is often overlooked: the cost of transferring and storing data. I've also not really delved into the broader benefits of ObjectBox's technology. For example, based on their open-source benchmarks, ObjectBox seems to offer a ten-times performance benefit compared to other solutions out there, and it is used by more than 300,000 developers.

The team behind ObjectBox also built technologies currently used by internet heavyweights like Twitter, Viber and Snapchat, so they seem to be doing something right. And if they can really cut network traffic by 60%, they could be one of the sustainable technology companies to watch.

Originally posted here.

Read more…

Edge Impulse has joined 1% for the Planet, pledging to donate 1% of our revenue to support nonprofit organizations focused on the environment. To complement this effort, we launched the ElephantEdge competition, aiming to create the world’s best elephant tracking device to protect elephant populations that would otherwise be impacted by poaching. In a similar vein, this blog will detail how Lacuna Space, Edge Impulse, a microcontroller and LoRaWAN can promote the conservation of endangered species by monitoring bird calls in remote areas.

Over the past years, The Things Network has worked toward the democratization of the Internet of Things, building a global, crowdsourced LoRaWAN network carried by thousands of users operating their own gateways worldwide. Thanks to Lacuna Space’s satellite constellation, the network coverage goes one step further. Lacuna Space uses LEO (Low-Earth Orbit) satellites to provide LoRaWAN coverage at any point around the globe. Messages received by satellites are then routed to ground stations and forwarded to LoRaWAN service providers such as TTN. This technology can benefit several industries and applications: tracking a vessel not only in harbors but across the oceans, or monitoring endangered species in remote areas. All that with only 25 mW of power (the ISM band limit) to send a message to the satellite. This is truly amazing!

Most of these devices are typically simple, just sending a single temperature value or other sensor reading to the satellite, but with machine learning we can track much more: what devices hear, see, or feel. In this blog post we'll take you through the process of deploying a bird sound classification project using an Arduino Nano 33 BLE Sense board and a Lacuna Space LS200 development kit. The inferencing results are then sent to a TTN application.

Note: Access to the Lacuna Space program and dev kit is limited to a closed group at the moment. Get in touch with Lacuna Space for hardware and software access. The technical details to configure your Arduino sketch and TTN application are available in our GitHub repository.

 

Our bird sound model classifies house sparrow and rose-ringed parakeet species with a 92% accuracy. You can clone our public project or make your own classification model following our different tutorials such as Recognize sounds from audio or Continuous Motion Recognition.


Once you have trained your model, head to the Deployment section, select the Arduino library and Build it.


Import the library within the Arduino IDE, and open the microphone continuous example sketch. We made a few modifications to this example sketch to interact with the LS200 dev kit: we added a new UART link and we transmit classification results only if the prediction score is above 0.8.

Connect with the Lacuna Space dashboard by following the instructions in our application’s GitHub ReadMe. Using a web tracker, you can determine the next good time a Lacuna Space satellite will be flying over your location; you can then receive the signal through your The Things Network application and view the inferencing results of the bird call classification:

    {
        "housesparrow": "0.91406",
        "redringedparakeet": "0.05078",
        "noise": "0.03125",
        "satellite": true
    }

No Lacuna Space development kit yet? No problem! You can already start building and verifying your ML models on the Arduino Nano 33 BLE Sense or one of our other development kits, test it out with your local LoRaWAN network (by pairing it with a LoRa radio or LoRa module) and switch over to the Lacuna satellites when you get your kit.

Originally posted on the Edge Impulse blog by Aurelien Lequertier - Lead User Success Engineer at Edge Impulse, Jenny Plunkett - User Success Engineer at Edge Impulse, & Raul James - Embedded Software Engineer at Edge Impulse

Read more…

The possibilities of what you can do with digital twin technology are only as limited as your imagination

Today, forward-thinking companies across industries are implementing digital twin technology in increasingly fascinating and ground-breaking ways. With Internet of Things (IoT) technology improving every day and more and more compute power readily available to organizations of all sizes, the possibilities of what you can do with digital twin technology are only as limited as your imagination.

What Is a Digital Twin?

A digital twin is a virtual representation of a physical asset that is practically indistinguishable from its physical counterpart. It is made possible thanks to IoT sensors that gather data from the physical world and send it to be virtually reconstructed. This data includes design and engineering details that describe the asset’s geometry, materials, components, and behavior or performance.

When combined with analytics, digital twin data can unlock hidden value for an organization and provide insights about how to improve operations, increase efficiency or discover and resolve problems before the real-world asset is affected.

These 4 Steps Are Critical for Digital Twin Success:

Involve the Entire Product Value Chain

It’s critical to involve stakeholders across the product value chain in your design and implementation. Each department faces diverse business challenges in their day-to-day operations, and a digital twin provides ready solutions to problems such as the inability to coordinate across end-to-end supply chain processes, minimal or no cross-functional collaboration, the inability to make data-driven decisions, or clouded visibility across the supply chain. Decision-makers at each level of the value chain have extensive knowledge on critical and practical challenges. Including their inputs will ensure a better and more efficient design of the digital twin and ensure more valuable and relevant insights.

Establish Well-Documented Practices

Standardized and well-documented design practices help organizations communicate ideas across departments, or across the globe, and make it easier for multiple users of the digital twin to build or alter the model without destroying existing components or repeating work. Best-in-class modelling practices increase transparency while simplifying and streamlining collaborative work.

Include Data From Multiple Sources

Data from multiple sources—both internal and external—is an essential part of creating realistic and helpful simulations. 3D modeling and geometry is sufficient to show how parts fit together and how a product works, but more input is required to model how various faults or errors might occur somewhere in the product’s lifecycle. Because many errors and problems can be nearly impossible to accurately predict by humans alone, a digital twin needs a vast amount of data and a robust analytics program to be able to run algorithms to make accurate forecasts and prevent downtime.

Ensure Long Access Lifecycles 

Digital twins implemented using proprietary design software have a risk of locking owners into a single vendor, which ties the long-term viability of the digital twin to the longevity of the supplier’s product. This risk is especially significant for assets with long lifecycles such as buildings, industrial machinery, airplanes, etc., since the lifecycles of these assets are usually much longer than software lifecycles. This proprietary dependency only becomes riskier and less sustainable over time. To overcome these risks, IT architects and digital twin owners need to carefully set terms with software vendors to ensure data compatibility is maintained and vendor lock-in can be avoided.

Common Pitfalls to Digital Twin Implementation

Digital twin implementation requires an extraordinary investment of time, capital, and engineering might, and as with any project of this scale, there are several common pitfalls to implementation success.

Pitfall 1: Using the Same Platform for Different Applications

Although it’s tempting to try and repurpose a digital twin platform, doing so can lead to incorrect data at best and catastrophic mistakes at worst. Each digital twin is completely unique to a part or machine; therefore, assets with unique operating conditions and configurations cannot share digital twin platforms.

Pitfall 2: Going Too Big, Too Fast

In the long run, a digital twin replica of your entire production line or building is possible and could provide incredible insights, but it is a mistake to try and deploy digital twins for all of your pieces of equipment or programs all at once. Not only is doing too much, too fast costly, but it might cause you to rush and miss critical data and configurations along the way. Rather than rushing to do it all at once, perfect a few critical pieces of machinery first and work your way up from there.

Pitfall 3: Inability to Source Quality Data

Data collected in the field is subject to quality errors due to human mistakes or duplicate entries. The insights your digital twin provides you are only as valuable as the data it runs off of. Therefore, it is imperative to standardize data collection practices across your organization and to regularly cleanse your data to remove duplicate and erroneous entries.

Pitfall 4: Lack of Device Communication Standards

If your IoT devices do not speak a common language, miscommunications can muddy your processes and compromise your digital twin initiative. Build an IT framework that allows your IoT devices to communicate with one another seamlessly to ensure success.

Pitfall 5: Failing to Get User Buy-In

As mentioned earlier in this eBook, a successful digital twin strategy includes users from across your product value chain. It is critical that your users understand and appreciate the value your digital twin brings to them individually and to your organization as a whole. Lack of buy-in due to skepticism, lack of confidence, or resistance can lead to a lack of user participation, which can undermine all of your efforts.

The Challenge of Measuring Digital Twin Success

Each digital twin is unique and completely separate in its function and end-goal from others on the market, which can make measuring success challenging. Depending on the level of the twin implemented, businesses need to create KPIs for each individual digital twin as it relates to larger organizational goals.

The configuration of digital twins is determined by the type of input data, number of data sources and the defined metrics. The configuration determines the value an organization can extract from the digital twin. Therefore, a twin with a higher configuration can yield better predictions than can a twin with a lower configuration. The reality is that success can be relative, and it is impossible to compare the effectiveness of two different digital twins side by side.

Conclusion

It’s possible — probable even — that in the future all people, enterprises, and even cities will have a digital twin. With the enormous growth predicted in the digital twin market in the coming years, it’s evident that the technology is here to stay. The possible applications of digital twins are truly limitless, and as IoT technology becomes more advanced and widely accessible, we’re likely to see many more innovative and disruptive use cases.

However, a technology with this much potential must be carefully and thoughtfully implemented in order to ensure its business value and long-term viability. Before embracing a digital twin, an organization must first audit its maturity, standardize processes, and prepare its culture and staff for this radical change in operations. Is your organization ready?

Originally posted here.

Read more…

Five IoT retail trends for 2021

In 2020 we saw retailers hard hit by the economic effects of the COVID-19 pandemic with dozens of retailers—Neiman Marcus, J.C. Penney, and Brooks Brothers to name a few— declaring bankruptcy. During the unprecedented chaos of lockdowns and social distancing, consumers accelerated their shift to online shopping. Retailers like Target and Best Buy saw online sales double while Amazon’s e–commerce sales grew 39 percent.1 Retailers navigated supply chain disruptions due to COVID-19, climate change events, trade tensions, and cybersecurity events.  

After the last twelve tumultuous months, what will 2021 bring for the retail industry? I spoke with Microsoft Azure IoT partners to understand how they are planning for 2021 and compiled insights about five retail trends. One theme we’re seeing is a focus on efficiency. Retailers will look to pre-configured digital platforms that leverage cloud-based technologies including the Internet of Things (IoT), artificial intelligence (AI), and edge computing to meet their business goals. 


Empowering frontline workers with real-time data

In 2021, retailers will increase efficiency by empowering frontline workers with real-time data. Retail employees will be able to respond more quickly to customers and expand their roles to manage curbside pickups, returns, and frictionless kiosks.  

In H&M Mitte Garten in Berlin, H&M empowered employee ambassadors with fashionable bracelets connected to the Azure cloud. Ambassadors were able to receive real-time requests via their bracelets when customers needed help in fitting rooms or at a cash desk. The ambassadors also received visual merchandising instructions and promotional updates. 

Through the app built on Microsoft partner Turnpike’s wearable SaaS platform leveraging Azure IoT Hub, these frontline workers could also communicate with their peers or their management team during or after store hours. With the real-time data from the connected bracelets, H&M ambassadors were empowered to deliver best-in-class service.

Carl Norberg, Founder, Turnpike explained, “We realized that by connecting store IoT sensors, POS systems, and AI cameras, store staff can be empowered to interact at the right place at the right time.” 

Leveraging live stream video to innovate omnichannel

Livestreaming has been exploding in China as influencers sell through their social media channels. Forbes recently projected that nearly 40 percent of China’s population will have viewed livestreams during 2020.2 Retailers in the West are starting to leverage live stream technology to create innovative omnichannel solutions.  

For example, Kjell & Company, one of Scandinavia’s leading consumer electronics retailers, is using a solution from Bambuser and Ombori called Omni-queue built on top of the Ombori Grid. Omni-queue enables store employees to handle a seamless combination of physical and online visitors within the same queue using one-to-one live stream video for online visitors.  

Kjell & Company ensures e-commerce customers receive the same level of technical expertise and personalized service they would receive in one of their physical locations. Omni-queue also enables its store employees to be utilized highly efficiently with advanced routing and knowledge matching. 

Maryam Ghahremani, CEO of Bambuser explains, “Live video shopping is the future, and we are so excited to see how Kjell & Company has found a use for our one-to-one solution.” Martin Knutson, CTO of Kjell & Company added “With physical store locations heavily affected due to the pandemic, offering a new and innovative way for customers to ask questions—especially about electronics—will be key to Kjell’s continued success in moving customers online.” 


Augmenting omnichannel with dark stores and micro-fulfillment centers  

In 2021, retailers will continue experimenting with dark stores—traditional retail stores that have been converted to local fulfillment centers—and micro-fulfillment centers. These supply chain innovations will increase efficiency by bringing products closer to customers. 

Microsoft partner Attabotics, a 3D robotics supply chain company, works with an American luxury department store retailer to reduce costs and delivery time using a micro-fulfillment center. Attabotics’ unique use of both horizontal and vertical space reduces warehouse needs by 85 percent. Attabotics’ structure and robotic shuttles leveraged Microsoft Azure Edge Zones, Azure IoT Central, and Azure Sphere.

The luxury retailer leverages the micro-fulfillment center to package and ship multiple beauty products together. As a result, customers experience faster delivery times. The retailer also reduces costs related to packaging, delivery, and warehouse space.  

Scott Gravelle, Founder, CEO, and CTO of Attabotics explained, “Commerce is at a crossroads, and for retailers and brands to thrive, they need to adapt and take advantage of new technologies to effectively meet consumers’ growing demands. Supply chains have not traditionally been set up for e-commerce. We will see supply chain innovations in automation and modulation take off in 2021 as they bring a wider variety of products closer to the consumer and streamline the picking and shipping to support e-commerce.” 


Helping keep warehouse workers safe

In 2021, retailers will continue to invest in keeping warehouse workers safe. What will this look like? Cognizant’s recent work with an athletic apparel retailer offers a blueprint. During the peak holiday season, the retailer needed to protect its expanding warehouse workforce while minimizing absenteeism. To implement physical distancing and other safety measures, the retailer leveraged Cognizant’s Safe Buildings solution built with Azure IoT Edge and IoT Hub services.

With this solution, employees maintained physical distancing using smart wristbands. When two smart wristbands came within a pre-defined distance of each other for more than a pre-defined time, the workers’ bands buzzed to reinforce safe behaviors. The results drove nearly 98 percent distancing compliance in the initial pilot. As the retailer plans to scale up its workforce at other locations, it is considering additional safety modules:

  • Touchless temperature checks.
  • Occupancy sensors that communicate capacity information to the management team for compliance records.
  • Air quality sensors that provide environmental data so the facility team can help ensure optimal conditions for workers’ health.

“For organizations to thrive during and post-pandemic, enterprise-grade workplace safety cannot be compromised. Real-time visibility of threats is providing essential businesses an edge in minimizing risks proactively while building employee trust and empowering productivity in a safer workplace,” said Rajiv Mukherjee, Cognizant’s IoT Practice Director for Retail and Consumer Goods.

Optimizing inventory management with real-time edge data

In 2021, retailers will ramp up the adoption of intelligent edge solutions to optimize inventory management with real-time data. Most retailers have complex inventory management systems. However, no matter how good the systems are, there can still be data gaps due to grocery pick-up services, theft, and sweethearting. The key to addressing these gaps is to combine real-time data from applications running on edge cameras and other edge devices in the physical store with backend enterprise resource planning (ERP) data.  

Seattle Goodwill worked with Avanade to implement a new Microsoft-based Dynamics platform across its 24 stores. The new system provided almost real-time visibility into the movement of goods from the warehouses to the stores. 

Rasmus Hyltegård, Director of Advanced Analytics at Avanade explained, “To ensure inventory moves quickly off the shelves, retailers can combine real-time inventory insights from Avanade’s smart inventory accelerator with other solutions across the customer journey to meet customer expectations.” Hyltegård continued, “Customers can check online to find the products they want, find the stores with product in stock, and gain insight into which stores have the shortest queues, which is important during the pandemic and beyond. Once a customer is in the store, digital signage allows for endless aisle support.” 


Summary

The new year 2021 holds a wealth of opportunities for retailers. We foresee retail leaders reimagining their businesses by investing in platforms that integrate IoT, AI, and edge computing technologies. Retailers will focus on increasing efficiencies to reduce costs. Modular platforms supported by an ecosystem of strong partner solutions will empower frontline workers with data, augment omnichannel fulfillment with dark stores and micro-fulfillment, leverage livestream video to enhance omnichannel, prioritize warehouse worker safety, and optimize inventory management with real-time data. 

Originally posted here.

Read more…

Security has long been a worry for Internet of Things projects, and for many organizations with active or planned IoT deployments, security concerns have hampered digital ambitions. By implementing IoT security best practices, however, risk can be minimized.


Technological fragmentation is not just one of the biggest barriers to IoT adoption, but it also complicates the goal of securing connected devices and related services. With IoT-related cyberattacks on the rise, organizations must become more adept at managing cyber-risk or face potential reputational and legal consequences. This article summarizes best practices for enterprise and industrial IoT projects.

Key takeaways from this article include the following:

  • Data security remains a central technology hurdle related to IoT deployments.
  • IoT security best practices also can help organizations curb the risk of broader digital transformation initiatives.
  • Securing IoT projects requires a comprehensive view that encompasses the entire life cycle of connected devices and relevant supply chains.

Fragmentation and security have long been two of the most significant barriers to Internet of Things adoption. The two challenges are also closely related.

Despite the Internet of Things (IoT) moniker, which implies a synthesis of connected devices, IoT technologies vary considerably based on their intended use. Organizations deploying IoT thus rely on an array of connectivity types, standards and hardware. As a result, even a simple IoT device can pose many security vulnerabilities, including weak authentication, insecure cloud integration, and outdated firmware and software.

For many organizations with active or planned IoT deployments, security concerns have hampered digital ambitions. An IoT World Today August 2020 survey revealed data security as the top technology hurdle for IoT deployments, selected by 46% of respondents.

Fortunately, IoT security best practices can help organizations reduce the risks facing their deployments and broader digital transformation initiatives. These same best practices can also reduce legal liability and protect an organization’s reputation.

But to be effective, an IoT-focused security strategy requires a broad view that encompasses the entire life cycle of an organization’s connected devices and projects in addition to relevant supply chains.

Know What You Have and What You Need

Asset management is a cornerstone of effective cyber defence. Organizations should identify which processes and systems need protection. They should also strive to assess the risk cyber attacks pose to assets and their broader operations.

In terms of enterprise and industrial IoT deployments, asset awareness is frequently spotty. It can be challenging given the array of industry verticals and the lack of comprehensive tools to track assets across those verticals. But asset awareness also demands a contextual understanding of the computing environment, including the interplay among devices, personnel, data and systems, as the National Institute of Standards and Technology (NIST) has observed.

There are two fundamental questions when creating an asset inventory: What is on my network? And what are these assets doing on my network?

Answering the latter requires tracking endpoints’ behaviours and their intended purpose from a business or operational perspective. From a networking perspective, asset management should involve more than counting networking nodes; it should focus on data protection and building intrinsic security into business processes.
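The two inventory questions above can be captured in a minimal asset register that records not just what a device is, but what it is supposed to do. This is a sketch only; the fields and the allowed-peers baseline are illustrative assumptions, not a standard schema.

```python
from dataclasses import dataclass, field

@dataclass
class Asset:
    """One networked device in the inventory (fields are illustrative)."""
    mac: str
    device_type: str          # e.g. "PLC", "camera", "badge reader"
    owner: str                # business unit accountable for the device
    purpose: str              # what the asset does, in business terms
    allowed_peers: set = field(default_factory=set)  # hosts it may talk to

inventory = {}

def register(asset: Asset) -> None:
    inventory[asset.mac] = asset

def is_expected(mac: str, peer: str) -> bool:
    """Answer 'what is this asset doing on my network?' for one flow."""
    asset = inventory.get(mac)
    return asset is not None and peer in asset.allowed_peers

register(Asset("00:1a:2b:3c:4d:5e", "camera", "facilities",
               "lobby video feed", {"10.0.0.5"}))
```

A flow from an unregistered MAC, or to a peer outside the baseline, is exactly the kind of anomaly the contextual awareness NIST describes is meant to surface.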

Relevant considerations include the following:

  • Compliance with relevant security and privacy laws and standards.
  • Interval of security assessments.
  • Optimal access of personnel to facilities, information and technology, whether remote or in-person.
  • Data protection for sensitive information, including strong encryption for data at rest and data in transit.
  • Degree of security automation versus manual controls, as well as physical security controls to ensure worker safety.
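For the data-in-transit bullet above, Python's standard ssl module can illustrate the baseline a gateway should enforce: verify server certificates, check hostnames, and refuse pre-1.2 TLS versions. The minimum version chosen here is an assumption; set it per your compliance requirements.

```python
import ssl

def strict_client_context() -> ssl.SSLContext:
    """TLS context for an IoT gateway's outbound connections:
    certificate verification on, hostname checking on, TLS >= 1.2."""
    ctx = ssl.create_default_context()            # CERT_REQUIRED by default
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2  # reject SSLv3/TLS 1.0/1.1
    return ctx

ctx = strict_client_context()
# ctx.wrap_socket(sock, server_hostname="broker.example.com") would then
# protect the sensor-to-server channel (hostname is a placeholder).
```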

IoT device makers and application developers should also implement a vulnerability disclosure program, with public contact information for security researchers and plans for responding to disclosed vulnerabilities. Bug bounty programs are another option.

Organizations that have accurately assessed current cybersecurity readiness need to set relevant goals and create a comprehensive governance program to manage and enforce operational and regulatory policies and requirements. Governance programs also ensure that appropriate security controls are in place. Organizations need to have a plan to implement controls and determine accountability for that enforcement. Another consideration is determining when security policies need to be revised.

An effective governance plan is vital for engineering security into architecture and processes, as well as for safeguarding legacy devices with relatively weak security controls. Devising an effective risk management strategy for enterprise and industrial IoT devices is a complex endeavour, potentially involving a series of stakeholders and entities. Organizations that find it difficult to assess the cybersecurity of their IoT project should consider third-party assessments.

Many tools are available to help organizations evaluate cyber-risk and defences. These include the National Vulnerability Database and the Security and Privacy Controls for Information Systems and Organizations document from the National Institute of Standards and Technology. Another resource is the list of 20 Critical Security Controls for Effective Cyber Defense. In terms of studying the threat landscape, MITRE ATT&CK is one of the most popular frameworks for adversary tactics and techniques.

At this stage of the process, another vital consideration is the degree of cybersecurity savviness and support within your business. Three out of ten organizations deploying IoT cite lack of support for cybersecurity as a hurdle, according to August 2020 research from IoT World Today. Security awareness is also frequently a challenge. Many cyberattacks against organizations — including those with an IoT element — involve phishing, like the 2015 attack against Ukraine’s electric grid.

IoT Security Best Practices

Internet of Things projects demand a secure foundation. That starts with asset awareness and extends into responding to real and simulated cyberattacks.

Step 1: Know what you have.

Building an IoT security program starts with achieving a comprehensive understanding of which systems need to be protected.

Step 2: Deploy safeguards.

Shielding devices from cyber-risk requires a thorough approach. This step involves cyber-hygiene, effective asset control and the use of other security controls.

Step 3: Identify threats.

Spotting anomalies can help mitigate attacks. Defenders should hone their skills through wargaming.

Step 4: Respond effectively.

Cyberattacks are inevitable, but each response should generate lessons that feed back into step 1.

Exploiting human gullibility is one of the most common cybercriminal strategies. While cybersecurity training can help individuals recognize suspected malicious activities, such programs tend not to be entirely effective. “It only takes one user and one-click to introduce an exploit into a network,” wrote Forrester analyst Chase Cunningham in the book “Cyber Warfare.” Recent studies have found that, even after receiving cybersecurity training, employees continue to click on phishing links about 3% of the time.

Security teams should work to earn the support of colleagues, while also factoring in the human element, according to David Coher, former head of reliability and cybersecurity for a major electric utility. “You can do what you can in terms of educating folks, whether it’s as a company IT department or as a consumer product manufacturer,” he said. But it is essential to put controls in place that can withstand user error and occasionally sloppy cybersecurity hygiene.

At the same time, organizations should also look to pool cybersecurity expertise inside and outside the business. “Designing the controls that are necessary to withstand user error requires understanding what users do and why they do it,” Coher said. “That means pulling together users from throughout your organization’s user chain — internal and external, vendors and customers, and counterparts.”

Those counterparts are easier to engage in some industries than others. Utilities, for example, have a strong track record in this regard, because of the limited market competition between them. Collaboration “can be more challenging in other industries, but no less necessary,” Coher added.

Deploy Appropriate Safeguards

Protecting an organization from cyberattacks demands a clear framework that is sensitive to business needs. While regulated industries are obligated to comply with specific cybersecurity-related requirements, consumer-facing organizations tend to have more generic requirements for privacy protections, data breach notifications and so forth. That said, all types of organizations deploying IoT have leeway in selecting a guiding philosophy for their cybersecurity efforts.

A basic security principle is to minimize networked or vulnerable systems’ attack surface — for instance, closing unused network ports and eliminating IoT device communication over the open internet. Generally speaking, building security into the architecture of IoT deployments and reducing attackers’ options to sabotage a system is more reliable than adding layers of defence to an unsecured architecture. Organizations deploying IoT projects should consider intrinsic security functionality such as embedded processors with cryptographic support.
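One quick way to act on the "close unused ports" advice is to audit which TCP ports a device actually answers on. This socket-based probe is a minimal sketch; the port list is an example, and a real audit would scan from a second host and cover the full port range.

```python
import socket

def open_ports(host: str, ports, timeout: float = 0.5) -> list:
    """Return the subset of `ports` that accept a TCP connection."""
    found = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((host, port)) == 0:   # 0 means connected
                found.append(port)
    return found

# Ports an IoT gateway often should NOT expose (illustrative list):
# FTP, Telnet, plain HTTP, unencrypted MQTT, alternate HTTP.
risky = [21, 23, 80, 1883, 8080]
exposed = open_ports("127.0.0.1", risky)
```

Anything reported by such a probe that has no documented business purpose is attack surface to remove.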

But it is not practical to remove all risk from an IT system. For that reason, one of the most popular options is defence-in-depth, a military-rooted concept espousing the use of multiple layers of security. The basic idea is that if one countermeasure fails, additional security layers are available.

While the core principle of implementing multiple layers of security remains popular, defence in depth is also tied to the concept of perimeter-based defence, which is increasingly falling out of favour. “The defence-in-depth approach to cyber defence was formulated on the basis that everything outside of an organization’s perimeter should be considered ‘untrusted’ while everything internal should be inherently ‘trusted,’” said Andrew Rafla, a Deloitte Risk & Financial Advisory principal. “Organizations would layer a set of boundary security controls such that anyone trying to access the trusted side from the untrusted side had to traverse a set of detection and prevention controls to gain access to the internal network.”

Several trends have chipped away at the perimeter-based model. As a result, “modern enterprises no longer have defined perimeters,” Rafla said. “Gone are the days of inherently trusting any connection based on where the source originates.” Trends ranging from the proliferation of IoT devices and mobile applications to the popularity of cloud computing have fueled interest in cybersecurity models such as zero trust. “At its core, zero trust commits to ‘never trusting, always verifying’ as it relates to access control,” Rafla said. “Within the context of zero trusts, security boundaries are created at a lower level in the stack, and risk-based access control decisions are made based on contextual information of the user, device, workload or network attempting to gain access.”

Zero trust’s roots stretch back to the 1970s when a handful of computer scientists theorized on the most effective access control methods for networks. “Every program and every privileged user of the system should operate using the least amount of privilege necessary to complete the job,” one of those researchers, Jerome Saltzer, concluded in 1974.

While the concept of least privilege sought to limit trust among internal computing network users, zero trust extends the principle to devices, networks, workloads and external users. The recent surge in remote working has accelerated interest in the zero-trust model. “Many businesses have changed their paradigm for security as a result of COVID-19,” said Jason Haward-Grau, a leader in KPMG’s cybersecurity practice. “Many organizations are experiencing a surge to the cloud because businesses have concluded they cannot rely on a physically domiciled system in a set location.”

Based on data from Deloitte, 37.4% of businesses accelerated their zero-trust adoption plans in response to the pandemic. In contrast, more than one-third, or 35.2%, of those embracing zero trust stated that the pandemic had not changed the speed of their organization’s zero-trust adoption.

“I suspect that many of the respondents that said their organization’s zero-trust adoption efforts were unchanged by the pandemic were already embracing zero trust and were continuing with efforts as planned,” Rafla said. “In many cases, the need to support a completely remote workforce in a secure and scalable way has provided a tangible use case to start pursuing zero-trust adoption.”

A growing number of organizations are beginning to blend aspects of zero trust and traditional perimeter-based controls through a model known as secure access service edge (SASE), according to Rafla. “In this model, traditional perimeter-based controls of the defence-in-depth approach are converged and delivered through a cloud-based subscription service,” he said. “This provides a more consistent, resilient, scalable and seamless user experience regardless of where the target application a user is trying to access may be hosted. User access can be tightly controlled, and all traffic passes through multiple layers of cloud-based detection and prevention controls.”

Regardless of the framework, organizations should have policies in place for access control and identity management, especially for passwords. As Forrester’s Cunningham noted in “Cyber Warfare,” the password, “the single most prolific means of authentication for enterprises, users, and almost any system on the planet,” is the linchpin of failed security in cyberspace: “Almost everything uses a password at some stage.” Numerous password repositories have been breached, and passwords are frequently recycled, making the password a common security weakness for user accounts as well as IoT devices.

A significant number of consumer-grade IoT devices have also had their default passwords posted online. Weak passwords used in IoT devices also fueled the growth of the Mirai botnet, which led to widespread internet outages in 2016. More recently, unsecured passwords on IoT devices in enterprise settings have reportedly attracted state-sponsored actors’ attention.
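A first defence against the default-credential problem described above is to screen device passwords against known factory defaults before provisioning. The blocklist here is a tiny illustrative sample of the credential lists Mirai-style botnets exploit, and the minimum length is an arbitrary policy choice.

```python
# Tiny illustrative sample of factory-default credentials (real lists are
# far longer and published online).
KNOWN_DEFAULTS = {"admin", "root", "12345", "password", "888888", "default"}

def password_acceptable(password: str, min_length: int = 12) -> bool:
    """Reject factory defaults (case-insensitively) and short passwords."""
    if password.lower() in KNOWN_DEFAULTS:
        return False
    return len(password) >= min_length
```

A provisioning workflow would call this check before a device is allowed onto the network, forcing a unique credential per unit.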

IoT devices and related systems also need an effective mechanism for device management, including tasks such as patching, connectivity management, device logging, device configuration, software and firmware updates and device provisioning. Device management capabilities also extend to access control modifications and include remediation of compromised devices. It is vital to ensure that device management processes themselves are secure and that a system is in place for verifying the integrity of software updates, which should be regular and not interfere with device functionality.

Organizations must additionally address the life span of devices and the cadence of software updates. Many environments allow IT pros to identify a specific end-of-life period and remove or replace expired hardware. In such cases, there should be a plan for device disposal or transfer of ownership. In other contexts, such as in industrial environments, legacy workstations don’t have a defined expiration date and run out-of-date software. These systems should be segmented on the network. Often, such industrial systems cannot be easily patched like IT systems are, requiring security professionals to perform a comprehensive security audit on the system before taking additional steps.

Identify Threats and Anomalies

In recent years, attacks have become so common that the cybersecurity community has shifted its approach from preventing breaches to assuming a breach has already happened. The threat landscape has evolved to the point that cyberattacks against most organizations are inevitable.

“You hear it everywhere: It’s a matter of when, not if, something happens,” said Dan Frank, a principal at Deloitte specializing in privacy and data protection. Matters have only become more precarious in 2020. The FBI has reported a three- to four-fold increase in cybersecurity complaints after the advent of COVID-19.

Advanced defenders have taken a more aggressive stance known as threat hunting, which focuses on proactively identifying breaches. Another popular strategy is to study adversary behaviour and tactics to classify attack types. Models such as the MITRE ATT&CK framework and the Common Vulnerability Scoring System (CVSS) are popular for assessing adversary tactics and vulnerabilities.

While approaches to analyzing vulnerabilities and potential attacks vary according to an organization’s maturity, situational awareness is a prerequisite at any stage. The U.S. Army Field Manual defines the term like this: “Knowledge and understanding of the current situation which promotes timely, relevant and accurate assessment of friendly, enemy and other operations within the battlespace to facilitate decision making.”

In cybersecurity as in warfare, situational awareness requires a clear perception of the elements in an environment and their potential to cause future events. In some cases, the possibility of a future cyber attack can be averted by merely patching software with known vulnerabilities.

Intrusion detection systems can automate some degree of monitoring of networks and operating systems. Intrusion detection systems based on detecting malware signatures can also identify common attacks. They are, however, not effective at recognizing so-called zero-day malware, which has not yet been catalogued by security researchers. Signature-based detection is likewise ineffective against custom attacks (e.g., a disgruntled employee who knows just enough Python or PowerShell to be dangerous). Sophisticated threat actors who slip through defences to gain network access can become insiders, with permission to view sensitive networks and files. In such cases, situational awareness is a prerequisite to mitigating damage.

Another strategy for intrusion detection systems is to focus on context and anomalies rather than malware signatures. Such systems could use machine learning to learn legitimate commands, use of messaging protocols and so forth. While this strategy overcomes the reliance on malware signatures, it can potentially trigger false alarms. Such a system can also detect so-called slow-rate attacks, a type of denial of service attack that gradually robs networking bandwidth but is more difficult to detect than volumetric attacks.
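The signature-free, anomaly-based approach described above can be sketched with a simple statistical baseline: learn a device's normal message rate, then flag readings far outside it in either direction, which also catches the gradual bandwidth loss of a slow-rate attack. Real systems use richer features and ML models; the z-score threshold of 3 is a conventional but arbitrary choice.

```python
from statistics import mean, stdev

class RateAnomalyDetector:
    """Flags message rates far from a device's learned baseline."""

    def __init__(self, baseline_rates, threshold: float = 3.0):
        self.mu = mean(baseline_rates)       # learned normal rate
        self.sigma = stdev(baseline_rates)   # learned normal variation
        self.threshold = threshold           # z-score cutoff

    def is_anomalous(self, rate: float) -> bool:
        if self.sigma == 0:
            return rate != self.mu
        return abs(rate - self.mu) / self.sigma > self.threshold

# Baseline: a sensor normally sends 9-11 messages per minute.
detector = RateAnomalyDetector([10, 9, 11, 10, 10, 9, 11])
```

Because the check is two-sided, a sudden burst (possible compromise or data storm) and a quiet decline (possible slow-rate attack) both raise an alert; the trade-off, as the text notes, is a risk of false alarms.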

Respond Effectively to Cyber-Incidents

The foundation for successful cyber-incident response lies in having concrete security policies, architecture and processes. “Once you have a breach, it’s kind of too late,” said Deloitte’s Frank. “It’s what you do before that matters.”

That said, the goal of warding off all cyber-incidents, which range from violations of security policies and laws to data breaches, is not realistic. It is thus essential to implement short- and long-term plans for managing cybersecurity emergencies. Organizations should have contingency plans for addressing possible attacks, practise responding to them through wargaming exercises to improve their ability to mitigate some cyberattacks, and develop effective, coordinated escalation measures for successful breaches.

There are several aspects of the zero trust model that enhance organizations’ ability to respond and recover from cyber events. “Network and micro-segmentation, for example, is a concept by which trust zones are created by organizations around certain classes or types of assets, restricting the blast radius of potentially destructive cyberattacks and limiting the ability for an attacker to move laterally within the environment,” Rafla said. Also, efforts to automate and orchestrate zero trust principles can enhance the efficiency of security operations, speeding efforts to mitigate attacks. “Repetitive and manual tasks can now be automated and proactive actions to isolate and remediate security threats can be orchestrated through integrated controls,” Rafla added.

Response to cyber-incidents involves coordinating multiple stakeholders beyond the security team. “Every business function could be impacted — marketing, customer relations, legal compliance, information technology, etc.,” Frank said.

A six-tiered model for cyber incident response from the SANS Institute contains the following steps:

  • Preparation: Preparing the team to react to events ranging from cyberattacks to hardware failure and power outages.
  • Identification: Determining if an operational anomaly should be classified as a cybersecurity incident, and how to respond to it.
  • Containment: Segmenting compromised devices on the network long enough to limit damage in the event of a confirmed cybersecurity incident. Longer-term containment measures involve hardening affected systems so they can support normal operations.
  • Eradication: Removing or restoring compromised systems. If a security team detects malware on an IoT device, for instance, this phase could involve reimaging its hardware to prevent reinfection.
  • Recovery: Integrating previously compromised systems back into production and ensuring they operate normally after that. In addition to addressing the security event directly, recovery can involve crisis communications with external stakeholders such as customers or regulators.
  • Lessons Learned: Documenting and reviewing the factors that led to the cyber-incident and taking steps to avoid future problems. Feedback from this step should create a feedback loop providing insights that support future preparation, identification, etc.
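The six SANS steps above form a loop, which can be modeled as a small state machine. The transition table is a direct reading of the list, with lessons learned feeding back into preparation; this is a sketch for reasoning about the process, not incident-response tooling.

```python
from enum import Enum, auto

class Phase(Enum):
    PREPARATION = auto()
    IDENTIFICATION = auto()
    CONTAINMENT = auto()
    ERADICATION = auto()
    RECOVERY = auto()
    LESSONS_LEARNED = auto()

# Each phase leads to the next; lessons learned closes the feedback loop.
NEXT = {
    Phase.PREPARATION: Phase.IDENTIFICATION,
    Phase.IDENTIFICATION: Phase.CONTAINMENT,
    Phase.CONTAINMENT: Phase.ERADICATION,
    Phase.ERADICATION: Phase.RECOVERY,
    Phase.RECOVERY: Phase.LESSONS_LEARNED,
    Phase.LESSONS_LEARNED: Phase.PREPARATION,
}

def advance(phase: Phase) -> Phase:
    """Move an incident to the next phase of the response cycle."""
    return NEXT[phase]
```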

While the bulk of the SANS model focuses on cybersecurity operations, the last step should be a multidisciplinary process. Investing in cybersecurity liability insurance to offset risks identified after ongoing cyber-incident response requires support from upper management and the legal team. Ensuring compliance with the evolving regulatory landscape also demands feedback from the legal department.

A central practice that can prove helpful is documentation — not just for security incidents, but as part of ongoing cybersecurity assessment and strategy. Organizations with mature security documentation tend to be better positioned to deal with breaches.

“If you fully document your program — your policies, procedures, standards and training — that might put you in a more favourable position after a breach,” Frank explained. “If you have all that information summarized and ready, in the event of an investigation by a regulatory authority after an incident, it shows the organization has robust programs in place.”

Documenting security events and controls can help organizations become more proactive and more capable of embracing automation and machine learning tools. As they collect data, they should repeatedly ask how to make the most of it. KPMG’s Haward-Grau said cybersecurity teams should consider the following questions:

  • What data should we focus on?
  • What can we do to improve our operational decision making?
  • How do we reduce our time and costs efficiently and effectively, given the nature of the reality in which we’re operating?

Ultimately, answering those questions may involve using machine learning or artificial intelligence technology, Haward-Grau said. “If your business is using machine learning or AI, you have to digitally enable them so that they can do what they want to do,” he said.

Finally, documenting security events and practices as they relate to IoT devices and beyond can be useful in evaluating the effectiveness of cybersecurity spending and provide valuable feedback for digital transformation programs. “Security is a foundational requirement that needs to be ingrained holistically in architecture and processes and governed by policies,” said Chander Damodaran, chief architect at Brillio, a digital consultancy firm. ”Security should be a common denominator.”

IoT Security

Recent legislation requires businesses to assume responsibility for protecting Internet of Things (IoT) devices. “Security by Design” approaches are essential, since successful applications deploy millions of units and analysts predict billions of devices deployed in the next five to ten years. The cost of fixing compromised devices later could overwhelm a business.

Security risks can never be eliminated: there is no single solution for all concerns, and the cost to counter every possible threat vector is prohibitively expensive. The best we can do is minimize the risk, and design devices and processes to be easily updatable.

It is best to assess damage potential and implement security methods accordingly. For example, for temperature and humidity sensors used in environmental monitoring, data protection needs are not as stringent as devices transmitting credit card information. The first may require anonymization for privacy, and the second may require encryption to prevent unauthorized access.

Overall Objectives

Senders and receivers must authenticate. IoT devices must transmit to the correct servers and ensure they receive messages from the correct servers.
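The mutual-authentication requirement can be illustrated with a symmetric-key challenge-response built on Python's stdlib hmac module. This is a sketch only: a production design would use per-device keys held in secure storage, and often certificate-based TLS authentication instead; the key generated here is purely illustrative.

```python
import hashlib
import hmac
import os

DEVICE_KEY = os.urandom(32)  # per-device secret, provisioned at manufacture

def respond(challenge: bytes, key: bytes) -> bytes:
    """Prove possession of the key without revealing it."""
    return hmac.new(key, challenge, hashlib.sha256).digest()

def verify(challenge: bytes, response: bytes, key: bytes) -> bool:
    expected = hmac.new(key, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)  # constant-time compare

# Server authenticates the device; running the same flow in the other
# direction lets the device authenticate the server, making it mutual.
challenge = os.urandom(16)
response = respond(challenge, DEVICE_KEY)
```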

Mission-critical applications, such as vehicle crash notification or medical alerts, may fail if the connection is not reliable. Lack of communication itself is a lack of security.

Connectivity errors can make good data unreliable, and actions on the content may be erroneous. It is best to select connectivity providers with strong security practices—e.g., whitelisting access and traffic segregation to prevent unauthorized communication.


IoT Security: 360-Degree Approach

Finally, only authorized recipients should access the information. In particular, privacy laws require extra care in accessing the information on individuals.

Data Chain

Developers should implement security best practices at all points in the chain. While traditional IT security protects servers with access controls, intrusion detection and the like, the farther from the servers that best practices are implemented, the less impact a breach of remote IoT devices has on the overall application.

For example, compromised sensors might send bad data, and servers might take incorrect actions despite data filtering. Gateways thus offer an ideal location for security: they have the compute capacity for encryption and can implement over-the-air (OTA) updates for security fixes.

Servers often automate responses on data content. Simplistic and automated responses to bad data could cascade into much greater difficulty. If devices transmit excessively, servers could overload and fail to provide timely responses to transmissions—retry algorithms resulting from network unavailability often create data storms.
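The data-storm failure mode above is usually mitigated on the device side with capped exponential backoff plus random jitter, so that thousands of devices reconnecting after an outage do not retry in lockstep. The base delay and cap below are illustrative values.

```python
import random

def backoff_delay(attempt: int, base: float = 1.0, cap: float = 300.0) -> float:
    """Full-jitter exponential backoff: the delay ceiling doubles with
    each failed attempt, and the actual delay is randomized within it
    so a fleet of devices spreads its retries out over time."""
    ceiling = min(cap, base * (2 ** attempt))
    return random.uniform(0.0, ceiling)

# After a failed transmission, a device sleeps backoff_delay(n) seconds
# before retry n, instead of hammering the server immediately.
```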

IoT devices often use electrical power rather than batteries, and compromised units could continue to operate for years. Implementing over-the-air (OTA) functions for remotely disabling devices could be critical.

When a breach requires device firmware updates, OTA support is vital when devices are inaccessible or large numbers of units must be modified rapidly. All devices should support OTA, even if it increases costs—for example, adding memory for managing multiple “images” of firmware for updates.
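Verifying an image before flashing is the critical step in any OTA update path. A minimal integrity check hashes the received image and compares the digest to one delivered through a separately authenticated channel (the update manifest); real deployments additionally sign images with asymmetric keys, which this sketch does not show.

```python
import hashlib
import hmac

def verify_image(image: bytes, expected_sha256: str) -> bool:
    """Accept a firmware image only if its SHA-256 digest matches the
    digest published through an authenticated channel."""
    digest = hashlib.sha256(image).hexdigest()
    return hmac.compare_digest(digest, expected_sha256)

# Simulated OTA payload and its manifest digest (illustrative bytes):
firmware = b"\x7fELF...firmware-v2.1"
published = hashlib.sha256(firmware).hexdigest()
```

A device that fails this check should keep running its current image, which is one reason the text recommends budgeting memory for multiple firmware images.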

In summary, the IoT security best practices of authentication, encryption, remote device disablement and OTA security fixes, along with traditional IT server protection, offer the best chance of minimizing the risks of attacks on IoT applications.

Originally posted here.

Read more…

Skoltech researchers and their colleagues from Russia and Germany have designed an on-chip printed "electronic nose" that serves as a proof of concept for this kind of low-cost, sensitive device for use in portable electronics and healthcare. The paper was published in the journal ACS Applied Materials & Interfaces.

The rapidly growing fields of Internet of Things (IoT) and advanced medical diagnostics require small, cost-effective, low-powered yet reasonably sensitive and selective gas-analytical systems like so-called "electronic noses." These systems can be used for noninvasive diagnostics of human breath, such as diagnosing chronic obstructive pulmonary disease (COPD) with a compact sensor system also designed at Skoltech. Some of these sensors work a lot like actual noses—say, yours—by using an array of sensors to better detect the complex signal of a gaseous compound.

One approach to creating these sensors is additive manufacturing, which has achieved enough power and precision to produce the most intricate devices. Skoltech senior research scientist Fedor Fedorov, Professor Albert Nasibulin, research scientist Dmitry Rupasov and their collaborators created a multisensor "electronic nose" by printing nanocrystalline films of eight different metal oxides onto a multielectrode chip (oxides of manganese, cerium, zirconium, zinc, chromium, cobalt, tin, and titanium). The Skoltech team came up with the idea for this project.

"For this work, we used microplotter printing and true solution inks. There are a few things that make it valuable. First, the resolution of the printing is close to the distance between electrodes on the chip which is optimized for more convenient measurements. We show these technologies are compatible. Second, we managed to use several different oxides which enables more orthogonal signal from the chip resulting in improved selectivity. We can also speculate that this technology is reproducible and easy to be implemented in industry to obtain chips with similar characteristics, and that is really important for the 'e-nose' industry," Fedorov explained.

In subsequent experiments, the device was able to sniff out the difference between different alcohol vapors (methanol, ethanol, isopropanol, and n-butanol), which are chemically very similar and hard to tell apart, at low concentrations in the air. Since methanol is extremely toxic, detecting it in beverages and differentiating between methanol and ethanol can even save lives. To process the data, the team used linear discriminant analysis (LDA), a pattern recognition algorithm, but other machine learning algorithms could also be used for this task.
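For intuition, the pattern-recognition step can be approximated by a nearest-centroid classifier over the eight-channel sensor response. This toy stand-in is not the LDA the paper used: the response vectors are made up, and Euclidean distance replaces the learned discriminant axes. It only illustrates how an array's combined signal separates chemically similar analytes.

```python
from math import dist  # Euclidean distance (Python 3.8+)

# Made-up 8-channel responses (one value per metal-oxide film) per analyte;
# a real system would learn these centroids from calibration measurements.
TRAINING = {
    "methanol":    [0.9, 0.2, 0.4, 0.7, 0.1, 0.3, 0.8, 0.5],
    "ethanol":     [0.6, 0.5, 0.3, 0.2, 0.4, 0.7, 0.2, 0.6],
    "isopropanol": [0.3, 0.8, 0.6, 0.1, 0.9, 0.2, 0.4, 0.3],
}

def classify(response):
    """Label a new 8-channel reading by its nearest class centroid."""
    return min(TRAINING, key=lambda name: dist(TRAINING[name], response))
```

The point the paper makes is visible even in this sketch: no single channel separates the alcohols, but the eight-dimensional pattern does.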

So far the device operates at rather high temperatures of 200-400 degrees Celsius, but the researchers believe that new quasi-2-D materials such as MXenes, graphene and so on could be used to increase the sensitivity of the array and ultimately allow it to operate at room temperature. The team will continue working in this direction, optimizing the materials used to lower power consumption.

Originally posted here.

Read more…

The benefits of IoT data are widely touted. Enhanced operational visibility, reduced costs, improved efficiencies and increased productivity have driven organizations to take major strides towards digital transformation. With countless promising business opportunities, it’s no surprise that IoT is expanding rapidly and relentlessly. It is estimated that there will be 75.4 billion IoT devices by 2025. As IoT grows, so do the volumes of IoT data that need to be collected, analyzed and stored. Unfortunately, significant barriers exist that can limit or block access to this data altogether.

Successful IoT data acquisition starts and ends with reliable and scalable IoT connectivity. Selecting the right communications technology is paramount to the long-term success of your IoT project and various factors must be considered from the beginning to build a functional wireless infrastructure that can support and manage the influx of IoT data today and in the future.

Here are five IoT architecture must-haves for unlocking IoT data at scale.

1. Network Ownership

For many businesses, IoT data is one of their greatest assets, if not the most valuable. This intensifies the demand to protect the flow of data at all costs. To gain maximum data authority and architecture control, organizations across industrial verticals are increasingly adopting privately managed networks.

Beyond the undeniable benefits of data security and privacy, private networks give users more control over their deployment, with the flexibility to tailor coverage to the specific needs of their campus-style network. On a public network, users risk not having the reliable connectivity needed for indoor, underground and remote critical IoT applications. And since a private network is owned and operated by the user, it also avoids the monthly access, data plan and subscription costs imposed by public operators, lowering the total cost of ownership. Private networks also provide full control over network availability and uptime, ensuring users have reliable access to their data at all times.

2. Minimal Infrastructure Requirements

Since the number of end devices is largely dictated by your IoT use cases, choosing a wireless technology that requires minimal supporting infrastructure, such as base stations and repeaters, and minimal configuration and optimization is crucial to cost-effectively scaling your IoT network.

Wireless solutions with long range and excellent penetration capability, such as next-gen low-power wide area networks, require fewer base stations to cover a vast, structurally dense industrial or commercial campus. Likewise, a robust radio link and large network capacity allow an individual base station to effectively support massive numbers of sensors without compromising performance, ensuring a continuous flow of IoT data today and in the future.

3. Network and Device Management

As IoT initiatives move beyond proofs-of-concept, businesses need an effective and secure approach to operate, control and expand their IoT network with minimal costs and complexity.

As IoT deployments scale to hundreds or even thousands of geographically dispersed nodes, a manual approach to connecting, configuring and troubleshooting devices is inefficient and expensive. Likewise, by leaving devices completely unattended, users risk losing business-critical IoT data when it’s needed the most. A network and device management platform provides a single-pane, top-down view of all network traffic, registered nodes and their status for streamlined network monitoring and troubleshooting. Likewise, it acts as the bridge between the edge network and users’ downstream data servers and enterprise applications so users can streamline management of their entire IoT project from device to dashboard.

4. Legacy System Integration

Most traditional assets, machines, and facilities were not designed for IoT connectivity, creating huge data silos. This leaves companies with two choices: building entirely new, greenfield plants with native IoT technologies or updating brownfield facilities for IoT connectivity. Highly integrable, plug-and-play IoT connectivity is key to streamlining the costs and complexity of an IoT deployment. Businesses need a solution that can bridge the gap between legacy OT and IT systems to unlock new layers of data that were previously inaccessible. Wireless IoT connectivity must be able to easily retrofit existing assets and equipment without complex hardware modifications and production downtime. Likewise, it must enable straightforward data transfer to the existing IT infrastructure and business applications for data management, visualization and machine learning.

5. Interoperability

Each IoT system is a mashup of diverse components and technologies. This makes interoperability a prerequisite for IoT scalability, to avoid being saddled with an obsolete system that fails to keep pace with new innovation later on. By designing an interoperable architecture from the beginning, you can avoid fragmentation and reduce the integration costs of your IoT project in the long run. 

Today, technology standards exist to foster horizontal interoperability by fueling global cross-vendor support through robust, transparent and consistent technology specifications. For example, a standards-based wireless protocol allows you to benefit from a growing portfolio of off-the-shelf hardware across industry domains. When it comes to vertical interoperability, versatile APIs and open messaging protocols act as the glue to connect the edge network with a multitude of value-deriving backend applications. Leveraging these open interfaces, you can also scale your deployment across locations and seamlessly aggregate IoT data across premises.
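As a minimal illustration of that "glue" role, the sketch below packs a sensor reading into a topic/payload pair in the style of an open messaging protocol such as MQTT. The topic convention, field names and `build_message` helper are invented for this example, not part of any specific product or standard.

```python
import json

def build_message(site, device_id, metric, value, unit):
    """Pack a reading into a hierarchical topic and a JSON payload,
    the pattern used by MQTT-style publish/subscribe messaging."""
    topic = f"{site}/sensors/{device_id}/{metric}"
    payload = json.dumps({"value": value, "unit": unit})
    return topic, payload

# A backend subscribed to "plant-a/sensors/#" would receive this reading
# without needing to know anything about the device that produced it.
topic, payload = build_message("plant-a", "dev-042", "temperature", 21.5, "C")
print(topic)
```

Because both the topic hierarchy and the JSON payload are self-describing, any standards-compliant broker or backend application can consume the data, which is what makes such open interfaces a practical path to vertical interoperability.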

IoT data is the lifeblood of business intelligence and competitive differentiation, and IoT connectivity is the linchpin of reliable and secure access to that data. When it comes to building a future-proof wireless architecture, it's important to consider not only existing requirements but also those that might emerge down the road. A wireless solution that offers data ownership, minimal infrastructure requirements, built-in network management, integration and interoperability will not only ensure access to IoT data today, but provide cost-effective support for the influx of data and devices in the future.

Originally posted here.

