


As the Internet of Things (IoT) grows rapidly, large numbers of wireless sensor networks have emerged to monitor a wide range of infrastructure in domains such as healthcare, energy, transportation, smart cities, building automation, agriculture, and industry, continuously producing streams of data. Big Data technologies play a significant role within IoT processes, as visual analytics tools generate valuable knowledge in real time to support critical decision making. This paper provides a comprehensive survey of visualization methods, tools, and techniques for the IoT. We position data visualization inside the visual analytics process by reviewing the visual analytics pipeline. We provide a study of the chart types available for data visualization and analyze rules for employing each of them, taking into account the special conditions of the particular use case. We further examine some of the most promising visualization tools. Since each IoT domain is isolated in terms of Big Data approaches, we investigate visualization issues in each domain. Additionally, we review visualization methods oriented to anomaly detection. Finally, we provide an overview of the major challenges in IoT visualizations.

The Internet of Things (IoT) has become one of the most powerful emerging technologies for improving quality of life. IoT connects a great number of heterogeneous devices in order to dynamically acquire various types of data from the real-world environment. IoT data is mined for useful information that context-aware applications can use to improve people’s daily lives. As data is typically tagged with contextual information (time, location, status, etc.), IoT becomes a valuable and voluminous source of contextual data with variety (several sources), velocity (real-time collection), veracity (uncertainty of data), and value. The cooperation of Big Data and IoT has initiated the development of smart services for many complex infrastructures. As IoT develops rapidly, Big Data technologies play a critical role, as visual analytics tools produce valuable knowledge in real time within IoT infrastructures, aiming to support critical decision making. Large-scale IoT applications employ a large number of sensors, resulting in very large amounts of collected data. In the context of IoT data analysis, two tasks are of relevance: exploring the large amounts of data to find subsets and patterns of interest, and analyzing the available data to make assessments and predictions.

This paper explores ways to gain insight from IoT data using meaningful visualizations. Visual analytics is an analysis technique that assists the exploration of vast amounts of data by utilizing data mining, statistics, and visualization. Interactive visualization tools combine automated analysis and human interaction, allowing user control during the data analysis process and aiming to produce valuable insight for decision making. They involve custom data visualization methods that enable the operator to interact with them, in order to view data through different perspectives and focus on details of interest. Data analytics methods involve machine learning and AI methods that automatically extract patterns from data and make predictions. AI methods often appear untrustworthy to their operators, because their black-box operation does not provide insight into the accuracy of their results. Visual analytics can be used to make AI methods more transparent and explainable by visualizing both their results and the way they work.

Visual Analytics

Visual Analytics is a data analysis method that employs data mining, statistics, and visualization. Besides automated analysis, implementations of visual analytics tools incorporate human interaction, allowing user control and judgment during data analysis in order to produce valuable insight for decision making. Over the years, numerous research studies on visual analytics have been conducted. Most of them deal with the conventional visual analytics pipeline originally presented by Keim et al., which depicts the visual analytics process. As Figure 1 illustrates, the visual analytics process starts by performing data transformation subprocesses, such as filtering and sampling, that convert the data set into representations suitable for further exploration. To create knowledge, the pipeline adopts either a visual exploration method or an automatic analysis method, depending on the specific use case. In the case of automatic analysis, data mining methods are applied to assist the characterization of the data. The visual interface is operated by analysts and decision-makers to explore and analyze the data.

The framework of the Visual Analytics Pipeline has four core concepts: Data, Models, Visualization, and Knowledge. The Data module is responsible for the collection and pre-processing of the raw and heterogeneous data. As data acquisition is done in real time through sensors, raw data sets are usually incomplete, noisy, or inconsistent, making it impossible for them to be used directly in the Visualization or the Models module. In order to eliminate these difficulties, some data pre-processing has to be applied to the original data sets. Data pre-processing is a flexible process that depends on the quality of the raw data. This module includes pre-processing techniques such as data parsing, data integration, data cleaning (elimination of redundancy, errors, and invalid data), data transformation (normalization), and data reduction. The Models module is responsible for converting data to information. This module includes conversion methods such as feature selection and generation, model building, selection, and validation.

Figure 1: Visual Analytics Pipeline

The Visualization module is responsible for visualizing and abstractly transforming the data. This module includes techniques for visual mapping (parallel coordinates, force-directed graphs, chord graphs, scatter matrices), view generation and coordination, and human-computer interaction. The Knowledge module is responsible for driving the process of transforming information into meaningful insight using human-machine interaction methods.

Visualization Charts, Rules, and Tools

Data visualization places data in an appropriate visual context that triggers people’s understanding of its significance. This reduces the overall effort of manually analyzing the data. As a result, visualization and recognition of patterns within IoT-generated data play a significant role in the insight-gaining process and enhance decision-making. Visualizing data plays a major role in data analytics, since it presents the findings and their patterns alongside the original data. Data visualization helps interpret the results by correlating the findings with the goals. It also exposes hidden patterns, trends, and correlations that would otherwise go undetected, in an impactful and perceptible manner. As a result, it supports good storytelling in terms of understanding data and data patterns.
In this section, we will address different types of data charting. We will also analyze chart selection rules that take into account special conditions that hold for a particular use case. Moreover, we will present the most popular IoT visualization tools.

IoT Data Visualization Tools

Visualization tools assist the decision-making process, since they provide strong data analytics that help interpret the big data acquired from various IoT devices. IoT data visualization systems involve custom dashboard designs that, given a set of measurements acquired by several geographically scattered IoT sensors and several AI models applied to the data, allow the operator to explore the available raw measurements and gain insight into the models’ operation. The main aim of these systems is to enhance the operator’s trust in the models. A flexible visualization system should maintain some core characteristics, such as the ability to update in real time, interactivity, transparency, and explainability. Since IoT measurements are highly dynamic, with new measurements collected in real time, dashboards should be able to update in real time as new measurements become available. The dashboard should provide an interactive user interface allowing operators to engage with the data and explore it. The dashboard should also provide means of looking into the applied AI models and visualizing their internals, to enhance the transparency and explainability of the models.

Many proposed visualization platforms are designed around a Service Oriented Architecture (SOA) with four key services: a Data Collection Service, which receives data; a Data Visualization Service, which presents the data intuitively; a Dynamic Dashboard Service, which provides an interface that organizes and displays various information such as text, machine values, or visualization results; and a Data Analytic Service, which delivers statistical analysis tools and consists of three main layers: Big Data Infrastructure as a Service, Big Data Platform as a Service, and Big Data Analytics Software as a Service.

The most widely used IoT data visualization tools, across several industries globally, are summarized in this section. Each one was compared against the following criteria: whether it is an open-source tool, the ability to integrate with popular data sources (MapR Hadoop Hive, Salesforce, Google Analytics, Cloudera Hadoop, etc.), interactive visualization, client type (desktop, online, or mobile app), and the availability of APIs for customization and embedding purposes.
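Before turning to the individual tools, here is a minimal, hypothetical sketch of the four SOA services described above. The interface names and method signatures are assumptions made for illustration only, not any specific platform’s API.

```python
# Hypothetical service interfaces for an SOA-style IoT visualization platform.
# Names and signatures are illustrative assumptions, not a real product's API.
from abc import ABC, abstractmethod
from typing import Any, Dict, List


class DataCollectionService(ABC):
    @abstractmethod
    def receive(self, measurement: Dict[str, Any]) -> None:
        """Accept a raw measurement pushed by a device or gateway."""


class DataAnalyticService(ABC):
    @abstractmethod
    def analyze(self, measurements: List[Dict[str, Any]]) -> Dict[str, Any]:
        """Run statistical analysis over the stored measurements."""


class DataVisualizationService(ABC):
    @abstractmethod
    def render(self, analysis: Dict[str, Any]) -> bytes:
        """Turn an analysis result into a chart image or chart specification."""


class DynamicDashboardService(ABC):
    @abstractmethod
    def layout(self, widgets: List[bytes]) -> str:
        """Organize text, machine values, and rendered charts into one dashboard page."""
```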

Tableau is a fast and flexible data visualization tool that allows user interaction. Its user interface provides a wide range of fixed and custom visualizations employing a great variety of intuitive charts. In-depth analyses may be accomplished through R scripting. It supports most data formats and connections to various servers such as Amazon Aurora, Cloudera Hadoop, and Salesforce. Tableau’s online service is publicly available but supports limited storage. Server and desktop versions are available under commercial licenses.

ThingsBoard is an open-source IoT platform containing modules for device management, data collection, processing, and visualization. The platform allows the creation of custom IoT dashboards containing widgets that visualize sensor data collected through multiple devices. It contains a set of features including line and bar chart modules for both historical and real-time data visualizations. It also contains map widgets enabling object tracking on online maps. Its multi-language technology stack (Java, Python, C++, JavaScript) provides reliable performance and real-time data analytics. It supports standard IoT protocols for device connectivity (e.g. MQTT, CoAP, and HTTP). It can be integrated with Node-RED, a flow-based programming platform for IoT, through a custom function.

Plotly is an online, cloud-based public data visualization service. It is built using the Python and Django frameworks. It provides various data storage services and modules for IoT visualization and analytics. It allows the creation of online dashboards employing a wide range of charts such as statistical, scientific, 3D, and multiple-axes charts. It provides Python, R, MATLAB, and Julia based APIs for in-depth analyses. Graphics libraries such as ggplot2 and matplotlib, as well as MATLAB chart conversion techniques, enhance the visualizations. Its internal tool, Web Plot Digitizer (WPD), can automatically grab data from static images. It is publicly available with limited chart features and storage, while its full set of chart features is available through a professional membership license.

IBM Watson IoT Platform is a cloud platform-as-a-service supporting several programming languages, services, and integrated DevOps in order to deploy and manage cloud applications. It features a set of built-in web applications and provides support for third-party software integration via REST APIs. The visualization of static and dynamic data is provided through effortless creation of custom diagrams, graphs, and tables. It provides access to device properties and alert management. Node-RED may be used for IoT device connection, APIs, and online services. Sensor data, stored in Cloudant NoSQL DB, may be processed for further data analysis.

Power BI is a powerful, cloud-based business analytics service. It provides a rich set of interactive visualizations and detailed analysis reports for large enterprises. It is designed to trace and visualize various sensor-gathered data. The platform works in cooperation with Azure cloud-based analytics and cognitive services. It consists of three basic components: Power BI Desktop (the report generator), the Service (SaaS, the report publisher), and Apps (the report viewer and dashboard). Numerous types of source integration are supported, and rich data visualizations are provided. Among other methods, data may be queried using the natural language query feature. Data analysis is accomplished on both real-time streaming data and static historical data.
Power BI provides sub-components that enable IoT integration.  
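To give a feel for how such charts are produced programmatically, here is a minimal sketch using Plotly’s open-source Python API; the timestamps and temperature readings are invented for illustration.

```python
import plotly.graph_objects as go

# Hypothetical readings from a single temperature sensor.
timestamps = ["2021-01-01 10:00", "2021-01-01 10:05",
              "2021-01-01 10:10", "2021-01-01 10:15"]
temperature_c = [21.4, 21.9, 22.3, 21.7]

fig = go.Figure(
    go.Scatter(x=timestamps, y=temperature_c, mode="lines+markers", name="sensor-1")
)
fig.update_layout(title="Sensor temperature over time",
                  xaxis_title="Time", yaxis_title="Temperature (°C)")
fig.show()  # renders an interactive chart in the browser or notebook
```

The same figure can be embedded in a web page or hosted dashboard, which is how charts like this typically end up in an IoT monitoring view.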

These days, immersive virtual reality is recognized as one of the most promising technologies enabling virtual interactions with physical systems. The user is situated within a 3D environment in which data visualizations and physical space are matched, so that users can orient, navigate, and interact naturally. These frameworks utilize hybrid, collaborative, multi-modal methods to enable collaboration between users and provide intuitive and natural interaction within a specific virtual environment. As users remain immersed within a 3D virtual environment, immersive reality applications require sophisticated approaches for interacting with IoT data analytics visualizations; immersive analytics is the resulting form of visualization within IoT infrastructures. Immersive analytics frameworks promote a better understanding of the IoT ecosystem and enhance decision-making. Such a collaborative virtual environment presupposes highly responsive connectivity, which may be accomplished by employing high-speed 5G network infrastructures that provide ultra-low-delay and ultra-reliable communications. Similarly, a Cyber-Physical System (CPS) is a set of physical devices, connected through a communication network, that communicates with its virtual cyberspace. Each physical object is associated with a cyber model that stores all information and knowledge about it. This cyber model is called a “Digital Twin”, and it allows data transfer from the physical to the cyber part. In a CPS where every physical object has a digital-twin counterpart, the spatiotemporal relations between the individual digital twins are far more valuable than any single digital twin. Digital twins may be generated using 3D technologies through AR/VR/MR or even hologram devices, and they integrate various technologies such as haptics, humanoid and soft robotics, 5G and the Tactile Internet, cloud computing offloading, wearable technology, IoT contextual data, and AI.
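As a concrete illustration of the digital-twin idea above, here is a minimal, hypothetical Python sketch of a cyber model that mirrors the state reported by a physical asset; the class and field names are assumptions for illustration, not taken from any particular CPS platform.

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import Dict, Optional


@dataclass
class DigitalTwin:
    """Minimal cyber model holding the last known state of one physical asset."""
    asset_id: str
    properties: Dict[str, float] = field(default_factory=dict)
    last_update: Optional[datetime] = None

    def ingest(self, reading: Dict[str, float]) -> None:
        # Physical-to-cyber data transfer: refresh the mirrored state.
        self.properties.update(reading)
        self.last_update = datetime.utcnow()


# Usage: a pump's twin is refreshed whenever a new IoT measurement arrives.
pump_twin = DigitalTwin(asset_id="pump-42")
pump_twin.ingest({"temperature_c": 71.3, "vibration_mm_s": 2.4})
print(pump_twin.asset_id, pump_twin.properties, pump_twin.last_update)
```

In a full CPS, many such twins would be linked so that the spatiotemporal relations between them, not just the individual models, can be queried and visualized.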

IoT domains and Visualization

IoT technologies have already entered various significant domains of our lives. Growing market competition and inexpensive connectivity have spread the Internet of Things (IoT) across many domains. Sensors, devices, and machines connected via the Internet are the “things” in IoT. The enormous volume of IoT data provides information that must be analyzed to gain knowledge. Visual analytics, involving data analysis methods, artificial intelligence, and visualization, aims to improve domain operations with respect to efficiency, flexibility, and safety. The employment of smart IoT devices facilitates the transformation of traditional domains into modern, smart, and autonomous ones. Over recent years, many traditional domains such as healthcare, energy, industry, transportation, city and building management, and agriculture have become IoT-based, with intelligent human-to-machine (H2M) and machine-to-machine (M2M) communication.

Challenges and Future Work

The main objective of visual analytics is to discover knowledge and produce actionable insight. This is achieved by processing large and complex data sets and by integrating techniques from various fields such as data analysis, data management, visualization, knowledge discovery, analytical reasoning, human perception, and human-computer interaction. Even though visualization is an important component of Big IoT data analytics, most visualization tools perform poorly in terms of functionality, scalability, interaction, infrastructure, insight creation, and evaluation.

Conclusion

The emergence of IoT services has drastically increased the growth rate of data production, creating large and complex data sets. The integration of human judgment within the data analysis process enables visual analytics to discover knowledge and gain valuable insight from these data sets. In this process, every piece of IoT data is considered crucial for the extraction of information and useful patterns. Human cognitive and perceptual capabilities identify patterns efficiently when data is represented visually. Data visualization methods face several challenges in handling voluminous, streaming IoT data without compromising performance and response time.

 

 

Read more…

Then it seemed that overnight, millions of workers worldwide were told to isolate and work from home as best they could. Businesses were suddenly forced to enable remote access for hundreds or thousands of users, all at once, from anywhere across the globe. Many companies that already offered VPN services to a small group of remote workers scurried to extend those capabilities to the much larger workforce sequestering at home. It was a decision made in haste out of necessity, but now it’s time to consider: is VPN the best remote access technology for the enterprise, or can other technologies provide a better long-term solution?

Long-term Remote Access Could Be the Norm for Some Time

Some knowledge workers are trickling back to their actual offices, but many more are still at home and will be for some time. Global Workplace Analytics estimates that 25-30% of the workforce will still be working from home multiple days a week by the end of 2021. Others may never return to an official office, opting to remain a work-from-home (WFH) employee for good.

Consequently, enterprises need to find a remote access solution that gives home-based workers an experience similar to what they would have in the office, including ease of use, good performance, and fully secure network access. What’s more, the solution must be cost-effective and easy to administer without the need to add more technical staff.

VPNs are certainly one option, but not the only one. Other choices include appliance-based SD-WAN and SASE. Let’s have a look at each approach.

VPNs Weren’t Designed to Support an Entire Workforce

While VPNs are a useful remote access solution for a small portion of the workforce, they are an inefficient technology for giving remote access to a very large number of workers. VPNs are designed for point-to-point connectivity, so each secure connection between two points – presumably a remote worker and a network access server (NAS) in a datacenter – requires its own VPN link. Each NAS has a finite capacity for simultaneous users, so for a large remote user base, some serious infrastructure may be needed in the datacenter.

Performance can be an issue. With a VPN, all communication between the user and the VPN is encrypted. The encryption process takes time, and depending on the type of encryption used, this may add noticeable latency to Internet communications. More important, however, is the latency added when a remote user needs access to IaaS and SaaS applications and services. The traffic path is convoluted because it must travel between the end user and the NAS before then going out to the cloud, and vice versa on the way back.

An important issue with VPNs is that they provide overly broad access to the entire network without the option of controlling granular user access to specific resources. Stolen VPN credentials have been implicated in several high-profile data breaches. By using legitimate credentials and connecting through a VPN, attackers were able to infiltrate and move freely through targeted company networks. What’s more, there is no scrutiny of the security posture of the connecting device, which could allow malware to enter the network via insecure user devices.

SD-WAN Brings Intelligence into Routing Remote Users’ Traffic

Another option for providing remote access for home-based workers is appliance-based SD-WAN. It brings a level of intelligence to the connectivity that VPNs don’t have. Lee Doyle, principal analyst with Doyle Research, outlines the benefits of using SD-WAN to connect home office users to their enterprise network:

  • Prioritization for mission-critical and latency-sensitive applications
  • Accelerated access to cloud-based services
  • Enhanced security via encryption, VPNs, firewalls and integration with cloud-based security
  • Centralized management tools for IT administrators

One thing to consider about appliance-based SD-WAN is that it’s primarily designed for branch office connectivity—though it can accommodate individual users at home as well. However, if a company isn’t already using SD-WAN, this isn’t a technology that is easy to implement and set up for hundreds or thousands of home-based users. What’s more, a significant investment must be made in the various communication and security appliances.

SASE Provides a Simpler, More Secure, Easily Scalable Solution

Cato’s Secure Access Service Edge (or SASE) platform provides a great alternative to VPN for remote access by many simultaneous workers. The platform offers scalable access, optimized connectivity, and integrated threat prevention that are needed to support continuous large-scale remote access.

Companies that enable WFH using Cato’s platform can scale quickly to any number of remote users with ease. There is no need to set up regional hubs or VPN concentrators. The SASE service is built on top of dozens of globally distributed Points of Presence (PoPs) maintained by Cato to deliver a wide range of security and networking services close to all locations and users. The complexity of scaling is all hidden in the Cato-provided PoPs, so there is no infrastructure for the organization to purchase, configure or deploy. Giving end users remote access is as simple as installing a client agent on the user’s device, or by providing clientless access to specific applications via a secure browser.

Cato’s SASE platform employs Zero Trust Network Access in granting users access to the specific resources and applications they need to use. This granular-level security is part of the identity-driven approach to network access that SASE demands. Since all traffic passes through a full network security stack built into the SASE service, multi-factor authentication, full access control, and threat prevention are applied to traffic from remote users. All processing is done within the PoP closest to the users while enforcing all corporate network and security policies. This eliminates the “trombone effect” associated with forcing traffic to specific security choke points on a network. Further, admins have consistent visibility and control of all traffic throughout the enterprise WAN.

SASE Supports WFH in the Short-term and Long-term

While some workers are venturing back to their offices, many more are still working from home—and may work from home permanently. The Cato SASE platform is the ideal way to give them access to their usual network environment without forcing them to go through insecure and inconvenient VPNs.

Originally posted here

Read more…

When I think about the things that held the planet together in 2020, it was digital experiences delivered over wireless connectivity that made remote things local.

While heroes like doctors, nurses, first responders, teachers, and other essential personnel bore the brunt of the COVID-19 response, billions of people around the world found themselves cut off from society. In order to keep people safe, we were physically isolated from each other. Far beyond the six feet of social distancing, most of humanity weathered the storm from their homes.

And then little by little, old things we took for granted, combined with new things many had never heard of, pulled the world together. Let’s take a look at the technologies and trends that made the biggest impact in 2020 and where they’re headed in 2021:

The Internet

The global Internet infrastructure from which everything else is built is an undeniable hero of the pandemic. This highly-distributed network designed to withstand a nuclear attack performed admirably as usage by people, machines, critical infrastructure, hospitals, and businesses skyrocketed. Like the air we breathe, this primary facilitator of connected, digital experiences is indispensable to our modern society. Unfortunately, the Internet is also home to a growing cyberwar and security will be the biggest concern as we move into 2021 and beyond. It goes without saying that the Internet is one of the world’s most critical utilities along with water, electricity, and the farm-to-table supply chain of food.

Wireless Connectivity

People are mobile and they stay connected through their smartphones, tablets, in cars and airplanes, on laptops, and other devices. Just like the Internet, the cellular infrastructure has remained exceptionally resilient to enable communications and digital experiences delivered via native apps and the web. Indoor wireless connectivity continues to be dominated by WiFi at home and all those empty offices. Moving into 2021, the continued rollout of 5G around the world will give cellular endpoints dramatic increases in data capacity and WiFi-like speeds. Additionally, private 5G networks will challenge WiFi as a formidable indoor option, but WiFi 6E with increased capacity and speed won’t give up without a fight. All of these developments are good for consumers who need to stay connected from anywhere like never before.

Web Conferencing

With many people stuck at home in 2020, web conferencing technology took the place of traveling to other locations to meet people or receive education. This technology isn’t new and includes familiar players like GoToMeeting, Skype, WebEx, Google Hangouts/Meet, BlueJeans, FaceTime, and others. Before COVID, these platforms enjoyed success, but most people preferred to fly on airplanes to meet customers and attend conferences while students hopped on the bus to go to school. In 2020, “necessity is the mother of invention” took hold and the use of Zoom and Teams skyrocketed as airplanes sat on the ground while business offices and schools remained empty. These two platforms further increased their stickiness by increasing the number of visible people and adding features like breakout rooms to meet the demands of businesses, virtual conference organizers, and school teachers. Despite the rollout of the vaccine, COVID won’t be extinguished overnight and these platforms will remain strong through the first half of 2021 as organizations rethink where and when people work and learn. There’s way too many players in this space so look for some consolidation.

E-Commerce

“Stay at home” orders and closed businesses gave e-commerce platforms a dramatic boost in 2020 as they took the place of shopping at stores or going to malls. Amazon soared to even higher heights, Walmart upped their game, Etsy brought the artsy, and thousands of Shopify sites delivered the goods. Speaking of delivery, the empty city streets became home to fleets of FedEx, Amazon, UPS, and DHL trucks bringing packages to your front doorstep. Many retail employees traded in working at customer-facing stores for working in distribution centers, as long as they could outperform robots. Even though people are looking forward to hanging out at malls in 2021, the e-commerce, distribution center, and delivery truck trinity is here to stay. This ball was already in motion and got a rocket boost from COVID. This market will stay hot in the first half of 2021 and then cool a bit in the second half.

Ghost Kitchens

The COVID pandemic really took a toll on restaurants in 2020, with many of them going out of business permanently. Those that survived had to pivot to digital and other ways of doing business. High-end steakhouses started making burgers on grills in the parking lot, while takeout pizzerias discovered they finally had the best business model. Having a drive-thru lane was definitely one of the keys to success in a world without waiters, busboys, and hosts. “Front of house” was shut down, but the “back of house” still had a pulse. Adding mobile web and native apps that allowed customers to easily order from operating “ghost kitchens” and pay with credit cards or Apple/Google/Samsung Pay enabled many restaurants to survive. A combination of curbside pickup and delivery from the likes of DoorDash, Uber Eats, Postmates, Instacart, and Grubhub made this business model work. A surge in digital marketing also took place, where many restaurants learned the importance of maintaining a relationship with their loyal customers via connected mobile devices. For the most part, 2021 has restaurateurs hoping for 100% in-person dining, but a new business model that looks a lot like catering + digital + physical delivery is something that has legs.

The Internet of Things

At its very essence, IoT is all about remotely knowing the state of a device or environmental system along with being able to remotely control some of those machines. COVID forced people to work, learn, and meet remotely and this same trend applied to the industrial world. The need to remotely operate industrial equipment or an entire “lights out” factory became an urgent imperative in order to keep workers safe. This is yet another case where the pandemic dramatically accelerated digital transformation. Connecting everything via APIs, modeling entities as digital twins, and having software bots bring everything to life with analytics has become an ROI game-changer for companies trying to survive in a free-falling economy. Despite massive employee layoffs and furloughs, jobs and tasks still have to be accomplished, and business leaders will look to IoT-fueled automation to keep their companies running and drive economic gains in 2021.

Streaming Entertainment

Closed movie theaters, football stadiums, bowling alleys, and other sources of entertainment left most people sitting at home watching TV in 2020. This turned into a dream come true for streaming entertainment companies like Netflix, Apple TV+, Disney+, HBO Max, Hulu, Amazon Prime Video, Youtube TV, and others. That said, Quibi and Facebook Watch didn’t make it. The idea of binge-watching shows during the weekend turned into binge-watching every season of every show almost every day. Delivering all these streams over the Internet via apps has made it easy to get hooked. Multiplayer video games fall in this category as well and represent an even larger market than the film industry. Gamers socially distanced as they played each other from their locked-down homes. The rise of cloud gaming combined with the rollout of low-latency 5G and Edge computing will give gamers true mobility in 2021. On the other hand, the video streaming market has too many players and looks ripe for consolidation in 2021 as people escape the living room once the vaccine is broadly deployed.

Healthcare

With doctors and nurses working around the clock as hospitals and clinics were stretched to the limit, it became increasingly difficult for non-COVID patients to receive the healthcare they needed. This unfortunate situation gave tele-medicine the shot in the arm (no pun intended) it needed. The combination of healthcare professionals delivering healthcare digitally over widespread connectivity helped those in need. This was especially important in rural areas that lacked the healthcare capacity of cities. Concurrently, the Internet of Things is making deeper inroads into delivering the health of a person to healthcare professionals via wearable technology. Connected healthcare has a bright future that will accelerate in 2021 as high-bandwidth 5G provides coverage to more of the population to facilitate virtual visits to the doctor from anywhere.

Working and Living

As companies and governments told their employees to work from home, it gave people time to rethink their living and working situation. Lots of people living in previously hip, urban, high-rise buildings found themselves residing in not-so-cool, hollowed-out ghost towns comprised of boarded-up windows and closed bars and cafés. Others began to question why they were living in areas with expensive real estate and high taxes when they no longer had to be close to the office. This led to a 2020 COVID exodus out of pricey apartments/condos downtown to cheaper homes in distant suburbs as well as the move from pricey areas like Silicon Valley to cheaper destinations like Texas. Since you were stuck in your home, having a larger house with a home office, fast broadband, and a back yard became the most important thing. Looking ahead to 2021, a hybrid model of work-from-home plus occasionally going into the office is here to stay as employees will no longer tolerate sitting in traffic two hours a day just to sit in a cubicle in a skyscraper. The digital transformation of how and where we work has truly accelerated.

Data and Advanced Analytics

Data has shown itself to be one of the world’s most important assets during the time of COVID. Petabytes of data have continuously streamed in from all over the world, letting us know the number of cases, the growth or decline of infections, hospitalizations, contact tracing, free ICU beds, temperature checks, deaths, and hotspots of infection. Some of this data has been reported manually while lots of other sources are fully automated from machines. Capturing, storing, organizing, modeling, and analyzing this big data has elevated the importance of cloud and edge computing, global-scale databases, advanced analytics software, and machine learning. This is a trend that was already taking place in business and now has a giant spotlight on it due to its global importance. There’s no stopping the data + advanced analytics juggernaut in 2021 and beyond.

Conclusion

2020 was one of the worst years in human history and the loss of life was just heartbreaking. People, businesses, and our education system had to become resourceful to survive. This resourcefulness amplified the importance of delivering connected, digital experiences to make previously remote things into local ones. Cheers to 2021 and the hope for a brighter day for all of humanity.

Read more…

By Michele Pelino

The COVID-19 pandemic drove businesses and employees to become more reliant on technology for both professional and personal purposes. In 2021, demand for new internet-of-things (IoT) applications, technologies, and solutions will be driven by connected healthcare, smart offices, remote asset monitoring, and location services, all powered by a growing diversity of networking technologies.

In 2021, we predict that:

  • Network connectivity chaos will reign. Technology leaders will be inundated by an array of wireless connectivity options. Forrester expects that implementation of 5G and Wi-Fi technologies will decline from 2020 levels as organizations sort through market chaos. For long-distance connectivity, low-earth-orbit satellites now provide a complementary option, with more than 400 Starlink satellites delivering satellite connectivity today. We expect interest in satellite and other lower-power networking technologies to increase by 20% in the coming year.
  • Connected device makers will double down on healthcare use cases. Many people stayed at home in 2020, leaving chronic conditions unmanaged, cancers undetected, and preventable conditions unnoticed. In 2021, proactive engagement using wearables and sensors to detect patients’ health at home will surge. Consumer interest in digital health devices will accelerate as individuals appreciate the convenience of at-home monitoring, insight into their health, and the reduced cost of connected health devices.
  • Smart office initiatives will drive employee-experience transformation. In 2021, some firms will ditch expensive corporate real estate driven by the COVID-19 crisis. However, we expect at least 80% of firms to develop comprehensive on-premises return-to-work office strategies that include IoT applications to enhance employee safety and improve resource efficiency such as smart lighting, energy and environmental monitoring, or sensor-enabled space utilization and activity monitoring in high traffic areas.*
  • The near ubiquity of connected machines will finally disrupt traditional business. Manufacturers, distributors, utilities, and pharma firms switched to remote operations in 2020 and began connecting previously disconnected assets. This connected-asset approach increased reliance on remote experts to address repairs without protracted downtime and expensive travel. In 2021, field service firms and industrial OEMs will rush to keep up with customer demand for more connected assets and machines.
  • Consumer and employee location data will be core to convenience. The COVID-19 pandemic elevated the importance location plays in delivering convenient customer and employee experiences. In 2021, brands must utilize location to generate convenience for consumers or employees with virtual queues, curbside pickup, and checking in for reservations. They will depend on technology partners to help use location data, as well as a third-party source of location trusted and controlled by consumers.

* Proactive firms, including Atea, have extended IoT investments to enhance employee experience and productivity by enabling employees to access a mobile app that uses data collected from light-fixture sensors to locate open desks and conference rooms. Employees can modify light and temperature settings according to personal preferences, and the system adjusts light color and intensity to better align with employees’ circadian rhythms to aid in concentration and energy levels. See the Forrester report “Rethink Your Smart Office Strategy.”

Originally posted HERE.

Read more…

by Evelyn Münster

IoT systems are complex data products: they consist of digital and physical components, networks, communications, processes, data, and artificial intelligence (AI). User interfaces (UIs) are meant to make this level of complexity understandable for the user. However, building a data product that can explain data and models to users in a way that they can understand is an unexpectedly difficult challenge. That is because data products are not your run-of-the-mill software product.

In fact, 85% of all big data and AI projects fail. Why? I can say from experience that it is not the technology but rather the design that is to blame.

So how do you create a valuable data product? The answer lies in a new type of user experience (UX) design. With data products, UX designers are confronted with several additional layers that are not usually found in conventional software products: it’s a relatively complex system, unfamiliar to most users, and comprises data and data visualization as well as AI in some cases. Last but not least, it presents an entirely different set of user problems and tasks than customary software products.

Let’s take things one step at a time. My many years in data product design have taught me that it is possible to create great data products, as long as you keep a few things in mind before you begin.

As a prelude to the UX design process, make sure you and your team answer the following nine questions:

1. Which problem does my product solve for the user?

The user must be able to understand the purpose of your data product in a matter of minutes. It can help to assign the product to one of the five categories of tasks that data products perform: actionable insights, performance feedback loops, root cause analysis, knowledge creation, and trust building.

2. What does the system look like?

Do not expect users to already know how to interpret the data properly. They need to be able to construct a fairly accurate mental model of the system behind the data.

3. What is the level of data quality?

The UI must reflect the quality of the data. A good UI leads the user to trust the product.

4. What is the user’s proficiency level in graphicacy and numeracy?

Conduct user testing to make sure that your audience will be able to read and interpret the data and visuals correctly.

5. What level of detail do I need?

Aggregated data is often too abstract to explain the system or to build user trust. A good way to counter this challenge is to include details that explain things. Then again, too much detail can also be overwhelming.

6. Are we dealing with probabilities?

Probabilities are tricky and require explanations. The common practice of cutting out all uncertainties makes the UI deceptively simple – and dangerous.

7. Do we have a data visualization expert on the design team?

UX design applied to data visualization requires a special skillset that covers the entire process, from data analysis to data storytelling. It is always a good idea to have an expert on the team or, alternatively, have someone to reach out to when required.

8. How do we get user feedback?

As soon as the first prototype is ready, you should collect feedback through user testing. The prototype should present content in the most realistic and consistent way possible, especially when it comes to data and figures.

9. Can the user interface boost our marketing and sales?

If the user interface clearly communicates what the data product does and what the process is like, then it could take on a new function: sell your products.

To sum up: we must acknowledge that data products are an unexplored territory. They are not just another software product or dashboard, which is why, in order to create a valuable data product, we will need a specific strategy, new workflows, and a particular set of skills: Data UX Design.

Originally posted HERE 

Read more…

By: Kiva Allgood, Head of IoT for Ericsson

Recently, I had the pleasure of participating in PTC’s LiveWorx conference as it went virtual, adding further credence to its reputation as the definitive event for digital transformation. I joined PTC’s Chief Technology Officer Steve Dertien for a presentation on how to unleash the power of industrial IoT (IIoT) and cellular connectivity.

A lot has changed in business over the past few months. With a massive remote migration the foremost priority, many business initiatives were put on the back burner. IIoT wasn’t one of them. The realm has remained a key strategic objective; in fact, considering how it can close distances and extend what industrial enterprises are able to monitor, control and accomplish, it’s more important than ever.

Ericsson and PTC formed a partnership specifically to help industrial enterprises accelerate digital transformation. Ericsson unlocks the full value of global cellular IoT connectivity and provides on-premise solutions. PTC offers an industrial IoT platform, ready to configure and deploy, with flexible connectivity and capabilities to build IoT solutions without manual coding.

This can enable enterprises to speed up cellular IoT deployments, realize the advantages of Industry 4.0 and better compete. Further, they can create a foundation for 5G, introducing such future benefits as network slicing, edge computing and high reliability, low-latency communications.

It all sounds great, I know, but if you’re like most folks, you probably have a few basic questions on your mind. Here are a few of the ones that I typically receive and appreciate the most.

Why cellular?

You’re connected already, via wire or Wi-Fi, so why is cellular necessary? You need reliable, global, and dedicated connectivity that’s flexible to deploy. If you think about a product and its lifecycle, it may be manufactured in one location, land in another, then ultimately move again. If you can gather secure insight from it – regardless of where it was manufactured, bought or sold – you can improve operational efficiency and product capabilities, identify new business opportunities, and much more.

What cellular can do especially well is effectively capture all that value by combining global connectivity with a private network. Then, through software like PTC’s, you can glean an array of information that’ll leave you wondering how else you can use the technology, regardless of whether the data is on or off the manufacturing floor. For instance, by applying virtual or augmented reality (VR/AR), you can find product defects before they leave the factory or end up in other products.

That alone can eliminate waste, save money from production to shipping, protect your reputation and much more.

According to analysts at ABI Research, we’ll see 4.3 billion wireless connections in smart factories by 2030, leading to a $1 trillion smart manufacturing market. For those that embrace Industry 4.0, private cellular has the potential to improve gross margins by 5-13% for factory and warehouse operations. What’s more, manufacturers can expect a 10x return on their investment.

You just need to be able to turn data into actionable intelligence throughout the product’s lifecycle and across your global enterprise, both securely and reliably – and that’s what cellular delivers.

Where do I start?

People don’t often ask for cellular or a dedicated private network specifically. They come to us with questions about things like how they can improve production cycle times or reduce costs by a certain percentage. That’s exactly where you should begin, too.

I come from the manufacturing space where for years I lived quality control, throughput and output. When someone would introduce a new idea, we’d vet it with a powerful but simple question: How will this make or save us money? If it couldn’t do either, we weren’t interested.

Look at your products and processes the same way when it comes to venturing into IIoT and digital transformation. Find the pain points. Identify defects, bottlenecks and possible improvements. Seek out how to further connect your business and the opportunities that could present. Data is indeed the new oil; it’s the intelligence that’ll help you understand where you need to go and what you need to do to move forward or create a new business.

What should I look for?

To get off on the right foot, be sure to engage the right partners. Realize this is a very complex area; no single provider can offer a solution that’ll address every need in one. You need partners with an ecosystem of their own best-of-breed partners; that’s why we work with companies like PTC. We have expertise in specific areas, focus on what we do best and work closely together to ensure we approach IIoT right.

We are building on an established foundation we created together. Both organizations have invested a lot of time, money, R&D cycles, and processes in developing our individual and collective offerings. That means not only will we be working together into the future, but customers are also assured they’ll remain at the forefront of innovation.

That future proofing is what you need to look for as well. You need wireless connectivity for applications involving asset tracking, predictive maintenance, digital twins, human-robot workflow integration and more. While Industry 4.0 is a priority, you want to lay a foundation for fast adoption of 5G, too.

There are other considerations to keep in mind down the road, such as your workforce. Employees may not want to be “machines” themselves, but they will want to become robotics engineers or use AR or VR for artificial intelligence analysis. The future of work is changing, too, and IIoT offers a way to keep employees engaged.

Originally posted HERE

CLICK HERE to view Kiva Allgood's LiveWorx presentation, “Unleashing the Power of Industrial IoT and Cellular Connectivity.”

Read more…

Written by: Mirko Grabel

Edge computing brings a number of benefits to the Internet of Things. Reduced latency, improved resiliency and availability, lower costs, and local data storage (to assist with regulatory compliance) to name a few. In my last blog post I examined some of these benefits as a means of defining exactly where is the edge. Now let’s take a closer look at how edge computing benefits play out in real-world IoT use cases.

Benefit No. 1: Reduced latency

Many applications have strict latency requirements, but when it comes to safety and security applications, latency can be a matter of life or death. Consider, for example, an autonomous vehicle applying brakes or roadside signs warning drivers of upcoming hazards. By the time data is sent to the cloud and analyzed, and a response is returned to the car or sign, lives can be endangered. But let’s crunch some numbers just for fun.

Say a Department of Transportation in Florida is considering a cloud service to host the apps for its roadside signs. One of the vendors on the DoT’s shortlist is a cloud in California. The DoT’s latency requirement is less than 15ms. The light speed in fiber is about 5 μs/km. The distance from the U.S. east coast to the west coast is about 5,000 km. Do the math and the resulting round-trip latency is 50ms. It’s pure physics. If the DoT requires a real-time response, it must move the compute closer to the devices.
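A quick sanity check of that arithmetic, using the figures given above:

```python
# Light travels through fiber at roughly 5 microseconds per kilometer.
propagation_us_per_km = 5
coast_to_coast_km = 5_000

one_way_ms = propagation_us_per_km * coast_to_coast_km / 1000  # 25 ms
round_trip_ms = 2 * one_way_ms                                  # 50 ms

print(f"{round_trip_ms:.0f} ms round trip vs. a 15 ms requirement")
```

Propagation delay alone already exceeds the requirement by more than a factor of three, before any processing or queuing delay is added.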

Benefit No. 2: Improved resiliency/availability

Critical infrastructure requires the highest level of availability and resiliency to ensure safety and continuity of services. Consider a refinery gas leakage detection system. It must be able to operate without Internet access. If the system goes offline and there’s a leakage, that’s an issue. Compute must be done at the edge. In this case, the edge may be on the system itself.

While it’s not a life-threatening use case, retail operations can also benefit from the availability provided by edge compute. Retailers want their Point of Sale (PoS) systems to be available 100% of the time to service customers. But some retail stores are in remote locations with unreliable WAN connections. Moving the PoS systems onto their edge compute enables retailers to maintain high availability.

Benefit No. 3: Reduced costs

Bandwidth is almost infinite, but it comes at a cost. Edge computing allows organizations to reduce bandwidth costs by processing data before it crosses the WAN. This benefit applies to any use case, but here are two example use-cases where this is very evident: video surveillance and preventive maintenance. For example, a single city-deployed HD video camera may generate 1,296GB a month. Streaming that data over LTE easily becomes cost prohibitive. Adding edge compute to pre-aggregate the data significantly reduces those costs.

Manufacturers use edge computing for preventive maintenance of remote machinery. Sensors are used to monitor temperatures and vibrations. The currency of this data is critical, as the slightest variation can indicate a problem. To ensure that issues are caught as early as possible, the application requires high-resolution data (for example, 1,000 samples per second). Rather than sending all of this data over the Internet to be analyzed, edge compute is used to filter the data, and only averages, anomalies, and threshold violations are sent to the cloud.
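A minimal sketch of that filtering step, with assumed thresholds and window handling (the numbers and function names are illustrative, not from the original post): high-rate readings are reduced at the edge to an average, any statistical anomalies, and any threshold violations before anything crosses the WAN.

```python
from statistics import mean, pstdev

ALARM_THRESHOLD = 80.0   # assumed vibration/temperature alarm level
ANOMALY_SIGMA = 3.0      # readings this far from the mean count as anomalies


def summarize_window(readings):
    """Reduce one window of high-rate readings (e.g. 1,000 per second)
    to the small payload that is actually worth sending to the cloud."""
    avg = mean(readings)
    sigma = pstdev(readings)
    anomalies = [r for r in readings
                 if sigma and abs(r - avg) > ANOMALY_SIGMA * sigma]
    violations = [r for r in readings if r > ALARM_THRESHOLD]
    return {"avg": avg, "anomalies": anomalies, "violations": violations}


# Usage: only the summary dictionary leaves the edge device.
window = [20.1, 20.3, 19.8, 95.2, 20.0, 20.2]
print(summarize_window(window))
```

Everything else stays local, which is where the bandwidth saving comes from.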

Benefit No. 4: Comply with government regulations

Countries are increasingly instituting privacy and data retention laws. The European Union’s General Data Protection Regulation (GDPR) is a prime example. Any organization that has data belonging to an EU citizen is required to meet the GDPR’s requirements, which includes an obligation to report leaks of personal data. Edge computing can help these organizations comply with GDPR. For example, instead of storing and backhauling surveillance video, a smart city can evaluate the footage at the edge and only backhaul the meta data.

Canada’s Water Act: National Hydrometric Program is another edge computing use case that delivers regulatory compliance benefits. As part of the program, about 3,000 measurement stations have been implemented nationwide. Any missing data requires justification. However, storing data at the edge ensures data retention.

Bonus Benefit: “Because I want to…”

Finally, some users simply prefer to have full control. By implementing compute at the edge rather than the cloud, users have greater flexibility. We have seen this in manufacturing. Technicians want to have full control over the machinery. Edge computing gives them this control as well as independence from IT. The technicians know the machinery best and security and availability remain top of mind.

Summary

By reducing latency and costs, improving resiliency and availability, and keeping data local, edge computing opens up a new world of IoT use cases. Those described here are just the beginning. It will be exciting to see where edge computing turns up next.

Originally posted: here

Read more…


I’m sorry about the title of this blog, but I’m feeling a little wackadoodle at the moment. I think the problem is that I’m giddy with excitement at the thought of the forthcoming Thanksgiving holiday.

So, here’s the deal. Starting sometime in 2021, I’m going to be writing a series of columns for Practical Electronics magazine in the UK teaching digital logic fundamentals to absolute beginners.

This will have a hands-on component with an accompanying circuit board. We’re going to start by constructing some simple logic gates at the transistor level, then use primitive logic gates in 7400-series ICs to construct more sophisticated functions, and work our way up to… but I fear I can say no more at the moment.

After we’ve created some really simple combinatorial functions — like a 2:1 multiplexer — by hand, we’re going to introduce things like Boolean algebra, DeMorgan transforms, and Karnaugh maps, and then we are going to use what we’ve learned to implement more complex combinatorial functions, culminating in a BCD to 7-segment decoder, before we progress to sequential circuits.

I was sketching out some notes this past weekend. Prior to the BCD to 7-segment decoder, we’ll already have tackled a BCD to decimal decoder, so a lot of the groundwork will have been laid. We’ll start by explaining how the segments in the 7-segment display are identified using the letters ‘a’ through ‘g’ and showing the combinations of segments we use to create the decimal digits 0 through 9.

Using a 7-segment display to represent the decimal digits 0 through 9 (Image source: Max Maxfield)

Next, we will create the truth table. We’ll be using a common cathode 7-segment display, which means active-high outputs from our decoder because this is easier for newbies to wrap their brains around.

Truth table for BCD to 7-segment decoder with active-high outputs (Image source: Max Maxfield)

Observe the input combinations shown in red in the truth table. We’ll point out that, in our case, we aren’t planning on using these input combinations, which means we don’t care what the corresponding outputs are because we will never actually see them (we’re using ‘X’ characters to represent the “don’t care” values). In turn, this means we can use these don’t care values in our Karnaugh maps to aid us in our logic minimization and optimization.

The funny thing is that it’s been a long time since I performed Karnaugh map minimizations by hand. As a result, on my first pass, I missed a couple of obvious optimizations. Just for giggles and grins, I’ve shown the populated maps below. Before you look at my solutions, why don’t you take a couple of minutes to perform your own minimizations to see how much you remember?


Use these populated maps to perform your own minimizations and optimizations (Image source: Max Maxfield)

I should point out that I’m a bit rusty at this sort of thing, so you might want to check that I’ve correctly captured the truth table and accurately populated these maps before you leap into the fray with gusto and abandon.

Remember that we’re dealing with absolute beginners here, so, even though I will have recently introduced them to Karnaugh map techniques, I think it would be a good idea to commence this portion of the discussions by walking them through the process for segment ‘a’ step-by-step, as illustrated below.


Karnaugh map minimizations for 7-segment display (Image source: Max Maxfield)

Next, I extracted the Boolean equations corresponding to the Karnaugh map minimizations. As shown below, I’ve color-coded any product terms that appear multiple times. I don’t recall seeing this done before, but I think it could be a useful aid for beginners. Once again, I’d be interested to hear your thoughts about this.


Boolean equations for 7-segment display (Image source: Max Maxfield)

Actually, I’d love to hear your thoughts on anything I’ve shown here. Do you think the way I’ve drawn the diagrams is conducive to beginners understanding what’s going on? Can you spot anything I’ve missed or could do better? I can’t wait for you to see what we have planned with regards to the circuit board and the “hands-on” part of this forthcoming series (I will, of course, be reporting back further in the future). Until then, as always, I welcome your comments, questions, and suggestions.

Originally posted HERE.

Read more…

In order to form proper networks to share data, the Internet of Things (IoT) needs reliable communications and connectivity. Because of popular demand, there’s a wide range of connectivity technologies that operators, as well as developers, can opt for.

IoT Connectivity Groups

The IoT connectivity technologies are currently divided into two groups. The first is cellular-based, and the second is unlicensed LPWAN. The first group is based on licensed spectrum, which offers a more consistent and robust infrastructure. It supports higher data rates, but at the cost of shorter battery life and more expensive hardware, although that hardware is becoming cheaper.

Cellular-Based IoT

Because acquiring licensed spectrum is expensive, cellular-based IoT is only offered by large operators, which have access to both the licensed spectrum and the costly hardware. Cellular IoT connectivity itself comes in two types: narrowband IoT (NB-IoT) and category M1 IoT (Cat-M1).

Although both are based on cellular standards, there is one big difference between the two: NB-IoT uses a much smaller bandwidth than Cat-M1 (about 10x smaller) and thus transmits at lower power. Both still offer very long range, with NB-IoT reaching up to 100 km.

Cellular-standard IoT connectivity is more reliable, and device operational lifetimes are longer than with unlicensed LPWAN. When it comes to choosing between the two cellular options, most operators prefer NB-IoT over Cat-M1, because Cat-M1 provides higher data rates that are not usually necessary and comes at a higher cost.

Cat-M1 is mostly chosen by large-scale operators because it provides mobility support, which suits transportation and traffic-control networks. It can also be useful in emergency response situations, as it offers voice data transfer.

The hardware (module) used for cellular IoT is relatively expensive compared to LPWAN: around $10 per module, versus roughly $2 for LPWAN. However, this cost has been dropping rapidly because of popular demand.

Unlicensed LPWAN

As for the unlicensed LPWANs, they are used by those who don’t have the budget for cellular-based IoT. They are designed for customized IoT networks and offer lower data rates, but with increased battery life and long transmission range, and they can be deployed easily. Two of the best-known unlicensed LPWANs are LoRa (Long Range) and SigFox.

Both are designed for devices that need a low price, long battery life, and long range. Their coverage can reach up to 10 km, and their connectivity cost is as low as $2 per module, sometimes even lower. This makes them ideal for local-area deployments.

Weightless LPWAN

Although there are many variants of LPWAN, Weightless is considered to be the most popular. This is because the Weightless Special Interest Group (SIG) currently offers three different protocols: Weightless-N, Weightless-W, and Weightless-P. All three work differently, as they have different modalities.

Weightless-W

First off, we have the Weightless-W open standard, which is designed to operate in TV white space (TVWS). TVWS is the inactive or unoccupied spectrum found between channels actively used in the UHF and VHF bands; its frequency range spans 470 MHz to 790 MHz. For those who don’t know, this is similar to what Neul was developing before being acquired by Huawei. Now, while using TVWS can be great because it uses ultra-high-frequency spectrum, it has one downside: in theory it seems perfect, but in practice it is difficult, because the rules and regulations for utilizing TVWS for IoT vary greatly.

In addition, the end nodes of this model don’t work as well as they are supposed to. They are designed to operate in a small part of the spectrum, and it is difficult to design an antenna that can cover such a wide band. This is why TVWS can be difficult to deploy. Weightless-W is considered a good option in:

  • The smart oil and gas sector.

Weightless-N

Next up we have the ultra-narrowband system, Weightless-N. This model has a lot in common with SigFox. The best thing about it is that it is made up of different networks instead of being an end-to-end enclosed system. Weightless-N uses the same differential binary phase shift keying (DBPSK) digital modulation scheme as SigFox.
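As an aside, here is a minimal numpy sketch (my own addition) of the DBPSK idea: the data rides on phase changes between successive symbols, so the receiver only needs to compare each symbol with the previous one. This is purely illustrative and is not the actual Weightless-N or SigFox physical layer.

# Minimal DBPSK sketch: information is carried in phase *changes*, so no
# absolute phase reference is needed at the receiver. Illustrative only.
import numpy as np

def dbpsk_modulate(bits):
    phase = 0.0
    symbols = []
    for b in bits:
        phase += np.pi * b          # a '1' flips the phase, a '0' keeps it
        symbols.append(np.exp(1j * phase))
    return np.array(symbols)

def dbpsk_demodulate(symbols):
    prev = 1.0 + 0.0j               # assumed reference symbol
    bits = []
    for s in symbols:
        # a phase flip relative to the previous symbol decodes as a '1'
        bits.append(1 if np.real(s * np.conj(prev)) < 0 else 0)
        prev = s
    return bits

tx_bits = [1, 0, 1, 1, 0, 0, 1]
print(dbpsk_demodulate(dbpsk_modulate(tx_bits)))  # -> [1, 0, 1, 1, 0, 0, 1]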

The Weightless-N line is operated by Nwave, a popular IoT hardware and software developer. However, while this model is well suited to sensor-based networks, temperature readings, tank level monitoring, and more, there are some problems with it. For instance, Nwave’s design requires a TCXO, a temperature compensated crystal oscillator.

In addition, it has an unbalanced link budget, which is bad because there is much more sensitivity on the uplink to the base station than on the downlink.

Weightless-P

Finally, we have the Weightless-P, the latest protocol in the group, launched some time after the other two. What people like most about this one is that it offers two-way communication. In addition, it uses narrow 12.5 kHz channels. Weightless-P doesn’t require a TCXO, which sets it apart from Weightless-N and -W.

The main company behind Weightless-P is Ubiik. The only downside of this model is that it is not ideal for wide-area networks, as it offers a range of around 2 km. However, Weightless-P is still ideal for:

  • Private networks.
  • More sophisticated use cases.
  • Applications where both uplink data and downlink control are important.

Capacity

Because the Weightless protocols are based on software-defined radio (SDR), the base station for narrowband signals is much more complex: it ends up creating thousands of small binary phase-shift keying channels. This gives you more capacity, but it will be a burden on your wallet.

In addition, since Weightless-N end nodes require a TCXO, they are more expensive. The TCXO is needed because the frequency would otherwise become unstable as the temperature changes at the end node.

Range

As for range, Weightless-N and -W reach around 5 km in urban environments, while Weightless-P goes up to about 2 km.

Comparison

Weightless and SigFox

In terms of technology, Weightless-N and SigFox are pretty similar. However, they differ in their go-to-market approach: since Weightless is a standard, it requires another company to build an IoT network based on it, whereas SigFox is offered as a complete end-to-end solution.

Weightless and LoRa

In terms of technology, Weightless and LoRa/LoRaWAN are different. However, the functionality of Weightless-N and LoRaWAN is similar, because both are primarily uplink-based systems. Weightless is also sometimes considered a very good alternative when LoRa is not feasible for the user.

Weightless and Symphony Link

The Symphony Link and Weightless-P standards are more similar to each other; for instance, both focus on private networks. However, Symphony Link has much better range performance because it uses LoRa modulation instead of minimum-shift keying (MSK).

Originally posted here

Read more…

PYNQ is great for accelerating Python applications in programmable logic. Let's take a look at how we can use it with OpenMV camera.

Things used in this project

Hardware:

  • Avnet Ultra96-V2 (a V1 board can also be used)
  • OpenMV Cam M7

Software:

  • Xilinx PYNQ Framework

Introduction

Image processing is required for a range of applications, from vision-guided robotics to machine vision in industry.

In this project we are going to look at how we can fuse the OpenMV camera with the Ultra96 running PYNQ. This will allow our PYNQ application to offload some image processing to the camera. Doing so provides a higher-performance system and opens up the Ultra96 running PYNQ to the OpenMV ecosystem.

 

What Is the OpenMV Camera 

The OpenMV camera is a low-cost machine vision camera which is programmed using Python. Thanks to this architecture, we can offload some of the image processing to the camera itself, meaning the image frames received by our Ultra96 can already have faces identified, eyes tracked, or Sobel filtering applied; it all depends on how we set up the OpenMV camera.

As the OpenMV camera has been designed to be extensible, it provides 10 external I/O pins which can be used to drive external sensors. These pins support a range of interfaces, from UART to SPI, I2C and PWM. The PWM outputs are, of course, very useful for driving servos.

One very useful feature of the OpenMV camera is its LEDs. Mine (an OpenMV M7) provides a tri-colour LED, which can output red, green and blue, plus a separate IR LED. As the sensor is IR sensitive, this can be useful for low-light performance.

OpenMV Camera

How Does the OpenMV Camera Work

The OpenMV Cam uses MicroPython to control the imager and output frames over the USB link. MicroPython is intended for use on microcontrollers and is based on Python 3.4. To use the OpenMV camera, we first write a MicroPython script which configures the camera for the algorithm we wish to implement, then execute it by uploading and running it over the USB link.

This means we need some OpenMV APIs and libraries on a host machine to communicate with the OpenMV Camera.

To develop the script we want to be able to ensure it works, which is where the OpenMV IDE comes into its own: it allows us to develop and test the script which we later use in our Ultra96 application.

We can develop this script using a Windows, Mac or Linux desktop.

 

Creating the OpenMV Script using the OpenMV IDE

To get started with the OpenMV IDE we first need to download and install it. Once it is installed, the next step is to connect the OpenMV camera over its USB link and run a script on it.

To get started we can run the provided hello world example, which configures the camera to output a standard RGB image at QVGA resolution. On the right-hand side of the IDE you will be able to see the images output from the camera.

 

We can use this IDE to develop scripts for the OpenMV camera such as the one below which detects and identifies circles in the captured image.

Note the frame rate is lower when the camera is connected to the IDE.

 

We can use the scripts developed here in our Ultra96 PYNQ implementation. Let's take a look at how we set up the Ultra96 and PYNQ.

Setting Up the Ultra96 PYNQ Image

The first thing we need to do, if we have not already done it, is to download a PYNQ image and create a PYNQ SD card so we can run the PYNQ framework on the Ultra96.

As we want to use the Xilinx image processing overlay we should download the Ultra96 PYNQ v2.3 image.

Once you have this image, creating an SD card is very simple: extract the image from the compressed file and write it to an SD card using a program such as Etcher or Win32 Disk Imager.

With an SD card available, we can then boot the Ultra96 and connect to the PYNQ framework in one of the following ways:

  • Use a USB Ethernet connection over the MicroUSB (upstream USB connection).
  • Connect via WiFi.
  • Use the Ultra96 as a single-board computer and connect a monitor, keyboard and mouse.

For this project I used the USB Ethernet connection.

The next thing to do is to ensure we have the necessary overlays to accelerate image processing functions in the programmable logic. To do this we need to install the PYNQ computer vision overlay.

Downloading the Image Processing Overlay

Installing this overlay is very straightforward. Open a browser window and connect to the web address 192.168.3.1 (the USB Ethernet address). This will open a login page for the Jupyter notebooks; the password is Xilinx.

 

Upon logging in you will see the following folders and scripts.

 

Click on New and select Terminal; this will open a new terminal window in the browser. To download and install the PYNQ Computer Vision overlay we enter the following command:

sudo pip3 install --upgrade git+https://github.com/Xilinx/PYNQ-ComputerVision.git
 

Once this has been installed, if you look back at the Jupyter home page you will see a new directory called pynqOpenCV.

 

Using these Jupyter notebooks we can test the image processing performance when we accelerate OpenCV functions into the programmable logic.

 

Typically, as can be seen in the image above, the hardware-accelerated implementation greatly outperforms the same algorithm running in software.

Of course we can call this overlay from our own Jupyter notebooks

 

Setting Up the OpenMV Camera in PYNQ

The next step is to configure the Ultra96 PYNQ instance to be able to control the OpenMV camera using its APIs. We can obtain these by downloading the OpenMV git repo using the command below in a terminal window on the Ultra96.

git clone https://github.com/openmv/openmv
 

Once this is downloaded we need to move the file pyopenmv.py from openmv/tools to /usr/lib/python3.6.

This will allow us to control the OpenMV camera from within our Jupyter applications.

To be able to do this we need to know which serial port the OpenMV camera enumerates as. This will generally be ttyACM0 or ttyACM1; we can find out by listing the /dev directory (ls /dev).
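If you prefer to find the port from Python rather than the terminal, a small snippet like this (my own addition, which simply assumes the camera shows up as /dev/ttyACM*) can select it automatically:

# Auto-detect the OpenMV camera's CDC serial port instead of hard-coding it.
import glob

ports = sorted(glob.glob("/dev/ttyACM*"))
if not ports:
    raise RuntimeError("No /dev/ttyACM* device found - is the OpenMV camera plugged in?")
portname = ports[0]
print("Using", portname)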

 

Now we are ready to begin working with the OpenMV camera in our applications. Let's take a look at how we set it up in our Jupyter scripts.

 

Initial Test of OpenMV Camera

The first thing we need to do in a new Jupyter notebook is to import the necessary packages. This includes pyopenmv, which we just installed.

We will also be importing numpy, as the image is returned as a numpy array that we can then work with and display.

import pyopenmv
import time
import sys
import numpy as np

Next we need to define the script we developed in the IDE. For "first light" with PYNQ and the OpenMV camera, we will use the hello world script to obtain a simple image.

script = """

# Hello World Example

#

# Welcome to the OpenMV IDE! Click on the green run arrow button below to run the script!

import sensor, image, time

import pyb

sensor.reset()                      # Reset and initialize the sensor.

sensor.set_pixformat(sensor.RGB565) # Set pixel format to RGB565 (or GRAYSCALE)

sensor.set_framesize(sensor.QVGA)   # Set frame size to QVGA (320x240)

sensor.skip_frames(time = 2000)     # Wait for settings take effect.

clock = time.clock()                # Create a clock object to track the FPS.

red_led = pyb.LED(1)

red_led.off()

red_led.on()

while(True):

   clock.tick() 

   img = sensor.snapshot()         # Take a picture and return the image.

"""

Once the script is defined the next thing we need to do is connect to the OpenMV camera and download the script.

 

portname = "/dev/ttyACM0"

connected = False

pyopenmv.disconnect()

for i in range(10):

   try:

       # opens CDC port.

       # Set small timeout when connecting

       pyopenmv.init(portname, baudrate=921600, timeout=0.050)

       connected = True

       break

   except Exception as e:

       connected = False

        time.sleep(0.100)

if not connected:

   print ( "Failed to connect to OpenMV's serial port.\n"

           "Please install OpenMV's udev rules first:\n"

           "sudo cp openmv/udev/50-openmv.rules /etc/udev/rules.d/\n"

           "sudo udevadm control --reload-rules\n\n")

   sys.exit(1)

# Set higher timeout after connecting for lengthy transfers.

pyopenmv.set_timeout(1*2) # SD Cards can cause big hiccups.

pyopenmv.stop_script()

pyopenmv.enable_fb(True)

pyopenmv.exec_script(script)

Finally once the script has been downloaded and is executing, we want to be able to read out the frame buffer. This Cell below reads out the framebuffer and saves it as a jpg file in the PYNQ file system.

 

running = True

import numpy as np

from PIL import Image

from matplotlib import pyplot as plt

while running:

   fb = pyopenmv.fb_dump()

   if fb is not None:

       img = Image.fromarray(fb[2], 'RGB')

       img.save("frame.jpg")

       img = Image.open("frame.jpg")

       # display the latest frame in the notebook (a bare 'img' statement
       # would not render inside a loop)
       plt.imshow(img)
       plt.show()

       time.sleep(0.100)

 

When I ran this script, the first-light image below, of me working in my office, was received.

 

Having achieved this, the next step is to start working with more advanced scripts in the PYNQ Jupyter notebook. Using the same approach as above, we can redefine the script for different kinds of processing, including the following.

script = """

import sensor, image, time

sensor.reset() # Initialize the camera sensor.

sensor.set_pixformat(sensor.GRAYSCALE) # or sensor.RGB565

sensor.set_framesize(sensor.QQVGA) # or sensor.QVGA (or others)

sensor.skip_frames(time = 2000) # Let new settings take affect.

sensor.set_gainceiling(8)

clock = time.clock() # Tracks FPS.

while(True):

   clock.tick() # Track elapsed milliseconds between snapshots().

   img = sensor.snapshot() # Take a picture and return the image.

   # Use Canny edge detector

   img.find_edges(image.EDGE_CANNY, threshold=(50, 80))

   # Faster simpler edge detection

   #img.find_edges(image.EDGE_SIMPLE, threshold=(100, 255))

   print(clock.fps()) # Note: Your OpenMV Cam runs about half as fast while connected to the IDE.

"""

For Canny edge detection when imaging a MiniZed Board

 

Alternatively we can also extract key points from images for tracking in subsequent images.

script = """

import sensor, time, image

# Reset sensor

sensor.reset()

# Sensor settings

sensor.set_contrast(3)

sensor.set_gainceiling(16)

sensor.set_framesize(sensor.VGA)

sensor.set_windowing((320, 240))

sensor.set_pixformat(sensor.GRAYSCALE)

sensor.skip_frames(time = 2000)

sensor.set_auto_gain(False, value=100)

def draw_keypoints(img, kpts):

   if kpts:

       print(kpts)

       img.draw_keypoints(kpts)

       img = sensor.snapshot()

       time.sleep(1000)

kpts1 = None

# NOTE: uncomment to load a keypoints descriptor from file

#kpts1 = image.load_descriptor("/desc.orb")

#img = sensor.snapshot()

#draw_keypoints(img, kpts1)

clock = time.clock()

while (True):

   clock.tick()

   img = sensor.snapshot()

   if (kpts1 == None):

       # NOTE: By default find_keypoints returns multi-scale keypoints extracted from an image pyramid.

       kpts1 = img.find_keypoints(max_keypoints=150, threshold=10, scale_factor=1.2)

       draw_keypoints(img, kpts1)

   else:

       # NOTE: When extracting keypoints to match the first descriptor, we use normalized=True to extract

       # keypoints from the first scale only, which will match one of the scales in the first descriptor.

       kpts2 = img.find_keypoints(max_keypoints=150, threshold=10, normalized=True)

       if (kpts2):

           match = image.match_descriptor(kpts1, kpts2, threshold=85)

           if (match.count()>10):

               # If we have at least n "good matches"

               # Draw bounding rectangle and cross.

               img.draw_rectangle(match.rect())

               img.draw_cross(match.cx(), match.cy(), size=10)

           print(kpts2, "matched:%d dt:%d"%(match.count(), match.theta()))

           # NOTE: uncomment if you want to draw the keypoints

           #img.draw_keypoints(kpts2, size=KEYPOINTS_SIZE, matched=True)

   # Draw FPS

   img.draw_string(0, 0, "FPS:%.2f"%(clock.fps()))

"""

Circle Detection

 

import sensor, image, time

sensor.reset()

sensor.set_pixformat(sensor.RGB565) # grayscale is faster

sensor.set_framesize(sensor.QQVGA)

sensor.skip_frames(time = 2000)

clock = time.clock()

while(True):

   clock.tick()

   img = sensor.snapshot().lens_corr(1.8)

   # Circle objects have four values: x, y, r (radius), and magnitude. The

   # magnitude is the strength of the detection of the circle. Higher is

   # better...

   # `threshold` controls how many circles are found. Increase its value

   # to decrease the number of circles detected...

   # `x_margin`, `y_margin`, and `r_margin` control the merging of similar

   # circles in the x, y, and r (radius) directions.

   # r_min, r_max, and r_step control what radiuses of circles are tested.

   # Shrinking the number of tested circle radiuses yields a big performance boost.

   for c in img.find_circles(threshold = 2000, x_margin = 10, y_margin = 10, r_margin = 10,

           r_min = 2, r_max = 100, r_step = 2):

       img.draw_circle(c.x(), c.y(), c.r(), color = (255, 0, 0))

       print(c)

   print("FPS %f" % clock.fps())

 

 

 

This ability to offload processing to either the OpenMV camera or to the Ultra96 programmable logic running PYNQ provides the system designer with maximum flexibility.

 

Wrap Up

Using the OpenMV camera, coupled with the PYNQ computer vision libraries and other overlays such as the Kalman filter and base overlays, we can implement algorithms that enable vision-guided robotics. Using the base overlay and the input/output processors also enables us to communicate with the lower-level drives, interfaces and other sensors required to implement such a solution.

Originally posted here.

 

Read more…

Arm DevSummit 2020 debuted this week (October 6 – 8) as an online virtual conference focused on engineers and providing them with insights into the Arm ecosystem. The summit lasted three days over which Arm painted an interesting technology story about the current and future state of computing and where developers fit within that story. I’ve been attending Arm Techcon for more than half a decade now (which has become Arm DevSummit) and as I perused content, there were several take-a-ways I noticed for developers working on microcontroller based embedded systems. In this post, we will examine these key take-a-ways and I’ll point you to some of the sessions that I also think may pique your interest.

(For those of you who aren’t yet aware, you can register for free up until October 21st and still watch the conference materials up until November 28th. Click here to register.)

Take-A-Way #1 – Expect Big Things from NVIDIAs Acquisition of Arm

As many readers probably already know, NVIDIA is in the process of acquiring Arm. This acquisition has the potential to be one of the focal points that I think will lead to a technological revolution in computing technologies, particularly around artificial intelligence but that will also impact nearly every embedded system at the edge and beyond. While many of us have probably wondered what plans NVIDIA CEO Jensen Huang may have for Arm, the Keynotes for October 6th include a fireside chat between Jensen Huang and Arm CEO Simon Segars. Listening to this conversation is well worth the time and will help give developers some insights into the future but also assurances that the Arm business model will not be dramatically upended.

Take-A-Way #2 – Machine Learning for MCU’s is Accelerating

It is sometimes difficult at a conference to get a feel for what is real and what is a little more smoke and mirrors. Sometimes, announcements are real, but they just take several years to filter their way into the market and affect how developers build systems. Machine learning is one of those technologies that I find there is a lot of interest around but that developers also aren’t quite sure what to do with yet, at least in the microcontroller space. When we hear machine learning, we think artificial intelligence, big datasets and more processing power than will fit on an MCU.

There were several interesting talks at DevSummit around machine learning.

Some of these were foundational, providing embedded developers with the fundamentals to get started while others provided hands-on explorations of machine learning with development boards. The take-a-way that I gather here is that the effort to bring machine learning capabilities to microcontrollers so that they can be leveraged in industry use cases is accelerating. Lots of effort is being placed in ML algorithms, tools, frameworks and even the hardware. There were several talks that mentioned Arm’s Cortex-M55 architecture that will include Helium technology to help accelerate machine learning and DSP processing capabilities.

Take-A-Way #3 – The Constant Need for Reinvention

In my last take-a-way, I alluded to the fact that things are accelerating. Acceleration is not just happening in the technologies that we use to build systems, though. The application domains that we can apply these technologies to are also dramatically expanding. Not only can we start to deploy security and ML technologies at the edge, but also in domains such as space and medical systems. There were several interesting talks about how technologies are being used around the world to solve interesting and unique problems such as protecting vulnerable ecosystems, mapping the sea floor, fighting against diseases and so much more.

By carefully watching and listening, you’ll notice that many speakers have been involved in many different types of products over their careers and that they are constantly having to reinvent their skill sets, capabilities and even their interests! This is what makes working in embedded systems so interesting! It is constantly changing and evolving and as engineers we don’t get to sit idly behind a desk. Just as Arm, NVIDIA and many of the other ecosystem partners and speakers show us, technology is rapidly changing but so are the problem domains that we can apply these technologies to.

Take-A-Way #4 – Mbed and Keil are Evolving

There are also interesting changes coming to the Arm toolchains and tools like Mbed and Keil MDK. In Reinhard Keil’s talk, “Introduction to an Open Approach for Low-Power IoT Development“, developers got an insight into the changes that are coming to Mbed and Keil with the core focus being on IoT development. The talk focused on the endpoint and discussed how Mbed and Keil MDK are being moved to an online platform designed to help developers move through the product development faster from prototyping to production. The Keil Studio Online is currently in early access and will be released early next year.

(If you are interested in endpoints and AI, you might also want to check-out this article on “How Do We Accelerate Endpoint AI Innovation? Put Developers First“)

Conclusions

Arm DevSummit had a lot to offer developers this year and without the need to travel to California to participate. (Although I greatly missed catching up with friends and colleagues in person). If you haven’t already, I would recommend checking out the DevSummit and watching a few of the talks I mentioned. There certainly were a lot more talks and I’m still in the process of sifting through everything. Hopefully there will be a few sessions that will inspire you and give you a feel for where the industry is headed and how you will need to pivot your own skills in the coming years.

Originally posted here

Read more…

Will We Ever Get Quantum Computers?

In a recent issue of IEEE Spectrum, Mikhail Dyakonov makes a pretty compelling argument that quantum computing (QC) isn't going to fly anytime soon. Now, I'm no expert on QC, and there sure is a lot of money being thrown at the problem by some very smart people, but having watched from the sidelines QC seems a lot like fusion research. Every year more claims are made, more venture capital gets burned, but we don't seem to get closer to useful systems.

Consider D-Wave Systems. They've been trying to build a QC for twenty years, and indeed do have products more or less on the market, including, it's claimed, one of 1024 q-bits. But there's a lot of controversy about whether their machines are either quantum computers at all, or if they offer any speedup over classical machines. One would think that if a 1K q-bit machine really did work the press would be all abuzz, and we'd be hearing constantly of new incredible results. Instead, the machines seem to disappear into research labs.

Mr. Dyakonov notes that optimistic people expect useful QCs in the next 5-10 years; those less sanguine expect 20-30 years, a prediction that hasn't changed in two decades. He thinks a window of many decades to never is more realistic. Experts think that a useful machine, one that can do the sort of calculations your laptop is capable of, will require between 1000 and 100,000 q-bits. To me, this level of uncertainty suggests that there is a profound lack of knowledge about how these machines will work and what they will be able to do.

According to the author, a 1000 q-bit machine can be in 2^1000 states (a classical machine with N transistors can be in only 2^N states), which is about 10^300, or more than the number of sub-atomic particles in the universe. At 100,000 q-bits we're talking 10^30,000, a mind-boggling number.
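As a quick sanity check on those exponents (my own arithmetic, not from the article), Python's arbitrary-precision integers make the scale easy to see:

# How many decimal digits do these state counts have?
import math

print(len(str(2**1000)))                       # 302 digits, i.e. on the order of 10^300
print(math.floor(100000 * math.log10(2)) + 1)  # ~30,103 digits for 2**100000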

Because of noise, expect errors. Some theorize that those errors can be eliminated by adding q-bits, on the order of 1000 to 100,000 additional per q-bit. So a useful machine will need at least millions, or perhaps many orders of magnitude more, of these squirrelly microdots that are tamed only by keeping them at 10 millikelvin.

A related article in Spectrum mentions that a committee of prestigious researchers, tasked with assessing the probability of success with QC, concluded that:

"[I]t is highly unexpected" that anyone will be able to build a quantum computer that could compromise public-key cryptosystems (a task that quantum computers are, in theory, especially suitable for tackling) in the coming decade. And while less-capable "noisy intermediate-scale quantum computers" will be built within that time frame, "there are at present no known algorithms/applications that could make effective use of this class of machine," the committee says."

I don't have a dog in this fight, but am relieved that useful QC seems to be no closer than The Distant Shore (to quote Jan de Hartog, one of my favorite writers). If it were feasible to easily break encryption schemes, banking and other systems could collapse. I imagine Blockchain would fail as hash algorithms became reversible. The resulting disruption would not be healthy for our society.

On the other hand, Bruce Schneier's article in the March issue of IEEE Computing Edge suggests that QC won't break all forms of encryption, though he does think a lot of our current infrastructure will be vulnerable. The moral: if and when QC becomes practical, expect chaos.

I was once afraid of quantum computing, as it involves mechanisms that I'll never understand. But then I realized those machines will have an API. Just as one doesn't need to know how a computer works to program in Python, we'll be insulated from the quantum horrors by layers of abstraction.

Originally posted here

Read more…

An edge device is the network component that is responsible for connecting a local area network to an external or wide area network, which can be accessed from anywhere. Edge devices offer several new services and improved outcomes for IoT deployments across all markets. Smart services that rely on high volumes of data and local analysis can be deployed in a wide range of environments.

An edge device provides local data to an external network. If the protocols of the local and external networks differ, it also translates between them and bridges the two network boundaries. Edge devices can analyze diagnostics and populate data automatically; however, a secure connection between the field network and the cloud is necessary. In the event of a lost internet connection or a cloud outage, the edge device will store data until the connection is re-established, so no process information is lost. Local data storage is optional, and not all edge devices offer it; it depends on the application and the service to be implemented at the plant.
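As a rough sketch of that store-and-forward behaviour (my own illustration; read_sensor() and send_to_cloud() are hypothetical placeholders, not any particular product's API):

# Buffer readings locally whenever the cloud is unreachable, flush when it returns.
import time
from collections import deque

local_buffer = deque()

def read_sensor():
    return {"timestamp": time.time(), "value": 42.0}   # placeholder reading

def send_to_cloud(reading):
    # Would normally POST/publish to the cloud; may raise on network failure.
    raise ConnectionError("cloud unreachable")

def edge_loop_once():
    local_buffer.append(read_sensor())        # never lose a sample
    try:
        while local_buffer:
            send_to_cloud(local_buffer[0])    # flush oldest first
            local_buffer.popleft()
    except ConnectionError:
        pass                                  # keep buffering until the link returns

edge_loop_once()
print(len(local_buffer), "reading(s) buffered locally")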

How does an edge device work?

An edge device has a very straightforward working principle: it communicates between two different networks and translates one protocol into another. Furthermore, it creates a secure connection with the cloud.

An edge device can be configured via local access or over the internet/cloud. In general, an edge device is plug-and-play: its setup is simple and does not require much time.

Why should I use an edge device?

Depending on the service required in the plant, edge devices are a crucial point for collecting information and automatically creating a digital twin of your device in the cloud.

Edge devices are an essential part of IoT solutions, since they connect the information from a network to a cloud solution. They do not affect the network but only collect data from it, and never cause a problem with the communication between the control system and the field devices. By using an edge device to collect information, the user does not need to touch the control system. The edge connection is one-way: nothing is written into the network, and data is acquired with the highest possible security.

Edge device requirements

Edge devices must meet certain requirements under all conditions in order to perform in different situations. These may include storage, networking, latency, and so on.

Low latency

Sensor data is collected in near real time by an edge server. For services like image recognition and visual monitoring, edge servers are located in very close proximity to the device, meeting low-latency requirements. Edge deployment needs to ensure that these services are not lost through poor development practice or inadequate processing resources at the edge. Maintaining data quality and security at the edge whilst enabling low latency is a challenge that needs to be addressed.

Network independence

IoT services should not depend on the data communication topology. The user requires the data through the most effective means possible, which in many cases will be mobile networks, but in some scenarios Wi-Fi or local mesh networking may be the most effective mechanism for collecting data while ensuring latency requirements can be met.


Data security

Users require data at the edge to be kept as secure as when it is stored and used elsewhere. These challenges need to be met despite the larger attack surface and scope for attacks at the edge. Data authentication and user access control are as important at the edge as they are on the device or at the core. Additionally, the physical security of edge infrastructure needs to be considered, as it is likely to be housed in less secure environments than dedicated data centers.

Data Quality

Data quality at the edge is a key requirement for guaranteed operation in demanding environments. To maintain data quality at the edge, applications must ensure that data is authenticated, replicated as required, and assigned to the correct classes and categories of data.

Flexibility in future enhancements

Additional sensors can be added and managed at the edge as requirements change. Sensors such as accelerometers, cameras, and GPS, can be added to equipment, with seamless integration and control at the edge.

Local storage

Local storage is essential in the event of a lost internet connection or a cloud outage: the edge device will store data until the connection is re-established, so no process information is lost. Local data storage is optional, and not all edge devices offer it; it depends on the application and the service to be implemented at the plant.

Originally posted here

Read more…

When you’re in technology, you have to expect change. Yet, there’s something to the phrase “the more things change, the more they stay the same.” For instance, I see in the industrial internet of things (IIoT) a realm that’ll dramatically shape the future - how we manufacture, the way we run our factories, workforce needs – but the underlying business goals are the same as always.

Simply put, while industrial enterprise initiatives may change, financial objectives don’t – and they’re still what matter most. That’s why IIoT is so appealing. While the possibilities of smart and connected operations, sites and products certainly appeal to the dreamer and innovator, the clear payoff ensures that it’s a road even the most pragmatic decision-maker will eagerly follow.

The big three
When it comes to industrial enterprises, IIoT addresses the “big three” financial objectives head on. The technology maximizes revenue growth, reduces operating expense and increases asset efficiency.

IIoT does this in numerous ways. It yields invaluable operational intelligence, like real-time performance management data, to reduce manufacturing costs, increase flexibility and enable agility. When it comes to productivity, connected digital assets can empower a workforce with actionable insights to improve productivity and quality, even prevent safety and compliance issues.

For example, recognizing defects in a product early on can save time, materials, staff hours and possibly even a company’s reputation.

Whether on or off the factory floor, IIoT can be used to optimize asset efficiency. With real-time monitoring, diagnostics and analytics, downtime can be reduced or avoided. Asset utilization can also be evaluated and maximized. Think applications like equipment health monitoring, predictive maintenance, the ability to provide augmented 3D instructions for complex repairs. And, you can also scale production more precisely via better control over processes and inventory.

All of this accelerates time to market; another key benefit of IIoT and long held business goal.

Why is 5G important for IIoT and augmented reality (AR)?
As we look at the growing need to connect more devices and more sensors, and to install things like real-time cameras for analytics, growing stress and strain is placed on industrial settings. The need to increase connectivity while gaining greater scalability, performance, accessibility, reliability, and broader reach at a lower cost of ownership has become much more important. This is where 5G can make a real difference.

Many of our customers have seen what we are doing with augmented reality and the way that PTC can help operators service equipment. But in the not so distant future, the way that people interact with robotics, for example, will change. There will be real-time video to do spatial analytics on the way that people are working with man and machines and we’ll be able to unlock a new level of intelligence with a new layer of connectivity that helps drive better business outcomes.

Partner up
It sounds nice but the truth is, a lot of heavy lifting is required to do IIoT right. The last thing you want to do is venture into a pilot, run into problems, and leave the C-suite less than enthused with the outcome. And make no mistake, there are a lot of potential pitfalls to be aware of.

For instance, lengthy proof of concept periods, cumbersome processes and integrations can slow time to market. Multiple, local integrations can be required when connectivity and device management gets siloed. If not done right, you may only gain limited visibility into devices and the experience will fall short. And, naturally, global initiatives can be hindered by high roaming costs and deployment obstacles.

That said, you want to harness best of breed providers, not only to realize the full benefits of Industry 4.0, but to set yourself up with a foundation that’ll be able to harness 5G developments. You need a trusted IoT partner, and because of the sophistication and complexity, it takes an ecosystem of proven innovators working collaboratively.

That’s why PTC and Ericsson are partners.

Doing what’s best
Ericsson unlocks the full value of global cellular IoT connectivity and provides on-premise solutions. PTC offers an industrial IoT platform that’s ready to configure and deploy, with flexible connectivity and capabilities to build solutions without manual coding.

Drilling down a bit further, Ericsson’s IoT Accelerator can connect and manage billions of devices and millions of applications easily, seamlessly and globally. PTC’s IoT solutions digitalize processes and products, combining the physical and digital worlds seamlessly.

And with wireless connectivity, we can deploy a lot of new technology – from augmented reality to artificial intelligence applications – without having to think about the time and cost of creating fixed infrastructures, running wires, adding network capacity and more.

According to ABI Research, organizations that embrace Industry 4.0 and private cellular have the potential to improve gross margins by 5-13% in factory and warehouse operations. Manufacturers can expect a 10x return on their investment. And with 4.3 billion wireless connections in smart factories anticipated by 2030, it’s clear where things are headed.

By focusing on what we each do best, PTC and Ericsson are able to do what’s best for our customers. We can help them build and scale global cellular IoT deployments faster and gain a competitive advantage. They can reap the advantages of Industry 4.0 and create that path to 5G, future-proofing their operations and enjoying such differentiators as network slicing, edge computing and high-reliability, low-latency communications.

Further, with our histories of innovation, customers are assured they’ll be supported in the future, remaining out front with the ability to adapt to change, grow and deliver on financial objectives.

Editor's Note: This post was originally published by Steve Dertien, Chief Technology Officer for PTC, on Ericsson's website, and is part of a joint content effort with Kiva Allgood, head of IoT for Ericsson. To view Steve's original, please click here. To read Kiva's complementary post, please click here.

Read more…

A scientist from Russia has developed a new neural network architecture and tested its learning ability on the recognition of handwritten digits. The intelligence of the network was amplified by chaos, and the classification accuracy reached 96.3%. The network can be used in microcontrollers with a small amount of RAM and embedded in such household items as shoes or refrigerators, making them 'smart.' The study was published in Electronics.

Today, the search for new neural networks that can operate on microcontrollers with a small amount of random access memory (RAM) is of particular importance. For comparison, in ordinary modern computers, random access memory is calculated in gigabytes. Although microcontrollers possess significantly less processing power than laptops and smartphones, they are smaller and can be interfaced with household items. Smart doors, refrigerators, shoes, glasses, kettles and coffee makers create the foundation for so-called ambient intelligence. The term denotes an environment of interconnected smart devices.

An example of ambient intelligence is a smart home. The devices with limited memory are not able to store a large number of keys for secure data transfer and arrays of neural network settings. It prevents the introduction of artificial intelligence into Internet of Things devices, as they lack the required computing power. However, artificial intelligence would allow smart devices to spend less time on analysis and decision-making, better understand a user and assist them in a friendly manner. Therefore, many new opportunities can arise in the creation of environmental intelligence, for example, in the field of health care.

Andrei Velichko from Petrozavodsk State University, Russia, has created a new neural network architecture that allows efficient use of small volumes of RAM and opens up opportunities for introducing low-power devices to the Internet of Things. The network, called LogNNet, is a feed-forward neural network in which the signals are directed exclusively from input to output. It uses deterministic chaotic filters for the incoming signals. The system randomly mixes the input information, but at the same time extracts valuable data that are initially invisible. A similar mechanism is used by reservoir neural networks. To generate chaos, a simple logistic map equation is applied, where the next value is calculated based on the previous one. The equation is commonly used in population biology and as an example of a simple equation for calculating a sequence of chaotic values. In this way, the simple equation can reproduce an effectively unlimited set of pseudo-random numbers on the processor, so the network architecture can use them while consuming less RAM.
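For intuition only, the logistic map at the heart of this idea is a single line of arithmetic. The sketch below (my own illustration, not the actual LogNNet code) shows how a long sequence of chaotic coefficients can be regenerated on demand from a single seed instead of being stored in RAM:

# Logistic map: x_{n+1} = r * x_n * (1 - x_n). With r near 4 the sequence is
# chaotic, so one seed value can stand in for a large stored "weight" array.
def logistic_sequence(seed=0.1234, r=3.9, n=20):
    x = seed
    values = []
    for _ in range(n):
        x = r * x * (1 - x)
        values.append(x)
    return values

# e.g. mix (project) a flattened 28x28 input image with chaotic coefficients
chaotic_weights = logistic_sequence(n=784)   # one coefficient per input pixel
pixels = [0.5] * 784                         # placeholder image
mixed = sum(w * p for w, p in zip(chaotic_weights, pixels))
print(mixed)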

7978216495?profile=RESIZE_584x

The scientist tested his neural network on handwritten digit recognition using the MNIST database, which is considered the standard for training neural networks to recognize images. The database contains more than 70,000 handwritten digits; 60,000 of these are intended for training the neural network, and another 10,000 for network testing. The more neurons and chaos in the network, the better it recognized images. The maximum accuracy achieved by the network is 96.3%, while the developed architecture uses no more than 29 KB of RAM. In addition, LogNNet demonstrated promising results using very small RAM sizes, in the range of 1-2 kB. A miniature controller such as the Atmega328, which can be embedded into a smart door or even a smart insole, has approximately the same amount of memory.

"Thanks to this development, new opportunities for the Internet of Things are opening up, as any device equipped with a low-power miniature controller can be powered with artificial intelligence. In this way, a path is opened for intelligent processing of information on peripheral devices without sending data to cloud services, and it improves the operation of, for example, a smart home. This is an important contribution to the development of IoT technologies, which are actively researched by the scientists of Petrozavodsk State University. In addition, the research outlines an alternative way to investigate the influence of chaos on artificial intelligence," said Andrei Velichko.

Originally posted HERE.

by Russian Science Foundation

Image Credit: Andrei Velichko

 

 

 

 

Read more…

Impact of IoT in Inventory

The Internet of Things (IoT) has revolutionized many industries, including inventory management. IoT is a concept in which devices are interconnected via the internet. It is expected that by 2020, there will be 26 billion connected devices worldwide. These connections are important because they allow data sharing, which in turn can drive actions that make life and business more efficient. Since inventory is a significant portion of a company’s assets, inventory data is vital for the accounting department and for the company’s asset management and annual report.

In inventory solutions based on IoT and RFID, each individual inventory item receives an RFID tag. Each tag has a unique identification number (ID) that contains information about the inventory item, e.g. a model, a batch number, etc. These tags are scanned by an RF reader. Upon scanning, the reader extracts the IDs and transmits them to the cloud for processing. Along with the tag’s ID, the cloud receives the location and the time of reading. This data is used to update the status of inventory items, allowing users to monitor the inventory from anywhere, in real time.
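As a sketch of that flow (my own illustration; the field names and transport are hypothetical, not a specific vendor's API), each tag-read event might look like this:

# A read event: tag ID plus reader location and timestamp, serialized for the cloud.
import json, time

def on_tag_read(tag_id, reader_location):
    event = {
        "tag_id": tag_id,                 # unique ID stored on the RFID tag
        "location": reader_location,      # where the reader is installed
        "read_time": time.time(),         # when the tag was seen
    }
    payload = json.dumps(event)
    # In a real system this would be published to the cloud (HTTP, MQTT, ...):
    print("would send:", payload)

on_tag_read("EPC-0001-BATCH-42", "warehouse-dock-3")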

Industrial IoT

The role of IoT in inventory management is to receive data and turn it into meaningful insights about inventory items' location and status, and to give users a corresponding output. For example, based on the data and the inventory management solution architecture, we can forecast the amount of raw materials needed for the upcoming production cycle. The system can also send an alert if any individual inventory item is lost.

Moreover, IoT based inventory management solutions can be integrated with other systems, i.e. ERP and share data with other departments.

RFID in Industrial IoT

An RFID system consists of three main components: a tag, an antenna, and a reader.

Tags: An RFID tag carries information about a specific object. It can be attached to any surface, including raw materials, finished goods, packages, etc.

RFID antennas: An RFID antenna receives signals to supply power and data for tags’ operation

RFID readers: An RFID reader uses radio signals to read and write to the tags. The reader receives data stored in the tag and transmits it to the cloud.

Benefits of IoT in inventory management

The benefits of IoT on the supply chain are the most exciting physical manifestations we can observe. IoT in the supply chain creates unparalleled transparency that increases efficiencies.

Inventory tracking

The major benefit of IoT-based inventory management is asset tracking: instead of using barcodes to scan and record data, items carry RFID tags which can be registered wirelessly. It is possible to accurately obtain data and track items from any point in the supply chain.

With RFID and IoT, managers don’t have to spend time on manual tracking and reporting on spreadsheets. Each item is tracked and the data about it is recorded automatically. Automated asset tracking and reporting save time and reduce the probability of human error.

Inventory optimization

With real-time data about the quantity and location of inventory, manufacturers can reduce the amount of inventory on hand while meeting the needs of the customers at the end of the supply chain.

Combining data about the amount of available inventory with machine learning makes it possible to forecast the required inventory, which allows manufacturers to reduce lead time.

Remote tracking

Remote product tracking makes it easy to keep an eye on production and business. Knowing production and transit times allows you to better tweak orders to suit lead times and respond to fluctuating demand. It shows which suppliers are meeting production and shipping criteria and which need monitoring to achieve the required outcome.

It gives visibility into the flow of raw materials, work-in-progress and finished goods by providing updates about the status and location of the items so that inventory managers see when an individual item enters or leaves a specific location.

Bottlenecks in the operations

With real-time data about location and quantity, manufacturers can reveal bottlenecks in the process and pinpoint machines with lower utilization rates. For instance, if part of the inventory tends to pile up in front of a machine, a manufacturer can conclude that the machine is not keeping up and needs to be seen to.

The Outcomes

The data collected by IoT-based inventory management is more accurate and up to date. By reducing reporting delays and errors, the manufacturing process can enhance accuracy and reduce wastage. An IoT-based inventory management solution offers complete visibility of inventory by providing real-time information fetched by RFID tags. It helps to track the exact location of raw materials, work-in-progress and finished goods. As a result, manufacturers can balance the amount of on-hand inventory, increase the utilization of machines, reduce lead time, and thus avoid costs bound to less effective methods. This is all about optimizing inventory and ensuring anything ordered can be sold through whatever channel necessary.

Originally posted here

Read more…

By: Tom Jeltes, Eindhoven University of Technology

The Internet of Things (IoT) consists of billions of sensors and other devices connected to each other via internet, all of which need to be protected against hackers with malicious purposes. A low-cost and energy efficient solution for the security of IoT devices uses the unique characteristics of the built-in memory chips. Ph.D. candidate Lieneke Kusters investigated how to make optimal use of the chip's digital fingerprint to generate a security key.

The higher the number of devices connected to each other via the Internet of Things, the greater the risk that malicious hackers might gain access to important information, or even take over entire systems. Quite apart from all kinds of privacy issues, it's not hard to imagine that someone who, for example, has control over temperature sensors in a chemical or nuclear plant could cause serious damage.

To prevent problems like these from occurring, each IoT device needs to be able, as it were, to show an identity document—"authentication," in professional terms. Normally speaking, this is done with a kind of password, which is sent in encrypted form to the person who is communicating with the device. The security key needed for that has to be stored in the IoT device one way or another, Lieneke Kusters explains. "But these are often small and cheap devices that aren't supposed to use much energy. To safely store a key in these devices, you need extra hardware with constant power supply. That's not very practical."

Digital fingerprint

There is a different way: namely by deducing the security key from a unique physical characteristic of the memory chip (Static Random-Access Memory, or SRAM) that can be found in practically every IoT device. Depending on the random circumstances during the chip's manufacturing process, the memory locations have a random default value of 0 or 1.

"That binary code which you can read out when activating the chip, constitutes a kind of digital fingerprint of the device," says Kusters, who gained her doctorate at the Information and Communication Theory Laboratory at the TU/e department of Electrical Engineering. This fingerprint is known as a Physical Unclonable Function (PUF). "The Eindhoven-based company Intrinsic ID sells digital security based on SRAM-PUFs. I collaborated with them for my doctoral research, during which I focused on how to generate, in a reliable way, a key from that digital fingerprint that is as long as possible. The longer, the safer."

The major advantage of security keys based on SRAM-PUFs is that the key exists only at the moment when authentication is required. "The device restarts itself to read out the SRAM-PUF and in doing so creates the key, which subsequently gets erased immediately after use. That makes it all but impossible for an attacker to steal the key."

Noise and reliability

But that's not the entire story, because some bits of the SRAM do not always have the same value during activation, Kusters explains. Ten to fifteen percent of the bits turn out not to be determined, which makes the digital fingerprint a bit fuzzy. How do you use that fuzzy fingerprint to make a key of the highest possible complexity that nevertheless still fits into the receiving lock—practically—each time?
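Before looking at how the thesis answers this, a toy illustration of the general helper-data idea may help. This is a simple repetition-code sketch of a fuzzy extractor of my own, not Intrinsic ID's actual construction or the scheme analyzed in the thesis; real systems use far stronger error-correcting codes.

# Toy fuzzy extractor: public helper data plus majority voting turns a noisy
# SRAM fingerprint back into the same key on every read-out.
import random

REP = 5  # each key bit is protected by 5 PUF bits

def enroll(puf_bits, key_bits):
    # Helper data = PUF response XOR repetition-coded key.
    code = [b for b in key_bits for _ in range(REP)]
    return [p ^ c for p, c in zip(puf_bits, code)]

def reconstruct(noisy_puf_bits, helper):
    code = [p ^ h for p, h in zip(noisy_puf_bits, helper)]
    # Majority vote inside each group of REP bits corrects a few flipped cells.
    return [int(sum(code[i:i + REP]) > REP // 2) for i in range(0, len(code), REP)]

key = [1, 0, 1, 1]
puf = [random.randint(0, 1) for _ in range(len(key) * REP)]
helper = enroll(puf, key)
noisy = [b ^ (1 if random.random() < 0.10 else 0) for b in puf]  # ~10% noisy cells
print(reconstruct(noisy, helper) == key)   # True in the vast majority of runs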

"What you want to prevent is that the generated key won't be recognized by the receiving party as a consequence of the 'noise' in the SRAM-PUF," Kusters explains. "It's alright if that happens one in a million times perhaps, preferably less often." The probability of error is smaller with a shorter key, but such a key is also easier to guess for people with bad intentions. "I've searched for the longest reliable key, given a certain amount of noise in the measurement. It helps if you store extra information about the SRAM-PUF, but that must not be of use to a potential attacker. My thesis is an analysis of how you can reach the optimal result in different situations with that extra information."

Originally posted here.


 
Read more…

Can AI Replace Firmware?

Scott Rosenthal and I go back about a thousand years; we've worked together, helped midwife the embedded field into being, had some amazing sailing adventures, and recently took a jaunt to the Azores just for the heck of it. Our sons are both big data people; their physics PhDs were perfect entrees into that field, and both now work in the field of artificial intelligence.

At lunch recently we were talking about embedded systems and AI, and Scott posed a thought that has been rattling around in my head since. Could AI replace firmware?

Firmware is a huge problem for our industry. It's hideously expensive. Only highly-skilled people can create it, and there are too few of us.

What if an AI engine of some sort could be dumped into a microcontroller and the "software" then created by training that AI? If that were possible - and that's a big "if" - then it might be possible to achieve what was hoped for when COBOL was invented: programmers would no longer be needed as domain experts could do the work. That didn't pan out for COBOL; the industry learned that accountants couldn't code. Though the language was much more friendly than the assembly it replaced, it still required serious development skills.

But with AI, could a domain expert train an inference engine?

Consider a robot: a "home economics" major could create scenarios of stacking dishes from a dishwasher. Maybe these would be in the form of videos, which were then fed to the AI engine as it tuned the weighting coefficients to achieve what the home ec expert deems worthy goals.

My first objection to this idea was that these sorts of systems have physical constraints. With firmware I'd write code to sample limit switches so the motors would turn off if they hit an end-of-motion extreme. During training, an AI-based system would try to drive the motors into all kinds of crazy positions, banging destructively into the stops. But think how a child learns: a parent encourages experimentation but prevents the youngster from self-harm. Maybe that's the role of the future developer training an AI. Or perhaps the training will be done on a simulator of some sort where nothing can go horribly wrong.
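
For comparison, the kind of guard code described above might look like the following C sketch. The pin numbers and the read_pin(), motor_stop(), and motor_drive() helpers are made-up placeholders for whatever GPIO and motor-control API the actual microcontroller provides.

#include <stdbool.h>

#define LIMIT_SWITCH_MIN 2            /* assumed GPIO pin numbers */
#define LIMIT_SWITCH_MAX 3

extern bool read_pin(int pin);        /* platform-specific GPIO read     */
extern void motor_stop(void);         /* platform-specific motor control */
extern void motor_drive(int speed);

void motion_control_step(int requested_speed)
{
    /* Hard safety constraint, enforced every control cycle: never drive
     * past a closed limit switch, no matter what the higher-level logic
     * (or a trained model) asks for. */
    if ((requested_speed > 0 && read_pin(LIMIT_SWITCH_MAX)) ||
        (requested_speed < 0 && read_pin(LIMIT_SWITCH_MIN))) {
        motor_stop();
        return;
    }
    motor_drive(requested_speed);
}

Whether the motion commands come from hand-written firmware or a trained model, a thin layer like this could keep the hardware inside its physical limits.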

Taking this further, a domain expert could define the desired inputs and outputs, and then a poorly-paid person could do the actual training. CEOs will love that. With that model, a strange parallel emerges to computation a century ago: before the computer age, "computers" were people doing simple math to create tables of logs, trig, ballistics, etc. A room full of them labored at a problem. They weren't particularly skilled, didn't make much, but did the rote work under the direction of one master. Maybe AI trainers will be somewhat like that.

Just as we outsource clothing manufacturing to Bangladesh, I could see training, which is basically grunt work, being sent overseas as well.

I'm not wild about this idea as it means we'd have an IoT of idiots: billions of AI-powered machines where no one really knows how they work. They've been well-trained but what happens when there's a corner case?

And most of the AI literature I read suggests that inference successes of 97% or so are the norm. That might be fine for classifying faces, but a 3% failure rate of a safety-critical system is a disaster. And the same rate for less-critical systems like factory controllers would also be completely unacceptable.

But the idea is intriguing.

Original post can be viewed here

Feel free to email me with comments.


Read more…

Theoretical Embedded Linux Requirements

Hardware

SoC

A System on Chip (SoC) is essentially an integrated circuit that integrates an entire computer system onto a single platform. It combines the CPU with the other components it needs to perform and execute its functions. It is in charge of using the other hardware and running your software. The main advantages of an SoC are lower latency and power savings.

It is made of various building blocks:

  • Core + Caches + MMU – An SoC has a processor at its core, which largely defines its capabilities and is usually the main thing considered when choosing an SoC. Modern SoCs normally have multiple processor cores (e.g. an ARM Cortex-A9), possibly assisted by a SIMD co-processor such as NEON.
  • Internal RAM – IRAM is very high-speed SRAM located alongside the CPU. It acts much like a CPU cache and is generally very small. It is used in the first stage of the boot sequence.
  • Peripherals – These can range from a simple ADC or DSP to a Graphics Processing Unit, connected to the core via some bus. A low-power/real-time co-processor can assist the main core with real-time tasks or handle low-power states. Examples of such IP cores are USB, PCI-E, SGX, etc.

External RAM

An SoC uses RAM to store temporary data during and after bootstrap. It is the memory an embedded system uses during regular operation.

Non-Volatile Memory

In an embedded system or single-board computer, this is typically an SD card; in other cases, it can be NAND, NOR, or SPI data flash memory. It stores all the software components needed for the system to work and is the source from which the SoC reads them.

External Peripherals

An SoC must have external interfaces for standard communication protocols such as USB, Ethernet, and HDMI. It may also include wireless interfaces such as Wi-Fi and Bluetooth.

Software


First, we introduce the boot chain, which is the series of actions that happens when an SoC is powered up.

Boot ROM: This is a piece of code stored in ROM that is executed by the booting core when it is powered on. It configures the SoC so that it can execute applications. The configuration performed by the Boot ROM includes initialization of the core's registers and stack pointer, enabling of caches and line buffers, programming of the interrupt service routines, and clock configuration.

Boot ROM also implements a Boot Assist Module (BAM) for downloading an application image from external memories using interfaces like Ethernet, SD/MMC, USB, CAN, UART, etc.

1st stage bootloader

The first-stage bootloader performs the following steps:

  • Setup the memory segments and stack used by the bootloader code
  • Reset the disk system
  • Display a string “Loading OS…”
  • Find the 2nd stage boot loader in the FAT directory
  • Read the 2nd stage boot loader image into memory at 1000:0000
  • Transfer control to the second-stage bootloader

The first-stage bootloader is copied by the Boot ROM into the SoC's internal RAM, so it must be tiny enough to fit in that memory, usually well under 100 kB. It initializes the external RAM and the SoC's external memory interface, as well as other peripherals that may be of interest (for example, it disables watchdog timers). Once done, it executes the next stage, which, depending on the context, may be called MLO, SPL, or something else.
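
As a rough illustration, the first stage's responsibilities can be outlined in C as below. This is not real SPL/MLO code; every helper function and the load address are hypothetical stand-ins for SoC-specific routines.

#include <stdint.h>

extern void watchdog_disable(void);           /* keep the watchdog from resetting us */
extern void dram_init(void);                  /* bring up external RAM               */
extern void console_puts(const char *s);      /* minimal UART output                 */
extern void load_next_stage(uint32_t addr);   /* copy 2nd stage from SD/MMC or flash */
extern void jump_to(uint32_t addr);           /* hand over control                   */

#define NEXT_STAGE_LOAD_ADDR 0x80000000u      /* assumed DRAM load address           */

void spl_main(void)
{
    watchdog_disable();
    dram_init();                              /* the ROM left us running in internal SRAM */
    console_puts("Loading OS...\r\n");
    load_next_stage(NEXT_STAGE_LOAD_ADDR);
    jump_to(NEXT_STAGE_LOAD_ADDR);
}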

2nd stage bootloader

This is the main bootloader; it can be ten times bigger than the first stage, and it completes the initialization of the relevant peripherals. It performs the following steps:

  • Copy the boot sector to a local memory area
  • Find kernel image in the FAT directory
  • Read kernel image in memory at 2000:0000
  • Reset the disk system
  • Enable the A20 line
  • Setup interrupt descriptor table at 0000:0000
  • Setup the global descriptor table at 0000:0800
  • Load the descriptor tables into the CPU
  • Switch to protected mode
  • Clear the prefetch queue
  • Setup protected mode memory segments and stack for use by the kernel code
  • Transfer control to the kernel code using a long jump

Linux Kernel

The Linux kernel is the main component of a Linux OS and is the core interface between hardware and processes, managing resources as efficiently as possible. The kernel performs the following jobs:

  • Memory management: Keep track of memory, how much is used to store what, and where
  • Process management: Determine which processes can use the processor, when, and for how long
  • Device drivers: Act as an interpreter between the hardware and the processes
  • System calls and security: Receive requests for service from processes (a minimal example follows this list)
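
As a small illustration of that last point, the following C program asks the kernel for services purely through system calls: open(), write(), and close() each trap into the kernel, which checks permissions and drives the underlying storage on the process's behalf. The file path is only an example.

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    /* Each of these library calls ends up as a system call into the kernel. */
    int fd = open("/tmp/demo.txt", O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (fd < 0) {
        perror("open");
        return 1;
    }

    const char msg[] = "written via a system call\n";
    if (write(fd, msg, sizeof msg - 1) < 0)
        perror("write");

    close(fd);
    return 0;
}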

To put the kernel in context, a Linux machine can be thought of as having 3 layers:

  • The hardware: The physical machine—the base of the system, made up of memory (RAM) and the processor (CPU), as well as input/output (I/O) devices such as storage, networking, and graphics.
  • The Linux kernel: The core of the OS. It is software residing in memory that tells the CPU what to do.
  • User processes: These are the running programs that the kernel manages; collectively, they make up user space. The kernel allows processes and servers to communicate with each other.

Init and rootfs – init is the first non-kernel task to be run, and it has PID 1. It initializes everything needed to use the system. In production embedded systems, it also starts the main application; in such systems, init is either BusyBox or a custom-crafted application.
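
To make that concrete, here is a bare-bones sketch of what a custom PID-1 program could look like. The mounted filesystems and the /bin/sh path are assumptions, and a real init (BusyBox, for instance) does considerably more.

#include <sys/mount.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    /* The kernel starts this program as PID 1 once the root filesystem is mounted. */
    mount("proc",  "/proc", "proc",  0, NULL);
    mount("sysfs", "/sys",  "sysfs", 0, NULL);

    if (fork() == 0) {
        /* Child: the "main application" of the embedded system (a shell here). */
        execl("/bin/sh", "sh", (char *)NULL);
        _exit(1);                 /* only reached if exec fails */
    }

    /* PID 1 must never exit; it also reaps every orphaned child. */
    for (;;)
        wait(NULL);
}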

View original post here

Read more…