


By Sanjay Tripathi, Kevin Egge, and Shane Kehoe

Each Industrial Revolution has been catalyzed by the convergence of technologies from multiple domains. Industry 4.0 is no different.

Machines were first introduced into manual manufacturing processes between 1760 and 1820. But it was the concurrent introduction of means to power those machines that led to the First Industrial Revolution. An example is the first commercially viable textile power loom, introduced by Edmund Cartwright in England. It was water-powered at first, but within two short years water-powered looms were replaced by looms driven by the steam engines of James Watt. The relatively compact steam engines allowed textile looms to be deployed at many sites, enabling people to be employed in factories.

Multiple innovations such as new manufacturing methods, electricity, steel, and machine tools ushered in the era of mass manufacturing and the Second Industrial Revolution. Henry Ford’s River Rouge Complex in Michigan, completed in 1928, deployed these modern inventions and was the largest integrated factory in the world at that time. The era of mass manufacturing subsequently brought about an explosion in the consumption of goods by households.

The Third Industrial Revolution improved Automation and Controls across many industries through the use of Programmable Logic Controllers (PLCs). PLCs were first introduced by Modicon in 1969. PLC-based automation and controls were introduced to a mostly mechanical world, and helped improve yields and decrease manufacturing costs. This revolution helped provide cheaper products.

Fast forward to the Industry 4.0 revolution, made possible by the synergistic combination of expertise from the worlds of operational technology (OT) and information technology (IT). The current revolution is bringing about intelligent, interconnected, and autonomous manufacturing equipment and systems by augmenting the deep domain expertise of OT companies with IT technologies such as artificial intelligence (AI), big data, cloud computing, and ubiquitous connectivity.

The widespread use of open protocols across heterogeneous equipment makes it feasible to optimize horizontally across previously disjointed processes. In addition, owner/operators of assets can more easily link the shop floor to the top floor. Connections across multiple layers of the ISA-95/Purdue Model stack provide greater vertical visibility and added ability to optimize processes.

The increased integration brings together both OT data (from sensors, PLCs, DCS, SCADA systems) and IT data (from MES, ERP systems). However, this integration has different impacts on different functions such as operations, engineering, quality, reliability, and maintenance.

To learn more about how the integration positively impacts the organization, read the next installment in this series to see how you can bridge the gap between OT and IT teams to improve production resilience.

Originally posted here.

Read more…

Then it seemed that overnight, millions of workers worldwide were told to isolate and work from home as best as they could. Businesses were suddenly forced to enable remote access for hundreds or thousands of users, all at once, from anywhere across the globe. Many companies that already offered VPN services to a small group of remote workers scurried to extend those capabilities to the much larger workforce sequestering at home. It was a decision made in haste out of necessity, but now it’s time to consider, is VPN the best remote access technology for the enterprise, or can other technologies provide a better long-term solution?

Long-term Remote Access Could Be the Norm for Some Time

Some knowledge workers are trickling back to their actual offices, but many more are still at home and will be for some time. Global Workplace Analytics estimates that 25-30% of the workforce will still be working from home multiple days a week by the end of 2021. Others may never return to an official office, opting to remain a work-from-home (WFH) employee for good.

Consequently, enterprises need to find a remote access solution that gives home-based workers a similar experience as they would have in the office, including ease of use, good performance, and a fully secure network access experience. What’s more, the solution must be cost effective and easy to administer without the need to add more technical staff members.

VPNs are certainly one option, but not the only one. Other choices include appliance-based SD-WAN and SASE. Let’s have a look at each approach.

VPNs Weren’t Designed to Support an Entire Workforce

While VPNs are a useful remote access solution for a small portion of the workforce, they are an inefficient technology for giving remote access to a very large number of workers. VPNs are designed for point-to-point connectivity, so each secure connection between two points – presumably a remote worker and a network access server (NAS) in a datacenter – requires its own VPN link. Each NAS has a finite capacity for simultaneous users, so for a large remote user base, some serious infrastructure may be needed in the datacenter.

Performance can be an issue. With a VPN, all communication between the user and the VPN is encrypted. The encryption process takes time, and depending on the type of encryption used, this may add noticeable latency to Internet communications. More important, however, is the latency added when a remote user needs access to IaaS and SaaS applications and services. The traffic path is convoluted because it must travel between the end user and the NAS before then going out to the cloud, and vice versa on the way back.
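
To see why that backhaul matters, here is a rough back-of-the-envelope comparison in Python. Every figure is an illustrative assumption for a hypothetical topology, not a measurement; substitute round-trip times from your own network.

```python
# Illustrative round-trip latency: direct-to-cloud vs. backhauled through a
# datacenter VPN concentrator. All numbers are assumed values for a
# hypothetical user, NAS, and SaaS region; replace with measured RTTs.
rtt_user_to_cloud = 40    # ms, direct path: user -> SaaS
rtt_user_to_nas = 30      # ms, user -> datacenter NAS over the VPN
rtt_nas_to_cloud = 45     # ms, datacenter -> SaaS
crypto_overhead = 2       # ms, assumed encrypt/decrypt cost per round trip

direct = rtt_user_to_cloud
backhauled = rtt_user_to_nas + rtt_nas_to_cloud + crypto_overhead
print(f"direct: {direct} ms, via VPN: {backhauled} ms "
      f"(+{backhauled - direct} ms per round trip)")
```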

An important issue with VPNs is that they provide overly broad access to the entire network without the option of controlling granular user access to specific resources. Stolen VPN credentials have been implicated in several high-profile data breaches. By using legitimate credentials and connecting through a VPN, attackers were able to infiltrate and move freely through targeted company networks. What’s more, there is no scrutiny of the security posture of the connecting device, which could allow malware to enter the network via insecure user devices.

SD-WAN Brings Intelligence into Routing Remote Users’ Traffic

Another option for providing remote access for home-based workers is appliance-based SD-WAN. It brings a level of intelligence to the connectivity that VPNs don’t have. Lee Doyle, principal analyst with Doyle Research, outlines the benefits of using SD-WAN to connect home office users to their enterprise network:

  • Prioritization for mission-critical and latency-sensitive applications
  • Accelerated access to cloud-based services
  • Enhanced security via encryption, VPNs, firewalls and integration with cloud-based security
  • Centralized management tools for IT administrators

One thing to consider about appliance-based SD-WAN is that it’s primarily designed for branch office connectivity—though it can accommodate individual users at home as well. However, if a company isn’t already using SD-WAN, this isn’t a technology that is easy to implement and set up for hundreds or thousands of home-based users. What’s more, a significant investment must be made in the various communication and security appliances.

SASE Provides a Simpler, More Secure, Easily Scalable Solution

Cato’s Secure Access Service Edge (or SASE) platform provides a great alternative to VPN for remote access by many simultaneous workers. The platform offers scalable access, optimized connectivity, and integrated threat prevention that are needed to support continuous large-scale remote access.

Companies that enable WFH using Cato’s platform can scale quickly to any number of remote users with ease. There is no need to set up regional hubs or VPN concentrators. The SASE service is built on top of dozens of globally distributed Points of Presence (PoPs) maintained by Cato to deliver a wide range of security and networking services close to all locations and users. The complexity of scaling is all hidden in the Cato-provided PoPs, so there is no infrastructure for the organization to purchase, configure or deploy. Giving end users remote access is as simple as installing a client agent on the user’s device, or by providing clientless access to specific applications via a secure browser.

Cato’s SASE platform employs Zero Trust Network Access in granting users access to the specific resources and applications they need to use. This granular-level security is part of the identity-driven approach to network access that SASE demands. Since all traffic passes through a full network security stack built into the SASE service, multi-factor authentication, full access control, and threat prevention are applied to traffic from remote users. All processing is done within the PoP closest to the users while enforcing all corporate network and security policies. This eliminates the “trombone effect” associated with forcing traffic to specific security choke points on a network. Further, admins have consistent visibility and control of all traffic throughout the enterprise WAN.

SASE Supports WFH in the Short-term and Long-term

While some workers are venturing back to their offices, many more are still working from home—and may work from home permanently. The Cato SASE platform is the ideal way to give them access to their usual network environment without forcing them to go through insecure and inconvenient VPNs.

Originally posted here

Read more…

Today the world is obsessed with the IoT, as if this is a new concept. We've been building the IoT for decades, but it was only recently some marketing "genius" came up with the new buzz-acronym.

Before there was an IoT, before there was an Internet, many of us were busy networking. For the Internet itself was a (brilliant) extension of what was already going on in the industry.

My first experience with networking was in 1971 at the University of Maryland. The school had a new computer, a $10 million Univac 1108 mainframe. This was a massive beast that occupied most of the first floor of a building. A dual-processor machine, it was transistorized, though the control console did have some ICs. Rows of big tape drives mirrored the layman's idea of computers in those days. Many dishwasher-sized disk drives were placed around the floor, and printers, card readers and other equipment were crammed into every corner. Two Fastrand drum memories, each consisting of a pair of six-foot-long counterrotating drums, stored a whopping 90 MB apiece. Through a window you could watch the heads bounce around.

The machine was networked. It had a 300 baud modem with which it could contact computers at other universities. A primitive email system let users create mail which was queued till nightfall. Then, when demands on the machine were small, it would call the appropriate remote computer and forward mail. The system operated somewhat like today's "hot potato" packets, where the message might get delivered to the easiest machine available, which would then attempt further forwarding. It could take a week to get an email, but at least one saved the $0.08 stamp that the USPS charged.

The system was too slow to be useful. After college I lost my email account but didn't miss it at all.

By the late 70s many of us had our own computers. Mine was a home-made CP/M machine with a Z80 processor and a small TV set as a low-res monitor. Around this time Compuserve came along and I, like so many others, got an account with them. Among other features, users had email addresses. Pretty soon it was common to dial into their machines over a 300 baud modem and exchange email and files. Eventually Compuserve became so ubiquitous that millions were connected, and at my tools business during the 1980s it was common to provide support via this email. The CP/M machine gave way to a succession of PCs, and modems ramped up to 57K baud.

My tools business expanded rapidly and soon we had a number of employees. Sneakernet was getting less efficient so we installed an Arcnet network using Windows 3.11. That morphed into Ethernet connections, though the cursing from networking problems multiplied about as fast as the data transfers. Windows was just terrible at maintaining reliable connectivity.

In 1992 Mike Lee, a friend from my Boys Night Out beer/politics/sailing/great friends group, which still meets weekly (though lately virtually), came by the office with his laptop. "You have GOT to see this," he intoned, and he showed me the World Wide Web. There wasn't much to see as there were few sites. But the promise was shockingly clear. I was stunned.

The tools business had been doing well. Within a month we spent $100k on computers, modems and the like and had a new business: Softaid Internet Services. SIS was one of Maryland's first ISPs and grew quickly to several thousand customers. We had a T1 connection to MAE-EAST in the DC area which gave us a 1.5 Mb/s link… for $5000/month. Though a few customers had ISDN connections to us, most were dialup, and our modem shelf grew to over 100 units with many big fans keeping the things cool.

The computers all ran BSD Unix, which was my first intro to that OS.

I was only a few months back from a failed attempt to singlehand my sailboat across the Atlantic and had written a book-length account of that trip. I hastily created a web page of that book to learn about using the web. It is still online and has been read several million times in the intervening years. We put up a site for the tools business which eventually became our prime marketing arm.

The SIS customers were sometimes, well, "interesting." There was the one who claimed to be a computer expert, but who tried to use the mouse by waving it around over the desk. Many had no idea how to connect a modem. Others complained about our service because it dropped out when mom would pick up the phone to make a call over the modem's beeping. A lot of handholding and training was required.

The logs showed a shocking (to me at the time) amount of porn consumption. Over lunch an industry pundit explained how porn drove all media, from the earliest introduction of printing hundreds of years earlier.

The woman who ran the ISP was from India. She was delightful and had a wonderful marriage. She later told me it had been arranged; they met on their wedding day. She came from a remote and poor village and had had no exposure to computers, or electricity, till emigrating to the USA.

Meanwhile many of our tools customers were building networking equipment. We worked closely with many of them and often had big routers, switches and the like onsite that our engineers were working on. We worked on a lot of what we'd now call IoT gear: sensors et al connected to the net via a profusion of interfaces.

I sold both the tools and Internet businesses in 1997, but by then the web and Internet were old stories.

Today, like so many of us, I have a fast (250 Mb/s) and cheap connection into the house with four wireless links and multiple computers chattering to each other. Where in 1992 the web was incredibly novel and truly lacking in useful functionality, now I can't imagine being deprived of it. Remember travel agents? Ordering things over the phone (a phone that had a physical wire connecting it to Ma Bell)? Using 15 volumes of an encyclopedia? Physically mailing stuff to each other?

As one gets older the years spin by like microseconds, but it is amazing to stop and consider just how much this world has changed. My great grandfather lived on a farm in a world that changed slowly; he finally got electricity in his last year of life. His daughter didn't have access to a telephone till later in life, and my dad designed spacecraft on vellum and starched linen using a slide rule. My son once saw a typewriter and asked me what it was; I mumbled that it was a predecessor of Microsoft Word.

That he understood. I didn't have the heart to try and explain carbon paper.

Originally posted HERE.

Read more…

When I think about the things that held the planet together in 2020, it was digital experiences delivered over wireless connectivity that made remote things local.

While heroes like doctors, nurses, first responders, teachers, and other essential personnel bore the brunt of the COVID-19 response, billions of people around the world found themselves cut off from society. In order to keep people safe, we were physically isolated from each other. Far beyond the six feet of social distancing, most of humanity weathered the storm from their homes.

And then little by little, old things we took for granted, combined with new things many had never heard of, pulled the world together. Let’s take a look at the technologies and trends that made the biggest impact in 2020 and where they’re headed in 2021:

The Internet

The global Internet infrastructure from which everything else is built is an undeniable hero of the pandemic. This highly-distributed network designed to withstand a nuclear attack performed admirably as usage by people, machines, critical infrastructure, hospitals, and businesses skyrocketed. Like the air we breathe, this primary facilitator of connected, digital experiences is indispensable to our modern society. Unfortunately, the Internet is also home to a growing cyberwar and security will be the biggest concern as we move into 2021 and beyond. It goes without saying that the Internet is one of the world’s most critical utilities along with water, electricity, and the farm-to-table supply chain of food.

Wireless Connectivity

People are mobile and they stay connected through their smartphones, tablets, in cars and airplanes, on laptops, and other devices. Just like the Internet, the cellular infrastructure has remained exceptionally resilient to enable communications and digital experiences delivered via native apps and the web. Indoor wireless connectivity continues to be dominated by WiFi at home and all those empty offices. Moving into 2021, the continued rollout of 5G around the world will give cellular endpoints dramatic increases in data capacity and WiFi-like speeds. Additionally, private 5G networks will challenge WiFi as a formidable indoor option, but WiFi 6E with increased capacity and speed won’t give up without a fight. All of these developments are good for consumers who need to stay connected from anywhere like never before.

Web Conferencing

With many people stuck at home in 2020, web conferencing technology took the place of traveling to other locations to meet people or receive education. This technology isn’t new and includes familiar players like GoToMeeting, Skype, WebEx, Google Hangouts/Meet, BlueJeans, FaceTime, and others. Before COVID, these platforms enjoyed success, but most people preferred to fly on airplanes to meet customers and attend conferences while students hopped on the bus to go to school. In 2020, “necessity is the mother of invention” took hold and the use of Zoom and Teams skyrocketed as airplanes sat on the ground while business offices and schools remained empty. These two platforms further increased their stickiness by increasing the number of visible people and adding features like breakout rooms to meet the demands of businesses, virtual conference organizers, and school teachers. Despite the rollout of the vaccine, COVID won’t be extinguished overnight and these platforms will remain strong through the first half of 2021 as organizations rethink where and when people work and learn. There are way too many players in this space, so look for some consolidation.

E-Commerce

“Stay at home” orders and closed businesses gave e-commerce platforms a dramatic boost in 2020 as they took the place of shopping at stores or going to malls. Amazon soared to even higher heights, Walmart upped their game, Etsy brought the artsy, and thousands of Shopify sites delivered the goods. Speaking of delivery, the empty city streets became home to fleets of FedEx, Amazon, UPS, and DHL trucks bringing packages to your front doorstep. Many retail employees traded in working at customer-facing stores for working in distribution centers, as long as they could outperform robots. Even though people are looking forward to hanging out at malls in 2021, the e-commerce, distribution center, delivery truck trinity is here to stay. This ball was already in motion and got a rocket boost from COVID. This market will stay hot in the first half of 2021 and then cool a bit in the second half.

Ghost Kitchens

The COVID pandemic really took a toll on restaurants in 2020, with many of them going out of business permanently. Those that survived had to pivot to digital and other ways of doing business. High-end steakhouses started making burgers on grills in the parking lot, while takeout pizzerias discovered they finally had the best business model. Having a drive-thru lane was definitely one of the keys to success in a world without waiters, busboys, and hosts. “Front of house” was shut down, but the “back of house” still had a pulse. Adding mobile web and native apps that allowed customers to easily order from operating “ghost kitchens” and pay with credit cards or Apple/Google/Samsung Pay enabled many restaurants to survive. A combination of curbside pickup and delivery from the likes of DoorDash, Uber Eats, Postmates, Instacart and Grubhub made this business model work. A surge in digital marketing also took place where many restaurants learned the importance of maintaining a relationship with their loyal customers via connected mobile devices. For the most part, 2021 has restaurateurs hoping for 100% in-person dining, but a new business model that looks a lot like catering + digital + physical delivery is something that has legs.

The Internet of Things

At its very essence, IoT is all about remotely knowing the state of a device or environmental system along with being able to remotely control some of those machines. COVID forced people to work, learn, and meet remotely and this same trend applied to the industrial world. The need to remotely operate industrial equipment or an entire “lights out” factory became an urgent imperative in order to keep workers safe. This is yet another case where the pandemic dramatically accelerated digital transformation. Connecting everything via APIs, modeling entities as digital twins, and having software bots bring everything to life with analytics has become an ROI game-changer for companies trying to survive in a free-falling economy. Despite massive employee layoffs and furloughs, jobs and tasks still have to be accomplished, and business leaders will look to IoT-fueled automation to keep their companies running and drive economic gains in 2021.

Streaming Entertainment

Closed movie theaters, football stadiums, bowling alleys, and other sources of entertainment left most people sitting at home watching TV in 2020. This turned into a dream come true for streaming entertainment companies like Netflix, Apple TV+, Disney+, HBO Max, Hulu, Amazon Prime Video, YouTube TV, and others. That said, Quibi and Facebook Watch didn’t make it. The idea of binge-watching shows during the weekend turned into binge-watching every season of every show almost every day. Delivering all these streams over the Internet via apps has made it easy to get hooked. Multiplayer video games fall in this category as well and represent an even larger market than the film industry. Gamers socially distanced as they played each other from their locked-down homes. The rise of cloud gaming combined with the rollout of low-latency 5G and Edge computing will give gamers true mobility in 2021. On the other hand, the video streaming market has too many players and looks ripe for consolidation in 2021 as people escape the living room once the vaccine is broadly deployed.

Healthcare

With doctors and nurses working around the clock as hospitals and clinics were stretched to the limit, it became increasingly difficult for non-COVID patients to receive the healthcare they needed. This unfortunate situation gave tele-medicine the shot in the arm (no pun intended) it needed. The combination of healthcare professionals delivering healthcare digitally over widespread connectivity helped those in need. This was especially important in rural areas that lacked the healthcare capacity of cities. Concurrently, the Internet of Things is making deeper inroads into delivering the health of a person to healthcare professionals via wearable technology. Connected healthcare has a bright future that will accelerate in 2021 as high-bandwidth 5G provides coverage to more of the population to facilitate virtual visits to the doctor from anywhere.

Working and Living

As companies and governments told their employees to work from home, it gave people time to rethink their living and working situation. Lots of people living in previously hip, urban, high-rise buildings found themselves residing in not-so-cool, hollowed-out ghost towns comprised of boarded-up windows and closed bars and cafés. Others began to question why they were living in areas with expensive real estate and high taxes when they no longer had to be close to the office. This led to a 2020 COVID exodus out of pricey apartments/condos downtown to cheaper homes in distant suburbs as well as the move from pricey areas like Silicon Valley to cheaper destinations like Texas. Since you were stuck in your home, having a larger house with a home office, fast broadband, and a back yard became the most important thing. Looking ahead to 2021, a hybrid model of work-from-home plus occasionally going into the office is here to stay as employees will no longer tolerate sitting in traffic two hours a day just to sit in a cubicle in a skyscraper. The digital transformation of how and where we work has truly accelerated.

Data and Advanced Analytics

Data has shown itself to be one of the world’s most important assets during the time of COVID. Petabytes of data have continuously streamed in from all over the world, letting us know the number of cases, the growth or decline of infections, hospitalizations, contact-tracing, free ICU beds, temperature checks, deaths, and hotspots of infection. Some of this data has been reported manually while lots of other sources are fully automated from machines. Capturing, storing, organizing, modeling and analyzing this big data has elevated the importance of cloud and edge computing, global-scale databases, advanced analytics software, and machine learning. This is a trend that was already taking place in business and now has a giant spotlight on it due to its global importance. There’s no stopping the data + advanced analytics juggernaut in 2021 and beyond.

Conclusion

2020 was one of the worst years in human history and the loss of life was just heartbreaking. People, businesses, and our education system had to become resourceful to survive. This resourcefulness amplified the importance of delivering connected, digital experiences to make previously remote things into local ones. Cheers to 2021 and the hope for a brighter day for all of humanity.

Read more…

By Michele Pelino

The COVID-19 pandemic drove businesses and employees to become more reliant on technology for both professional and personal purposes. In 2021, demand for new internet-of-things (IoT) applications, technologies, and solutions will be driven by connected healthcare, smart offices, remote asset monitoring, and location services, all powered by a growing diversity of networking technologies.

In 2021, we predict that:

  • Network connectivity chaos will reign. Technology leaders will be inundated by an array of wireless connectivity options. Forrester expects that implementation of 5G and Wi-Fi technologies will decline from 2020 levels as organizations sort through market chaos. For long-distance connectivity, low-earth-orbit satellites now provide a complementary option, with more than 400 Starlink satellites delivering satellite connectivity today. We expect interest in satellite and other lower-power networking technologies to increase by 20% in the coming year.
  • Connected device makers will double down on healthcare use cases. Many people stayed at home in 2020, leaving chronic conditions unmanaged, cancers undetected, and preventable conditions unnoticed. In 2021, proactive engagement using wearables and sensors to detect patients’ health at home will surge. Consumer interest in digital health devices will accelerate as individuals appreciate the convenience of at-home monitoring, insight into their health, and the reduced cost of connected health devices.
  • Smart office initiatives will drive employee-experience transformation. In 2021, some firms will ditch expensive corporate real estate driven by the COVID-19 crisis. However, we expect at least 80% of firms to develop comprehensive on-premises return-to-work office strategies that include IoT applications to enhance employee safety and improve resource efficiency such as smart lighting, energy and environmental monitoring, or sensor-enabled space utilization and activity monitoring in high traffic areas.*
  • The near ubiquity of connected machines will finally disrupt traditional business. Manufacturers, distributors, utilities, and pharma firms switched to remote operations in 2020 and began connecting previously disconnected assets. This connected-asset approach increased reliance on remote experts to address repairs without protracted downtime and expensive travel. In 2021, field service firms and industrial OEMs will rush to keep up with customer demand for more connected assets and machines.
  • Consumer and employee location data will be core to convenience. The COVID-19 pandemic elevated the importance location plays in delivering convenient customer and employee experiences. In 2021, brands must utilize location to generate convenience for consumers or employees with virtual queues, curbside pickup, and checking in for reservations. They will depend on technology partners to help use location data, as well as a third-party source of location trusted and controlled by consumers.

* Proactive firms, including Atea, have extended IoT investments to enhance employee experience and productivity by enabling employees to access a mobile app that uses data collected from light-fixture sensors to locate open desks and conference rooms. Employees can modify light and temperature settings according to personal preferences, and the system adjusts light color and intensity to better align with employees’ circadian rhythms to aid in concentration and energy levels. See the Forrester report “Rethink Your Smart Office Strategy.”

Originally posted HERE.

Read more…

By Patty Medberry

After 2020’s twists and turns, here’s hoping that 2021 ushers in a restored sense of “normal.” In thinking about what the upcoming year might bring for industrial IoT, three key trends emerge.

Trend #1: Securing operational technology (OT)

 IT will take a bolder posture to secure OT environments.

Cyber risks in industrial environments will continue to grow, causing IT to take bolder steps to secure the OT network in 2021. The CISO and IT teams have accountability for cybersecurity across the enterprise. But often they do not have visibility into the OT network. Many OT networks use traditional measures like air gapping or an industrial demilitarized zone to protect against attacks. But these solutions are rife with backdoors. For example, third-party technicians and other vendors often have remote access to update systems, machines and devices. With increasing pressure from board members and government regulators to manage IoT/OT security risks, and to protect the business itself, the CISO and IT will need to do more.

Success requires OT’s help. IT cybersecurity practices that work in the enterprise are not always appropriate for industrial environments. What’s more, IT doesn’t have the expertise or insight into operational and process control technology. A simple patch could bring down production (and revenues).

Bottom line? Organizations will need solutions that strengthen cybersecurity while meeting IT and OT needs. For IT, that means visibility and control across their own environment to the OT network. For OT, it means security solutions that allow them to respond to anomalies while keeping production humming.

Trend #2: Remote and autonomous operations

The need for operational resiliency will accelerate the deployment of remote and autonomous operations – driving a new class of networking.

The impact of changes brought on in 2020 is driving organizations to increasingly use IoT technologies for operational resiliency. After all, IoT helps keep a business up and running when people cannot be on the ground. It also helps improve safety and efficiencies by preventing unnecessary site visits and reducing employee movement throughout facilities.

In 2021, we will see more deployments aimed at sophisticated remote operations. These will go well beyond remote monitoring. They will include autonomous operational controls for select parts of a process and will be remotely enabled for other parts. Also, deployments will increasingly move toward full autonomy, eliminating the need for humans to be present locally or remotely. And more and more, AI will be used for dynamic optimization and self-healing, in use cases such as:

  • autonomous guided vehicles for picking and packing, material handling, and autonomous container applications across manufacturing, warehouses and ports
  • increased automation of the distribution grid
  • autonomous haul trucks for mining applications
  • computer-based train control for rail and mass transit

All these use cases require data instantly and en masse, demanding a network that can support that data and deliver the speed required for analysis. This new class of industrial networking must provide the ability to handle more network bandwidth, offer zero-latency data, and support edge compute. It also needs security and scale to adapt quickly, ensuring the business is up and running – no matter what.

Trend #3: Managing multiple access technologies

Organizations will operate multiple-access technologies to achieve operational agility and flexibility.

While Ethernet has always been the foundation for connectivity in industrial IoT spaces, that connectivity is quickly expanding to wireless. Wireless helps reduce the pain of physical cabling and provides the flexibility and agility to upgrade, deploy and reconfigure the network with less operational downtime. Newer wireless technologies like Wi-Fi 6 and 5G also power use cases not possible in the past (or possible only with wired connectivity).

As organizations expand their IoT deployments, the need to manage multiple access technologies will grow. Successful deployments will require the right connectivity for the use case; otherwise, costs, complexity and security risks increase. With wireless choices including Wi-Fi, LoRaWAN, Wi-SUN, public or private cellular, Bluetooth and more, organizations will need to determine the best technology for each use case.

Cisco’s recommendation: Build an access strategy to optimize costs and resources while ensuring security. Interactions between access technologies should deliver a secured and automated end-to-end IP infrastructure – and must avoid a “mishmash” leading to complexity and failed objectives.

As the end of 2020 fast approaches, I wish everyone a safe and healthy New Year. As you continue building and refining your plans for 2021, please consider how you can unleash these IoT network trends to reduce your cybersecurity risks and increase your operational resiliency. 

Originally posted HERE.

Read more…

New solar performance monitoring system has potential to become IoT of photovoltaics. Credit: Pexels

A new system for measuring solar performance over the long term in scalable photovoltaic systems, developed by Arizona State University researchers, represents a breakthrough in the cost and longevity of interconnected power delivery.

When solar cells are developed, they are "current-voltage" tested in the lab before they are deployed in panels and systems outdoors. Once installed outdoors, they aren't usually tested again unless the system undergoes major issues. The new test system, Suns-Voc, measures the system's voltage as a function of light intensity in the outdoor setting, enabling real-time measurements of performance and detailed diagnostics.

"Inside the lab, however, everything is controlled," explained Alexander Killam, an ASU electrical engineering doctoral student and graduate research associate. "Our research has developed a way to use Suns-Voc to measure solar panels' degradation once they are outdoors in the real world and affected by weather, temperature and humidity," he said.

Current photovoltaic modules are rated to last 25 years at 80 percent efficiency. The goal is to expand that time frame to 50 years or longer.

"This system of monitoring will give photovoltaic manufacturers and big utility installations the kind of data necessary to adjust designs to increase efficiency and lifespans," said Killam, the lead author of "Monitoring of Photovoltaic System Performance Using Outdoor Suns-Voc," for Joule.

For example, most techniques used to measure outdoor solar efficiency require you to disconnect from the power delivery mechanism. The new approach can automatically measure daily during sunrise and sunset without interfering with power delivery.
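
To make the idea concrete, here is a minimal Python sketch of the physics behind a Suns-Voc curve. It uses the textbook single-diode model, not the team's actual algorithm, and the parameter values (short-circuit current, saturation current, ideality factor) are illustrative assumptions.

```python
import math

K_B = 1.380649e-23      # Boltzmann constant, J/K
Q = 1.602176634e-19     # elementary charge, C

def open_circuit_voltage(suns, isc_1sun=9.5, i0=1e-10, n=1.0, temp_c=25.0):
    """Voc from the single-diode model at a given irradiance (in suns).

    isc_1sun: short-circuit current at 1 sun [A]; i0: diode saturation
    current [A]; n: ideality factor. All values are illustrative.
    """
    vt = n * K_B * (temp_c + 273.15) / Q   # thermal voltage, V
    iph = isc_1sun * suns                  # photocurrent scales with irradiance
    return vt * math.log(iph / i0 + 1.0)

# Each (suns, Voc) pair sampled as the sun rises or sets is one point on a
# Suns-Voc curve; a drift in Voc at a fixed irradiance over months is a
# signature of module degradation.
for suns in (0.1, 0.25, 0.5, 1.0):
    print(f"{suns:4.2f} suns -> Voc = {open_circuit_voltage(suns):.3f} V")
```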

"When we were developing photovoltaics 20 years ago, panels were expensive," said Stuart Bowden, an associate research professor who heads the silicon section of ASU's Solar Power Laboratory. "Now they are cheap enough that we don't have to worry about the cost of the panels. We are more interested in how they maintain their performance in different environments.

"A banker in Miami underwriting a photovoltaic system wants to know in dollars and cents how the system will perform in Miami and not in Phoenix, Arizona."

"The weather effects on photovoltaic systems in Arizona will be vastly different than those in Wisconsin or Louisiana," said Joseph Karas, co-author and materials science doctoral graduate now at the National Renewable Energy Lab. "The ability to collect data from a variety of climates and locations will support the development of universally effective solar cells and systems."

The research team was able to test its approach at ASU's Research Park facility, where the Solar Lab is primarily solar powered. For its next step, the lab is negotiating with a power plant in California that is looking to add a megawatt of silicon photovoltaics to its power profile.

The system, which can monitor reliability and lifespan remotely for larger, interconnected systems, will be a major breakthrough for the power industry.

"Most residential solar rooftop systems aren't owned by the homeowner, they are owned by a utility company or broker with a vested interest in monitoring photovoltaic efficiency," said Andre' Augusto, head of Silicon Heterojunction Research at ASU's Solar Power Laboratory and a co-author of the paper.

"Likewise, as developers of malls or even planned residential communities begin to incorporate solar power into their construction projects, the interest in monitoring at scale will increase, " Augusto said.

According to Bowden, it's all about the data, especially when it can be monitored automatically and remotely—data for the bankers, data for developers, and data for the utility providers.

If Bill Gates' smart city, planned about 30 miles from Phoenix in Buckeye, Ariz., uses the team's measurement technology, "It could become the IoT of Photovoltaics," said Bowden.

Originally posted HERE.

Read more…
Fig.1 Arrow Shield 96 Trusted Platform

Introduction

IoT product development crosses several domains of expertise, from embedded design to communication protocols and cloud computing. Because of this complexity, “end-to-end” or “edge-to-cloud” IoT security is becoming a challenging concept in the industry. Edge in many cases refers to the device as a single element in the edge-to-cloud chain. But the device must not be regarded as a monolith when security requirements are defined. Trust must first be established within the processing unit and propagated through several layers of the software stack before the device becomes a trusted end node. Securing the processor requires properly integrating multiple layers of security and using security features implemented in hardware. Embedded security expertise and experience are required to accomplish such tasks. It is very easy to put a lot of effort into implementing security for an IoT product and at the same time miss key use cases. A simpler way of narrowing down the definition of end-to-end security is to start by identifying the minimum set of business requirements.

Brand image, how a company’s customers perceive and value it, is one of the most valuable assets of any corporation. Two of the most important characteristics of an IoT device that can promote a positive brand image are resiliency and privacy. For resiliency, this might mean adding features that increase the device’s ability to self-recover from malfunctions or cyber-attacks. For privacy, this means protecting user information and data, but also the intellectual property (IP) invested in the product. Preventing exploitation through vectors such as product/device cloning and overproduction therefore becomes important. Another business driver is the overall cost of ownership for the product. Are there security-related features that can drive that cost down? We include here not just operational cost but also liabilities.

In this blog, we dive deeper into solutions that support these business requirements. We also discuss a demo we created in collaboration with our partners Sequitur Labs and Arrow to demonstrate a commercially available approach to solving several security use cases for IoT.

Security in depth – a methodical approach for securing connected products

IoT security must start with securing the device, so that data, data collection, and information processing can be trusted. Security must be applied in layers that facilitate trust propagation from the silicon hardware root of trust (HWRoT) to the public/private cloud or the application provider back-end. Furthermore, the connected paradigm provides the opportunity to delegate access control and security monitoring to the cloud, outside of the device. Narrowing down further, device security must be rooted in fundamental capabilities of the processor or system on chip, and must consider all three stages of the device lifecycle: inception (manufacturing, first boot), operation, and decommissioning.

In a nutshell we should consider the following layers for securing any IoT product:

  • Set a hardware root of trust – secure programming and provisioning (firmware, key material, fuses)
  • Implement hardware enforced isolation – system partitioning secure / non-secure
  • Design secure boot – authenticated boot chain all the way to an authenticated kernel
  • Build for resiliency – fail-safe to an alternative firmware image and restore from off-board location
  • Enable Trusted Execution – establish a logical secure enclave
  • Abstract hardware security – streamline application development
  • Enable security monitoring – cloud-based, actionable security monitoring for fleets of devices

These capabilities provide a foundation sufficient to fulfill the most common security requirements of any IoT product.

Embedded security features needed to build the security layers described above are available today from many silicon providers. However, software is needed to turn these into a usable framework for application developers to easily implement higher layer security use cases without the need for advanced silicon expertise.

Such software products must be architected to be easily ported to diverse silicon designs. Secondly, the software solution must work with the established IoT manufacturing process. “Turning on” embedded security features triggers changes to existing manufacturing flows: accommodating hardware testing before the final firmware image can be programmed, burning fuses in the silicon in a specific order, and handling sensitive cryptographic key material throughout. The fragmentation, complexity, and expertise required are the reasons why embedded security is a challenge to implement at scale in IoT today.

A closer look – commercially available secure platform with Arrow Shield96

AWS partnered with Sequitur Labs and Arrow to provide a commercial solution that follows the approach described in the previous paragraph. This solution follows the NIST SP 800-193 Platform Firmware Resiliency Guidelines and goes beyond them to create a secure platform fitted for embedded and IoT products. At the same time, it abstracts the complexity of understanding and utilizing embedded security IP such as hardware crypto, random number generators, fuse controllers, tampers, hardware integrity checkers, TrustZone, and on-the-fly memory encryption.

For this blog, we created a demo using the Arrow Shield96 Trusted Platform (Fig 1), a single board computer running Sequitur Labs’ custom firmware image based on the EmSPARK Security Suite. The Arrow Shield96 board is based on the Microchip SAMA5D27, a Cortex-A5 entry-level MPU that embeds a set of security IP capable of fulfilling the most stringent security requirements.

Let’s dive deeper into the technical implementation first, then into the demo scenarios that fulfill some of our customers’ business needs.

Security inception and propagation of trust

Secure boot and firmware provisioning

Introducing secure boot requires initial programming of the CPU, essentially burning keys in the processor’s fuses, setting up the boot configuration, establishing the Hardware Root of Trust, and ensuring the processor only boots authenticated, trusted firmware. Secure boot implementation is tightly correlated to the processor programming and the device firmware provisioning. The following section provides details how secure boot and firmware provisioning can be done properly to establish a trusted security foundation for any application.

Firmware provisioning

EmSPARK Security Suite methodology for provisioning and programming the Shield96 board minimizes complexity and the need for embedded security expertise. It provides a tool and software building blocks that guide the device makers to create an encrypted manufacturing firmware image first. The manufacturing firmware image packages the final components: encrypted blobs of the final device firmware, a provisioning application, and customer specific key material such as private key and X.509 certificate for cloud connectivity, certificate authorities to authenticate firmware components and application updates.
The actual firmware provisioning and CPU programming are performed automatically during the very first boot of the device flashed with the manufacturing image. With the CPU running in secure mode, the provisioning application burns the necessary CPU fuses and generates keys using the embedded TRNG (true random number generator) to uniquely encrypt the software components that together form the final firmware. Such components include the Trusted Execution Environment (CoreTEE), the Linux kernel, customer applications, Trusted Applications, and key material (such as the credentials needed to authenticate with AWS IoT Core).
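
As a rough illustration of this idea (not Sequitur's actual EmSPARK implementation), the sketch below derives a device-unique key from a stand-in hardware-root-of-trust secret and uses it to encrypt one firmware component. Here `hwrot_secret` and the salt stand in for the fused secret and TRNG output that, on the real board, are only accessible from the secure domain.

```python
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.hkdf import HKDF
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def derive_device_key(hwrot_secret: bytes, device_salt: bytes) -> bytes:
    # Derive a 256-bit device-unique key from the root-of-trust secret.
    return HKDF(algorithm=hashes.SHA256(), length=32, salt=device_salt,
                info=b"firmware-encryption").derive(hwrot_secret)

def encrypt_component(key: bytes, component: bytes, label: bytes) -> bytes:
    # AES-GCM provides confidentiality and integrity; the label is
    # authenticated so a blob cannot be swapped for another component.
    nonce = os.urandom(12)
    return nonce + AESGCM(key).encrypt(nonce, component, label)

hwrot_secret = os.urandom(32)            # placeholder for the fused secret
device_key = derive_device_key(hwrot_secret, os.urandom(16))
blob = encrypt_component(device_key, b"<kernel image bytes>", b"linux-kernel")
```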

The output – establishing a trusted foundation

The result is firmware encrypted uniquely for each device with a key derived from the HWRoT, in a process that leaves no room for mismanagement of device secrets or human error. Device diversification achieved this way drastically reduces the cost of manufacturing by eliminating the need for HSMs and secure facilities, while providing protection from class-break attacks (break one, break all).
Another task the provisioning process performs during the very first boot is creating and securely storing a unique device certificate, built from a preloaded CSR (Certificate Signing Request) template and a key pair generated using the hardware TRNG, then signed with a customer-provided private key that is only usable during the device’s first boot. The device certificate serves as the immutable device identity for cloud authentication.
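
A hypothetical sketch of that identity-minting step: generate a key pair, fill a CSR-style subject for the device, and sign an X.509 certificate. The names, curve, and validity period are illustrative; in the real flow the customer signing key is only usable during first boot and the device key comes from the hardware TRNG.

```python
import datetime
from cryptography import x509
from cryptography.x509.oid import NameOID
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

signing_key = ec.generate_private_key(ec.SECP256R1())  # stand-in: customer key
device_key = ec.generate_private_key(ec.SECP256R1())   # stand-in: TRNG-backed key

subject = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "shield96-SN0001")])
issuer = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "Example Device CA")])

device_cert = (
    x509.CertificateBuilder()
    .subject_name(subject)
    .issuer_name(issuer)
    .public_key(device_key.public_key())
    .serial_number(x509.random_serial_number())
    .not_valid_before(datetime.datetime.utcnow())
    .not_valid_after(datetime.datetime.utcnow() + datetime.timedelta(days=3650))
    .sign(signing_key, hashes.SHA256())   # one-shot signing at first boot
)
```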

Secure boot

The secure boot implementation creates the system partitioning into secure and non-secure domains, making sure all peripherals are assigned to the desired domain. Arm TrustZone and Microchip security IP are at the core of the implementation. CoreTEE, the operating system for the secure domain, runs in on-the-fly AES-encrypted DDR memory. This protects a critical software component (the TEE) from memory probing attacks. Secure boot is designed so that at the end of the boot process, before handing control of the processor from the secure domain to the non-secure domain (Linux), it closes access to the fuse controller, secure JTAG, and other peripherals that could be leveraged to breach security.
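
The heart of any authenticated boot chain is a signature check before control passes to the next stage. The minimal sketch below shows only that logic; a real implementation runs it in ROM or bootloader code against key hashes burned into fuses, not in Python.

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

def verify_next_stage(image: bytes, signature: bytes, trusted_pubkey) -> bool:
    # Boot only images whose signature checks out against the trusted key.
    try:
        trusted_pubkey.verify(signature, image, ec.ECDSA(hashes.SHA256()))
        return True
    except InvalidSignature:
        return False   # refuse to boot; fall back to the recovery image

# Demo: sign a stand-in image and verify it as the boot ROM would.
key = ec.generate_private_key(ec.SECP256R1())
image = b"<next-stage image bytes>"
signature = key.sign(image, ec.ECDSA(hashes.SHA256()))
assert verify_next_stage(image, signature, key.public_key())
```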

Building for resilience

Secure boot implements two features that boost device resilience – a fail-over boot from a secondary image (B) when the primary boot (A) fails, and the ability to restore a known good image (A) from an off-board location. The solution includes a hardware watchdog and a boot-loop counter (set by the device maker) that Linux resets to its maximum after each successful boot. If Linux repeatedly fails to boot and the counter reaches zero, the B partition is set for the next boot. After such a failure, once the failover image B is loaded, the device connects to an off-board location (in our demo, a repository on AWS), retrieves the latest firmware image, and re-installs it as the primary one (A). These two features help reduce operational cost by allowing devices in the field to self-heal. In addition, AWS IoT Device Defender checks device behaviors for ongoing analysis and triggers alerts when behaviors deviate from expected ranges.
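
A pseudocode sketch of that A/B policy, assuming a persistent boot-attempt counter that each failed boot decrements (the storage and restore interfaces are placeholders, not the Shield96's real ones):

```python
MAX_ATTEMPTS = 3
nv = {"active_slot": "A", "boot_attempts": MAX_ATTEMPTS}  # stand-in NV storage

def restore_primary_image():
    # Placeholder: fetch the known-good image from the off-board repository
    # (an AWS-hosted repository in the demo) and reinstall it as slot A.
    print("restoring slot A from off-board repository")

def select_boot_slot():
    if nv["boot_attempts"] == 0:        # A kept failing: fail over to B
        nv["active_slot"] = "B"
        nv["boot_attempts"] = MAX_ATTEMPTS
        restore_primary_image()
    else:
        nv["boot_attempts"] -= 1        # each watchdog reset burns one attempt
    return nv["active_slot"]

def on_successful_boot():
    nv["boot_attempts"] = MAX_ATTEMPTS  # Linux refills the budget after booting
```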

In our demo when the alternative firmware image (B) is loaded, an event is triggered in the AWS IoT Device Defender agent. The AWS IoT Device Defender agent running as a TA in the secure domain sends these events to the AWS IoT Device Defender Detect service for evaluation. The TA, running in the secure domain, also signs AWS IoT Device Defender messages to facilitate integrity validation for each reported event.

Another key component of the EmSPARK Suite is the secure update process. Since secure boot is the only process that can decrypt firmware components during device start, it is also involved in performing the firmware update. The firmware update feature is exposed in Linux as an API call that takes a manifest and the signed and/or encrypted new firmware image. The API call performs image signature verification, sets the update flag for the next boot, and restarts the board. During the next boot, the secure boot process decrypts the new image using a pre-provisioned key and re-encrypts it with the board-specific key. The manifest indicates which components need to be updated – Linux kernel, TEE, TAs and/or bootloader.
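
As an illustration only, such a manifest could look like the following; the field names are invented for this sketch, and EmSPARK's real manifest format may differ.

```python
import json

# Hypothetical update manifest: names the signed/encrypted bundle and which
# components it updates (here the bootloader is left untouched).
manifest = {
    "version": "2.1.0",
    "components": ["linux-kernel", "tee", "trusted-apps"],
    "image": "firmware-2.1.0.enc",
    "signature": "firmware-2.1.0.sig",
}
print(json.dumps(manifest, indent=2))
# Device side: verify the signature, set the update flag, reboot; secure
# boot then decrypts the image and re-encrypts it with the board key.
```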

Enabling easy development through security abstraction

Arrow Shield96, through the EmSPARK Suite product, comes preloaded with a number of TAs (Trusted Applications) in the Shield96 firmware. The figure below shows the dual-domain implementation and the software components provided with the Shield96 Trusted product in our demo.


Fig 2. Software architecture enabling TrustZone/TEE with EmSPARK Suite

These TAs expose a set of secure functions to Linux via a C SDK (the CoreLocker APIs). The Arrow board and Sequitur’s security suite preload the following TAs for our demo:

  • Cryptographic engine – providing symmetric and asymmetric crypto operations and key generation, integrating silicon-specific hardware crypto
  • Key-store and CA-store managed (add, delete) via signed commands
  • Secure firmware update
  • Secure storage for files and stream data
  • TLS and MQTT stacks
  • AWS IoT Device Defender secure agent

In addition, a tamper detection and remediation TA has been added for our demo purposes (as detailed in “The Demo” section below). These TAs provide a preloaded framework for implementing a comprehensive set of security use cases, assuring that security operations are executed in isolation from the application OS in an authenticated and resilient environment. Such use cases include confidentiality, authentication and authorization, access control, attestation, privacy, integrity protection, device health monitoring, secure communication with the cloud or other devices, and secure lifecycle management.

All TA functions are made available to application development through a set of C APIs via an SDK. Developers do not need to understand the complexity of creating TAs or using HW security provided by the chipset.
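
To give a flavor of what calling into such an SDK can look like, here is an illustrative ctypes binding from Python. The library name and function signature are hypothetical stand-ins (the blog does not list the actual CoreLocker symbols); the point is that the non-secure application calls a plain C function while the key never leaves the secure domain.

```python
import ctypes

# Hypothetical shared library exposing a TEE-backed signing function.
lib = ctypes.CDLL("libcorelocker_example.so")
lib.cl_sign.argtypes = [ctypes.c_char_p, ctypes.c_size_t,
                        ctypes.c_char_p, ctypes.POINTER(ctypes.c_size_t)]
lib.cl_sign.restype = ctypes.c_int

def tee_sign(payload: bytes) -> bytes:
    # The private key stays inside the TEE; only the signature comes back.
    sig = ctypes.create_string_buffer(256)
    sig_len = ctypes.c_size_t(len(sig))
    rc = lib.cl_sign(payload, len(payload), sig, ctypes.byref(sig_len))
    if rc != 0:
        raise RuntimeError(f"TEE signing failed with code {rc}")
    return sig.raw[:sig_len.value]
```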

Translating TAs to security use cases

Through a securely managed CA-store (Certificate Authority store), the device can authenticate payloads against a set of CAs optionally loaded at manufacturing or later in the device lifecycle. With the ability to securely update the CAs, the device or product owner can transfer ownership of certain functions, such as firmware update or application update, to other entities. For example, the customer owns the applications, but firmware update and security management may be delegated to a third-party Managed Service Provider while maintaining privacy requirements.
The cryptographic engine is core to anything related to security; it implements a set of symmetric and asymmetric cryptographic functions and key generation, allowing applications in the non-secure domain to execute crypto in isolation. Hardware crypto is used when implemented by the chipset.

The Microchip SAMA5D2 implements in hardware the ability to monitor regions of memory in real time. In the Shield96 firmware this feature – ICM, Integrity Check Monitoring – is used to monitor the integrity of the Linux kernel. Any modification of the Linux kernel triggers an interrupt in the secure domain. The hardware isolation implemented through TrustZone prevents Linux from even being aware of such interrupts. The interrupt triggers a remediation function implemented in a TA that, together with the Device Defender Secure Agent TA, performs three operations (a sketch of the reporting step follows the list):

  • records the tampering event and restarts Linux from the verified, authenticated encrypted image provided through secure boot
  • after restart packages the tampering event into a JSON format, signs it for integrity assurance and stores it
  • publishes the JSON package to the AWS IoT Device Defender monitoring service
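
A minimal sketch of the last two steps, with an invented payload shape and a software key standing in for the TEE-held private key:

```python
import base64
import json
import time
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

tee_key = ec.generate_private_key(ec.SECP256R1())  # stand-in for the TEE key

# Package the tamper event; the field names are illustrative.
event = {
    "device_id": "shield96-SN0001",
    "event": "kernel-integrity-violation",
    "timestamp": int(time.time()),
}
payload = json.dumps(event, sort_keys=True).encode()

# Sign in the "secure domain" so the cloud can verify report integrity.
signature = tee_key.sign(payload, ec.ECDSA(hashes.SHA256()))
report = {"event": event, "sig": base64.b64encode(signature).decode()}
# `report` would then be published (e.g. over MQTT) for evaluation by the
# AWS IoT Device Defender Detect service.
```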

Complementing the edge-to-cloud security strategy with AWS IoT Device Defender

AWS IoT Device Defender audits device cloud configuration against security best practices and continuously monitors anomalies and threats on devices based on expected cloud-side and device-side behaviors. In this demo, complementing the defense mechanisms implemented at the device level, AWS IoT Device Defender provides its monitoring capability and alerts customers when it evaluates that anomalous or threat events have occurred on an end node. The demo required installing AWS IoT Device Defender agents on both the non-secure and secure domains of the Shield96 board. The secure domain provides the crypto signature for device health reports (using a securely held private key) and also isolates the detection and reporting processes from interception by malicious applications. The AWS IoT Device Defender agent collects monitored behaviors in the form of metrics from both domains; then, from the secure domain, the agent sends the metrics to the AWS Cloud for evaluation.

The Demo

For a full demo tutorial, please watch this video.


Fig. 3 Edge-to-cloud IoT security demo at Arrow Embedded to Go 2020

The demo covers the following scenarios:

  • Out of the box experience
  • Firmware personalization – secure firmware rotation to provide a logistical separation between manufacturing and production firmware
  • Device registration to AWS IoT Core
  • Device decommissioning (de-registration) from AWS IoT Core
  • Secure firmware update
  • Resilience demonstration – tamper event simulation and remediation
  • Event reporting to AWS IoT Device Defender

Demonstrating resilience and tamper violation reporting with AWS IoT Device Defender

The boot logic for the demo includes a safety check for tamper events. In this case, we connected a button to an environmental tamper pin. A tamper violation generated by a button press is detected on the next boot sequence, so the initial boot code switches to the secondary boot stack and proceeds to boot the "fail-safe" boot image. Once booted, the system publishes the tamper event to AWS IoT Device Defender for logging and analysis. In the demo, the primary and secondary images are identical, so each tamper event simply switches to the other image. This allows the demo scenario to be repeated, with each tamper event switching the system from the A to the B firmware image or back.
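
The selection logic itself is simple. Here is a minimal sketch of the A/B switch described above, assuming a persistent active-slot flag and a latched tamper indicator; the real check runs in the secure-domain boot code, not in Python.

```python
def select_boot_slot(active_slot, tamper_latched):
    """Return the slot to boot: fail over on a tamper event, else stay put."""
    if tamper_latched:
        return "B" if active_slot == "A" else "A"
    return active_slot

assert select_boot_slot("A", tamper_latched=True) == "B"   # A -> B
assert select_boot_slot("B", tamper_latched=True) == "A"   # B -> A
assert select_boot_slot("A", tamper_latched=False) == "A"  # no event: no switch
```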

Streamlining personalized firmware to commercial boards

The commercial solution introduced by Arrow with the Shield96 board includes a cloud-based secure firmware rotation from the generic manufacturing firmware using AWS, streamlining device personalization and providing a production-ready device to a multitude of customers.

Out of manufacturing, the Shield96 Trusted board comes preloaded with a minimal, generic version of Linux. The out-of-the-box experience of getting to personalized, up-to-date firmware is as simple as inserting an SD card and connecting the board to the Internet. The device boots securely, partitions the SD card, then registers itself to AWS IoT Core using Just-in-Time Registration of Device Certificates (JITR) and provisions itself to Sequitur's AWS IoT Core endpoint and the Sandbox application. Next, the device automatically downloads the most recent generic or customer-specific file system, installs it and restarts. The Sandbox thus provides lifecycle device management and firmware updates.

The two-stage firmware deployment – a generic firmware preloaded at the Arrow Programming Center followed by a cloud-based final firmware rotation – gives customers valuable flexibility. For instance, an Original Equipment Manufacturer (OEM) or Original Device Manufacturer (ODM) may need to produce devices with firmware variations for deployment in different geographical regions or customized for different customers. Alternatively, the OEM/ODM may want to optimize logistics, manufacture in volume while the firmware is still in development, and load the final firmware in a distribution facility before shipping to customers. It also eliminates the opportunity for IP theft in manufacturing, since the final firmware is never present at the manufacturer.

Conclusion

The solution introduced in this blog demonstrates that manufacturers can produce devices at scale while implementing security properly, taking full advantage of the silicon's embedded security IP. The implementation packages niche expertise and years of experience into a framework accessible to any developer.
Why is this important? Advanced security implemented right massively reduces time to market and cost, and the solution is highly portable to other silicon. Sequitur Labs' EmSPARK Security Suite is already available for NXP microprocessors (the i.MX and QorIQ Layerscape families) and NVIDIA Xavier, bringing the same level of abstraction to IoT and embedded developers.
In this relationship, Arrow offers a fully provisioned secure single-board computer. Arrow adds further value by offering the ability to customize the hardware and the firmware. Customers can choose to add or remove hardware components, customize the Linux kernel, and subscribe to firmware management and security monitoring.
APN Partners complement existing AWS services to help customers deploy a comprehensive security architecture and a seamless experience. In this case, Sequitur Labs and Arrow bring to market a game-changing product that complements existing AWS edge and cloud services, enabling projects of any size to use advanced security without needing dedicated embedded security experts.
Moreover, the product builds on the hardware security features of existing processors while providing the software tools and processes to work with existing manufacturing flows, without requiring secure manufacturing.
For a deeper dive into this solution, the Getting Started Guide on the AWS Partner Device Catalog provides board bring-up steps and example code for many of the supported use cases.

Originally posted HERE.

Read more…

Written by: Mirko Grabel

Edge computing brings a number of benefits to the Internet of Things: reduced latency, improved resiliency and availability, lower costs, and local data storage (to assist with regulatory compliance), to name a few. In my last blog post I examined some of these benefits as a means of defining exactly where the edge is. Now let's take a closer look at how edge computing benefits play out in real-world IoT use cases.

Benefit No. 1: Reduced latency

Many applications have strict latency requirements, but when it comes to safety and security applications, latency can be a matter of life or death. Consider, for example, an autonomous vehicle applying brakes or roadside signs warning drivers of upcoming hazards. By the time data is sent to the cloud and analyzed, and a response is returned to the car or sign, lives can be endangered. But let’s crunch some numbers just for fun.

Say a Department of Transportation in Florida is considering a cloud service to host the apps for its roadside signs. One of the vendors on the DoT's shortlist is a cloud in California. The DoT's latency requirement is less than 15ms. The speed of light in fiber works out to about 5 μs/km. The distance from the U.S. east coast to the west coast is about 5,000 km. Do the math and the resulting round-trip latency is 50ms. It's pure physics. If the DoT requires a real-time response, it must move the compute closer to the devices.
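
The same arithmetic in a few lines, for anyone who wants to plug in their own distances:

```python
# The back-of-the-envelope latency math from the paragraph above.
PROPAGATION_US_PER_KM = 5      # light in fiber: roughly 5 microseconds per km
distance_km = 5_000            # U.S. east coast to west coast

one_way_ms = distance_km * PROPAGATION_US_PER_KM / 1000
round_trip_ms = 2 * one_way_ms
print(f"round trip: {round_trip_ms:.0f} ms")   # 50 ms -- over a 15 ms budget
```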

Benefit No. 2: Improved resiliency/availability

Critical infrastructure requires the highest level of availability and resiliency to ensure safety and continuity of services. Consider a refinery gas leakage detection system. It must be able to operate without Internet access. If the system goes offline and there’s a leakage, that’s an issue. Compute must be done at the edge. In this case, the edge may be on the system itself.

While it’s not a life-threatening use case, retail operations can also benefit from the availability provided by edge compute. Retailers want their Point of Sale (PoS) systems to be available 100% of the time to service customers. But some retail stores are in remote locations with unreliable WAN connections. Moving the PoS systems onto their edge compute enables retailers to maintain high availability.

Benefit No. 3: Reduced costs

Bandwidth is almost infinite, but it comes at a cost. Edge computing allows organizations to reduce bandwidth costs by processing data before it crosses the WAN. This benefit applies to any use case, but here are two examples where it is very evident: video surveillance and preventive maintenance. A single city-deployed HD video camera may generate 1,296 GB a month. Streaming that data over LTE easily becomes cost-prohibitive. Adding edge compute to pre-aggregate the data significantly reduces those costs.
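
As a sanity check, that camera figure is consistent with a roughly 4 Mbps HD stream running around the clock (the bitrate is my assumption, not from the original post):

```python
# ~4 Mbps (assumed) for a 30-day month reproduces the 1,296 GB figure.
bitrate_bps = 4_000_000
seconds_per_month = 30 * 24 * 3600

gb_per_month = bitrate_bps / 8 * seconds_per_month / 1e9
print(f"{gb_per_month:.0f} GB/month")          # ~1296 GB
```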

Manufacturers use edge computing for preventive maintenance of remote machinery. Sensors are used to monitor temperatures and vibrations. The freshness of this data is critical, as the slightest variation can indicate a problem. To ensure that issues are caught as early as possible, the application requires high-resolution data (for example, 1,000 samples per second). Rather than sending all of this data over the Internet to be analyzed, edge compute is used to filter the data, and only averages, anomalies and threshold violations are sent to the cloud.
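
Here is a minimal sketch of that edge-side filtering, assuming a 1,000-sample window and an illustrative threshold; only summaries, violations and outliers leave the device:

```python
from statistics import mean, stdev

THRESHOLD = 80.0        # illustrative vibration limit (assumed units)
window = []             # one second of samples at 1,000 samples per second

def on_sample(value, publish):
    """Keep high-resolution data local; forward only what the cloud needs."""
    window.append(value)
    if value > THRESHOLD:
        publish({"type": "threshold_violation", "value": value})
    if len(window) >= 1000:
        avg, sd = mean(window), stdev(window)
        publish({"type": "summary", "avg": avg})
        # Anomalies: samples far outside the window's normal spread
        for v in window:
            if abs(v - avg) > 3 * sd:
                publish({"type": "anomaly", "value": v})
        window.clear()
```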

Benefit No. 4: Comply with government regulations

Countries are increasingly instituting privacy and data retention laws. The European Union’s General Data Protection Regulation (GDPR) is a prime example. Any organization that has data belonging to an EU citizen is required to meet the GDPR’s requirements, which includes an obligation to report leaks of personal data. Edge computing can help these organizations comply with GDPR. For example, instead of storing and backhauling surveillance video, a smart city can evaluate the footage at the edge and only backhaul the meta data.

Canada’s Water Act: National Hydrometric Program is another edge computing use case that delivers regulatory compliance benefits. As part of the program, about 3,000 measurement stations have been deployed nationwide. Any missing data requires justification, but storing data at the edge ensures it is retained even when connectivity fails.

Bonus Benefit: “Because I want to…”

Finally, some users simply prefer to have full control. By implementing compute at the edge rather than the cloud, users have greater flexibility. We have seen this in manufacturing. Technicians want to have full control over the machinery. Edge computing gives them this control as well as independence from IT. The technicians know the machinery best and security and availability remain top of mind.

Summary

By reducing latency and costs, improving resiliency and availability, and keeping data local, edge computing opens up a new world of IoT use cases. Those described here are just the beginning. It will be exciting to see where edge computing turns up next.

Originally posted here

Read more…

In order to form proper networks to share data, the Internet of Things (IoT) needs reliable communications and connectivity. To meet that demand, there is a wide range of connectivity technologies that operators, as well as developers, can opt for.

IoT Connectivity Groups

IoT connectivity technologies currently fall into two groups. The first is cellular-based; the second is unlicensed LPWAN. The first group is built on licensed spectrum, which offers more consistent and reliable infrastructure. This group supports higher data rates, but at the cost of shorter battery life and more expensive hardware – although the hardware is steadily becoming cheaper.

Cellular-Based IoT

Because acquiring licensed spectrum is expensive, cellular-based IoT is only offered by large operators, who have access to both the licensed spectrum and the necessary hardware. Cellular IoT connectivity comes in two types: narrowband IoT (NB-IoT) and category M1 IoT (Cat-M1).

Although both are based on cellular standards, there is one big difference between the two: NB-IoT has a smaller bandwidth than Cat-M1 – roughly 10x smaller – and thus transmits at lower power. Both still have a very long range, with NB-IoT reaching up to 100 km.

Cellular-standard IoT connectivity ensures greater reliability, and device operational lifetimes are longer compared to unlicensed LPWAN. But when choosing between the two, most operators prefer NB-IoT over Cat-M1: Cat-M1 provides higher data rates that are not usually necessary, and its higher cost deters operators.

Cat-M1 is mostly chosen by large-scale operators because it provides mobility support, which suits transportation and traffic-control networks. It can also be useful in emergency response situations, as it supports voice data transfer.

The hardware (module) used for cellular IoT is relatively expensive compared to LPWAN – around $10 per module, compared to about $2 for LPWAN. However, this cost has been dropping rapidly because of popular demand.

Unlicensed LPWAN

As for unlicensed LPWANs, they are used by those who cannot afford cellular-based IoT. They are designed for customized IoT networks and offer lower data rates, but with longer battery life and long transmission range. They can also be deployed easily. At the moment, there are two main types of unlicensed LPWAN: LoRa (Long Range) and SigFox.

Both types are designed for devices with a lower price, longer battery life, and long range. Their coverage range can be up to 10 km, and their connectivity cost is as low as $2 per module – sometimes even lower. They are therefore ideal for local areas.

Weightless LPWAN

Although there are many variants of LPWAN, Weightless is considered to be the most popular one, because the Weightless Special Interest Group (SIG) currently offers three different protocols: Weightless-N, Weightless-W, and Weightless-P. The three work in different ways, as they have different modalities.

Weightless-W

First off, we have the Weightless-W open standard model, designed to operate in TV white space (TVWS). TVWS is the inactive or unoccupied spectrum found between channels actively used in the UHF and VHF bands; its frequency spans 470 MHz – 790 MHz. For those who don't know, this is similar to what Neul was developing before being acquired by Huawei. Now, while using TVWS can be great because it uses ultra-high-frequency spectrum, it has one downside: in theory it seems perfect, but in practice it is difficult, because the rules and regulations for utilizing TVWS for IoT vary greatly.

In addition to this, the end nodes of this model don't work as well as they are supposed to. They are designed to operate in a small part of the spectrum, and it is difficult to design an antenna that can cover such a wide band of spectrum. This is why TVWS can be difficult to install. The Weightless-W is considered a good option in:

  • The smart oil and gas sectors.

Weightless-N

Second up, we have the ultra-narrowband system, Weightless-N. This model has a lot in common with SigFox. Its best feature is that it is made up of multiple networks rather than being an end-to-end closed system. Weightless-N uses the same differential binary phase-shift keying (DBPSK) digital modulation scheme used in SigFox.

The Weightless-N line is operated by Nwave, a popular IoT hardware and software developer. However, while this model is good for sensor-based networks, temperature readings, tank-level monitoring, and more, it has some problems. For instance, Nwave requires a TCXO – a temperature-compensated crystal oscillator.

In addition, it has an unbalanced link budget: the uplink to the base station is far more sensitive than the downlink coming back, which is what makes the imbalance a problem.

Weightless-P

Finally, we have Weightless-P, the latest model in the group, launched some time after the other two. What people appreciate most about this one is its two-way capability, and it operates in 12.5 kHz channels. Weightless-P doesn't require a TCXO, which sets it apart from Weightless-N and -W.

The main company behind Weightless-P is Ubiik. The only downside of this model is that it is not ideal for wide-area networks, as it offers a range of only around 2 km. However, Weightless-P is still ideal for:

  • Private networks.
  • More sophisticated use cases.
  • Areas where uplink data and downlink control are important.

Capacity

Because the Weightless protocols are based on software-defined radio (SDR), the base station for narrowband signals is much more complex, creating thousands of small binary phase-shift keying channels. This gives you more capacity, but at a significant cost.

In addition, since Weightless-N end nodes require a TCXO, they are more expensive. The TCXO guards against the frequency becoming unstable when the temperature changes at the end node.

Range

As for range, Weightless-N and -W reach around 5 km in urban environments, while Weightless-P can go up to 2 km.

Comparison

Weightless and SigFox

In terms of technology, Weightless-N and SigFox are quite similar. They differ, however, in their go-to-market: since Weightless is a standard, it requires another company to build an IoT network based on it, whereas SigFox is an end-to-end network solution.

Weightless and LoRa

In terms of technology, Weightless and LoRa/LoRaWAN are different. However, Weightless-N and LoRaWAN function similarly, as both are uplink-oriented systems. Weightless is also sometimes considered a very good alternative when LoRa is not feasible for the user.

Weightless and Symphony Link

The Symphony Link and Weightless-P standards are more similar to each other; for instance, both focus on private networks. However, Symphony Link has much better range performance because it uses LoRa modulation instead of minimum-shift keying (MSK).

Originally posted here

Read more…

Arm DevSummit 2020 debuted this week (October 6 – 8) as an online virtual conference focused on engineers, providing them with insights into the Arm ecosystem. The summit lasted three days, over which Arm painted an interesting technology story about the current and future state of computing and where developers fit within that story. I've been attending Arm TechCon (which has become Arm DevSummit) for more than half a decade now, and as I perused the content, I noticed several take-a-ways for developers working on microcontroller-based embedded systems. In this post, we will examine these key take-a-ways and I'll point you to some of the sessions that I think may pique your interest.

(For those of you who aren't yet aware, you can register for free up until October 21st and still watch the conference materials up until November 28th. Click here to register.)

Take-A-Way #1 – Expect Big Things from NVIDIA's Acquisition of Arm

As many readers probably already know, NVIDIA is in the process of acquiring Arm. This acquisition has the potential to be a focal point that leads to a technological revolution in computing, particularly around artificial intelligence, and that will also impact nearly every embedded system at the edge and beyond. While many of us have probably wondered what plans NVIDIA CEO Jensen Huang may have for Arm, the keynotes for October 6th included a fireside chat between Jensen Huang and Arm CEO Simon Segars. Listening to this conversation is well worth the time and will give developers some insight into the future, as well as assurances that the Arm business model will not be dramatically upended.

Take-A-Way #2 – Machine Learning for MCUs is Accelerating

It is sometimes difficult at a conference to get a feel for what is real and what is a little more smoke and mirrors. Sometimes, announcements are real, but they just take several years to filter their way into the market and affect how developers build systems. Machine learning is one of those technologies that I find there is a lot of interest around but that developers also aren’t quite sure what to do with yet, at least in the microcontroller space. When we hear machine learning, we think artificial intelligence, big datasets and more processing power than will fit on an MCU.

There were several interesting talks at DevSummit around machine learning.

Some of these were foundational, providing embedded developers with the fundamentals to get started while others provided hands-on explorations of machine learning with development boards. The take-a-way that I gather here is that the effort to bring machine learning capabilities to microcontrollers so that they can be leveraged in industry use cases is accelerating. Lots of effort is being placed in ML algorithms, tools, frameworks and even the hardware. There were several talks that mentioned Arm’s Cortex-M55 architecture that will include Helium technology to help accelerate machine learning and DSP processing capabilities.

Take-A-Way #3 – The Constant Need for Reinvention

In my last take-a-way, I alluded to the fact that things are accelerating – and not just in the technologies we use to build systems. The range of application domains we can apply these technologies to is also dramatically expanding. Not only can we deploy security and ML technologies at the edge, but also in domains such as space and medical systems. There were several interesting talks about how these technologies are being used around the world to solve interesting and unique problems, such as protecting vulnerable ecosystems, mapping the sea floor, fighting diseases and much more.

By carefully watching and listening, you’ll notice that many speakers have been involved in many different types of products over their careers and that they are constantly having to reinvent their skill sets, capabilities and even their interests! This is what makes working in embedded systems so interesting! It is constantly changing and evolving and as engineers we don’t get to sit idly behind a desk. Just as Arm, NVIDIA and many of the other ecosystem partners and speakers show us, technology is rapidly changing but so are the problem domains that we can apply these technologies to.

Take-A-Way #4 – Mbed and Keil are Evolving

There are also interesting changes coming to the Arm toolchains and tools like Mbed and Keil MDK. In Reinhard Keil's talk, "Introduction to an Open Approach for Low-Power IoT Development", developers got insight into the changes coming to Mbed and Keil, with the core focus on IoT development. The talk focused on the endpoint and discussed how Mbed and Keil MDK are being moved to an online platform designed to help developers move through product development faster, from prototyping to production. Keil Studio Online is currently in early access and will be released early next year.

(If you are interested in endpoints and AI, you might also want to check out this article on "How Do We Accelerate Endpoint AI Innovation? Put Developers First")

Conclusions

Arm DevSummit had a lot to offer developers this year and without the need to travel to California to participate. (Although I greatly missed catching up with friends and colleagues in person). If you haven’t already, I would recommend checking out the DevSummit and watching a few of the talks I mentioned. There certainly were a lot more talks and I’m still in the process of sifting through everything. Hopefully there will be a few sessions that will inspire you and give you a feel for where the industry is headed and how you will need to pivot your own skills in the coming years.

Originally posted here

Read more…

SSE Airtricity employees Derek Conty, left, Francie Byrne, middle, and Ryan Doran, right, install solar panels on the roof of Kinsale Community School in Kinsale, Ireland. The installation is part of a project with Microsoft to demonstrate the feasibility of distributed power purchase agreements. Credit: Naoise Culhane

by John Roach

Solar panels being installed on the roofs of dozens of schools throughout Dublin, Ireland, reflect a novel front in the fight against global climate change, according to a senior software engineer and a sustainability lead at Microsoft.

The technology company partnered with SSE Airtricity, Ireland's largest provider of 100% green energy and a part of the FTSE-listed SSE Group, to install and manage the internet-connected solar panels, which are connected via Azure IoT to Microsoft Azure, a cloud computing platform.

The software tools aggregate and analyze real-time data on energy generated by the solar panels, demonstrating a mechanism for Microsoft and other corporations to achieve sustainability goals and reduce the carbon footprint of the electric power grid.

"We need to decarbonize the global economy to avoid catastrophic climate change," said Conor Kelly, the software engineer who is leading the distributed solar energy project for Microsoft Azure IoT. "The first thing we can do, and the easiest thing we can do, is focus on electricity."

Microsoft's $1.1 million contribution to the project builds on the company's ongoing investment in renewable energy technologies to offset carbon emissions from the operation of its datacenters.

A typical approach to powering datacenters with renewable energy is for companies such as Microsoft to sign so-called power purchase agreements with energy companies. The agreements provide the financial guarantees needed to build industrial-scale wind and solar farms and connections to the power grid.

The new project demonstrates the feasibility of agreements to install solar panels on rooftops distributed across towns with existing grid connections and use internet of things, or IoT, technologies to aggregate the accumulated energy production for carbon offset accounting.

"It utilizes existing assets that are sitting there unmonetized, which are roofs of buildings that absorb sunlight all day," Kelly said.

New Business Model

The project is also a proof-of-concept, or blueprint, for how energy providers can adapt as the falling price of solar panels enables distributed electric power generation throughout the existing electric power grid.

Traditionally, suppliers purchase power from central power plants and industrial-scale wind and solar farms and sell it to consumers on the distribution grid. Now, energy providers like SSE Airtricity provide renewable energy solutions that allow end consumers to generate power, from sustainable sources, using the existing grid connection on their premises.

"The more forward-thinking energy providers that we are working with, like SSE Airtricity, identify this as an opportunity and industry changing shift in how energy will be generated and consumed," Kelly noted.

The opportunity comes in the ability to finance the installation of solar panels and batteries at homes, schools, businesses and other buildings throughout a community and leverage IoT technology to efficiently perform a range of services from energy trading to carbon offset accounting.

Kelly and his team with Azure IoT are working with SSE Airtricity to develop the tools and machine learning models necessary to unlock this opportunity.

"Instead of having utility scale solar farms located outside of cities, you could have a solar farm at the distribution level, spread across a number of locations," said Fergal Ahern, a business energy solutions manager and renewable energy expert with SSE Airtricity.

For the distributed power purchase agreement, SSE Airtricity uses Azure IoT to aggregate the generation of all the solar panels installed across 27 schools around the provinces of Leinster, Munster and Connacht and run it through a machine learning model to determine the carbon emissions that the solar panels avoid.

The schools use the electricity generated by the solar panels, which reduces their utility bills; Microsoft receives the renewable energy credits for the generated electricity, which the company applies to its carbon neutrality commitments.

The panels are expected to produce enough energy annually to power the equivalent of 68 Irish homes for a year and abate more than 2.1 million kilograms, which is equivalent to 4.6 million pounds, of carbon dioxide emissions over the 15 years of the agreement, according to Kelly.
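
Those figures hang together under reasonable assumptions. Assuming an average Irish household uses about 4,200 kWh per year and each grid kWh displaced avoids roughly 0.49 kg of CO2 (both my assumptions, not figures from the project):

```python
# Rough check of the numbers above under the stated assumptions.
homes_equivalent = 68
kwh_per_home_per_year = 4_200        # assumed average Irish household usage
kg_co2_avoided_per_kwh = 0.49        # assumed grid emissions factor
years = 15

annual_kwh = homes_equivalent * kwh_per_home_per_year
total_kg = annual_kwh * years * kg_co2_avoided_per_kwh
print(f"~{total_kg/1e6:.1f} million kg CO2 avoided")   # ~2.1 million kg
```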

"This is additional renewable energy that wouldn't have otherwise happened," he said. "Every little bit counts when it comes to meeting our sustainability targets and combatting climate change."

Every little bit counts

Victory Luke, a 16-year-old student at Collinstown Park Community College in Dublin, has lived by the "every little bit counts" mantra since she participated in a "Generation Green" sustainability workshop in 2019 organized by the Sustainable Energy Authority of Ireland, SSE Airtricity and Microsoft.

The workshop was part of an education program surrounding the installation of solar panels and batteries at her school along with a retrofit of the lighting system with LEDs. Digital screens show the school's energy use in real time, allowing students to see the impact of the energy efficiency upgrades.

Luke said the workshop captured her interest on climate change issues. She started reading more about sustainability and environmental conservation and agreed to share her newfound knowledge with the younger students at her school.

"I was going around and talking to them about energy efficiency, sharing tips and tricks like if you are going to boil a kettle, only boil as much water as you need, not too much," she explained.

That June, the Sustainable Energy Authority of Ireland invited her to give a speech at the Global Conference on Energy Efficiency in Dublin, which was organized by the International Energy Agency, an organization that works with governments and industry to shape sustainable energy policy.

"It kind of felt surreal because I honestly felt like I wasn't adequate enough to be speaking about these things," she said, noting that the conference attendees included government ministers, CEOs and energy experts from around the world.

At the time, she added, the global climate strike movement and its youth leaders were making international headlines, which made her advocacy at school feel even smaller. "Then I kind of realized that it is those smaller things that make the big difference," she said.

SSE Airtricity and Microsoft plan to replicate the educational program that inspired Luke and her classmates at dozens of the schools around Ireland that are participating in the project.

"When you've got solar at a school and you can physically point at the installation and a screen that monitors the power being generated, it brings sustainability into daily school life," Ahern said.

Proof of concept for policymakers

The project's education campaign extends to renewable energy policymakers, Kelly noted. He explained that renewable energy credits—a market incentive for corporations to support renewable energy projects—are currently unavailable for distributed power purchase agreements.

For this project, Microsoft will receive genuine renewable energy credits from a wind farm that SSE Airtricity also operates, he added.

"And," he said, "we are hoping to use this project as an example of what regulation should look like, to say, 'You need to award renewable energy credits to distributed generation because they would allow corporates to scale-up this type of project.'"

For her part, Luke supports steps by multinational corporations such as Microsoft to invest in renewable energy projects that address global climate change.

"It is a good thing to see," she said. "Once one person does something, other people are going to follow.

Originally posted HERE

Read more…

An edge device is the network component that is responsible for connecting a local area network to an external or wide area network, which can be accessed from anywhere. Edge devices offer several new services and improved outcomes for IoT deployments across all markets. Smart services that rely on high volumes of data and local analysis can be deployed in a wide range of environments.

An edge device provides local data to an external network. If the protocols of the local and external networks differ, it also translates between them and connects the two network boundaries. Edge devices can analyze diagnostics and populate data automatically; however, a secure connection between the field network and cloud computing is necessary. In the event of a lost internet connection or a cloud outage, the edge device stores data until the connection is re-established, so no process information is lost. Local data storage is optional, and not all edge devices offer it; it depends on the application and the service to be implemented on the plant.

How does an edge device work?

An edge device has a very straightforward working principle: it communicates between two different networks and translates one protocol into another. Furthermore, it creates a secure connection with the cloud.

An edge device can be configured via local access or via the internet or cloud. In general, an edge device is plug-and-play; its setup is simple and does not require much time to configure.
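
As a rough illustration of that translate-and-forward role, here is a minimal sketch that reads a value from a local field bus and republishes it securely to the cloud over MQTT. The broker, topic and register are assumptions, and the field-bus read is a placeholder:

```python
import json, time
import paho.mqtt.client as mqtt       # assumes the paho-mqtt package

def read_local_register(address):
    """Placeholder for the field-bus read (e.g. a Modbus holding register)."""
    return 42

cloud = mqtt.Client()
cloud.tls_set()                       # the secure connection to the cloud
cloud.connect("broker.example.com", 8883)

while True:
    value = read_local_register(address=100)
    # Translate the field-bus value into the cloud-side representation
    payload = json.dumps({"register": 100, "value": value, "ts": time.time()})
    cloud.publish("plant/line1/register100", payload)
    time.sleep(10)
```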

Why should I use an edge device?

Depending on the service required in the plant, edge devices are a crucial point for collecting information and creating an automatic digital twin of your device in the cloud.

Edge devices are an essential part of IoT solutions, since they connect the information from a network to a cloud solution. They do not affect the network; they only collect data from it and never interfere with communication between the control system and the field devices. By using an edge device to collect information, the user doesn't need to touch the control system. The edge uses one-way communication: nothing is written into the network, and data is acquired with the highest possible security.

Edge device requirements

Edge devices must meet certain requirements in order to perform in different situations. These may include storage, network, latency, and more.

Low latency

Sensor data is collected in near real time by an edge server. For services like image recognition and visual monitoring, edge servers are located in very close proximity to the device, meeting low-latency requirements. Edge deployments need to ensure that these services are not lost through poor development practice or inadequate processing resources at the edge. Maintaining data quality and security at the edge while enabling low latency is a challenge that needs to be addressed.

Network independence

IoT services should not depend on the data communication topology. The user requires the data through the most effective means possible, which in many cases will be mobile networks; but in some scenarios, Wi-Fi or local mesh networking may be the most effective mechanism for collecting data so that latency requirements can be met.


Data security

Users require data at the edge to be kept as secure as when it is stored and used elsewhere. These challenges need to be met despite the larger vector and scope for attacks at the edge. Data authentication and user access are as important at the edge as they are on the device or at the core. Additionally, the physical security of edge infrastructure needs to be considered, as it is likely to be housed in less secure environments than dedicated data centers.

Data Quality

Data quality at the edge is a key requirement for operating in demanding environments. To maintain data quality at the edge, applications must ensure that data is authenticated, replicated as required, and assigned to the correct classes and types of data category.

Flexibility in future enhancements

Additional sensors can be added and managed at the edge as requirements change. Sensors such as accelerometers, cameras, and GPS can be added to equipment, with seamless integration and control at the edge.

Local storage

Local storage is essential: in the event of a lost internet connection or a cloud outage, the edge device stores data until the connection is re-established, so no process information is lost. Local data storage is optional, and not all edge devices offer it; it depends on the application and the service to be implemented on the plant.
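
A minimal store-and-forward sketch of that behavior, buffering readings in a local SQLite file while offline and draining the buffer once the connection returns:

```python
import json, sqlite3, time

db = sqlite3.connect("buffer.db")
db.execute("CREATE TABLE IF NOT EXISTS outbox (ts REAL, payload TEXT)")

def record(reading, connected, send):
    """Send immediately when online; otherwise buffer locally on disk."""
    if connected:
        # Drain anything buffered while offline, oldest first
        for ts, payload in db.execute("SELECT ts, payload FROM outbox ORDER BY ts"):
            send(payload)
        db.execute("DELETE FROM outbox")
        send(json.dumps(reading))
    else:
        db.execute("INSERT INTO outbox VALUES (?, ?)",
                   (time.time(), json.dumps(reading)))
    db.commit()
```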

Originally posted here

Read more…

Impact of IoT in Inventory

The Internet of Things (IoT) has revolutionized many industries, including inventory management. IoT is a concept where devices are interconnected via the internet. It was expected that by 2020 there would be 26 billion connected devices worldwide. These connections matter because they allow data sharing, which can then drive actions that make life and business more efficient. Since inventory is a significant portion of a company's assets, inventory data is vital for the accounting department's asset management and the company's annual report.

In inventory solutions based on IoT and RFID, each inventory item receives an RFID tag. Each tag has a unique identification number (ID) containing information about the inventory item, e.g. a model, a batch number, etc. These tags are scanned by an RFID reader. Upon scanning, the reader extracts the IDs and transmits them to the cloud for processing. Along with the tag's ID, the cloud receives the location and time of the reading. This data is used to update the status of inventory items, allowing users to monitor inventory from anywhere, in real time.

Industrial IoT

The role of IoT in inventory management is to receive data and turn it into meaningful insights about inventory items' location and status, and to give users a corresponding output. For example, based on the data and the inventory management solution architecture, we can forecast the number of raw materials needed for the upcoming production cycle. The system can also send an alert if any individual inventory item is lost.

Moreover, IoT based inventory management solutions can be integrated with other systems, i.e. ERP and share data with other departments.

RFID in Industrial IoT

An RFID system consists of three main components: a tag, an antenna, and a reader.

Tags: An RFID tag carries information about a specific object. It can be attached to any surface, including raw materials, finished goods, packages, etc.

RFID antennas: An RFID antenna transmits and receives the signals that supply power and data for the tags' operation.

RFID readers: An RFID reader uses radio signals to read from and write to the tags. The reader receives the data stored in the tag and transmits it to the cloud.
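
Putting the three components together, here is a minimal sketch of what happens when a reader scans a tag: the ID is packaged with the reading's location and time and transmitted to the cloud for processing. The endpoint, location and tag ID are illustrative assumptions:

```python
import json, time, urllib.request

CLOUD_ENDPOINT = "https://inventory.example.com/events"   # hypothetical
READER_LOCATION = "warehouse-a/dock-3"                    # hypothetical

def on_tag_read(tag_id):
    """Package a tag read with location and time, then send it for processing."""
    event = {"tag_id": tag_id, "location": READER_LOCATION, "ts": time.time()}
    req = urllib.request.Request(
        CLOUD_ENDPOINT,
        data=json.dumps(event).encode(),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)

on_tag_read("E200-3412-DC03-0112")    # example EPC-style tag ID
```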

Benefits of IoT in inventory management

The benefits of IoT in the supply chain are among the most exciting we can observe: IoT in the supply chain creates unparalleled transparency that increases efficiency.

Inventory tracking

The major benefit of IoT in inventory management is asset tracking: instead of scanning barcodes to record data, items carry RFID tags which can be registered wirelessly. It is possible to accurately obtain data and track items from any point in the supply chain.

With RFID and IoT, managers don’t have to spend time on manual tracking and reporting on spreadsheets. Each item is tracked and the data about it is recorded automatically. Automated asset tracking and reporting save time and reduce the probability of human error.

Inventory optimization

With real-time data about the quantity and location of inventory, manufacturers can reduce the amount of inventory on hand while still meeting the needs of customers at the end of the supply chain.

Data about the amount of available inventory, combined with machine learning, can forecast the required inventory, allowing manufacturers to reduce lead time.
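
A simple moving average can stand in for the machine-learning model to show the idea; the window and safety factor are assumptions:

```python
def forecast_required_inventory(daily_usage, lead_time_days, safety_factor=1.2):
    """Forecast the units needed to cover the supplier lead time."""
    recent = daily_usage[-30:]                 # last 30 days of consumption
    avg_daily = sum(recent) / len(recent)
    return avg_daily * lead_time_days * safety_factor

print(forecast_required_inventory([120, 135, 128, 140, 122], lead_time_days=7))
```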

Remote tracking

Remote product tracking makes it easy to keep an eye on production and business. Knowing production and transit times allows you to better tweak orders to suit lead times and respond to fluctuating demand. It shows which suppliers are meeting production and shipping criteria and which need monitoring to achieve the required outcome.

It gives visibility into the flow of raw materials, work-in-progress and finished goods by providing updates about the status and location of items, so that inventory managers can see when an individual item enters or leaves a specific location.

Bottlenecks in the operations

With real-time data about location and quantity, manufacturers can reveal bottlenecks in the process and pinpoint machines with lower utilization rates. For instance, if part of the inventory tends to pile up in front of a machine, a manufacturer can assume that the machine is underutilized and needs to be seen to.

The Outcomes

The data collected by an IoT-based inventory management system is more accurate and up-to-date. By reducing time delays, the manufacturing process can enhance accuracy and reduce wastage. An IoT-based inventory management solution offers complete visibility of inventory by providing real-time information fetched by RFID tags. It helps track the exact location of raw materials, work-in-progress and finished goods. As a result, manufacturers can balance the amount of on-hand inventory, increase machine utilization, reduce lead time, and thus avoid the costs of less effective methods. This is all about optimizing inventory and ensuring anything ordered can be sold through whatever channel necessary.

Originally posted here

Read more…


 

CLICK HERE TO DOWNLOAD

This complete guide is a 212-page eBook and a must-read for business leaders, product managers and engineers who want to implement, scale and optimize their business with IoT communications.

Whether you want to attempt initial entry into the IoT-sphere, or expand existing deployments, this book can help with your goals, providing deep understanding into all aspects of IoT.

CLICK HERE TO DOWNLOAD

Read more…

 


When I work on a development project, I've become a big fan of using development boards that have the Arduino headers on them. The vast number of shields that easily connect to these headers is phenomenal. The one problem I've always had, though, is that there is always a need to use a breadboard to test a circuit or integrate a sensor that just isn't available in an Arduino header format. The result is a wiring mess that can lead to loose or missing connections.

I was recently talking with Max Maxfield and he pointed me to a really cool adapter board designed to remove these wiring jumpers to a breadboard. Max wrote about this board here but I’m so excited about this that I thought I’d add my two cents as well.

The BreadShield, which can be purchased at https://www.crowdsupply.com/loser/breadshield, adapts the Arduino headers to a linear set of header pins designed to be plugged into a breadboard. You can see in the image below that this completely removes all the extra jumpers that one would normally require which has the potential to remove quite a few jumper wires.


When I heard about these, I purchased three assembled units for about $28, which saves me the time of assembling the adapters myself. DIY assembly runs about $15 for a set of three boards. Either way, a great price to remove a bunch of wires from the workbench.

Now I'm still waiting for mine to arrive, but from the image you can see that the one challenge to using these adapters might be adapting the height of your breadboard to your hardware stack. While this could be an issue, I keep various-length spacers around the office so that I can adapt board heights, and undoubtedly there will be a length that ensures these line up properly.

You can view the original post here

Read more…

In-Circuit Emulators

Does anyone remember in-circuit emulators (ICEs)?

Around 1975 Intel came out with the 8080 microprocessor. This was a big step up from the 8008, for the 8080 had a 64k address space, a reasonable ISA, and an honest stack pointer (the 8008 had a hardware stack a mere 7 levels deep). They soon released the MDS 800, a complete computer based on the 8080, with twin 8" floppy drives. An optional ICE was available; this was, as I recall, a two-board set that was inserted in the MDS. A ribbon cable from those boards went to a small pod that could be plugged into the 8080 CPU socket of a system an engineer was developing.

The idea was that the MDS could act as the device under test's (DUT) CPU. It was rather like today's JTAG debuggers in that one could run code on the DUT, set breakpoints, collect trace data, and generally debug the hardware and software. For there was no JTAG then.

We had been developing microprocessor-based products using the 8008, but quickly transitioned to the 8080 for the increased computational power and address space. I begged my boss for the money for an MDS, which was $20k (about $100k in today's dollars), and to my surprise he let us order one. Despite slow floppies that stored only 80 KB each this tool greatly accelerated our work.

Before long ICEs were the standard platform for embedded work. Remember: this was before PCs so there were no standard desktop computers. The ICE was the computer, the IDE (such as it was) and the debugger.

In the mid-80s I was consulting and designed a, uh, "data gathering" system for our friends in Langley, VA, using multiple NSC-800 CPUs. There were few tools available for this part so I created a custom ICE that let me debug the code. Then a light bulb went on: why not sell the thing? There was practically no market for NSC-800 tools so I came up with versions for the Z80 and 8085 and slapped a $695 label on it. Most ICEs at the time cost many thousands so sales spiked.

Back then we still drew schematics on large D-size (17" x 22") vellum with a pencil. I laid out the PCBs on mylar with black tape for the tracks, as was the norm at the time.

This ICE is perhaps the design I'm most proud of in my career. It was only 17 ICs but was the epitome of an embedded system. Software replaced the usual gobs of hardware. On a breakpoint, for instance, the hardware switched from using the DUT stack to a stack on the emulator, but since the user's stack pointer could point anywhere, and the RAM in the ICE was only a few KB, the hardware masked off the upper address bits and lots of convoluted code reconstructed the user environment.

At the time ICEs advertised their breakpoints; most supported no more than a few as comparators watched the address bus for the breakpoint. My ICE used a 64k by one bit memory that mirrored the user bus. Need a breakpoint at, say, address 0x1234? The emulator set that bit in the memory true. Thus, the thing had 65K breakpoints. One of my dumbest mistakes was to not patent that, as all ICE vendors eventually copied the approach.
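
The scheme is easy to picture in a few lines of Python (illustrative only; the original was pure hardware): one bit per address, so checking for a breakpoint is a single memory lookup.

```python
class BreakpointMap:
    """One bit per address: 64K x 1 bit = 8 KB mirrors the whole 64 KB bus."""
    def __init__(self):
        self.bits = bytearray(65536 // 8)

    def set(self, addr):
        self.bits[addr >> 3] |= 1 << (addr & 7)

    def hit(self, addr):
        # In the hardware this was a single memory lookup on every bus cycle
        return bool(self.bits[addr >> 3] & (1 << (addr & 7)))

bp = BreakpointMap()
bp.set(0x1234)
assert bp.hit(0x1234) and not bp.hit(0x1235)
```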

The trouble with tools is support. An ICE replaces the DUT CPU, and interfaces with all sorts of unknown target hardware. Though the low clock rates of the Z80 meant we initially had few problems, as we expanded the product line support consumed more and more time. Eventually I learned it was equally easy to sell a six-thousand-dollar product as a six-hundred-dollar version, so those simple first emulators were replaced by much more complex many-hundred chip versions with vast numbers of features.

But the market was changing. By the mid-90s SMT CPUs were common. These were challenging to connect to. Clock rate soared making every connection a Maxwell Law nightmare. I sold the business in 1997 and went on to other endeavors. Eventually the ICE market disappeared.

One regret from all those years is that I didn't save any of the emulator's firmware or schematics. In this business everything is ephemeral. We should make an effort to preserve some of that history.

You can view the original post on TEM here

Read more…

Industrial Prototyping for IoT


ADLINK is a global leader in edge computing driving data-to-decision applications across industries. The company recently introduced I-Pi SMARC for Industrial IoT prototyping.

-       ADLINK I-Pi SMARC consists of a simple carrier board paired with a SMARC computer-on-module.

-       SMARC modules are available from the entry-level Rockchip PX30 to the top-of-the-line Intel Apollo Lake.

-       SMARC modules are specifically designed for typical industrial embedded applications that require long life, high MTBF and strict revision control.

-       Use popular off-the-shelf sensors and create prototypes or proofs of concept on short notice.

Additional information can be found here

 

Read more…