


A fingerprint for the Internet of Things

 

Image Credit: Pixabay/CC0 Public Domain

by Tom Jeltes, Eindhoven University of Technology

The Internet of Things (IoT) consists of billions of sensors and other devices connected to each other via the internet, all of which need to be protected against hackers with malicious intent. A low-cost and energy-efficient solution for securing IoT devices uses the unique characteristics of their built-in memory chips. Ph.D. candidate Lieneke Kusters investigated how to make optimal use of the chip's digital fingerprint to generate a security key.

The higher the number of devices connected to each other via the Internet of Things, the greater the risk that malicious hackers might gain access to important information, or even take over entire systems. Quite apart from all kinds of privacy issues, it's not hard to imagine that someone who, for example, has control over temperature sensors in a chemical or nuclear plant could cause serious damage.

 

To prevent problems like these from occurring, each IoT device needs to be able, as it were, to show an identity document—"authentication," in professional terms. Normally speaking, this is done with a kind of password, which is sent in encrypted form to the person who is communicating with the device. The security key needed for that has to be stored in the IoT device one way or another, Lieneke Kusters explains. "But these are often small and cheap devices that aren't supposed to use much energy. To safely store a key in these devices, you need extra hardware with a constant power supply. That's not very practical."

Digital fingerprint

There is a different way: namely, by deriving the security key from a unique physical characteristic of the memory chip (Static Random-Access Memory, or SRAM) that can be found in practically every IoT device. Depending on random circumstances during the chip's manufacturing process, each memory location has a random default value of 0 or 1.

"That binary code which you can read out when activating the chip, constitutes a kind of digital fingerprint of the device," says Kusters, who gained her doctorate at the Information and Communication Theory Laboratory at the TU/e department of Electrical Engineering. This fingerprint is known as a Physical Unclonable Function (PUF). "The Eindhoven-based company Intrinsic ID sells digital security based on SRAM-PUFs. I collaborated with them for my doctoral research, during which I focused on how to generate, in a reliable way, a key from that digital fingerprint that is as long as possible. The longer, the safer."

The major advantage of security keys based on SRAM-PUFs is that the key exists only at the moment when authentication is required. "The device restarts itself to read out the SRAM-PUF and in doing so creates the key, which subsequently gets erased immediately after use. That makes it all but impossible for an attacker to steal the key."

Noise and reliability

But that's not the entire story, because some bits of the SRAM do not always have the same value during activation, Kusters explains. Ten to fifteen percent of the bits turn out not to be deterministic, which makes the digital fingerprint a bit fuzzy. How do you use that fuzzy fingerprint to make a key of the highest possible complexity that nevertheless still fits the receiving lock practically every time?

"What you want to prevent is that the generated key won't be recognized by the receiving party as a consequence of the 'noise' in the SRAM-PUF," Kusters explains. "It's alright if that happens one in a million times perhaps, preferably less often." The probability of error is smaller with a shorter key, but such a key is also easier to guess for people with bad intentions. "I've searched for the longest reliable key, given a certain amount of noise in the measurement. It helps if you store extra information about the SRAM-PUF, but that must not be of use to a potential attacker. My thesis is an analysis of how you can reach the optimal result in different situations with that extra information."

Originally posted here.


 
Read more…

Can AI Replace Firmware?

Scott Rosenthal and I go back about a thousand years; we've worked together, helped midwife the embedded field into being, had some amazing sailing adventures, and recently took a jaunt to the Azores just for the heck of it. Our sons are both big-data people; their physics PhDs were perfect entrées into that field, and both now work in the field of artificial intelligence.

At lunch recently we were talking about embedded systems and AI, and Scott posed a thought that has been rattling around in my head since. Could AI replace firmware?

Firmware is a huge problem for our industry. It's hideously expensive. Only highly-skilled people can create it, and there are too few of us.

What if an AI engine of some sort could be dumped into a microcontroller and the "software" then created by training that AI? If that were possible - and that's a big "if" - then it might be possible to achieve what was hoped for when COBOL was invented: programmers would no longer be needed as domain experts could do the work. That didn't pan out for COBOL; the industry learned that accountants couldn't code. Though the language was much more friendly than the assembly it replaced, it still required serious development skills.

But with AI, could a domain expert train an inference engine?

Consider a robot: a "home economics" major could create scenarios of stacking dishes from a dishwasher. Maybe these would be in the form of videos, which were then fed to the AI engine as it tuned the weighting coefficients to achieve what the home ec expert deems worthy goals.

My first objection to this idea was that these sorts of systems have physical constraints. With firmware I'd write code to sample limit switches so the motors would turn off at an end-of-motion extreme. During training, an AI-based system would try to drive the motors into all kinds of crazy positions, banging destructively into stops. But think how a child learns: a parent encourages experimentation but prevents the youngster from self-harm. Maybe that's the role of the future developer training an AI. Or perhaps the training will be done on a simulator of some sort where nothing can go horribly wrong.

Taking this further, a domain expert could define the desired inputs and outputs, and then a poorly-paid person could do the actual training. CEOs will love that. With that model a strange parallel emerges to computation a century ago: before the computer age, "computers" were people doing simple math to create tables of logs, trig, ballistics, etc. A roomful of them labored at a problem. They weren't particularly skilled, didn't make much, but did the rote work under the direction of one master. Maybe AI trainers will be somewhat like that.

Like we outsource clothing manufacturing to Bangladesh, I could see training, basically grunt work, being sent overseas as well.

I'm not wild about this idea as it means we'd have an IoT of idiots: billions of AI-powered machines where no one really knows how they work. They've been well-trained but what happens when there's a corner case?

And most of the AI literature I read suggests that inference successes of 97% or so are the norm. That might be fine for classifying faces, but a 3% failure rate of a safety-critical system is a disaster. And the same rate for less-critical systems like factory controllers would also be completely unacceptable.

But the idea is intriguing.

Original post can be viewed here

Feel free to email me with comments.

Back to Jack's blog index page.

Read more…

Theoretical Embedded Linux requirements

Hardware

SoC

A System on Chip (SoC) is essentially an integrated circuit that takes a single platform and integrates an entire computer system onto it. It combines the power of the CPU with the other components it needs to perform and execute its functions. It is in charge of using the other hardware and running your software. The main advantages of an SoC include lower latency and power savings.

It is made of various building blocks:

  • Core + Caches + MMU – An SoC has a processor at its core which defines its functions; normally, an SoC has multiple processor cores. The "real" application processor, e.g. an ARM Cortex-A9, is the main thing kept in mind while choosing an SoC. It may be assisted by, e.g., a SIMD co-processor like NEON.
  • Internal RAM – IRAM is composed of very high-speed SRAM located alongside the CPU. It acts similarly to a CPU cache and is generally very small. It is used in the first phase of the boot sequence.
  • Peripherals – These can be a simple ADC, a DSP, or a Graphical Processing Unit connected to the core via some bus. A low-power/real-time co-processor helps the main core with real-time tasks or handles low-power states. Examples of such IP cores are USB, PCI-E, SGX, etc.

External RAM

An SoC uses RAM to store temporary data during and after bootstrap. It is the memory an embedded system uses during regular operation.

Non-Volatile Memory

In an embedded system or single-board computer, this is typically the SD card. In other cases, it can be NAND, NOR, or SPI data flash memory. It is where the SoC reads and stores all the software components needed for the system to work.

External Peripherals

An SoC must have external interfaces for standard communication protocols such as USB, Ethernet, and HDMI. It also includes wireless protocols such as Wi-Fi and Bluetooth.

Software


First of all, we introduce the boot chain, which is the series of actions that happen when an SoC is powered up.

Boot ROM: This is a piece of code stored in ROM which is executed by the booting core when it is powered on. This code contains instructions for configuring the SoC to allow it to execute applications. The configuration performed by the Boot ROM includes initialization of the core's registers and stack pointer, enabling of caches and line buffers, programming of interrupt service routines, and clock configuration.

Boot ROM also implements a Boot Assist Module (BAM) for downloading an application image from external memories using interfaces like Ethernet, SD/MMC, USB, CAN, UART, etc.

1st stage bootloader

The first-stage bootloader performs the following:

  • Setup the memory segments and stack used by the bootloader code
  • Reset the disk system
  • Display a string “Loading OS…”
  • Find the 2nd stage boot loader in the FAT directory
  • Read the 2nd stage boot loader image into memory at 1000:0000
  • Transfer control to the second-stage bootloader

On an embedded SoC, the Boot ROM copies this first-stage bootloader into the SoC's internal RAM, so it must be tiny enough to fit that memory, usually well under 100 kB. It initializes the external RAM and the SoC's external memory interface, as well as other peripherals that may be of interest (e.g., it disables watchdog timers). Once done, it loads and executes the next stage which, depending on the context, may be called MLO, SPL, or something else.
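To give a feel for this stage, here is a hedged C sketch of a first-stage loader's flow. Every function and constant in it is a hypothetical stand-in, stubbed here so the sketch compiles; a real SPL (U-Boot's, for example) implements these against actual hardware registers.

```c
#include <stdint.h>

/* Board-specific hooks: stubbed so the sketch compiles; a real SPL
 * implements these against the SoC's hardware registers. */
static void watchdog_disable(void) { /* stop the watchdog counter */ }
static void clocks_init(void)      { /* configure PLLs and clock gates */ }
static void dram_init(void)        { /* bring up the DDR controller */ }
static void mmc_read(uint32_t lba, void *dst, uint32_t bytes)
{
    (void)lba; (void)dst; (void)bytes;  /* read sectors from SD/eMMC */
}

#define NEXT_STAGE_LBA  256           /* assumed location on the SD card */
#define NEXT_STAGE_SIZE (512 * 1024)  /* the much larger main bootloader */
#define DRAM_LOAD_ADDR  0x80000000u   /* assumed DRAM base address */

/* Entered from the Boot ROM, executing in the SoC's internal SRAM. */
void spl_main(void)
{
    watchdog_disable();   /* keep the board from resetting underneath us */
    clocks_init();
    dram_init();          /* external RAM is unusable until this runs */

    /* Load the second-stage bootloader into DRAM and jump to it. */
    mmc_read(NEXT_STAGE_LBA, (void *)DRAM_LOAD_ADDR, NEXT_STAGE_SIZE);
    ((void (*)(void))DRAM_LOAD_ADDR)();
}
```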

2nd stage bootloader

This is the main bootloader and can be ten times bigger than the 1st stage; it completes the initialization of the relevant peripherals and performs the following:

  • Copy the boot sector to a local memory area
  • Find the kernel image in the FAT directory
  • Read the kernel image into memory at 2000:0000
  • Reset the disk system
  • Enable the A20 line
  • Setup interrupt descriptor table at 0000:0000
  • Setup the global descriptor table at 0000:0800
  • Load the descriptor tables into the CPU
  • Switch to protected mode
  • Clear the prefetch queue
  • Setup protected mode memory segments and stack for use by the kernel code
  • Transfer control to the kernel code using a long jump

Linux Kernel

The Linux kernel is the main component of a Linux OS and is the core interface between the hardware and processes, communicating between the two and managing resources as efficiently as possible. The kernel performs the following jobs:

  • Memory management: Keep track of memory, how much is used to store what, and where
  • Process management: Determine which processes can use the processor, when, and for how long
  • Device drivers: Act as an interpreter between the hardware and the processes
  • System calls and security: Receive requests for the service from processes

To put the kernel in context, a Linux machine can be viewed as having three layers:

  • The hardware: The physical machine—the base of the system, made up of memory (RAM) and the processor (CPU), as well as input/output (I/O) devices such as storage, networking, and graphics.
  • The Linux kernel: The core of the OS. It is software residing in memory that tells the CPU what to do.
  • User processes: These are the running programs that the kernel manages. User processes are what collectively make up user space. The kernel allows processes and servers to communicate with each other.

Init and rootfs – init is the first non-kernel task to be run, and it has PID 1. It initializes everything needed to use the system. In production embedded systems, it also starts the main application. In such systems, init is either BusyBox or a custom-crafted application.
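As a rough sketch of what a minimal init might look like on such a system, the C program below mounts the essential pseudo-filesystems, launches a hypothetical main application (the path is an assumption), and then reaps orphaned children forever, which is PID 1's traditional duty. BusyBox init does considerably more.

```c
#include <sys/mount.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    /* Mount the pseudo-filesystems most userspace tools expect. */
    mount("proc",  "/proc", "proc",  0, NULL);
    mount("sysfs", "/sys",  "sysfs", 0, NULL);

    if (fork() == 0) {
        /* Hypothetical main application of the embedded device. */
        execl("/usr/bin/app", "app", (char *)NULL);
        _exit(1);  /* only reached if exec fails */
    }

    for (;;)          /* PID 1 must never exit... */
        wait(NULL);   /* ...so reap zombies forever */
}
```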

View original post here

Read more…


 


This complete guide is a 212-page eBook and is a must-read for business leaders, product managers and engineers who want to implement, scale and optimize their business with IoT communications.

Whether you want to attempt initial entry into the IoT-sphere, or expand existing deployments, this book can help with your goals, providing deep understanding into all aspects of IoT.

CLICK HERE TO DOWNLOAD

Read more…

Edge Products Are Now Managed At The Cloud

Now more than ever, there are billions of edge products in the world. But without proper cloud computing, making the most of electronic devices that run on Linux or any other OS would not be possible.

And so, a question many people keep asking is: which Software-as-a-Service platform can most effectively manage edge devices through cloud computing? Well, while edge device management may not be new, the fact that the cloud computing space is not yet fully exploited means there is a lot to do in the cloud space.

Remote product management is especially necessary for the 21st century and beyond. Because of the increasing number of devices connected to the Internet of Things (IoT), a reliable SaaS platform should help with fixing software glitches from anywhere in the world. From smart homes and stereo speakers to cars and personal computers, any product that is connected to the internet needs real-time protection from hacking threats such as unlawful access to business or personal data.

Data, being the most vital asset, is constantly at risk, especially if individuals using edge products do not connect to trusted, reliable, and secure edge device management platforms.

Bridges the Gap Between Complicated Software And End Users

Cloud computing is the new frontier through which SaaS platforms help manage edge devices in real time. But something even more noteworthy is the increasingly complicated software that now runs edge devices at home and in the workplace.

Edge device management, therefore, ensures everything runs smoothly. From fixing bugs, running debugging commands to real-time software patch deployment, cloud management of edge products bridges a gap between end-users and complicated software that is becoming the norm these days.

Even more importantly, going beyond physical firewall barriers is a major necessity in the remote management of edge devices. A reliable Software-as-a-Service platform therefore ensures that data encryption for edge devices is not only hackproof but also accessible only to the right people. Moreover, the deployment of secure routers and access tools is especially critical in cloud computing when managing edge devices. And so, the developers behind successful SaaS platforms conduct regular security checks over the cloud and design and implement solutions for edge products.

Reliable IT Infrastructure Is Necessary

Software-as-a-service platforms that manage edge devices focus on having a reliable IT infrastructure and centralized systems through which they can conduct cloud computing. It is all about remotely managing edge devices with the help of an IT infrastructure that eliminates challenges such as connectivity latency.

Originally posted here

Read more…

Introducing Profiler, by Auptimizer: Select the best AI model for your target device — no deployment required.

Profiler is a simulator for profiling the performance of Machine Learning (ML) model scripts. Profiler can be used during both the training and inference stages of the development pipeline. It is particularly useful for evaluating script performance and resource requirements for models and scripts being deployed to edge devices. Profiler is part of Auptimizer. You can get Profiler from the Auptimizer GitHub page or via pip install auptimizer.

The cost of training machine learning models in the cloud has dropped dramatically over the past few years. While this drop has pushed model development to the cloud, there are still important reasons for training, adapting, and deploying models to devices. Performance and security are the big two, but cost savings are also an important consideration, as the cost of transferring and storing data, and of building models for millions of devices, tends to add up. Unsurprisingly, machine learning for edge devices, or Edge AI as it is more commonly known, continues to become mainstream even as cloud compute becomes cheaper.

Developing models for the edge opens up interesting problems for practitioners.

  1. Model selection now involves taking into consideration the resource requirements of these models.
  2. The training-testing cycle becomes longer due to having a device in the loop because the model now needs to be deployed on the device to test its performance. This problem is only magnified when there are multiple target devices.

Currently, there are three ways to shorten the model selection/deployment cycle:

  • The use of device-specific simulators that run on the development machine and preclude the need for deployment to the device. Caveat: Simulators are usually not generalizable across devices.
  • The use of profilers that are native to the target device. Caveat: They need the model to be deployed to the target device for measurement.
  • The use of measures like FLOPS or Multiply-Add (MAC) operations to give approximate measures of resource usage. Caveat: The model itself is only one (sometimes insignificant) part of the entire pipeline (which also includes data loading, augmentation, feature engineering, etc.)

In practice, if you want to pick a model that will run efficiently on your target devices but do not have access to a dedicated simulator, you have to test each model by deploying on all of the target devices.

Profiler helps alleviate these issues. Profiler allows you to simulate, on your development machine, how your training or inference script will perform on a target device. With Profiler, you can understand CPU- and memory-usage as well as run-time for your model script on the target device.

How Profiler works

Profiler encapsulates the model script, its requirements, and corresponding data into a Docker container. It uses user inputs on compute, memory, and framework constraints to build a corresponding Docker image so the script can run independently and without external dependencies. This image can then easily be scaled and ported to ease future development and deployment. As the model script is executed within the container, Profiler tracks and records various resource utilization statistics, including Average CPU Utilization, Memory Usage, Network I/O, and Block I/O. The logger also supports setting the Sample Time to control how frequently Profiler samples utilization statistics from the Docker container.
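Conceptually, that sampling loop resembles the C sketch below, which polls a container's cgroup v2 counters at a fixed sample time. This is an illustration of the idea only, not Profiler's actual implementation, and the cgroup path is an assumption that varies per system.

```c
#include <stdio.h>
#include <unistd.h>

/* Assumed cgroup v2 path of the Docker container being profiled. */
#define CGROUP "/sys/fs/cgroup/system.slice/docker-abc123.scope"
#define SAMPLE_SECONDS 1  /* the "Sample Time" */

static long read_memory_bytes(void)  /* current memory usage */
{
    long v = -1;
    FILE *f = fopen(CGROUP "/memory.current", "r");
    if (f) { fscanf(f, "%ld", &v); fclose(f); }
    return v;
}

static long read_cpu_usec(void)      /* cumulative CPU time */
{
    long v = -1;
    FILE *f = fopen(CGROUP "/cpu.stat", "r");
    if (f) { fscanf(f, "usage_usec %ld", &v); fclose(f); }
    return v;
}

int main(void)
{
    long prev = read_cpu_usec();
    for (;;) {
        sleep(SAMPLE_SECONDS);
        long now = read_cpu_usec();
        /* Utilization = CPU time consumed / wall-clock time elapsed. */
        double util = (now - prev) / (SAMPLE_SECONDS * 1e6) * 100.0;
        printf("cpu: %5.1f%%  mem: %ld bytes\n", util, read_memory_bytes());
        prev = now;
    }
}
```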

Get Profiler: Click here

How Profiler helps

Our results show that Profiler can help users build a good estimate of model runtime and memory usage for many popular image/video recognition models. We conducted over 300 experiments across a variety of models (InceptionV3, SqueezeNet, Resnet18, MobileNetV2–0.25x, -0.5x, -0.75x, -1.0x, 3D-SqueezeNet, 3D-ShuffleNetV2–0.25x, -0.5x, -1.0x, -1.5x, -2.0x, 3D-MobileNetV2–0.25x, -0.5x, -0.75x, -1.0x, -2.0x) on three different devices — LG G6 and Samsung S8 phones, and NVIDIA Jetson Nano. You can find the full set of experimental results and more information on how to conduct similar experiments on your devices here.

The addition of Profiler brings Auptimizer closer to the vision of a tool that helps machine learning scientists and engineers build models for edge devices. The hyperparameter optimization (HPO) capabilities of Auptimizer help speed up model discovery. Profiler helps with choosing the right model for deployment. It is particularly useful in the following two scenarios:

  1. Deciding between models — The ranking of the run-times and memory usages of model scripts measured using Profiler on the development machine is indicative of their ranking on the target device. For instance, if Model1 is faster than Model2 when measured using Profiler on the development machine, Model1 will be faster than Model2 on the device. This ranking is valid only when the CPUs are running at full utilization.
  2. Predicting model script performance on the device — A simple linear relationship relates the run-times and memory usage measured using Profiler on the development machine with the usage measured using a native profiling tool on the target device. In other words, if a model runs in time x when measured using Profiler, it will run approximately in time (a*x+b) on the target device, where a and b can be discovered by profiling a few models on the device with a native profiling tool (see the sketch after this list). The strength of this relationship depends on the architectural similarity between the models but, in general, models designed for the same task are architecturally similar, as they are composed of the same set of layers. This makes Profiler a useful tool for selecting the best-suited model.
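As a toy illustration of point 2, a and b can be fit with ordinary least squares over a handful of models profiled both ways. All numbers below are made up for the example.

```c
#include <stdio.h>

int main(void)
{
    /* Made-up calibration data: run-times of four models measured with
     * Profiler (x) and with a native on-device profiler (y), in seconds. */
    const double x[] = {1.2, 2.9, 4.1, 6.3};
    const double y[] = {2.0, 4.6, 6.2, 9.8};
    const int n = 4;

    double sx = 0, sy = 0, sxx = 0, sxy = 0;
    for (int i = 0; i < n; i++) {
        sx  += x[i];        sy  += y[i];
        sxx += x[i] * x[i]; sxy += x[i] * y[i];
    }

    /* Ordinary least squares for device_time ~= a * profiler_time + b. */
    double a = (n * sxy - sx * sy) / (n * sxx - sx * sx);
    double b = (sy - a * sx) / n;

    printf("a = %.3f, b = %.3f\n", a, b);
    printf("model profiled at 5.0 s -> ~%.2f s on device\n", a * 5.0 + b);
    return 0;
}
```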

Looking forward

Profiler continues to evolve. So far, we have tested its efficacy on select mobile and edge platforms running popular image and video recognition models for inference, but there is much more to explore. Profiler might have limitations for certain models or devices and can potentially result in inconsistencies between Profiler outputs and on-device measurements. Our experiment page provides more information on how to best set up your experiment using Profiler and how to interpret potential inconsistencies in results. The exact use case varies from user to user, but we believe that Profiler is relevant to anyone deploying models on devices. We hope that Profiler's estimation capability can enable leaner and faster model development for resource-constrained devices. We'd love to hear (via GitHub) if you use Profiler during deployment.

Originally posted here


Authors: Samarth Tripathi, Junyao Guo, Vera Serdiukova, Unmesh Kurup, and Mohak Shah — Advanced AI, LG Electronics USA

Read more…

Industrial IoT Revolution

Why the Nvidia Jetson Nano is responsible for the biggest industrial IoT revolution these days

 
 
 

It feels like yesterday when the Raspberry Pi Foundation released its first-in-line Single Board Computer (SBC) to the market. Back in 2012, Raspberry Pi wasn't alone in the growing SBC market; however, it was the first to make a community-based product that brought the hardware and the software ecosystem into a beautiful harmony on the internet. Before those days, embedded Linux-based SBCs and SOMs were a place for Linux kernel and embedded hardware experts, with no easy-to-use tools or ready Linux-based distros, and, most importantly, without the enormous number of questions and answers across the internet on anything related.

Today, 8 years later, the "2012 revolution" happens again

This time, it took a year to understand the impact of the new 'kid' on the market, but now there are a few indications that definitely mark the route to a revolution.

The Raspberry Pi was the first to make embedded Linux easy while keeping the advantages of reliability and flexibility in fitting different kinds of industry applications. It's almost impossible to ignore the variety of industries where the Raspberry Pi sits at the heart of products to save time-to-market and costs. The power of this magical board leans on the software side: the Raspberry Pi Foundation and its community worked hard across the years to improve and share their knowledge, and, at the same time, without notice or targeting, they brought Pi development to an extremely "serverless" level.

The Nvidia Jetson Nano

Let's stop talking about the Raspberry Pi and focus on today's industry needs to understand better why the new kid in the town is here to change the market of IoT and smart products forever.

 
 
Why do we need to thank Nvidia and the Jetson Nano?
 

The market is moving forward. AI, robotics, amazing-looking screen GUIs, image processing, and long data calculations have all become the new standard for smart edge products.

If a few years ago all you wanted was to connect your product to the cloud and receive anything valuable, today product managers and developers compete in a much tougher industry era. This time, the Raspberry Pi can't be the technology hero again; its resources are limited, and the ecosystem has started to look toward a better-fitting solution.

 
 
 

NVIDIA Jetson devices in Upswift.io device management platform

The Jetson Nano is the first SBC to understand the necessary combination that will drive new products to use it. It's the first SBC designed with powerful industrial use cases in mind, while not forgetting the prototyping stage and the harmony that gave the Raspberry Pi its success. It's the first solution to bring the whole package for developers and hardware engineers with a "SaaS" feel: the OS is already perfect thanks to Ubuntu, there are plenty of software instructions from Nvidia and open-source ready-to-use tools custom-made for the Jetson family, and hardware engineers are free to go with the System on Module (SOM), which connects to a carrier board that includes all the necessary inputs and outputs to make the development stage even faster.

The Jetson Nano combination basically provides the first complete infrastructure for producing a "2020" product with complex software while working on a minimal budget and time-to-market. The Jetson Nano enables developers and product managers to imagine further without compromises, bringing tough software missions to the edge easily.

Originally posted here

Read more…

by Dan Carroll, Carnegie Mellon University, Department of Civil and Environmental Engineering

Credit: Pixabay/CC0 Public Domain
 
Across the U.S., there has been some criticism of the cost and efficacy of emissions inspection and maintenance (I/M) programs administered at the state and county level. In response, Engineering and Public Policy (EPP) Ph.D. student Prithvi Acharya and his advisor, Civil and Environmental Engineering's Scott Matthews, teamed up with EPP's Paul Fischbeck. They have created a new method for identifying over-emitting vehicles using remote data transmission and machine learning that would be both less expensive and more effective than current I/M programs.
 

Most states in America require passenger vehicles to undergo periodic emissions inspections to preserve air quality by ensuring that a vehicle's exhaust emissions do not exceed standards set at the time the vehicle was manufactured. What some may not know is that the metrics through which emissions are gauged nowadays are usually measured by the car itself through on-board diagnostics (OBD) systems that process all of the vehicle's data. Effectively, these emissions tests check whether a vehicle's "check engine light" is on. While a vehicle identified as over-emitting by this system is 87 percent likely to be a true over-emitter, the system also has a 50 percent false-pass rate for over-emitters when compared to tailpipe testing of actual emissions.

With cars as smart devices increasingly becoming integrated into the Internet of Things (IoT), there's no longer any reason for state and county administrations to force drivers to come in for regular I/M checkups when all the necessary data is stored on their vehicle's OBD. In an attempt to eliminate these unnecessary costs and improve the effectiveness of I/M programs, Acharya, Matthews, and Fischbeck published their recent study in IEEE Transactions on Intelligent Transportation Systems.

Their new method entails sending data directly from the vehicle to a cloud server managed by the state or county within which the driver lives, eliminating the need for them to come in for regular inspections. Instead, the data would be run through machine learning algorithms that identify trends in the data and codes prevalent among over-emitting vehicles. This means that most drivers would never need to report to an inspection site unless their vehicle's data indicates that it's likely over-emitting, at which point they could be contacted to come in for further inspection and maintenance.

Not only has the team's work shown that a significant amount of time and cost could be saved through smarter emissions inspecting programs, but their study has also shown how these methods are more effective. Their model for identifying vehicles likely to be over-emitting was 24 percent more accurate than current OBD systems. This makes it cheaper, less demanding, and more efficient at reducing vehicle emissions.

This study could have major implications for leaders and residents within the 31 states and countless counties across the U.S. where I/M programs are currently in place. As these initiatives face criticism from proponents of both environmental deregulation and fiscal austerity, this team has presented a novel system that promises both significant reductions to cost and demonstrably improved effectiveness in reducing vehicle emissions. Their study may well redefine the testing paradigm for how vehicle emissions are regulated and reduced in America.

 
Originally posted here on Tech Xplore
 
Read more…

Summary: Know How Businesses Are Leveraging Their Business Power with the Help of the Internet of Things (IoT). They Are Paying Attention to It to Enhance Their Business Processes and Ensure Long-Term Success in This Fiercely Competitive Market.

In this IT era, the latest technology is making its way into our day-to-day life. It has influenced our life to a great extent and has also affected the way we work. Now we use different gadgets and modern equipment that ease our work and help us complete it more smoothly and accurately than ever before. Technologies like Machine Learning, Big Data Analytics, and Artificial Intelligence have slowly established their command across different industries. Apart from all these technologies, one technology that has gained significant importance is the Internet of Things (IoT); it has affected different areas of various sectors to a great extent.

The use of IoT-enabled devices has enhanced the way people live their lives. According to Gartner's prediction, more than 25 billion IoT devices will be present in the market by 2021. The use of IoT will introduce new innovations for businesses, customers, and society.

The potential growth in the usage of IoT has resulted in improvements in various sectors like healthcare, education, entertainment, and many more. Now it has become possible to track assets in real time; monitoring the ups and downs of the human body, home automation, environmental monitoring, etc. have become easy, and all thanks go to the Internet of Things (IoT).

Internet of Things: Know Why Businesses Need It for Their Business?

As per a report by Cisco, more than 500 billion devices will be connected to the Internet by 2030. Each connected device will include sensors that collect data by interacting with the environment and will communicate over a network very accurately.

And all this will become possible through the Internet of Things (IoT), as it's the network of all these connected devices. These smart devices will generate data that IoT applications use to accomplish various tasks, like delivering insights, analyzing, and aggregating, which helps them respond accurately to users' actions.

The Internet of Things is one such technology that is continuously improving with each passing second. As this technology connects multiple things with each other, it becomes possible for businesses to get real-time access to all the information on the network, which has proved beneficial for improving their business processes. It provides multiple benefits to the businesses that adopt it; go through the list below of benefits that IoT offers your business.

1. Offers a Large Amount of Data

Almost all businesses these days have realized the power of the Internet of Things and have started opting for it. As more and more businesses step ahead to adopt this technology, it is predicted that the total market value of IoT will grow rapidly and reach $3 trillion by 2026.

IoT-enabled devices are able to collect huge amounts of data from the network with the help of added sensors. This information can be beneficial for businesses, as they can easily learn what their customers really want from them, how they can fulfill their demands in the best possible way, and much more.

2. Better Customer Service 

Every business these days boils down to satisfying its customers and offering them the best on demand. The combination of IoT-based devices with an app like Spotify can provide quick access to customers' behavior. It helps businesses analyze all the data, which includes customers' preferences, the time they spend on making a particular purchase, the language they prefer, and much more.

All this information can help businesses to enhance their customer support and come up with an advanced solution that satisfies all their needs. Using this information you can diversify your business according to new market trends and grab all the opportunities that come your way. 

3. Ability to Monitor and Track Things

IoT-enabled devices allow businesses to track and monitor every activity of their employees. They can easily see what their employees are working on, how many tasks they have completed, what progress they have made, and much more. They can even share information with their employees in real time about the current project on which they are working, and can also get information from them whenever they want.

4. Save Money and Resource

There is no doubt that machine-to-machine communication has grown dramatically in recent years. It is estimated that the total number of M2M connections will grow rapidly from 5 billion in 2014 to 27 billion in 2024.

Machines have taken the place of humans in much of the business sector, saving businesses a huge amount of money and resources that they used to spend on human labor. Nowadays, work like answering customers' queries, managing accounts, and keeping other business records is performed by applications and software developed using the latest technologies, like the Internet of Things.

5. Automation 

IoT helps businesses find the best way to make their business processes faster and better. It can show them which areas can be automated so that they can reduce employees' workload and save a huge amount of time and resources. If, as a business entrepreneur, you feel that your business needs to be automated, then IoT can analyze every area of your business and show you which parts can be automated and don't need human interaction.

6. Helps to offer Personalized Experiences

As stated above, businesses can get all the information related to their ideal customers with the help of IoT-enabled devices. They can learn their purchase preferences, likes, dislikes, and much more, and can try to provide a personalized experience.

As per recent Epsilon research, 80% of consumers are more likely to make a purchase from a particular brand if it offers personalized experiences. For example, businesses can develop accurate bills keeping the analyzed IoT data in mind and can provide various discounts and offers to customers, as more than 74% of customers expect automatic crediting of coupons and loyalty points.

Wonders of the Internet of Things Have a Long Way to Go!

There are certain areas still untapped by businesses, as they are unable to implement IoT technology in every aspect of their business environment. Some businesses have not yet opted for this modern technology, due to which they are missing various opportunities that could lead to their success. There are various ways in which IoT works wonders for every business sector. As the technology continually evolves thanks to the research and efforts of brilliant minds, it is certain that IoT will have much more to offer businesses in the near future.

When businesses implement the Internet of Things, they will experience enhancements in their employees' productivity, speed, and efficiency, which will directly affect business profit. Hence, work on your business niche and find out whether you can implement IoT in your business environment or not. It's the demand of the times to stand out from others, and you can do it using IoT; implement this technology, at least in a basic way, for your business if possible.

Read more…

Embedded Linux or RTOS: For IoT

by Tirichlabs

Embedded Linux utilizes the Linux kernel for an embedded device, but it is quite different from the standard Linux OS. Its application to embedded systems is motivated by the availability of device support, file systems, network connectivity, and UI support. It is a customized version of Linux for embedded systems, consequently having a much smaller size and minimal features, and it requires less processing power. Based on the embedded system's requirements, the Linux kernel is modified and optimized. Such an embedded Linux can only run device-specific, purpose-built applications.

The Real-Time Operating System (RTOS), with its minimal code, is used for applications where minimal and fixed processing time is required. An RTOS is a time-sharing system based on clock interrupts that implements priority sequences to execute processes. When a high-priority interrupt is generated by the system, the running low-priority processes are stopped and the interrupt is served. The real-time operating system requires less operational memory and synchronizes processes in such a way that they can communicate with each other, so resources can be used efficiently without wasting time.

 

COMPARISON

Size

The major difference between Embedded Linux and an RTOS is in their size. An RTOS running on an AVR requires approximately 4.4 kilobytes of ROM. Embedded Linux, on the other hand, is relatively large. The kernel can be stripped of components that are not required, and even then, the footprint is generally measured in megabytes.

Embedded Linux's RAM requirement is on the order of a few megabytes. In practical applications it requires more than that, because other tasks run under the Linux kernel. An RTOS has much smaller memory requirements than Linux: a very simple setup, running two tasks, a scheduler, a queue for communication, and a semaphore on an 8-bit architecture would use in the vicinity of 200 bytes.

Scheduler

The scheduler in a real-time system is important to ensure that tasks complete in a fixed time. Unlike a regular scheduler for a general-purpose system, its main task is not to ensure 'fair' distribution of CPU time. A common technique is simply to let the task with the highest priority run before all tasks of lower priority. This works fine for a soft real-time system, but for hard real-time the system must provide a better guarantee.

RTOS scheduler

An RTOS uses a highest-priority-first scheduler, meaning the task with the highest priority is always running. This is achieved with a preemptive scheduler that, at a tick interrupt, decides whether the currently running task is allowed to continue executing or needs to be switched for another task based on priority. Tasks having the same priority are given a "fair" share of processing time. This scheduler allows us to achieve soft real-time, but it is difficult to achieve hard real-time since there is no deadline-based scheduling.

For this purpose, there is a choice between a preemptive and a cooperative scheduler. In preemptive mode, a task can be preempted, unlike in cooperative mode, where it is up to each task to give away the CPU often enough that higher-priority tasks get to run. A typical RTOS real-time kernel achieves scheduler latencies from zero to a few microseconds.
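A bare-bones version of that tick-driven, highest-priority-first decision might look like the C sketch below. A production kernel such as FreeRTOS adds per-priority ready lists, round-robin among equal priorities, and the actual context switch in assembly; this is only a conceptual illustration.

```c
#include <stdint.h>

#define MAX_TASKS 8

typedef struct {
    uint8_t priority;  /* higher number = more urgent */
    uint8_t ready;     /* nonzero if runnable */
} task_t;

static task_t tasks[MAX_TASKS];

/* Called from the tick interrupt: choose the highest-priority ready task.
 * If it differs from the currently running one, the kernel would preempt
 * and context-switch. Returns -1 if nothing is ready (idle). */
int schedule_next(void)
{
    int best = -1;
    for (int i = 0; i < MAX_TASKS; i++) {
        if (tasks[i].ready &&
            (best < 0 || tasks[i].priority > tasks[best].priority))
            best = i;
    }
    return best;
}
```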

Embedded Linux scheduler

In Embedded Linux, there are more scheduler choices. The modularity of Embedded Linux allows different parts of the system to be changed; a simple insmod makes it possible to swap the scheduler. There are a couple of schedulers designed for different things.

First of all, it has a basic highest-priority-first scheduler that uses the priority of a task to schedule it first. Real-time Linux also implements Earliest Deadline First (EDF), which uses the periodic nature of real-time tasks. Assuming that the deadline for every task is when it is next due to run, one can implement a fast EDF. In theory it is optimal, since it can schedule tasks up to 100% CPU usage; in practice it falls short of that due to overheads. As its idle process, real-time Linux runs the usual Linux kernel: when there are no rt-tasks that can run, Linux gets to run. This can lead to starvation of Linux, effectively disabling it, but since the purpose of a real-time system is to run the real-time tasks, this is not a big problem. Typical latencies of a real-time Linux scheduler are on the order of tens to hundreds of microseconds.
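Under that assumption, that a periodic task's deadline is its next release time, the EDF pick-next step is equally small. This is a conceptual sketch, not the actual kernel code.

```c
#include <stdint.h>

#define MAX_RT_TASKS 8

typedef struct {
    uint32_t next_release;  /* absolute tick of next period = deadline */
    uint8_t  ready;
} rt_task_t;

static rt_task_t rt[MAX_RT_TASKS];

/* Earliest Deadline First: run the ready task whose deadline comes soonest.
 * The subtraction-and-cast comparison stays correct across tick wraparound.
 * Returns -1 when no rt-task is ready, in which case the normal Linux
 * kernel runs as the idle process. */
int edf_pick(void)
{
    int best = -1;
    for (int i = 0; i < MAX_RT_TASKS; i++) {
        if (rt[i].ready &&
            (best < 0 ||
             (int32_t)(rt[i].next_release - rt[best].next_release) < 0))
            best = i;
    }
    return best;
}
```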

CPU resource

Embedded Linux requires a significant amount of CPU resources: perhaps >200 MIPS on a 32-bit processor, ideally with an MMU, plus 4 MB of ROM and 16 MB of RAM; boot may take several seconds.

An RTOS, on the other hand, runs in less than 10 kB on microcontrollers from 8-bit up, and boots in milliseconds.

IoT Implementation of OS

An RTOS is often preferred for extremely low-power applications, such as sensors that run for months on batteries. The low-power nature often precludes direct IP connectivity, so a gateway provides the Internet connectivity: it speaks the low-power protocol to the sensors and translates it to IP. Linux on the gateway is likely to already have a protocol implementation that fulfills the requirements.

The basic requirement of an IoT device is network connectivity, typically in the form of IP via a web server. An RTOS can offer IP connectivity, but the stack risks being buggy unless you examine it: for example, RTOSes usually do not isolate the IP stack's user from the IP stack itself. Network connectivity means potentially dealing with low-speed or congested links, which can lead to obscure and hard-to-debug buffer-handling issues when the stack is intermingled with other code. Embedded Linux, on the other hand, leverages hardware separation and a widely used IP stack that has probably been exposed to the corner cases.

Security is essential in IoT devices, which are often exposed to the open Internet. A system compromised via the Internet interface is open to intruders, and information or control of the device can be hijacked. Developers can leverage native embedded Linux features—multiuser support, SELinux, and containers—to contain and limit the damage.

Linux certainly is a robust and secure OS, and it has matured as an embedded operating system. Yet one of its drawbacks is its memory footprint compared to a real-time operating system: even though it can be trimmed down by removing tools and system services that are not required in embedded systems, it is still large. It simply cannot run on 8- or 16-bit MCUs, and it requires more onboard RAM for the Linux kernel. For example, Linux cannot run on ARM Cortex-M MCUs, which typically have only a few hundred kilobytes of RAM.

A common engineering solution for networked systems is to use two processors in the device. In this arrangement, an 8 or 16-bit MCU is used for the sensor or actuator, while a 32-bit processor is used for the network interface which runs an RTOS. Sales of 32-bit MCUs have exploded in the last several years, and have become the largest segment of the MCU market.

ORIGINALLY POSTED HERE ON TIRICH LABS

Read more…

Industrial Prototyping for IoT


ADLINK is a global leader in edge computing driving data-to-decision applications across industries. The company recently introduced I-Pi SMARC for Industrial IoT prototyping.

  • ADLINK I-Pi SMARC consists of a simple carrier paired with a SMARC Computer on Module.
  • SMARC modules are available from the entry-level Rockchip PX30 to the top-of-the-line Intel Apollo Lake.
  • SMARC modules are specifically designed for typical industrial embedded applications that require long life, high MTBF, and strict revision control.
  • Use popular off-the-shelf sensors and create prototypes or proofs of concept on short notice.

Additional information can be found here

 

Read more…

By: Kelly McNelis

We have faced unprecedented disruption from the many challenges of COVID-19, and PTC’s LiveWorx was no exception. The definitive digital transformation event went virtual this year, and despite the transition from physical to digital, LiveWorx delivered.

Of the many insightful virtual keynotes, one that caught everyone’s attention was ‘Digital Transformation: The Technology & Support You Need to Succeed,’ presented by PTC’s Executive Vice President (EVP) of Products, Kevin Wrenn, and PTC’s EVP and Chief Customer Officer, Eduarda Camacho.

Their keynote focused on how companies should be prioritizing the use of best-in-class technology that will meet their changing needs during times of disruption and accelerated digital transformation. Wrenn and Camacho highlighted five of our customers through interactive case studies on how they are using PTC technology to capitalize on digital transformation to thrive in an era of disruption.


Below is a summary of the five customers and their stories that were highlighted during the keynote.

1. Royal Enfield (Mass Customization)

Royal Enfield is an Indian motorcycle company that has been manufacturing motorbikes since 1901. The company has British roots, and its main customer base is located in India and Europe. Riders of Royal Enfield want their bikes to be particular to them, so the company worked to better manage the complexities of mass customization and respond to market demands.

Royal Enfield is a long time PTC customer, but they were on old versions of PTC technology. They first upgraded Creo and Windchill to the latest releases so they could leverage the new capabilities. They then moved on to transform their processes for platform and variant designs, introduced simulation much earlier by using Creo Simulation Live, and leveraged generative design by bringing AI into engineering and applying it to engine and chassis complex custom forged components. Finally, they retrained and retooled their engineering staff to fully leverage the power of new processes and technologies.

The entire Royal Enfield team now has digital capabilities that accelerate new product designs, variants, and accessories for personalization; as a result, they are able to deliver a much-shortened design cycle. Royal Enfield is continuing their digital transformation trend, and will invest in new ways to create value while leveraging augmented reality with PTC's Vuforia suite.

2. VCST (Manufacturing Efficiency, Quality, and Innovation)

VCST is part of the BMT Group and is a world-class automotive supplier of precision-machined powertrain and brake components. Their problem was the high cost of their production facility in Belgium: they either needed to improve cost efficiency in the plant or face the potential of shutting down the facility and relocating it to another region. VCST decided to implement ThingWorx so that anyone can have instant visibility into asset status and performance. VCST is also digitizing maintenance requests and the ability to inquire about spare parts, to improve overall efficiency in support of its cost-reduction goals.

Additionally, VCST has a goal of zero customer complaints; if any quality problem reaches a customer, the company can be required to do 100% inspection until the problem is solved. Moreover, as cars have become quieter with electrification, the noise from the gears has become an issue, putting pressure on VCST to innovate and reduce gear noise.

VCST has again relied on ThingWorx and Windchill to collect and share data for joint collaborative analysis to innovate and reduce gear noise. VCST also plans to use Vuforia Expert Capture and Vuforia Chalk to train maintenance workers to further improve their efficiency and cost effectiveness. The company is not done with their digital transformation, and they have plans to implement Creo and Windchill to enable end-to-end digital thread connectivity to the factory.

3. BID Group Holdings (Connected Product)

BID Group Holdings operates in the wood-processing industry and is one of the largest integrated suppliers and the North American leader in the field. The purpose of BID Group is to deliver a complete range of innovative equipment, digital technologies, turnkey installations, and aftermarket services to its customers. BID Group decided to focus on its areas of expertise and rely on PTC, Microsoft, and Rockwell Automation's combined capabilities and scale to deliver SaaS-type solutions to its own industry.

Leveraging this combined power, BID Group developed a digital strategy for service to improve mill efficiency and profitability. The solution, named OPER8, was built on the ThingWorx platform. This allowed BID Group to provide its customers an out-of-the-box solution with efficient time-to-value and low cost of ownership. BID Group is continuing to work with PTC and Rockwell Automation to develop additional solutions that will reduce downtime, adding a predictive analytics module to OPER8 using ThingWorx Analytics and LogixAI.

4. Hitachi (Service Optimization)

Hitachi operates an extensive service division that ensures its customers' data systems remain up and running. Their challenge was not only to meet their customers' uptime Service Level Agreements, but to do it without killing their cost structure. Hitachi decided to implement PTC's Servigistics Service Parts Management software to ensure the right parts are available when and where they are needed for service. With Servigistics, Hitachi was able to accomplish this while staying cost-effective and delighting its customers.

Hitachi runs on the cloud, which allows them to upgrade to current releases more often, take advantage of new functionality, and avoid unexpected costs.

PTC has driven engagement and support for Hitachi through the PTC Community, and encourages all customers to utilize this platform. The network of collaborative spaces is a gathering place for PTC customers and partners to showcase their work, inspire each other, and share ideas or best practices in order to expand the value of their PTC solutions and services.

5. COVID-19 Response 

COVID-19 has put significant strain on the world's hospitals and healthcare infrastructure, and hospitalization rates for COVID brought into question the capacity to handle cases. Many countries began considering the value field hospitals could bring in safely caring for patients and easing the admission numbers of 'regular' hospitals. The complication, however, is that field hospitals have essentially none of the isolation or air filtration capability required for treating COVID patients or protecting healthcare workers.

As a result, the US Army Corps of Engineers put out specifications to create self-contained isolation units (SCIUs), fully functioning hospital rooms that can be transported or built onsite. But the assembly needed to happen fast, and a group of companies (including PTC) led by The Innovation Machine rallied to help design and define the SCIUs.

With buy-in from numerous companies, a common platform was needed for collaboration. PTC felt compelled to react, and many PTC customers and partners joined in to help create a collaboration platform, with cloud-based Windchill as the foundation. But PTC didn't just provide software to this collaboration; it also contributed digital thread and design advice to help the group solve some of the major challenges. The design is the result of many companies coming together, with deployments across various US state governments, agencies, and FEMA.

Final Thoughts

All of the above customers approached digital transformation as a business imperative. They all had sizeable challenges that needed to be solved and took leadership positions to implement plans that leveraged digital transformation technologies combined with new processes.

PTC will continue to innovate across the digital transformation portfolio and is committed to ensuring that customer success offerings capture value faster and provide the best outcomes.

Original Post Link: https://www.ptc.com/en/product-lifecycle-report/liveworx-digital-transformation–technology-and-support-you-need-to-succeed

Author Bio: Kelly is a corporate communications specialist at PTC. Her responsibilities include drafting and approving content for PTC’s external and social media presence and supporting communications for the Chief Strategy Officer. Kelly has previous experience as a communications specialist working to create and implement materials for the Executive Vice President of the Products Organization and senior management team members.

 

Read more…

The tinyML Foundation is excited to be offering a new activity to our community: the tinyML Talks webcast series. A strong line-up of speakers making 30-minute presentations will take place twice a month on Tuesdays at 8 am Pacific time, to make sure that tinyML enthusiasts worldwide have an opportunity to watch them live. Presentations and videos will be available online the day afterwards for those who were not able to join live.

View Schedule of Upcoming Talks

If you want to re-watch all talks starting March 31 or were unable to join us live, the slides and links to our YouTube Channel of the talks are posted at our tinyML Forums. Many questions were asked during the presentations but not all could be answered in the allotted time frame. The answers to some of those can be found on the tinyML Forums as well.

Read more…

In this IoT Central Video Feature, we present Jacob Sorber's video, "How to Get Started Learning Embedded Systems." Jacob is a computer scientist, researcher, teacher, and Internet of Things enthusiast. He teaches systems and networking courses at Clemson University and leads the PERSIST research lab. His “get started” videos are valuable for those early in their practice. 

From Jacob: I've been meaning to start making more embedded systems videos — that is, computer science videos oriented to things you don't normally think of as computers (toys, robots, machines, cars, appliances). I hope this video helps you take the first step.


Read more…

Helium Expands to Europe

Helium, the company behind one of the world’s first peer-to-peer wireless networks, is announcing the introduction of Helium Tabs, its first branded IoT tracking device that runs on The People’s Network. In addition, after launching its network in 1,000 cities in North America within one year, the company is expanding to Europe to address growing market demand with Helium Hotspots shipping to the region starting July 2020. 

Since its launch in June 2019, Helium quickly grew its footprint, with Hotspots covering more than 700,000 square miles across North America. Helium is now expanding to Europe to allow for seamless use of connected devices across borders. Powered by entrepreneurs looking to own a piece of the people-powered network, Helium’s open-source blockchain technology incentivizes individuals to deploy Hotspots and earn Helium (HNT), a new cryptocurrency, for simultaneously building the network and enabling IoT devices to send data to the Internet. Connected with other nearby Hotspots, these devices act as the backbone of the network.

“We’re excited to launch Helium Tabs at a time where we’ve seen incredible growth of The People’s Network across North America,” said Amir Haleem, Helium’s CEO and co-founder. “We could not have accomplished what we have done, in such a short amount of time, without the support of our partners and our incredible community. We look forward to launching The People’s Network in Europe and eventually bringing Helium Tabs and other third-party IoT devices to consumers there.”  

Introducing Helium Tabs that Run on The People’s Network
Unlike other tracking devices, Tabs uses LongFi technology, which combines the LoRaWAN wireless protocol with the Helium blockchain and provides network coverage up to 10 miles away from a single Hotspot. This is a game-changer compared with Wi-Fi- and Bluetooth-enabled tracking devices, which only work up to 100 feet from a network source. What’s more, thanks to Helium’s unique blockchain-based rewards system, Hotspot owners are rewarded with Helium (HNT) each time a Tab connects to their network.

In addition to its growth with partners and customers, Helium has seen accelerated expansion of its Helium Patrons program, introduced in late 2019. Together, all three have helped to strengthen its network.

Patrons are entrepreneurial customers who purchase 15 or more Hotspots to help blanket their cities with coverage and enable the customers who use the network. In return, they receive discounts, priority shipping, network tools, and Helium support. Currently, the program has more than 70 Patrons throughout North America and is expanding to Europe.

Key brands that use the Helium Network include: 

  • Nestlé ReadyRefresh, a beverage delivery service
  • Agulus, an agricultural tech company
  • Conserv, a collections-focused environmental monitoring platform

Helium Tabs will initially be available to existing Hotspot owners for $49. The Helium Hotspot is now available for purchase online in Europe for €450.

Read more…

IoT security testing should comprise activities like checking endpoints, authentication, encryption, firewalls, and compliance requirements. Such testing helps the IoT ecosystem function safely and prevents data breaches.

The Internet of Things, or IoT, has swept the realm of technology and become mainstream as far as automation is concerned. Its popularity is attributable to features such as machine-to-machine communication, ease of use, and the integration of various devices, enabling technologies, and protocols.

When one talks about smart cities, smart transport, smart healthcare, or smart homes, the role of IoT is paramount. According to Gartner, the number of connected things, courtesy of IoT, is projected to reach 20.8 billion by 2020. Since IoT is about connected products that communicate with each other and share a huge volume of data, it is vulnerable to security breaches. With greater digitization and a rush towards delivering smart devices to add more comfort to people’s lives, businesses may end up leaving their flanks uncovered. Cybersecurity threats, besides endangering the smooth functioning of the digital ecosystem, are putting a question mark over the implementation of the IoT ecosystem.

The future is likely to be driven by smart systems with IoT at their core. Since such systems will witness a huge exchange of data, their security needs to be ensured. And as the smooth functioning of such smart systems will hinge on the accuracy and integrity of data, enabling IoT security at every step of the way should be the norm. If statistics are to be believed, around 84% of companies adopting IoT have reported security breaches of some kind (Source: Stoodnt.com.) Cybercriminals exploit the resident vulnerabilities in such systems to commit credit card theft, phishing and spamming, distributed denial-of-service attacks, and malware distribution, among others.

How to conduct IoT security testing effectively

The security implications of a vulnerable or broken IoT system can be catastrophic for individuals, businesses, and entities. The devices and the transfer of data within them should be monitored by the implementing agency to check for a data breach. The best ways to conduct IoT security testing are as follows (a short illustrative script appears after the list):

  • Checking endpoints: As more devices or endpoints are added to expand the network, more vulnerabilities are created. Since IoT systems are built from devices with different configurations, computing and storage capabilities, and operating system types and versions, every such device should be evaluated for safety. An inventory of these devices should be maintained and tracked.
  • Authentication: Vendor-supplied default passwords should be dealt with at the outset; otherwise, hackers can exploit them to take control of the IoT ecosystem and wreak havoc. Moreover, every device in the IoT system should be authenticated before being plugged into the network. This should be made an integral part of Internet of Things testing.
  • Firewalls: The firewall present in the network should be tested for its capability to filter specific data ranges and control traffic. Traffic designed to terminate or disrupt a device should also be tested against, to ensure the device’s continued optimal performance.
  • Encryption: Since IoT devices transmit data among themselves, that data should be encrypted for safety. When testing IoT applications, the encryption approach and its details should be thoroughly checked and validated; otherwise, information such as the location of assets in the IoT system can be easily read by a hacker.
  • Compliance: Mere testing of IoT devices is not complete unless compliance with standards such as FCC and ETSI/CE is verified. These regulations and standards have been instituted to validate the performance of IoT devices against certain parameters, so any IoT testing approach should take such compliance into account.
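To make the first of these checks concrete, here is a minimal Python sketch of automated endpoint and encryption probing. It is a sketch under stated assumptions, not a standard tool: the device addresses, port list, and warning rules are illustrative placeholders, and a real audit would use dedicated scanners alongside the vendor’s documentation.

```python
# Minimal sketch of automated IoT endpoint and encryption checks.
# DEVICES and COMMON_PORTS are hypothetical examples, not a standard.
import socket
import ssl

DEVICES = ["192.168.1.20", "192.168.1.21"]        # hypothetical device inventory
COMMON_PORTS = [21, 22, 23, 80, 443, 1883, 8883]  # FTP, SSH, Telnet, HTTP(S), MQTT(S)

def open_ports(host, ports, timeout=1.0):
    """Return the subset of ports accepting TCP connections (exposed endpoints)."""
    found = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((host, port)) == 0:
                found.append(port)
    return found

def tls_certificate_valid(host, port=8883):
    """Check that a TLS endpoint presents a certificate our trust store accepts."""
    ctx = ssl.create_default_context()
    try:
        with socket.create_connection((host, port), timeout=2.0) as sock:
            with ctx.wrap_socket(sock, server_hostname=host):
                return True
    except (ssl.SSLError, OSError):
        return False

for host in DEVICES:
    exposed = open_ports(host, COMMON_PORTS)
    print(f"{host}: exposed endpoints {exposed}")
    if 23 in exposed:  # Telnet is frequently paired with vendor default passwords
        print("  WARNING: Telnet open -- check for default credentials")
    if 8883 in exposed and not tls_certificate_valid(host):
        print("  WARNING: MQTT-over-TLS endpoint failed certificate validation")
```

Run against a test network, a script like this flags unexpected open endpoints against the device inventory and TLS endpoints whose certificates fail validation, mapping directly onto the endpoint, authentication, and encryption items above.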

Why should IoT systems undergo security testing?

The smart devices forming part of the IoT system need to undergo security testing to:

  • Prevent data theft: Unsecured endpoints within the system leave a trail for hackers to strike unless IoT device testing solutions are in place. Vulnerabilities can be used to break into the system’s controlling mechanism in order to launch more malicious forms of attack.
  • Protect brand equity: With scores of companies competing for a piece of the IoT market, a security breach or malware attack can put a brand in jeopardy. With IoT penetration testing, such attacks can be pre-empted through the elimination of vulnerabilities and glitches.

Conclusion

The IoT ecosystem is projected to grow at a humongous pace and scale. Technology companies with an integrated IoT security testing approach are likely to earn a huge chunk of the pie. Executed at regular intervals, that approach should help enterprises achieve growth across domains.

Read more…

A bountiful harvest: Smart Farming

When talking about advanced technology in general, and the Internet of Things (IoT) in particular, the first things that come to mind are gleaming manufacturing production lines, industrial IoT solutions, critical infrastructure facilities, and consumer products for the home or fitness. Agriculture and farming are rarely included. Yet IoT is already having an impact within the agricultural sector, helping to improve productivity and yields.

The need

While food shortages can often be more of a food distribution problem than an absolute shortage of production per se, increases in agricultural food production are going to be essential in the years ahead. The United Nations’ World Population Prospects 2019 predicts that the global population will rise from an estimated 7.7 billion people in 2019 to around 8.5 billion in 2030, 9.7 billion in 2050, and 10.9 billion by the end of the century, increasing the demand for food. This is combined with likely increased levels of prosperity and reductions in poverty, which have consistently been shown to lead to increases in per capita food consumption as well as, importantly, changes in the foodstuffs consumed. As the UN report puts it, “continued rapid population growth presents challenges for sustainable development”.

The response

First off, it’s important to say that any predictions of a Malthusian population crunch are likely to be way off the mark. In recent history, the agricultural sector has shown itself able to substantially increase levels of production, for example through the Green Revolution of the 1950s and 1960s, which saw the use of new disease-resistant, high-yield varieties of wheat, rice, and other crops.

But to ensure that food production can keep up with demand, a range of responses will be needed. Some will be knowledge-based, others practice-based: for example, spreading knowledge of new farming techniques, notably in developing countries; increasing the use of hardier and more resistant crop varieties; and widening access to tools that enable greater productivity.

In some cases, this access to tools can mean access to farming equipment such as tractors or irrigation equipment. In others, it can include what is being called ‘smart farming’, ‘precision farming’, or ‘smart agriculture’.

Smart farming

The UN Food and Agriculture Organization summarizes smart farming as: “a farming management concept using modern technology to increase the quantity and quality of agricultural products. Farmers in the 21st century have access to GPS, soil scanning, data management, and Internet of Things technologies. By precisely measuring variations within a field and adapting the strategy accordingly, farmers can greatly increase the effectiveness of pesticides and fertilizers, and use them more selectively. Similarly, using Smart Farming techniques, farmers can better monitor the needs of individual animals and adjust their nutrition correspondingly, thereby preventing disease and enhancing herd health”.

In essence, smart farming is the deployment of advanced technology and IoT in agriculture.

The benefits to be gained from this are manifold. There are the aforementioned increases in production and the greater effectiveness of agricultural inputs, such as fertilizer. But there are also major environmental benefits to be gained through the more sustainable use of water, energy, feed, and soil. The commercial and economic benefits are also significant: an Irish Government initiative that promotes smart farming reports average cost savings of EUR 6,300 per participating farm, along with ways to reduce greenhouse gas emissions by 10%.

Using IoT and technology in agriculture

Despite the image many may have of agriculture as technologically limited, this could hardly be further from the truth. Advanced technology and IoT have been rolling out within the sector in line with developments elsewhere. One of the first studies to look at IoT in agriculture, by Beecham Research, identified several technology areas in which these could be used:

  • Sensing (or observation) technologies,
  • Software applications,
  • Communication systems,
  • Telematics and positioning technologies,
  • Data analytics,
  • Hardware and software systems.

Specific areas in which IoT and related technologies are being rolled out include:

  • Livestock monitoring,
  • Storage monitoring, for example in water tanks, fuel tanks, waste tanks,
  • Indoor farming in greenhouses and stables,
  • Forestry,
  • Arable farming,
  • Fleet management,
  • Fish farming.

There is a wide range of uses within each of these areas. For example, drones are being used for crop spraying as well as for remote monitoring of crop growth. DroneFly, a US-based drone supplier, provides a multispectral imagery drone for agricultural use that is enabled for sunlight detection; it estimates that fertilizer can be delivered approximately 40-60 times faster than through traditional methods.

Larger equipment is also being outfitted with IoT technology. John Deere, the major agricultural and horticultural equipment company, provides a range of precision agricultural equipment that enables automated guidance for harvesting equipment and data collection to assist with input placement and land stewardship, amongst others.

Some of the most important IoT solutions and tools involve observation and diagnostics. Sensing IoT solutions can be used, for example, to record and monitor condition data from crops, soil, meteorological conditions, or livestock. As with IoT solutions in other fields, this data can then be integrated and diagnosed so that automated decisions can be taken or alerts raised, reducing the workload on the farmer while improving reaction time.
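As a simple illustration of that observe-diagnose-act loop, here is a minimal Python sketch. The sensor driver and alert hook are hypothetical placeholders (a real deployment would read from actual hardware and push to a farm-management platform), and the moisture thresholds are illustrative rather than agronomic guidance.

```python
# Sketch of a sensing-and-alerting loop for irrigation, with placeholder I/O.
import random
import time

MOISTURE_MIN = 0.25  # assumed lower bound (fraction of field capacity)
MOISTURE_MAX = 0.60  # assumed upper bound; above this, hold off to save water

def read_soil_moisture():
    """Placeholder for a real sensor driver (e.g. an ADC reading)."""
    return random.uniform(0.0, 1.0)

def raise_alert(message):
    """Placeholder for an alerting hook (SMS, dashboard, farm-management API)."""
    print(f"ALERT: {message}")

def control_loop(poll_seconds=600, cycles=3):
    """Poll the sensor and raise alerts when readings leave the target band."""
    for _ in range(cycles):
        moisture = read_soil_moisture()
        if moisture < MOISTURE_MIN:
            raise_alert(f"Soil moisture {moisture:.2f} low -- start irrigation")
        elif moisture > MOISTURE_MAX:
            raise_alert(f"Soil moisture {moisture:.2f} high -- pause irrigation")
        time.sleep(poll_seconds)

control_loop(poll_seconds=1)  # short interval for demonstration only
```

The same pattern scales from a single soil probe to livestock or storage-tank monitoring: sensors feed readings into a diagnosis step, and only out-of-band conditions interrupt the farmer.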

Conclusion

Although public awareness of IoT solutions within smart agriculture is lower than for industrial IoT solutions or the consumer environment, the range of IoT tools, systems, and applications being deployed is growing rapidly and will make an important contribution to the future farming and food needs of us all.


Read more…

This blog is the final part of a series covering the insights I uncovered at the 2020 Embedded Online Conference.

In the previous blogs in this series, I discussed the opportunities we have in the embedded world to make the next-generation of small, low-power devices smarter and more capable. I also discussed the improved accessibility of embedded technologies, such as FPGAs, that are allowing more developers to participate, experiment, and drive innovation in our industry.

Today, I’d like to discuss another topic that is driving change in our industry and was heavily featured at the Embedded Online Conference – security. 

Security is still being under-prioritised in our industry. You only have to watch the first 12 minutes of Maria “Azeria” Markstedter’s ‘Defending against Hackers’ talk to see the lack of security features in widely used IoT devices today.

Security is often seen as a burden, but it doesn’t need to be. In recent years, many passionate security researchers have helped to highlight some simple steps you can take to vastly improve the overall security of your system. In fact, by clearly identifying the threats and utilizing appropriate, well-defined mitigation techniques, systems become much harder to compromise. I’d recommend watching these talks to familiarize yourself with some of the different aspects of security you need to consider:

  • Azeria is a security researcher and Arm Innovator who is passionate about educating developers on how to defend their applications against malicious attacks. In this talk, Maria focuses on shedding light on the most common exploit mitigations to consider for memory-corruption-based exploits when writing code for Arm Cortex-A processors, such as Execute Never (XN), Address Space Layout Randomisation (ASLR), and stack canaries. What’s really interesting is that it becomes clear, from listening to Azeria’s talk and from the audience comments, that there is a lot of low-hanging fruit that we, as developers, are not fully aware of. We should collectively start to see exploit mitigations as great tools to increase the security of our systems, no matter what type of code we are writing (the sketch after this list shows one way to check a compiled binary for these mitigations).
  • In the same vein as Maria’s talk, Aljoscha Lautenbach discusses some of the most common vulnerabilities and security mechanisms for the IoT, but with a focus on cryptography. He focuses on how to use block cipher modes correctly, common insecure algorithms to watch out for, and the importance of entropy and initialization vectors (IVs).
  • A different approach is taken by Colin O'Flynn in his talk, Hardware Hacking: Hands-On. I personally really appreciate the angle Colin takes, as it is something that we, as software engineers, tend to forget: the IoT and embedded devices running our code can be physically tampered with in order to extract our secrets. As Colin mentions, protecting against these attacks is usually costly, but there are many steps we can take to substantially mitigate the risk. The first step is to analyse the weaknesses of our system by performing a threat analysis, ensuring we cover all bases when architecting and implementing our code. A popular framework that addresses security is the Platform Security Architecture (PSA), which Jacob Beningo describes in detail during his talk. Colin then moves on to introduce practical tools and techniques that you can use to test the ability of your systems to resist physical attacks.
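As a small practical aside (my own sketch, not material from the talks), here is one way to check a compiled Linux binary for evidence of the mitigations Azeria discusses, assuming binutils’ readelf is on the PATH. Each check is a heuristic; dedicated tools such as checksec do this more thoroughly.

```python
# Heuristic, checksec-style inspection of an ELF binary for exploit mitigations.
# Assumes a Linux host with binutils' readelf installed; the binary path is a placeholder.
import subprocess
import sys

def readelf(args, binary):
    """Run readelf with the given arguments and return its text output."""
    return subprocess.run(["readelf", *args, binary],
                          capture_output=True, text=True).stdout

def check_mitigations(binary):
    header = readelf(["-h"], binary)           # ELF header (file type)
    symbols = readelf(["--dyn-syms"], binary)  # dynamic symbol table
    segments = readelf(["-l"], binary)         # program headers / segments
    print(f"{binary}:")
    # Position-independent executables (type DYN) let ASLR randomise the image base.
    print("  PIE (ASLR-friendly): ", "DYN" in header)
    # Compilers emit references to __stack_chk_fail when stack canaries are enabled.
    print("  Stack canaries:      ", "__stack_chk_fail" in symbols)
    # A GNU_STACK segment without the E flag means the stack is non-executable (XN/NX).
    print("  Non-executable stack:", "GNU_STACK" in segments and "RWE" not in segments)

check_mitigations(sys.argv[1] if len(sys.argv) > 1 else "/bin/ls")
```

Comparing a binary built with and without a flag such as -fstack-protector-strong makes the effect of these compiler options tangible.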

The security community’s passion for educating embedded software developers about security flaws shows throughout these talks and in the answers to the questions submitted.

With the growing number of news headlines depicting compromised IoT devices, it is clear that security is no longer optional. The collaboration between the security researchers and the software and hardware communities I have seen at this and at many other conferences and events reassures me that we really are on the verge of putting security first.  

It has been great to see so many talks at the Embedded Online Conference, highlighting the new opportunities for developers in the embedded world. If you missed the conference and would like to catch the talks mentioned above*, visit www.embeddedonlineconference.com

*This blog only features a small collection of all the amazing speakers and talks delivered at the Conference!

In case you missed the previous posts in this series, here they are:

Read more…

Upcoming IoT Events

More IoT News

Arcadia makes supporting clean energy easier

Nowadays, it’s easier than ever to power your home with clean energy, and yet, many Americans don’t know how to make the switch. Luckily, you don’t have to install expensive solar panels or switch utility companies…


Answering your Huawei ban questions

A lot has happened since we uploaded our most recent video about the Huawei ban last month. Another reprieve has been issued, licenses have been granted, and the FCC has officially barred Huawei equipment from U.S. networks. Our viewers had some…

IoT Career Opportunities