
Arm DevSummit 2020 debuted this week (October 6 – 8) as an online virtual conference focused on providing engineers with insights into the Arm ecosystem. The summit lasted three days, over which Arm painted an interesting technology story about the current and future state of computing and where developers fit within that story. I’ve been attending Arm TechCon (which has become Arm DevSummit) for more than half a decade now, and as I perused the content, I noticed several takeaways for developers working on microcontroller-based embedded systems. In this post, we will examine these key takeaways, and I’ll point you to some of the sessions that I think may pique your interest.

(For those of you who aren’t yet aware, you can register for free up until October 21st and still watch the conference materials up until November 28th. Click here to register.)

Takeaway #1 – Expect Big Things from NVIDIA’s Acquisition of Arm

As many readers probably already know, NVIDIA is in the process of acquiring Arm. This acquisition has the potential to spark a technological revolution in computing, particularly around artificial intelligence, and it will impact nearly every embedded system at the edge and beyond. While many of us have probably wondered what plans NVIDIA CEO Jensen Huang may have for Arm, the keynotes for October 6th include a fireside chat between Jensen Huang and Arm CEO Simon Segars. Listening to this conversation is well worth the time: it gives developers some insight into the future, along with assurances that the Arm business model will not be dramatically upended.

Takeaway #2 – Machine Learning for MCUs is Accelerating

It is sometimes difficult at a conference to get a feel for what is real and what is a little more smoke and mirrors. Sometimes announcements are real, but they take several years to filter their way into the market and affect how developers build systems. Machine learning is one of those technologies that draws a lot of interest, but that developers aren’t quite sure what to do with yet, at least in the microcontroller space. When we hear machine learning, we think artificial intelligence, big datasets, and more processing power than will fit on an MCU.

There were several interesting talks at DevSummit around machine learning.

Some of these were foundational, providing embedded developers with the fundamentals to get started, while others provided hands-on explorations of machine learning with development boards. The takeaway I gather here is that the effort to bring machine learning capabilities to microcontrollers, so that they can be leveraged in industry use cases, is accelerating. A lot of effort is being put into ML algorithms, tools, frameworks, and even the hardware. Several talks mentioned Arm’s Cortex-M55 processor, which will include Helium technology to help accelerate machine learning and DSP processing capabilities.

Takeaway #3 – The Constant Need for Reinvention

In my last takeaway, I alluded to the fact that things are accelerating. The acceleration is not just happening in the technologies that we use to build systems; the range of application domains we can apply these technologies to is also dramatically expanding. Not only can we start to deploy security and ML technologies at the edge, but also in domains such as space and medical systems. There were several interesting talks about how technologies are being used around the world to solve interesting and unique problems, such as protecting vulnerable ecosystems, mapping the sea floor, fighting diseases, and so much more.

By carefully watching and listening, you’ll notice that many speakers have been involved in many different types of products over their careers and that they are constantly having to reinvent their skill sets, capabilities and even their interests! This is what makes working in embedded systems so interesting! It is constantly changing and evolving and as engineers we don’t get to sit idly behind a desk. Just as Arm, NVIDIA and many of the other ecosystem partners and speakers show us, technology is rapidly changing but so are the problem domains that we can apply these technologies to.

Takeaway #4 – Mbed and Keil are Evolving

There are also interesting changes coming to the Arm toolchains and tools like Mbed and Keil MDK. In Reinhard Keil’s talk, “Introduction to an Open Approach for Low-Power IoT Development“, developers got insight into the changes coming to Mbed and Keil, with the core focus being on IoT development. The talk focused on the endpoint and discussed how Mbed and Keil MDK are being moved to an online platform designed to help developers move through product development faster, from prototyping to production. Keil Studio Online is currently in early access and will be released early next year.

(If you are interested in endpoints and AI, you might also want to check out this article on “How Do We Accelerate Endpoint AI Innovation? Put Developers First“)

Conclusions

Arm DevSummit had a lot to offer developers this year, without the need to travel to California to participate (although I greatly missed catching up with friends and colleagues in person). If you haven’t already, I would recommend checking out the DevSummit and watching a few of the talks I mentioned. There certainly were a lot more talks, and I’m still in the process of sifting through everything. Hopefully there will be a few sessions that will inspire you and give you a feel for where the industry is headed and how you will need to pivot your own skills in the coming years.

Originally posted here


by Singapore University of Technology and Design

Internet of Things (IoT) devices such as smart home locks and medical devices depend largely on Bluetooth Low Energy (BLE) technology to function and connect to other devices with reduced energy consumption. As these devices become more prevalent, with increasing levels of connectivity, the need for strengthened security in IoT has also become vital.

A research team, led by Assistant Professor Sudipta Chattopadhyay from the Singapore University of Technology and Design (SUTD), with team members from SUTD and the Institute for Infocomm Research (I2R), designed and implemented the Greyhound framework, a tool used to discover SweynTooth—a critical set of 11 cyber vulnerabilities.

Their study was presented at the USENIX Annual Technical Conference (USENIX ATC), held from 15 to 17 July 2020, and they have been invited to present at the upcoming Singapore International Cyber Week (SICW) in October 2020.

These security lapses were found to affect devices by causing them to crash, reboot, or bypass security features. At least 12 BLE-based devices from eight vendors were affected, spanning a few hundred types of IoT products, including pacemakers, wearable fitness trackers, and home security locks.

The SweynTooth code has since been made available to the public, and several IoT product manufacturers have used it to find security issues in their products. In Singapore alone, 32 medical devices were reported to be affected by SweynTooth, and 90% of their manufacturers have since implemented preventive measures against this set of cyber vulnerabilities.

Regulatory agencies including the Cyber Security Agency and the Health Sciences Authority in Singapore as well as the Department of Homeland Security and the Food and Drug Administration in the United States have reached out to the research team to further understand the impact of these vulnerabilities.

These agencies have also raised public alerts to inform medical device manufacturers, healthcare institutions and end users on the potential security breach and disruptions. The research team continues to keep them updated on their research findings and assessments.

Beyond Bluetooth technology, the research team designed the Greyhound framework using a modular approach so that it could easily be adapted for new wireless protocols. This allowed the team to test it across the diverse set of protocols that IoT devices frequently employ. The automated framework also paves new avenues for testing the security of more complex protocols and devices built on next-generation wireless protocols such as 5G and NarrowBand-IoT, which require rigorous and systematic security testing.

"As we are transitioning towards a smart nation, more of such vulnerabilities could appear in the future. We need to start rethinking the device manufacturing design process so that there is limited reliance on communication modules such as Bluetooth to ensure a better and more secure smart nation by design," explained principal investigator Assistant Professor Sudipta from SUTD.

Originally posted HERE.


When you’re in technology, you have to expect change. Yet, there’s something to the phrase “the more things change, the more they stay the same.” For instance, I see in the industrial internet of things (IIoT) a realm that’ll dramatically shape the future – how we manufacture, the way we run our factories, workforce needs – but the underlying business goals are the same as always.

Simply put, while industrial enterprise initiatives may change, financial objectives don’t – and they’re still what matter most. That’s why IIoT is so appealing. While the possibilities of smart and connected operations, sites and products certainly appeal to the dreamer and innovator, the clear payoff ensures that it’s a road even the most pragmatic decision-maker will eagerly follow.

The big three
When it comes to industrial enterprises, IIoT addresses the “big three” financial objectives head on. The technology maximizes revenue growth, reduces operating expense and increases asset efficiency.

IIoT does this in numerous ways. It yields invaluable operational intelligence, like real-time performance management data, to reduce manufacturing costs, increase flexibility, and enable agility. Connected digital assets can empower a workforce with actionable insights to improve productivity and quality, and even prevent safety and compliance issues.

For example, recognizing defects in a product early on can save time, materials, staff hours and possibly even a company’s reputation.

Whether on or off the factory floor, IIoT can be used to optimize asset efficiency. With real-time monitoring, diagnostics, and analytics, downtime can be reduced or avoided. Asset utilization can also be evaluated and maximized. Think of applications like equipment health monitoring, predictive maintenance, and augmented 3D instructions for complex repairs. And you can also scale production more precisely via better control over processes and inventory.

All of this accelerates time to market: another key benefit of IIoT and a long-held business goal.

Why is 5G important for IIoT and augmented reality (AR)?
As we look at the growing need to connect more devices and more sensors, and to install things like real-time cameras for doing analytics, there is growing stress and strain brought into industrial settings. The need to increase connectivity while gaining greater scalability, performance, accessibility, reliability, and broader reach with a lower cost of ownership has become much more important. This is where 5G can make a real difference.

Many of our customers have seen what we are doing with augmented reality and the way that PTC can help operators service equipment. But in the not-so-distant future, the way that people interact with robotics, for example, will change. There will be real-time video for spatial analytics on the way people and machines work together, and we’ll be able to unlock a new level of intelligence with a new layer of connectivity that helps drive better business outcomes.

Partner up
It sounds nice, but the truth is, a lot of heavy lifting is required to do IIoT right. The last thing you want to do is venture into a pilot, run into problems, and leave the C-suite less than enthused with the outcome. And make no mistake, there are a lot of potential pitfalls to be aware of.

For instance, lengthy proof-of-concept periods, cumbersome processes, and integrations can slow time to market. Multiple local integrations can be required when connectivity and device management get siloed. If not done right, you may only gain limited visibility into devices, and the experience will fall short. And, naturally, global initiatives can be hindered by high roaming costs and deployment obstacles.

That said, you want to harness best-of-breed providers, not only to realize the full benefits of Industry 4.0, but to set yourself up with a foundation that’ll be able to harness 5G developments. You need a trusted IoT partner, and because of the sophistication and complexity, it takes an ecosystem of proven innovators working collaboratively.

That’s why PTC and Ericsson are partners.

Doing what’s best
Ericsson unlocks the full value of global cellular IoT connectivity and provides on-premises solutions. PTC offers an industrial IoT platform that’s ready to configure and deploy, with flexible connectivity and capabilities to build solutions without manual coding.

Drilling down a bit further, Ericsson’s IoT Accelerator can connect and manage billions of devices and millions of applications easily, seamlessly and globally. PTC’s IoT solutions digitalize processes and products, combining the physical and digital worlds seamlessly.

And with wireless connectivity, we can deploy a lot of new technology – from augmented reality to artificial intelligence applications – without having to think about the time and cost of creating fixed infrastructures, running wires, adding network capacity and more.

According to ABI Research, organizations that embrace Industry 4.0 and private cellular have the potential to improve gross margins by 5–13% in factory and warehouse operations. Manufacturers can expect a 10x return on their investment. And with 4.3 billion wireless connections in smart factories anticipated by 2030, it’s clear where things are headed.

By focusing on what we each do best, PTC and Ericsson are able to do what’s best for our customers. We can help them build and scale global cellular IoT deployments faster and gain a competitive advantage. They can reap the advantages of Industry 4.0 and create that path to 5G, future-proofing their operations and enjoying such differentiators as network slicing, edge computing, and high-reliability, low-latency communications.

Further, with our histories of innovation, customers are assured they’ll be supported in the future, remaining out front with the ability to adapt to change, grow, and deliver on financial objectives.

Editor's Note: This post was originally published by Steve Dertien, Chief Technology Officer for PTC, on Ericsson's website, and is part of a joint content effort with Kiva Allgood, head of IoT for Ericsson. To view Steve's original, please click here. To read Kiva's complementary post, please click here.


A scientist from Russia has developed a new neural network architecture and tested its learning ability on the recognition of handwritten digits. The intelligence of the network was amplified by chaos, and the classification accuracy reached 96.3%. The network can be used in microcontrollers with a small amount of RAM and embedded in such household items as shoes or refrigerators, making them 'smart.' The study was published in Electronics.

Today, the search for new neural networks that can operate on microcontrollers with a small amount of random-access memory (RAM) is of particular importance. For comparison, in ordinary modern computers, random-access memory is measured in gigabytes. Although microcontrollers possess significantly less processing power than laptops and smartphones, they are smaller and can be interfaced with household items. Smart doors, refrigerators, shoes, glasses, kettles, and coffee makers create the foundation for so-called ambient intelligence, a term that denotes an environment of interconnected smart devices.

An example of ambient intelligence is a smart home. Devices with limited memory are not able to store a large number of keys for secure data transfer or large arrays of neural network parameters. This prevents the introduction of artificial intelligence into Internet of Things devices, as they lack the required computing power. However, artificial intelligence would allow smart devices to spend less time on analysis and decision-making, better understand a user, and assist them in a friendly manner. As a result, many new opportunities can arise in the creation of ambient intelligence, for example, in the field of health care.

Andrei Velichko from Petrozavodsk State University, Russia, has created a new neural network architecture that allows efficient use of small volumes of RAM and opens opportunities for introducing low-power devices to the Internet of Things. The network, called LogNNet, is a feed-forward neural network in which the signals are directed exclusively from input to output. It uses deterministic chaotic filters for the incoming signals. The system randomly mixes the input information, but at the same time extracts valuable data that is initially invisible. A similar mechanism is used by reservoir neural networks. To generate chaos, a simple logistic map equation is applied, in which the next value is calculated from the previous one. The equation is commonly used in population biology and as an example of a simple formula for producing a sequence of chaotic values. In this way, the simple equation represents an effectively infinite set of pseudo-random numbers that the processor can recompute on demand, so the network architecture can use them without spending RAM to store them.
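
To make the storage-versus-recomputation idea concrete, here is a minimal sketch of a logistic-map generator (the parameter values are illustrative, not necessarily the ones used in LogNNet):

```python
# Logistic map: x_{n+1} = r * x_n * (1 - x_n).
# Instead of storing a large table of pseudo-random mixing values in RAM,
# the same deterministic chaotic sequence can be regenerated on demand
# from just two numbers: the seed and the parameter r.

def logistic_sequence(seed=0.4, r=3.9, length=10):
    """Yield `length` chaotic values starting from `seed`."""
    x = seed
    for _ in range(length):
        x = r * x * (1 - x)
        yield x

weights = list(logistic_sequence(length=5))
print(weights)  # reproducible "random" values, no lookup table needed
```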


The scientist tested his neural network on handwritten digit recognition using the MNIST database, which is considered the standard for training neural networks to recognize images. The database contains more than 70,000 handwritten digits; 60,000 of these are intended for training the neural network, and another 10,000 for network testing. The more neurons and chaos in the network, the better it recognized images. The maximum accuracy achieved by the network is 96.3%, while the developed architecture uses no more than 29 KB of RAM. In addition, LogNNet demonstrated promising results at very small RAM sizes, in the range of 1–2 KB. A miniature controller such as the ATmega328, which can be embedded into a smart door or even a smart insole, has approximately the same amount of memory.

"Thanks to this development, new opportunities for the Internet of Things are opening up, as any device equipped with a low-power miniature controller can be powered with artificial intelligence. In this way, a path is opened for intelligent processing of information on peripheral devices without sending data to cloud services, and it improves the operation of, for example, a smart home. This is an important contribution to the development of IoT technologies, which are actively researched by the scientists of Petrozavodsk State University. In addition, the research outlines an alternative way to investigate the influence of chaos on artificial intelligence," said Andrei Velichko.

Originally posted HERE.

by Russian Science Foundation

Image Credit: Andrei Velichko

A fingerprint for the Internet of Things

By: Tom Jeltes, Eindhoven University of Technology

The Internet of Things (IoT) consists of billions of sensors and other devices connected to each other via the internet, all of which need to be protected against hackers with malicious purposes. A low-cost and energy-efficient solution for the security of IoT devices uses the unique characteristics of their built-in memory chips. Ph.D. candidate Lieneke Kusters investigated how to make optimal use of the chip's digital fingerprint to generate a security key.

The higher the number of devices connected to each other via the Internet of Things, the greater the risk that malicious hackers might gain access to important information, or even take over entire systems. Quite apart from all kinds of privacy issues, it's not hard to imagine that someone who, for example, has control over temperature sensors in a chemical or nuclear plant could cause serious damage.

To prevent problems like these from occurring, each IoT device needs to be able, as it were, to show an identity document—"authentication," in professional terms. Normally, this is done with a kind of password, which is sent in encrypted form to the party that is communicating with the device. The security key needed for that has to be stored in the IoT device one way or another, Lieneke Kusters explains. "But these are often small and cheap devices that aren't supposed to use much energy. To safely store a key in these devices, you need extra hardware with a constant power supply. That's not very practical."
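
To see why that stored key matters, here is a hypothetical sketch of key-based challenge-response authentication (an illustration of the general idea, not the specific protocol used by any device discussed here; the key value is a placeholder):

```python
# Illustrative challenge-response authentication: the verifier sends a
# random challenge, and the device proves it knows the secret key by
# returning an HMAC over that challenge. Safely storing DEVICE_KEY on a
# small, cheap device is exactly the hard part described above.
import hashlib
import hmac
import os

DEVICE_KEY = b"secret-key-stored-on-device"  # placeholder secret

def device_respond(challenge: bytes) -> bytes:
    return hmac.new(DEVICE_KEY, challenge, hashlib.sha256).digest()

def verifier_check(challenge: bytes, response: bytes) -> bool:
    expected = hmac.new(DEVICE_KEY, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

challenge = os.urandom(16)
assert verifier_check(challenge, device_respond(challenge))
```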

Digital fingerprint

There is a different way: namely by deducing the security key from a unique physical characteristic of the memory chip (Static Random-Access Memory, or SRAM) that can be found in practically every IoT device. Depending on the random circumstances during the chip's manufacturing process, the memory locations have a random default value of 0 or 1.

“That binary code, which you can read out when activating the chip, constitutes a kind of digital fingerprint of the device," says Kusters, who gained her doctorate at the Information and Communication Theory Laboratory at the TU/e department of Electrical Engineering. This fingerprint is known as a Physical Unclonable Function (PUF). "The Eindhoven-based company Intrinsic ID sells digital security based on SRAM-PUFs. I collaborated with them for my doctoral research, during which I focused on how to generate, in a reliable way, a key from that digital fingerprint that is as long as possible. The longer, the safer."

The major advantage of security keys based on SRAM-PUFs is that the key exists only at the moment when authentication is required. "The device restarts itself to read out the SRAM-PUF and in doing so creates the key, which subsequently gets erased immediately after use. That makes it all but impossible for an attacker to steal the key."

Noise and reliability

But that's not the entire story, because some bits of the SRAM do not always have the same value during activation, Kusters explains. Ten to fifteen percent of the bits turn out not to be determined, which makes the digital fingerprint a bit fuzzy. How do you use that fuzzy fingerprint to make a key of the highest possible complexity that nevertheless still fits into the receiving lock—practically—each time?

"What you want to prevent is that the generated key won't be recognized by the receiving party as a consequence of the 'noise' in the SRAM-PUF," Kusters explains. "It's alright if that happens one in a million times perhaps, preferably less often." The probability of error is smaller with a shorter key, but such a key is also easier to guess for people with bad intentions. "I've searched for the longest reliable key, given a certain amount of noise in the measurement. It helps if you store extra information about the SRAM-PUF, but that must not be of use to a potential attacker. My thesis is an analysis of how you can reach the optimal result in different situations with that extra information."
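
As a toy illustration of the noise problem, the following simulation stabilizes a fuzzy read-out with simple majority voting before hashing it into a key. Real designs, including those analyzed in the thesis, use error-correcting codes with public helper data (fuzzy extractors) rather than this naive scheme:

```python
# Toy SRAM-PUF simulation: ~12% of cells are unstable and flip on some
# power-ups. Majority voting over repeated reads recovers a stable bit
# string, which is then hashed into a key that never has to be stored.
import hashlib
import random

random.seed(1)
N = 256
fingerprint = [random.randint(0, 1) for _ in range(N)]      # ideal start-up values
unstable = {i for i in range(N) if random.random() < 0.12}  # the fuzzy cells

def read_sram():
    """One noisy power-up read: unstable cells flip half the time."""
    return [b ^ 1 if i in unstable and random.random() < 0.5 else b
            for i, b in enumerate(fingerprint)]

reads = [read_sram() for _ in range(9)]
stable = [1 if sum(r[i] for r in reads) > 4 else 0 for i in range(N)]
key = hashlib.sha256(bytes(stable)).hexdigest()
print(key)
```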

Originally posted here.


 


This complete guide is a 212-page eBook and a must-read for business leaders, product managers, and engineers who want to implement, scale, and optimize their business with IoT communications.

Whether you want to attempt initial entry into the IoT sphere or expand existing deployments, this book can help with your goals, providing a deep understanding of all aspects of IoT.

CLICK HERE TO DOWNLOAD


Edge Products Are Now Managed in the Cloud

Now more than ever, there are billions of edge products in the world. But without proper cloud computing, making the most of electronic devices that run on Linux or any other OS would not be possible.

And so, a question most people keep asking is: which is the best Software-as-a-Service platform for effectively managing edge devices through cloud computing? While edge device management may not be new, the fact that the cloud computing space is not yet fully exploited means there is still a lot to do in the cloud space.

Remote product management is especially necessary in the 21st century and beyond. Because of the increasing number of devices connected to the Internet of Things (IoT), a reliable SaaS platform should help with fixing software glitches from anywhere in the world. From smart homes and stereo speakers to cars and personal computers, any product that is connected to the internet needs real-time protection from hacking threats such as unlawful access to business or personal data.

Data, being the most vital asset, is constantly at risk, especially if individuals using edge products do not connect to trusted, reliable, and secure edge device management platforms.

Bridges the Gap Between Complicated Software And End Users

Cloud computing is the new frontier through which SaaS platforms help manage edge devices in real time. But something even more noteworthy is the increasing amount of complicated software that now runs edge devices at home and in workplaces.

Edge device management, therefore, ensures everything runs smoothly. From fixing bugs and running debugging commands to real-time software patch deployment, cloud management of edge products bridges the gap between end users and the complicated software that is becoming the norm these days.

Even more importantly, going beyond physical firewall barriers is a major necessity in the remote management of edge devices. A reliable Software-as-a-Service platform, therefore, ensures that data encryption for edge devices is not only hackproof but also that data is accessed only by the right people. Moreover, the deployment of secure routers and access tools is especially critical in cloud computing when managing edge devices. And so, the developers behind successful SaaS platforms conduct regular security checks over the cloud and design and implement solutions for edge products.

Reliable IT Infrastructure Is Necessary

Software-as-a-service platforms that manage edge devices focus on having a reliable IT infrastructure and centralized systems through which they can conduct cloud computing. It is all about remotely managing edge devices with the help of an IT infrastructure that eliminates challenges such as connectivity latency.

Originally posted here


Embedded Linux or RTOS: For IoT

by Tirichlabs

Embedded Linux utilizes the Linux kernel for an embedded device, but it is quite different from the standard Linux OS. Its application to embedded systems is motivated by the availability of device support, file systems, network connectivity, and UI support. It is a customized version of Linux for embedded systems, consequently having a much smaller size and minimal features, and requiring less processing power. Based on the embedded system's requirements, the Linux kernel is modified and optimized. Such an embedded Linux can only run device-specific, purpose-built applications.

A real-time operating system (RTOS) with minimal code is used for applications where fixed, minimal processing time is required. An RTOS is a time-sharing system based on clock interrupts that implements priorities to decide which process executes. When a high-priority interrupt is generated by the system, the running low-priority processes are stopped and the interrupt is served. The real-time operating system requires less operational memory and synchronizes processes in such a way that they can communicate with each other, so resources can be used efficiently without wasting time.

 

COMPARISON

Size

The major difference between Embedded Linux and an RTOS is their size. An RTOS running on an AVR requires approximately 4.4 kilobytes of ROM. Embedded Linux, on the other hand, is considerably larger. The kernel can be stripped of components that are not required, but even then the footprint is generally measured in megabytes.

Embedded Linux's RAM requirement is on the order of a few megabytes; in practical applications it requires more than that, because other tasks run under the Linux kernel. An RTOS has much smaller memory requirements: a very simple setup, running two tasks, a scheduler, a queue for communication, and a semaphore on an 8-bit architecture, would use in the vicinity of 200 bytes.

Scheduler

The scheduler in a real-time system is important to ensure that tasks complete in a fixed time. Compared to a regular scheduler for a general-purpose system, its main job is not to ensure "fair" distribution of CPU time. A common technique is simply to let the task with the highest priority run before all tasks with lower priority. This works fine for a soft real-time system, but for hard real-time, the system must provide a stronger guarantee.

RTOS scheduler

An RTOS uses a highest-priority-first scheduler, meaning that the task with the highest priority is always running. This is achieved with a preemptive scheduler that decides, at each tick interrupt, whether the currently running task is allowed to continue executing or needs to be switched out for another task based on priority. Tasks with the same priority are given a "fair" share of processing time. This scheduler achieves soft real-time, but hard real-time is difficult to achieve without some kind of deadline-based scheduling.

For this purpose, there is a choice between a preemptive and a cooperative scheduler. In preemptive mode, a task can be preempted, unlike in cooperative mode, where it is up to each task to give away the CPU often enough that higher-priority tasks get to run. A typical RTOS real-time kernel achieves scheduler latencies from zero to a few microseconds.
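
The behavior is easy to see in a toy simulation. The sketch below is purely illustrative, plain Python rather than any RTOS's actual code; it hands each tick to the highest-priority task that still has work:

```python
# Toy tick-driven simulation of highest-priority-first preemptive
# scheduling: at every tick the highest-priority ready task gets the
# CPU, so it effectively preempts everything below it.

class Task:
    def __init__(self, name, priority, work):
        self.name, self.priority, self.work = name, priority, work

tasks = [Task("logger", 1, 4), Task("sensor", 2, 3), Task("alarm", 3, 2)]

for tick in range(12):
    ready = [t for t in tasks if t.work > 0]
    if not ready:
        break
    current = max(ready, key=lambda t: t.priority)  # preempts lower priorities
    current.work -= 1
    print(f"tick {tick}: running {current.name}")
```

Running it shows "alarm" (priority 3) finishing before "sensor", which in turn finishes before "logger".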

Embedded Linux scheduler

In Embedded Linux, there are more schedulers to choose from. The modularity of Embedded Linux allows different parts of the system to be changed; a simple insmod gives the possibility of changing the scheduler. There are a couple of schedulers designed for different purposes.

First of all, it has a basic highest-priority-first scheduler that runs the task with the highest priority. Embedded Linux also implements earliest deadline first (EDF), which uses the periodic nature of real-time tasks. Assuming that the deadline for every task is the time when it is next due to run again, one can implement a fast EDF. In theory it is optimal, since it can schedule tasks up to 100% CPU usage; in practice, overheads keep it somewhat below that. As its idle process, Embedded Linux runs the usual Linux kernel: when there are no real-time tasks ready to run, Linux gets to run. A heavy real-time load can starve the ordinary Linux side, effectively disabling it, but since the purpose of a real-time system is to run the real-time tasks, this is not a big problem for the system. Typical latencies in a real-time Linux scheduler are on the order of tens to hundreds of microseconds.
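
The EDF selection rule itself is tiny. A minimal sketch, under the assumption stated above that a periodic task's deadline is its next release time:

```python
# Earliest-deadline-first selection: among the ready tasks, run the one
# whose deadline (here, next release = last_release + period) is soonest.
# Task tuples are (name, period, last_release).

def pick_next(ready_tasks):
    return min(ready_tasks, key=lambda t: t[2] + t[1])

tasks = [("ctrl", 10, 95), ("net", 50, 80), ("ui", 100, 40)]
print(pick_next(tasks))  # ('ctrl', 10, 95): next due at t = 105
```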

CPU resource

Embedded Linux requires a significant amount of CPU resources: perhaps more than 200 MIPS on a 32-bit processor, ideally with an MMU, plus 4 MB of ROM and 16 MB of RAM, and boot may take several seconds.

An RTOS, on the other hand, runs in less than 10 KB on microcontrollers from 8-bit up, and boots in milliseconds.

IoT Implementation of OS

For extremely low-power applications, such as sensors that run for months on batteries, the low-power nature often precludes direct IP connectivity; instead, a gateway provides the Internet connectivity. The gateway speaks the low-power protocol to the sensors and translates it to IP. Embedded Linux is often preferred for such gateways, as Linux may already have an existing protocol implementation that fulfills the requirements.

The basic requirement of an IoT device is network connectivity, typically in the form of IP via a web server. An RTOS can offer IP connectivity, but the stack risks being buggy unless you examine it. For example, RTOSes usually do not isolate the IP stack user from the IP stack itself. Network connectivity means potentially dealing with low-speed or congested links, which can lead to obscure and hard-to-debug buffer-handling issues when the stack is intermingled with other code. Embedded Linux, on the other hand, leverages hardware separation and a widely utilized IP stack that has probably been exposed to the corner cases.
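
For a sense of scale, here is a minimal sketch of that basic requirement on embedded Linux: a web-served status endpoint sitting on top of the kernel's well-exercised IP stack (the device name and reading are made-up placeholders, and there is no TLS or authentication here):

```python
# Minimal HTTP status endpoint for an embedded Linux device. The
# application never touches the IP stack internals; the kernel's widely
# used stack handles slow or congested links.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class StatusHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = json.dumps({"device": "sensor-01", "temp_c": 21.5}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

HTTPServer(("0.0.0.0", 8080), StatusHandler).serve_forever()
```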

Security is essential in IoT devices, which are often exposed to the open Internet. A system compromised on the Internet interface is prone to intruders, and information or control of the device can be hijacked. Developers can leverage native embedded Linux features—multiuser support, SELinux, and containers—to contain and limit the damage.

Linux certainly is a robust and secure OS, and it has matured as an embedded operating system. Yet one of its drawbacks is its memory footprint compared to a real-time operating system. Even though it can be trimmed down by removing tools and system services that are not required in embedded systems, it is still a large piece of software. It simply cannot run on 8- or 16-bit MCUs and requires more onboard RAM for the Linux kernel. For example, MCUs based on the Arm Cortex-M architecture typically have only a few hundred kilobytes of RAM, and Linux cannot run on these chips.

A common engineering solution for networked systems is to use two processors in the device. In this arrangement, an 8- or 16-bit MCU is used for the sensor or actuator, while a 32-bit processor runs an RTOS and handles the network interface. Sales of 32-bit MCUs have exploded in the last several years, and they have become the largest segment of the MCU market.

ORIGINALLY POSTED HERE ON TIRICH LABS


 


When I work on a development project, I’ve become a big fan of using development boards that have Arduino headers on them. The vast number of shields that easily connect to these headers is phenomenal. The one problem I’ve always had, though, is that there is always a need to use a breadboard to test a circuit or integrate a sensor that just isn’t in an Arduino header format. The result is a wiring mess that can lead to loose or missing connections.

I was recently talking with Max Maxfield and he pointed me to a really cool adapter board designed to remove these wiring jumpers to a breadboard. Max wrote about this board here but I’m so excited about this that I thought I’d add my two cents as well.

The BreadShield, which can be purchased at https://www.crowdsupply.com/loser/breadshield, adapts the Arduino headers to a linear set of header pins designed to be plugged into a breadboard. As you can see in the image below, this completely removes all the extra jumpers that one would normally require.

(Image: the BreadShield adapter connecting an Arduino-format board directly to a breadboard)

When I heard about these, I purchased three assembled units for about $28, which saves me the time of assembling the adapters myself. DIY assembly runs about $15 for a set of three boards. Either way, a great price to remove a bunch of wires from the workbench.

Now, I’m still waiting for mine to arrive, but from the image you can see that the one challenge to using these adapters might be adapting the height of your breadboard to your hardware stack. While this could be an issue, I keep various-length spacers around the office so that I can adapt board heights, and undoubtedly there will be a length that ensures these line up properly.

You can view the original post here


In-Circuit Emulators

Does anyone remember in-circuit emulators (ICEs)?

Around 1975 Intel came out with the 8080 microprocessor. This was a big step up from the 8008, for the 8080 had a 64k address space, a reasonable ISA, and an honest stack pointer (the 8008 had a hardware stack a mere 7 levels deep). They soon released the MDS 800, a complete computer based on the 8080, with twin 8" floppy drives. An optional ICE was available; this was, as I recall, a two-board set that was inserted in the MDS. A ribbon cable from those boards went to a small pod that could be plugged into the 8080 CPU socket of a system an engineer was developing.

The idea was that the MDS could act as the CPU of the device under test (DUT). It was rather like today's JTAG debuggers in that one could run code on the DUT, set breakpoints, collect trace data, and generally debug the hardware and software. For there was no JTAG then.

We had been developing microprocessor-based products using the 8008, but quickly transitioned to the 8080 for the increased computational power and address space. I begged my boss for the money for an MDS, which was $20k (about $100k in today's dollars), and to my surprise he let us order one. Despite slow floppies that stored only 80 KB each this tool greatly accelerated our work.

Before long ICEs were the standard platform for embedded work. Remember: this was before PCs so there were no standard desktop computers. The ICE was the computer, the IDE (such as it was) and the debugger.

In the mid-80s I was consulting and designed a, uh, "data gathering" system for our friends in Langley, VA, using multiple NSC-800 CPUs. There were few tools available for this part so I created a custom ICE that let me debug the code. Then a light bulb went on: why not sell the thing? There was practically no market for NSC-800 tools so I came up with versions for the Z80 and 8085 and slapped a $695 label on it. Most ICEs at the time cost many thousands so sales spiked.

Back then we still drew schematics on large D-size (17" x 22") vellum with a pencil. I laid out the PCBs on mylar with black tape for the tracks, as was the norm at the time.

This ICE is perhaps the design I'm most proud of in my career. It was only 17 ICs but was the epitome of an embedded system. Software replaced the usual gobs of hardware. On a breakpoint, for instance, the hardware switched from using the DUT stack to a stack on the emulator, but since the user's stack pointer could point anywhere, and the RAM in the ICE was only a few KB, the hardware masked off the upper address bits and lots of convoluted code reconstructed the user environment.

At the time, ICEs advertised their breakpoint counts; most supported no more than a few, as comparators watched the address bus for each breakpoint. My ICE used a 64K-by-one-bit memory that mirrored the user bus. Need a breakpoint at, say, address 0x1234? The emulator set that bit in the memory true. Thus, the thing had 65K breakpoints. One of my dumbest mistakes was not patenting that, as all ICE vendors eventually copied the approach.
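
The data structure behind that trick is simple enough to sketch in a few lines of modern Python (the emulator itself, of course, did this in hardware with a 64K x 1 RAM):

```python
# The 64K-breakpoint trick: one bit per address mirrors the entire
# 16-bit address space, so a breakpoint check is a single bit lookup
# instead of a bank of hardware comparators.
BITMAP = bytearray(65536 // 8)  # 8 KB covers every Z80 address

def set_breakpoint(addr: int) -> None:
    BITMAP[addr >> 3] |= 1 << (addr & 7)

def is_breakpoint(addr: int) -> bool:
    return bool(BITMAP[addr >> 3] & (1 << (addr & 7)))

set_breakpoint(0x1234)
assert is_breakpoint(0x1234) and not is_breakpoint(0x1235)
```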

The trouble with tools is support. An ICE replaces the DUT CPU and interfaces with all sorts of unknown target hardware. Though the low clock rates of the Z80 meant we initially had few problems, as we expanded the product line, support consumed more and more time. Eventually I learned it was equally easy to sell a six-thousand-dollar product as a six-hundred-dollar version, so those simple first emulators were replaced by much more complex, many-hundred-chip versions with vast numbers of features.

But the market was changing. By the mid-90s, SMT CPUs were common, and these were challenging to connect to. Clock rates soared, making every connection a Maxwell's-laws nightmare. I sold the business in 1997 and went on to other endeavors. Eventually the ICE market disappeared.

One regret from all those years is that I didn't save any of the emulator's firmware or schematics. In this business everything is ephemeral. We should make an effort to preserve some of that history.

You can view the original post on TEM here


Industrial Prototyping for IoT


ADLINK is a global leader in edge computing driving data-to-decision applications across industries. The company recently introduced I-Pi SMARC for Industrial IoT prototyping.

- ADLINK I-Pi SMARC consists of a simple carrier board paired with a SMARC computer-on-module.

- SMARC modules are available from the entry-level Rockchip PX30 to the top-of-the-line Intel Apollo Lake.

- SMARC modules are specifically designed for typical industrial embedded applications that require long life, high MTBF, and strict revision control.

- Use popular off-the-shelf sensors to create prototypes or proofs of concept on short notice.

Additional information can be found here

 


The tinyML Foundation is excited to offer a new activity to our community: the tinyML Talks webcast series. A strong line-up of speakers will give 30-minute presentations twice a month on Tuesdays at 8 am Pacific time, so that tinyML enthusiasts worldwide have an opportunity to watch them live. Presentations and videos will be available online the day afterwards for those who were not able to join live.

View Schedule of Upcoming Talks

If you want to re-watch the talks (starting March 31) or were unable to join us live, the slides and links to the talks on our YouTube channel are posted on our tinyML Forums. Many questions were asked during the presentations, but not all could be answered in the allotted time frame; the answers to some of those can be found on the tinyML Forums as well.


In this IoT Central Video Feature, we present Jacob Sorber's video, "How to Get Started Learning Embedded Systems." Jacob is a computer scientist, researcher, teacher, and Internet of Things enthusiast. He teaches systems and networking courses at Clemson University and leads the PERSIST research lab. His “get started” videos are valuable for those early in their practice. 

From Jacob: I've been meaning to start making more embedded systems videos — that is, computer science videos oriented to things you don't normally think of as computers (toys, robots, machines, cars, appliances). I hope this video helps you take the first step.

 

 


Helium Expands to Europe

Helium, the company behind one of the world’s first peer-to-peer wireless networks, is announcing the introduction of Helium Tabs, its first branded IoT tracking device that runs on The People’s Network. In addition, after launching its network in 1,000 cities in North America within one year, the company is expanding to Europe to address growing market demand with Helium Hotspots shipping to the region starting July 2020. 

Since its launch in June 2019, Helium has quickly grown its footprint, with Hotspots covering more than 700,000 square miles across North America. Helium is now expanding to Europe to allow for seamless use of connected devices across borders. Powered by entrepreneurs looking to own a piece of the people-powered network, Helium’s open-source blockchain technology incentivizes individuals to deploy Hotspots and earn Helium (HNT), a new cryptocurrency, for simultaneously building the network and enabling IoT devices to send data to the Internet. When connected with other nearby Hotspots, these Hotspots act as the backbone of the network.

“We’re excited to launch Helium Tabs at a time where we’ve seen incredible growth of The People’s Network across North America,” said Amir Haleem, Helium’s CEO and co-founder. “We could not have accomplished what we have done, in such a short amount of time, without the support of our partners and our incredible community. We look forward to launching The People’s Network in Europe and eventually bringing Helium Tabs and other third-party IoT devices to consumers there.”  

Introducing Helium Tabs that Run on The People’s Network
Unlike other tracking devices, Tabs uses LongFi technology, which combines the LoRaWAN wireless protocol with the Helium blockchain and provides network coverage up to 10 miles away from a single Hotspot. This is a game-changer compared to WiFi- and Bluetooth-enabled tracking devices, which only work up to 100 feet from a network source. What’s more, due to Helium’s unique blockchain-based rewards system, Hotspot owners are rewarded with Helium (HNT) each time a Tab connects to their network.

In addition to its increased growth with partners and customers, Helium has also seen accelerated expansion of its Helium Patrons program, which was introduced in late 2019. All three combined have helped to strengthen its network. 

Patrons are entrepreneurial customers who purchase 15 or more Hotspots to help blanket their cities with coverage and enable the customers who use the network. In return, they receive discounts, priority shipping, network tools, and Helium support. Currently, the program has more than 70 Patrons throughout North America and is expanding to Europe.

Key brands that use the Helium Network include: 

  • Nestlé ReadyRefresh, a beverage delivery service company
  • Agulus, an agricultural tech company
  • Conserv, a collections-focused environmental monitoring platform

Helium Tabs will initially be available to existing Hotspot owners for $49. The Helium Hotspot is now available for purchase online in Europe for €450.


This blog is the second part of a series covering the insights I uncovered at the 2020 Embedded Online Conference. 

Last week, I wrote about the fascinating intersection of the embedded and IoT world with data science and machine learning, and the deeper co-operation I am experiencing between software and hardware developers. This intersection is driving a new wave of intelligence on small and cost-sensitive devices.

Today, I’d like to share with you my excitement around how far we have come in the FPGA world: what used to be something only a few individuals in the world were able to do is on the verge of becoming much more accessible.

I’m a hardware guy and I started my career writing in VHDL at university. I then started working on designing digital circuits with Verilog and C and used Python only as a way of automating some of the most tedious daily tasks. More recently, I have started to appreciate the power of abstraction and simplicity that is achievable through the use of higher-level languages, such as Python, Go, and Java. And I dream of a reality in which I’m able to use these languages to program even the most constrained embedded platforms.

At the Embedded Online Conference, Clive Maxfield talked about FPGAs. He mentions that “in a world of 22 million software developers, there are only around a million core embedded programmers and even fewer FPGA engineers.” But things are changing. As an industry, we are moving towards a world in which taking advantage of the capabilities of a reconfigurable hardware device, such as an FPGA, is becoming easier.

  • What the FAQ is an FPGA, by Max the Magnificent, starts with what an FPGA is and the beauties of parallelism in hardware – something that took me quite some time to grasp when I first started writing in HDL (hardware description languages). This is not only the case for an FPGA, but it also holds true in any digital circuit. The cool thing about an FPGA is the fact that at any point you can just reprogram the whole board to operate in a different hardware configuration, allowing you to accelerate a completely new set of software functions. What I find extremely interesting is the new tendency to abstract away even further, by creating HLS (high-level synthesis) representations that allow a wider set of software developers to start experimenting with programmable logic.
  • The concept of extending the way FPGAs can be programmed to an even wider audience is taken to the next level by Adam Taylor. He talks about PYNQ, an open-source project that allows you to program Xilinx boards in Python. This is extremely interesting, as it opens up the world of FPGAs to even more software engineers. Adam demonstrates how you can program an FPGA to accelerate machine learning operations using the PYNQ framework, from creating and training a neural network model to running it on an Arm-based Xilinx FPGA with custom hardware accelerator blocks in the FPGA fabric (a sketch of what this flow looks like follows this list).
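
To give a flavor of that PYNQ flow, here is a hedged sketch of driving a custom accelerator from Python. The bitstream name and the DMA block are hypothetical placeholders; what your overlay actually exposes depends entirely on your board and design:

```python
# Sketch of the PYNQ flow (overlay and IP names are placeholders for
# whatever your Xilinx design actually contains).
import numpy as np
from pynq import Overlay, allocate

overlay = Overlay("my_accelerator.bit")  # reprogram the FPGA fabric
dma = overlay.axi_dma_0                  # IP blocks appear as attributes

in_buf = allocate(shape=(1024,), dtype=np.int32)
out_buf = allocate(shape=(1024,), dtype=np.int32)
in_buf[:] = np.arange(1024)

dma.sendchannel.transfer(in_buf)   # stream data into the fabric
dma.recvchannel.transfer(out_buf)  # collect the accelerated results
dma.sendchannel.wait()
dma.recvchannel.wait()
```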

FPGAs have always had the stigma of being hard and difficult to work on. The idea of programming an FPGA in Python was something that no one had even imagined a few years ago. But today, thanks to the many efforts all around our industry, embedded technologies, including FPGAs, are being made more accessible, allowing more developers to participate, experiment, and drive innovation.

I’m excited that more computing technologies are being put in the hands of more developers, improving development standards, driving innovation, and transforming our industry for the better.

If you missed the conference and would like to catch the talks mentioned above*, visit www.embeddedonlineconference.com

Part 3 of my review can be viewed by clicking here

In case you missed the previous post in this blog series, here it is:

*This blog only features a small collection of all the amazing speakers and talks delivered at the Conference! 
