
This blog is the final part of a series covering the insights I uncovered at the 2020 Embedded Online Conference.

In the previous blogs in this series, I discussed the opportunities we have in the embedded world to make the next-generation of small, low-power devices smarter and more capable. I also discussed the improved accessibility of embedded technologies, such as FPGAs, that are allowing more developers to participate, experiment, and drive innovation in our industry.

Today, I’d like to discuss another topic that is driving change in our industry and was heavily featured at the Embedded Online Conference – security. 

Security is still under-prioritised in our industry. You only have to watch the first 12 minutes of Maria "Azeria" Markstedter's 'Defending against Hackers' talk to see how many widely used IoT devices still lack basic security features today.

Security is often seen as a burden, but it doesn't need to be. In recent years, many passionate security researchers have highlighted some simple steps you can take to vastly improve the overall security of your system. In fact, by clearly identifying the threats and applying appropriate, well-defined mitigation techniques, systems become much harder to compromise. I'd recommend watching these talks to familiarize yourself with some of the different aspects of security you need to consider: 

  • Azeria is a security researcher and Arm Innovator who is passionate about educating developers on how to defend their applications against malicious attacks. In this talk, she sheds light on the most common mitigations to consider for memory-corruption-based exploits when writing code for Arm Cortex-A processors, such as Execute Never (XN), Address Space Layout Randomisation (ASLR) and stack canaries. What's really interesting is that, from listening to Azeria's talk and seeing the audience comments, it becomes clear there is a lot of low-hanging fruit that we, as developers, are not fully aware of. We should collectively start to see exploit mitigations as great tools for increasing the security of our systems, no matter what type of code we are writing.
  • In the same vein as Maria’s talk, Aljoscha Lautenbach discusses some of the most common vulnerabilities and security mechanisms for the IoT, but with a focus on cryptography. He covers how to use block cipher modes correctly, common insecure algorithms to watch out for, and the importance of entropy and initialization vectors (IVs).
  • A different approach is taken by Colin O'Flynn in his talk, Hardware Hacking: Hands-On. I personally really appreciate the angle that Colin takes, as it is something that, as software engineers, we tend to forget: the IoT and embedded devices running our code can be physically tampered with in order to extract our secrets. As Colin mentions, protecting against these attacks is usually costly, but there are many steps we can take to substantially mitigate the risk. The first step is to analyse the weaknesses of our system by performing a threat analysis, ensuring we cover all bases when architecting and implementing our code. A popular framework for addressing security is the Platform Security Architecture (PSA), which Jacob Beningo describes in detail during his talk. Colin then moves on to introduce practical tools and techniques that you can use to test the ability of your systems to resist physical attacks. 
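Aljoscha's point about initialization vectors is easy to demonstrate. The Python sketch below is my own contrived illustration (a bare XOR keystream standing in for a real stream cipher or CTR-mode block cipher, with invented messages): if the same IV, and hence the same keystream, is reused for two messages, an eavesdropper can XOR the two ciphertexts together and the keystream cancels out, leaking the XOR of the plaintexts without the key ever being touched.

```python
import os

def xor_bytes(a: bytes, b: bytes) -> bytes:
    """XOR two equal-length byte strings."""
    return bytes(x ^ y for x, y in zip(a, b))

def toy_encrypt(plaintext: bytes, keystream: bytes) -> bytes:
    # Stand-in for any stream cipher (or a block cipher in CTR mode):
    # ciphertext = plaintext XOR keystream. The keystream is normally
    # derived from the key *and* a unique IV/nonce per message.
    return xor_bytes(plaintext, keystream)

# One keystream, as if derived from the key plus a single IV.
keystream = os.urandom(16)

m1 = b"TURN HEATER OFF!"
m2 = b"TURN HEATER ON!!"

# BUG: the same IV (hence the same keystream) is reused for both messages.
c1 = toy_encrypt(m1, keystream)
c2 = toy_encrypt(m2, keystream)

# An eavesdropper who never sees the key can still compute:
leak = xor_bytes(c1, c2)          # the keystream cancels out...
assert leak == xor_bytes(m1, m2)  # ...leaking the XOR of the plaintexts
```

With known or guessable plaintext structure (common in machine-to-machine protocols), that leaked XOR is often enough to recover both messages, which is why every mode that needs an IV demands a fresh one per message.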

The security community's passion for educating embedded software developers about security flaws comes through both in the talks themselves and in the answers to the questions submitted.

With the growing number of news headlines depicting compromised IoT devices, it is clear that security is no longer optional. The collaboration between the security researchers and the software and hardware communities I have seen at this and at many other conferences and events reassures me that we really are on the verge of putting security first.  

It has been great to see so many talks at the Embedded Online Conference highlighting the new opportunities for developers in the embedded world. If you missed the conference and would like to catch the talks mentioned above*, visit www.embeddedonlineconference.com

*This blog only features a small collection of all the amazing speakers and talks delivered at the Conference!

In case you missed the previous posts in this series, here they are:


This blog is the second part of a series covering the insights I uncovered at the 2020 Embedded Online Conference. 

Last week, I wrote about the fascinating intersection of the embedded and IoT world with data science and machine learning, and the deeper co-operation I am experiencing between software and hardware developers. This intersection is driving a new wave of intelligence on small and cost-sensitive devices.

Today, I’d like to share with you my excitement about how far we have come in the FPGA world: what only a few individuals used to be able to do is on the verge of becoming accessible to many more of us.

I’m a hardware guy and I started my career writing in VHDL at university. I then started working on designing digital circuits with Verilog and C and used Python only as a way of automating some of the most tedious daily tasks. More recently, I have started to appreciate the power of abstraction and simplicity that is achievable through the use of higher-level languages, such as Python, Go, and Java. And I dream of a reality in which I’m able to use these languages to program even the most constrained embedded platforms.

At the Embedded Online Conference, Clive Maxfield talked about FPGAs. He mentions that “in a world of 22 million software developers, there are only around a million core embedded programmers and even fewer FPGA engineers.” But things are changing. As an industry, we are moving towards a world in which taking advantage of the capabilities of a reconfigurable hardware device, such as an FPGA, is becoming easier.

  • What the FAQ is an FPGA, by Max the Magnificent, starts with what an FPGA is and the beauty of parallelism in hardware – something that took me quite some time to grasp when I first started writing in HDLs (hardware description languages). This parallelism is not unique to FPGAs; it holds true for any digital circuit. The cool thing about an FPGA is that at any point you can reprogram the whole device to operate in a different hardware configuration, allowing you to accelerate a completely new set of software functions. What I find extremely interesting is the new tendency to abstract away even further, by creating HLS (high-level synthesis) representations that allow a wider set of software developers to start experimenting with programmable logic.
  • The concept of extending the way FPGAs can be programmed to an even wider audience is taken to the next level by Adam Taylor. He talks about PYNQ, an open-source project that allows you to program Xilinx boards in Python. This is extremely interesting as it opens up the world of FPGAs to even more software engineers. Adam demonstrates how you can program an FPGA to accelerate machine learning operations using the PYNQ framework, from creating and training a neural network model to running it on Arm-based Xilinx FPGA with custom hardware accelerator blocks in the FPGA fabric.
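The parallelism Max describes can be illustrated without any hardware at all. Below is my own toy Python simulation (the three-stage pipeline is invented for illustration, not taken from any talk) of how synchronous hardware behaves: on every clock edge, every register updates simultaneously from the values that existed before the edge, much like non-blocking `<=` assignments in Verilog, rather than one statement after another as in a software loop.

```python
def clock_tick(regs, sample):
    """One clock edge: every register updates *simultaneously* from the
    values that were present before the edge, so the order of the
    assignments below does not matter -- unlike sequential software."""
    return {
        "stage1": sample,              # register the raw input sample
        "stage2": regs["stage1"] * 2,  # scale the *previous* stage1 value
        "stage3": regs["stage2"] + 1,  # offset the *previous* stage2 value
    }

regs = {"stage1": 0, "stage2": 0, "stage3": 0}
for s in [3, 5, 7]:          # three clock ticks, one sample per tick
    regs = clock_tick(regs, s)

# After three ticks the first sample (3) has flowed through the whole
# pipeline: stage3 holds (3 * 2) + 1, while later samples are still
# in flight in the earlier stages.
assert regs["stage3"] == 7
```

All three stages do work on every tick, each on a different sample in flight, which is exactly the throughput win that makes describing hardware so different from writing sequential code.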

FPGAs have always had the stigma of being difficult to work with. The idea of programming an FPGA in Python was something no one had even imagined a few years ago. But today, thanks to many efforts across our industry, embedded technologies, including FPGAs, are being made more accessible, allowing more developers to participate, experiment, and drive innovation.

I’m excited that more computing technologies are being put in the hands of more developers, improving development standards, driving innovation, and transforming our industry for the better.

If you missed the conference and would like to catch the talks mentioned above*, visit www.embeddedonlineconference.com


In case you missed the previous post in this blog series, here it is:

*This blog only features a small collection of all the amazing speakers and talks delivered at the Conference! 


I recently joined the Embedded Online Conference thinking I was going to gain new insights on embedded and IoT techniques. But I was pleasantly surprised to see a huge variety of sessions with a focus on modern software development practices. It is becoming more and more important to gain familiarity with a more modern software approach, even when you’re programming a constrained microcontroller or an FPGA.

Historically, there has been a large separation between application developers and those writing code for constrained embedded devices. But things are now changing. The intersection of the embedded world with IoT, data science, and machine learning, together with the deeper co-operation between the software and hardware communities, is driving innovation. The Embedded Online Conference, artfully organised by Jacob Beningo, represented exactly this cross-section, shining a light on some of the most interesting areas in the embedded world. Machine learning on microcontrollers, using test-driven development to reduce bugs, and programming an FPGA in Python are all things that, a few years ago, had little to do with the IoT and embedded industry.

This blog is the first part of a series discussing these new and exciting changes in the embedded industry. In this article, we will focus on machine learning techniques for low-power and cost-sensitive IoT and embedded Arm-based devices.

Think like a machine learning developer

Considered for many years an academic dead end of limited practical use, machine learning has gained a lot of renewed traction in recent years and has now become one of the most interesting trends in the IoT space. TinyML is the buzzword of the moment, and it was a hot topic at the Embedded Online Conference. However, for embedded developers, this buzzword can sometimes add an element of uncertainty.

The thought of developing IoT applications with the addition of machine learning can seem quite daunting. During Pete Warden’s session about the past, present and future of embedded ML, he described the embedded and machine learning worlds as very fragmented: there are so many hardware variants, RTOSes, toolchains and sensors that just compiling and running a simple ‘hello world’ program can take developers a long time. In the new world of machine learning, there’s a constant churn of new models, which often use different types of mathematical operations. Plus, exporting ML models to a development board or other targets is often more difficult than it should be.

Despite some of these challenges, change is coming. Machine learning on constrained IoT and embedded devices is being made easier by new development platforms, models that work out-of-the-box with these platforms, plus the expertise and increased resources from organisations like Arm and communities like tinyML. Here are a few must-watch talks to help in your embedded ML development: 

  • New to the tinyML space is Edge Impulse, a start-up that provides a solution for collecting device data, building a model around it, and deploying it to make sense of the data directly on the device. Edge Impulse CTO Jan Jongboom talks about how to combine a traditional signal processing pipeline with a machine learning model to detect anomalies and recognise different gestures. All of this has now been made even easier by the announced collaboration with Arduino, which further simplifies the journey from training a neural network to deploying it on your device.
  • Arm recently announced new machine learning IP that not only has the capabilities to deliver a huge uplift in performance for low-power ML applications, but will also help solve many issues developers are facing today in terms of fragmented toolchains. The new Cortex-M55 processor and Ethos-U55 microNPU will be supported by a unified development flow for DSP and ML workloads, integrating optimizations for machine learning frameworks. Watch this talk to learn how to get started writing optimized code for these new processors.
  • An early adopter implementing object detection with ML on a Cortex-M is the OpenMV camera - a low-cost module for machine vision algorithms. During the conference, embedded software engineer Lorenzo Rizzello walks you through how to get started with ML models and deploy them to the OpenMV camera to detect objects and the environment around the device.
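The pipeline Jan describes, signal processing first and a model on the extracted features second, can be sketched in a few lines. The Python below is my own simplified illustration (windowed RMS features plus a plain statistical threshold and made-up sample values, not Edge Impulse's actual implementation), but it captures the shape of an on-device anomaly detector:

```python
import math

def rms_features(signal, window=4):
    """DSP step: split raw samples into windows and compute the RMS
    energy of each window -- a compact feature for vibration-style data."""
    return [
        math.sqrt(sum(x * x for x in signal[i:i + window]) / window)
        for i in range(0, len(signal) - window + 1, window)
    ]

def fit_threshold(normal_features, k=3.0):
    """'Training': learn the mean and spread of features seen during
    normal operation; anything further than k standard deviations from
    the mean will be treated as anomalous."""
    n = len(normal_features)
    mean = sum(normal_features) / n
    var = sum((f - mean) ** 2 for f in normal_features) / n
    return mean, k * math.sqrt(var)

def is_anomaly(feature, mean, tolerance):
    return abs(feature - mean) > tolerance

# Normal operation: low-amplitude vibration around zero (invented data).
normal = [0.1, -0.2, 0.15, -0.1] * 8
mean, tol = fit_threshold(rms_features(normal))

# A new window with much higher energy is flagged as an anomaly,
# while a normal-looking window is not.
spike = rms_features([2.0, -2.1, 1.9, -2.2])[0]
assert is_anomaly(spike, mean, tol)
assert not is_anomaly(rms_features(normal)[0], mean, tol)
```

The appeal of this structure on constrained devices is that the expensive part (raw sampling) is reduced to a handful of features before the model ever runs, so the "model" can be as small as a threshold or as large as a neural network without changing the pipeline.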

Putting these machine learning technologies in the hands of embedded developers opens up new opportunities. I’m excited to see and hear what will come of all this amazing work and how it will improve development standards and transform embedded devices of the future.

If you missed the conference and would like to catch the talks mentioned above*, visit www.embeddedonlineconference.com

*This blog only features a small collection of all the amazing speakers and talks delivered at the Conference!


