In this IoT Central Video Feature, we present Jacob Sorber's video, "How to Get Started Learning Embedded Systems." Jacob is a computer scientist, researcher, teacher, and Internet of Things enthusiast. He teaches systems and networking courses at Clemson University and leads the PERSIST research lab. His “get started” videos are valuable for those early in their practice.
From Jacob: I've been meaning to start making more embedded systems videos — that is, computer science videos oriented to things you don't normally think of as computers (toys, robots, machines, cars, appliances). I hope this video helps you take the first step.
This blog is the final part of a series covering the insights I uncovered at the 2020 Embedded Online Conference.
In the previous blogs in this series, I discussed the opportunities we have in the embedded world to make the next-generation of small, low-power devices smarter and more capable. I also discussed the improved accessibility of embedded technologies, such as FPGAs, that are allowing more developers to participate, experiment, and drive innovation in our industry.
Today, I’d like to discuss another topic that is driving change in our industry and was heavily featured at the Embedded Online Conference – security.
Security is still being under-prioritised in our industry. You only have to watch the first 12 minutes of Maria "Azeria" Markstedter’s ‘Defending Against Hackers’ talk to see the lack of security features in widely used IoT devices today.
Security is often seen as a burden, but it doesn’t need to be. In recent years, many passionate security researchers have highlighted simple steps you can take to vastly improve the overall security of your system. In fact, by clearly identifying the threats and applying appropriate, well-defined mitigation techniques, systems become much harder to compromise. I’d recommend watching these talks to familiarize yourself with some of the different aspects of security you need to consider:
- Azeria is a security researcher and Arm Innovator who is passionate about educating developers on how to defend their applications against malicious attacks. In this talk, Maria focuses on shedding light on the most common exploit mitigations to consider for memory-corruption-based exploits when writing code for Arm Cortex-A processors, such as Execute Never (XN), Address Space Layout Randomisation (ASLR) and stack canaries (a minimal canary illustration follows this list). What’s really interesting is that it becomes clear from listening to Azeria’s talk, and from the audience comments, that there is a lot of low-hanging fruit that we, as developers, are not fully aware of. We should collectively start to see exploit mitigations as great tools to increase the security of our systems, no matter what type of code we are writing.
- In the same vein as Maria’s talk, Aljoscha Lautenbach discusses some of the most common vulnerabilities and security mechanisms for the IoT, but with a focus on cryptography. He focuses on how to use block cipher modes correctly, common insecure algorithms to watch out for, and the importance of entropy and initialization vectors (IVs).
- A different approach is taken by Colin O'Flynn in his talk, Hardware Hacking: Hands-On. I personally appreciate the angle that Colin takes, as it is something that, as software engineers, we tend to forget: the IoT and embedded devices running our code can be physically tampered with in order to extract our secrets. As Colin mentions, fully protecting against these attacks is usually costly, but there are many steps we can take to substantially mitigate the risk. The first step is to analyse the weaknesses of our system by performing a threat analysis, to ensure we are covering all bases when architecting and implementing our code. A popular framework addressing this is the Platform Security Architecture (PSA) that Jacob Beningo describes in detail during his talk. Colin then moves on to introduce practical tools and techniques that you can use to test the ability of your systems to resist physical attacks.
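To make the stack-canary mitigation concrete, here is a deliberately vulnerable C sketch (my own illustration, not code from any of the talks). Compiled with GCC or Clang and -fstack-protector-strong, the compiler places a canary value between the buffer and the saved return address; an overflow corrupts the canary first, and the runtime aborts instead of handing control flow to an attacker.

```c
/* Illustrative only: a classic stack-smashing target.
   Build with: gcc -fstack-protector-strong overflow.c
   A long argv[1] overruns buf; the canary check then aborts the
   program before the corrupted return address can be used. */
#include <stdio.h>
#include <string.h>

static void greet(const char *name)
{
    char buf[16];          /* too small for hostile input       */
    strcpy(buf, name);     /* unbounded copy: the vulnerability */
    printf("Hello, %s\n", buf);
}

int main(int argc, char **argv)
{
    if (argc > 1)
        greet(argv[1]);
    return 0;
}
```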
The security community's passion for educating embedded software developers about system security flaws comes through both in the talks and in the answers to the questions submitted.
With the growing number of news headlines depicting compromised IoT devices, it is clear that security is no longer optional. The collaboration between the security researchers and the software and hardware communities I have seen at this and at many other conferences and events reassures me that we really are on the verge of putting security first.
It has been great to see so many talks at the Embedded Online Conference, highlighting the new opportunities for developers in the embedded world. If you missed the conference and would like to catch the talks mentioned above*, visit www.embeddedonlineconference.com
*This blog only features a small collection of all the amazing speakers and talks delivered at the Conference!
In case you missed the previous posts in this series, here they are:
- Part 1, Embedded Online Conference – Developing for a Changing World
- Part 2, Embedded Online Conference – Software Defined Hardware
Abstract
Blockchain (BC) in the Internet of Things (IoT) is a novel technology that provides a decentralized, distributed, public, real-time ledger for storing transactions among IoT nodes. A blockchain is a series of blocks, each linked to its predecessor; every block contains its own cryptographic hash, the hash of the previous block, and its data. Transactions in BC are the basic units used to transfer data between IoT nodes. IoT nodes are diverse smart physical devices with embedded sensors, actuators, and programs, able to communicate with other IoT nodes. The role of BC in IoT is to provide a procedure for keeping secured records of data across IoT nodes. BC is a secure technology that can be used publicly and openly, and IoT requires this kind of technology to allow secure communication among IoT nodes in a heterogeneous environment. Transactions in BC can be traced and explored by anyone who is authenticated to communicate within the IoT, so BC in IoT may help improve communication security. In this paper, I explore this approach and its opportunities and challenges.
Keywords: Blockchain, Internet of Things (IoT), Cryptography, Security, Communication.
INTRODUCTION
The IoT is growing exponentially year by year, with ambitions in 5G technologies such as smart homes and cities, e-Health, and distributed intelligence, but it faces challenges in security and privacy. IoT devices are connected in a decentralized fashion, so it is very complex to apply standard existing security techniques to communication among IoT nodes. BC is a technology that provides security for transactions among IoT devices: a decentralized, distributed, publicly available shared ledger that stores the data of blocks processed and verified in the IoT network. The data stored in the public ledger is managed automatically using a peer-to-peer topology. In BC, transactions are fired in the form of blocks among IoT nodes; the blocks are linked to each other, each holding the hash of its predecessor. Blockchain and IoT work together in a framework of IoT and cloud integration, and in the future BC could revolutionize IoT communication [1]. The goals of BC and IoT integration can be summarized as follows.
Figure 1: Blockchains and IoT
- i) Decentralized framework: This approach is common to IoT and BC. It removes the centralized system and provides a decentralized framework, improving the fault tolerance and performance of the overall system.
- ii) Security: In the BC, transactions among nodes are secured, a novel approach for secure communication. BC allows IoT devices to communicate with each other in a secure way.
- iii) Identification: In IoT, every connected device is uniquely identified with a unique ID. Every block in BC is also uniquely identified. So, BC is a trusted technology that provides uniquely identified data stored in a public ledger.
- iv) Reliability: IoT nodes in BC have the capability to authenticate information passed in the network. The data is reliable because it is verified by the miners before entering the BC; only verified blocks can enter.
- v) Autonomy: In BC, all IoT nodes are free to communicate with any node in the network without a centralized authority.
- vi) Scalability: In BC, IoT devices communicate over a highly available, distributed intelligence network that connects to the destination device in real time to exchange information.
The rest of the paper is organized as follows: Section 2 presents the literature survey, Section 3 introduces the role of BC in IoT, Section 4 presents the opportunities of the integrated approach, Section 5 presents the challenges, and Section 6 concludes.
II. LITERATURE SURVEY
Security and privacy in communication among IoT devices received a great deal of attention in 2017 and 2018, and several papers were published during those years. Back in 1990, Stuart Haber and W. Scott Stornetta wrote an article [3] on exchanging a document privately without storing any information at the time-stamping service. The idea behind blockchains comes from [3], but the first blockchain was presented by Satoshi Nakamoto in 2008, in a paper where blocks were added to a chain to form a blockchain [4]. In [5], the authors presented "IoTChain" for authenticating information exchanged between two nodes in an IoT network and gave an algorithm for exchanging information between IoT and blockchains (Fig. 2) [5]; they focus on the authorization aspect of security in the IoTChain framework.
Figure 2: IoTChain framework
In [6], the authors explored a cloud-and-MANET framework to connect smart devices in the Internet of Things and provide communication security. In [7], the authors present an internet-cloud framework, a good approach to providing secure communication for IoT devices. In [8], the authors provide a middleware framework in the cloud-MANET architecture for accessing data among IoT devices. Articles [9,10] address reliability in communication among IoT nodes. Articles [11,12,13,14,15] provide mobility models for communication in 5G networks. In [16], a fuzzy-logic-based mobility framework for communication security is explained. In [17], the researchers present a survey on blockchains and IoT, along with the idea of BC-IoT security for developing IoT apps with the power of BCs.
III. THE ROLE OF BC IN IoT
The IoT enables connected physical things to exchange information in a heterogeneous network [18]. The IoT can be divided into the following sections.
- Physical Things: The IoT provides a unique ID for each connected thing in the network. Physical things are able to exchange data with other IoT nodes.
- Gateways: Gateways are devices that sit between the physical things and the cloud, ensuring that connections are established and that the network is secured.
- Networking: It is used to control the flow of data and to establish the shortest route among IoT nodes.
- Cloud: It is used to store and compute the data.
The BC is a chain of verified, cryptographically linked blocks of transactions held by the devices connected in a network. Block data are stored in a digital ledger that is publicly shared and distributed, and the BC provides secure communication in the IoT network. A blockchain can be private, public, or consortium, each with different properties. The following table differentiates the kinds of blockchains.
Table 1: Kinds of Blockchains and their properties
| BC / Properties | Efficiency | Decentralized | Permissioned consensus | Tamper-prone | Reading | Consensus determined by |
|---|---|---|---|---|---|---|
| Private BC | Good | No | Yes | Can be | Can be public | One organization |
| Public BC | Worse | Yes | No | No | Public | All miners |
| Consortium BC | Good | Sometimes | Yes | Can be | Can be public | IoT devices |
A blockchain database has a decentralized trust model, high security, broad public accessibility, privacy ranging from low to high, and transferable identities, whereas a centralized database has a centralized trust model, lower security, limited public access, high privacy, and non-transferable identities. By these properties, blockchain storage is more advanced than centralized storage.
Figure 3: (a) Centralized (b) Decentralized (c) Distributed
The following platforms are used to develop IoT applications using blockchain technology.
- IOTA: IOTA is a new platform for blockchain and IoT, billed as a next-generation blockchain. It facilitates high data integrity, high transaction performance, and high block validity while using fewer resources, resolving several limitations of blockchains [19].
- IOTIFY: It provides a web-based Internet of Things solution that minimizes the limitations of blockchain technology in the form of custom apps [20].
- iExec: It is an open-source, blockchain-based tool that gives your apps the advantages of the decentralized cloud [21].
- Xage: It is a secure blockchain platform for IoT that increases automation and secures information [22].
- SONM: It is a decentralized, blockchain-based fog computing platform that provides secure cloud services.
Together, IoT and blockchains are increasing business opportunities and opening new markets in which everyone, and everything, can communicate in real time with authenticity, privacy, and security in a decentralized fashion. The integration of these technologies will change the current world: devices will communicate without human involvement at various stages. The objective of the framework is to get secured data to the right location, in the right format, in real time. BC could be used to track billions of connected IoT things, coordinate them, enable transaction processing, resolve or eliminate failures, and create a flexible ecosystem for running physical things. BC applies hashing techniques to blocks of data to provide information privacy for users.
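To make the hash-linking idea concrete, here is a minimal C sketch of the structure the paper describes: each block stores its own hash plus the hash of its predecessor, so tampering with any block invalidates every later link. The struct layout and the non-cryptographic FNV-1a hash are illustrative assumptions; a real chain would use a cryptographic hash such as SHA-256, plus a consensus protocol on top.

```c
/* Toy hash-linked chain: illustrative assumptions throughout.
   FNV-1a stands in for a real cryptographic hash (e.g. SHA-256). */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

typedef struct {
    uint32_t index;
    uint64_t prev_hash;   /* link to the previous block           */
    char     data[64];    /* transaction payload from an IoT node */
    uint64_t hash;        /* hash over index, prev_hash and data  */
} Block;

static uint64_t fnv1a(const void *buf, size_t len, uint64_t h)
{
    const uint8_t *p = buf;
    while (len--) { h ^= *p++; h *= 1099511628211ULL; }
    return h;
}

static uint64_t block_hash(const Block *b)
{
    uint64_t h = 14695981039346656037ULL;   /* FNV offset basis */
    h = fnv1a(&b->index, sizeof b->index, h);
    h = fnv1a(&b->prev_hash, sizeof b->prev_hash, h);
    return fnv1a(b->data, strlen(b->data), h);
}

int main(void)
{
    const char *tx[] = { "genesis", "A -> B: 5", "B -> C: 2" };
    Block chain[3] = {0};

    for (uint32_t i = 0; i < 3; i++) {       /* build the chain */
        chain[i].index = i;
        chain[i].prev_hash = i ? chain[i - 1].hash : 0;
        strncpy(chain[i].data, tx[i], sizeof chain[i].data - 1);
        chain[i].hash = block_hash(&chain[i]);
    }

    for (uint32_t i = 0; i < 3; i++) {       /* verify the links */
        int ok = chain[i].hash == block_hash(&chain[i]) &&
                 chain[i].prev_hash == (i ? chain[i - 1].hash : 0);
        printf("block %u: %s\n", i, ok ? "valid" : "INVALID");
    }
    return 0;
}
```

Changing the data of any block makes its stored hash stale and breaks every subsequent block's prev_hash link, which is exactly the property that lets miners and nodes detect tampering.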
IV. OPPORTUNITIES
The BC-IoT integration approach has many remarkable opportunities and opens new doors for both technologies. Some of the opportunities are described below.
- Building trust between parties: The BC-IoT approach builds trust among the various connected devices because of its security features. Only verified devices can communicate in the network, and every block of transactions is verified by the miners before it enters the BC.
- Reduced cost: This approach reduces cost because devices communicate directly, eliminating all third-party nodes between the sender and the receiver.
Figure 4: Opportunities in BC-IoT
- Reduced time: This approach reduces transaction time dramatically, from days to seconds.
- Security and privacy: It provides security and privacy for devices and information.
- Social services: This approach provides public and social services to the connected devices. All connected devices can communicate and exchange information among themselves.
- Financial services: This approach transfers funds in a secure way without a third party, providing fast, secure, and private financial services while reducing transfer cost and time.
- Risk management: This approach plays an important role in analyzing and reducing the risk of failing resources and transactions.
V. CHALLENGES
The integration of IoT and BC faces several challenges, including scalability, storage, skills, and discovery. The main challenges of the integrated approach are the following.
- Scalability: The BC can bog down under a heavy transaction load; Bitcoin's ledger already exceeded 197 GB of storage in 2019 [24]. If IoT integrates with BC, the load will become far heavier than it is today.
- Storage: The digital ledger must be stored on every IoT node. Over time its size will grow, becoming a heavy burden on each and every connected device.
- Lack of skills: The BC is a new technology, known to relatively few people, so training people in the technology is itself a challenge.
Figure 5: Challenges in BC-IoT
- Discovery and integration: BC was not designed for IoT, and it is challenging for connected devices to discover one another across BC and IoT. IoT nodes can discover each other, yet they may be unable to discover and integrate with the BC of another device.
- Privacy: The ledger is distributed publicly to every connected node, and every node can see the ledger's transactions. So, privacy is also a challenge in the integrated approach.
- Interoperability: The BC can be public or private, so interoperability between public and private blockchains is also a challenge in the BC-IoT approach.
- Rules and regulation: BC-IoT will operate globally, so it faces many different rules and regulations when being implemented around the world.
VI. CONCLUSION
This article explored the novel BC-IoT approach, describing many of its opportunities and challenges and listing the available platforms. The approach could be the future of the internet: it can overhaul the current internet system, replacing it with one in which every smart device connects to other devices over a peer-to-peer network in real time. It can reduce today's costs and delays and deliver the right information to the right device in real time, so it can be very useful in the future.
VII. REFERENCES
- Reyna, Ana, et al. "On blockchain and its integration with IoT. Challenges and opportunities." Future Generation Computer Systems (2018). DOI: https://doi.org/10.1016/j.future.2018.05.046
- Zheng, Zibin, et al. "Blockchain challenges and opportunities: A survey." International Journal of Web and Grid Services 14.4 (2018): 352-375. DOI: https://doi.org/10.1504/IJWGS.2018.095647
- Haber, Stuart, and W. Scott Stornetta. "How to time-stamp a digital document." Conference on the Theory and Application of Cryptography. Springer, Berlin, Heidelberg, 1990.
- Nakamoto, Satoshi. "Bitcoin: A peer-to-peer electronic cash system." (2008).
- Alphand, Olivier, et al. "IoTChain: A blockchain security architecture for the Internet of Things." Wireless Communications and Networking Conference (WCNC), 2018 IEEE. IEEE, 2018.
- Alam T, Benaida M. The Role of Cloud-MANET Framework in the Internet of Things (IoT). International Journal of Online Engineering (iJOE). 2018;14(12):97-111. DOI: https://doi.org/10.3991/ijoe.v14i12.8338
- Alam T, Benaida M. CICS: Cloud–Internet Communication Security Framework for the Internet of Smart Devices. International Journal of Interactive Mobile Technologies (iJIM). 2018 Nov 1;12(6):74-84. DOI: https://doi.org/10.3991/ijim.v12i6.6776
- Alam, Tanweer. "Middleware Implementation in Cloud-MANET Mobility Model for Internet of Smart Devices", International Journal of Computer Science and Network Security, 17(5), 2017. Pp. 86-94
- Tanweer Alam, "A Reliable Communication Framework and Its Use in Internet of Things (IoT)", International Journal of Scientific Research in Computer Science, Engineering and Information Technology (IJSRCSEIT), Volume 3, Issue 5, pp.450-456, May-June.2018 URL: http://ijsrcseit.com/CSEIT1835111
- Alam, Tanweer. (2018) "A reliable framework for communication in internet of smart devices using IEEE 802.15.4." ARPN Journal of Engineering and Applied Sciences 13(10), 3378-3387
- Alam, Tanweer, Arun Pratap Srivastava, Sandeep Gupta, and Raj Gaurang Tiwari. "Scanning the Node Using Modified Column Mobility Model." Computer Vision and Information Technology: Advances and Applications 455 (2010).
- Alam, Tanweer, Parveen Kumar, and Prabhakar Singh. "SEARCHING MOBILE NODES USING MODIFIED COLUMN MOBILITY MODEL.", International Journal of Computer Science and Mobile Computing, (2014).
- Alam, Tanweer, and B. K. Sharma. "A New Optimistic Mobility Model for Mobile Ad Hoc Networks." International Journal of Computer Applications 8.3 (2010): 1-4. DOI: https://doi.org/10.5120/1196-1687
- Singh, Parbhakar, Parveen Kumar, and Tanweer Alam. "Generating Different Mobility Scenarios in Ad Hoc Networks.", International Journal of Electronics Communication and Computer Technology, 4(2), 2014
- Sharma, Abhilash, Tanweer Alam, and Dimpi Srivastava. "Ad Hoc Network Architecture Based on Mobile Ipv6 Development." Advances in Computer Vision and Information Technology (2008): 224.
- Alam, Tanweer. "Fuzzy control based mobility framework for evaluating mobility models in MANET of smart devices." ARPN Journal of Engineering and Applied Sciences 12, no. 15 (2017): 4526-4538.
- Conoscenti, Marco, Antonio Vetro, and Juan Carlos De Martin. "Blockchain for the Internet of Things: A systematic literature review." Computer Systems and Applications (AICCSA), 2016 IEEE/ACS 13th International Conference of. IEEE, 2016.
- Gubbi, Jayavardhana, et al. "Internet of Things (IoT): A vision, architectural elements, and future directions." Future generation computer systems 29.7 (2013): 1645-1660. DOI: https://doi.org/10.1016/j.future.2013.01.010
- https://www.iota.org
- https://iotify.org
- https://iex.ec/overview
- https://xage.com
- https://www.i-scoop.eu/blockchain-distributed-ledger-technology/blockchain-iot
- https://www.statista.com/statistics/647523/worldwide-bitcoin-blockchain-size
Reference:
Tanweer Alam. "Blockchain and its Role in the Internet of Things (IoT)." International Journal of Scientific Research in Computer Science, Engineering and Information Technology, Vol 5(1), 2019. DOI: 10.32628/CSEIT195137
Helium, the company behind one of the world’s first peer-to-peer wireless networks, is announcing the introduction of Helium Tabs, its first branded IoT tracking device that runs on The People’s Network. In addition, after launching its network in 1,000 cities in North America within one year, the company is expanding to Europe to address growing market demand with Helium Hotspots shipping to the region starting July 2020.
Since its launch in June 2019, Helium quickly grew its footprint with Hotspots covering more than 700,000 square miles across North America. Helium is now expanding to Europe to allow for seamless use of connected devices across borders. Powered by entrepreneurs looking to own a piece of the people-powered network, Helium’s open-source blockchain technology incentivizes individuals to deploy Hotspots and earn Helium (HNT), a new cryptocurrency, for simultaneously building the network and enabling IoT devices to send data to the Internet. Connected with other nearby Hotspots, these devices act as the backbone of the network.
“We’re excited to launch Helium Tabs at a time where we’ve seen incredible growth of The People’s Network across North America,” said Amir Haleem, Helium’s CEO and co-founder. “We could not have accomplished what we have done, in such a short amount of time, without the support of our partners and our incredible community. We look forward to launching The People’s Network in Europe and eventually bringing Helium Tabs and other third-party IoT devices to consumers there.”
Introducing Helium Tabs that Run on The People’s Network
Unlike other tracking devices, Tabs uses LongFi technology, which combines the LoRaWAN wireless protocol with the Helium blockchain and provides network coverage up to 10 miles away from a single Hotspot. This is a game-changer compared to WiFi- and Bluetooth-enabled tracking devices, which only work up to 100 feet from a network source. What’s more, due to Helium’s unique blockchain-based rewards system, Hotspot owners are rewarded with Helium (HNT) each time a Tab connects to the network.
In addition to its increased growth with partners and customers, Helium has also seen accelerated expansion of its Helium Patrons program, which was introduced in late 2019. All three combined have helped to strengthen its network.
Patrons are entrepreneurial customers who purchase 15 or more Hotspots to help blanket their cities with coverage and enable the customers who use the network. In return, they receive discounts, priority shipping, network tools, and Helium support. Currently, the program has more than 70 Patrons throughout North America and is expanding to Europe.
Key brands that use the Helium Network include:
- Nestlé ReadyRefresh, a beverage delivery service company
- Agulus, an agricultural tech company
- Conserv, a collections-focused environmental monitoring platform
Helium Tabs will initially be available to existing Hotspot owners for $49. The Helium Hotspot is now available for purchase online in Europe for €450.
This blog is the second part of a series covering the insights I uncovered at the 2020 Embedded Online Conference.
Last week, I wrote about the fascinating intersection of the embedded and IoT world with data science and machine learning, and the deeper co-operation I am experiencing between software and hardware developers. This intersection is driving a new wave of intelligence on small and cost-sensitive devices.
Today, I’d like to share with you my excitement around how far we have come in the FPGA world: what only a few individuals in the world used to be able to do is on the verge of becoming far more accessible.
I’m a hardware guy and I started my career writing in VHDL at university. I then started working on designing digital circuits with Verilog and C and used Python only as a way of automating some of the most tedious daily tasks. More recently, I have started to appreciate the power of abstraction and simplicity that is achievable through the use of higher-level languages, such as Python, Go, and Java. And I dream of a reality in which I’m able to use these languages to program even the most constrained embedded platforms.
At the Embedded Online Conference, Clive Maxfield talked about FPGAs. He mentions that “in a world of 22 million software developers, there are only around a million core embedded programmers and even fewer FPGA engineers.” But things are changing. As an industry, we are moving towards a world in which taking advantage of the capabilities of a reconfigurable hardware device, such as an FPGA, is becoming easier.
- What the FAQ is an FPGA, by Max the Magnificent, starts with what an FPGA is and the beauties of parallelism in hardware – something that took me quite some time to grasp when I first started writing in HDL (hardware description languages). This is not only the case for an FPGA, but it also holds true in any digital circuit. The cool thing about an FPGA is the fact that at any point you can just reprogram the whole board to operate in a different hardware configuration, allowing you to accelerate a completely new set of software functions. What I find extremely interesting is the new tendency to abstract away even further, by creating HLS (high-level synthesis) representations that allow a wider set of software developers to start experimenting with programmable logic.
- The concept of extending the way FPGAs can be programmed to an even wider audience is taken to the next level by Adam Taylor. He talks about PYNQ, an open-source project that allows you to program Xilinx boards in Python. This is extremely interesting as it opens up the world of FPGAs to even more software engineers. Adam demonstrates how you can program an FPGA to accelerate machine learning operations using the PYNQ framework, from creating and training a neural network model to running it on Arm-based Xilinx FPGA with custom hardware accelerator blocks in the FPGA fabric.
FPGAs have always carried the stigma of being hard to work with. The idea of programming an FPGA in Python was something few people had even imagined a few years ago. But today, thanks to the many efforts all around our industry, embedded technologies, including FPGAs, are being made more accessible, allowing more developers to participate, experiment, and drive innovation.
I’m excited that more computing technologies are being put in the hands of more developers, improving development standards, driving innovation, and transforming our industry for the better.
If you missed the conference and would like to catch the talks mentioned above*, visit www.embeddedonlineconference.com
Part 3 of my review can be viewed by clicking here
In case you missed the previous post in this blog series, here it is:
*This blog only features a small collection of all the amazing speakers and talks delivered at the Conference!
I recently joined the Embedded Online Conference thinking I was going to gain new insights on embedded and IoT techniques. But I was pleasantly surprised to see a huge variety of sessions with a focus on modern software development practices. It is becoming more and more important to gain familiarity with a more modern software approach, even when you’re programming a constrained microcontroller or an FPGA.
Historically, there has been a large separation between application developers and those writing code for constrained embedded devices. But things are now changing: the embedded world is intersecting with the world of IoT, data science, and ML, and the deeper co-operation between the software and hardware communities is driving innovation. The Embedded Online Conference, artfully organised by Jacob Beningo, represented exactly this cross-section, projecting light on some of the most interesting areas in the embedded world. Machine learning on microcontrollers, using test-driven development to reduce bugs, and programming an FPGA in Python are all things that, a few years ago, had little to do with the IoT and embedded industry.
This blog is the first part of a series discussing these new and exciting changes in the embedded industry. In this article, we will focus on machine learning techniques for low-power and cost-sensitive IoT and embedded Arm-based devices.
Think like a machine learning developer
Considered for many years an academic dead end of limited practical use, machine learning has gained a lot of renewed traction recently, and it has now become one of the most interesting trends in the IoT space. TinyML is the buzzword of the moment, and it was a hot topic at the Embedded Online Conference. However, for embedded developers, this buzzword can sometimes add an element of uncertainty.
The thought of developing IoT applications with the addition of machine learning can seem quite daunting. During Pete Warden’s session about the past, present and future of embedded ML, he described the embedded and machine learning worlds as deeply fragmented: there are so many hardware variants, RTOSes, toolchains and sensors that just compiling and running a simple ‘hello world’ program can take developers a long time. In the new world of machine learning, there’s a constant churn of new models, which often use different types of mathematical operations. Plus, exporting ML models to a development board or other targets is often more difficult than it should be.
Despite some of these challenges, change is coming. Machine learning on constrained IoT and embedded devices is being made easier by new development platforms, models that work out-of-the-box with these platforms, plus the expertise and increased resources from organisations like Arm and communities like tinyML. Here are a few must-watch talks to help in your embedded ML development:
- New to the tinyML space is Edge Impulse, a start-up that provides a solution for collecting device data, building a model around it, and deploying it to make sense of the data directly on the device. Edge Impulse CTO Jan Jongboom talks about how to pair a traditional signal processing pipeline for detecting anomalies with a machine learning model for detecting different gestures. All of this has been made even easier by the announced collaboration with Arduino, which further simplifies the journey of training a neural network and deploying it on your device.
- Arm recently announced new machine learning IP that not only has the capabilities to deliver a huge uplift in performance for low-power ML applications, but will also help solve many issues developers are facing today in terms of fragmented toolchains. The new Cortex-M55 processor and Ethos-U55 microNPU will be supported by a unified development flow for DSP and ML workloads, integrating optimizations for machine learning frameworks. Watch this talk to learn how to get started writing optimized code for these new processors.
- An early adopter implementing object detection with ML on a Cortex-M is the OpenMV camera, a low-cost module for machine vision algorithms. During the conference, embedded software engineer Lorenzo Rizzello walks you through how to get started with ML models and deploy them to the OpenMV camera to detect objects and the environment around the device.
Putting these machine learning technologies in the hands of embedded developers opens up new opportunities. I’m excited to see and hear what will come of all this amazing work and how it will improve development standards and transform embedded devices of the future.
If you missed the conference and would like to catch the talks mentioned above*, visit www.embeddedonlineconference.com
*This blog only features a small collection of all the amazing speakers and talks delivered at the Conference!
Part 2 of my review can be viewed by clicking here
Recovering from a system failure or a software glitch can be no easy task. The longer a fault persists, the harder it can be to identify and recover from. An external watchdog is an important and critical tool in the embedded systems engineer's toolbox. Here are five tips that should be taken into account when designing a watchdog system.
Tip #1 – Monitor a heartbeat
The simplest function that an external watchdog can have is to monitor a heartbeat produced by the primary application processor. Monitoring the heartbeat should serve two distinct purposes. First, the microcontroller should only generate the heartbeat after functional checks have confirmed that the software is behaving correctly. Second, the heartbeat should be able to reveal whether the real-time response of the system has been jeopardized.
Monitoring the heartbeat for software functionality and real-time response can be done using a simple, “dumb” external watchdog. The external watchdog should allow assigning a heartbeat period along with a window that the heartbeat must appear within; the purpose of the window is to let the watchdog detect that the real-time response of the system is compromised. If either the functional or the real-time check fails, the watchdog attempts to recover the system by resetting the application processor.
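Here is a minimal sketch of what that windowed check might look like on a monitoring device. The window constants and the two hardware hooks are hypothetical names of my own; a true "dumb" watchdog part fixes the window in silicon rather than in code.

```c
/* Windowed heartbeat monitor sketch (Tip #1). The HAL hooks
   ms_since_last_heartbeat() and assert_processor_reset() are
   hypothetical, as are the 90-110 ms window limits. */
#include <stdint.h>

#define HEARTBEAT_MIN_MS  90U   /* earlier than this: runaway code  */
#define HEARTBEAT_MAX_MS 110U   /* later than this: missed deadline */

extern uint32_t ms_since_last_heartbeat(void);
extern void     assert_processor_reset(void);

/* Called periodically: catches a heartbeat that never arrives. */
void watchdog_poll(void)
{
    if (ms_since_last_heartbeat() > HEARTBEAT_MAX_MS)
        assert_processor_reset();      /* functional check failed */
}

/* Called from the heartbeat pin's edge interrupt: catches a pulse
   arriving too early, i.e. real-time behavior is compromised. */
void heartbeat_edge_isr(void)
{
    if (ms_since_last_heartbeat() < HEARTBEAT_MIN_MS)
        assert_processor_reset();      /* outside the window */
}
```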
Tip #2 – Use a low capability MCU
External watchdogs that monitor a heartbeat are relatively low cost, but they severely limit the capabilities and recovery options of the watchdog system. A low-capability microcontroller can cost nearly the same as an external watchdog timer, so why not add some intelligence to the watchdog and use a microcontroller? The microcontroller firmware can implement the windowed heartbeat monitoring, with the addition of so much more. A “smart” watchdog like this is sometimes referred to as a supervisor or safety watchdog and has been used for many years in industries such as automotive. Generally a microcontroller watchdog has been reserved for safety-critical applications, but given today's development tools and hardware costs, it can be cost effective in other applications as well.
Tip #3 – Supervise critical system functions
The decision to use a small microcontroller as a watchdog opens nearly endless possibilities for how the watchdog can be used. One of the first roles of a smart watchdog is usually to supervise critical system functions, such as a system current or a sensor state. For example, the watchdog could supervise a current by taking an independent measurement and providing that value to the application processor, which then compares it with its own reading. If the two disagreed, the system would execute whatever fault tree was deemed appropriate for the application.
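As a sketch of that cross-check, the smart watchdog might expose a routine like the following; the function names, the independent ADC path, and the 10% tolerance are assumptions for illustration.

```c
/* Supervising a system current (Tip #3). The watchdog measures the
   current on its own ADC channel and compares it against the value
   the application processor reports. All names and the tolerance
   are hypothetical. */
#include <stdint.h>

#define TOLERANCE_PERCENT 10U

extern uint32_t watchdog_read_current_ma(void);  /* independent measurement */
extern void     execute_fault_tree(void);        /* app-defined recovery    */

void supervise_current(uint32_t app_reading_ma)
{
    uint32_t ours = watchdog_read_current_ma();
    uint32_t diff = (ours > app_reading_ma) ? ours - app_reading_ma
                                            : app_reading_ma - ours;

    /* Disagreement beyond tolerance: run the recovery path deemed
       appropriate for the application. */
    if (diff * 100U > ours * TOLERANCE_PERCENT)
        execute_fault_tree();
}
```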
Tip #4 – Observe a communication channel
Sometimes an embedded system can appear to the watchdog and the application processor to be operating as expected while, to an external observer, it is non-responsive. In such cases it can be useful to tie the smart watchdog to a communication channel such as a UART. When the watchdog is connected to a communication channel it can monitor not only channel traffic but also commands that are specific to the watchdog. A great example of this is a watchdog designed for a small satellite that monitors radio communications between the flight computer and the ground station. If the flight computer becomes non-responsive to the radio, a command can be sent to the watchdog, which executes it and resets the flight computer.
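A sketch of that idea: the watchdog's UART receive interrupt treats any traffic as proof of life and also parses a tiny command addressed to the watchdog itself. The "!R" framing and both helper functions are invented for illustration.

```c
/* Watchdog observing a communication channel (Tip #4). Any received
   byte restarts an activity timer (prolonged silence would indicate a
   non-responsive system); the two-byte sequence "!R" commands a reset
   of the main processor. Framing and helpers are hypothetical. */
#include <stdint.h>

extern void assert_processor_reset(void);
extern void restart_activity_timer(void);

void uart_rx_isr(uint8_t byte)
{
    static uint8_t got_prefix;

    restart_activity_timer();        /* traffic proves the link is alive */

    if (!got_prefix) {
        got_prefix = (byte == '!');  /* start of a watchdog command */
    } else {
        if (byte == 'R')             /* "!R": operator forces a reset */
            assert_processor_reset();
        got_prefix = 0;
    }
}
```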
Tip #5 – Consider an externally timed reset function
The question of who is watching the watchdog is undoubtedly on the minds of many engineers when using a microcontroller for a watchdog. Using a microcontroller to implement extra features adds some complexity and a new software element to the system. If the watchdog itself goes off into the weeds, how will it recover? One option is to use the simple external watchdog timer discussed earlier: the smart watchdog generates a heartbeat to keep itself from being reset by the timer. Another option is to have the application processor act as the watchdog for the watchdog. Careful thought needs to be given to the best way to ensure both processors remain functioning as intended.
Conclusion
The purpose of the smart watchdog is to monitor the system and the primary microcontroller to ensure that they operate as expected. During the design of a system watchdog it can be very tempting to allow the number of features supported to creep. Developers need to keep in mind that as the complexity of the smart watchdog increases so does the probability that the watchdog itself will contain potential failure modes and bugs. Keeping the watchdog simple and to the minimum necessary feature set will ensure that it can be exhaustively tested and proven to work.
Originally Posted here
In the comments section of my 2020 embedded salary survey, quite a few respondents felt that much of the embedded world is being subsumed by canned solutions. Will OSes like Linux and cheap, powerful boards like the Raspberry Pi and Arduino replace traditional engineering? Has that already happened?
A number of people complained their colleagues no longer understand low-level embedded things like DMA, chip selects, diddling I/O registers, and the like. They feel these platforms isolate the engineer from those details.
Part of me says yeah! That's sort of what we want. Reuse and abstraction means the developer can focus on the application rather than bringing up a proprietary board. Customers want solutions and don't care about implementation details. We see these abstractions working brilliantly when we buy a TCP/IP stack, often the better part of 100K lines of complex code. Who wants to craft those drivers?
Another part of me says "save me from these sorts of products." It is fun to design a board. To write the BSP and toss bits at peripheral registers. Many of us got a rush the first time we made an LED blink or a motor spin. I still find that fulfilling.
So what's the truth? Is the future all Linux and Pis?
The answer is a resounding "no." A search for "MCU" on Digi-Key gets 89,149 part numbers. Sure, many of these are dups with varying packages and the like, but that's still a ton of controllers.
Limiting that search to 8 bitters nets 30,574 parts. I've yet to see Linux run on a PIC or other tiny device.
Or filter to Cortex-M devices only. You still get 16,265 chips. None of those run Linux, Windows, BSD, or any other general-purpose OS. These are all designed into proprietary boards. Those engineers are working on the bare metal... and having a ton of fun.
The bigger the embedded world gets the more applications are found. Consider machine learning. That's for big iron, for Amazon Web Services, right? Well, partly. Eta Compute and other companies are moving ML to the edge with smallish MCUs running at low clock rates with limited memory. Power consumption rules, and 2 GB of RAM at 1 GHz just doesn't cut it when harvesting tiny amounts of energy.
Then there's cost. If you can reduce the cost of a product made in the millions by just a buck the business prospers. Who wants a ten dollar CPU when a $0.50 microcontroller will do?
Though I relish low-level engineering, our job is to get products to market as efficiently as possible. Writing drivers for a timer is sort of silly when you realize that thousands of engineers using the same part are doing the same thing. Sure, semi vendors often deliver code to handle all of this, but in my experience most of that is either crap or uses the peripherals in the most limited ways. A few exceptions exist, such as Renesas's Synergy. They go so far as to guarantee that code. My fiddling with it leaves me impressed, though the learning curve is steep. But that sort of abstraction surely must be a part of this industry going forward. Just as we don't write protocol stacks and RTOSes any more, canned code will become more common.
Linux and canned boards have important roles in this business. But an awful lot of us will still work on proprietary systems.
View original post here
For novel ideas about building embedded systems (both hardware and firmware), join the 35,000 engineers who subscribe to The Embedded Muse, a free biweekly newsletter. The Muse has no hype and no vendor PR. Click here to subscribe
All the representations I’ve seen regarding social distancing guidelines for engineers have depicted what appear to be two male members of the species, which is “So mid-20th century, my dear!”
5 Tips for Expanding your Embedded Skills
As embedded systems engineers, we work in a field that is constantly changing. Not only does change come quickly, but the amount of work and the skills we need in order to do our jobs successfully are constantly expanding. A firmware engineer used to need to know the microcontroller hardware and assembly language. Today, they need to know the hardware, several languages, machine learning, security, and a dozen other topics. In today’s post, we are going to look at five ways to expand your skillset and stay ahead of the game.
Tip #1 – Take an online course
Taking an online course is a great way to enhance and add to your skillset. If anyone tries to tell you that you don’t need additional coursework, don’t let them fool you. I’ve often been called an expert in embedded systems, but just like everyone else, I need to take courses to learn and maintain my skillset. In fact, just this week I took a course on Test-Driven Development taught by James Grenning, the expert in TDD. I’ve been playing with TDD on and off for several years, but despite that familiarity, working with an expert in a subject matter will dramatically improve your skills. I was able to pick James’ brain on TDD, enhance my skills, and walk away with several action items to work on over the next several months.
Start by identifying an area of your own skillset that is deficient or rusty, or even an area in which you just want to move to the next level. Then find an expert on that topic and take an online, interactive, or self-paced course with them. (I won’t mention my own courses that you can find here … ooopps!)
Tip #2 – Read a book
Books can be a great way to enhance your skills. There are dozens of books on embedded system design that can easily be found at any bookstore or online, and some are better than others. I’ve started to write up reviews of the books that I’ve read in order to provide you with recommendations. This effort is just in its infancy and can be found at: https://www.beningo.com/?s=book (I’ll be adding a category to the blog in the near future).
You might also want to check out Jack Ganssle's book reviews, which you can find at: http://www.ganssle.com/bkreviews.htm
Books that I am currently working through myself that I’ve been finding to be fantastic so far include:
- TinyML
- Clean Code
- The Object-Oriented Thought Process
Tip #3 – Watch a webinar
Webinars are a great way to get a high-level understanding of a new skill or topic. I don’t think a day goes by where I don’t get an advertisement for a webinar in my inbox. Unfortunately, not all webinars are created equal. I’ve come across many webinars that sound fantastic, only to discover later that they are totally marketing-focused with little real technical information. I produce anywhere from 8 to 12 webinars per year and always try to include high-level theory, some low-level details, and then a practical example through a demonstration. It doesn’t always work out that way, and every now and then they undoubtedly flirt with being marketing versus technical, but I always try to make sure that developers get what they need and know where to go to dive deeper.
Over the coming months keep a close eye on webinars as a potential source to enhance your skills. I know that I’ll be attending several on Bluetooth Mesh networking (hoping they aren’t pure marketing pitches), and I will also be pulling together several of my own.
Tip #4 – Build something for fun
There is no better way to learn a new skill than to do something! I’ve always found that people who attend my webinars, courses, etc. learn more if there are demonstrations and hands-on materials. It’s great to read about machine learning or continuous integration servers, but unless you set one up, it’s just theory. We all know that the devil is in the details, and applying the skill is what sharpens it.
I highly recommend that developers build something for fun. More than a decade ago, when I wanted to learn how to design and lay out PCBs and work with USB firmware, I decided that I was going to develop a USB-controlled light bar. I went through an accelerated development schedule, designed the schematics and a PCB, had it fabricated, and then hand-soldered the parts. I wrote all the firmware and eventually had a working device. I learned so much building that simple light bar and even used it as an example during interviews when I was looking for a new job (this was before I started my business).
Even today, I will still pick a project when I want to learn something. When I was evaluating MicroPython, I built an internet-connected weather station. It forced me to work through many details and solve problems that I might not otherwise have considered if I hadn’t dived into the deep end.
Tip #5 – Find a mentor
The times I’ve accelerated my understanding of something the most have usually been under the guidance of a mentor or coach: someone who has mastered the skill you are trying to learn, has made every mistake, and can share their wisdom. It’s certainly possible to learn and advance without a mentor, but having feedback, and the ability to ask a question and get an educated response, can dramatically accelerate the process. That’s one of the reasons why I often host interactive webinars and even have a coaching and trusted-advisor offering for my clients. It’s just extremely helpful!
Conclusions
No matter how good you are at developing embedded software, hardware, and systems, if you don’t take the time to update your skills, then within just a few years you’ll find that everyone else is passing you by. You’ll be less efficient and find that you are struggling. Continuing education is critical for engineers to ensure that they stay up to date on the latest and greatest practices and contribute to their products' success.
Originally posted here
By Jack Ganssle
As Good As It Gets
How good does firmware have to be? How good can it be? Is our search for perfection, or near-perfection an exercise in futility?
Complex systems are a new thing in this world. Many of us remember the early transistor radios which sported a half dozen active devices, max. Vacuum tube televisions, common into the 70s, used 15 to 20 tubes, more or less equivalent to about the same number of transistors. The 1940s-era ENIAC computer required 18,000 tubes, so many that technicians wheeled shopping carts of spares through the room, constantly replacing those that burned out. Though that sounds like a lot of active elements, even the 25 year old Z80 chip used a quarter of that many transistors, in a die smaller than just one of the hundreds of thousands of resistors in the ENIAC.
Now the Pentium IV, merely one component of a computer, has 45 million transistors. A big memory chip might require a third of a billion. Intel predicts that later this decade their processors will have a billion transistors. I'd guess that the very simplest of embedded systems, like an electronic greeting card, requires thousands of active elements.
Software has grown even faster, especially in embedded applications. In 1975, 10,000 lines of assembly code was considered huge. Given the development tools of the day - paper tape, cassettes for mass storage, and crude teletypes for consoles - working on projects of this size was very difficult. Today 10,000 lines of C - representing perhaps three to five times as much assembly - is a small program. A cell phone might contain a million lines of C or C++, astonishing considering the device's small form factor and minuscule power requirements.
Another measure of software size is memory usage. The 256 byte (that's not a typo) EPROMs of 1975 meant even a measly 4k program used 16 devices. Clearly, even small embedded systems were quite pricey. Today? 128k of Flash is nothing, even for a tiny app. The switch from 8 to 16 bit processors, and then from 16 to 32 bitters, is driven more by addressing space requirements than raw horsepower.
In the late 70s Seagate introduced the first small Winchester hard disk, a 5 Mb 10 pound beauty that cost $1500. 5 Mb was more disk space than almost anyone needed. Now 20 Gb fits into a shirt pocket, is almost free, and fills in the blink of an eye.
So, our systems are growing rapidly in both size and complexity. And, I contend, in failure modes. Are we smart enough to build these huge applications correctly?
It's hard to make even a simple application perfect; big ones will possibly never be faultless. As the software grows it inevitably becomes more intertwined; a change in one area impacts other sections, often profoundly. Sometimes this is due to poor design; often, it's a necessary effect of system growth.
The hardware, too, is certainly a long way from perfect. Even mature processors usually come with an errata sheet, one that can rival the datasheet in size. The infamous Pentium divide bug was just one of many bugs - even today the Pentium 3's errata sheet (renamed "specification update") contains 83 issues. Motorola documents nearly a hundred problems in the MPC555.
I salute the vendors for making these mistakes public. Too many companies frustrate users by burying their mistakes.
What is the current state of the reliability of embedded systems? No one knows. It's an area devoid of research. Yet a lot of raw data is available, some of which suggests we're not doing well.
The Mars Pathfinder mission succeeded beyond anyone's dreams, despite a significant error that crashed the software during the lander's descent. A priority inversion problem - noticed on Earth but attributed to a glitch and ignored - caused numerous crashes. A well-designed watchdog timer recovery strategy saved the mission. This was a very instructive failure as it shows the importance of adding external hardware and/or software to deal with unanticipated software errors.
The August 15, 2001 issue of the Journal of the American Medical Association contained a study of recalls of pacemakers and implantable cardioverter-defibrillators. (Since these devices are implanted subcutaneously I can't imagine how a recall works). Surely designers of these devices are on the cutting edge of building the very best software. I hope. Yet between 1990 and 2000 firmware errors accounted for about 40% of the 523,000 devices recalled.
Over the ten years of the study, of course, we've learned a lot about building better code. Tools have improved and the amount of real software engineering that takes place is much greater. Or so I thought. Turns out that the annual number of recalls between 1995 and 2000 increased.
In defense of the pacemaker developers, no doubt they solve very complex problems. Interestingly, heart rhythms can be mathematically chaotic. A slight change in stimulus can cause the heartbeat to burst into quite unexpected randomness. And surely there's a wide distribution of heart behavior in different patients.
Perhaps a QA strategy for these sorts of life-critical devices should change. What if the responsible person were one with heart disease, who had to use the latest widget before release to the general public?
A pilot friend tells me the 747 operator's manual is a massive tome that describes everything one needs to know about the aircraft and its systems. He says that fully half of the book documents avionics (read: software) errors and workarounds.
The Space Shuttle's software is a glass half-empty/half-full story. It's probably the best code ever written, with an average error rate of about one per 400,000 lines of code. The cost: $1000 per line. So, it is possible to write great code, but despite paying vast sums perfection is still elusive. Like the 747, though, the stuff works "good enough", which is perhaps all we can ever expect.
Is this as good as it gets?
The Human Factor
Let's remember we're not building systems that live in isolation. They're all part of a much more complex interacting web of other systems, not the least of which is the human operator or user. When tools were simple - like a hammer or a screwdriver - there weren't a lot of complex failure modes. That's not true anymore. Do you remember the USS Vincennes? She is a US Navy battle cruiser, equipped with the incredibly sophisticated Aegis radar system. In July, 1988 the cruiser shot down an Iranian airliner over the Persian Gulf. The data showing that the target wasn't an incoming enemy warplane was there, but it was displayed on a number of terminals that weren't easy to see. So here's a failure where the system worked as designed, but the human element created a terrible failure. Was the software perfect since it met the requirements?
Unfortunately, airliners have become common targets for warplanes. This past October a Ukrainian missile apparently shot down a Sibir Tu-154 commercial jet, killing all 78 passengers and crew. As I write the cause is unknown, or unpublished, but local officials claim the missile had been targeted on a close-by drone. It missed, flying 150 miles before hitting the jet. Software error? Human error?
The war in Afghanistan shows the perils of mixing men and machines. At least one smart bomb missed its target and landed on civilians. US military sources say wrong target data was entered. Maybe that means someone keyed in wrong GPS coordinates. It's easy to blame an individual for mistyping, but doesn't it make more sense to look at the entire system as a whole, including bomb and operator? Bombs have pretty serious safety-critical aspects. Perhaps a better design would accept targeting parameters in a string that includes a checksum, rather like credit card numbers. A mis-keyed entry would be immediately detected by the machine.
It's well-known that airplanes are so automated that on occasion both pilots have slipped off into sleep as the craft flies itself. Actually, that doesn't really bother me much, since the autopilot beeps when at the destination, presumably waking the crew. But before leaving, the fliers enter the destination in latitude/longitude format into the computers. What if they make a mistake (as has happened)? Current practice requires pilot and co-pilot to check each other's entries, which will certainly reduce the chance of failure. Why not use checksummed data instead and let the machine validate it?
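As a sketch of the checksummed-entry idea from the last two paragraphs, here is the Luhn check used by credit card numbers; choosing Luhn is my illustration, not anything documented about these aircraft or weapons systems. A single mis-keyed digit, and most adjacent transpositions, fail the check, so the machine can demand re-entry instead of accepting a bad destination.

```c
/* Luhn checksum validation, as used on credit card numbers.
   Hypothetical application: operators key in coordinates plus a
   check digit, and the machine rejects mis-typed entries. */
#include <stdbool.h>
#include <stddef.h>
#include <string.h>

static bool luhn_valid(const char *digits)
{
    size_t n = strlen(digits);
    int sum = 0;

    for (size_t i = 0; i < n; i++) {
        int d = digits[i] - '0';
        /* Double every second digit, counting from the right. */
        if ((n - i) % 2 == 0) {
            d *= 2;
            if (d > 9)
                d -= 9;
        }
        sum += d;
    }
    return sum % 10 == 0;   /* valid entries sum to 0 mod 10 */
}

/* Usage sketch: if (!luhn_valid(keyed_entry)) demand_reentry(); */
```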
Another US vessel, the Yorktown, is part of the Navy's "Smart Ship" initiative. Heavily automating the engineering (propulsion) department reduces crew needs by 10% and saves some $2.8 million per year on this one ship. Yet the computers create new vulnerabilities. Reports suggest that an operator entered an incorrect parameter, which resulted in a divide-by-zero error. The entire network of Windows NT machines crashed. The Navy claims the ship was dead in the water for about three hours; other sources claim it was towed into port for two days of system maintenance. Users are now trained to check their parameters more carefully. I can't help wondering what happens in the heat of battle, when these young sailors may be terrified, with smoke and fire perhaps raging. How careful will the checks be then?
Some readers may also shudder at the thought of NT controlling a safety-critical system. I admire the Navy's resolve to use a commercial, off-the-shelf product, but wonder whether Windows, the target of every hacker's wrath, might not itself create other vulnerabilities. Will the next war be won by the nation with the best hackers?
A plane crash in Florida, in which software did not contribute to the disaster, was a classic demonstration of how difficult it is to put complicated machines in the hands of less-than-perfect people. An instrument lamp burned out. It wasn't an important problem, but both pilots became so obsessed with tapping on the device that they failed to notice the autopilot was off. The plane very gently descended until it crashed into the Everglades, killing most of those aboard.
People will always behave in unpredictable ways, leading to failures and disasters with even the best system designs. As our devices grow more complex their human engineering becomes ever more important. Yet all too often this is neglected in our pursuit of technical solutions.
Solutions?
I'm a passionate believer in the value of firmware standards, code inspections, and a number of other activities characteristic of disciplined development. It's my observation that an ad hoc or non-existent process generally leads to crummy products. Smaller systems can succeed through the dedication of a couple of overworked experts, but as things scale up in size, heroics become less and less successful.
Yet it seems an awful lot of us don't know about basic software engineering rules. When talking to groups I usually ask how many participants have (and use) rules about the maximum size of a function. A basic rule of software engineering is to limit routines to a page or less. Yet only rarely does anyone raise their hand. Most admit to huge blocks of code, sometimes thousands of lines. Often this is a result of changes and revisions, of the code evolving over the course of time. Yet it's a practice that inevitably leads to problems.
By and large, methodologies have failed. Most are too big, too complex, or too easy to thwart and subvert. I hold great hopes for UML, which seems to offer a way to build products that integrates hardware and software, and that is an intrinsic part of development from design to implementation. But UML will fail if management won't pay for quite extensive training, or if they toss the approach when panic reigns.
The FDA, FAA, and other agencies are slowly becoming aware of the perils of poor software, and have guidelines that can improve development. Britain's MISRA (Motor Industry Software Reliability Association) publishes guidelines for the safer use of C. Its position is that we need to avoid certain constructs and use others in controlled ways to eliminate potential error sources. I agree. Encouragingly, some tool vendors (notably Tasking) offer compilers that can check code against the MISRA rules. This is a powerful aid to building better code.
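To give a flavor of what such guidelines ask for, here's an illustrative before-and-after pair. It isn't drawn from the MISRA document itself, but it shows the spirit of the rules: fixed-width types, braces on every block, and a default case so unexpected values are handled explicitly:

```c
#include <stdint.h>

/* Discouraged style: plain int, a switch with no default, and a
   control body without braces. Nothing here is illegal C - the
   point of the guidelines is that legal constructs can still hide
   error sources. */
int mode_to_delay_bad(int mode)
{
    int delay = 0;
    switch (mode) {
    case 0: delay = 10; break;
    case 1: delay = 50; break;
    }                       /* no default: unexpected modes slip by */
    if (delay > 20)
        delay = 20;         /* body without braces invites edits that
                               silently fall outside the condition */
    return delay;
}

/* Guideline-leaning style: explicit widths, a defensive default,
   and braces on every block. */
uint16_t mode_to_delay(uint8_t mode)
{
    uint16_t delay;
    switch (mode) {
    case 0u:
        delay = 10u;
        break;
    case 1u:
        delay = 50u;
        break;
    default:
        delay = 0u;         /* unexpected mode handled explicitly */
        break;
    }
    if (delay > 20u) {
        delay = 20u;
    }
    return delay;
}
```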
I doubt, though, that any methodology or set of practices can, in the real world of schedule pressures and capricious management, lead to perfect products. The numbers tell the story. The very best use of code inspections, for example, will detect about 70% of the mistakes before testing begins. (However, inspections will find those errors very cheaply). That suggests that testing must pick up the other 30%. Yet studies show that often testing checks only about 50% of the software!
Sure, we can (and must) design better tests. We can, and should, use code coverage tools to ensure every execution path runs. These all lead to much better products, but not to perfection. Just because all of the code is known to have run doesn't mean that complex interactions between inputs won't lead to bizarre outputs. As the number of decision paths increases - as the code grows - the difficulty of creating comprehensive tests skyrockets.
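A toy example shows why. The two tests below give this function 100% line and branch coverage, yet an untested combination of inputs still produces nonsense. The function and numbers are invented for illustration:

```c
#include <stdio.h>

/* Full coverage isn't proof of correctness: the two tests in main()
   execute every line and every branch of discount_price(), so a
   coverage tool reports 100%. Yet the combination of a volume
   discount and a large coupon - an input pairing neither test
   tries - yields a negative price. */
static int discount_price(int price, int coupon)
{
    int total = price;
    if (price > 100) {
        total -= 10;        /* volume discount */
    }
    if (coupon > 0) {
        total -= coupon;    /* coupon discount */
    }
    return total;
}

int main(void)
{
    printf("%d\n", discount_price(150, 0));   /* 140 - branch 1 covered */
    printf("%d\n", discount_price(50, 20));   /* 30  - branch 2 covered */
    /* The untested interaction: discount_price(105, 100) returns -5. */
    return 0;
}
```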
When time to market dominates development, quality naturally slips. If low cost is the most important parameter, we can expect more problems to slip into the product.
Software is astonishingly fragile. One wrong bit out of a hundred million can bring a massive system down. It's amazing that things work as well as they do!
Perhaps the nature of engineering is that perfection itself is not really a goal. Products are as good as they have to be. Competition is a form of evolution that often does lead to better quality. In the 70s Japanese automakers, who had practically no US market share, started shipping cars that were reliable and cheap. They stunned Detroit, which was used to making a shoddy product which dealers improved and customers tolerated. Now the playing field has leveled, but at an unprecedented level of reliability.
Perfection may elude us, but we must be on a continual quest to find better ways to build our products. Wise developers will spend their entire careers engaged in the search.
Originally posted here.
For novel ideas about building embedded systems (both hardware and firmware), join the 35,000 engineers who subscribe to The Embedded Muse, a free biweekly newsletter. The Muse has no hype and no vendor PR. Click here to subscribe
Introduction
Today, many organizations are designing and developing their IT and network architectures around IoT technology, choosing network structures that differ in many ways from traditional ones.
IoT networks collect and capture data from physical devices all across the world and translate it into actionable information, relying on algorithms, software, and AI technology along the way. Many IoT app development companies in the industry rely on this approach to keep devices connected.
A Brief Introduction to IoT Networks
In an IoT network architecture, the data centres are linked through cloud technology. Other network components, such as core services and connectivity layers (4G and 5G, Ethernet, and embedded sensor-based links), are also connected through the cloud.
IoT Network Layers:
- Collection: The sensors and devices on the product side that collect and measure data
- Operational: The connectors and services in the middle, responsible for making calls and providing gateways
- Distribution: The final layer, responsible for connecting the other two layers and delivering data and measurements in a meaningful form (see the sketch below)
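As a rough illustration only, here's how those three layers might look as a minimal data pipeline in C; every name and value is invented, standing in for real drivers, gateway services, and network transport:

```c
#include <stdio.h>

/* Collection layer: a device measures something. */
typedef struct {
    int sensor_id;
    double value;       /* raw measurement from the device */
} reading_t;

static reading_t collect(int sensor_id)
{
    reading_t r = { sensor_id, 21.5 };  /* stand-in for real sampling */
    return r;
}

/* Operational layer: gateway-style processing in the middle. */
static double operate(reading_t r)
{
    return r.value * 9.0 / 5.0 + 32.0;  /* e.g., a unit conversion */
}

/* Distribution layer: deliver the result in a meaningful form. */
static void distribute(int sensor_id, double value)
{
    printf("sensor %d: %.1f F\n", sensor_id, value);
}

int main(void)
{
    reading_t r = collect(7);
    distribute(r.sensor_id, operate(r));
    return 0;
}
```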
An IoT network collects and distributes data with the help of these three layers. Now let us discuss the six components of the IoT network:
IoT Network Architecture Components
According to Gartner analyst Tim Zimmerman, the network architecture of IoT is divided into several areas. The first component is the device or sensor itself, which can be an MRI machine in healthcare or any other machine connected to the network.
The second component of the IoT network is communication. This will include how the device sends and receives data over the network. IoT network communication usually takes place with the help of two types of enterprise network architectures:
- Wide-area communication
- Cloud-application or on-premise communication
The third component of the framework is security. Implementing security technology is essential for devices and platforms: it protects them from breaches and other threats, and lets the devices communicate safely and securely.
The fourth component of the IoT network framework is the gateway. Gateways can house application logic, store data, and handle network communication with the devices. According to Patrick Filkins, the primary function of the IoT gateway is to perform protocol conversion.
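Here's a minimal sketch of that protocol conversion under invented assumptions: a sensor emits a compact little-endian binary frame, and the gateway re-publishes it as JSON for a cloud application. The frame layout and field names are made up for illustration:

```c
#include <stdio.h>
#include <stdint.h>
#include <stddef.h>

/* Hypothetical device-side frame: one ID byte, then a 16-bit
   temperature in 0.01 degree C units, little-endian. */
typedef struct {
    uint8_t sensor_id;
    int16_t temp_centi;
} sensor_frame_t;

/* Device side: parse the binary frame off the wire. */
static sensor_frame_t parse_frame(const uint8_t *buf)
{
    sensor_frame_t f;
    f.sensor_id  = buf[0];
    f.temp_centi = (int16_t)(buf[1] | (buf[2] << 8));
    return f;
}

/* Cloud side: re-emit the same data as JSON. */
static void to_json(const sensor_frame_t *f, char *out, size_t n)
{
    snprintf(out, n, "{\"sensor\":%u,\"temp_c\":%.2f}",
             f->sensor_id, f->temp_centi / 100.0);
}

int main(void)
{
    const uint8_t wire[] = { 7, 0x5C, 0x08 };  /* sensor 7, 21.40 C */
    sensor_frame_t f = parse_frame(wire);
    char json[64];
    to_json(&f, json, sizeof json);
    puts(json);   /* {"sensor":7,"temp_c":21.40} */
    return 0;
}
```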
The fifth component of the IoT framework is the aggregation point, where one or more products or sites connect and data gets collected.
The final and sixth component of the IoT network is the application itself. The app is the user interface: from it, users can monitor and control their cars, smart homes, and other devices.
Data Challenges for IoT Network
To be useful, IoT data must be collected appropriately, in a structured manner. Here are the top three challenges organizations usually face:
Huge Data Volumes
IDC (International Data Corporation) estimates that around 40,000 exabytes of additional data will be created by IoT devices by next year. Organizations may not be able to handle such a massive amount of data, and it can be a challenging task for them. Large industries will have to collect billions of data sets from sensors, machines, and internal business applications.
Data preparation may consume 80% of the time and resources devoted to working with that data, making it a time-intensive process, and organizations may have to take on entirely new data challenges. Hence, they need to consider new technologies and methods that can help them keep up with the vast influx of data.
Complexity
The complex nature of the data is also a challenge. Organizations not only have to deal with timestamped and geo-tagged data, but may also have to combine data from various sources and in multiple forms. Standard tools like Excel can't handle this complexity, so organizations must learn how to tackle such challenges.
Interoperability
The computer systems used by businesses are often unable to process such complex and vast amounts of data to extract information. It is quite challenging to integrate machine-generated data with business applications like Salesforce or Marketo. So organizations need solutions that allow data and applications to talk to each other.
Conclusion
The real benefits of IoT technology can only be realized with proper data preparation and strategies. Organizations and IoT app development companies must equip their teams with the latest data preparation platforms that can handle complex, huge amounts of IoT data. Combining big data and IoT can undoubtedly provide intelligent solutions. Even sensor data can provide innovative insights, but it has to be collected smartly as well.
E-commerce has been growing for decades, becoming a trend among retailers and popular with consumers. Owing to its quick, easy, and reliable service, e-commerce's popularity is known to all. Now, with the advent of IoT (the Internet of Things), where devices transfer data among themselves without any human interaction, e-commerce benefits in myriad ways.
Faster, More Reliable
The first and foremost advantage that IoT provides to e-commerce is added reliability of transactions. Owing to its automated systems, there is far less room for human error, and transaction data is both more reliable and quicker to process.
Enhancing the Business of Retailers
Sell more, earn more. IoT has made it possible to know customers' needs and desires precisely, using the technology to collect data about trends on social media. This collected data is then applied to sell the desired products accordingly, leading to ever more growth in business via e-commerce. This way it is advantageous not only to retailers but also to customers, as IoT helps a great deal in customer care.
It also leads to enhancements in marketing and promotion: product promotion can be carried out through IoT channels while, at the same time, customer care improves.
Securing Items in the Warehouses
IoT technology has made it possible to ensure that items do not get overstocked in the warehouses, and that they do not expire or get damaged, by remotely sensing the products stored there. This has ensured the optimization of productivity. IoT can keep things in check even at times when the chances of human error are high. So items are more secure when the surveillance is done through IoT.
Easy Tracking of Theft and Losses
The products are always under surveillance: their location and temperature are monitored through multiple devices that keep track of each product's tracking ID. GPS-enabled e-commerce makes it possible to keep track of products at every instant, making them less prone to theft and other losses. The product is never out of sight, and its whole travel history is constantly recorded. Automated e-mails and texts about the product's departure and arrival help make sure it is delivered to the right place safely.
E-commerce Web Development and Design
When it comes to buying and selling online, e-commerce websites need attractive web designs to captivate customers, which is one reason Shopify developers are in such demand. Sites must be not only captivating but also fast. Web development is now largely inclined towards using IoT technologies to make the work faster and more reliable. IoT devices are designed to communicate more safely, which makes them all the more admired and desired, and web-based user interfaces also favour IoT devices for reliability and speed. IoT-enabled websites additionally make things easier for consumers with low-speed internet connections by minimizing the response time between the web server and the site.
There's more to come yet from IoT, with its ever-increasing range of devices. This will help e-commerce grow even more in the future.
Author Bio: Abdullah Ali is Co-founder and Shopify Developer in Los Angeles
by Jack Ganssle
Recently our electric toothbrush started acting oddly – differently from before. I complained to Marybeth who said, “I think it’s in the wrong mode.”
Really? A toothbrush has modes?
We in the embedded industry have created a world that was unimaginable prior to the invention of the microprocessor. Firmware today controls practically everything, from avionics to medical equipment to cars to, well, everything.
And toothbrushes.
But we’re working too hard at it. Too many of us use archaic development strategies that aren’t efficient. Too many of us ship code with too many errors. That's something that can, and must, change.
Long ago the teachings of Deming and Juran revolutionized manufacturing. One of Deming's essential insights was that fixing defects will never lead to quality. Quality comes from correct design rather than patches applied on the production line. And focusing on quality lowers costs.
The software industry never got that memo.
The average embedded software project devotes 50% of the schedule to debugging and testing the code. It's stunning that half of the team’s time is spent finding and fixing mistakes.
Test is hugely important. But, as Dijkstra observed, testing can show the presence of bugs, never their absence.
Unsurprisingly, and mirroring Deming's tenets, it has repeatedly been shown that a focus on fixing bugs will never lead to a quality product - all that will do is extend the schedule and ensure defective code goes out the door.
Focusing on quality has another benefit: the project gets done faster. Why? That 50% of the schedule used to deal with bugs gets dramatically shortened. We shorten the schedule by not putting the bugs in in the first place.
High quality code requires a disciplined approach to software engineering - the methodical use of techniques and approaches long known to work. These include inspection of work products, using standardized ways to create the software, seeding code with constructs that automatically catch errors, and using various tools that scan the code for defects. Nothing that is novel or unexpected, nothing that a little Googling won't reveal. All have a long pedigree of studies proving their efficacy.
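As one small example of seeding code with error-catching constructs, consider guarding a routine's assumptions with assertions, so a violated precondition halts at the point of failure during development rather than silently corrupting state. The function, names, and limits here are illustrative:

```c
#include <assert.h>
#include <stddef.h>

#define MAX_SAMPLES 64u

/* The assertions document and enforce the routine's contract: a
   non-null buffer, a non-empty sample set, and a bounded count. */
static unsigned average(const unsigned *samples, size_t count)
{
    assert(samples != NULL);        /* caller must supply a buffer     */
    assert(count > 0u);             /* the average of nothing is a bug */
    assert(count <= MAX_SAMPLES);   /* bound the input to a sane size  */

    unsigned long sum = 0;
    for (size_t i = 0; i < count; i++) {
        sum += samples[i];
    }
    return (unsigned)(sum / count);
}
```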
Yet only one team out of 50 makes disciplined use of these techniques.
What about metrics? Walk a production line and you'll see the walls covered with charts showing efficiency, defect rates, inventory levels and more. Though a creative discipline like engineering can't be made as routine as manufacturing, there are a lot of measurements that can and must be used to understand the team's progress and the product's quality, and to drive the continuous improvement we need.
Errors are inevitable. We will ship bugs. But we need a laser-like focus on getting the code right. How right? We have metrics; we know how many bugs the best and mediocre teams ship. Defect Removal Efficiency is a well-known metric used to evaluate the quality of shipped code; it's the percentage of the entire universe of bugs found in a product that were removed prior to shipping (measured until 90 days after release). The very best teams, representing just 0.4% of the industry, eliminate over 99% of bugs pre-shipment. Most embedded groups remove only about 95%.
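A quick worked example, with invented numbers: if a team removes 950 bugs before shipping and customers report another 50 in the first 90 days, DRE is 950 / (950 + 50) = 95% - a typical result, while the best teams reach 99% or more. In code:

```c
#include <stdio.h>

/* Defect Removal Efficiency: bugs removed before shipping, as a
   percentage of all bugs found (including the first 90 days after
   release). The figures below are invented for illustration. */
int main(void)
{
    int found_pre_release  = 950;   /* removed before shipping   */
    int found_post_release = 50;    /* reported within 90 days   */

    double dre = 100.0 * found_pre_release
               / (found_pre_release + found_post_release);

    printf("DRE = %.1f%%\n", dre);  /* 95.0%: a typical team;     */
    return 0;                       /* the best ship at 99%+      */
}
```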
Where does your team stand on this scale? Can one control quality if it isn’t measured?
We have metrics about defect injection rates, about where in the lifecycle they are removed, about productivity vs. any number of parameters and much more. Yet few teams collect any numbers.
Engineering without numbers isn’t engineering. It’s art.
Want to know more about metrics and quality in software engineering? Read any of Capers Jones’ books. They are dense, packed with tables of numbers, and sometimes difficult going since the narrative is not engaging, but they paint a picture of what we can measure and how differing development activities affect errors and productivity.
Want to understand where the sometimes-overhyped agile methods make sense? Read Agile! by Bertrand Meyer and Balancing Agility and Discipline by Barry Boehm and Richard Turner.
Want to learn better ways to schedule a project and manage requirements? Read any of Karl Wiegers’ books and articles.
The truth is that we know of better ways to get great software done more efficiently and with drastically reduced bug rates.
When will we start?
Jack Ganssle has written over 1000 articles and six books about embedded systems, as well as one about his sailing fiascos. He has started and sold three electronics companies. He welcomes dialog at [email protected] or at www.ganssle.com.
Internet of Things is the perfect example of something being so simple and elegant yet being an astounding and breakthrough innovation in the modern era of disruptive technologies. This technology has already projected its influence over typical machine-based industries like oil & gas, automotive, manufacturing, utilities, etc.
However, IoT is not only beneficial for production-based companies but can also be used for practical applications in B2C businesses like tourism and hospitality.
The Internet of Things in the hospitality business not only helps hotels and restaurants improve their services but also empowers their guests to enjoy exceptional hands-on experiences. It creates a network of connected devices that offer smart and autonomous experiences to visitors.
The Internet of Things offers a ton of possibilities to a hospitality business. Big hotel chains like Marriott and Hilton have already implemented this disruptive technology to enhance their generous services and provide their guests with out-of-the-box experiences.
Below are some applications of IoT that a hotel or any hospitality business can use:
1. Guestroom Automation to Elate Customers:
After a long journey, guests expect a pleasant and warm stay in their temporary accommodation. They prefer service completely customized to their expectations and preferences. Smart IoT solutions now empower hotels and guesthouses to provide their visitors exactly what they desire.
IoT allows the development of a centralized, connected network between different automated systems and appliances. For example, guests can adjust the luminosity and intensity of the lights to their liking through IoT-based smart lighting solutions. Moreover, appliances can conduct operations autonomously: proximity sensors embedded in the room can detect the movement of the guest and turn on the coffee machine to brew a beverage.
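Here's a minimal sketch of that kind of automation, with hypothetical sensor and appliance functions standing in for whatever drivers or home-automation APIs a real guestroom controller would use:

```c
#include <stdbool.h>
#include <stdio.h>

/* Hypothetical: returns true when motion is detected in the room. */
static bool read_proximity_sensor(void)
{
    return true;    /* stand-in for a real sensor driver */
}

/* Hypothetical: would send a start command to the appliance. */
static void coffee_machine_start(void)
{
    printf("Brewing coffee for the guest...\n");
}

int main(void)
{
    bool already_brewed = false;

    /* Controller logic: brew once per detected arrival. */
    if (read_proximity_sensor() && !already_brewed) {
        coffee_machine_start();
        already_brewed = true;
    }
    return 0;
}
```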
You can also use this connected network to identify your customers’ preferences, and use that information to surprise them with customized, personalized service the next time they visit.
Furthermore, hospitality businesses with hotels in different locations can share data about their customers in a common CRM, making sure guests come across the same experience in every branch of the hotel chain.
This cross-property integration allows hotels to keep their customers’ profiles in a centralized system that can be accessed remotely. IoT plays a crucial role here, enabling a hotel to collect guest data and share it across its properties via the common information management software.
2. Predictive Maintenance of Room Appliances:
The biggest disappointment for guests is when they enter their previously booked room and find a leaky pipe or a damaged air conditioner. These instances affect not only the immediate experience of the visitor but also the overall reputation of your hotel.
To prevent these situations, you can use the predictive analytics capabilities of IoT solutions. Smart sensors and meters can be installed in appliances and pipeline networks to identify the possibility of unexpected breakdowns and malfunctions before your guest encounters them. These sensors notify the room service staff about developing problems and enable them to fix the issue before it actually occurs.
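A minimal sketch of that kind of monitoring, with invented numbers: track a short running average of an appliance's vibration reading and flag the room for service once the trend drifts past a limit, before the unit actually fails:

```c
#include <stdio.h>

#define WINDOW      5
#define ALERT_LIMIT 40.0

int main(void)
{
    /* Simulated vibration samples from an air-conditioner sensor;
       the values and the room number are invented. */
    const double samples[] = { 12.0, 13.5, 22.0, 35.5, 48.0, 55.0, 61.0 };
    const int n = sizeof samples / sizeof samples[0];
    double window[WINDOW] = { 0 };

    for (int i = 0; i < n; i++) {
        window[i % WINDOW] = samples[i];

        /* Average the last WINDOW readings (fewer early on). */
        int count = (i + 1 < WINDOW) ? i + 1 : WINDOW;
        double sum = 0.0;
        for (int j = 0; j < count; j++) {
            sum += window[j];
        }
        double avg = sum / count;

        if (avg > ALERT_LIMIT) {
            printf("room 214: AC trending toward failure (avg %.1f)\n", avg);
            break;  /* notify maintenance staff, then stop alerting */
        }
    }
    return 0;
}
```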
Hotels can hence use this predictive analytics system to improve maintenance and prevent the sudden failure of any appliance in any room. This will not only help you boost your customer service but also protect your hotel chain’s reputation from being spoiled. Additionally, you will save a lot of the money that is generally spent repairing broken equipment at a moment’s notice.
3. Guestroom Transforming Features:
The appeal of any hotel lies in its rooms; primarily, the room is the main thing a visitor books. Even if you give your guests relaxing spa vouchers or free swimming-pool amenities, they are likely to be disappointed if you don’t provide a best-in-class staying experience.
It is hence of utmost importance for any hotel to keep its rooms up to date with amazing features. One way to do so is by using devices powered by cutting-edge technologies capable of presenting an amazing experience to the guests.
Some of these devices include smart switches, electronic key cards, and voice assistants. Voice assistants such as Amazon Alexa can be programmed to cater specifically to the demands of the customer staying in the room. These IoT- and AI-powered devices enable hotel staff to monitor the preferences and likings of guests and provide personalized service on their next visit.
4. Smart Solutions for Hotel Management:
IoT not only empowers hospitality businesses to provide outstanding services to their guests but also helps manage other tasks within conventional operations. By using IoT facility-management services, a hotel can manage the consumption of its utilities and reduce the associated costs.
Furthermore, hotels can use these solutions to manage inventory and optimize resource utilization, reducing manpower and cutting costs. Moreover, these services will also help the business increase guest satisfaction through unique staying experiences.
Conclusion
The success of any hospitality business depends on the satisfaction it can provide to its guests. By using the technology of IoT and its features, a hotel can enhance its services and capture the heart of its guests.
IoT helps the hospitality business enhance its services related to housekeeping and accommodation, which in turn boosts customer satisfaction. This also enhances the reputation of the hotel chain, resulting in better business opportunities.