

Featured Posts

How Does IoT Help in Retail?

Continuous and seamless communication is now a reality between people, processes, and things. IoT has been enabling retailers to connect with people and businesses and to gain useful insight into product performance and into how people engage with those products.

Importance of IoT in Retail

  • It improves the customer experience and helps brick-and-mortar shops compete with their online counterparts by engaging customers in new ways.
  • IoT can track customer preferences, analyze shopping habits, and share relevant information with marketing teams, helping improve product features and design while keeping customers updated on new products, delivery status, and more.
  • Using IoT, retailers can increase efficiency and profitability in various ways.
  • IoT can significantly improve the overall customer experience through features such as automated checkout and integration with messaging platforms and ordering systems.
  • It increases efficiency in transportation and logistics by reducing the time to deliver goods to a market or store, and it helps with vehicle management and delivery tracking. This reduces costs, improves the bottom line, and increases customer satisfaction.
  • Inventory management becomes easier with IoT: tracking inventory is simpler at every step, from the stocking of goods to initiating a purchase.
  • It increases operational efficiency in warehouses by optimizing temperature controls, improving maintenance, and streamlining warehouse management.

Use Cases of IoT in Retail

  1. IoT is used in facility management to ensure areas are kept clean day to day and to monitor the levels of consumable supplies. It can monitor store environment factors such as temperature, lighting, ventilation, and refrigeration. IoT can identify the key areas that together provide a complete 360-degree view of facility management.
  2. It can help track the number of people entering a facility. This is especially useful during a pandemic, to ensure that no overcrowding takes place.
    Occupancy sensors provide vital data on store traffic patterns and on the time spent in any particular area. This helps retailers with better planning and product placement strategies, and supports guided selling with more effective display setups, layouts, and space management.
  3. IoT helps supply chain and logistics in a big way by providing information on stock levels.
  4. IoT helps track assets such as shopping carts and baskets. Sensors can make location data available for every cart, making retrieval easy, and can lock carts that are taken off the premises.
  5. IoT devices can be, and are being, used to personalize the user experience. Bluetooth beacons send personalized real-time alerts to phones when a customer is near an aisle or a store, prompting the customer to enter the store or browse the aisle and take advantage of offers. IoT-based beacons help retailers such as Target collect user data and send hyper-personalized content to customers.
  6. Smart shelves are another example of innovative IoT ideas. Keeping shelves refilled and ensuring the correct items are placed on the right shelves is time-consuming; smart shelves automate these tasks, saving time and eliminating manual errors.

Businesses should utilize new technologies to revolutionize the retail sector. Digitalization, or digital transformation, of brick-and-mortar stores is not a new concept; with every industry wanting to improve its services and facilities and trying to stay ahead of the competition, digitalization is playing a big role in transforming retail. To summarize, digitalization enhances data collection, enables data-driven customer insights, delivers a better customer experience, increases profits and productivity, and encourages a digital culture.

Read more…

The Internet of Things is one of the technologies making yesterday’s science fiction the reality of today. It will act as a force multiplier for digitization and can potentially transform the world into a smart one - smart cities, smart vehicles, smart manufacturing, smart homes, and many others. According to IDC, spending on IoT by businesses and other entities is expected to reach $1 trillion in 2022. Further, of the projected 29 billion connected devices by 2022, around 18 billion are expected to be IoT-related, and the data generated by these devices is projected to reach 73.1 zettabytes by 2025.

In other words, ignoring the penetration of IoT across domains and not investing in its vast sweep could be detrimental to the competitiveness of business enterprises in the future. Even though the IoT will continue on its upward trajectory in use cases and device numbers, enterprises should take into account the challenges related to interoperability and security. Let us discuss the top IoT predictions that IoT testing services, or for that matter, the CIOs of enterprises, should acknowledge and incorporate in their value chain.

Top IoT Forecasts for CIOs to Recognize

As a smart technology, the Internet of Things is going to change the landscape of the digital world. The top IoT forecasts for the years to come are mentioned below:

# AI-based IoT data analysis: With IoT being adopted as a frontline technology by most organizations, there will be a need to gather, store, process, and analyze huge amounts of data generated by it. This is where AI-based data analysis will take over from traditional analysis wherein data mined by IoT devices will be analyzed for known patterns to draw insights about various aspects of an organization. AI is going to be applied to a host of IoT-generated data in the form of still images, video, speech, text, and network traffic activities. This should drive the CIOs of business enterprises to implement the necessary skills and tools to leverage AI in their IoT testing approach.

# IoT with legal, social, and ethical dimensions: With the increased adoption of IoT across business segments, a wide range of social, ethical, and legal issues may come to the fore. These may include privacy, regulatory compliance, algorithmic bias, and ownership of data, among others. In fact, the success of any IoT solution should not be based on its technical prowess or effectiveness alone, but on social acceptability as well. Hence, CIOs should have their corporate strategy, IoT and AI systems, and key algorithms reviewed by external agencies for any potential bias. In doing so, they may engage external IoT testing services to validate not only the technical aspects of such systems but their social, ethical, and legal dimensions as well.

# Data broking and infonomics: According to a Gartner survey, businesses are going to include the buying and selling of IoT data as an essential part of their strategy. As per the theory of infonomics, the monetization of IoT data will be treated as a strategic asset by businesses and included in their accounts. CIOs should educate their staff on the opportunities and risks pertaining to data broking and set the appropriate IT policies, including incorporating mandatory IoT testing in the value chain.

# Transition from Intelligent Edge to Intelligent Mesh: The transition from cloud to edge architectures in the IoT space is underway and is likely to give way to a more unstructured architecture in the form of a dynamic mesh. Mesh architectures will lead to more intelligent, responsive, and flexible IoT systems, but with additional complexities. As a result, CIOs must prepare their organizations for the impact of mesh architectures on IoT systems. Consequently, the focus of Internet of Things QA testing should be to ensure every aspect of the IoT and mesh architecture performs as desired.

# IoT Governance: With the expansion of the IoT space, a proper setup for governance, including an IoT testing framework, should be instituted. This is to ensure appropriate behavior in the generation, storage, deletion, and usage of IoT-related data. IoT governance would entail device audits, control of devices, firmware updates, and the usage of information delivered, among others. CIOs must educate their organizations on issues related to IoT governance.

Conclusion 

The Internet of Things will continue to expand and play an important role for business enterprises in areas such as data mining, analysis, and management, decision-making, privacy, security, and others. CIOs must make their enterprises ready to leverage the opportunities offered by the IoT as well as set up proper architectures, including IoT security testing, to mitigate any associated risks.

Read more…

When analyzing whether a machine learning model works well, we rely on accuracy numbers, F1 scores, and confusion matrices - but they don't give any insight into why a machine learning model misclassifies data. Is it because the data looks very similar, because the data is mislabeled, or because the preprocessing parameters were chosen incorrectly? To answer these questions we have now added the feature explorer to all neural network blocks in Edge Impulse. The feature explorer shows your complete dataset in one 3D graph, and shows you whether each sample was classified correctly or incorrectly.


Showing exactly which data samples are misclassified in the feature explorer.

If you haven't used the feature explorer before: it's one of the most interesting options in Edge Impulse. The axes are the output of the signal processing block (we rely heavily on signal processing to extract interesting features beforehand, making for smaller and more reliable ML models), and they let you quickly validate whether your data separates nicely. In addition, the feature explorer is integrated into Live classification, where you can compare incoming test data directly with your training set.


Redesign of the neural network pages.

This work has been part of a redesign of our neural network pages. These pages are now more compact, giving you full insight into both your neural network architecture and the training performance, an easy way to compare models with different optimization options (such as an int8 quantized model vs. an unoptimized model), and accurate on-device performance metrics for a wide variety of targets.

Next steps

Currently the feature explorer shows the performance of your training set, but over the next weeks we'll also integrate the feature explorer and the new confusion matrix into the Model testing page in Edge Impulse. This will give you direct insight into the performance of your test set in the same way, so keep an eye out for that!

Want to try the new feature explorer? Just head to any neural network block in your Edge Impulse project and retrain. Don't have a project yet?! Follow one of our tutorials on building embedded machine learning models on real sensor data; it takes 30 minutes and you can even use your phone as a sensor.

Article originally written by Jan Jongboom, the CTO and co-founder of Edge Impulse. He loves pretty pictures, colors, and insight into his ML models.

Read more…

Figure 1: Solution architecture with AWS IoT Core

Critical and high-value assets are always on the move, and this holds across practically every industry vertical relying on supply chain and logistics operations. Naturally, enterprises seek ways to track their assets with the shipment carrier in ways that are most optimal to their requirements. The end goal is often to have greater visibility and control of assets while in transit with the shipment carrier while opening up opportunities to optimize business operations based on insights-driven decisions.

For assets in transit, proactive shipment monitoring results in greater reliability of the shipment's integrity by way of real-time updates about the shipment's location, transit status, and conditions like temperature and humidity (for perishable shipments). All this information helps the respective stakeholders identify and remediate issues quickly. This helps minimize losses and reduce insurance claims, which results in further cost optimization for the enterprise while delivering a delightful purchase experience to its customers.

Addressing such requirements calls for an IoT (Internet of Things) solution combining tracker devices (hardware), cloud apps (software platform), enterprise systems integrations (with SCM, ERP, and similar systems), and professional services and support for field installation and continuous data insights. For most enterprises, such an Internet of Things implementation is complex and non-core, requiring an investment of capital, time, and expertise to build and deploy, especially in a way that's secure and scalable.

In this post, we'll discuss such an IoT solution that is built using AWS IoT Core and can be delivered affordably, so that it's ready to use within days or weeks. The solution leverages GPS-enabled tracker hardware comprising condition-monitoring sensors for temperature, humidity, shock impact, and ambient light. This device can be used to track entire containers, pallets holding multiple cartons, or even individual item boxes, depending on the requirements. The shock impact sensor on the device indicates asset mishandling based on threshold limits, and the light sensor can indicate potentially unauthorized use or asset theft. Such a device requires a cellular connectivity service to communicate sensor data to the cloud per pre-configured rules.

By way of API integrations using AWS SDKs for the Internet of Things, the tracker devices are first connected and authenticated. The data they generate is published to a cloud app powered by AWS IoT Core, in real time or at preset intervals. The data is sent as JSON-format message payloads via the MQTT protocol supported by AWS IoT Core, and is presented on the frontend dashboard UI in a rich, interactive manner on a map interface, with sensor-specific information available within a couple of taps/clicks.
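As a rough sketch of what such a device-side publish could look like, here is a minimal example using the generic paho-mqtt client; the endpoint, topic, certificate paths, and payload fields are all hypothetical placeholders rather than the actual solution's values:

```python
import json
import ssl

import paho.mqtt.client as mqtt

ENDPOINT = "example-ats.iot.us-east-1.amazonaws.com"  # hypothetical AWS IoT endpoint
TOPIC = "shipments/tracker-001/telemetry"             # hypothetical topic name

# AWS IoT Core authenticates devices over mutual TLS with X.509 certificates.
client = mqtt.Client(client_id="tracker-001")  # paho-mqtt 1.x style constructor
client.tls_set(ca_certs="AmazonRootCA1.pem",
               certfile="device.pem.crt",
               keyfile="private.pem.key",
               tls_version=ssl.PROTOCOL_TLSv1_2)
client.connect(ENDPOINT, port=8883)
client.loop_start()

payload = {
    "device_id": "tracker-001",
    "location": {"lat": 40.7128, "lon": -74.0060},
    "temperature_c": 4.2,
    "humidity_pct": 41.0,
    "shock_g": 0.3,
    "ambient_light_lux": 0.0,
}
# QoS 1 gives at-least-once delivery, a common choice for telemetry.
info = client.publish(TOPIC, json.dumps(payload), qos=1)
info.wait_for_publish()

client.loop_stop()
client.disconnect()
```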

These sensor data messages are further forwarded to other back-end systems like AWS IoT Analytics. The data is usually saved in a time-series data store for later analysis and insights. Additionally, API integrations can easily be built for the cloud app to work with enterprise apps like transport management systems and warehouse management systems to realize autonomous supply chain operations. Business rules define such data movement and operations-specific logic, which is handled via AWS IoT Core's Rules Engine; the Rules Engine can also transform device data before forwarding it to a different application.

However, not every data point a sensor picks up needs to be sent to the cloud app, unless such a mandate exists, often due to regulatory compliance requirements in verticals like healthcare and pharmaceuticals. The cloud dashboard UI provides a simple interface to set the acceptable minimum and maximum sensor readings. Any breach of this range is immediately notified as an alert to the team responsible for monitoring the shipment, which can then contact the shipment carrier to take corrective action. Such ranges can be configured separately for each shipment, within seconds, per its monitoring requirements.
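A minimal sketch of such a per-shipment range check follows; the sensor names and limits are invented for illustration:

```python
# Hypothetical monitoring profile for one shipment: (min, max) per sensor.
THRESHOLDS = {"temperature_c": (2.0, 8.0), "humidity_pct": (30.0, 60.0)}

def range_breaches(reading: dict) -> list[str]:
    """Return a human-readable alert for every sensor outside its range."""
    alerts = []
    for sensor, (low, high) in THRESHOLDS.items():
        value = reading.get(sensor)
        if value is not None and not (low <= value <= high):
            alerts.append(f"{sensor}={value} outside [{low}, {high}]")
    return alerts

# range_breaches({"temperature_c": 9.5, "humidity_pct": 45.0})
# -> ['temperature_c=9.5 outside [2.0, 8.0]']
```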

The secure bidirectional messaging between the tracker device and the cloud app is enabled via AWS IoT Core's Device Gateway, which scales automatically to process millions of messages in either direction while ensuring low latency for mission-critical applications.

This makes the purpose-built shipment monitoring solution completely configurable, and hence scalable, while still being quickly deployable without the hassle of capital expenses and the significant resource time spent custom-building such a solution from scratch.

Summary

The intelligent shipment monitoring solution enables enterprises to have greater control over the movement of their assets while having enough data and insights over time to optimize business operations as required.

With AWS IoT Core and AWS IoT Analytics, such a data-driven approach to handling supply chain operations delivers transformational benefits such as reduced losses, greater cost control, and improved customer satisfaction rates, which can result in a sustainable competitive advantage in the marketplace.

Originally posted HERE.

Read more…

As industrial organizations connect more devices, enable more remote access, and build new applications, the airgap approach to protecting industrial networks against cyber threats is no longer sufficient. As industries are becoming more digital, cyberattacks are getting more sophisticated, and yet many organizations are lagging in the adoption of updated and reliable industrial cybersecurity postures. And when these organizations’ security leaders start building a strategy to secure operations beyond the industrial demilitarized zone (IDMZ), they realize it might not be as simple as they thought.

Industrial assets (as well as industrial networks, in many cases) are managed by the operations team, which is typically focused on production integrity, continuity, and physical safety, rather than cyber safety. The IT teams often have the required cybersecurity skills and experience but generally lack the operations context and the knowledge of the industrial processes that are required to take security measures without disrupting production.

Building a secure industrial network requires strong collaboration between IT and operations teams. Only together can they appreciate what needs to be protected and how best to protect it. Only together can they implement security best practices to build secure industrial operations.

Enhancing the security of industrial networks will not happen overnight: IT and operations teams have to build their relationship; new security tools might have to be deployed; networks might need to be upgraded and segmented; new correlation policies will have to be developed.

Security is a journey. Only a phased and pragmatic approach can lay the ground for a converged IT/OT security architecture. Each phase must be an opportunity to build the foundation for the next. This will ensure your industrial security project addresses crucial security needs at minimal costs. It will also help you raise skills and maturity levels throughout the organization to gain wide acceptance and ensure effective collaboration.

As the leader in both the cybersecurity and industrial networking markets, we looked at the successful projects Cisco has been involved in. This led us to recommend the three-step journey outlined in Cisco’s Industrial Security Validated Design.

What is a Cisco Validated Design (CVD)? CVDs provide the foundation for systems design based on common use cases or current engineering system priorities. They incorporate a broad set of technologies, features, and applications to address customer needs. Each one has been comprehensively tested and documented by Cisco engineers to ensure faster, more reliable, and fully predictable deployment.

Our approach to industrial security is focused on crucial needs while creating a framework for IT and operations to build an effective and collaborative workflow. It enables protection against the most common devastating cybersecurity threats at optimized cost, and it provides a practical approach to simplify adoption.

To learn more, read our solution brief or watch the replay of the webinar I just presented. A detailed design and implementation guide will be available soon to help accelerate proof-of-concept and deployment efforts.

Originally posted HERE.

Read more…
Fig. 1: Arrow Shield96 Trusted Platform

Introduction

IoT product development crosses several domains of expertise, from embedded design to communication protocols and cloud computing. Because of this complexity, “end-to-end” or “edge-to-cloud” IoT security is becoming a challenging concept in the industry. Edge in many cases refers to the device as a single element in the edge-to-cloud chain, but the device must not be regarded as an indivisible whole when security requirements are defined. Trust must first be established within the processing unit and propagated through several layers of the software stack before the device becomes a trusted end node. Securing the processor requires properly integrating multiple layers of security and using security features implemented in hardware, which takes embedded security expertise and experience. It is very easy to put a lot of effort into implementing security for an IoT product while still missing key use cases. A simpler way to narrow down the definition of end-to-end security is to start by identifying the minimum set of business requirements.

Brand image, how a company’s customers perceive and value it, is one of the most valuable assets of any corporation. Two of the most important characteristics of an IoT device that can promote a positive brand image are resiliency and privacy. For resiliency, this might mean adding features that increase the device’s ability to self-recover from malfunctions or cyber-attacks. For privacy, this means protecting user information and data, but also the intellectual property (IP) invested in the product; preventing exploitation through vectors such as product/device cloning and overproduction therefore becomes important. Another business driver is the overall cost of ownership for the product. Are there security-related features that can drive the cost down? We include here not just operational cost but also liabilities.

In this blog, we dive deeper into solutions that support these business requirements. We also discuss a demo we created in collaboration with our partners Sequitur Labs and Arrow to demonstrate a commercially available approach to solving a number of security use cases for IoT.

Security in depth – a methodical approach for securing connected products

IoT security must start with securing the device, so that data, data collection, and information processing can be trusted. Security must be applied in layers and facilitate trust propagation from the silicon hardware root of trust (HWRoT) to the public/private cloud or the application provider back-end. Furthermore, the connected paradigm provides the opportunity to delegate access control and security monitoring to the cloud, outside of the device. Narrowing down further, device security must be rooted in the fundamental capabilities of the processor or system on chip, and must consider all three stages of the device lifecycle: inception (manufacturing, first boot), operation, and decommissioning.

In a nutshell we should consider the following layers for securing any IoT product:

  • Set a hardware root of trust – secure programming and provisioning (firmware, key material, fuses)
  • Implement hardware enforced isolation – system partitioning secure / non-secure
  • Design secure boot – authenticated boot chain all the way to an authenticated kernel
  • Build for resiliency – fail-safe to an alternative firmware image and restore from off-board location
  • Enable Trusted Execution – establish a logical secure enclave
  • Abstract hardware security – streamline application development
  • Enable security monitoring – cloud-based, actionable security monitoring for fleets of devices

These capabilities provide a foundation sufficient to fulfill the most common security requirements of any IoT product.

Embedded security features needed to build the security layers described above are available today from many silicon providers. However, software is needed to turn these into a usable framework for application developers to easily implement higher layer security use cases without the need for advanced silicon expertise.

Such software products must be architected to be easily ported to diverse silicon designs. Secondly, the software solution must work with the established IoT manufacturing process. “Turning on” embedded security features triggers changes to existing manufacturing flows: accommodating hardware testing before the final firmware image can be programmed, burning fuses in the silicon in a specific order, and handling sensitive cryptographic key material throughout. The fragmentation, complexity, and expertise required are the reasons why embedded security is a challenge to implement at scale in IoT today.

A closer look – commercially available secure platform with Arrow Shield96

AWS partnered with Sequitur Labs and Arrow to provide a commercial solution that follows the approach described in the previous paragraph. This solution follows the NIST SP 800-193 Platform Firmware Resiliency Guidelines and goes beyond them to create a secure platform fitted for embedded and IoT products. At the same time, it abstracts away the complexity of understanding and utilizing embedded security IP such as hardware crypto, random number generators, fuse controllers, tampers, hardware integrity checkers, TrustZone, and on-the-fly memory encryption.

For this blog, we created a demo using the Arrow Shield96 Trusted Platform (Fig. 1) single board computer running Sequitur Labs’ custom firmware image based on the EmSPARK Security Suite. The Shield96 board is based on the Microchip SAMA5D27, a Cortex-A5 entry-level MPU that embeds a set of security IP capable of fulfilling the most stringent security requirements.

Let’s dive deeper into the technical implementation first, and then into the demo scenarios that fulfill some of these customer business needs.

Security inception and propagation of trust

Secure boot and firmware provisioning

Introducing secure boot requires initial programming of the CPU: burning keys into the processor’s fuses, setting up the boot configuration, establishing the Hardware Root of Trust, and ensuring the processor only boots authenticated, trusted firmware. Secure boot implementation is tightly correlated with processor programming and device firmware provisioning. The following section details how secure boot and firmware provisioning can be done properly to establish a trusted security foundation for any application.

Firmware provisioning

The EmSPARK Security Suite methodology for provisioning and programming the Shield96 board minimizes complexity and the need for embedded security expertise. It provides a tool and software building blocks that guide device makers in creating an encrypted manufacturing firmware image. The manufacturing firmware image packages the final components: encrypted blobs of the final device firmware, a provisioning application, and customer-specific key material such as the private key and X.509 certificate for cloud connectivity and the certificate authorities used to authenticate firmware components and application updates.
The actual firmware provisioning and CPU programming are performed automatically during the very first boot of a device flashed with the manufacturing image. With the CPU running in secure mode, the provisioning application burns the necessary CPU fuses and generates keys using the embedded TRNG (true random number generator) to uniquely encrypt the software components that together form the final firmware: the Trusted Execution Environment (CoreTEE), the Linux kernel, customer applications, Trusted Applications, and key material (such as the credentials needed to authenticate with AWS IoT Core).

The output – establishing a trusted foundation

The result is firmware encrypted uniquely for each device with a key derived from the HWRoT, in a process that leaves no room for mismanagement of device secrets or human error. Device diversification achieved this way drastically reduces the cost of manufacturing by eliminating the need for HSMs and secure facilities, while providing protection from class-break attacks (break one, break all).
Another task the provisioning process performs during the very first boot is creating and securely storing a unique device certificate, built from a preloaded CSR (Certificate Signing Request) template and a key pair generated using the hardware TRNG, then signed with a customer-provided private key that is only usable securely during the device’s first boot. The device certificate serves as the immutable device identity for cloud authentication.

Secure boot

The secure boot implementation creates the system partitioning into secure and non-secure domains, making sure all peripherals are assigned to the desired domain. Arm TrustZone and Microchip security IP are at the core of the implementation. CoreTEE, the operating system for the secure domain, runs in on-the-fly AES-encrypted DDR memory, protecting a critical software component (the TEE) from memory probing attacks. Secure boot is designed so that at the end of the boot process, before handing control of the processor from the secure domain to the non-secure domain (Linux), it closes access to the fuse controller, secure JTAG, and other peripherals that could be leveraged to breach security.

Building for resilience

Secure boot implements two features that boost device resilience: a fail-over boot from a secondary image (B) when the primary boot (A) fails, and the ability to restore a known-good image (A) from an off-board location. The solution includes a hardware watchdog and a boot-loop counter (set by the device maker) that Linux resets to its maximum after each successful boot. If Linux repeatedly fails to boot and the counter reaches zero, the B partition is selected for the next boot. After such a failure, once the failover image B is loaded, the device connects to an off-board location (in our demo, a repository on AWS), retrieves the latest firmware image, and re-installs it as the primary one (A). These two features help reduce operational cost by allowing devices in the field to self-heal. In addition, AWS IoT Device Defender checks device behaviors for ongoing analysis and triggers alerts when behaviors deviate from expected ranges.
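To make the fail-over behavior concrete, here is a sketch of the A/B selection logic described above. The real implementation lives in the secure bootloader, not in Python, and the constant and function names here are ours:

```python
MAX_ATTEMPTS = 3  # boot-loop budget, chosen by the device maker

def select_boot_image(counter: int) -> tuple[str, int]:
    """Pick the image for the next boot attempt.

    Linux resets the counter to MAX_ATTEMPTS after every successful boot;
    each failed attempt decrements it. When it hits zero, boot image B,
    after which the device restores a known-good image A from the
    off-board repository.
    """
    if counter <= 0:
        return "B", MAX_ATTEMPTS  # fail over; re-arm the budget for image B
    return "A", counter - 1
```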

In our demo when the alternative firmware image (B) is loaded, an event is triggered in the AWS IoT Device Defender agent. The AWS IoT Device Defender agent running as a TA in the secure domain sends these events to the AWS IoT Device Defender Detect service for evaluation. The TA, running in the secure domain, also signs AWS IoT Device Defender messages to facilitate integrity validation for each reported event.

Another key component of the EmSPARK Suite is the secure update process. Since secure boot is the only process that can decrypt firmware components during device start, it is also involved in performing the firmware update. The firmware update feature is exposed in Linux as an API call that requires a manifest and the signed and/or encrypted new firmware image. The API call performs image signature verification, sets the update flag for the next boot, and restarts the board. During the next boot, the secure boot process decrypts the new image using a pre-provisioned key and re-encrypts it with the board-specific key. The manifest indicates which components need to be updated: Linux kernel, TEE, TAs, and/or bootloader.

Enabling easy development through security abstraction

Through the EmSPARK Suite, the Arrow Shield96 firmware comes preloaded with a number of TAs (Trusted Applications). The figure below shows the dual-domain implementation and the software components provided with the Shield96 Trusted product in our demo.


Fig 2. Software architecture enabling TrustZone/TEE with EmSPARK Suite

These TAs expose a set of secure functions to Linux via a C SDK (the CoreLocker APIs). For our demo, the Arrow board and Sequitur’s security suite preload the following TAs:

  • Cryptographic engine – providing symmetric and asymmetric crypto operations and key generation, integrating silicon-specific hardware crypto
  • Key-store and CA-store, managed (add, delete) via signed commands
  • Secure firmware update
  • Secure storage for files and stream data
  • TLS and MQTT stacks
  • AWS IoT Device Defender secure agent

In addition, a tamper detection and remediation TA has been added for our demo purposes (as detailed in “The Demo” section below). These TAs provide a preloaded framework for implementing a comprehensive set of security use cases, assuring that security operations execute in isolation from the application OS in an authenticated and resilient environment. Such use cases include confidentiality, authentication and authorization, access control, attestation, privacy, integrity protection, device health monitoring, secure communication with the cloud or other devices, and secure lifecycle management.

All TA functions are made available to application development through a set of C APIs via an SDK. Developers do not need to understand the complexity of creating TAs or using HW security provided by the chipset.

Translating TAs to security use cases

Through a securely managed CA-store (Certificate Authority store), the device can authenticate payloads against a set of CAs optionally loaded at manufacturing or later in the device lifecycle. With the ability to securely update the CAs, the device or product owner can transfer ownership of certain functions, such as firmware or application updates, to other entities. For example, the customer may own the applications while firmware updates and security management are delegated to a third-party managed service provider, all while maintaining privacy requirements.
The cryptographic engine is core to anything related to security; it implements a set of symmetric and asymmetric cryptographic functions and key generation, allowing applications in the non-secure domain to execute crypto operations in isolation. Hardware crypto is used when implemented by the chipset.

The Microchip SAMA5D2 implements in hardware the ability to monitor regions of memory in real time. In the Shield96 firmware this feature – ICM, Integrity Check Monitoring – is used to monitor the integrity of the Linux kernel. Any modification of the Linux kernel triggers an interrupt in the secure domain, and the hardware isolation implemented through TrustZone prevents Linux from even being “aware” of such interrupts. The interrupt triggers a remediation function implemented in a TA, which together with the Device Defender Secure Agent TA performs three operations:

  • records the tampering event and restarts Linux from the verified, authenticated encrypted image provided through secure boot
  • after restart packages the tampering event into a JSON format, signs it for integrity assurance and stores it
  • publishes the JSON package to the AWS IoT Device Defender monitoring service

Complementing the edge-to-cloud security strategy with AWS IoT Device Defender

AWS IoT Device Defender audits device cloud configuration based on security best practices and continuously monitors anomalies and threats on devices based on expected cloud-side and device-side behaviors. In this demo, complementing the defense mechanisms implemented at the device level, AWS IoT Device Defender performs its monitoring function and enables customers to receive alerts when it evaluates that anomalous or threat events have occurred on an end node. This demo required installing AWS IoT Device Defender agents on both the non-secure and secure domains of the Shield96 board. The secure domain provides the crypto signature for device health reports (securely using a private key) and isolates the detection and reporting processes from interception by malicious applications. The AWS IoT Device Defender agent collects monitored behaviors in the form of metrics from both domains; then, from the secure domain, it sends the metrics to the AWS Cloud for evaluation.
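As an illustration of that signing step (this is not Sequitur's actual TA code: the report fields are abbreviated, and a production key would be provisioned into and used inside the TEE, never generated in application code as it is here):

```python
import json

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

# Illustration only: in the real product the private key never leaves the TEE.
private_key = ec.generate_private_key(ec.SECP256R1())

report = json.dumps(
    {"header": {"report_id": 1, "version": "1.0"},
     "metrics": {"listening_tcp_ports": {"ports": [], "total": 0}}},
    sort_keys=True,
).encode()

# The ECDSA signature travels alongside the report so the cloud side can
# verify that each reported event really originated in the secure domain.
signature = private_key.sign(report, ec.ECDSA(hashes.SHA256()))
```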

The Demo

For a full demo tutorial, please watch this video.


Fig. 3 Edge-to-cloud IoT security demo at Arrow Embedded to Go 2020

The demo covers the following scenarios:

  • Out of the box experience
  • Firmware personalization – secure firmware rotation to provide a logistical separation between manufacturing and production firmware
  • Device registration to AWS IoT Core
  • Device decommissioning (de-registration) from AWS IoT Core
  • Secure firmware update
  • Resilience demonstration – tamper event simulation and remediation
  • Event reporting to AWS IoT Device Defender

Demonstrating resilience and tamper violation reporting with AWS IoT Device Defender

The boot logic for the demo includes a safety check for tamper events. In this case, we connected a button to an environmental tamper pin. The tamper violation generated by a button press is detected in the next boot sequence, so the initial boot code switches to the secondary boot stack and proceeds to boot the “fail-safe” boot image. Once booted, the system publishes the tamper event to AWS IoT Device Defender for logging and analysis. In the demo, the primary and secondary images are identical, so each tamper event simply switches to the other image. This allows the demo scenario to be repeated, with each tamper event switching the system from the A to the B firmware image or back.

Streamlining personalized firmware to commercial boards

The commercial solution introduced by Arrow with the Shield96 board includes a cloud-based secure firmware rotation from the generic manufacturing firmware using AWS, streamlining device personalization and providing a production-ready device to a multitude of customers.

Out of manufacturing, the Shield96 Trusted board comes preloaded with a minimal, generic version of Linux. The out-of-the-box experience of getting to personalized, up-to-date firmware is as simple as inserting an SD card and connecting the board to the Internet. The device boots securely, partitions the SD card, and then, using Just-in-Time Registration of Device Certificates on AWS IoT (JITR), registers itself with AWS IoT Core and provisions itself to Sequitur’s AWS IoT Core endpoint and the Sandbox application. Next, the device automatically downloads the most recent generic or customer-specific file system, installs it, and restarts. The Sandbox thus provides lifecycle device management and firmware updates.

The two-stage firmware deployment, starting with generic firmware preloaded at the Arrow Programming Center followed by a cloud-based final firmware rotation, gives customers valuable flexibility. For instance, an Original Equipment Manufacturer (OEM)/Original Device Manufacturer (ODM) may need to produce devices with firmware variations for deployment in different geographical regions or customized for different customers. Alternatively, the OEM/ODM may want to optimize logistics, manufacturing in volume while the firmware is still in development and loading the final firmware in a distribution facility before shipping to customers. It also eliminates the opportunity for IP theft in manufacturing, since the final firmware is never present at the manufacturer.

Conclusion

The solution introduced in this blog demonstrates that manufacturers can produce devices at scale with security implemented properly, taking full advantage of the silicon’s embedded security IP. This implementation distills niche expertise and years of experience into a framework accessible to any developer.
Why is this important? Advanced security implemented right massively reduces time to market and cost, and the solution is highly portable to other silicon. Sequitur Labs’ EmSPARK Security Suite is already available for NXP microprocessors (the i.MX and QorIQ Layerscape families) and the NVIDIA Xavier, bringing the same level of abstraction to IoT and embedded developers.
In this relationship, Arrow offers a fully provisioned secure single board computer. Arrow adds further value by offering the ability to customize the hardware and the firmware: customers can choose to add or remove hardware components, customize the Linux kernel, and subscribe to firmware management and security monitoring.
APN Partners complement existing AWS services to enable customers to deploy a comprehensive security architecture and a seamless experience. In this case, Sequitur Labs and Arrow bring to market a game-changing product that complements existing AWS edge and cloud services, enabling any project of any size to use advanced security without needing qualified embedded security experts.
Moreover, the product builds on top of the hardware security features of existing processors while providing the software tools and processes needed to work with existing manufacturing flows, without requiring secure manufacturing.
For a deeper dive into this solution, the Getting Started Guide on the AWS Partner Device Catalog provides board bring-up steps and example code for many of the supported use cases.

Originally posted HERE.

Read more…

Written by: Mirko Grabel

Edge computing brings a number of benefits to the Internet of Things: reduced latency, improved resiliency and availability, lower costs, and local data storage (to assist with regulatory compliance), to name a few. In my last blog post I examined some of these benefits as a means of defining exactly where the edge is. Now let’s take a closer look at how edge computing’s benefits play out in real-world IoT use cases.

Benefit No. 1: Reduced latency

Many applications have strict latency requirements, but when it comes to safety and security applications, latency can be a matter of life or death. Consider, for example, an autonomous vehicle applying brakes or roadside signs warning drivers of upcoming hazards. By the time data is sent to the cloud and analyzed, and a response is returned to the car or sign, lives can be endangered. But let’s crunch some numbers just for fun.

Say a Department of Transportation (DoT) in Florida is considering a cloud service to host the apps for its roadside signs. One of the vendors on the DoT’s shortlist is a cloud in California. The DoT’s latency requirement is less than 15 ms. The speed of light in fiber works out to about 5 μs/km, and the distance from the U.S. east coast to the west coast is about 5,000 km. Do the math and the resulting round-trip latency is 50 ms. It’s pure physics: if the DoT requires a real-time response, it must move the compute closer to the devices.
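For the record, here is that back-of-the-envelope calculation:

```python
# Round trip over ~5,000 km of fiber at ~5 microseconds per km.
round_trip_ms = 2 * 5_000 * 5e-6 * 1_000
print(round_trip_ms)  # 50.0 -- more than three times the DoT's 15 ms budget
```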

Benefit No. 2: Improved resiliency/availability

Critical infrastructure requires the highest level of availability and resiliency to ensure safety and continuity of services. Consider a refinery gas leakage detection system. It must be able to operate without Internet access. If the system goes offline and there’s a leakage, that’s an issue. Compute must be done at the edge. In this case, the edge may be on the system itself.

While it’s not a life-threatening use case, retail operations can also benefit from the availability provided by edge compute. Retailers want their Point of Sale (PoS) systems to be available 100% of the time to service customers, but some retail stores are in remote locations with unreliable WAN connections. Moving the PoS systems onto local edge compute enables retailers to maintain high availability.

Benefit No. 3: Reduced costs

Bandwidth is almost infinite, but it comes at a cost. Edge computing allows organizations to reduce bandwidth costs by processing data before it crosses the WAN. This benefit applies to almost any use case, but here are two examples where it is especially evident: video surveillance and preventive maintenance. A single city-deployed HD video camera may generate 1,296 GB a month, and streaming that data over LTE easily becomes cost prohibitive. Adding edge compute to pre-aggregate the data significantly reduces those costs.

Manufacturers use edge computing for preventive maintenance of remote machinery. Sensors monitor temperatures and vibrations, and the freshness of this data is critical, as the slightest variation can indicate a problem. To ensure that issues are caught as early as possible, the application requires high-resolution data (for example, 1,000 samples per second). Rather than sending all of this data over the Internet to be analyzed, edge compute is used to filter the data so that only averages, anomalies, and threshold violations are sent to the cloud.
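A toy version of that filtering step might reduce each one-second window of samples as below; the threshold value and the three-sigma anomaly rule are invented for illustration:

```python
import statistics

THRESHOLD_C = 80.0  # hypothetical alarm temperature

def summarize(window: list[float]) -> dict:
    """Boil 1,000 raw samples down to the few values worth sending upstream."""
    mean = statistics.fmean(window)
    stdev = statistics.pstdev(window)
    return {
        "mean": round(mean, 2),
        "violations": [s for s in window if s > THRESHOLD_C],
        # Flag samples more than three standard deviations from the mean.
        "anomalies": [s for s in window if stdev and abs(s - mean) > 3 * stdev],
    }
```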

Benefit No. 4: Comply with government regulations

Countries are increasingly instituting privacy and data retention laws. The European Union’s General Data Protection Regulation (GDPR) is a prime example: any organization that holds data belonging to an EU citizen is required to meet the GDPR’s requirements, which include an obligation to report leaks of personal data. Edge computing can help these organizations comply with the GDPR. For example, instead of storing and backhauling surveillance video, a smart city can evaluate the footage at the edge and only backhaul the metadata.

Canada’s Water Act: National Hydrometric Program is another edge computing use case that delivers regulatory compliance benefits. As part of the program, about 3,000 measurement stations have been implemented nationwide, and any missing data requires justification. Storing data at the edge ensures that the data is retained.

Bonus Benefit: “Because I want to…”

Finally, some users simply prefer to have full control. By implementing compute at the edge rather than in the cloud, users gain greater flexibility. We have seen this in manufacturing: technicians want full control over the machinery, and edge computing gives them this control as well as independence from IT. The technicians know the machinery best, and security and availability remain top of mind.

Summary

By reducing latency and costs, improving resiliency and availability, and keeping data local, edge computing opens up a new world of IoT use cases. Those described here are just the beginning. It will be exciting to see where edge computing turns up next.

Originally posted HERE.

Read more…

 

Question 1 : So, let’s start with the obvious question. What is DevOps and why is it inevitable for today’s businesses to adopt?

Answer : DevOps, at the end of the day, if you look at it from a higher level, is really the automation of agile, a better way to perform application design, development, and deployment. This is far superior to the waterfall method, which I actually started working with, and was even teaching in college, back in the 80s and 90s. So, the idea is that we’re going to, in essence, continuously improve the software through an agile methodology where we get together to deal with events, versus some sort of a schedule or sequence, in how software is delivered.

So, DevOps really is the ability to automate that. It’s the idea that we can actually code applications, say an application that runs on Linux, hit a button, and it automatically goes through testing, including penetration testing, security testing, stability testing, and performance testing. It then moves into a continuous integration process, then into a continuous deployment process, is pushed out to a particular staging area, and then is pushed from the staging area to a production server.

The goal of DevOps is really, kind of, to remove the humans from that process, even though we haven’t done that completely yet. It is, in essence, to create a repeatable process that leverages a number of toolsets working together to streamline the modification and delivery of software, in a way that’s going to be better quality each time the software is delivered. There are some cultural issues around DevOps as well, by the way, that are just as important: the ability to, in essence, understand that work is going to be integrated and iterative, the ability to deal with feedback directly from the testers and the operators, and the ability to flatten the organization and have a very open and interactive organization moving forward. And that’s the other side of the coin.

So people have a tendency to look at DevOps as just a tool chain with lots of cool tools, continuous integration, continuous testing, those sorts of things working together, but it’s really a combination of a toolchain, a process, and also a cultural change that’s probably more important than any of the technological changes.

Question 2 : One interesting point you mentioned was about agile. As we all know, agile is a very commonly adopted methodology in the software industry, and a lot of companies are implementing agile successfully. So, as we talk about DevOps, I know it’s an extension, but how does it complement agile from a practical implementation standpoint?

Answer : Again, DevOps is really going to be very much the automation of agile. So agile is going to take a cultural change, an organizational change, in order to make it effective. And ultimately, we’re leveraging a toolchain within DevOps as a way to automate everything that occurs in an agile environment. So, if we’re getting together on a daily basis to form a scrum and we’re talking about what needs to be changed, then typically the DevOps toolchain is where those changes are going to occur.

Read more…

 

As small as a postage stamp, the Seeeduino XIAO boasts a 32-bit Arm Cortex-M0+ processor running at 48 MHz with 256 KB of flash memory and 32 KB of SRAM.

A couple of months ago, I penned a column, The Worm Turns, in which I revealed that — although I’d been bravely fighting my urges — my will had crumbled and I had decided to create a display comprising a 12 x 12 = 144 array of ping pong balls, each illuminated with a tricolor WS2812 LED (a.k.a. a NeoPixel).


The author proudly presenting his 12 x 12 ping pong ball array (Click image to see a larger version — Image source: Max Maxfield)

First, I found a pack of 144 ping pong balls on Amazon for only $11. I ordered two cartons because I knew I would need some spares. Of course, this immediately tempted me to increase the size of my array to 15 x 15 = 225 ping pong balls, but I’d already ordered 150 NeoPixels in the form of five meters of 30 pixels/meter strips from Adafruit, so I decided to stick with the original plan, which we will call “Plan A” so no one gets confused.

Thank goodness I restrained myself, because the 12 x 12 array is proving to be a lot more work than I expected — a 15 x 15 array would have brought me to my knees.

The next step was to build a 2-ball prototype because I wanted to see whether it was best to attach the NeoPixel to the outside of the ball (the fast-and-easy option) or inside the ball (the slow-and-painful alternative). Although you can’t see it from the picture or from this video, there is a slight but noticeable difference in the real world, and one method is indeed better than the other — can you guess which one?


A prototype using two ping pong balls (Click image to see a larger version — Image source: Max Maxfield)

Have you ever tried to drill 3/8” holes into 144 ping pong balls? Me neither. Over the years, I’ve learned a thing or two, and one of the things I’ve learned is that drilling holes in ping pong balls always ends in tears. Thus, I ended up cutting these holes using a small pair of curved nail scissors (there’s one long evening I’ll never see again).

The reason for using the strips is that this is the cheapest way to purchase NeoPixels with associated capacitors in the easiest-to-use form. Unfortunately, the ball-to-ball spacing (43 mm) on the board is greater than the pixel-to-pixel spacing (33 mm) on the strip. This means chopping the strip into segments, attaching each segment to its associated ping pong ball, and then connecting adjacent segments together using three wires. So, 144 x 3 = 432 short wires to strip and solder. Do you have any idea how long this takes? I do!


The Seeeduino XIAO is the size of a small postage stamp (Click image to see a larger version — Image source: Seeed Studio)

Now, you may have noticed that I was driving my 2-ball prototype with an Arduino Uno, but this is too large to be used in my array. In the past, I would have been tempted to use an Arduino Nano, which is reasonably small and not-too-expensive. On the other hand, the fact that this is an 8-bit processor running at only 16 MHz with only 32 KB of flash memory and only 2 KB of SRAM would limit the effects I could achieve.

Sometimes (rarely) the fates decide to roll the dice in one’s favor. In this case, while I was pondering which processor to employ, the folks from Seeed Studio contacted me to tell me about their Seeeduino XIAO.

OMG! This little rapscallion — which is only the size of a small postage stamp and costs only $5 — is awesome! In addition to a 32-bit Arm Cortex-M0+ processor running at 48 MHz, this bodacious beauty boasts 256 KB of flash memory and 32 KB of SRAM.

As an aside, it’s important to note that the Seeeduino XIAO’s programming connector is USB Type-C, which means you’re going to need a USB-A to USB Type-C cable.


The Seeeduino XIAO’s 11 input/output pins pack a punch (Click image to see a larger version — Image source: Seeed Studio)

In addition to its power and ground pins, the Seeeduino XIAO has 11 data pins, each of which can act as an analog input or a digital input/output (I/O). Furthermore, one of these pins can be driven by an internal digital-to-analog converter (DAC) and act as a true analog output, while the other pins can be used to provide I2C, SPI, and UART interfaces.

Sad to relate, there is one small fly in the soup or a large elephant in the room (I’m feeling generous today, so I’ll let you pick the metaphor you prefer). The problem is that, although it can be powered with the same 5 V supply as the NeoPixels, the Seeeduino XIAO’s I/O pins use a 3.3 V interface, but the NeoPixels require 5 V data signals, so we need some way to convert between the two.

In the past, I would probably have used a full-up bidirectional logic level converter, like the 4-channel BOB (breakout board) from SparkFun, but I only need a single unidirectional signal, so that seems like a bit of overkill.

Happily, I recently ran across an awesome hack on Hackaday.com that provides a simple solution requiring only a single general-purpose 1N4001 diode.


A cheap-and-cheerful voltage level converter hack (Click image to see a larger version — Image source: Max Maxfield)

The way this works is rather clever. From the NeoPixel’s data sheet we learn that a logic 1 is considered to be 0.7 * Vcc. Since we are powering our NeoPixels with 5 V, this means a logic 1 will be 0.7 * 5 = 3.5 V, which is higher than the XIAO’s 3.3 V digital output. Bummer!

Actually, if the truth be told, there is some “wriggle room” here, and the 3.3 V signal from the XIAO might work, but are we the sort of people for whom “might” is good enough? Of course we aren’t!

The solution is to add a “sacrificial NeoPixel” at the beginning of the chain and to power this pixel via our 1N4001 diode. Since the 1N4001 has a forward voltage drop of 0.7 V, the first NeoPixel will see a Vcc of 5 – 0.7 = 4.3 V. Remember that the NeoPixel considers a logic 1 to be 0.7 * Vcc, so this first NeoPixel will accept anything above 0.7 * 4.3 = 3.01 V as being a logic 1. Meanwhile, the next NeoPixel in the chain will see the 4.3 V data signal coming out of the first NeoPixel as a valid logic 1. Pretty clever, eh?
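For the skeptical, the arithmetic only takes a couple of lines to check:

```python
def neopixel_logic_high(vcc: float) -> float:
    """Minimum voltage a NeoPixel treats as a logic 1: 0.7 * Vcc per the datasheet."""
    return 0.7 * vcc

print(neopixel_logic_high(5.0))        # 3.5 V -- above the XIAO's 3.3 V output
print(neopixel_logic_high(5.0 - 0.7))  # ~3.01 V -- reachable after the 1N4001's drop
```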

I’m currently about half of the way through wiring everything up. I cannot wait to see my array light up for the first time. Once everything is up and running, I will return to regale you with more details. Until that frabjous day, I will delight to hear your comments, questions, and suggestions.

Originally posted HERE.

Read more…

Ever wanted the power of the all-new Raspberry Pi 4 single board computer, but in a smaller form factor, with more options to expand the I/Os and their functions? Well, the Raspberry Pi Compute Module 4 (a.k.a. CM4) has you covered! In this article, we’ll take a deep dive into the all-new CM4 and see what’s new and how the latest iteration differs from its predecessor, the CM3.

Introduction - The System on Module Architecture

The CM4 can be described as a ‘stripped-down’ version of the Raspberry Pi 4 Model B, containing the same processor, memory, eMMC flash memory, and built-in power regulation circuitry. The CM4 looks almost like a breakout board with two connectors underneath, hence the name “System on Module (SoM)”. However, what differentiates the CM4 (all Compute Modules, for that matter) from the regular Raspberry Pi 4 is that the CM4 does not come equipped with any hardware I/O ports such as USB, Ethernet, and HDMI; instead, it offers access to all the useful I/O pins of the CPU, which designers can use to connect the external peripherals their circuit designs call for. This offers ultimate freedom to designers and developers to use the computing power of the Raspberry Pi 4 while reducing the overall cost of their designs by including only what’s necessary.

 

What’s New In The CM4?

The key difference with the CM4, at first glance, is the form factor of the module. The previous versions, including the CM3, were designed around the (mechanically compatible) DDR2-SODIMM form factor, which looked like a laptop RAM stick. The CM4 comes in a smaller form factor with two 100-pin high-density connectors that can be ‘popped on’ to the receiving board.


Key Features

The CM4 comes in 32 different variants, which have varying flash and RAM options and optional wireless connectivity. As with its predecessors, there is also a CM4Lite version, which does not come with built-in eMMC memory, reducing the cost of the module to a minimum of $25. However, all variants of the CM4 are equipped with the following key features:

 
  • Broadcom BCM2711, quad-core Cortex-A72 (Arm v8) 64-bit System on Chip, running at 1.5 GHz

  • 1/2/4/8GB LPDDR4 RAM options

  • 0 (CM4Lite)/8/16/32GB eMMC storage options (up to 100MB/s bandwidth)

  • Smaller footprint of 55mm x 40mm x 4.7mm (w x l x h)

  • H.265 (4Kp60 decode) and H.264 (1080p60 decode, 1080p30 encode) video; OpenGL ES 3.0 graphics

  • Radio module:

    • 2.4/5GHz IEEE 802.11b/g/n/ac wireless (optional)

    • Bluetooth 5.0, BLE

    • On-board selector to switch between the PCB trace antenna and an external antenna

  • On-board Gigabit Ethernet PHY supporting the IEEE 1588 standard

  • 1x PCI Express Gen 2.0 lane (5Gbps)

  • 2x HDMI 2.0 ports (up to 4Kp60)

  • 1x USB 2.0 port (480Mbps)

  • 28x GPIO pins supporting 1.8V or 3.3V logic levels, along with these peripheral options:

    • 2x PWM channels

    • 3x GPCLK

    • 6x UART (serial)

    • 6x I2C

    • 5x SPI

    • 1x SDIO interface

    • 1x DPI

    • 1x PCM

  • MIPI DSI (serial display):

    • 1x 2-lane MIPI DSI display port

    • 1x 4-lane MIPI DSI display port

  • MIPI CSI-2 (serial camera):

    • 1x 2-lane MIPI CSI camera port

    • 1x 4-lane MIPI CSI camera port

  • 1x +5V power supply input (on-board regulator circuitry available)

 

The Applications - DIY? Industrial?

The CM4 can be integrated into end products designed and prototyped using the full-size Raspberry Pi 4 SBC. This allows the removal of unused ports, peripherals and components, which reduces overall cost and complexity. Application ideas are therefore virtually limitless, ranging all the way from DIY projects such as the PiBoy to industrial IoT designs such as integrated home automation systems, small-scale hosting servers, data exchange hubs and portable electronics that need the processing power offered by the CM4, all while maintaining a small form factor and low power consumption. Compute Module clusters such as the Turing Pi 2, which harnesses the power of multiple Compute Modules, are also an option with this powerful yet small System on Module, the Raspberry Pi CM4.

 

How Can I Use Upswift Solutions On My Compute Module 4 Based Design?

Upswift offers hassle-free management solutions for all Linux-based embedded systems (CM4 included), providing a one-click solution to monitor, control and manage all your connected devices from one place.

Originally posted HERE.

Read more…

It’s been a long time since I performed Karnaugh map minimizations by hand. As a result, on my first pass, I missed a couple of obvious optimizations.

I’m sorry about the title of this blog, but I’m feeling a little wackadoodle at the moment. I think the problem is that I’m giddy with excitement at the thought of the forthcoming Thanksgiving holiday.

So, here’s the deal. Starting sometime in 2021, I’m going to be writing a series of columns for Practical Electronics magazine in the UK teaching digital logic fundamentals to absolute beginners.

This will have a hands-on component with an accompanying circuit board. We’re going to start by constructing some simple logic gates at the transistor level, then use primitive logic gates in 7400-series ICs to construct more sophisticated functions, and work our way up to… but I fear I can say no more at the moment.

After we’ve created some really simple combinatorial functions — like a 2:1 multiplexer — by hand, we’re going to introduce things like Boolean algebra, DeMorgan transforms, and Karnaugh maps, and then we are going to use what we’ve learned to implement more complex combinatorial functions, culminating in a BCD to 7-segment decoder, before we progress to sequential circuits.

I was sketching out some notes this past weekend. Prior to the BCD to 7-segment decoder, we’ll already have tackled a BCD to decimal decoder, so a lot of the groundwork will have been laid. We’ll start by explaining how the segments in the 7-segment display are identified using the letters ‘a’ through ‘g’ and showing the combinations of segments we use to create the decimal digits 0 through 9.


Using a 7-segment display to represent the decimal digits 0 through 9 (Click image to see a larger version — Image source: Max Maxfield)

Next, we will create the truth table. We’ll be using a common cathode 7-segment display, which means active-high outputs from our decoder because this is easier for newbies to wrap their brains around.


Truth table for BCD to 7-segment decoder with active-high outputs (Click image to see a larger version — Image source: Max Maxfield)

Observe the input combinations shown in red in the truth table. We’ll point out that, in our case, we aren’t planning on using these input combinations, which means we don’t care what the corresponding outputs are because we will never actually see them (we’re using ‘X’ characters to represent the “don’t care” values). In turn, this means we can use these don’t care values in our Karnaugh maps to aid us in our logic minimization and optimization.

The funny thing is that it’s been a long time since I performed Karnaugh map minimizations by hand. As a result, on my first pass, I missed a couple of obvious optimizations. Just for giggles and grins, I’ve shown the populated maps below. Before you look at my solutions, why don’t you take a couple of minutes to perform your own minimizations to see how much you remember?


Use these populated maps to perform your own minimizations and optimizations (Click image to see a larger version — Image source: Max Maxfield)

I should point out that I’m a bit rusty at this sort of thing, so you might want to check that I’ve correctly captured the truth table and accurately populated these maps before you leap into the fray with gusto and abandon.

Remember that we’re dealing with absolute beginners here, so, even though I will have recently introduced them to Karnaugh map techniques, I think it would be a good idea to commence this portion of the discussions by walking them through the process for segment ‘a’ step-by-step, as illustrated below.


Karnaugh map minimizations for 7-segment display (Click image to see a larger version — Image source: Max Maxfield)

Next, I extracted the Boolean equations corresponding to the Karnaugh map minimizations. As shown below, I’ve color-coded any product terms that appear multiple times. I don’t recall seeing this done before, but I think it could be a useful aid for beginners. Once again, I’d be interested to hear your thoughts about this.


Boolean equations for 7-segment display (Click image to see a larger version — Image source: Max Maxfield)
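
If you fancy double-checking the extracted equations, a few lines of Python will brute-force a candidate expression against the truth table. This is only a sketch: the equation below is one minimization you might arrive at for segment ‘a’ (taking input A as the most significant BCD bit), not necessarily the one shown in the figure, and you can substitute your own equations to test the rest:

# Segment 'a' is lit for the digits 0, 2, 3, 5, 6, 7, 8, and 9;
# input codes 10-15 are "don't cares," so we only test 0-9.
SEG_A_ON = {0, 2, 3, 5, 6, 7, 8, 9}

def seg_a(A, B, C, D):
    # Candidate minimization: a = A + C + B.D + /B./D
    return A or C or (B and D) or (not B and not D)

for value in range(10):
    A, B, C, D = (value >> 3) & 1, (value >> 2) & 1, (value >> 1) & 1, value & 1
    assert bool(seg_a(A, B, C, D)) == (value in SEG_A_ON), value

print("Candidate equation for segment 'a' matches the truth table")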

Actually, I’d love to hear your thoughts on anything I’ve shown here. Do you think the way I’ve drawn the diagrams is conducive to beginners understanding what’s going on? Can you spot anything I’ve missed or could do better? I can’t wait for you to see what we have planned with regards to the circuit board and the “hands-on” part of this forthcoming series (I will, of course, be reporting back further in the future). Until then, as always, I welcome your comments, questions, and suggestions.

Originally posted HERE.

Read more…

Everybody Needs a ShieldBuddy

 

Arduino Mega footprint; three 32-bit cores all running at 200 MHz; 4 Mbytes of Flash and 500 Kbytes of RAM; works with the Arduino IDE; what’s not to love?

I tend to have a lot of hobby projects on the go at any particular time. Occasionally, I even manage to finish one. More rarely, one actually works.


 Awesome Audio Reactive Artifact (Click image to see a larger version — Image source: Max Maxfield)

I also have a soft spot for 8-bit microprocessors and microcontrollers. Thus, many of my hobby projects are based on the Arduino Nano, Uno, or Mega platforms.

Take my Awesome Audio Reactive Artifact, for example. This little rascal is currently powered using an Arduino Uno, which is driving 145 tricolor NeoPixels. In turn, these NeoPixels are mounted under 31 defunct vacuum tubes (see also Awesome Audio-Reactive Artifact meets BirmingHAMfest).

The Awesome Audio Reactive Artifact also includes an ADMP401-based MEMS Microphone breakout board (BOB), which costs $10.95 from the guys and gals at SparkFun. In turn, this feeds a cheap-and-cheerful MSGEQ7 audio spectrum analyzer chip, which relieves the Arduino of a lot of processing pain (see also Using MSGEQ7s In Audio-Reactive Projects).

 

Countdown Timer (Click image to see a larger version — Image source: Max Maxfield)

Sad to relate, 8-bit Arduinos sometimes run out of steam. Consider my Countdown Timer, for example, whose task it is to display the years (YY), months (MM), days (DD), hours (HH), minutes (MM), and seconds (SS) to my 100th birthday (see also Yes! My Countdown Timer is Alive!).

This little scamp employs 12 Lixie displays, each of which contains 20 NeoPixels, which gives us 240 NeoPixels in all. As the sophistication of the effects I was trying to implement increased, so did my processing requirements. Thus, I decided to use a Teensy 3.6, which features a 32-bit 180 MHz ARM Cortex-M4 processor with a floating-point unit. Furthermore, the Teensy 3.6 boasts 1 Mbyte of Flash memory for code, along with 256 Kbytes of RAM for dynamic data and variables.

Prognostication Engine (Click image to see a larger version — Image source: Max Maxfield)

All of which brings us to the pièce de résistance in the form of my Pedagogical and Phantasmagorical Prognostication Engine (see also The Color of Prognostication). This bodacious beauty sports two knife switches, eight toggle switches, ten pushbutton switches, five motorized potentiometers, six analog meters, and a variety of sensors (temperature, barometric pressure, humidity, proximity). All of this requires a bunch of analog and digital general-purpose input/output (GPIO) pins.

Furthermore, in addition to a wealth of weird, wonderful, and wide-ranging sound effects, the engine is equipped with 354 NeoPixels. These could potentially be daisy-chained from a single pin, although I ended up partitioning them into five strands. More importantly, the various effects require a lot of processing and memory.

When things finally started to come together on this project, I was initially thinking of using an Arduino Mega to power the beast, mainly because it has 54 digital pins and 16 analog inputs. On the downside, we have to remember that this is only an 8-bit processor gamely running at 16 MHz with a scant 256 Kbytes of Flash memory and 8 Kbytes of RAM. Furthermore, the Mega doesn’t have a floating-point unit (FPU), which means that if you need to use floating-point operations, this will really impact the performance of your programs.

The tri-core ShieldBuddy (Click image to see a larger version — Image source: Hitex)

But turn that frown upside down into a smile, because the boffins at Hitex (hitex.com) have taken the Arduino Mega form factor and slapped an awesome Infineon Aurix TC275 processor down on it.

These processors are typically found only in state-of-the-art embedded systems. They rarely make it into the maker world (like the somewhat disheveled scientist who is desperately in need of a haircut says in the movie Independence Day: “They don’t let us out very often”).

The result is called the ShieldBuddy. As you can see in this video, I just took delivery of my first ShieldBuddy, and I’m really rather excited (I say “first” because I have no doubt this is going to be one of many).

So, what makes the ShieldBuddy so special? Well, how about the fact that the TC275 boasts three independent 32-bit cores, all running at 200 MHz, each with its own FPU, and all sharing 4 Mbytes of Flash and 500 Kbytes of RAM (actually, this is a bit of a simplification, but it will suffice for now). 

There’s no need for you to be embarrassed — I’m squealing in excitement alongside you. Now, if you are a professional programmer, you’ll be delighted to hear that the main ShieldBuddy toolchain is the Eclipse-based “FreeEntryToolchain” from HighTec/PLS/Infineon. This is a full-on C/C++ development environment with a source-level debugger and suchlike.

But how about if — like me — you aren’t used to awesomely powerful (and correspondingly complicated) Eclipse-based toolchains? Well, there’s no need to worry, because the guys and gals at Hitex also have a solution for the Arduino’s integrated development environment (IDE). 

Sit up straight and pay attention, because this is where things start to get really clever. In addition to any functions you create yourself, an Arduino sketch (program) always contains two functions: setup(), which runs only one time, and loop(), which runs over and over again. 

Now, remember that the ShieldBuddy has three processor cores, which we might call Core 0, Core 1, and Core 2. Well, you can take your existing sketches and compile/upload them for the ShieldBuddy, and — by default — they will run on Core 0. 

You could achieve the same effect by renaming your setup() function to be setup0(), and renaming your loop() function to be loop0(), which explicitly tells the compiler to target these functions at Core 0. 

The point is that you can also create setup1() and loop1() functions, which will automatically be compiled to run on Core 1, and you can create setup2() and loop2() functions, which will automatically be compiled to run on Core 2. Any of your remaining functions will be compiled in such a way as to run on whichever of the cores needs to use them.

Although each core runs independently, they can communicate between themselves using techniques like shared memory. Also, you can use interrupts to coordinate and communicate between cores. 

And things just keep on getting better and better, because it turns out that a NeoPixel library is already available for the ShieldBuddy. 

I’m just about to start experimenting with this little beauty. All I can say is that everybody needs a ShieldBuddy and my ShieldBuddy is my new best friend (sorry Little Steve). How about you? Could any of your projects benefit from the awesome processing power provided by the ShieldBuddy? 

Originally posted HERE.

Read more…

The COVID phenomenon has upended businesses in several verticals, and among the most impacted is the foodservice industry, characterized by high-touch, labor-intensive, and mostly indoor operations. However, this industry has taken the initiative to adopt modern technology to fight its way out of the crisis.

Case in point - a major US foodservice company with nearly $10B in revenue and over 1000 stores nationwide recently made the decision to partner with Ayla Networks as part of its digital transformation initiative. Why? The company’s leadership team saw that new technology, specifically IoT, was a way to improve store operations, accelerate menu innovation, and ultimately deliver a superior guest experience. The results are real and quantifiable – an estimated 25% in OpEx savings, 10% revenue uplift, and 15% fewer compliance incidents just in the first year.



These are impressive numbers, and they are reflective of a broader trend in the foodservice industry, despite the fact that foodservice companies have unique challenges such as a large base of deployed equipment that requires retrofitting, heterogeneous equipment types, and poor connectivity infrastructure. The quick-service restaurant segment is not the only one experimenting with new connected technology; other segments such as coffee shop chains, retail & convenience stores, and commercial kitchens are also adopting IoT to drive improved food safety and quality assurance, roll out new recipes faster, and improve equipment reliability. The trend is evident not just in the fully owned store model but also in the franchise-based model.

Some of the key IoT-driven business outcomes include:

Food Safety & Quality Assurance: Sensor-based IoT solutions can monitor food preparation procedures and ensure they are followed consistently in adherence to policy. Any anomalies can be flagged and remediated to better manage risk and food quality.

Faster Recipe Rollouts: The foodservice industry is a competitive, tight-margin business where the smallest differentiators can make a difference. The ability to use automated over-the-air recipe updates from the cloud can speed up new offerings and keep the menu fresh, driving topline growth. Most importantly, an effective recipe management system enables A/B testing of new recipes across markets.

Improve Equipment Reliability: Foodservice enterprises have a large base of deployed assets, including ovens, ranges, fryers, coolers, and soda machines, among others: complex equipment with unpredictable failure patterns. Using IoT analytics to improve uptime means minimizing lost revenue and less business disruption. Equally, understanding the true cost of support for different products and product types can streamline decisions related to purchasing, warranties, and when to fix vs. replace.



Overall, IoT can play a powerful role in enhancing business performance and guest satisfaction in foodservice organizations. By leveraging the power of sensors, cloud, analytics, and mobile applications, foodservice companies can gain a real competitive advantage and realize sustainable growth for the long term. However, one shouldn’t expect this innovation to come from the OEMs that supply the operators; that would be the equivalent of the fox guarding the henhouse, since these equipment makers have much service and maintenance revenue at stake. The key to success is choosing a neutral platform provider that can reliably scale to managing millions of devices (equipment) across thousands of distributed locations, can work seamlessly across appliances from a variety of OEMs, and possesses the analytical edge to transform the ‘big data’ from stores into advanced descriptive and prescriptive analysis.

Originally posted here

Read more…

On-chip UHD SS–MSCs as a device-unitized power source. Credit: Professor Sang-Young Lee, UNIST

by Ulsan National Institute of Science and Technology

A tiny microsupercapacitor (MSC) that is as small as the width of a person's fingerprint and can be integrated directly with an electronic chip has been developed. It has attracted major attention as a novel technology for the era of the Internet of Things (IoT), since it can be driven independently when applied to individual electronic components.

 

Through the study, Professor Sang-Young Lee and his research team in the School of Energy and Chemical Engineering at UNIST have unveiled a new class of ultrahigh areal number density solid-state MSCs (UHD SS–MSCs) on a chip via electrohydrodynamic (EHD) jet printing. According to the research team, this is the first study to exploit EHD jet printing in the MSCs.

A supercapacitor (SC), also known as an ultracapacitor, can store much more energy than an ordinary capacitor. The benefits of supercapacitors include high power delivery and a longer cycle life compared to lithium-based secondary batteries. In particular, they can be produced as small as the width of a person's fingerprint via semiconductor manufacturing processes, and are thus also applicable to wearables and Internet of Things (IoT) devices.

However, because the heat produced in the manufacturing process may degrade the electrical characteristics of the supercapacitor, it has been difficult to connect supercapacitors directly to electronic components. In addition, the fabrication method that combines supercapacitors with electronic components via inkjet printing also has the disadvantage of lower precision.

The research team solved this issue using EHD jet printing, a high-resolution patterning technique used in microelectronics. EHD jet printing deposits electrode and electrolyte inks much as conventional inkjet printing does, yet it can control the printed liquid with an electric field.

"We were able to produce up to 54.9 unit cells per square centimeter (cm2) via electro-hydrodynamic jet printing technique, and thus the output of 65.9 volts (V) was achieved in the same area," says Kwonhyung Lee (Combined M.S/Ph.D. of Energy and Chemical Engineering, UNIST), the first author of the study.

The team also succeeded in fabricating 36 unit cells on a chip (area = 8.0 mm × 8.2 mm), achieving an areal number density (54.9 cells cm⁻²) and areal operating voltage (65.9 V cm⁻²) that lie far beyond those of previously reported MSCs fabricated by printing techniques. Besides, upon exposure to high temperature (80°C), these cells maintained normal cyclic voltammetry (CV) profiles, proving that they can withstand the excessive heat generated during the operation of actual electronic components. In addition, these cells can provide customized power supplies, as they can be connected either in series or in parallel.

"In this study, we have demonstrated on-chip UHD SS–MSCs fabricated via EHD jet printing," says Professor Lee. "The on-chip UHD SS–MSCs presented here hold great promise as a new platform technology for miniaturized monolithic power sources with customized design and tunable electrochemical properties."

Originally posted HERE

 
Read more…

In order to form proper networks to share data, the Internet of Things (IoT) needs reliable communications and connectivity. To meet this demand, there's a wide range of connectivity technologies that operators, as well as developers, can opt for.

IoT Connectivity Groups

IoT connectivity technologies are currently divided into two groups: cellular-based IoT and unlicensed LPWAN. The first group is built around licensed spectrum, which offers a more consistent and robust infrastructure. It supports higher data rates, but at the cost of shorter battery life and more expensive hardware, although the hardware is becoming cheaper.

Cellular-Based IoT

Because acquiring licensed spectrum is expensive, cellular-based IoT is only offered by large operators, who have access to both the licensed spectrum and the expensive hardware. Cellular IoT connectivity itself comes in two types: narrowband IoT (NB-IoT) and category M1 IoT (Cat-M1).

Although both are based on cellular standards, there is one big difference between the two: NB-IoT has a smaller bandwidth than Cat-M1 (in fact, 10x smaller) and thus uses a lower transmission power. However, both still have a very long range, with NB-IoT reaching up to 100 km.

Cellular-standard IoT connectivity is more reliable, and device operational lifetimes are longer compared to unlicensed LPWAN. But when it comes to choosing, most operators prefer NB-IoT over Cat-M1, because Cat-M1 provides higher data rates that are not usually necessary, and its higher cost puts operators off.

Cat-M1 is mostly chosen by large-scale operators because it provides mobility support, which is suitable for transportation and traffic-control-based networks. It can also be useful in emergency response situations, as it offers voice data transfer.

The hardware (module) used for cellular IoT is relatively expensive compared to LPWAN, costing around $10 versus roughly $2 for an LPWAN module. However, this cost has been dropping rapidly because of popular demand.

Unlicensed LPWAN

As for the unlicensed LPWANs, they are used by those who don't have the budget for cellular-based IoT. They are designed for customized IoT networks and offer lower data rates, but with increased battery life and a long transmission range, and they can be deployed easily. At the moment, there are two main types of unlicensed LPWAN: LoRa (Long Range) and SigFox.

Both types are designed for devices with a lower price, increased battery life, and long range. Their coverage range can be up to 10 km, and their connectivity cost is as low as $2 per module, sometimes even lower. Therefore, they are ideal for local areas.

Weightless LPWAN

Although there are many variants of LPWAN, Weightless is considered to be one of the most popular. This is because the Weightless Special Interest Group (SIG) currently offers three different protocols: Weightless-N, Weightless-W, and Weightless-P. All three work differently, as they have different modalities.

Weightless-W

First off, we have the Weightless-W open standard, which is designed to operate in TV white space (TVWS). TVWS is the inactive or unoccupied spectrum found between channels actively used in the UHF and VHF bands; its frequency spans 470 MHz to 790 MHz. For those who don't know, this is similar to what Neul was developing before being acquired by Huawei. Now, while using TVWS can be great because it uses ultra-high-frequency spectrum, it has one downside: in theory it seems perfect, but in practice it is difficult, because the rules and regulations for utilizing TVWS for IoT vary greatly.

In addition to this, the end nodes of this model don't work as well as they should: they are designed to operate in a small part of the spectrum, and it is difficult to design an antenna that can cover such a wide band of spectrum. This is why TVWS can be difficult to deploy. The Weightless-W is considered a good option in:

  • The smart oil and gas sectors.

Weightless-N

Second up, we have the ultra-narrowband system, Weightless-N. This model is similar to SigFox, as the two have a lot in common. The best thing about it is that it is made up of different networks instead of being an end-to-end enclosed system. Weightless-N uses the same differential binary phase shift keying (DBPSK) digital modulation scheme as SigFox.

The Weightless-N line is operated by Nwave, a popular IoT hardware and software developer. However, while this model is best for sensor-based networks, temperature readings, tank level monitoring, and more, there are some problems with it. For instance, Nwave end nodes require a temperature-compensated crystal oscillator (TCXO).

In addition to this, it has an unbalanced link budget: there is much more sensitivity on the uplink to the base station than on the downlink coming back.

Weightless-P

Finally, we have the Weightless-P, the latest model in the group, launched some time after the other two. What people love most about this one is its two-way capability; in addition, it uses a 12.5 kHz channel, which is pretty amazing. The Weightless-P doesn't require a TCXO, which sets it apart from Weightless-N and -W.

The main company behind Weightless-P is Ubiik. The only downside of this model is that it is not ideal for wide-area networks, as it offers a range of around 2 km. However, the Weightless-P is still ideal for:

  • Private networks.
  • More sophisticated use cases.
  • Areas where uplink data and downlink control are important.

Capacity

Because the Weightless protocols are based on software-defined radio (SDR), their base stations for narrowband signals are much more complex, ending up creating thousands of small binary phase-shift keying channels. Although this will give you more capacity, it will be a burden on your wallet.

In addition to this, since Weightless-N end nodes require a TCXO, they are more expensive. The TCXO is needed where there is a threat of the frequency becoming unstable as the temperature changes at the end node.

Range

Talking about ranges, Weightless-N and -W have a range of around 5 km in urban environments. As for the Weightless-P, it can go up to 2 km.

Comparison

Weightless and SigFox

If we take the technology into consideration, then Weightless-N and SigFox are pretty similar. However, they are different when it comes to go-to-market: since Weightless is a standard, it requires another company to create an IoT network based on it, whereas SigFox is a different type of solution, operating as an end-to-end network.

Weightless and LoRa

In terms of technology, Weightless and LoRa/LoRaWAN are different. However, the functionality of Weightless-N and LoRaWAN is similar, because both are uplink-based systems. Weightless is also sometimes considered a very good alternative when LoRa is not feasible for the user.

Weightless and Symphony Link

The Symphony Link and Weightless-P standards are more similar to each other; for instance, both focus on private networks. However, Symphony Link has much better range performance because it uses LoRa modulation instead of minimum-shift keying (MSK).

Originally posted here

Read more…

PYNQ is great for accelerating Python applications in programmable logic. Let's take a look at how we can use it with the OpenMV camera.

Things used in this project

Hardware:

  • Avnet Ultra96-V2 (can also use the V1)
  • OpenMV Cam M7

Software:

  • Xilinx PYNQ Framework

Introduction

Image processing is required for a range of applications, from vision-guided robotics to machine vision in industrial settings.

In this project we are going to look at how we can fuse the OpenMV camera with the Ultra96 running PYNQ. This will allow our PYNQ application to offload some image processing to the camera. Doing so will provide a higher-performance system and open up the Ultra96 running PYNQ to the OpenMV ecosystem.

 

What Is the OpenMV Camera 

The OpenMV camera is a low-cost machine vision camera that is programmed in Python. Thanks to this architecture, we can offload some of the image processing to the camera itself, meaning the image frames received by our Ultra96 can already have faces identified, eyes tracked, or Sobel filtering applied; it all depends on how we set up the OpenMV Camera.

As the OpenMV camera has been designed to be extensible, it provides 10 external I/O pins which can be used to drive external sensors. These support a range of interfaces, from UART to SPI, I2C, and PWM. Of course, the PWM is very useful for driving servos.

One very useful feature of the OpenMV camera is its LEDs: mine (an OpenMV M7) provides a tri-colour LED, which can be used to output red, green, and blue, plus a separate IR LED. As the sensor is IR sensitive, this can be useful for low-light performance.

OpenMV Camera

How Does the OpenMV Camera Work

The OpenMV Cam uses MicroPython to control the imager and output frames over the USB link. MicroPython is intended for use on microcontrollers and is based on Python 3.4. To use the OpenMV camera, we first need to generate a MicroPython script which configures the camera for the given algorithm we wish to implement. We then execute this script by uploading and running it over the USB link.

This means we need some OpenMV APIs and libraries on a host machine to communicate with the OpenMV Camera.

To develop the script, we want to be able to ensure it works, which is where the OpenMV IDE comes into its own: it allows us to develop and test the script which we later use in our Ultra96 application.

We can develop this script using a Windows, Mac, or Linux desktop.

 

Creating the OpenMV Script using the OpenMV IDE

To get started with the OpenMV IDE, we first need to download and install it. Once it is installed, the next step is to connect our OpenMV camera to it over the USB link and run a script on it.

To get started, we can run the provided hello world example, which configures the camera to output a standard RGB image at QVGA resolution. On the right-hand side of the IDE you will be able to see the images output from the camera.

 

We can use this IDE to develop scripts for the OpenMV camera, such as the one below, which detects and identifies circles in the captured image.

Note the frame rate is lower when the camera is connected to the IDE.

 

We can use the scripts developed here in our Ultra96 PYNQ implementation. Let's take a look at how we set up the Ultra96 and PYNQ.

Setting Up the Ultra96 PYNQ Image

The first thing we need to do, if we have not already done so, is to download a PYNQ image and create a PYNQ SD card so we can run the PYNQ framework on the Ultra96.

As we want to use the Xilinx image processing overlay we should download the Ultra96 PYNQ v2.3 image.

Once you have this image, creating an SD card is very simple: extract the image from the compressed file and write it to an SD card using a program such as Etcher or Win32 Disk Imager.

With an SD card available, we can then boot the Ultra96 and connect to the PYNQ framework in one of the following ways:

  • Use a USB Ethernet connection over the MicroUSB (upstream USB connection).
  • Connect via WiFi.
  • Use the Ultra96 as a single-board computer and connect a monitor, keyboard and mouse.

For this project I used the USB Ethernet connection.

The next thing to do is to ensure we have the necessary overlays to accelerate image processing functions into the programmable logic. To do this, we need to install the PYNQ computer vision overlay.

Downloading the Image Processing Overlay

Installing this overlay is very straightforward. Open a browser window and connect to the web address 192.168.3.1 (the USB Ethernet address). This will open a login page for the Jupyter notebooks; the password is xilinx.

 

Upon logging in, you will see the following folders and scripts.

 

Click on New and select Terminal; this will open a new terminal in a browser window. To download and install the PYNQ computer vision overlays, we enter the following command:

sudo pip3 install --upgrade git+https://github.com/Xilinx/PYNQ-ComputerVision.git
 

Once these are downloaded, if you look back at the Jupyter home page, you will see a new directory called pynqOpenCV.

 

Using these Jupyter notebooks we can test the image processing performance when we accelerate OpenCV functions into the programmable logic.

 

Typically, as can be seen in the image above, the hardware acceleration greatly outperforms implementing the algorithm in software.
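
As a rough way to quantify this for yourself, you can time the software-only OpenCV call in a notebook cell and compare it against the hardware-accelerated equivalent from the pynqOpenCV notebooks. A minimal sketch of the software baseline (the 1080p frame and 3x3 kernel here are arbitrary choices):

import time
import cv2
import numpy as np

# Synthetic 1080p grayscale frame and a 3x3 averaging kernel
frame = np.random.randint(0, 255, (1080, 1920), dtype=np.uint8)
kernel = np.ones((3, 3), np.float32) / 9.0

t0 = time.time()
for _ in range(10):
    cv2.filter2D(frame, -1, kernel)  # software-only 2D convolution
print("SW filter2D: %.1f ms/frame" % ((time.time() - t0) / 10 * 1e3))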

Of course, we can call this overlay from our own Jupyter notebooks.

 

Setting Up the OpenMV Camera in PYNQ

The next step is to configure the Ultra96 PYNQ instance to be able to control the OpenMV camera using its APIs. We can obtain these by downloading the OpenMV git repo using the command below in a terminal window on the Ultra96.

git clone https://github.com/openmv/openmv
 

Once this is downloaded, we need to copy the file pyopenmv.py from openmv/tools to /usr/lib/python3.6.
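
Assuming the repository was cloned into the current directory, this is a single command in a terminal window:

sudo cp openmv/tools/pyopenmv.py /usr/lib/python3.6/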

This will allow us to control the OpenMV camera from within our Jupyter applications.

To be able to do this, we need to know which serial port the OpenMV camera enumerates as. This will generally be ttyACM0 or ttyACM1; we can find this out by listing the serial devices in the /dev directory.
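
For example, the following will list any OpenMV CDC devices that have enumerated:

ls /dev/ttyACM*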

 

Now we are ready to begin working with the OpenMV camera in our applications. Let's take a look at how we set it up in our Jupyter scripts.

 

Initial Test of OpenMV Camera

The first thing we need to do in a new Jupyter notebook is to import the necessary packages. This includes pyopenmv, which we just installed.

We will also be importing numpy, as the image is returned as a numpy array, so that we can display it using numpy functionality.

import pyopenmv
import time
import sys
import numpy as np

Next, we define the script we developed in the IDE. For "first light" with PYNQ and OpenMV, we will use the hello world script to obtain a simple image.

script = """

# Hello World Example

#

# Welcome to the OpenMV IDE! Click on the green run arrow button below to run the script!

import sensor, image, time

import pyb

sensor.reset()                      # Reset and initialize the sensor.

sensor.set_pixformat(sensor.RGB565) # Set pixel format to RGB565 (or GRAYSCALE)

sensor.set_framesize(sensor.QVGA)   # Set frame size to QVGA (320x240)

sensor.skip_frames(time = 2000)     # Wait for settings take effect.

clock = time.clock()                # Create a clock object to track the FPS.

red_led = pyb.LED(1)

red_led.off()

red_led.on()

while(True):

   clock.tick() 

   img = sensor.snapshot()         # Take a picture and return the image.

"""

Once the script is defined the next thing we need to do is connect to the OpenMV camera and download the script.

 

portname = "/dev/ttyACM0"
connected = False
pyopenmv.disconnect()

for i in range(10):
    try:
        # Open the CDC port, with a small timeout when connecting.
        pyopenmv.init(portname, baudrate=921600, timeout=0.050)
        connected = True
        break
    except Exception as e:
        connected = False
        time.sleep(0.100)

if not connected:
    print ( "Failed to connect to OpenMV's serial port.\n"
            "Please install OpenMV's udev rules first:\n"
            "sudo cp openmv/udev/50-openmv.rules /etc/udev/rules.d/\n"
            "sudo udevadm control --reload-rules\n\n")
    sys.exit(1)

# Set a higher timeout after connecting for lengthy transfers.
pyopenmv.set_timeout(1*2) # SD cards can cause big hiccups.
pyopenmv.stop_script()
pyopenmv.enable_fb(True)
pyopenmv.exec_script(script)

Finally, once the script has been downloaded and is executing, we want to read out the frame buffer. The cell below reads out the frame buffer and saves it as a JPEG file in the PYNQ file system.

 

running = True
from PIL import Image

while running:
    fb = pyopenmv.fb_dump()
    if fb is not None:
        # fb[2] holds the frame as a numpy array.
        img = Image.fromarray(fb[2], 'RGB')
        img.save("frame.jpg")   # Save the frame to the PYNQ file system.
        time.sleep(0.100)
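
To actually view the captured frame inline in the notebook, one option (a minimal sketch, reading back the frame.jpg written by the loop above) is:

from PIL import Image
from matplotlib import pyplot as plt

img = Image.open("frame.jpg")  # read back the frame saved above
plt.imshow(img)
plt.axis("off")
plt.show()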

 

When I ran this script, the "first light" image below was received, showing me working in my office.

 

Having achieved this, the next step is to start working with more advanced scripts in the PYNQ Jupyter notebook. Using the same approach as above, we can redefine scripts which can be used for different processing, including edge detection:

script = """

import sensor, image, time

sensor.reset() # Initialize the camera sensor.

sensor.set_pixformat(sensor.GRAYSCALE) # or sensor.RGB565

sensor.set_framesize(sensor.QQVGA) # or sensor.QVGA (or others)

sensor.skip_frames(time = 2000) # Let new settings take affect.

sensor.set_gainceiling(8)

clock = time.clock() # Tracks FPS.

while(True):

   clock.tick() # Track elapsed milliseconds between snapshots().

   img = sensor.snapshot() # Take a picture and return the image.

   # Use Canny edge detector

   img.find_edges(image.EDGE_CANNY, threshold=(50, 80))

   # Faster simpler edge detection

   #img.find_edges(image.EDGE_SIMPLE, threshold=(100, 255))

   print(clock.fps()) # Note: Your OpenMV Cam runs about half as fast while

"""

For Canny edge detection when imaging a MiniZed Board

 

Alternatively, we can extract keypoints from images for tracking in subsequent images.

script = """

import sensor, time, image

# Reset sensor

sensor.reset()

# Sensor settings

sensor.set_contrast(3)

sensor.set_gainceiling(16)

sensor.set_framesize(sensor.VGA)

sensor.set_windowing((320, 240))

sensor.set_pixformat(sensor.GRAYSCALE)

sensor.skip_frames(time = 2000)

sensor.set_auto_gain(False, value=100)

def draw_keypoints(img, kpts):

   if kpts:

       print(kpts)

       img.draw_keypoints(kpts)

       img = sensor.snapshot()

       time.sleep(1000)

kpts1 = None

# NOTE: uncomment to load a keypoints descriptor from file

#kpts1 = image.load_descriptor("/desc.orb")

#img = sensor.snapshot()

#draw_keypoints(img, kpts1)

clock = time.clock()

while (True):

   clock.tick()

   img = sensor.snapshot()

   if (kpts1 == None):

       # NOTE: By default find_keypoints returns multi-scale keypoints extracted from an image pyramid.

       kpts1 = img.find_keypoints(max_keypoints=150, threshold=10, scale_factor=1.2)

       draw_keypoints(img, kpts1)

   else:

       # NOTE: When extracting keypoints to match the first descriptor, we use normalized=True to extract

       # keypoints from the first scale only, which will match one of the scales in the first descriptor.

       kpts2 = img.find_keypoints(max_keypoints=150, threshold=10, normalized=True)

       if (kpts2):

           match = image.match_descriptor(kpts1, kpts2, threshold=85)

           if (match.count()>10):

               # If we have at least n "good matches"

               # Draw bounding rectangle and cross.

               img.draw_rectangle(match.rect())

               img.draw_cross(match.cx(), match.cy(), size=10)

           print(kpts2, "matched:%d dt:%d"%(match.count(), match.theta()))

           # NOTE: uncomment if you want to draw the keypoints

           #img.draw_keypoints(kpts2, size=KEYPOINTS_SIZE, matched=True)

   # Draw FPS

   img.draw_string(0, 0, "FPS:%.2f"%(clock.fps()))

"""

Circle Detection

 

import sensor, image, time

sensor.reset()
sensor.set_pixformat(sensor.RGB565) # grayscale is faster
sensor.set_framesize(sensor.QQVGA)
sensor.skip_frames(time = 2000)
clock = time.clock()

while(True):
    clock.tick()
    img = sensor.snapshot().lens_corr(1.8)
    # Circle objects have four values: x, y, r (radius), and magnitude. The
    # magnitude is the strength of the detection of the circle. Higher is
    # better...
    # `threshold` controls how many circles are found. Increase its value
    # to decrease the number of circles detected...
    # `x_margin`, `y_margin`, and `r_margin` control the merging of similar
    # circles in the x, y, and r (radius) directions.
    # r_min, r_max, and r_step control what radiuses of circles are tested.
    # Shrinking the number of tested circle radiuses yields a big performance boost.
    for c in img.find_circles(threshold = 2000, x_margin = 10, y_margin = 10, r_margin = 10,
                              r_min = 2, r_max = 100, r_step = 2):
        img.draw_circle(c.x(), c.y(), c.r(), color = (255, 0, 0))
        print(c)

    print("FPS %f" % clock.fps())

 

 

 

This ability to offload processing to either the OpenMV camera or the Ultra96 programmable logic running PYNQ provides the system designer with maximum flexibility.

 

Wrap Up

The ability to use the OpenMV camera, coupled with the PYNQ computer vision libraries and other overlays such as the Kalman filter and base overlays, lets us implement algorithms that enable vision-guided robotics. Using the base overlay and the input/output processors also enables us to communicate with lower-level drives, interfaces, and other sensors required to implement such a solution.

Originally posted here.

 

Read more…

Arm DevSummit 2020 debuted this week (October 6 – 8) as an online virtual conference focused on engineers and providing them with insights into the Arm ecosystem. The summit lasted three days, over which Arm painted an interesting technology story about the current and future state of computing and where developers fit within that story. I’ve been attending Arm TechCon (which has become Arm DevSummit) for more than half a decade now, and as I perused the content, there were several take-a-ways I noticed for developers working on microcontroller-based embedded systems. In this post, we will examine these key take-a-ways and I’ll point you to some of the sessions that I think may pique your interest.

(For those of you that aren’t yet aware, you can register up until October 21st (for free) and still watch the conference materials up until November 28th. Click here to register.)

Take-A-Way #1 – Expect Big Things from NVIDIA’s Acquisition of Arm

As many readers probably already know, NVIDIA is in the process of acquiring Arm. This acquisition has the potential to be a focal point leading to a technological revolution in computing, particularly around artificial intelligence, but it will also impact nearly every embedded system at the edge and beyond. While many of us have probably wondered what plans NVIDIA CEO Jensen Huang may have for Arm, the keynotes for October 6th include a fireside chat between Jensen Huang and Arm CEO Simon Segars. Listening to this conversation is well worth the time and will help give developers some insights into the future, along with assurances that the Arm business model will not be dramatically upended.

Take-A-Way #2 – Machine Learning for MCU’s is Accelerating

It is sometimes difficult at a conference to get a feel for what is real and what is a little more smoke and mirrors. Sometimes, announcements are real, but they just take several years to filter their way into the market and affect how developers build systems. Machine learning is one of those technologies that I find there is a lot of interest around but that developers also aren’t quite sure what to do with yet, at least in the microcontroller space. When we hear machine learning, we think artificial intelligence, big datasets and more processing power than will fit on an MCU.

There were several interesting talks at DevSummit around machine learning.

Some of these were foundational, providing embedded developers with the fundamentals to get started while others provided hands-on explorations of machine learning with development boards. The take-a-way that I gather here is that the effort to bring machine learning capabilities to microcontrollers so that they can be leveraged in industry use cases is accelerating. Lots of effort is being placed in ML algorithms, tools, frameworks and even the hardware. There were several talks that mentioned Arm’s Cortex-M55 architecture that will include Helium technology to help accelerate machine learning and DSP processing capabilities.

Take-A-Way #3 – The Constant Need for Reinvention

In my last take-a-way, I alluded to the fact that things are accelerating. Acceleration is not just happening in the technologies that we use to build systems, though. The very set of application domains that we can apply these technologies to is dramatically expanding. Not only can we start to deploy security and ML technologies at the edge, but also in domains such as space and medical systems. There were several interesting talks about how technologies are being used around the world to solve interesting and unique problems such as protecting vulnerable ecosystems, mapping the sea floor, fighting against diseases and so much more.

By carefully watching and listening, you’ll notice that many speakers have been involved in many different types of products over their careers and that they are constantly having to reinvent their skill sets, capabilities and even their interests! This is what makes working in embedded systems so interesting! It is constantly changing and evolving and as engineers we don’t get to sit idly behind a desk. Just as Arm, NVIDIA and many of the other ecosystem partners and speakers show us, technology is rapidly changing but so are the problem domains that we can apply these technologies to.

Take-A-Way #4 – Mbed and Keil are Evolving

There are also interesting changes coming to the Arm toolchains and tools like Mbed and Keil MDK. In Reinhard Keil’s talk, “Introduction to an Open Approach for Low-Power IoT Development“, developers got an insight into the changes that are coming to Mbed and Keil, with the core focus being on IoT development. The talk focused on the endpoint and discussed how Mbed and Keil MDK are being moved to an online platform designed to help developers move through product development faster, from prototyping to production. Keil Studio Online is currently in early access and will be released early next year.

(If you are interested in endpoints and AI, you might also want to check-out this article on “How Do We Accelerate Endpoint AI Innovation? Put Developers First“)

Conclusions

Arm DevSummit had a lot to offer developers this year, without the need to travel to California to participate (although I greatly missed catching up with friends and colleagues in person). If you haven’t already, I would recommend checking out DevSummit and watching a few of the talks I mentioned. There certainly were a lot more talks, and I’m still in the process of sifting through everything. Hopefully there will be a few sessions that will inspire you and give you a feel for where the industry is headed and how you will need to pivot your own skills in the coming years.

Originally posted here

Read more…

Will We Ever Get Quantum Computers?

In a recent issue of IEEE Spectrum, Mikhail Dyakonov makes a pretty compelling argument that quantum computing (QC) isn't going to fly anytime soon. Now, I'm no expert on QC, and there sure is a lot of money being thrown at the problem by some very smart people, but having watched from the sidelines, QC seems a lot like fusion research. Every year more claims are made, more venture capital gets burned, but we don't seem to get closer to useful systems.

Consider D-Wave Systems. They've been trying to build a QC for twenty years, and indeed do have products more or less on the market, including, it's claimed, one of 1024 q-bits. But there's a lot of controversy about whether their machines are either quantum computers at all, or if they offer any speedup over classical machines. One would think that if a 1K q-bit machine really did work the press would be all abuzz, and we'd be hearing constantly of new incredible results. Instead, the machines seem to disappear into research labs.

Mr. Dyakonov notes that optimistic people expect useful QCs in the next 5-10 years; those less sanguine expect 20-30 years, a prediction that hasn't changed in two decades. He thinks a window of many decades to never is more realistic. Experts think that a useful machine, one that can do the sort of calculations your laptop is capable of, will require between 1000 and 100,000 q-bits. To me, this level of uncertainty suggests that there is a profound lack of knowledge about how these machines will work and what they will be able to do.

According to the author, a 1000 q-bit machine can be in 2^1000 states (a classical machine with N transistors can be in only 2^N states), which is about 10^300, or more than the number of sub-atomic particles in the universe. At 100,000 q-bits we're talking 10^30,000, a mind-boggling number.
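
Those exponents are easy to sanity-check in Python:

from math import log10

print(1000 * log10(2))    # ~301, so 2^1000 is roughly 10^301
print(100000 * log10(2))  # ~30103, so 2^100000 is roughly 10^30103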

Because of noise, expect errors. Some theorize that those errors can be eliminated by adding q-bits, on the order of 1000 to 100,000 additional per q-bit. So a useful machine will need at least millions, or perhaps many orders of magnitude more, of these squirrelly microdots that are tamed only by keeping them at 10 millikelvin.

A related article in Spectrum mentions that a committee of prestigious researchers, tasked with assessing the probability of success with QC, concluded that:

"[I]t is highly unexpected" that anyone will be able to build a quantum computer that could compromise public-key cryptosystems (a task that quantum computers are, in theory, especially suitable for tackling) in the coming decade. And while less-capable "noisy intermediate-scale quantum computers" will be built within that time frame, "there are at present no known algorithms/applications that could make effective use of this class of machine," the committee says."

I don't have a dog in this fight, but am relieved that useful QC seems to be no closer than The Distant Shore (to quote Jan de Hartog, one of my favorite writers). If it were feasible to easily break encryption schemes, banking and other systems could collapse. I imagine blockchain would fail as hash algorithms became reversible. The resulting disruption would not be healthy for our society.

On the other hand, Bruce Schneier's article in the March issue of IEEE Computing Edge suggests that QC won't break all forms of encryption, though he does think a lot of our current infrastructure will be vulnerable. The moral: if and when QC becomes practical, expect chaos.

I was once afraid of quantum computing, as it involves mechanisms that I'll never understand. But then I realized those machines will have an API. Just as one doesn't need to know how a computer works to program in Python, we'll be insulated from the quantum horrors by layers of abstraction.
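
For instance, with today's Qiskit library (one of several such APIs), building and inspecting a simple quantum circuit is ordinary Python, with no physics degree required; a minimal sketch:

from qiskit import QuantumCircuit

# One q-bit, one classical bit: put the q-bit into superposition, then measure it.
qc = QuantumCircuit(1, 1)
qc.h(0)           # Hadamard gate -- equal superposition of 0 and 1
qc.measure(0, 0)  # collapse the q-bit into a classical bit
print(qc.draw())  # ASCII schematic of the circuit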

Originally posted here

Read more…

SSE Airtricity employees Derek Conty, left, Francie Byrne, middle, and Ryan Doran, right, install solar panels on the roof of Kinsale Community School in Kinsale, Ireland. The installation is part of a project with Microsoft to demonstrate the feasibility of distributed power purchase agreements. Credit: Naoise Culhane

by John Roach

Solar panels being installed on the roofs of dozens of schools throughout Dublin, Ireland, reflect a novel front in the fight against global climate change, according to a senior software engineer and a sustainability lead at Microsoft.

The technology company partnered with SSE Airtricity, Ireland's largest provider of 100% green energy and part of the FTSE-listed SSE Group, to install and manage the internet-connected solar panels, which are connected via Azure IoT to Microsoft Azure, a cloud computing platform.

The software tools aggregate and analyze real-time data on energy generated by the solar panels, demonstrating a mechanism for Microsoft and other corporations to achieve sustainability goals and reduce the carbon footprint of the electric power grid.

"We need to decarbonize the global economy to avoid catastrophic climate change," said Conor Kelly, the software engineer who is leading the distributed solar energy project for Microsoft Azure IoT. "The first thing we can do, and the easiest thing we can do, is focus on electricity."

Microsoft's $1.1 million contribution to the project builds on the company's ongoing investment in renewable energy technologies to offset carbon emissions from the operation of its datacenters.

A typical approach to powering datacenters with renewable energy is for companies such as Microsoft to sign so-called power purchase agreements with energy companies. The agreements provide the financial guarantees needed to build industrial-scale wind and solar farms and connections to the power grid.

The new project demonstrates the feasibility of agreements to install solar panels on rooftops distributed across towns with existing grid connections and use internet of things, or IoT, technologies to aggregate the accumulated energy production for carbon offset accounting.

"It utilizes existing assets that are sitting there unmonetized, which are roofs of buildings that absorb sunlight all day," Kelly said.

New Business Model

The project is also a proof-of-concept, or blueprint, for how energy providers can adapt as the falling price of solar panels enables distributed electric power generation throughout the existing electric power grid.

Traditionally, suppliers purchase power from central power plants and industrial-scale wind and solar farms and sell it to consumers on the distribution grid. Now, energy providers like SSE Airtricity provide renewable energy solutions that allow end consumers to generate power, from sustainable sources, using the existing grid connection on their premises.

"The more forward-thinking energy providers that we are working with, like SSE Airtricity, identify this as an opportunity and industry changing shift in how energy will be generated and consumed," Kelly noted.

The opportunity comes in the ability to finance the installation of solar panels and batteries at homes, schools, businesses and other buildings throughout a community and leverage IoT technology to efficiently perform a range of services from energy trading to carbon offset accounting.

Kelly and his team with Azure IoT are working with SSE Airtricity to develop the tools and machine learning models necessary to unlock this opportunity.

"Instead of having utility scale solar farms located outside of cities, you could have a solar farm at the distribution level, spread across a number of locations," said Fergal Ahern, a business energy solutions manager and renewable energy expert with SSE Airtricity.

For the distributed power purchase agreement, SSE Airtricity uses Azure IoT to aggregate the generation of all the solar panels installed across 27 schools around the provinces of Leinster, Munster and Connacht and run it through a machine learning model to determine the carbon emissions that the solar panels avoid.
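The details of that model aren't public. As a simplified stand-in for the accounting step it performs, avoided emissions can be approximated by multiplying aggregated generation by the grid's carbon intensity; the intensity value and site figures below are illustrative assumptions, not SSE Airtricity's numbers:

```python
# Simplified stand-in for the avoided-emissions calculation described
# above: sum generation across sites, then multiply by an assumed grid
# carbon intensity. The real project uses a machine learning model;
# every figure here is illustrative.
GRID_INTENSITY_KG_PER_KWH = 0.35  # assumed average for the Irish grid

def avoided_emissions_kg(generation_kwh_by_site: dict[str, float]) -> float:
    total_kwh = sum(generation_kwh_by_site.values())
    return total_kwh * GRID_INTENSITY_KG_PER_KWH

sites = {"school-01": 410.0, "school-02": 385.5, "school-03": 402.3}  # kWh today
print(f"Avoided today: {avoided_emissions_kg(sites):.1f} kg CO2")
```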

The schools use the electricity generated by the solar panels, which reduces their utility bills; Microsoft receives the renewable energy credits for the generated electricity, which the company applies to its carbon neutrality commitments.

The panels are expected to produce enough energy annually to power the equivalent of 68 Irish homes and to abate more than 2.1 million kilograms (4.6 million pounds) of carbon dioxide emissions over the 15 years of the agreement, according to Kelly.
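The two emissions figures are the same quantity in different units; a quick check with 1 kg ≈ 2.20462 lb confirms they agree:

```python
kg = 2.1e6
print(kg * 2.20462)  # ≈ 4.63e6 lb, matching the "4.6 million pounds" figure
```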

"This is additional renewable energy that wouldn't have otherwise happened," he said. "Every little bit counts when it comes to meeting our sustainability targets and combatting climate change."

Every little bit counts

Victory Luke, a 16-year-old student at Collinstown Park Community College in Dublin, has lived by the "every little bit counts" mantra since she participated in a "Generation Green" sustainability workshop in 2019 organized by the Sustainable Energy Authority of Ireland, SSE Airtricity and Microsoft.

The workshop was part of an education program surrounding the installation of solar panels and batteries at her school along with a retrofit of the lighting system with LEDs. Digital screens show the school's energy use in real time, allowing students to see the impact of the energy efficiency upgrades.

Luke said the workshop captured her interest in climate change issues. She started reading more about sustainability and environmental conservation and agreed to share her newfound knowledge with the younger students at her school.

"I was going around and talking to them about energy efficiency, sharing tips and tricks like if you are going to boil a kettle, only boil as much water as you need, not too much," she explained.

That June, the Sustainable Energy Authority of Ireland invited her to give a speech at the Global Conference on Energy Efficiency in Dublin, which was organized by the International Energy Agency, an organization that works with governments and industry to shape sustainable energy policy.

"It kind of felt surreal because I honestly felt like I wasn't adequate enough to be speaking about these things," she said, noting that the conference attendees included government ministers, CEOs and energy experts from around the world.

At the time, she added, the global climate strike movement and its youth leaders were making international headlines, which made her advocacy at school feel even smaller. "Then I kind of realized that it is those smaller things that make the big difference," she said.

SSE Airtricity and Microsoft plan to replicate the educational program that inspired Luke and her classmates at dozens of the schools around Ireland that are participating in the project.

"When you've got solar at a school and you can physically point at the installation and a screen that monitors the power being generated, it brings sustainability into daily school life," Ahern said.

Proof of concept for policymakers

The project's education campaign extends to renewable energy policymakers, Kelly noted. He explained that renewable energy credits—a market incentive for corporations to support renewable energy projects—are currently unavailable for distributed power purchase agreements.

For this project, Microsoft will receive genuine renewable energy credits from a wind farm that SSE Airtricity also operates, he added.

"And," he said, "we are hoping to use this project as an example of what regulation should look like, to say, 'You need to award renewable energy credits to distributed generation because they would allow corporates to scale-up this type of project.'"

For her part, Luke supports steps by multinational corporations such as Microsoft to invest in renewable energy projects that address global climate change.

"It is a good thing to see," she said. "Once one person does something, other people are going to follow.

Originally posted HERE

Read more…
