


WEBINAR SERIES:
 
Fast and Fearless - The Future of IoT Software Development

SUMMARY

The IoT is transforming the software landscape. What was once a relatively straightforward embedded software stack has been revolutionized by the IoT: developers now juggle specialized workloads, security, machine learning, real-time connectivity, managing devices in the field - the list goes on.

How can our industry help developers prototype ‘fearlessly’, with tools and platforms that let them navigate the many varied IoT components? How can developers move to production quickly, capitalizing on innovation opportunities in emerging IoT markets?

This webinar series will take you through the fundamental steps, tools and opportunities for simplifying IoT development. Each webinar will be a panel discussion with industry experts who will share their experience and development tips on the topics below.

 

Part One of Four: The IoT Software Developer Experience

Date: Tuesday, May 11, 2021

Webinar Recording Available Here
 

Part Two of Four: AI and IoT Innovation

Date: Tuesday, June 29, 2021

Time: 8:00 am PDT/ 3:00 pm UTC

Duration: 60 minutes

Click Here to Register for Part Two
 

Part Three of Four: Making the Most of IoT Connectivity

Date: Tuesday, September 28, 2021

Time: 8:00 am PDT/ 3:00 pm UTC

Duration: 60 minutes

Click Here to Register for Part Three
 

Part Four of Four: IoT Security Solidified and Simplified

Date: Tuesday, November 16, 2021

Time: 8:00 am PDT/ 3:00 pm UTC

Duration: 60 minutes

Click Here to Register for Part Four
 

Happy Friday (or whatever day it is when you find yourself reading this). I’m currently bouncing off the walls in excitement because I’ve been invited to host a panel discussion as part of a webinar series — Fast and Fearless: The Future of IoT Software Development — being held under the august auspices of IoTCentral.io.


Panel members Joe Alderson (upper left), Pamela Cortez (upper right), Katherine Scott (lower left), and Ihor Dvoretskyi (bottom right)

At this event, the first of a 4-part series, we will be focusing on “The IoT Software Developer Experience.”

As we all know, the IoT is transforming the software landscape. What used to be a relatively straightforward embedded software stack has been revolutionized by the IoT, with developers now having to juggle specialized workloads, security, machine learning, real-time connectivity, managing devices that have been deployed into the field… the list goes on.

In this webinar — which will be held on Tuesday 11 May 2021 from 10:00 a.m. to 11:00 a.m. CDT — I will be joined by four industry luminaries to discuss the development challenges engineers are facing today, how the industry is helping to make IoT development easier, an overview of development processes (including cloud-based continuous integration (CI) workflows and low-code development), and what the future looks like for developers who are building for the IoT. 

The luminaries in question (and whom I will be questioning) are Joe Alderson (Director of Embedded Tools and User Experience at Arm), Pamela Cortez (IoT Developer Advocate and Sr. Program Manager at Microsoft Azure IoT), Katherine Scott (Developer Advocate at Open Robotics), and Ihor Dvoretskyi (Developer Advocate at Cloud Native Computing Foundation).

So, what say you? Dare I hope that we will have the pleasure of your company and that you will be able to join us to (a) tease your auditory input systems with our discussions and (b) join our question-and-answer free-for-all at the end?

Recording available:


By Bee Hayes-Thakore

The Android Ready SE Alliance, announced by Google on March 25th, paves the path for tamper-resistant, hardware-backed security services. Kigen is bringing the first secure iSIM OS, along with our GSMA-certified eSIM OS and personalization services, to support fast adoption of emerging security services across smartphones, tablets, WearOS, Android Auto Embedded and Android TV.

Google has been advancing its investment in how tamper-resistant secure hardware modules can protect not only Android and its functionality, but also third-party apps and sensitive transactions. The latest Android smartphone features enable tamper-resistant key storage for Android apps using StrongBox, an implementation of the hardware-backed Keystore that resides in a hardware security module.

To accelerate adoption of new Android use cases with stronger security, Google announced the formation of the Android Ready SE Alliance. Secure Element (SE) vendors are joining hands with Google to create a set of open-source, validated, and ready-to-use SE Applets. On March 25th, Google launched the General Availability (GA) version of StrongBox for SE.


Hardware-based security modules are becoming a mainstay of the mobile world. Juniper Research’s latest eSIM research, eSIMs: Sector Analysis, Emerging Opportunities & Market Forecasts 2021-2025, independently assessed eSIM adoption and demand in the consumer, industrial, and public sectors, and predicts that the consumer sector will account for 94% of global eSIM installations by 2025. It anticipates that established adoption of eSIM frameworks from consumer device vendors such as Google will accelerate the growth of eSIMs in consumer devices ahead of the industrial and public sectors.


Consumer sector will account for 94% of global eSIM installations by 2025

Juniper Research, 2021.

Expanding the secure architecture of trust to consumer wearables, smart TV and smart car

What’s more, a major development is that this now applies not just to smartphones and tablets, but also to WearOS, Android Auto Embedded and Android TV. These less traditional form factors have huge potential beyond being purely companion devices to smartphones or tablets. With the power, size and performance benefits offered by Kigen’s iSIM OS, OEMs and chipset vendors can consider the full scope of the vast Android ecosystem to deliver new services.

This means new secure services and innovations around:

🔐 Digital keys (car, home, office)

🛂 Mobile Driver’s License (mDL), National ID, ePassports

🏧 eMoney solutions (for example, Wallet)

How is Kigen supporting Google’s Android Ready SE Alliance?

The alliance was created to make discrete tamper-resistant, hardware-backed security the lowest common denominator for the Android ecosystem. A major goal of this alliance is to enable consistent, interoperable, and demonstrably secure applets across the Android ecosystem.

Kigen believes that enabling the broadest choice and interoperability is fundamental to the architecture of digital trust. Our secure, standards-compliant eSIM and iSIM OS, and secure personalization services are available to all chipset or device partners in the Android Ready SE Alliance to leverage the benefits of iSIM for customer-centric innovations for billions of Android users quickly.

Vincent Korstanje, CEO of Kigen

Kigen’s support for the Android Ready SE Alliance will allow our industry partners to easily leapfrog to the enhanced security and power efficiency benefits of iSIM technology or choose a seamless transition from embedded SIM so they can focus on their innovation.

We are delighted to partner with Kigen to further strengthen the security of Android through StrongBox via Secure Element (SE). We look forward to widespread adoption by our OEM partners and developers and the entire Android ecosystem.

Sudhi Herle, Director of Android Platform Security 

In the near term, the Google team is prioritizing and delivering the following Applets in conjunction with corresponding Android feature releases:

  • Mobile driver’s license and Identity Credentials
  • Digital car keys

Kigen brings the ability to bridge physical embedded security hardware to a fully integrated form factor. Our standards-compliant Kigen eSIM OS (version 2.2 eUICC OS) is available to support chipsets and device makers now. This announcement is the start of what will become a whole host of new and exciting trusted services offering a better experience for users on Android.

Kigen’s eSIM (eUICC) OS brings


The smallest operating system, allowing OEMs to select compact, cost-effective hardware to run it on.

Kigen OS offers the highest level of logical security when employed on any SIM form factor, including a secure enclave.

On top of Kigen OS, we have a broad portfolio of Java Card™ Applets to support your needs for the Android SE Ready Alliance.

Kigen’s Integrated SIM or iSIM (iUICC) OS furthers this advantage


Integrated at the heart of the device and securely personalized, iSIM brings significant size and battery life benefits to cellular IoT devices. iSIM can act as a root of trust for payment, identity, and critical infrastructure applications.

Kigen’s iSIM is flexible enough to support dual-SIM capability through a single profile or through remote SIM provisioning mechanisms, with the latter enabling out-of-the-box connectivity and secure, remote profile management.

For smartphones, set-top boxes, Android Auto applications, in-car displays, Chromecast or Google Assistant-enabled devices, iSIM can offer significant benefits for incorporating artificial intelligence at the edge.

Kigen’s secure personalization services to support fast adoption

SIM vendors have in-house capabilities for data generation, but the eSIM and iSIM value chains redistribute many roles and responsibilities among new stakeholders for the personalization of operator credentials, whether at different stages of production or over-the-air once devices are deployed.

Kigen can offer data generation as a service to vendors new to the ecosystem.

Partner with us to provide cellular chipset and module makers with the strongest security and performance for integrated SIM, helping to accelerate these new use cases.

Security considerations for eSIM and iSIM enabled secure connected services

Designing a secure connected product requires considerable thought and planning and there really is no ‘one-size-fits-all’ solution. How security should be implemented draws upon a multitude of factors, including:

  • What data is being stored or transmitted between the device and other connected apps?
  • Are there regulatory requirements for the device (e.g., PCI DSS, HIPAA, FDA)?
  • What are the hardware or design limitations that will affect security implementation?
  • Will the devices be manufactured in a site accredited by all of the necessary industry bodies?
  • What is the expected lifespan of the device?

End-to-end ecosystem and services thinking needs to be a design consideration from the very early stages, especially given the strain on battery consumption in devices such as wearables, smart watches and fitness devices, as well as portable devices that are part of connected consumer vehicles.

Originally posted here.


Before getting to the point of which language is best for IoT development, let us first look at what exactly IoT is and why it is important.

What is the Internet of Things (IoT)? 

The Internet of Things (IoT) epitomizes the trend of formerly autonomous devices becoming progressively connected to the Internet. IoT refers to various "things" that can communicate with each other to accomplish more than they could working on their own. Devices that incorporate a microchip and data communication capabilities are IoT devices.

The “Internet” refers to the ability of devices to communicate with each other. In many IoT systems, communication between things is not necessarily over the Internet itself. Things may use Internet protocols to talk to one another, or they may use proprietary protocols. Nonetheless, in most systems a connection to the Internet exists at some point. Common examples include devices communicating through one of the following:

  • A mobile phone
  • A gateway device
  • An embedded cellular connection

This is true even if the IoT devices themselves do not use a direct connection, but the user’s mobile device does. The Internet of Things produces data about the connected objects, analyzes it, and makes decisions; in straightforward terms, one could say that the Internet of Things is smarter than the web alone. Security cameras, sensors, vehicles, buildings, and software are some examples of the things that can exchange data with one another.

The top six programming languages suited for IoT, and likely to be the best choices, are:

1. C/C++

Java is not the only well-known language in IoT programming. C and C++ are among the languages most commonly used for IoT projects, for a variety of purposes. For example, developers may use the C language with IoT boards, or C++ in embedded IoT systems. Given that both languages have relatively low energy consumption and great adaptability, developers can use them to write effective code for embedded systems that interface with the underlying hardware.


  • As you might have guessed, many "things" would not exist without one of the most important programming languages: C. It is essentially a starting point and is the most popular language for embedded devices. C is used regularly even though other languages may rank much higher, and it is probably the oldest language still in wide use today. Despite the many languages to come along since, there are still plenty of projects that use C, and some use only C. There is a good reason for this: performance. Other languages add overhead at runtime, which means that either bytecode or the actual code you write is being interpreted when your program runs. C, on the other hand, compiles to machine code. This means that C programs are generally much faster than their equivalents in other languages.
  • C++ 
    C++ offers more processing control than C. This advantage makes C++ an ideal pre-processing catalyst for C: it reinforces the processing power of C, helping it support higher-level programming. Although C++ is an intricate language and developers can make mistakes with it, it remains a favorite among software engineers. The language shows its strength in Linux projects and in the embedded programming space, with its support for abstractions and object layers. C++ is an improved version of the C language commonly used for object-oriented programming. It was designed to run large-scale applications, a limitation in C. C++ is widely used in embedded systems, GUI-based applications, web browsers, and operating systems, with applications across industries such as healthcare and finance.

2. JAVA

Java is another of the widely used programming languages that are making IoT-controlled devices a reality. Java does require specific libraries to work with particular hardware; nevertheless, it is among the most preferred tools used by developers today for IoT development.

3. PYTHON

Most web applications use Python as their programming language, and Python has become very popular among IoT developers because it is easy to learn, adaptable and quick, and its power lets engineers work with data-heavy applications. It is a flexible language that can be adapted to run on almost any type of device, and it is relatively simple for any developer to pick up, which is why IoT application development companies often choose it. The syntax is simple and readable, which makes developing an IoT application straightforward, and Python is also well regarded for maintaining complex codebases. It is a good language for handling complex data, it is considered a stable language, and it works well for small to medium-sized projects. Its processing performance is moderate, and it is gaining popularity for IoT systems. A general-purpose language, Python works perfectly for backend web development, data analysis, artificial intelligence and scientific computing. Developers also use it to build productivity tools, games and desktop applications. It is one of the fastest-growing languages in embedded computing.
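
To give a flavour of why Python is so often reached for in IoT work, here is a minimal, hypothetical sketch of a data-collection script that reads a sensor and publishes the value over MQTT using the paho-mqtt package. The broker address, topic and the read_temperature() helper are illustrative placeholders, not part of any particular product.

```python
# Minimal sketch: read a sensor value and publish it to an MQTT broker.
# Assumes the paho-mqtt package is installed; the broker address, topic,
# and the read_temperature() stand-in are illustrative placeholders.
import json
import random
import time

import paho.mqtt.publish as publish


def read_temperature() -> float:
    """Placeholder for a real sensor driver (e.g. an I2C or ADC read)."""
    return 20.0 + random.random() * 5.0


while True:
    payload = json.dumps({"temperature_c": read_temperature(),
                          "ts": int(time.time())})
    publish.single("sensors/livingroom/temperature", payload,
                   hostname="broker.example.com", port=1883)  # hypothetical broker
    time.sleep(10)
```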

4. JAVASCRIPT

JavaScript is another of the widely used programming languages that are making IoT-controlled devices a reality. When combined with Node.js, JavaScript works wonderfully for building both public and private IoT networks, and microcontrollers such as Tessel and Espruino run it directly. This makes it a suitable option when working with low-power microcontrollers. JavaScript does not have a steep learning curve, so even beginners with no experience can start working on IoT development projects without spending years mastering it. It is already used in web development and HTML pages, which is a benefit because code written in this language can easily be adapted for an IoT application. It is also one of the recommended languages for an IoT application development company, since developers do not need to learn a new language to write code for sensors. JavaScript runs on Node.js, which is a good choice for gathering data and sending it to a central hub.

5. SWIFT

Swift is the programming language used for creating applications for macOS and Apple’s iOS devices. If you want to have iPhones and iPads communicate with your central home hub, Swift is the way to go. Swift is gaining popularity as a programming language that is overtaking its predecessor, Objective-C. To achieve its goal of becoming a leader in the IoT at home, Apple is building libraries that can handle much of the work, making it easier for developers to focus on the task at hand. A general-purpose, multi-paradigm, compiled programming language, Swift is built using a modern approach to safety, performance and software design patterns. Swift is an open-source language and a great choice for developing powerful HomeKit solutions; HomeKit is a framework by Apple Inc. for communicating with and controlling connected accessories in a user’s home. Swift will be available for CloudKit too, which keeps your applications connected and up to date across iOS and macOS.

Being a powerful, open-source, platform-compatible development option capable of running applications both on the device and in the cloud makes Swift an obvious choice for IoT products.

6. PHP

PHP is an open-source, interpreted, object-oriented, server-side scripting language. A PHP script can execute faster than scripts written in many other languages. It is cross-platform, which means that a PHP application developed on one operating system can easily be executed on another. Apart from that, PHP code can be embedded directly inside HTML tags and scripts. Developers are adding PHP to their stack of code mainly to juggle microservices on the server; it can turn even the humblest thing on the web into a full web server. With the help of PHP, applications can be developed using GPS data from IoT devices.

PHP is not a hard language to understand, though it is a bit more difficult than HTML and CSS. Being a logic-based language with commands and statements, PHP takes some time to master; it is recommended to learn PHP after HTML/CSS and JavaScript.

How IoT application benefits your business

  1. IoT applications bring more business opportunities by improving business modules and the quality of the service provided.
  2. They improve resource use by monitoring hardware through sensors and performing preventive maintenance to keep it continuously available.
  3. IoT applications can easily connect to cameras and sensors to monitor equipment and avoid physical hazards.
  4. They increase business productivity by providing training to employees and improving their work efficiency.
  5. By improving business modules, resource utilization, equipment monitoring and employee training, IoT applications also reduce your overall business costs.

Significance of Internet of Thing Technology 

IoT is viewed as the great frontier that can improve almost every activity in our lives. Most devices that have not previously been connected to the Internet can be networked and respond in the same way as smart devices. By 2020, the world was set to be IoT-ready. Here are the advantages that come with this technology.

Technology is now part of our lives; it is reinventing the enjoyment of every activity, and the Internet of Things plays a critical role in making that possible. In a world dominated by digital technology, the IoT assumes a prominent part in our lives. It has created an ecosystem that joins numerous systems to deliver smart performance in every task. The expansion of the IoT has driven a new evolution of phones, homes and other embedded applications that are connected to the Internet. They have seamlessly incorporated human communication in ways we never anticipated. These devices can derive meaningful information using commands based on data analytics, share the data on the cloud, and analyze it securely to provide the required output. Many organizations are changing rapidly, in multiple ways, because of the IoT.

The IoT is making numerous changes in our lives. It is connecting a large number of devices that were previously isolated, which is dramatically increasing the value of big data and streamlining many everyday tasks. Many organizations around the world, such as Eddie Stobart Transport and Logistics Company, Amazon, Dell, Aviva, John Deere Company and Walt Disney Land, are using Internet of Things technology to monitor various activities and advance their existing systems.

  • IoT promotes efficient resource utilization.
  • It reduces human effort in many aspects of life.
  • Embracing IoT decreases the cost of production and increases returns.
  • It enables analytical decisions to be made faster and more accurately.
  • It supports real-time marketing of products.
  • It provides a better customer experience.
  • It ensures high-quality data and secure processing.

Conclusion

Each of the programming languages listed above has its strengths and weaknesses, so organizations need to thoroughly examine the characteristics of each language and find out which of them matches the devices they will use. The availability of a development environment, tools and libraries may be another factor to consider. Engineers may want to pick an open-source language, as these provide strong community support and a wide range of tools.

Author Bio: 

Sidharth Jain, proud founder of Graffersid, a web and mobile app development company based in India. Graffersid has a team of designers and dedicated developers, and its Laravel developers are trusted by start-ups in YC, Harvard, Google Incubation and BluChilli. He understands how to solve problems using technology and contributes his knowledge to leading blogging sites.


By Sachin Kotasthane

In his book, 21 Lessons for the 21st Century, the historian Yuval Noah Harari highlights the complex challenges mankind will face on account of technological challenges intertwined with issues such as nationalism, religion, culture, and calamities. In the current industrial world hit by a worldwide pandemic, we see this complexity translate in technology, systems, organizations, and at the workplace.

While in my previous article, Humane IIoT, I discussed the people-centric strategies that enterprises need to adopt while onboarding industrial IoT initiatives in the workforce, in this article I will share thoughts on how new-age technologies such as AI, ML, big data, and of course industrial IoT, can be used for effective management of complex workforce problems in a factory, thereby changing the way people work and interact, especially in this COVID-stricken world.

Workforce related problems in production can be categorized into:

  1. Time complexity
  2. Effort complexity
  3. Behavioral complexity

Problems categorized in either of the above have a significant impact on the workforce, resulting in a detrimental effect on the outcome—of the product or the organization. The complexity of these problems can be attributed to the fact that the workforce solutions to such issues cannot be found using just engineering or technology fixes as there is no single root-cause, rather, a combination of factors and scenarios. Let us, therefore, explore a few and seek probable workforce solutions.

Figure 1: Workforce Challenges and Proposed Strategies in Production

  1. Addressing Time Complexity

    Any workforce-related issue that has a detrimental effect on the operational time, due to contributing factors from different factory systems and processes, can be classified as a time complex problem.

    Though classical paper-based schedules, lists, and punch sheets have largely been replaced with IT-systems such as MES, APS, and SRM, the increasing demands for flexibility in manufacturing operations and trends such as batch-size-one, warrant the need for new methodologies to solve these complex problems.

    • Worker attendance

      Anyone who has experienced, at close quarters, a typical day in the life of a factory supervisor, will be conversant with the anxiety that comes just before the start of a production shift. Not knowing who will report absent, until just before the shift starts, is one complex issue every line manager would want to get addressed. While planned absenteeism can be handled to some degree, it is the last-minute sick or emergency-pager text messages, or the transport delays, that make the planning of daily production complex.

      What if there were a solution to get the count that is almost close to the confirmed hands for the shift, an hour or half, at the least, in advance? It turns out that organizations are experimenting with a combination of GPS, RFID, and employee tracking that interacts with resource planning systems, trying to automate the shift planning activity.

      While some legal and privacy issues still need to be addressed, it would not be long before we see people being assigned to workplaces, even before they enter the factory floor.

      During this course of time, while making sure every line manager has accurate information about the confirmed hands for the shift, it is also equally important that the health and well-being of employees are monitored during this pandemic. The use of technologies such as radar, millimeter wave sensors, etc., would ensure live tracking of workers around the shop-floor and make sure that social distancing norms are well-observed.

    • Resource mapping

      While resource skill-mapping and certification are mostly HR function prerogatives, not having the right resource at the workstation during exigencies such as absenteeism or extra workload is a complex problem. Precious time is lost in locating such resources, or worst still, millions spent in overtime.

      What if there were a tool that analyzed the current workload for a resource with the identified skillset code(s) and gave an accurate estimate of the resource’s availability? This could further be used by shop managers to plan manpower for a shift, keeping them as lean as possible.

      Today, IT teams of OEMs are seen working with software vendors to build such analytical tools that consume data from disparate systems—such as production work orders from MES and swiping details from time systems—to create real-time job profiles. These results are fed to the HR systems to give managers the insights needed to make resource decisions within minutes.

  2. Addressing Effort Complexity

    Just as time complexities result in increased  production time, problems in this category result in an increase in effort by the workforce to complete the same quantity of work. As the effort required is proportionate to the fatigue and long-term well-being of the workforce, seeking workforce solutions to reduce effort would be appreciated. Complexity arises when organizations try to create a method out-of-madness from a variety of factors such as changing workforce profiles, production sequences, logistical and process constraints, and demand fluctuations.

    Thankfully, solutions for this category of problems can be found in new technologies that augment existing systems to get insights and predictions, the results of which can reduce the efforts, thereby channelizing it more productively. Add to this, the demand fluctuations in the current pandemic, having a real-time operational visibility, coupled with advanced analytics, will ensure meeting shift production targets.

    • Intelligent exoskeletons

      Exoskeletons, as we know, are powered bodysuits designed to safeguard and support the user in performing tasks, while increasing overall human efficiency to do the respective tasks. These are deployed in strain-inducing postures or to lift objects that would otherwise be tiring after a few repetitions. Exoskeletons are the new-age answer to reducing user fatigue in areas requiring human skill and dexterity, which otherwise would require a complex robot and cost a bomb.

      However, the complexity that mars exoskeleton users is making the same suit adaptable for a variety of postures, user body types, and jobs at the same workstation. It would help if the exoskeleton could sense the user, set the posture, and adapt itself to the next operation automatically.

      Taking a leaf out of Marvel’s Iron Man, who uses a suit that complements his posture that is controlled by JARVIS, manufacturers can now hope to create intelligent exoskeletons that are always connected to factory systems and user profiles. These suits will adapt and respond to assistive needs, without the need for any intervention, thereby freeing its user to work and focus completely on the main job at hand.

      Given the ongoing COVID situation, it would make the life of workers and the management safe if these suits are equipped with sensors and technologies such as radar/millimeter wave to help observe social distancing, body-temperature measuring, etc.

    • Highlighting likely deviations

      The world over, quality teams on factory floors work with checklists that the quality inspector verifies for every product that comes at the inspection station. While this repetitive task is best suited for robots, when humans execute such repetitive tasks, especially those that involve using visual, audio, touch, and olfactory senses, mistakes and misses are bound to occur. This results in costly reworks and recalls.

      Manufacturers have tried to address this complexity by carrying out rotation of manpower. But this, too, has met with limited success, given the available manpower and ever-increasing workloads.

      Fortunately, predictive quality integrated with feed-forwards techniques and some smart tracking with visuals can be used to highlight the area or zone on the product that is prone to quality slips based on data captured from previous operations. The inspector can then be guided to pay more attention to these areas in the checklist.

  3. Addressing Behavioral Complexity

    Problems of this category usually manifest as a quality issue, but the root cause can often be traced to the workforce behavior or profile. Traditionally, organizations have addressed such problems through experienced supervisors, who as people managers were expected to read these signs, anticipate and align the manpower.

    However, with constantly changing manpower and product variants, these are now complex new-age problems requiring new-age solutions.

    • Heat-mapping workload

      Time and motion studies at the workplace map the user movements around the machine with the time each activity takes for completion, matching the available cycle-time, either by work distribution or by increasing the manpower at that station. Time-consuming and cumbersome as it is, the complexity increases when workload balancing is to be done for teams working on a single product at the workstation. Movements of multiple resources during different sequences are difficult to track, and the different users cannot be expected to follow the same footsteps every time.

      Solving this issue needs a solution that will monitor human motion unobtrusively, link those to the product work content at the workstation, generate recommendations to balance the workload and even out the ‘congestion.’ New industrial applications such as short-range radar and visual feeds can be used to create heat maps of the workforce as they work on the product. This can be superimposed on the digital twin of the process to identify the zone where there is ‘congestion.’ This can be fed to the line-planning function to implement corrective measures such as work distribution or partial outsourcing of the operation.

    • Aging workforce (loss of tribal knowledge)

      With new technology coming to the shop-floor, skills of the current workforce get outdated quickly. Also, with any new hire comes the critical task of training and knowledge sharing from experienced hands. As organizations already face a shortage of manpower, releasing more hands to impart training to a larger workforce audience, possibly at different locations, becomes an even more daunting task.

      Fully realizing the difficulties and reluctance to document, organizations are increasingly adopting AR-based workforce trainings that map to relevant learning and memory needs. These AR solutions capture the minutest of the actions executed by the expert on the shop-floor and can be played back by the novice in-situ as a step-by-step guide. Such tools simplify the knowledge transfer process and also increase worker productivity while reducing costs.

      Further, in extraordinary situations such  as the one we face at present, technologies such as AR offer solutions for effective and personalized support to field personnel, without the need to fly in specialists at multiple sites. This helps keep them safe, and accessible, still.

Key takeaways and Actionable Insights

The shape of the future workforce will be the result of complex, changing, and competing forces. Technology, globalization, demographics, social values, and the changing personal expectations of the workforce will continue to transform and disrupt the way businesses operate, increasing the complexity and radically changing the where and when of the future workforce, and how work is done. While the need to constantly reskill and upskill the workforce will be humongous, using new-age techniques and technologies to enhance the effectiveness and efficiency of the existing workforce will come into the spotlight.


Figure 2: The Future IIoT Workforce

Organizations will increasingly be required to:

  1. Deploy data farming to dive deep and extract vast amounts of information and process insights embedded in production systems. Tapping into large reservoirs of ‘tribal knowledge’ and digitizing it for ingestion to data lakes is another task that organizations will have to consider.
  2. Augment existing operations systems such as SCADA, DCS, MES, CMMS with new technology digital platforms, AI, AR/VR, big data, and machine learning to underpin and grow the world of work. While there will be no dearth of resources in one or more of the new technologies, organizations will need to ‘acqui-hire’ talent and intellectual property using a specialist, to integrate with existing systems and gain meaningful actionable insights.
  3. Address privacy and data security concerns of the workforce, through the smart use of technologies such as radar and video feeds.

Nonetheless, digital enablement will need to be optimally used to tackle the new normal that the COVID pandemic has set forth in manufacturing—fluctuating demands, modular and flexible assembly lines, reduced workforce, etc.

Originally posted here.


In my last post, I explored how OTA updates are typically performed using Amazon Web Services and FreeRTOS. OTA updates are critically important to developers with connected devices. In today’s post, we are going to explore several best practices developers should keep in mind when implementing their OTA solution. Most of these will be generic, although I will point out a few AWS-specific best practices.

Best Practice #1 – Name your S3 bucket with afr-ota

There is a little trick with creating S3 buckets that I was completely oblivious to for a long time. Thankfully, when I checked in with some colleagues about it, they also had not been aware of it, so I’m not sure how long this has been supported, but it can save an embedded developer from having to wade through too many AWS policies and simplify the process a little bit.

Anyone who has attempted to create an OTA update with AWS and FreeRTOS knows that you have to set up several permissions to allow an OTA Update Job to access the S3 bucket. Well, if you name your S3 bucket so that it begins with “afr-ota”, then the S3 bucket will automatically have the AWS managed policy AmazonFreeRTOSOTAUpdate attached to it. (See Create an OTA Update service role for more details). It’s a small help, but a good best practice worth knowing.
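
As a quick illustration, creating and preparing such a bucket from a script might look like the sketch below, using boto3. The bucket name and region are placeholders; the automatic attachment of the AmazonFreeRTOSOTAUpdate policy is AWS behaviour triggered by the "afr-ota" prefix, not something the script itself does, and versioning is enabled because OTA jobs generally reference a specific object version.

```python
# Sketch: create an S3 bucket whose name begins with "afr-ota" so that the
# AmazonFreeRTOSOTAUpdate managed policy gets picked up as described above.
# Assumes boto3 is installed and AWS credentials are configured; the bucket
# name and region are placeholders.
import boto3

BUCKET = "afr-ota-acme-device-firmware"   # must begin with "afr-ota"

s3 = boto3.client("s3", region_name="us-west-2")
s3.create_bucket(
    Bucket=BUCKET,
    CreateBucketConfiguration={"LocationConstraint": "us-west-2"},
)

# Enable versioning so each uploaded firmware image gets an immutable
# object version that an OTA update job can reference.
s3.put_bucket_versioning(
    Bucket=BUCKET,
    VersioningConfiguration={"Status": "Enabled"},
)
```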

Best Practice #2 – Encrypt your firmware updates

Embedded software must be one of the most expensive things to develop that mankind has ever invented! It’s time consuming to create and test and can consume a large percentage of the development budget. Software, though, also drives most features in a product and can dramatically differentiate a product. That software is intellectual property that is worth protecting through encryption.

Encrypting a firmware image provides several benefits. First, it can convert your firmware binary into a form that seems random or meaningless. This is desirable because a developer shouldn’t want their binary image to be easily studied, investigated or reverse engineered. This makes it harder for someone to steal intellectual property and more difficult to understand for someone who may be interested in attacking the system. Second, encrypting the image means that the sender must have a key or credential of some sort that matches the device that will decrypt the image. This can be looked at as a simple way of helping to authenticate the source, although more should be done than just encryption to fully authenticate and verify integrity, such as signing the image.
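
To make the idea concrete, here is an illustrative (not AWS- or device-specific) sketch that encrypts a firmware image with AES-256-GCM using Python's cryptography package. Key generation, storage and distribution, and image signing are deliberately out of scope; the file paths are placeholders.

```python
# Illustrative sketch: encrypt a firmware image with AES-256-GCM using the
# "cryptography" package. Key management and image signing are separate
# concerns and are not shown; paths and the key source are placeholders.
import os

from cryptography.hazmat.primitives.ciphers.aead import AESGCM


def encrypt_firmware(image_path: str, key: bytes) -> bytes:
    """Return nonce || ciphertext (the GCM tag is appended by encrypt())."""
    plaintext = open(image_path, "rb").read()
    nonce = os.urandom(12)                          # 96-bit nonce, unique per image
    ciphertext = AESGCM(key).encrypt(nonce, plaintext, None)
    return nonce + ciphertext


if __name__ == "__main__":
    key = AESGCM.generate_key(bit_length=256)       # in practice, from a KMS/HSM
    blob = encrypt_firmware("build/firmware_v1_8.bin", key)
    with open("build/firmware_v1_8.enc", "wb") as f:
        f.write(blob)
```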

Best Practice #3 – Do not support firmware rollbacks

There is often a debate as to whether firmware rollbacks should be supported in a system or not. My recommendation for a best practice is that firmware rollbacks be disabled. The argument for rollbacks is often that if something goes wrong with a firmware update then the user can rollback to an older version that was working. This seems like a good idea at first, but it can be a vulnerability source in a system. For example, let’s say that version 1.7 had a bug in the system that allowed remote attackers to access the system. A new firmware version, 1.8, fixes this flaw. A customer updates their firmware to version 1.8, but an attacker knows that if they can force the system back to 1.7, they can own the system. Firmware rollbacks seem like a convenient and good idea, in fact I’m sure in the past I used to recommend them as a best practice. However, in today’s connected world where we perform OTA updates, firmware rollbacks are a vulnerability so disable them to protect your users.
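
The policy itself is easy to express in code. The sketch below shows a hypothetical anti-rollback check: an update is only accepted if its version is strictly newer than the version recorded on the device. The function names and the storage of the installed-version counter are placeholders; on a real device this check typically lives in the secure bootloader, with the counter kept in fuses or protected flash.

```python
# Hypothetical anti-rollback check: reject any image that is not strictly
# newer than the version currently recorded in secure, monotonic storage.
def read_installed_version() -> tuple:
    """Placeholder: read (major, minor) from secure storage."""
    return (1, 8)


def is_update_allowed(candidate_version: tuple) -> bool:
    return candidate_version > read_installed_version()


assert is_update_allowed((1, 9)) is True    # upgrade accepted
assert is_update_allowed((1, 7)) is False   # rollback to vulnerable 1.7 rejected
assert is_update_allowed((1, 8)) is False   # re-installing the same version rejected
```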

Best Practice #4 – Secure your bootloader

Updating firmware Over-the-Air requires several components to ensure that it is done securely and successfully. Often the focus is on getting the new image to the device and getting it decrypted. However, just like in traditional firmware updates, the bootloader is still a critical piece to the update process and in OTA updates, the bootloader can’t just be your traditional flavor but must be secure.

There are quite a few methods that can be used with the onboard bootloader, but no matter the method used, the bootloader must be secure. Secure bootloaders need to be capable of verifying the authenticity and integrity of the firmware before it is ever loaded. Some systems will use the application code to verify and install the firmware into a new application slot while others fully rely on the bootloader. In either case, the secure bootloader needs to be able to verify the authenticity and integrity of the firmware prior to accepting the new firmware image.

It’s also a good idea to ensure that the bootloader is built into a chain of trust and cannot be easily modified or updated. The secure bootloader is a critical component in a chain-of-trust that is necessary to keep a system secure.

Best Practice #5 – Build a Chain-of-Trust

A chain-of-trust is a sequence of events that occur while booting the device that ensures each link in the chain is trusted software. For example, I’ve been working with the Cypress PSoC 64 secure MCUs recently and these parts come shipped from the factory with a hardware-based root-of-trust to authenticate that the MCU came from a secure source. That Root-of-Trust (RoT) is then transferred to a developer, who programs a secure bootloader and security policies onto the device. During the boot sequence, the RoT verifies the integrity and authenticity of the bootloader, which then verifies the integrity and authenticity of any second-stage bootloader or software, which in turn verifies the authenticity and integrity of the application. The application then verifies the authenticity and integrity of its data, keys, operational parameters and so on.

This sequence creates a Chain-of-Trust which is needed and used by firmware OTA updates. When the new firmware request is made, the application must decrypt the image and verify that the authenticity and integrity of the new firmware are intact. That new firmware can then only be used if the Chain-of-Trust can successfully make its way through each link in the chain. The bottom line: a developer and the end user know that when the system boots successfully, the new firmware is legitimate.
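
To illustrate the shape of such a chain, here is a purely conceptual sketch using Ed25519 signatures from Python's cryptography package. On a real device this logic lives in ROM and bootloader code, and the trusted public keys are provisioned into hardware; the file names and key handling below are illustrative assumptions, not any vendor's implementation.

```python
# Conceptual sketch of a chain of trust: each stage verifies the signature of
# the next image against a key it already trusts before handing over control.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey


def verify_stage(trusted_key: Ed25519PublicKey, image: bytes, signature: bytes) -> bool:
    try:
        trusted_key.verify(signature, image)     # raises if tampered or forged
        return True
    except InvalidSignature:
        return False


def boot(root_public_key: Ed25519PublicKey) -> None:
    # RoT verifies the bootloader, the bootloader verifies the application,
    # and the application verifies the newly downloaded OTA image.
    for name in ("bootloader.bin", "application.bin", "ota_update.bin"):
        image = open(name, "rb").read()
        signature = open(name + ".sig", "rb").read()
        if not verify_stage(root_public_key, image, signature):
            raise SystemExit(f"Chain of trust broken at {name}; refusing to boot")
        # A real chain would now switch to the public key embedded in the
        # stage that was just verified before checking the next link.
```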

Conclusions

OTA updates are a critical infrastructure component of nearly every embedded IoT device. Sure, there are systems out there that once deployed will never update; however, those are probably a small percentage of systems. OTA updates are the go-to mechanism to update firmware in the field. We’ve examined several best practices that developers and companies should consider when they start to design their connected systems. In fact, the bonus best practice for today is that if you are building a connected device, make sure you explore your OTA update solution sooner rather than later. Otherwise, you may find that building the Chain-of-Trust necessary in today’s deployments will be far more expensive and time consuming to implement.

Originally posted here.


In this blog, we’ll discuss how users of Edge Impulse and Nordic can actuate and stream classification results over BLE using Nordic’s UART Service (NUS). This makes it easy to integrate embedded machine learning into your next generation IoT applications. Seamless integration with nRF Cloud is also possible since nRF Cloud has native support for a BLE terminal. 

We’ve extended the Edge Impulse example functionality already available for the nRF52840 DK and nRF5340 DK by adding the abilities to actuate and stream classification outputs. The extended example is available for download on github, and offers a uniform experience on both hardware platforms. 

Using nRF Toolbox 

After following the instructions in the example’s readme, download the nRF Toolbox mobile application (available on both iOS and Android) and connect to the nRF52840 DK or the nRF5340 DK that will be discovered as “Edge Impulse”. Once connected, set up the interface as follows so that you can get information about the device, available sensors, and start/stop the inferencing process. Save the preset configuration so that you can load it again for future use. Fill out the text of the various commands to use the same convention as what is used for the Edge Impulse AT command set. For example, sending AT+RUNIMPULSE starts the inferencing process on the device. 

Figure 1. Setting up the Edge Impulse AT Command set

Once the appropriate AT command set mapping to an icon has been done, hit the appropriate icon. Hitting the ‘play’ button causes the device to start acquiring data and perform inference every couple of seconds. The results can be viewed in the “Logs” menu as shown below.

Figure 2. Classification Output over BLE in the Logs View
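
If you would rather script the same interaction from a laptop instead of a phone, a sketch along the lines below (using the Python bleak library and the standard Nordic UART Service UUIDs) sends AT+RUNIMPULSE and prints whatever the device streams back. The advertised name “Edge Impulse” matches this example; the 20-second listen window is an arbitrary choice for illustration.

```python
# Sketch: drive the AT command interface over BLE from a PC using bleak and
# the Nordic UART Service (NUS). Assumes the board is advertising as
# "Edge Impulse" as in the example above.
import asyncio

from bleak import BleakClient, BleakScanner

NUS_RX = "6e400002-b5a3-f393-e0a9-e50e24dcca9e"  # write AT commands here
NUS_TX = "6e400003-b5a3-f393-e0a9-e50e24dcca9e"  # notifications carry the results


def on_notify(_sender, data: bytearray) -> None:
    print(data.decode(errors="replace"), end="")


async def main() -> None:
    devices = await BleakScanner.discover()
    target = next(d for d in devices if d.name == "Edge Impulse")
    async with BleakClient(target) as client:
        await client.start_notify(NUS_TX, on_notify)
        await client.write_gatt_char(NUS_RX, b"AT+RUNIMPULSE\r\n")
        await asyncio.sleep(20)                  # let classifications stream for a while
        await client.stop_notify(NUS_TX)


asyncio.run(main())
```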

Using nRF Cloud

Using the nRF Connect for Cloud mobile app for iOS and Android, you can turn your smartphone into a BLE gateway. This allows users to easily connect their BLE NUS devices running Edge Impulse to the nRF Cloud as an easy way to send the inferencing conclusions to the cloud. It’s as easy as setting up the BLE gateway through the app, connecting to the “Edge Impulse” device and watching the same results being displayed in the “Terminal over BLE” window shown below!

Figure 3. Classification Output Shown in nRF Cloud

Summary

Edge Impulse is supercharging IoT with embedded machine learning and we’ve discussed a couple of ways you can easily send conclusions to either the smartphone or to the cloud by leveraging the Nordic UART Service. We look forward to seeing how you’ll leverage Edge Impulse, Nordic and BLE to create your next gen IoT application.  

 

Article originally written for the Edge Impulse blog by Zin Thein Kyaw, Senior User Success Engineer at Edge Impulse.


By AKHILESHSINGH SAITHWAR

LLDP, the Link Layer Discovery Protocol, is a link-layer protocol used by network devices to identify their neighbors and their capabilities.

If you want to integrate the LLDP protocol into your Linux/embedded system, there are mainly two open-source implementations: the first is lldpd and the other is openlldp. When I needed to integrate LLDP into my network device, I studied both open-source codebases. I am writing this article hoping that it will be useful for others who also want to use LLDP open-source code in their systems or network devices.

Below are the key points which should be considered when selecting the LLDP open-source code.

1. License

The license is an important point to consider when you want to integrate open-source code into your application. lldpd is published under the ISC License, whereas openlldp is published under the GPL-2.0 License. The difference between the two licenses is that the ISC License is more permissive than the GPL-2.0 License.

If you use GPL-2.0 licensed open-source code in your application, you need to publish your changes back to the community. In the case of the ISC License, you are not required to publish your changes back to the community. Please note that the scope of this article does not cover the full licensing requirements; please understand the license before using it in your project.

2. Active Community Support

When picking up open-source code, we should also make sure that development of that code is active. Development and support in lldpd are more active than in openlldp. At the time of writing, there are a total of 8 tags in openlldp and 54 tags in lldpd, which indicates how quickly bugs are fixed and new versions are released in lldpd.

3. Supported Protocols

There are other protocols like LLDP for discovering network devices, for example EDP and CDP. When selecting an LLDP open-source implementation, one should also make sure that it supports these other protocols as well, so that network devices speaking other protocols are also discovered. Though I have not verified the protocols listed in the documentation, from the documents I can say that lldpd supports EDP, CDP, FDP and SONMP, and openlldp supports EDP, CDP, EVB, MED, DCBX and VDP.

4. Custom Interface Support

In most cases LLDP runs on a standard Ethernet interface, but in some specific cases you may need to run LLDP on non-Ethernet interfaces, such as serial or I2C. In that case, it is very helpful if the open-source code supports other interfaces. Although neither implementation supports custom interfaces out of the box, lldpd at least has documentation on how to add custom interfaces; adding custom interfaces to openlldp may require more time to understand and implement than with lldpd.

5. Multiple Neighbour Support

This is one of the most important features when selecting an LLDP open-source implementation. Multiple neighbour support is needed if you expect to capture more than one LLDP-enabled neighbour (network device) on the same interface. In my understanding this is a very basic feature that should be supported in all LLDP code, but I was surprised to learn that it is not available in openlldp. Multiple neighbour support is available in lldpd.

6. Daemon Configuration Tool

A daemon configuration tool helps to configure LLDP parameters, get status, and enable/disable interfaces. Both lldpd and openlldp have their own configuration tools: lldpd has lldpcli/lldpctl and openlldp has lldptool.
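
As a quick illustration of how the lldpd tooling can be scripted, the sketch below asks lldpcli for JSON output and pretty-prints the neighbour table. It assumes lldpd is running and lldpcli is on the PATH; the exact JSON layout varies a little between lldpd versions, so treat the parsing as illustrative.

```python
# Sketch: query lldpd's neighbour table from a script via lldpcli's JSON output.
import json
import subprocess

raw = subprocess.run(
    ["lldpcli", "-f", "json", "show", "neighbors"],
    check=True, capture_output=True, text=True,
).stdout

neighbors = json.loads(raw).get("lldp", {})
print(json.dumps(neighbors, indent=2))   # chassis names, port IDs, capabilities, ...
```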

7. LLDP Statistics

Both lldpd and openlldp support display of interface and neighbour statistics through their configuration tools. The statistics include Total Frame Outs, Total Error Frame Outs, Total Age Out Frames, Total Discarded Frames, Total Frame In, Total Frame In Errors, Total Discarded Error Frames, Total TLVs in Errors, Total TLV’s Accepted, etc.

8. Custom TLV Support

Both lldpd and openlldp support reception and transmission of custom TLVs. The custom TLVs can be set or read using their configuration tools.

9. SNMP Agent

Both lldpd and openlldp support an SNMP agent.

Comparison table

Based on the above points, the table below is populated for comparison purposes. One can decide whether lldpd or openlldp should be used in their system or network devices.

[Comparison table: lldpd vs. openlldp]

Conclusion

In my opinion, it is better to choose lldpd over openlldp, considering the license, features and community support. The licensing of lldpd is more permissive than that of openlldp, there are more features in lldpd compared to openlldp, and the community support for lldpd is more active. So unless you have direction from your client to use a specific open-source LLDP package, go for lldpd. eInfochips has in-depth expertise in firmware design for embedded systems development. We offer end-to-end support for firmware development, from system requirements to testing for quality and environment.

Originally posted here.


by Evelyn Münster

IoT systems are complex data products: they consist of digital and physical components, networks, communications, processes, data, and artificial intelligence (AI). User interfaces (UIs) are meant to make this level of complexity understandable for the user. However, building a data product that can explain data and models to users in a way that they can understand is an unexpectedly difficult challenge. That is because data products are not your run-of-the-mill software product.

In fact, 85% of all big data and AI projects fail. Why? I can say from experience that it is not the technology but rather the design that is to blame.

So how do you create a valuable data product? The answer lies in a new type of user experience (UX) design. With data products, UX designers are confronted with several additional layers that are not usually found in conventional software products: it’s a relatively complex system, unfamiliar to most users, and comprises data and data visualization as well as AI in some cases. Last but not least, it presents an entirely different set of user problems and tasks than customary software products.

Let’s take things one step at a time. My many years in data product design have taught me that it is possible to create great data products, as long as you keep a few things in mind before you begin.

As a prelude to the UX design process, make sure you and your team answer the following nine questions:

1. Which problem does my product solve for the user?

The user must be able to understand the purpose of your data product in a matter of minutes. It can help to assign the product to one of the five categories of tasks that data products typically perform: actionable insights, performance feedback loops, root cause analysis, knowledge creation, and trust building.

2. What does the system look like?

Do not expect users to already know how to interpret the data properly. They need to be able to construct a fairly accurate mental model of the system behind the data.

3. What is the level of data quality?

The UI must reflect the quality of the data. A good UI leads the user to trust the product.

4. What is the user’s proficiency level in graphicacy and numeracy?

Conduct user testing to make sure that your audience will be able to read and interpret the data and visuals correctly.

5. What level of detail do I need?

Aggregated data is often too abstract to explain, or to build user trust. A good way to counter this challenge is to use details that explain things. Then again, too much detail can also be overwhelming.

6. Are we dealing with probabilities?

Probabilities are tricky and require explanations. The common practice of cutting out all uncertainties makes the UI deceptively simple – and dangerous.

7. Do we have a data visualization expert on the design team?

UX design applied to data visualization requires a special skillset that covers the entire process, from data analysis to data storytelling. It is always a good idea to have an expert on the team or, alternatively, have someone to reach out to when required.

8. How do we get user feedback?

As soon as the first prototype is ready, you should collect feedback through user testing. The prototype should present content in the most realistic and consistent way possible, especially when it comes to data and figures.

9. Can the user interface boost our marketing and sales?

If the user interface clearly communicates what the data product does and what the process is like, then it could take on a new function: sell your products.

To sum up: we must acknowledge that data products are an unexplored territory. They are not just another software product or dashboard, which is why, in order to create a valuable data product, we will need a specific strategy, new workflows, and a particular set of skills: Data UX Design.

Originally posted HERE 

Read more…

By Sanjay Tripathi, Lauren Luellwitz, and Kevin Egge

The intelligent, interconnected, and autonomous systems of Industry 4.0 generate petabytes of data. When combined with artificial intelligence tools that provide actionable insight, this data has the potential to improve every function within a plant: operations, engineering, quality, reliability, and maintenance.

The maintenance function, while crucial to the smooth functioning of a plant, has until recently not seen much innovation. Many among us have experienced equipment downtime, process drifts, massive hits to yield, and declines in product reliability because of maintenance performed poorly or late. Yet Enterprise Asset Management (EAM) systems – ERP systems that help maintain assets – remained systems of record that typically generated work orders and recorded the maintenance performed. Even as production processes became mind-numbingly complex, EAM systems remained much the same.

IBM Maximo 8.0, or Maximo Application Suite, is one example of a system that combines artificial intelligence (AI), big data and cloud computing technologies with domain expertise from operational technology (OT) to simplify maintenance and deliver production resilience.

Maximo 8.0 leverages AI to visually inspect gas pipelines, rail tracks, bridges and tunnels; AI guides technicians as they conduct complex repairs; and it provides maintenance supervisors with real-time visibility into the health and safety of their technicians. Domain expertise is incorporated in the form of data to train AI models. These capabilities improve the ability to avoid unscheduled downtime, improve first-time-fix rate, and reduce safety incidents.

Maintenance records residing in Maximo are combined with real-time operational data from production assets and their associated asset model to better predict when maintenance is required. In this example, asset models embody domain expertise. These models characterize how a production asset such as a power generator or catalytic converter should perform in the context of where it is installed in the process.

The Maximo application itself is encapsulated (containerized) using Red Hat’s OpenShift technology. Containerization allows the application to be easily deployed on-premises, on private clouds or hybrid clouds. This flexibility in deployment benefits IT organizations that need to continually evolve their infrastructure, which is almost every organization.

Maximo 8.0 is available as a suite that includes both core and advanced capabilities. A single software entitlement provides access to all capabilities. The entitlement provides access to the core EAM functionality of work and resource scheduling, asset management, industry-specific customizations, EHS guidelines, and mobile functionality. And it provides access to advanced functionality such as Maximo Monitor, which automatically detects anomalies in how an asset may be performing; Maximo Health, which measures equipment health; Maximo Predict, which, as the name suggests, predicts when maintenance is required; and Maximo Assist, which helps technicians conduct repairs.

Originally posted here.

Read more…

When analyzing whether a machine learning model works well, we rely on accuracy numbers, F1 scores and confusion matrices - but they don't give any insight into why a machine learning model misclassifies data. Is it because data looks very similar, is it because data is mislabeled, or is it because preprocessing parameters are chosen incorrectly? To answer these questions we have now added the feature explorer to all neural network blocks in Edge Impulse. The feature explorer shows your complete dataset in one 3D graph, and shows you whether each sample was classified correctly or incorrectly.

8481148295?profile=RESIZE_710x

Showing exactly which data samples are misclassified in the feature explorer.

If you haven't used the feature explorer before: it's one of the most interesting options in Edge Impulse. The axes are the output of the signal processing step (we rely heavily on signal processing to extract interesting features beforehand, making for smaller and more reliable ML models), and they let you quickly validate whether your data separates nicely. In addition, the feature explorer is integrated into Live classification, where you can compare incoming test data directly with your training set.

8481149063?profile=RESIZE_710x

Redesign of the neural network pages.

This work has been part of a redesign of our neural network pages. These pages are now more compact, giving you full insight into both your neural network architecture and the training performance - and giving you an easy way to compare models with different optimization options (like comparing an int8 quantized model vs. an unoptimized model) and showing accurate on-device performance metrics for a wide variety of targets.

Next steps

Currently the feature explorer shows the performance of your training set, but over the next weeks we'll also integrate the feature explorer and the new confusion matrix into the Model testing page in Edge Impulse. This will give you direct insight into the performance of your test set in the same way, so keep an eye out for that!

Want to try the new feature explorer out? Just head to any neural network block in your Edge Impulse project and retrain. Don't have a project yet?! Follow one of our tutorials on building embedded machine learning models on real sensor data; it takes 30 minutes and you can even use your phone as a sensor.

Article originally written by Jan Jongboom, the CTO and co-founder of Edge Impulse. He loves pretty pictures, colors, and insight in his ML models.

Read more…

When I think about the things that held the planet together in 2020, it was digital experiences delivered over wireless connectivity that made remote things local.

While heroes like doctors, nurses, first responders, teachers, and other essential personnel bore the brunt of the COVID-19 response, billions of people around the world found themselves cut off from society. In order to keep people safe, we were physically isolated from each other. Far beyond the six feet of social distancing, most of humanity weathered the storm from their homes.

And then little by little, old things we took for granted, combined with new things many had never heard of, pulled the world together. Let’s take a look at the technologies and trends that made the biggest impact in 2020 and where they’re headed in 2021:

The Internet

The global Internet infrastructure from which everything else is built is an undeniable hero of the pandemic. This highly-distributed network designed to withstand a nuclear attack performed admirably as usage by people, machines, critical infrastructure, hospitals, and businesses skyrocketed. Like the air we breathe, this primary facilitator of connected, digital experiences is indispensable to our modern society. Unfortunately, the Internet is also home to a growing cyberwar and security will be the biggest concern as we move into 2021 and beyond. It goes without saying that the Internet is one of the world’s most critical utilities along with water, electricity, and the farm-to-table supply chain of food.

Wireless Connectivity

People are mobile and they stay connected through their smartphones, tablets, in cars and airplanes, on laptops, and other devices. Just like the Internet, the cellular infrastructure has remained exceptionally resilient to enable communications and digital experiences delivered via native apps and the web. Indoor wireless connectivity continues to be dominated by WiFi at home and all those empty offices. Moving into 2021, the continued rollout of 5G around the world will give cellular endpoints dramatic increases in data capacity and WiFi-like speeds. Additionally, private 5G networks will challenge WiFi as a formidable indoor option, but WiFi 6E with increased capacity and speed won’t give up without a fight. All of these developments are good for consumers who need to stay connected from anywhere like never before.

Web Conferencing

With many people stuck at home in 2020, web conferencing technology took the place of traveling to other locations to meet people or receive education. This technology isn’t new and includes familiar players like GoToMeeting, Skype, WebEx, Google Hangouts/Meet, BlueJeans, FaceTime, and others. Before COVID, these platforms enjoyed success, but most people preferred to fly on airplanes to meet customers and attend conferences while students hopped on the bus to go to school. In 2020, “necessity is the mother of invention” took hold and the use of Zoom and Teams skyrocketed as airplanes sat on the ground while business offices and schools remained empty. These two platforms further increased their stickiness by increasing the number of visible people and adding features like breakout rooms to meet the demands of businesses, virtual conference organizers, and school teachers. Despite the rollout of the vaccine, COVID won’t be extinguished overnight and these platforms will remain strong through the first half of 2021 as organizations rethink where and when people work and learn. There’s way too many players in this space so look for some consolidation.

E-Commerce

“Stay at home” orders and closed businesses gave e-commerce platforms a dramatic boost in 2020 as they took the place of shopping at stores or going to malls. Amazon soared to even higher heights, Walmart upped their game, Etsy brought the artsy, and thousands of Shopify sites delivered the goods. Speaking of delivery, the empty city streets became home to fleets of FedEx, Amazon, UPS, and DHL trucks bringing packages to your front doorstep. Many retail employees traded in working at customer-facing stores for working in distribution centers, as long as they could outperform robots. Even though people are looking forward to hanging out at malls in 2021, the e-commerce, distribution center, delivery truck trinity is here to stay. This ball was already in motion and got a rocket boost from COVID. This market will stay hot in the first half of 2021 and then cool a bit in the second half.

Ghost Kitchens

The COVID pandemic really took a toll on restaurants in 2020, with many of them going out of business permanently. Those that survived had to pivot to digital and other ways of doing business. High-end steakhouses started making burgers on grills in the parking lot, while takeout pizzerias discovered they finally had the best business model. Having a drive-thru lane was definitely one of the keys to success in a world without waiters, busboys, and hosts. “Front of house” was shut down, but the “back of house” still had a pulse. Adding mobile web and native apps that allowed customers to easily order from operating “ghost kitchens” and pay with credit cards or Apple/Google/Samsung Pay enabled many restaurants to survive. A combination of curbside pickup and delivery from the likes of DoorDash, Uber Eats, Postmates, Instacart and Grubhub made this business model work. A surge in digital marketing also took place where many restaurants learned the importance of maintaining a relationship with their loyal customers via connected mobile devices. For the most part, 2021 has restaurateurs hoping for 100% in-person dining, but a new business model that looks a lot like catering + digital + physical delivery is something that has legs.

The Internet of Things

At its very essence, IoT is all about remotely knowing the state of a device or environmental system along with being able to remotely control some of those machines. COVID forced people to work, learn, and meet remotely and this same trend applied to the industrial world. The need to remotely operate industrial equipment or an entire “lights out” factory became an urgent imperative in order to keep workers safe. This is yet another case where the pandemic dramatically accelerated digital transformation. Connecting everything via APIs, modeling entities as digital twins, and having software bots bring everything to life with analytics has become an ROI game-changer for companies trying to survive in a free-falling economy. Despite massive employee layoffs and furloughs, jobs and tasks still have to be accomplished, and business leaders will look to IoT-fueled automation to keep their companies running and drive economic gains in 2021.

Streaming Entertainment

Closed movie theaters, football stadiums, bowling alleys, and other sources of entertainment left most people sitting at home watching TV in 2020. This turned into a dream come true for streaming entertainment companies like Netflix, Apple TV+, Disney+, HBO Max, Hulu, Amazon Prime Video, Youtube TV, and others. That said, Quibi and Facebook Watch didn’t make it. The idea of binge-watching shows during the weekend turned into binge-watching every season of every show almost every day. Delivering all these streams over the Internet via apps has made it easy to get hooked. Multiplayer video games fall in this category as well and represent an even larger market than the film industry. Gamers socially distanced as they played each other from their locked-down homes. The rise of cloud gaming combined with the rollout of low-latency 5G and Edge computing will give gamers true mobility in 2021. On the other hand, the video streaming market has too many players and looks ripe for consolidation in 2021 as people escape the living room once the vaccine is broadly deployed.

Healthcare

With doctors and nurses working around the clock as hospitals and clinics were stretched to the limit, it became increasingly difficult for non-COVID patients to receive the healthcare they needed. This unfortunate situation gave tele-medicine the shot in the arm (no pun intended) it needed. The combination of healthcare professionals delivering healthcare digitally over widespread connectivity helped those in need. This was especially important in rural areas that lacked the healthcare capacity of cities. Concurrently, the Internet of Things is making deeper inroads into delivering the health of a person to healthcare professionals via wearable technology. Connected healthcare has a bright future that will accelerate in 2021 as high-bandwidth 5G provides coverage to more of the population to facilitate virtual visits to the doctor from anywhere.

Working and Living

As companies and governments told their employees to work from home, it gave people time to rethink their living and working situation. Lots of people living in previously hip, urban, high-rise buildings found themselves residing in not-so-cool, hollowed-out ghost towns comprised of boarded-up windows and closed bars and cafés. Others began to question why they were living in areas with expensive real estate and high taxes when they no longer had to be close to the office. This led to a 2020 COVID exodus out of pricey apartments/condos downtown to cheaper homes in distant suburbs as well as the move from pricey areas like Silicon Valley to cheaper destinations like Texas. Since you were stuck in your home, having a larger house with a home office, fast broadband, and a back yard became the most important thing. Looking ahead to 2021, a hybrid model of work-from-home plus occasionally going into the office is here to stay as employees will no longer tolerate sitting in traffic two hours a day just to sit in a cubicle in a skyscraper. The digital transformation of how and where we work has truly accelerated.

Data and Advanced Analytics

Data has shown itself to be one of the world’s most important assets during the time of COVID. Petabytes of data have continuously streamed in from all over the world, letting us know the number of cases, the growth or decline of infections, hospitalizations, contact-tracing, free ICU beds, temperature checks, deaths, and hotspots of infection. Some of this data has been reported manually while lots of other sources are fully automated from machines. Capturing, storing, organizing, modeling and analyzing this big data has elevated the importance of cloud and edge computing, global-scale databases, advanced analytics software, and machine learning. This is a trend that was already taking place in business and now has a giant spotlight on it due to its global importance. There’s no stopping the data + advanced analytics juggernaut in 2021 and beyond.

Conclusion

2020 was one of the worst years in human history and the loss of life was just heartbreaking. People, businesses, and our education system had to become resourceful to survive. This resourcefulness amplified the importance of delivering connected, digital experiences to make previously remote things into local ones. Cheers to 2021 and the hope for a brighter day for all of humanity.

Read more…

Written by: Mirko Grabel

Edge computing brings a number of benefits to the Internet of Things: reduced latency, improved resiliency and availability, lower costs, and local data storage (to assist with regulatory compliance), to name a few. In my last blog post I examined some of these benefits as a means of defining exactly where the edge is. Now let’s take a closer look at how edge computing’s benefits play out in real-world IoT use cases.

Benefit No. 1: Reduced latency

Many applications have strict latency requirements, but when it comes to safety and security applications, latency can be a matter of life or death. Consider, for example, an autonomous vehicle applying brakes or roadside signs warning drivers of upcoming hazards. By the time data is sent to the cloud and analyzed, and a response is returned to the car or sign, lives can be endangered. But let’s crunch some numbers just for fun.

Say a Department of Transportation in Florida is considering a cloud service to host the apps for its roadside signs. One of the vendors on the DoT’s shortlist runs its cloud in California. The DoT’s latency requirement is less than 15 ms. Light travels through fiber at about 5 μs/km, and the distance from the U.S. east coast to the west coast is about 5,000 km. Do the math and the resulting round-trip latency is 50 ms. It’s pure physics. If the DoT requires a real-time response, it must move the compute closer to the devices.
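
That arithmetic is easy to sanity-check in a few lines of Python. The constants below are just the figures quoted above (5 μs/km in fiber, roughly 5,000 km coast to coast, a 15 ms budget); this is a back-of-the-envelope sketch, not a network model, and it ignores routing, queuing, and processing delays.

PROPAGATION_US_PER_KM = 5     # light in fiber: roughly 5 microseconds per km
DISTANCE_KM = 5000            # U.S. east coast to west coast, approximately
LATENCY_BUDGET_MS = 15        # the DoT requirement in the example

round_trip_ms = 2 * PROPAGATION_US_PER_KM * DISTANCE_KM / 1000
print(f"coast-to-coast round trip: {round_trip_ms:.0f} ms")        # 50 ms

# How far away could the compute sit and still fit inside the budget?
max_distance_km = LATENCY_BUDGET_MS * 1000 / (2 * PROPAGATION_US_PER_KM)
print(f"max distance within {LATENCY_BUDGET_MS} ms: {max_distance_km:.0f} km")  # 1500 km

In other words, the 15 ms requirement alone pins the compute to within roughly 1,500 km of the signs before any processing time is even counted.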

Benefit No. 2: Improved resiliency/availability

Critical infrastructure requires the highest level of availability and resiliency to ensure safety and continuity of services. Consider a refinery gas leakage detection system. It must be able to operate without Internet access. If the system goes offline and there’s a leakage, that’s an issue. Compute must be done at the edge. In this case, the edge may be on the system itself.

While it’s not a life-threatening use case, retail operations can also benefit from the availability provided by edge compute. Retailers want their Point of Sale (PoS) systems to be available 100% of the time to service customers. But some retail stores are in remote locations with unreliable WAN connections. Moving the PoS systems onto their edge compute enables retailers to maintain high availability.

Benefit No. 3: Reduced costs

Bandwidth is almost infinite, but it comes at a cost. Edge computing allows organizations to reduce bandwidth costs by processing data before it crosses the WAN. This benefit applies to any use case, but here are two example use cases where this is very evident: video surveillance and preventive maintenance. For example, a single city-deployed HD video camera may generate 1,296 GB a month. Streaming that data over LTE easily becomes cost-prohibitive. Adding edge compute to pre-aggregate the data significantly reduces those costs.

Manufacturers use edge computing for preventive maintenance of remote machinery. Sensors are used to monitor temperatures and vibrations. The currency of this data is critical, as the slightest variation can indicate a problem. To ensure that issues are caught as early as possible, the application requires high-resolution data (for example, 1,000 samples per second). Rather than sending all of this data over the Internet to be analyzed, edge compute is used to filter the data, and only averages, anomalies, and threshold violations are sent to the cloud.
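
As a rough illustration of that filtering step, here is a minimal Python sketch of an edge-side aggregator for a sensor sampling at 1,000 readings per second. The threshold and the three-sigma anomaly rule are made-up example values rather than figures from any particular product; a real deployment would use the limits defined for the specific machine.

import statistics

THRESHOLD = 4.0          # example vibration limit, e.g. mm/s
ANOMALY_SIGMA = 3.0      # flag readings more than 3 standard deviations out

def summarize_window(samples):
    """Reduce one second of raw samples to what is worth sending upstream."""
    mean = statistics.fmean(samples)
    stdev = statistics.pstdev(samples)
    anomalies = [s for s in samples
                 if stdev and abs(s - mean) > ANOMALY_SIGMA * stdev]
    violations = [s for s in samples if s > THRESHOLD]
    return {"mean": round(mean, 3), "stdev": round(stdev, 3),
            "anomalies": anomalies, "threshold_violations": violations}

# 1,000 raw readings collapse into one small message for the cloud.
window = [2.1] * 998 + [2.3, 7.8]
print(summarize_window(window))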

Benefit No. 4: Comply with government regulations

Countries are increasingly instituting privacy and data retention laws. The European Union’s General Data Protection Regulation (GDPR) is a prime example. Any organization that has data belonging to an EU citizen is required to meet the GDPR’s requirements, which includes an obligation to report leaks of personal data. Edge computing can help these organizations comply with GDPR. For example, instead of storing and backhauling surveillance video, a smart city can evaluate the footage at the edge and only backhaul the meta data.

Canada’s Water Act: National Hydrometric Program is another edge computing use case that delivers regulatory compliance benefits. As part of the program, about 3,000 measurement stations have been implemented nationwide. Any missing data requires justification, so storing data at the edge ensures retention even if the connection to the network is lost.

Bonus Benefit: “Because I want to…”

Finally, some users simply prefer to have full control. By implementing compute at the edge rather than the cloud, users have greater flexibility. We have seen this in manufacturing. Technicians want to have full control over the machinery. Edge computing gives them this control as well as independence from IT. The technicians know the machinery best and security and availability remain top of mind.

Summary

By reducing latency and costs, improving resiliency and availability, and keeping data local, edge computing opens up a new world of IoT use cases. Those described here are just the beginning. It will be exciting to see where edge computing turns up next. 

Originally posted here

Read more…

It’s been a long time since I performed Karnaugh map minimizations by hand. As a result, on my first pass, I missed a couple of obvious optimizations.

I’m sorry about the title of this blog, but I’m feeling a little wackadoodle at the moment. I think the problem is that I’m giddy with excitement at the thought of the forthcoming Thanksgiving holiday.

So, here’s the deal. Starting sometime in 2021, I’m going to be writing a series of columns for Practical Electronics magazine in the UK teaching digital logic fundamentals to absolute beginners.

This will have a hands-on component with an accompanying circuit board. We’re going to start by constructing some simple logic gates at the transistor level, then use primitive logic gates in 7400-series ICs to construct more sophisticated functions, and work our way up to… but I fear I can say no more at the moment.

After we’ve created some really simple combinatorial functions — like a 2:1 multiplexer — by hand, we’re going to introduce things like Boolean algebra, DeMorgan transforms, and Karnaugh maps, and then we are going to use what we’ve learned to implement more complex combinatorial functions, culminating in a BCD to 7-segment decoder, before we progress to sequential circuits.

I was sketching out some notes this past weekend. Prior to the BCD to 7-segment decoder, we’ll already have tackled a BCD to decimal decoder, so a lot of the groundwork will have been laid. We’ll start by explaining how the segments in the 7-segment display are identified using the letters ‘a’ through ‘g’ and showing the combinations of segments we use to create the decimal digits 0 through 9.

8217684257?profile=RESIZE_710x

Using a 7-segment display to represent the decimal digits 0 through 9 (Click image to see a larger version — Image source: Max Maxfield)

Next, we will create the truth table. We’ll be using a common cathode 7-segment display, which means active-high outputs from our decoder because this is easier for newbies to wrap their brains around.

8217685658?profile=RESIZE_710x

Truth table for BCD to 7-segment decoder with active-high outputs (Click image to see a larger version — Image source: Max Maxfield)

Observe the input combinations shown in red in the truth table. We’ll point out that, in our case, we aren’t planning on using these input combinations, which means we don’t care what the corresponding outputs are because we will never actually see them (we’re using ‘X’ characters to represent the “don’t care” values). In turn, this means we can use these don’t care values in our Karnaugh maps to aid us in our logic minimization and optimization.

The funny thing is that it’s been a long time since I performed Karnaugh map minimizations by hand. As a result, on my first pass, I missed a couple of obvious optimizations. Just for giggles and grins, I’ve shown the populated maps below. Before you look at my solutions, why don’t you take a couple of minutes to perform your own minimizations to see how much you remember?

 8217691254?profile=RESIZE_710x

Use these populated maps to perform your own minimizations and optimizations (Click image to see a larger version — Image source: Max Maxfield)

I should point out that I’m a bit rusty at this sort of thing, so you might want to check that I’ve correctly captured the truth table and accurately populated these maps before you leap into the fray with gusto and abandon.
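
If you would rather let a computer do that checking, a quick brute-force test in Python will confirm whether a candidate equation matches the truth table. The sum-of-products below (a = w + y + x·z + x’·z’, where w is the 8s bit and z is the 1s bit) is one common minimization for segment ‘a’ once the don’t-care codes 10 through 15 are folded in; it isn’t necessarily the exact grouping shown in the maps above, so treat it as an example rather than the “official” answer.

# Expected segment 'a' output for BCD codes 0-9; codes 10-15 are don't cares
# and are simply skipped.
SEGMENT_A = {0: 1, 1: 0, 2: 1, 3: 1, 4: 0, 5: 1, 6: 1, 7: 1, 8: 1, 9: 1}

def candidate_a(w, x, y, z):
    # One common minimization: a = w + y + x.z + x'.z'
    return w or y or (x and z) or ((not x) and (not z))

for code, expected in SEGMENT_A.items():
    w, x, y, z = (code >> 3) & 1, (code >> 2) & 1, (code >> 1) & 1, code & 1
    assert int(bool(candidate_a(w, x, y, z))) == expected, f"mismatch at {code}"

print("candidate equation for segment 'a' matches the truth table")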

Remember that we’re dealing with absolute beginners here, so — even though I will have recently introduced them to Karnaugh map techniques — I think it would be a good idea to commence this portion of the discussions by walking them through the process for segment ‘a’ step-by-step, as illustrated below.

8217692064?profile=RESIZE_710x

Karnaugh map minimizations for 7-segment display (Click image to see a larger version — Image source: Max Maxfield)

Next, I extracted the Boolean equations corresponding to the Karnaugh map minimizations. As shown below, I’ve color-coded any product terms that appear multiple times. I don’t recall seeing this done before, but I think it could be a useful aid for beginners. Once again, I’d be interested to hear your thoughts about this.

8217692289?profile=RESIZE_710x

Boolean equations for 7-segment display (Click image to see a larger version — Image source: Max Maxfield)

Actually, I’d love to hear your thoughts on anything I’ve shown here. Do you think the way I’ve drawn the diagrams is conducive to beginners understanding what’s going on? Can you spot anything I’ve missed or could do better? I can’t wait for you to see what we have planned with regards to the circuit board and the “hands-on” part of this forthcoming series (I will, of course, be reporting back further in the future). Until then, as always, I welcome your comments, questions, and suggestions.

Originally posted HERE.

Read more…

PYNQ is great for accelerating Python applications in programmable logic. Let's take a look at how we can use it with the OpenMV camera.

Things used in this project

Hardware:

  • Avnet Ultra96-V2 (Can also use V1 or V3)
  • OpenMV Cam M7
  • Avnet Ultra96 (Can use V1 or V2)

Software:

  • Xilinx PYNQ Framework

Introduction

Image processing is required for a range of applications, from vision-guided robotics to machine vision in industrial settings.

In this project we are going to look at how we can fuse the OpenMV camera with the Ultra96 running PYNQ. This will allow our PYNQ application to offload some image processing to the camera. Doing so provides a higher-performance system and opens up the Ultra96 running PYNQ to the OpenMV ecosystem.

 

What Is the OpenMV Camera 

The OpenMV camera is a low-cost machine vision camera that is programmed using Python. Thanks to this architecture, we can offload some of the image processing to the camera itself, meaning the image frames received by our Ultra96 can already have faces identified, eyes tracked, or a Sobel filter applied; it all depends on how we set up the OpenMV Camera.

As the OpenMV camera has been designed to be extensible, it provides 10 external I/O pins which can be used to drive external sensors. These pins support a range of interfaces, from UART to SPI, I2C, and PWM. Of course, the PWM is very useful for driving servos.

One very useful feature of the OpenMV camera is its LEDs: mine (an OpenMV Cam M7) provides a tri-colour LED which can output red, green, and blue, plus a separate IR LED. As the sensor is IR sensitive, this can be useful for low-light performance.

8100406101?profile=RESIZE_400xOpenMV Camera

How Does the OpenMV Camera Work

The OpenMV Cam uses MicroPython to control the imager and output frames over the USB link. MicroPython is intended for use on microcontrollers and is based on Python 3.4. To use the OpenMV camera, we first write a MicroPython script which configures the camera for the algorithm we wish to implement. We then execute this script by uploading and running it over the USB link.

This means we need some OpenMV APIs and libraries on a host machine to communicate with the OpenMV Camera.

While developing the script we want to be able to ensure it works, which is where the OpenMV IDE comes into its own: it allows us to develop and test the script that we later use in our Ultra96 application.

We can develop this script using either a Windows, Mac, or Linux desktop.

 

Creating the OpenMV Script using the OpenMV IDE

To get started with the OpenMV IDE we first need to download and install it. Once it is installed, the next step is to connect our OpenMV camera over the USB link and run a script on it.

To get started we can run the hello world example provided, which configures the camera to output a standard RGB image at QVGA resolution. On the right-hand side of the IDE you will be able to see the images output from the camera.

 

We can use this IDE to develop scripts for the OpenMV camera such as the one below which detects and identifies circles in the captured image.

Note the frame rate is lower when the camera is connected to the IDE.

 

We can use the scripts developed here in our Ultra96 PYNQ implementation. Let's take a look at how we set up the Ultra96 and PYNQ.

Setting Up the Ultra96 PYNQ Image

The first thing we need to do, if we have not already done it, is to download the PYNQ image and create an SD card so we can run the PYNQ framework on the Ultra96.

As we want to use the Xilinx image processing overlay we should download the Ultra96 PYNQ v2.3 image.

Once you have this image, creating an SD card is very simple: extract the image from the compressed file and write it to an SD card. To write the image to the SD card we need a program such as Etcher or Win32 Disk Imager.

With an SD card available, we can then boot the Ultra96 and connect to the PYNQ framework in one of the following ways:

  • Use a USB Ethernet connection over the MicroUSB (upstream USB connection).
  • Connect via WiFi.
  • Use the Ultra96 as a single-board computer and connect a monitor, keyboard and mouse.

For this project I used the USB Ethernet connection.

The next thing to do is to ensure we have the necessary overlays to be able to accelerate image processing functions in the programmable logic. To do this we need to install the PYNQ computer vision overlay. 

Downloading the Image Processing Overlay

Installing this overlay is very straightforward. Open a browser window and connect to the address 192.168.3.1 (the USB Ethernet address). This will open a login page for the Jupyter notebooks; the password is Xilinx

 

Upon log in you will see the following folders and scripts

 

Click on New and select Terminal; this will open a new terminal in a browser window. To download and install the PYNQ Computer Vision overlays, we enter the following command:

sudo pip3 install --upgrade git+https://github.com/Xilinx/PYNQ-ComputerVision.git
 

Once these are downloaded if you look back at the Jupyter home page you will see a new directory called pynqOpenCV.

 

Using these Jupyter notebooks we can test the image processing performance when we accelerate OpenCV functions into the programmable logic.

 

Typically, as can be seen in the image above, the hardware acceleration greatly outperforms implementing the algorithm in software.
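
If you want to reproduce that comparison in your own notebook, a simple timing harness is all you need. In the sketch below the software baseline uses plain OpenCV; the hardware call is left as a commented-out placeholder because the exact module and function names exposed by the PYNQ-ComputerVision overlay depend on which overlay release you installed.

import time
import numpy as np
import cv2

frame = np.random.randint(0, 255, (1080, 1920), dtype=np.uint8)  # fake HD frame
kernel = np.ones((3, 3), np.float32) / 9.0                       # simple box filter

def time_it(label, fn, runs=10):
    start = time.time()
    for _ in range(runs):
        fn()
    print(f"{label}: {(time.time() - start) / runs * 1000:.1f} ms per frame")

time_it("OpenCV (software)", lambda: cv2.filter2D(frame, -1, kernel))

# Placeholder: swap in the accelerated function from your installed overlay.
# time_it("PL overlay (hardware)", lambda: accelerated_filter2D(frame, kernel))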

Of course we can call this overlay from our own Jupyter notebooks

 

Setting Up the OpenMV Camera in PYNQ

The next step is to configure the Ultra96 PYNQ instance to be able to control the OpenMV camera using its APIs. We can obtain these by downloading the OpenMV git repo using the command below in a terminal window on the Ultra96.

git clone https://github.com/openmv/openmv
 

Once this is downloaded we need to move the file pyopenmv.py

From openmv/tools

To /usr/lib/python3.6

This will allow us to control the OpenMV camera from within our Jupyter applications.

To be able to do this we need to know which serial port the OpenMV camera enumerates as. This will generally be ttyACM0 or ttyACM1; we can find this out by doing an ls of the /dev directory.
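
Rather than eyeballing the directory listing, you can also let Python find the port for you; this is just a small convenience snippet using the standard glob module, nothing OpenMV-specific.

import glob

# The OpenMV Cam normally shows up as /dev/ttyACM0 or /dev/ttyACM1.
ports = sorted(glob.glob("/dev/ttyACM*"))
print(ports)

portname = ports[0] if ports else None   # pass this to pyopenmv.init() later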

 

Now we are ready to begin working with the OpenMV camera in our applications. Let's take a look at how we set it up in our Jupyter scripts.

 

Initial Test of OpenMV Camera

The first thing we need to do in a new Jupyter notebook is to import the necessary packages. This includes pyopenmv, which we just installed.

We will also be importing numpy, as the image is returned as a numpy array, so that we can display it using numpy functionality.

import pyopenmv
import time
import sys
import numpy as np

Next we define the script we developed in the IDE. For "first light" with PYNQ and the OpenMV camera, we will use the hello world script to obtain a simple image.

script = """

# Hello World Example

#

# Welcome to the OpenMV IDE! Click on the green run arrow button below to run the script!

import sensor, image, time

import pyb

sensor.reset()                      # Reset and initialize the sensor.

sensor.set_pixformat(sensor.RGB565) # Set pixel format to RGB565 (or GRAYSCALE)

sensor.set_framesize(sensor.QVGA)   # Set frame size to QVGA (320x240)

sensor.skip_frames(time = 2000)     # Wait for settings take effect.

clock = time.clock()                # Create a clock object to track the FPS.

red_led = pyb.LED(1)

red_led.off()

red_led.on()

while(True):

   clock.tick() 

   img = sensor.snapshot()         # Take a picture and return the image.

"""

Once the script is defined, the next thing we need to do is connect to the OpenMV camera and download the script to it.

 

portname = "/dev/ttyACM0"

connected = False

pyopenmv.disconnect()

for i in range(10):

   try:

       # opens CDC port.

       # Set small timeout when connecting

       pyopenmv.init(portname, baudrate=921600, timeout=0.050)

       connected = True

       break

   except Exception as e:

       connected = False

        time.sleep(0.100)

if not connected:

   print ( "Failed to connect to OpenMV's serial port.\n"

           "Please install OpenMV's udev rules first:\n"

           "sudo cp openmv/udev/50-openmv.rules /etc/udev/rules.d/\n"

           "sudo udevadm control --reload-rules\n\n")

   sys.exit(1)

# Set higher timeout after connecting for lengthy transfers.

pyopenmv.set_timeout(1*2) # SD Cards can cause big hicups.

pyopenmv.stop_script()

pyopenmv.enable_fb(True)

pyopenmv.exec_script(script)

Finally, once the script has been downloaded and is executing, we want to be able to read out the frame buffer. The cell below reads out the frame buffer and saves it as a JPEG file in the PYNQ file system.

 

running = True

from PIL import Image

from IPython.display import display  # lets us show each frame inside the loop

while running:

    fb = pyopenmv.fb_dump()

    if fb is not None:

        # fb[2] holds the frame data as an array; wrap it as an RGB image

        img = Image.fromarray(fb[2], 'RGB')

        img.save("frame.jpg")  # keep a copy in the PYNQ file system

        display(img)  # a bare 'img' only renders at the end of a cell, so call display() explicitly

        time.sleep(0.100)

 

When I ran this script the first light image below was received of me working in my office.

 

Having achieved this, the next step is to start working with more advanced scripts in the PYNQ Jupyter notebook. Using the same approach as above, we can redefine scripts which can be used for different processing, including:

script = """

import sensor, image, time

sensor.reset() # Initialize the camera sensor.

sensor.set_pixformat(sensor.GRAYSCALE) # or sensor.RGB565

sensor.set_framesize(sensor.QQVGA) # or sensor.QVGA (or others)

sensor.skip_frames(time = 2000) # Let new settings take affect.

sensor.set_gainceiling(8)

clock = time.clock() # Tracks FPS.

while(True):

   clock.tick() # Track elapsed milliseconds between snapshots().

   img = sensor.snapshot() # Take a picture and return the image.

   # Use Canny edge detector

   img.find_edges(image.EDGE_CANNY, threshold=(50, 80))

   # Faster simpler edge detection

   #img.find_edges(image.EDGE_SIMPLE, threshold=(100, 255))

   print(clock.fps()) # Note: Your OpenMV Cam runs about half as fast while

"""

For Canny edge detection when imaging a MiniZed Board

 

Alternatively we can also extract key points from images for tracking in subsequent images.

script = """

import sensor, time, image

# Reset sensor

sensor.reset()

# Sensor settings

sensor.set_contrast(3)

sensor.set_gainceiling(16)

sensor.set_framesize(sensor.VGA)

sensor.set_windowing((320, 240))

sensor.set_pixformat(sensor.GRAYSCALE)

sensor.skip_frames(time = 2000)

sensor.set_auto_gain(False, value=100)

def draw_keypoints(img, kpts):

   if kpts:

       print(kpts)

       img.draw_keypoints(kpts)

       img = sensor.snapshot()

       time.sleep(1000)

kpts1 = None

# NOTE: uncomment to load a keypoints descriptor from file

#kpts1 = image.load_descriptor("/desc.orb")

#img = sensor.snapshot()

#draw_keypoints(img, kpts1)

clock = time.clock()

while (True):

   clock.tick()

   img = sensor.snapshot()

   if (kpts1 == None):

       # NOTE: By default find_keypoints returns multi-scale keypoints extracted from an image pyramid.

       kpts1 = img.find_keypoints(max_keypoints=150, threshold=10, scale_factor=1.2)

       draw_keypoints(img, kpts1)

   else:

       # NOTE: When extracting keypoints to match the first descriptor, we use normalized=True to extract

       # keypoints from the first scale only, which will match one of the scales in the first descriptor.

       kpts2 = img.find_keypoints(max_keypoints=150, threshold=10, normalized=True)

       if (kpts2):

           match = image.match_descriptor(kpts1, kpts2, threshold=85)

           if (match.count()>10):

               # If we have at least n "good matches"

               # Draw bounding rectangle and cross.

               img.draw_rectangle(match.rect())

               img.draw_cross(match.cx(), match.cy(), size=10)

           print(kpts2, "matched:%d dt:%d"%(match.count(), match.theta()))

           # NOTE: uncomment if you want to draw the keypoints

           #img.draw_keypoints(kpts2, size=KEYPOINTS_SIZE, matched=True)

   # Draw FPS

   img.draw_string(0, 0, "FPS:%.2f"%(clock.fps()))

"""

Circle Detection

 

import sensor, image, time

sensor.reset()

sensor.set_pixformat(sensor.RGB565) # grayscale is faster

sensor.set_framesize(sensor.QQVGA)

sensor.skip_frames(time = 2000)

clock = time.clock()

while(True):

   clock.tick()

   img = sensor.snapshot().lens_corr(1.8)

   # Circle objects have four values: x, y, r (radius), and magnitude. The

   # magnitude is the strength of the detection of the circle. Higher is

   # better...

   # `threshold` controls how many circles are found. Increase its value

   # to decrease the number of circles detected...

   # `x_margin`, `y_margin`, and `r_margin` control the merging of similar

   # circles in the x, y, and r (radius) directions.

   # r_min, r_max, and r_step control what radiuses of circles are tested.

   # Shrinking the number of tested circle radiuses yields a big performance boost.

   for c in img.find_circles(threshold = 2000, x_margin = 10, y_margin = 10, r_margin = 10,

           r_min = 2, r_max = 100, r_step = 2):

       img.draw_circle(c.x(), c.y(), c.r(), color = (255, 0, 0))

       print(c)

   print("FPS %f" % clock.fps())

 

 

 

This ability to offload processing to either the OpenMV camera or the Ultra96 programmable logic running PYNQ provides the system designer with maximum flexibility.

 

Wrap Up

Coupling the OpenMV camera with the PYNQ computer vision libraries, along with other overlays such as the Kalman filter and base overlays, lets us implement algorithms that enable vision-guided robotics. Using the base overlay and the input/output processors also enables us to communicate with the lower-level drives, interfaces, and other sensors required to implement such a solution.

Originally posted here.

 

Read more…

Arm DevSummit 2020 debuted this week (October 6 – 8) as an online virtual conference focused on engineers and providing them with insights into the Arm ecosystem. The summit lasted three days over which Arm painted an interesting technology story about the current and future state of computing and where developers fit within that story. I’ve been attending Arm Techcon for more than half a decade now (which has become Arm DevSummit) and as I perused content, there were several take-a-ways I noticed for developers working on microcontroller based embedded systems. In this post, we will examine these key take-a-ways and I’ll point you to some of the sessions that I also think may pique your interest.

(For those of you who aren’t yet aware, you can register (for free) up until October 21st and still watch the conference materials up until November 28th. Click here to register.)

Take-A-Way #1 – Expect Big Things from NVIDIAs Acquisition of Arm

As many readers probably already know, NVIDIA is in the process of acquiring Arm. This acquisition has the potential to be one of the focal points that I think will lead to a technological revolution in computing technologies, particularly around artificial intelligence but that will also impact nearly every embedded system at the edge and beyond. While many of us have probably wondered what plans NVIDIA CEO Jensen Huang may have for Arm, the Keynotes for October 6th include a fireside chat between Jensen Huang and Arm CEO Simon Segars. Listening to this conversation is well worth the time and will help give developers some insights into the future but also assurances that the Arm business model will not be dramatically upended.

Take-A-Way #2 – Machine Learning for MCU’s is Accelerating

It is sometimes difficult at a conference to get a feel for what is real and what is a little more smoke and mirrors. Sometimes, announcements are real, but they just take several years to filter their way into the market and affect how developers build systems. Machine learning is one of those technologies that I find there is a lot of interest around but that developers also aren’t quite sure what to do with yet, at least in the microcontroller space. When we hear machine learning, we think artificial intelligence, big datasets and more processing power than will fit on an MCU.

There were several interesting talks at DevSummit around machine learning such as:

Some of these were foundational, providing embedded developers with the fundamentals to get started while others provided hands-on explorations of machine learning with development boards. The take-a-way that I gather here is that the effort to bring machine learning capabilities to microcontrollers so that they can be leveraged in industry use cases is accelerating. Lots of effort is being placed in ML algorithms, tools, frameworks and even the hardware. There were several talks that mentioned Arm’s Cortex-M55 architecture that will include Helium technology to help accelerate machine learning and DSP processing capabilities.

Take-A-Way #3 – The Constant Need for Reinvention

In my last take-a-way, I alluded to the fact that things are accelerating. Acceleration is not just happening, though, in the technologies that we use to build systems. The range of application domains that we can apply these technologies to is dramatically expanding. Not only can we start to deploy security and ML technologies at the edge, but also in domains such as space and medical systems. There were several interesting talks about how technologies are being used around the world to solve interesting and unique problems such as protecting vulnerable ecosystems, mapping the sea floor, fighting diseases, and so much more.

By carefully watching and listening, you’ll notice that many speakers have been involved in many different types of products over their careers and that they are constantly having to reinvent their skill sets, capabilities and even their interests! This is what makes working in embedded systems so interesting! It is constantly changing and evolving and as engineers we don’t get to sit idly behind a desk. Just as Arm, NVIDIA and many of the other ecosystem partners and speakers show us, technology is rapidly changing but so are the problem domains that we can apply these technologies to.

Take-A-Way #4 – Mbed and Keil are Evolving

There are also interesting changes coming to the Arm toolchains and tools like Mbed and Keil MDK. In Reinhard Keil’s talk, “Introduction to an Open Approach for Low-Power IoT Development“, developers got an insight into the changes that are coming to Mbed and Keil with the core focus being on IoT development. The talk focused on the endpoint and discussed how Mbed and Keil MDK are being moved to an online platform designed to help developers move through the product development faster from prototyping to production. The Keil Studio Online is currently in early access and will be released early next year.

(If you are interested in endpoints and AI, you might also want to check-out this article on “How Do We Accelerate Endpoint AI Innovation? Put Developers First“)

Conclusions

Arm DevSummit had a lot to offer developers this year and without the need to travel to California to participate. (Although I greatly missed catching up with friends and colleagues in person). If you haven’t already, I would recommend checking out the DevSummit and watching a few of the talks I mentioned. There certainly were a lot more talks and I’m still in the process of sifting through everything. Hopefully there will be a few sessions that will inspire you and give you a feel for where the industry is headed and how you will need to pivot your own skills in the coming years.

Originally posted here

Read more…

Will We Ever Get Quantum Computers?

In a recent issue of IEEE Spectrum, Mikhail Dyakonov makes a pretty compelling argument that quantum computing (QC) isn't going to fly anytime soon. Now, I'm no expert on QC, and there sure is a lot of money being thrown at the problem by some very smart people, but having watched from the sidelines QC seems a lot like fusion research. Every year more claims are made, more venture capital gets burned, but we don't seem to get closer to useful systems.

Consider D-Wave Systems. They've been trying to build a QC for twenty years, and indeed do have products more or less on the market, including, it's claimed, one of 1024 q-bits. But there's a lot of controversy about whether their machines are either quantum computers at all, or if they offer any speedup over classical machines. One would think that if a 1K q-bit machine really did work the press would be all abuzz, and we'd be hearing constantly of new incredible results. Instead, the machines seem to disappear into research labs.

Mr. Dyakonov notes that optimistic people expect useful QCs in the next 5-10 years; those less sanguine expect 20-30 years, a prediction that hasn't changed in two decades. He thinks a window of many decades to never is more realistic. Experts think that a useful machine, one that can do the sort of calculations your laptop is capable of, will require between 1000 and 100,000 q-bits. To me, this level of uncertainty suggests that there is a profound lack of knowledge about how these machines will work and what they will be able to do.

According to the author, a 1000 q-bit machine can be in 2^1000 states (a classical machine with N transistors can be in only 2^N states), which is about 10^300, or more than the number of sub-atomic particles in the universe. At 100,000 q-bits we're talking 10^30,000, a mind-boggling number.
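
Those exponents are easy to verify with a couple of lines of Python:

import math

print(round(1000 * math.log10(2)))      # 301   -> 2^1000 is roughly 10^301
print(round(100_000 * math.log10(2)))   # 30103 -> 2^100,000 is roughly 10^30,000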

Because of noise, expect errors. Some theorize that those errors can be eliminated by adding q-bits, on the order of 1000 to 100,000 additional per q-bit. So a useful machine will need at least millions, or perhaps many orders of magnitude more, of these squirrelly microdots that are tamed only by keeping them at 10 millikelvin.

A related article in Spectrum mentions a committee formed of prestigious researchers tasked with assessing the probability of success with QC concluded that:

"[I]t is highly unexpected" that anyone will be able to build a quantum computer that could compromise public-key cryptosystems (a task that quantum computers are, in theory, especially suitable for tackling) in the coming decade. And while less-capable "noisy intermediate-scale quantum computers" will be built within that time frame, "there are at present no known algorithms/applications that could make effective use of this class of machine," the committee says."

I don't have a dog in this fight, but am relieved that useful QC seems to be no closer than The Distant Shore (to quote Jan de Hartog, one of my favorite writers). If it were feasible to easily break encryption schemes, banking and other systems could collapse. I imagine Blockchain would fail as hash algorithms became reversible. The resulting disruption would not be healthy for our society.

On the other hand, Bruce Schneier's article in the March issue of IEEE Computing Edge suggests that QC won't break all forms of encryption, though he does think a lot of our current infrastructure will be vulnerable. The moral: if and when QC becomes practical, expect chaos.

I was once afraid of quantum computing, as it involves mechanisms that I'll never understand. But then I realized those machines will have an API. Just as one doesn't need to know how a computer works to program in Python, we'll be insulated from the quantum horrors by layers of abstraction.

Originally posted here

Read more…

A scientist from Russia has developed a new neural network architecture and tested its learning ability on the recognition of handwritten digits. The intelligence of the network was amplified by chaos, and the classification accuracy reached 96.3%. The network can be used in microcontrollers with a small amount of RAM and embedded in such household items as shoes or refrigerators, making them 'smart.' The study was published in Electronics.

Today, the search for new neural networks that can operate on microcontrollers with a small amount of random access memory (RAM) is of particular importance. For comparison, in ordinary modern computers, random access memory is measured in gigabytes. Although microcontrollers possess significantly less processing power than laptops and smartphones, they are smaller and can be interfaced with household items. Smart doors, refrigerators, shoes, glasses, kettles and coffee makers create the foundation for so-called ambient intelligence. The term denotes an environment of interconnected smart devices. 

An example of ambient intelligence is a smart home. Devices with limited memory are not able to store a large number of keys for secure data transfer or arrays of neural network settings. This prevents the introduction of artificial intelligence into Internet of Things devices, as they lack the required computing power. However, artificial intelligence would allow smart devices to spend less time on analysis and decision-making, better understand a user and assist them in a friendly manner. Therefore, many new opportunities can arise in the creation of environmental intelligence, for example, in the field of health care.

Andrei Velichko from Petrozavodsk State University, Russia, has created a new neural network architecture that allows efficient use of small volumes of RAM and opens up opportunities for introducing low-power devices to the Internet of Things. The network, called LogNNet, is a feed-forward neural network in which the signals are directed exclusively from input to output. It uses deterministic chaotic filters for the incoming signals. The system randomly mixes the input information, but at the same time extracts valuable data that is initially invisible in that information. A similar mechanism is used by reservoir neural networks. To generate chaos, a simple logistic mapping equation is applied, where the next value is calculated based on the previous one. The equation is commonly used in population biology and as an example of a simple equation for calculating a sequence of chaotic values. In this way, the simple equation provides an effectively infinite set of random numbers that the processor calculates on the fly, and the network architecture uses them while consuming less RAM.
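
To make the idea concrete, here is a minimal Python sketch of how a logistic map can regenerate a long, reproducible stream of pseudo-random weights from a couple of stored numbers instead of a stored weight matrix. The parameter values and the tiny mixing function are purely illustrative; this is not the actual LogNNet implementation.

# Logistic map: x_next = r * x * (1 - x). For r close to 4 the sequence is
# chaotic, so two stored numbers (r and the seed x0) stand in for a whole
# table of pseudo-random weights.
def logistic_weights(r, x0, count):
    x = x0
    for _ in range(count):
        x = r * x * (1.0 - x)
        yield 2.0 * x - 1.0          # rescale from (0, 1) to (-1, 1)

def mix_input(pixels, n_out=5, r=3.9, x0=0.42):
    """Project an input vector through chaotically generated weights."""
    out = []
    for i in range(n_out):
        weights = logistic_weights(r, x0 + 0.01 * i, len(pixels))
        out.append(sum(p * w for p, w in zip(pixels, weights)))
    return out

print(mix_input([0.0, 0.5, 1.0, 0.25]))   # toy 4-pixel "image"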

7978216495?profile=RESIZE_584x

The scientist tested his neural network on handwritten digit recognition from the MNIST database, which is considered the standard for training neural networks to recognize images. The database contains more than 70,000 handwritten digits. Sixty thousand of these digits are intended for training the neural network, and another 10,000 for network testing. The more neurons and chaos in the network, the better it recognized images. The maximum accuracy achieved by the network is 96.3%, while the developed architecture uses no more than 29 KB of RAM. In addition, LogNNet demonstrated promising results using very small RAM sizes, in the range of 1-2 KB. A miniature controller such as the ATmega328, which can be embedded into a smart door or even a smart insole, has approximately the same amount of memory.

"Thanks to this development, new opportunities for the Internet of Things are opening up, as any device equipped with a low-power miniature controller can be powered with artificial intelligence. In this way, a path is opened for intelligent processing of information on peripheral devices without sending data to cloud services, and it improves the operation of, for example, a smart home. This is an important contribution to the development of IoT technologies, which are actively researched by the scientists of Petrozavodsk State University. In addition, the research outlines an alternative way to investigate the influence of chaos on artificial intelligence," said Andrei Velichko.

Originally posted HERE.

by Russian Science Foundation

Image Credit: Andrei Velichko

 

 

 

 

Read more…

Impact of IoT in Inventory

The Internet of Things (IoT) has revolutionized many industries, including inventory management. IoT is a concept in which devices are interconnected via the internet. It is expected that by 2020, there will be 26 billion connected devices worldwide. These connections are important because they allow data sharing, which in turn can drive actions that make life and business more efficient. Since inventory is a significant portion of a company’s assets, inventory data is vital to the accounting department for the company’s asset management and annual report.

In inventory solutions based on IoT and RFID, each individual inventory item receives an RFID tag. Each tag has a unique identification number (ID) that contains information about the inventory item, e.g. a model, a batch number, etc. These tags are scanned by an RFID reader. Upon scanning, the reader extracts the tag IDs and transmits them to the cloud for processing. Along with the tag’s ID, the cloud receives the location and the time of the reading. This data is used to update the status of inventory items, allowing users to monitor the inventory from anywhere, in real time.
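
In data terms, each scan is just a small event carrying the tag ID plus the reader's location and a timestamp. A hypothetical cloud-side handler might look like the sketch below; the field names and the in-memory store are illustrative only, since a real system would write to a database and feed the ERP or dashboards.

from datetime import datetime, timezone

inventory = {}   # tag_id -> last known state (stand-in for a real database)

def handle_tag_read(event):
    """Update the record for one inventory item from a single RFID read."""
    inventory[event["tag_id"]] = {
        "item": event.get("item", "unknown"),
        "location": event["reader_location"],
        "last_seen": event["timestamp"],
    }
    return inventory[event["tag_id"]]

print(handle_tag_read({
    "tag_id": "E200-3412-0123",
    "item": "motor housing, batch 42",
    "reader_location": "warehouse-3/dock-B",
    "timestamp": datetime.now(timezone.utc).isoformat(),
}))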

Industrial IoT

The role of IoT in inventory management is to receive data and turn it into meaningful insights about inventory items’ location and status, and to give users a corresponding output. For example, based on the data and the inventory management solution’s architecture, we can forecast the amount of raw material needed for the upcoming production cycle. The system can also send an alert if any individual inventory item is lost.

Moreover, IoT-based inventory management solutions can be integrated with other systems, e.g. ERP, and share data with other departments.

RFID in Industrial IoT

An RFID system consists of three main components: a tag, an antenna, and a reader.

Tags: An RFID tag carries information about a specific object. It can be attached to any surface, including raw materials, finished goods, packages, etc.

RFID antennas: An RFID antenna receives signals to supply power and data for tags’ operation

RFID readers: An RFID reader, uses radio signals to read and write to the tags. The reader receives data stored in the tag and transmits it to the cloud.

Benefits of IoT in inventory management

The benefits of IoT in the supply chain are among its most tangible, observable manifestations. IoT in the supply chain creates a level of transparency that increases efficiency.

Inventory tracking

The major benefit of IoT in inventory management is asset tracking: instead of using barcodes to scan and record data, items carry RFID tags that can be registered wirelessly. This makes it possible to obtain accurate data and track items from any point in the supply chain.

With RFID and IoT, managers don't have to spend time on manual tracking and reporting in spreadsheets. Each item is tracked, and the data about it is recorded automatically. Automated asset tracking and reporting save time and reduce the probability of human error.

Inventory optimization

With real-time data about the quantity and location of inventory, manufacturers can reduce the amount of inventory on hand while still meeting the needs of customers at the end of the supply chain.

Combining data about the amount of available inventory with machine learning makes it possible to forecast required inventory, which allows manufacturers to reduce lead times.
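
As a hedged illustration of that idea, the sketch below forecasts next-period demand with a simple moving average over recent consumption and derives a reorder quantity. A production system would more likely use a trained forecasting model and per-item lead times; all numbers and thresholds here are made up.

# Hypothetical sketch: forecast next-period demand with a moving average
# and compute how much to reorder. Numbers are made up for illustration.
def forecast_demand(consumption_history, window=4):
    """Average consumption over the last `window` periods."""
    recent = consumption_history[-window:]
    return sum(recent) / len(recent)

def reorder_quantity(on_hand, forecast, safety_stock=20):
    """Order enough to cover the forecast plus a safety buffer."""
    return max(0, round(forecast + safety_stock - on_hand))

history = [120, 135, 128, 142, 150, 138]   # units consumed per week
demand = forecast_demand(history)
print(f"forecast demand: {demand:.0f} units")
print(f"suggested order: {reorder_quantity(on_hand=90, forecast=demand)} units")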

Remote tracking

Remote product tracking makes it easy to keep an eye on production and the business as a whole. Knowing production and transit times allows you to fine-tune orders to suit lead times and respond to fluctuating demand. It also shows which suppliers are meeting production and shipping criteria and which need monitoring to achieve the required outcome.

It gives visibility into the flow of raw materials, work-in-progress, and finished goods by providing updates on the status and location of items, so that inventory managers can see when an individual item enters or leaves a specific location.

Bottlenecks in operations

With real-time data about location and quantity, manufacturers can reveal bottlenecks in the process and pinpoint machines with lower utilization rates. For instance, if part of the inventory tends to pile up in front of a particular machine, the manufacturer can see that this machine is a bottleneck and needs attention.
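
A minimal sketch of such a bottleneck check is shown below: it counts how many tagged items are currently sitting at each machine's location and flags machines whose queue exceeds a threshold. The location names and threshold are assumptions for illustration.

# Hypothetical sketch: spot potential bottlenecks by counting how many
# tagged items are queued at each machine. Names and threshold are made up.
from collections import Counter

def queued_per_machine(item_locations: dict) -> Counter:
    """Count items currently sitting at each machine's location."""
    return Counter(item_locations.values())

def flag_bottlenecks(item_locations: dict, max_queue: int = 10):
    return [machine for machine, count in queued_per_machine(item_locations).items()
            if count > max_queue]

item_locations = {f"tag-{i}": "press-02" for i in range(15)}
item_locations.update({f"tag-{100 + i}": "lathe-01" for i in range(3)})
for machine in flag_bottlenecks(item_locations):
    print(f"Possible bottleneck: inventory piling up at {machine}")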

The Outcomes

The data collected by an IoT-based inventory management system is more accurate and up to date. By reducing time delays in data collection, the manufacturing process gains accuracy and reduces wastage. An IoT-based inventory management solution offers complete visibility of inventory by providing real-time information fetched from RFID tags. It helps track the exact location of raw materials, work-in-progress, and finished goods. As a result, manufacturers can balance the amount of on-hand inventory, increase machine utilization, reduce lead times, and thus avoid the costs bound to less effective methods. It is all about optimizing inventory and ensuring that anything ordered can be sold through whatever channel necessary.

Originally posted here

Read more…

By: Tom Jeltes, Eindhoven University of Technology

The Internet of Things (IoT) consists of billions of sensors and other devices connected to each other via the internet, all of which need to be protected against hackers with malicious purposes. A low-cost and energy-efficient solution for securing IoT devices uses the unique characteristics of their built-in memory chips. Ph.D. candidate Lieneke Kusters investigated how to make optimal use of the chip's digital fingerprint to generate a security key.

The higher the number of devices connected to each other via the Internet of Things, the greater the risk that malicious hackers might gain access to important information, or even take over entire systems. Quite apart from all kinds of privacy issues, it's not hard to imagine that someone who, for example, has control over the temperature sensors in a chemical or nuclear plant could cause serious damage.

To prevent problems like these from occurring, each IoT device needs to be able, as it were, to show an identity document—"authentication," in professional terms. Normally, this is done with a kind of password, which is sent in encrypted form to the person who is communicating with the device. The security key needed for that has to be stored in the IoT device one way or another, Lieneke Kusters explains. "But these are often small and cheap devices that aren't supposed to use much energy. To safely store a key in these devices, you need extra hardware with a constant power supply. That's not very practical."

Digital fingerprint

There is a different way: namely by deducing the security key from a unique physical characteristic of the memory chip (Static Random-Access Memory, or SRAM) that can be found in practically every IoT device. Depending on the random circumstances during the chip's manufacturing process, the memory locations have a random default value of 0 or 1.
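
To make the idea concrete, here is a hypothetical sketch that simulates reading out such a start-up pattern and condensing it into a fixed-length key. The random bit pattern stands in for real SRAM cells, and hashing with SHA-256 is just one illustrative way to derive a key from a fingerprint, not the scheme used by Intrinsic ID or analyzed in the thesis.

# Hypothetical sketch: simulate an SRAM start-up pattern and condense it
# into a fixed-length key. This is only an illustration, not a real
# SRAM-PUF key-derivation scheme.
import hashlib
import random

def simulated_sram_startup(n_bits=4096, seed=42):
    """Stand-in for the chip's power-up values: a fixed random 0/1 pattern."""
    rng = random.Random(seed)   # the seed plays the role of manufacturing randomness
    return [rng.randint(0, 1) for _ in range(n_bits)]

def derive_key(bits):
    """Condense the fingerprint into a 256-bit key by hashing it."""
    raw = bytes(int("".join(map(str, bits[i:i + 8])), 2) for i in range(0, len(bits), 8))
    return hashlib.sha256(raw).hexdigest()

fingerprint = simulated_sram_startup()
print("derived key:", derive_key(fingerprint))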

"That binary code which you can read out when activating the chip, constitutes a kind of digital fingerprint of the device," says Kusters, who gained her doctorate at the Information and Communication Theory Laboratory at the TU/e department of Electrical Engineering. This fingerprint is known as a Physical Unclonable Function (PUF). "The Eindhoven-based company Intrinsic ID sells digital security based on SRAM-PUFs. I collaborated with them for my doctoral research, during which I focused on how to generate, in a reliable way, a key from that digital fingerprint that is as long as possible. The longer, the safer."

The major advantage of security keys based on SRAM-PUFs is that the key exists only at the moment when authentication is required. "The device restarts itself to read out the SRAM-PUF and in doing so creates the key, which subsequently gets erased immediately after use. That makes it all but impossible for an attacker to steal the key."

Noise and reliability

But that's not the entire story, because some bits of the SRAM do not always have the same value during activation, Kusters explains. Ten to fifteen percent of the bits turn out not to be determined, which makes the digital fingerprint a bit fuzzy. How do you use that fuzzy fingerprint to make a key of the highest possible complexity that nevertheless still fits into the receiving lock—practically—each time?

"What you want to prevent is that the generated key won't be recognized by the receiving party as a consequence of the 'noise' in the SRAM-PUF," Kusters explains. "It's alright if that happens one in a million times perhaps, preferably less often." The probability of error is smaller with a shorter key, but such a key is also easier to guess for people with bad intentions. "I've searched for the longest reliable key, given a certain amount of noise in the measurement. It helps if you store extra information about the SRAM-PUF, but that must not be of use to a potential attacker. My thesis is an analysis of how you can reach the optimal result in different situations with that extra information."

Originally posted here.
Read more…
