
Arm DevSummit 2020 debuted this week (October 6 – 8) as an online virtual conference focused on engineers, providing them with insights into the Arm ecosystem. The summit lasted three days, over which Arm painted an interesting technology story about the current and future state of computing and where developers fit within that story. I've been attending Arm TechCon (which has since become Arm DevSummit) for more than half a decade now, and as I perused the content, I noticed several takeaways for developers working on microcontroller-based embedded systems. In this post, we will examine these key takeaways and I'll point you to some of the sessions that I think may also pique your interest.

(For those of you who aren't yet aware, you can register for free until October 21st and still watch the conference materials until November 28th. Click here to register.)

Takeaway #1 – Expect Big Things from NVIDIA's Acquisition of Arm

As many readers probably already know, NVIDIA is in the process of acquiring Arm. This acquisition has the potential to be a focal point of a technological revolution in computing, particularly around artificial intelligence, one that will also impact nearly every embedded system at the edge and beyond. While many of us have probably wondered what plans NVIDIA CEO Jensen Huang may have for Arm, the keynotes for October 6th include a fireside chat between Jensen Huang and Arm CEO Simon Segars. Listening to this conversation is well worth the time: it gives developers some insight into the future, as well as assurance that the Arm business model will not be dramatically upended.

Takeaway #2 – Machine Learning for MCUs Is Accelerating

It is sometimes difficult at a conference to get a feel for what is real and what is a little more smoke and mirrors. Sometimes announcements are real, but they take several years to filter their way into the market and affect how developers build systems. Machine learning is one of those technologies that generates a lot of interest but that developers aren't quite sure what to do with yet, at least in the microcontroller space. When we hear machine learning, we think artificial intelligence, big datasets, and more processing power than will fit on an MCU.

There were several interesting talks at DevSummit around machine learning such as:

Some of these were foundational, providing embedded developers with the fundamentals to get started, while others provided hands-on explorations of machine learning with development boards. The takeaway I gather here is that the effort to bring machine learning capabilities to microcontrollers, so that they can be leveraged in industry use cases, is accelerating. Lots of effort is being put into ML algorithms, tools, frameworks, and even the hardware. Several talks mentioned Arm's Cortex-M55 processor, which will include Helium technology to help accelerate machine learning and DSP processing capabilities.

Takeaway #3 – The Constant Need for Reinvention

In my last takeaway, I alluded to the fact that things are accelerating. Acceleration is not just happening in the technologies that we use to build systems, though. The very set of application domains that we can apply these technologies to is dramatically expanding. Not only can we start to deploy security and ML technologies at the edge, but also in domains such as space and medical systems. There were several interesting talks about how technologies are being used around the world to solve interesting and unique problems such as protecting vulnerable ecosystems, mapping the sea floor, fighting diseases, and much more.

By carefully watching and listening, you’ll notice that many speakers have been involved in many different types of products over their careers and that they are constantly having to reinvent their skill sets, capabilities and even their interests! This is what makes working in embedded systems so interesting! It is constantly changing and evolving and as engineers we don’t get to sit idly behind a desk. Just as Arm, NVIDIA and many of the other ecosystem partners and speakers show us, technology is rapidly changing but so are the problem domains that we can apply these technologies to.

Takeaway #4 – Mbed and Keil Are Evolving

There are also interesting changes coming to the Arm toolchains and tools like Mbed and Keil MDK. In Reinhard Keil's talk, "Introduction to an Open Approach for Low-Power IoT Development", developers got an insight into the changes that are coming to Mbed and Keil, with the core focus being on IoT development. The talk focused on the endpoint and discussed how Mbed and Keil MDK are being moved to an online platform designed to help developers move through product development faster, from prototyping to production. Keil Studio Online is currently in early access and will be released early next year.

(If you are interested in endpoints and AI, you might also want to check out this article on "How Do We Accelerate Endpoint AI Innovation? Put Developers First".)

Conclusions

Arm DevSummit had a lot to offer developers this year and without the need to travel to California to participate. (Although I greatly missed catching up with friends and colleagues in person). If you haven’t already, I would recommend checking out the DevSummit and watching a few of the talks I mentioned. There certainly were a lot more talks and I’m still in the process of sifting through everything. Hopefully there will be a few sessions that will inspire you and give you a feel for where the industry is headed and how you will need to pivot your own skills in the coming years.

Originally posted here

Read more…

Will We Ever Get Quantum Computers?

In a recent issue of IEEE Spectrum, Mikhail Dyakonov makes a pretty compelling argument that quantum computing (QC) isn't going to fly anytime soon. Now, I'm no expert on QC, and there sure is a lot of money being thrown at the problem by some very smart people, but having watched from the sidelines, QC seems a lot like fusion research. Every year more claims are made, more venture capital gets burned, but we don't seem to get closer to useful systems.

Consider D-Wave Systems. They've been trying to build a QC for twenty years, and indeed do have products more or less on the market, including, it's claimed, one with 1,024 qubits. But there's a lot of controversy about whether their machines are really quantum computers at all, or whether they offer any speedup over classical machines. One would think that if a 1K-qubit machine really did work, the press would be all abuzz, and we'd be hearing constantly of new incredible results. Instead, the machines seem to disappear into research labs.

Mr. Dyakonov notes that optimistic people expect useful QCs in the next 5-10 years; those less sanguine expect 20-30 years, a prediction that hasn't changed in two decades. He thinks a window of many decades to never is more realistic. Experts think that a useful machine, one that can do the sort of calculations your laptop is capable of, will require between 1,000 and 100,000 qubits. To me, this level of uncertainty suggests a profound lack of knowledge about how these machines will work and what they will be able to do.

According to the author, a 1,000-qubit machine can be in 2^1000 states (a classical machine with N transistors can be in only 2^N states), which is about 10^300, or more than the number of sub-atomic particles in the universe. At 100,000 qubits we're talking 10^30,000, a mind-boggling number.
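
For readers who want to check the scale of these numbers, a couple of lines of Python (standard library only) confirm the arithmetic:

    from math import log10

    # The number of decimal digits in 2^n is floor(n * log10(2)) + 1.
    print(int(1000 * log10(2)) + 1)      # 302 digits, i.e. roughly 10^300
    print(int(100_000 * log10(2)) + 1)   # 30103 digits, i.e. roughly 10^30,000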

Because of noise, expect errors. Some theorize that those errors can be eliminated by adding qubits, on the order of 1,000 to 100,000 additional qubits per qubit. So a useful machine will need at least millions, or perhaps many orders of magnitude more, of these squirrelly microdots that are tamed only by keeping them at 10 millikelvin.

A related article in Spectrum mentions that a committee of prestigious researchers, tasked with assessing the probability of success with QC, concluded that:

"[I]t is highly unexpected" that anyone will be able to build a quantum computer that could compromise public-key cryptosystems (a task that quantum computers are, in theory, especially suitable for tackling) in the coming decade. And while less-capable "noisy intermediate-scale quantum computers" will be built within that time frame, "there are at present no known algorithms/applications that could make effective use of this class of machine," the committee says.

I don't have a dog in this fight, but I am relieved that useful QC seems to be no closer than The Distant Shore (to quote Jan de Hartog, one of my favorite writers). If it were feasible to easily break encryption schemes, banking and other systems could collapse. I imagine blockchain would fail as hash algorithms became reversible. The resulting disruption would not be healthy for our society.

On the other hand, Bruce Schneier's article in the March issue of IEEE Computing Edge suggests that QC won't break all forms of encryption, though he does think a lot of our current infrastructure will be vulnerable. The moral: if and when QC becomes practical, expect chaos.

I was once afraid of quantum computing, as it involves mechanisms that I'll never understand. But then I realized those machines will have an API. Just as one doesn't need to know how a computer works to program in Python, we'll be insulated from the quantum horrors by layers of abstraction.

Originally posted here

Read more…

A scientist from Russia has developed a new neural network architecture and tested its learning ability on the recognition of handwritten digits. The intelligence of the network was amplified by chaos, and the classification accuracy reached 96.3%. The network can be used in microcontrollers with a small amount of RAM and embedded in such household items as shoes or refrigerators, making them 'smart.' The study was published in Electronics.

Today, the search for new neural networks that can operate on microcontrollers with a small amount of random-access memory (RAM) is of particular importance. For comparison, in ordinary modern computers random-access memory is measured in gigabytes. Although microcontrollers possess significantly less processing power than laptops and smartphones, they are smaller and can be interfaced with household items. Smart doors, refrigerators, shoes, glasses, kettles, and coffee makers create the foundation for so-called ambient intelligence, a term that denotes an environment of interconnected smart devices.

An example of ambient intelligence is a smart home. Devices with limited memory are not able to store a large number of keys for secure data transfer or arrays of neural network settings. This prevents the introduction of artificial intelligence into Internet of Things devices, as they lack the required computing power. However, artificial intelligence would allow smart devices to spend less time on analysis and decision-making, better understand a user, and assist them in a friendly manner. Therefore, many new opportunities can arise in the creation of ambient intelligence, for example in the field of health care.

Andrei Velichko from Petrozavodsk State University, Russia, has created a new neural network architecture that allows efficient use of small volumes of RAM and opens opportunities for the introduction of low-power devices to the Internet of Things. The network, called LogNNet, is a feed-forward neural network in which the signals are directed exclusively from input to output. It uses deterministic chaotic filters for the incoming signals. The system randomly mixes the input information, but at the same time extracts valuable data that are initially invisible; a similar mechanism is used by reservoir neural networks. To generate chaos, a simple logistic map equation is applied, in which the next value is calculated based on the previous one. The equation is commonly used in population biology and as an example of a simple equation for calculating a sequence of chaotic values. In this way, the simple equation gives the processor access to an effectively unlimited sequence of chaotic numbers that do not need to be stored, so the network architecture can use them while consuming less RAM.
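
The paper's exact network is not reproduced here, but the chaotic ingredient it relies on, the logistic map, is easy to sketch. In the toy code below, the parameter r, the seed x0, and the use of the sequence as fixed mixing weights are illustrative assumptions rather than LogNNet's actual configuration:

    def logistic_sequence(n, r=3.9, x0=0.5):
        """Generate n values of the logistic map x_{k+1} = r * x_k * (1 - x_k).

        For r close to 4 the sequence is chaotic, so it can be regenerated on
        demand from just (r, x0) instead of being stored in RAM.
        """
        x, values = x0, []
        for _ in range(n):
            x = r * x * (1 - x)
            values.append(x)
        return values

    # Example: use the chaotic values to mix (project) a flattened 28x28 image.
    weights = logistic_sequence(784)
    pixels = [0.0] * 784
    mixed = sum(w * p for w, p in zip(weights, pixels))

The point is that the entire weight sequence can be regenerated from two numbers (r and x0), which is why so little RAM is needed.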


The scientist tested his neural network on handwritten digit recognition using the MNIST database, which is considered the standard for training neural networks to recognize images. The database contains more than 70,000 handwritten digits: 60,000 of these digits are intended for training the neural network, and another 10,000 for testing it. The more neurons and chaos in the network, the better it recognized images. The maximum accuracy achieved by the network is 96.3%, while the developed architecture uses no more than 29 KB of RAM. In addition, LogNNet demonstrated promising results using very small RAM sizes, in the range of 1-2 KB. A miniature controller such as the ATmega328, which can be embedded into a smart door or even a smart insole, has approximately the same amount of memory.

"Thanks to this development, new opportunities for the Internet of Things are opening up, as any device equipped with a low-power miniature controller can be powered with artificial intelligence. In this way, a path is opened for intelligent processing of information on peripheral devices without sending data to cloud services, and it improves the operation of, for example, a smart home. This is an important contribution to the development of IoT technologies, which are actively researched by the scientists of Petrozavodsk State University. In addition, the research outlines an alternative way to investigate the influence of chaos on artificial intelligence," said Andrei Velichko.

Originally posted HERE.

by Russian Science Foundation

Image Credit: Andrei Velichko

Read more…

Impact of IoT in Inventory

The Internet of Things (IoT) has revolutionized many industries, including inventory management. IoT is a concept where devices are interconnected via the internet. It is expected that by 2020 there will be 26 billion connected devices worldwide. These connections are important because they allow data sharing, which can then drive actions that make life and business more efficient. Since inventory is a significant portion of a company's assets, inventory data is vital to the accounting department for the company's asset management and annual report.

In inventory solutions based on IoT and RFID, each individual inventory item receives an RFID tag. Each tag has a unique identification number (ID) that contains information about the inventory item, e.g. a model, a batch number, etc. These tags are scanned by an RFID reader. Upon scanning, the reader extracts the tag IDs and transmits them to the cloud for processing. Along with the tag's ID, the cloud receives the location and the time of reading. This data is used to update the status of inventory items, allowing users to monitor inventory from anywhere, in real time.
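
As a rough illustration of that flow (the endpoint URL and field names below are hypothetical, not tied to any particular vendor's API), a reader-side service might package each scan into a small JSON event and post it to the cloud:

    import json, time, urllib.request

    def publish_scan(tag_id, reader_id, location,
                     endpoint="https://example.com/inventory/events"):
        """Send one RFID read event to a (hypothetical) cloud endpoint."""
        event = {
            "tag_id": tag_id,         # unique ID stored on the RFID tag
            "reader_id": reader_id,   # which reader saw the tag
            "location": location,     # where that reader is installed
            "timestamp": time.time()  # time of reading
        }
        req = urllib.request.Request(
            endpoint,
            data=json.dumps(event).encode("utf-8"),
            headers={"Content-Type": "application/json"},
            method="POST",
        )
        with urllib.request.urlopen(req) as resp:
            return resp.status

    # publish_scan("E2000017221101441890A6B2", "reader-07", "warehouse-A/dock-3")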

Industrial IoT

The role of IoT in inventory management is to receive data and turn it into meaningful insights about inventory items' location and status, and to give users a corresponding output. For example, based on the data and the inventory management solution architecture, we can forecast the amount of raw materials needed for the upcoming production cycle. The system can also send an alert if any individual inventory item is lost.

Moreover, IoT-based inventory management solutions can be integrated with other systems, e.g. ERP, and share data with other departments.

RFID in Industrial IoT

An RFID system consists of three main components: a tag, an antenna, and a reader.

Tags: An RFID tag carries information about a specific object. It can be attached to any surface, including raw materials, finished goods, packages, etc.

RFID antennas: An RFID antenna receives signals to supply power and data for tags’ operation

RFID readers: An RFID reader uses radio signals to read from and write to the tags. The reader receives the data stored in the tag and transmits it to the cloud.

Benefits of IoT in inventory management

The benefits of IoT in the supply chain are among the most exciting physical manifestations we can observe. IoT in the supply chain creates unparalleled transparency that increases efficiency.

Inventory tracking

The major benefit of IoT in inventory management is asset tracking: instead of using barcodes to scan and record data, items carry RFID tags that can be registered wirelessly. It is possible to accurately obtain data and track items from any point in the supply chain.

With RFID and IoT, managers don’t have to spend time on manual tracking and reporting on spreadsheets. Each item is tracked and the data about it is recorded automatically. Automated asset tracking and reporting save time and reduce the probability of human error.

Inventory optimization

With real-time data about the quantity and the location of inventory, manufacturers can reduce the amount of inventory on hand while meeting the needs of customers at the end of the supply chain.

Combining data about the amount of available inventory with machine learning makes it possible to forecast the required inventory, which allows manufacturers to reduce lead time.
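
As a deliberately minimal sketch of that idea (the numbers are made up, and a production system would use a richer ML model per item), a moving-average forecast combined with a reorder point might look like this:

    def reorder_point(daily_usage_history, lead_time_days, safety_stock=0):
        """Reorder when stock falls to average daily demand times supplier lead
        time, plus a safety margin."""
        avg_daily_usage = sum(daily_usage_history) / len(daily_usage_history)
        return avg_daily_usage * lead_time_days + safety_stock

    # Example: last week's usage of one raw material, 5-day supplier lead time.
    usage = [120, 95, 130, 110, 105, 98, 125]
    print(reorder_point(usage, lead_time_days=5, safety_stock=200))  # about 759 units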

Remote tracking

Remote product tracking makes it easy to keep an eye on production and business. Knowing production and transit times allows you to better tweak orders to suit lead times and respond to fluctuating demand. It shows which suppliers are meeting production and shipping criteria and which need monitoring to get the required outcome.

It gives visibility into the flow of raw materials, work-in-progress and finished goods by providing updates about the status and location of the items so that inventory managers see when an individual item enters or leaves a specific location.

Bottlenecks in the operations

With real-time data about location and quantity, manufacturers can reveal bottlenecks in the process and pinpoint machines with lower utilization rates. For instance, if part of the inventory tends to pile up in front of a machine, the manufacturer can assume that the machine is underutilized and needs attention.

The Outcomes

The data collected by IoT-based inventory management is more accurate and up to date. By reducing reporting delays, the manufacturing process can enhance accuracy and reduce wastage. An IoT-based inventory management solution offers complete visibility of inventory by providing real-time information fetched by RFID tags. It helps to track the exact location of raw materials, work-in-progress and finished goods. As a result, manufacturers can balance the amount of on-hand inventory, increase the utilization of machines, reduce lead time, and thus avoid the costs of less effective methods. This is all about optimizing inventory and ensuring anything ordered can be sold through whatever channel necessary.

Originally posted here

Read more…

A fingerprint for the Internet of Things

By: Tom Jeltes, Eindhoven University of Technology

The Internet of Things (IoT) consists of billions of sensors and other devices connected to each other via the internet, all of which need to be protected against hackers with malicious purposes. A low-cost and energy-efficient solution for the security of IoT devices uses the unique characteristics of the built-in memory chips. Ph.D. candidate Lieneke Kusters investigated how to make optimal use of the chip's digital fingerprint to generate a security key.

The higher the number of devices connected to each other via the Internet of Things, the greater the risk that malicious hackers might gain access to important information, or even take over entire systems. Quite apart from all kinds of privacy issues, it's not hard to imagine that someone who, for example, has control over temperature sensors in a chemical or nuclear plant could cause serious damage.

To prevent problems like these from occurring, each IoT device needs to be able, as it were, to show an identity document—"authentication," in professional terms. Normally speaking, this is done with a kind of password, which is sent in encrypted form to the person who is communicating with the device. The security key needed for that has to be stored in the IoT device one way or another, Lieneke Kusters explains. "But these are often small and cheap devices that aren't supposed to use much energy. To safely store a key in these devices, you need extra hardware with constant power supply. That's not very practical."

Digital fingerprint

There is a different way: namely by deducing the security key from a unique physical characteristic of the memory chip (Static Random-Access Memory, or SRAM) that can be found in practically every IoT device. Depending on the random circumstances during the chip's manufacturing process, the memory locations have a random default value of 0 or 1.

"That binary code which you can read out when activating the chip, constitutes a kind of digital fingerprint of the device," says Kusters, who gained her doctorate at the Information and Communication Theory Laboratory at the TU/e department of Electrical Engineering. This fingerprint is known as a Physical Unclonable Function (PUF). "The Eindhoven-based company Intrinsic ID sells digital security based on SRAM-PUFs. I collaborated with them for my doctoral research, during which I focused on how to generate, in a reliable way, a key from that digital fingerprint that is as long as possible. The longer, the safer."

The major advantage of security keys based on SRAM-PUFs is that the key exists only at the moment when authentication is required. "The device restarts itself to read out the SRAM-PUF and in doing so creates the key, which subsequently gets erased immediately after use. That makes it all but impossible for an attacker to steal the key."

Noise and reliability

But that's not the entire story, because some bits of the SRAM do not always have the same value during activation, Kusters explains. Ten to fifteen percent of the bits turn out not to be determined, which makes the digital fingerprint a bit fuzzy. How do you use that fuzzy fingerprint to make a key of the highest possible complexity that nevertheless still fits into the receiving lock—practically—each time?

"What you want to prevent is that the generated key won't be recognized by the receiving party as a consequence of the 'noise' in the SRAM-PUF," Kusters explains. "It's alright if that happens one in a million times perhaps, preferably less often." The probability of error is smaller with a shorter key, but such a key is also easier to guess for people with bad intentions. "I've searched for the longest reliable key, given a certain amount of noise in the measurement. It helps if you store extra information about the SRAM-PUF, but that must not be of use to a potential attacker. My thesis is an analysis of how you can reach the optimal result in different situations with that extra information."

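A greatly simplified sketch of the general idea follows; it is not Intrinsic ID's or the thesis's actual scheme. During enrollment you record which bit positions were stable across several power-ups, and at key-generation time you read the SRAM again, keep only those positions, and hash the result into a key. Real designs use error-correcting codes and carefully constructed helper data rather than a plain stability mask.

    import hashlib

    def enroll(readouts):
        """Helper data: indices of bits that had the same value in every enrollment readout."""
        n = len(readouts[0])
        return [i for i in range(n) if len({r[i] for r in readouts}) == 1]

    def derive_key(readout, stable_positions):
        """Derive a key from the current (noisy) SRAM readout using only the stable bits."""
        bits = "".join(str(readout[i]) for i in stable_positions)
        return hashlib.sha256(bits.encode()).hexdigest()

    # Toy example: three power-up readouts of an 8-bit "SRAM"; bit 2 is noisy.
    enrollment = [[1, 0, 1, 1, 0, 0, 1, 0],
                  [1, 0, 0, 1, 0, 0, 1, 0],
                  [1, 0, 1, 1, 0, 0, 1, 0]]
    helper = enroll(enrollment)                          # -> [0, 1, 3, 4, 5, 6, 7]
    key = derive_key([1, 0, 0, 1, 0, 0, 1, 0], helper)   # same key despite the noisy bit
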
Originally posted here.


 
Read more…

Can AI Replace Firmware?

Scott Rosenthal and I go back about a thousand years; we've worked together, helped midwife the embedded field into being, had some amazing sailing adventures, and recently took a jaunt to the Azores just for the heck of it. Our sons are both big data people; their physics PhDs were perfect entrees into that field, and both now work in the field of artificial intelligence.

At lunch recently we were talking about embedded systems and AI, and Scott posed a thought that has been rattling around in my head since. Could AI replace firmware?

Firmware is a huge problem for our industry. It's hideously expensive. Only highly-skilled people can create it, and there are too few of us.

What if an AI engine of some sort could be dumped into a microcontroller and the "software" then created by training that AI? If that were possible - and that's a big "if" - then it might be possible to achieve what was hoped for when COBOL was invented: programmers would no longer be needed as domain experts could do the work. That didn't pan out for COBOL; the industry learned that accountants couldn't code. Though the language was much more friendly than the assembly it replaced, it still required serious development skills.

But with AI, could a domain expert train an inference engine?

Consider a robot: a "home economics" major could create scenarios of stacking dishes from a dishwasher. Maybe these would be in the form of videos, which were then fed to the AI engine as it tuned the weighting coefficients to achieve what the home ec expert deems worthy goals.

My first objection to this idea was that these sorts of systems have physical constraints. With firmware I'd write code to sample limit switches so the motors would turn off at an end-of-motion extreme. During training, an AI-based system would try to drive the motors into all kinds of crazy positions, banging destructively into the stops. But think how a child learns: a parent encourages experimentation but prevents the youngster from self-harm. Maybe that's the role of the future developer training an AI. Or perhaps the training will be done on a simulator of some sort where nothing can go horribly wrong.

Taking this further, a domain expert could define the desired inputs and outputs, and then a poorly-paid person could do the actual training. CEOs will love that. With that model a strange parallel emerges to computation a century ago: before the computer age, "computers" were people doing simple math to create tables of logs, trig, ballistics, etc. A room full of them labored at a problem. They weren't particularly skilled, didn't make much, but did the rote work under the direction of one master. Maybe AI trainers will be somewhat like that.

Like we outsource clothing manufacturing to Bangladesh, I could see training, basically grunt work, being sent overseas as well.

I'm not wild about this idea as it means we'd have an IoT of idiots: billions of AI-powered machines where no one really knows how they work. They've been well-trained but what happens when there's a corner case?

And most of the AI literature I read suggests that inference successes of 97% or so are the norm. That might be fine for classifying faces, but a 3% failure rate of a safety-critical system is a disaster. And the same rate for less-critical systems like factory controllers would also be completely unacceptable.

But the idea is intriguing.

Original post can be viewed here

Feel free to email me with comments.


Read more…

Theoretical Embedded Linux requirements

Hardware

SoC

A System on Chip (SoC) is essentially an integrated circuit that takes a single platform and integrates an entire computer system onto it. It combines the power of the CPU with the other components it needs to perform and execute its functions. It is in charge of using the other hardware and running your software. The main advantages of an SoC include lower latency and power savings.

It is made of various building blocks:

  • Core + Caches + MMU – An SoC has a processor at its core which defines its functions. Normally, an SoC has multiple processor cores built around a "real" application processor, e.g. an ARM Cortex-A9; this is the main thing kept in mind while choosing an SoC, possibly assisted by e.g. a SIMD co-processor like NEON.
  • Internal RAM – IRAM is composed of very high-speed SRAM located alongside the CPU. It acts similarly to a CPU cache and is generally very small. It is used in the first phase of the boot sequence.
  • Peripherals – These can be a simple ADC, a DSP, or a Graphics Processing Unit connected to the core via some bus. A low-power/real-time co-processor helps the main core with real-time tasks or handles low-power states. Examples of such IP cores are USB, PCI-E, SGX, etc.

External RAM

An SoC uses RAM to store temporary data during and after bootstrap. It is the memory an embedded system uses during regular operation.

Non-Volatile Memory

In an embedded system or single-board computer, this is typically an SD card. In other cases, it can be NAND, NOR, or SPI data flash memory. It is the storage from which the SoC reads, and it holds all the software components needed for the system to work.

External Peripherals

An SoC must have external interfaces for standard communication protocols such as USB, Ethernet, and HDMI. It may also include wireless technologies such as Wi-Fi and Bluetooth.

Software


First of all, we introduce the boot chain, which is the series of actions that happen when an SoC is powered up.

Boot ROM: This is a piece of code stored in ROM which is executed by the booting core when it is powered on. This code contains instructions for configuring the SoC to allow it to execute applications. The configuration performed by the Boot ROM includes initialization of the core's registers and stack pointer, enabling of caches and line buffers, programming of the interrupt service routines, and clock configuration.

Boot ROM also implements a Boot Assist Module (BAM) for downloading an application image from external memories using interfaces like Ethernet, SD/MMC, USB, CAN, UART, etc.

1st stage bootloader

The first-stage bootloader performs the following:

  • Setup the memory segments and stack used by the bootloader code
  • Reset the disk system
  • Display a string “Loading OS…”
  • Find the 2nd stage boot loader in the FAT directory
  • Read the 2nd stage boot loader image into memory at 1000:0000
  • Transfer control to the second-stage bootloader

The first-stage bootloader is copied by the Boot ROM into the SoC's internal RAM, so it must be tiny enough to fit that memory, usually well under 100 KB. It initializes the external RAM and the SoC's external memory interface, as well as other peripherals that may be of interest (e.g. it disables watchdog timers). Once done, it executes the next stage which, depending on the context, could be called MLO, SPL, or something else.

2nd stage bootloader

This is the main bootloader; it can be 10 times bigger than the first stage, and it completes the initialization of the relevant peripherals.

  • Copy the boot sector to a local memory area
  • Find kernel image in the FAT directory
  • Read kernel image in memory at 2000:0000
  • Reset the disk system
  • Enable the A20 line
  • Setup interrupt descriptor table at 0000:0000
  • Setup the global descriptor table at 0000:0800
  • Load the descriptor tables into the CPU
  • Switch to protected mode
  • Clear the prefetch queue
  • Setup protected mode memory segments and stack for use by the kernel code
  • Transfer control to the kernel code using a long jump

Linux Kernel

The Linux kernel is the main component of a Linux OS and is the core interface between hardware and processes. It communicates between the hardware and processes, managing resources as efficiently as possible. The kernel performs the following jobs:

  • Memory management: Keep track of memory, how much is used to store what, and where
  • Process management: Determine which processes can use the processor, when, and for how long
  • Device drivers: Act as an interpreter between the hardware and the processes
  • System calls and security: Receive requests for the service from processes

To put the kernel in context, a Linux machine can be thought of as having 3 layers:

  • The hardware: The physical machine—the base of the system, made up of memory (RAM) and the processor (CPU), as well as input/output (I/O) devices such as storage, networking, and graphics.
  • The Linux kernel: The core of the OS. It is a software residing in memory that tells the CPU what to do.
  • User processes: These are the running programs that the kernel manages. User processes are what collectively make up user space. The kernel allows processes and servers to communicate with each other.

Init and rootfs – init is the first non-kernel task to be run, and it has PID 1. It initializes everything needed to use the system. In production embedded systems, it also starts the main application. In such systems, it is either BusyBox or a custom-crafted application.

View original post here

Read more…


This complete guide is a 212-page eBook and is a must read for business leaders, product managers and engineers who want to implement, scale and optimize their business with IoT communications.

Whether you want to attempt initial entry into the IoT-sphere, or expand existing deployments, this book can help with your goals, providing deep understanding into all aspects of IoT.

CLICK HERE TO DOWNLOAD

Read more…

Edge Products Are Now Managed At The Cloud

Now more than ever, there are billions of edge products in the world. But without proper cloud computing, making the most of electronic devices that run on Linux or any other OS would not be possible.

And so, a question most people keep asking is which Software-as-a-Service platform can most effectively manage edge devices through cloud computing. Well, while edge device management may not be something new, the fact that the cloud computing space is not fully exploited means there is still a lot to do in the cloud space.

Remote product management is especially necessary for the 21st century and beyond. Because of the increasing number of devices connected to the Internet of Things (IoT), a reliable SaaS platform should help with fixing software glitches from anywhere in the world. From smart homes, stereo speakers, and cars to personal computers, any product that is connected to the internet needs real-time protection from hacking threats such as unlawful access to business or personal data.

Data being the most vital asset is constantly at risk, especially if individuals using edge products do not connect to trusted, reliable, and secure edge device management platforms.

Bridges the Gap Between Complicated Software And End Users

Cloud computing is the new frontier through which SaaS platforms help manage edge devices in real time. But something even more noteworthy is the increasing amount of complicated software that now runs edge devices at home and in workplaces.

Edge device management, therefore, ensures everything runs smoothly. From fixing bugs, running debugging commands to real-time software patch deployment, cloud management of edge products bridges a gap between end-users and complicated software that is becoming the norm these days.

Even more importantly, going beyond physical firewall barriers is a major necessity in remote management of edge devices. A reliable Software-as-a-Service therefore ensures that data encryption for edge devices is not only hackproof but also accessible only to the right people. Moreover, deployment of secure routers and access tools is especially critical in cloud computing when managing edge devices. And so, developers behind successful SaaS platforms conduct regular security checks over the cloud and design and implement solutions for edge products.

Reliable IT Infrastructure Is Necessary

Software-as-a-service platforms that manage edge devices focus on having a reliable IT infrastructure and centralized systems through which they can conduct cloud computing. It is all about remotely managing edge devices with the help of an IT infrastructure that eliminates challenges such as connectivity latency.

Originally posted here

Read more…

Introducing Profiler, by Auptimizer: Select the best AI model for your target device — no deployment required.

Profiler is a simulator for profiling the performance of Machine Learning (ML) model scripts. Profiler can be used during both the training and inference stages of the development pipeline. It is particularly useful for evaluating script performance and resource requirements for models and scripts being deployed to edge devices. Profiler is part of Auptimizer. You can get Profiler from the Auptimizer GitHub page or via pip install auptimizer.

The cost of training machine learning models in the cloud has dropped dramatically over the past few years. While this drop has pushed model development to the cloud, there are still important reasons for training, adapting, and deploying models to devices. Performance and security are the big two, but cost savings is also an important consideration, as the cost of transferring and storing data, and of building models for millions of devices, tends to add up. Unsurprisingly, machine learning for edge devices, or Edge AI as it is more commonly known, continues to become mainstream even as cloud compute becomes cheaper.

Developing models for the edge opens up interesting problems for practitioners.

  1. Model selection now involves taking into consideration the resource requirements of these models.
  2. The training-testing cycle becomes longer due to having a device in the loop because the model now needs to be deployed on the device to test its performance. This problem is only magnified when there are multiple target devices.

Currently, there are three ways to shorten the model selection/deployment cycle:

  • The use of device-specific simulators that run on the development machine and preclude the need for deployment to the device. Caveat: Simulators are usually not generalizable across devices.
  • The use of profilers that are native to the target device. Caveat: They need the model to be deployed to the target device for measurement.
  • The use of measures like FLOPS or Multiply-Add (MAC) operations to give approximate measures of resource usage. Caveat: The model itself is only one (sometimes insignificant) part of the entire pipeline (which also includes data loading, augmentation, feature engineering, etc.)

In practice, if you want to pick a model that will run efficiently on your target devices but do not have access to a dedicated simulator, you have to test each model by deploying on all of the target devices.

Profiler helps alleviate these issues. Profiler allows you to simulate, on your development machine, how your training or inference script will perform on a target device. With Profiler, you can understand CPU- and memory-usage as well as run-time for your model script on the target device.

How Profiler works

Profiler encapsulates the model script, its requirements, and corresponding data into a Docker container. It uses user-inputs on compute-, memory-, and framework-constraints to build a corresponding Docker image so the script can run independently and without external dependencies. This image can then easily be scaled and ported to ease future development and deployment. As the model script is executed within the container, Profiler tracks and records various resource utilization statistics including Average CPU Utilization, Memory Usage, Network I/O, and Block I/O. The logger also supports setting the Sample Time to control how frequently Profiler samples utilization statistics from the Docker container.
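
Profiler's own interfaces are documented on the Auptimizer GitHub page; purely as an illustration of the underlying idea (periodically sampling utilization statistics from a running container), a sketch using the Docker CLI might look like the following. The container name and sampling parameters are hypothetical:

    import subprocess, time

    def sample_container_stats(container="profiler_job", interval_s=1.0, samples=10):
        """Sample CPU and memory usage of a running Docker container at a fixed interval."""
        fmt = "{{.CPUPerc}};{{.MemUsage}}"
        readings = []
        for _ in range(samples):
            out = subprocess.check_output(
                ["docker", "stats", "--no-stream", "--format", fmt, container],
                text=True).strip()
            cpu, mem = out.split(";")
            readings.append((cpu, mem))
            time.sleep(interval_s)
        return readings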

Get Profiler: Click here

How Profiler helps

Our results show that Profiler can help users build a good estimate of model runtime and memory usage for many popular image/video recognition models. We conducted over 300 experiments across a variety of models (InceptionV3, SqueezeNet, Resnet18, MobileNetV2–0.25x, -0.5x, -0.75x, -1.0x, 3D-SqueezeNet, 3D-ShuffleNetV2–0.25x, -0.5x, -1.0x, -1.5x, -2.0x, 3D-MobileNetV2–0.25x, -0.5x, -0.75x, -1.0x, -2.0x) on three different devices — LG G6 and Samsung S8 phones, and NVIDIA Jetson Nano. You can find the full set of experimental results and more information on how to conduct similar experiments on your devices here.

The addition of Profiler brings Auptimizer closer to the vision of a tool that helps machine learning scientists and engineers build models for edge devices. The hyperparameter optimization (HPO) capabilities of Auptimizer help speed up model discovery. Profiler helps with choosing the right model for deployment. It is particularly useful in the following two scenarios:

  1. Deciding between models — The ranking of the run-times and memory usages of the model scripts measured using Profiler on the development machine is indicative of their ranking on the target device. For instance, if Model1 is faster than Model2 when measured using Profiler on the development machine, Model1 will be faster than Model2 on the device. This ranking is valid only when the CPUs are running at full utilization.
  2. Predicting model script performance on the device — A simple linear relationship relates the run-times and memory usage measured using Profiler on the development machine with the usage measured using a native profiling tool on the target device. In other words, if a model runs in time x when measured using Profiler, it will run approximately in time (a*x+b) on the target device (where a and b can be discovered by profiling a few models on the device with a native profiling tool). The strength of this relationship depends on the architectural similarity between the models but, in general, the models designed for the same task are architecturally similar as they are composed of the same set of layers. This makes Profiler a useful tool for selecting the best suited model.
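
A minimal sketch of the calibration described in point 2 above: given Profiler run-times for a few reference models and their measured on-device run-times (the numbers below are made up), a least-squares fit recovers a and b, which can then be used to predict on-device run-time for a new model:

    import numpy as np

    # Hypothetical measurements: Profiler run-times (s) on the development machine
    # and run-times (s) measured with a native profiling tool on the target device.
    profiler_times = np.array([1.2, 2.5, 4.1, 6.3])
    device_times = np.array([3.0, 5.9, 9.4, 14.1])

    a, b = np.polyfit(profiler_times, device_times, 1)  # fit t_device = a*x + b

    new_model_profiler_time = 3.3
    print(a * new_model_profiler_time + b)  # predicted on-device run-time in seconds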

Looking forward

Profiler continues to evolve. So far, we have tested its efficacy on select mobile- and edge-platforms for running popular image and video recognition models for inference, but there is much more to explore. Profiler might have limitations for certain models or devices and can potentially result in inconsistencies between Profiler outputs and on-device measurements. Our experiment page provides more information on how to best set up your experiment using Profiler and how to interpret potential inconsistencies in results. The exact use case varies from user to user but we believe that Profiler is relevant to anyone deploying models on devices. We hope that Profiler’s estimation capability can enable leaner and faster model development for resource-constrained devices. We’d love to hear (via github) if you use Profiler during deployment.

Originally posted here


Authors: Samarth Tripathi, Junyao Guo, Vera Serdiukova, Unmesh Kurup, and Mohak Shah — Advanced AI, LG Electronics USA

Read more…

Industrial IoT Revolution

Why the Nvidia Jetson Nano is responsible for the biggest industrial IoT revolution these days

It feels like yesterday when the Raspberry Pi Foundation released the first-in-line Single Board Computer (SBC) to the market. Back in 2012, Raspberry Pi wasn't alone in the growing SBC market; however, it was the first to make a community-based product that brings the hardware and the software ecosystem into a beautiful harmony on the internet. Before those days, embedded Linux-based SBCs and SOMs were the domain of Linux kernel and embedded hardware experts; there were no easy-to-use tools, no ready Linux-based distros, and, most importantly, none of the enormous number of questions and answers across the internet on anything related.

Today, 8 years later, the "2012 revolution" is happening again

This time, it took a year to understand the impact of the new 'kid' in the market, but now there are a few indications that definitely point the way to a revolution.

The Raspberry Pi was the first to make embedded Linux easy while keeping the advantages of reliability and flexibility in terms of fitting different kinds of industry applications. It's almost impossible to ignore the variety of industries where the Raspberry Pi sits at the heart of products, saving time-to-market and costs. The power of this magical board lies on the software side: the Raspberry Pi Foundation and their community worked hard across the years to improve and share their knowledge, and at the same time, without noticing or targeting it, they brought Pi development to an extremely "serverless" level.

The Nvidia Jetson Nano

Let's stop talking about the Raspberry Pi and focus on today's industry needs to understand better why the new kid in the town is here to change the market of IoT and smart products forever.

Why do we need to thank Nvidia and the Jetson Nano?

The market is moving forward. AI, robotics, amazing-looking app GUIs, image processing, and long data calculations have all become the new standard for smart edge products.

If a few years ago it was enough to connect your product to the cloud and get something valuable out of it, today product managers and developers compete in a much tougher era of the industry. This time, the Raspberry Pi can't be the technology hero again: its resources are limited, and the ecosystem is starting to look toward a better-fitting solution.

NVIDIA Jetson devices in Upswift.io device management platform

The Jetson Nano is the first SBC to understand the necessary combination that will drive new products to use it. It's the first SBC designed with powerful industrial use cases in mind, while not forgetting the prototyping stage and the harmony that gave the Raspberry Pi its success. It's the first solution to bring the whole package for developers and hardware engineers with a "SaaS" feel: the OS is already polished thanks to Ubuntu, there are plenty of software instructions from Nvidia and open-source, ready-to-use tools custom made for the Jetson family, and hardware engineers are free to go with the System on Module (SOM), which connects to a carrier board that includes all the necessary inputs and outputs to make the development stage even faster.

The Jetson Nano combination basically provides the first complete infrastructure for producing a "2020" product with complex software while working within a minimal budget and time-to-market. The Jetson Nano enables developers and product managers to imagine further without compromises, bringing tough software missions to the edge easily.

Originally posted here

Read more…

by Dan Carroll, Carnegie Mellon University, Department of Civil and Environmental Engineering

Credit: Pixabay/CC0 Public Domain
 
Across the U.S., there has been some criticism of the cost and efficacy of emissions inspection and maintenance (I/M) programs administered at the state and county level. In response, Engineering and Public Policy (EPP) Ph.D. student Prithvi Acharya and his advisor, Civil and Environmental Engineering's Scott Matthews, teamed up with EPP's Paul Fischbeck. They have created a new method for identifying over-emitting vehicles using remote data transmission and machine learning that would be both less expensive and more effective than current I/M programs.
 

Most states in America require passenger vehicles to undergo periodic emissions inspections to preserve air quality by ensuring that a vehicle's exhaust emissions do not exceed standards set at the time the vehicle was manufactured. What some may not know is that the metrics by which emissions are gauged nowadays are usually measured by the car itself, through on-board diagnostics (OBD) systems that process all of the vehicle's data. Effectively, these emissions tests check whether a vehicle's "check engine light" is on. While a vehicle flagged as over-emitting by this system is 87 percent likely to truly be over-emitting, the system also has a 50 percent false-pass rate for over-emitters when compared to tailpipe testing of actual emissions.

With cars as smart devices increasingly becoming integrated into the Internet of Things (IoT), there's no longer any reason for state and county administrations to force drivers to come in for regular I/M checkups when all the necessary data is stored on their vehicle's OBD. In an attempt to eliminate these unnecessary costs and improve the effectiveness of I/M programs, Acharya, Matthews, and Fischbeck published their recent study in IEEE Transactions on Intelligent Transportation Systems.

Their new method entails sending data directly from the vehicle to a cloud server managed by the state or county within which the driver lives, eliminating the need for them to come in for regular inspections. Instead, the data would be run through machine learning algorithms that identify trends in the data and codes prevalent among over-emitting vehicles. This means that most drivers would never need to report to an inspection site unless their vehicle's data indicates that it's likely over-emitting, at which point they could be contacted to come in for further inspection and maintenance.
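
The study's actual features and models are not reproduced here; as a hedged sketch of the general approach (training a classifier on OBD-derived features to flag likely over-emitters), something like the following could serve as a starting point. The feature names and the choice of scikit-learn's RandomForestClassifier are illustrative assumptions:

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    # Hypothetical OBD-derived features per vehicle: number of stored trouble codes,
    # miles since codes were last cleared, average engine load (%), coolant temp (C).
    X_train = np.array([
        [0, 12000, 35.0, 88.0],
        [3,   900, 55.0, 97.0],
        [1,  8000, 40.0, 90.0],
        [5,   300, 62.0, 99.0],
    ])
    y_train = np.array([0, 1, 0, 1])  # 1 = confirmed over-emitter (e.g. via tailpipe test)

    clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)

    # Score newly uploaded OBD records; only high-risk vehicles are called in.
    new_records = np.array([[2, 5000, 48.0, 93.0]])
    risk = clf.predict_proba(new_records)[:, 1]
    print(risk, risk > 0.5)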

Not only has the team's work shown that a significant amount of time and cost could be saved through smarter emissions inspection programs, but their study has also shown that these methods are more effective. Their model for identifying vehicles likely to be over-emitting was 24 percent more accurate than current OBD systems. This makes it cheaper, less demanding, and more efficient at reducing vehicle emissions.

This study could have major implications for leaders and residents within the 31 states and countless counties across the U.S. where I/M programs are currently in place. As these initiatives face criticism from proponents of both environmental deregulation and fiscal austerity, this team has presented a novel system that promises both significant reductions to cost and demonstrably improved effectiveness in reducing vehicle emissions. Their study may well redefine the testing paradigm for how vehicle emissions are regulated and reduced in America.

 
Originally posted here on Tech Xplore
 
Read more…

Embedded Linux or RTOS: For IoT

by Tirichlabs

Embedded Linux utilizes the Linux kernel for an embedded device, but it is quite different from the standard Linux OS. Its application to embedded systems is motivated by the availability of device support, file systems, network connectivity, and UI support. It is a customized version of Linux for embedded systems; consequently it has a much smaller size, minimal features, and requires less processing power. Based on the embedded system's requirements, the Linux kernel is modified and optimized. Such an embedded Linux can only run device-specific, purpose-built applications.

A Real-Time Operating System (RTOS) with minimal code is used for applications where short and fixed processing times are required. An RTOS is a time-sharing system based on clock interrupts that implements priority sequences to execute processes. When a high-priority interrupt is generated by the system, the running low-priority processes are stopped and the interrupt is served. A real-time operating system requires little operational memory and synchronizes processes in such a way that they can communicate with each other; hence resources can be used efficiently without wasting time.

 

COMPARISON

Size

The major difference between Embedded Linux and an RTOS is their size. An RTOS running on an AVR requires approximately 4.4 kilobytes of ROM. Embedded Linux, on the other hand, is relatively large. The kernel can be stripped of components which are not required, and even then the footprint is generally measured in megabytes.

Embedded Linux's RAM requirement is on the order of a few megabytes. In practical applications it requires more than that, because other tasks run under the Linux kernel. An RTOS has much smaller memory requirements than Linux: a very simple setup, running two tasks, a scheduler, a queue for communication, and a semaphore on an 8-bit architecture would use in the vicinity of 200 bytes.

Scheduler

The scheduler in an RT-system is important to ensure that tasks complete in a fixed time. Compared to a regular scheduler for a general-purpose system, it is not the main task of the scheduler to ensure ’fair’ distribution of CPU-time. A common technique is simply to let the task with the highest priority run before all tasks with lower priority. It works fine for a soft real-time system but for hard real-time, the system must provide a better guarantee.

RTOS scheduler

An RTOS uses a highest-priority-first scheduler, meaning that the task with the highest priority is always running. This is achieved by having a preemptive scheduler that, at a tick interrupt, decides whether the currently running task is allowed to continue executing or needs to be switched for another task based on priority. The scheduler uses the priority to schedule the task with the highest priority. Tasks having the same priority are given a "fair" share of processing time. This scheduler allows us to achieve soft real-time, but it is difficult to achieve hard real-time without any kind of deadline-based scheduling.

For this purpose, there is a choice between a preemptive and a cooperative scheduler. In preemptive mode, a task can be preempted, unlike in cooperative mode where it is up to each task to yield the CPU often enough so that higher-priority tasks get to run. A typical RTOS real-time kernel achieves scheduler latencies from zero to a few microseconds.
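
A toy sketch of the highest-priority-first decision an RTOS scheduler makes at each tick interrupt (greatly simplified; a real kernel also manages ready lists per priority, context switches, and blocked states):

    from dataclasses import dataclass, field

    @dataclass(order=True)
    class Task:
        priority: int                      # convention here: lower number = higher priority
        name: str = field(compare=False)

    def pick_next(ready_tasks, current):
        """At a tick, keep the current task unless a ready task has strictly higher priority."""
        if not ready_tasks:
            return current
        best = min(ready_tasks)            # highest-priority ready task
        if current is None or best.priority < current.priority:
            return best                    # preempt the current task
        return current

    # Example: an interrupt makes a high-priority task ready while a mid-priority task runs.
    ready = [Task(5, "logger"), Task(1, "motor_control")]
    print(pick_next(ready, Task(3, "ui_update")).name)  # -> motor_control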

Embedded Linux scheduler

In Embedded Linux, there are more scheduler choices. The modularity of Embedded Linux allows different parts of the system to be changed; a simple insmod gives the possibility to change the scheduler. There are a couple of schedulers designed for different purposes.

First of all, it has a basic highest-priority-first scheduler that uses the priority of a task to schedule it first. Embedded Linux also implements Earliest Deadline First (EDF), which uses its periodic task feature. Assuming that the deadline for every task is the time it is next due to run again, one can implement a fast EDF. In theory it is optimal, since it can schedule tasks up to 100% CPU usage; in practice it falls short of that due to some overheads. As its idle process, real-time Linux runs a usual Linux kernel, and when there are no real-time tasks that can run, Linux gets to run. This can lead to starvation of Linux and thus effectively disable it, but since the purpose of a real-time system is to run the real-time tasks, this is not a big problem. Typical latencies of a real-time Linux scheduler are on the order of tens to hundreds of microseconds.
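
A correspondingly tiny sketch of the EDF choice described above, treating each task's deadline as the time it is next due to run (the task tuples and times are illustrative):

    def edf_pick(ready_tasks):
        """Earliest Deadline First: run the ready task whose absolute deadline is soonest."""
        return min(ready_tasks, key=lambda t: t[1]) if ready_tasks else None

    # Each task is (name, absolute_deadline_in_seconds).
    ready = [("telemetry", 12.0), ("control_loop", 10.5)]
    print(edf_pick(ready))  # -> ('control_loop', 10.5)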

CPU resource

Embedded Linux requires a significant amount of CPU resources: perhaps >200 MIPS, a 32-bit processor, ideally with an MMU, 4 MB of ROM, and 16 MB of RAM; boot may take several seconds.

An RTOS, on the other hand, runs in less than 10 KB, on microcontrollers from 8-bit up, and boots in milliseconds.

IoT Implementation of OS

An RTOS is often preferred for extremely low-power applications, such as sensors that run for months on batteries. The low-power nature often precludes direct IP connectivity, so a gateway serves for Internet connectivity. The gateway communicates with the sensors over the low-power protocol and translates it to IP. Linux may already have an existing protocol implementation to fulfill the gateway's requirements.

The basic requirement of an IoT device is network connectivity, typically in the form of IP via a web server. An RTOS can offer IP connectivity, but it risks being buggy unless you examine it carefully. For example, RTOSes usually do not isolate the IP stack user from the IP stack itself. Network connectivity requires potentially dealing with low-speed or congested links, which can lead to obscure and hard-to-debug buffer-handling issues when the stack is intermingled with other code. On the other hand, embedded Linux leverages hardware separation and a widely used IP stack that has probably already been exposed to the corner cases.

Security is essential in IoT devices, which are often exposed to the open Internet. A compromise of the Internet interface lets intruders hijack information or control of the device. Developers can leverage native embedded Linux features—multiuser support, SELinux, and containers—to contain and limit the damage.

Linux certainly is a robust and secure OS, and it has matured as an embedded operating system. Yet one of its drawbacks is its memory footprint compared to a real-time operating system; even though it can be trimmed down by removing tools and system services that are not required in embedded systems, it is still a large piece of software. It simply cannot run on 8- or 16-bit MCUs and requires more onboard RAM for the Linux kernel. For example, ARM Cortex-M based MCUs typically have only a few hundred kilobytes of RAM, and Linux cannot run on these chips.

A common engineering solution for networked systems is to use two processors in the device: an 8- or 16-bit MCU handles the sensor or actuator, while a 32-bit processor running an RTOS handles the network interface. Sales of 32-bit MCUs have exploded in the last several years, and they have become the largest segment of the MCU market.
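
To illustrate how the two processors might talk, below is an entirely hypothetical framing scheme for the UART link between the sensor MCU and the network processor. Nothing here comes from the article, and real products define their own protocols; uart_send_byte() is assumed to be provided by the MCU's UART driver.

```c
/* Hypothetical message frame for the MCU <-> network-processor link:
 * sync byte, message type, length, payload, XOR checksum. */
#include <stddef.h>
#include <stdint.h>

#define FRAME_SYNC 0xA5u

struct frame {
    uint8_t sync;        /* always FRAME_SYNC                */
    uint8_t msg_type;    /* e.g. 0x01 = sensor reading       */
    uint8_t len;         /* payload length in bytes          */
    uint8_t payload[32];
    uint8_t checksum;    /* XOR of all preceding bytes       */
};

/* Assumed to be supplied by the MCU's UART driver. */
extern void uart_send_byte(uint8_t b);

static uint8_t frame_checksum(const struct frame *f)
{
    uint8_t sum = f->sync ^ f->msg_type ^ f->len;
    for (uint8_t i = 0; i < f->len && i < sizeof(f->payload); i++)
        sum ^= f->payload[i];
    return sum;
}

/* Build and transmit a 16-bit sensor reading as one frame. */
void send_reading(uint16_t reading)
{
    struct frame f = { .sync = FRAME_SYNC, .msg_type = 0x01, .len = 2 };
    f.payload[0] = (uint8_t)(reading >> 8);
    f.payload[1] = (uint8_t)(reading & 0xFF);
    f.checksum   = frame_checksum(&f);

    uart_send_byte(f.sync);
    uart_send_byte(f.msg_type);
    uart_send_byte(f.len);
    for (uint8_t i = 0; i < f.len; i++)
        uart_send_byte(f.payload[i]);
    uart_send_byte(f.checksum);
}
```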

ORIGINALLY POSTED HERE ON TIRICH LABS

Read more…

 

max0492-01-arduino-breakout-board-1024x885.jpg

When I work on a development project, I’ve become a big fan of using development boards that have Arduino headers on them. The vast number of shields that easily connect to these headers is phenomenal. The one problem I’ve always had, though, is that there is still a need for a breadboard to test a circuit or integrate a sensor that just isn’t available in an Arduino header format. The result is a wiring mess that can lead to loose or missing connections.

I was recently talking with Max Maxfield and he pointed me to a really cool adapter board designed to eliminate those jumper wires to a breadboard. Max wrote about the board here, but I’m so excited about it that I thought I’d add my two cents as well.

The BreadShield, which can be purchased at https://www.crowdsupply.com/loser/breadshield, adapts the Arduino headers to a linear row of header pins designed to plug into a breadboard. As you can see in the image below, this eliminates the many extra jumper wires that one would normally require.

max0492-03-arduino-breakout-board-1024x675.jpg

When I heard about these, I purchased three assembled units for about $28, which saves me the time of assembling the adapters myself. The DIY kit runs about $15 for a set of three boards. Either way, it’s a great price to remove a bunch of wires from the workbench.

I’m still waiting for mine to arrive, but from the image you can see that the one challenge with these adapters might be matching the height of your breadboard to your hardware stack. While this could be an issue, I keep spacers of various lengths around the office to adjust board heights, and undoubtedly one of them will make everything line up properly.

You can view the original post here

Read more…

Industrial Prototyping for IoT

I-Pi SMARC.jpg

ADLINK is a global leader in edge computing driving data-to-decision applications across industries. The company recently introduced I-Pi SMARC for Industrial IoT prototyping.

- ADLINK I-Pi SMARC consists of a simple carrier board paired with a SMARC computer-on-module.

- SMARC modules are available from the entry-level Rockchip PX30 to the top-of-the-line Intel Apollo Lake.

- SMARC modules are specifically designed for typical industrial embedded applications that require long life, high MTBF, and strict revision control.

- Use popular off-the-shelf sensors to create prototypes or proofs of concept on short notice.

Additional information can be found here

 

Read more…

By: Kelly McNelis

We have faced unprecedented disruption from the many challenges of COVID-19, and PTC’s LiveWorx was no exception. The definitive digital transformation event went virtual this year, and despite the transition from physical to digital, LiveWorx delivered.

Of the many insightful virtual keynotes, one that caught everyone’s attention was ‘Digital Transformation: The Technology & Support You Need to Succeed,’ presented by PTC’s Executive Vice President (EVP) of Products, Kevin Wrenn, and PTC’s EVP and Chief Customer Officer, Eduarda Camacho.

Their keynote focused on how companies should be prioritizing the use of best-in-class technology that will meet their changing needs during times of disruption and accelerated digital transformation. Wrenn and Camacho highlighted five of our customers through interactive case studies on how they are using PTC technology to capitalize on digital transformation to thrive in an era of disruption.


Below is a summary of the five customers and their stories that were highlighted during the keynote.

1. Royal Enfield (Mass Customization)

Royal Enfield is an Indian motorcycle company that has been manufacturing motorcycles since 1901. The brand has British roots, and its main customer base is in India and Europe. Royal Enfield riders want their bikes to feel personal to them, so the company worked to better manage the complexities of mass customization and respond to market demands.

Royal Enfield is a long-time PTC customer, but they were running old versions of PTC technology. They first upgraded Creo and Windchill to the latest releases so they could leverage the new capabilities. They then transformed their processes for platform and variant design, introduced simulation much earlier by using Creo Simulation Live, and leveraged generative design by bringing AI into engineering and applying it to complex custom-forged engine and chassis components. Finally, they retrained and retooled their engineering staff to fully leverage the power of the new processes and technologies.

The entire Royal Enfield team now has digital capabilities that accelerate new product designs, variants, and accessories for personalization; as a result, they are able to deliver a much-shortened design cycle. Royal Enfield is continuing their digital transformation trend, and will invest in new ways to create value while leveraging augmented reality with PTC's Vuforia suite.

2. VCST (Manufacturing Efficiency, Quality, and Innovation)

VCST is part of the BMT Group and is a world-class automotive supplier of precision-machined powertrain and brake components. Their problem was the high cost of their production facility in Belgium: they either needed to improve the plant's cost efficiency or face the prospect of shutting the facility down and relocating it to another region. VCST decided to implement ThingWorx so that anyone can have instant visibility into asset status and performance. VCST is also digitizing maintenance requests and the ability to inquire about spare parts to improve overall efficiency in support of their cost-reduction goals.

Additionally, VCST has a goal of zero customer complaints; if any quality problem reaches a customer, they can be required to perform 100% inspection until the problem is solved. Moreover, as cars have become quieter with electrification, gear noise has become an issue, putting pressure on VCST to innovate and reduce it.

VCST has again relied on ThingWorx and Windchill to collect and share data for joint collaborative analysis to innovate and reduce gear noise. VCST also plans to use Vuforia Expert Capture and Vuforia Chalk to train maintenance workers to further improve their efficiency and cost effectiveness. The company is not done with their digital transformation, and they have plans to implement Creo and Windchill to enable end-to-end digital thread connectivity to the factory.

3. BID Group Holdings (Connected Product)

BID Group Holdings operates in the wood-processing industry and is one of the largest integrated suppliers and the North American leader in the field. The purpose of BID Group is to deliver a complete range of innovative equipment, digital technologies, turnkey installations, and aftermarket services to its customers. BID Group decided to focus on its areas of expertise and rely on the combined capabilities and scale of PTC, Microsoft, and Rockwell Automation to deliver SaaS-type solutions to its own industry.

Leveraging this combined power, BID Group developed a digital strategy for service to improve mill efficiency and profitability. The solution, named OPER8, was built on the ThingWorx platform, allowing BID Group to give its customers an out-of-the-box solution with fast time-to-value and low cost of ownership. BID Group is continuing to work with PTC and Rockwell Automation to extend OPER8 with a predictive analytics module, built with ThingWorx Analytics and LogixAI, that will reduce downtime.

4. Hitachi (Service Optimization)

Hitachi operates an extensive service division that ensures its customers' data systems remain up and running. Their challenge was not only to meet their customers' uptime service-level agreements, but to do so without wrecking their cost structure. Hitachi decided to implement PTC's Servigistics Service Parts Management software to ensure the right parts are available when and where they are needed for service. With Servigistics, Hitachi was able to meet its commitments while staying cost effective and delighting its customers.

Hitachi runs on the cloud, which allows them to upgrade to current releases more often, take advantage of new functionality, and avoid unexpected costs.

PTC has driven engagement and support for Hitachi through the PTC Community, and encourages all customers to utilize this platform. This network of collaborative spaces is a gathering place for PTC customers and partners to showcase their work, inspire each other, and share ideas and best practices in order to expand the value of their PTC solutions and services.

5. COVID-19 Response 

COVID-19 has put significant strain on the world's hospitals and healthcare infrastructure, and rising hospitalization rates called into question their capacity to handle cases. Many countries began considering the value field hospitals could bring in safely caring for patients and easing admissions at 'regular' hospitals. The complication is that field hospitals have essentially none of the isolation or air-filtration capability required for treating COVID patients and protecting healthcare workers.

As a result, the US Army Corps of Engineers put out specifications for self-contained isolation units (SCIUs), fully functioning hospital rooms that can be transported or built on site. The assembly needed to happen fast, and a group of companies (including PTC) led by The Innovation Machine rallied to help design and define the SCIUs.

With buy-in from numerous companies, a common platform was needed for collaboration. PTC felt compelled to react, and many PTC customers and partners joined in to help create a collaboration platform with cloud-based Windchill as the foundation. PTC didn't just provide software; it also contributed digital-thread and design advice to help the group solve some of the major challenges. The resulting design came out of many companies working together, and it is now being deployed across various US state governments, agencies, and FEMA.

Final Thoughts

All of the above customers approached digital transformation as a business imperative. They all had sizeable challenges that needed to be solved and took leadership positions to implement plans that leveraged digital transformation technologies combined with new processes.

PTC will continue to innovate across the digital transformation portfolio and is committed to ensuring that customer success offerings capture value faster and provide the best outcomes.

Original Post Link: https://www.ptc.com/en/product-lifecycle-report/liveworx-digital-transformation–technology-and-support-you-need-to-succeed

Author Bio: Kelly is a corporate communications specialist at PTC. Her responsibilities include drafting and approving content for PTC’s external and social media presence and supporting communications for the Chief Strategy Officer. Kelly has previous experience as a communications specialist working to create and implement materials for the Executive Vice President of the Products Organization and senior management team members.

 

Read more…

The tinyML Foundation is excited to be offering a new activity to our community: the tinyML Talks webcast series. A strong line-up of speakers will give 30-minute presentations twice a month on Tuesdays at 8 am Pacific time, so that tinyML enthusiasts worldwide have an opportunity to watch them live. Presentations and videos will be available online the day afterwards for those who were not able to join live.

View Schedule of Upcoming Talks

If you want to re-watch the talks given since March 31, or were unable to join us live, the slides and links to the recordings on our YouTube channel are posted on the tinyML Forums. Many questions were asked during the presentations, but not all could be answered in the allotted time; answers to some of them can be found on the tinyML Forums as well.

Read more…


 

We are living in a digital world, using apps for almost every daily task. Augmented reality has gained a lot of popularity in recent years; Pokemon Go is one of the best-known illustrations of an AR game, and there are few people who are not familiar with it.

Augmented reality is a technology that overlays computer-generated images, often animations, onto the real world, viewed through smart devices or headsets. AR transforms an ordinary camera view into an immersive, interactive, reality-based environment that enhances the user experience.

The future of augmented reality apps is bright: customer demand is increasing, and people want to try things before they buy. Many SDKs and tools are available to help developers create AR apps. A recent marketing survey shows that demand for augmented reality apps has grown considerably over the past two to three years.

 

# Vuforia

Vuforia is an advanced, modern AR-building tool that offers an attractive platform for building augmented reality apps for iOS and Android. It is very popular in the developer community because it is broad in scope and easily compatible with other tools.

Vuforia offers an extensive range of products that improve the user experience; Vuforia Engine, Studio, and Chalk are some of the most widely used. If you want to make a 3D project stand out and launch it in the market, this is one of the best ready-to-use tools.

It is one of the most popular tools for developing AR and VR apps. It costs $99 per month, which is not excessive given how much functionality it offers and how easily it integrates on any operating system. Vuforia uses computer vision to track scanned images and simple 3D objects, such as boxes, making it a strong choice for both 2D and 3D projects.

 

# ARToolKit

If you love working with open-source platforms, ARToolKit is a perfect choice for developing AR apps. According to Wikipedia, it is a very popular tool with more than 160,000 downloads a year, and it underpins many of the augmented reality apps we enjoy.

One of the most difficult tasks for a programmer is to locate the user's position in real time. ARToolKit solves this problem by calculating the position and orientation of the real camera, which lets an AR app anchor digital content such as images or 3D models onto the real world.

Nor is this limited to Android: Apple has also launched ARKit, with tutorials updated for every new version of iOS, to help developers integrate AR into their apps.

 

# Maxst

Maxst offers two different SDKs: one for image tracking and another for environment recognition. The first can only recognize 2D images, while the second is more powerful and can track 3D objects.

You can generate tracking data online via the tracking manager, and the upgraded version lets you scan 3D objects. Maxst supports multiple platforms, including Android, iOS, and Windows. Thanks to its easy integration it is widely used among developers, and the website offers approachable documentation for newcomers.

Maxst's space-mapping tool can analyze camera input, extract feature data, and save it to a map file, which is useful if you want to anchor 3D objects in a fixed position in space.

 

# Wikitude

Wikitude is one of the best tools for location-based AR experiences, presenting real-time data via the Wikitude World Browser app. Its most recent release adds support for localization and mapping.

The updated version of the Wikitude tool contains a number of AR features that allow you to create both marker-based and location-based AR applications. The tool currently provides some notable capabilities:

  • Building apps for smart glasses
  • Image recognition and tracking
  • Easy cloud recognition, which can target all the images hosted in the cloud
  • Accurate location-based services
  • Numerous external plugins, including Unity

Wikitude also offers a complete studio package for building smart AR apps. All you need to do is upload an image to the studio, add AR objects and any necessary effects, generate the JavaScript code, and paste it directly into your project.

 

# Google ARCore

ARCore was launched by Google and supports both major mobile operating systems. Its three key technologies for embedding virtual content into the real world are motion tracking, light estimation, and environmental understanding.

ARCore makes it possible to build smart AR apps. Google had been developing the underlying technologies for mobile AR over the previous three years with Tango, and ARCore was built on that foundation.

Another plus is that ARCore works without any additional hardware, which means it can work across the existing Android ecosystem. It runs on millions of devices, and major smartphone manufacturers such as Samsung, Huawei, LG, and ASUS support it for quality and high performance.

 

Winding Up!

Augmented reality and virtual reality have created a buzz in the tech world, and now every business owner wants to integrate these features into their applications to drive sales. We have already seen augmented reality apps generate real excitement in users, so developers need to learn the tools above to get better results. With this overview, developers have a wide choice of AR toolkits for building both marker-based and location-based applications.

You need to pick the right augmented reality tool based on your project requirements. Before choosing a tool, compare features such as 3D recognition, storage, and Unity support; after that, you can quickly build an outstanding AR app. Ultimately, your main focus should be on delivering the product quickly with maximum customer satisfaction.

Read more…
