The convergence of various technologies is no longer a luxury but a necessity for business success. If you run an eCommerce store, you're already aware of the importance of Search Engine Optimization (SEO) for visibility, traffic, and ultimately, conversions. But have you ever considered how the Internet of Things (IoT) can further enrich your SEO strategy? As disparate as they may seem, IoT and SEO can intersect in fascinating ways to offer significant advantages for your eCommerce business. Let’s delve into the symbiosis between IoT data and eCommerce SEO.
Why Should eCommerce Care About IoT?
IoT can do wonders for the eCommerce sector by enhancing user experience, streamlining operations, and providing unparalleled data insights. Smart homes, wearables, and voice search devices like Amazon's Alexa or Google Home are becoming standard accessories in households, which means that consumers are using IoT for their online shopping needs more than ever.
Integrating IoT Insights into UX Design
IoT data isn't just for SEO; it can also transform your site's User Experience (UX) design. By analyzing real-time user interactions captured by IoT devices, you can refine your site layout and navigation for optimal user engagement and conversion. This seamless blend of IoT insights and UX design elevates your eCommerce platform, making it more responsive to user needs and behaviours.
Unlocking User Behavior Insights
One of the most direct ways IoT can impact your SEO strategy is through enhanced data analytics. Devices like smartwatches or fitness trackers could provide valuable information on consumer habits, routines, and preferences. By integrating this IoT data into your SEO strategy, you can better understand your target audience, refine your keyword focus, and tailor your content to better suit the needs and search intent of potential customers.
Voice Search Optimization
Voice-activated devices are increasingly being used to perform searches and online shopping. As voice search is typically more conversational and question-based, you can use IoT data to understand the common phrases or questions consumers ask these devices. This can help you optimize your product descriptions, FAQs, and even blog posts to align with the natural language used in voice searches.
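One concrete way to target conversational queries is to publish FAQ content with structured data so search engines can surface it for spoken questions. Below is a minimal Python sketch that generates schema.org FAQPage markup as JSON-LD; the questions, answers, and product wording are purely illustrative.

```python
import json

# Hypothetical FAQ entries phrased the way shoppers speak to voice assistants.
faqs = [
    ("What is the best protein shake for recovery after a workout?",
     "Our whey isolate shake is formulated for post-workout recovery and mixes with water or milk."),
    ("How long does delivery take for orders placed today?",
     "Orders placed before 3 pm ship the same day and usually arrive within 2-3 business days."),
]

faq_schema = {
    "@context": "https://schema.org",
    "@type": "FAQPage",
    "mainEntity": [
        {
            "@type": "Question",
            "name": question,
            "acceptedAnswer": {"@type": "Answer", "text": answer},
        }
        for question, answer in faqs
    ],
}

# Embed the resulting JSON-LD on the page in a <script type="application/ld+json"> tag.
print(json.dumps(faq_schema, indent=2))
```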
Local SEO and IoT
The "near me" search query is incredibly popular, thanks in part to IoT devices with geolocation capabilities. People use their smartphones or smartwatches to find the nearest restaurant, gas station, or store. If you have a brick-and-mortar store in addition to your online shop, IoT data can help you target local SEO more effectively by integrating local keywords and ensuring your Google My Business listing is up-to-date.
IoT and Page Experience
With IoT, user experience can go beyond the digital interface to incorporate real-world interactions. For example, a smart fridge could remind users to order more milk, directing them to your online grocery store. If your website isn’t optimized for speed and experience, you could lose these high-intent users. Incorporating IoT insights into your SEO strategy can help you anticipate these needs and optimize your site accordingly.
Real-Time Personalization
IoT devices can collect data in real-time, offering insights into user behaviour that can be immediately acted upon. Imagine someone just completed a workout on their smart treadmill. They might then search for protein shakes or workout gear. With real-time data, you could offer timely discounts or suggestions, personalized to the user's immediate needs, all while improving your SEO through higher user engagement and lower bounce rates.
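As a toy illustration of acting on such signals, the sketch below maps incoming IoT events to timely product suggestions; the event names, products, and discount logic are hypothetical placeholders rather than any particular platform's API.

```python
from datetime import datetime, timezone

# Hypothetical mapping from IoT event types to timely product suggestions.
SUGGESTIONS = {
    "treadmill.workout_completed": ["protein shake", "electrolyte mix", "running socks"],
    "smart_fridge.low_milk": ["organic whole milk", "oat milk"],
}

def personalize(event_type: str, user_id: str) -> dict:
    """Return a personalized, time-stamped offer for an incoming IoT event."""
    products = SUGGESTIONS.get(event_type, [])
    return {
        "user_id": user_id,
        "generated_at": datetime.now(timezone.utc).isoformat(),
        "recommended_products": products,
        "discount_pct": 10 if products else 0,  # a small nudge at a high-intent moment
    }

print(personalize("treadmill.workout_completed", "user-42"))
```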
Wrapping It Up
IoT and SEO may seem like different arenas, but they are more interconnected than you'd think. By adopting a holistic approach that marries the insights from IoT devices with your SEO strategy, you can significantly improve your eCommerce site's performance. From optimizing for voice search and improving local SEO to real-time personalization and superior user experience, the opportunities are endless.
Cloud-based motor monitoring as a service is revolutionizing the way industries manage and maintain their critical assets. By leveraging the power of the cloud, organizations can remotely monitor motors, analyze performance data, and predict potential failures. However, as this technology continues to evolve, several challenges emerge that need to be addressed for successful implementation and operation. In this blog post, we will explore the top challenges faced in cloud-based motor monitoring as a service in 2023.
Data Security and Privacy:
One of the primary concerns in cloud-based motor monitoring is ensuring the security and privacy of sensitive data. As motor data is transmitted and stored in the cloud, there is a need for robust encryption, authentication, and access control mechanisms. In 2023, organizations will face the challenge of implementing comprehensive data security measures to protect against unauthorized access, data breaches, and potential cyber threats. Compliance with data privacy regulations, such as GDPR or CCPA, adds an additional layer of complexity to this challenge.
Connectivity and Network Reliability:
For effective motor monitoring, a reliable and secure network connection is crucial. In remote or industrial environments, ensuring continuous connectivity can be challenging. Factors such as signal strength, network coverage, and bandwidth limitations need to be addressed to enable real-time data transmission and analysis. Organizations in 2023 will need to deploy robust networking infrastructure, explore alternative connectivity options like satellite or cellular networks, and implement redundancy measures to mitigate the risk of network disruptions.
Scalability and Data Management:
Cloud-based motor monitoring generates vast amounts of data that need to be efficiently processed, stored, and analyzed. In 2023, as the number of monitored motors increases, organizations will face challenges in scaling their data management infrastructure. They will need to ensure that their cloud-based systems can handle the growing volume of data, implement efficient data storage and retrieval mechanisms, and utilize advanced analytics and machine learning techniques to extract meaningful insights from the data.
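One common tactic for keeping data volumes manageable is to aggregate high-frequency motor readings at the edge or on ingestion before long-term storage. The Python sketch below, using made-up vibration samples, collapses raw readings into per-minute min/mean/max summaries.

```python
from collections import defaultdict
from statistics import mean
from datetime import datetime

# Hypothetical high-frequency vibration samples: (ISO timestamp, mm/s)
samples = [
    ("2023-05-01T10:00:01", 2.1), ("2023-05-01T10:00:31", 2.4),
    ("2023-05-01T10:01:02", 2.2), ("2023-05-01T10:01:45", 7.9),
]

def downsample_per_minute(readings):
    """Collapse raw samples into per-minute min/mean/max summaries for cheaper storage."""
    buckets = defaultdict(list)
    for ts, value in readings:
        minute = datetime.fromisoformat(ts).replace(second=0, microsecond=0)
        buckets[minute].append(value)
    return {
        minute.isoformat(): {"min": min(v), "mean": round(mean(v), 2), "max": max(v)}
        for minute, v in sorted(buckets.items())
    }

print(downsample_per_minute(samples))
```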
Integration with Existing Systems:
Integrating cloud-based motor monitoring systems with existing infrastructure and software can pose significant challenges. In 2023, organizations will need to ensure seamless integration with their existing enterprise resource planning (ERP), maintenance management, and asset management systems. This includes establishing data pipelines, defining standardized protocols, and implementing interoperability between different systems. Compatibility with various motor types, brands, and communication protocols also adds complexity to the integration process.
Cost and Return on Investment:
While cloud-based motor monitoring offers numerous benefits, organizations must carefully evaluate the cost implications and expected return on investment (ROI). Implementing and maintaining the necessary hardware, software, and cloud infrastructure can incur significant expenses. Organizations in 2023 will face the challenge of assessing the financial viability of cloud-based motor monitoring, considering factors such as deployment costs, ongoing operational expenses, and the potential savings achieved through improved motor performance, reduced downtime, and optimized maintenance schedules.
Connectivity and Reliability:
Cloud-based motor monitoring relies heavily on stable and reliable internet connectivity. However, in certain remote locations or industrial settings, maintaining a consistent connection can be challenging. The availability of high-speed internet, network outages, or intermittent connections may impact real-time monitoring and timely data transmission. Service providers will need to address connectivity issues to ensure uninterrupted monitoring and minimize potential disruptions.
Scalability and Performance:
As the number of monitored motors increases, scalability and performance become critical challenges. Service providers must design their cloud infrastructure to handle the growing volume of data generated by motor sensors. Ensuring real-time data processing, analytics, and insights at scale will be vital to meet the demands of large-scale motor monitoring deployments. Continuous optimization and proactive capacity planning will be necessary to maintain optimal performance levels.
Integration with Legacy Systems:
Integrating cloud-based motor monitoring with existing legacy systems can be a complex undertaking. Many organizations have legacy equipment or infrastructure that may not be inherently compatible with cloud-based solutions. The challenge lies in seamlessly integrating these disparate systems to enable data exchange and unified monitoring. Service providers need to offer flexible integration options, standardized protocols, and compatibility with a wide range of motor types and manufacturers.
Data Analytics and Actionable Insights:
Collecting data from motor sensors is only the first step. The real value lies in extracting actionable insights from this data to enable predictive maintenance, identify performance trends, and optimize motor operations. Service providers must develop advanced analytics capabilities that can process large volumes of motor data and provide meaningful insights in a user-friendly format. The challenge is to offer intuitive dashboards, anomaly detection, and predictive analytics that empower users to make data-driven decisions effectively.
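As a rough illustration of what such analytics can look like at their simplest, the Python sketch below flags readings that deviate sharply from a rolling baseline; the window, threshold, and temperature values are arbitrary, and production systems would typically use far more sophisticated models.

```python
from statistics import mean, stdev

def detect_anomalies(values, window=20, threshold=3.0):
    """Flag readings that deviate more than `threshold` standard deviations
    from the mean of the preceding `window` readings."""
    anomalies = []
    for i in range(window, len(values)):
        history = values[i - window:i]
        mu, sigma = mean(history), stdev(history)
        if sigma > 0 and abs(values[i] - mu) > threshold * sigma:
            anomalies.append((i, values[i]))
    return anomalies

# Hypothetical bearing-temperature stream with one abnormal spike.
temps = [70 + (i % 3) * 0.4 for i in range(60)]
temps[45] = 92.0
print(detect_anomalies(temps))   # -> [(45, 92.0)]
```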
Conclusion:
Cloud-based motor monitoring as a service offers tremendous potential for organizations seeking to optimize motor performance and maintenance. However, in 2023, several challenges need to be addressed to ensure its successful implementation. From data security and connectivity issues to scalability, integration, and advanced analytics, service providers must actively tackle these challenges to unlock the full benefits of cloud-based motor monitoring. By doing so, organizations can enhance operational efficiency, extend motor lifespan, and reduce costly downtime in the ever-evolving landscape of motor-driven industries.
Connected devices in the medical field bring a multitude of benefits, including improved patient care, enhanced diagnostics, and streamlined healthcare processes. However, the complexity associated with these devices is a significant consideration. Here, we explore the intricacies involved in the realm of connected medical devices.
First and foremost, interoperability is a critical challenge. Medical environments comprise various devices from different manufacturers, each with its own communication protocols and data formats. Ensuring seamless connectivity and data exchange between these devices necessitates standardized interfaces and robust interoperability frameworks.
Data security and privacy are paramount in the medical domain. Connected devices generate and transmit sensitive patient data, including personal health information and vital signs. Safeguarding this information from unauthorized access, data breaches, and cyber threats requires robust encryption, authentication mechanisms, and strict adherence to regulatory standards like the Health Insurance Portability and Accountability Act (HIPAA).
The complexity also arises from the diverse range of connected devices used in healthcare. From wearable sensors to implantable devices, infusion pumps to remote monitoring systems, each device has specific requirements, connectivity options, and integration challenges. Managing this ecosystem of devices, ensuring seamless communication, and maintaining their functionality demand specialized expertise and effective device management solutions.
Furthermore, regulatory compliance adds another layer of complexity. Connected medical devices must meet rigorous standards to ensure safety, accuracy, and reliability. Regulatory bodies, such as the U.S. Food and Drug Administration (FDA), closely scrutinize these devices for adherence to quality standards, clinical validation, and risk mitigation measures.
Additionally, healthcare organizations need to navigate the complexity of data analytics and actionable insights. Connected devices generate vast amounts of data that must be processed, analyzed, and transformed into meaningful information for healthcare professionals. Extracting valuable insights from this data necessitates advanced analytics algorithms, machine learning techniques, and data visualization tools.
Overcoming the challenges requires collaboration among manufacturers, healthcare providers, and regulatory bodies to develop robust standards, innovative solutions, and best practices that ensure safe, secure, and effective utilization of connected devices to revolutionize patient care.
2019 was the year that IoT solutions started to become a reality. The internet of things has been predicted for years, but the implementation of IoT solutions has been slower. The impact that IoT will have on business and society cannot be overstated, and many companies are aiming to gain a competitive advantage by implementing IoT solutions.
Every year, more entrepreneurs tend to jump on the IoT bandwagon to leverage the benefits of this rapidly evolving technology. Tapping into the IoT, businesses can achieve a plethora of benefits, including increased revenues, better customer services, and enhanced operations. All over the world, companies are turning to the Internet of Things solutions, especially in high-tech industries, such as automotive and aerospace, and in sectors such as manufacturing and retail.
Artificial intelligence and machine learning (AI/ML) deployed on sensors, devices, and networks through the Internet of Things (IoT) are helping enterprises transform the way they do business. Although adoption rates have increased in recent years, overall IoT adoption is still relatively low. However, the number of connected devices is projected to grow from 16.2 billion in 2017 to 75.44 billion by 2025, according to Statista.
Digital transformation is not only about technology – it is about transforming the way a business works and interacts with its customers. By leveraging the power of digital technologies, organizations can drive innovation and create real value for their customers and business. Digital transformation is a process of continuous evolution, with organizations constantly on the lookout for new ways to optimize their operations and improve customer experience. Companies must now find ways to use technology to their advantage in order to remain successful in an increasingly complex and rapidly changing business landscape.
What does IoT mean for digital transformation?
A digital transformation enterprise is not just about the technology or the platform. It is about a new approach to business. IoT is revolutionizing the way enterprises think about their business and the way in which customers and partners interact with them, providing new opportunities for revenue growth and customer engagement.
The Internet of Things (IoT) presents a wealth of opportunities for businesses to transform their processes and operations, leading to more efficient and effective services. By connecting physical devices and systems to each other and the cloud, businesses can gain greater insight into their operations, access real-time data, and automate processes. This can help businesses save time, money and resources, as well as improve customer experience and reduce operational costs. IoT can also enable new business models, allowing businesses to develop new products and services and create new revenue streams. In short, IoT is an essential part of any digital transformation strategy.
Automation: IoT devices can be used to automate tedious and labor-intensive tasks. Automation can streamline processes, increase efficiency and reduce errors, ultimately speeding up the innovation process. It can also be used to streamline data collection, allowing businesses to collect and analyze data in real time to gain valuable insights and make data-driven decisions. This not only accelerates the development process but also reduces the cost associated with data collection and analysis.
Connectivity: IoT devices can be used to connect disparate systems and enable data and information sharing. This can be used to facilitate collaboration and data sharing, which can speed up the process of digital transformation. By leveraging the power of connectivity, businesses can develop a range of products and services that can bring about a whole new level of efficiency, cost savings and customer satisfaction. This can be achieved by integrating data sources and creating better ways to monitor and manage the connected devices.
Monitoring and analytics: IoT devices can be used to monitor and analyze data in real time, providing valuable insights that inform decisions and improve the decision-making process. This can help businesses identify trends, spot potential issues before they arise, and reduce downtime. It also provides a way to better understand customer behavior and gain valuable insights into customer preferences. With this data, businesses can tailor their products and services to better meet customer needs, resulting in improved customer loyalty and profitability.
Security: IoT solutions can provide additional layers of security to protect data and systems from potential threats. This can help to reduce the risk of data breaches and other malicious activities. IoT solutions can also help to automate security-related processes and procedures. This can help to reduce the time and effort required to maintain a secure environment, allowing organizations to focus on other areas of their operations. Additionally, by having automated security, organizations can be sure that their security measures are consistently up-to-date and effective in protecting their data and systems.
Scalability: IoT solutions can be used to easily scale up or down resources to meet the changing needs of the organization.
The construction industry is among the many under pressure to optimise and grow sustainably, driven by the development of smart urban cities. Construction is expected to account for about USD 12.9 trillion of global output by 2022 and is predicted to grow globally by 3.1% by 2030. Demand and spending are rising rapidly in an attempt to meet the housing needs of the fast-growing global population.
However, the industry faces unique challenges on a global scale. Despite continual stable growth, underperformance, constant project rework, labour shortages and a lack of adopted digital solutions cause production delays and a worrying 1% growth in productivity.
The construction industry is one of the slowest growing sectors for IoT adoption and digitalization. To put this in numbers:
- Only 18% of the construction companies use mobile apps for project data and collaboration.
- Nearly 50% of the companies in the field spend 1% or less on technology
- 95% of all data captured in construction goes unused
- 28% of the UK construction firms point out lack of on-site information as the biggest challenge for productivity
And yet
- 70% of the contractors trust that the advance of technology and software solutions, in particular, can improve their work.
To sum up the data, the construction industry is open to digitalization and IoT, but the advancement is slow and difficult, due to the specific needs of the field. When we talk about technology implementation on the construction site, IoT can help with project data collection, environment condition monitoring, equipment tracking and remote management, as well as safety monitoring with wearables.
However, the implementation of IoT for construction sites calls for careful planning and calculation of costs, as well as trust in the technology. Here is where private LTE networks can come into play and help construction companies take initial steps into advancing their digitalization.
What is Private LTE?
Long-Term Evolution (LTE) is a broadband technology that allows companies to vertically scale solutions for easier management and improved latency, range, speed and costs. LTE is a connectivity standard suited to deployments with many devices across multiple frequency bands and to technologies rolled out globally. For construction sites, this means LTE can connect every device on-site, including heavy machinery, mobile devices, trackers, sensors and anything else that requires a stable, uninterrupted connection.
Public LTE requires companies to connect through a mobile network operator (MNO) and, much like Wi-Fi, to depend on local infrastructure to run the network. Private LTE, on the other hand, allows companies to create and operate an independent wireless network that covers all their business facilities. Private LTE is often used to reduce congestion, add a layer of security and reduce cost for locations with no existing infrastructure, a category that construction sites typically fall into.
Why Private LTE for construction?
When it comes to the particular needs of construction companies, private LTE offers the following benefits over public LTE and Wi-Fi.
- Network ownership and autonomy
Private LTE can be seen as creating a connectivity island on the construction site, where all company devices and machinery can be monitored and controlled by the company network team. Owning the network increases flexibility because businesses do not need to rely on local providers for making changes, creating additional secure networks or moving devices from one network to another.
Construction sites do not often come with suitable networks in place, so putting up your own whenever needed is just as important as being able to take it down quickly. With private LTE, construction companies can do both as they see fit.
- Cost
Private LTE can optimise the cost of running a construction site, not just by providing stable connectivity for IoT implementation, but also from a pure network-operations standpoint. For example, Wi-Fi is often not sufficient for serving large construction sites and may require a number of repeaters to cover the area, which increases the running cost. Private LTE can run on a single tower and can be combined with CBRS (Citizens Broadband Radio Service) for further cost reduction. This makes it ideal for locations that would otherwise incur high infrastructure installation costs.
- Control and security
With private LTE, companies can control network access, preventing unwanted users or outside network interference. This is critical for securing the project data and device access. Private LTE allows for setting up specific levels of security access for different on-site members.
Network ownership also allows teams to use real-time data to make timely decisions on consumption, device control and management, as well as react in case of emergency. This can further help increase the on-site safety of the team.
- Performance
Compared to public LTE or Wi-Fi, private LTE networks are simply better performing when it comes to hundreds of devices. Because the network is private, it allows the usage of connectivity management platforms for control of individual SIM cards (connected devices), traffic optimisation and control. Public LTE and Wi-Fi networks are often not equipped to handle multiple devices on the site, let alone underground projects, where there are barriers to the network. Uninterrupted performance is also key for real-time data and employee safety at high-risk construction sites.
Private LTE is a technology widely applicable to manufacturing, mining, cargo and freight, as well as to utilities, hospitals and smart cities in general. Because of its capacity, it is considered a stepping stone for 5G implementation and widely regarded as the gateway to future-proofing network access.
While the implementation of NB-IoT is progressing, we at JT IoT have already developed solutions suitable for future IoT connectivity. To learn more about private LTE, watch our deep dive into the topic with Pod Group.
Originally posted here.
In my last post, I explored how OTA updates are typically performed using Amazon Web Services and FreeRTOS. OTA updates are critically important to developers with connected devices. In today’s post, we are going to explore several best practices developers should keep in mind with implementing their OTA solution. Most of these will be generic although I will point out a few AWS specific best practices.
Best Practice #1 – Name your S3 bucket with afr-ota
There is a little trick to creating S3 buckets that I was completely oblivious to for a long time. Thankfully, when I checked in with some colleagues about it, they also had not been aware of it, so I'm not sure how long this has been supported, but it can save an embedded developer from having to wade through too many AWS policies and simplify the process a little bit.
Anyone who has attempted to create an OTA update with AWS and FreeRTOS knows that you have to set up several permissions to allow an OTA Update Job to access the S3 bucket. Well, if you name your S3 bucket so that it begins with “afr-ota”, then the S3 bucket will automatically have the AWS managed policy AmazonFreeRTOSOTAUpdate attached to it. (See Create an OTA Update service role for more details.) It’s a small help, but a good best practice worth knowing.
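For illustration, here is a minimal boto3 sketch that creates such a bucket; the bucket name and region are placeholders, and the versioning step is included because FreeRTOS OTA update jobs generally expect a versioned firmware bucket.

```python
# pip install boto3 -- assumes AWS credentials are already configured locally.
import boto3

bucket_name = "afr-ota-my-device-firmware"   # the afr-ota prefix is what matters here
region = "us-west-2"

s3 = boto3.client("s3", region_name=region)
s3.create_bucket(
    Bucket=bucket_name,
    CreateBucketConfiguration={"LocationConstraint": region},
)
# Enable versioning, which FreeRTOS OTA jobs generally expect on the firmware bucket.
s3.put_bucket_versioning(
    Bucket=bucket_name,
    VersioningConfiguration={"Status": "Enabled"},
)
```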
Best Practice #2 – Encrypt your firmware updates
Embedded software must be one of the most expensive things to develop that mankind has ever invented! It’s time-consuming to create and test and can consume a large percentage of the development budget. Software also drives most features in a product and can dramatically differentiate a product. That software is intellectual property that is worth protecting through encryption.
Encrypting a firmware image provides several benefits. First, it converts your firmware binary into a form that appears random or meaningless. This is desirable because a developer shouldn’t want their binary image to be easily studied, investigated or reverse engineered; it makes it harder for someone to steal intellectual property and more difficult for a would-be attacker to understand the system. Second, encrypting the image means the sender must have a key or credential that matches the device that will decrypt the image. This can be viewed as a simple way to help authenticate the source, although more than encryption should be done to fully authenticate the image and verify its integrity, such as signing it.
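As a rough sketch of what image encryption can look like, the Python example below uses AES-256-GCM from the cryptography library on an in-memory stand-in for a firmware binary; the key handling, version tag, and output format are simplified assumptions, not a production scheme.

```python
# pip install cryptography
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def encrypt_firmware(image: bytes, key: bytes, version_tag: bytes) -> bytes:
    """Encrypt a firmware image with AES-256-GCM; output = nonce || ciphertext+tag."""
    nonce = os.urandom(12)                     # must never repeat for the same key
    ciphertext = AESGCM(key).encrypt(nonce, image, version_tag)
    return nonce + ciphertext

# The key would normally live in an HSM or secure key store, never in source code.
key = AESGCM.generate_key(bit_length=256)
firmware = bytes(range(256)) * 64              # in-memory stand-in for a real binary
blob = encrypt_firmware(firmware, key, b"fw-1.8")

# The device decrypts with the matching key; decryption fails loudly if the image
# (or the version tag bound as associated data) was tampered with in transit.
nonce, ciphertext = blob[:12], blob[12:]
assert AESGCM(key).decrypt(nonce, ciphertext, b"fw-1.8") == firmware
```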
Best Practice #3 – Do not support firmware rollbacks
There is often a debate as to whether firmware rollbacks should be supported in a system or not. My recommendation for a best practice is that firmware rollbacks be disabled. The argument for rollbacks is often that if something goes wrong with a firmware update then the user can rollback to an older version that was working. This seems like a good idea at first, but it can be a vulnerability source in a system. For example, let’s say that version 1.7 had a bug in the system that allowed remote attackers to access the system. A new firmware version, 1.8, fixes this flaw. A customer updates their firmware to version 1.8, but an attacker knows that if they can force the system back to 1.7, they can own the system. Firmware rollbacks seem like a convenient and good idea, in fact I’m sure in the past I used to recommend them as a best practice. However, in today’s connected world where we perform OTA updates, firmware rollbacks are a vulnerability so disable them to protect your users.
Best Practice #4 – Secure your bootloader
Updating firmware Over-the-Air requires several components to ensure that it is done securely and successfully. Often the focus is on getting the new image to the device and getting it decrypted. However, just like in traditional firmware updates, the bootloader is still a critical piece to the update process and in OTA updates, the bootloader can’t just be your traditional flavor but must be secure.
There are quite a few methods that can be used with the onboard bootloader, but no matter the method used, the bootloader must be secure. Secure bootloaders need to be capable of verifying the authenticity and integrity of the firmware before it is ever loaded. Some systems will use the application code to verify and install the firmware into a new application slot while others fully rely on the bootloader. In either case, the secure bootloader needs to be able to verify the authenticity and integrity of the firmware prior to accepting the new firmware image.
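To make the idea concrete, here is a host-side Python model of the verification a secure bootloader performs; a real bootloader would implement the same hash-plus-signature check in C on the device, with the public key anchored in protected storage.

```python
# pip install cryptography -- host-side model of the checks a secure bootloader performs.
import hashlib
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Build side: sign the hash of the image with the vendor's private key.
vendor_key = Ed25519PrivateKey.generate()
firmware = bytes(range(256)) * 64                 # stand-in for the new image
signature = vendor_key.sign(hashlib.sha256(firmware).digest())

# Device side: the bootloader holds only the public key (its root of trust).
public_key = vendor_key.public_key()

def bootloader_accepts(image: bytes, sig: bytes) -> bool:
    """Verify integrity (hash) and authenticity (signature) before booting the image."""
    try:
        public_key.verify(sig, hashlib.sha256(image).digest())
        return True
    except InvalidSignature:
        return False

print(bootloader_accepts(firmware, signature))            # True
print(bootloader_accepts(firmware + b"\x00", signature))  # False -> image rejected
```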
It’s also a good idea to ensure that the bootloader is built into a chain of trust and cannot be easily modified or updated. The secure bootloader is a critical component in a chain-of-trust that is necessary to keep a system secure.
Best Practice #5 – Build a Chain-of-Trust
A chain-of-trust is a sequence of events that occurs while booting the device and ensures each link in the chain is trusted software. For example, I’ve been working with the Cypress PSoC 64 secure MCUs recently, and these parts come shipped from the factory with a hardware-based root-of-trust to authenticate that the MCU came from a secure source. That Root-of-Trust (RoT) is then transferred to a developer, who programs a secure bootloader and security policies onto the device. During the boot sequence, the RoT verifies the integrity and authenticity of the bootloader, which then verifies the integrity and authenticity of any second-stage bootloader or software, which in turn verifies the authenticity and integrity of the application. The application then verifies the authenticity and integrity of its data, keys, operational parameters and so on.
This sequence creates a Chain-of-Trust which is needed and used by firmware OTA updates. When a new firmware request is made, the application must decrypt the image and verify that the authenticity and integrity of the new firmware are intact. That new firmware can then only be used if the Chain-of-Trust can successfully make its way through each link in the chain. The bottom line is that a developer and the end user know, when the system boots successfully, that the new firmware is legitimate.
Conclusions
OTA updates are a critical infrastructure component of nearly every embedded IoT device. Sure, there are systems out there that, once deployed, will never be updated; however, those are probably a small percentage of systems. OTA updates are the go-to mechanism for updating firmware in the field. We’ve examined several best practices that developers and companies should consider when they start to design their connected systems. In fact, the bonus best practice for today is that if you are building a connected device, make sure you explore your OTA update solution sooner rather than later. Otherwise, you may find that building the Chain-of-Trust necessary in today’s deployments is far more expensive and time-consuming to implement.
Originally posted here.
Wi-Fi, NB-IoT, Bluetooth, LoRaWAN… This webinar will help you to choose the appropriate connectivity protocol for your IoT application.
Connectivity is cool! The cornucopia of connectivity choices available to us today would make engineers gasp in awe and disbelief just a few short decades ago.
I was just pondering this point and – as usual – random thoughts started to bounce around my poor old noggin. Take the topic of interoperability, for example (for the purposes of these discussions, we will take “interoperability” to mean “the ability of computer systems or software to exchange and make use of information”).
Don’t get me started on the subject of the Endian Wars. Instead, let’s consider the 7-bit American Standard Code for Information Interchange (ASCII) that we know and love. The currently used ASCII standard of 96 printing characters and 32 control characters was first defined in 1968. For machines that supported ASCII, this greatly facilitated their ability to exchange information.
For reasons of their own, the folks at IBM decided to go their own way by developing a proprietary 8-bit code called the Extended Binary Coded Decimal Interchange Code (EBCDIC). This code was first used on the IBM 360 computer, which was presented to the market in 1964. Just for giggles and grins, IBM eventually introduced 57 different variants of EBCDIC targeted at different countries (a “standard” that came in 57 different flavors!). This obviously didn’t help IBM machines in different countries make use of each other’s files. Even worse, different types of IBM computers found it difficult to talk to each other, let alone with machines from other manufacturers.
There’s an old joke that goes, “Standards are great – everyone should have one.” The problem is that almost everybody did. Sometime around late 1980 or early 1981, for example, I was working at International Computers (ICL) in Manchester, England. I recall being invited to what I was told was going to be a milestone event. This turned out to be a demonstration in which a mainframe computer was connected to a much smaller computer (akin to one of the first PCs) via a proprietary wired network. With great flourish and fanfare, the presenter created and saved a simple ASCII text file on the mainframe, then – to the amazement of all present – opened and edited the same file on the small computer.
This may sound like no big deal to the young folks of today, but it was an event of such significance at that time that journalists from the national papers came up on the train from London to witness this august occasion with their own eyes so that they could report back to the unwashed masses.
Now, of course, we have a wide variety of wired standards, from simple (short range) protocols like I2C and SPI, to sophisticated (longer range) offerings like Ethernet. And, of course, we have a cornucopia of wireless standards like Wi-Fi, NB-IoT, Bluetooth, and LoRaWAN. In some respects, this is almost an embarrassment of riches … there are so many options … how can we be expected to choose the most appropriate connectivity protocol for our IoT applications?
Well, I’m glad you asked, because I will be hosting a one-hour webinar on this very topic on Tuesday 28 September 2021, starting at 8:00 a.m. Pacific Time (11:00 a.m. Eastern Time).
Presented by IoT Central and sponsored by ARM, yours truly will be joined in this webinar by Samuele Falconer (Principal Product Manager at u-blox), Omer Cheema (Head of the Wi-Fi Business Unit at Renesas Semiconductor), Wienke Giezeman (Co-Founder and CEO at The Things Industries), and Thomas Cuyckens (System Architect at Qorvo).
If you are at all interested in connectivity for your cunning IoT creations, then may I make so bold as to suggest you Register Now before all of the good virtual seats are taken. I’m so enthused by this event that I’m prepared to pledge on my honor that – if you fail to learn something new – I will be very surprised (I was going to say that I would return the price of your admission but, since this event is free, that would have been a tad pointless).
So, what say you? Can I dare to hope to see you there? Register Now
Posted by Terri Hiskey
Without mindful and strategic investments, a company’s supply chain could become wedged in its own proverbial Suez Canal, ground to a halt by outside forces and its inflexible, complex systems.
It’s a dramatic image, but one that became reality for many companies in the last year. Supply chain failures aren’t typically such high-profile events as the Suez Canal blockage, but rather death by a thousand inefficiencies, each slowing business operations and affecting the customer experience.
Delay by delay and spreadsheet by spreadsheet, companies are at risk of falling behind more nimble, cloud-enabled competitors. And as we emerge from the pandemic with a new understanding of how important adaptable, integrated supply chains are, company leaders have critical choices to make.
The Hannover Messe conference (held online from April 12-16) gives manufacturing and supply chain executives around the world a chance to hear perspectives from industry leaders and explore the latest manufacturing and supply chain technologies available.
Technology holds great promise. But if executives don’t ask key strategic questions to supply chain software vendors, they could unknowingly introduce a range of operational and strategic obstacles into their company’s future.
If you’re attending Hannover Messe, here are a few critical questions to ask:
Are advanced technologies like machine learning, IoT, and blockchain integrated into your supply chain applications and business processes, or are they addressed separately?
It’s important to go beyond the marketing. Is the vendor actually promoting pilots of advanced technologies that are simply customized use cases for small parts of an overall business process hosted on a separate platform? If so, it may be up to your company to figure out how to integrate it with the rest of that vendor’s applications and to maintain those integrations.
To avoid this situation, seek solutions that have been purpose-built to leverage advanced technologies across use cases that address the problems you hope to solve. It’s also critical that these solutions come with built-in connections to ensure easy integration across your enterprise and to third party applications.
Are your applications or solutions written specifically for the cloud?
If a vendor’s solution for a key process (like integrated business planning or plan to produce, for example) includes applications developed over time by a range of internal development teams, partners, and acquired companies, what you’re likely to end up with is a range of disjointed applications and processes with varying user interfaces and no common data model. Look for a cloud solution that helps connect and streamline your business processes seamlessly.
Update schedules for the various applications could also be disjointed and complicated, so customers can be tempted to skip updates. But some upgrades may be forced, causing disruption in key areas of your business at various times.
And if some of the applications in the solution were written for the on-premises world, business processes will likely need customization, making them hard-wired and inflexible. The convenience of cloud solutions is that they can take frequent updates more easily, resulting in greater value driven by the latest innovations.
Are your supply chain applications fully integrated—and can they be integrated with other key applications like ERP or CX?
A lack of integration between and among applications within the supply chain and beyond means that end users don’t have visibility into the company’s operations—and that directly affects the quality and speed of business decisions. When market disruptions or new opportunities occur, unintegrated systems make it harder to shift operations—or even come to an agreement on what shift should happen.
And because many key business processes span multiple areas—like manufacturing forecast to plan, order to cash, and procure to pay—integration also increases efficiency. If applications are not integrated across these entire processes, business users resort to pulling data from the various systems and then often spend time debating whose data is right.
Of course, all of these issues increase operational costs and make it harder for a company to adapt to change. They also keep the IT department busy with maintenance tasks rather than focusing on more strategic projects.
Do you rely heavily on partners to deliver functionality in your supply chain solutions?
Ask for clarity on which products within the solution belong to the vendor and which were developed by partners. Is there a single SLA for the entire solution? Will the two organizations’ development teams work together on a roadmap that aligns the technologies? Will their priority be on making a better solution together or on enhancements to their own technology? Will they focus on enabling data to flow easily across the supply chain solution, as well as to other systems like ERP? Will they be able to overcome technical issues that arise and streamline customer support?
It’s critical for supply chain decision-makers to gain insight into these crucial questions. If the vendor is unable to meet these foundational needs, the customer will face constant obstacles in their supply chain operations.
Originally posted here.
Waste management is a global concern. According to The World Bank report, about 2.01 billion tonnes of solid waste is generated globally every year. 33% of that waste is not managed in an environmentally safe manner. Waste management in densely populated urban areas is a major problem. The lack of it leads to environmental contamination. It ends up spreading diseases in epidemic proportions. It is a challenge for both developed and developing countries.
By 2050, global waste generation is estimated to grow to 3.40 billion tonnes. But here is the catch: IoT waste management systems can help. Municipalities across the globe can employ IoT to manage waste better. IoT technologies are already being employed in modern supply chains, where they have become invaluable by optimizing and automating most of the industry's processes. IoT adoption, however, is far more significant on the supply chain side than in waste management. While many IoT-based waste management systems are already in place, a number of challenges hold them back.
A smart city collects data from personal vehicles, buildings, public transport, components of urban infrastructure such as power grids and waste management systems, and citizens. The insights derived from this real-time data help municipalities manage these systems. IoT waste management is a new frontier for local authorities aiming to reduce municipal waste. As per a recent survey by IoT Analytics, over 70% of cities have deployed IoT systems for security, traffic, and water-level monitoring. IoT is yet to be fully deployed for smart waste management systems.
With the rapid increase in population, sanitation-related issues concerning garbage management are worsening, creating unhygienic conditions for citizens in the surrounding areas and leading to the spread of diseases. IoT in waste management is a trending solution: by using IoT, waste management companies can increase operational efficiency and reduce costs.
The waste collection process in urban areas is complex and requires a significant amount of resources. More than $300 million is spent annually on collecting and managing waste. Most high-income cities charge their citizens to cover a fraction of this expense. The rest is compensated from tax revenue, which financially burdens the local government.
Municipalities and waste management companies have improved route efficiencies, but they haven't fully leveraged technological innovations to improve operational efficiency. Even with route optimization, manual processes waste money and time. The use of smart devices, machine-to-machine connectivity, sensors, and IoT can reduce these costs. A smart waste management system using IoT can cut expenses in the trash collection process. But how? How does the use of IoT in waste management improve waste collection efficiencies?
How Does IoT in Waste management Respond to Operational Inefficiencies?
A smart waste management system using IoT improves the efficiency of waste collection and recycling. Route optimization, which reduces fuel consumption, is the most common use case for IoT waste management solutions.
IoT-powered smart waste management solutions comprise endpoints (sensors), gateways, IoT platforms, and web and mobile applications. Sensors attached to dumpsters measure their fill level, gateways bridge the gap between the sensors and the IoT platform by sending data to the cloud, and the IoT platform then transforms the raw data into information.
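As a minimal device-side sketch of that pipeline, the example below publishes a fill-level reading over MQTT using the paho-mqtt helper; the broker address, topic, and payload fields are placeholders rather than any specific vendor's schema.

```python
# pip install paho-mqtt  -- minimal device-side sketch; broker and topic are placeholders.
import json
import time
import paho.mqtt.publish as publish

def read_fill_level_pct() -> float:
    """Stand-in for an ultrasonic fill-level reading (0 = empty, 100 = full)."""
    return 82.5

payload = json.dumps({
    "bin_id": "BIN-0031",
    "fill_pct": read_fill_level_pct(),
    "battery_v": 3.6,
    "ts": int(time.time()),
})

# The gateway/broker forwards this message to the IoT platform in the cloud.
publish.single(
    topic="city/waste/bins/BIN-0031/telemetry",
    payload=payload,
    hostname="broker.example.com",   # placeholder broker address
    port=1883,
)
```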
Benefits of IoT Waste Management Solutions
There are several advantages of using IoT-powered waste management solutions.
- Reduced Waste Collection Costs: Dumpsters that employ IoT can transmit real-time information on their fill level. The data is shared with the waste collectors, and optimum routes are selected so that collection trucks prioritize the dumpsters with high fill levels. This saves fuel, money, and effort (a simple route-selection sketch follows this list).
- No Missed Pickups: A smart IoT waste management system eliminates overflowing trash bins. The authorities are immediately notified when bins are about to reach capacity, and collection trucks are scheduled for pickup.
- Waste Generation Analysis: IoT waste management isn't about route optimization alone. The actual value of an IoT-powered process lies in data analysis. Most IoT solutions are coupled with data analytics capabilities that help waste management companies anticipate future waste generation.
- Reduction in Carbon Dioxide Emissions: Optimized routes mean less fuel consumption, which reduces the carbon footprint and makes the waste management process eco-friendlier.
- Efficient Recycling: Over the years, the appearance of consumer electronic devices in landfills has become a growing concern because of their harmful chemicals and valuable components. But this concern also presents an opportunity: IoT-enabled sanitation systems can be used to recycle e-waste for resources.
- Automating Waste Sorting and Categorization: IoT waste management can also help with waste categorization. Digital bins can automate the sorting, segregation, and categorization of waste, saving many man-hours. The Polish company Bin-e combines AI-based object recognition, fill-level control, and data processing; its Smart Waste Bins identify and sort waste into four categories (paper, glass, plastic, and metal), making waste processing more efficient.
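Here is the simple route-selection sketch referenced above: a toy Python filter that keeps only the bins above a fill threshold and visits the fullest first. Thresholds, IDs, and coordinates are invented for illustration; real deployments would feed these candidates into a proper vehicle-routing solver.

```python
# Toy planning sketch: only bins above a fill threshold make it onto today's route.
bins = [
    {"bin_id": "BIN-0031", "fill_pct": 82.5, "location": (51.507, -0.128)},
    {"bin_id": "BIN-0017", "fill_pct": 34.0, "location": (51.515, -0.141)},
    {"bin_id": "BIN-0042", "fill_pct": 91.2, "location": (51.503, -0.119)},
]

def plan_pickups(readings, threshold_pct=75):
    """Return bins worth visiting, fullest first, so trucks skip half-empty dumpsters."""
    due = [b for b in readings if b["fill_pct"] >= threshold_pct]
    return sorted(due, key=lambda b: b["fill_pct"], reverse=True)

for stop in plan_pickups(bins):
    print(stop["bin_id"], stop["fill_pct"])
# BIN-0042 91.2
# BIN-0031 82.5
```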
Future of IoT Waste Management
IoT waste management is a boon. The growing use of IoT in managing everyday urban life improves citizens' day-to-day experience and reduces the carbon footprint. But to achieve this in the waste management segment, more support is needed from the public sector through incentives and regulations, and the private sector needs to contribute via innovation. Engagement from various state agencies is also required to implement IoT applications at scale. This will help build a more sustainable future.
Conclusion
Those managing waste collection, sorting, segregation, and categorization can benefit from a smart waste management system using IoT. By employing IoT, waste management companies can increase operational efficiency, reduce costs, and enhance citizen satisfaction by ensuring dumpsters don't overflow.
By Ricardo Buranello
What Is the Concept of a Virtual Factory?
For a decade, the first Friday in October has been designated as National Manufacturing Day. This day kicks off a month-long schedule of events at manufacturing companies nationwide to attract talent to modern manufacturing careers.
For a time, manufacturing went out of fashion: young tech talent preferred career opportunities in software and financial services. That preference has changed in recent years, as the advent of digital technologies and robotization brought some glamour back.
The connected factory is democratizing another innovation — the virtual factory. Without critical assets connected at the IoT edge, the virtual factory could only have been realized in brand-new factories and technology implementations.
There are technologies that enable decades-old assets to communicate. Such technologies allow us to join machine data with physical environment and operational conditions data. Benefits of virtual factory technologies like digital twin are within reach for greenfield and legacy implementations.
Digital twin technologies can be used for predictive maintenance and scenario planning analysis. At its core, the digital twin is about access to real-time operational data to predict and manage the asset’s life cycle. It leverages relevant life cycle management information inside and outside the factory. The possibilities of bringing various data types together for advanced analysis are promising.
I used to see a distinction between IoT-enabled greenfield technology in new factories and legacy technology in older ones. In new factories, data flowed seamlessly from connected, IoT-enabled machines to enterprise systems or the cloud for advanced analytics. In older factories, when data tried to move to enterprise systems or the cloud, it hit countless walls. Innovative factories were running proof of concepts (POCs) of IoT technologies on legacy equipment, but this wasn’t the norm.
No matter the age of the factory or equipment, the expectations look alike: when manufacturing companies invest in machines, they expect the asset to be used for a decade or more. We had to invent something inclusive of both new and legacy machines and systems.
We had to create something that allows decades-old equipment of diverse brands and types (PLCs, CNCs, robots, etc.) to communicate with one another. We had to think about how to make legacy machines talk to legacy systems. Connecting was not enough. We had to make it accessible to experienced developers and technicians not specialized in systems integration.
If plant managers and leaders have clear and consumable data, they can use it for analysis and measurement. Surfacing and routing data has enabled innovative use cases in processes controlled by aged equipment. Prescriptive and predictive maintenance reduce downtime and allow access to data. This access enables remote operation and improved safety on the plant floor. Each line flows better, improving supply chain orchestration and worker productivity.
Open protocols aren’t optimized for connecting to each machine. You need tools and optimized drivers to connect to the machines, cut latency time and get the data to where it needs to be in the appropriate format to save costs. These tools include:
- Machine data collection
- Data transformation and visualization
- Device management
- Edge logic
- Embedded security
- Enterprise integration
Plants are trying to get and use data to improve overall equipment effectiveness (OEE). OEE applications can calculate how many good and bad parts were produced compared to the machine’s capacity. This analysis can go much deeper: factories can visualize how the machine works down to its sub-processes, synchronize each movement to the millisecond, and change timing to increase operational efficiency.
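As a small worked example of the core OEE arithmetic, the Python sketch below combines availability, performance, and quality into a single figure; the shift numbers are invented for illustration.

```python
def oee(planned_time_min, downtime_min, ideal_cycle_time_s, total_parts, good_parts):
    """Classic OEE = Availability x Performance x Quality, returned as fractions."""
    run_time_min = planned_time_min - downtime_min
    availability = run_time_min / planned_time_min
    performance = (ideal_cycle_time_s * total_parts) / (run_time_min * 60)
    quality = good_parts / total_parts
    return availability, performance, quality, availability * performance * quality

# Illustrative shift: 480 planned minutes, 47 minutes down, 1.1 s ideal cycle time.
a, p, q, overall = oee(480, 47, 1.1, 19_000, 18_400)
print(f"Availability {a:.1%}, Performance {p:.1%}, Quality {q:.1%}, OEE {overall:.1%}")
```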
The technology is here. It is mature. It’s no longer a question of whether you want to use it — you need it to get to what’s next. I think this makes it a fascinating time for smart manufacturing.
Originally posted here.
By Jacqi Levy
The Internet of Things (IoT) is transforming every facet of buildings – how we inhabit them, how we manage them, and even how we build them. There is a vast ecosystem around today’s buildings, and no part of that ecosystem is untouched.
In this blog series, I plan to examine the trends being driven by IoT across the buildings ecosystem. Since the lifecycle of a building begins with design and construction, let’s start there. Here are four ways that the IoT is radically transforming building design and construction.
Building information modeling
Building information modeling (BIM) is a process that provides an intelligent, 3D model of a building. Typically, BIM is used to model a building’s structure and systems during design and construction, so that changes to one set of plans can be updated simultaneously in all other impacted plans. Taken a step further, however, BIM can also become a catalyst for smart buildings projects.
Once a building is up and running, data from IoT sensors can be pulled into the BIM. You can use that data to model things like energy usage patterns, temperature trends or people movement throughout a building. The output from these models can then be analyzed to improve future buildings projects. Beyond its impact on design and construction, BIM also has important implications for the management of building operations.
Green building
The construction industry is a huge driver of landfill waste – up to 40% of all solid waste in the US comes from building projects. This unfortunate fact has ignited a wave of interest in sustainable architecture and construction. But the green building movement has become about much more than keeping building materials out of landfills. It is influencing the design and engineering of building systems themselves, allowing buildings to reduce their impact on the environment through energy management.
Today’s green buildings are being engineered to do things like shut down unnecessary systems automatically when the building is unoccupied, or open and close louvers automatically to let in optimal levels of natural light. In a previous post, I talk about 3 examples of the IoT in green buildings, but these are just some of the cool ways that the construction industry is learning to be more sustainable with help from the IoT.
Intelligent prefab
Using prefabricated building components can be faster and more cost effective than traditional building methods, and it has an added benefit of creating less construction waste. However, using prefab for large commercial buildings projects can be very complex to coordinate. The IoT is helping to solve this problem.
Using RFID sensors, individual prefab parts can be tracked throughout the supply chain. A recent example is the construction of the Leadenhall Building in London. Since the building occupies a relatively small footprint but required large prefabricated components, coordinating the installation was a logistically complex task. RFID data was used to help mitigate the effects of any downstream delays in construction. In addition, the data was then fed into the BIM once parts were installed, allowing for real-time rendering of the building in progress, as well as the establishment of project controls and KPIs.
Construction management
Time is money, so any delays on a construction project can be costly. So how do you prevent your critical heavy equipment from going down and backing up all the other trades on site? With the IoT!
Heavy construction equipment is being outfitted with sensors, which can be remotely monitored for key indicators of potential maintenance issues like temperature fluctuations, excessive vibrations, etc. When abnormal patterns are detected, alerts can trigger maintenance workers to intervene early, before critical equipment fails. Performing predictive maintenance in this way can save time and money, as well as prevent unnecessary delays in construction projects.
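A minimal, rule-based sketch of that idea is shown below; the sensor names and thresholds are illustrative assumptions, and production predictive-maintenance systems typically layer statistical or machine-learning models on top of such rules.

```python
# Minimal rule-based sketch; thresholds and sensor names are illustrative only.
ALERT_RULES = {
    "hydraulic_temp_c": lambda v: v > 95,     # overheating
    "vibration_mm_s": lambda v: v > 11.0,     # excessive vibration
    "engine_hours_since_service": lambda v: v > 500,
}

def check_equipment(reading: dict) -> list[str]:
    """Return the list of rules an incoming sensor reading violates."""
    return [name for name, breached in ALERT_RULES.items()
            if name in reading and breached(reading[name])]

sample = {"asset": "excavator-07", "hydraulic_temp_c": 97.4,
          "vibration_mm_s": 6.2, "engine_hours_since_service": 512}

alerts = check_equipment(sample)
if alerts:
    print(f"Maintenance alert for {sample['asset']}: {', '.join(alerts)}")
```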
Originally posted here.
By Ashley Ferguson
Thanks to the introduction of connected products, digital services, and increased customer expectations, enterprise IoT spend has consistently trended upward. The global IoT market is projected to reach USD 1.4 trillion by 2027. The pressure to build IoT solutions and get a return on those investments has teams on a frantic search for IoT engineers to secure in-house IoT expertise. However, due to the complexity of IoT solutions, finding all of the required expertise in a single engineer is a difficult or impossible proposition.
So how do you adjust your search for an IoT engineer? The first step is to acknowledge that IoT solution development requires the fusion of multiple disciplines. Even simple IoT applications require hardware and software engineering, knowledge of protocols and connectivity, web development skills, and analytics. Certainly, there are many engineers with IoT knowledge, but complete IoT solutions require a team of partners with diverse skills. This often requires utilizing external sources to supplement the expertise gaps.
THE ANATOMY OF AN IoT SOLUTION
IoT solutions provide enterprises with opportunities for innovation through new product offerings and cost savings through refined operations. An IoT solution is an integrated bundle of technologies that help users answer a question or solve a specific problem by receiving data from devices connected to the internet. One of the most common IoT use cases is asset tracking for enterprises that want to monitor trucks, equipment, inventory, or other items with IoT. The anatomy of an asset tracking IoT solution includes sensor-equipped devices on the tracked assets, connectivity, an IoT platform, and a web or mobile application for end users.
This is a simple asset tracking example. For more complex solutions including remote monitoring or predictive maintenance, enterprises must also consider installation, increased bandwidth, post-development support, and UX/UI for the design of the interface for customers or others who will use the solution. Enterprise IoT solutions require an ecosystem of partners, components, and tools to be brought to market successfully.
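To make that anatomy more concrete, here is a minimal sketch of the device-side half: the kind of location-and-condition message an asset tracker might report to the cloud. The field names, device ID, and transport comment are illustrative assumptions, not a schema from any specific platform.

```python
# Minimal sketch of the telemetry payload an asset tracker might report.
# Field names and values are illustrative assumptions, not a platform schema.
import json, time, uuid

def build_tracker_message(device_id, lat, lon, battery_pct, temperature_c):
    """Assemble one location/condition report for an asset-tracking solution."""
    return {
        "messageId": str(uuid.uuid4()),
        "deviceId": device_id,
        "timestamp": int(time.time()),          # epoch seconds
        "location": {"lat": lat, "lon": lon},    # from a GPS/GNSS module
        "battery_pct": battery_pct,              # remaining battery
        "temperature_c": temperature_c,          # ambient or cargo temperature
    }

# The payload would then be published over MQTT/HTTPS via the device's
# connectivity (cellular, LoRaWAN gateway, etc.) to the cloud platform,
# where rules, dashboards, and alerts consume it.
payload = build_tracker_message("truck-0042", 39.1031, -84.5120, 87, 4.5)
print(json.dumps(payload, indent=2))
```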
Consider the design of your desired connected solution. Do you know where you will need to augment skills and services?
If you are in the early stages of IoT concept development and at the center of a buy vs. build debate, it may be a worthwhile exercise to assess your existing team’s skills and how they correspond with the IoT solution you are trying to build.
IoT SKILLS ASSESSMENT
- Hardware
- Firmware
- Connectivity
- Programming
- Cloud
- Data Science
- Presentation
- Technical Support and Maintenance
- Security
- Organizational Alignment
MAKING TIME FOR IoT APPLICATION DEVELOPMENT
The time it will take your organization to build a solution is dependent on the complexity of the application. One way to estimate the time and cost of IoT application development is with Indeema’s IoT Cost Calculator. This tool can help roughly estimate the hours required and the cost associated with the IoT solution your team is interested in building. In MachNation’s independent comparison of the Losant Enterprise IoT Platform and Azure, it was determined that developers could build an IoT solution in 30 hours using Losant and in 74-94 hours using Microsoft Azure.
As you consider IoT application development, consider the makeup of your team. Is your team prepared to dedicate hours to the development of a new solution, or will it be a side project? Enterprise IT teams are often in place to maintain existing operating systems and to ensure networks are running smoothly. In the event that an IT team is tapped to even partially build an IoT solution, there is a great chance that the IT team will need to invite partners to build or provide part of the stack.
HOW THE IoT JOB GETS DONE
Successful enterprises recognize early on that some of these skills will need to be augmented through additional people, through an ecosystem, or with software. It will require more than one ‘IoT engineer’ for the job. According to the results of a McKinsey survey, “the preferences of IoT leaders suggest a greater willingness to draw capabilities from an ecosystem of technology partners, rather than rely on homegrown capabilities.”
IoT architecture alone is intricate. Losant, an IoT application enablement platform, is designed with many of the IoT-specific components already in place. Losant enables users to build applications in a low-to-no code environment and scale them up to millions of devices. Losant is one piece in the wider scope of an IoT solution. In order to build a complete solution, an enterprise needs hardware, software, connectivity, and integration. For those components, our team relies on additional partners from the IoT ecosystem.
The IoT ecosystem, also known as the IoT landscape, refers to the network of IoT suppliers (hardware, devices, software platforms, sensors, connectivity, software, systems integrators, data scientists, data analytics) whose combined services help enterprises create complete IoT solutions. At Losant, we’ve built an IoT ecosystem with reliable experienced partners. When IoT customers need custom hardware, connectivity, system integrators, dev shops, or other experts with proven IoT expertise, we can tap one of our partners to help in their areas of expertise.
SECURE, SCALABLE, SEAMLESS IoT
Creating secure, scalable, and seamless IoT solutions for your environment begins with starting small. Starting small gives your enterprise the ability to establish its ecosystem. Teams can begin with a small investment and apply learnings to subsequent projects. Many IoT success stories begin with enterprises setting out to solve one problem. Those simple beginnings have enabled them to reap the benefits of the data harvested in their environments.
Originally posted here.
By GE Digital
“The End of Cloud Computing.” “The Edge Will Eat The cloud.” “Edge Computing—The End of Cloud Computing as We Know It.”
Such headlines grab attention, but don’t necessarily reflect reality—especially in Industrial Internet of Things (IoT) deployments. To be sure, edge computing is rapidly emerging as a powerful force in turning industrial machines into intelligent machines, but to paraphrase Mark Twain: “The reports of the death of cloud are greatly exaggerated.”
The Tipping Point: Edge Computing Hits Mainstream
We’ve all heard the stats—billions and billions of IoT devices, generating inconceivable amounts of big data volumes, with trillions and trillions of U.S. dollars to be invested in IoT over the next several years. Why? Because industrials have squeezed every ounce of productivity and efficiency out of operations over the past couple of decades, and are now looking to digital strategies to improve production, performance, and profit.
The Industrial Internet of Things (IIoT) represents a world where human intelligence and machine intelligence—what GE Digital calls minds and machines—connect to deliver new value for industrial companies.
In this new landscape, organizations use data, advanced analytics, and machine learning to drive digital industrial transformation. This can lead to reduced maintenance costs, improved asset utilization, and new business model innovations that further monetize industrial machines and the data they create.
Despite the “cloud is dead” headlines, GE believes the cloud is still very important in delivering on the promise of IIoT, powering compute-intense workloads to manage massive amounts of data generated by machines. However, there’s no question that edge computing is quickly becoming a critical factor in the total IIoT equation.
What is edge computing?
The “edge” of a network generally refers to technology located adjacent to the machine you are analyzing or actuating, such as a gas turbine, a jet engine, or a magnetic resonance (MR) scanner.
Until recently, edge computing has been limited to collecting, aggregating, and forwarding data to the cloud. But what if instead of collecting data for transmission to the cloud, industrial companies could turn massive amounts of data into actionable intelligence, available right at the edge? Now they can.
This is not just valuable to industrial organizations, but absolutely essential.
Edge computing vs. Cloud computing
Cloud and edge are not at war … it’s not an either/or scenario. Think of your two hands. You go about your day using one or the other or both depending on the task. The same is true in Industrial Internet workloads. If the left hand is edge computing and the right hand is cloud computing, there will be times when the left hand is dominant for a given task, instances where the right hand is dominant, and some cases where both hands are needed together.
Scenarios in which edge computing takes the leading position include low-latency requirements, bandwidth constraints, real-time or near-real-time actuation, and intermittent or no connectivity. Scenarios where cloud plays the more prominent role include compute-heavy tasks, machine learning, digital twins, and cross-plant control.
The point is you need both options working in tandem to provide design choices across edge to cloud that best meet business and operational goals.
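As a rough illustration of this division of labor, the sketch below shows an edge node acting locally on a latency-critical condition while forwarding only a compact aggregate upstream; the threshold, hooks, and data are assumptions for illustration.

```python
# Sketch of the edge/cloud split: act locally when latency matters,
# forward only summaries for compute-heavy cloud analytics.
# Thresholds, hooks, and sample data are illustrative assumptions.
from statistics import mean

EDGE_TEMP_LIMIT_C = 95.0      # local actuation threshold (assumed)

def actuate_locally(command):          # e.g. throttle the machine via a PLC
    print(f"[edge] actuation: {command}")

def forward_to_cloud(summary):         # e.g. publish to a cloud ingestion endpoint
    print(f"[cloud] summary forwarded: {summary}")

def process_window(samples):
    """Handle one window of sensor samples at the edge."""
    # Left hand: immediate, low-latency response happens on the edge node.
    if max(samples) > EDGE_TEMP_LIMIT_C:
        actuate_locally("reduce_load")
    # Right hand: only a compact aggregate goes upstream for fleet-wide
    # analytics, machine learning, and digital-twin workloads.
    forward_to_cloud({"count": len(samples),
                      "mean_c": round(mean(samples), 2),
                      "max_c": max(samples)})

process_window([88.5, 91.2, 96.7, 93.0])
```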
Edge Computing and Cloud Computing: Balance in Action
Let’s look at a couple of illustrations. In an industrial context, examples of intelligent edge machines abound—pumps, motors, sensors, blowout preventers and more benefit from the growing capabilities of edge computing for real-time analytics and actuation.
Take locomotives. These modern 200-ton digital machines carry more than 200 sensors and can process one billion instructions per second. Today, applications can not only collect data locally and respond to changes in that data, but they can also perform meaningful localized analytics. GE Transportation’s Evolution Series Tier 4 Locomotive uses on-board edge computing to analyze data and apply algorithms for running smarter and more efficiently. This improves operational costs, safety, and uptime.
Sending all that data created by the locomotive to the cloud for processing, analyzing, and actuation isn’t useful, practical, or cost-effective.
Now let’s switch gears (pun intended) and talk about another mode of transportation—trucking. Here’s an example where edge plays an important yet minor role, while cloud assumes a more dominant position. In this example, the company has 1,000 trucks under management. There are sensors on each truck tracking performance of the vehicle such as engine, transmission, electrical, battery, and more.
But in this case, instead of real-time analytics and actuation on the machine (as in our locomotive example), the data is ingested, then stored and forwarded to the cloud, where time-series data and analytics are used to track the performance of vehicle components. The fleet operator then leverages a fleet management solution for scheduled maintenance and cost analysis. This gives the operator insights such as the cost over time per part type or the median costs over time. The company can use this data to improve the uptime of its vehicles, lower repair costs, and improve the safe operation of the vehicle.
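A minimal sketch of the cloud-side rollup such a fleet solution might run is shown below, using pandas; the column names and sample records are assumed purely for illustration.

```python
# Sketch of the cloud-side fleet analysis described above: time-series
# maintenance records rolled up into cost-per-part insights.
# The column names and sample data are illustrative assumptions.
import pandas as pd

records = pd.DataFrame([
    {"truck": "T-001", "part": "battery",      "month": "2024-01", "cost": 310.0},
    {"truck": "T-002", "part": "battery",      "month": "2024-01", "cost": 295.0},
    {"truck": "T-001", "part": "transmission", "month": "2024-02", "cost": 1450.0},
    {"truck": "T-003", "part": "battery",      "month": "2024-02", "cost": 330.0},
])

# Cost over time per part type ...
cost_over_time = records.groupby(["part", "month"])["cost"].sum().unstack(fill_value=0)

# ... and the median cost per part across the fleet.
median_cost = records.groupby("part")["cost"].median()

print(cost_over_time)
print(median_cost)
```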
What’s next in edge computing
While edge computing isn’t a new concept, innovation is now beginning to deliver on the promise—unlocking untapped value from the data being created by machines.
GE has been at the forefront of bridging minds and machines. Predix Platform supports a consistent execution environment across cloud and edge devices, helping industrials achieve new levels of performance, production, and profit.
Originally posted here.
Computer vision is fundamental to capturing real-world data within the IoT. Arm technology provides a secure ecosystem for smart cameras in business, industrial, and home applications.
By Mohamed Awad, VP IoT & Embedded, Arm
Computer vision leverages artificial intelligence (AI) to enable devices such as smart cameras to interpret and understand what is happening in an image. Recreating a sensor as powerful as the human eye with technology opens up a wide and varied range of use cases for computers to perform tasks that previously required human sight – so it’s no wonder that computer vision is quickly becoming one of the most important ways to capture and act on real-world data within the Internet of Things (IoT).
Smart cameras now use computer vision in a range of business and industrial applications, from counting cars in parking lots to monitoring footfall in retail stores or spotting defects on a production line. And in the home, smart cameras can tell us when a package has been delivered, whether the dog escaped from the back yard or when our baby is awake.
Across the business and consumer worlds, the adoption of smart camera technology is growing exponentially. In its 2020 report “Cameras and Computing for Surveillance and Security”, market research and strategy consulting company Yole Développement estimates that for surveillance alone, there are approximately one billion cameras across the world. That number of installations is expected to double by 2024.
This technology features key advancements in security, heterogeneous computing, image processing and cloud services – enabling future computer vision products that are more capable than ever.
Smart camera security is top priority for computer vision
IoT security is a key priority and challenge for the technology industry. It’s important that all IoT devices are secure from exploitation by malicious actors, but it’s even more critical when that device captures and stores image data about people, places and high-value assets.
Unauthorized access to smart cameras tasked with watching over factories, hospitals, schools or homes would not only be a significant breach of privacy, it could also lead to untold harm—from plotting crimes to the leaking of confidential information. Compromising a smart camera could also provide a gateway, giving a malicious actor access to other devices within the network – from door, heating and lighting controls to control over an entire smart factory floor.
We need to be able to trust smart cameras to maintain security for us all, not open up new avenues for exploitation. Arm has embraced the importance of security in IoT devices for many years through its product portfolio offerings such as Arm TrustZone for both Cortex-A and Cortex-M.
In the future, smart camera chips based on the Armv9 architecture will add further security enhancements for computer vision products through the Arm Confidential Compute Architecture (CCA).
Further to this, Arm promotes common standards of security best practice such as PSA Certified and PARSEC. These are designed to ensure that all future smart camera deployments have built-in security, from the point the image sensor first records the scene to storage, whether that data is stored locally or in the cloud by using advanced security and data encryption techniques.
Endpoint AI powers computer vision in smart camera devices
The combination of image sensor technology and endpoint AI is enabling smart cameras to infer increasingly complex insights from the vast amounts of computer vision data they capture. New machine learning capabilities within smart camera devices meet a diverse range of use cases – such as detecting individual people or animals, recognizing specific objects and reading license plates. All of these applications for computer vision require ML algorithms running on the endpoint device itself, rather than sending data to the cloud for inference. It’s all about moving compute closer to data.
For example, a smart camera employed at a busy intersection could use computer vision to determine the number and type of vehicles waiting at a red signal at various hours throughout the day. By processing its own data and inferring meaning using ML, the smart camera could automatically adjust its timings to reduce congestion and limit the build-up of emissions, without human involvement.
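As a hedged sketch of what such endpoint inference could look like in practice, the snippet below runs a TensorFlow Lite object-detection model on a camera frame and counts vehicles locally; the model file, class IDs, and output-tensor layout are assumptions that depend on the specific model deployed, not details from the article.

```python
# Sketch of endpoint inference on a smart camera: run a TensorFlow Lite
# object-detection model on one frame and count vehicles locally, so only the
# count (not the video) needs to leave the device. Model path, class IDs, and
# output-tensor layout are assumptions that depend on the deployed model.
import numpy as np
from tflite_runtime.interpreter import Interpreter

VEHICLE_CLASS_IDS = {2, 3, 5, 7}   # e.g. car, motorcycle, bus, truck (COCO-style; assumed)

interpreter = Interpreter(model_path="vehicle_detector.tflite")  # assumed model file
interpreter.allocate_tensors()
input_detail = interpreter.get_input_details()[0]
output_details = interpreter.get_output_details()

def count_vehicles(frame, score_threshold=0.5):
    """Return the number of vehicles detected in one preprocessed frame."""
    interpreter.set_tensor(input_detail["index"], np.expand_dims(frame, axis=0))
    interpreter.invoke()
    # Typical SSD-style models emit classes and scores as separate outputs;
    # the exact output indices depend on the converted model.
    classes = interpreter.get_tensor(output_details[1]["index"])[0]
    scores = interpreter.get_tensor(output_details[2]["index"])[0]
    return sum(1 for c, s in zip(classes, scores)
               if s >= score_threshold and int(c) in VEHICLE_CLASS_IDS)

# The per-phase count could then feed the signal-timing logic described above.
```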
Arm’s investment in AI for applications in endpoints and beyond is demonstrated through its range of Ethos machine learning processors: highly scalable and efficient NPUs capable of supporting a range of 0.1 to 10 TOP/s through many-core technologies. Software also plays a vital role in ML and this is why Arm continues to support the open-source community through the Arm NN SDK and TensorFlow Lite for Microcontrollers (TFLM) open-source frameworks.
These machine learning workload frameworks are based on existing neural networks and power-efficient Arm Cortex-A CPUs, Mali GPUs and Ethos NPUs as well as Arm Compute library and CMSIS-NN – a collection of low-level machine learning functions optimized for Cortex-A CPU, Cortex-M CPU and Mali GPU architectures.
The Armv9 architecture supports enhanced AI capabilities, too, by providing accessible vector arithmetic (individual arrays of data that can be computed in parallel) via Scalable Vector Extension 2 (SVE2). This enables scaling of the hardware vector length without having to rewrite or recompile code. In the future, extensions for matrix multiplication (a key element in enhancing ML) will push the AI envelope further.
Smart cameras connected in the cloud
Cloud and edge computing is also helping to expedite the adoption of smart cameras. Traditional CCTV architectures saw camera data stored on-premises via a Network Video Recorder (NVR) or a Digital Video Recorder (DVR). This model had numerous limitations, from the vast amount of storage required to the limited number of physical connections on each NVR.
Moving to a cloud-native model simplifies the rollout of smart cameras enormously: any number of cameras can be provisioned and managed via a configuration file downloaded to the device. There’s also a virtuous cycle at play: data from smart cameras can now be used to train the models in the cloud for specific use cases so that cameras become even smarter. And the smarter they become, the less data they need to send upstream.
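A minimal sketch of that provisioning step might look like the following, assuming a hypothetical device-management endpoint and configuration fields; it is not any particular vendor's API.

```python
# Minimal sketch of the cloud-native provisioning flow described above: at boot,
# a camera pulls its configuration file from a device-management endpoint and
# applies it. The URL, token handling, and config fields are illustrative assumptions.
import json
import urllib.request

PROVISIONING_URL = "https://example.com/api/devices/{device_id}/config"  # assumed endpoint

def fetch_config(device_id, token):
    req = urllib.request.Request(
        PROVISIONING_URL.format(device_id=device_id),
        headers={"Authorization": f"Bearer {token}"},
    )
    with urllib.request.urlopen(req, timeout=10) as resp:
        return json.load(resp)

def apply_config(cfg):
    # e.g. resolution, frame rate, which ML model to run, upload destination
    print(f"streaming at {cfg['resolution']} / {cfg['fps']} fps, model={cfg['model']}")

# cfg = fetch_config("cam-017", token="...")   # token provisioning is out of scope here
# apply_config(cfg)
```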
The use of cloud computing also enables automation of processes via AI sensor fusion by combining computer vision data from multiple smart cameras. Taking our earlier example of the smart camera placed at a road intersection, cloud AI algorithms could combine data from multiple cameras to constantly adjust traffic light timings holistically across an entire city, keeping traffic moving.
Arm enables the required processing continuum from cloud to endpoint. Cortex-M microcontrollers and Cortex-A processors power smart cameras, with Cortex-A processors also powering edge gateways. Cloud and edge servers harness the capabilities of the Neoverse platform.
New hardware and software demands on smart cameras
The compute needs for computer vision devices continue to grow year over year, with ultra-high resolution video capture (8K 60fps) and 64-bit (Armv8-A) processing marking the current standard for high-end smart camera products.
As a result, the system-on-chip (SoC) within next-generation smart cameras will need to embrace heterogenous architectures, combining CPUs, GPUs, NPUs alongside dedicated hardware for functions like computer vision, image processing, video encoding and decoding.
Storage, too, is a key concern: While endpoint AI can reduce storage requirements by processing images locally on the camera, many use cases will require that data be retained somewhere for safety and security – whether on the device, in edge servers or in the cloud.
To ensure proper storage of high-resolution computer vision data, new video encoding and decoding standards such as H.265 and AV1 are becoming the de facto standard.
New use cases driving continuous innovation
Overall, the demands from the new use cases are driving the need for continuous improvement in computing and imaging technologies across the board.
When we think about image-capturing devices such as CCTV cameras today, we should no longer imagine grainy images of barely recognizable faces passing by a camera. Advancements in computer vision – more efficient and powerful compute coupled with the intelligence of AI and machine learning – are making smart cameras not just image sensors but image interpreters. This bridge between the analog and digital worlds is opening up new classes of applications and use cases that were unimaginable a few years ago.
Originally posted here.
Augmented Reality not only enhances reality with virtual, bundled information, but also offers untapped opportunities for companies and their customers. The technology can significantly help customers process information more efficiently and relieve them cognitively. Early adopters such as Amazon and IKEA are already using augmented reality in online shopping for product demonstrations. This gives customers a more comprehensive insight into the product, which supports their purchase intention. Industry is already using the technology in more versatile ways and exploiting its advantages, for example in engineering, production, service, and employee training. This justifiably raises the question of why this potential of AR is not also being exploited at the customer level.
For customer participation, it would be groundbreaking to reliably empower customers to contribute, regardless of their skills and prior knowledge. In the future, customers would no longer have to struggle with paper instructions when assembling furniture, but could follow work instructions more easily on their smartphones. This also applies to other everyday situations, such as repairing one's own bicycle or diagnosing a fault when the heating system displays yet another error message.
The examples listed all share the common feature that AR acts as a medium for guided work instructions so that customers can be supported more efficiently in their actions. As an expert in the field of AR and IoT, I questioned whether there really is an increase in efficiency, how any such increase manifests itself, and how the effects could be explained. To get to the bottom of the problem, an empirical study was designed in which a 26-step assembly task had to be completed. The participants were divided into two groups. While the experimental group received instructions in an AR app on an iPad, the control group worked with classic paper instructions. After the experiment, all participants were asked about their subjective perceptions during assembly using a standardized questionnaire.
The results of the empirical study are in line with the media perception, or hype, around augmented reality. The members of the experimental group had a significantly shorter processing time, made significantly fewer errors, and were more satisfied overall with the assembly task. Based on the subjective perceptions of the participants, it can be shown that the efficiency gain in the experimental group can be explained by a reduction in their cognitive load.
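For readers curious what such a comparison looks like analytically, the sketch below runs an independent-samples t-test on completion times for the two groups; the numbers are hypothetical placeholders, not the study's data.

```python
# Illustration of the kind of comparison behind such a study: an
# independent-samples t-test on completion times for the AR group vs. the
# paper-instructions group. The numbers below are hypothetical placeholders.
from scipy import stats

ar_group_minutes    = [24.1, 26.3, 22.8, 25.0, 23.7, 27.2, 24.9, 23.5]
paper_group_minutes = [31.4, 29.8, 33.0, 30.5, 32.2, 28.9, 34.1, 31.7]

t_stat, p_value = stats.ttest_ind(ar_group_minutes, paper_group_minutes)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
# A small p-value would indicate a statistically significant difference in
# processing time between the two groups, as the study reports.
```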
Overall, the study not only reveals efficiency gains from using augmented reality, but also points to other effects. The participants were more efficient in their actions and also significantly more satisfied with the process. According to the existing marketing literature on satisfaction, this translates into increased repurchase intention, increased willingness to pay, positive eWOM, and sustained customer loyalty. It can thus be shown that the use of augmented reality not only reduces existing costs by increasing efficiency but also promises additional revenue. With capable devices now common in private households, the use of AR at the consumer level is no longer a utopia. The technology is ready! So, are you?
By Sachin Kotasthane
In his book, 21 Lessons for the 21st Century, the historian Yuval Noah Harari highlights the complex challenges mankind will face on account of technological challenges intertwined with issues such as nationalism, religion, culture, and calamities. In the current industrial world hit by a worldwide pandemic, we see this complexity translate in technology, systems, organizations, and at the workplace.
While in my previous article, Humane IIoT, I discussed the people-centric strategies that enterprises need to adopt when onboarding the workforce to industrial IoT initiatives, in this article I will share thoughts on how new-age technologies such as AI, ML, big data, and of course industrial IoT can be used to manage complex workforce problems in a factory effectively, thereby changing the way people work and interact, especially in this COVID-stricken world.
Workforce-related problems in production can be categorized into:
- Time complexity
- Effort complexity
- Behavioral complexity
Problems in any of these categories have a significant impact on the workforce, with a detrimental effect on the outcome, whether for the product or the organization. Their complexity stems from the fact that solutions cannot be found through engineering or technology fixes alone: there is no single root cause, but rather a combination of factors and scenarios. Let us, therefore, explore a few and seek probable workforce solutions.
Figure 1: Workforce Challenges and Proposed Strategies in Production
Addressing Time Complexity
Any workforce-related issue that has a detrimental effect on the operational time, due to contributing factors from different factory systems and processes, can be classified as a time complex problem.
Though classical paper-based schedules, lists, and punch sheets have largely been replaced with IT-systems such as MES, APS, and SRM, the increasing demands for flexibility in manufacturing operations and trends such as batch-size-one, warrant the need for new methodologies to solve these complex problems.
- Worker attendance
Anyone who has experienced, at close quarters, a typical day in the life of a factory supervisor, will be conversant with the anxiety that comes just before the start of a production shift. Not knowing who will report absent, until just before the shift starts, is one complex issue every line manager would want to get addressed. While planned absenteeism can be handled to some degree, it is the last-minute sick or emergency-pager text messages, or the transport delays, that make the planning of daily production complex.
What if there were a solution that gave a count close to the confirmed hands for the shift at least half an hour to an hour in advance? It turns out that organizations are experimenting with a combination of GPS, RFID, and employee tracking that interacts with resource planning systems to automate shift planning.
While some legal and privacy issues still need to be addressed, it would not be long before we see people being assigned to workplaces, even before they enter the factory floor.
While making sure every line manager has accurate information about the confirmed hands for the shift, it is equally important that the health and well-being of employees are monitored during the pandemic. Technologies such as radar and millimeter-wave sensors can enable live tracking of workers around the shop floor and help ensure that social distancing norms are observed.
- Resource mapping
While resource skill-mapping and certification are mostly HR prerogatives, not having the right resource at the workstation during exigencies such as absenteeism or extra workload is a complex problem. Precious time is lost in locating such resources, or worse still, millions are spent on overtime.
What if there were a tool that analyzed the current workload for a resource with the identified skillset code(s) and gave an accurate estimate of the resource’s availability? This could further be used by shop managers to plan manpower for a shift, keeping them as lean as possible.
Today, IT teams of OEMs are seen working with software vendors to build such analytical tools that consume data from disparate systems—such as production work orders from MES and swiping details from time systems—to create real-time job profiles. These results are fed to the HR systems to give managers the insights needed to make resource decisions within minutes.
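A simplified sketch of the kind of join such a tool performs is shown below, combining time-system swipes with MES work orders to surface spare capacity by skill code; the column names and sample data are assumptions for illustration.

```python
# Sketch of the join such an analytical tool performs: combine time-system
# swipe data with MES work orders to flag who with a given skill code is on
# site and not yet loaded with work. Column names and sample data are assumed.
import pandas as pd

swipes = pd.DataFrame([
    {"employee": "E-101", "skill": "WELD-2", "on_site": True},
    {"employee": "E-102", "skill": "WELD-2", "on_site": False},
    {"employee": "E-103", "skill": "ASSY-1", "on_site": True},
])

work_orders = pd.DataFrame([
    {"employee": "E-101", "order": "WO-5501", "hours_assigned": 7.5},
    {"employee": "E-103", "order": "WO-5502", "hours_assigned": 2.0},
])

load = work_orders.groupby("employee")["hours_assigned"].sum()
availability = swipes.assign(hours_assigned=swipes["employee"].map(load).fillna(0.0))
availability["available_hours"] = (8.0 - availability["hours_assigned"]).clip(lower=0)

# Candidates for reassignment: on site, right skill, spare capacity.
print(availability.query("on_site and skill == 'WELD-2' and available_hours > 0"))
```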
Addressing Effort Complexity
Just as time complexities result in increased production time, problems in this category result in increased workforce effort to complete the same quantity of work. As effort is proportional to fatigue and to the long-term well-being of the workforce, solutions that reduce effort are always welcome. Complexity arises when organizations try to create method out of madness from a variety of factors such as changing workforce profiles, production sequences, logistical and process constraints, and demand fluctuations.
Thankfully, solutions for this category of problems can be found in new technologies that augment existing systems with insights and predictions, the results of which can reduce effort and channel it more productively. Add to this the demand fluctuations of the current pandemic, and real-time operational visibility coupled with advanced analytics becomes essential to meeting shift production targets.
- Intelligent exoskeletons
Exoskeletons, as we know, are powered bodysuits designed to safeguard and support the user in performing tasks, while increasing overall human efficiency to do the respective tasks. These are deployed in strain-inducing postures or to lift objects that would otherwise be tiring after a few repetitions. Exoskeletons are the new-age answer to reducing user fatigue in areas requiring human skill and dexterity, which otherwise would require a complex robot and cost a bomb.
However, the complexity that mars exoskeleton users is making the same suit adaptable for a variety of postures, user body types, and jobs at the same workstation. It would help if the exoskeleton could sense the user, set the posture, and adapt itself to the next operation automatically.
Taking a leaf out of Marvel’s Iron Man, whose JARVIS-controlled suit complements his posture, manufacturers can now hope to create intelligent exoskeletons that are always connected to factory systems and user profiles. These suits will adapt and respond to assistive needs without any intervention, freeing the user to focus completely on the main job at hand.
Given the ongoing COVID situation, equipping these suits with sensors and technologies such as radar or millimeter wave to help observe social distancing, measure body temperature, and so on would make life safer for workers and management alike.
- Highlighting likely deviations
The world over, quality teams on factory floors work with checklists that the quality inspector verifies for every product that comes at the inspection station. While this repetitive task is best suited for robots, when humans execute such repetitive tasks, especially those that involve using visual, audio, touch, and olfactory senses, mistakes and misses are bound to occur. This results in costly reworks and recalls.
Manufacturers have tried to address this complexity by carrying out rotation of manpower. But this, too, has met with limited success, given the available manpower and ever-increasing workloads.
Fortunately, predictive quality integrated with feed-forward techniques and smart visual tracking can be used to highlight the area or zone on the product that is prone to quality slips, based on data captured from previous operations. The inspector can then be guided to pay more attention to these areas in the checklist.
Addressing Behavioral Complexity
Problems of this category usually manifest as a quality issue, but the root cause can often be traced to the workforce behavior or profile. Traditionally, organizations have addressed such problems through experienced supervisors, who as people managers were expected to read these signs, anticipate and align the manpower.
However, with constantly changing manpower and product variants, these are now complex new-age problems requiring new-age solutions.
- Heat-mapping workload
Time and motion studies at the workplace map user movements around the machine, along with the time each activity takes to complete, against the available cycle time, balancing it either through work distribution or by increasing the manpower at that station. Time-consuming and cumbersome as this is, the complexity increases when workload balancing must be done for teams working on a single product at a workstation. Movements of multiple resources during different sequences are difficult to track, and different users cannot be expected to follow the same footsteps every time.
Solving this issue needs a solution that will monitor human motion unobtrusively, link those to the product work content at the workstation, generate recommendations to balance the workload and even out the ‘congestion.’ New industrial applications such as short-range radar and visual feeds can be used to create heat maps of the workforce as they work on the product. This can be superimposed on the digital twin of the process to identify the zone where there is ‘congestion.’ This can be fed to the line-planning function to implement corrective measures such as work distribution or partial outsourcing of the operation.
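As a rough sketch of how tracked positions become a heat map, the snippet below bins worker coordinates into a grid and reports the busiest cell; the coordinates, footprint, and grid size are assumed for illustration.

```python
# Sketch of turning tracked worker positions into a workstation heat map, so
# 'congestion' zones can be overlaid on the digital twin. Coordinates and the
# grid size are illustrative assumptions.
import numpy as np

# (x, y) positions in metres sampled from radar/visual feeds during one cycle
positions = np.array([
    [1.2, 0.8], [1.3, 0.9], [1.1, 0.7], [3.4, 2.1],
    [1.2, 1.0], [3.5, 2.0], [1.4, 0.8], [1.3, 0.7],
])

# Bin positions into a 0.5 m grid over a 4 m x 3 m workstation footprint
heatmap, xedges, yedges = np.histogram2d(
    positions[:, 0], positions[:, 1],
    bins=[8, 6], range=[[0, 4], [0, 3]],
)

# Cells with the highest counts are the congestion zones to rebalance
hot = np.argwhere(heatmap == heatmap.max())[0]
print(f"busiest cell: x=[{xedges[hot[0]]:.1f},{xedges[hot[0]+1]:.1f}) "
      f"y=[{yedges[hot[1]]:.1f},{yedges[hot[1]+1]:.1f}) "
      f"with {int(heatmap.max())} samples")
```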
- Aging workforce (loss of tribal knowledge)
With new technology coming to the shop-floor, skills of the current workforce get outdated quickly. Also, with any new hire comes the critical task of training and knowledge sharing from experienced hands. As organizations already face a shortage of manpower, releasing more hands to impart training to a larger workforce audience, possibly at different locations, becomes an even more daunting task.
Fully realizing the difficulties and reluctance to document, organizations are increasingly adopting AR-based workforce trainings that map to relevant learning and memory needs. These AR solutions capture the minutest of the actions executed by the expert on the shop-floor and can be played back by the novice in-situ as a step-by-step guide. Such tools simplify the knowledge transfer process and also increase worker productivity while reducing costs.
Further, in extraordinary situations such as the one we face at present, technologies such as AR offer solutions for effective and personalized support to field personnel, without the need to fly in specialists at multiple sites. This helps keep them safe, and accessible, still.
Key takeaways and Actionable Insights
The shape of the future workforce will be the result of complex, changing, and competing forces. Technology, globalization, demographics, social values, and the changing personal expectations of the workforce will continue to transform and disrupt the way businesses operate, increasing complexity and radically changing the where, when, and how of future work. While the need to constantly reskill and upskill the workforce will be humongous, using new-age techniques and technologies to enhance the effectiveness and efficiency of the existing workforce will come into the spotlight.
Figure 2: The Future IIoT Workforce
Organizations will increasingly be required to:
- Deploy data farming to dive deep and extract vast amounts of information and process insights embedded in production systems. Tapping into large reservoirs of ‘tribal knowledge’ and digitizing it for ingestion to data lakes is another task that organizations will have to consider.
- Augment existing operations systems such as SCADA, DCS, MES, CMMS with new technology digital platforms, AI, AR/VR, big data, and machine learning to underpin and grow the world of work. While there will be no dearth of resources in one or more of the new technologies, organizations will need to ‘acqui-hire’ talent and intellectual property using a specialist, to integrate with existing systems and gain meaningful actionable insights.
- Address privacy and data security concerns of the workforce, through the smart use of technologies such as radar and video feeds.
Nonetheless, digital enablement will need to be optimally used to tackle the new normal that the COVID pandemic has set forth in manufacturing—fluctuating demands, modular and flexible assembly lines, reduced workforce, etc.
Originally posted here.
Provisioning, managing, and securing devices in an IoT product requires careful planning from the very start of the process. Rigorous evaluation of options, followed by a Proof of Concept (POC), helps determine the right solution. Once the POC has been approved, the IoT product moves to production. Then the real fun starts and many strategic considerations come into play. We can list them as follows:
- Robust and secure OTA software updates
- Security by design
- Scalability
- Automation
- Remote terminal management
- Device configuration, monitoring & troubleshooting
Robust and secure OTA software updates
Robust and secure OTA software updates are essential for keeping IoT devices secure as the software on these devices will become outdated during their lifetime and vulnerabilities are certain to arise if left in their initial states. Therefore a secure, risk-tolerant, and efficient update mechanism must be at the core of each product development team from the inception of the project to the end of its life.
How about a homegrown solution?
Homegrown solutions are less likely to be best-of-breed, can be hard to scale, can suffer from over-customisation and scope creep, come at an inherently high cost, and can be left stranded if the star developers behind their creation suddenly jump ship and leave the organisation. They also often fail to meet the requirements needed to ensure the security and robustness of software updates. Various open-source solutions exist, but none provides an end-to-end solution, and they lack the overall functionality to make them enterprise-grade. Generic public cloud IoT stacks aim to cater to the entire IoT value chain but fail to deliver a purpose-built solution for software updates. Proprietary and platform solutions cause lock-in to a specific cloud infrastructure, operating system, or development tools.
The common thread among all of these solutions is the lack of a fully optimized end-to-end OTA software update and device management infrastructure that can minimize risk, increase efficiency and enhance security and uptime.
Security by design
A device security breach incident can interrupt operations, damage systems, and negatively impact both virtual and physical processes. This translates into unhappy customers and lost business. As Colin Duggan, the Founder and CEO at BG Networks says in an interview with the Device Chronicle, “It is difficult to add security after the design has been completed. There are a number of reasons for this. Embedded systems have limited MHz, memory, and limitations of network interfaces on embedded processors. Security features can be added after the fact but usually will not close off all the vulnerabilities.” That is why it is so important to ensure security by design, in the very early stages of the product’s lifecycle.
IoT product security should be approached holistically with a framework that addresses people, devices, and processes. To help IoT professionals make the right decisions concerning their product development, we designed a simple framework based on these factors and called it the Triangle of Trust:
Scalability
There’s a significant difference between managing a small number of embedded devices and having thousands or even millions of devices deployed in the field. Microsoft’s new IoT Signals report found lack of scalability as a leading cause for IoT project failures. Complexity is one of the greatest scalability issues. As such, choosing the right solution with the right architecture is important to safeguard the long-term management viability of your fleet of connected devices. More on the topic of IoT scalability can be read here.
Automation
When one of the arms of the Triangle of Trust fails, the other two are endangered. To prevent risks arising from human mistakes, automating some of the processes is a solution that might save your business thousands of dollars. Mender.io is an OTA software update manager for Linux-based embedded devices, and it also offers a wide range of automations to securely manage these devices. One of the features Mender offers is automatic retry of failed device deployments. Deployments to devices might fail for various intermittent reasons such as loss of power, loss of network, or device usage. Automatic retry upon failure reduces device deployment error rates by up to 90%. This translates to time and money saved managing deployments, and customers receive updates faster.
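The general retry pattern, independent of Mender's own implementation, can be sketched as follows; the function names, attempt count, and back-off are assumptions for illustration.

```python
# Generic sketch of the automatic-retry idea (not Mender's API): retry a failed
# deployment a bounded number of times with a back-off, since many failures are
# intermittent (power, network, device busy). Names and delays are assumptions.
import time

def deploy_with_retry(deploy_fn, device_id, max_attempts=3, backoff_s=60):
    """Attempt a deployment, retrying on transient failures."""
    for attempt in range(1, max_attempts + 1):
        try:
            deploy_fn(device_id)
            print(f"{device_id}: deployment succeeded on attempt {attempt}")
            return True
        except Exception as err:                      # transient failure assumed
            print(f"{device_id}: attempt {attempt} failed ({err})")
            if attempt < max_attempts:
                time.sleep(backoff_s * attempt)       # linear back-off
    print(f"{device_id}: deployment failed after {max_attempts} attempts")
    return False
```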
Remote Management
Remote management is a necessity for any kind of embedded device. Any company rolling out IoT products needs to control its systems from a central location. SSH, secure tunneling, and remote terminal access are preferred by service providers over VPN access, as they can assure their customers of security when accessing and troubleshooting devices. Furthermore, management involves grouping and accessing embedded devices, and provisioning, configuring, and monitoring them remotely and securely.
Seeing the need not only for secure over-the-air processes but also for reliable ways of monitoring, provisioning, configuring, grouping, and accessing embedded devices, the team behind Mender decided to expand its offering with these remote management features. Mender is open-source software, meaning many contributors help improve it and support a variety of customer hardware and software, such as NVIDIA Jetson and NXP's family of iMX processors. It provides flexibility in choosing your infrastructure, software, and hardware from prototyping to production, which means there is no vendor lock-in. Mender supports all device software updates, from a full disk image to application updates, with the freedom to customize the update and installation process to fit your workflow. It is also integrated with Google Cloud and Microsoft Azure IoT for easy device authentication.
Device configuration, troubleshooting and monitoring
A proper device management setup should never be overlooked. Robust and secure device management is a necessary cornerstone of an IoT product, so you need to find a high-quality solution. Once you deploy thousands or millions of devices into the field, you’ll need to be able to configure them properly, gather the data, and quickly troubleshoot any arising problems. Many organisations treat these capabilities as an afterthought. Engineers realize they need some kind of device management solution right before their deadlines and product releases, which results in rushed fixes that may have serious implications for the robustness and security of connected devices.
Conclusion
In order to roll out a successful, secure, and robust IoT product, a few things have to be taken into consideration before release. To ensure security by design from the earliest stages of the product life cycle, the team behind the IoT product needs to find a solution for deploying secure and robust OTA updates; remotely monitoring, configuring, and troubleshooting devices; and automating the necessary processes to avoid human mistakes.
Flowchart of IoT in Mining
by Vaishali Ramesh
Introduction – Internet of Things in Mining
The Internet of Things (IoT) is the extension of Internet connectivity into physical devices and everyday objects. Embedded with electronics, Internet connectivity, and other forms of hardware, these devices can communicate and interact with others over the Internet, and they can be remotely monitored and controlled. In the mining industry, IoT is used as a means of achieving cost and productivity optimization, improving safety measures, and advancing artificial intelligence capabilities.
IoT in the Mining Industry
Considering the numerous incentives it brings, many large mining companies are planning and evaluating ways to start their digital journey and digitalize day-to-day mining operations. For instance:
- Cost optimization and improved productivity through sensors on mining equipment and systems that monitor the equipment and its performance. Mining companies are using these large volumes of data ('big data') to discover more cost-efficient ways of running operations and to reduce overall operational downtime.
- Ensuring the safety of people and equipment by monitoring ventilation and toxicity levels inside underground mines in real time with the help of IoT. This enables faster and more efficient evacuations and safety drills.
- Moving from preventive to predictive maintenance
- Improved and faster decision-making. The mining industry faces emergencies almost every hour, with a high degree of unpredictability. IoT helps balance these situations and make the right decisions when several factors are active at the same time, shifting everyday operations toward algorithms.
IoT & Artificial Intelligence (AI) application in Mining industry
Another benefit of IoT in the mining industry is its role as the underlying system facilitating the use of Artificial Intelligence (AI). From exploration to processing and transportation, AI enhances the power of IoT solutions as a means of streamlining operations, reducing costs, and improving safety within the mining industry.
Using vast amounts of data inputs, such as drilling reports and geological surveys, AI and machine learning can make predictions and provide recommendations on exploration, resulting in a more efficient process with higher-yield results.
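As a toy sketch of this idea, the snippet below trains a regression model on a handful of hypothetical drill-hole features to predict ore grade at candidate targets; the features and values are invented for illustration, and real exploration models use far richer geological inputs.

```python
# Toy sketch: train a model on historical drill-hole features to predict ore
# grade at candidate targets. Features, units, and values are hypothetical.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# [depth_m, magnetic_anomaly, distance_to_fault_m] per historical drill hole
X = np.array([[120, 0.8, 350], [300, 1.4, 120], [80, 0.3, 900],
              [450, 1.9, 60],  [200, 1.1, 240], [60, 0.2, 1100]])
y = np.array([0.9, 2.3, 0.2, 3.1, 1.6, 0.1])   # measured grade, % metal

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

candidates = np.array([[260, 1.3, 150], [90, 0.4, 800]])
print(model.predict(candidates))   # ranks targets for follow-up drilling
```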
AI-powered predictive models also enable mining companies to improve their metals processing methods through more accurate and less environmentally damaging techniques. AI can be used for the automation of trucks and drills, which offers significant cost and safety benefits.
Challenges for IoT in Mining
Although IoT brings clear benefits to the mining industry, its implementation in mining operations has faced many challenges in the past.
- Limited or unreliable connectivity especially in underground mine sites
- Remote locations may struggle to pick up 3G/4G signals
- Declining ore grades have forced many mines to dig deeper, which can further hinder the rollout of IoT systems
Mining companies have overcome the challenge of connectivity by implementing more reliable connectivity methods and data-processing strategies to collect, transfer, and present mission-critical data for analysis. Satellite communications can play a critical role in transferring data back to control centers to provide a complete picture of mission-critical metrics. Mining companies have worked with trusted IoT satellite connectivity specialists such as Inmarsat and their partner ecosystems to ensure they extract and analyze their data effectively.
Cybersecurity will be another major challenge for IoT-powered mines over the coming years
As mining operations become more connected, they will also become more vulnerable to hacking, which will require additional investment into security systems.
Following a data breach at Goldcorp in 2016, which disproved the previous industry mentality that miners are not typical targets, 10 mining companies established the Mining and Metals Information Sharing and Analysis Centre (MM-ISAC) in April 2017 to share cyber threats among peers.
In March 2019, one of the largest aluminum producers in the world, Norsk Hydro, suffered an extensive cyber-attack, which led to the company isolating all plants and operations as well as switching to manual operations and procedures. Several of its plants suffered temporary production stoppages as a result. Mining companies have realized the importance of digital security and are investing in new security technologies.
Digitalization of Mining Industry - Road Ahead
Many mining companies have realized the benefits of digitalization in their mines and have taken steps to implement them. The four themes expected to be central to the digitalization of the mining industry over the next decade are listed below:
The graph referenced above plots the complexity of each digital technology against the implementation period expected for its widespread adoption. Various factors, such as the complexity and scalability of the technologies involved, influence the adoption rate of specific technologies and the pace of the mining industry's overall digital transformation.
The world can expect to witness prominent developments from the mining industry to make it more sustainable. Mining also has unfavorable impacts on communities, ecosystems, and other surroundings. With the intention of minimizing them, the power of data is being harnessed through different IoT deployments. Overall, IoT helps the mining industry move toward resource extraction that stays within an essential time frame and footprint.
Originally posted here.