By Sachin Kotasthane
In his book, 21 Lessons for the 21st Century, the historian Yuval Noah Harari highlights the complex challenges mankind will face on account of technological disruption intertwined with issues such as nationalism, religion, culture, and calamities. In today's industrial world, hit by a worldwide pandemic, we see this complexity translate into technology, systems, organizations, and the workplace.
While in my previous article, Humane IIoT, I discussed the people-centric strategies that enterprises need to adopt when onboarding industrial IoT initiatives in the workforce, in this article I will share thoughts on how new-age technologies such as AI, ML, big data, and of course industrial IoT can be used to manage complex workforce problems in a factory, thereby changing the way people work and interact, especially in this COVID-stricken world.
Workforce-related problems in production can be categorized into time complexity, effort complexity, and behavioral complexity.
Problems in any of these categories have a significant impact on the workforce, with a detrimental effect on the outcome, whether of the product or of the organization. The complexity of these problems stems from the fact that they cannot be solved with engineering or technology fixes alone: there is no single root cause, but rather a combination of factors and scenarios. Let us, therefore, explore a few and seek probable workforce solutions.
Figure 1: Workforce Challenges and Proposed Strategies in Production
Addressing Time Complexity
Any workforce-related issue that has a detrimental effect on the operational time, due to contributing factors from different factory systems and processes, can be classified as a time complex problem.
Though classical paper-based schedules, lists, and punch sheets have largely been replaced with IT systems such as MES, APS, and SRM, the increasing demand for flexibility in manufacturing operations and trends such as batch-size-one warrant new methodologies to solve these complex problems.
Anyone who has experienced, at close quarters, a typical day in the life of a factory supervisor, will be conversant with the anxiety that comes just before the start of a production shift. Not knowing who will report absent, until just before the shift starts, is one complex issue every line manager would want to get addressed. While planned absenteeism can be handled to some degree, it is the last-minute sick or emergency-pager text messages, or the transport delays, that make the planning of daily production complex.
What if there were a solution that gave a near-accurate count of the confirmed hands for the shift at least an hour or half-hour in advance? It turns out that organizations are experimenting with a combination of GPS, RFID, and employee tracking that interacts with resource-planning systems to automate the shift-planning activity.
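As a rough illustration of how such a forecast might work, here is a minimal sketch, with all names, IDs, and thresholds invented for the example, that counts confirmed hands from live gate check-ins (GPS geofence hits or RFID swipes) in the window before a shift:

```python
from datetime import datetime, timedelta

def confirmed_hands(roster, checkins, shift_start, cutoff_minutes=60):
    """Count rostered workers seen at the gate within the cutoff window
    before shift start; everyone else is flagged for re-planning."""
    window_start = shift_start - timedelta(minutes=cutoff_minutes)
    seen = {worker_id for worker_id, ts in checkins
            if window_start <= ts <= shift_start}
    confirmed = [w for w in roster if w in seen]
    doubtful = [w for w in roster if w not in seen]
    return confirmed, doubtful

shift = datetime(2021, 5, 3, 6, 0)
roster = ["W01", "W02", "W03"]
checkins = [("W01", shift - timedelta(minutes=40)),
            ("W03", shift - timedelta(minutes=5))]
confirmed, doubtful = confirmed_hands(roster, checkins, shift)
# confirmed -> ["W01", "W03"]; doubtful -> ["W02"]
```

A real deployment would feed the `doubtful` list into the resource-planning system so a line manager sees the gap before the shift starts, not after.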
While some legal and privacy issues still need to be addressed, it would not be long before we see people being assigned to workplaces, even before they enter the factory floor.
While making sure every line manager has accurate information about the confirmed hands for the shift, it is equally important that the health and well-being of employees are monitored during this pandemic. Technologies such as radar and millimeter-wave sensors enable live tracking of workers around the shop floor and help ensure that social-distancing norms are observed.
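The distancing check itself is straightforward once a tracker yields positions. Here is an illustrative sketch, assuming a hypothetical radar/millimeter-wave feed that reports worker coordinates in metres, that flags pairs standing too close:

```python
import math

def distancing_violations(positions, min_distance=2.0):
    """positions: {worker_id: (x_metres, y_metres)}; returns pairs of
    workers standing closer than min_distance to each other."""
    ids = sorted(positions)
    violations = []
    for i, a in enumerate(ids):
        for b in ids[i + 1:]:
            ax, ay = positions[a]
            bx, by = positions[b]
            if math.hypot(ax - bx, ay - by) < min_distance:
                violations.append((a, b))
    return violations

pos = {"W01": (0.0, 0.0), "W02": (1.0, 0.5), "W03": (10.0, 10.0)}
# distancing_violations(pos) -> [("W01", "W02")]
```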
While resource skill-mapping and certification are mostly HR function prerogatives, not having the right resource at the workstation during exigencies such as absenteeism or extra workload is a complex problem. Precious time is lost locating such resources, or worse still, millions are spent in overtime.
What if there were a tool that analyzed the current workload for a resource with the identified skillset code(s) and gave an accurate estimate of the resource’s availability? This could further be used by shop managers to plan manpower for a shift, keeping them as lean as possible.
Today, IT teams of OEMs are seen working with software vendors to build such analytical tools that consume data from disparate systems—such as production work orders from MES and swiping details from time systems—to create real-time job profiles. These results are fed to the HR systems to give managers the insights needed to make resource decisions within minutes.
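A hedged sketch of what such an analytical tool might compute: it joins open work orders (as an MES might report them) with clock-in state (from the time system) to list certified workers who could absorb an extra job. All system and field names here are assumptions for illustration, not a real vendor API.

```python
def available_resources(workers, open_orders, required_skill, max_load=2):
    """workers: {id: {"skills": set, "clocked_in": bool}};
    open_orders: list of (worker_id, order_id) pairs currently running."""
    load = {}
    for worker_id, _ in open_orders:
        load[worker_id] = load.get(worker_id, 0) + 1
    return [w for w, info in sorted(workers.items())
            if info["clocked_in"]
            and required_skill in info["skills"]
            and load.get(w, 0) < max_load]

workers = {
    "W01": {"skills": {"welding", "assembly"}, "clocked_in": True},
    "W02": {"skills": {"welding"}, "clocked_in": True},
    "W03": {"skills": {"welding"}, "clocked_in": False},
}
orders = [("W01", "WO-17"), ("W01", "WO-18")]
# available_resources(workers, orders, "welding") -> ["W02"]
```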
Addressing Effort Complexity
Just as time complexities result in increased production time, problems in this category increase the effort the workforce must expend to complete the same quantity of work. As effort is proportional to fatigue and affects the long-term well-being of the workforce, solutions that reduce effort are always welcome. Complexity arises when organizations try to create method out of the madness of factors such as changing workforce profiles, production sequences, logistical and process constraints, and demand fluctuations.
Thankfully, solutions for this category of problems can be found in new technologies that augment existing systems with insights and predictions, the results of which can reduce effort and channel it more productively. Add to this the demand fluctuations of the current pandemic, and real-time operational visibility, coupled with advanced analytics, becomes essential to meeting shift production targets.
Exoskeletons, as we know, are powered bodysuits designed to safeguard and support the user in performing tasks, while increasing overall human efficiency to do the respective tasks. These are deployed in strain-inducing postures or to lift objects that would otherwise be tiring after a few repetitions. Exoskeletons are the new-age answer to reducing user fatigue in areas requiring human skill and dexterity, which otherwise would require a complex robot and cost a bomb.
However, the complexity that mars exoskeleton users is making the same suit adaptable for a variety of postures, user body types, and jobs at the same workstation. It would help if the exoskeleton could sense the user, set the posture, and adapt itself to the next operation automatically.
Taking a leaf out of Marvel's Iron Man, whose suit complements his posture and is controlled by JARVIS, manufacturers can now hope to create intelligent exoskeletons that are always connected to factory systems and user profiles. These suits will adapt and respond to assistive needs without any intervention, freeing the user to focus completely on the main job at hand.
Given the ongoing COVID situation, equipping these suits with sensors and technologies such as radar or millimeter wave to help observe social distancing and measure body temperature would make life safer for workers and management alike.
The world over, quality teams on factory floors work with checklists that the quality inspector verifies for every product that comes at the inspection station. While this repetitive task is best suited for robots, when humans execute such repetitive tasks, especially those that involve using visual, audio, touch, and olfactory senses, mistakes and misses are bound to occur. This results in costly reworks and recalls.
Manufacturers have tried to address this complexity by carrying out rotation of manpower. But this, too, has met with limited success, given the available manpower and ever-increasing workloads.
Fortunately, predictive quality integrated with feed-forward techniques and some smart visual tracking can be used to highlight the area or zone on the product that is prone to quality slips, based on data captured from previous operations. The inspector can then be guided to pay more attention to these areas in the checklist.
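In its simplest form, the feed-forward step is a rule over upstream measurements. A minimal sketch, in which the zones, metrics, and control limits are invented examples rather than any real quality system, flags zones whose readings drifted toward a limit:

```python
CONTROL_LIMITS = {
    "door_gap_mm": (3.0, 5.0),   # measured at the fitting station
    "torque_Nm": (18.0, 22.0),   # measured at the fastening station
}
ZONE_OF = {"door_gap_mm": "left door", "torque_Nm": "rear axle"}

def zones_to_inspect(upstream_readings):
    """Return product zones whose upstream readings drifted within 10%
    of a control limit, so the inspector can scrutinise them first."""
    flagged = []
    for metric, value in upstream_readings.items():
        low, high = CONTROL_LIMITS[metric]
        margin = 0.1 * (high - low)
        if value < low + margin or value > high - margin:
            flagged.append(ZONE_OF[metric])
    return flagged

readings = {"door_gap_mm": 4.95, "torque_Nm": 20.0}
# zones_to_inspect(readings) -> ["left door"]
```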
Addressing Behavioral Complexity
Problems of this category usually manifest as quality issues, but the root cause can often be traced to workforce behavior or profile. Traditionally, organizations have addressed such problems through experienced supervisors who, as people managers, were expected to read the signs, anticipate issues, and align the manpower accordingly.
However, with constantly changing manpower and product variants, these are now complex new-age problems requiring new-age solutions.
Time-and-motion studies at the workplace map user movements around the machine and the time each activity takes, matching the available cycle time either by redistributing work or by increasing manpower at that station. Time-consuming and cumbersome as this is, the complexity increases when workload balancing must be done for teams working on a single product at the workstation. The movements of multiple resources across different sequences are difficult to track, and different users cannot be expected to follow the same footsteps every time.
Solving this issue needs a solution that monitors human motion unobtrusively, links it to the product work content at the workstation, and generates recommendations to balance the workload and even out the 'congestion.' New industrial applications such as short-range radar and visual feeds can be used to create heat maps of the workforce as they work on the product. These can be superimposed on the digital twin of the process to identify zones of congestion, which can then be fed to the line-planning function to implement corrective measures such as work redistribution or partial outsourcing of the operation.
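The heat-map step reduces to bucketing position fixes into a grid. A sketch under stated assumptions (invented coordinates, one-metre cells, and an arbitrary dwell threshold) that reports congested cells:

```python
from collections import Counter

def congestion_cells(samples, cell_size=1.0, threshold=3):
    """samples: list of (x, y) position fixes from a shift segment.
    Returns grid cells whose dwell count meets the congestion threshold."""
    counts = Counter((int(x // cell_size), int(y // cell_size))
                     for x, y in samples)
    return sorted(cell for cell, n in counts.items() if n >= threshold)

samples = [(0.2, 0.3), (0.8, 0.1), (0.5, 0.9),  # three fixes in cell (0, 0)
           (4.1, 2.2)]
# congestion_cells(samples) -> [(0, 0)]
```

Overlaying these cells on the process digital twin is what turns a raw count into a line-planning recommendation.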
With new technology coming to the shop-floor, skills of the current workforce get outdated quickly. Also, with any new hire comes the critical task of training and knowledge sharing from experienced hands. As organizations already face a shortage of manpower, releasing more hands to impart training to a larger workforce audience, possibly at different locations, becomes an even more daunting task.
Fully realizing the difficulties of, and reluctance toward, documentation, organizations are increasingly adopting AR-based workforce training mapped to relevant learning and memory needs. These AR solutions capture the minutest actions of the expert on the shop floor and can be played back by the novice in situ as a step-by-step guide. Such tools simplify the knowledge transfer process and increase worker productivity while reducing costs.
Further, in extraordinary situations such as the one we face at present, technologies such as AR offer effective and personalized support to field personnel without the need to fly specialists to multiple sites. This keeps the specialists safe while still keeping them accessible.
Key Takeaways and Actionable Insights
The shape of the future workforce will be the result of complex, changing, and competing forces. Technology, globalization, demographics, social values, and the changing personal expectations of the workforce will continue to transform and disrupt the way businesses operate, increasing complexity and radically changing where and when the future workforce works, and how work is done. While the need to constantly reskill and upskill the workforce will be humongous, using new-age techniques and technologies to enhance the effectiveness and efficiency of the existing workforce will take the spotlight.
Figure 2: The Future IIoT Workforce
Organizations will increasingly be required to:
Nonetheless, digital enablement will need to be optimally used to tackle the new normal that the COVID pandemic has set forth in manufacturing—fluctuating demands, modular and flexible assembly lines, reduced workforce, etc.
Originally posted here.
The Internet of Things is growing at breakneck speed. One report suggests that the global market for IoT will surpass $1.38 trillion by 2026 — a substantial increase from its 2020 valuation of $761.4 billion.
The IoT is nothing without IoT platforms — middleware that connects sensors, assets, data, software, and business processes. It brings all the different components of your IoT infrastructure together so your business can get every possible benefit.
There are many IoT platforms on the market, and it’s important to find the right one for your business. This can be a challenging task, with lots of complex and competing information to sift through.
In this article, we’ve put together a list of the main factors that should drive your decision when settling on an IoT platform, helping you make an informed decision that leads to the best solution for your needs.
There are many reasons to consider investing in an IoT platform. Essentially, the job of an IoT platform is to act as a ready-made framework for all your IoT infrastructure, pulling everything together and helping you start getting the benefits as quickly as possible. Here are some of the biggest advantages of a good IoT platform:
The best IoT platforms can provide a whole host of major advantages to your project and business as a whole. By providing connectivity as a service, they simplify the process of managing IoT devices with various connectivity technologies and remove the need to establish a contract with multiple network providers.
But it’s important to pick the right platform for your specific needs. Here are some things to consider to ensure you make the right choice.
Connectivity is a huge factor when it comes to IoT. Each project and organization has its own specific connectivity requirements, and this will have a direct impact on which IoT platform is the best fit.
Some IoT platforms are more specialized in certain technologies than others. Ideally, you should choose a platform that's able to orchestrate a range of different connectivity technologies like LoRaWAN, Sigfox, NB-IoT, LTE Cat. M1, 4G, 5G, and Wi-Fi.
Geographical location is also something to consider. Your IoT platform should be able to support IoT applications and devices in all the different geographical regions you need it to.
Your IoT project will almost certainly grow over time. As this technology expands and becomes more widely used, almost every business is likely to find itself using more and more IoT devices and functions across multiple use cases.
Your IoT platform should be prepared for this. Select a platform that can comfortably scale as the project grows and is fit for all IoT project stages, from just a handful of devices in one area to thousands spread across many regions.
The best IoT platforms should be able to scale across a range of different deployment models, such as:
Another major concern for IoT networks is security. Attacks on IoT devices are on the rise, with IoT devices now accounting for around 33% of infected devices. It's essential to make sure you choose an IoT platform that prioritizes security.
If you don’t take security seriously, you’re putting your IoT infrastructure at risk of cyberattacks, which could result in downtime, the loss of sensitive data, and serious reputational damage. On top of this, many companies have to comply with strict requirements when it comes to data ownership and security, which means you could face legal penalties if your data is breached.
It’s no longer enough to simply secure your business premises — in our increasingly remotely connected world, you have to keep your devices safe wherever they are. Your IoT platform should also be able to integrate with common cloud infrastructures like Google Cloud, Microsoft Azure, and Amazon AWS.
The whole point of IoT is to make your life and business processes easier. It shouldn’t add an extra layer of difficulty and complexity to your systems. The best IoT platforms are straightforward and easy to integrate with existing processes.
The main user groups to consider here are:
For both of these groups, the IoT platform should be as user-friendly as possible with minimal friction and challenges. This not only helps you get the most out of your technology but also keeps your team happy and stress-free.
It is crucial to make sure that your IoT platform can be integrated with your final application. Typically, you want the platform to have a standardized interface (REST API) that allows you to connect your end-user smart application and make use of the data for your particular business case.
Your chosen platform should also support the visualization of data during a pilot, as this helps you understand your IoT systems as closely as possible and communicate this to other members of your organization.
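To make the REST integration point concrete, here is a hypothetical example. The endpoint shape and field names below are invented for illustration; substitute the JSON schema your platform actually documents:

```python
import json

# A canned response standing in for GET /devices/{id}/readings.
SAMPLE_RESPONSE = json.dumps({
    "device_id": "sensor-042",
    "readings": [
        {"ts": "2021-06-01T10:00:00Z", "temperature_c": 21.4},
        {"ts": "2021-06-01T10:05:00Z", "temperature_c": 21.9},
    ],
})

def latest_temperature(response_body):
    """Pull the most recent temperature reading out of the JSON payload."""
    payload = json.loads(response_body)
    readings = sorted(payload["readings"], key=lambda r: r["ts"])
    return readings[-1]["temperature_c"]

# latest_temperature(SAMPLE_RESPONSE) -> 21.9
```

The value of a standardized interface like this is precisely that your end-user application only depends on the documented JSON contract, not on any one vendor's SDK.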
If there’s one thing we can be sure of when it comes to technology, it’s that constant change is unavoidable. This is a good thing for businesses and ensures constant progress and development, but when it comes to IoT systems it’s essential to prepare for this ongoing change.
Your hardware, connectivity, and applications need to be adaptable and resistant to change. Otherwise, you’ll run into issues like technological lock-in where you’re forced to use technology that is no longer sufficient for the demands of the time.
One way to ensure resistance to change is to make it possible to exchange the components of your IoT solution at any time, without negatively impacting the overall final application. This allows you to modify and upgrade your infrastructure bit-by-bit over time without major delays and downtime.
When it comes to IoT platforms, there is no one-size-fits-all answer. You need to take the time to figure out which platforms are the best fit for your unique set of needs and challenges, and pick one that can help you get the most out of your network.
Flowchart of IoT in Mining
by Vaishali Ramesh
The Internet of Things (IoT) is the extension of Internet connectivity into physical devices and everyday objects. Embedded with electronics, Internet connectivity, and other forms of hardware, these devices can communicate and interact with each other over the Internet, and they can be remotely monitored and controlled. In the mining industry, IoT is used as a means of achieving cost and productivity optimization, improving safety measures, and advancing artificial intelligence capabilities.
IoT in the Mining Industry
Considering the numerous incentives it brings, many large mining companies are planning and evaluating ways to start their digital journey and use digitalization to manage day-to-day mining operations. For instance:
Another benefit of IoT in the mining industry is its role as the underlying system facilitating the use of Artificial Intelligence (AI). From exploration to processing and transportation, AI enhances the power of IoT solutions as a means of streamlining operations, reducing costs, and improving safety within the mining industry.
Using vast amounts of data inputs, such as drilling reports and geological surveys, AI and machine learning can make predictions and provide recommendations on exploration, resulting in a more efficient process with higher-yield results.
AI-powered predictive models also enable mining companies to improve their metals processing methods through more accurate and less environmentally damaging techniques. AI can be used for the automation of trucks and drills, which offers significant cost and safety benefits.
Although there are benefits of IoT in the mining industry, implementation of IoT in mining operations has faced many challenges in the past.
Mining companies have overcome the challenge of connectivity by implementing more reliable connectivity methods and data-processing strategies to collect, transfer, and present mission-critical data for analysis. Satellite communications can play a critical role in transferring data back to control centers to provide a complete picture of mission-critical metrics. Mining companies have worked with trusted IoT satellite connectivity specialists such as Inmarsat and their partner ecosystems to ensure they extract and analyze their data effectively.
As mining operations become more connected, they will also become more vulnerable to hacking, which will require additional investment into security systems.
Following a 2016 data breach at Goldcorp that disproved the previous industry mentality that miners are not typical targets, ten mining companies established the Mining and Metals Information Sharing and Analysis Centre (MM-ISAC) in April 2017 to share cyber threats among peers.
In March 2019, one of the largest aluminum producers in the world, Norsk Hydro, suffered an extensive cyber-attack, which led to the company isolating all plants and operations as well as switching to manual operations and procedures. Several of its plants suffered temporary production stoppages as a result. Mining companies have realized the importance of digital security and are investing in new security technologies.
Digitalization of Mining Industry - Road Ahead
Many mining companies have realized the benefits of digitalization in their mines and have taken steps to implement them. Four themes are expected to be central to the digitalization of the mining industry over the next decade:
The above graph demonstrates the complexity of each digital technology and the implementation period for its widespread adoption. Various factors, such as the complexity and scalability of the technologies involved, affect the adoption rate of specific technologies and of the overall digital transformation of the mining industry.
The world can expect to witness prominent developments from the mining industry to make it more sustainable. Mining also has unfavorable impacts on communities, ecosystems, and other surroundings; with the intention of minimizing these, the power of data is being harnessed through different IoT solutions. Overall, IoT helps the mining industry shift toward resource extraction that keeps to an essential time frame and footprint.
Originally posted here.
By Adam Dunkels
When you have to install thousands of IoT devices, you need to make device installation impressively fast. Here is how to do it.
Every single IoT device out there has to be installed by someone.
Installation is the activity that requires the most attention during that device’s lifetime.
This is particularly true for large scale IoT deployments.
We at Thingsquare have been involved in many IoT products and projects. Many of these have involved large scale IoT deployments with hundreds or thousands of devices per deployment site.
In this article, we look at why installation is so important for large IoT deployments, and share six installation tactics to make installation impressively fast while being highly useful:
And these tactics are useful even if you only have a handful of devices per site, but thousands or tens of thousands of devices in total.
Installation is a necessary step of an IoT device’s life.
Someone – maybe your customers, your users, or a team of technicians working for you – will be responsible for the installation. The installer turns your device from a piece of hardware into a living thing: a valuable producer of information for your business.
But most of all, installation is an inevitable part of the IoT device life cycle.
The life cycle of an IoT device can be divided into four stages:
Two stages in the list contain the installation activity: both Install and Use.
So installation is inevitable – and important. We need to plan to deal with it.
Most devices should spend most of their lifetime in the Use stage of their life cycle.
But a device’s lifetime is different from the attention time that we need to spend on them.
Devices usually don’t need much attention in their Use stage. At this stage, they should mostly be sitting there generating valuable information.
By contrast, for the people who work with the devices, most of their attention and time will be spent in the Install stage. Since you are paying these people’s salaries, you want this stage to be as efficient as possible.
At Thingsquare, we have deployed thousands of devices together with our customers, and our customers have deployed many hundreds of thousands of devices with their customers.
These are our top six tactics to make installation fast – and useful:
After installation, you will need to maintain and troubleshoot the system. This is a normal part of the Use stage.
Photos are a goldmine of information. Particularly if it is difficult to get to the location afterward.
Make sure you take plenty of photos of each device as they are installed. In fact, you should include multiple photos in your installation checklist – more about this below.
We have been involved in several deployments where we have needed to remotely troubleshoot installations after they were installed. Having a bunch of photos of how and where the devices were installed helps tremendously.
The photos don’t need to be great. Having a low-quality photo beats having no photo, every time.
When dealing with hundreds of devices, you need to make sure that you know exactly which devices you installed, and where.
You therefore need to make it easy to identify each device. Device identification can be done in several ways, and we recommend using more than one way to identify the devices. This reduces the risk of manual errors.
The two ways we typically use are:
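Whichever two identifiers you choose, the cross-check itself is simple. A minimal sketch (serials and radio IDs invented for the example) that accepts an install only when the scanned label serial and the device's radio-reported ID agree:

```python
def verify_device_identity(scanned_serial, radio_reported_id, registry):
    """registry maps label serials to expected radio IDs; returns a
    status string the installation app can show to the installer."""
    expected = registry.get(scanned_serial)
    if expected is None:
        return "unknown serial - check the label"
    if expected != radio_reported_id:
        return "mismatch - wrong device or mislabelled"
    return "ok"

registry = {"SN-1001": "RADIO-AA", "SN-1002": "RADIO-BB"}
# verify_device_identity("SN-1001", "RADIO-AA", registry) -> "ok"
```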
Being certain about where devices were installed will make maintenance and troubleshooting much easier – particularly if it is difficult to visit the installation site.
When devices are installed, make sure to record their location.
The easiest way to do this is to take the GPS coordinates of each device as it is being deployed, preferably with the installation app, which can do this automatically (see below).
For indoor installations, exact GPS locations may be unreliable. But even for those devices, having a coarse-grained GPS location is useful.
The location is useful both when analyzing the data that the devices produce, and when troubleshooting problems in the network.
In large deployments, there will be many people involved.
Being able to trace the installation actions, as well as who took what action, is enormously useful. Sometimes just knowing the steps that were taken when installing each device is important. And sometimes you need to talk to the person who did the installation.
Determine what steps are needed to install each device, and develop a step-by-step checklist for each step.
Then turn this checklist into an app that installation personnel can run on their own phones.
Each step of each checklist should be really easy to understand, to avoid mistakes along the way. And it should be easy to go back and forth between the steps, if needed.
Ideally, the app should run on both Android and iOS, because you would like everyone to be able to use it on their own phones.
Here is an example checklist that we developed for a sensor device in a retail IoT deployment:
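The core of such a checklist app is a small state machine that steps forward, allows going back, and records what was completed. A minimal sketch, with invented step names, of how that flow might look (a real app would also persist photos, GPS, and the installer's identity to a backend):

```python
class InstallChecklist:
    def __init__(self, steps):
        self.steps = steps
        self.index = 0
        self.completed = []   # (step, note) pairs, in order

    def current(self):
        return self.steps[self.index]

    def complete_step(self, note=""):
        """Record the current step as done and advance to the next one."""
        self.completed.append((self.current(), note))
        if self.index < len(self.steps) - 1:
            self.index += 1

    def go_back(self):
        if self.index > 0:
            self.index -= 1

steps = ["Scan device label", "Photograph mounting spot",
         "Mount device", "Confirm device is online"]
checklist = InstallChecklist(steps)
checklist.complete_step("SN-1001")
# checklist.current() -> "Photograph mounting spot"
```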
Since installation costs money, we want it to be efficient.
And the best way to make a process more efficient is to measure it, and then improve it.
Since we have an installation checklist app, measuring installation time is easy – just build it into the app.
Once we know how much time each step in the installation process needs, we are ready to revise the process and improve it. We should focus on the most time-consuming step first and measure the successive improvements to make sure we get the most bang for the buck.
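Given per-step timings logged by the checklist app, finding the step to attack first is a one-liner over the averages. A sketch with made-up timing data:

```python
from statistics import mean

def slowest_step(timings):
    """timings: {step_name: [seconds, ...]} across many installs.
    Returns the step with the highest average time and that average."""
    averages = {step: mean(samples) for step, samples in timings.items()}
    step = max(averages, key=averages.get)
    return step, round(averages[step], 1)

timings = {
    "Scan device label": [20, 25, 18],
    "Photograph mounting spot": [35, 40],
    "Mount device": [120, 150, 110],
    "Confirm device is online": [60, 300, 45],  # long tail: connectivity issues
}
# slowest_step(timings) -> ("Confirm device is online", 135.0)
```

Re-running this after each process change is what turns the checklist app into a measurement tool for the installation process itself.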
Every IoT device needs to be installed, and making the installation process efficient saves attention time for everyone involved, and ultimately money.
We use our experience to solve hard problems in the IoT space, such as how to best install large IoT systems – get in touch with us to learn more!
Originally posted here.
by Stephanie Overby
What's next for edge computing, and how should it shape your strategy? Experts weigh in on edge trends and talk workloads, cloud partnerships, security, and related issues
All year, industry analysts have been predicting that edge computing – and complementary 5G network offerings – will see significant growth, as major cloud vendors deploy more edge servers in local markets and telecom providers push ahead with 5G deployments.
The global pandemic has not significantly altered these predictions. In fact, according to IDC’s worldwide IT predictions for 2021, COVID-19’s impact on workforce and operational practices will be the dominant accelerator for 80 percent of edge-driven investments and business model change across most industries over the next few years.
First, what exactly do we mean by edge? Here’s how Rosa Guntrip, senior principal marketing manager, cloud platforms at Red Hat, defines it: “Edge computing refers to the concept of bringing computing services closer to service consumers or data sources. Fueled by emerging use cases like IoT, AR/VR, robotics, machine learning, and telco network functions that require service provisioning closer to users, edge computing helps solve the key challenges of bandwidth, latency, resiliency, and data sovereignty. It complements the hybrid computing model where centralized computing can be used for compute-intensive workloads while edge computing helps address the requirements of workloads that require processing in near real time.”
Moving data infrastructure, applications, and data resources to the edge can enable faster response to business needs, increased flexibility, greater business scaling, and more effective long-term resilience.
“Edge computing is more important than ever and is becoming a primary consideration for organizations defining new cloud-based products or services that exploit local processing, storage, and security capabilities at the edge of the network through the billions of smart objects known as edge devices,” says Craig Wright, managing director with business transformation and outsourcing advisory firm Pace Harmon.
“In 2021 this will be an increasing consideration as autonomous vehicles become more common, as new post-COVID-19 ways of working require more distributed compute and data processing power without incurring debilitating latency, and as 5G adoption stimulates a whole new generation of augmented reality, real-time application solutions, and gaming experiences on mobile devices,” Wright adds.
8 key edge computing trends in 2021
Noting the steady maturation of edge computing capabilities, Forrester analysts said, “It’s time to step up investment in edge computing,” in their recent Predictions 2020: Edge Computing report. As edge computing emerges as ever more important to business strategy and operations, here are eight trends IT leaders will want to keep an eye on in the year ahead.
1. Edge meets more AI/ML
Until recently, pre-processing of data via near-edge technologies or gateways had its share of challenges due to the increased complexity of data solutions, especially in use cases with a high volume of events or limited connectivity, explains David Williams, managing principal of advisory at digital business consultancy AHEAD. “Now, AI/ML-optimized hardware, container-packaged analytics applications, frameworks such as TensorFlow Lite and tinyML, and open standards such as the Open Neural Network Exchange (ONNX) are encouraging machine learning interoperability and making on-device machine learning and data analytics at the edge a reality.”
Machine learning at the edge will enable faster decision-making. “Moreover, the amalgamation of edge and AI will further drive real-time personalization,” predicts Mukesh Ranjan, practice director with management consultancy and research firm Everest Group.
“But without proper thresholds in place, anomalies can slowly become standards,” notes Greg Jones, CTO of IoT solutions provider Kajeet. “Advanced policy controls will enable greater confidence in the actions made as a result of the data collected and interpreted from the edge.”
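One way to read the "anomalies become standards" warning: a baseline learned purely from recent data drifts if anomalous readings are folded back in, so a policy control pairs it with a fixed limit. An illustrative sketch, with invented sensor values and limits, of that two-gate check:

```python
from statistics import mean, stdev

def is_anomaly(value, baseline, hard_limit):
    """Flag a reading against both a learned baseline (3-sigma rule)
    and a fixed policy limit, so baseline drift cannot mask real
    anomalies."""
    mu, sigma = mean(baseline), stdev(baseline)
    statistical = abs(value - mu) > 3 * sigma
    policy = value > hard_limit
    return statistical or policy

baseline = [20.1, 19.8, 20.3, 20.0, 19.9]
# is_anomaly(20.2, baseline, hard_limit=25.0) -> False
# is_anomaly(26.0, baseline, hard_limit=25.0) -> True
```

The fixed `hard_limit` is the policy control: even if the learned baseline slowly absorbs abnormal readings, values past the limit are still flagged.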
2. Cloud and edge providers explore partnerships
IDC predicts a quarter of organizations will improve business agility by integrating edge data with applications built on cloud platforms by 2024. That will require partnerships across cloud and communications service providers, with some pairing up already beginning between wireless carriers and the major public cloud providers.
According to IDC research, the systems that organizations can leverage to enable real-time analytics are already starting to expand beyond traditional data centers and deployment locations. Devices and computing platforms closer to end customers and/or co-located with real-world assets will become an increasingly critical component of this IT portfolio. This edge computing strategy will be part of a larger computing fabric that also includes public cloud services and on-premises locations.
In this scenario, edge provides immediacy and cloud supports big data computing.
3. Edge management takes center stage
“As edge computing becomes as ubiquitous as cloud computing, there will be increased demand for scalability and centralized management,” says Wright of Pace Harmon. IT leaders deploying applications at scale will need to invest in tools to “harness step change in their capabilities so that edge computing solutions and data can be custom-developed right from the processor level and deployed consistently and easily just like any other mainstream compute or storage platform,” Wright says.
The traditional approach to data center or cloud monitoring won’t work at the edge, notes Williams of AHEAD. “Because of the rather volatile nature of edge technologies, organizations should shift from monitoring the health of devices or the applications they run to instead monitor the digital experience of their users,” Williams says. “This user-centric approach to monitoring takes into consideration all of the components that can impact user or customer experience while avoiding the blind spots that often lie between infrastructure and the user.”
As Stu Miniman, director of market insights on the Red Hat cloud platforms team, recently noted, “If there is any remaining argument that hybrid or multi-cloud is a reality, the growth of edge solidifies this truth: When we think about where data and applications live, they will be in many places.”
“The discussion of edge is very different if you are talking to a telco company, one of the public cloud providers, or a typical enterprise,” Miniman adds. “When it comes to Kubernetes and the cloud-native ecosystem, there are many technology-driven solutions competing for mindshare and customer interest. While telecom giants are already extending their NFV solutions into the edge discussion, there are many options for enterprises. Edge becomes part of the overall distributed nature of hybrid environments, so users should work closely with their vendors to make sure the edge does not become an island of technology with a specialized skill set.”
4. IT and operational technology begin to converge
Resiliency is perhaps the business term of the year, thanks to a pandemic that revealed most organizations’ weaknesses in this area. IoT-enabled devices (and other connected equipment) drive the adoption of edge solutions where infrastructure and applications are being placed within operations facilities. This approach will be “critical for real-time inference using AI models and digital twins, which can detect changes in operating conditions and automate remediation,” IDC’s research says.
IDC predicts that the number of new operational processes deployed on edge infrastructure will grow from less than 20 percent today to more than 90 percent in 2024 as IT and operational technology converge. Organizations will begin to prioritize not just extracting insight from their new sources of data, but integrating that intelligence into processes and workflows using edge capabilities.
Mobile edge computing (MEC) will be a key enabler of supply chain resilience in 2021, according to Pace Harmon’s Wright. “Through MEC, the ecosystem of supply chain enablers has the ability to deploy artificial intelligence and machine learning to access near real-time insights into consumption data and predictive analytics as well as visibility into the most granular elements of highly complex demand and supply chains,” Wright says. “For organizations to compete and prosper, IT leaders will need to deliver MEC-based solutions that enable an end-to-end view across the supply chain available 24/7 – from the point of manufacture or service throughout its distribution.”
5. Edge eases connected ecosystem adoption
Edge not only enables and enhances the use of IoT, but it also makes it easier for organizations to participate in the connected ecosystem with minimized network latency and bandwidth issues, says Manali Bhaumik, lead analyst at technology research and advisory firm ISG. “Enterprises can leverage edge computing’s scalability to quickly expand to other profitable businesses without incurring huge infrastructure costs,” Bhaumik says. “Enterprises can now move into profitable and fast-streaming markets with the power of edge and easy data processing.”
6. COVID-19 drives innovation at the edge
“There’s nothing like a pandemic to take the hype out of technology effectiveness,” says Jason Mann, vice president of IoT at SAS. Take IoT technologies such as computer vision enabled by edge computing: “From social distancing to thermal imaging, safety device assurance and operational changes such as daily cleaning and sanitation activities, computer vision is an essential technology to accelerate solutions that turn raw IoT data (from video/cameras) into actionable insights,” Mann says. Retailers, for example, can use computer vision solutions to identify when people are violating the store’s social distance policy.
7. Private 5G adoption increases
“Use cases such as factory floor automation, augmented and virtual reality within field service management, and autonomous vehicles will drive the adoption of private 5G networks,” says Ranjan of Everest Group. Expect more maturity in this area in the year ahead, Ranjan says.
8. Edge improves data security
“Data efficiency is improved at the edge compared with the cloud, reducing internet and data costs,” says ISG’s Bhaumik. “The additional layer of security at the edge enhances the user experience.” Edge computing is also not dependent on a single point of application or storage, Bhaumik says. “Rather, it distributes processes across a vast range of devices.”
As organizations adopt DevSecOps and take a “design for security” approach, edge is becoming a major consideration for the CSO to enable secure cloud-based solutions, says Pace Harmon’s Wright. “This is particularly important where cloud architectures alone may not deliver enough resiliency or inherent security to assure the continuity of services required by autonomous solutions, by virtual or augmented reality experiences, or big data transaction processing,” Wright says. “However, IT leaders should be aware of the rate of change and relative lack of maturity of edge management and monitoring systems; consequently, an edge-based security component or solution for today will likely need to be revisited in 18 to 24 months’ time.”
Originally posted here.
Over the years, IoT has made its way into complex consumer markets and made millions of lives easier and smarter. Without a doubt, the industry holds enormous potential for upcoming entrepreneurs to introduce innovative solutions. In fact, the number of IoT start-ups grew by 27% from 2019 to mid-2020.
While many of these IoT projects have made the cut, others are struggling to realize the intended RoI. Tempting as the space is, it remains a highly challenging one in which to build sustainable companies.
While a new organization is a collaborative effort of many people, it is the leaders who hold the vision and spearhead the transformation. For those setting out on their tech start-up journeys, here’s what you can learn from the best.
The rule for achieving your KPIs is simple – never ignore them. Start-ups that planned around their KPIs were able to meet them quickly and seamlessly. From product ideation to distributing budgets across marketing, development, customer acquisition, and retention, the complete lifecycle should be evaluated periodically through metrics such as Customer Acquisition Cost (CAC), Customer Retention Rate (CRR), and Lifetime Value (LTV).
Customer Retention Rate (CRR) is the share of customers a business is able to retain over a given period of time. High retention rates are a clear sign of a successful product and satisfied customers, while high attrition rates mean the opposite. Lifetime Value (LTV) is the net value of a customer to the business. When these metrics are evaluated in relation to each other, such as the LTV/CAC ratio, the overall capital efficiency of a company can be estimated.
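To make these metrics concrete, here is a minimal sketch of how the KPIs described above can be computed; the figures in the example are illustrative, not drawn from any real start-up.

```python
def customer_acquisition_cost(marketing_spend, customers_acquired):
    """CAC: total acquisition spend divided by customers gained."""
    return marketing_spend / customers_acquired

def customer_retention_rate(customers_at_end, new_customers, customers_at_start):
    """CRR: percentage of existing customers retained over the period."""
    return (customers_at_end - new_customers) / customers_at_start * 100

def ltv_cac_ratio(lifetime_value, cac):
    """LTV/CAC: a rough gauge of capital efficiency (3+ is often cited as healthy)."""
    return lifetime_value / cac

cac = customer_acquisition_cost(marketing_spend=50_000, customers_acquired=400)
crr = customer_retention_rate(customers_at_end=950, new_customers=150, customers_at_start=1000)
ratio = ltv_cac_ratio(lifetime_value=600, cac=cac)

print(f"CAC: ${cac:.2f}, CRR: {crr:.1f}%, LTV/CAC: {ratio:.1f}")
```

Evaluated periodically, say quarterly, a falling CRR or a shrinking LTV/CAC ratio is exactly the kind of early signal the lifecycle review above is meant to catch.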
IoT enables you to take a step further in tracking KPIs:
These advanced indicators can directly help you in reducing expenses and increase revenues by improving customer experience.
Why are these important? Entrepreneurs who stayed focused on meeting these KPIs have seen a 10x increase in business efficiency. This is an important takeaway for budding entrepreneurs who must justify their investments periodically. Since CAC has increased by 50% over the past few years, not ignoring performance KPIs is the foremost lesson for every new leader.
IoT is not the same as it was five years ago. In fact, it may not be the ‘new technology on the block’ anymore. It is continuously evolving, and start-ups have no choice but to keep experimenting with newer builds and processes. IoT products must keep upgrading, for example by embracing technologies such as edge computing or by anonymizing data transfers. Likewise, project owners can improve their development process by formally collaborating with other companies. The ecosystem is a mesh, and more hands help simplify it. So whether it is outsourcing resourcing requirements to a partner or outsourcing end-to-end product development, start-ups must weigh their choices and utilize the available expertise optimally.
A few entrepreneurs have resolved this complexity by treading the middle path. They sensed that the risk of not embracing change is greater than the risk of failing. Budding entrepreneurs must therefore understand that experimentation doesn’t have to replace existing processes. It can be an additional vertical committed to embracing contemporary product offerings or technologies.
Despite the world being restricted indoors due to Covid, the following tech entrepreneurs have brilliantly led their workforce and achieved impressive results.
Armis Security ventured into, and mastered, a market most companies are scared to try – IoT cybersecurity. Led by the hugely ambitious Yevgeny Dibrov, Armis is a security platform that discovers devices across the network, analyzes their behavior, and identifies risks. For an industry plagued by cybersecurity threats, Armis is a huge reassurance. The company has a line-up of customers across sectors such as healthcare, automotive, finance, and manufacturing.
As the start-up approaches its fifth year, CEO Yevgeny Dibrov says: “As companies accelerate their digital transformation initiatives, securely enable employees to work from home long-term, and adopt 5G, we are seeing an explosion of connected devices. At the same time, this uptick has increased the risk profile for businesses, especially around ransomware attacks, which is driving even more demand for our industry-leading agentless device security platform.”
When most start-ups were swaying in the hype of IoT, Ioterra foresaw the complications and seized the opportunity to close a huge gap in the IoT ecosystem – the challenge of quickly sourcing reliable IoT service partners and other resources needed for a successful IoT initiative. Unlike other technology markets, IoT is a rare space whose sourcing spans all walks of technology – hardware, software, and wireless communications. Besides delaying projects, sourcing difficulties lead to cost overheads. An IoT consultant himself, Daniel, along with his team, created a digital marketplace that enables project owners to seek sourcing assistance based on their business model, type, and sector.
Daniel says, “Startups are advised to ensure a minimum of 12-18 months of runway. The most important reasoning behind this is that you will invariably pivot 2-3 times before you get it right, and you need to survive until then. Unless you watch the KPIs regularly and pivot quickly, adapting to what you see on the ground, you cannot build a growing startup.”
Technologies from all sectors and markets have started to embrace Web 3.0 and Helium is IoT’s big bet. It is a platform that empowers businesses to develop connectivity for devices and sensors over a peer-to-peer wireless network. CEO Amir Haleem who was always ambitious about wireless coverage for low power IoT devices aims at bringing more projects on the stage.
He says, “We’ve worked hard to bring native geo-location to everything that connects to the network. This opens up all sorts of interesting use cases that haven’t been seen yet, which have otherwise been impossible to build.”
Ultimately, no start-up can grow without the mindset to win. Although most tech leaders ensure a learning culture within the organization, the motivation is mostly missing at the employee level. This largely happens when leaders don’t communicate their vision to the workforce and keep them restricted to the task assignments. The ethos to grow has to reflect at the individual level and that’s the hack to organizational success that many don’t get right.
Moreover, missing KPIs and failing to retrospect on those failures together with your teams is a big flaw. In a startup environment where the team structure is mostly lean, entrepreneurs must share quarterly progress with everyone. Besides keeping everyone aligned on the expected outcomes, such sessions surface innovative ideas for achieving results more efficiently. Upcoming entrepreneurs should therefore ensure a work culture that acknowledges creative inputs.
Motivated employees with a growth mindset, diligent tracking of KPIs and quick adaptability to change lay a solid foundation for success.
Recently I've written quite a bit about IoT, and one thing you may have picked up on is that the Internet of Things is made up of a lot of very large numbers.
For starters, the number of connected things is measured in the tens of billions, approaching hundreds of billions. Then, behind that very large number is an even bigger one: the amount of data these billions of devices are predicted to generate.
As FutureIoT pointed out, IDC forecasted that the amount of data generated by IoT devices by 2025 is expected to be in excess of 79.4 zettabytes (ZB).
A zettabyte is a very large number indeed, but how big? How can you get your head around it? Does this help...?
A zettabyte is 1,000,000,000,000,000,000,000 bytes. Hmm, that's still not very easy to visualise.
So let's think of it in terms of London buses. Let's imagine a byte is represented as a human on a bus. A London bus can take 80 people, so you'd need 993 quintillion buses to accommodate 79.4 zettahumans.
I tried to work out how long a line of 993 quintillion buses would be. Relating it to the distance to the Moon, Mars, or the Sun wasn't doing it justice; the only comparable scale is the size of the Milky Way. Even then, our 79.4 zettahumans lined up in London buses would stretch across the entire Milky Way ... and a fair bit further!
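The bus arithmetic is easy to check. The sketch below assumes an 11-metre double-decker for the length comparison and a ~100,000-light-year Milky Way; both figures are our own assumptions for illustration, not from the article.

```python
ZETTABYTE = 10**21
PEOPLE_PER_BUS = 80
BUS_LENGTH_M = 11.0            # assumed length of a London double-decker
MILKY_WAY_DIAMETER_M = 9.5e20  # ~100,000 light-years

bytes_total = 79.4 * ZETTABYTE          # one "zettahuman" per byte
buses = bytes_total / PEOPLE_PER_BUS    # ≈ 993 quintillion buses
convoy_length_m = buses * BUS_LENGTH_M

print(f"Buses needed: {buses / 1e18:.1f} quintillion")
print(f"Convoy vs Milky Way diameter: {convoy_length_m / MILKY_WAY_DIAMETER_M:.1f}x")
```

Which is where the "across the Milky Way and a fair bit further" comes from: the convoy spans the galaxy's diameter more than eleven times over.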
Everything we do has an impact on the planet. Just by reading this article, you're generating 0.2 grams of Carbon Dioxide (CO2) emissions per second ... so I'll try to keep this short.
Data from the Stanford Magazine suggests that every 100 gigabytes of data stored in the Cloud could generate 0.2 tons of CO2 per year. At that rate, storing 79.4 zettabytes of data in the Cloud could be responsible for the production of 158.8 billion tons of greenhouse gases.
Putting that number into context, using USA Today numbers, the total emissions for China, USA, India, Russia, Japan and Germany accounted for a little over 21 billion tons in 2019.
So if we just go ahead and let all the IoT devices stream data to the Cloud, those billions of little gadgets would indirectly generate more than seven times the emissions of the six most industrialized countries, combined.
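That "more than seven times" figure works out as follows; a quick sketch using only the numbers quoted above.

```python
ZB_IN_GB = 10**12              # 1 zettabyte = 10^12 gigabytes
TONS_CO2_PER_100GB_YEAR = 0.2  # Stanford Magazine estimate quoted above

data_gb = 79.4 * ZB_IN_GB
cloud_emissions_tons = data_gb / 100 * TONS_CO2_PER_100GB_YEAR

six_countries_tons = 21e9  # China, USA, India, Russia, Japan, Germany (2019)

print(f"Cloud storage: {cloud_emissions_tons / 1e9:.1f} billion tons CO2/yr")
print(f"That is {cloud_emissions_tons / six_countries_tons:.1f}x the six biggest emitters")
```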
As mentioned in a previous article, not all data generated by IoT devices needs to be stored in the Cloud.
Speaking with an expert in data storage, ObjectBox, they say their users on average cut their Cloud data storage by 60%. So how does that work, then?
The term "Edge" refers to the edge of the network, in other words the last piece of equipment or thing connected to the network closest to the point of usage.
Let me illustrate with a rather over-simplified diagram.
In an article about computer vision and AI on the edge, I talked about how vast amounts of network data could be saved if the cameras themselves could detect what an important event was, and to just send that event over the network, not the entire video stream.
In that example, only the key events and metadata, like the identification marks of a vehicle crossing a stop light, needed to be transmitted across the network. However, it is important to keep the raw content at the edge so it can be used for post-processing, for further training of the AI, or even to be retrieved at a later date, e.g. by law enforcement.
Another example could be sensors used to detect gas leaks, seismic activity, fires, or broken glass. These sensors capture volumes of data each second, but they only need to alert someone when something happens - detection of abnormal gas levels, a tremor, fire, or a smashed window.
Those alerts are the primary purpose of those devices, but the data in between those events can also hold significant value. In this instance, keeping it locally at the edge, but having it as and when needed is an ideal way to reduce network traffic, reduce Cloud storage and save the planet (well, at least a little bit).
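The pattern in the last two examples - retain raw readings at the edge, transmit only events - can be sketched in a few lines. The threshold and buffer size here are illustrative assumptions, not values from any particular device.

```python
from collections import deque

class EdgeSensor:
    """Buffers raw readings locally; emits an alert only for abnormal values."""

    def __init__(self, threshold, buffer_size=1000):
        self.threshold = threshold
        self.raw = deque(maxlen=buffer_size)  # raw data stays at the edge

    def process(self, reading):
        self.raw.append(reading)  # always retained locally, available on demand
        if reading > self.threshold:
            return {"event": "alert", "value": reading}  # only this crosses the network
        return None  # nothing transmitted

sensor = EdgeSensor(threshold=50.0)
alerts = [a for r in [12.0, 48.5, 71.2, 30.1, 55.0] if (a := sensor.process(r))]
print(f"{len(sensor.raw)} readings kept locally, {len(alerts)} alerts sent")
```

Five readings are buffered at the edge, but only two cross the network - the rest can still be fetched later if an investigation needs the full history.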
Keeping your data at the edge is a great way to save costs and increase performance, but you still want to be able to get access to it, when you need it.
ObjectBox have created not just one of the most efficient ways to store data at the edge, but they've also built a sophisticated and powerful method to synchronise data between edge devices, the Cloud and other edge devices.
Fog Computing (which is computing that happens between the Cloud and the Edge) requires data to be exchanged with devices connected to the edge, but without going all the way to/from the servers in the Cloud.
In the article on making smarter, safer cities, I talked about how by having AI-equipped cameras share data between themselves they could become smarter, more efficient.
A solution like that could be using ObjectBox's synchronisation capabilities to efficiently discover and collect relevant video footage from various cameras to help either identify objects or even train the artificial intelligence algorithms running on the AI-equipped cameras at the edge.
Edge computing has a lot of benefits to offer, in this article I've just looked at what could often be overlooked - the cost of transferring data. I've also not really delved into the broader benefits of ObjectBox's technology, for example, from their open source benchmarks, ObjectBox seems to offer a ten times performance benefit compared to other solutions out there, and is being used by more than 300,000 developers.
The team behind ObjectBox also built technologies currently used by internet heavy-weights like Twitter, Viber, and Snapchat, so they seem to be doing something right, and if they can really cut network traffic by 60%, they could be one of the sustainable technology companies to watch.
Originally posted here.
Edge Impulse has joined 1% for the Planet, pledging to donate 1% of our revenue to support nonprofit organizations focused on the environment. To complement this effort we launched the ElephantEdge competition, aiming to create the world’s best elephant tracking device to protect elephant populations that would otherwise be impacted by poaching. In a similar vein, this blog will detail how Lacuna Space, Edge Impulse, a microcontroller, and LoRaWAN can promote the conservation of endangered species by monitoring bird calls in remote areas.
Over the past years, The Things Network has worked on the democratization of the Internet of Things, building a global, crowdsourced LoRaWAN network carried by the thousands of users operating their own gateways worldwide. Thanks to Lacuna Space’s satellite constellation, the network coverage goes one step further. Lacuna Space uses LEO (Low-Earth Orbit) satellites to provide LoRaWAN coverage at any point around the globe. Messages received by satellites are routed to ground stations and forwarded to LoRaWAN service providers such as TTN. This technology can benefit several industries and applications: tracking a vessel not only in harbors but across the oceans, or monitoring endangered species in remote areas. All that with only 25 mW of transmit power (the ISM band limit) to send a message to the satellite. This is truly amazing!
Most of these devices are typically simple, just sending a single temperature value, or other sensor reading, to the satellite - but with machine learning we can track much more: what devices hear, see, or feel. In this blog post we'll take you through the process of deploying a bird sound classification project using an Arduino Nano 33 BLE Sense board and a Lacuna Space LS200 development kit. The inferencing results are then sent to a TTN application.
Note: Access to the Lacuna Space program and dev kit is limited to a closed group at the moment. Get in touch with Lacuna Space for hardware and software access. The technical details to configure your Arduino sketch and TTN application are available in our GitHub repository.
Our bird sound model classifies house sparrow and rose-ringed parakeet species with a 92% accuracy. You can clone our public project or make your own classification model following our different tutorials such as Recognize sounds from audio or Continuous Motion Recognition.
Once you have trained your model, head to the Deployment section, select the Arduino library and Build it.
Import the library within the Arduino IDE, and open the microphone continuous example sketch. We made a few modifications to this example sketch to interact with the LS200 dev kit: we added a new UART link and we transmit classification results only if the prediction score is above 0.8.
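The Arduino sketch itself is C++ and lives in the GitHub repository, but the gating logic we added can be illustrated in a few lines of Python: transmit a classification result toward the satellite only when the top score clears the 0.8 threshold. The label names below come from our example model; the payload format is our own illustrative choice, not the exact format used by the sketch.

```python
CONFIDENCE_THRESHOLD = 0.8  # matches the 0.8 cutoff described above

def payload_for_uplink(classification):
    """Return a compact payload if the best class clears the threshold, else None."""
    label, score = max(classification.items(), key=lambda kv: kv[1])
    if score < CONFIDENCE_THRESHOLD:
        return None  # too uncertain: send nothing, save precious airtime
    return f"{label}:{score:.2f}"

print(payload_for_uplink({"house_sparrow": 0.92, "rose_ringed_parakeet": 0.05, "noise": 0.03}))
print(payload_for_uplink({"house_sparrow": 0.40, "rose_ringed_parakeet": 0.35, "noise": 0.25}))
```

Gating on confidence matters here because satellite uplink windows are scarce and the 25 mW budget leaves no room for chatty, low-value transmissions.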
Connect with the Lacuna Space dashboard by following the instructions in our application’s GitHub ReadMe. Using a web tracker, you can determine the next time a Lacuna Space satellite will fly over your location; you can then receive the signal through your The Things Network application and view the inferencing results of the bird call classification:
No Lacuna Space development kit yet? No problem! You can already start building and verifying your ML models on the Arduino Nano 33 BLE Sense or one of our other development kits, test it out with your local LoRaWAN network (by pairing it with a LoRa radio or LoRa module) and switch over to the Lacuna satellites when you get your kit.
Originally posted on the Edge Impulse blog by Aurelien Lequertier - Lead User Success Engineer at Edge Impulse, Jenny Plunkett - User Success Engineer at Edge Impulse, & Raul James - Embedded Software Engineer at Edge Impulse
Today, forward-thinking companies across industries are implementing digital twin technology in increasingly fascinating and ground-breaking ways. With Internet of Things (IoT) technology improving every day and more and more compute power readily available to organizations of all sizes, the possibilities of what you can do with digital twin technology are only as limited as your imagination.
A digital twin is a virtual representation of a physical asset that is practically indistinguishable from its physical counterpart. It is made possible thanks to IoT sensors that gather data from the physical world and send it to be virtually reconstructed. This data includes design and engineering details that describe the asset’s geometry, materials, components, and behavior or performance.
When combined with analytics, digital twin data can unlock hidden value for an organization and provide insights about how to improve operations, increase efficiency or discover and resolve problems before the real-world asset is affected.
It’s critical to involve stakeholders across the product value chain in your design and implementation. Each department faces diverse business challenges in their day-to-day operations, and a digital twin provides ready solutions to problems such as the inability to coordinate across end-to-end supply chain processes, minimal or no cross-functional collaboration, the inability to make data-driven decisions, or clouded visibility across the supply chain. Decision-makers at each level of the value chain have extensive knowledge of critical and practical challenges. Including their inputs will ensure a better and more efficient design of the digital twin and yield more valuable, relevant insights.
Standardized and well-documented design practices help organizations communicate ideas across departments, or across the globe, and make it easier for multiple users of the digital twin to build or alter the model without destroying existing components or repeating work. Best-in-class modelling practices increase transparency while simplifying and streamlining collaborative work.
Data from multiple sources—both internal and external—is an essential part of creating realistic and helpful simulations. 3D modeling and geometry is sufficient to show how parts fit together and how a product works, but more input is required to model how various faults or errors might occur somewhere in the product’s lifecycle. Because many errors and problems can be nearly impossible to accurately predict by humans alone, a digital twin needs a vast amount of data and a robust analytics program to be able to run algorithms to make accurate forecasts and prevent downtime.
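One way a twin’s analytics can catch problems humans would miss is to compare live telemetry against the values the twin predicts and flag large residuals. The sketch below is a deliberately simple z-score check on residuals, an illustration of the idea rather than a production fault model.

```python
from statistics import mean, stdev

def detect_anomalies(expected, observed, sigma=3.0):
    """Indices where observed telemetry deviates strongly from the twin's prediction."""
    residuals = [o - e for e, o in zip(expected, observed)]
    mu, sd = mean(residuals), stdev(residuals)
    return [i for i, r in enumerate(residuals) if abs(r - mu) > sigma * sd]

# The twin predicts a steady 10.0; the physical asset briefly spikes to 15.0.
expected = [10.0] * 20
observed = [10.0] * 20
observed[7] = 15.0
print(detect_anomalies(expected, observed))  # → [7]
```

A real deployment would replace the constant baseline with the twin’s simulated output and feed flagged indices into a remediation workflow, but the residual-comparison structure is the same.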
Digital twins implemented using proprietary design software have a risk of locking owners into a single vendor, which ties the long-term viability of the digital twin to the longevity of the supplier’s product. This risk is especially significant for assets with long lifecycles such as buildings, industrial machinery, airplanes, etc., since the lifecycles of these assets are usually much longer than software lifecycles. This proprietary dependency only becomes riskier and less sustainable over time. To overcome these risks, IT architects and digital twin owners need to carefully set terms with software vendors to ensure data compatibility is maintained and vendor lock-in can be avoided.
Digital twin implementation requires an extraordinary investment of time, capital, and engineering might, and as with any project of this scale, there are several common pitfalls to implementation success.
Although it’s tempting to try and repurpose a digital twin platform, doing so can lead to incorrect data at best and catastrophic mistakes at worst. Each digital twin is completely unique to a part or machine, therefore assets with unique operating conditions and configurations cannot share digital twin platforms.
In the long run, a digital twin replica of your entire production line or building is possible and could provide incredible insights, but it is a mistake to try and deploy digital twins for all of your pieces of equipment or programs all at once. Not only is doing too much, too fast costly, but it might cause you to rush and miss critical data and configurations along the way. Rather than rushing to do it all at once, perfect a few critical pieces of machinery first and work your way up from there.
Data collected in the field is subject to quality errors due to human mistakes or duplicate entries. The insights your digital twin provides you are only as valuable as the data it runs off of. Therefore, it is imperative to standardize data collection practices across your organization and to regularly cleanse your data to remove duplicate and erroneous entries.
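Standardized cleansing can be as simple as a deduplication-and-range-check pass before readings reach the twin. A minimal sketch follows; the record fields and the plausible temperature range are illustrative assumptions.

```python
def cleanse(records, valid_range=(-40.0, 125.0)):
    """Drop duplicate and physically implausible sensor readings."""
    seen, clean = set(), []
    lo, hi = valid_range
    for rec in records:
        key = (rec["sensor_id"], rec["timestamp"])
        if key in seen or not lo <= rec["value"] <= hi:
            continue  # duplicate entry or out-of-range glitch
        seen.add(key)
        clean.append(rec)
    return clean

raw = [
    {"sensor_id": "t1", "timestamp": 100, "value": 21.5},
    {"sensor_id": "t1", "timestamp": 100, "value": 21.5},   # duplicate entry
    {"sensor_id": "t1", "timestamp": 101, "value": 999.0},  # sensor glitch
    {"sensor_id": "t2", "timestamp": 100, "value": 19.8},
]
print(len(cleanse(raw)))  # → 2
```

Running such a pass on a schedule, with the same rules everywhere data is collected, is one concrete way to enforce the organization-wide standardization described above.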
If your IoT devices do not speak a common language, miscommunications can muddy your processes and compromise your digital twin initiative. Build an IT framework that allows your IoT devices to communicate with one another seamlessly to ensure success.
As mentioned earlier in this eBook, a successful digital twin strategy includes users from across your product value chain. It is critical that your users understand and appreciate the value your digital twin brings to them individually and to your organization as a whole. Lack of buy-in due to skepticism, lack of confidence, or resistance can lead to a lack of user participation, which can undermine all of your efforts.
Each digital twin is unique and completely separate in its function and end-goal from others on the market, which can make measuring success challenging. Depending on the level of the twin implemented, businesses need to create KPIs for each individual digital twin as it relates to larger organizational goals.
The configuration of digital twins is determined by the type of input data, number of data sources and the defined metrics. The configuration determines the value an organization can extract from the digital twin. Therefore, a twin with a higher configuration can yield better predictions than can a twin with a lower configuration. The reality is that success can be relative, and it is impossible to compare the effectiveness of two different digital twins side by side.
It’s possible — probable even — that in the future all people, enterprises, and even cities will have a digital twin. With the enormous growth predicted in the digital twin market in the coming years, it’s evident that the technology is here to stay. The possible applications of digital twins are truly limitless, and as IoT technology becomes more advanced and widely accessible, we’re likely to see many more innovative and disruptive use cases.
However, a technology with this much potential must be carefully and thoughtfully implemented in order to ensure its business value and long-term viability. Before embracing a digital twin, an organization must first audit its maturity, standardize processes, and prepare its culture and staff for this radical change in operations. Is your organization ready?
Originally posted here.
In 2020 we saw retailers hard hit by the economic effects of the COVID-19 pandemic, with dozens of retailers, Neiman Marcus, J.C. Penney, and Brooks Brothers to name a few, declaring bankruptcy. During the unprecedented chaos of lockdowns and social distancing, consumers accelerated their shift to online shopping. Retailers like Target and Best Buy saw online sales double, while Amazon’s e-commerce sales grew 39 percent. Retailers also navigated supply chain disruptions due to COVID-19, climate change events, trade tensions, and cybersecurity events.
After the last twelve tumultuous months, what will 2021 bring for the retail industry? I spoke with Microsoft Azure IoT partners to understand how they are planning for 2021 and compiled insights about five retail trends. One theme we’re seeing is a focus on efficiency. Retailers will look to pre-configured digital platforms that leverage cloud-based technologies including the Internet of Things (IoT), artificial intelligence (AI), and edge computing to meet their business goals.
In 2021, retailers will increase efficiency by empowering frontline workers with real-time data. Retail employees will be able to respond more quickly to customers and expand their roles to manage curbside pickups, returns, and frictionless kiosks.
In H&M Mitte Garten in Berlin, H&M empowered employee ambassadors with fashionable bracelets connected to the Azure cloud. Ambassadors were able to receive real-time requests via their bracelets when customers needed help in fitting rooms or at a cash desk. The ambassadors also received visual merchandising instructions and promotional updates.
Through the app built on Microsoft partner Turnpike’s wearable SaaS platform leveraging Azure IoT Hub, these frontline workers could also communicate with their peers or their management team during or after store hours. With the real-time data from the connected bracelets, H&M ambassadors were empowered to deliver best-in-class service.
Carl Norberg, Founder of Turnpike, explained, “We realized that by connecting store IoT sensors, POS systems, and AI cameras, store staff can be empowered to interact at the right place at the right time.”
Livestreaming has been exploding in China as influencers sell through their social media channels. Forbes recently projected that nearly 40 percent of China’s population will have viewed livestreams during 2020.2 Retailers in the West are starting to leverage live stream technology to create innovative omnichannel solutions.
For example, Kjell & Company, one of Scandinavia’s leading consumer electronics retailers, is using a solution from Bambuser and Ombori called Omni-queue built on top of the Ombori Grid. Omni-queue enables store employees to handle a seamless combination of physical and online visitors within the same queue using one-to-one live stream video for online visitors.
Kjell & Company ensures e-commerce customers receive the same level of technical expertise and personalized service they would receive in one of their physical locations. Omni-queue also enables its store employees to be utilized highly efficiently with advanced routing and knowledge matching.
Maryam Ghahremani, CEO of Bambuser, explained, “Live video shopping is the future, and we are so excited to see how Kjell & Company has found a use for our one-to-one solution.” Martin Knutson, CTO of Kjell & Company, added, “With physical store locations heavily affected due to the pandemic, offering a new and innovative way for customers to ask questions—especially about electronics—will be key to Kjell’s continued success in moving customers online.”
In 2021, retailers will continue experimenting with dark stores—traditional retail stores that have been converted to local fulfillment centers—and micro-fulfillment centers. These supply chain innovations will increase efficiency by bringing products closer to customers.
Microsoft partner Attabotics, a 3D robotics supply chain company, works with an American luxury department store retailer to reduce costs and delivery time using a micro-fulfillment center. Attabotics’ unique use of both horizontal and vertical space reduces warehouse needs by 85 percent. Attabotics’ structure and robotic shuttles leveraged Microsoft Azure Edge Zones, Azure IoT Central, and Azure Sphere.
The luxury retailer leverages the micro-fulfillment center to package and ship multiple beauty products together. As a result, customers experience faster delivery times. The retailer also reduces costs related to packaging, delivery, and warehouse space.
Scott Gravelle, Founder, CEO, and CTO of Attabotics explained, “Commerce is at a crossroads, and for retailers and brands to thrive, they need to adapt and take advantage of new technologies to effectively meet consumers’ growing demands. Supply chains have not traditionally been set up for e-commerce. We will see supply chain innovations in automation and modulation take off in 2021 as they bring a wider variety of products closer to the consumer and streamline the picking and shipping to support e-commerce.”
What will this look like? Cognizant’s recent work with an athletic apparel retailer offers a blueprint. During the peak holiday season, the retailer needed to protect its expanding warehouse workforce while minimizing absenteeism. To implement physical distancing and other safety measures, the retailer leveraged Cognizant’s Safe Buildings solution built with Azure IoT Edge and IoT Hub services.
With this solution, employees maintained physical distancing using smart wristbands. When two wristbands came within a pre-defined distance of each other for more than a pre-defined time, both bands buzzed to reinforce safe behaviors. The result was nearly 98 percent distancing compliance in the initial pilot. As the retailer plans to scale up its workforce at other locations, it is considering additional safety modules.
“For organizations to thrive during and post-pandemic, enterprise-grade workplace safety cannot be compromised. Real-time visibility of threats is providing essential businesses an edge in minimizing risks proactively while building employee trust and empowering productivity in a safer workplace,” said Rajiv Mukherjee, Cognizant’s IoT Practice Director for Retail and Consumer Goods.
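The wristband behavior described above (buzz when two bands stay within a pre-defined distance for longer than a pre-defined time) can be sketched as follows. The threshold values and class name are illustrative assumptions, not details from the Cognizant pilot:

```python
import time

# Hypothetical thresholds; the article does not give the actual values.
DISTANCE_THRESHOLD_M = 2.0   # the "pre-defined distance"
DWELL_THRESHOLD_S = 10.0     # the "pre-defined time"

class ProximityMonitor:
    """Decide when two wristbands should buzz to reinforce distancing."""

    def __init__(self, distance_m=DISTANCE_THRESHOLD_M, dwell_s=DWELL_THRESHOLD_S):
        self.distance_m = distance_m
        self.dwell_s = dwell_s
        self._close_since = {}   # (band_a, band_b) -> time first seen too close

    def update(self, band_a, band_b, distance_m, now=None):
        """Record a distance reading; return True if both bands should buzz."""
        now = time.time() if now is None else now
        pair = tuple(sorted((band_a, band_b)))
        if distance_m > self.distance_m:
            self._close_since.pop(pair, None)   # pair separated: reset the timer
            return False
        first_seen = self._close_since.setdefault(pair, now)
        return (now - first_seen) >= self.dwell_s
```

The timer-reset on separation is what turns a raw distance reading into the "for more than a pre-defined time" rule.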
In 2021, retailers will ramp up the adoption of intelligent edge solutions to optimize inventory management with real-time data. Most retailers have complex inventory management systems. However, no matter how good the systems are, there can still be data gaps due to grocery pick-up services, theft, and sweethearting. The key to addressing these gaps is to combine real-time data from applications running on edge cameras and other edge devices in the physical store with backend enterprise resource planning (ERP) data.
Seattle Goodwill worked with Avanade to implement a new platform based on Microsoft Dynamics across its 24 stores. The new system provided near real-time visibility into the movement of goods from the warehouses to the stores.
Rasmus Hyltegård, Director of Advanced Analytics at Avanade explained, “To ensure inventory moves quickly off the shelves, retailers can combine real-time inventory insights from Avanade’s smart inventory accelerator with other solutions across the customer journey to meet customer expectations.” Hyltegård continued, “Customers can check online to find the products they want, find the stores with product in stock, and gain insight into which stores have the shortest queues, which is important during the pandemic and beyond. Once a customer is in the store, digital signage allows for endless aisle support.”
The new year 2021 holds a wealth of opportunities for retailers. We foresee retail leaders reimagining their businesses by investing in platforms that integrate IoT, AI, and edge computing technologies. Retailers will focus on increasing efficiencies to reduce costs. Modular platforms supported by an ecosystem of strong partner solutions will empower frontline workers with data, augment omnichannel fulfillment with dark stores and micro-fulfillment, leverage livestream video to enhance omnichannel, prioritize warehouse worker safety, and optimize inventory management with real-time data.
Originally posted here.
Security has long been a worry for Internet of Things projects, and for many organizations with active or planned IoT deployments, security concerns have hampered digital ambitions. By implementing IoT security best practices, however, organizations can minimize those risks.
Technological fragmentation is not just one of the biggest barriers to IoT adoption, but it also complicates the goal of securing connected devices and related services. With IoT-related cyberattacks on the rise, organizations must become more adept at managing cyber-risk or face potential reputational and legal consequences. This article summarizes best practices for enterprise and industrial IoT projects.
Key takeaways from this article include the following:
Fragmentation and security have long been two of the most significant barriers to Internet of Things adoption. The two challenges are also closely related.
Despite the Internet of Things (IoT) moniker, which implies a synthesis of connected devices, IoT technologies vary considerably based on their intended use. Organizations deploying IoT thus rely on an array of connectivity types, standards and hardware. As a result, even a simple IoT device can pose many security vulnerabilities, including weak authentication, insecure cloud integration, and outdated firmware and software.
An IoT World Today August 2020 survey revealed data security as the top technology hurdle for IoT deployments, selected by 46% of respondents.
Fortunately, IoT security best practices can help organizations reduce the risks facing their deployments and broader digital transformation initiatives. These same best practices can also reduce legal liability and protect an organization’s reputation.
But to be effective, an IoT-focused security strategy requires a broad view that encompasses the entire life cycle of an organization’s connected devices and projects in addition to relevant supply chains.
Asset management is a cornerstone of effective cyber defence. Organizations should identify which processes and systems need protection. They should also strive to assess the risk cyber attacks pose to assets and their broader operations.
In terms of enterprise and industrial IoT deployments, asset awareness is frequently spotty. It can be challenging given the array of industry verticals and the lack of comprehensive tools to track assets across those verticals. But asset awareness also demands a contextual understanding of the computing environment, including the interplay among devices, personnel, data and systems, as the National Institute of Standards and Technology (NIST) has observed.
There are two fundamental questions when creating an asset inventory: What is on my network? And what are these assets doing on my network?
Answering the latter requires tracking endpoints’ behaviours and their intended purpose from a business or operational perspective. From a networking perspective, asset management should involve more than counting networking nodes; it should focus on data protection and building intrinsic security into business processes.
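The two inventory questions above can be modeled minimally as follows. The device-record fields and the "intended purpose" baseline are illustrative assumptions, not a prescribed schema:

```python
from dataclasses import dataclass, field

@dataclass
class Asset:
    """One networked device, plus a baseline of its intended behaviour."""
    device_id: str
    ip: str
    device_type: str                                        # e.g. "camera", "plc"
    allowed_destinations: set = field(default_factory=set)  # intended purpose

class AssetInventory:
    def __init__(self):
        self._assets = {}

    def register(self, asset: Asset):
        self._assets[asset.ip] = asset

    def what_is_on_my_network(self):
        """Question 1: the known endpoints."""
        return sorted(self._assets)

    def is_expected_behaviour(self, src_ip, dst_host):
        """Question 2: does observed traffic match the device's purpose?"""
        asset = self._assets.get(src_ip)
        if asset is None:
            return False    # unknown device fails the first question outright
        return dst_host in asset.allowed_destinations
```

Even a toy inventory like this captures the key point: an endpoint is only "accounted for" when both its presence and its intended behavior are recorded.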
Relevant considerations include the following:
IoT device makers and application developers also should implement a vulnerability disclosure program. Bug bounty programs are another option that should include public contact information for security researchers and plans for responding to disclosed vulnerabilities.
Organizations that have accurately assessed current cybersecurity readiness need to set relevant goals and create a comprehensive governance program to manage and enforce operational and regulatory policies and requirements. Governance programs also ensure that appropriate security controls are in place. Organizations need to have a plan to implement controls and determine accountability for that enforcement. Another consideration is determining when security policies need to be revised.
An effective governance plan is vital for engineering security into architecture and processes, as well as for safeguarding legacy devices with relatively weak security controls. Devising an effective risk management strategy for enterprise and industrial IoT devices is a complex endeavour, potentially involving a series of stakeholders and entities. Organizations that find it difficult to assess the cybersecurity of their IoT project should consider third-party assessments.
Many tools are available to help organizations evaluate cyber-risk and defences. These include the National Vulnerability Database and the Security and Privacy Controls for Information Systems and Organizations document, both from the National Institute of Standards and Technology. Another resource is the list of 20 Critical Security Controls for Effective Cyber Defense. For studying the threat landscape, MITRE ATT&CK is one of the most popular frameworks for cataloguing adversary tactics and techniques.
At this stage of the process, another vital consideration is the degree of cybersecurity savviness and support within your business. Three out of ten organizations deploying IoT cite lack of support for cybersecurity as a hurdle, according to August 2020 research from IoT World Today. Security awareness is also frequently a challenge. Many cyberattacks against organizations — including those with an IoT element — involve phishing, like the 2015 attack against Ukraine’s electric grid.
Internet of Things projects demand a secure foundation. That starts with asset awareness and extends to responding to real and simulated cyberattacks.
Step 1: Know what you have.
Building an IoT security program starts with achieving a comprehensive understanding of which systems need to be protected.
Step 2: Deploy safeguards.
Shielding devices from cyber-risk requires a thorough approach. This step involves cyber-hygiene, effective asset control and the use of other security controls.
Step 3: Identify threats.
Spotting anomalies can help mitigate attacks. Defenders should hone their skills through wargaming.
Step 4: Respond effectively.
Cyberattacks are inevitable, but each incident should yield lessons that feed back into Step 1.
Exploiting human gullibility is one of the most common cybercriminal strategies. While cybersecurity training can help individuals recognize suspected malicious activity, such programs tend not to be entirely effective. “It only takes one user and one click to introduce an exploit into a network,” wrote Forrester analyst Chase Cunningham in the book “Cyber Warfare.” Recent studies have found that, even after receiving cybersecurity training, employees continue to click on phishing links about 3% of the time.
Security teams should work to earn the support of colleagues, while also factoring in the human element, according to David Coher, former head of reliability and cybersecurity for a major electric utility. “You can do what you can in terms of educating folks, whether it’s as a company IT department or as a consumer product manufacturer,” he said. But it is essential to put controls in place that can withstand user error and occasionally sloppy cybersecurity hygiene.
At the same time, organizations should also look to pool cybersecurity expertise inside and outside the business. “Designing the controls that are necessary to withstand user error requires understanding what users do and why they do it,” Coher said. “That means pulling together users from throughout your organization’s user chain — internal and external, vendors and customers, and counterparts.”
Those counterparts are easier to engage in some industries than others. Utilities, for example, have a strong track record in this regard, because of the limited market competition between them. Collaboration “can be more challenging in other industries, but no less necessary,” Coher added.
Protecting an organization from cyberattacks demands a clear framework that is sensitive to business needs. While regulated industries are obligated to comply with specific cybersecurity-related requirements, consumer-facing organizations tend to have more generic requirements for privacy protections, data breach notifications and so forth. That said, all types of organizations deploying IoT have leeway in selecting a guiding philosophy for their cybersecurity efforts.
A basic security principle is to minimize networked or vulnerable systems’ attack surface — for instance, closing unused network ports and eliminating IoT device communication over the open internet. Generally speaking, building security into the architecture of IoT deployments and reducing attackers’ options to sabotage a system is more reliable than adding layers of defence to an unsecured architecture. Organizations deploying IoT projects should consider intrinsic security functionality such as embedded processors with cryptographic support.
But it is not practical to remove all risk from an IT system. For that reason, one of the most popular options is defence-in-depth, a military-rooted concept espousing the use of multiple layers of security. The basic idea is that if one countermeasure fails, additional security layers are available.
While the core principle of implementing multiple layers of security remains popular, defence in depth is also tied to the concept of perimeter-based defence, which is increasingly falling out of favour. “The defence-in-depth approach to cyber defence was formulated on the basis that everything outside of an organization’s perimeter should be considered ‘untrusted’ while everything internal should be inherently ‘trusted,’” said Andrew Rafla, a Deloitte Risk & Financial Advisory principal. “Organizations would layer a set of boundary security controls such that anyone trying to access the trusted side from the untrusted side had to traverse a set of detection and prevention controls to gain access to the internal network.”
Several trends have chipped away at the perimeter-based model. As a result, “modern enterprises no longer have defined perimeters,” Rafla said. “Gone are the days of inherently trusting any connection based on where the source originates.” Trends ranging from the proliferation of IoT devices and mobile applications to the popularity of cloud computing have fueled interest in cybersecurity models such as zero trust. “At its core, zero trust commits to ‘never trusting, always verifying’ as it relates to access control,” Rafla said. “Within the context of zero trusts, security boundaries are created at a lower level in the stack, and risk-based access control decisions are made based on contextual information of the user, device, workload or network attempting to gain access.”
Zero trust’s roots stretch back to the 1970s when a handful of computer scientists theorized on the most effective access control methods for networks. “Every program and every privileged user of the system should operate using the least amount of privilege necessary to complete the job,” one of those researchers, Jerome Saltzer, concluded in 1974.
While the concept of least privilege sought to limit trust among internal computing network users, zero trust extends the principle to devices, networks, workloads and external users. The recent surge in remote working has accelerated interest in the zero-trust model. “Many businesses have changed their paradigm for security as a result of COVID-19,” said Jason Haward-Grau, a leader in KPMG’s cybersecurity practice. “Many organizations are experiencing a surge to the cloud because businesses have concluded they cannot rely on a physically domiciled system in a set location.”
Based on data from Deloitte, 37.4% of businesses accelerated their zero-trust adoption plans in response to the pandemic. Meanwhile, more than one-third, or 35.2%, of those embracing zero trust said that the pandemic had not changed the speed of their organization’s zero-trust adoption.
“I suspect that many of the respondents that said their organization’s zero-trust adoption efforts were unchanged by the pandemic were already embracing zero trust and were continuing with efforts as planned,” Rafla said. “In many cases, the need to support a completely remote workforce in a secure and scalable way has provided a tangible use case to start pursuing zero-trust adoption.”
A growing number of organizations are beginning to blend aspects of zero trust and traditional perimeter-based controls through a model known as secure access service edge (SASE), according to Rafla. “In this model, traditional perimeter-based controls of the defence-in-depth approach are converged and delivered through a cloud-based subscription service,” he said. “This provides a more consistent, resilient, scalable and seamless user experience regardless of where the target application a user is trying to access may be hosted. User access can be tightly controlled, and all traffic passes through multiple layers of cloud-based detection and prevention controls.”
Regardless of the framework, organizations should have policies in place for access control and identity management, especially for passwords. As Forrester’s Cunningham noted in “Cyber Warfare,” the password—“the single most prolific means of authentication for enterprises, users, and almost any system on the planet”—is the lynchpin of failed security in cyberspace: almost everything uses a password at some stage. Numerous password repositories have been breached, and passwords are frequently recycled, making the password a common security weakness for user accounts as well as IoT devices.
A significant number of consumer-grade IoT devices have also had their default passwords posted online. Weak passwords used in IoT devices also fueled the growth of the Mirai botnet, which led to widespread internet outages in 2016. More recently, unsecured passwords on IoT devices in enterprise settings have reportedly attracted state-sponsored actors’ attention.
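One way to avoid the shared-default-password failure mode behind botnets like Mirai is to provision a unique random credential per device and have the backend store only a salted hash. The sketch below uses Python's standard library; function names and the PBKDF2 parameters are illustrative choices, not a mandated scheme:

```python
import hashlib
import os
import secrets

def provision_device_credential():
    """Return (plaintext credential for the device, salted record for the server)."""
    password = secrets.token_urlsafe(24)   # unique per device, never a default
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return password, {"salt": salt, "hash": digest}

def verify_device_credential(password, record):
    """Constant-time check of a presented credential against the stored record."""
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(),
                                 record["salt"], 100_000)
    return secrets.compare_digest(digest, record["hash"])
```

Because every device receives a distinct credential at provisioning time, a leaked password compromises one unit rather than an entire product line.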
IoT devices and related systems also need an effective mechanism for device management, including tasks such as patching, connectivity management, device logging, device configuration, software and firmware updates and device provisioning. Device management capabilities also extend to access control modifications and include remediation of compromised devices. It is vital to ensure that device management processes themselves are secure and that a system is in place for verifying the integrity of software updates, which should be regular and not interfere with device functionality.
Organizations must additionally address the life span of devices and the cadence of software updates. Many environments allow IT pros to identify a specific end-of-life period and remove or replace expired hardware. In such cases, there should be a plan for device disposal or transfer of ownership. In other contexts, such as in industrial environments, legacy workstations don’t have a defined expiration date and run out-of-date software. These systems should be segmented on the network. Often, such industrial systems cannot be easily patched like IT systems are, requiring security professionals to perform a comprehensive security audit on the system before taking additional steps.
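The lifecycle policy described above reduces to a simple decision rule, sketched here with illustrative field names and action labels:

```python
from datetime import date

def lifecycle_action(device, today=None):
    """Map a device record to a lifecycle action (labels are illustrative)."""
    today = today or date.today()
    eol = device.get("end_of_life")      # None for legacy industrial gear
    if eol is None:
        return "segment-and-audit"       # no defined expiry: isolate, then audit
    if today >= eol:
        return "dispose-or-transfer"     # expired hardware leaves the fleet
    return "keep-patched"                # in-life devices stay on the update cadence
```

Even this toy rule makes the key distinction explicit: IT hardware with a known end-of-life is removed or replaced, while undated legacy systems are segmented and audited rather than patched like ordinary IT assets.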
In recent years, attacks have become so common that the cybersecurity community has shifted its approach from preventing breaches to assuming a breach has already happened. The threat landscape has evolved to the point that cyberattacks against most organizations are inevitable.
“You hear it everywhere: It’s a matter of when, not if, something happens,” said Dan Frank, a principal at Deloitte specializing in privacy and data protection. Matters have only become more precarious in 2020. The FBI has reported a three- to four-fold increase in cybersecurity complaints after the advent of COVID-19.
Advanced defenders have taken a more aggressive stance known as threat hunting, which focuses on proactively identifying breaches. Another popular strategy is to study adversary behaviour and tactics to classify attack types. Models such as the MITRE ATT&CK framework and the Common Vulnerability Scoring System (CVSS) are popular for assessing adversary tactics and vulnerabilities.
While approaches to analyzing vulnerabilities and potential attacks vary according to an organization’s maturity, situational awareness is a prerequisite at any stage. The U.S. Army Field Manual defines the term like this: “Knowledge and understanding of the current situation which promotes timely, relevant and accurate assessment of friendly, enemy and other operations within the battlespace to facilitate decision making.”
In cybersecurity as in warfare, situational awareness requires a clear perception of the elements in an environment and their potential to cause future events. In some cases, the possibility of a future cyber attack can be averted by merely patching software with known vulnerabilities.
Intrusion detection systems can automate some degree of monitoring of networks and operating systems. Intrusion detection systems that are based on detecting malware signatures also can identify common attacks. They are, however, not effective at recognizing so-called zero-day malware, which has not yet been catalogued by security researchers. Intrusion detection based on malware signatures is also ineffective at detecting custom attacks (e.g., a disgruntled employee who knows just enough Python or PowerShell to be dangerous). Sophisticated threat actors who slip through defences to gain network access can become insiders, with permission to view sensitive networks and files. In such cases, situational awareness is a prerequisite to mitigate damage.
Another strategy for intrusion detection systems is to focus on context and anomalies rather than malware signatures. Such systems could use machine learning to learn legitimate commands, use of messaging protocols and so forth. While this strategy overcomes the reliance on malware signatures, it can potentially trigger false alarms. Such a system can also detect so-called slow-rate attacks, a type of denial of service attack that gradually robs networking bandwidth but is more difficult to detect than volumetric attacks.
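A toy version of this context-and-anomaly approach can be built on a per-device message-rate baseline: learn what "normal" looks like, then flag readings far outside it. The window size and the 3-sigma threshold below are illustrative choices, not values from any particular product:

```python
from collections import deque
from statistics import mean, stdev

class RateAnomalyDetector:
    """Flag message rates that deviate sharply from a learned baseline."""

    def __init__(self, window=50, sigmas=3.0):
        self.history = deque(maxlen=window)   # rolling baseline of recent rates
        self.sigmas = sigmas

    def observe(self, msgs_per_minute):
        """Record a reading; return True if it looks anomalous."""
        anomalous = False
        if len(self.history) >= 10:           # wait until a baseline exists
            mu = mean(self.history)
            sd = stdev(self.history) or 1e-9  # avoid dividing by a zero spread
            anomalous = abs(msgs_per_minute - mu) > self.sigmas * sd
        self.history.append(msgs_per_minute)
        return anomalous
```

Note the trade-off mentioned in the text: a legitimate but unusual burst of traffic would also trip this detector, which is exactly the false-alarm risk of anomaly-based systems.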
The foundation for successful cyber-incident response lies in having concrete security policies, architecture and processes. “Once you have a breach, it’s kind of too late,” said Deloitte’s Frank. “It’s what you do before that matters.”
That said, the goal of warding off all cyber-incidents, which range from violations of security policies and laws to data breaches, is not realistic. It is thus essential to implement short- and long-term plans for managing cybersecurity emergencies. Organizations should have contingency plans for addressing possible attacks, practising how to respond to them through wargaming exercises to improve their ability to mitigate some cyberattacks and develop effective, coordinated escalation measures for successful breaches.
There are several aspects of the zero trust model that enhance organizations’ ability to respond and recover from cyber events. “Network and micro-segmentation, for example, is a concept by which trust zones are created by organizations around certain classes or types of assets, restricting the blast radius of potentially destructive cyberattacks and limiting the ability for an attacker to move laterally within the environment,” Rafla said. Also, efforts to automate and orchestrate zero trust principles can enhance the efficiency of security operations, speeding efforts to mitigate attacks. “Repetitive and manual tasks can now be automated and proactive actions to isolate and remediate security threats can be orchestrated through integrated controls,” Rafla added.
Response to cyber-incidents involves coordinating multiple stakeholders beyond the security team. “Every business function could be impacted — marketing, customer relations, legal compliance, information technology, etc.,” Frank said.
A six-tiered model for cyber-incident response from the SANS Institute contains the following steps: preparation, identification, containment, eradication, recovery, and lessons learned.
While the bulk of the SANS model focuses on cybersecurity operations, the last step should be a multidisciplinary process. Investing in cybersecurity liability insurance to offset risks identified after ongoing cyber-incident response requires support from upper management and the legal team. Ensuring compliance with the evolving regulatory landscape also demands feedback from the legal department.
A central practice that can prove helpful is documentation — not just for security incidents, but as part of ongoing cybersecurity assessment and strategy. Organizations with mature security documentation tend to be better positioned to deal with breaches.
“If you fully document your program — your policies, procedures, standards and training — that might put you in a more favourable position after a breach,” Frank explained. “If you have all that information summarized and ready, in the event of an investigation by a regulatory authority after an incident, it shows the organization has robust programs in place.”
Documenting security events and controls can help organizations become more proactive and more capable of embracing automation and machine learning tools. As they collect data, they should repeatedly ask how to make the most of it. KPMG’s Haward-Grau said cybersecurity teams should consider the following questions:
Ultimately, answering those questions may involve using machine learning or artificial intelligence technology, Haward-Grau said. “If your business is using machine learning or AI, you have to digitally enable them so that they can do what they want to do,” he said.
Finally, documenting security events and practices as they relate to IoT devices and beyond can be useful in evaluating the effectiveness of cybersecurity spending and provide valuable feedback for digital transformation programs. “Security is a foundational requirement that needs to be ingrained holistically in architecture and processes and governed by policies,” said Chander Damodaran, chief architect at Brillio, a digital consultancy firm. ”Security should be a common denominator.”
Recent legislation requires businesses to assume responsibility for protecting Internet of Things (IoT) devices. “Security by Design” approaches are essential, since successful applications deploy millions of units and analysts predict billions of devices will be deployed in the next five to ten years. The cost of fixing compromised devices later could overwhelm a business.
Security risks can never be eliminated: there is no single solution for all concerns, and the cost to counter every possible threat vector is prohibitively expensive. The best we can do is minimize the risk, and design devices and processes to be easily updatable.
It is best to assess damage potential and implement security methods accordingly. For example, for temperature and humidity sensors used in environmental monitoring, data protection needs are not as stringent as devices transmitting credit card information. The first may require anonymization for privacy, and the second may require encryption to prevent unauthorized access.
Senders and receivers must authenticate. IoT devices must transmit to the correct servers and ensure they receive messages from the correct servers.
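One minimal way to realize this mutual authentication is a pre-shared-key HMAC scheme, sketched below. Key distribution and message framing are simplified assumptions; the point is only that each side signs what it sends and verifies what it receives:

```python
import hashlib
import hmac

def sign(key: bytes, payload: bytes) -> bytes:
    """Compute an authentication tag over the payload with the shared key."""
    return hmac.new(key, payload, hashlib.sha256).digest()

def verify(key: bytes, payload: bytes, tag: bytes) -> bool:
    """Constant-time check that the tag matches; both device and server
    run this on every message they receive."""
    return hmac.compare_digest(sign(key, payload), tag)
```

A message with a bad tag is rejected whether it comes from a tampered payload or from a party that does not hold the key, which is what "transmit to the correct servers and receive from the correct servers" requires in practice.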
Mission-critical applications, such as vehicle crash notification or medical alerts, may fail if the connection is not reliable. Lack of communication itself is a lack of security.
Connectivity errors can make good data unreliable, and actions on the content may be erroneous. It is best to select connectivity providers with strong security practices—e.g., whitelisting access and traffic segregation to prevent unauthorized communication.
IoT Security: 360-Degree Approach
Finally, only authorized recipients should access the information. In particular, privacy laws require extra care in accessing the information on individuals.
Developers should implement security best practices at all points in the chain. While traditional IT security protects servers with access controls, intrusion detection, and the like, the farther from the servers that best practices are implemented, the less impact a breach of a remote IoT device has on the overall application.
For example, compromised sensors might send bad data, and servers might take incorrect actions despite data filtering. Gateways thus offer an ideal location for security: they have the compute capacity for encryption and can receive over-the-air (OTA) updates for security fixes.
Servers often automate responses based on data content, and simplistic automated responses to bad data can cascade into much greater difficulty. If devices transmit excessively, servers can become overloaded and fail to respond to transmissions in time; retry algorithms triggered by network unavailability often create such data storms.
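One common defense against such retry-driven data storms is exponential backoff with jitter: each device waits a random, growing, capped delay before retrying, so a fleet does not reconnect in lockstep. A minimal sketch (the base, cap, and seed values are illustrative only):

```python
import random

def backoff_schedule(attempts: int, base: float = 1.0,
                     cap: float = 300.0, seed: int = 0):
    """Full-jitter backoff: retry `a` waits a random time in
    [0, min(cap, base * 2**a)], desynchronizing the fleet."""
    rng = random.Random(seed)  # seeded here only to make the sketch reproducible
    return [rng.uniform(0, min(cap, base * 2 ** a)) for a in range(attempts)]

delays = backoff_schedule(6)
# Delays stay bounded by the cap and grow (on average) per attempt,
# instead of every device hammering the server once a second.
print([round(d, 2) for d in delays])
```

In a real device the schedule would be consumed one delay per failed attempt, resetting after the first successful transmission.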
IoT devices often use electrical power rather than batteries, and compromised units could continue to operate for years. Implementing over-the-air (OTA) functions for remotely disabling devices could be critical.
When a breach requires device firmware updates, OTA support is vital when devices are inaccessible or large numbers of units must be modified rapidly. All devices should support OTA, even if it increases costs—for example, adding memory for managing multiple “images” of firmware for updates.
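A common way to manage those multiple firmware images is A/B slots: boot the newest image that has passed its self-test, and fall back to the known-good slot if a fresh update failed validation. A minimal sketch with invented slot metadata:

```python
def choose_boot_slot(slots: dict) -> str:
    """Pick the newest slot whose image has been validated; if the
    fresh OTA image failed its self-test, stay on the other slot."""
    ok = [s for s, meta in slots.items() if meta["valid"]]
    if not ok:
        raise RuntimeError("no bootable image")
    return max(ok, key=lambda s: slots[s]["version"])

slots = {
    "A": {"version": 3, "valid": True},   # current firmware
    "B": {"version": 4, "valid": False},  # OTA update that failed self-test
}
print(choose_boot_slot(slots))  # -> A: the device stays on the known-good image
slots["B"]["valid"] = True       # update re-delivered and verified
print(choose_boot_slot(slots))  # -> B
```

The extra memory the article mentions is what pays for that second slot, and it is what makes a botched update recoverable without a site visit.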
In summary, the IoT security best practices of authentication, encryption, remote device disable, and OTA for security fixes, along with traditional IT server protection, offer the best chance of minimizing the risk of attacks on IoT applications.
Originally posted here.
The benefits of IoT data are widely touted. Enhanced operational visibility, reduced costs, improved efficiencies and increased productivity have driven organizations to take major strides towards digital transformation. With countless promising business opportunities, it’s no surprise that IoT is expanding rapidly and relentlessly. It is estimated that there will be 75.4 billion IoT devices by 2025. As IoT grows, so do the volumes of IoT data that need to be collected, analyzed and stored. Unfortunately, significant barriers exist that can limit or block access to this data altogether.
Successful IoT data acquisition starts and ends with reliable and scalable IoT connectivity. Selecting the right communications technology is paramount to the long-term success of your IoT project and various factors must be considered from the beginning to build a functional wireless infrastructure that can support and manage the influx of IoT data today and in the future.
Here are five IoT architecture must-haves for unlocking IoT data at scale.
For many businesses, IoT data is one of their greatest assets, if not the most valuable. This intensifies the demand to protect the flow of data at all costs. With maximum data authority and architecture control, the adoption of privately managed networks is becoming prevalent across industrial verticals.
Beyond the undeniable benefits of data security and privacy, private networks give users more control over their deployment with the flexibility to tailor their coverage to the specific needs of their campus-style network. On a public network, users risk not having the reliable connectivity needed for indoor, underground and remote critical IoT applications. And since this network is privately owned and operated, users also avoid the costly monthly access, data plans and subscription costs imposed by public operators, lowering the overall total cost of ownership. Private networks also provide full control over network availability and uptime to ensure users have reliable access to their data at all times.
Since the number of end devices is typically dictated by your IoT use cases, choosing a wireless technology that requires minimal supporting infrastructure (base stations, repeaters) and minimal configuration and optimization is crucial to scaling your IoT network cost-effectively.
Wireless solutions with long range and excellent penetration capability, such as next-gen low-power wide area networks, require fewer base stations to cover vast, structurally dense industrial or commercial campuses. Likewise, a robust radio link and large network capacity allow an individual base station to support massive numbers of sensors without compromising performance, ensuring a continuous flow of IoT data today and in the future.
As IoT initiatives move beyond proofs-of-concept, businesses need an effective and secure approach to operate, control and expand their IoT network with minimal costs and complexity.
As IoT deployments scale to hundreds or even thousands of geographically dispersed nodes, a manual approach to connecting, configuring and troubleshooting devices is inefficient and expensive. Likewise, by leaving devices completely unattended, users risk losing business-critical IoT data when it’s needed the most. A network and device management platform provides a single-pane, top-down view of all network traffic, registered nodes and their status for streamlined network monitoring and troubleshooting. Likewise, it acts as the bridge between the edge network and users’ downstream data servers and enterprise applications so users can streamline management of their entire IoT project from device to dashboard.
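The single-pane status view such a platform provides can be approximated in a few lines: keep a registry of nodes and flag anything that has gone quiet. The node names and the one-hour offline threshold below are assumptions for the example:

```python
from dataclasses import dataclass

@dataclass
class Node:
    node_id: str
    last_seen: int  # Unix timestamp of the node's last uplink

def network_status(nodes, now, offline_after=3600):
    """Summarize registered nodes: anything silent for longer than
    `offline_after` seconds is flagged for troubleshooting."""
    report = {"online": [], "offline": []}
    for n in nodes:
        state = "online" if now - n.last_seen <= offline_after else "offline"
        report[state].append(n.node_id)
    return report

now = 1_700_000_000
nodes = [Node("valve-1", now - 120), Node("meter-9", now - 7200)]
print(network_status(nodes, now))
# -> {'online': ['valve-1'], 'offline': ['meter-9']}
```

A production platform layers alerting, configuration push, and the bridge to downstream applications on top of exactly this kind of registry.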
Most traditional assets, machines, and facilities were not designed for IoT connectivity, creating huge data silos. This leaves companies with two choices: building entirely new, greenfield plants with native IoT technologies or updating brownfield facilities for IoT connectivity. Highly integrable, plug-and-play IoT connectivity is key to streamlining the costs and complexity of an IoT deployment. Businesses need a solution that can bridge the gap between legacy OT and IT systems to unlock new layers of data that were previously inaccessible. Wireless IoT connectivity must be able to easily retrofit existing assets and equipment without complex hardware modifications and production downtime. Likewise, it must enable straightforward data transfer to the existing IT infrastructure and business applications for data management, visualization and machine learning.
Each IoT system is a mashup of diverse components and technologies. This makes interoperability a prerequisite for IoT scalability, to avoid being saddled with an obsolete system that fails to keep pace with new innovation later on. By designing an interoperable architecture from the beginning, you can avoid fragmentation and reduce the integration costs of your IoT project in the long run.
Today, technology standards exist to foster horizontal interoperability by fueling global cross-vendor support through robust, transparent and consistent technology specifications. For example, a standard-based wireless protocol allows you to benefit from a growing portfolio of off-the-shelf hardware across industry domains. When it comes to vertical interoperability, versatile APIs and open messaging protocols act as the glue to connect the edge network with a multitude of value-deriving backend applications. Leveraging these open interfaces, you can also scale your deployment across locations and seamlessly aggregate IoT data across premises.
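In practice, vertical interoperability often comes down to mapping each vendor's payload onto one canonical schema before it reaches backend applications. A small sketch with two hypothetical vendors and invented field names:

```python
import json

# Hypothetical vendor-specific field names mapped to one canonical schema.
MAPPINGS = {
    "acme":   {"t": "temperature_c", "h": "humidity_pct"},
    "globex": {"tempC": "temperature_c", "rh": "humidity_pct"},
}

def normalize(vendor: str, raw: dict) -> dict:
    """Translate a vendor payload so every backend sees the same shape."""
    return {canon: raw[field] for field, canon in MAPPINGS[vendor].items()}

a = normalize("acme", {"t": 21.0, "h": 40})
b = normalize("globex", {"tempC": 21.0, "rh": 40})
print(json.dumps(a) == json.dumps(b))  # identical canonical payloads
```

With the canonical layer in place, swapping in a third vendor's off-the-shelf hardware means adding one mapping, not touching every downstream application.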
IoT data is the lifeblood of business intelligence and competitive differentiation and IoT connectivity is the crux to ensuring reliable and secure access to this data. When it comes to building a future-proof wireless architecture, it’s important to consider not only existing requirements, but also those that might pop up down the road. A wireless solution that offers data ownership, minimal infrastructure requirements, built-in network management and integration and interoperability will not only ensure access to IoT data today, but provide cost-effective support for the influx of data and devices in the future.
Originally posted here.
by Ariane Elena Fuchs
Solar power, wind energy, micro cogeneration power plants: energy from renewable sources has become indispensable, but it makes power generation and distribution far more complex. How the Internet of Things is helping make energy management sustainable.
It feels like Groundhog Day yet again – in 2020 it happened on August 22. That was the point when the demand for raw materials exceeded the Earth’s supply and capacity to reproduce these natural resources. All reserves consumed from that date on cannot be regenerated in the current year. In other words, humanity is living above its means, consuming around 50 percent more energy than the Earth provides naturally.
To conserve these precious resources and reduce climate-damaging CO2 emissions, the energy we need must come from renewable sources such as wind, sun and water. This is the only way to reduce both greenhouse gases and our fossil fuel use. Fortunately, a start has been made: in 2019, renewable energies, predominantly wind and solar, already covered almost 43 percent of Germany's energy requirements, and the trend is rising.
DECENTRALIZING ENERGY PRODUCTION
This also means, however, that the traditional energy management model – a few power plants supplying many consumers – is outdated. After all, the phasing out of large nuclear and coal-fired power plants doesn’t just have consequences for Germany’s CO2 balance. Shifting electricity production to wind, solar and smaller cogeneration plants reverses the previous pattern of energy generation and distribution, from a highly centralized to an increasingly decentralized structure. Instead of a few large power plants sending electricity to the grid, there are now many smaller energy sources such as solar panels and wind turbines, which makes managing the network, including the optimal distribution of electricity, far more complex. It is up to the energy sector to wrangle this challenging transformation. As the country’s energy becomes more sustainable, it also becomes harder to organize: generation from wind and sun cannot be planned in advance as easily as coal and nuclear power, and thousands of wind turbines and solar panels are feeding electricity into the grid. In particular, there is a lack of real-time information about how much electricity is being generated.
KEY TECHNOLOGY IOT: FROM ENERGY FLOW TO DATA STREAM
This is where the Internet of Things comes into play: IoT can supply exactly this data from every power generator and send it to a central location, where it can be evaluated and, ideally, used to control the power grid automatically. The result is an IoT ecosystem. To operate wind farms more efficiently and reliably, a project team is currently developing an IoT-supported system that bundles and processes all relevant parameters and readings at a wind farm. From these, the current operating and maintenance status of individual turbines can be reconstructed. This information can be used to detect whether certain components are about to wear out and to replace them before a turbine fails.
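One simple way such a system could flag wearing components is to compare averaged readings against healthy baselines per turbine. The signals, baseline values, and tolerance below are invented for illustration, not the project's actual model:

```python
def wear_alerts(readings, baseline, tolerance=0.15):
    """Flag turbine signals whose averaged reading drifts more than
    `tolerance` (as a fraction) from its healthy baseline."""
    alerts = []
    for signal, values in readings.items():
        avg = sum(values) / len(values)
        if abs(avg - baseline[signal]) / baseline[signal] > tolerance:
            alerts.append(signal)
    return alerts

baseline = {"gearbox_temp_c": 60.0, "vibration_mm_s": 2.0}
readings = {
    "gearbox_temp_c": [61.0, 62.5, 60.8],  # within tolerance
    "vibration_mm_s": [2.6, 2.8, 2.7],     # ~35% above baseline
}
print(wear_alerts(readings, baseline))  # -> ['vibration_mm_s']
```

A drifting vibration signature is exactly the kind of early-wear indicator that lets a component be swapped before the turbine fails.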
POTENTIAL FOR NEW BUSINESS MODELS
According to a recent Gartner study, the Internet of Things (IoT) is becoming a key technology for monitoring and orchestrating the complex energy and water ecosystem. In addition, consumers want more control over energy prices and more environmentally friendly power products. With the introduction of smart metering, data from so-called prosumers is becoming increasingly important. These energy-producing consumers act as operators of the photovoltaic systems on their roofs. IoT sensors collect the necessary power generation information. Although the sensors are only used locally and for specific purposes, they provide energy companies with a great deal of data. To use the potential of this information for the expansion of renewable energy, it must be combined and evaluated intelligently. According to Gartner, IoT has the potential to change the energy value chain in four key areas: it enables new business models, optimizes asset management, automates operations and digitalizes the entire value chain from energy source to kWh.
ENERGY TRANSITION REQUIRES TECHNOLOGICAL CHANGE
Installing smaller power-generating systems will soon no longer pose the greatest challenge for operators. In the near future, coherently linking, integrating and controlling them will be the order of the day. The energy transition is therefore spurring technological change on a grand scale. For example, smart grids will only function properly and increase overall capacity when data on generation, consumption and networks is available in real-time. The Internet of Things enables the necessary fast data processing, even from the smallest consumers and prosumers on the grid. With the help of the Internet of Things, more and more household appliances can communicate with the Internet. These devices are then in turn connected to a smart meter gateway, i.e. a hub for the intelligent management of consumers, producers and storage locations at private households and commercial enterprises. To be able to use the true potential of this information, however, all the data must flow together into a common data platform, so that it can be analyzed intelligently.
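At the data-platform level, combining consumers and prosumers reduces to netting local feed-in against consumption in real time. A toy sketch with hypothetical smart-meter readings in kW (positive for consumption, negative for feed-in):

```python
def grid_balance(meters):
    """Aggregate smart-meter readings (kW): positive values are
    consumption, negative values are prosumer feed-in (solar, CHP)."""
    consumption = sum(v for v in meters.values() if v > 0)
    feed_in = -sum(v for v in meters.values() if v < 0)
    return {"consumption_kw": consumption,
            "feed_in_kw": feed_in,
            "net_demand_kw": consumption - feed_in}

meters = {"household-1": 1.2, "household-2": 0.8, "prosumer-roof-3": -2.5}
print(grid_balance(meters))  # negative net demand: surplus fed into the grid
```

It is exactly this aggregated, real-time picture across thousands of small producers that a smart grid needs in order to balance supply and demand.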
FROM INDIVIDUAL APPLICATIONS TO AN ECOSYSTEM
For the transmission of data from the Internet of Things, Germany has national fixed-line and mobile networks available. New technology such as the 5G mobile standard allows data to be transferred securely and reliably to the cloud, either directly via the 5G network or via a 5G campus network. Software for data analytics and AI tailored to energy firms is now available, including monitoring, analysis, forecasting and optimization tools. The analyzed data can be accessed via web browsers and in-house data centers. Taken together, this provides the energy sector with a comprehensive IoT ecosystem for the future.
Originally posted here.
by Philipp Richert
New digital and IoT use cases are becoming more and more important. When it comes to the adoption of these new technologies, there are several different maturity levels, depending on the domain. Within the retail industry, and specifically food retail, we are currently seeing the emergence of a host of IoT use cases.
Two forces are driving this: a technology push, in which suppliers in the retail domain have technologies available to build retail IoT use cases within a connected store; and a market pull by their customers, who are boosting the demand for such use cases.
However, we also need to ask the following questions: What are IoT use cases good for? And what are they aiming at? We currently see three different fields of application:
No matter what is most important for your organization or whatever your focus, it is crucial to set up a process that provides guidance for identifying the right use cases. In the following section, we share some insights on how retailers can best design this process. We collated these insights together with the team from the Food Tech Campus.
When identifying the right use cases for their stores, retailers should make sure to look into all phases within the entire innovation process: from problem description and idea collation to solution concept and implementation. Within this process, it is also essential to consider the so-called innovator’s trilemma and ensure that use cases are:
Before we can actually start identifying retail IoT use cases, we need to define search fields so that we can work on one topic with greater dedication and focus. We must then open up the problem space in order to extract the most relevant problems and pain points. Starting with prioritized and selected pain points, we then open up the solution space in order to define several solution concepts. Once these have been validated, the result should be a well-defined problem statement that concisely describes one singular pain point.
In the following, we want to take a deep dive into the different phases of the process while giving concrete examples, tips and our top-rated tools. Enjoy!
Retailers possess expertise and face challenges at various stages along their complex process chains. It helps here to focus on a specific target group in order to avoid distraction. Target groups are typically users or customers in a defined environment. A good example would be to focus your search on processes that happen inside a store location and are relevant to the customer (e.g., the food shopper).
Understand and observe problems
User research, observation and listening are keys to a well-defined problem statement that allows for further ideation. Embedding yourself in various situations and conducting interviews with all the stakeholders visiting or operating a store should be the first steps. Join employees around the store for a day or two and support them during their everyday tasks. Empathize, look for any friction and ask questions. Take your key findings into workshops and spend some time isolating specific causes. Use personas based on your user research and make use of frameworks and canvas templates in order to structure your findings. Use working titles to name the specific problem statements. One example might be: Long queueing as a major nuisance for customers.
Are your findings somehow connected? Single-purpose processes and their owners within a store environment are prone to isolated views. Creating a common problem space increases the chances of adoption of any solution later. So it is worth taking the time to map out all findings and take a look at projects in the past and their outcome. In our example, queueing is linked to staff planning, lack of communication and unpredictable customer behavior.
Prioritize problems and pain points
Ask users or stakeholders to give their view on defined problem statements and let them vote. Challenge their view and make them empathize and broaden their view towards a more holistic benefit. Once the quality of a problem statement has been assessed, evaluate the economic implications. In our example, this could mean that queueing affects most employees in the store, directly or indirectly. This problem might be solved through technology and should be further explored.
The result of a well-structured problem statement list should consist of a few new insights that might result in quick gains; one or two major known pain points, where the solution might be viable and feasible; and a list with additional topics that exist but are not too pressing at the moment.
Define opportunity areas
Map technologies and problems together. Are there any strategic goals that these problem statements might be assigned to? Have things changed in terms of technical feasibility (e.g., has the cost of a technology dropped over the past three years?). Can problems be validated within a larger setup easily or are we talking about singular use cases? All these considerations should lead towards the most attractive problem to solve. Again, in our example, this might be: Queuing is a major problem in most locations, satisfying our customers should be our main goal, existing solutions are too expensive or inflexible.
Ideate and explore use cases
When conducting an ideation session, it is very helpful to bring in trends that are relevant to the defined problem areas so as to help boost creativity. In our example, for instance, this might be technology trends such as frictionless checkout for retail, hybrid checkout concepts, bring your own device (BYOD) and sensor approaches. It is always important to keep the following in mind: What do these trends mean for the customer journey in-store and how can they be integrated in (legacy) environments?
Define solutions concepts
In the process of further defining the solution concepts, it is essential to evaluate the market potential and to consider customer and user feedback. Depending on the solution, it might be necessary to ask the various stakeholders – from store managers to personnel to customers – in order to get a clearer picture. When talking to customers or users, it is also helpful to bring along scribbles, pictures or prototypes in order to increase immersion. The insights gathered in this way help to validate assumptions and to pilot the concept accordingly.
Set metrics and KPIs to prove success
Defining data-based metrics and KPIs is essential for a successful solution. When setting up metrics and KPIs, you need to consider two aspects:
Prototype for quick insights
In terms of technology, practically everything is feasible today. However, the value proposition of a use case (in terms of business and users) can remain unclear and requires testing. Instead of building a technical prototype, it can be helpful to evaluate the value proposition of the solution with humans (empathy prototyping). This could be a person triggering an alarm based on the information at hand instead of an automatic action. Insights and lessons learnt from this phase can be used alongside the technical realization (proof-of-concept) in order to tweak specific features of the solution.
Initiate a PoC for technical feasibility
When it comes to technical feasibility, a clear picture of the objectives and key results (OKRs) for the PoC is essential. This helps to set the boundaries for a lean process with respect to the installation of hardware, an efficient timeline and minimum costs. Furthermore, a well-defined test setup fosters short testing timespans that often yield all needed results.
The strong trend towards digitization within the retail industry opens up new use cases for the (food) retail industry. In order to make the most of this trend and to build on IoT, it is crucial first of all to determine which use cases to start with. Every retailer has a different focus and needs for their stores.
In the course of our retail projects, we have identified some of the recurring use cases that food retailers are currently implementing. We have also learnt a lot about how they can best leverage IoT in order to build a connected store. We share these insights in our white paper “The connected retail store.”
Originally posted here.
By Sanjay Tripathi, Lauren Luellwitz, and Kevin Egge
Petabytes of data are generated by the intelligent, interconnected and autonomous systems of Industry 4.0. When combined with artificial intelligence tools that provide actionable insight, this data has the potential to improve every function within a plant: operations, engineering, quality, reliability and maintenance.
The maintenance function, while crucial to the smooth functioning of a plant, has until recently not seen much innovation. Many among us have experienced equipment downtime, process drifts, massive hits to yield, and declines in product reliability because of maintenance performed poorly or late. Yet Enterprise Asset Management (EAM) systems – the ERP systems that help maintain assets – remained systems of record that typically generated work orders and recorded maintenance performed. Even as production processes became mind-numbingly complex, EAM systems remained much the same.
IBM Maximo 8.0, or Maximo Application Suite, is one example of a system that combines artificial intelligence (AI), big data and cloud computing technologies with domain expertise from operating technologies (OT) to simplify maintenance and deliver production resilience.
Maximo 8.0 leverages AI to visually inspect gas pipelines, rail tracks, bridges and tunnels; AI guides technicians as they conduct complex repairs; it provides maintenance supervisors real-time visibility into the health and safety of their technicians. Domain expertise is incorporated in the form of data to train AI models. These capabilities improve the ability to avoid unscheduled downtime, improve first-time-fix rate, and reduce safety incidents.
Maintenance records residing in Maximo are combined with real-time operational data from production assets and their associated asset model to better predict when maintenance is required. In this example, asset models embody domain expertise. These models characterize how a production asset such as a power generator or catalytic converter should perform in the context of where it is installed in the process.
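A simple form of such model-based prediction compares observed output with what the asset model expects in context and flags a sustained shortfall. The numbers, threshold, and method below are illustrative only, not Maximo's actual algorithm:

```python
def maintenance_due(expected_output, observed_output, threshold=0.10):
    """Compare observed performance against the asset model's expected
    performance; a mean shortfall beyond `threshold` (fractional)
    suggests maintenance is required."""
    residuals = [(e - o) / e for e, o in zip(expected_output, observed_output)]
    mean_shortfall = sum(residuals) / len(residuals)
    return mean_shortfall > threshold

expected = [100.0, 100.0, 100.0]  # kW predicted by the asset model in context
observed = [88.0, 85.0, 87.0]     # kW actually produced
print(maintenance_due(expected, observed))  # -> True
```

The asset model is what makes the residual meaningful: the same 87 kW reading is healthy for one installation and a warning sign for another.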
The Maximo application itself is encapsulated (containerized) using Red Hat’s OpenShift technology. Containerization allows the application to be easily deployed on-premises, on private clouds or hybrid clouds. This flexibility in deployment benefits IT organizations that need to continually evolve their infrastructure, which is almost every organization.
Maximo 8.0 is available as a suite that includes both core and advanced capabilities. A single software entitlement provides access to all capabilities. The entitlement provides access to the core EAM functionality of work and resource scheduling, asset management, industry-specific customizations, EHS guidelines, and mobile functionality. And it provides access to advanced functionality such as Maximo Monitor, which automatically detects anomalies in how an asset is performing; Maximo Health, which measures equipment health; Maximo Predict, which, as the name suggests, predicts when maintenance is required; and Maximo Assist, which helps technicians conduct repairs.
Originally posted here.
by Olivier Pauzet
Over the past year, we have seen the Industrial IoT (IIoT) take an important step forward, crossing the chasm that previously separated IIoT early adopters from the majority of companies.
New solutions like Octave, Sierra Wireless’ edge-to-cloud solution for connecting industrial assets, have greatly simplified the IIoT, making it possible now for practically any company to securely extract, transmit, and act on data from bio-waste collectors, liquid fertilizer tanks, water purifiers, hot water heaters and other industrial equipment.
So, what IIoT trends will these 2020 developments lead to in 2021? I expect that they will drive greater adoption of the IIoT next year, as manufacturing, utility, healthcare, and other organizations further realize that they can help their previously silent industrial assets speak using the APIs integrated in new IoT solutions. At the same time, I expect we will start to see the development of some revolutionary IIoT applications that use 5G’s Ultra-Reliable, Low-Latency Communications (URLLC) capabilities to change the way our factories, electric grid, and healthcare systems operate.
In 2021, Industrial Equipment APIs Will Give Quiet Equipment A Voice
Cloud APIs have transformed the tech industry, and with it, our digital economy. By enabling SaaS and other cloud-based applications to easily and securely talk to each other, cloud APIs have vastly expanded the value of these applications to users. These APIs have also spawned billion-dollar companies like Stripe, Tableau, and Twilio, whose API-focused business models have transformed the online payments, data visualization, and customer service markets.
2021 will be the year industrial companies begin seeing their markets transformed by APIs, as more of these companies begin using industrial equipment APIs built into new IIoT solutions to enable their industrial assets to talk to the cloud.
Using new edge-to-cloud solutions like Octave, with built-in industrial equipment APIs for Modbus and other industrial communications protocols, these companies will be able to securely connect these assets to the cloud almost as easily as if the equipment were a cloud-based application.
In fact, by simply plugging a low-cost IoT gateway with these IIoT APIs into their industrial equipment, they will be able to deploy IIoT applications that allow them to remotely monitor, maintain, and control this equipment. Then, using these applications, they can lower equipment downtime, reduce maintenance costs, launch new Equipment-as-a-Service business models, and innovate faster.
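To give a flavor of what sits behind such an industrial-protocol API, here is a sketch that decodes the register payload of a simplified Modbus "read holding registers" response. The frame bytes are invented, and a real gateway additionally handles framing, addressing, CRCs, and exception responses:

```python
import struct

def parse_holding_registers(frame: bytes):
    """Decode the payload of a (simplified) Modbus 'read holding
    registers' response: a byte count, then 16-bit big-endian registers."""
    byte_count = frame[0]
    return list(struct.unpack(f">{byte_count // 2}H", frame[1:1 + byte_count]))

# Hypothetical response carrying two registers: 0x0102 and 0x0A00
frame = bytes([4, 0x01, 0x02, 0x0A, 0x00])
print(parse_holding_registers(frame))  # -> [258, 2560]
```

The gateway's job is to turn register values like these into named, typed cloud-facing data points, which is exactly the translation the built-in APIs hide from the user.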
Industrial companies have been trying to connect their assets to the cloud for years, but have been stymied by the complexity, time, and expense involved in doing so. In 2021, industrial equipment APIs will provide these companies with a way to simply, quickly, and cheaply connect this equipment to the cloud. By giving a voice to billions of pieces of industrial equipment, these Industrial IoT APIs will help bring about the productivity, sustainability, and other benefits Industry 4.0 has long promised.
In 2021 Manufacturing, Utility and Healthcare Will Drive Growth of the Industrial IoT
Until recently, the consumer sector, and especially the smart home market, has led the way in adopting the IoT, as the success of the Google Nest smart thermostat, the Amazon Echo smart speaker, the Ring smart doorbell, and the Philips Hue smart lights demonstrates. However, in 2021 another IIoT trend we can expect to see is the industrial sector starting to catch up with the consumer market, with manufacturing, utility, and healthcare leading the way.
For example, new IIoT solutions now make it possible for Original Equipment Manufacturers (OEMs) and other manufacturing companies to simply plug their equipment into the IIoT and begin acting on data from this equipment almost immediately. This has lowered the time to value for IIoT applications to the point where companies can begin reaping financial benefits greater than the total cost for their IIoT application in a few short months.
At this point, manufacturers who don’t have a plan to integrate the IIoT into their assets are, to put it bluntly, leaving money on the table – money their competitors will happily snap up with their own new connected industrial equipment offerings if they do not.
Like manufacturing companies, utilities will ramp up their use of the IIoT in 2021, as they seek to improve their operational efficiency, customer engagement, reliability, and sustainability. For example, utilities will increasingly use the IIoT to perform remote diagnostics and predictive maintenance on their grid infrastructure, reducing this equipment’s downtime while also lowering maintenance costs. In addition, a growing number of utilities will use the IIoT to collect and analyze data on their wind, solar and other renewable energy generation portfolios, allowing them to reduce greenhouse gas emissions while still balancing energy supply and demand on the grid.
Along with manufacturing and utilities, healthcare is the third market sector I expect to lead the way in adopting the IIoT in 2021. The COVID-19 pandemic has demonstrated to healthcare providers how connectivity – such as Internet-based telemedicine solutions – can improve patient outcomes while reducing costs. In 2021, they will increase their use of the IIoT as they work to extend this connectivity to patient monitors, scanners and other medical devices. With the Internet of Medical Things (IoMT), healthcare providers will be better able to prepare patient treatments, remotely monitor and respond to changes in their patients’ conditions, and generate healthcare treatment documents.
Revolutionary Ultra-Reliable, Low-Latency 5G Applications Will Begin to Be Developed
There is a lot of buzz regarding 5G New Radio (NR) in the IIoT market. However, having been designed to co-exist with 4G LTE, most of 5G NR’s impact in this market is still evolutionary, not revolutionary. Companies are beginning to adopt 5G to wring better performance out of their existing IIoT applications, or to future-proof their connectivity strategies. But they are doing this while continuing to use LTE, as well as Low Power Wide Area (LPWA) 5G technologies, like LTE-M and NB-IoT, for now.
In 2021, however, I think we will begin to see companies developing revolutionary new IIoT application proofs of concept designed to take advantage of 5G NR’s Ultra-Reliable, Low-Latency Communications (URLLC) capabilities. These URLLC applications – including smart Automated Guided Vehicles (AGVs) for manufacturing, self-healing energy grids for utilities and remote surgery for healthcare – are simply not possible with existing wireless technologies.
Thanks to its ability to deliver ultra-high reliability and latencies as low as one millisecond, 5G NR enables companies to finally build URLLC applications – especially when 5G NR is used in conjunction with new edge computing technologies.
It will be a long time before any of these URLLC application proof-of-concepts are commercialized. But as for 5G Wave 5+, next year is when we will first begin seeing this wave forming out at sea. And when it does eventually reach shore, it will have a revolutionary impact on our connected economy.
Originally posted here.
By Sanjay Tripathi, Kevin Egge, and Shane Kehoe
Machines were first introduced into a manual manufacturing process between 1760 and 1820. But it was the concurrent introduction of means to power those machines that led to the First Industrial Revolution. An example is the first commercially viable textile power loom, introduced by Edmund Cartwright in England. It was water-powered at first, but within two short years water-powered looms were replaced with looms driven by the steam engines created by James Watt. The relatively small steam engines allowed textile looms to be deployed at many sites, enabling people to be employed in factories.
Multiple innovations such as new manufacturing methods, electricity, steel, and machine tools ushered in the era of mass manufacturing and the Second Industrial Revolution. Henry Ford’s River Rouge Complex in Michigan, completed in 1928, deployed these modern inventions and was the largest integrated factory in the world at that time. The era of mass manufacturing subsequently brought about an explosion in the consumption of goods by households.
The Third Industrial Revolution improved Automation and Controls across many industries through the use of Programmable Logic Controllers (PLCs). PLCs were first introduced by Modicon in 1969. PLC-based automation and controls were introduced to a mostly mechanical world, and helped improve yields and decrease manufacturing costs. This revolution helped provide cheaper products.
Fast forward to the Industry 4.0 Revolution, made possible by the synergistic combination of expertise from the worlds of Operational Technology (OT) and Information Technology (IT). The current revolution is bringing about intelligent, interconnected, and autonomous manufacturing equipment and systems by augmenting the deep domain expertise within OT companies with IT technologies such as artificial intelligence (AI), big data, cloud computing, and ubiquitous connectivity.
The widespread use of open protocols across heterogeneous equipment makes it feasible to optimize horizontally across previously disjointed processes. In addition, owner/operators of assets can more easily link the shop-floor to the top-floor. Connections across multiple layers of the ISA-95/Purdue Model stack provide greater vertical visibility and an added ability to optimize processes.
The increased integration brings together both OT data (from sensors, PLCs, DCS, SCADA systems) and IT data (from MES, ERP systems). However, this integration has different impacts on different functions such as operations, engineering, quality, reliability, and maintenance.
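As an illustrative sketch of this convergence (the level assignments follow the common ISA-95/Purdue convention; the data sources and tags are hypothetical examples, not any specific vendor stack), one can imagine tagging each incoming record with its Purdue level so OT and IT data can be joined in a single view:

```python
# Hypothetical sketch: classify data sources by ISA-95/Purdue level so that
# OT records (levels 0-2) and IT records (levels 3-4) can be unified.
PURDUE_LEVEL = {
    "sensor": 0, "plc": 1, "dcs": 1, "scada": 2,  # OT side
    "mes": 3, "erp": 4,                            # IT side
}

def domain(source: str) -> str:
    """Return 'OT' or 'IT' for a known data source, based on its level."""
    level = PURDUE_LEVEL[source.lower()]
    return "OT" if level <= 2 else "IT"

# Example records from the shop-floor and the top-floor, merged in one stream.
readings = [("plc", "line3.temp_c", 78.2), ("mes", "line3.order", "WO-1042")]
for source, tag, value in readings:
    print(f"{domain(source)} (level {PURDUE_LEVEL[source]}): {tag} = {value}")
```

The point of the sketch is simply that once both domains' data carry a common schema, the horizontal and vertical optimizations described above become queries over one dataset rather than separate silos.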
To learn more about how the integration positively impacts the organization, read the next installment in this series to see how you can bridge the gap between OT and IT teams to improve production resilience.
Originally posted here.
Then it seemed that overnight, millions of workers worldwide were told to isolate and work from home as best as they could. Businesses were suddenly forced to enable remote access for hundreds or thousands of users, all at once, from anywhere across the globe. Many companies that already offered VPN services to a small group of remote workers scurried to extend those capabilities to the much larger workforce sequestering at home. It was a decision made in haste out of necessity, but now it’s time to consider, is VPN the best remote access technology for the enterprise, or can other technologies provide a better long-term solution?
Some knowledge workers are trickling back to their actual offices, but many more are still at home and will be for some time. Global Workplace Analytics estimates that 25-30% of the workforce will still be working from home multiple days a week by the end of 2021. Others may never return to an official office, opting to remain a work-from-home (WFH) employee for good.
Consequently, enterprises need to find a remote access solution that gives home-based workers a similar experience to what they would have in the office, including ease of use, good performance, and a fully secure network access experience. What’s more, the solution must be cost-effective and easy to administer without the need to add more technical staff members.
VPNs are certainly one option, but not the only one. Other choices include appliance-based SD-WAN and SASE. Let’s have a look at each approach.
While VPNs are a useful remote access solution for a small portion of the workforce, they are an inefficient technology for giving remote access to a very large number of workers. VPNs are designed for point-to-point connectivity, so each secure connection between two points – presumably a remote worker and a network access server (NAS) in a datacenter – requires its own VPN link. Each NAS has a finite capacity for simultaneous users, so for a large remote user base, some serious infrastructure may be needed in the datacenter.
Performance can be an issue. With a VPN, all communication between the user and the VPN is encrypted. The encryption process takes time, and depending on the type of encryption used, this may add noticeable latency to Internet communications. More important, however, is the latency added when a remote user needs access to IaaS and SaaS applications and services. The traffic path is convoluted because it must travel between the end user and the NAS before then going out to the cloud, and vice versa on the way back.
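A back-of-the-envelope comparison makes the hairpinning cost concrete. The numbers below are made up for illustration; real latencies depend on geography, encryption overhead, and provider peering:

```python
# Illustrative (made-up) one-way latencies in milliseconds.
user_to_nas = 40    # remote user -> datacenter NAS over the VPN tunnel
nas_to_cloud = 30   # datacenter -> SaaS/IaaS provider
user_to_cloud = 25  # direct path the traffic could have taken

# Round trip via the VPN concentrator vs. going straight to the cloud.
hairpinned = 2 * (user_to_nas + nas_to_cloud)
direct = 2 * user_to_cloud

print(f"via VPN hub: {hairpinned} ms round trip")   # 140 ms
print(f"direct:      {direct} ms round trip")       # 50 ms
print(f"added latency: {hairpinned - direct} ms")   # 90 ms
```

Even before counting encryption time, the detour through the datacenter nearly triples the round trip in this sketch, which is exactly the convoluted path the paragraph above describes.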
An important issue with VPNs is that they provide overly broad access to the entire network without the option of controlling granular user access to specific resources. Stolen VPN credentials have been implicated in several high-profile data breaches. By using legitimate credentials and connecting through a VPN, attackers were able to infiltrate and move freely through targeted company networks. What’s more, there is no scrutiny of the security posture of the connecting device, which could allow malware to enter the network via insecure user devices.
Another option for providing remote access for home-based workers is appliance-based SD-WAN. It brings a level of intelligence to the connectivity that VPNs don’t have. Lee Doyle, principal analyst with Doyle Research, outlines the benefits of using SD-WAN to connect home office users to their enterprise network:
One thing to consider about appliance-based SD-WAN is that it’s primarily designed for branch office connectivity—though it can accommodate individual users at home as well. However, if a company isn’t already using SD-WAN, this isn’t a technology that is easy to implement and set up for hundreds or thousands of home-based users. What’s more, a significant investment must be made in the various communication and security appliances.
Cato’s Secure Access Service Edge (or SASE) platform provides a great alternative to VPN for remote access by many simultaneous workers. The platform offers scalable access, optimized connectivity, and integrated threat prevention that are needed to support continuous large-scale remote access.
Companies that enable WFH using Cato’s platform can scale quickly to any number of remote users with ease. There is no need to set up regional hubs or VPN concentrators. The SASE service is built on top of dozens of globally distributed Points of Presence (PoPs) maintained by Cato to deliver a wide range of security and networking services close to all locations and users. The complexity of scaling is all hidden in the Cato-provided PoPs, so there is no infrastructure for the organization to purchase, configure or deploy. Giving end users remote access is as simple as installing a client agent on the user’s device, or by providing clientless access to specific applications via a secure browser.
Cato’s SASE platform employs Zero Trust Network Access in granting users access to the specific resources and applications they need to use. This granular-level security is part of the identity-driven approach to network access that SASE demands. Since all traffic passes through a full network security stack built into the SASE service, multi-factor authentication, full access control, and threat prevention are applied to traffic from remote users. All processing is done within the PoP closest to the users while enforcing all corporate network and security policies. This eliminates the “trombone effect” associated with forcing traffic to specific security choke points on a network. Further, admins have consistent visibility and control of all traffic throughout the enterprise WAN.
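The contrast with a VPN's all-or-nothing network access can be sketched in a few lines. This is not Cato's actual API, just a hypothetical illustration of identity-driven, per-resource authorization with multi-factor authentication enforced on every request:

```python
# Hypothetical ZTNA-style check: access is granted per identity and per
# resource, unlike a VPN, which admits a credentialed user to the whole network.
POLICY = {
    "alice": {"crm", "wiki"},  # resources each identity may reach
    "bob": {"wiki"},
}

def authorize(user: str, resource: str, mfa_passed: bool) -> bool:
    """Grant access only if MFA succeeded AND the policy names the resource."""
    return mfa_passed and resource in POLICY.get(user, set())

print(authorize("alice", "crm", mfa_passed=True))    # allowed by policy
print(authorize("bob", "crm", mfa_passed=True))      # not in bob's policy
print(authorize("alice", "crm", mfa_passed=False))   # MFA required
```

Under this model, stolen credentials alone expose at most the resources named in one identity's policy, rather than the entire network a VPN tunnel would open up.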
While some workers are venturing back to their offices, many more are still working from home—and may work from home permanently. The Cato SASE platform is the ideal way to give them access to their usual network environment without forcing them to go through insecure and inconvenient VPNs.
Originally posted here
Today the world is obsessed with the IoT, as if this is a new concept. We've been building the IoT for decades, but it was only recently some marketing "genius" came up with the new buzz-acronym.
Before there was an IoT, before there was an Internet, many of us were busy networking. For the Internet itself was a (brilliant) extension of what was already going on in the industry.
My first experience with networking was in 1971 at the University of Maryland. The school had a new computer, a $10 million Univac 1108 mainframe. This was a massive beast that occupied most of the first floor of a building. A dual-processor machine, it was transistorized, though the control console did have some ICs. Rows of big tape drives mirrored the layman's idea of computers in those days. Many dishwasher-sized disk drives were placed around the floor, and printers, card readers and other equipment were crammed into every corner. Two Fastrand drum memories, each consisting of a pair of six-foot-long counterrotating drums, stored a whopping 90 MB each. Through a window you could watch the heads bounce around.
The machine was networked. It had a 300 baud modem with which it could contact computers at other universities. A primitive email system let users create mail which was queued till nightfall. Then, when demands on the machine were small, it would call the appropriate remote computer and forward mail. The system operated somewhat like today's "hot potato" packets, where the message might get delivered to the easiest machine available, which would then attempt further forwarding. It could take a week to get an email, but at least one saved the $0.08 stamp that the USPS charged.
The system was too slow to be useful. After college I lost my email account but didn't miss it at all.
By the late 70s many of us had our own computers. Mine was a home-made CP/M machine with a Z80 processor and a small TV set as a low-res monitor. Around this time Compuserve came along and I, like so many others, got an account with them. Among other features, users had email addresses. Pretty soon it was common to dial into their machines over a 300 baud modem and exchange email and files. Eventually Compuserve became so ubiquitous that millions were connected, and at my tools business during the 1980s it was common to provide support via this email. The CP/M machine gave way to a succession of PCs, and modems ramped up to 57K baud.
My tools business expanded rapidly and soon we had a number of employees. Sneakernet was getting less efficient so we installed an Arcnet network using Windows 3.11. That morphed into Ethernet connections, though the cursing from networking problems multiplied about as fast as the data transfers. Windows was just terrible at maintaining reliable connectivity.
In 1992 Mike Lee, a friend from my Boys Night Out beer/politics/sailing/great friends group, which still meets weekly (though lately virtually), came by the office with his laptop. "You have GOT to see this" he intoned, and he showed me the world-wide web. There wasn't much to see as there were few sites. But the promise was shockingly clear. I was stunned.
The tools business had been doing well. Within a month we spent $100k on computers, modems and the like and had a new business: Softaid Internet Services. SIS was one of Maryland's first ISPs and grew quickly to several thousand customers. We had a T1 connection to MAE-EAST in the DC area which gave us a 1.5 Mb/s link… for $5000/month. Though a few customers had ISDN connections to us, most were dialup, and our modem shelf grew to over 100 units with many big fans keeping the things cool.
The computers all ran BSD Unix, which was my first intro to that OS.
I was only a few months back from a failed attempt to singlehand my sailboat across the Atlantic and had written a book-length account of that trip. I hastily created a web page of that book to learn about using the web. It is still online and has been read several million times in the intervening years. We put up a site for the tools business which eventually became our prime marketing arm.
The SIS customers were sometimes, well, "interesting." There was the one who claimed to be a computer expert, but who tried to use the mouse by waving it around over the desk. Many had no idea how to connect a modem. Others complained about our service because it dropped out when mom would pick up the phone to make a call over the modem's beeping. A lot of handholding and training was required.
The logs showed a shocking (to me at the time) amount of porn consumption. Over lunch an industry pundit explained how porn drove all media, from the earliest introduction of printing hundreds of years earlier.
The woman who ran the ISP was from India. She was delightful and had a wonderful marriage. She later told me it had been arranged; they met on their wedding day. She came from a remote and poor village and had had no exposure to computers, or even electricity, till emigrating to the USA.
Meanwhile many of our tools customers were building networking equipment. We worked closely with many of them and often had big routers, switches and the like onsite that our engineers were working on. We worked on a lot of what we'd now call IoT gear: sensors et al connected to the net via a profusion of interfaces.
I sold both the tools and Internet businesses in 1997, but by then the web and Internet were old stories.
Today, like so many of us, I have a fast (250 Mb/s) and cheap connection into the house with four wireless links and multiple computers chattering to each other. Where in 1992 the web was incredibly novel and truly lacking in useful functionality, now I can't imagine being deprived of it. Remember travel agents? Ordering things over the phone (a phone that had a physical wire connecting it to Ma Bell)? Using 15 volumes of an encyclopedia? Physically mailing stuff to each other?
As one gets older the years spin by like microseconds, but it is amazing to stop and consider just how much this world has changed. My great grandfather lived on a farm in a world that changed slowly; he finally got electricity in his last year of life. His daughter didn't have access to a telephone till later in life, and my dad designed spacecraft on vellum and starched linen using a slide rule. My son once saw a typewriter and asked me what it was; I mumbled that it was a predecessor of Microsoft Word.
That he understood. I didn't have the heart to try and explain carbon paper.
Originally posted HERE.