

The world is flooded with digital innovation, and technologies like IoT, 5G wireless networks and embedded AI continue to increase the pace of change. Millions of apps are coming online to monitor, measure, process, analyze, and react to a seemingly endless storm of data, making the growth of IoT as explosive as it is impressive. We are all aware that the Internet of Things relies heavily on cloud technology, not only to store the large amounts of data collected from sensors but also to process it.

What is Fog computing?

In simple words, Fog computing is a system-level horizontal architecture that distributes resources and services of computing, storage, control and networking anywhere along the continuum from Cloud to Things. It can be summarized as:

Horizontal architecture- Supports multiple industry verticals and application domains, delivering intelligence and services to users and businesses

Cloud-to-thing continuum of services- Enables services and applications to be distributed closer to Things, and anywhere along the continuum between Cloud and Things

System-level- Extends from the Things, over the network edges, through the Cloud, and across multiple protocol layers – not just radio systems, not just a specific protocol layer, not just one part of an end-to-end system, but a system spanning the Things and the Cloud

Its key benefits include:

  • Ultra-low latency
  • Business agility
  • Added security
  • Real-time analytics
  • Reduced costs
  • Less bandwidth and network load

Have you ever wondered how the fog architecture leverages and extends edge capabilities? Here’s the answer:

Compute Distribution and Load Balancing- Many edge architectures employ a strategy of placing servers, apps or small clouds at the edge. Fog provides a broader system-level architecture that also incorporates tools for distributing, orchestrating, managing and securing resources and services across networks. This provides a good balance of sophisticated computation, networking and storage capabilities, and support for heterogeneous environments on any node (e.g., CPUs, GPUs, FPGAs, and DSPs for computing).

Hierarchical networking- Edge is often optimized for a single type of network resource at the network edges, such as edge gateways, routers, switches, or licensed spectrum wireless networks. Fog supports a physical and logical network hierarchy of multiple levels of cooperating nodes, supporting distributed applications. Fog nodes extend the edge with support for north-south, east-west and diagonal connectivity, including interfaces between edge and cloud. This could include, for example, analytics algorithms distributed up and down a hierarchy of nodes, or massively parallel applications that concurrently run on large peer groups of processors or highly distributed storage systems.

Universal Orchestration & Management- Edge orchestration and management are sometimes derived from specific legacy vertical practices, such as mobile network orchestration managed by the carrier. In these situations, the edge may deliver cloud capabilities but without orchestration for connecting edge nodes. Fog orchestration and management are intended to be more universal, modern, and automated. Fog orchestration enables resource pooling and permits interactions and collaborations between fog nodes at the same layer and at different layers of the hierarchy, which helps performance, fault tolerance, load distribution and load balancing. Fog network management handles life-cycle management through a distributed service orchestration layer in each fog node. The fog architecture essentially brings together the IT (information technology), OT (operational technology) and CT (communications technology) approaches.

Modular Architecture with Multiple Access Modes- Edge deployments are typically based on gateways with fixed functionality. Edge architectures favor one specific access network, such as either wireless or wireline. Fog has a highly modular hardware and software architecture, permitting every fog node to be equipped with exactly the resources its applications need, that can be dynamically configured. Fog embraces both the licensed and unlicensed wireless spectrum, as well as copper and fiber wireline modes.

Reliability and resiliency- Fog architectures are designed for reliability, supporting many fault tolerance, network resiliency, and fully autonomous emergency operation scenarios. By contrast, if an edge device goes down, the services it supports will often fail.

Security and privacy- The vertical, application-specific, multi-vendor nature of the edge may offer uneven security protection. Fog, on the other hand, requires every fog node to include a high-assurance implementation of its Trusted Computing Base, using secure hardware or hardware-supported security mechanisms, along with mandatory mission-critical-class protection of communication and computation.

Virtualization Support- Fog supports virtualization and uses enterprise and web-scale models. This provides hardware virtualization at each node level and allows loads to be moved from one node to an adjacent node if the node is down or overloaded. Edge computing looks at virtualization mainly from the perspective of distributing computing resources in a local manner per server.


Read more…

IoT Gateway- Enabling Edge and Fog Computing

Connected devices are becoming essential components for enterprises as they can drive significant connectivity and integration between systems and data. The increasing number of devices getting connected to each other generates a huge amount of data.

However, when it comes to leveraging the full potential of these connected devices and data, it is necessary to have a scalable and robust environment which allows faster processing of data between systems.

The fundamental concern is how to efficiently manage this data, as any data loss or delay in processing data from a connected ecosystem can cause critical damage to an enterprise’s workflow.

Role of IoT gateway edge analytics in data processing & management

The IoT gateway is the key to any IoT deployment. It is a bridge between IoT devices and the cloud that enables remote control of devices and machines. The increasing number of devices propels the requirement for IoT gateways to solve data management issues with edge analytics.

Edge analytics with an IoT gateway allows data to be processed before it is transmitted to the cloud. The gateway collects all the data from the connected devices, executes the necessary algorithms or rule engine on it, and sends actionable commands back to the connected devices. These actions allow responses to be taken in real time and also enable a self-healing mechanism during faults/errors.
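The gateway flow described above can be sketched as a minimal rule engine. The device IDs, thresholds and command names below are purely illustrative assumptions, not any vendor's API:

```python
# Hypothetical gateway-side rule engine: readings are evaluated locally, a
# corrective command goes straight back to the device, and only out-of-range
# events are queued for the cloud.

def evaluate(reading, low=10.0, high=80.0):
    """Return (action_for_device, payload_for_cloud) for one sensor reading."""
    value = reading["value"]
    if value > high:
        return ("shutdown", reading)   # act locally, in real time
    if value < low:
        return ("restart", reading)    # self-healing on faults
    return (None, None)                # normal reading: nothing leaves the gateway

def process_batch(readings):
    commands, to_cloud = [], []
    for r in readings:
        action, payload = evaluate(r)
        if action:
            commands.append((r["device_id"], action))
            to_cloud.append(payload)
    return commands, to_cloud

commands, to_cloud = process_batch([
    {"device_id": "m1", "value": 95.2},
    {"device_id": "m2", "value": 42.0},
    {"device_id": "m3", "value": 3.1},
])
```

Only the two anomalous readings are forwarded upstream; the in-range reading never consumes bandwidth.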

In large enterprises spread across multiple geographies, there are huge numbers of connected devices and large volumes of generated data. This heterogeneous data, distributed at different levels (devices and machines), suffers high latency when transferred to the cloud because of the uncontrolled data flow. Here, distributed edge analytics is the solution, as it allows faster data transfer and processing, resulting in reduced latency.

AWS Greengrass is a prominent example of an edge analytics setup. It allows enterprises to run local compute, messaging, data caching, sync, and ML inference capabilities for connected devices in a secure way. Greengrass ensures that IoT devices respond quickly to local events, which reduces the cost of transmitting IoT data to the cloud.

How distributed edge analytics works in larger geographical areas

Let’s take an example of smart grids to understand the concept in-detail.

Smart grids combine smart meters, smart appliances, renewable energy resources, energy-efficient resources, and substations. In a given city area, the number of smart meters roughly equals the number of households. This AMI (Advanced Metering Infrastructure) continuously collects energy consumption data and routes it to IoT gateways. The gateway performs edge analytics, and the processed data is then rerouted to the cloud.

As the number of AMI is high in a particular area, the number of gateways will be proportionately higher.

Merits of distributed edge analytics:

  • Reduced data transfer latency
  • Fast access to the faulty areas
  • Quick functional recovery and self-healing capabilities that bring resilience to the system

Distributed edge analytics also enables fast response to the cloud in case of faults and failures with Fog Computing so that the recovery time can be minimal. Let us understand how.

How fog computing works with smart grids for faster data processing

Fog computing combines the two key components of data processing: Edge and Cloud. Combining edge computing with more complex cloud computing results in more reliable and faster data processing.

As smart grid technology advances rapidly, fog computing is well suited to data and information processing between consumers, grid operators, and energy providers.

In the edge analytics concept, the gateways form a mesh network. The individual mesh network of a designated area creates Fog Nodes. Each fog node is connected to each other, resulting in a fog network of smart meters and IoT gateways in the larger setups. The combination of these fog nodes then allows distributed fog computing, which gives the benefit of fast and real-time data analysis in any large geographical area. This further enables faster fault response time.
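A rough sketch of the peer-to-peer idea: each fog node holds links to its peers, and a message finds the shortest path through the mesh. The node names and topology below are hypothetical, invented only for illustration:

```python
from collections import deque

# Hypothetical mesh of fog nodes (clusters of gateways); each entry lists
# that node's peer-to-peer links.
mesh = {
    "node_a": ["node_b", "node_c"],
    "node_b": ["node_a", "node_d"],
    "node_c": ["node_a", "node_d"],
    "node_d": ["node_b", "node_c"],
}

def route(mesh, src, dst):
    """Breadth-first search for the shortest peer-to-peer path between fog nodes."""
    queue, seen = deque([[src]]), {src}
    while queue:
        path = queue.popleft()
        if path[-1] == dst:
            return path
        for peer in mesh.get(path[-1], []):
            if peer not in seen:
                seen.add(peer)
                queue.append(path + [peer])
    return None  # destination unreachable in this mesh
```

Because every node can reach every other node through peers, a failed link or node only removes one of several possible paths, which is where the fault-tolerance benefit comes from.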

Use case of smart grids in distributed edge analytics

eInfochips developed a solution in which gateways are connected into a mesh network with peer-to-peer communication. The mesh and cluster of gateways enable high availability and reliability of the IoT deployment in smart grids. Clustering enables distributed edge analytics, and these distributed edge nodes process data at the edge before transferring it to the cloud.

According to market research by MarketsandMarkets, the fog computing market is growing at an attractive compound annual growth rate (CAGR) of 55.6% between 2017 and 2022.

With our edge and fog computing expertise, we help IoT solution providers optimise their computing infrastructure by intelligently distributing load between the cloud and edge devices, through our ready-to-use dynamic rule engine or custom solutions.

Read more…

Fog Computing is a slippery concept. It combines two critical components of data computing today, Edge and Cloud computing, into a system that leverages the strength – and necessity – of both. This idea of local computing (the Edge) combined with more complicated analytics engines (the Cloud) opens up a world of possibilities for data communications.

Fog Computing & Emergency Response

Earlier this fall, researchers at Georgia Tech looked at the application of Fog Computing in areas struck by natural disasters. In these areas, traditional means of internet connection are often knocked out of commission, leaving rescuers and victims unable to communicate with one another, even though there are many apps designed to help facilitate rescue. Where Fog Computing comes in is that rather than relying on a direct connection to the internet, different Fog nodes can be leveraged to create an ad hoc network that can still send basic messages:

However, one important advantage of a fog system is that messages can be distributed between a broad network of computers through temporary ad hoc connections, even without live internet connections.

The geo-distributed network of fog nodes, which could be phones, tablets or any device part of the Internet of Things, could generate communication channels in areas where there were none before, allowing the creation of population density maps in flooded areas.

Another application would allow users to check the fog network to see if their family members are safe after a crisis event.
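The ad hoc distribution the researchers describe can be illustrated with a simple message-flooding sketch; the device names and temporary links are invented for the example:

```python
# Sketch of flooding a basic message through temporary ad hoc fog connections
# when no internet uplink exists. A message reaches every device transitively
# linked to the origin; devices outside radio range stay unreached.

def flood(links, origin):
    """Return the set of nodes a message reaches via temporary peer links."""
    reached, frontier = {origin}, [origin]
    while frontier:
        node = frontier.pop()
        for peer in links.get(node, []):
            if peer not in reached:
                reached.add(peer)
                frontier.append(peer)
    return reached

# Hypothetical post-disaster topology: three devices in range of each other,
# one isolated.
links = {
    "phone_1": ["tablet_1"],
    "tablet_1": ["phone_1", "phone_2"],
    "phone_2": ["tablet_1"],
    "phone_3": [],
}
```

Counting the reached set per area is essentially how the population-density maps mentioned above could be built up from node-to-node hops alone.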

Fog Computing applied in this setting is applicable around the world, as we are reminded daily of both the ubiquity and fragility of wireless communications against the whims of nature.

Smart Grids Need Fog Computing

Across the globe, more and more countries are jumping into smart grid deployments. The good side is that smart energy tools are critical to managing resources. The bad side is that most are not sufficiently developed with the necessary security infrastructure in place. When considering the rapid development of smart grid tech, Fog Computing quickly comes up as a viable tool for ensuring reliable data communication and information transfer between consumers, grid operators and larger energy providers. The Open Fog Consortium, a global Fog Computing group comprised of technology and academic thought leaders, has formed Resilient Information Architecture Platform for Smart Grid (RIAPS), a project aimed at developing software for Fog Computing platforms:

RIAPS is very different from conventional platforms as it was designed for inherently distributed and decentralized applications. An application is composed of interconnected real-time software components (similar to micro-services) that can be event- and/or time-triggered and that interact via well-defined communication patterns, including publish/subscribe and synchronous and asynchronous service invocations. Such components are location transparent and agnostic about the underlying messaging framework.

Although the project is based out of Vanderbilt University, in the United States, the repercussions will be felt throughout the world.

Is Fog Computing the Final Answer?

While Fog Computing has yet to be standardized and applied across the wide range of IoT technologies out in the field today, its ability to combine both local and Cloud data analytics is something that can have an impact in both the consumer and the Industrial IoT. However, the first adapters, companies that play in IIoT settings, will be largely responsible for driving the growth of this concept moving forward into the future.


Read more…

As if the Internet of Things (IoT) was not complicated enough, the marketing team at Cisco introduced its Fog Computing vision in January 2014, also known as Edge Computing to other, more purist vendors.

Given Cisco’s frantic activity in its Internet of Everything (IoE) marketing campaigns, it is not surprising that many bloggers have resorted to sensational headlines on this subject, taking advantage of the IoT hype.

I hope this post helps you better understand the role of Fog Computing in the IoT Reference Model, and how companies are using IoT intelligent gateways in the Fog to connect the "Things" to the Cloud, through some application areas and examples of Fog Computing.

The problem with the cloud

As the Internet of Things proliferates, businesses face a growing need to analyze data from sources at the edge of a network, whether mobile phones, gateways, or IoT sensors. Cloud computing has a disadvantage: It can’t process data quickly enough for modern business applications.

The IoT owes its explosive growth to the connection of physical things and operational technologies (OT) to analytics and machine learning applications, which can help glean insights from device-generated data and enable devices to make “smart” decisions without human intervention. Currently, such resources are mostly provided by cloud service providers, where the computation and storage capacity exists.

However, despite its power, the cloud model is not applicable to environments where operations are time-critical or internet connectivity is poor. This is especially true in scenarios such as telemedicine and patient care, where milliseconds can have fatal consequences. The same can be said about vehicle to vehicle communications, where the prevention of collisions and accidents can’t afford the latency caused by the roundtrip to the cloud server.

“The cloud paradigm is like having your brain command your limbs from miles away — it won’t help you where you need quick reflexes.”

Moreover, having every device connected to the cloud and sending raw data over the internet can have privacy, security and legal implications, especially when dealing with sensitive data that is subject to separate regulations in different countries.

IoT nodes are closer to the action, but for the moment, they do not have the computing and storage resources to perform analytics and machine learning tasks. Cloud servers, on the other hand, have the horsepower, but are too far away to process data and respond in time.

The fog layer is the perfect junction where there are enough compute, storage and networking resources to mimic cloud capabilities at the edge and support the local ingestion of data and the quick turnaround of results.

The variety of IoT systems and the need for flexible solutions that respond to real-time events quickly make Fog Computing a compelling option.

Fog Computing: oh my god, another layer in IoT!

A study by IDC estimates that by 2020, 10 percent of the world’s data will be produced by edge devices. This will further drive the need for more efficient fog computing solutions that provide low latency and holistic intelligence simultaneously.

“Computing at the edge of the network is, of course, not new -- we've been doing it for years to solve the same issue with other kinds of computing.”

Fog Computing, or Edge Computing, is a paradigm championed by some of the biggest IoT technology players, including Cisco, IBM, and Dell. It represents a shift in architecture in which intelligence is pushed from the cloud to the edge, localizing certain kinds of analysis and decision-making.

Fog Computing enables quicker response times, unencumbered by network latency, as well as reduced traffic, selectively relaying the appropriate data to the cloud.

The concept of Fog Computing attempts to transcend some of these physical limitations. With Fog Computing, processing happens on nodes physically closer to where the data is originally collected, instead of sending vast amounts of IoT data to the cloud.


The OpenFog Consortium

The OpenFog Consortium was founded on the premise that open architectures and standards are essential to the success of a ubiquitous Fog Computing ecosystem.

The collaboration among tech giants such as ARM, Cisco, Dell, GE, Intel, Microsoft and Schneider Electric to define an open, interoperable Fog Computing architecture is without doubt good news for a vibrant supplier ecosystem.

The OpenFog Reference Architecture is an architectural evolution from traditional closed systems and the burgeoning cloud-only models to an approach that emphasizes computation nearest the edge of the network when dictated by business concerns or the critical functional requirements of the application.

The OpenFog Reference Architecture consists of putting micro data centers or even small, purpose-built high-performance data analytics machines in remote offices and locations in order to gain real-time insights from the data collected, or to promote data thinning at the edge, by dramatically reducing the amount of data that needs to be transmitted to a central data center. Without having to move unnecessary data to a central data center, analytics at the edge can simplify and drastically speed analysis while also cutting costs.
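The data-thinning idea can be sketched in a few lines: the edge node collapses each window of raw samples into one summary record before anything crosses the network. The window size and summary fields are assumptions chosen for illustration:

```python
# Hypothetical data-thinning step at the edge: instead of shipping every raw
# sample to the central data center, forward one (min, max, mean) summary per
# window of samples.

def thin(samples, window=60):
    """Collapse each window of raw samples into a single summary record."""
    summaries = []
    for i in range(0, len(samples), window):
        chunk = samples[i:i + window]
        summaries.append({
            "min": min(chunk),
            "max": max(chunk),
            "mean": sum(chunk) / len(chunk),
        })
    return summaries

raw = [20.0] * 58 + [21.0, 25.0]   # e.g. one minute of readings at 1 Hz
summary = thin(raw)                 # 60 raw values reduced to 1 record
```

Here a minute of one-per-second readings becomes a single record, a 60x reduction in what must be transmitted, while the spike to 25.0 is still visible in the `max` field.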

Benefits of Fog Computing

  • Frees up network capacity - Fog computing uses much less bandwidth, so it doesn't cause bottlenecks and similar congestion. Less data movement on the network frees up capacity that can be used for other things.
  • It is truly real-time - Fog computing offers lower latency than the cloud computing architectures we know today. Since data analysis is done on the spot, it is a true real-time concept, a perfect match for the needs of the Internet of Things.
  • Boosts data security - Collected data is more secure when it doesn't travel. It also makes data storage simpler, because data stays in its country of origin; sending data abroad might violate certain laws.
  • Analytics is done locally - Fog computing enables developers to access the most important IoT data from other locations, while keeping piles of less important information in local storage.

Disadvantages of Fog Computing

  • Some companies don't like their data being outside their premises - With Fog Computing, lots of data is stored on the devices themselves (often located outside company offices), which part of the developer community perceives as a risk.
  • The whole system can sound confusing - A concept involving a huge number of devices, located all around the world, each storing, analyzing and sending its own data, can sound utterly confusing.

Read more:

Examples of Fog Computing

The applications of fog computing are many, and it is powering crucial parts of IoT ecosystems, especially in industrial environments. See below some use cases and examples.

  • Thanks to the power of fog computing, New York-based renewable energy company Envision has been able to obtain a 15 percent productivity improvement from the vast network of wind turbines it operates. The company is processing as much as 20 terabytes of data at a time, generated by 3 million sensors installed on the 20,000 turbines it manages. Moving computation to the edge has enabled Envision to cut down data analysis time from 10 minutes to mere seconds, providing them with actionable insights and significant business benefits.
  • Plat One is another firm using fog computing to improve data processing for the more than 1 million sensors it manages. The company uses the Cisco-ParStream platform to publish real-time sensor measurements for hundreds of thousands of devices, including smart lighting and parking, port and transportation management and a network of 50,000 coffee machines.
  • In Palo Alto, California, a $3 million project will enable traffic lights to integrate with connected vehicles, hopefully creating a future in which people won’t be waiting in their cars at empty intersections for no reason.
  • In transportation, it’s helping semi-autonomous cars assist drivers in avoiding distraction and veering off the road by providing real-time analytics and decisions on driving patterns.
  • It also can help reduce the transfer of gigantic volumes of audio and video recordings generated by police dashboard and video cameras. Cameras equipped with edge computing capabilities could analyze video feeds in real time and only send relevant data to the cloud when necessary.

See more at: Why Edge Computing Is Here to Stay: Five Use Cases By Patrick McGarry  

What is the future of fog computing?

The current trend shows that fog computing will continue to grow in usage and importance as the Internet of Things expands and conquers new grounds. With inexpensive, low-power processing and storage becoming more available, we can expect computation to move even closer to the edge and become ingrained in the same devices that are generating the data, creating even greater possibilities for inter-device intelligence and interactions. Sensors that only log data might one day become a thing of the past.

Janakiram MSV wondered whether Fog Computing will be the next big thing in the Internet of Things. It seems obvious that while the cloud is a perfect match for the Internet of Things, there are other scenarios and IoT solutions that demand low-latency ingestion and immediate processing of data, where Fog Computing is the answer.

Does the fog eliminate the cloud?

Fog computing improves efficiency and reduces the amount of data that needs to be sent to the cloud for processing. But it’s here to complement the cloud, not replace it.

The cloud will continue to have a pertinent role in the IoT cycle. In fact, with fog computing shouldering the burden of short-term analytics at the edge, cloud resources will be freed to take on the heavier tasks, especially where the analysis of historical data and large datasets is concerned. Insights obtained by the cloud can help update and tweak policies and functionality at the fog layer.

And there are still many cases where the centralized, highly efficient computing infrastructure of the cloud will outperform decentralized systems in performance, scalability and costs. This includes environments where data needs to be analyzed from largely dispersed sources.

“It is the combination of fog and cloud computing that will accelerate the adoption of IoT, especially for the enterprise.”

In essence, Fog Computing allows for big data to be processed locally, or at least in closer proximity to the systems that rely on it. Newer machines could incorporate more powerful microprocessors, and interact more fluidly with other machines on the edge of the network. While fog isn’t a replacement for cloud architecture, it is a necessary step forward that will facilitate the advancement of IoT, as more industries and businesses adopt emerging technologies.

'The Cloud' is not Over

Fog computing is far from a panacea. One of the immediate costs associated with this method pertains to equipping end devices with the necessary hardware to perform calculations remotely and independent of centralized data centers. Some vendors, however, are in the process of perfecting technologies for that purpose. The tradeoff is that by investing in such solutions immediately, organizations will avoid frequently updating their infrastructure and networks to deal with ever increasing data amounts as the IoT expands.

There are certain data types and use cases that actually benefit from centralized models. Data that carries the utmost security concerns, for example, will require the secure advantages of a centralized approach or one that continues to rely solely on physical infrastructure.

Though the benefits of Fog Computing are undeniable, the Cloud has a secure future in IoT for most companies with less time-sensitive computing needs and for analysing all the data gathered by IoT sensors.


Thanks in advance for your Likes and Shares

Thoughts? Comments?

Read more…

How IoT can benefit from fog computing

fog computing

By Ben Dickson. This article originally appeared here.

What I’m mentioning a lot these days (and hearing about as well) is the chaotic propagation and growth of the Internet of Things. With billions of devices slated to connect to the internet every year, we’re going to be facing some serious challenges. I’ve already discussed how blockchain technology might address connectivity issues for huge IoT ecosystems.

But connectivity accounts for a small part of the problems we’ll be facing. Another challenge will be processing and making sense of the huge reams of data that IoT devices are generating. Close on its heels will be the issue of latency or how fast an IoT system can react to events. And as always, security and privacy issues will remain one of the top items in the IoT challenge list.

Fog computing (aka edge computing) can help mitigate – if not overcome – these challenges. As opposed to the cloud, where all the computation takes place in a central location, fog computing pushes the computation of tasks toward the edge of the network and distributes it among smart routers or gateways. The term and concept were coined by networking giant Cisco even before the IoT became a buzzword, but it was the advent of the Internet of Things that provided it with true, legitimate use cases.

Here are some of the domains where fog computing can deal with the challenges of IoT.

Computation and data processing

Naturally, computation problems will be one of the main reasons we’ll descend from the cloud and wade into the fog. A problem lying ahead of us is the sheer amount of computation and data processing that IoT ecosystems will require.

With Machine-to-Machine (M2M) communications accounting for most of the exchanges in IoT ecosystems, the amount of traffic generated will be incomparable to what we’re used to dealing with in human-machine settings. Pushing all of these tasks to the cloud would overburden centralized computation nodes and require bigger and stronger cloud servers.

The cloud is best known for its huge storage and analytics capacities. Meanwhile, many of the tasks and events that take place in IoT ecosystems do not require such capabilities and sending them to the cloud will be a waste of precious resources and will only bog down servers and prevent them from performing their more critical duties.

Fog computing can address this issue. Small computational tasks can be performed at the edge (IoT gateways and routers), while valuable data can continue to be pushed to the cloud. This way, precious cloud resources can be saved for more suitable tasks such as big data analysis and pattern recognition. Reciprocally, the functionality and policies of edge devices can be altered and updated based on insights gained from cloud analytics.
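As a toy illustration of that split, a dispatcher might route each task to the edge or the cloud. The task fields and the 100 ms threshold are invented for the example, not a standard schema:

```python
# Hypothetical edge/cloud dispatcher: time-critical work stays at the edge,
# heavy analytics go to the cloud, and everything else defaults to the edge
# to save bandwidth.

def dispatch(task):
    """Return 'edge' or 'cloud' for a task described as a dict."""
    if task.get("deadline_ms", float("inf")) < 100:
        return "edge"    # can't afford the round trip to the cloud
    if task.get("cpu_heavy"):
        return "cloud"   # big data analysis, pattern recognition
    return "edge"        # default: keep it local
```

Note the ordering: a tight deadline wins even for heavy tasks, matching the point that time-critical decisions must not wait on the cloud.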

This model will also help address response time and latency issues, which is discussed next.

Response times and latency

Rather than requiring huge computational resources, many of the transactions and decisions made in IoT systems are time-critical. Imagine a telemedicine scenario, or an IoT-powered hospital, where seconds and milliseconds can make a difference to patients’ health or lives. The same can be said of industrial settings and work areas, where a quick response can prevent or mitigate damage and safety issues. A simpler example would be parking lights that have to respond to the passage of cars and pedestrians, and must do so in a timely fashion.

Other settings that require large bandwidth, such as IoT ecosystems involving many CCTV cameras, would also be hard to deploy in environments that have limited connectivity if they rely on cloud computation.

In many cases, it’s funny (and outright ridiculous) that two devices standing a few feet apart have to go through the internet and the cloud to exchange simple messages. It’s even more ridiculous to have to cope with the fact that your fridge and toaster don’t work because they’re disconnected from the internet.

A roundtrip to the cloud can sometimes take seconds – or even minutes in poorly connected areas – which is more than can be afforded in many of these scenarios. Meanwhile, at the edge, IoT ecosystems can make decisions at lightning speed, making sure that everything gets responded to in time.

A study by IDC FutureScape shows that by 2018, some 40 percent of IoT-created data will be stored, analyzed and processed at the edge.

Security and privacy

As Phantom CEO Ken Tola mentioned in a previous post, encryption isn’t a panacea for IoT security problems. And as a study by LGS Innovations showed earlier, hackers don’t necessarily need to crack your encrypted communications in order to carry out their evil deeds. In fact, just eavesdropping on your IoT internet traffic – whether it’s encrypted or not – will provide malicious actors with plenty of useful information, e.g. giving away your living habits.

Moreover, some forms of attack, such as replay attacks, don’t require the attacker to have access to encryption keys. All they need to do is replicate packets that are being exchanged on the network. For instance, with a good bit of network monitoring, an attacker might figure out which sequence of packets unlocks your home’s smart lock.
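One common mitigation can be sketched as follows: authenticate each packet together with a monotonically increasing counter, so that a replayed copy is rejected even though its contents are valid. The shared key and message format here are placeholders, not any real device's protocol:

```python
import hashlib
import hmac

# Placeholder key for illustration only; a real deployment provisions
# per-device secrets.
KEY = b"demo-shared-key"

def sign(counter, payload):
    """Build an authenticated packet carrying a monotonic counter."""
    msg = f"{counter}:{payload}".encode()
    return msg, hmac.new(KEY, msg, hashlib.sha256).hexdigest()

def accept(state, msg, tag):
    """Reject packets with a bad MAC or a counter we've already seen."""
    expected = hmac.new(KEY, msg, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(tag, expected):
        return False                 # forged or corrupted
    counter = int(msg.split(b":", 1)[0])
    if counter <= state["last"]:
        return False                 # replayed or stale packet
    state["last"] = counter
    return True

state = {"last": 0}
msg, tag = sign(1, "unlock")
```

The first delivery of the "unlock" packet is accepted; an attacker replaying the identical bytes is refused because the counter no longer advances.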

Of course, there are ways to mitigate each of these threats, but robust security practices aren’t the greatest strength of IoT device manufacturers, which is why we’re seeing all these spooky IoT hacks surface every week.

Fog computing will reduce many of these risks by considerably decreasing the amount of dependency on internet connections. Moving data and command exchange into the local area network will make it much harder for hackers to gain remote access to your data and devices. Moreover, with device-cloud exchanges no longer happening in real-time, it will be much harder to discern life and usage patterns by eavesdropping on your network.

Overcoming the challenges

Despite all the mentioned advantages, fog computing does have its own set of caveats and difficulties. For one thing, edge devices can’t match the power of the cloud in computing and analytics. This issue can be addressed by distributing the workload between the cloud and the fog. Edge devices such as smart routers and gateways can mimic cloud capabilities at the edge, making optimal use of their resources to respond to time-critical and lightweight tasks, while the heavier, analytics-intensive requests that don’t necessarily need to be carried out in real time can be sent to the cloud.

Meanwhile, edge software should be designed and developed with flexibility in mind. For instance, IoT gateway software that controls industrial equipment should be able to receive policy and function updates produced by machine learning solutions analyzing big data in the cloud.

Read more…

A smart, highly optimized distributed neural network, based on Intel Edison "Receptive" Nodes

Training ‘complex multi-layer’ neural networks is referred to as deep learning, as these multi-layer neural architectures interpose many neural processing layers between the input data and the predicted output results – hence the word deep in the deep-learning catchphrase.

While the training procedure of a large-scale network is computationally expensive, evaluating the resulting trained neural network is not, which explains why trained networks can be extremely valuable: they can very quickly perform complex, real-world pattern recognition tasks on a variety of low-power devices.

These trained networks can perform complex pattern recognition tasks for real-world applications ranging from real-time anomaly detection in Industrial IoT to energy performance optimization in complex industrial systems. The high-value, high accuracy recognition (sometimes better than human) trained models have the ability to be deployed nearly everywhere, which explains the recent resurgence in machine-learning, in particular in deep-learning neural networks.

These architectures can be efficiently implemented on Intel Edison modules to process information quickly and economically, especially in Industrial IoT applications.

Our architectural model is based on a proprietary algorithm, called Hierarchical LSTM, able to capture and learn the internal dynamics of physical systems simply by observing the evolution of related time series.

To train the system efficiently, we implemented a greedy, layer-based parameter optimization approach, so each device can train one layer at a time and send the encoded features to the upper-level device, to learn higher levels of abstraction of the signal dynamics.

Using Intel Edison modules as the layers' "core computing units", we can sustain higher sampling rates and frequent retraining near the system we are observing, without the need for a complex cloud architecture, sending just a small amount of encoded data to the cloud.
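The Hierarchical LSTM itself is proprietary and not shown here; the toy sketch below only illustrates the greedy, one-layer-per-device idea, where each level compresses its input window into fewer values before forwarding them upward. The window size and averaging "encoder" are stand-ins for the real learned layers:

```python
# Toy hierarchy: each device encodes its input window into a smaller feature
# vector and forwards only that upward, so the volume of data shrinks at
# every level on the way toward the cloud.

def encode_layer(series, window=4):
    """One device's layer: summarize each window into a single feature."""
    out = []
    for i in range(0, len(series), window):
        chunk = series[i:i + window]
        out.append(sum(chunk) / len(chunk))
    return out

def hierarchy(series, levels=2):
    """Stack layers: the output of one device feeds the device above it."""
    sizes = [len(series)]
    for _ in range(levels):
        series = encode_layer(series)
        sizes.append(len(series))
    return series, sizes

features, sizes = hierarchy(list(range(16)), levels=2)  # 16 -> 4 -> 1 values
```

Sixteen raw samples collapse to four encoded features and then to one, mirroring how only a small amount of encoded data needs to reach the cloud.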

Read more…