As the old adage goes, “while the cat’s away, the mice will play”. In the case of NB-IoT, “when the spec’s delayed, LPWAN will play”, which is exactly what’s happening in the Internet of Things market today. The problem is that 3GPP (the 3rd Generation Partnership Project), the standards body responsible for the 3G, 4G and 5G mobile standards, dropped the ball as far as the Internet of Things is concerned. Seduced by the slabs of black glass which suck up both our attention and the mobile networks’ spectrum, the 3GPP engineers totally forgot to design something to replace the old 2G workhorse of GPRS, which is responsible for most of today’s machine-to-machine communications. Instead, they spent all of their time designing high power, high speed, expensive variants of 4G to support an ongoing dynasty of iPhones, Galaxys and Pixels, none of which were any use for the Internet of Things.
Noticing this hole, a number of companies who had been developing proprietary, low cost, low speed, low power communication options saw an opportunity and created the Low Power WAN market. Whilst many perceived them as a group of Emperors with no clothes, the network operators were so desperate to have something to offer for upcoming IoT applications that they started engaging with them, rolling out LPWAN infrastructure. Whether they believed the LPWAN story, or just hoped it would fill a hole is difficult to ascertain, but no-one can deny that LPWAN is now firmly on the map, in the form of Sigfox, LoRa, Ingenu and a raft of others. To address that challenge to their hegemony, the GSM Association (GSMA) directed the 3GPP to assemble their own suit of imperial clothing which would be called the Narrow Band Internet of Things, or NB-IoT.
This is the story of why NB-IoT was too late, why it will fail in the short term, why it will win in the long term, and why the industry will struggle to make any money from it.
One of the most surprising aspects of this story is how long it took 3GPP and the network operators to realise that they had a problem. It’s not as if they didn’t see the problem coming. Back in 2010, Ericsson set the bar for much of the subsequent hype around the Internet of Things by making a very public prediction that by 2020 there would be 50 billion internet connected devices. They’ve subsequently downgraded that, but very few in the industry noticed – for them, it’s very difficult to discard the prospect of “tens of billions” once it’s made its way into their business plans. Numbers that big get attention in boardrooms, whether or not they mean anything – they just sound so good that they are assumed to be true.
What happened is that the industry became fixated on revenue today, rather than revenue tomorrow. As users embraced smartphones, their demand for data soared. When competing smartphone vendors made screens larger, mobile video took off, putting further pressure on network capacity. Everyone’s attention became focused on how to build enough capacity into their network to retain their users. Instead of calling for new standards for M2M and IoT, operators started concentrating on how they could use their existing spectrum more efficiently. There was an easy answer to this: turn off their old 2G networks and reuse that spectrum for 4G, which supported around 40 times as many users. It was only as they started to do this that they belatedly realised that they were euthanising the only technology they had which would support the Internet of Things. At which point the LPWAN industry stepped into the frame and started cutting deals. The GSMA panicked, and directed 3GPP to embark on the path to NB-IoT.
At this stage it’s worth pointing two things out. The first is the normal timeline for developing a new radio standard, and the second is the requirements for the majority of the projected 50 billion IoT devices.
Developing a wireless standard is a slow business. Back in 2010 I tried to estimate the time and cost involved and came to the conclusion that it costs around a billion dollars and takes 8 – 10 years before the standard is robust and getting traction in the market. That was for personal area networks like Bluetooth, Wi-Fi and ZigBee. Cellular networks are more complex, so cost more and take longer. Despite the evidence, the GSMA announced that their new NB-IoT standard would be complete and released in six months. Six months later, they announced that it was going well and that they would release it in six months’ time. And six months after that they put out a press release saying that the specification was complete. We’ll come back to that in a minute.
The second thing we need to look at is what a standard for wireless IoT connectivity needs to do. Most IoT devices will be quite taciturn. They will measure data and events and send that data a few times each day. They’re not going to be streaming video or having lengthy conversations, because they’re battery powered. If they’re going to run for several years on a small battery or an energy harvesting power supply, all they can manage is a few messages each day. Sigfox understand this and make it evident in their data plans. They’re not talking about hundreds of megabytes like the cellular industry, but as little as 14 messages of 12 bytes each day. That’s about the same as a single SMS message. To put it another way, most IoT applications make text messaging look bloated.
It’s not at all clear that the GSMA understand this. At a recent Mobile Broadband Forum meeting, the GSMA and other operators kept on implying that IoT devices need data rates of tens or hundreds of kilobits per second. That is definitely what network operators want to sell, but it’s not what IoT devices need. If we’re going to get to billions of devices, connectivity and silicon need to be cheap: cheaper and simpler than GPRS was. The cellular industry has never taken on board the fact that the reason we don’t already have billions of IoT devices is that even GPRS is too expensive. Making NB-IoT more complex than GPRS is not going to kickstart the IoT era. What we need is a standard which will let companies make a chip that costs around a dollar in high volume.
That’s not where the cellular chip industry has been going. In the early days of 2G, networks operated at two different frequencies, with relatively simple radio modulation, which meant that chips were moderately simple. Over time, the GPRS modules which are used in most current IoT devices have fallen in price to around $7. However, as the desire for more bandwidth has grown, 3G and 4G chips have become much more complex. Moore’s law has helped to prevent them becoming exorbitant, but each new release of the standards has to support a growing number of frequency bands (we’re up from 2 to over 70), as well as all of the protocols in the standards which have gone before it. Developing these is prohibitively expensive. As a result, 3G modules cost around $20 and 4G modules $35. The growing complexity, which requires immensely complex protocol stacks to complement the chips, has benefitted a very small number of silicon suppliers, who have largely destroyed the competition. Qualcomm dominates, with Mediatek taking most of the rest of the market. The business model for both is to sell billions of chips to a small number of high volume manufacturers who have the deep technical competence to integrate them into their products. That is very different from the model needed to support tens of thousands of IoT manufacturers who need $1 comms chips which they can just drop into their products.
You can see this contradiction in the NB-IoT standard which has recently been released. There were two industry groupings with radically different approaches. The traditional one, led by Nokia and Ericsson, proposed what is essentially a cut down, lower power variant of 4G. The key feature of this is that it is capable of working with other 4G devices in the same spectrum, so it can easily be slotted into existing networks. However, to do that it needed to retain a fair degree of radio complexity to be aware of other 4G traffic. That has two consequences. It meant the chip was much more complex because it had to be able to identify what was going on around it, hence it’s still expensive. It also made it more difficult to make it very low power.
The alternative approach, led by Huawei and Vodafone, was a “clean sheet” design. This solution did not have the intelligence to coexist with 4G networks, but instead required operators to set aside a small amount of spectrum (which could be a guard band), specifically reserved for IoT traffic. As the chips didn’t need to be aware of any other 4G traffic, they could be much simpler and hence much cheaper. It’s a cleaner approach, but one which goes against the traditional network approach of making complex hardware which can work on any band around the world. Network operators typically prefer the complex hardware approach, as it passes the problem of global interoperability onto the chip and protocol stack companies. Whatever the operators do with their networks, regardless of the frequency bands they own, things just work. But it raises the cost of hardware.
This “clean sheet” approach grew out of the Weightless standard. Neul, a Cambridge start-up, helped develop Weightless as a new radio and protocol for use in TV whitespace. That failed to get traction, but the company was acquired by Huawei and the technology repurposed to work in the licensed spectrum that’s used for LTE. Because it does not carry the baggage of backward compatibility, there’s a fair chance that the silicon could get down to the $1 mark.
These two approaches are essentially incompatible, and it was interesting to speculate how 3GPP would resolve the difference between them. Hence I was intrigued to see the resulting specification when it was published. When you start to read it, you can see how they managed to get it out so quickly. Instead of trying to find a compromise, it includes both the Huawei / Vodafone and Ericsson / Nokia / Intel options, so it is entirely up to the chip vendor and network operator to decide which they support. That means that a user or manufacturer has absolutely no idea whether an NB-IoT product they make will work on any particular NB-IoT network. It’s as if the acronym should really be Nobody Believes the Internet of Things.
It’s a fudge, where the specification group has produced some pieces of paper to meet a deadline and then passed everything over to a PR department which is taking the post-truth approach to promoting the technology. It would be nice to think that the specification group had realised that this first release was just a PR exercise and were working on harmonising the two conflicting proposals, but it seems they’re ignoring that and looking at adding location features instead, presumably because LoRa is offering that, and they don’t want to be left behind again. In other words, bells and whistles are more important to them than making NB-IoT work.
Making it work appears to be left to market forces. Vodafone is trumpeting the first commercial NB-IoT network. At the same time, Sonera in Finland is announcing the first commercial NB-IoT trial. Although that may seem confusing, there is no contradiction here. Both are telling the truth, as Vodafone is using Huawei’s NB-IoT, which is totally different from the Nokia NB-IoT which Sonera is using. Nobody knows which variant will win. The key player in this could end up being Huawei. They have a captive silicon supplier in HiSilicon, which should help them get to the $1 chip price point. If they could persuade the Chinese Government to deploy hundreds of millions of devices in the country, this could make it the de facto standard. Nokia, Ericsson and Intel are unlikely to concede without a struggle, but with a higher cost and without the scale that a Government backed deployment in China could provide, they may struggle to gain momentum.
Unfortunately, this type of commercial battle generally doesn’t help the market. Without global compatibility, manufacturers will be loath to adopt the technology, as they have no idea whether it will work in any target market. That reduces volumes, which keeps chip costs high. It also delays all of the important things like developing test equipment and compliance programs which are vital to develop a robust network, which further undermines confidence. To survive, NB-IoT needs to be a single low cost, globally interoperable standard. In its current form, NB-IoT is dead.
While it goes through its death throes, the LPWAN suppliers will make mischief.
Sigfox is being aggressive in pricing, both for modules and data contracts. They recently announced that modules will be available for just $3 in 2017 and already have data plans with charges as low as $1.50 per year. They also desperately need to get the number of connections up, so will probably offer even lower costs in the near future. The company has raised over $300 million in funding and is aiming for an IPO in 2018. However, they feel that they need to get above 100 million active devices to persuade the market to support a decent valuation. So their investors will be putting pressure on them to get more connections made as soon as possible, potentially commoditising the IoT connectivity market in an attempt to buy market share from their rivals.
LoRa is a more distributed community, with multiple vendors providing parts of the ecosystem. However, LoRa has a significant difference from other LPWAN offerings, which could be important: anyone can buy a gateway and set up their own network. A crowdfunded initiative, the Things Network, has designed modules and gateways and persuaded the electronics distributor Farnell / Element14 to sell them in the same way they sell Raspberry Pis. For those who don’t know it, the Raspberry Pi is a highly effective embedded computing board. Originally designed to help teach coding in schools, it has been adopted by the maker community as the basis for thousands of projects and products. Farnell have recently announced that they have shipped their ten millionth Raspberry Pi.
The Things Network / Farnell initiative is relevant, as they will be selling LoRa gateways for €250. In other words, for €250, anyone can become an Internet of Things network operator covering a radius of around 5 km. The Things Network, a development community attempting to build a global LoRa network, is providing compatibility layers behind that which will stitch many of these gateways together. Costs will probably be slightly higher than Sigfox, but this will appeal to an open source community, with the innovation benefits that brings to an emerging technology.
There are issues about scaling. Tech hotspots like Cambridge, Amsterdam and Berlin could each have over a thousand LoRa gateways by Christmas 2017, which could make or break the technology. It will be an interesting experiment. It may also give Ingenu an opportunity, as they’ve been in the game longer and appear to have a more robust technology in terms of scalability. But they’ve not achieved the same traction in the minds of IoT developers yet.
This brings us to the important part, which is what this means for network operators. Other than Vodafone, who have firmly nailed their colours to the NB-IoT mast, most operators are hedging their bets by flirting with at least one proprietary LPWAN option. However, in order to get critical mass, contract prices are racing to the bottom. SK Telecom is down to $0.30 per month, and Sigfox’s pricing will probably push that down to below $2 a year in the near future. That’s a long way from the $50–$200 that operators get from their current M2M contracts.
At $2 a year, 20 billion devices will contribute around 4% of current global mobile subscription revenues. That is probably less than network operators currently make from their GPRS subscriptions, yet it will replace much of that revenue. In other words, by supporting 20 billion IoT devices, the network operators will probably be making less money. Let me emphasise that point. The IoT opportunity of tens of billions of connected devices could reduce mobile operator revenue, not increase it.
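That back-of-the-envelope arithmetic is easy to check. The figure of roughly $1 trillion for annual global mobile subscription revenue is my assumed ballpark, not a number from the article:

```java
// Sanity check on the IoT revenue claim: 20 billion devices at $2/year.
public class IotRevenueEstimate {
    public static void main(String[] args) {
        double pricePerDeviceYear = 2.0;   // $2/year connectivity contract
        double devices = 20e9;             // 20 billion IoT devices

        double iotRevenue = pricePerDeviceYear * devices; // $40 billion/year

        // Assumption: global mobile subscription revenue ~ $1 trillion/year
        double globalMobileRevenue = 1.0e12;
        double share = iotRevenue / globalMobileRevenue * 100.0;

        System.out.printf("IoT revenue: $%.0f billion/year%n", iotRevenue / 1e9);
        System.out.printf("Share of global mobile revenue: %.1f%%%n", share);
    }
}
```

Which lands on the ~4% quoted above: a rounding error on today’s handset business.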
Many mobile operators seem to think that they will make money from other parts of the IoT value chain, like cloud services or data analytics, but there is little indication that they’re well positioned for that. Amazon, Google and a host of others are already there. In the next few years, the volume in deployments will probably be using the LPWAN standards of Sigfox and LoRa. The developers who choose them will naturally turn to Amazon and Google, giving them the opportunity to further refine their IoT offerings. I’ll cover this in more detail in a future article.
Despite the present debacle over NB-IoT, the developers at 3GPP are bright – they will eventually get a specification out which meets the industry’s requirements, whether that’s driven by market forces winning out or technical decisions. However, my guess is that it may not be before 2023, as that’s how long wireless standards take. Which gives the different LPWAN standards plenty of time to play, and time for the cloud and analytics providers to shake out, settle down and start some serious customer acquisition.
The great thing about 3GPP standards is that they’re dead easy to roll out. In most cases they’re simply a software upgrade for the base stations. So it won’t take long to go from a final standard to global availability. At which point most IoT manufacturers will probably migrate to it, signalling the end of the short-lived LPWAN era. Of course, most of the LPWAN players and their investors are looking for shorter term returns, so they may already have disappeared. Even five years is a long time in a venture funded world.
What will be missing in the future NB-IoT world will be the hoped-for revenue. The years of LPWAN competition will have driven any profits out of NB-IoT, leaving the operators as pipes. It will also have established other players higher up in the value chain who can cream off what profit there is to be made. A future variant of NB-IoT will come to life and dominate as the connectivity standard for IoT, not least because as volumes grow, the licensed spectrum that operators own will offer a Quality of Service that is missing from the LPWAN offerings. It will also provide the certainty that manufacturers are desperate for, which is that the network will be a stable solution which is available for fifteen to twenty years. NB-IoT will wipe out any remaining alternatives, but it will not be the IoT pot of gold that many in the industry believe.
There is a final sting in the tail of this story, which is that for years we have been striving to develop low power, wide area connectivity which will enable a sensor battery life of ten years or more. The irony is that we now have a set of different LPWAN options which look as if they do support a ten year battery life, but it’s unlikely that any of them will still be operating in ten years’ time. In other words, battery life now exceeds network life.
One wonders how we got to this point? There is little good news for an equipment manufacturer, who is faced with the prospect that whatever connectivity solution they choose today, it will probably disappear within the next ten years. In other words, their product obsolescence is in the hands of their choice of network operator. But that’s the problem when you forget your King is dying and everyone spends their time running around backing pretenders to the throne. Be careful what you wish for. NB-IoT is dead. Long live NB-IoT.
Read more NB-IoT and LPWAN articles at my Creative Connectivity blog.
The Internet has evolved to the point where it is simply there for us at all times. We don’t think about how we connect to a network or analyze the technical details of the connection, any more than we care who our communications service provider is. Ubiquitous Wi-Fi penetration and the gradual rollout of IPv6 enable thousands of simple devices and trackers to interoperate continuously and send data “to the cloud”. This rapid infrastructure advancement has seen the older Machine-to-Machine (M2M) term give way to the more up-to-date Internet of Things (IoT).
Although IoT devices form a kind of distributed intelligence, they still need centralized management: a system or service able to fine-tune the devices, provide storage and interpret the collected data. As the “brain” of the device cloud infrastructure, the management system also extends machine knowledge bases and updates device software.
Operators study the data, aggregated by group or time period, and visualize it. The data is then delivered to various Business Intelligence systems for more detailed analysis. Curiously enough, even for personal devices (e.g. fitness trackers), almost every cloud service operator anonymously analyzes the collected usage statistics to guide further device and service development.
The development of IoT devices is becoming simpler and cheaper, enabling small companies to enter the market. Plenty of businesses realize the need to build a management system, but they underestimate its development complexity and ignore the need for industrial server technologies (such as failover clustering and multi-server distributed architectures). Typically, such development starts in house. An IoT device successfully introduced to the market leads to rapid user growth, causing long-term problems with service scaling and performance.
Anticipating further problems, and unable to build a server-side software development team quickly, IoT operators usually outsource the development of the central system and focus on the devices alone. Yet this doesn’t solve the problem, as third-party developers start building the system from scratch, without the time or resources to apply serious technologies.
The AggreGate Platform was born in 2002. At that time we were producing serial-over-IP converters and needed a central server that would relay data between converters hidden behind firewalls or NAT, with no way to communicate directly. The first version of the product, called LinkServer, was written in C++ and was available only as a service that simply relayed data flows without any processing.
A short while later our converters developed into freely programmable controllers. They “understood” the data flowing through them, so we wanted the central server to do the same. At about the same time we realized that 90% of the time spent developing a monitoring and device management system went on reinventing the wheel, with very little effort put into solving actual business problems.
Since 2004 the system, ported to Java, has evolved into a framework for device management. For quite a few years we worked without a clear understanding of the result we wanted to achieve. Fortunately, by keeping our system flexible, we avoided becoming tied to a single customer or a single industry.
Now the AggreGate Platform is applied in a great variety of industries, including Remote Monitoring and Service, IT Infrastructure and Network Monitoring, SCADA/HMI and Process Automation, Access Control, Building Automation, Fleet Management, Vending Machine and Self-service Kiosk Management, Sensor Network Monitoring, People and Vehicle Counting, Centralized Event and Incident Management, Digital Signage and Mobile Device Management.
Major Platform Tasks
Figuratively speaking, AggreGate is a LEGO set for rapid device cloud development. It allows IoT solution architects to focus mainly on hardware and business logic, while it solves the following infrastructure tasks:
- Maintaining communication between servers and devices connected via unreliable cellular and satellite links
- Unified approach to device data regardless of its physical meaning
- Storing large volumes of collected events and historical data in various databases (relational, round-robin, NoSQL)
- Visually building complex chains of source data analysis and event correlation
- Modeling processes that integrate data from multiple devices and calculate infrastructure-wide KPIs
- Quickly building operator and system engineer interfaces from out-of-the-box “bricks”, without any coding
- Implementing integration scenarios via ready-to-use universal connectors (SQL, HTTP/HTTPS, SOAP, CORBA, SNMP, etc.)
Being universal, the AggreGate Platform unites various monitoring and management systems. It helps avoid extra integration points and decreases the number of integration scenarios. For example, an integrated monitoring system has a single integration point with Service Desk/ITSM/Maintenance Management systems for incident (alert) delivery, and also integrates with Inventory/Asset Management systems to collect information on available physical assets and their influence on business services.
In such cases, role-based access control provides various departments with customized system scenarios and unique operator interfaces.
The Platform includes the following essential components:
- Server is a Java-based application providing communication with devices, data storage and automated data processing. Servers can be grouped into clusters for high availability and maintain peer-to-peer relationships in distributed installations. AggreGate Server runs an embedded web server, which in turn serves the web interfaces.
- Unified Console is cross-platform desktop client software for working with one or several servers simultaneously in administrator, system engineer or operator mode.
- Agent is a library that can be integrated into an IoT device’s firmware to handle communication with servers, unify device setup, perform operations on the device and send events asynchronously. Libraries are available for many environments (Java, .NET, C/C++, Android Java, etc.). There is no need to deploy an agent if communication with the server uses standard or proprietary protocols; in the latter case a separate device driver is developed for the server. The agent can also be implemented as a separate hardware device (gateway).
- Open-source API for extending the functionality of all other components and implementing complex integration scenarios.
The Server supervises the reading and writing of device data, a process called bidirectional synchronization. The server maintains a device snapshot containing the last known values of device metrics, together with any changes made by operators or system modules that have not yet been written to the device due to communication downtime. Configuration changes are delivered to devices on a “best effort” basis, making it possible to configure groups of devices even if some of them are offline.
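As a rough sketch of what best-effort bidirectional synchronization implies, here is a hypothetical snapshot object with a queue of pending writes; the class and method names are illustrative, not AggreGate’s actual API:

```java
import java.util.HashMap;
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical device snapshot: last known metric values plus a queue of
// configuration changes awaiting delivery (best-effort synchronization).
public class DeviceSnapshot {
    private final Map<String, Object> lastKnownValues = new HashMap<>();
    // Changes requested while the device was unreachable, in order.
    private final Map<String, Object> pendingWrites = new LinkedHashMap<>();
    private boolean online = false;

    // Called when the device reports fresh data.
    public void updateFromDevice(String metric, Object value) {
        lastKnownValues.put(metric, value);
    }

    // Called when an operator or system module changes a setting.
    public void write(String setting, Object value) {
        if (online) {
            lastKnownValues.put(setting, value); // assume immediate delivery
        } else {
            pendingWrites.put(setting, value);   // queue until reconnect
        }
    }

    // Called when the device reconnects: flush queued changes.
    public void onReconnect() {
        online = true;
        lastKnownValues.putAll(pendingWrites);
        pendingWrites.clear();
    }

    public Object lastValue(String key) { return lastKnownValues.get(key); }
    public int pendingCount() { return pendingWrites.size(); }

    public static void main(String[] args) {
        DeviceSnapshot s = new DeviceSnapshot();
        s.write("reportingInterval", 3600);       // device offline: queued
        System.out.println(s.pendingCount());     // prints 1
        s.onReconnect();                          // flushed on reconnect
        System.out.println(s.lastValue("reportingInterval")); // prints 3600
    }
}
```

Configuring a device group then just means applying the change to every snapshot; offline devices accumulate it in the pending queue until they come back.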
The Server also receives and processes incoming connections from devices that have no public static IP address.
Device data and events merge into a unified data model. Within this model, each device is represented as a so-called context in a hierarchical context structure. Each context includes a set of formalized data elements of three types: variables (properties, settings, attributes), functions (methods, operations) and events (notifications). A context also contains metadata describing all available elements, so all context data and metadata are stored entirely within the context itself. This technique is called device normalization: device drivers and agents create a normalized representation of various device types.
There are some parallels with object-oriented programming, where objects typically have properties, events and methods. Properties are internal device variables, methods are operations performed by a device, and events describe how a device notifies the server of internal data or environment changes.
Virtually any device can be described as a set of properties, methods and events. For example, a remotely controlled water tank can have a “water level” property to show the current amount of water in the tank and “turn valve on/off” methods to control the valve letting the water into/out of the tank. This smart water tank may also generate a number of notifications, such as “nearly empty”, “nearly full” and “overflow”. We have developed more than 100 Java-based drivers, and the normalization concept has also proved to be an advantage. Moreover, a lot of current “universal” protocols (such as OPC UA, JMX or WMI) use similar data models.
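The water tank could be sketched as a normalized context roughly like this; all names here are illustrative, not AggreGate’s real driver API:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Illustrative normalized "context" for the water tank example:
// variables (properties), functions (methods) and events (notifications).
public class WaterTankContext {
    private final Map<String, Object> variables = new HashMap<>();
    private final List<String> events = new ArrayList<>();

    public WaterTankContext() {
        variables.put("waterLevel", 0.0);   // percent full
        variables.put("valveOpen", false);
    }

    // Function: "turn valve on/off"
    public void setValve(boolean open) {
        variables.put("valveOpen", open);
    }

    // Variable update from the device; may raise normalized events.
    public void reportLevel(double percent) {
        variables.put("waterLevel", percent);
        if (percent > 100.0)      events.add("overflow");
        else if (percent >= 90.0) events.add("nearly full");
        else if (percent <= 10.0) events.add("nearly empty");
    }

    public Object get(String name) { return variables.get(name); }
    public List<String> events()   { return events; }

    public static void main(String[] args) {
        WaterTankContext tank = new WaterTankContext();
        tank.reportLevel(95.0);
        tank.setValve(false);               // stop filling
        System.out.println(tank.events()); // prints [nearly full]
    }
}
```

The point of normalization is that a server module never needs to know it is talking to a water tank: it only sees variables, functions and events with metadata.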
All Server contexts are part of a hierarchical structure called the context tree. Though the contexts represent diverse objects (devices, users, reports, alerts, etc.), they have a unified interface and can interoperate within the server context tree, offering a high level of flexibility. The same principle enables various servers to interact in a distributed installation.
Every connected device allows operators to perform direct configuration (reading and modifying the device configuration), direct management (manually triggering device operations) and direct monitoring (viewing the device event log in near real time).
Events and changes of device metric values are stored in the server storage. The storage type can vary depending on the system’s task. For example, a Raspberry Pi micro-server uses the simplest file storage, while the central server of a distributed installation can use a NoSQL-based Apache Cassandra cluster, storing tens of thousands of events per second filtered out of an original stream of hundreds of thousands of events per second.
In most cases, however, a regular relational database is used as storage. An ORM layer (Hibernate) provides compatibility with MySQL, Oracle, Microsoft SQL Server, PostgreSQL and other DBMSs.
Device data and events affect the life cycle of active server objects, allowing the server to react to changes in environmental conditions. These active objects include:
- Alerts, which convert a particular object state or chain of events into a new event type called an incident
- Models, which convert source events and values into user-defined event and value types by applying business rules
- Scheduler, which ensures tasks are performed on schedule, even across periods when the server is shut down
- Sensors and several other object types
Active objects can add new types of variables, functions and events to the unified data model, persist custom variable and event changes to storage, and automatically invoke operations on devices and other objects.
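To make the alert idea concrete, here is a hypothetical active object that folds a chain of raw events into a single incident; the class and event names are invented for illustration:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical alert: converts N consecutive "temperature high" events
// into a single higher-level "incident" event.
public class ThresholdAlert {
    private final int threshold;
    private int consecutive = 0;
    private final List<String> incidents = new ArrayList<>();

    public ThresholdAlert(int threshold) { this.threshold = threshold; }

    // Feed raw device events; emit an incident after `threshold` matches.
    public void onEvent(String event) {
        if ("temperature high".equals(event)) {
            if (++consecutive == threshold) {
                incidents.add("incident: sustained high temperature");
                consecutive = 0;
            }
        } else {
            consecutive = 0; // chain broken by a non-matching event
        }
    }

    public List<String> incidents() { return incidents; }

    public static void main(String[] args) {
        ThresholdAlert alert = new ThresholdAlert(3);
        alert.onEvent("temperature high");
        alert.onEvent("temperature high");
        alert.onEvent("temperature high");
        System.out.println(alert.incidents().size()); // prints 1
    }
}
```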
You can use widgets to build data entry forms, tables, dynamic maps, charts and HMIs. Widgets can be combined into dashboards, both global (based on aggregated KPIs and showing the state of the whole infrastructure) and object-oriented (displaying the state of a single device or infrastructure component).
Widgets and report templates are built in specialized visual editors seamlessly integrated into the AggreGate Platform ecosystem. The GUI Builder helps design complex interfaces consisting of multiple nested containers with visual components. In addition to the absolute layout typical of such editors, you can use a grid layout familiar to anyone who has edited tables in an HTML page. The grid layout makes it possible to build scalable, resizable data entry forms and tables.
As a result, first-line and second-line operator interfaces built with these data visualization tools include dashboards with widgets, forms, tables, diagrams, reports, HMIs and navigation between them.
The GUI Builder supports dozens of out-of-the-box components, such as captions, text fields, buttons, checkboxes, sliders as well as spinners, lists, date/time selectors, scales, and pointers. Among more complex components are trees, video windows, dynamic vector SVG images, geographical maps based on Google Maps/Bing/Yandex/OpenStreetMap. The list of supported diagrams includes classic charts, statistics charts, Gantt charts, and polar charts.
Properties of server objects (devices, models, alerts) and UI components are linked together using bindings. A binding defines when and where data should be taken from, how to process it and where to place the result. For data processing, bindings use expression and query languages.
A binding expression resembles a Microsoft Excel formula: it takes data from several cells, applies mathematical operations or data-processing functions to it, and places the result into the current cell. In the same way, an expression describes where data should be taken from and what sort of changes to apply to it.
The query language is very similar to regular SQL: it also aggregates data from various tables into one by using filtering, sorting, grouping, etc. The difference between classic SQL and the embedded query language is that the latter uses virtual tables, built on the fly from diverse unified-model data, as its source. Every query automatically checks operator and system object access permissions. With this in mind, the query language has an obvious advantage over direct SQL queries to the server database.
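To make the idea concrete, here is a small Python sketch of how such a query over a "virtual table" might behave: a table is built on the fly from model objects with access checks applied, and a SQL-like filter/group/aggregate pass runs over it. All names and data here are invented for illustration; this is not the platform's actual query engine.

```python
# Hypothetical sketch of querying a "virtual table" built on the fly from
# unified data model objects, mimicking the filter/group behaviour the
# embedded query language layers on top of classic SQL.

from collections import defaultdict

def virtual_table(objects, permitted):
    """Build a virtual table (list of rows), enforcing per-object access checks."""
    return [obj for obj in objects if obj["id"] in permitted]

def query(rows, where, group_by, aggregate):
    """Filter rows, group them, and aggregate each group -- a tiny SQL analogue."""
    groups = defaultdict(list)
    for row in filter(where, rows):
        groups[row[group_by]].append(row)
    return {key: aggregate(members) for key, members in groups.items()}

# Unified-model snapshot: device variables exposed as rows.
devices = [
    {"id": 1, "type": "sensor",   "temperature": 21.5},
    {"id": 2, "type": "sensor",   "temperature": 47.0},
    {"id": 3, "type": "actuator", "temperature": 30.0},
]

table = virtual_table(devices, permitted={1, 2})          # access control applied
hot = query(table,
            where=lambda r: r["temperature"] > 20,
            group_by="type",
            aggregate=lambda rs: max(r["temperature"] for r in rs))
print(hot)  # {'sensor': 47.0}
```

The access check happening before the query runs is the point: the caller never sees rows for objects it is not permitted to read, which is exactly the advantage claimed over raw SQL against the server database.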
To solve more challenging data processing tasks, you can easily write a script in Java or even a dedicated plugin. However, every script our partners write for data processing is a warning for us: why does one need a platform if classic development in a familiar environment (such as Eclipse or IDEA) is still required?
And finally, a few words about the distributed architecture technology. Our concept implies configuring peering relationships between servers so that one server (the provider) links a part of its unified data model to another server (the consumer). This allows objects on the consumer server to interact with objects on the provider server as equals. A single server can have an unlimited number of links; moreover, it can act as both a provider and a consumer towards neighboring servers.
Distributed architecture helps solve a variety of large-scale system tasks:
- Horizontal system scaling.
- Distributed monitoring with local monitoring server installations and intermediate data storage at remote sites.
- Vertical scaling, dividing server functions into several logical levels.
CB Insights has identified 78 private companies at the intersection of cybersecurity and connected hardware, which includes: critical infrastructure, mobile phones, connected devices, enterprise endpoints, and connected cars.
The breakdown of categories is as follows:
Critical Infrastructure: Startups in this category include Indegy which provides real-time situational awareness, visibility, and security for industrial control systems used across critical infrastructure, including energy, water utilities, petrochemical plants, manufacturing facilities, etc. Similar companies such as CyberX can detect network anomalies by analyzing the operational behavior of industrial internet networks using Big Data and Machine Learning. The company Bastille Networks is among the more unique startups in this category, with a product that scans air space to provide visibility into RF-emitting devices. Bastille has broad implications across the connected hardware cybersecurity market.
Mobile Phones: Companies in this category include three unicorns valued at $1B+. They are: Okta which offers cloud-based identity management and mobility management services, Lookout which is a smartphone security company for the Android and iOS platforms, and Avast Software which offers security and privacy solutions also for iOS and Android.
Connected Devices: Included are companies like Mocana which secures IP addressable devices as well as the information, applications, and services that run on them. Companies in this category also include MedCrypt which offers the ability to manage all of the digital keys needed for users to securely access medical devices.
Enterprise Endpoints: Startups like the unicorn Tanium offer a systems management solution that allows enterprises to collect data and update endpoints across networks. Another unicorn in this category is Cylance, which operates in defense of enterprises’ endpoints by applying artificial intelligence algorithms to predict, identify, and stop malware and advanced threats.
Connected Cars: Argus Cyber Security enables car manufacturers to protect technologically advanced connected vehicles from malicious cyber attacks.
The full company list is here.
Two years ago, IBM announced a groundbreaking partnership with another colossal tech pioneer – Apple, in order to enhance the mobile enterprise space. In little over 25 months, the two companies have declared a major breakthrough – on October 25th, IBM officially announced that the company will incorporate the famous Watson capabilities into the MobileFirst for iOS ecosystem. The integration will bring deep data analysis, natural language processing and even more features to iOS 10.
The Intelligence Inside
This is just the most recent dramatic outcome of the IBM/Apple partnership. You may recall that just a few weeks ago, IBM’s VP of Workplace as a Service, Fletcher Previn, said that each Mac deployment will cost the company around $535 in comparison to PCs. Seemingly, this latest move will have a huge impact on a wide range of “real-life” situations and strengthen Apple’s growing position in enterprise IT. In fact, according to IBM, business apps will now have the ability to understand and even learn on the basis of data analytics.
But the question you might be asking yourself is – how will all of this work in practice? Well, Mahmoud Naghshineh, general manager of the Apple/IBM partnership explains that an average technician would be able to literally ask the F&F app to look up some suggestions on how to solve a specific equipment problem. The app would essentially enable them to pay more attention to the piece of equipment – and as Apple puts it – would provide a “hands-free” solution when a technician needs it most.
What’s more, as numerous experts predicted last year, a technician that is using an app with Apple Watch support will be able to call up some kind of intelligent support system, while keeping their smartphones in their pockets. Basically, cognitive technology coupled with mobile aims to be transformational – its goal is to change how workers interact with enterprise applications in order to assist any company, and eventually, any industry move forward.
The Future Lies in Contextual Awareness
And while IBM’s Watson is smart enough to listen closely, learn from you and make increasingly refined suggestions in reaction to certain situational needs, the system is still not contextually aware. You have to understand that modern voice assistants, like Google Now and Siri, are simply comprised of pattern matching and voice recognition software, tied closely to a number of typical questions and answers. But cognitive computing is a more powerful beast.
Watson delivers a comprehensive set of abilities based on technologies like machine learning, reasoning and decision technologies, language and speech systems, and distributed, high-performance computing. Simply put, Watson always learns from previous interactions with humans and machines, and gains knowledge over time. The system can draw on past experiences and form conclusions from new ones. Once we move into the “Cognitive Era”, and when this system merges with, for instance, 3PL logistics and other eCommerce software solutions, companies that leverage the technology will have the highest probability of success.
It Is Just a Question of Ethics
While IBM is still enthusiastic, the company knows that these forms of AI are the peak of our society’s digital transformation, and that they are likely to have a long-lasting effect on our society. Artificial Intelligence systems are augmenting human intelligence in every field imaginable, and will transform our lives on a personal and professional level, in the long run. As a matter of fact, we are likely to see cognitive robots integrated into our society as bartenders, receptionists and even doctors. They could even be useful as carers of our kids and elderly.
AI systems such as IBM’s Watson are indeed powerful, and as with all powerful tools, we have to take great care in how we use them. We have to agree that IBM’s job, as part of the global community, is to make sure that the cognitive tech it (and others) develop is built the right way and only for the right reasons. The company is involved in several efforts to advance our overall understanding of the problems affecting the ethical development of AI.
Wondering what IoT software platform to latch onto? Well, Forrester recently published their research report and named the 11 that are most significant — Amazon Web Services (AWS), Ayla Networks, Cisco Jasper, Exosite, General Electric (GE), IBM, LogMeIn, Microsoft, PTC, SAP, and Zebra Technologies.
The report shows how each provider measures up and helps infrastructure and operations (I&O) professionals make the right choice to support their IoT-enabled connected product and asset initiatives.
IBM, PTC, GE, And Microsoft Lead The Pack
Forrester’s research uncovered a market in which IBM, PTC, GE, and Microsoft lead the pack. AWS, SAP, and Cisco Jasper offer competitive options. LogMeIn, Ayla Networks, Exosite, and Zebra Technologies lag behind.
IoT Software Platforms Simplify Enabling Connected Products And Processes
To deliver differentiated connected products or transform business processes, I&O leaders face a fragmented set of network technologies, hardware, protocols, software, applications, and analytics solutions. IoT software platforms help simplify deploying, managing, operating, and capturing insights from IoT-enabled connected devices.
Partner Ecosystems, Prebuilt Apps, And Advanced Analytics Are Key Differentiators
Vendors that allow customers to tap into a broad partner ecosystem to extend the functionality available through their platform solutions will position themselves to successfully deliver additional value to end user customers. Other key criteria to jump-start IoT solutions include application enablement functions, analytics features, and interfaces to generate actionable insights from connected products and prebuilt applications.
The entire report is available from IBM as a free download, with registration, here.
Guest post by Ian Skerrett, Eclipse Foundation.
The Internet of Things (IoT) is transforming how individuals and organizations connect with customers, suppliers, partners, and other individuals. IoT is all about connecting sensors, actuators, and devices to a network and enabling the collection, exchange, and analysis of generated information.
New technology innovations in hardware, networking, and software are fueling the opportunity for new IoT solutions and use cases. Hardware innovations, like the Raspberry Pi, are making it easier, faster, and cheaper to develop new devices. Networking standards for low-power networks, like LoRaWAN or Sigfox, create new opportunities for connecting very small devices to a network. New standards are being developed specifically for IoT use cases, like MQTT for messaging, or OMA Lightweight M2M for device management. And finally, significant improvements in data storage, data analysis, and event processing are making it possible to support the amount of data generated in large-scale IoT deployments.
In parallel to the emerging IoT industry, the general software industry has moved towards open source as being a key supplier of critical software components. The phrase “software is eating the world” reflects the importance of software in general, but in reality the software industry is now dominated by open source. This is true for key software categories, including Operating Systems (Linux), Big Data (Apache Hadoop, Apache Cassandra), Middleware (Apache HTTP Server, Apache Tomcat, Eclipse Jetty), Cloud (OpenStack, Cloud Foundry, Kubernetes), and Microservices (Docker).
The purpose of this article is to look at the new technology requirements and architectures required for IoT solutions. It will identify three stacks of software required by any IoT solution. Similar to how the LAMP (Linux/Apache HTTP Server/MySQL/PHP) stack has dominated the web infrastructures, it is believed a similar open source stack will dominate IoT deployments.
IoT Architecture: Devices, Gateways, and IoT Platforms
A typical IoT solution is characterized by many devices (i.e. things) that may use some form of gateway to communicate through a network to an enterprise back-end server that is running an IoT platform that helps integrate the IoT information into the existing enterprise. The roles of the devices, gateways, and cloud platform are well defined, and each of them provides specific features and functionality required by any robust IoT solution.
Stack for Constrained Devices: Sensors and Actuators
The “Thing” in the IoT is the starting point for an IoT solution. It is typically the originator of the data, and it interacts with the physical world. Things are often very constrained in terms of size or power supply; therefore, they are often programmed using microcontrollers (MCU) that have very limited capabilities. The microcontrollers powering IoT devices are specialized for a specific task and are designed for mass production and low cost.
The software running on MCU-based devices aims at supporting specific tasks. The key features of the software stack running on a device may include:
- IoT operating system – many devices will run with ‘bare metal’, but some will have embedded or real-time operating systems that are particularly suited for small constrained devices, and that can provide IoT-specific capabilities.
- Hardware abstraction – a software layer that enables access to the hardware features of the MCU, such as flash memory, GPIOs, serial interfaces, etc.
- Communication support – drivers and protocols that allow the device to connect to a wired or wireless protocol like Bluetooth, Z-Wave, Thread, CAN bus, MQTT, CoAP, etc., enabling device communication.
- Remote management – the ability to remotely control the device to upgrade its firmware or to monitor its battery level.
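The hardware abstraction and communication layers above can be sketched in a few lines. On a real MCU this would be C or MicroPython against vendor drivers; the class and names below are purely illustrative, not any particular IoT OS's API.

```python
# Illustrative sketch of the device-stack layering: a hardware abstraction
# layer below, telemetry formatting for communication above. Not a real SDK.

import json

class GPIOPin:
    """Hardware abstraction: hides register access behind a simple interface."""
    def __init__(self, number):
        self.number = number
        self.state = 0

    def write(self, value):
        self.state = 1 if value else 0   # a real HAL would poke a register here

    def read(self):
        return self.state

def telemetry_message(device_id, sensor, value):
    """Communication support: frame a reading for an MQTT-style publish."""
    topic = f"devices/{device_id}/{sensor}"
    payload = json.dumps({"value": value})
    return topic, payload

led = GPIOPin(13)
led.write(True)
topic, payload = telemetry_message("node-42", "led", led.read())
print(topic, payload)  # devices/node-42/led {"value": 1}
```

The separation matters even on tiny devices: application code talks to `GPIOPin` and `telemetry_message`, so swapping the MCU or the transport protocol only touches one layer.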
Stack for Gateways: Connected and Smart Things
The IoT gateway acts as the aggregation point for a group of sensors and actuators to coordinate the connectivity of these devices to each other and to an external network. An IoT gateway can be a physical piece of hardware or functionality that is incorporated into a larger “Thing” that is connected to the network. For example, an industrial machine might act like a gateway, and so might a connected automobile or a home automation appliance.
An IoT gateway will often offer processing of the data at the ‘edge’ and storage capabilities to deal with network latency and reliability. For device to device connectivity, an IoT gateway deals with the interoperability issues between incompatible devices. A typical IoT architecture would have many IoT gateways supporting masses of devices.
IoT gateways are becoming increasingly dependent on software to implement the core functionality. The key features of a gateway software stack include:
- Operating system – typically a general purpose operating system such as Linux.
- Application container or run-time environment – IoT gateways will often have the ability to run application code, and to allow the applications to be dynamically updated. For example, a gateway may have support for Java, Python, or Node.js.
- Communication and Connectivity – IoT gateways need to support different connectivity protocols to connect with different devices (e.g. Bluetooth, Wi-Fi, Z-Wave, ZigBee). IoT gateways also need to connect to different types of networks (e.g. Ethernet, cellular, Wi-Fi, satellite) and ensure the reliability, security, and confidentiality of the communications.
- Data management & Messaging – local persistence to support network latency, offline mode, and real-time analytics at the edge, as well as the ability to forward device data in a consistent manner to an IoT Platform.
- Remote management – the ability to remotely provision, configure, startup/shutdown gateways as well as the applications running on the gateways.
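The data management & messaging feature, in particular local persistence to ride out network latency, amounts to store-and-forward. Here is a minimal sketch of that behavior; the class and method names are invented for illustration, not taken from any gateway product.

```python
# Minimal store-and-forward sketch: the gateway buffers device readings
# locally and flushes them upstream in order once connectivity returns.

from collections import deque

class GatewayBuffer:
    def __init__(self, uplink):
        self.uplink = uplink        # callable that sends one message upstream
        self.queue = deque()
        self.online = False

    def ingest(self, message):
        """Persist locally first, then try to drain -- survives network outages."""
        self.queue.append(message)
        self.flush()

    def flush(self):
        while self.online and self.queue:
            self.uplink(self.queue.popleft())

sent = []
gw = GatewayBuffer(uplink=sent.append)
gw.ingest({"sensor": "temp", "value": 21.5})   # offline: buffered, not sent
gw.online = True
gw.ingest({"sensor": "temp", "value": 22.0})   # online: both flushed, in order
print(sent)
```

Because messages are queued before any send attempt, nothing is lost during an outage and the platform receives readings in the order they were produced, which is the "consistent forwarding" the bullet above calls for.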
Stack for IoT Cloud Platforms
The IoT Cloud Platform represents the software infrastructure and services required to enable an IoT solution. An IoT Cloud Platform typically operates on a cloud infrastructure (e.g. OpenShift, AWS, Microsoft Azure, Cloud Foundry) or inside an enterprise data center and is expected to scale both horizontally, to support the large number of devices connected, as well as vertically to address the variety of IoT solutions. The IoT Cloud Platform will facilitate the interoperability of the IoT solution with existing enterprise applications and other IoT solutions.
The core features of an IoT Cloud Platform include:
- Connectivity and Message Routing – IoT platforms need to be able to interact with very large numbers of devices and gateways using different protocols and data formats, and then normalize that data to allow for easy integration into the rest of the enterprise.
- Device Management and Device Registry – a central registry to identify the devices/gateways running in an IoT solution and the ability to provision new software updates and manage the devices.
- Data Management and Storage – a scalable data store that supports the volume and variety of IoT data.
- Event Management, Analytics & UI – scalable event processing capabilities, ability to consolidate and analyze data, and to create reports, graphs, and dashboards.
- Application Enablement – ability to create reports, graphs, and dashboards, and to use APIs for application integration.
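A sketch of the message-routing normalization described above: readings arrive in different device-specific formats and are mapped onto one canonical schema before reaching the rest of the enterprise. Both input formats here are invented for the demo.

```python
# Normalization step of an IoT platform's message routing: heterogeneous
# device payloads are mapped onto a single canonical record. The two vendor
# payload shapes below are hypothetical.

def normalize(raw):
    """Map a device-specific payload onto one canonical record."""
    if "dev_id" in raw:                       # "vendor A" style payload
        return {"device": raw["dev_id"], "metric": raw["m"], "value": raw["v"]}
    if "deviceId" in raw:                     # "vendor B" style payload
        return {"device": raw["deviceId"],
                "metric": raw["measurement"]["name"],
                "value": raw["measurement"]["value"]}
    raise ValueError("unknown payload format")

inbox = [
    {"dev_id": "a1", "m": "temp", "v": 20.1},
    {"deviceId": "b7", "measurement": {"name": "temp", "value": 19.8}},
]
records = [normalize(msg) for msg in inbox]
print(records[0]["device"], records[1]["device"])  # a1 b7
```

Everything downstream of `normalize` (storage, analytics, dashboards) then only has to understand one record shape, regardless of how many device types are connected.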
Across the different stacks of an IoT solution are a number of features that need to be considered for any IoT architecture, including:
- Security – Security needs to be implemented from the devices to the cloud. Features such as authentication, encryption, and authorization need to be part of each stack.
- Ontologies – The format and description of device data is an important feature to enable data analytics and data interoperability. The ability to define ontologies and metadata across heterogeneous domains is a key area for IoT.
- Development Tools and SDKs – IoT Developers will require development tools that support the different hardware and software platforms involved.
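The security bullet above asks for authentication at every stack. One common lightweight approach on constrained links is an HMAC over each message so the receiving stack can verify the sender; the key handling and field names below are invented for the sketch, not a prescription.

```python
# Sketch of message authentication between stacks using an HMAC: the device
# signs each payload with a shared key provisioned at registration, and the
# platform verifies it. Key and field names are hypothetical.

import hashlib
import hmac
import json

SECRET = b"per-device-shared-key"   # provisioned during device registration

def sign(payload: dict) -> dict:
    body = json.dumps(payload, sort_keys=True).encode()
    return {"body": payload,
            "mac": hmac.new(SECRET, body, hashlib.sha256).hexdigest()}

def verify(message: dict) -> bool:
    body = json.dumps(message["body"], sort_keys=True).encode()
    expected = hmac.new(SECRET, body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, message["mac"])

msg = sign({"device": "node-42", "temp": 21.5})
print(verify(msg))                      # True
msg["body"]["temp"] = 99.9              # tampering breaks the MAC
print(verify(msg))                      # False
```

This only covers authenticity and integrity; confidentiality still requires transport encryption (e.g. TLS/DTLS), which is why the bullet lists encryption and authentication separately.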
Key characteristics for IoT Stacks
There are some common characteristics that each IoT stack should embrace, including:
- Loosely coupled – Three IoT stacks have been defined but it is important that each stack can be used independently of the other stacks. It should be possible to use an IoT Cloud Platform from one supplier with an IoT Gateway from another supplier and a third supplier for the device stack.
- Modular – Each stack should allow for the features to be sourced from different suppliers.
- Platform-independent – Each stack should be independent of the host hardware and cloud infrastructure. For instance, the device stack should be available on multiple MCUs and the IoT Cloud Platform should run on different Cloud PaaS.
- Based on open standards – Communication between the stacks should be based on open standards to ensure interoperability.
- Defined APIs – Each stack should have defined APIs that allow for easy integration with existing applications and integration with other IoT solutions.
Any IoT solution requires a substantial amount of technology in the form of software, hardware, and networking. In this article we have defined the key software requirements across three different stacks. Many software and hardware vendors will offer complete stacks or parts of each stack. However, for the IoT industry to be successful, these software stacks need to be independent of each other and available from different sources.
In the next article we will look at how the Eclipse IoT Working Group is actively building the open source technology for each of these three IoT stacks. The goal of this open source community is to provide the IoT building blocks required to enable each of these stacks.
When you think about consumer cloud platforms, which ones come to mind? Amazon AWS, Microsoft Azure and Google’s Cloud Platform are likely to be at the top of your list. But what about industrial cloud platforms? Which ones rise to the top for you? Well, GE’s Predix, Siemens’ MindSphere, and the recently announced Honeywell Sentience are likely to be on any short list of industrial cloud platforms. But they aren’t the only ones in this space. Cisco’s Jasper, IBM’s Watson IoT, Meshify, Uptake, and at least 20 others are competing to manage all those billions of sensors that are expected to make up the Industrial Internet of Things (IIoT). Which one do you think will end up dominating the market?
A Brief Overview of Cloud Computing
To answer the above question, let's start with a very brief overview of cloud computing to put industrial cloud platforms in their proper context. Cloud platforms are one of several services that cloud computing providers offer, with the main ones being: Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and Software as a Service (SaaS).
Three Main Services of Cloud Computing
Software as a Service (SaaS)
Platform as a Service (PaaS)
Infrastructure as a Service (IaaS)
Industrial Cloud Platforms by Industrial Companies
Industrial cloud platforms have a much deeper focus on operational technology than consumer platforms. They are designed to allow data gathering throughout manufacturing production processes, in order to improve performance as well as predict failures before they happen. Here are three industrial cloud platforms by long time industrial companies:
Industrial Cloud Platforms by Software Development Firms
Industrial companies aren’t the only ones developing industrial cloud platforms. There’s already a long list of industrial cloud platforms available from software development firms. Here are a few worth mentioning:
Table 1: The List of Industrial Cloud Platform Providers
- IBM Watson IoT
- SAP Hana Cloud
What Do You Think Will Be The Future of the Industrial Cloud?
Will there be a dominant Industrial Cloud Platform? It's hard to say at this point. GE Predix is hoping for 500,000 connected devices by the beginning of 2017, while C3 IoT is said to have 70 million devices connected to its platform already.
The Crowded Cloud of Industrial Platforms
Will this market consolidate around a few big-name platforms, or will a lesser known provider be the winner and take all?
This post originally appeared here.
IoT has already enjoyed a next-big-thing status for several years now and all signs point to continued growth as homes, cars, and nearly every piece of hardware in our lives gets connected. But that doesn’t mean the industry isn’t facing challenges. In fact, rapid growth and its status as the new kid on the block make IoT perfectly poised to face two particularly big challenges in the years to come.
Given the high demand for connected products, we need to solve these problems as quickly as possible. Late last year, Gartner analysts estimated that 2016 would see about 5.5 million new IoT devices connected every single day. But that’s nothing compared to what’s coming. Back in 2013, Cisco made headlines by predicting IoT will be a $14.4 trillion industry by 2022. So what does the future of IoT look like? And what can we do now to better prepare for the connected future?
Developers aren’t ready for IoT
The first and perhaps easiest challenge to overcome is the talent supply problem. Most developers were not taught to write software for connected hardware. And up until very, very recently, developers were set in a desktop mindset. Speaking to our CTO at Mokriya, Pranil Kanderi, there are a few exceptions to this rule: “A small market share of embedded devices has been around for a while now,” Kanderi says. “But they were such a small segment that, for the most part, software developers kept the focus on desktops.”
Now, as consumers eagerly buy up new IoT products as quickly as they hit the market, the demand for developers who can write for connected hardware is growing fast. Moving from this desktop-first mentality to different kinds of hardware has been a slow process, Kanderi says. Too slow for an industry releasing new products every single week.
Here at Mokriya, our IoT business is a majority of the work we do. “If I have to guess, more than 70-80% of our projects so far have some kind of a connected device or hardware,” says Kanderi. “And that’s in addition to the mobile device itself.”
It wasn’t until IoT exploded in popularity that developers began to realize there was a lot of demand in the space with few developers ready to fulfil that demand. Now that we’re seeing this demand balloon, how much trouble is the industry in? How likely is it that developers can learn the new skills and approaches that engineering hardware to software challenges will require?
Kanderi says it’s probably not going to be as dramatic as it looks right now. “Developers now have physical hardware — Raspberry Pi, IoT kits, etc. — on their desks that they can write software for,” he says. “The real shift will be learning to think in terms of smaller distributed networked components.”
But the majority of software developers still rely on just a few platforms, like Java, .Net, and ERP. “As the connected devices grow, there will come a time when a lot of these developers will need to transition to write software for connected devices,” Kanderi points out. And that time is approaching. So, developers should start learning the most important software development skills in IoT immediately.
Fragmentation in connected hardware
What’s more worrisome in the long term is the unsolved issue of fragmentation within the industry. There are thousands of devices — with new products launching every week — and very few of them can communicate effectively. In order to really take advantage of the possibilities in connected hardware, we’re going to have to create a way for these products to come together.
The problem isn’t that this challenge has gone unnoticed — the opposite, actually. At this point, the larger issue is that too many tech enterprises have spotted the opportunity to become IoT’s universal hub and are currently fighting to build the budding industry around their own ecosystem. Google is expanding Google Home while Amazon is finding new ways to connect IoT products using Echo/Alexa. It comes as no surprise that these two giants have begun a very public initial battle to rule the smart home. And then there are organizations, like Chronicled, that are building open databases to connect IoT products. Earlier this month, even the giant consortium, AllSeen Alliance, announced a partnership with rival Open Connectivity Foundation.
There’s no way to tell how this will all turn out, partially because connected devices are still in such a nascent stage, but also due to increasing instances of tech brands partnering up with multiple IoT alliances at once. The fact that brands are hedging their bets is not a huge surprise, given the newness of the mainstream IoT industry, yet it does point to a greater uncertainty as we look toward the future of IoT. And while it’s in everyone’s interest to solve that problem together, the temptation of becoming the hub for all connected hardware in the future is too good for many brands to pass up altogether.
The lack of a common standard for communication across various devices could be a major problem if we continue on this path. But either way, if you’re a developer, it’s high time you got your hands on an IoT kit and started connecting with the future.
As if the Internet of Things (IoT) was not complicated enough, the Marketing team at Cisco introduced its Fog Computing vision in January 2014, also known as Edge Computing for other more purist vendors.
Given Cisco’s frantic activity in its Internet of Everything (IoE) marketing campaigns, it is not surprising that many bloggers have resorted to shocking headlines on the subject, taking advantage of the IoT hype.
I hope this post helps you better understand the role of Fog Computing in the IoT Reference Model and how companies are using IoT intelligent gateways in the Fog to connect the "Things" to the Cloud, with some application areas and examples of Fog Computing.
The problem with the cloud
As the Internet of Things proliferates, businesses face a growing need to analyze data from sources at the edge of a network, whether mobile phones, gateways, or IoT sensors. Cloud computing has a disadvantage: It can’t process data quickly enough for modern business applications.
The IoT owes its explosive growth to the connection of physical things and operation technologies (OT) to analytics and machine learning applications, which can help glean insights from device-generated data and enable devices to make “smart” decisions without human intervention. Currently, such resources are mostly being provided by cloud service providers, where the computation and storage capacity exists.
However, despite its power, the cloud model is not applicable to environments where operations are time-critical or internet connectivity is poor. This is especially true in scenarios such as telemedicine and patient care, where milliseconds can have fatal consequences. The same can be said about vehicle to vehicle communications, where the prevention of collisions and accidents can’t afford the latency caused by the roundtrip to the cloud server.
“The cloud paradigm is like having your brain command your limbs from miles away — it won’t help you where you need quick reflexes.”
Moreover, having every device connected to the cloud and sending raw data over the internet can have privacy, security and legal implications, especially when dealing with sensitive data that is subject to separate regulations in different countries.
IoT nodes are closer to the action, but for the moment, they do not have the computing and storage resources to perform analytics and machine learning tasks. Cloud servers, on the other hand, have the horsepower, but are too far away to process data and respond in time.
The fog layer is the perfect junction where there are enough compute, storage and networking resources to mimic cloud capabilities at the edge and support the local ingestion of data and the quick turnaround of results.
The variety of IoT systems and the need for flexible solutions that respond to real-time events quickly make Fog Computing a compelling option.
Fog Computing: oh my God, another layer in IoT!
A study by IDC estimates that by 2020, 10 percent of the world’s data will be produced by edge devices. This will further drive the need for more efficient fog computing solutions that provide low latency and holistic intelligence simultaneously.
“Computing at the edge of the network is, of course, not new -- we've been doing it for years to solve the same issue with other kinds of computing.”
Fog Computing, or Edge Computing, is a paradigm championed by some of the biggest IoT technology players, including Cisco, IBM, and Dell. It represents a shift in architecture in which intelligence is pushed from the cloud to the edge, localizing certain kinds of analysis and decision-making.
Fog Computing enables quicker response times, unencumbered by network latency, as well as reduced traffic, selectively relaying the appropriate data to the cloud.
The concept of Fog Computing attempts to transcend some of these physical limitations. With Fog Computing, processing happens on nodes physically closer to where the data is originally collected, instead of sending vast amounts of IoT data to the cloud.
The OpenFog Consortium
The OpenFog Consortium was founded on the premise that open architectures and standards are essential for the success of a ubiquitous Fog Computing ecosystem.
The collaboration among tech giants such as ARM, Cisco, Dell, GE, Intel, Microsoft and Schneider Electric defining an Open, Interoperable Fog Computing Architecture is without any doubt good news for a vibrant supplier ecosystem.
The OpenFog Reference Architecture is an architectural evolution from traditional closed systems and the burgeoning cloud-only models to an approach that emphasizes computation nearest the edge of the network whenever dictated by business concerns or the critical functional requirements of the application.
The OpenFog Reference Architecture consists of putting micro data centers or even small, purpose-built high-performance data analytics machines in remote offices and locations in order to gain real-time insights from the data collected, or to promote data thinning at the edge, by dramatically reducing the amount of data that needs to be transmitted to a central data center. Without having to move unnecessary data to a central data center, analytics at the edge can simplify and drastically speed analysis while also cutting costs.
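The idea of "data thinning" described above can be illustrated with a minimal sketch. This is not Predix or OpenFog code; the function name, thresholds and field names are hypothetical, purely to show how a fog node might reduce a batch of raw readings to a compact summary plus any outliers before anything travels upstream.

```python
# Hypothetical sketch of data thinning at a fog node: aggregate raw
# sensor readings locally and forward only a compact summary (plus any
# anomalous readings) to the central data center.

def thin_readings(readings, low=10.0, high=90.0):
    """Reduce a batch of raw readings to a summary plus outliers.

    `readings` is a list of floats; `low`/`high` are illustrative
    alert thresholds, not values from any real deployment.
    """
    anomalies = [r for r in readings if r < low or r > high]
    summary = {
        "count": len(readings),
        "min": min(readings),
        "max": max(readings),
        "mean": sum(readings) / len(readings),
    }
    return {"summary": summary, "anomalies": anomalies}

batch = [42.0, 55.5, 95.2, 47.1, 8.3]
payload = thin_readings(batch)
# Only `payload` (a handful of numbers) travels upstream
# instead of every raw sample.
print(payload["summary"]["count"], payload["anomalies"])
```

Here five raw samples collapse into a four-number summary and two anomalies, which is the bandwidth and latency win the paragraph above describes.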
Benefits of Fog Computing
- Frees up network capacity - Fog Computing uses much less bandwidth, so it avoids bottlenecks. Less data movement on the network frees up capacity, which can then be used for other things.
- It is truly real-time - Fog Computing offers much lower latency than any cloud-only architecture we know today. Since data analysis is done on the spot, it is truly real-time, which makes it a perfect match for the needs of the Internet of Things.
- Boosts data security - Collected data is more secure when it doesn't travel. Keeping data in its country of origin also makes storage much simpler, since sending data abroad might violate certain laws.
- Analytics is done locally - Fog Computing lets developers access the most important IoT data from other locations while keeping piles of less important information in local storage.
Disadvantages of Fog Computing
- Some companies don't like their data being off their premises - with Fog Computing, a lot of data is stored on the devices themselves, which are often located outside company offices; part of the developer community perceives this as a risk.
- The whole system sounds a little confusing - a concept involving a huge number of devices all around the world, each storing, analyzing and sending its own data, can seem utterly confusing.
Read more: http://bigdata.sys-con.com/node/3809885
Examples of Fog Computing
The applications of fog computing are many, and it is powering crucial parts of IoT ecosystems, especially in industrial environments. See below some use cases and examples.
- Thanks to the power of fog computing, New York-based renewable energy company Envision has been able to obtain a 15 percent productivity improvement from the vast network of wind turbines it operates. The company is processing as much as 20 terabytes of data at a time, generated by 3 million sensors installed on the 20,000 turbines it manages. Moving computation to the edge has enabled Envision to cut down data analysis time from 10 minutes to mere seconds, providing them with actionable insights and significant business benefits.
- Plat One is another firm using fog computing to improve data processing for the more than 1 million sensors it manages. The company uses the Cisco-ParStream platform to publish real-time sensor measurements for hundreds of thousands of devices, including smart lighting and parking, port and transportation management and a network of 50,000 coffee machines.
- In Palo Alto, California, a $3 million project will enable traffic lights to integrate with connected vehicles, hopefully creating a future in which people won’t be waiting in their cars at empty intersections for no reason.
- In transportation, it’s helping semi-autonomous cars assist drivers in avoiding distraction and veering off the road by providing real-time analytics and decisions on driving patterns.
- It also can help reduce the transfer of gigantic volumes of audio and video recordings generated by police dashboard and video cameras. Cameras equipped with edge computing capabilities could analyze video feeds in real time and only send relevant data to the cloud when necessary.
See more at: Why Edge Computing Is Here to Stay: Five Use Cases By Patrick McGarry
What is the future of fog computing?
The current trend shows that fog computing will continue to grow in usage and importance as the Internet of Things expands and conquers new grounds. With inexpensive, low-power processing and storage becoming more available, we can expect computation to move even closer to the edge and become ingrained in the same devices that are generating the data, creating even greater possibilities for inter-device intelligence and interactions. Sensors that only log data might one day become a thing of the past.
Janakiram MSV wondered whether Fog Computing will be the next big thing in the Internet of Things. It seems clear that while the cloud is a perfect match for the Internet of Things, there are other scenarios and IoT solutions that demand low-latency ingestion and immediate processing of data, and there Fog Computing is the answer.
Does the fog eliminate the cloud?
Fog computing improves efficiency and reduces the amount of data that needs to be sent to the cloud for processing. But it’s here to complement the cloud, not replace it.
The cloud will continue to have a pertinent role in the IoT cycle. In fact, with fog computing shouldering the burden of short-term analytics at the edge, cloud resources will be freed to take on the heavier tasks, especially where the analysis of historical data and large datasets is concerned. Insights obtained by the cloud can help update and tweak policies and functionality at the fog layer.
And there are still many cases where the centralized, highly efficient computing infrastructure of the cloud will outperform decentralized systems in performance, scalability and costs. This includes environments where data needs to be analyzed from largely dispersed sources.
“It is the combination of fog and cloud computing that will accelerate the adoption of IoT, especially for the enterprise.”
In essence, Fog Computing allows for big data to be processed locally, or at least in closer proximity to the systems that rely on it. Newer machines could incorporate more powerful microprocessors, and interact more fluidly with other machines on the edge of the network. While fog isn’t a replacement for cloud architecture, it is a necessary step forward that will facilitate the advancement of IoT, as more industries and businesses adopt emerging technologies.
'The Cloud' is not Over
Fog computing is far from a panacea. One of the immediate costs associated with this approach is equipping end devices with the hardware needed to perform calculations remotely and independently of centralized data centers. Some vendors, however, are in the process of perfecting technologies for that purpose. The tradeoff is that by investing in such solutions now, organizations will avoid having to frequently update their infrastructure and networks to deal with ever-increasing data volumes as the IoT expands.
There are certain data types and use cases that actually benefit from centralized models. Data that carries the utmost security concerns, for example, will require the secure advantages of a centralized approach or one that continues to rely solely on physical infrastructure.
Though the benefits of Fog Computing are undeniable, the Cloud has a secure future in IoT for most companies with less time-sensitive computing needs and for analysing all the data gathered by IoT sensors.
Earlier this month I attended the 2016 ANT Wireless Symposium in Banff, Canada (Listen up conference organizers: more meetings in Banff please!). Put on by ANT Wireless, stewards of a protocol and silicon solution for ultra-low power (ULP) practical wireless networking applications which are now integrated into many popular products and devices, the conference looked at how wireless advances define how we live, do business and use products.
I moderated a panel on IoT and fragmentation. The panelists should make any conference organizer drool. Joining me on stage were IBM’s Doug Barton, Google’s Doug Daniels, Rick Gibbs of North Pole Engineering and sport technology guru Ray Maker.
Each of these gentlemen has first-hand experience in creating, developing or heavily using connected devices. Ray Maker's blog is the first place to visit if you're looking to try out sports equipment. Rick's company is a vertically integrated electrical engineering company specializing in embedded microprocessor-based hardware and software design. Doug Daniels is the head of cloud platform at Google and helped create Mi Pulse, high-tech stylish activewear with integrated heart rate monitoring technology. And finally, Doug Barton partnered with ultra-cyclist Dave Haase to put IBM's IoT and Big Data capabilities to the test.
If you’re looking at connecting low-powered devices, or are considering a communications standard, be it ANT+ vs. BTLE, then you’ll want to watch this discussion. We cover interoperability, explore the large-scale initiatives that will define the standards and frameworks for the IoT globally, what will help increase adoption for end-users, and ask if diversification of protocols is actually good for the market.
Update: Ray Maker also includes information about his talk and our discussion here. Be sure to read Ray’s additional thoughts in the comments section.
While the full impact of the Internet of Things has yet to arrive, many companies already offer IoT platforms with the services, software, and hardware necessary to make your company IoT-ready. How do you determine which platform will meet the needs of your industry and, more importantly, your business? This O’Reilly report will help you sort through the criteria you need to think about when planning your IoT strategy.
Author Matthew Perry explores key considerations for smoothing the transition to IoT components. You’ll not only examine the things and people you want to connect, but also the network upgrades necessary to connect them—including data flow, APIs and end user applications, and robust security.
Chances are your IoT platform won’t come out of a box; you may do part of the work internally. With the guidelines in this report, you’ll learn how to prepare for an IoT platform that specifically meets the needs of your business and its customers.
Brought to you by PTC
Guest blog post by Ajit Jaokar
At the Data Science for Internet of Things course, we have been working with the Predix APIs. We emphasise the Industrial IoT, and Predix from GE is one of the best examples of a mature IIoT platform. Predix is currently available on a free trial, and I asked GE about the process once the trial ends: after six months, developers have to pay the price listed under each service, for example HERE. I have used the documentation from the Predix site for this article.
What is Predix Platform?
The Predix platform is a cloud-based Platform-as-a-Service (PaaS) for the Industrial Internet. Predix provides a complete environment to create solutions to run industrial-scale analytics for Industrial IoT. Predix platform uses a Cloud Foundry based microservice architecture which enables it to deliver functionality as a set of very small, granular, independent collaborating services.
The high-level architecture of Predix is as below
The Predix platform provides the following industrial microservices:
Asset Service: to support asset modeling. Application developers use the Asset service to create, update, and store asset model data that defines asset properties as well as relationships between assets and other modeling elements. The Asset service consists of an API layer, a query engine, and a graph database. For example, an application developer can create an asset model describing the logical component structure of all pumps in an organization, then create instances of the model to represent each pump in an organization.
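The pump example above can be sketched in a few lines. This is not the actual Predix Asset schema; the field names (`uri`, `model`, `serial_no`) and URI paths are assumptions for illustration, chosen only to mirror the model/instance relationship the service is described as storing.

```python
# Illustrative sketch of the kind of JSON documents an application
# developer might store in an asset-model service such as Predix Asset.
# Field names and URI conventions below are hypothetical.

def make_pump_model(model_uri, properties):
    # The model defines the properties shared by all pumps.
    return {"uri": model_uri, "properties": properties}

def make_pump_instance(model, instance_uri, serial_no):
    # Each instance references its model, mirroring the model/instance
    # relationship described in the text.
    return {"uri": instance_uri, "model": model["uri"], "serial_no": serial_no}

pump_model = make_pump_model("/models/pump", ["flow_rate", "rpm", "status"])
pumps = [make_pump_instance(pump_model, f"/pumps/pump-{i}", f"SN-{1000 + i}")
         for i in range(3)]
print([p["uri"] for p in pumps])
```

In the real service these documents would be created and queried through the Asset REST API and its graph database; the sketch only shows the shape of the data.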
Analytics Services: can be used to manage the execution of analytics through configuration, abstraction, and extensible modules.
Data Services: include the Time Series service and the BLOB Store service (Binary Large Object Storage). Data Services also include a SQL Database (the PostgreSQL object-relational database management system), a Message Queue, and a Key-Value Store.
Security Services: consists of User Account and Authentication, Access Control and Tenant Management
Machine Services: a device-independent software stack that allows you to develop solutions to connect machines to the Industrial Internet.
Predix Mobile: used for mobility applications
Event Hub: Predix also includes the Event Hub beta pub/sub service, built to be secure, massively scalable, fault tolerant, and language-agnostic. It supports textual (JSON over websocket) and binary (protobuf over gRPC) transports for publishing, and binary (protobuf over gRPC) for subscribing. Devices communicate with the Event Hub service by publishing messages to topics. Event Hub provides the ability to ingest streaming data from anywhere to the cloud for processing.
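To make the topic-based publishing concrete, here is a minimal sketch of the publish side of such a pub/sub service. The envelope fields and topic name are hypothetical, not the Event Hub wire format; a real client would ship the resulting JSON over a websocket or gRPC connection rather than just building it.

```python
import json

# Hypothetical sketch of a device publishing to a topic-based pub/sub
# service like Event Hub: wrap a reading in an envelope naming the
# topic, then serialize it as JSON for the textual transport.

def build_message(topic, device_id, body):
    envelope = {"topic": topic, "device": device_id, "body": body}
    return json.dumps(envelope)

wire = build_message("turbine/telemetry", "dev-42", {"rpm": 1800})
print(wire)
```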
We are more interested in the Analytics service considering our emphasis on Data Science for IoT. An ‘Analytic’ is roughly equivalent to a model in traditional data science parlance. The Analytics Catalog service provides a software catalog for sharing reusable analytics across development teams. The Analytics Catalog service supports analytics written in Java, Matlab and Python. The Analytics Runtime service supports analytics from the Analytics Catalog service.
Developer Workflow for analytics is as follows
Stage 1: Develop the Analytic
Stage 2: Publish the Analytic to the Analytics Catalog
Stage 3: Validate the Analytic
(3a) Test the analytic with a sample data set and verify the results using the Analytics Catalog service.
(3b) If necessary, modify the analytic and upload new artifact to the catalog using the Analytics Catalog service.
Stage 4: Release the Analytic
Stage 5: Execute the Analytic
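The five stages above form a strict pipeline: an analytic cannot be released before it has been validated, and so on. A toy state machine makes that ordering explicit; the stage names come from the workflow above, but the class and enforcement logic are purely illustrative, not part of the Predix SDK.

```python
# Toy model of the five-stage analytic lifecycle as a state machine
# that only allows advancing to the next stage in order.

STAGES = ["developed", "published", "validated", "released", "executed"]

class Analytic:
    def __init__(self, name):
        self.name = name
        self.stage = None  # not yet developed

    def advance(self, stage):
        # The only legal next stage is the one after the current stage.
        expected = (STAGES[0] if self.stage is None
                    else STAGES[STAGES.index(self.stage) + 1])
        if stage != expected:
            raise ValueError(f"expected stage {expected!r}, got {stage!r}")
        self.stage = stage

a = Analytic("pump-anomaly-detector")
for s in STAGES:
    a.advance(s)
print(a.stage)
```

Trying to jump straight from "developed" to "validated" (skipping publication to the catalog) raises an error, which mirrors the dependency of the runtime on the catalog described above.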
Predix is interesting for our work in the Data Science for Internet of Things course because of its emphasis on the Industrial Internet. In subsequent articles, I will cover Predix Edge Processing capabilities and the Predix APIs. I am interested in finding an equivalent for Industrie 4.0. If you know of such a service for Industrie 4.0, or you wish to join the course, please email me at info at futuretext.com
In the next parts of this series, I will cover Predix Edge Processing and the Predix API.
Predix is also covered in our forthcoming book on Data Science for Internet of Things by Ajit Jaokar and Jean Jacques Bernard
In this issue we interview Autodesk's Bryan Kester. Hey wait a minute, you say, Autodesk is CAD, not IoT. Well read our interview to learn how Autodesk is more IoT than you think. Also, Bill McCabe looks at the skills for IoT and we revisit one of our most popular posts: Internet of Things Landscape 2016 - In One Diagram. If you're interested in being featured, we always welcome your contributions on all things IoT Infrastructure, IoT Application Development, IoT Data and IoT Security, and more. All members can post on IoT Central. Consider contributing today. Our guidelines are here.
Autodesk's Bryan Kester - Skills for the IoT pro, disagreement with Gartner, and what's next for IoT
By David Oro
In our latest installment of interviews with IoT practitioners, we interview Bryan Kester, Director of IoT, Autodesk, Inc. Bryan leads the Internet of Things (IoT) Product Group at Autodesk. We asked him questions about Gartner's prediction of IoT maturation, his take on the IoT platform wars, the skills sets needed in this rapidly emerging and changing field, and what's next for IoT. Bryan predicts, "There will be some continued hype and then a subtle, but significant shakeout among both pure play and "me too" vendors. Those that help simplify the systems integration nature of IoT will have a future."
By David Oro
Matt notes "The IoT today is largely at this inflection point where “the future is already here but it is not evenly distributed”. From ingestibles, wearables, AR/VR headsets to connected homes and factories, drones, autonomous cars and smart cities, a whole new world (and computing paradigm) is emerging in front of us. But as of right now, it just feels a little patchy, and it doesn’t always look good, or work great – yet."
By Ajit Joakar
This blog is based on my talk in London at the Re.work Connected City Summit on Deep Learning Applications for Smart Cities. The talk is based on a forthcoming paper created with the help of my students at UPM/citysciences on the same theme.
Here are some notes on our approach:
- When we speak of machines, the media dramatizes the issue. Yet city officials and planners plan ten to twenty years into the future. They will have to consider many of these issues in a pragmatic way.
- Deep Learning / Artificial Intelligence will impact many aspects of Smart cities. We decided to approach the subject in a pragmatic manner and to explore the impact of Deep Learning/AI technology on the lives of future citizens.
How could self-learning machines affect humanity in cities?
Posted by Bill McCabe
With many IT professionals with business experience in hot industries like healthcare, telecom and wearables looking to make the switch from systems software and other terrestrial IT-based positions to M2M or IoT strategy, leadership and sales, what are the skills you need to work in the IoT?