


IoT Central Digest, August 1, 2016

Here is the latest issue of the IoT Central Digest. This digest links you to a three-part series entitled IoT 101, which is well worth a read. We also include articles about software tools for IoT device security, dive into fog computing, and look at who holds the intellectual property in IoT.  If you're interested in being featured, we always welcome your contributions on all things IoT Infrastructure, IoT Application Development, IoT Data, IoT Security, and more. All members can post on IoT Central. Consider contributing today. Our guidelines are here.

IoT 101 – Everything You Need to Know to Start Your IoT Project

By Bill Vorhies

Summary: This is the first in a series of articles aimed at providing a complete foundation and broad understanding of the technical issues surrounding an IoT or streaming system so that the reader can make intelligent decisions and ask informed questions when planning their IoT system. Visit www.iotcentral.io to read the entire series.

Intellectual Property Held by the Top 100 IoT Startups

Posted by Mitchell Schwartz 

Using Mattermark's list of the Top 100 IoT startups in 2015 (ranked by funding, published in Forbes on October 25, 2015), Ipqwery has looked behind the analytics to reveal the nature of the intellectual property (IP) behind these innovative companies. Our infographic presents a general summary of the IP within the group as a whole, and illustrates the trailing five-year trends in IP filing activity.

Automated Software Development Tools for Improving IoT Device Security

Posted by Bill Graham 

For IoT and M2M device security assurance, it's critical to introduce automated software development tools into the development lifecycle. Although software tools' role in quality assurance is important, it becomes even more so when security becomes part of a new or existing product's requirements.

How IoT can benefit from fog computing

By Ben Dickson

What I’m mentioning a lot these days (and hearing about as well) is the chaotic propagation and growth of the Internet of Things. With billions of devices slated to connect to the internet every year, we’re going to be facing some serious challenges. I’ve already discussed how blockchain technology might address connectivity issues for huge IoT ecosystems. But connectivity accounts for only a small part of the problems we’ll be facing. Another challenge will be processing and making sense of the huge reams of data that IoT devices are generating. Close on its heels will be the issue of latency, or how fast an IoT system can react to events. And as always, security and privacy issues will remain among the top items on the IoT challenge list. Fog computing (aka edge computing) can help mitigate – if not overcome – these challenges.


Read more…

By Bill Vorhies

Bill is Editorial Director for our sister site Data Science Central and has practiced as a data scientist and commercial predictive modeler since 2001. This article originally appeared here

Summary:  In this Lesson 3 we continue to provide a complete foundation and broad understanding of the technical issues surrounding an IoT or streaming system so that the reader can make intelligent decisions and ask informed questions when planning their IoT system. 

In Lesson 1
  • Is it IoT or Streaming
  • Basics of IoT Architecture – Open Source
  • Data Capture – Open Source with Options
  • Storage – Open Source with Options
  • Query – Open Source with Options

In Lesson 2
  • Stream Processing – Open Source
  • What Can Stream Processors Do
  • Open Source Options for Stream Processors
  • Spark Streaming and Storm
  • Lambda Architecture – Speed plus Safety
  • Do You Really Need a Stream Processor
  • Four Applications of Sensor Data

In This Article (Lesson 3)
  • Three Data Handling Paradigms – Spark versus Storm
  • Streaming and Real Time Analytics
  • Beyond Open Source for Streaming
  • Competitors to Consider
  • Trends to Watch

Continuing from Lesson 2, our intent is to provide a broad foundation for folks who are starting to think about streaming and IoT.  In this lesson we’ll explain how Spark and Storm handle data streams differently, discuss what real time analytics actually means, offer some alternatives for streaming beyond open source, and suggest some trends you should watch in this fast evolving space.

 

Three Data Handling Paradigms:  SPARK Versus Storm

When users compare SPARK versus Storm the conversation usually focuses on the difference in the way they handle the incoming data stream. 

  • Storm processes incoming data one event at a time – called Atomic processing. 
  • SPARK processes incoming data in very small batches – called Micro Batch.  A SPARK micro batch is typically between ½ second and 10 seconds depending on how often the sensors are transmitting.  You can define this value.
  • A third method called Windowing allows for much longer windows of time and can be useful in some text or sentiment analysis applications, or systems in which signals only evolve over a relatively long period of time.

 

Atomic (aka one-tuple-at-a-time):  Processes each inbound data event as a separate element.  This is the most intuitively obvious approach but also the most computationally expensive design.  It is used, for example, to guarantee the fastest processing of individual events with the least delay in transmitting the event to the subscriber.  It is often seen for customer transactional inputs so that if some element of the event block fails, the entire block is not deleted but moved to a bad-record file that can be processed further later.  Apache Storm uses this paradigm.

Micro batching:  The critique of this approach is that it processes in batches (not atomic-level streaming), but typically those batches are extremely small, encompassing actions that occur within only a few seconds.  You can adjust the time window.  Batching makes the process somewhat more efficient.  SPARK Streaming uses this paradigm.

Windowing:  A hybrid of the two approaches, Windowing maintains the atomic processing of each data item but creates pseudo-batches (windows) to make processing more efficient.  This also allows for many more sophisticated interpretations such as sliding windows (e.g. everything that occurred in the last X period of time). 
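
To make the micro-batch and windowing ideas concrete, here is a minimal sketch using the Spark Streaming DStream API in Python; the socket source, the 2-second batch interval, and the 60/10-second window are illustrative choices, not values recommended by the series.

```python
# Minimal Spark Streaming sketch: a 2-second micro batch plus a sliding window.
# The source, host, port, and durations are illustrative placeholders.
from pyspark import SparkContext
from pyspark.streaming import StreamingContext

sc = SparkContext(appName="SensorStream")
ssc = StreamingContext(sc, batchDuration=2)      # micro batch: a new batch every 2 seconds

# One "sensor_id,value" reading per line from a socket source.
lines = ssc.socketTextStream("localhost", 9999)
readings = lines.map(lambda line: (line.split(",")[0], float(line.split(",")[1])))

# Windowing on top of micro batches: everything seen in the last 60 seconds,
# recomputed every 10 seconds (a sliding window).
windowed = readings.window(60, 10)
windowed.count().pprint()

ssc.start()
ssc.awaitTermination()
```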

All three of these approaches can guarantee that each data element is processed at least once.  Only the Atomic paradigm can guarantee that each data element is processed only once.

 

Consider this Example 

Your sensors are like FitBits and sample data every 10 seconds.  They transmit that in bursts whenever the sensor is cued to dump its data into a Wi-Fi stream.  One user may monitor the results of the stream many times during the day, valuing low latency and causing his sensor to upload via Wi-Fi frequently.  Another user may not be near a Wi-Fi connection or may simply not bother to download the data for several days.  Still a third user may have trouble with a network connection or the hardware itself that causes the sensor to transmit incomplete or missing packets that are then repeated later or are simply missing from the stream.

In this scenario, data from sensors originating at the same time may arrive at the stream processor with widely different delays and some of those packets that were disrupted may have been transmitted more than once or not at all.

You will need to carefully evaluate whether guaranteeing ‘only once’ processing, or the marginally faster response time of atomic processing, warrants making this factor part of your Stream Processor selection.
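
If you settle for at-least-once delivery, one common mitigation is to make the downstream logic idempotent. Here is a minimal sketch in plain Python, not tied to any particular stream processor; the event fields and the in-memory 'seen' set are stand-ins for whatever fast store you would actually use.

```python
# Idempotent handling of an at-least-once stream: drop events already seen.
# Keyed on (sensor_id, timestamp); 'seen' would be a fast store in practice.
seen = set()

def handle(event):
    print("processing", event)          # your real processing logic

def process_once(event):
    key = (event["sensor_id"], event["ts"])
    if key in seen:                      # duplicate caused by a retransmitted packet
        return
    seen.add(key)
    handle(event)

# Duplicated and out-of-order packets end up processed exactly once:
for e in [{"sensor_id": "s1", "ts": 10, "value": 1.0},
          {"sensor_id": "s1", "ts": 10, "value": 1.0},   # retransmission
          {"sensor_id": "s1", "ts": 0,  "value": 0.5}]:  # late arrival
    process_once(e)
```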

 

Streaming and Real Time Analytics

It’s common in IoT to find references to “real time analytics” or “in stream analytics” and these terms can be misleading.  Real time analytics does not mean discovering wholly new patterns in the data in real time while it is streaming by.  What it means is that previously developed predictive models that were deployed into the Stream Processor can score the streaming data and determine whether that signal is present, in real time.

It’s important to remember that the data science behind your sophisticated Stream Processor was developed in the classic two-step data science process.  First, data scientists work in batch with historical data with a known outcome (supervised learning) to develop an algorithm that uses the inputs to predict the likelihood of the targeted event.  The model, an algebraic formula represented by a few lines of code (C, Python, Java, R, and others), is then exported into a program within the Stream Processor and goes to work evaluating the passing data to see if the signal is present.  If it is, some form of action alert is sent to the human or machine, or sent as a visual signal to a dashboard.
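
As a rough illustration of what that exported "algebraic formula" can look like inside the stream path, here is a hypothetical scoring function in Python; the coefficients, field names, and alert threshold are placeholders, not values from any real model.

```python
# The 'few lines of code' exported from the data science step: a logistic scoring
# formula applied to each event as it streams past. All numbers are illustrative.
import math

COEFFS = {"intercept": -4.0, "temperature": 0.05, "vibration": 1.2}

def score(event):
    z = (COEFFS["intercept"]
         + COEFFS["temperature"] * event["temperature"]
         + COEFFS["vibration"] * event["vibration"])
    return 1.0 / (1.0 + math.exp(-z))        # probability the target signal is present

def send_alert(event):
    print("ALERT:", event)                   # e-mail, SMS, or dashboard signal

def on_event(event, alert_threshold=0.8):
    if score(event) >= alert_threshold:
        send_alert(event)

on_event({"temperature": 95.0, "vibration": 2.5})
```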

Recently the first indications that some new discoveries can be made in real time have been emerging but they are exceedingly rare.  See more in this article.

 

Beyond Open Source for Streaming

Why would you want to look beyond open source for your IoT system?  Largely because, while open source tools and packages are practically free, they are free in the sense of a ‘free puppy’.

Yes, these packages can be downloaded for free from Apache, but the most reasonable sources are the three primary distributors, Hortonworks, Cloudera, and MapR, all of whom make sure the code is kept up to date and add certain features that make it easier to maintain.  Even from these distributors, your total investment should be in the low five figures.  This does not of course include implementation, consulting, or configuration support, which is extra, either from the distributors, from other consultants, or from your own staff if they are qualified.

With open source what you also get is complexity.  Author Jim Scott writing about SPARK summed it up quite nicely.  “SPARK is like a fighter jet that you have to build yourself. The great thing about that is that after you are done building it, you have a fighter jet. Problem is, have you ever flown a fighter jet? There are more levers than could be imagined.”

In IT parlance, the configurations and initial programs you create in SPARK or other open source streaming platforms will be brittle.  That is, every time your business rules change you will have to modify the SPARK code, written in Scala (though Python is also available).

Similarly, standing up a SPARK or Hadoop storage cluster comes with programming and DBA overhead that you may not want to incur, or at least to minimize.  Using one of the major cloud providers and/or adding a SaaS service like Qubole will greatly reduce your labor with only a little incremental cost.

The same is true for the proprietary Stream Processors many of which are offered by major companies and are well tested and supported.  Many of these come with drag-and-drop visual interfaces eliminating the need for manual coding so that any reasonably dedicated programmer or analyst can configure and maintain the internal logic as your business changes.  (Keep your eye on NiFi, the new open source platform that also claims drag-and-drop).

 

Competitors to Consider

Forrester publishes a periodic rating and ranking of competing “Big Data Streaming Analytics Platforms” and, as of the spring of 2016, listed 15 worthy of consideration.

Here are the seven Forrester regards as leaders in rank order:

  1. IBM
  2. Software AG
  3. SAP
  4. TIBCO Software
  5. Oracle
  6. DataTorrent
  7. SQLstream

There are eight additional ‘strong performers’ in rank order:

  1. Impetus Technologies
  2. SAS
  3. Striim
  4. Informatica
  5. WSO2
  6. Cisco Systems
  7. data Artisans
  8. EsperTech

Note that the ranking does not include the cloud-only offerings which should certainly be included in any competitive comparison:

  1. Amazon Web Services’ Elastic MapReduce
  2. Google Cloud Dataflow
  3. Microsoft Azure Stream Analytics

Here’s the ranking chart:

 

It’s likely that you can get a copy of the full report from one of these competitors.  Be sure to pay attention to the details.  For example, here are some interesting observations from the numerical scoring table.

Stream Handling:  In this presumably core capability SoftwareAG got a perfect score while Impetus and WSO2 scored decidedly below average.

Stream Operators (Programs):  Another presumably core capability.  IBM Streams was given a perfect score.  Most other competitors had scores near 4.0 (out of 5.0) except for data Artisans given a noticeably weak score.

Implementation Support: data Artisans and EsperTech were decidedly weaker than others.

In all there are 12 scoring categories that you’ll want to examine closely.

What these 15 leaders and 3 cloud offerings have in common is that they greatly simplify the programming and configuration and hide the gory details.  That’s a value well worth considering.

 

Trends to Watch

IoT and streaming make up a fast-growing area with a high rate of change.  Witness the ascendance of SPARK in just the last year to become the go-to open source solution.  All of this development reflects the market demand for more and more tools and platforms to address the exploding market for data-in-motion applications.

All of this means you will need to keep your research up to date during your design and selection period.  However, don’t let the rate of change deter you from getting started.

  • One direction of growth will be the further refinement of SPARK to become a single platform capable of all four architectural elements:  data capture, stream processing, storage, and query.
  • We would expect many of the proprietary solutions to stake this claim also.
  • When this is proven reliable you can abandon the separate components required by the Lambda architecture.
  • We expect SPARK to move in the direction of simplifying setup and maintenance, which is the same ground the proprietary solutions are claiming.  Watch particularly for integration of NiFi into SPARK, or at least the drag-and-drop interface elements, creating a much friendlier UI.
Read more…

By Bill Vorhies.

Bill is Editorial Director for our sister site Data Science Central and has practiced as a data scientist and commercial predictive modeler since 2001. This article originally appeared here

Summary:  In this Lesson 2 we continue to provide a complete foundation and broad understanding of the technical issues surrounding an IoT or streaming system so that the reader can make intelligent decisions and ask informed questions when planning their IoT system. 

In Lesson 1
  • Is it IoT or Streaming
  • Basics of IoT Architecture – Open Source
  • Data Capture – Open Source with Options
  • Storage – Open Source with Options
  • Query – Open Source with Options

In This Article (Lesson 2)
  • Stream Processing – Open Source
  • What Can Stream Processors Do
  • Open Source Options for Stream Processors
  • Spark Streaming and Storm
  • Lambda Architecture – Speed plus Safety
  • Do You Really Need a Stream Processor
  • Four Applications of Sensor Data

In Lesson 3
  • Three Data Handling Paradigms – Spark versus Storm
  • Streaming and Real Time Analytics
  • Beyond Open Source for Streaming
  • Competitors to Consider
  • Trends to Watch

Continuing from Lesson 1, our intent is to provide a broad foundation for folks who are starting to think about streaming and IoT.  In this lesson we’ll dive into Stream Processing, the heart of IoT, then discuss the Lambda architecture, consider whether you really need a Stream Processor, and offer a structure for thinking about what sensors can do.

 

Stream Processing – Open Source

Event Stream Processing platforms are the Swiss Army knives that can make data-in-motion do almost anything you want it to do.

The easiest way to understand ESP architecture is to see it as three layers or functions: input, processing, and output.

 

Input accepts virtually all types of time-based streaming data, and multiple input streams are common.  Within the main ESP processor, a variety of actions called programs or operators are applied.  The results of those programs are passed to the subscriber interface, which can send alerts via human interfaces or trigger automated machine actions, and also pass the data to Fast and Forever data stores.

It is true that Stream Processing platforms can directly receive data streams, but recall that they are not good at preserving accidentally lost data so you will still want a Data Capture front end like Kafka that can rewind and replay lost data.  It’s likely over the near future that many stream processors will resolve this problem and then you will need to revisit the need for a Kafka front end.

 

Stream Processing Requirements

The requirements for your stream processor are these:

  • High Velocity:  Capable of ingesting and processing millions of events per second, depending on your specific business need.
  • Scales Easily:  These will all run on distributed clusters.
  • Fault Tolerant:  This is different than guaranteeing no lost data.
  • Guaranteed Processing:  This comes in two flavors: 1.) process each event at least once, and 2.) process each event only once.  The ‘only-once’ criterion is harder to guarantee.  This is an advanced topic we will discuss a little later.
  • Performs the Programs You Need for Your Application.

 

What Can ESP Programs Do

The real power is in the programs, starting with the ability to do data cleansing on the front end (a kind of mini-MDM) and then to duplicate the stream of data multiple times so that each identical stream can be used in different analytic routines simultaneously, without waiting for one to finish before the next begins.  Here’s a diagram from a healthcare example used in a previous article describing how this works.  It illustrates multiple streams being augmented by static data and processed by different logic types at the same time.  Each block represents a separate program within the ESP that needs to be created by you.

 

There are a very large number of different logic types that can be applied through these ESP programs including:

  • Compute
  • Copy, to establish multiple processing paths – each with different retention periods of say 5 to 15 minutes
  • Aggregate
  • Count
  • Filter – allows you to keep only the data from the stream that is useful and discard the rest, greatly reducing storage.
  • Function (transform)
  • Join
  • Notification – email, text, or multimedia
  • Pattern detection (specify events of interest, EOIs)
  • Procedure (apply an advanced predictive model)
  • Text context – could detect, for example, Tweet patterns of interest
  • Text sentiment – can monitor for positive or negative sentiments in a social media stream

There is some variation in what open source and proprietary packages can do so check the details against what you need to accomplish.
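
To make the operator vocabulary above concrete, here is a toy sketch in plain Python, standing in for no particular ESP product, that chains a Filter, a Join against static data, and an Aggregate; the field names and thresholds are invented for illustration.

```python
# Toy event-stream 'programs' chained together: Filter -> Join (static data) -> Aggregate.
# Plain Python generators stand in for the operators of a real ESP.
from collections import defaultdict

static_patient_data = {"p1": {"age": 71}, "p2": {"age": 34}}     # the 'join' source

def filter_op(events, min_temp=100.0):
    for e in events:
        if e["temp"] >= min_temp:        # keep only the useful readings, discard the rest
            yield e

def join_op(events):
    for e in events:
        yield {**e, **static_patient_data.get(e["patient"], {})}  # augment with static data

def aggregate_op(events):
    counts = defaultdict(int)            # events of interest per patient
    for e in events:
        counts[e["patient"]] += 1
    return dict(counts)

stream = [{"patient": "p1", "temp": 103.2},
          {"patient": "p2", "temp": 98.6},
          {"patient": "p1", "temp": 101.0}]

print(aggregate_op(join_op(filter_op(stream))))   # {'p1': 2}
```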

 

Open Source Options for Stream Processing

The major open source options (all Apache) are these:

Samza:  A distributed stream processing framework. It uses Kafka for messaging, and YARN to provide fault tolerance, processor isolation, security, and resource management.

NiFi: This is a fairly new project still in incubation.  It is different because of its user-friendly drag-and-drop graphical user interface and the ease with which it can be customized on the fly for specific needs.

Storm:  A well tested event based stream processor originally developed by Twitter.

SPARK Streaming:  SPARK Streaming is one of the components of SPARK, which is the first to integrate batch and streaming in a single enterprise-capable platform.

 

SPARK Streaming and Storm Are the Most Commonly Used Open Source Packages

SPARK has been around for several years but in the last year it’s had an amazing increase in adoption, now replacing Hadoop/MapReduce in most new projects and with many legacy Hadoop/MapReduce systems migrating to SPARK.  SPARK development is headed toward being the only stack you would need for an IoT application.

SPARK consists of five components all of which support Scala, Java, Python, and R.

  1. SPARK:  The core application is a batch processing engine that is compatible with HDFS and other NoSQL DBs.  Its popularity is driven by the fact that it is 10X to 100X faster than Hadoop/MapReduce.
  2. MLlib: A powerful on-board library of machine learning algorithms for data science.
  3. SPARK SQL:  For direct support of SQL queries.
  4. SPARK Streaming:  Its integrated stream processing engine.
  5. GraphX:  A powerful graph database engine useful outside of streaming applications.

 

Storm by contrast is a pure event stream processor.  The differences between Storm and SPARK Streaming are minor except in the area of how they partition the incoming data.  This is an advanced topic discussed later.

If, after you’ve absorbed the lesson about data partitioning, you determine that it does not impact your application, then SPARK / SPARK Streaming is the most likely open source choice.

 

Lambda Architecture – Speed Plus Safety

The standard reference architecture for an IoT streaming application is known as the Lambda architecture, which incorporates a Speed Layer and a Safety Layer.

The inbound data stream is duplicated by the Data Capture app (Kafka) and sent in two directions, one to the safety of storage, and the other into the Stream Processing platform (SPARK Streaming or Storm).  This guarantees that any data lost can be replayed to ensure that all data is processed at least once.
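
One minimal way to sketch that fan-out, assuming the third-party kafka-python client (the broker address and topic name are placeholders): the same topic is read by two independent consumer groups, one feeding the storage (safety) path and one feeding the stream processing (speed) path.

```python
# Lambda-style fan-out sketch: one Kafka topic, two independent consumer groups.
# Assumes the kafka-python package; topic name and broker address are placeholders.
from kafka import KafkaConsumer

def append_to_forever_store(payload):
    print("archived", payload)

def process_in_stream(payload):
    print("scored", payload)

def safety_layer():
    # Batch/storage path: archive every raw event so it can be replayed later.
    consumer = KafkaConsumer("sensor-events",
                             group_id="safety-archiver",
                             bootstrap_servers="localhost:9092")
    for record in consumer:
        append_to_forever_store(record.value)

def speed_layer():
    # Streaming path: score events as they arrive; losses can be replayed from storage.
    consumer = KafkaConsumer("sensor-events",
                             group_id="speed-processor",
                             bootstrap_servers="localhost:9092")
    for record in consumer:
        process_in_stream(record.value)
```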

 

The queries on the Stream Processing side may be extracting static data to add to the data stream in the Stream Processor or they may be used to send messages, alerts, and data to the consumers via any number of media including email, SMS, customer applications, or dashboards.  Alerts are also natively produced within the Stream Processor.

Queries on the Storage safety layer will be batch queries, used for creating advanced analytics to be embedded in the Stream Processor or for answering ad hoc inquiries, for example to develop new predictive models.

 

Do You Really Need a Stream Processor?

As you plan your IoT platform you should consider whether a Stream Processor is actually required.  For certain scenarios where the message to the end user is required only infrequently or for certain sensor uses it may be possible to skip the added complexity of a Stream Processor altogether.

 

When Real Time is Long

When real time is fairly long, for example when notifying the end user of any new findings can occur only once a day or even less often, it may be perfectly reasonable to process the sensor data in batch.

From an architecture standpoint the sensor data would arrive at the Data Capture app (Kafka) and be sent directly to storage.  Using regular batch processing routines today’s data would be analyzed overnight and any important signals sent to the user the following day.

Batch processing is a possibility where ‘real time’ is 24 hours or more and in some cases perhaps as short as 12 hours.  Shorter than this and Stream Processing becomes more attractive.

It is possible to configure Stream Processing to evaluate data over any time period including days, weeks, and even months but at some point the value of simplifying the system outweighs the value of Stream Processing.

 

Four Applications of Sensor Data

There are four broad applications of sensor data that may also impact your decision as to whether or not to incorporate Stream Processing as illustrated by these examples.

Sensor Direct:  For example, reading the GPS coordinates directly from the sensor and dropping them onto a map can readily create a ‘where’s my phone’ style app.  It may be necessary to join static data regarding the user (their home address, in order to limit the map scale), and that could be accomplished external to a Stream Processor using a standard table join, or it could be accomplished within a Stream Processor.

Expert Rules:  Without the use of data science, it may be possible to write rules that give meaning to the inbound stream of data.  For example, when combined with the patient’s static data an expert rule might be to summon medical assistance if the patient’s temperature reaches 103°.

Predictive Analytics: The next two applications are both within the realm of data science.  Predictive analytics are used by a data scientist to find meaningful information in the data.

Unsupervised Learning:  In predictive analytics unsupervised learning means applying techniques like clustering and segmentation that don’t require historical data that would indicate a specific outcome.  For example, an accelerometer in your FitBit can readily learn that you are now more or less active than you have been recently, or that you are more or less active than other FitBit users with whom you compare.  Joining with the customer’s static data is a likely requirement to give the reading some context. 

The advantage of unsupervised learning is that it can be deployed almost immediately after the sensor is placed since no long period of time is required to build up training data. 

Some unsupervised modeling will be required to determine the thresholds at which the alerts should be sent.  For example, a message might only be appropriate if the period of change was more than say 20% day-over-day, or more than one standard deviation greater than a similar group of users. 
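
Here is a minimal sketch of that kind of threshold logic in Python, using the 20% day-over-day and one-standard-deviation examples above; the activity counts are invented for illustration.

```python
# Threshold logic of the kind a data scientist might hand to the stream processor:
# alert when activity changes more than 20% day-over-day, or sits more than one
# standard deviation above a peer group. The numbers are illustrative only.
import statistics

def should_alert(today, yesterday, peer_group):
    day_over_day = abs(today - yesterday) / yesterday      # fractional change
    peer_mean = statistics.mean(peer_group)
    peer_sd = statistics.stdev(peer_group)
    return day_over_day > 0.20 or today > peer_mean + peer_sd

print(should_alert(today=12500, yesterday=9800, peer_group=[8000, 9000, 10000, 11000]))
```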

These algorithms would be determined by data scientists working from batch data and exported into the Stream Processor as a formula to be applied to the data as it streams by.

Supervised Learning:  Predictive models are developed using training data in which the outcome is known.  This requires some examples of the behavior or state to be detected and some examples where that state is not present. 

For example we might record the temperature, vibration, and power consumption of a motor and also whether that motor failed within the next 12 hours following the measurement.  A predictive model could be developed that predicts motor failure 12 hours ahead of time if sufficient training data is available. 

The model in the form of an algebraic formula (a few lines of C, Java, Python, or R) is then exported to the Stream Processor to score data as it streams by, automatically sending alerts when the score indicates an impending failure. 
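
Here is a hedged sketch of that train-then-export pattern using scikit-learn (an assumed tool, not one named by the author); the handful of training rows is synthetic and far too small for a real model, and is there only to show the mechanics.

```python
# Two-step pattern from the text, sketched with scikit-learn: train in batch on
# historical readings with known outcomes, then export the fitted coefficients
# as the 'few lines of code' used for in-stream scoring. Data is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy historical batch: [temperature, vibration, power]; 1 = failed within 12 hours.
X = np.array([[70, 0.2, 1.0], [72, 0.3, 1.1], [95, 1.5, 1.9],
              [98, 1.8, 2.1], [74, 0.4, 1.0], [96, 1.6, 2.0]])
y = np.array([0, 0, 1, 1, 0, 1])

model = LogisticRegression().fit(X, y)

# 'Export' step: the algebraic formula the stream processor will apply to live events.
intercept, coeffs = model.intercept_[0], model.coef_[0]

def score_event(temperature, vibration, power):
    z = intercept + np.dot(coeffs, [temperature, vibration, power])
    return 1.0 / (1.0 + np.exp(-z))       # probability of failure in the next 12 hours

print(round(score_event(97, 1.7, 2.0), 3))
```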

The benefits of sophisticated predictive models used in Stream Processing are very high.  The challenge may be in gathering sufficient training data if the event is rare as a percentage of all readings or rare over time meaning that much time may pass before adequate training data can be acquired.

Watch for our final installment, Lesson 3.

Read more…

By Bill Vorhies.

Bill is Editorial Director for our sister site Data Science Central and has practiced as a data scientist and commercial predictive modeler since 2001. This article originally appeared here

Summary: This is the first in a series of articles aimed at providing a complete foundation and broad understanding of the technical issues surrounding an IoT or streaming system so that the reader can make intelligent decisions and ask informed questions when planning their IoT system. 

In This Article (Lesson 1)
  • Is it IoT or Streaming
  • Basics of IoT Architecture – Open Source
  • Data Capture – Open Source with Options
  • Storage – Open Source with Options
  • Query – Open Source with Options

In Lesson 2
  • Stream Processing – Open Source
  • What Can Stream Processors Do
  • Open Source Options for Stream Processors
  • Spark Streaming and Storm
  • Lambda Architecture – Speed plus Safety
  • Do You Really Need a Stream Processor
  • Four Applications of Sensor Data

In Lesson 3
  • Three Data Handling Paradigms – Spark versus Storm
  • Streaming and Real Time Analytics
  • Beyond Open Source for Streaming
  • Competitors to Consider
  • Trends to Watch

In talking to clients and prospects who are at the beginning of their IoT streaming projects it’s clear that there’s a lot of misunderstanding and gaps in their knowledge.  You can find hundreds of articles on IoT and inevitably they focus on some portion of the whole without an overall context or foundation.  This is understandable since the topic is big and far ranging not to mention changing fast. 

So our intent is to provide a broad foundation for folks who are starting to think about streaming and IoT.  We’ll start with the basics and move up through some of the more advanced topics, hopefully leaving you with enough information to then begin to start designing the details of your project or at least equipped to ask the right questions.

Since this is a large topic, we’ll spread it out over several articles with the goal of starting with the basics and adding detail in logical building blocks.

 

Is It IoT or Is It Streaming?

The very first thing we need to clear up for beginners is the nomenclature.  You will see the terms “IoT” and “Streaming” used to mean different things as well as parts of the same thing.  Here’s the core of the difference:  If the signal derives from sensors, it’s IoT (Internet of Things).  The problem is that there are plenty of situations where the signal doesn’t come from sensors but is handled in essentially the same way.  Web logs, click streams, streams of text from social media, and streams of stock prices are examples of non-sensor streams that are therefore not “IoT”.

What they share, however, is that all are data-in-motion streams. Streaming is really the core concept, and we could just as easily have called this “Event Stream Processing”, except that focusing on streaming leaves out several core elements of the architecture such as how we capture the signal, store the data, and query it.

In terms of the architecture, the streaming part is only one of the four main elements we’ll discuss here.  Later we’ll also talk about the fact that although the data may be streaming, you may not need to process it as a stream depending on what you think of as real time.  It’s a little confusing but we promise to clear that up below.

The architecture needed to handle all types of streaming data is essentially the same regardless of whether the source is specifically a sensor or not, so throughout we’re going to refer to this as “IoT Architecture”.  And since this is going to be a discussion that focuses on architecture, if you’re still unclear about streaming in general you might start with these overviews: “Stream Processing – What Is It and Who Needs It” and “Stream Processing and Streaming Analytics – How It Works”.

 

Basics of IoT Architecture – Open Source

Open source in Big Data has become a huge driver of innovation.  So much so that probably 80% of the information available on-line deals with some element or package for data handling that is open source.  Open source is also almost completely synonymous with the Apache Software Foundation.  So to understand the basics of IoT architecture we’re going to start by focusing on open source tools and packages.

If you’re at all familiar with IoT you cannot have avoided learning something about SPARK and Storm, two of the primary Apache open source streaming projects but these are only part of the overall architecture.  Also, later in this series we’ll turn our attention to the emerging proprietary non-open source options and why you may want to consider them.

Your IoT architecture will consist of four components: Data Capture, Stream Processing, Storage, and Query.  Depending on the specific packages you choose some of these may be combined but for this open source discussion we’ll assume they’re separate.

 

Data Capture – Open Source

Think of the Data Capture component as the catcher’s mitt for all your incoming sources, be they sensor, web streams, text, image, or social media.  The Data Capture application needs to:

  1. Be able to capture all your data as fast as it’s coming from all sources at the same time.  In digital advertising bidding for example this can easily be 1 million events per second.  There are applications where the rate is even higher but it’s unlikely that yours will be this high.  However, if you have a million sensors each transmitting once per second you’re already there.
  2. Must not lose events.  Sensor data is notoriously dirty.  This can be caused by malfunction, age, signal drift, connectivity issues, or a variety of other network, software and hardware issues.  Depending on your use case you may be able to stand some data loss but our assumption is that you don’t want to lose any.
  3. Scale Easily:  As your data grows, your data capture app needs to keep up.  This means that it will be a distributed app running on a cluster as will all the other components discussed here.

Streaming data is time series so it arrives with at least three pieces of information: 1.) the time stamp from its moment of origination, 2.) sensor or source ID, and 3.) the value(s) being read at that moment.
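
For illustration only, a streaming event in this minimal form might look like the following; the field names are invented, and real collectors often add receive time and sequence numbers.

```python
# The minimum shape of a streaming event: origination timestamp, source ID, reading(s).
# Field names are illustrative placeholders.
event = {
    "ts": "2016-08-01T12:34:56Z",    # time stamp at the moment of origination
    "sensor_id": "pump-0042",        # sensor or source ID
    "values": {"temp_c": 71.3, "vibration_g": 0.8},
}
```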

Later you may combine your streaming data with static data, for example about your customer, but that happens in another component.

 

Why Do You Need a Message Collector At All?

Many of the Stream Processing apps including SPARK and Storm can directly ingest messages without a separate Message Collector front end.  However, if a node in the cluster fails they can’t guarantee that the data can be recovered.  Since we assume your business need demands that you be able to save all the incoming data, a front end Message Collector that can temporarily store and repeat data in the case of failure is considered a safe architecture.

 

Open Source Options for Message Collectors

In open source you have a number of options.  Here are some of the better known Data Collectors.  This is not an exhaustive list.

  • FluentD – General purpose multi-source data collector.
  • Flume – Large scale log aggregation framework.  Part of the Hadoop ecosystem.
  • MQ-style brokers (e.g. RabbitMQ) – A number of lightweight message brokers in this family; the name traces back to IBM’s MQ line, which also produced the MQTT (MQ Telemetry Transport) protocol.
  • AWS Kinesis – Amazon’s managed streaming data service; the other major cloud providers offer comparable data collectors.
  • Kafka – Distributed queue publish-subscribe system for large amounts of streaming data.

 

Kafka is Currently the Most Popular Choice

Kafka is not your only choice but it is far and away today’s most common choice, used by LinkedIn, Netflix, Spotify, Uber, and Airbnb among others.

Kafka is a distributed messaging system designed to tolerate hardware, software, and network failures and to allow segments of failed data to be essentially rewound and replayed, providing the needed safety in your system.  Kafka came out of LinkedIn in 2011 and is known for its ability to handle very high throughput rates and to scale out.

If your stream of data needed no other processing, it could be passed directly through Kafka to a data store.
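
Here is a minimal sketch of the publish side, assuming the third-party kafka-python package; the broker address, topic name, and event fields are placeholders.

```python
# Minimal sketch of sensors publishing into Kafka (kafka-python assumed).
# Broker address, topic name, and field names are illustrative placeholders.
import json
from kafka import KafkaProducer

producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

producer.send("sensor-events", {"ts": 1470052496, "sensor_id": "pump-0042", "temp_c": 71.3})
producer.flush()   # block until the broker has acknowledged the write
```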

 

Storage – Open Source

Here’s a quick way to do a back-of-envelope assessment of how much storage you’ll need.  For example:

  • Number of sensors: 1 million
  • Signal frequency: every 60 seconds
  • Data packet size: 1 KB
  • Events per sensor per day: 1,440
  • Total events per day: 1.44 billion
  • Events per second: 16,667
  • Total data size per day: 1.44 TB
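
The same back-of-envelope arithmetic, written out so you can rerun it with your own sensor counts and packet sizes (decimal units, 1 TB = 10^9 KB):

```python
# Back-of-envelope storage estimate matching the figures above.
sensors = 1_000_000
seconds_between_signals = 60
packet_kb = 1

events_per_sensor_per_day = 24 * 60 * 60 // seconds_between_signals   # 1,440
events_per_day = sensors * events_per_sensor_per_day                  # 1.44 billion
events_per_second = round(events_per_day / (24 * 60 * 60))            # ~16,667
terabytes_per_day = events_per_day * packet_kb / 1e9                  # ~1.44 TB

print(events_per_sensor_per_day, events_per_day, events_per_second, terabytes_per_day)
```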

 

Your system will need two types of storage, ‘Forever’ storage and ‘Fast’ storage.

Fast storage is for real time look up after the data has passed through your streaming platform or even while it is still resident there.  You might need to query Fast storage in just a few milliseconds to add data and context to the data stream flowing through your streaming platform, like what were the min and max or average readings for sensor X over the last 24 hours or the last month.  How long you hold data in Fast storage will depend on your specific business need.

Forever storage isn’t really forever but you’ll need to assess exactly how long you want to hold on to the data.  It could be forever or it could be a matter of months or years.  Forever storage will support your advanced analytics and the predictive models you’ll implement to create signals in your streaming platform, and for general ad hoc batch queries.

RDBMS is not going to work for either of these needs based on speed, cost, and scale limitations.  Both of these are going to be some version of NoSQL.

 

Cost Considerations

In selecting your storage platforms you’ll be concerned about scalability and reliability, but you’ll also be concerned about cost.  Consider this comparison drawn from Hortonworks:

 

For on-premises storage, a Hadoop cluster will be both the lowest-cost and the best scalability/reliability option.  Cloud storage, also based on Hadoop, is now approaching 1¢ per GB per month from Google, Amazon, and Microsoft.

 

Open Source Options for Storage

Once again we have to pause to explain nomenclature, this time about “Hadoop”.  Many times, indeed most times that you read about “Hadoop” the author is speaking about the whole ecosystem of packages that are available to run on Hadoop. 

Technically however Hadoop consists of three elements that are the minimum requirements for it to operate as a database.  Those are: HDFS (Hadoop file system – how the data is stored), YARN (the scheduler), and Map/Reduce (the query system).  “Hadoop” (the three component database) is good for batch queries but has recently been largely overtaken in new projects by SPARK which runs on HDFS and has a much faster query method. 

What you should really focus on is the HDFS foundation.  There are storage alternatives to HDFS, such as Amazon S3 and MongoDB, and these are viable options.  However, almost universally what you will encounter are NoSQL database systems based on HDFS.  These options include:

  • Hbase
  • Cassandra
  • Accumulo
  • SPARK
  • And many others.

We said earlier that RDBMS was non-competitive based on many factors, not the least of which is that the requirement for a schema-on-write is much less flexible than the NoSQL schema-on-read (late schema).  However, if you are committed to RDBMS you should examine the new entries in NewSQL, which are RDBMS with most of the benefits of NoSQL.  If you’re not familiar, try one of these refresher articles here, here, or here.

 

Query – Open Source

The goal of your IoT streaming system is to be able to flag certain events in real time that your customer/user will find valuable.  At any given moment your system will contain two types of data, 1.) Data-in-motion, as it passes through your stream processing platform, and 2.) Data-at-rest, some of which will be in fast storage and some in forever storage.

There are two types of activity that will require you to query your data:

Real time outputs:  If your goal is to send an action message to a human or a machine, or if you are sending data to a dashboard for real time update you may need to enhance your streaming data with stored information.  One common type is static user information.  For example, adding static customer data to the data stream while it is passing through the stream processor can be used to enhance the predictive power of the signal.  A second type might be a signal enhancement.  For example if your sensor is telling you the current reading from a machine you might need to be able to compare that to the average, min, max, or other statistical variations from that same sensor over a variety of time periods ranging from say the last minute to the last month.

These data are going to be stored in your Fast storage and your query needs to be completed within a few milliseconds.
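
As a rough illustration of that fast-storage lookup, here is an in-memory sketch in Python; a production system would use a low-latency store rather than an in-process dict, and the field names are invented.

```python
# Enriching a live event with rolling statistics kept in 'fast' storage.
# An in-process dict stands in for a millisecond-latency store; fields are illustrative.
from collections import defaultdict, deque

WINDOW = 24 * 60 * 60          # keep 24 hours of readings per sensor
recent = defaultdict(deque)    # sensor_id -> deque of (ts, value)

def update_fast_store(event):
    q = recent[event["sensor_id"]]
    q.append((event["ts"], event["value"]))
    while q and event["ts"] - q[0][0] > WINDOW:   # expire readings older than 24 hours
        q.popleft()

def enrich(event):
    values = [v for _, v in recent[event["sensor_id"]]]
    return {**event,
            "min_24h": min(values),
            "max_24h": max(values),
            "avg_24h": sum(values) / len(values)}

update_fast_store({"sensor_id": "s1", "ts": 1000, "value": 10.0})
update_fast_store({"sensor_id": "s1", "ts": 2000, "value": 14.0})
print(enrich({"sensor_id": "s1", "ts": 2000, "value": 14.0}))
```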

Analysis Queries:  It’s likely that your IoT system will contain some sophisticated predictive models that score the data as it passes by to predict human or machine behavior.  In IoT, developing predictive analytics remains the classic two step data science process: first analyze and model known data to create the predictive model, and second, export that code (or API) into your stream processing system so that it can score data as it passes through based on the model.  Your Forever data is the basis on which those predictive analytic models will be developed.  You will extract that data for analysis using a batch query that is much less time sensitive.

Open Source Options for Query

In the HDFS Apache ecosystem there are three broad categories of query options.

  1. Map/Reduce:  This method is one of the three legs of a Hadoop Database implementation and has been around the longest.  It can be complex to code though updated Apache projects like Pig and Hive seek to make this easier.  In batch mode, for analytic queries where time is not an issue Map/Reduce on a traditional Hadoop cluster will work perfectly well and can return results from large scale queries in minutes or hours.
  2. SPARK:  Based on HDFS, SPARK has started to replace Hadoop Map/Reduce because it is 10X to 100X faster at queries (depending on whether the data is on disc or in memory).  Particularly if you have used SPARK in your streaming platform it will make sense to also use it for your real time queries.  Latencies in the milliseconds range can be achieved depending on memory and other hardware factors.
  3. SQL:  Traditionally the whole NoSQL movement was named after database designs like Hadoop that could not be queried by SQL.  However, so many people were fluent in SQL and not in the more obscure Map/Reduce queries that there has been a constant drumbeat of development aimed at allowing SQL queries.  Today, SQL is so common on these HDFS databases that it’s no longer accurate to say NoSQL.  However, all these SQL implementations require some sort of intermediate translator so they are generally not suited to millisecond queries.  They do however make your non-traditional data stores open to any analysts or data scientists with SQL skills.
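
As a sketch of the batch-analysis side of these query options, here is a hypothetical Spark SQL query over Parquet files in HDFS; the path and column names are placeholders.

```python
# Ad hoc analysis query against 'forever' storage using Spark SQL.
# Path and column names are illustrative placeholders.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("ad-hoc-analysis").getOrCreate()

events = spark.read.parquet("hdfs:///iot/events/")   # placeholder location
events.createOrReplaceTempView("events")

daily = spark.sql("""
    SELECT sensor_id, to_date(ts) AS day, avg(value) AS avg_value, count(*) AS readings
    FROM events
    GROUP BY sensor_id, to_date(ts)
""")
daily.show()
```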

Watch for Lessons 2 and 3 in the next weeks.

Read more…

A smart, highly optimized distributed neural network, based on Intel Edison "Receptive" Nodes

Training complex, multi-layer neural networks is referred to as deep learning, as these multi-layer neural architectures interpose many neural processing layers between the input data and the predicted output results – hence the use of the word deep in the deep-learning catchphrase.

While the training procedure of a large-scale network is computationally expensive, evaluating the resulting trained neural network is not, which explains why trained networks can be extremely valuable: they have the ability to very quickly perform complex, real-world pattern recognition tasks on a variety of low-power devices.

These trained networks can perform complex pattern recognition tasks for real-world applications ranging from real-time anomaly detection in Industrial IoT to energy performance optimization in complex industrial systems. The high-value, high accuracy recognition (sometimes better than human) trained models have the ability to be deployed nearly everywhere, which explains the recent resurgence in machine-learning, in particular in deep-learning neural networks.

These architectures can be efficiently implemented on Intel Edison modules to process information quickly and economically, especially in Industrial IoT applications.

Our architectural model is based on a proprietary algorithm, called Hierarchical LSTM, able to capture and learn the internal dynamics of physical systems simply by observing the evolution of related time series.

To train the system efficiently, we implemented a greedy, layer-based parameter optimization approach, so each device can train one layer at a time and send the encoded features to the upper-level device, which learns higher levels of abstraction of the signal dynamics.
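
The Hierarchical LSTM itself is proprietary, so purely as an illustration of the greedy, layer-at-a-time idea, here is a sketch using small dense autoencoders in Keras: each stage trains on the codes produced by the stage below it and forwards only the encoded features upward. The layer sizes and random data are placeholders.

```python
# Greedy, layer-wise feature learning sketch. Dense autoencoders stand in for the
# proprietary Hierarchical LSTM layers; all sizes and data are illustrative.
import numpy as np
from tensorflow import keras

def train_one_layer(x, code_size, epochs=20):
    inp = keras.layers.Input(shape=(x.shape[1],))
    code = keras.layers.Dense(code_size, activation="relu")(inp)
    out = keras.layers.Dense(x.shape[1], activation="linear")(code)
    auto = keras.Model(inp, out)
    auto.compile(optimizer="adam", loss="mse")
    auto.fit(x, x, epochs=epochs, verbose=0)     # the layer learns to reconstruct its input
    return keras.Model(inp, code)                # keep only the encoder half

signals = np.random.rand(256, 32).astype("float32")   # stand-in for windowed sensor series

encoder_1 = train_one_layer(signals, code_size=16)    # trained on the edge device
codes_1 = encoder_1.predict(signals, verbose=0)
encoder_2 = train_one_layer(codes_1, code_size=8)     # upper level trains on encoded features
codes_2 = encoder_2.predict(codes_1, verbose=0)       # compact representation sent to the cloud
print(codes_2.shape)                                  # (256, 8)
```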

Using Intel Edison modules as the layers’ “core computing units”, we can support higher sampling rates and frequent retraining close to the system we are observing, without the need for a complex cloud architecture, sending just a small amount of encoded data to the cloud.

Read more…

Originally Posted and Written by: Michelle Canaan, John Lucker, & Bram Spector

Connectivity is changing the way people engage with their cars, homes, and bodies—and insurers are looking to keep pace. Even at an early stage, IoT technology may reshape the way insurance companies assess, price, and limit risks, with a wide range of potential implications for the industry.

Insurers’ path to growth: Embrace the future

In 1997, Progressive Insurance pioneered the use of the Internet to purchase auto insurance online, in real time.1 In a conservative industry, Progressive’s innovative approach broke several long-established trade-offs, shaking up traditional distribution channels and empowering consumers with price transparency.

This experiment in distribution ended up transforming the industry as a whole. Online sales quickly forced insurers to evolve their customer segmentation capabilities and, eventually, to refine pricing. These modifications propelled growth by allowing insurers to serve previously uninsurable market segments. And as segmentation became table stakes for carriers, a new cottage industry of tools, such as online rate comparison capabilities, emerged to capture customer attention. Insurers fought to maintain their competitive edge through innovation, but widespread transparency in product pricing over time created greater price competition and ultimately led to product commoditization. The tools and techniques that put the insurer in the driver’s seat slowly tipped the balance of power to the customer.

This case study of insurance innovation and its unintended consequences may be a precursor to the next generation of digital connectivity in the industry. Today, the availability of unlimited new sources of data that can be exploited in real time is radically altering how consumers and businesses interact. And the suite of technologies known as the Internet of Things (IoT) is accelerating the experimentation of Progressive and other financial services companies. With the IoT’s exponential growth, the ways in which citizens engage with their cars, homes, and bodies are getting smarter each day, and they expect the businesses they patronize to keep up with this evolution. Insurance, an industry generally recognized for its conservatism, is no exception.

IoT technology may still be in its infancy, but its potential to reshape the way insurers assess, price, and limit risks is already quite promising. Nevertheless, since innovation inevitably generates unintended possibilities and consequences, insurers will need to examine strategies from all angles in the earliest planning stages.

To better understand potential IoT applications in insurance, the Deloitte Center for Financial Services (DCFS), in conjunction with Wikistrat, performed a crowdsourcing simulation to explore the technology’s implications for the future of the financial services industry. Researchers probed participants (13 doctorate holders, 24 cyber and tech experts, 20 finance experts, and 6 entrepreneurs) from 20 countries and asked them to imagine how IoT technology might be applied in a financial services context. The results (figure 1) are not an exhaustive compilation of scenarios already in play or forthcoming but, rather, an illustration of several examples of how these analysts believe the IoT may reshape the industry.2

[Figure 1: IoT application scenarios from the crowdsourcing simulation]

CONNECTIVITY AND OPPORTUNITY

Even this small sample of possible IoT applications shows how increased connectivity can generate tremendous new opportunities for insurers, beyond personalizing premium rates. Indeed, if harnessed effectively, IoT technology could potentially boost the industry’s traditionally low organic growth rates by creating new types of coverage opportunities. It offers carriers a chance to break free from the product commoditization trend that has left many personal and commercial lines to compete primarily on price rather than coverage differentiation or customer service.

For example, an insurer might use IoT technology to directly augment profitability by transforming the income statement’s loss component. IoT-based data, carefully gathered and analyzed, might help insurers evolve from a defensive posture—spreading risk among policyholders and compensating them for losses—to an offensive posture: helping policyholders prevent losses and insurers avoid claims in the first place. And by avoiding claims, insurers could not only reap the rewards of increased profitability, but also reduce premiums and aim to improve customer retention rates. Several examples, both speculative and real-life, include:

  • Sensors embedded in commercial infrastructure can monitor safety breaches such as smoke, mold, or toxic fumes, allowing for adjustments to the environment to head off or at least mitigate a potentially hazardous event.
  • Wearable sensors could monitor employee movements in high-risk areas and transmit data to employers in real time to warn the wearer of potential danger as well as decrease fraud related to workplace accidents.
  • Smart home sensors could detect moisture in a wall from pipe leakage and alert a homeowner to the issue prior to the pipe bursting. This might save the insurer from a large claim and the homeowner from both considerable inconvenience and losing irreplaceable valuables. The same can be said for placing IoT sensors in business properties and commercial machinery, mitigating property damage and injuries to workers and customers, as well as business interruption losses.
  • Socks and shoes that can alert diabetics early on to potential foot ulcers, odd joint angles, excessive pressure, and how well blood is pumping through capillaries are now entering the market, helping to avoid costly medical and disability claims as well as potentially life-altering amputations.3

Beyond minimizing losses, IoT applications could also potentially help insurers resolve the dilemma with which many have long wrestled: how to improve the customer experience, and therefore loyalty and retention, while still satisfying the unrelenting market demand for lower pricing. Until now, insurers have generally struggled to cultivate strong client relationships, both personal and commercial, given the infrequency of interactions throughout the insurance life cycle from policy sale to renewal—and the fact that most of those interactions entail unpleasant circumstances: either deductible payments or, worse, claims. This dynamic is even more pronounced in the independent agency model, in which the intermediary, not the carrier, usually dominates the relationship with the client.

The emerging technology intrinsic to the IoT that can potentially monitor and measure each insured’s behavioral and property footprint across an array of activities could turn out to be an insurer’s holy grail, as IoT applications can offer tangible benefits for value-conscious consumers while allowing carriers to remain connected to their policyholders’ everyday lives. While currently, people likely want as few associations with their insurers as possible, the IoT can potentially make insurers a desirable point of contact. The IoT’s true staying power will be manifested in the technology’s ability to create value for both the insurer and the policyholder, thereby strengthening their bond. And while the frequency of engagement shifts to the carrier, the independent agency channel will still likely remain relevant through the traditional client touchpoints.

By harnessing continuously streaming “quantified self” data, using advanced sensor connectivity devices, insurers could theoretically capture a vast variety of personal data and use it to analyze a policyholder’s movement, environment, location, health, and psychological and physical state. This could provide innovative opportunities for insurers to better understand, serve, and connect with policyholders—as well as insulate companies against client attrition to lower-priced competitors. Indeed, if an insurer can demonstrate how repurposing data collected for insurance considerations might help a carrier offer valuable ancillary non-insurance services, customers may be more likely to opt in to share further data, more closely binding insurer and customer.

Leveraging IoT technologies may also have the peripheral advantage of resuscitating the industry’s brand, making insurance more enticing to the relatively small pool of skilled professionals needed to put these strategies in play. And such a shift would be welcome, considering that Deloitte’s Talent in Insurance Survey revealed that the tech-savvy Millennial generation generally considers a career in the insurance industry “boring.”4 Such a reputational challenge clearly creates a daunting obstacle for insurance executives and HR professionals, particularly given the dearth of employees with necessary skill sets to successfully enable and systematize IoT strategies, set against a backdrop of intense competition from many other industries. Implementing cutting-edge IoT strategies could boost the “hip factor” that the industry currently lacks.

With change comes challenges

While most stakeholders might see attractive possibilities in the opportunity for behavior monitoring across the insurance ecosystem, inevitable hurdles stand in the way of wholesale adoption. How insurers surmount each potential barrier is central to successful evolution.

For instance, the industry’s historically conservative approach to innovation may impede the speed and flexibility required for carriers to implement enhanced consumer strategies based on IoT technology. Execution may require more nimble data management and data warehousing than currently in place, as engineers will need to design ways to quickly aggregate, analyze, and act upon disparate data streams. To achieve this speed, executives may need to spearhead adjustments to corporate culture grounded in more centralized location of data control. Capabilities to discern which data are truly predictive versus just noise in the system are also critical. Therefore, along with standardized formats for IoT technology,5 insurers may see an increasing need for data scientists to mine, organize, and make sense of mountains of raw information.

Perhaps most importantly, insurers would need to overcome the privacy concerns that could hinder consumers’ willingness to make available the data on which the IoT runs. Further, increased volume, velocity, and variety of data propagate a heightened need for appropriate security oversight and controls.

For insurers, efforts to capitalize on IoT technology may also require patience and long-term investments. Indeed, while bolstering market share, such efforts could put a short-term squeeze on revenues and profitability. To convince wary customers to opt in to monitoring programs, insurers may need to offer discounted pricing, at least at the start, on top of investments to finance infrastructure and staff supporting the new strategic initiative. This has essentially been the entry strategy for auto carriers in the usage-based insurance market, with discounts provided to convince drivers to allow their performance behind the wheel to be monitored, whether by a device installed in their vehicles or an application on their mobile device.

Results from the Wikistrat crowdsourcing simulation reveal several other IoT-related challenges that respondents put forward. (See figure 2.)6

[Figure 2: IoT-related challenges identified in the crowdsourcing simulation]

Each scenario implies some measure of material impact to the insurance industry. In fact, together they suggest that the same technology that could potentially help improve loss ratios and strengthen policyholder bonds over the long haul may also make some of the most traditionally lucrative insurance lines obsolete.

For example, if embedding sensors in cars and homes to prevent hazardous incidents increasingly becomes the norm, and these sensors are perfected to the point where accidents are drastically reduced, this development may minimize or eliminate the need for personal auto and home liability coverage, given the lower frequency and severity of losses that result from such monitoring. Insurers need to stay ahead of this, perhaps even eventually shifting books of business from personal to product liability as claims evolve from human error to product failure.

Examining the IoT through an insurance lens

Analyzing the intrinsic value of adopting an IoT strategy is fundamental in the development of a business plan, as executives must carefully consider each of the various dimensions to assess the potential value and imminent challenges associated with every stage of operationalization. Using Deloitte’s Information Value Loop can help capture the stages (create, communicate, aggregate, analyze, act) through which information passes in order to create value.7

The value loop framework is designed to evaluate the components of IoT implementation as well as potential bottlenecks in the process, by capturing the series and sequence of activities by which organizations create value from information (figure 3).

[Figure 3: The Information Value Loop]

To complete the loop and create value, information passes through the value loop’s stages, each enabled by specific technologies. An act is monitored by a sensor that creates information. That information passes through a network so that it can be communicated, and standards—be they technical, legal, regulatory, or social—allow that information to be aggregated across time and space. Augmented intelligence is a generic term meant to capture all manner of analytical support, collectively used to analyze information. The loop is completed via augmented behavior technologies that either enable automated, autonomous action or shape human decisions in a manner leading to improved action.8

For a look at the value loop through an insurance lens, we will examine an IoT capability already at play in the industry: automobile telematics. By walking through the stages of the framework, we can examine how monitoring driving behavior is poised to transform the auto insurance market, with a vast infusion of value to both consumers and insurers.

Auto insurance and the value loop

Telematic sensors in the vehicle monitor an individual’s driving to create personalized data collection. The connected car, via in-vehicle telecommunication sensors, has been available in some form for over a decade.9 The key value for insurers is that sensors can closely monitor individual driving behavior, which directly corresponds to risk, for more accuracy in underwriting and pricing.

Originally, sensor manufacturers made devices available to install on vehicles; today, some carmakers are already integrating sensors into showroom models, available to drivers—and, potentially, their insurers—via smartphone apps. The sensors collect data (figure 4) which, if properly analyzed, might more accurately predict the unique level of risk associated with a specific individual’s driving and behavior. Once the data is created, an IoT-based system could quantify and transform it into “personalized” pricing.

[Figure 4: Data collected by telematics sensors]

Sensors’ increasing availability, affordability, and ease of use break what could potentially be a bottleneck at this stage of the Information Value Loop for other IoT capabilities in their early stages.

IoT technology aggregates and communicates information to the carrier to be evaluated. To identify potential correlations and create predictive models that produce reliable underwriting and pricing decisions, auto insurers need massive volumes of statistically and actuarially credible telematics data.

In the hierarchy of auto telematics monitoring, large insurers currently lead the pack when it comes to usage-based insurance market share, given the amount of data they have already accumulated or might potentially amass through their substantial client bases. In contrast, small and midsized insurers—with less comprehensive proprietary sources—will likely need more time to collect sufficient data on their own.

To break this bottleneck, smaller players could pool their telematics data with peers either independently or through a third-party vendor to create and share the broad insights necessary to allow a more level playing field throughout the industry.

Insurers analyze the data and use it to encourage drivers to act, improving driver behavior and reducing loss costs. By analyzing the collected data, insurers can now replace or augment proxy variables (age, car type, driving violations, education, gender, and credit score) correlated with the likelihood of having a loss with factors that directly contribute to the probability of loss for an individual driver (braking, acceleration, cornering, and average speed, as figure 4 shows). This is an inherently more equitable way to structure premiums: rather than paying for something that might be true about a risk, a customer pays for what is true based on his or her own driving performance.
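To make the idea concrete, here is a minimal sketch of how telematics features might feed a personalized premium factor. The thresholds, weights, and discount range are invented for illustration only and are not drawn from any insurer's actual rating model.

    def premium_multiplier(hard_brakes_per_100mi, hard_accels_per_100mi,
                           hard_corners_per_100mi, mph_over_limit_avg):
        # Normalize each behavior to a 0-1 score and combine with illustrative weights
        score = (0.4 * min(hard_brakes_per_100mi / 10, 1.0) +
                 0.3 * min(hard_accels_per_100mi / 10, 1.0) +
                 0.2 * min(hard_corners_per_100mi / 10, 1.0) +
                 0.1 * min(mph_over_limit_avg / 15, 1.0))
        # Map the risk score to a multiplier between a 20% discount and a 30% surcharge
        return 0.8 + 0.5 * score

    # A smooth driver earns most of the available discount
    print(round(premium_multiplier(1, 2, 1, 0), 2))  # 0.86

The point of the sketch is simply that the premium responds to observed behavior rather than to demographic proxies.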

But even armed with all the data necessary to improve underwriting for “personalized” pricing, insurers need a way to convince millions of reluctant customers to opt in. To date, insurers have used the incentive of potential premium discounts to engage consumers in auto telematics monitoring.10 However, this model is not necessarily attractive enough to convince the majority of drivers to relinquish a measure of privacy and agree to usage-based insurance. It is also unsustainable for insurers that will eventually have to charge rates actually based on risk assessment rather than marketing initiatives.

Substantiating the point about consumer adoption is a recent survey by the Deloitte Center for Financial Services of 2,193 respondents representing a wide variety of demographic groups, aiming to understand consumer interest in mobile technology in financial services delivery, including the use of auto telematics monitoring. The survey identified three distinct groups among respondents when asked whether they would agree to allow an insurer to track their driving experience, if it meant they would be eligible for premium discounts based on their performance (figure 5).11 While one-quarter of respondents were amenable to being monitored, just as many said they would require a substantial discount to make it worth their while (figure 5), and nearly half would not consent.

[Figure 5: Consumer responses to telematics monitoring in exchange for premium discounts]

While the Deloitte survey was prospective (asking how many respondents would be willing to have their driving monitored telematically), actual recruits have proven difficult to bring on board. Indeed, a 2015 LexisNexis study on the consumer market for telematics showed that usage-based insurance enrollment remained at only 5 percent of households from 2014 to 2015 (figure 6).12

[Figure 6: Usage-based insurance enrollment, 2014–2015]

Both of these survey results suggest that premium discounts alone have not and likely will not induce many consumers to opt in to telematics monitoring going forward, and would likely be an unsustainable model for insurers to pursue. The good news: Research suggests that, while protective of their personal information, most consumers are willing to trade access to that data for valuable services from a reputable brand.13 Therefore, insurers will likely have to differentiate their telematics-based product offerings beyond any initial early-adopter premium savings by offering value-added services to encourage uptake, as well as to protect market share from other players moving into the telematics space.

In other words, insurers—by offering mutually beneficial, ongoing value-added services—can use IoT-based data to become an integral daily influence for connected policyholders. Companies can incentivize consumers to opt in by offering real-time, behavior-related services, such as individualized marketing and advertising, travel recommendations based on location, alerts about potentially hazardous road conditions or traffic, and even diagnostics and alerts about a vehicle’s potential issues (figure 7).14 More broadly, insurers could aim to serve as trusted advisers to help drivers realize the benefits of tomorrow’s connected car.15

Many IoT applications offer real value to both insurers and policyholders: Consider GPS-enabled geo-fencing, which can monitor and send alerts about driving behavior of teens or elderly parents. For example, Ford’s MyKey technology includes tools such as letting parents limit top speeds, mute the radio until seat belts are buckled, and keep the radio at a certain volume while the vehicle is moving.16 Other customers may be attracted to “green” monitoring, in which they receive feedback on how environmentally friendly their driving behavior is.

Insurers can also look to offer IoT-related services exclusive of risk transfer—for example, co-marketing location-based services with other providers, such as roadside assistance, auto repairs, and car washes may strengthen loyalty to a carrier. They can also include various nonvehicle-related service options such as alerts about nearby restaurants and shopping, perhaps in conjunction with points earned by good driving behavior in loyalty programs or through gamification, which could be redeemed at participating vendors. Indeed, consumers may be reluctant to switch carriers based solely on pricing, knowing they would be abandoning accumulated loyalty points as well as a host of personalized apps and settings.

For all types of insurance—not just auto—the objective is for insurers to identify the expectations that different types of policyholders may have, and then adapt those insights into practical applications through customized telematic monitoring to elevate the customer experience.

Telematics monitoring has demonstrated benefits even beyond better customer experience for policyholders. Insurers can use telematics tools to expose an individual’s risky driving behavior and encourage adjustments. Indeed, people being monitored by behavior sensors will likely improve their driving habits and reduce crash rates—a result to everyone’s benefit. This “nudge effect” indicates that the motivation to change driving behavior is likely linked to the actual surveillance facilitated by IoT technology.

The power of peer pressure is another galvanizing influence that can provoke beneficial consumer behavior. Take fitness wearables, which incentivize individuals to do as much or more exercise than the peers with whom they compete.17 In fact, research done in several industries points to an individual’s tendency to be influenced by peer behavior above most other factors. For example, researchers asked four separate groups of utility consumers to cut energy consumption: one for the good of the planet, a second for the well-being of future generations, a third for financial savings, and a fourth because their neighbors were doing it. The only group that elicited any drop in consumption (at 10 percent) was the fourth—the peer comparison group.18

Insurers equipped with not only specific policyholder information but aggregated data that puts a user’s experience in a community context have a real opportunity to influence customer behavior. Since people generally resist violating social norms, if a trusted adviser offers data that compares customer behavior to “the ideal driver”—or, better, to a group of friends, family, colleagues, or peers—they will, one hopes, adapt to safer habits.

[Figure 7: Potential value-added services for connected policyholders]

The future ain’t what it used to be—what should insurers do?

After decades of adherence to traditional business models, the insurance industry, pushed and guided by connected technology, is taking a road less traveled. Analysts expect some 38.5 billion IoT devices to be deployed globally by 2020, nearly three times as many as today,19 and insurers will no doubt install their fair share of sensors, data banks, and apps. In an otherwise static operating environment, IoT applications present insurers with an opportunity to benefit from technology that aims to improve profits, enable growth, strengthen the consumer experience, build new market relevance, and avoid disruption from more forward-looking traditional and nontraditional competitors.

Incorporating IoT technology into insurer business models will entail transformation to elicit the benefits offered by each strategy.

  • Carriers must confront the barriers associated with conflicting standards—data must be harvested and harnessed in a way that makes the information valid and able to generate valuable insights. This could include modernizing in-house legacy systems to make them more flexible, building or buying new systems, or collaborating with third-party sources to develop more standardized technology for harmonious connectivity.
  • Corporate culture will need a facelift—or, likely, something more dramatic—to overcome longstanding conventions on how information is managed and consumed across the organization. In line with industry practices around broader data management initiatives,20 successfully implementing IoT technology will require supportive “tone at the top,” change management initiatives, and enterprisewide training.
  • With premium savings already proving insufficient to entice most customers to allow insurers access to their personal usage data, companies will need to strategize how to convince or incentivize customers to opt in—after all, without that data, IoT applications are of limited use. To promote IoT-aided connectivity, insurers should look to market value-added services, loyalty points, and rewards for reducing risk. Insurers need to design these services in conjunction with their insurance offerings, to ensure that both make best use of the data being collected.
  • Insurers will need to carefully consider how an interconnected world might shift products from focusing on cleaning up after disruptions to forestalling those disruptions before they happen. IoT technology will likely upend certain lines of businesses, potentially even making some obsolete. Therefore, companies must consider how to heighten flexibility in their models, systems, and culture to counterbalance changing insurance needs related to greater connectivity.
  • IoT connectivity may also potentially level the playing field among insurers. Since a number of the broad capabilities that technology is introducing do not necessarily require large data sets to participate (such as measuring whether containers in a refrigerated truck are at optimal temperatures to prevent spoilage21 or whether soil has the right mix of nutrients for a particular crop22), small to midsized players or even new entrants may be able to seize competitive advantages from currently dominant players.
  • And finally, to test the efficacy of each IoT-related strategy prior to implementation, a framework such as the Information Value Loop may become an invaluable tool, helping forge a path forward and identify potential bottlenecks or barriers that may need to be resolved to get the greatest value out of investments in connectivity.

The bottom line: IoT is here to stay, and insurers need look beyond business as usual to remain competitive.

The IoT is here to stay, the rate of change is unlikely to slow anytime soon, and the conservative insurance industry is hardly impervious to connectivity-fueled disruption—both positive and negative. The bottom line: Insurers need to look beyond business as usual. In the long term, no company can afford to engage in premium price wars over commoditized products. A business model informed by IoT applications might emphasize differentiating offerings, strengthening customer bonds, energizing the industry brand, and curtailing risk either at or prior to its initiation.

IoT-related disruptors should also be considered through a long-term lens, and responses will likely need to be forward-looking and flexible to incorporate the increasingly connected, constantly evolving environment. With global connectivity reaching a fever pitch amid increasing rates of consumer uptake, embedding these neoteric schemes into the insurance industry’s DNA is no longer a matter of if but, rather, of when and how.

You can view the original post in its entirety Here

Read more…

Originally Posted by: Shawn Wasserman

Shodan search results show that over half a million devices use the 10-year-old OpenSSH 4.3 software. This puts all these devices at risk.

One doesn’t have to look too far to realize how vulnerable the Internet of Things (IoT) can be. It just takes a quick search on IoT search engines like BullGuard and Shodan.io.

During a presentation at PTC LiveWorx 2016, Rob Black, senior director of product management at PTC, outlined how black hat hackers could get into over half a million connected devices using an old software known as OpenSSH 4.3.

OpenSSH is an implementation of the secure shell (SSH) protocol, used to give users remote access to networks and systems. It’s harmless, even useful, if used by the right user in a controlled way.

Unfortunately, a popular version of the software, OpenSSH 4.3, has been out for about a decade. As a result, it has developed a laundry list of vulnerabilities that hackers can use to gain access to systems.

According to the Shodan IoT device search engine, over half a million devices on the ‘net still use this outdated software.

“Half a million devices are on the open Internet with 10-year-old software that allows you to tunnel inside to their network. Who thinks that’s good?” Black rhetorically questioned. “This is one example. One search. One software. One version of a software. There are millions of exposed resources on the Internet.”
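For readers who want to reproduce the kind of result Black describes, the sketch below uses the official Shodan Python library; the API key is a placeholder, and the query simply searches service banners for the outdated OpenSSH version.

    import shodan  # pip install shodan

    api = shodan.Shodan("YOUR_API_KEY")  # placeholder key

    try:
        # Search service banners that advertise the decade-old OpenSSH 4.3
        results = api.search("OpenSSH 4.3")
        print("Exposed hosts reporting OpenSSH 4.3:", results["total"])
        for match in results["matches"][:5]:
            print(match["ip_str"], match.get("org", "unknown org"))
    except shodan.APIError as error:
        print("Shodan API error:", error)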

Scarier still, Black explained that some search results bring up IoT devices associated with power plants and wind tunnels. According to AdaptiveMobile, a mobile network security company, up to 80 percent of connected devices on the IoT do not have the security measures they need to protect us. And once you find a device on Shodan, you can see many characteristics of that device that can help hackers get into it.

These attacks can even prove deadly depending on the IoT application. Take an integrated clinical environment (ICE) like an IoT-enabled hospital. Without proper security, many types of attacks have the potential to risk lives. According to a report published by the Industrial Internet Consortium, these attacks fall into five categories.

Five IoT hacking attacks that can risk lives. Examples from an integrated clinical environment (ICE). (Table from the Industrial Internet Consortium.)

Engineers are designing these IoT devices, sensors and edge points. To ensure that hackers are kept at bay, these engineers need to understand and learn from their software engineer and IT cousins.

“From a design point of view, engineers need to learn about hacking security. You need security at the edge point to make an intelligent analytic device,” said Michael Wendenburg, CEO at Michael Wendenburg Online Redaktion. “If you hack into that point, you hack into all this data. Engineers are not prepared for that.”

Black agreed, saying, “It’s our role as practitioners of IoT to really manage those devices that we have in a smart way.”

How Do IoT and Cloud Security Differ?

Black explained that unlike in cloud security, humans may not be in the loop when it comes to IoT security. It’s not feasible for millions of users to be there to hit “Okay” to update software in billions of devices.

An engineer might think that as long as the cloud system utilized by the IoT device is secure, then all is well. However, there are differences between an IoT system and a cloud system.

Black explained that on the cloud, users and applications are both managed. There are security tools and permissions put into place. On the operations side, servers will be secured and ports will be closed and audited. This takes a lot of testing, but it’s been done before. IoT security, on the other hand, adds complexity.

“Cloud security has been around for a long time and there are lots of good strong practices and management around cloud applications. For IoT, the key difference is we connect things,” clarified Black. “A lot of the challenge is the number of devices to manage and the differences between these devices.”

“There are a bunch of new issues out there like rogue sensors and rogue data sources,” said Andy Rhodes, division head of IoT at Dell. “If you’re orchestrating a turbine or a dam and someone hacks into that and changes the settings, then there are catastrophic issues.”

Here are some other key differences between cloud and IoT applications:

  • IoT has a stronger potential for damage, as water mains can be shut off, power plants can be pushed into critical states and cars made unresponsive on the road.
  • IoT has a diverse range of devices, operating systems and protocols, making it hard to consolidate and standardize as companies grow and products change.
  • Human interaction with all the devices is not scalable. For instance, humans may not be there to hit “Okay” for an update.

The key is to work together. Engineers and IT professionals need to demolish their silos and learn from one another to make the IoT ecosystem secure. However, just because the IT crew has the ecosystem covered on the cloud doesn’t mean the devices and sensors are secure.

“IT [Information Technology] knows how to do security and a lot of this is still traditional IT security working alongside the OT [Operations Technology] people to understand how to secure the sensors as well,” described Rhodes. “You need [security on the device] and network security on the IT side because data flows two ways so you have to secure both ends of that spectrum.”

How to Manage Your Connected Device

Black demonstrating an IoT security architecture.

With current IoT trends, if your device isn’t connected to the Internet, it soon will be. Otherwise, it will not keep up with the 30 billion other connected devices Gartner expects to see in the market by 2020.

So the question may not be whether to get into the IoT market given all the security risks. It should be a question of how to manage connected devices with all these security risks.

Black demonstrated what a simple IoT architecture might look like. It includes devices inside a firewall and wireless devices outside the firewall, with both connecting into the IoT platform. An application then uses the data from those devices to perform a function. All of these systems, along with the applications and development tools used to build them, must be made secure.

The issue is that because all of these different systems are under the control of various organizations on the vendor, customer and public levels, it can be confusing to establish who is really responsible for all of this IoT security.

“I argue that for IoT we have a shared security responsibility,” noted Black. “This is not a one-entity responsibility. It is shared between the providers of the infrastructure, service, platform, application and the end customers.”

Importance of User Roles on IoT Security

Given all of the organizations and users that might be associated with one IoT system, defining roles for these organizations and users is of high importance.

Each user and organization will have different roles, which will define levels of control over the IoT system. For instance, you don’t want to give your customers visibility into and control over all of the IoT devices on your ecosystem. This could make the data of your other customers insecure, as competitors might gain insights due to the information on your system and the lack of roles governing the system.

However, a maintenance team that services all the devices sent to customers will need to see which devices from each customer will be up for servicing.

The key takeaway is that as your system grows on the IoT, much of this role management should be automated. Otherwise, the role management will not scale with the IoT system if a human remains in the role assignment loop.

“From a visibility and permission standpoint, what you really want are mechanisms to drive that behavior,” instructed Black. “When new devices are added, if you have a manual process, that is not going to scale when you [have] tens of thousands of devices. You are going to need a system that drives this behavior automatically. You just need to set the rules beforehand to ensure the users are put in the right groups.”
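A minimal sketch of what “setting the rules beforehand” might look like is shown below; the device metadata fields and group names are hypothetical.

    # Rules are defined once by an administrator; devices are grouped automatically
    # as they register, with no per-device approval step.
    ROLE_RULES = [
        {"match": {"device_type": "thermostat"}, "group": "maintenance-read"},
        {"match": {"customer_id": "acme"},       "group": "acme-devices"},
    ]

    def assign_groups(device):
        """Return every group whose rule matches the registering device's metadata."""
        return [rule["group"] for rule in ROLE_RULES
                if all(device.get(k) == v for k, v in rule["match"].items())]

    print(assign_groups({"device_type": "thermostat", "customer_id": "acme"}))
    # ['maintenance-read', 'acme-devices']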

Division of Systems is Key to a Secure IoT Ecosystem

The division of permissions shouldn’t just be between roles. It should also be between systems within the IoT device itself. Engineers must design some systems and subsystems to be independent and separate from all other systems. This will ensure that if a hacker compromises your device, they will not be able to take control of key systems.

After all, there is no reason for an entertainment system in a car to be linked to the steering, brakes and accelerator of a car. As the WIRED video below shows, though, this was the case with the Jeep Cherokee. As a result, hackers were able to mess with one reporter’s drive on the highway with hilarious outcomes—but the joke isn’t funny anymore if people actually get hurt.

“The way some of these systems are designed, if you have access to this you have access to multiple design elements in the car,” said Frank Antonysamy, head of engineering and manufacturing solutions at Cognizant. “The way we are dealing with this is to isolate as much as possible and then get the data.”

“When you look at it from a system design [perspective], in an automobile for example, there is still a fair amount of isolation written into the design,” said Antonysamy. “Because I have access to my control panel doesn’t mean I have access to the accelerator. That kind of design-based isolation is critical at least until we get a zero-vulnerability scenario.”

Eric van Gemeren, vice president of R&D at Flowserve, explained that the automobile industry and other IoT device creators can learn a lot from the process industry on the separation of systems within a design.

“In the process industry, it’s different from having a car that’s IoT-enabled and someone can hack into it,” said van Gemeren. “In the process industry, there are well-established IEC [International Electrotechnical Commission] and ISO [International Organization for Standardization] standards for safety and compliance. The control communication network is always separate and independent from the diagnostics and asset management network. It’s very clear that when you design that solution, there are certain features and functions that will never be available through wireless, in a discrete controlled domain, with entirely different protocols and with robust security on top of it.”

“A lot of the stuff we are talking about in the IoT space is all about gathering outbound asset information,” added van Gemeren. “You can’t send back control information or directions that can hijack the device.”

In other words, van Gemeren explained that if a safety system such as fire suppression sprinklers were installed in a process plant, it would need to be on an isolated system.

Do Your Devices Need to Talk to Other Devices?

Black explained the scenarios in which you need to use device-to-device communication.

When people think about the IoT, many of them think of connected devices communicating with each other over the Internet.

Though there are situations when the data should be sent to the cloud, there are also situations where it is faster and more efficient for devices to talk to each other directly.

“You could go up to the cloud and negotiate up there and bring it back down but that is not using bandwidth efficiently and what happens if you lose network connectivity? Will your devices fail? Do you want them to be dependent on the network?” asked Black.

When connected devices need to talk directly, you will need a way to authenticate the devices mutually, as well as a method of authorizing the devices to an appropriate level of interaction.

“It doesn’t make sense for one car to have the authorization to turn on the windshield wipers for another car,” joked Black.
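One common way to achieve mutual authentication between devices is mutual TLS, where each side presents a certificate issued by a shared device CA. The sketch below uses Python's standard ssl module; the hostnames and certificate file names are placeholders.

    import socket
    import ssl

    # Trust only certificates issued by our own device CA, and present our own
    # certificate so the peer can verify us in return.
    context = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    context.load_verify_locations("device_ca.pem")
    context.load_cert_chain(certfile="this_device.pem", keyfile="this_device.key")

    with socket.create_connection(("peer-device.local", 8883)) as raw_sock:
        with context.wrap_socket(raw_sock, server_hostname="peer-device.local") as tls:
            # Authorization policy would then decide which requests the peer may make
            tls.sendall(b"status?")

Authentication only establishes who the peer is; a separate authorization layer still has to decide what that peer is allowed to ask for.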

The Importance of Provisioning and Approval of an IoT Device

This brings us to another key step in setting up a secure IoT system: ensuring your processes can set up provisioning and approval for device-to-device communication, data ownership, de-provisioning and more.

“Any process that runs off of administration approval will fail on an IoT scale,” remarked Black. As with the creation of roles, the human needs to be out of the loop. Black added, “You can’t design a process based on admin approval—it might work for a hundred devices but it won’t work on a large-scale system.”

Unfortunately, you can’t just let all devices interconnect without a provisioning and approval process either. Take the Superfish scandal, for example. The program was intended to provide advertisers with a way to show ads based on a user’s Internet searches.

This sounds innocuous enough until you realize that, at the time, every Lenovo laptop that shipped with the program used the same self-signed certificate key. This allowed man-in-the-middle attacks that could intercept the Internet communications of any Lenovo laptop with Superfish still installed.

“Ensuring trust when you’re bootstrapping a device is challenging; even big laptop manufacturers can make mistakes,” said Black. “We need to think through some of those processes to see how we get secrets onto a device. You need a well-defined mechanism for establishing trust on your device.”

One method Black suggested to get your devices onto your IoT system with secure provisioning and approval is to use your enterprise resource planning (ERP) system. If your ERP system were connected to the IoT system, the provisioning and approval process would expect to see the device. Not only would this system be secure, it could also be made scalable, as there would be no need to have a human in the loop.
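A simplified sketch of that idea follows: a device is provisioned automatically only if a record for it already exists in the ERP system. The serial-number scheme and the in-memory “ERP” lookup are stand-ins for illustration.

    # Stand-in for an ERP query: devices the business already expects to deploy
    EXPECTED_DEVICES = {"SN-00172": {"customer": "acme", "model": "gateway-v2"}}

    def provision(serial_number, cert_fingerprint):
        record = EXPECTED_DEVICES.get(serial_number)
        if record is None:
            # Unknown hardware is rejected automatically, with no admin in the loop
            return False, "unknown device: rejected"
        # A real system would validate the certificate chain and pin the fingerprint
        record["cert_fingerprint"] = cert_fingerprint
        return True, "provisioned for " + record["customer"]

    print(provision("SN-00172", "ab:cd:ef:01"))   # (True, 'provisioned for acme')
    print(provision("SN-99999", "de:ad:be:ef"))   # (False, 'unknown device: rejected')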

The Importance of De-Provisioning When You Re-Sell a Connected Device

Black explained the importance of factory resets and de-provisioning when selling used devices.

There is a lot of confidential information that can be stored on a connected device. Therefore, if users aren’t careful, they could be giving a hacker everything they need to get into the system when re-selling these devices.

The average user would know enough to delete their personal and business data from the device, but there still might be information on the re-sold device that can open doors to hackers.

For instance, the device might store the digital keys used to encrypt the data it sent and received over the Internet. If you sell that equipment without changing those keys, whoever buys it would be able to decrypt all of the data you sent and received while operating the device. An attacker who had intercepted that traffic while waiting for the equipment to be resold would now hold a great deal of information about your personal or business operations.

As a result, engineers should design easy-to-use de-provisioning procedures for the users of their devices.
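As a rough illustration of what such a procedure needs to cover, the sketch below wipes stored data and regenerates the device's key pair so that traffic protected by the old keys stays unreadable to the next owner. It uses the third-party cryptography library, and the storage layout is hypothetical.

    from cryptography.hazmat.primitives.asymmetric import ec  # pip install cryptography

    def factory_reset(device_storage):
        # Drop user data, credentials, and the old private key
        device_storage.clear()
        # Issue a fresh key pair: ciphertext protected by the old key is now useless
        device_storage["device_key"] = ec.generate_private_key(ec.SECP256R1())
        return device_storage

    storage = {"wifi_password": "hunter2", "device_key": "old-private-key"}
    print(sorted(factory_reset(storage)))  # ['device_key']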

Whose Data Is It Anyway? Where the Contract’s Made Up and Protection Should Matter.

Black asked the question: Whose data is it anyway?

One point of contention for the development of IoT security is the question of who owns the data.

Is it the device manufacturer, systems operator, device operator or the maintenance operator?

Will the answer be dependent on the IoT device application?

These questions need answers if robust security measures are to be put into place. Otherwise, the right information might end up in the wrong hands.

“We’ve seen a range of responses about data ownership and a lot revolves around privacy,” said Colm Pendergast, director of IoT technology at Analog Devices. “To a large extent, it will come down to negotiations between various partners in an ecosystem.”

“[Who owns the data] is a question that is always on the table,” said Chris May, account executive at ARIDEA SOLUTIONS. “It depends on the type of data being acquired. If it’s general weather data, then people are not very concerned. The weather is the weather… When you get to environmental data, it’s a completely different story. They are very protective of that data. [What] if the wrong person gets that data and they don’t understand how to interpret it? [What] if they can’t understand it’s a sensor being recalibrated and they think a watershed was contaminated? It would be massive lawsuits.”

It appears that though 54 percent of surveyed consumers might be comfortable sharing their data with companies, the reverse is not always true.

Alternatively, Black used an example of a medical device company. If the company is sold, then it makes sense for whomever buys the company to also own the data. After all, it will, in theory, be using said data to service the same clients. It isn’t in the client’s interest for the data to start at point zero.

However, does the answer of selling data ownership change with the scenario? What if, instead of a company being sold, it’s a house? Who owns all the data of the smart home—the previous tenants or the incoming tenants? It might be useful for the new tenants to know the power usage history of the house so they can budget their expenses, but do you want strangers to have data like that?

“When you think about how many different entities are involved with an IoT implementation, there are a lot of them,” said Black. “Some of them probably have rights to some of that data and some it’s probably better if they don’t have it.”

Before security walls are put up for an IoT device, these questions must be answered. Otherwise, an owner of the data might be cut off from their property. This can lead to some serious legal ramifications. On the other hand, not understanding where the line in the sand is for data can also open up security risks.

“If there was one single challenge that people are concerned about and [that] has slowed IoT deployments, it is the question of security and integrating security solutions all over that technology stack. It is one of the bigger challenges,” said Pendergast.

However, one solution to the IoT data question may not lie with the engineers, programmers or designers. It might be in the hands of public relations, educating the public about IoT security and what data is and isn’t being collected.

“We deal with the medical device market and we constantly face the issue that we can’t send patient data—and we are a cloud-based platform, so that is a challenge,” said Puneet Pandit, CEO of Glassbeam. “We are not taking the patient data; we are taking the operation data. I think that is a constant question. There is a lot of education that has to be done in the industry to clarify what IoT data means at the end of the day. People have created security barriers for all the right reasons, but in the context of IoT you are taking machine and operational data and that isn’t something that is included on data privacy.”

Reducing IoT Attack Surfaces: Do You Need Access to the Open Web?

Shodan is only able to show the IoT devices that are on the open web. The number, as well as types, of devices that it can find is certainly scary.

“[Security is] still the top-two or -three concern of customers when you read surveys and speak to them,” said Rhodes. “What you’ve basically done is you’ve opened up a surface of attack either as a gateway or the things themselves.”

Does your device need to be on the open web? Do multiple surfaces of attack need to exist? The answer is no—not if engineers design the device to be the one to initiate communications.

“Different IoT solutions have the capability to perform device-initiated communication,” said Black. “That means that from a connection standpoint, if your device initiates communications, then that device is exclusively paired with one server on the cloud. That device is only going to communicate with that server.”

In other words, the device won’t be generally available on the Internet.

“It’s something to think about. Can I communicate with this device from every [access point] on the earth or is it tied to a single server? Because you are really reducing your attack surface with that kind of capability,” Black explained. “You reduce your attack surface so you are not worried about everything in the world. You are only connected to a very limited set of servers.”

If your device can connect to any endpoint on the Internet, then any hacker at any location could in theory send a command to that device. However, if the device is connected only to one server via a device-initiated communication, then only that server can send commands. The theory is that your server will be within internal IT infrastructures and securities.
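A minimal sketch of device-initiated connectivity follows, assuming the paho-mqtt client library (1.x API) and a placeholder broker hostname: the device dials out to its one paired server over TLS, listens on its own command topic, and never exposes an inbound port.

    import paho.mqtt.client as mqtt  # pip install paho-mqtt

    BROKER = "iot-platform.example.com"   # the single server this device is paired with

    def on_connect(client, userdata, flags, rc):
        # Subscribe to this device's command topic once the outbound link is up
        client.subscribe("devices/sn-00172/commands")

    def on_message(client, userdata, message):
        print("command received:", message.payload)

    client = mqtt.Client(client_id="sn-00172")
    client.tls_set()                      # encrypt the device-initiated connection
    client.on_connect = on_connect
    client.on_message = on_message
    client.connect(BROKER, 8883)
    client.loop_forever()                 # maintain the outbound session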

However, there is a downside to device-initiated connectivity. You have to rely on the device to connect to the system in order to initiate an update or collect data. In other words, you can lose the connection to the device as soon as a customer changes firewall settings or the network is interrupted.

As a result, if engineers choose to use device-initiated connections for an IoT system, they will need to inform the customer, who must then make sure that firewall rules and network connectivity are not interfering with the connection.

“We’ve seen a lot of software partners changing their architecture to support intermittent connectivity,” said Gerald Kleyn, director of engineering at Hewlett Packard Enterprise (HPE). “In some cases, if the weather gets bad and [satellite communication] goes down, then when it comes back up it starts releasing things that have been stored on the edge back up to the cloud.”

What to Do When You Find a Vulnerability on Your Connected Device

The longer your device is in the real world, the more likely it is that a vulnerability will be found. As a result, engineers will need to design software update compatibility into their devices.

“You need a software distribution mechanism that will work for all of your devices that’s scalable, secure, flexible and efficient,” said Black. “It needs to be flexible because all your devices are different, so they need different processes and procedures.”

“You need to be able to say, if the install isn’t going right, that you need to hold back and notify your system. You need to be able to say, ‘do this for North America first, or Europe or everyone but that customer that doesn’t want updates,’” added Black. “Without a plan, you will be sad when the next Heartbleed comes out. You are going to have to patch. So what is the mechanism you are going to utilize?”
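The staged-rollout logic Black describes can be boiled down to simple targeting rules; the sketch below is a toy version with invented device metadata.

    def rollout_targets(fleet, wave):
        """Select device IDs for one update wave: optional region filter plus customer opt-outs."""
        targets = []
        for device in fleet:
            if wave.get("region") and device["region"] != wave["region"]:
                continue
            if device["customer"] in wave.get("exclude_customers", []):
                continue
            targets.append(device["id"])
        return targets

    fleet = [
        {"id": "dev-1", "region": "NA", "customer": "acme"},
        {"id": "dev-2", "region": "EU", "customer": "acme"},
        {"id": "dev-3", "region": "NA", "customer": "optout-co"},
    ]
    # North America first, skipping the customer that doesn't want updates
    print(rollout_targets(fleet, {"region": "NA", "exclude_customers": ["optout-co"]}))  # ['dev-1']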

This all must seem very complicated, but many of these IoT security issues will be addressed when you choose the IoT platform used to run, manage and design the system. Black says that when choosing your IoT platform, keep these three main security challenges in mind:

  1. Managing the complex interactions between devices and users
  2. Patching security updates to your devices in an easy and secure fashion
  3. Reducing risk by keeping cyber-attackers from finding your device

You can view the original post Here

Read more…

Originally Posted by: Mimi Spier

The Internet of Things (IoT) is here to stay—and rapidly evolving. As we try to make sense of IoT’s impact on our lives and businesses, we also continue grappling with the security challenges.

As the IoT security landscape evolves, here are five key insights for designing and implementing IoT deployments for your enterprise.


1. Protect Your People

IoT has opened up a world of possibilities in business, but it has also opened up a host of ways to potentially harm employees and customers. A security breach is no longer limited to stealing credit card data. Anyone with the right access could breach firewalls or steal health records. A key challenge of the IoT world is providing the right access to the right people at the right time.


2. Watch Your Things

As millions of “things” start joining the enterprise network, it also expands the surface area for hackers to breach your system. All these devices will be leveraging public Wi-Fi, cloud, Bluetooth networks, etc., which will create multiple points of vulnerability. Your system needs to be designed for security from the bottom up to account for:

A) Device level: better quality devices

B) Data level: encryption and cryptology

C) Network level: certificates and firewalls

D) Application level: login/authorized access

3. Poor Quality of Things

The standards for IoT hardware and software are still evolving, which means until we have any established guidelines, we need to account for a vast range in the quality of “things.” Some of these may be very sophisticated and hardy, while others may be of the cheap disposable variety. Which devices you pick may depend upon factors like cost, usage and the use case itself. However, be warned that lower-quality devices have been used to gain entry to a secure network.

“By 2020, more than 25% of identified attacks in enterprises will involve the Internet of Things (IoT), although the IoT will account for less than 10% of the IT security budget.” Gartner

4. Is Your Network Ready?

One of the biggest challenges for any IT department implementing company-wide IoT projects will be assessing and managing bandwidth. As millions of devices join your network at increasing rates, scaling your network’s bandwidth will be an ongoing struggle. Your bandwidth must remain elastic so you can support enterprise needs while minimizing costs. It is critical to minimize exposure of your networks by using, for example, micro-segmentation.

5. Data Is Your Friend

As with protecting any system, predictive maintenance is the way to stay a step ahead of breaches. The usual ways of pushing out timely security patches and software upgrades will continue to be helpful. However, one big advantage of IoT is the sheer amount of data it generates. You can track operational data to create alerts based on anomalies in the system. For example, if someone logs into the system from Atlanta and then, 30 minutes later, logs in again from Palo Alto, the system should raise a red flag.
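The Atlanta-to-Palo-Alto example can be checked mechanically as an “impossible travel” rule: flag any pair of consecutive logins whose implied speed is physically implausible. The coordinates and speed threshold below are illustrative.

    from math import radians, sin, cos, asin, sqrt

    def haversine_miles(lat1, lon1, lat2, lon2):
        # Great-circle distance between two points, in miles
        lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
        a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
        return 3959 * 2 * asin(sqrt(a))

    def impossible_travel(prev_login, new_login, max_speed_mph=600):
        distance = haversine_miles(prev_login["lat"], prev_login["lon"],
                                   new_login["lat"], new_login["lon"])
        hours = (new_login["ts"] - prev_login["ts"]) / 3600
        return hours > 0 and distance / hours > max_speed_mph

    atlanta   = {"lat": 33.75, "lon": -84.39, "ts": 0}
    palo_alto = {"lat": 37.44, "lon": -122.14, "ts": 1800}   # 30 minutes later
    print(impossible_travel(atlanta, palo_alto))  # True: roughly 2,100 miles in half an hour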

You can view the original post by clicking Here.

Read more…

The internet is now a given. It’s something that we don’t even consider. It’s always there and we can depend on it to help us just as we can depend on electricity and natural gas to keep us warm or cool.

The way in which we use the internet began with communications and has evolved far beyond that into a necessity, something that is changing lives. We are entering a unique period in the life of the internet.

IoT isn't at all new to us, though many people are not sure what IoT is or how it’s useful to humanity. It began at MIT nearly 20 years ago, in the early 2000s. IoT, to simplify the explanation, is nothing more than a network of all kinds of objects that connect to the internet. Refrigerators, cars, trucks, manufacturing computers, watches and tablets are all examples of the IoT, and each of them has unique capabilities.

Given the changes being made in IoT, this network can now be expanded to include physical items that may not traditionally have been part of the internet: things like sneakers that count how far you've run, or that cushion your foot and measure the impact to the body. Street lights connected to the internet can record who stands beneath them or what activity takes place nearby.

IoT, according to companies such as DHL and Cisco, is firing the imagination and creating a broad and diverse array of new jobs and new methods of accomplishing old tasks.

IoT offers us a transition in technology that has been impacting many different industries, and it will continue to do so, touching more tasks and more companies along the way. As it continues to change and evolve, it will have huge implications for the movement of goods and services and the business of logistics.

Today some 15 billion devices are connected to the internet. These embedded sensors and control computers help us analyze our work, source new data, and gain unparalleled views into operations and information, allowing us to improve speed, products, delivery and the overall service to our customers.

The IoT is already changing the way we do business and the logistics of storage and delivery. It’s doing that by changing how we make decisions about how goods are trucked, stored, monitored, serviced, and delivered to customers.

Trucks and cars carrying goods are already being moved with the help of robotics in countries such as Singapore, the UK and the US.

Storage units are carefully measuring temperature to ensure that goods are stored in the right way, preventing spoilage and saving money for the companies using them.

Vast changes and major impacts in how we buy, sell and use goods and services, and improvements in the ways that they serve mankind, are being wrought by the internet of things every day. Expect the future to bring more of the same.

For more information check out our website at www.internetofthingsrecruiting.com

Read more…

We went to IoT World last week in Santa Clara, California, where over 150 vendors and 10,000 attendees were showing their wares and making connections. More posts on that soon. In the meantime, here's our third issue of the IoTC Bi-Weekly Digest. If you're interested in being featured, we always welcome your contributions on all things IoT Infrastructure, IoT Application Development, IoT Data and IoT Security, and more. All members can post on IoT Central. Consider contributing today. Our guidelines are here.

Featured Articles


Who's Your Buddy? An interview with Dave McLauchlan, CEO & Co-Founder, Buddy Platform

By David Oro

Last week at IoT World, I stopped by the Buddy Platform booth (namely because of their killer Lego set-up). Buddy provides data hosting and management solutions for manufacturers and vendors of connected ("IoT") devices. Prior to IoT World, I sent Buddy CEO and Co-Founder Dave McLauchlan a few questions. Here's what he had to say.

People talk about the Internet of Things (IoT) but few know what it even means in practice

By Danielle Storey 

The technology sector is buzzing with predictions and hype about the Internet of Things (IoT), but many people are still confused about what it means, what the real world opportunities are and why businesses should be looking into IoT.

These Are The Weakest Points in Your IoT Security

By Shayla Price

The Internet of Things is changing the world, heralded as one of the most pivotal technology trends of the modern era. We are getting ready to enter a time where everything, quite literally, is connected to the Internet. For the industrial sector, this is a new area of exploration. Factories have smart infrastructures that use sensors to relay data about machine performance. Cities have smart grids that monitor everything from traffic to the energy used by streetlights. Hospitals can monitor the health of high-risk, at-home patients.

In other words, we are entering a hacker's dream world.

The Internet of Things >> Birth of a tech ecosystem purpose-built for disruptive growth

By Roger Attick

The Internet of Things (IoT) concept promises to improve our lives by embedding billions of cheap purpose-built sensors into devices, objects and structures that surround us (appliances, homes, clothing, wearables, vehicles, buildings, healthcare tech, industrial equipment, manufacturing, etc.). What this means is that billions of sensors, machines and smart devices will simultaneously collect volumes of big data, while processing real-time fast data from almost everything and... almost everyone! The IoT vision is not yet reality. Simply stated, the Internet of Things is all about the power of connections.

Additional Links

Follow us on Twitter | Join our LinkedIn group | Members Only | For Bloggers | Subscribe

Read more…

Guest blog post by Vincent Granville

AI was very popular 30 years ago, then disappeared, and is now making a big comeback because of new robotic technologies: driverless cars, automated diagnostics, IoT (including vacuum cleaning and other household robots), automated companies with zero employees, soldier robots, and much more.

Will AI replace data scientists? I think so, though data scientists will initially be replaced by "low intelligence" yet extremely stable and robust systems. There has been a lot of discussion about the automated statistician. I am myself developing data science techniques such as Jackknife regression that are simple, robust, suitable for black-box, machine-to-machine communications or other automated use, and easy to understand and pilot by the layman, just as a Google driverless car can be "driven" by an 8-year-old kid.

My approach to automating data science and data cleaning / EDA (exploratory data analysis) is not really AI: it's just a starting point, not a permanent solution. In the long term, it is possible that AI will handle complex regression models, far more complex than my Jackknife regression: after all, all the steps of linear or logistic regression modeling, currently handled by human beings spending several days or weeks on the problem, involve extremely repetitive, boring, predictable tasks, and thus it is a good candidate for an AI implementation entirely managed by robots.

As machine learning (ML) increasingly involves AI, and as the blending of ML and AI is sometimes referred to as deep learning, I can see data science evolving into deep data science (DDS) or automated data science (ADS), where AI, robots, or automation at large take a more prominent role.

True AI systems can even predict travel time in real time based on expected traffic bottlenecks and road closures.

Which jobs are threatened by AI?

Just as data science will take years to reach a high level of automation, where as much as 50% of human tasks are replaced by robots, I believe that these professions are at risk, though the erosion will be modest and slow, taking a long time to materialize:

  • Teachers: some topics such as mathematics or computer science can be taught by robots, at least for the 10% of students who are self-learners. Generally speaking, topics already taught by robots include flying a plane, with training done on an AI-powered simulator. Ironically, planes can be flown without human pilots, but studies have shown that passengers would be very scared to board a pilot-less plane. The biggest threat for teachers is not AI, though; it is online training.
  • Grading student papers and detecting plagiarism. But students / authors are getting more sophisticated, using article-generating software powered by AI to avoid detection. This could lead to an interesting war: AI robots designed for fraud detection fighting against AI robots designed to cheat.
  • For publishers, automatically writing high-quality, curated articles in a short amount of time. An article such as this one is a good candidate for automated, AI-powered production. The first step is to identify articles that are good candidates (for curation)  for a specific audience; this is also accomplished using AI. 
  • Can AI write AI algorithms, or, in short, can AI automate AI? I believe so; after all, I was one of the pioneers who wrote programs that write programs (software code compilers or interpreters also fit in this category). I guess this is just an extension of that concept.
  • Automated diagnostics (or the automated doctor, but also the automated lawyer). I guess this will eliminate a small proportion of these practitioners. But what about a robot performing brain surgery with higher efficiency than a human surgeon? Or a robot manufacturing an ad-hoc, customized, client-specific drug for maximum efficiency?
  • Automated chefs replacing expensive cooks in a number of restaurants. Or think about a McDonald's restaurant where the only human is a security guard - everything else being outsourced to AI-powered robots, including cleaning, preparing food, delivering to customers, processing payments, filing tax returns and accounting, ordering from vendors, and so forth. This would require significant system-to-system communication, but I believe it is feasible.
  • Automated policemen or soldiers are a source of concern, as you would have algorithms deciding whom to kill or arrest. So this might not happen for a long time, though drones are replacing soldiers in a number of wars, and have the power to kill (based on some algorithm) with little complaint, as long as it is not happening in the US. Terrorists might be attracted to this type of technology too.
  • AI will be present in many IoT applications such as smart cities, precision farming, transportation, monitoring (detecting when an offshore oil platform is going to collapse), and so on.

AI and automation replaced many data science tasks long ago

Many people talk about the threat of AI, but as of today, many jobs have already been automated, some more than 30 years ago. For instance, during my PhD years, a lot of data transited through tapes between big computer systems, and involved trips to the computer center, interacting with a number of people taking care of the data flow. This has entirely disappeared.

We used to have shared secretaries to write research papers (they could write LaTeX documents); I think this has all but disappeared.

One of the applications that I developed in the eighties was remote sensing software that could perform image segmentation and clustering, for instance to compute the proportion of various crops in a specific area based on satellite images, without human interaction - thus eliminating all the expensive jobs that were previously performed by humans to accomplish this task.

Final note

Those who automate data science are still data scientists. Just as those developing robots to automate brain surgery work in a team, with many members being brain surgeons, automation is shifting the nature of the job rather than eliminating it.

 

Follow us @IoTCtrl | Join our Community

Read more…

Guest blog post by Bernard Marr

In a meeting with Airbus last week I found out that their forthcoming A380-1000 – the supersized airliner capable of carrying up to 1,000 passengers – will be equipped with 10,000 sensors in each wing.

The current A350 model has a total of close to 6,000 sensors across the entire plane and generates 2.5 Tb of data per day, while the newer model – expected to take to the skies in 2020 – will capture more than triple that amount.

In an industry as driven by technology as the aviation industry, it’s hardly surprising that every element of an aircraft’s performance is being monitored for the potential to make adjustments which could save millions on fuel bills and, more importantly, save lives by improving safety.

So I thought this would be a good opportunity to explore how the aviation industry, just like every other industry, is putting data science to work.

There are 5,000 commercial aircraft in the sky at any one time over the US alone, and 35 million departures each year. In other words the aviation industry is big. And given that every single passenger on each of those flights is putting their life in the hands of not just the pilot, but the technology, the safety measures and regulations in place are extremely complex.

This means that the data it generates is big, and complex too. But airlines have discovered that with the right analytical systems, it can be used to eliminate inefficiencies due to redundancy, predict routes their passengers are likely to need, and improve safety.

Engines are equipped with sensors capturing details of every aspect of their operation, meaning that the impact of humidity, air pressure and temperature can be assessed more accurately. It is far cheaper for a company to be able to predict when a part will fail and have a replacement ready, than to wait for it to fail and take the equipment offline until repairs can be completed.

In fact, Aviation Today reported that it can often take airlines up to six months to source a replacement part, due to inefficient prediction of failures leading to a massive backlog with manufacturers.
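As a toy illustration of the predictive idea (invented readings and thresholds, not real engine data), a simple rolling-mean drift check on a sensor reading can trigger a parts order well before outright failure.

    from collections import deque

    class DriftMonitor:
        def __init__(self, baseline, window=50, tolerance=0.03):
            self.baseline = baseline            # expected healthy reading
            self.window = deque(maxlen=window)  # recent sensor readings
            self.tolerance = tolerance          # fractional drift that triggers maintenance

        def update(self, reading):
            self.window.append(reading)
            mean = sum(self.window) / len(self.window)
            return abs(mean - self.baseline) / self.baseline > self.tolerance

    monitor = DriftMonitor(baseline=650.0)      # e.g. nominal exhaust-gas temperature, deg C
    for temperature in [651, 653, 660, 690, 702, 715]:
        if monitor.update(temperature):
            print("Drift detected: order the replacement part before the engine fails")
            break

Real predictive-maintenance models are far richer, but the principle is the same: act on a trend in the data rather than waiting for the failure.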

On top of this, fuel usage can be economized by ensuring engines are always running at optimal efficiency. This not only cuts fuel costs but minimizes environmentally damaging emissions.

In the case of Airbus, they partnered with IBM to develop their own Smarter Fuel system, specifically to target this area of their operation with Big Data and analytics.

Additionally, airlines closely monitor arrival and departure data, correlating it with weather and related data to predict when delays or cancellations are likely – meaning alternative arrangements can be made to get their passengers where they need to be.
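
A toy sketch of that correlation step, assuming historical departures and weather forecasts have already been joined into one table (the column names and figures are hypothetical):

import pandas as pd
from sklearn.linear_model import LogisticRegression

# Hypothetical joined table: one row per scheduled departure
flights = pd.DataFrame({
    "wind_kts":      [5, 30, 12, 40, 8, 35],
    "visibility_mi": [10, 2, 8, 1, 9, 3],
    "inbound_late":  [0, 1, 0, 1, 0, 1],   # was the inbound aircraft late?
    "delayed":       [0, 1, 0, 1, 0, 1],   # label we want to predict
})

X, y = flights[["wind_kts", "visibility_mi", "inbound_late"]], flights["delayed"]
model = LogisticRegression().fit(X, y)

# Probability that a departure slips, given tomorrow's forecast
forecast = pd.DataFrame([[28, 3, 1]], columns=X.columns)
print(model.predict_proba(forecast)[0][1])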

Before they even take off, taxi times between the departure gates and runways are also recorded and analyzed, allowing airlines and airport operators to further optimize operational efficiency – meaning fewer delays and fewer unhappy passengers.

This sort of predictive analysis is common across all areas of industry but is particularly valuable in commercial aviation, where delays of a few hours can cost companies millions in rearrangements, backup services and lost business (The FAA estimates that delayed flights cost the US aviation industry $22 million per year).  

Specialist service providers have already cropped up – masFlight is one – aiming to help airlines and airports make the most of the data they have available to them.

They aggregate data sets including weather information, departure times, radar flight data and submitted flight plans, monitoring 100,000 flights every day, to enable operators to more efficiently plan and deliver their services.

In marketing, too, airlines are beginning to follow the lead of companies such as Amazon by collecting data on their customers, monitoring everything from customer feedback to how they behave when visiting websites to make bookings.

Now that we are used to generating and presenting tickets and boarding cards through our smartphones, more information about our journey through the airport, from the time we enter to the time we board our flight, can also be tracked. This is useful both to airport operators, who manage the flow of people through their facilities, and to airlines, which will gather more information on who we are and how we behave.

So businesses in the aviation industry, including Airbus, are making significant steps towards using data to cut waste, improve safety and enhance the customer experience. 10,000 sensors in one wing may sound excessive but with so much at stake – both in terms of profits and human lives – it’s reassuring that nothing will be overlooked.

I hope you found this post interesting. I am always keen to hear your views on the topic and invite you to comment with any thoughts you might have.

About: Bernard Marr is a globally recognized expert in analytics and big data. He helps companies manage, measure, analyze and improve performance using data.

His new book is Big Data: Using Smart Big Data, Analytics and Metrics To Make Better Decisions and Improve Performance. You can read a free sample chapter here.

Follow us @IoTCtrl | Join our Community

Read more…

The observation deck won’t be finished for a few years yet. If you want to see the future of New York, walk north along the High Line, round the curve at the rail yards, and turn your back to the river. Amid the highway ramps and industrial hash of far-west Manhattan, a herd of cranes hoists I-beams into the sky. This is Hudson Yards, the largest private real-estate development in United States history and the test ground for the world’s most ambitious experiment in “smart city” urbanism.

Over the next decade, the $20-billion project — spanning seven blocks from 30th to 34th Street, between 10th and 12th Avenues — will add 17 million square feet of commercial, residential, and civic space, much of it housed in signature architecture by the likes of Skidmore, Owings & Merrill; Diller Scofidio + Renfro; and Bjarke Ingels Group. But you don’t have to wait that long to see where this is headed. The first office tower, Kohn Pedersen Fox’s 10 Hudson Yards, opens next month, with direct access to the High Line. The new subway stop is already in business (and has already sprung a few leaks); an extension of the 7 train line connects the diverse, middle-class neighborhood of Flushing, Queens, with this emerging island of oligarchs.

Read the complete story here.

Read more…

The Internet of Things (IoT) concept promises to improve our lives by embedding billions of cheap purpose-built sensors into devices, objects and structures that surround us (appliances, homes, clothing, wearables, vehicles, buildings, healthcare tech, industrial equipment, manufacturing, etc.).

IoT Market Map -- Goldman Sachs

What this means is that billions of sensors, machines and smart devices will simultaneously collect volumes of big data, while processing real-time fast data from almost everything and... almost everyone!!!

IoT vision is not yet reality

Simply stated, the Internet of Things is all about the power of connections.

Consumers, for the moment anyway, seem satisfied to have access to gadgets, trendy devices and apps which they believe will make them more efficient (efficient doesn't necessarily mean productive), improve their lives and promote general well-being.

Corporations on the other hand, have a grand vision that convergence of cloud computing, mobility, low-cost sensors, smart devices, ubiquitous networks and fast-data will help them achieve competitive advantages, market dominance, unyielding brand power and shareholder riches.

Global Enterprises (and big venture capital firms) will spend billions on the race for IoT supremacy. These titans of business are chomping at the bit to develop IoT platforms, machine learning algorithms, AI software applications & advanced predictive analytics. The end-game of these initiatives is to deploy IoT platforms on a large scale for:

  • real-time monitoring, control & tracking (retail, autonomous vehicles, digital health, industrial & manufacturing systems, etc.)
  • assessment of consumers, their emotions & buying sentiment,
  • managing smart systems and operational processes,
  • reducing operating costs & increasing efficiencies,
  • predicting outcomes, and equipment failures, and
  • monetization of consumer & commercial big data, etc.

 

IoT reality is still just a vision

No technology vendor (hardware or software), service provider, consulting firm or self-proclaimed expert can fulfill the IoT vision alone.

Recent history with tech hype-cycles has proven time and again that 'industry experts' are not very accurate at predicting the future... in life or in business!

Having said this, it only makes sense that fulfilling the promise of IoT demands close collaboration & communication among many stakeholders.

A tech ecosystem is born

IoT & Industrial IoT comprise a rapidly developing tech ecosystem. Momentum is building quickly and will drive sustainable future demand for:

  • low-cost hardware platforms (sensors, smart devices, etc.),
  • a stable base of suppliers, developers, vendors & distribution,
  • interoperability & security (standards, encryption, APIs, etc.),
  • local to global telecom & wireless services,
  • edge to cloud networks & data centers,
  • professional services firms (and self-proclaimed experts),
  • global strategic partnerships,
  • education and STEM initiatives, and
  • broad vertical market development.

I'll close with one final thought: "True IoT leaders and visionaries will first ask why, not how!"

Read more…

DIY Home Automation

Home Automation DIY Case Study

The following is from a Mind Commerce interview with a residential owner/installer/operator:

I got into the home automation craze by accident when one of my managers described what he was doing.  After looking at it, the added convenience, security, and cost savings made me a believer.  The devices I use fall under the overall category of the Internet of Things (IoT).

My setup is as follows:

  • I have an Amazon Echo that allows me to issue voice commands to the majority of my IoT devices.  It also will play music from my Amazon Prime account and allow me to order merchandise (all voice of course).  It additionally allows me to keep a TODO and shopping list that is synchronized to my Alexa app on my iPhone.  As I think of items, I just tell Alexa (the name for the Echo), and she will add the items to the list.  I use this all the time.  You can also set timers and alarms vocally, which is another well-used feature.  There's tons more.  The Echo talks WiFi.
  • I use a Wink Hub to interface the Echo to devices that don't directly talk over WiFi, or that the Echo doesn't directly support.  The Wink Hub talks Z-Wave, Zigbee, WiFi, and Lutron's proprietary communications (dimmers).  The Wink Hub also has a nice APP that lets me control everything directly from my cellphone if I want.
  • I use Lutron dimmers that allow me to turn on, turn off, or set the dimming level for my most commonly used lights.  The Echo supports this, so I can say "Alexa, set living room lights to XX%" and it happens.
  • I have a Rain Machine, which is a connected sprinkler controller.  I can turn on stations from the Echo, but I don't.  What it allows me to do is set the watering parameters; it then connects to NOAA and modifies my preferences based on how much rain has fallen.  It has a great APP and will tell me how much each station actually watered per week.  A real money saver in Florida.
  • The Ecobee 3 thermostat was an expensive but awesome IoT purchase that also saved me a lot of money this past winter.  It is very smart and connects to the Echo directly (WiFi).  I can tell Alexa to raise or lower the temperature by voice.  Setup couldn't be any simpler, and the APP is awesome.  Conventional wisdom in the winter is to lower your temperature at night and then have it increase before you wake to save money.  Wrong!  The Ecobee tracks when your fan and compressor run (view on the website).  I found out that turning the temperature down by 4 degrees overnight was causing my heat strips (expensive) to turn on for a couple of hours around 5AM to bring the temperature back up.  I was much better off just leaving it one degree less all the time.
  • For my garage door controller, I bought an IoT box that allows me to view the status of the garage door and to remotely open or close the door by using the Wink APP.  Really nice when I can't remember if I closed the door, or left it open.  This doesn't work with the Echo by design (having a crook yell into your house "Alexa open the garage door" wouldn't be a good thing).
  • Nest Cam is an awesome security device.  When I'm traveling I can view what's going on in the house and even hear what's going on.  It's got 1080p resolution and night IR capability (see at night with the lights off).  I can even talk to my cat through it.  I pay for the cloud recording service, so when it's on, a month of recording is held in the cloud, which would be useful if the house is ever robbed.  The problem is I don't want it recording while I'm home.  That is solved by...
  • Leviton makes smart bricks that plug into an outlet and let you plug an appliance (anything) into it and control that appliance's on/off state through Wink or the Echo.  So when I leave, I can just vocally tell the Nest Cam to turn on, or if I forget, I can just use the Wink APP to turn it on remotely.  I use these to control the Nest Cam, my DirecTV internet device, and my Amazon Fire TV.  Why have them sucking energy all the time when I use them maybe 2% of the time?

As an advanced user*, he also had this to say:

  • There is a service called IFTTT (If This Then That) that works with the Echo, Wink and the IoT devices to allow creation of recipes that handle what to do if something happens.  For example, I set up a recipe so that when I ask the Echo where my cellphone is, IFTTT will call the phone so it rings (a conceptual sketch of this rule pattern follows below).  The possibilities are limitless.  Think geo-fencing or linking input from IoT sensors to automatically cause actions.

*Note: Remember, this is a more advanced tech user.  However, IoT is increasingly becoming part of the consumer lexicon!
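
The sketch below is a conceptual, local illustration of that "if this then that" pattern, not the actual IFTTT service or its API; the trigger phrase and action are invented:

from typing import Callable

# A minimal local rule table: a trigger phrase maps to an action to run.
recipes: dict[str, Callable[[], None]] = {}

def recipe(trigger: str):
    def register(action: Callable[[], None]):
        recipes[trigger.lower()] = action
        return action
    return register

@recipe("alexa, find my phone")
def ring_phone():
    # A real recipe would call a phone-ringing service; here we just simulate it.
    print("Calling your cellphone so it rings...")

def handle(trigger: str):
    action = recipes.get(trigger.lower())
    if action:
        action()

handle("Alexa, find my phone")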

Read more…

Our second issue of the IoTC Bi-Weekly Digest is below. If you're interested in being featured, we always welcome your contributions on all things IoT Infrastructure, IoT Application Development, IoT Data and IoT Security, and more. All members can post on IoT Central. Consider contributing today. Our guidelines are here.

Featured Articles



An Interview with Ken Finnegan, Chief Technology Officer, IDA Technology Ireland

By David Oro

Just ahead of the Internet of Things World conference taking place May 10–12 at the Santa Clara Convention Center in Silicon Valley, we were lucky enough to catch up with one of the conference speakers, Ken Finnegan, Chief Technology Officer, IDA Technology Ireland. We asked Mr. Finnegan about IoT and Smart Cities, IoT implementations in Dublin, and his thoughts on making cities smarter.  Here’s what we learned.




Profile of an IoT Developer: Results of the IoT Developer Survey

By Ian Skerrett

Today we release the results of our second annual IoT Developer Survey. Like last year, it provides an interesting insight into how developers are building IoT solutions. This year the Eclipse IoT Working Group partnered with IEEE IoT and the AGILE-IoT research project to expand the scope and respondent pool for the survey. Thanks to this partnership, we had 528 participants in the survey, up from 392 last year. The partnership also allowed us to analyze the data to look for any significant differences between the different IoT communities.

What options do you have for remotely monitoring water and fluids with Industrial IoT sensor telemetry?

By Pawei Sasik

IIoT or Industrial IoT (Internet of Things) is everywhere. It’s across all industries, from high tech transport, to natural resources and governments. IIoT software and hardware is deployed for numerous, varying applications, and it’s critical to understand just what the customer needs. One of the areas that we’ve seen recent growth is water and fluid monitoring. Water comes to us as a life sustaining asset and also as a force of destruction. The utility of water needs to be measured and monitored in order to effectively and efficiently use our greatest natural resource. Similarly, monitoring the destructive force of water can be just as important. Let’s talk about the different ways that you can measure and monitor water!

Platforms instead of products: the new normal

By Thierry Lillete

As the platform race continues to mature for the IoT, we found a great post by Thierry Lillette that looks into platforms, ecosystems and products. Good reading for any IoT and digital professional.

The Next Big Thing In Big Data: BDaaS

By Bernard Marr

We’ve had software as a service, platform as a service and data as a service. Now, by mixing them all together and massively upscaling the amount of data involved, we’ve arrived at Big Data as a Service (BDaaS).  It might not be a term you’re familiar with yet – but it suitably describes a fast-growing new market. In the last few years many businesses have sprung up offering cloud based Big Data services to help other companies and organizations solve their data dilemmas.

The IoT Database

By David Oro

Phillip Zito at the highly resourceful blog Building Automation Monthly has consolidated multiple IoT Frameworks and Offerings into the IoT Database. You will see links to the Frameworks and Offerings below. He says that over time he will provide summary articles on each Framework and Offering. He could use your help: if you have an offering or framework you would like added to this list, feel free to add it in the comments. You can find the IoT Database here.

10 Case Studies for the Industrial Internet of Things

By David Oro

It’s still early days for the IoT, but every day a little part of its burgeoning ecosystem becomes a factor in our lives, whether we know it or not. From industrial tools to farming to cities to grocery aisles and everything in between, the IoT is there. Here are 10 IoT case studies that show just where some of these technologies and applications are being applied.

Additional Links

Follow us on Twitter | Join our LinkedIn group | Members Only | For Bloggers | Subscribe

Read more…

Guest blog by Kai Goerlich. This post originally appeared here

While discrete manufacturing is used in a diverse range of industries, including automotive, aerospace, defense, construction, industrial machinery, and high tech, all of them face common and tough challenges such as higher resource volatility, more competition, increasing customer expectations, and shorter innovation cycles.

According to a study by Roland Berger (see chart), product complexity has increased dramatically in the past 15 years. Manufacturers have to cope with two overlapping trends: the variety of products is constantly increasing and has more than doubled in the past 15 years, and, in parallel, product lifecycles have gotten about 25% shorter. These factors are putting increasing pressure on margins, on supply and procurement systems, and on overall business models. According to Roland Berger, managing this complexity could reduce costs by roughly 3% – and certainly digitization can help improve this margin.

The threats and potentials of digitization

Adapting to the age of hyperconnectivity is a matter of life and death for the majority of companies, according to a study by the Economist Intelligence Unit. More than half of enterprises feel very strong competitive pressure from digital offerings by their traditional competition, established companies using digital to enter their market, and digital startups. Certainly, the competition is not waiting, and neither will today’s well-informed digital customers, who want more choice, better customization, and more information around the buying process. While digitization might add another disruptive dimension to an already rising complexity, discrete manufacturers are seeking the benefits of digitization. They are already proactively exploring the use of the IoT to better connect their supply chains, assets, and products, according to an IDC white paper, The Internet of Things and Digital Transformation: A Tale of Four Industries, sponsored by SAP.

Most manufacturers start with less complex projects, such as enhanced visibility or tracking, and progress to more sophisticated processes that require automated or predictive workflows, according to IDC. The findings of the study suggest that companies should start their IoT projects with the overarching goal of a live business operation already in mind. By combining three IoT use cases for manufacturing, i.e. connecting products, creating a connected shop floor with customization, and extending digital business models (see chart), companies will create a competitive business operation that fully exploits the digital opportunities.

Connecting products to improve innovation

Using IoT for innovation is a highly underestimated potential of digitization. A significant percentage of new products fail, and the associated R&D and marketing costs are lost. Customers already expect their products to come with a certain degree of interactivity, and this demand will certainly grow in the future. According to some estimates on the adoption of connected technology by consumers (cited by Forbes), the ratio of connected and interactive products will rise to approximately 20% on average by 2020. This is a conservative estimate, and in some segments the ratio might increase much faster.

By digitizing current products and launching fully digitized ones, manufacturers can significantly reduce the risk of new product failures, as IoT-based products will enable them to monitor the actual use and performance of their products, get live feedback from their customers, and adopt future product innovation. IDC expects that by 2017, 60% of global manufacturers will use IoT to sense data from connected products and analyze that data to optimize the product portfolios, performance, and manufacturing processes. Similarly, the integration of IT assets and information with operational technology in the plant and the supply chain is also on the roadmap, if not already started.

Connecting the shop floor

Digitization offers the possibility to oversee every step in the manufacturing process, from customer demand, through production, and across the complete supply chain. The IDC study identified two IoT use cases – strategic asset management and customer experience – that seem to be very attractive for discrete manufacturing.

1. Strategic asset management

Manufacturers should start to digitize all of their assets in the production process and use IoT-based preventive and predictive maintenance scenarios in the plant and supply chain to reduce downtime and improve utilization. Using the information generated from digitization and IoT, businesses can evaluate use patterns and maintenance routines of their inventory and assets and optimize operations. Fixed assets can account for as much as one-third of all operating costs, so under today’s cost pressures digital asset management surely matters. To fully use the potential of IoT and the real-time information gathered from assets, devices, and machines, companies need to ramp up their analytical and decision-making capabilities. Anecdotally, companies report that IoT use cases (such as remote maintenance) changed the way they thought about data and got them thinking significantly differently about information and insights.
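
A small, hypothetical sketch of that "evaluate use patterns" step, computing shift utilization from a machine's on/off sensor events (the timestamps and eight-hour shift are illustrative):

from datetime import datetime, timedelta

def utilization(events: list[tuple[datetime, str]], shift_hours: float = 8.0) -> float:
    """Fraction of a shift a machine spent running, from ordered
    (timestamp, "on"/"off") sensor events."""
    running = timedelta()
    started = None
    for ts, state in events:
        if state == "on" and started is None:
            started = ts
        elif state == "off" and started is not None:
            running += ts - started
            started = None
    return running.total_seconds() / (shift_hours * 3600)

day = datetime(2016, 5, 1, 6, 0)
events = [(day, "on"), (day + timedelta(hours=3), "off"),
          (day + timedelta(hours=4), "on"), (day + timedelta(hours=7), "off")]
print(f"{utilization(events):.0%} of the shift")   # 75%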

2. Customization for customer experience

Demand for more choice, flexibility, and customized products is growing fast; customized products are estimated to make up 15% of all products by 2020, according to the MIT Smart Customization Group. Depending on size, material, and complexity, that percentage might be significantly higher. However complex the challenge for manufacturers might be, connected production in real time is the basis, and it needs the right data from production capabilities, supply, equipment, and workforce, combined with all customer preferences. Getting the customer into the customization and production process is increasingly important for an improved customer experience, so IoT should be used to connect the products and, with them, the customer. This will not only give companies valuable data about user preferences and ideas for product innovation and improvements, but also allow them to plan the customization of products much more efficiently.

Digitally enhanced business models

Digitization is by now a synonym for disruption. According to a study by the Economist Intelligence Unit, 60% of companies think that digitization is the biggest risk they face. More than half of companies feel competitive pressure from digital offerings by their traditional competition and digital startups. As IDC found, discrete manufacturers are already actively exploring the IoT opportunities, so the change is already underway.

As we pointed out previously, the customer experience of choosing and buying a product is increasingly important, but it does not stop there. IoT-connected products will get the customer into an ongoing interaction with the product vendor and/or retailer, enhancing the buying and use experience. Moreover, companies can use this connection to expand their business models. In its study, IDC mentions a wide range of ideas that manufacturers already explore, such as remote maintenance, refill and replenishment, contracting, product performance, training, and location-based services. While they may not be applicable to all companies, they show the wide range of possibilities and opportunities. Digitization may be a threat for some traditional business models and companies, but it offers huge potential for those who focus on the customer experience.

Creating a live business operation

The huge potential that IoT offers is less the physical connection of things, machines, and devices, and more the opportunity to create a live business operation based on an advanced data strategy and analytics. While all aspects of IoT have large innovation opportunities on their own, the combination of connected products, customization, and digitally expanded business models promises the biggest benefits for discrete manufacturers. Thus any IoT strategy – wherever it starts – should be created with a larger digitization goal in mind.

Conclusion

  • Connecting products and strategic asset management have big potential for discrete industries.
  • The combination of connected products, customization, and digitally expanded business models promises the biggest benefits.
  • Companies should create a live business operation with advanced data and analytical skills to use the full potential of IoT.

For more details and information, please read IDC’s IoT whitepaper IoT and Digital Transformation: A Tale of Four Industries and look for future IoT papers that delve deeper into the IDC study’s findings.

Read more…

Just ahead of the Internet of Things World conference taking place May 10–12 at the Santa Clara Convention Center in Silicon Valley, we were lucky enough to catch up with one of the conference speakers, Ken Finnegan, Chief Technology Officer, IDA Technology Ireland. He advises and provides strategic insights into technology trends both nationally and globally for the agency and client companies. He worked in the software, telecommunications and big data industries for 15 years before joining the IDA in 2014. The IDA is Ireland's inward investment promotion agency; it is a non-commercial, semi-state body promoting Foreign Direct Investment into Ireland through a wide range of services.

We asked Mr. Finnegan about IoT and Smart Cities, IoT implementations in Dublin, and his thoughts on making cities smarter.  Here’s what we learned.

 

What are a few examples of IoT-based technologies that have been implemented throughout Dublin?

There are some really great projects happening in Ireland. The approach that Dublin has taken is a balanced top-down, bottom-up approach. What I mean by this is that the smart initiative is being driven by city leaders with support from government agencies (e.g. IDA Ireland, Enterprise Ireland and Science Foundation Ireland) at the top, whilst at the same time engaging with citizens and companies in order to identify and seek solutions to the real needs of the city.

There are five pillars to the Smart Dublin strategy. These include:

  • Smart Government
  • Smart Mobility
  • Smart Environment
  • Smart Living 
  • Smart People

The principles followed: 

  • Using smart technologies to improve city livability and competitiveness.
  • Taking a challenge-based approach to procurement to deliver better quality outcomes for the city.
  • Positioning Dublin as the place to pilot and scale new smart city technology opportunities.

Understanding the key areas of focus and the driving principles is vital to describing the challenges and demonstrating that top-down, bottom-up approach.

A recently completed Smart City challenge that is a fantastic demonstration of IoT in the city was "Keeping Our City Streets Clean."

A critical role of the city council is that of street cleaning and managing waste across busy city center areas in particular. There is a network of over 3,500 street bins that are manually emptied on a regular basis - the timing of which varies depending on the profile of the street. This street cleaning service is critical to maintaining a clean and litter free city. There has been an increasing trend of successful deployment of smart bin technologies in cities that incorporate features such as:

  • Sensors that communicate back to the street cleaners when they are full
  • Use of accompanying software that allow for optimization of routes for cleaning schedules
  • Use of software applications that deliver real-time data information (through a web portal or smartphone) on each bin status, their inventory management and other efficiency related data

The result was self-compacting bins that send an email when they need to be emptied!

Smart Bins are solar-powered, Wi-Fi enabled bins that are being installed in towns, villages and residential areas across the country to replace traditional public litter bins.

There are currently 401 Smart Bins installed in the south county area. The project is managed by the County Council's Environment Department with the purpose of improving the efficiency of waste management.
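
As a hedged sketch of the fill-level-to-notification flow such bins rely on (the threshold, SMTP host and email addresses are placeholders, not the deployed Dublin system):

import smtplib
from email.message import EmailMessage

FILL_ALERT = 0.85  # notify once a bin is roughly 85% full (placeholder threshold)

def check_bin(bin_id: str, fill_level: float, smtp_host: str = "localhost") -> None:
    """Send a collection request when a bin's fill sensor crosses the threshold."""
    if fill_level < FILL_ALERT:
        return
    msg = EmailMessage()
    msg["Subject"] = f"Bin {bin_id} needs emptying ({fill_level:.0%} full)"
    msg["From"] = "bins@example.org"
    msg["To"] = "street-cleaning@example.org"
    msg.set_content("Please schedule this bin on the next collection route.")
    with smtplib.SMTP(smtp_host) as server:
        server.send_message(msg)

check_bin("D08-1234", 0.91)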

Other examples can be found here including this video of Croke Park Smart Stadium.

 

Since transitioning to a smart city, what benefits has the city of Dublin experienced? And what plans do you have to make Dublin even smarter?

Without a doubt the biggest benefit Dublin and Ireland’s other cities have seen is a demonstration of the power of collaboration to uncover value. 

IDA Ireland has been successful in attracting and supporting multinationals here for a long time. With the combination of engagement with our multinational companies, a vibrant small-to-medium enterprise and start-up community, an openness for business from the cities, the youngest and most digitally savvy population in Europe, a highly connected research ecosystem that is easily accessed by industry, and support from the government, there is a lot happening.

For example, Dublin has what we call ‘Silicon Docks’. It’s a part of the city that has the European HQs of Google, Facebook, Airbnb, Twitter, LinkedIn, LogMeIn, Adroll, Accenture, Zalando, Tripadvisor and more.

Dublin City Corporation are planning to make this part of the city the most ‘densely sensored’ urban area in the world - producing lots of data that will be accessible by companies, government, academia and citizens. We anticipate that this is going to be a very powerful demonstration of Ireland’s capabilities to design and develop the sensors, connect them over multiple transmission types and finally with one of Europe’s largest data analytics research centers here, uncover, discover and predict value. 

Central to the smart city goals is also ensuring the infrastructure is in place: LoRa (low-power, long-range radio) transmission infrastructure is currently being rolled out across the entire island. This is funded by Science Foundation Ireland and coordinated by the CONNECT Research center, and it allows companies to conduct robust due diligence into which transmission standard works for them. Companies can also access and rent live radio spectrum, access the Sigfox network and lots more infrastructure; the building blocks are in place for technical solutions.

Ireland seems to have a head start when it comes to innovation in the area of IoT and smart cities. What other cities have you admired for their innovation, implementation and adoption in making their cities smarter?

A city I really respect for embracing and encouraging technology is Amsterdam. 

Amsterdam is my second home; I lived there after graduating university, and it was where a young Ken Finnegan learned the power and beauty of innovation. That is a city that is not afraid to positively leverage emergent technologies. I have seen cities, companies, governments and people look at innovation as a threat and try to tame it. This never works: if there is a smarter way to do things, do it. When policy tries to limit the adoption of innovation, or when companies fail to recognize it, they are only delaying its ubiquitous arrival and ultimately lose opportunities for growth and success. Amsterdam has the right attitude. The city may not always know exactly what it's dealing with, but it knows there is value to be exploited somehow. I would love to see a twinning of Amsterdam and Dublin. I think they are two European cities that are extremely like-minded in approach.

Ireland seems to be all in on smart cities - enlisting the public and private sectors as well as educational institutions. What's your advice for other government entities and the many private vendors in this space?

Indeed, governments, academia and the private sector all play an essential part, and each entity has ideas about what value is and how it will be generated. Simply put, my advice is to start the conversation.

Government can facilitate conversation among all the entities. We have a strong appetite for change and growth, and a characteristic I come across every day in Ireland is the idea of coopetition: cooperating whilst possibly in competition. We all wear the green jersey in Ireland; we are very proud of this green island, but we also want to develop the industry, ultimately making it stronger for all in order to grow and win. By not talking to each other, you limit growth opportunities; when you sit with competitors and others, you need to figure out the safe ground and see how you can work together to succeed.

Next, we have to realize that government and industry have to engage with the end-users. We see the citizen, or what I term the pro-citizen (professional citizen – the skilled and informed people who live, work and play in the cities and know the fabric of the city – plumbers, binmen, clubbers, doctors, civil servants, sports members, teachers, social workers, bar staff, etc.), as the consumer of smart city goods and services. These citizens suggest personalized solutions to the problems they encounter in day-to-day life. It's the application of a user-centered design approach to smart cities.

Finally, we have been listening to the narrative about the power of big data for years now. In order to harness that power, it is essential that data is accessible to all. For example, Dublinked is a regional data-sharing initiative through which previously unreleased public operational data is being made available online for others to research or reuse. With the initial data coming from Dublin City (4 boroughs), public and private organizations in Dublin are linking up with Dublinked to share their data and invite research collaborations. The information is curated by Maynooth University to ensure ideas can be commercialized as easily as possible and to minimize the legal or technical barriers that can be impediments for small and medium businesses (SMEs) seeking to develop and prove business ideas.

Smart cities are predicated on the advancement of IoT technologies. Do you see IoT as an opportunity for economic development and job creation? If so, how?

Yes for both cases.

In our five-year strategy launched in 2015, Winning 2020, IoT is the number one strategic technological area we are focusing on. If we didn't believe IoT would increase economic development or create jobs, there is absolutely no way it would be there. We have done our homework, we have listened to our clients, and we have mobilized the organization to ensure that each person knows exactly why Ireland is the global location for the Internet of Things. In addition, we are working with other government agencies to ensure that the environment is right for our clients to be successful. For example, our sister agency Science Foundation Ireland has funded multiple research centers of scale (€50m+) so that industry can leverage the quality research coming from the academic system. They have also funded the rollout of transmission networks across the entire island that can be leveraged by industry to research, test and develop innovations. Between IDA Ireland, Enterprise Ireland and Science Foundation Ireland, there are many tools we provide that industry can leverage to test and trial their products and services before commercializing. Our client companies are trialling these not in a confined test lab, but literally out in the field, in the cities, in our bays and on our highways, because Ireland is connected.

You'll be speaking at IoT World. What should the audience expect to hear from you?

Three things:

1. Ireland is open for business. If you have a problem that needs to be solved, if you want to service the European, Middle East and African markets, if you need infrastructure for research and development, or if you are simply looking for a location with accessible and available talent, we are ready to have that conversation.

2. IoT has gained a lot of talk time over the past 5 years, but the conversation around IoT has been developing in Ireland for more than 30 years. We are home to 10 of the top 10 born-on-the-internet/content companies, 9 of the top 10 information and communications technology companies, 15 of the top 20 pharmaceutical and life science companies, plus fintech, engineering, food and other companies. Many of these companies are developing their IoT solutions by working together here. It's truly an agile and collaborative hotspot to be in. Take a look at the past two years and the companies that have decided to move here; there is a very convincing track record.

3. The environment is right. With one of the youngest and most tech-savvy populations in Europe, the biggest names in industry, proactive government agencies and an academic scene focused on impact for industry, IDA Ireland wants to partner with and support companies ready to grow and succeed in the Smart IoT arena.

Read more…