


If IoT is the Meteor, is OT the Dinosaur?

Today’s digital transformation of business and government will have the same effect. It will make short work of any organization that does not evolve rapidly. CEOs must quickly define where their organizations can compete for success, and lead them on that journey. If they can’t—or won’t—change, they risk fading away like the dinosaurs.
Read more…

In this article, I’ll continue introducing products based on the AggreGate IoT Platform. In 2010, two years after the release of AggreGate Network Manager, we started the AggreGate SCADA/HMI project ‒ a fourth-generation SCADA system.

So what is fourth-generation SCADA? 

Wikipedia suggests the following definitions:

  1. First-generation SCADA systems are monolithic systems developed before Internet access became widespread. Such systems are no longer in operation.
  2. Second-generation SCADA solutions operate within enterprise local networks. They employ IP networking to connect controllers, data collection servers, control servers, and operator workstations.
  3. Third-generation SCADA architecture enables coordination of geographically distributed automated process control systems spanning multiple manufacturing sites and remote monitoring objects. Until recently, third-generation SCADA was the cutting edge, offering HMIs that launch in mobile device browsers, remote project editing directly on the production server, and testing without server shutdown or project file copying.
  4. Finally, fourth-generation SCADA should fit the Internet of Things. It implies greater decentralization and unification, i.e. the ability to shift the point of algorithm execution between SCADA servers and controllers. Another indispensable feature is operation over cellular and satellite networks without a VPN (controllers with no static IP address can connect to SCADA servers running in the cloud).

Naturally, every SCADA vendor evolves its products from generation to generation, while previous versions stagnate as they fall out of step with the latest trends.

IoT Platform-based AggreGate SCADA/HMI (http://aggregate.tibbo.com/solutions/scada-hmi.html) incorporates all the features of fourth-generation SCADA:

  • Built on the Java platform, the system runs well on Linux, which allows the SCADA core to run on embedded systems. Our OEM partners supply systems built on the Raspberry Pi, BeagleBone Black, and similar low-cost microcomputers. The SCADA core can also access IP communications, serial ports, and discrete and analog inputs.
  • The same solution running on regular servers provides centralized data collection and HMI handling. Servers built on this unified architecture establish peer relationships to exchange data with PLCs.
  • The system is fully compatible with all Tibbo programmable controllers and modules.
  • HMIs can be launched on Linux or Windows PCs and touch panels, or opened in web browsers.
  • There are no such concepts as “development environment” or “runtime environment” in our solution. Development happens over a remote connection directly on a production server, subject to role-based permissions. There are also many ways to clone a whole project or parts of it. Platform capabilities for designing reference projects and derived products will be described in a separate article.
  • AggreGate Platform is tailored to work with M2M devices: controllers can initiate connections to the server on their own. In our terminology, such controllers are called agents.

 

One question remains: why have we developed yet another SCADA, when the international market is saturated with such solutions?

The point is that AggreGate SCADA/HMI, as an AggreGate Platform add-on, is technically just a set of data collection drivers plus a library of typical HMI vector images. All the features a SCADA needs are AggreGate Platform components: the GUI (widget) builder, report editor, alert and event control tools, tag modeling system, failover clustering technology, SDK with DDK, etc.

Our investment in SCADA development was therefore modest compared to building such a system from scratch. To support industrial and building automation projects, we developed drivers for the standard process control protocols (Modbus, OPC, OPC UA, BACnet, DNP3, etc.) and designed several thousand vector images.

Along with standard SCADA functions, AggreGate Platform adds exceptional features, for instance:

  • Statistics storage in Round-Robin Databases (RRD) and NoSQL (big data) databases
  • Unlimited horizontal and vertical system scaling based on the AggreGate distributed architecture
  • Data collection and control via both IT monitoring protocols (SNMP, FTP, JMX, SSH, WMI) and generic ones (SQL, SOAP, CORBA, LDAP…)

These features allow the system to be applied to many projects that are not typical for SCADA solutions. In particular, AggreGate SCADA/HMI is used for fleet telemetry, MES replacement, and monitoring the engineering infrastructure of cell towers and data centers (as part of the AggreGate Data Center Supervisor solution).

In terms of AggreGate architecture and project-building concept, AggreGate SCADA/HMI resembles most other AggreGate-based products. A typical project development cycle includes:

  • Deploying a server, or several servers in a failover configuration
  • Connecting to a storage back end, which can be either a standard relational DBMS or the integrated Apache Cassandra DBMS, capable of saving tens of thousands of tag values per second
  • Connecting controllers and other data sources (e.g. external databases) and configuring tag polling periods
  • Configuring automated tag processing algorithms on the server side. These can be models that define additional calculated tags, alerts that deliver e-mail and SMS notifications, schedules for performing certain jobs, etc.
  • Developing HMIs, dashboards, and navigation between them
  • Setting up user roles, access permissions, and external authentication via LDAP/AD.

Even when running on Linux, the AggreGate server can collect data from OPC servers running on Windows. This is implemented over the IP network via the DCOM protocol. As a result, there is no longer any need to install the SCADA server and the OPC server on a single computer.

There are no such notions as “project”, “development environment”, and “runtime environment” in AggreGate SCADA/HMI. Instead, a single primary server is installed at the worksite. During the initial deployment phase, system engineers can connect to the server locally or remotely to develop HMIs, create PLC user accounts, set up data storage, and so on. After this phase, the same server is used during commissioning and then in regular operation, although migrating the system to another server is possible and simple.

The unified environment makes it possible to introduce modifications into the production server without any interruption. In this case, one should:

  • Make temporary copies of one or two system components (for example, HMIs or alerts)
  • Introduce changes in the copy and test them
  • Replace the original component with the successfully modified copy.

One of the vital parts of a SCADA system is the GUI Builder. Inherited from AggreGate Platform, the GUI Builder assists in drawing and animating any HMI containing both simple components (buttons, captions, text fields, lists, etc.) and complex ones (tables, multi-layer panes, tabbed panes, charts, geographical maps, dynamic SVG images, video windows, etc.).


Even though the AggreGate GUI Builder is similar to other editors of this kind, it has an outstanding feature: alongside the standard absolute layout of visual components, any pane can use a grid layout similar to an HTML table. Moreover, in a complex form with multiple nested panes (simple, multi-layer, tabbed, and split panes), every pane can employ either absolute or grid layout.

Grid layout allows designing HMIs, data input forms, and dashboards that seamlessly adjust to any screen resolution. With absolute layout, components are scaled proportionally, so component height also increases, which leads to unacceptable results for almost all forms and dialogs.

HMIs are animated through bindings that copy data between server object properties and visual component properties in response to server and HMI events. The AggreGate expression language helps apply any operation to the replicated data on the fly (processing numbers, strings, dates and times, tables, etc.).

Any data processed by AggreGate can be used for reporting. The expression builder and the integrated SQL-like query language help retrieve the necessary indicators, and the system creates an optimal template for their visual representation. After that, you can customize the template using the report builder.

As for KPIs, you can configure alerts raised in response to critical object state events or the detection of event chains. The system delivers alert notifications in almost any form (popup windows, sound notifications, e-mail messages, SMS). Automatically launched corrective actions can run either autonomously or under operator control. The alert module supports other typical industrial control features: flapping detection, hysteresis, prioritization, acknowledgement, escalation, etc.

AggreGate SCADA/HMI automates industrial processes, displays all necessary data in the operator center, provides visualization, saves information to a database, and creates reports ‒ in fact, everything that is expected of a SCADA. The system can also promptly analyze technological process efficiency and make decisions on its optimization, i.e. it partially performs the functions of MES software.

Large enterprises usually run several SCADA installations simultaneously, each serving its own function in a certain workshop. The systems are logically bound by the production chain, so their integration and automated transmission of KPIs to the MES/ERP level are required. In the AggreGate ecosystem, this is carried out by exchanging parts of the unified data model between servers using the distributed architecture (http://aggregate.tibbo.com/technology/architecture/distributed-architecture.html).

It often happens that a single site or project needs not only SCADA, but also an IT infrastructure management system, building automation, access control and physical security systems, automatic metering of power consumption, and other solutions in various combinations. AggreGate implements all of these within one installation and allows the modules to be combined on a single server. Where might you encounter this? In data centers, for example, where active networking equipment, climate sensors, UPS and diesel generator units, air conditioners, water-cooling systems, personnel access, and time and attendance must all be monitored. More examples: cell towers, where radio-relay transport network equipment, sector antenna parameters, intrusion detection sensors, and other systems must be controlled; and large warehouses, where it is vital to monitor personnel access, loader activity, and ventilation and lighting systems. Almost any large-scale site can benefit from merging its various monitoring and management systems.

In our upcoming articles, we will describe the distinguishing features of our SCADA solution, various industrial automation problems and their solutions, as well as newsworthy projects we’ve taken part in.

Victor Polyakov, Managing Director, Tibbo Systems

Read more…

Guest blog post by Sandeep Raut



Digital transformation is helping all corners of life, and healthcare is no exception.
Patients discharged from the hospital are given verbal and written instructions regarding their post-discharge care, but many are readmitted within 30 days for various reasons.
Over the last 5 years, the 30-day readmission rate has been almost 19%, with over 25 billion dollars spent per year.

In October 2012 the Centers for Medicaid and Medicare Services (CMS) began penalizing hospitals with the highest readmission rates for health conditions like acute myocardial infarction (AMI), heart failure (HF), pneumonia (PN), chronic obstructive pulmonary disease (COPD) and total hip arthroplasty/total knee arthroplasty (THA/TKA).


Various steps can help reduce readmissions:

  • Send the patient home with a 30-day medication supply, wrapped in packaging that clearly explains timing, dosage, frequency, etc.
  • Have hospital staff make follow-up appointments with the patient's physician, and don't discharge the patient until this schedule is set up
  • Use digital technologies like big data and IoT to collect vitals and keep up visual as well as verbal communication with patients, especially those at high risk of readmission
  • Kaiser Permanente and Novartis are using telemedicine technologies like video cameras for remote monitoring to determine what's happening to the patient after discharge
  • Piedmont Hospital in Atlanta provides home care on wheels, including case management, housekeeping services, and transportation to the pharmacy and physician's office
  • Use data science algorithms to predict patients at high risk of readmission
  • Walgreens launched the WellTransitions program, where patients receive a medication review upon admission and discharge from the hospital, bedside medication delivery, medication education and counseling, and regularly scheduled follow-up support by phone and online
  • HealthLoop is a cloud-based platform that automates follow-up care, keeping doctors, patients, and caregivers connected between visits with clinical information that is insightful, actionable, and engaging
  • Propeller Health, a startup in Madison, has developed an app and sensors that track medication usage and send time and location data to a smartphone
  • Mango Health for iPhone and wearables like the Apple Watch makes managing your medications fun, easy, and rewarding. App features include dose reminders, drug interaction info, a health history, and, best of all, points and rewards just for taking your medicines.
These emerging digital tools enable healthcare organizations to assess and better manage who is at risk for readmission and determine the optimal course of action for patients.

Such tools also enable patients to live at home, in greater comfort and at lower cost, lifting the burden on themselves and their families.
Digital is helping mankind in all ways!
Read more…

Interactive Map of IoT Organizations -- TAKE 2

I am excited to launch the 2nd version of my Interactive Map of IoT Organizations. Thanks for all the support and encouragement from David Oro!

https://www.diku.ca/blog/2016/12/04/interactive-map-of-iot-organizations-take-2/

Here are the material changes from the first version:

  1. Each organization now has its specific address instead of being mapped by city
  2. Now includes the Founder(s) of the organization and a link to more information about them. This is in addition to the “Founded” year which was in the first version
  3. Cleanup of categories. Folks are still trying to determine what it means to be an IoT Platform. For me, it’s most important to focus on standards and integration of systems as there will be organizations that specialize in one aspect of an IoT platform whether it’s the analytics, rules engine, device management, workflow, or visualization functions.
  4. The initial launch of the map had 246 organizations; this new map has 759. Thanks to the many people on LinkedIn and in blog comments who suggested their companies, which accounted for 180 additional organizations. The other 330+ I found on my own by trolling news, Twitter, IoT conference websites, and the “Partners” sections of each organization.

I set up a Twitter account @EyeOhTee and although I still need to tweet more, you may see some interesting news on there and feel free to tweet out this post, plug plug!

Besides the basic data shown on the map, I also track many more attributes of each product. I will publish additional findings and analysis on this blog and here on IoT Central.

I hope you find the map useful, and I would love to hear if, and how, it has helped you. Whether you located a company in your area to collaborate with, found a supplier for a problem you are trying to solve, or are just learning like me, that will have made it worth the time I spend on this.

BGJ

Read more…

MQTT Library Demo

About The Application

To illustrate the use of the MQTT library, we have created two simple Tibbo BASIC applications called "mqtt_publisher" and "mqtt_subscriber".

In our MQTT demo, the publisher device is monitoring three buttons (Tibbits #38). This is done through the keypad (kp.) object.

The three buttons on the publisher device correspond to the red, yellow, and green LEDs (Tibbits #39) on the subscriber device.

As buttons are pushed and released, the publisher device calls mqtt_publish() with the topics "LED/Red", "LED/Yellow", and "LED/Green". Each topic's data is either 0 for "button released" or 1 for "button pressed". The related code is in the on_kp() event handler.

The subscriber device subscribes to all three topics with a single call to mqtt_sub() and the line "LED/#". This is done once, inside callback_mqtt_connect_ok().

With every notification message received from the server, the subscriber device gets callback_mqtt_notif() invoked. The LEDs are turned on and off inside this function's body.
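
For readers without Tibbo hardware, here is a rough Python analog of the subscriber logic, sketched with the Eclipse Paho client (paho-mqtt); the actual demo is written in Tibbo BASIC, so treat this only as an illustration of the message flow, with the broker address as an assumption:

    # Rough analog of mqtt_subscriber, using the Eclipse Paho Python client
    # (pip install paho-mqtt). The broker address below is an assumption.
    import paho.mqtt.client as mqtt

    def on_connect(client, userdata, flags, rc):
        # Like callback_mqtt_connect_ok(): one subscription covers all LED topics.
        client.subscribe("LED/#")

    def on_message(client, userdata, msg):
        # Like callback_mqtt_notif(): "1" means button pressed, "0" means released.
        state = "ON" if msg.payload == b"1" else "OFF"
        print(msg.topic, "->", state)  # a real device would drive the LED here

    client = mqtt.Client()
    client.on_connect = on_connect
    client.on_message = on_message
    client.connect("192.168.1.100", 1883)  # the PC running the MQTT server
    client.loop_forever()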

Testing the MQTT demo

The demo was designed to run on our TPS3 boards, but you can easily modify it for other devices.

The easiest way to get the test hardware is to order "MQTTPublisher" and "MQTTSubscriber" TPS configurations.

You can also order all the parts separately:

  • On the publisher side:
    • TPP3 board in the TPB3 enclosure.
    • You will need Tibbits #00-3 in sockets S1, S3, S5; and
    • Tibbits #38 in sockets S2, S4, S6;
    • You will also need some form of power, i.e. Tibbits #10 and #18, plus a suitable 12V power adaptor.
  • On the subscriber side:
    • TPP3 board in the TPB3 enclosure.
    • You will need Tibbits #00-3 in sockets S1, S3, S5;
    • Tibbit #39-2 (red) in S2;
    • Tibbit #39-3 (yellow) in S4;
    • Tibbit #39-1 (green) in S6;
    • You will also need some form of power, i.e. Tibbits #10 and #18, plus a suitable 12V power adaptor.

Test steps

  • Install a suitable MQTT server. We suggest HiveMQ (www.hivemq.com):
    • Download the software here: www.hivemq.com/downloads/ (you will be asked to register).
    • Unzip the downloaded file.
    • Go to the "windows-service" folder and execute "installService.bat".
    • Go to the "bin" folder and launch "run.bat".
    • You do not need to configure any user names or passwords.
  • Open mqtt_publisher and mqtt_subscriber projects in two separate instances of TIDE, then correct the following in the projects' global.tbh files:
    • OWN_IP - assign a suitable unoccupied IP to the publisher and to the subscriber (you know that they will use two different IPs, right?);
    • MQTT_SERVER_HOST - set this to the address of the PC on which you run HiveMQ.
  • Select your subscriber and publisher devices as debug targets, and run corresponding demo apps on them.
  • Press buttons on the publisher to see the LEDs light up on the subscriber.
  • If you are running in debug mode you will see a lot of useful debug info printed in the output panes of both TIDE instances.
  • You can switch to release mode to see how fast this works without the debug printing.
Read more…

Open Source for IoT Software Stacks

Guest post by Ian Skerrett, Eclipse Foundation

In the previous article, Three Software Stacks Required to Implement IoT, we introduced the three software stacks required for any IoT solution: 1) Constrained Devices, 2) IoT Gateways and Smart Devices, and 3) IoT Cloud Platforms. In part 2 of this series, we discuss how open source software communities, and in particular the Eclipse IoT community, are becoming key providers of the building blocks required to implement each of the three stacks. Just as the LAMP (Linux/Apache HTTP Server/MySQL/PHP) stack came to dominate web infrastructure, it is believed that a similar open source stack will dominate IoT deployments.

The Importance of open stacks for IoT

The separation of concerns brought by separating any IoT architecture into three stacks is a great step forward for building scalable and maintainable solutions. What’s more, building a software stack on top of open technologies helps achieve the following:

  1. Open standards ensure interoperability – The use of proprietary communication protocols creates silos of IoT networks that cannot easily exchange information. Building IoT stacks on top of open standards (radio protocols, messaging protocols, etc.) helps with overall interoperability in IoT.
  2. Software reuse reduces TCO – The total cost of ownership is an important consideration for any IoT solution provider. Open source technology is often made available as building blocks that can be reused across several solutions. An IoT solution based on open source software may, for example, leverage the same protocol implementations in the devices and in the gateways.
  3. No vendor lock-in – Building a solution on top of proprietary technologies and software exposes you to the risk of a third-party vendor changing its roadmap or dropping support for its product. An IoT stack based on open source technology enables solution providers to adapt the software to their needs when a feature is missing, without having to ask or wait for the feature to be implemented by a given vendor.
  4. Open stacks attract developers – Open source communities are vibrant ecosystems of companies and individuals who innovate and collaborate. A company using open source software will typically find it easier to attract or find developers who have the required skills to work with the stack.
  5. Reduced risk and time to market – Users of open source technology benefit from reusing technology that has been used and tested by others, which reduces overall development time and ensures a smoother transition from prototype to pilot to production.

Open Source Technology for IoT

The open source community has become an active producer of technology for IoT solutions. Like the LAMP stack for websites, there are a set of open source projects that can be used as the building blocks for an IoT solution architecture.

The Eclipse IoT community is very active in providing technology that can be used in each stack of an IoT solution. Eclipse IoT has 26 different open source projects that address different features of the IoT stacks. In addition to the Eclipse IoT projects, there are other open source projects that are also relevant to an IoT stack. The next few sections provide a brief summary of how Eclipse IoT and other open source projects can be used to implement the IoT stacks.

Open Source Stack for Constrained Devices

Eclipse IoT provides a set of libraries that can be deployed on a constrained embedded device to provide a complete IoT development stack.

  • IoT Operating Systems – RIOT, FreeRTOS, Zephyr, Apache Mynewt.
  • Hardware Abstraction – Eclipse Edje provides a high-level Java API for accessing hardware features provided by microcontrollers (e.g., GPIO, ADC, MEMS, etc.). It can directly connect to native libraries, drivers, and board support packages provided by silicon vendors.
  • Device Management – Eclipse Wakaama provides a C implementation of the OMA LWM2M standard.
  • Communication – Open source projects like Eclipse Paho and Eclipse Wakaama provide implementations of IoT communication protocols such as MQTT and LWM2M, respectively. Eclipse Paho has a C implementation of MQTT that is less than 2,000 lines of code.

Open Source Stack for Gateways: Connected and Smart Things

Within the Eclipse IoT community there are a variety of projects that work to provide the capabilities that an IoT gateway requires.

Eclipse Kura provides a general purpose middleware and application container for IoT gateway services. An IoT gateway stack based on Eclipse Kura would include the following:

  • Operating system – Linux (Ubuntu/Ubuntu Core, Yocto-based linux distribution), Windows.
  • Application container or runtime environment – Eclipse Equinox or Eclipse Concierge (OSGi Runtime).
  • Connectivity support for devices – Eclipse Kura includes APIs to interface with the gateway I/Os (e.g., serial, RS-485, BLE, GPIO) and supports many field protocols that can be used to connect to devices, e.g., Modbus, CAN bus, etc.
  • Networking support – Eclipse Kura provides advanced networking and routing capabilities over a wide range of interfaces (cellular, Wi-Fi, Ethernet, etc.).
  • Data management & Messaging – Eclipse Kura implements a native MQTT-based messaging solution that allows applications running on the gateway to communicate transparently with a cloud platform, without having to deal with the availability of network interfaces or with how IoT data is represented. Support for additional messaging protocols is available through the built-in Apache Camel message routing engine.
  • Remote management – Eclipse Kura provides a remote management solution based on the MQTT protocol that makes it possible to monitor the overall health of an IoT gateway and to control (install, update, and configure) the software it runs.

Eclipse SmartHome provides an IoT gateway platform that is specifically focused on the home automation domain. An Eclipse SmartHome stack would include the following:

  • Operating system – Linux (Ubuntu/Ubuntu Core, Yocto-based linux distribution), Windows or macOS.
  • Application container or runtime environment – Eclipse Equinox or Eclipse Concierge (OSGi Runtimes).
  • Communication and Connectivity – Eclipse SmartHome brings support for many off-the-shelf home automation devices such as Belkin WeMo, LIFX, Philips Hue, Sonos, etc. Eclipse SmartHome focuses on enabling home automation solutions to communicate within an “Intranet of Things”; therefore offline capabilities are a paramount design goal.
  • Data management & Messaging – Eclipse SmartHome has an internal event bus, which can be exposed to external systems through e.g. SSE or MQTT. It furthermore provides mechanisms for persisting values in databases and for running local business logic through a rule engine.
  • Remote management – Eclipse SmartHome supports device onboarding and configuration through its APIs. It furthermore provides an infrastructure to perform firmware update of connected devices.

Eclipse 4DIAC provides an industrial-grade open source infrastructure for distributed industrial process measurement and control systems based on the IEC 61499 standard. 4DIAC is ideally suited for Industrie 4.0 and Industrial IoT applications in a manufacturing setting. The IEC 61499 standard defines a domain-specific modeling language for developing distributed industrial control solutions, providing a vendor-independent format and simplifying controller-to-controller communication.

Open Source Stack for IoT Cloud Platforms

The Eclipse IoT Community has a number of projects that are focused on providing the functionality required for IoT cloud platforms.

Eclipse Kapua is a modular platform providing the services required to manage IoT gateways and smart edge devices. Kapua provides a core integration framework and an initial set of core IoT services including a device registry, device management services, messaging services, data management, and application enablement.

The goal of Eclipse Kapua is to create a growing ecosystem of micro services through the extensions provided by other Eclipse IoT projects and organizations.

Eclipse OM2M is an IoT platform specific to the telecommunications industry, based on the oneM2M specification. It provides a horizontal Common Service Entity (CSE) that can be deployed in an M2M server, a gateway, or a device. Each CSE provides application enablement, security, triggering, notification, persistency, device interworking, and device management.

The Eclipse IoT community also has a number of standalone projects that provide functionality to address key features required for an IoT cloud platform. These projects can be used independently of Eclipse Kapua and over time some may be integrated into Kapua.

Connectivity and Protocol Support

  • Eclipse Hono provides a uniform API for interacting with devices using arbitrary protocols, as well as an extensible framework to add other protocols.
  • Eclipse Mosquitto provides an implementation of an MQTT broker.

Device Management and Device Registry

  • Eclipse Leshan provides an implementation of the OMA LWM2M device management protocol.  
  • Eclipse hawkBit provides the management tools to roll out software updates to devices and gateways.

Event management and application enablement

  • Eclipse Hono helps to expose consistent APIs for consuming telemetry data or sending commands to devices, so as to rationalize IoT application development.

Analytics and Visualization – Outside of the Eclipse IoT community there are many open source options for data analytics and visualization, including Apache Hadoop, Apache Spark, and Apache Storm. Within the Eclipse community, Eclipse BIRT provides support for dashboards and reporting of data stored in a variety of data repositories.

Open Source for Cross-Stack Functionality

Security

  • Eclipse tinydtls provides an implementation of the DTLS protocol, offering transport-layer security between the device and the server.
  • Eclipse ACS provides an access control service that allows each stack in an IoT solution to protect their resources using a RESTful interface.

Ontologies

  • Eclipse Unide is a protocol for Production Performance Management (PPM) in the manufacturing industry. It establishes an ontology for sharing machine performance information.
  • Eclipse Whiskers implements the OGC SensorThings API that provides a standard way to share location based information for devices.

Development Tools and SDKs

  • Eclipse Vorto provides a set of tools and repository for creating device information models.
  • Eclipse JDT and CDT allow for integrated development of IoT solutions. For example, Eclipse Kura applications can be tested and debugged from within the Eclipse Java IDE (JDT).
  • Eclipse Che provides a browser-based IDE that can be used for building IoT solutions.

Conclusion

An IoT solution requires a substantial amount of technology in the form of software, hardware, and networking. In this series of articles we have defined the software requirements across three different stacks and the open source software that can be used to build them.

The last twenty years have proven that open source software and open source communities are key providers of technology for the software industry. The Internet of Things is following a similar trend, and it is expected that more and more IoT solutions will be built on open source software.

For the past five years, the Eclipse IoT community has been very active in building a portfolio of open source projects that companies and individuals use today to build their IoT solutions. If you are interested in participating, please join us and visit https://iot.eclipse.org.

Read more…

Tibbo Project System (TPS) is a highly configurable, affordable, and innovative automation platform. It is ideal for home, building, warehouse, and production floor automation projects, as well as data collection, distributed control, industrial computing, and device connectivity applications.

Suppliers of traditional “control boxes” (embedded computers, PLCs, remote automation and I/O products, etc.) typically offer a wide variety of models differing in their I/O capabilities. Four serial ports and six relays. Two serial ports and eight relays. One serial port, four relays, and two sensor inputs. These lists go on and on, yet never seem to contain just the right mix of I/O functions you are looking for.

Rather than offering a large number of models, Tibbo Technology takes a different approach: Our Tibbo Project System (TPS) utilizes Tibbits® – miniature electronic blocks that implement specific I/O functions. Need three RS232 ports? Plug in exactly three RS232 Tibbits! Need two relays? Use a relay Tibbit. This module-based approach saves you money by allowing you to precisely define the features you want in your automation controller.

Here is a closer look at the process of building a custom Tibbo Project System.

Start with a Tibbo Project PCB (TPP)

 

 

A Tibbo Project PCB is the foundation of TPS devices.

Available in two sizes – medium and large – each board carries a CPU, memory, an Ethernet port, power input for +5V regulated power, and a number of sockets for Tibbit Modules and Connectors.

Add Tibbit® Blocks

Tibbits (as in “Tibbo Bits”) are blocks of prepackaged I/O functionality housed in brightly colored rectangular shells. Tibbits are subdivided into Modules and Connectors.

Want an ADC? There is a Tibbit Module for this. 24V power supply? Got that! RS232/422/485 port? We have this, and many other Modules, too.

Same goes for Tibbit Connectors. DB9 Tibbit? Check. Terminal block? Check. Infrared receiver/transmitter? Got it. Temperature, humidity, and pressure sensors? On the list of available Tibbits, too.

Assemble into a Tibbo Project Box (TPB)

Most projects require an enclosure. Designing one is a tough job. Making it beautiful is even tougher, and may also be prohibitively expensive. Finding or making the right housing is a perennial obstacle to completing low-volume and hobbyist projects.

Strangely, suppliers of popular platforms such as Arduino, Raspberry Pi, and BeagleBone do not bother with providing any enclosures, and available third-party offerings are primitive and flimsy.

Tibbo understands enclosure struggles and here is our solution: Your Tibbo Project System can optionally be ordered with a Tibbo Project Box (TPB) kit.

The ingenious feature of the TPB is that its top and bottom walls are formed by Tibbit Connectors. This eliminates a huge problem of any low-volume production operation – the necessity to drill holes and openings in an off-the-shelf enclosure.

The result is a neat, professional-looking housing every time, even for projects with a production quantity of one.

Like boards, our enclosures are available in two sizes – medium and large. Medium-size project boxes can be ordered in the LCD/keypad version, thus allowing you to design solutions incorporating a user interface.

 

Unique Online Configurator

To simplify the process of planning your TPS we have created an Online Configurator.

The Configurator allows you to select a Tibbo Project PCB (TPP), “insert” Tibbit Modules and Connectors into the board’s sockets, and specify additional options. These include choosing whether or not you wish to add a Tibbo Project Box (TPB) enclosure, an LCD and keypad, a DIN rail mounting kit, and so on. You can choose to have your system shipped fully assembled or as a parts kit.

The Configurator also makes sure you specify a valid system by watching out for errors. For example, it verifies that the total power consumption of your future TPS device does not exceed the available power budget. It also checks the placement of Tibbits, ensuring that there are no mistakes in their arrangement.

Completed configurations can be immediately ordered from our online store. You can opt to keep each configuration private, share it with other registered users, or make it public for everyone to see.

Develop your application


Like all programmable Tibbo hardware, Tibbo Project System devices are powered by Tibbo OS (TiOS).

Use our free Tibbo IDE (TIDE) software to create and debug sophisticated automation applications in Tibbo BASIC, Tibbo C, or a combination of the two languages.

To learn more about the Tibbo Project System, click here.

Read more…

OPC Server from Tibbo Technology

OPC (“Open Platform Communications”) is a set of standards and specifications for industrial telecommunication. OPC specifies the transfer of real-time plant data between control devices from various producers, providing a common bridge between process control hardware and Windows-based software applications. OPC aimed to reduce the duplicated effort performed by hardware manufacturers and their software partners.

 

The most typical OPC specification, OPC Data Access (OPC DA), is supported by Tibbo OPC Server. Any device compatible with the Tibbo AggreGate protocol can be a data source. AggreGate is a white-label IoT integration platform that uses up-to-date network technologies to control, configure, monitor, and support electronic devices, along with distributed networks of such devices. It also helps you collect device data in the cloud, where you can slice and dice it to fit your needs. In addition, the platform lets other enterprise applications transparently access this data via the AggreGate server.

Tibbo OPC Server has the AggreGate network protocol built in. It can both interact with Tibbo devices via the AggreGate agent protocol and connect to an AggreGate server. Open-source implementations of the AggreGate agent protocol are published for the Java, C#, and C++ programming languages, so your connection scheme is not restricted to AggreGate servers or Tibbo devices.

 

Examples

A simple example: a TPS reads Tibbit #29 (an ambient temperature meter) and forwards the data to the OPC server via the AggreGate agent protocol.

A more complex example: a Windows-based PC controls a wood processing machine by means of an AggreGate server using the Modbus protocol. If Tibbo OPC Server is linked with the AggreGate server, the data from the machine is sent to the OPC server, and we can therefore operate and monitor the machine via any OPC client.

Technical Specification

  • Compatibility with Windows XP/2003 or later (Microsoft Visual C++ 2013 redistributable is required - installed automatically)

  • Support of DA Asynchronous I/O 2.0 and Synchronous I/O with COM/DCOM technology

Tibbo OPC Server transmits the information on the Value, Quality and Timestamp of an item (tag) to the OPC Client applications. These fields are read from the AggreGate variables.

 

The process values are set to Bad [Configuration Error] quality if OPC Server loses communication with its data source (AggreGate Agent or AggreGate Server). The quality is set to Uncertain [Non-Specific] if the AggreGate variable value is empty.
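
The rule is simple enough to sketch; the snippet below is a hypothetical Python illustration of the behavior just described (the real logic is internal to Tibbo OPC Server), using the standard OPC DA quality constants:

    # Hypothetical illustration of the quality rule described above.
    OPC_QUALITY_GOOD = 0xC0        # Good [Non-Specific]
    OPC_QUALITY_UNCERTAIN = 0x40   # Uncertain [Non-Specific]
    OPC_QUALITY_BAD_CONFIG = 0x04  # Bad [Configuration Error]

    def item_quality(source_connected, value):
        if not source_connected:   # lost link to AggreGate Agent/Server
            return OPC_QUALITY_BAD_CONFIG
        if value is None:          # empty AggreGate variable value
            return OPC_QUALITY_UNCERTAIN
        return OPC_QUALITY_GOOD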

The concordance table below maps AggreGate variable types to OPC data types:

AggreGate Data Type    OPC Data Type
-------------------    -------------
INTEGER                VT_I4
STRING                 VT_BSTR
BOOLEAN                VT_BOOL
LONG                   VT_I8
FLOAT                  VT_R4
DOUBLE                 VT_R8
DATE                   VT_DATE
DATATABLE              VT_BSTR (by default)
COLOR                  VT_I4
DATA                   VT_BSTR

To learn more about Tibbo OPC server, click here

Read more…

The IoT communication protocols

Guest post by James Stansberry

Messaging protocols for “lightweight” IoT nodes

A fascinating article from Philip N. Howard at George Washington University asserts that based on multiple sources, the number of connected devices surpassed the number of people on the planet in 2014. Further, it estimates that by 2020 we will be approaching 50 billion devices on the Internet of Things (IoT).

[Figure: Philip N. Howard’s Study of Connected Devices]

In other words, while humans will continue to connect their devices to the web in greater numbers, a bigger explosion will come from “things” connecting to the web that weren’t before, or which didn’t exist, or which now use their connection as more of a core feature.

The question is, how will these billions of things communicate between the end node, the cloud, and the service provider?

This article dives into that subject as it relates to a particular class of devices that are very low cost, battery-powered, and which must operate at least seven years without any manual intervention.

In particular, it looks at two emerging messaging protocols that address the needs of these “lightweight” IoT nodes. The first, MQTT, is old by today’s standards, dating back to 1999. The second, CoAP, is relatively new but gaining traction.

IoT Communication Protocol Requirements

One definition of IoT is connecting devices to the internet that were not previously connected. A factory owner may connect high-powered lights. A triathlete may connect a battery-powered heart-rate monitor. A home or building automation provider may connect a wireless sensor with no line power source.

But the important thing here is that in all the above cases the “Thing” must communicate through the Internet to be considered an “IoT” node.

Since it must use the Internet, it must also adhere to the Internet Engineering Task Force’s (IETF) Internet Protocol Suite. However, the Internet has historically connected resource-rich devices with lots of power, memory and connection options. As such, its protocols have been considered too heavy to apply wholesale for applications in the emerging IoT.

[Figure: Internet Protocol Suite Overview]

There are other aspects of the IoT which also drive modifications to IETF’s work. In particular, networks of IoT end nodes will be lossy, and the devices attached to them will be very low power, saddled with constrained resources, and expected to live for years.

The requirements for both the network and its end devices might look like the table below. This new model needs new, lighter-weight protocols that don’t demand large amounts of resources.

MQTT and CoAP address these needs through small message sizes, message management, and lightweight message overhead. We look at each below.

[Table: Requirements for low-cost, power-constrained devices and associated networks]

MQTT and CoAP: Lightweight IoT Communications Protocols

MQTT and CoAP allow for communication from Internet-based resource-rich devices to IoT-based resource-constrained devices. Both CoAP and MQTT implement a lightweight application layer, leaving much of the error correction to message retries, simple reliability strategies, or reliance on more resource rich devices for post-processing of raw end-node data.

[Figure: Conceptual Diagram of MQTT and CoAP Communication to Cloud / Phone]

MQTT Overview

IBM invented Message Queuing Telemetry Transport (MQTT) for satellite communications with oil field equipment. It had reliability and low power at its core and so made good sense to be applied to IoT networks.

The MQTT standard has since been adopted by the OASIS open standards society and released as version 3.1.1. It is also supported within the Eclipse community, as well as by many commercial companies who offer open source stacks and consulting.

MQTT uses a “publish/subscribe” model, and requires a central MQTT broker to manage and route messages among an MQTT network’s nodes. Eclipse describes MQTT as “a many-to-many communication protocol for passing messages between multiple clients through a central broker.”

MQTT uses TCP for its transport layer, which is characterized as “reliable, ordered and error-checked.”

MQTT Strengths

Publish / Subscribe Model

MQTT’s “pub/sub” model scales well and can be power efficient. Brokers and nodes publish information and others subscribe according to the message content, type, or subject. (These are MQTT standard terms.) Generally the broker subscribes to all messages and then manages information flow to its nodes.

There are several specific benefits to the Pub/Sub model.

Space decoupling

While a node and the broker need each other’s IP addresses, nodes can publish information and subscribe to other nodes’ published information without any knowledge of each other, since everything goes through the central broker. This reduces the overhead that can accompany TCP sessions and ports, and allows the end nodes to operate independently of one another.

Time decoupling

A node can publish its information regardless of other nodes’ states. Other nodes can then receive the published information from the broker when they are active. This allows nodes to remain in sleepy states even when other nodes are publishing messages directly relevant to them.

Synchronization decoupling

A node in the midst of an operation is not interrupted to receive a published message to which it is subscribed. The message is queued by the broker until the receiving node has finished its current operation. This saves operating current and reduces repeated work by avoiding interruptions of ongoing operations or sleep states.

Security

MQTT uses unencrypted TCP and is not secure “out of the box.” But because it uses TCP, it can – and should – use TLS/SSL internet security. TLS is a very secure method for encrypting traffic but is also resource-intensive for lightweight clients due to its required handshake and increased packet overhead. For networks where energy is a very high priority and security much less so, encrypting just the packet payload may suffice.
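
With the Paho Python client, for example, enabling TLS is a single call before connecting (the broker host is an assumption; 8883 is the conventional MQTT-over-TLS port):

    import paho.mqtt.client as mqtt

    client = mqtt.Client()
    client.tls_set()  # use the system CA store; pass ca_certs= for a private CA
    client.connect("broker.example.com", 8883)  # broker host is an assumption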

MQTT Quality of Service (QoS) levels

The term “QoS” means other things outside of MQTT. In MQTT, “QoS” levels 0, 1 and 2 describe increasing levels of guaranteed message delivery.

MQTT QoS Level 0 (At most once)

This is commonly known as “Fire and forget” and is a single transmit burst with no guarantee of message arrival. This might be used for highly repetitive message types or non-mission critical messages.

MQTT QoS Level 1 (At least once)

This attempts to guarantee that a message is received at least once by the intended recipient. Once a published message is received and understood by the intended recipient, the recipient acknowledges it with an acknowledgement message (PUBACK) addressed to the publishing node. Until the PUBACK is received, the publisher stores the message and retransmits it periodically. This type of message may be useful for a non-critical node shutdown.

MQTT QoS Level 2 (Exactly once)

This level attempts to guarantee that the message is received and decoded exactly once by the intended recipient. It is the most reliable MQTT QoS level. The publisher sends a message announcing that it has a QoS level 2 message. The intended recipient receives the announcement, decodes it, and indicates that it is ready to receive the message. The publisher then relays its message. Once the recipient understands the message, it completes the transaction with an acknowledgement. This type of message may be useful for turning lights or alarms in a home on or off.
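
In client libraries, the QoS level is typically just a per-message parameter. A sketch with the Paho Python client (broker host and topics are assumptions):

    import paho.mqtt.client as mqtt

    client = mqtt.Client()
    client.connect("broker.example.com", 1883)
    client.loop_start()
    client.publish("home/sensor/temp", "21.5", qos=0)  # at most once: fire and forget
    client.publish("home/node/status", "down", qos=1)  # at least once: resent until PUBACK
    client.publish("home/lights/all", "ON", qos=2)     # exactly once: four-step handshake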

Last Will and Testament

MQTT provides a “last will and testament” (LWT) message that a node registers with the broker in case it is unexpectedly disconnected from the network. The LWT captures the node’s state and purpose, including the types of messages it published and its subscriptions. If the node disappears, the broker notifies all subscribers of the node’s LWT. And when the node returns, the broker notifies it of its prior state. This feature accommodates lossy networks and scalability nicely.
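
In the Paho Python client, for instance, the will is registered before connecting (the topic, payload, and broker host here are illustrative):

    import paho.mqtt.client as mqtt

    client = mqtt.Client(client_id="oven-node")
    # The broker publishes this message only if the node vanishes without
    # a clean DISCONNECT.
    client.will_set("kitchen/oven/status", payload="offline", qos=1, retain=True)
    client.connect("broker.example.com", 1883)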

Flexible topic subscriptions

An MQTT node may subscribe to all messages within a given functionality, as shown in the sketch below. For example, a kitchen “oven node” may subscribe to all messages matching “kitchen/oven/+”, with “+” as a single-level wildcard. This allows for a minimal amount of code (i.e., memory and cost). As another example, a node in the kitchen may be interested in all temperature information regardless of the reporting node’s functionality. In this case, “kitchen/+/temp” will collect any message in the kitchen from any node reporting “temp”. There are other equally useful MQTT wildcards for reducing code footprint and therefore memory size and cost.
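
A subscriber sketch using both wildcard patterns from the examples above (Paho Python client; the broker host is an assumption):

    import paho.mqtt.client as mqtt

    def on_message(client, userdata, msg):
        print(msg.topic, msg.payload)

    client = mqtt.Client()
    client.on_message = on_message
    client.connect("broker.example.com", 1883)
    client.subscribe("kitchen/oven/+")  # everything addressed to the oven
    client.subscribe("kitchen/+/temp")  # every temperature topic in the kitchen
    client.loop_forever()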

Issues with MQTT

Central Broker

The use of a central broker can be a drawback for distributed IoT systems. For example, a system may start small with a remote control and a window shade, requiring no central broker. Then as the system grows, adding security sensors, light bulbs, or other window shades, the network naturally expands and may need a central broker. However, none of the individual nodes wants to take on that cost and responsibility, as it requires resources, software, and complexity not core to the end-node function.

In systems that already have a central broker, it can become a single point of failure for the complete network. For example, if the broker is a powered node without a battery back-up, then battery-powered nodes may continue operating during an electrical outage while the broker is off-line, thus rendering the network inoperable.

TCP

TCP was originally designed for devices with more memory and processing resources than may be available in a lightweight IoT-style network. For example, the TCP protocol requires that connections be established in a multi-step handshake process before any messages are exchanged. This drives up wake-up and communication times, and reduces battery life over the long run.

TCP also works best when two communicating nodes hold their sockets open for each other continuously in a persistent session, which again may be difficult for energy- and resource-constrained devices.

Wake-up time

Again, using TCP without session persistence can require incremental transmit time for connection establishment. For nodes with periodic, repetitive traffic, this can lead to lower operating life.

CoAP Overview

With the growing importance of the IoT, the Internet Engineering Task Force (IETF) took on lightweight messaging and defined the Constrained Application Protocol (CoAP). As defined by the IETF, CoAP is for “use with constrained nodes and constrained (e.g., low-power, lossy) networks.” The Eclipse community also supports CoAP as an open standard, and like MQTT, CoAP is commercially supported and growing rapidly among IoT providers.

CoAP is a client/server protocol that provides a one-to-one “request/report” interaction model with accommodations for multicast, although multicast is still in the early stages of IETF standardization. Unlike MQTT, which was adapted to IoT needs from a decades-old protocol, the IETF specified CoAP from the outset to support IoT with lightweight messaging for constrained devices operating in constrained environments. CoAP is designed to interoperate with HTTP and the RESTful web through simple proxies, making it natively compatible with the Internet.

Strengths of CoAP

Native UDP

CoAP runs over UDP which is inherently and intentionally less reliable than TCP, depending on repetitive messaging for reliability instead of consistent connections. For example, a temperature sensor may send an update every few seconds even though nothing has changed from one transmission to the next. If a receiving node misses one update, the next will arrive in a few seconds and is likely not much different than the first.

UDP’s connectionless datagrams also allow for faster wake-up and transmit cycles as well as smaller packets with less overhead. This allows devices to remain in a sleepy state for longer periods of time conserving battery power.

Multi-cast Support

A CoAP network is inherently one-to-one; however, it accommodates one-to-many and many-to-many multicast requirements. This is possible because CoAP is built on top of IPv6, which allows multicast addressing for devices in addition to their normal IPv6 addresses. Note that multicast delivery to sleeping devices is unreliable, and it can hurt a device’s battery life if the device must wake regularly to receive these messages.

Security

CoAP uses DTLS on top of its UDP transport protocol. Like TCP, UDP is unencrypted but can be – and should be – augmented with DTLS.

Resource / Service Discovery

CoAP uses URIs to provide standard presentation and interaction expectations for network nodes, as sketched below. This allows a degree of autonomy in the message packets, since a target node’s capabilities are partly understood from its URI details. In other words, a battery-powered sensor node may have one type of URI while a line-powered flow control actuator may have another. Nodes communicating with the battery-powered sensor might be programmed to expect longer response times, more repetitive information, and limited message types. Nodes communicating with the line-powered flow control actuator might be programmed to expect rich, detailed messages delivered very rapidly.
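
As a sketch, a CoAP GET using the aiocoap Python library (the library choice and device address are assumptions); the /.well-known/core URI is the standard entry point for discovering the resources a node exposes:

    import asyncio
    from aiocoap import Context, Message, GET

    async def main():
        ctx = await Context.create_client_context()
        # Standard CoAP resource discovery: list the resources a node exposes.
        request = Message(code=GET, uri="coap://192.168.1.50/.well-known/core")
        response = await ctx.request(request).response
        print(response.code, response.payload.decode())

    asyncio.run(main())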

Asynchronous Communication

Within the CoAP protocol, most messages are sent and received using the request/report model; however, there are other modes of operation that allow nodes to be somewhat decoupled. For example, CoAP has a simplified “observe” mechanism similar to MQTT’s pub/sub that allows nodes to observe others without actively engaging them.

As an example of the “observe” mode, node 1 can observe node 2 for specific transmission types; then, any time node 2 publishes a relevant message, node 1 receives it when it awakens and queries the node holding the message. It’s important to note that one of the network nodes must hold messages for observers. This is similar to MQTT’s broker model, except that CoAP has no broker requirement, and therefore no guarantee that messages can be held or queued for observers.
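
A hedged sketch of the observe mechanism with aiocoap (the sensor URI is an assumption):

    import asyncio
    from aiocoap import Context, Message, GET

    async def observe_temp():
        ctx = await Context.create_client_context()
        request = Message(code=GET, uri="coap://192.168.1.50/sensor/temp")
        request.opt.observe = 0              # register as an observer
        pr = ctx.request(request)
        print("initial:", (await pr.response).payload)
        async for update in pr.observation:  # server-pushed notifications
            print("update:", update.payload)

    asyncio.run(observe_temp())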

There are currently draft additions to the standard that may give CoAP a function similar to MQTT’s pub/sub model over the short-to-medium term. The leading candidate today is a draft proposal from Michael Koster that would allow CoAP networks to implement a pub/sub model like MQTT’s mentioned above.

Issues with CoAP

Standard Maturity

MQTT is currently a more mature and stable standard than CoAP. It has been Silicon Labs’ experience that an MQTT network is easier to get up and running quickly than a similar one using CoAP. That said, CoAP has tremendous market momentum and is rapidly evolving to provide a standardized foundation, with important add-ons in the ratification pipeline now.

It is likely that CoAP will reach a similar level of stability and maturity as MQTT in the very near term. But the standard is evolving for now, which may present some troubles with interoperability.

Message Reliability (QoS level)

CoAP’s “reliability” is the counterpart of MQTT’s QoS: it offers a very simple choice between a “confirmable” message and a “non-confirmable” message. A confirmable message is acknowledged with an acknowledgement message (ACK) from the intended recipient. This confirms the message was received but stops short of confirming that its contents were decoded correctly, or at all. A non-confirmable message is “fire and forget.”
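
In aiocoap terms, this choice is simply the message type (the URIs are illustrative):

    from aiocoap import Message, PUT, NON

    # Default message type is CON (confirmable): retransmitted until ACKed.
    confirmable = Message(code=PUT, uri="coap://192.168.1.50/actuator", payload=b"on")
    # NON (non-confirmable): fire and forget, no ACK expected.
    fire_and_forget = Message(code=PUT, uri="coap://192.168.1.50/actuator",
                              payload=b"on", mtype=NON)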

Summary

The two messaging protocols MQTT and CoAP are emerging as the leading lightweight messaging protocols for the booming IoT market. Each has benefits and each has issues. As a leader in mesh networking, where lightweight nodes are a necessary aspect of almost every network, Silicon Labs has implemented both protocols, including gateway bridging logic to allow for inter-standard communication.

Further Reading

MQTT

Specification - http://docs.oasis-open.org/mqtt/mqtt/v3.1.1/os/mqtt-v3.1.1-os.html

Excellent source for MQTT information – http://www.hivemq.com/mqtt-essentials-wrap-up/

CoAP

Specification - https://tools.ietf.org/html/rfc7252

Excellent source for CoAP information - http://coap.technology/

MQTT-SN

Specification – http://mqtt.org/2013/12/mqtt-for-sensor-networks-mqtt-sn

General coverage of IoT messaging protocols

Excellent white paper on using MQTT, CoAP, and other messaging protocols –  http://www.prismtech.com/sites/default/files/documents/MessagingComparsionNov2013USROW_vfinal.pdf

This article originally appeared here.

Read more…

New IoT App Makes Drivers Safer

Transportation has become one of the most frequently highlighted areas where the internet of things can improve our lives. Specifically, a lot of people are excited about the IoT's potential to further the progress toward entire networks of self-driving cars. We hear a lot about the tech companies that are involved in building self-driving cars, but it's the IoT that will actually allow these vehicles to operate. In fact, CNET quoted one IoT expert just last year as saying that because of the expanding IoT, self-driving cars will rule the roads by 2030.

On a much smaller scale, there are also some niche applications of the IoT that are designed to fix specific problems on the road. For instance, many companies have looked to combat distracted driving by teenagers through IoT-related tools. As noted by PC World, one device called the Smartwheel monitors teens' driving activity by sensing when they're keeping both hands on the wheel. The device sounds an alert when a hand comes off the wheel and communicates to a companion app that compiles reports on driver performance. This is a subtle way in which the IoT helps young drivers develop better habits.

In a way, these examples cover both extremes of the effect the IoT is having on drivers. One is a futuristic idea that's being slowly implemented to alter the very nature of road transportation. The other is an application for individuals meant to make drivers safer one by one. But there are also some IoT-related tools that fall somewhere in the middle of the spectrum. One is an exciting new app that seeks to make the roads safer for the thousands of shipping fleet drivers operating on a daily basis.

At first this might sound like a niche category. In reality, the innumerable companies and agencies that rely on shipping and transportation fleets have a ton of drivers to take care of, which means supervising vehicle performance, safety, and more for each and every one of them. That activity comprises a significant portion of road traffic, particularly in cities and on highways. These operations can be simplified and streamlined through Networkfleet Driver, which Verizon describes as a tool to help employees manage routes, maintenance, communication, and driving habits all in one place.

The app can communicate up-to-date routing changes or required stops, inform drivers of necessary vehicle repairs or upkeep, and handle communication from management. It can also make note of dangerous habits (like a tendency to speed or make frequent sudden stops), helping the driver to identify bad habits and helping managers to recommend safer performance. All of this is accomplished through various IoT sensors on vehicles interacting automatically with the app, and with systems that can be monitored by management.

The positive effect, while difficult to quantify, is substantial. Fleet drivers make up a significant portion of road activity, and through the use of the IoT we can make sure that the roads are safer for everyone.

Read more…

The Internet of Things has raised concerns over security. Nowadays it is possible to control your home using your smartphone, and in the coming years mobile devices will work as a remote control for nearly everything in your house.

Some devices display one or several vulnerabilities that hackers can exploit to infiltrate them, and through them the whole network of the connected home. For instance:

1. During configuration, data – including the device ID and MAC address – is sometimes transmitted in plain text.

2. The communication between the device and the app passes unencrypted through the manufacturer’s servers.

3. The hotspot is poorly secured with a weak username and password and sometimes remains active after configuration.

4. The device comes pre-installed with a Telnet client carrying default credentials.

With rising cases of identity theft and vishing, it has become essential to install one of these five free tools on your smartphone to keep your data safe from hackers.

1- LastPass - It lets you store passwords in a secure vault that is easy to use, searchable, and organized the way you like. It is perhaps the safest vault available online today, and it lets you store password data for an unlimited number of websites. 

2- Lookout - This tool offers security for today’s mobile generation. It is a free app that protects your iOS or Android device around the clock from mobile threats such as unsecured WiFi networks, malicious apps, and fraudulent links. It has a worldwide network of 100 million mobile sensors, the world’s largest mobile data set, and smart machine intelligence to keep your smartphone secure from all kinds of threats. 

3- Authy - This app generates secure two-step verification tokens on your device and protects your accounts from hackers and hijackers by adding an additional layer of security. It also offers secure cloud backup, multi-device synchronization, and multi-factor authentication. Two-step authentication is one of the best protections available today against having your accounts hacked. 

4- BullGuard - It protects your smartphone from all forms of viruses and malware. With built-in, rigorous anti-theft functionality, BullGuard enables you to lock, locate, and wipe your device remotely in case it gets lost or stolen. It runs automatic scans so that protection stays up to date. Moreover, it doesn’t drain your battery. 

5- Prey - It is a lightweight theft-protection tool that lets you keep an eye on your mobile devices, which is useful if you have more than one and leave one at home. Prey helps you recover a phone if it gets stolen. After you install the software on your laptop, tablet, or phone, Prey sleeps silently in the background awaiting your command. Once remotely triggered from your Prey account, your device will gather and deliver detailed evidence back to you, including a picture of who’s using it – often the crucial piece of data that police officers need to take action. 

 

Read more…

As if the Internet of Things (IoT) were not complicated enough, the marketing team at Cisco introduced its Fog Computing vision in January 2014, a vision that other, more purist vendors call Edge Computing.

Given Cisco’s frantic activity in its Internet of Everything (IoE) marketing campaigns, it is not surprising that many bloggers have resorted to shocking headlines on the subject, taking advantage of the IoT hype.

I hope this post helps you better understand the role of Fog Computing in the IoT Reference Model and how companies are using IoT intelligent gateways in the fog to connect the “Things” to the Cloud, through some application areas and examples of Fog Computing.

The problem with the cloud

As the Internet of Things proliferates, businesses face a growing need to analyze data from sources at the edge of a network, whether mobile phones, gateways, or IoT sensors. Here cloud computing has a disadvantage: the round trip to a distant data center means it often can’t process data quickly enough for time-critical business applications.

The IoT owes its explosive growth to the connection of physical things and operational technology (OT) to analytics and machine learning applications, which can help glean insights from device-generated data and enable devices to make “smart” decisions without human intervention. Currently, such resources are mostly provided by cloud service providers, where the computation and storage capacity exists.

However, despite its power, the cloud model is not applicable to environments where operations are time-critical or internet connectivity is poor. This is especially true in scenarios such as telemedicine and patient care, where milliseconds can have fatal consequences. The same can be said about vehicle to vehicle communications, where the prevention of collisions and accidents can’t afford the latency caused by the roundtrip to the cloud server.

“The cloud paradigm is like having your brain command your limbs from miles away — it won’t help you where you need quick reflexes.”

Moreover, having every device connected to the cloud and sending raw data over the internet can have privacy, security and legal implications, especially when dealing with sensitive data that is subject to separate regulations in different countries.

IoT nodes are closer to the action, but for the moment, they do not have the computing and storage resources to perform analytics and machine learning tasks. Cloud servers, on the other hand, have the horsepower, but are too far away to process data and respond in time.

The fog layer is the perfect junction where there are enough compute, storage and networking resources to mimic cloud capabilities at the edge and support the local ingestion of data and the quick turnaround of results.

The variety of IoT systems and the need for flexible solutions that respond to real-time events quickly make Fog Computing a compelling option.

Fog Computing: oh my goodness, another layer in IoT!

A study by IDC estimates that by 2020, 10 percent of the world’s data will be produced by edge devices. This will further drive the need for more efficient fog computing solutions that provide low latency and holistic intelligence simultaneously.

“Computing at the edge of the network is, of course, not new -- we've been doing it for years to solve the same issue with other kinds of computing.”

Fog Computing, or Edge Computing, is a paradigm championed by some of the biggest IoT technology players, including Cisco, IBM, and Dell. It represents an architectural shift in which intelligence is pushed from the cloud to the edge, localizing certain kinds of analysis and decision-making.

Fog Computing enables quicker response times, unencumbered by network latency, as well as reduced traffic, selectively relaying the appropriate data to the cloud.

The concept of Fog Computing attempts to transcend some of these physical limitations: with Fog Computing, processing happens on nodes physically closer to where the data is originally collected, instead of sending vast amounts of raw IoT data to the cloud.

Photo Source: http://electronicdesign.com/site-files/electronicdesign.com/files/uploads/2014/06/113191_fig4sm-cisco-fog-computing.jpg

The OpenFog Consortium

The OpenFog Consortium was founded on the premise that open architectures and standards are essential for the success of a ubiquitous Fog Computing ecosystem.

The collaboration among tech giants such as ARM, Cisco, Dell, GE, Intel, Microsoft and Schneider Electric defining an Open, Interoperable Fog Computing Architecture is without any doubt good news for a vibrant supplier ecosystem.

The OpenFog Reference Architecture is an architectural evolution from traditional closed systems and the burgeoning cloud-only models to an approach that places computation nearest the edge of the network when business concerns or the functional requirements of critical applications dictate it.

The OpenFog Reference Architecture consists of putting micro data centers or even small, purpose-built high-performance data analytics machines in remote offices and locations in order to gain real-time insights from the data collected, or to promote data thinning at the edge, by dramatically reducing the amount of data that needs to be transmitted to a central data center. Without having to move unnecessary data to a central data center, analytics at the edge can simplify and drastically speed analysis while also cutting costs.

Benefits of Fog Computing

  • Frees up network capacity - Fog computing uses much less bandwidth, so it doesn’t cause bottlenecks and similar congestion. Less data movement on the network frees up capacity that can be used for other things.
  • It is truly real-time - Fog computing responds far faster than any cloud computing architecture we know today. Since data analysis is done on the spot, it is a true real-time concept, which makes it a perfect match for the needs of the Internet of Things.
  • Boosts data security - Collected data is more secure when it doesn’t travel. It also makes data storage simpler, because data stays in its country of origin; sending data abroad might violate certain laws.
  • Analytics is done locally - Fog computing enables developers to access the most important IoT data from other locations, while keeping piles of less important information in local storage.

Disadvantages of Fog Computing

  • Some companies don’t like their data being outside their premises - With Fog Computing, lots of data is stored on the devices themselves (which are often located outside company offices), and part of the developer community perceives this as a risk.
  • The whole system sounds a little confusing - A concept that includes a huge number of devices located all around the world, each storing, analyzing, and sending its own data, can sound utterly confusing.

Read more: http://bigdata.sys-con.com/node/3809885

Examples of Fog Computing

The applications of fog computing are many, and it is powering crucial parts of IoT ecosystems, especially in industrial environments. See below for some use cases and examples.

  • Thanks to the power of fog computing, New York-based renewable energy company Envision has been able to obtain a 15 percent productivity improvement from the vast network of wind turbines it operates. The company is processing as much as 20 terabytes of data at a time, generated by 3 million sensors installed on the 20,000 turbines it manages. Moving computation to the edge has enabled Envision to cut down data analysis time from 10 minutes to mere seconds, providing them with actionable insights and significant business benefits.
  • Plat One is another firm using fog computing to improve data processing for the more than 1 million sensors it manages. The company uses the Cisco-ParStream platform to publish real-time sensor measurements for hundreds of thousands of devices, including smart lighting and parking, port and transportation management and a network of 50,000 coffee machines.
  • In Palo Alto, California, a $3 million project will enable traffic lights to integrate with connected vehicles, hopefully creating a future in which people won’t be waiting in their cars at empty intersections for no reason.
  • In transportation, it’s helping semi-autonomous cars assist drivers in avoiding distraction and veering off the road by providing real-time analytics and decisions on driving patterns.
  • It also can help reduce the transfer of gigantic volumes of audio and video recordings generated by police dashboard and video cameras. Cameras equipped with edge computing capabilities could analyze video feeds in real time and only send relevant data to the cloud when necessary.

See more at: Why Edge Computing Is Here to Stay: Five Use Cases By Patrick McGarry  

What is the future of fog computing?

The current trend shows that fog computing will continue to grow in usage and importance as the Internet of Things expands and conquers new grounds. With inexpensive, low-power processing and storage becoming more available, we can expect computation to move even closer to the edge and become ingrained in the same devices that are generating the data, creating even greater possibilities for inter-device intelligence and interactions. Sensors that only log data might one day become a thing of the past.

Janakiram MSV wondered whether Fog Computing will be the next big thing in the Internet of Things. It seems clear that while the cloud is a perfect match for much of the Internet of Things, there are other scenarios and IoT solutions that demand low-latency ingestion and immediate processing of data, and there Fog Computing is the answer.

Does the fog eliminate the cloud?

Fog computing improves efficiency and reduces the amount of data that needs to be sent to the cloud for processing. But it’s here to complement the cloud, not replace it.

The cloud will continue to have a pertinent role in the IoT cycle. In fact, with fog computing shouldering the burden of short-term analytics at the edge, cloud resources will be freed to take on the heavier tasks, especially where the analysis of historical data and large datasets is concerned. Insights obtained by the cloud can help update and tweak policies and functionality at the fog layer.

And there are still many cases where the centralized, highly efficient computing infrastructure of the cloud will outperform decentralized systems in performance, scalability and costs. This includes environments where data needs to be analyzed from largely dispersed sources.

“It is the combination of fog and cloud computing that will accelerate the adoption of IoT, especially for the enterprise.”

In essence, Fog Computing allows for big data to be processed locally, or at least in closer proximity to the systems that rely on it. Newer machines could incorporate more powerful microprocessors, and interact more fluidly with other machines on the edge of the network. While fog isn’t a replacement for cloud architecture, it is a necessary step forward that will facilitate the advancement of IoT, as more industries and businesses adopt emerging technologies.

'The Cloud' is not Over

Fog computing is far from a panacea. One of the immediate costs associated with this method pertains to equipping end devices with the necessary hardware to perform calculations remotely and independently of centralized data centers. Some vendors, however, are in the process of perfecting technologies for that purpose. The tradeoff is that by investing in such solutions now, organizations will avoid frequently updating their infrastructure and networks to deal with ever-increasing data volumes as the IoT expands.

There are certain data types and use cases that actually benefit from centralized models. Data that carries the utmost security concerns, for example, will require the secure advantages of a centralized approach or one that continues to rely solely on physical infrastructure.

Though the benefits of Fog Computing are undeniable, the Cloud has a secure future in IoT for most companies with less time-sensitive computing needs and for analysing all the data gathered by IoT sensors.

 

Thanks in advance for your Likes and Shares

Thoughts? Comments?

Read more…

Originally posted on Data Science Central

Printed electronics are being touted as the next big thing in the Internet of Things (IoT). Silicon-based sensors were the first to be associated with IoT technology. These sensors have numerous applications, such as tracking data from airplanes, wind turbines, engines, and medical devices, among other internet-connected devices.

However, these silicon-based sensors are not suitable for several other applications. Bendable packaging and premium items are among the applications where embedded silicon sensors do not work; for such applications, printed electronics fit the need. Using sensor technology, information is transferred to smart labels that can be attached to packages so they can be tracked in real time.

Some Applications of Printed Sensor Technology

Grocery Industry: While the bar code is the standard technology used in the grocery sector, it has limitations on how much data it can store. Also, for some products, packaging can run to 30-40% of the cost, and printed sensors are well suited to reducing those packaging costs. For such needs, a printed sensor is an apt solution, providing real-time information about a product’s temperature, moisture, location, movement, and much more. Companies can check these parameters to validate freshness and prevent substantial spoilage. Smart labels are also used to validate the authenticity of products.


Healthcare: The use of smart labels enables manufacturers and logistics firms to track the usage and disposal of pharmaceuticals and to control inventory. Smart labels on patients’ clothing make it possible to check body temperature, the dampness of adult diapers, or the state of bandages in assisted-living scenarios.

Logistics: Radio frequency identification (RFID) was the standard tag used by logistics companies until recently to identify shipping crates that carried perishable products. RFID is increasingly being replaced by smart labels that enable tracking of individual items. This facilitates companies to track products at the item level rather than at the container shipping level.

Biosensors Lead Printed and Flexible Sensors Market

As per the research study, the global market for printed and flexible sensors is set for steady growth, and several investors are interested in pouring funds into the market. This is expected to create opportunities for commercialization and product innovation, and several new players are projected to enter the market in pursuit of competitive advantage. In 2013, the global printed and flexible sensors market stood at US$6.28 bn, and it is projected to be worth US$7.51 bn by the end of 2020, a 2.50% CAGR over the forecast period, as per the study.

The rapid growth in individual application segments and several benefits over the conventional sensors are some of the key factors driving the global market for printed and flexible sensors. In addition, the developing global market for Internet of Things is further anticipated to fuel the growth of the market in the next few years. On the flip side, several challenges in conductive ink printing are estimated to hamper the growth of the market for printed and flexible sensors in the near future.

Biosensors are the most extensively used printed and flexible sensors and hold the largest share of the global market. Glucose strips incorporating a biosensor are one of the most popular ways for diabetics to track and monitor glucose levels, making biosensors a multi-billion-dollar segment of the market. Monitoring heart function, kidney disease, and cancer are other emerging applications where printed biosensor technology is being utilized.

The expanding automobile industry holds promise for piezoelectric printed flexible sensors used in performance testing during production. Across these varied applications, the global market for printed and flexible sensors is expected to expand at a slow but steady 2.5% CAGR through 2020.

Follow us @IoTCtrl | Join our Community

Read more…

Soft Pasture

By Ben Dickson. This article originally appeared here.

The Internet of Things (IoT) is one of the most exciting phenomena of the tech industry these days. But there seems to be a lot of confusion surrounding it as well. Some think about IoT merely as creating new internet-connected devices, while others are more focused on creating value through adding connectivity and smarts to what already exists out there.

I would argue that the former is an oversimplification of the IoT concept, though it accounts for the most common approach that startups take toward entering the industry. It’s what we call greenfield development, as opposed to the latter approach, which is called brownfield.

Here’s what you need to know about greenfield and brownfield development, their differences, the challenges, and where the right balance stands.

Greenfield IoT development

In software development, greenfield refers to software that is created from scratch in a totally new environment. No constraints are imposed by legacy code, no requirements to integrate with other systems. The development process is straightforward, but the risks are high as well because you’re moving into uncharted territory.

In IoT, greenfield development refers to all these shiny new gadgets and devices that come with internet connectivity. Connected washing machines, smart locks, TVs, thermostats, light bulbs, toasters, coffee machines and whatnot that you see in tech publications and consumer electronic expos are clear examples of greenfield IoT projects.

Greenfield IoT development is adopted by some well-established brands as well as a lineup of startups that are rushing to climb the IoT bandwagon and grab a foothold in one of the fastest growing industries. It is much easier for startups to enter greenfield development because they have a clean sheet and no strings attached to past development.

But it also causes some unwanted effects. First of all, when things are created independent of each other and their predecessors, they tend to pull the industry in separate ways. That is why we see the IoT landscape growing in many different directions at the same time, effectively becoming a fragmented hodgepodge of incompatible and non-interoperable standards and protocols. Meanwhile, the true future of IoT is an ecosystem of connected devices that can autonomously inter-communicate (M2M) without human intervention and create value for the community. And that’s not where these isolated efforts are leading us.

Also, many of these companies are blindly rushing into IoT development without regard to the many challenges they will eventually face. Many of the ideas we see are plain stupid and make the internet of things look like the internet of gadgets. Nice-to-haves start to screen out must-haves, and the IoT’s real potential for disruption and change becomes obscured by the image of a luxury industry.

As is the case with most nascent industries, a lot of startups will sprout and many will wither and die before they can muster the strength to withstand the tidal waves that will wash over the landscape. And in their wake, they will leave thousands and millions of consumers with unsupported devices running buggy—and potentially vulnerable—software.

On the consumer side, greenfield products will impose the requirement to throw away appliances that should’ve worked for many more years. And who’s going to flush hundreds and thousands of hard-earned dollars down the drain to buy something that won’t necessarily solve a critical problem?

On the industrial side, the strain is going to be even more amplified. The costs of replacing entire infrastructures are going to be stellar, and in some cases the feat will be impossible.

This all doesn’t mean that greenfield development is bad. It just means that it shouldn’t be regarded as the only path to developing IoT solutions.

Brownfield IoT development

Again, to take the cue from software development, brownfield development refers to any form of software that is created on top of legacy systems or with the aim of coexisting with other software already in use. This imposes constraints and requirements that limit the developers’ design and implementation decisions. The development process can become challenging and arduous, requiring meticulous analysis, design, and testing, things that many upstart developers don’t have the patience for.

The same thing applies to IoT, but the challenges become even more accentuated. In brownfield IoT development, developers inherit hardware, embedded software and design decisions. They can’t deliberate on where they want to direct their efforts and will have to live and work within a constrained context. Throwing away all the legacy stuff will be costly. Some of it has decades of history, testing and implementation behind it, and manufacturers aren’t ready to repeat that cycle all over again for the sake of connectivity.

Brownfield is especially important in industrial IoT (IIoT), such as smart buildings, bridges, roads, railways and all infrastructure that have been around for decades and will continue to be around for decades more. Connecting these to the cloud (and the fog), collecting data and obtaining actionable insights might be even more pertinent than having a light bulb that can be turned on and off with your smartphone. IIoT is what will make our cities smarter, more efficient, and create the basis to support the technology of the future, shared economies, fully autonomous vehicles and things that we can’t imagine right now.

But as with its software development counterpart, brownfield IoT development is very challenging, which is why manufacturers and developers are loath to engage in it. And thus we’re missing out on a lot of the opportunities that IoT can provide.

So which is better?

There’s no preference. There should be balance and coordination between greenfield and brownfield IoT development. We should see more efforts that bridge the gap between so many dispersed efforts in IoT development, a collective effort toward establishing standards that will ensure present and future IoT devices can seamlessly connect and combine their functionality and power. I’ve addressed some of these issues in a piece I wrote for TechCrunch a while back, and I think there’s a lot we can learn from the software industry. I’ll be writing about it again, because I think a lot needs to be done to have IoT development head in the right direction.

The point is, we don’t need to reinvent the wheel. We just have to use it correctly.

Read more…

Guest blog post by Bill Vorhies

Summary:  Deep learning and Big Data are being adopted in law enforcement and criminal justice at an unprecedented rate.  Does this scare you or make you feel safe?

 

When you read the title, whether your mind immediately went for the upstairs “H” or the downstairs “H” probably says something about whether the new applications of Big Data in law enforcement let you sleep like a baby or keep you up at night. 

You might have thought your choice of “H” related to whether you’ve been on the receiving end of Big Data in law enforcement but the fact is that practically all of us have, and for those who haven’t it won’t take much longer to reach you.

There is an absolute explosion in the use of Big Data and predictive analytics in our legal system today driven by the latest innovations in data science and by some obvious applications.

It hasn’t always been so.  In the mid-90s I was part of the first wave trying to convince law enforcement to adopt what was then cutting-edge data science.  At the time that was mostly GIS analysis combined with predictive analytics to create what we called predictive policing: predicting where and at what time of day each type of crime was most likely to occur so that manpower could be effectively allocated.  Seems so quaint now.  It was actually quite successful, but the public sector had never been quick to adopt new technology, so there weren’t many takers.

That trend about slow adoption has changed.  So while accelerating the usage of advanced analytics to keep the peace may keep some civil libertarians up at night, it’s coming faster than ever, and it’s our most advanced techniques in deep learning that are driving it.

By now you’ve probably figured out that deep learning is best used for three things: image recognition, speech recognition, and text processing.  Here are two stories illustrating how this is impacting law enforcement.

 

Police Ramp Up Scrutiny Over On Line Threats

The article by this title appeared in the July 20 WSJ.  Given what’s been happening recently both internationally and at home, most of us probably applaud the use of text analytics to monitor for early warning signs of home-grown miscreants.  The article states “In the past two weeks at least eight people have been arrested by state and federal authorities for threats against police posted on social media”.  It remains to be seen if these will turn into criminal prosecutions and how this will play out against 1st Amendment rights, but as a society we seem to be OK with trading a little of one for more of the other.

It’s always in the back of our minds whether this is Facebook, Twitter, Apple, Google and the others actively cooperating in undisclosed programs to aid the police, but this article specifically calls out the fact that the police were the ones doing the monitoring.  Whether they’ve built these capabilities in-house or are using contractors isn’t clear.  What is clear is that advanced text analytics and deep learning were the data science elements behind it.

 

Taser – the Data Science Company

The second example comes from an article in Business Week’s July 18 issue, “Will a Camera on Every Cop Help Save Lives or Just Make a Tech Company Richer”.  

Taser – a tech company?  When I think about Taser, the maker of the ubiquitous electric stun gun, I am much more likely to associate them with Smith & Wesson than with Silicon Valley and apparently I couldn’t be more wrong.

In short the story goes like this.  In the 90s Taser dominated the market for non-lethal police weapons to provide better alternatives for a wide variety of incidents where bullets should not be the answer.  By the 2000s Taser had successfully saturated that market and its next big opportunity came from the unfortunate Ferguson Mo. unrest. 

That opportunity turned out to be wearable cameras.  Although the wearable police cameras date back to about 2008 there really hadn’t been much demand until the public outcry for transparency in policing became overwhelming.

Taser now also dominates the wearable camera market.  Like its namesake stun gun, however, the wearable camera is basically a one-and-done market.  Once saturated, it offers only replacement sales, not a robust model for corporate expansion.  So far this sounds more like a story about a hardware company than a data science tech company, and here’s the transition.

The cameras are producing huge volumes of video images that need to be preserved at the highest levels of chain-of-evidence security for use in criminal justice proceedings.  Taser bought a startup in the photo sharing space and adapted it to their new flagship product Evidence.com, a subscription based software platform now positioned as a ‘secure cloud-based solution’.

According to the BW article, “4.6 Petabytes of video have been uploaded to the platform, an amount comparable to Netflix’s entire streaming catalogue”.  Taser is a major customer of MS Azure.  And for police departments that have adopted the cameras, video is now reported to be presented as evidence in 20% to 25% of cases.

But this story is not just about storing recorded video.  It is about how police and prosecutors have become overwhelmed with the sheer volume of ‘video data’ and need simpler, faster access.  The answer is image recognition driven by deep learning.  Taser now earns more than three-quarters of its revenue from its Evidence.com platform and is rapidly transforming from a hardware company to an app company to a data science company to answer the need for easier, faster, more accurate identification of relevant images.

 

The Direction Forward

You already know about real-time license plate scanners mounted on patrol cars that are able to automatically photograph license plates without operator involvement, transmit the scan to a central database, and return information in real time about wants and warrants associated with that vehicle.

What Taser and law enforcement say is quite close is a similar application using full time video from police-wearable cameras combined with facial recognition.  Once again those civil liberties questions will have to be answered but there’s no question that this application of data science will make policing more effective.

About those huge volumes of videos and the need to recognize faces and images.  There are plenty of startups that will benefit from this and many with products already in commercial introduction.  Here’s a sampling.

Take a look at Nervve Technologies whose byline is “Visual search insanely fast”.  Using their visual search technology originally developed for government spy agencies they are analyzing hours of sporting event tape in a few seconds to identify the number of times a sponsor’s logo (on uniforms or on billboards) actually appears in order to value the exposure for advertising.

And beyond simple facial recognition is an emerging field called facial or emotional analytics.  That’s right, from video these companies have developed deep learning models that predict how you are feeling or reacting. 

Stoneware incorporates image processing and emotional analytics in its classroom management products to judge the attentiveness of each student in a classroom.

Emotient and Affectiva have similar products in use by major CPG companies to evaluate audience response to advertisements, and to study how NBA spectators respond to activities such as a dance cam.

Real time facial-emotional scanning of crowds to find individuals most likely to commit misdeeds can’t be far away.

For audio, Beyond Verbal has a database of 1.5 million voices used to analyze vocal intonations to identify more than 300 mood variants in more than 40 languages with a claim of 80% accuracy.

All of these are deep learning based data science being put rapidly to work in our law enforcement and criminal justice systems.

 

 

About the author:  Bill Vorhies is Editorial Director for Data Science Central and has practiced as a data scientist and commercial predictive modeler since 2001.  He can be reached at:

[email protected]

Follow us @IoTCtrl | Join our Community

Read more…

Start Building an IOT Solution

By Ashish Modi. This article originally appeared here

To build an IoT application, we need the following:

  1. A problem that requires an IoT solution. 
  2. An identified and designed IoT-based solution (hardware + software + connectivity).

A problem that requires an IoT solution

Nowadays everything is connected to the internet, and many existing systems need to be migrated to IoT-based solutions.

Identify and design the IoT-based solution 

The following tools and technologies can be used to create a first IoT solution. 

Hardware 

  • Arduino 

Arduino is an open-source prototyping platform based on easy-to-use hardware and software. You write sketches in the Arduino programming language to pass instructions to the microcontroller on the Arduino board. 

  • Raspberry Pi

The Raspberry Pi is a low-cost single-board computer; its hardware has evolved through several versions that feature variations in memory capacity and peripheral device support.

  • Sensors

We select sensors according to the problem at hand; some common options include: 

  1. Pressure Sensor
  2. Temperature sensor
  3. Humidity sensor
  4. Touch sensor
  5. PIR Movement Sensor

Software

To store the data collected from the various devices, we need a back-end system; cloud-based storage is commonly used for this. AWS, Salesforce, and Microsoft Azure, among others, provide public and private storage in different regions. 

Connection

To connect our hardware devices to the software system, we need a connectivity protocol, for example: 

  • CoAP - Constrained Application Protocol
  • MQTT - Message Queuing Telemetry Transport, a lightweight machine-to-machine messaging protocol

Read more…

By Rick Blaisdell. This article originally appeared here.

In the next five years, Internet of Things communications will see unprecedented growth, and cellular connectivity will become even more valuable. Wireless cellular technologies have enormous potential as key enablers for IoT, and continuing enhancements and innovations promise to make cellular one of the primary access methods for a great number of IoT applications.

Cellular technologies are already being used for IoT today in several use cases and are expected to be used even more in the future as these use cases require excellent mobility, strong networks, robust security, economic scale and communications independent of third party access. At the same time, the Internet of Things requires low complexity, low cost devices with long battery life times as well as good coverage for long communication range and penetration to reach the most challenging locations.

The challenge for the cellular industry now is to unlock the value of this interconnected web of devices in a secure, flexible and manageable manner. The goal is to identify a framework of promising solutions and cover a set of innovative approaches and technologies to meet these challenges.

The cellular IoT alphabet

MTC, Cat-0, Cat-1, LTE-M … some might get confused by all the acronyms related to cellular IoT, so let’s go through them and explain where the different terms come from and what they mean.

As you probably know, 3GPP (3rd Generation Partnership Project) uses the concept of “Releases” to refer to a stable set of specifications which can be used for implementation of features at a given point of time. User Equipment (UE) Category is one important term here. Categories are used to define general UE performance characteristics – for example, maximum supported data rate in uplink and downlink data channels, and to what extent different multi-antenna capabilities and modulation schemes are supported.

The latest stable Release is Release 12, where the categories range from Category 0 up to Category 13. Release 13, which is being finalized at the moment, will include further UE Categories including at least the so-called “Cat-M1” intended for IoT devices.

Cat-1 – Category 1 – was included in the LTE specifications from the very beginning, in Release 8. With a Cat-1 UE, it is possible to achieve 10 Mbps downlink and 5 Mbps uplink channel data rates. Cat-1 has not been a relevant UE category for LTE-based mobile broadband services, as its performance is below the best 3G performance. Now, however, it has become an attractive early alternative for IoT applications over LTE, because it is already standardized.

Cat-0 – Category 0 – is one of the newest standardized categories from Release 12. Cat-0 UEs are intended for IoT use cases, and provide 1 Mbps data rates for both up- and downlink. Cat-0 UEs have reduced complexity by up to 50% compared to Cat-1; requirements include only one receiver antenna and support of half-duplex operation, providing ways for the manufacturers to significantly reduce the modem cost compared to more advanced UE categories.

LTE-Advanced technology, the chief vehicle of 4G cellular connectivity, has started to evolve, and will continue evolving, to provide new features that support a range of high-performance, low-performance, and cost-optimized IoT device categories. So far, the focus has been on meeting the huge demand for mobile data with highly capable devices that utilize new spectrum.

However, the arrival of LTE-M signifies an important step in addressing MTC (Machine-Type Communications) capabilities over LTE. LTE-M brings new power-saving functionality suitable for serving a variety of IoT applications; Power Saving Mode and eDRX extend battery life for LTE-M to 10 years or more. LTE-M traffic is multiplexed over a full LTE carrier, and it is therefore able to tap into the full capacity of LTE. Additionally, new functionality for substantially reduced device cost and extended coverage for LTE-M are also specified within 3GPP.

The Internet of Things is set to ascend, and operators have a unique opportunity to offer affordable connectivity on a global scale. At the same time, for IoT applications, existing cellular networks offer distinct advantages over alternative WAN technologies, such as unlicensed LPWA.

Read more…

Guest post by Preston Tesvich. This article originally appeared here.

Let’s say you’re in the planning phase of an IoT project. You have a lot of decisions to make, and maybe you’re not sure where to start.

 

In this article, we focus on a framework of how you can think about this problem of standards, protocols, and radios. 

The framework of course depends on whether your deployment is going to be internal, such as in a factory, or external, such as a consumer product. In this conversation we’ll focus on products that are launching externally to a wider audience of customers, and for that we have a lot to consider.

Let’s look at the state of the IoT right now: bottom line, there’s no standard so prolific or significant that you’re making a mistake by not using it. What we want to do, then, is pick the option that solves the problem we have as closely as possible, with acceptable costs to implement and scale, and not worry too much about fortune-telling the future popularity of that standard.  

So, it first comes down to technical constraints:
    - What are the range and bandwidth requirements? 
    - How many nodes are going to be supported in the network?
    - What is the cost for the radio? 


That radio choice has big impacts: not only is it a hefty line item on your BOM on its own, it also determines the resources the device needs. For example, if you have a WiFi radio at the end, there are considerable CPU and memory expectations, whereas BLE or a mesh network needs a lot less. There are infrastructure scaling costs to consider as well. If we go WiFi: is there WiFi infrastructure already in place where this is being deployed? How reliable is it? If we’re starting from scratch, what’s the plan for covering a large area? That can become very costly, especially if you’re using industrial-grade access points, so it’s important to consider these effects that are downstream of your decision.

Zooming in on specific standards

In our opinion, the biggest misconception we encounter is: “Isn’t there going to be one standard to rule them all?” There’s no future in that, and it’s not just because we’re never going to all agree on things as an industry; it’s because in many cases different standards aren’t solving the same problems differently, they are solving different problems. Understanding that, we can now look at what each protocol attempts to solve and where it lives on the OSI model, or “the stack.”

 

MQTT

Some would suggest that MQTT is a full protocol for device-to-server communication, but it’s not quite that. MQTT defines a data format for communicating with something, and that payload can be sent over any transport, be it WiFi, mesh, or some socket protocol. What it tries to solve is defining a way to manipulate the attributes of some thing. It centers around reading and writing properties, which lends itself very well to an IoT problem. It certainly saves development time in some regards, but depending on how strictly you’re trying to implement it, it may cost you more development time. As soon as you one-off any part of it, you have to document it really well, and at some point you approach a time and cost factor where implementing your own payload scheme may be a better option.

Is it prolific enough that you should absolutely use it? No, it hasn’t reached that level, and it likely won’t. What it is right now is a convenient standard for device-direct-to-cloud where we don’t control both ends, because it gives some measure of a common language that we can agree on; however, the thing to keep in mind is that most of the time it does in fact need additional documentation of which properties are being read and written and what the exact implementation looks like. Ultimately, you’re not escaping a lot of work by using MQTT; the sketch below shows why.

Zigbee and Z-wave

Also starting at the network layer, Zigbee and Z-wave are the big incumbents everyone likes for mesh networking. They attempt to solve two problems: providing a reasonable specification to move packets from one place to another on a mesh network, and actually suggesting how those packets should be structured; so they both reach higher up the stack. And that’s the part that hinders their futures. For example, Zigbee uses a system called profiles, which are collections of capabilities, such as the smart energy profile or the home automation profile. When a protocol gets so specific as to say ‘this is what a light bulb does,’ it’s pretty difficult to implement devices that aren’t included in the profile. While there are provisions for custom data, you’re not really using a cross-compatible spec at that point; you’re basically off the standard as soon as you’re working with a device not defined in the profile.  

The other consideration with these two is that they are both routed mesh networks: one node communicates with another through intervening nodes. In other words, we send a message from A to B to C to D, but in effect we’ve sent a message from A to D. In a routed mesh, each node has to understand the path a message needs to take, and that has an in-memory cost. While Z-wave and Zigbee have a theoretical limit of 65,535 nodes on a network (the address space is a 16-bit integer), the practical limit is closer to a few hundred nodes, because these devices are usually low-power, low-memory devices. The routing also has a time cost, so a large mesh network may exhibit unacceptable latency for your use case. Another consideration, especially if you’re launching a cloud-controlled consumer product, is that these mesh networks can’t directly connect to the internet; they require an intervening bridge (a.k.a. gateway, hub, edge server) to communicate with the cloud.   

A final caveat is that Z-wave is a single source supplier—the radios are made and sold by Zensys, so you have to buy it from them. Zigbee has a certification process, and there are multiple suppliers of the radio, from Atmel to TI.

Bluetooth

You really just can’t compete with the amount of silicon being shipped based on Bluetooth. 10,000 unique SKUs were launched in Bluetooth in 2014. Other than WiFi, there’s nothing that compares in terms of adoption. Bluetooth was originally designed for  ‘personal area networks,’  with the original standard supporting 7 concurrent devices. And now we have Bluetooth low energy (BLE) which has a theoretically infinite limit. BLE did a ton to optimize around IoT challenges. They looked heavily at the amount of energy required to support a communication. They considered every facet of "low energy," not just the radio-- they looked at data format, packet size, how long the radio needed to be on to transmit those packets, how much memory was required to support it, what the power cost was for that memory, and what the protocol expects of the CPU, all while keeping overall BOM costs in mind. For example, they figured out that the radio should only be on for 1.5ms at a time. That’s a sweet spot—if you transmit for longer, the components heat up and thus require more power. They also figured out that button cell batteries are better at delivering power in short bursts as opposed to continuously. Further, they optimized it to be really durable against WiFi interference because the protocols share the same radio space (2.4GHz).

And then CSR came along and implemented a mesh standard over Bluetooth. Take all the advantages afforded by BLE, and then get all the benefits of a mesh network. The Bluetooth mesh is a flood mesh: instead of routing to specific nodes, a message is sent indiscriminately across all nodes. This scales better than a routed mesh because there are no routing-table memory constraints. It’s a good solution for many problems in the IoT, and at scale it is probably going to be the lowest cost to implement; the toy simulation below illustrates the idea. 

Thread

An up-and-coming standard that’s built on top of the same silicon that powers the Zigbee radio. It solves the problem of mesh nodes not being able to communicate directly with the cloud by adding IPv6 support, meaning that nodes on the network can make fully qualified internet requests. There’s a lot of weight behind this standard. Google seems to think it’s interesting enough to build its own protocol (known as “Weave”) on top of it. And then there’s Nest Weave, which is some other version of Google Weave. As it stands, it takes a long time for a standard to really take hold, and you can immediately see how the story with Thread is a little muddier, which will not help its adoption. It’s also solving a problem that it just doesn’t seem many devices have. Take sensors as an example: do these low-power, lightweight, low-cost, low-memory, low-processing, fairly dumb devices NEED to make internet requests directly? With Thread, each node now knows a lot more about the world (where your servers are, for example), and maybe it shouldn’t be concerned with those things, because not only do the requirements of the device increase, but the probability and frequency of having to update it in the field go way up. When it comes to the actual sensors and other endpoints, philosophically you want to minimize those responsibilities, except in special cases where offline durability and local processing and decision-making are required (this is called fog computing).

When Thread announced their product certification last year, only 30 products submitted. Another thing to note about Thread's adoption is that the mesh-IPv6 problem has been solved before-- there’s actually a spec in Bluetooth 4.2 that adds IPv6 routing to Bluetooth, but very few people are using it. Although Nordic Semiconductor thought it was going to be a big deal and went ahead and implemented it first, it just hasn’t come up much in the industry—that happened Q4 2014 and no one’s talking about it.

One thing Thread does have going for it is that it steps out of defining how devices talk to each other, and how devices format their data—doing this makes it more future proof. This is where Weave comes in, because it does suppose how the data should be structured. So basically a way to look at it is that Weave + Thread = direct Zigbee/Z-wave competitor. We haven’t seen anyone outside of Google really take an initiative on Weave, other than Nest who have put a good marketing effort into making it look like they are getting traction with it.

AllJoyn

Other protocols live higher in the stack and remain agnostic at the network layer. The most well known of these is probably Qualcomm’s AllJoyn effort. They have the Allseen Alliance, although their branding is a bit murky (AllPlay, AllShare, etc.). We’ve seen some traction with it, but not a ton; the biggest concern it’s fighting is that it’s a really open-ended protocol, loosely defined enough that you’re really not going to build something totally interoperable with everything else. That’s a big risk for product teams: if there aren’t enough devices in the world that speak that language, then why do I need to speak it? That said, LIFX implemented it, and it worked really well for them, especially since Windows implemented it as well. Now it’s part of Windows 10; there’s a layer specifically for AllJoyn, and it seems to do well. There’s evidence with AllJoyn that you can bring devices to the table that don’t know anything about each other and get some kind of durable interoperability. However, at a glance, it seems complicated in the way authorization is dealt with and the way devices need to negotiate with each other, and there really isn’t runaway adoption.

IEEE’s WiFi

They’ve ruled the roost with the 802.11 series: B, then A and G, then N, and now AC. 802.11 has been really good at being simple to set up and high-bandwidth. It doesn’t care much about power consumption; it’s more concerned with performance, because it’s meant to be a replacement for wires. Almost two years ago they announced 802.11ah, which they’ve branded as HaLow and which attempts to address the power, range, and pairing concerns of classic WiFi. Most WiFi devices are not headless (“headless” meaning no display or other input); they have a rich user interface, so we can log in and configure them to connect to WiFi. Pairing headless devices has been a very tedious process. With HaLow they’re solving two problems: how do we get things onto the network more easily, and how do we decrease the expectations (particularly power) placed on the device running the radio. It’s too early to know what kind of traction this will get, but IEEE has a great track record with standards adoption.

LoRa and SIGFOX

More like: LoRa vs. SIGFOX. With these protocols we’re looking at how to connect things over fairly long distances, such as in smart city applications. LoRaWAN is an open protocol following a bottom-up adoption strategy. SIGFOX is building out the infrastructure from the top down and handing APIs to its customers; in that way, SIGFOX is more like a service. It’ll be interesting to watch the dance-off between these two as the IoT is adopted in these more public-type applications. 

That’s the body of standards that need to be addressed. There’s a ton more, but we don’t see them as exciting for the IoT today.

- P

Read more…