Here are seven IoT maps that we have written about on IoT Central.
Mapping the Internet of Things (Four maps in this one post, plus a bonus in the comments).
Let us know of the maps and resources you use in the comments below.
We are living in a century where technology dominates lifestyle; digital transformation through Big Data, IoT, and artificial intelligence (AI) are examples.
Over the past six months, chatbots have dominated much of the tech conversation and are shaping up as the next big gold rush in online marketing.
Chatbots are built to mimic human interaction, making them seem like an actual individual existing digitally. A chatbot can live in any major chat product (Facebook Messenger, Slack, Telegram, text messages, etc.) and may be powered by a basic rules engine or by NLP and AI.
Chatbots have helped conversational commerce happen in real time, such as booking a cab or ordering a bouquet of flowers or a pizza. Consumers will benefit from chatbots through personalization, and this is where social media plays a big part.
KLM has a customer service bot that's able to check your flight status and let you know if it's been delayed.
Interacting with software at a human level is becoming more mainstream thanks to digital assistants like Google Home, Google Now, and Apple's Siri.
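To make the "basic rules engine" approach concrete, here is a minimal Python sketch of a keyword-matching chatbot. All intents and replies below are invented for illustration; a real product would use far richer intent detection (or full NLP):

```python
# A minimal rules-engine chatbot sketch. Every rule maps a keyword
# pattern to a canned reply; all intents/replies here are hypothetical.
import re

RULES = [
    (re.compile(r"\b(flight|status)\b", re.I), "Your flight is on time."),
    (re.compile(r"\b(pizza|order)\b", re.I), "Sure - what toppings would you like?"),
    (re.compile(r"\b(cab|taxi)\b", re.I), "Booking a cab to your saved address."),
]

def reply(message: str) -> str:
    """Return the first rule-matched reply, or a fallback."""
    for pattern, response in RULES:
        if pattern.search(message):
            return response
    return "Sorry, I didn't understand that. Can you rephrase?"
```

A rules engine like this is cheap and predictable, which is why many first-generation bots (flight status, pizza ordering, cab booking) started here before layering on NLP.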
Here's a quick list of the 6 Best Online Resources for Embedded Firmware. Enjoy!
A few years ago, the idea of a “Telco in a Box” was common in the telecommunications industry. Basically, it was a pre-integrated, turnkey real-time billing and customer care solution that enabled communications service providers (CSPs) to accelerate their growth strategies and increase profitability.
Companies like Accenture, Oracle, Redknee, and Tech Mahindra aimed this concept at Mobile Virtual Network Operators (MVNOs), Tier 3 operators, and Tier 1 sub-brands. The benefits of this solution were clear:
It was only a matter of time before this marketing slogan reached the Internet of Things (IoT). And so it has, so far with little noise, but we will certainly see much more "IoT in a Box" in the coming months.
Today we could say that IoT in a Box is:
In the first case, the IoT in a Box must include some of the following components, depending on the application:
When you receive your IoT in a Box, all you must do is:
“The concept behind a basic “IoT in a Box” is that it takes you less than an hour to set up your own IoT system.”
In the second case, the IoT in a Box must include a Development Kit and all required building blocks to develop a wireless IoT system. We will see some examples later.
Although IoT in a Box is aimed at solving a simple business need, in certain scenarios or industries it may be necessary to extend the capabilities included in the box. In this regard, vendors must provide accessories, expansion modules, I/Os and peripherals, multi-standard connectivity options, and additional pre-configured dashboards and alerts, depending on the industry and application.
When I wrote Welcome to the first “Selling IoT” Master Class!, I did not emphasize selling IoT to the Small and Medium Business (SMB) and consumer markets. Yet the main objective vendors pursue with the “IoT in a Box” is precisely to increase sales in the SMB market. This is a huge market, and vendors need a way to scale through channel partners; as I do not consider myself an expert at selling to SMBs, I look forward to your advice.
Due to confidentiality agreements, I cannot include information from the various vendors that will be selling IoT in a Box very soon. But we can already find some examples of IoT in a Box on the market. See some of them below, based on public information.
T-Mobile IoT in a Box - With the T-Mobile IoT Box, you can realize your individual M2M application without great effort. Connect your devices and sensors and transfer the obtained data to a cloud system via mobile radio. A data interface provides processing and integration information to other systems, websites, or apps. The T-Mobile IoT Box consists of a developer board with an integrated M2M SIM card, several inputs/outputs, a Bluetooth Smart interface, an online portal, and a RESTful API.
T-Mobile US – IoT promotion for device makers - Building on its movement into the internet of things (IoT) market, T-Mobile US announced a new IoT-specific pricing model as part of a promotion that includes a free Cat1 LTE module along with data services.
T-Mobile US, SVP Doug Chartier said: “The wireless industry needs simpler options for IoT to take off, and that’s exactly what we’re delivering.”
Telia M2M in a Box - Makes M2M technology easy and affordable for any business. Telia M2M in a Box gives you a set of hardware with sensors providing real-time information about position, movement, and climate, which you can monitor directly in the web portal. A versatile and user-friendly measurement tool to observe, monitor, and protect your business remotely.
Capgemini IoT-in-a-Box is a rapid, low-cost, low-risk method to pilot an IoT strategy and to test and define business cases, providing a pre-configured, enterprise-ready IoT system for monitoring up to 25 devices. It simplifies the task of aligning, integrating, and configuring all IoT components to provide rapid time to value.
IBM - The Intelligent Building – IoT Starter Kit (Enterprise Edition) is an out–of-the-box IoT solution for Intelligent Buildings. The kit provides seamless integration of the EnOcean Smart Gateway with the Watson IoT Platform.
Relayr - Industrial-Grade Starter Kit for IoT Acceleration, powered by relayr, Intel, Dell, and Bosch.
Microsoft – Solair IoT in a Box was an IoT plug-and-play kit to connect things, sensors, and machines to a gateway and then, in a few clicks, instantly visualize data in the Solair application. After acquiring Solair, Microsoft has probably discontinued this offering.
Bosch - Bosch IoT Starter kits that come with pre-configured XDK devices + cloud connectivity. It is as out of the box as it could be!
HPE - HPE Uncorks IoT In A Box - Called (at least by Hewlett Packard Enterprise) the ‘industry’s first converged systems for the IoT’, the Edgeline EL1000 and Edgeline EL4000 systems ‘integrate data capture, control, compute and storage to deliver heavy-duty analytics and insights at the edge to enable real-time decision making.’
Electric Imp - IoT QuickStart by Electric Imp - Electric Imp’s IoT QuickStart Family is designed to help you cut the time to build, test and prototype complex IoT solutions all while maintaining industrial-strength security, scalability and control. Based on reference designs that Electric Imp experts have developed over the past five years, the IoT QuickStart Family appliances represent the most frequently requested secure connectivity and device prototype solutions, each delivered in a fraction of the time and cost required by custom-built solutions.
Creator Ci40 IoT Developer Kit - The Creator Ci40 board is a high-performance, low-power microcomputer that packs a cXT200 chip based on a subsystem optimized by us specifically for IoT applications. The cXT200 SoC includes a dual-core, dual-threaded MIPS CPU clocked at 550 MHz and an Ensigma connectivity engine that covers super-fast 802.11ac 2×2 MIMO Wi-Fi and low-power Bluetooth/Bluetooth low energy (Classic and Smart). See also: Imagination Launches ‘IoT In A Box’ Kickstarter - and Build a home IoT irrigation system with 'IoT-in-a-box' kit .
Nextcloud Box – a private cloud and IoT solution for home users – from Nextcloud, Canonical and WDLabs. Nextcloud Box makes hosting a personal cloud simple and cost effective whilst maintaining a secure private environment that can be expanded with additional features via apps. The Nextcloud Box consists of a hard drive and a case, complemented by a Raspberry Pi 2 or a similar credit-card sized computer. The pre-configured, easy-to-use platform handles file storage and syncing, communication and more, requires no maintenance and enables users to install more functionality through apps like Spreed, OpenHab and Collabora Online. The box offers 1TB of storage at the price point of Eur 70. For information on where to buy please visit nextcloud.com/box.
WIKON – My M2M BOX – Our special expertise lies in the compliance with industrial standards for our product developments and the development of powerful embedded hardware and software. Special developments for explosion zones, adverse environmental conditions, IP-68 standards and extended temperature ranges are frequently in demand.
Mobica collaborates with Advantech to develop a complete IoT Solution - Mobica, a Silver member of Oracle Partner Network (OPN) and global provider of a leading-edge software engineering, testing and consultancy services, developed a solution which aggregates data from a variety of sensors and sends it to the Oracle Internet of Things Cloud Service for analysis and integration. Mobica used an Advantech UTX-3115 IoT gateway and a M2.COM based WISE-1520 Low-Power Wi-Fi IoT node for sensor input.
The ThingBox Project - Use Internet of Things technologies without any technical knowledge and for free.
Eight best IoT starter kits: The best internet of things developer kits.
Imagination launches IoT kit – “IoT in a Box”: http://misteriot.com/2015/11/24/imagination-meluncurkan-kit-iot-iot-in-a-box/
There are many IoT vendors offering devices, IoT platforms, apps, and services bundled with the same purpose as IoT in a Box: democratizing IoT.
IoT in a Box and IoT Marketplaces
As we know, “IoT is not only about connecting things, nor about controlling things”; it is about things becoming more intelligent so that companies can offer new services under new business models. I believe that IoT marketplaces will play a key role in the evolution of IoT in a Box. We already have some examples:
Libelium - The IoT Marketplace is a one-stop click-and-buy online store. The company helps frustrated companies with pre-integrated solutions, from choosing the right hardware and cloud components to the application.
Telus IoT Marketplace – Connect the things that matter to your business by leveraging connected devices provided by their partner network.
ThingWorx Marketplace – gives easy access to everything you need to build and run your ThingWorx-based IoT application. All components listed on the ThingWorx Marketplace are customized, tested, and guaranteed to work with the ThingWorx platform.
Intel IoT marketplace – Coming soon.
“IoT in a Box solutions that encompass infrastructure, networking, analytics, service enablement and monetization to connect devices, expose data, services and processes to applications, consumers and machines will be the foundation for IoT marketplaces”.
I believe that the logical evolution of IoT in a Box will be IoT Service in a Box, sold through IoT marketplaces. It is only a matter of time before we see:
As more and more companies are drawn onto the IoT bandwagon by the lure of future business potential, value realization from IoT technologies continues to be more elusive than ever. If one takes into account the enormous spending by enterprises to date on IoT products and solutions, and compares it with the new business opportunity IoT has generated so far, we can sum up this investment with the tagline “Chasing Millions with Billions”.
While this is true of most emerging technologies, and dominating a technology market more often than not becomes a battle of investments as companies outbid each other for acquisitions and market share, the fight for IoT leadership will be far more challenging than anything seen before.
So what is the curious case of IoT? For starters, winning IoT leadership will determine the future existence of many companies. We will witness the demise of many companies and the rise of new giants by the time the dust settles. More than two years have elapsed since IoT became part of every boardroom discussion, and the battle has now moved from strategy to execution.
With the universal appeal that comes from adding the adjectives “smart”, “connected”, and “digital” to all the products and services we use or will use in the future, the IoT technology space has slowly morphed into a “battle of the platforms”. Consortiums of companies are aligning with each other and placing their bets on platform leadership.
So what is the problem with this shift? The very nature of platform development, encompassing the needs of applications for smart homes, smart cities, connected vehicles, and products spanning healthcare, retail, and manufacturing, to name a few, means an enormous development backlog for the “new” platform. IoT platform development is thus hit by the bane of the trinity: delivering full scope, on budget, and on time for scalable market adoption. Estimates for building such a multi-purpose, all-encompassing platform with a full feature set would set any product development organization back a few billion dollars.
An additional complexity is the timeline for market availability of such a feature-complete platform, especially considering the need to ingest streaming data in real time from thousands of disparate devices across multiple network protocols. Above and beyond this, the cost of securing all endpoints and protecting devices from potential hacking would surely add several million dollars.
So what is the challenge with platform development? The problem is the very nature of the IoT market: universal appeal and low price points. Markets with such universal appeal can typically accommodate two or three players at most, so all the competing platforms now in development and spending big dollars face a high potential of failure. As more announcements are made and more investment pours in, the bloodier the war for IoT supremacy will become. The “winner-take-all” nature of digital markets is both the lure and the source of agony.
What does chasing millions with billions imply? As transaction volumes increase, transaction values dramatically decrease, and with smaller per-capita spending by end users, ROI calculations push the break-even date far into the future. The net present value of projected future cash flows, with diminished order sizes for the next few years, might at best accrue in the millions, while the upfront investment required to win IoT leadership is on the order of billions. A more detailed analysis of the IoT economic perspective is presented in a previous series (http://bit.ly/2a2sfcq). Generally, the bigger the stakes, the fiercer the competition becomes, and IoT will witness one of the longest-standing investment wars for supremacy. While the winner will definitely take all, the pain for the competition will be intense. Many will drop out of the race in the short term due to lack of funding or a cash crunch, while a few giants with deep pockets will continue to wrestle on.
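The arithmetic behind the millions-versus-billions gap is easy to sketch with a standard net-present-value calculation. The figures below are purely illustrative and not drawn from any vendor's actual numbers:

```python
# Illustrative NPV sketch of the "chasing millions with billions" argument.
# All dollar figures are hypothetical.

def npv(rate: float, cash_flows: list) -> float:
    """Net present value; cash_flows[0] occurs at t=0 (undiscounted)."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

# A platform bet: $1B upfront, then five years of $50M annual returns,
# discounted at 10%.
flows = [-1_000_000_000] + [50_000_000] * 5
result = npv(0.10, flows)
```

Under these assumptions the NPV comes out at roughly -$810 million: million-scale cash flows simply cannot repay a billion-scale platform investment within a few years, which is the paradox the article describes.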
So is your strategy the best? Will you leapfrog the paradox of earning millions with billions and come out as the eventual winner? And on which side of the competition will you stand when this IoT leadership war is over?
In the next installment of this series, I will provide recommendations for solving the curious case of IoT platform leadership. Please drop your comments below.
Note: This article is an independent view and presents the IoT story from a vendor-neutral perspective.
In this article, I’ll keep introducing AggreGate IoT Platform-based products. In 2010, two years after the AggreGate Network Manager release, we started the AggreGate SCADA/HMI project ‒ a fourth-generation SCADA system.
So what is fourth-generation SCADA?
Wikipedia suggests the following definitions:
IoT Platform-based AggreGate SCADA/HMI (http://aggregate.tibbo.com/solutions/scada-hmi.html) has inherited all functions of fourth-generation SCADA:
There is still a question left: why have we developed another SCADA? The international market is saturated with such solutions.
The point is that AggreGate SCADA/HMI as an AggreGate Platform add-on is technically a set of drivers for data collection and typical HMI vector images. All features necessary for SCADA are AggreGate Platform components: GUI (widget) builder, report editor, alert and event control tools, tag modeling system, failover clustering technology, SDK with DDK, etc.
Our investment in SCADA system development was modest compared to developing such a system from scratch. To implement industrial and building automation projects, we developed drivers for standard process control protocols (Modbus, OPC, OPC UA, BACnet, DNP3, etc.) and designed several thousand vector images.
Along with standard SCADA system functions, AggreGate Platform fills it with exceptional features, for instance:
These features allow you to apply the system in multiple projects not typical for SCADA solutions. AggreGate SCADA/HMI, in particular, is used for manufacturer fleet telemetry, MES replacement, and monitoring of cell tower and data center engineering infrastructure (included in the AggreGate Data Center Supervisor solution).
In terms of AggreGate architecture and project building concept, AggreGate SCADA/HMI resembles most of other products. A typical project development cycle includes:
The AggreGate server, running on Linux, can collect data from OPC servers running on Windows over an IP network via the DCOM protocol. As a result, there is no longer any need to install the SCADA server and OPC server on a single computer.
There are no such notions as “project”, “development environment”, and “runtime environment” in AggreGate SCADA/HMI. According to its concept, a single primary server is installed on a worksite. During the initial deployment phase, system engineers can connect to the server locally or remotely to develop HMIs, create PLC user accounts, set up data storage, and so on. After this phase, the same server is used during commissioning and then on a regular basis, although migrating the system to another server is possible and simple.
The unified environment makes it possible to introduce modifications into the production server without any interruptions. In this case, one should:
One of the vital SCADA system parts is GUI Builder. Inherited from AggreGate Platform, GUI Builder assists in drawing and animating any HMIs containing both simple components (buttons, captions, text fields, lists, etc.) and complex ones (tables, multi-layer panes, tabbed panes, charts, geographical maps, dynamic SVG images, video windows, etc.).
Even though AggreGate GUI Builder is similar to other editors of this kind, it has an outstanding feature. Alongside the standard absolute layout of visual components, any pane can use a grid layout similar to an HTML table. Moreover, in a complex form with multiple panes (simple, multi-layer, tabbed, split panes), every pane can employ either absolute or grid layout.
Grid layout allows designing HMIs, data input forms, and dashboards that seamlessly adjust to any screen resolution. With absolute layout, components are scaled proportionally, so component height also increases, which leads to unacceptable results for almost all forms and dialogs.
HMIs are animated through bindings that copy data between server object properties and visual component properties in response to server and HMI events. The AggreGate expression language helps apply any operation to the replicated data on the fly (processing numbers, strings, dates and times, tables, etc.).
Any data processed by AggreGate can be utilized for reporting. Expression builder and integrated SQL-like query language help retrieve necessary indicators, and the system creates the optimal template for their visual representation. After this, you can customize the template using the report builder.
As for KPIs, you can configure alerts raised in response to critical object states or chains of events. The system can send alert notifications in almost any form (popup windows, sound notifications, e-mail messages, SMS). Automatically launched corrective actions can run autonomously or under operator control. The alert module supports other typical industrial control features: flapping detection, hysteresis, prioritization, acknowledgement, escalation, etc.
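Hysteresis, one of the features listed above, is what keeps an alert from flapping when a measured value oscillates around its threshold. A generic Python sketch of the idea (not AggreGate's actual implementation; thresholds are invented):

```python
class HysteresisAlert:
    """Raise when value exceeds `high`; clear only when it drops below `low`.

    The gap between `high` and `low` absorbs small oscillations around
    the threshold, preventing alert flapping.
    """

    def __init__(self, high: float, low: float):
        assert low < high, "clear threshold must be below raise threshold"
        self.high, self.low = high, low
        self.active = False

    def update(self, value: float) -> bool:
        if not self.active and value > self.high:
            self.active = True    # raise the alert
        elif self.active and value < self.low:
            self.active = False   # clear the alert
        return self.active
```

For a temperature alert with high=80 and low=75, a reading sequence of 79, 81, 78, 74 yields inactive, active, still active (78 is inside the hysteresis band), then cleared.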
AggreGate SCADA/HMI automates industrial processes, displays all necessary data in the operator center, provides visualization, saves information into a database, and creates reports ‒ in fact, everything that is expected from a SCADA system. It also promptly analyzes technological process efficiency and supports important optimization decisions, i.e. it partially performs the functions of MES software.
Usually, there are several SCADA installations operating simultaneously at large enterprises. Every installation has its own function in a certain workshop. The systems are logically bound by the production chain. Thus, their integration and automated KPIs transmission to MES/ERP levels are required. In AggreGate ecosystem, this is carried out by exchanging unified data model parts between servers with the help of distributed architecture (http://aggregate.tibbo.com/technology/architecture/distributed-architecture.html).
It often happens that a single site or project requires not only SCADA but also an IT infrastructure management system, building automation, physical access control, an automatic system for commercial accounting of power consumption, and other solutions in various combinations. AggreGate implements all these features within one installation and allows the modules to be bound together on a single server. Where might you encounter this? For example, in data centers, where active networking equipment, climate sensors, UPS, DGU, air conditioners, the water-cooling system, personnel access, and time and attendance should all be monitored. Other examples are cell towers, where transport-network radio-relay equipment, sector antenna parameters, intrusion detection sensors, and other systems must be controlled. In large warehouses, it is vital to monitor personnel access, loader behavior, and ventilation and lighting systems. Almost all large-scale sites can gain an advantage from merging various monitoring and management systems.
In our upcoming articles, we will describe the distinguishing features of our SCADA solution, various industrial automation problems and their solutions, as well as newsworthy projects we’ve taken part in.
Victor Polyakov, Managing Director, Tibbo Systems
Guest blog post by Sandeep Raut
I am excited to launch the second version of my Interactive Map of IoT Organizations. Thanks for all the support and encouragement from David Oro!
Here are the material changes from the first version:
I set up a Twitter account @EyeOhTee and although I still need to tweet more, you may see some interesting news on there and feel free to tweet out this post, plug plug!
Besides the basic data shown on the map, I also track many more attributes of each product. I will publish additional findings and analysis on this blog and here on IoT Central.
I hope you find the map useful, and I would love to hear if, and how, it has helped you. Whether you located a company in your area to collaborate with, found a supplier for a problem you are trying to solve, or are just learning like me, it will have made the time I spend on this worth it.
To illustrate the use of the MQTT library, we have created two simple Tibbo BASIC applications called "mqtt_publisher" and "mqtt_subscriber".
In our MQTT demo, the publisher device is monitoring three buttons (Tibbits #38). This is done through the keypad (kp.) object.
The three buttons on the publisher device correspond to the red, yellow, and green LEDs (Tibbits #39) on the subscriber device.
As buttons are pushed and released, the publisher device calls mqtt_publish() with the topics "LED/Red", "LED/Yellow", and "LED/Green". Each topic's data is either 0 for "button released" or 1 for "button pressed". The related code is in the on_kp() event handler.
The subscriber device subscribes to all three topics with a single call to mqtt_sub() and the line "LED/#". This is done once, inside callback_mqtt_connect_ok().
With every notification message received from the server, the subscriber device gets callback_mqtt_notif() invoked. The LEDs are turned on and off inside this function's body.
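The single "LED/#" subscription covers all three topics because of MQTT's topic-filter wildcards: "+" matches exactly one topic level, while "#" matches any number of remaining levels. The matching rule can be sketched in Python (this illustrates generic MQTT semantics, not the Tibbo library itself):

```python
def topic_matches(filter_: str, topic: str) -> bool:
    """Check an MQTT topic against a subscription filter.

    '+' matches exactly one level; '#' matches all remaining levels.
    """
    f_parts = filter_.split("/")
    t_parts = topic.split("/")
    for i, f in enumerate(f_parts):
        if f == "#":                       # matches everything from here on
            return True
        if i >= len(t_parts):              # filter is longer than the topic
            return False
        if f != "+" and f != t_parts[i]:   # literal level must match exactly
            return False
    return len(f_parts) == len(t_parts)
```

So "LED/#" matches "LED/Red", "LED/Yellow", and "LED/Green", while a message published to, say, "Button/Red" would not reach this subscriber.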
The demo was designed to run on our TPS3 boards, but you can easily modify it for other devices.
The easiest way to get the test hardware is to order "MQTTPublisher" and "MQTTSubscriber" TPS configurations.
You can also order all the parts separately:
Guest post by Ian Skerrett, Eclipse Foundation
In the previous article, Three Software Stacks Required to Implement IoT, we introduced the three software stacks that are required for any IoT solution: 1) constrained devices, 2) IoT gateways and smart devices, and 3) IoT cloud platforms. In part 2 of this series, we discuss how open source software communities, and in particular the Eclipse IoT open source community, are becoming key providers of the building blocks required to implement each of the three software stacks. Similar to how the LAMP (Linux/Apache HTTP Server/MySQL/PHP) stack has dominated web infrastructure, it is believed that a similar open source stack will dominate IoT deployments.
The separation of concerns brought by separating any IoT architecture into three stacks is a great step forward for building scalable and maintainable solutions. What’s more, building a software stack on top of open technologies helps achieve the following:
The open source community has become an active producer of technology for IoT solutions. Like the LAMP stack for websites, there are a set of open source projects that can be used as the building blocks for an IoT solution architecture.
The Eclipse IoT community is very active in providing the technology that can be used in each stack of an IoT solution. Eclipse IoT has 26 different open source projects that address different features of the IoT stacks. In addition to the Eclipse IoT projects, there are other open source projects that are also relevant to an IoT stack. The next few pages provide a brief summary of how Eclipse IoT as well as other open source projects can be used to implement IoT stacks.
Eclipse IoT provides a set of libraries that can be deployed on a constrained embedded device to provide a complete IoT development stack.
Within the Eclipse IoT community there are a variety of projects that work to provide the capabilities that an IoT gateway requires.
Eclipse Kura provides a general purpose middleware and application container for IoT gateway services. An IoT gateway stack based on Eclipse Kura would include the following:
Eclipse SmartHome provides an IoT gateway platform that is specifically focused on the home automation domain. An Eclipse SmartHome stack would include the following:
Eclipse 4DIAC provides an industrial-grade open source infrastructure for distributed industrial process measurement and control systems based on the IEC 61499 standard. 4DIAC is ideally suited for Industrie 4.0 and Industrial IoT applications in a manufacturing setting. The IEC 61499 standard defines a domain-specific modeling language for developing distributed industrial control solutions, providing a vendor-independent format and simplifying support for controller-to-controller communication.
The Eclipse IoT Community has a number of projects that are focused on providing the functionality required for IoT cloud platforms.
Eclipse Kapua is a modular platform providing the services required to manage IoT gateways and smart edge devices. Kapua provides a core integration framework and an initial set of core IoT services including a device registry, device management services, messaging services, data management, and application enablement.
The goal of Eclipse Kapua is to create a growing ecosystem of micro services through the extensions provided by other Eclipse IoT projects and organizations.
Eclipse OM2M is an IoT platform specific to the telecommunications industry, based on the oneM2M specification. It provides a horizontal Common Service Entity (CSE) that can be deployed in an M2M server, a gateway, or a device. Each CSE provides application enablement, security, triggering, notification, persistency, device interworking, and device management.
The Eclipse IoT community also has a number of standalone projects that provide functionality to address key features required for an IoT cloud platform. These projects can be used independently of Eclipse Kapua and over time some may be integrated into Kapua.
Connectivity and Protocol Support
Device Management and Device Registry
Event management and application enablement
Analytics and Visualization – Outside of the Eclipse IoT community there are many open source options for data analytics and visualization, including Apache Hadoop, Apache Spark, and Apache Storm. Within the Eclipse community, Eclipse BIRT provides support for dashboards and reporting of data stored in a variety of data repositories.
Development Tools and SDKs
An IoT solution requires a substantial amount of technology in the form of software, hardware, and networking. In this series of articles, we have defined the software requirements across three different stacks and the open source software that can be used to build them.
The last twenty years have proven that open source software and open source communities are key providers of technology for the software industry. The Internet of Things is following a similar trend, and it is expected that more and more IoT solutions will be built on open source software.
For the past five years, the Eclipse IoT community has been very active in building a portfolio of open source projects that companies and individuals use today to build their IoT solutions. If you are interested in participating, please join us and visit https://iot.eclipse.org.
Tibbo Project System (TPS) is a highly configurable, affordable, and innovative automation platform. It is ideal for home, building, warehouse, and production floor automation projects, as well as data collection, distributed control, industrial computing, and device connectivity applications.
Suppliers of traditional “control boxes” (embedded computers, PLCs, remote automation and I/O products, etc.) typically offer a wide variety of models differing in their I/O capabilities. Four serial ports and six relays. Two serial ports and eight relays. One serial port, four relays, and two sensor inputs. These lists go on and on, yet never seem to contain just the right mix of I/O functions you are looking for.
Rather than offering a large number of models, Tibbo Technology takes a different approach: Our Tibbo Project System (TPS) utilizes Tibbits® – miniature electronic blocks that implement specific I/O functions. Need three RS232 ports? Plug in exactly three RS232 Tibbits! Need two relays? Use a relay Tibbit. This module-based approach saves you money by allowing you to precisely define the features you want in your automation controller.
Here is a closer look at the process of building a custom Tibbo Project System.
Start with a Tibbo Project PCB (TPP)
A Tibbo Project PCB is the foundation of TPS devices.
Available in two sizes – medium and large – each board carries a CPU, memory, an Ethernet port, power input for +5V regulated power, and a number of sockets for Tibbit Modules and Connectors.
Add Tibbit® Blocks
Tibbits (as in “Tibbo Bits”) are blocks of prepackaged I/O functionality housed in brightly colored rectangular shells. Tibbits are subdivided into Modules and Connectors.
Want an ADC? There is a Tibbit Module for this. 24V power supply? Got that! RS232/422/485 port? We have this, and many other Modules, too.
Same goes for Tibbit Connectors. DB9 Tibbit? Check. Terminal block? Check. Infrared receiver/transmitter? Got it. Temperature, humidity, and pressure sensors? On the list of available Tibbits, too.
Assemble into a Tibbo Project Box (TPB)
Most projects require an enclosure. Designing one is a tough job. Making it beautiful is even tougher, and may also be prohibitively expensive. Finding or making the right housing is a perennial obstacle to completing low-volume and hobbyist projects.
Strangely, suppliers of popular platforms such as Arduino, Raspberry Pi, and BeagleBone do not bother with providing any enclosures, and available third-party offerings are primitive and flimsy.
Tibbo understands enclosure struggles and here is our solution: Your Tibbo Project System can optionally be ordered with a Tibbo Project Box (TPB) kit.
The ingenious feature of the TPB is that its top and bottom walls are formed by Tibbit Connectors. This eliminates a huge problem of any low-volume production operation – the necessity to drill holes and openings in an off-the-shelf enclosure.
The result is a neat, professional-looking housing every time, even for projects with a production quantity of one.
Like boards, our enclosures are available in two sizes – medium and large. Medium-size project boxes can be ordered in the LCD/keypad version, thus allowing you to design solutions incorporating a user interface.
Unique Online Configurator
To simplify the process of planning your TPS, we have created an Online Configurator.
The Configurator allows you to select the Tibbo Project Board (TPP), “insert” Tibbit Modules and Connectors into the board’s sockets, and specify additional options. These include choosing whether or not you wish to add a Tibbo Project Box (TPB) enclosure, LCD and keypad, DIN rail mounting kit, and so on. You can choose to have your system shipped fully assembled or as a parts kit.
The Configurator also makes sure you specify a valid system by watching out for errors. For example, it verifies that the total power consumption of your future TPS device does not exceed the available power budget. It also checks the placement of Tibbits, ensuring that there are no mistakes in their arrangement.
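The power-budget check described above can be sketched in a few lines. The Tibbit names and current-draw figures below are made up for illustration; they are not real catalog numbers.

```python
# Hypothetical sketch of the kind of validation the Online Configurator
# performs; Tibbit names and current draws are illustrative only.

def check_power_budget(tibbits, budget_ma):
    """Return (ok, total_ma): ok is True if the combined current draw
    of the selected Tibbits fits within the available power budget."""
    total = sum(draw for _, draw in tibbits)
    return total <= budget_ma, total

config = [("RS232 port", 20), ("relay", 35), ("ambient temperature", 2)]
ok, total = check_power_budget(config, budget_ma=500)
print(ok, total)   # True 57
```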
Completed configurations can be immediately ordered from our online store. You can opt to keep each configuration private, share it with other registered users, or make it public for everyone to see.
Develop your application
Like all programmable Tibbo hardware, Tibbo Project System devices are powered by Tibbo OS (TiOS).
Use our free Tibbo IDE (TIDE) software to create and debug sophisticated automation applications in Tibbo BASIC, Tibbo C, or a combination of the two languages.
To learn more about the Tibbo Project System click here
OPC – “Open Platform Communications” – is a set of standards and specifications for industrial communication. OPC specifies the transfer of real-time plant data between control devices from different manufacturers. It was designed to bridge process-control hardware and Windows-based software applications, reducing the duplicated effort otherwise performed by hardware manufacturers and their software partners.
The most common OPC specification, OPC Data Access (OPC DA), is supported by Tibbo OPC Server. Any device compatible with the Tibbo AggreGate protocol can be a data source. AggreGate is a white-label IoT integration platform that uses up-to-date network technologies to control, configure, monitor, and support electronic devices, along with distributed networks of such devices. It also helps you collect device data in the cloud, where you can slice and dice it to fit your needs. In addition, the platform lets other enterprise applications transparently access this data via the AggreGate server.
Tibbo OPC Server has the AggreGate network protocol built in. It can both interact with any Tibbo device via the AggreGate agent protocol and connect to an AggreGate server. Open-source implementations of the AggreGate agent protocol are available for Java, C#, and C++, so your connection scheme is not restricted to the AggreGate server or Tibbo devices.
A simple example: TPS reads Tibbit #29 (Ambient temperature meter) and forwards data to OPC server via AggreGate agent protocol.
A more complex example: we have a Windows-based PC controlling a wood processing machine by means of AggreGate server through the Modbus protocol. If Tibbo OPC server is linked with AggreGate server, the data from the machine is sent to Tibbo OPC server, and therefore, we can operate and monitor the machine via any OPC client.
Tibbo OPC Server features include:
- Compatibility with Windows XP/2003 or later (the Microsoft Visual C++ 2013 redistributable is required and installed automatically)
- Support for DA Asynchronous I/O 2.0 and Synchronous I/O with COM/DCOM technology
Tibbo OPC Server transmits the information on the Value, Quality and Timestamp of an item (tag) to the OPC Client applications. These fields are read from the AggreGate variables.
The process values are set to Bad [Configuration Error] quality if OPC Server loses communication with its data source (AggreGate Agent or AggreGate Server). The quality is set to Uncertain [Non-Specific] if the AggreGate variable value is empty.
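The quality rules above amount to a small mapping from data-source state to OPC quality. The following is an illustrative sketch, not the server's actual implementation:

```python
def opc_quality(connected, value):
    """Map AggreGate data-source state to an OPC quality string,
    following the rules described above (illustrative sketch only)."""
    if not connected:
        # OPC Server lost communication with its data source
        return "Bad [Configuration Error]"
    if value is None or value == "":
        # the AggreGate variable value is empty
        return "Uncertain [Non-Specific]"
    return "Good"

print(opc_quality(True, 21.5))   # Good
```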
The concordance table below maps AggreGate variable types to OPC data types:

| AggreGate Data Type | OPC Data Type |
| --- | --- |
| DATATABLE | VT_BSTR (by default) |
To learn more about Tibbo OPC server, click here
Guest post by James Stansberry.
A fascinating article from Philip N. Howard at George Washington University asserts that based on multiple sources, the number of connected devices surpassed the number of people on the planet in 2014. Further, it estimates that by 2020 we will be approaching 50 billion devices on the Internet of Things (IoT).
Philip N. Howard’s Study of Connected Devices
In other words, while humans will continue to connect their devices to the web in greater numbers, a bigger explosion will come from “things” connecting to the web that weren’t before, or which didn’t exist, or which now use their connection as more of a core feature.
The question is, how will these billions of things communicate between the end node, the cloud, and the service provider?
This article dives into that subject as it relates to a particular class of devices that are very low cost, battery-powered, and which must operate at least seven years without any manual intervention.
In particular, it looks at two emerging messaging protocols that address the needs of these “lightweight” IoT nodes. The first, MQTT, is a veteran by today's standards, dating back to 1999. The second, CoAP, is relatively new but gaining traction.
One definition of IoT is connecting devices to the internet that were not previously connected. A factory owner may connect high-powered lights. A triathlete may connect a battery-powered heart-rate monitor. A home or building automation provider may connect a wireless sensor with no line power source.
But the important thing here is that in all the above cases the “Thing” must communicate through the Internet to be considered an “IoT” node.
Since it must use the Internet, it must also adhere to the Internet Engineering Task Force’s (IETF) Internet Protocol Suite. However, the Internet has historically connected resource-rich devices with lots of power, memory and connection options. As such, its protocols have been considered too heavy to apply wholesale for applications in the emerging IoT.
Internet Protocol Suite Overview
There are other aspects of the IoT which also drive modifications to IETF’s work. In particular, networks of IoT end nodes will be lossy, and the devices attached to them will be very low power, saddled with constrained resources, and expected to live for years.
The requirements for both the network and its end devices might look like the table below. This new model needs new, lighter-weight protocols that don't require large amounts of resources.
MQTT and CoAP address these needs through small message sizes, message management, and lightweight message overhead. We look at each below.
Requirements for low-cost, power-constrained devices and associated networks
MQTT and CoAP allow for communication from Internet-based resource-rich devices to IoT-based resource-constrained devices. Both CoAP and MQTT implement a lightweight application layer, leaving much of the error correction to message retries, simple reliability strategies, or reliance on more resource rich devices for post-processing of raw end-node data.
Conceptual Diagram of MQTT and CoAP Communication to Cloud / Phone
IBM invented Message Queuing Telemetry Transport (MQTT) for satellite communications with oil field equipment. It had reliability and low power at its core and so made good sense to be applied to IoT networks.
The MQTT standard has since been adopted by the OASIS open standards society and released as version 3.1.1. It is also supported within the Eclipse community, as well as by many commercial companies who offer open source stacks and consulting.
MQTT uses a “publish/subscribe” model, and requires a central MQTT broker to manage and route messages among an MQTT network’s nodes. Eclipse describes MQTT as “a many-to-many communication protocol for passing messages between multiple clients through a central broker.”
MQTT uses TCP for its transport layer, which is characterized as “reliable, ordered and error-checked.”
MQTT’s “pub/sub” model scales well and can be power efficient. Brokers and nodes publish information and others subscribe according to the message content, type, or subject. (These are MQTT standard terms.) Generally the broker subscribes to all messages and then manages information flow to its nodes.
There are several specific benefits to the Pub/Sub model.
While the node and the broker need to have each other’s IP address, nodes can publish information and subscribe to other nodes’ published information without any knowledge of each other since everything goes through the central broker. This reduces overhead that can accompany TCP sessions and ports, and allows the end nodes to operate independently of one another.
A node can publish its information regardless of other nodes’ states. Other nodes can then receive the published information from the broker when they are active. This allows nodes to remain in sleepy states even when other nodes are publishing messages directly relevant to them.
A node in the midst of an operation is not interrupted to receive a published message to which it is subscribed. The broker queues the message until the receiving node is finished with its existing operation. This saves operating current and reduces repeated operations by avoiding interruptions of ongoing operations or sleepy states.
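The decoupling and queueing behavior described above can be illustrated with a toy in-memory broker. This is not a real MQTT implementation (no TCP, no packet format, no QoS), just the message-flow idea:

```python
from collections import defaultdict, deque

class MiniBroker:
    """Toy broker: queues each published message per subscriber so a
    'sleepy' node can collect it later (illustrative, not real MQTT)."""
    def __init__(self):
        self.queues = defaultdict(deque)   # subscriber -> pending messages
        self.subs = defaultdict(set)       # topic -> subscribers

    def subscribe(self, node, topic):
        self.subs[topic].add(node)

    def publish(self, topic, payload):
        # The publisher needs no knowledge of subscribers or their state;
        # everything goes through the broker.
        for node in self.subs[topic]:
            self.queues[node].append((topic, payload))

    def collect(self, node):
        # Called when a node wakes up; drains that node's queue.
        msgs, self.queues[node] = list(self.queues[node]), deque()
        return msgs

broker = MiniBroker()
broker.subscribe("thermostat", "kitchen/oven/temp")
broker.publish("kitchen/oven/temp", "180C")   # thermostat may be asleep
print(broker.collect("thermostat"))           # [('kitchen/oven/temp', '180C')]
```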
MQTT uses unencrypted TCP and is not “out-of-the-box” secure. But because it uses TCP it can – and should – use TLS/SSL internet security. TLS is a very secure method for encrypting traffic but is also resource intensive for lightweight clients due to its required handshake and increased packet overhead. For networks where energy is a very high priority and security much less so, encrypting just the packet payload may suffice.
The term “QoS” means other things outside of MQTT. In MQTT, “QoS” levels 0, 1, and 2 describe increasing levels of guaranteed message delivery.
QoS level 0 is commonly known as “fire and forget”: a single transmit burst with no guarantee of message arrival. This might be used for highly repetitive message types or non-mission-critical messages.
QoS level 1 attempts to guarantee that a message is received at least once by the intended recipient. Once a published message is received and understood by the intended recipient, it acknowledges the message with an acknowledgement message (PUBACK) addressed to the publishing node. Until the PUBACK is received by the publisher, it stores the message and retransmits it periodically. This type of message may be useful for a non-critical node shutdown.
QoS level 2 attempts to guarantee that the message is received and decoded by the intended recipient. This is the most secure and reliable MQTT level of QoS. The publisher sends a message announcing it has a QoS level 2 message. Its intended recipient gathers the announcement, decodes it, and indicates that it is ready to receive the message. The publisher then relays its message. Once the recipient understands the message, it completes the transaction with an acknowledgement. This type of message may be useful for turning lights or alarms on or off in a home.
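The QoS 1 store-and-retransmit behavior can be simulated in a few lines; `send` below stands in for one transmission attempt and returns True when a PUBACK comes back (a sketch of the message flow, not real MQTT):

```python
def publish_qos1(send, max_tries=10):
    """Simulate MQTT QoS 1: keep retransmitting the stored message until
    the recipient returns a PUBACK. `send` models one transmission
    attempt; returns the number of transmissions that were needed."""
    for attempt in range(1, max_tries + 1):
        if send():                 # message delivered, PUBACK received
            return attempt
    raise TimeoutError("no PUBACK received")

attempts = iter([False, False, True])         # first two transmissions lost
print(publish_qos1(lambda: next(attempts)))   # 3
```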
MQTT provides a “last will and testament (LWT)” message that can be stored in the MQTT broker in case a node is unexpectedly disconnected from the network. This LWT retains the node’s state and purpose, including the types of commands it published and its subscriptions. If the node disappears, the broker notifies all subscribers of the node’s LWT. And if the node returns, the broker notifies it of its prior state. This feature accommodates lossy networks and scalability nicely.
An MQTT node may subscribe to all messages within a given functionality. For example, a kitchen “oven node” may subscribe to all messages for “kitchen/oven/+”, with the “+” acting as a wildcard. This allows for a minimal amount of code (i.e., memory and cost). Another example: a node in the kitchen may be interested in all temperature information regardless of the end node’s functionality. In this case, “kitchen/+/temp” will collect any message in the kitchen from any node reporting “temp”. There are other equally useful MQTT wildcards for reducing code footprint and therefore memory size and cost.
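The `+` matching illustrated above (along with MQTT's multi-level `#` wildcard) can be implemented as a short topic-filter match. A sketch following the standard matching rules:

```python
def topic_matches(filter_, topic):
    """Return True if an MQTT topic matches a subscription filter.
    '+' matches exactly one topic level; '#' matches any remaining levels."""
    f, t = filter_.split("/"), topic.split("/")
    for i, part in enumerate(f):
        if part == "#":
            return True                     # matches this and all deeper levels
        if i >= len(t):
            return False                    # topic is shorter than the filter
        if part != "+" and part != t[i]:
            return False                    # literal level mismatch
    return len(f) == len(t)                 # no trailing topic levels left over

print(topic_matches("kitchen/oven/+", "kitchen/oven/temp"))    # True
print(topic_matches("kitchen/+/temp", "kitchen/fridge/temp"))  # True
print(topic_matches("kitchen/+/temp", "garage/door/temp"))     # False
```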
The use of a central broker can be a drawback for distributed IoT systems. For example, a system may start small with a remote control and window shade, thus requiring no central broker. Then as the system grows, for example adding security sensors, light bulbs, or other window shades, the network naturally grows and expands and may have need of a central broker. However, none of the individual nodes wants to take on the cost and responsibility as it requires resources, software and complexity not core to the end-node function.
In systems that already have a central broker, it can become a single point of failure for the complete network. For example, if the broker is a powered node without a battery back-up, an electrical outage takes the broker off-line and renders the network inoperable, even though the battery-powered nodes may continue operating.
TCP was originally designed for devices with more memory and processing resources than may be available in a lightweight IoT-style network. For example, the TCP protocol requires that connections be established in a multi-step handshake process before any messages are exchanged. This drives up wake-up and communication times, and reduces battery life over the long run.
TCP also works best when two communicating nodes hold their sockets open for each other continuously in a persistent session, which again may be difficult for energy- and resource-constrained devices.
Again, using TCP without session persistence can require incremental transmit time for connection establishment. For nodes with periodic, repetitive traffic, this can lead to lower operating life.
With the growing importance of the IoT, the Internet Engineering Task Force (IETF) took on lightweight messaging and defined the Constrained Application Protocol (CoAP). As defined by the IETF, CoAP is for “use with constrained nodes and constrained (e.g., low-power, lossy) networks.” The Eclipse community also supports CoAP as an open standard, and like MQTT, CoAP is commercially supported and growing rapidly with IoT providers.
CoAP is a client/server protocol and provides a one-to-one “request/report” interaction model with accommodations for multi-cast, although multi-cast is still in early stages of IETF standardization. Unlike MQTT, which has been adapted to IoT needs from a decades-old protocol, the IETF specified CoAP from the outset to support IoT with lightweight messaging for constrained devices operating in a constrained environment. CoAP is designed to interoperate with HTTP and the RESTful web through simple proxies, making it natively compatible to the Internet.
CoAP runs over UDP which is inherently and intentionally less reliable than TCP, depending on repetitive messaging for reliability instead of consistent connections. For example, a temperature sensor may send an update every few seconds even though nothing has changed from one transmission to the next. If a receiving node misses one update, the next will arrive in a few seconds and is likely not much different than the first.
UDP’s connectionless datagrams also allow for faster wake-up and transmit cycles as well as smaller packets with less overhead. This allows devices to remain in a sleepy state for longer periods of time conserving battery power.
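The connectionless exchange can be seen with plain UDP sockets: there is no handshake, and a single datagram is the whole conversation. A minimal loopback example (raw UDP, not CoAP itself):

```python
import socket

# Receiver: bind a UDP socket; no listen/accept handshake is needed.
rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
rx.bind(("127.0.0.1", 0))                 # let the OS pick a free port
port = rx.getsockname()[1]

# Sender: a single sendto() is the entire exchange; there is no
# connection to establish or tear down, so a constrained node can
# wake, transmit, and go back to sleep immediately.
tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
tx.sendto(b"temp=21.5", ("127.0.0.1", port))

data, addr = rx.recvfrom(1024)
print(data)                               # b'temp=21.5'
tx.close()
rx.close()
```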
A CoAP network is inherently one-to-one; however it allows for one-to-many or many-to-many multi-cast requirements. This is inherent in CoAP because it is built on top of IPv6 which allows for multicast addressing for devices in addition to their normal IPv6 addresses. Note that multicast message delivery to sleeping devices is unreliable or can impact the battery life of the device if it must wake regularly to receive these messages.
CoAP uses DTLS on top of its UDP transport protocol. Like TCP, UDP is unencrypted but can be – and should be – augmented with DTLS.
CoAP uses URI to provide a standard presentation and interaction expectations for network nodes. This allows a degree of autonomy in the message packets since the target node’s capabilities are partly understood by its URI details. In other words, a battery-powered sensor node may have one type of URI while a line-powered flow control actuator may have another. Nodes communicating to the battery-powered sensor node might be programmed to expect longer response times, more repetitive information, and limited message types. Nodes communicating to the line-powered flow control actuator might be programmed to expect rich, detailed messages, very rapidly.
Within the CoAP protocol, most messages are sent and received using the request/report model; however, there are other modes of operation that allow nodes to be somewhat decoupled. For example, CoAP has a simplified “observe” mechanism similar to MQTT’s pub/sub that allows nodes to observe others without actively engaging them.
As an example of the “observe” mode, node 1 can observe node 2 for specific transmission types, then any time node 2 publishes a relevant message, node 1 receives it when it awakens and queries another node. It’s important to note that one of the network nodes must hold messages for observers. This is similar to MQTT’s broker model except that there is no broker requirement in CoAP, and therefore no expectation of being able to hold or queue messages for observers.
There are currently draft additions to the standard which may provide a similar CoAP function to MQTT’s pub/sub model over the short-to-medium term. The leading candidate today is a draft proposal from Michael Koster, allowing CoAP networks to implement a pub/sub model like MQTT’s mentioned above.
MQTT is currently a more mature and stable standard than CoAP. It’s been Silicon Labs’ experience that it is easier to get an MQTT network up and running very quickly than a similar one using CoAP. That said, CoAP has tremendous market momentum and is rapidly evolving to provide a standardized foundation with important add-ons in the ratification pipeline now.
It is likely that CoAP will reach a similar level of stability and maturity as MQTT in the very near term. But the standard is evolving for now, which may present some troubles with interoperability.
CoAP’s “reliability” is the counterpart of MQTT’s QoS and offers a very simple choice between a “confirmable” message and a “non-confirmable” message. A confirmable message is acknowledged with an acknowledgement message (ACK) from the intended recipient. This confirms the message was received but stops short of confirming that its contents were decoded correctly, or at all. A non-confirmable message is “fire and forget.”
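The confirmable exchange, with the doubling retransmission timeout CoAP specifies (RFC 7252's ACK_TIMEOUT backoff), can be simulated as follows. This is a simplified sketch with no real CoAP packets:

```python
def send_confirmable(transmit, max_retransmit=4, ack_timeout=2.0):
    """Simulate CoAP's confirmable (CON) exchange: retransmit with a
    doubling timeout until an ACK arrives (simplified; no real packets).
    Returns the list of timeouts waited before the ACK came back."""
    timeouts = []
    for _ in range(max_retransmit + 1):   # initial try plus retransmissions
        if transmit():                    # ACK received for this CON
            return timeouts
        timeouts.append(ack_timeout)
        ack_timeout *= 2                  # exponential backoff
    raise TimeoutError("no ACK: message delivery failed")

attempts = iter([False, False, True])             # first two CONs lost
print(send_confirmable(lambda: next(attempts)))   # [2.0, 4.0]
```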
The two messaging protocols MQTT and CoAP are emerging as leading lightweight messaging protocols for the booming IoT market. Each has benefits and each has issues. As leaders in mesh networking where lightweight nodes are a necessary aspect of almost every network, Silicon Labs has implemented both protocols, including gateway bridging logic to allow for inter-standard communication.
Excellent source for MQTT information – http://www.hivemq.com/mqtt-essentials-wrap-up/
Specification - https://tools.ietf.org/html/rfc7252
Excellent source for CoAP information - http://coap.technology/
Specification – http://mqtt.org/2013/12/mqtt-for-sensor-networks-mqtt-sn
Excellent white paper on using MQTT, CoAP, and other messaging protocols – http://www.prismtech.com/sites/default/files/documents/MessagingComparsionNov2013USROW_vfinal.pdf
This article originally appeared here.
Transportation has become one of the most frequently highlighted areas where the internet of things can improve our lives. Specifically, a lot of people are excited about the IoT's potential to further the progress toward entire networks of self-driving cars. We hear a lot about the tech companies that are involved in building self-driving cars, but it's the IoT that will actually allow these vehicles to operate. In fact, CNET quoted one IoT expert just last year as saying that because of the expanding IoT, self-driving cars will rule the roads by 2030.
On a much smaller scale, there are also some niche applications of the IoT that are designed to fix specific problems on the road. For instance, many companies have looked to combat distracted driving by teenagers through IoT-related tools. As noted by PC World, one device called the Smartwheel monitors teens' driving activity by sensing when they're keeping both hands on the wheel. The device sounds an alert when a hand comes off the wheel and communicates to a companion app that compiles reports on driver performance. This is a subtle way in which the IoT helps young drivers develop better habits.
In a way, these examples cover both extremes of the effect the IoT is having on drivers. One is a futuristic idea that's being slowly implemented to alter the very nature of road transportation. The other is an application for individuals meant to make drivers safer one by one. But there are also some IoT-related tools that fall somewhere in the middle of the spectrum. One is an exciting new app that seeks to make the roads safer for the thousands of shipping fleet drivers operating on a daily basis.
At first this might sound like a niche category. However, the reality is that the innumerable companies and agencies relying on shipping and transportation fleets have a ton of drivers to take care of. That means supervising vehicle performance, safety, and more for each and every one of them. That process comprises a significant portion of road activity, particularly in cities and on highways. These operations can be simplified and streamlined through Networkfleet Driver, which Verizon describes as a tool to help employees manage routes, maintenance, communication, and driving habits all in one place.
The app can communicate up-to-date routing changes or required stops, inform drivers of necessary vehicle repairs or upkeep, and handle communication from management. It can also make note of dangerous habits (like a tendency to speed or make frequent sudden stops), helping the driver to identify bad habits and helping managers to recommend safer performance. All of this is accomplished through various IoT sensors on vehicles interacting automatically with the app, and with systems that can be monitored by management.
The positive effect, while difficult to quantify, is substantial. Fleet drivers make up a significant portion of road activity, and through the use of the IoT we can make sure that the roads are safer for everyone.
The Internet of Things has raised concerns over safety. Nowadays, it is possible to control your home using your smartphone. In the coming years, mobile devices will work as a remote control to operate all the things in your house.
Some devices exhibit one or several vulnerabilities that hackers can exploit to infiltrate them and the whole network of the connected home. For instance:
1. During configuration, data – including the device ID and MAC address - is sometimes transmitted in plain text.
2. The communication between the device and the app passes unencrypted through the manufacturer’s servers.
3. The hotspot is poorly secured with a weak username and password and sometimes remains active after configuration.
4. The device comes pre-installed with a Telnet client carrying default credentials.
With rising cases of identity theft and vishing, it has become absolutely necessary to install any of these 5 free tools in your smartphone in order to keep your data safe from hackers.
1- LastPass - It lets you store passwords in a secure vault that is easy to use, searchable and organized the way you like. It is perhaps the safest vault available online today that lets you store password data for unlimited websites.
2- Lookout - This tool offers security for today's mobile generation. It is a free app that protects your iOS or Android device around the clock from mobile threats such as unsecure WiFi networks, malicious apps, fraudulent links, etc. It has a worldwide network of 100 million mobile sensors, the world's largest mobile data set, and smarter machine intelligence to keep your smartphone secure from all kinds of threats.
3- Authy - This app generates secure 2 step verification tokens on your device and protects your account from hackers and hijackers by adding an additional layer of security. Moreover, it offers secure cloud backup, multi device synchronization and multi factor authentication. 2 step authentication is the best kind of security available today that ensures your accounts don't get hacked.
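Two-step verification tokens like the ones Authy generates follow TOTP (RFC 6238): an HMAC over a time-step counter, truncated to six digits. A minimal sketch, checkable against the RFC's published test vector:

```python
import hmac
import hashlib
import struct
import time

def totp(secret: bytes, for_time=None, step=30, digits=6):
    """RFC 6238 time-based one-time password (HMAC-SHA1 variant)."""
    now = for_time if for_time is not None else time.time()
    counter = int(now // step)                       # 30-second time steps
    msg = struct.pack(">Q", counter)                 # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                       # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test secret; at T=59s the 6-digit code is "287082"
print(totp(b"12345678901234567890", for_time=59))   # 287082
```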
4- BullGuard - It protects your smartphone from all forms of viruses and malware. With inbuilt, rigorous anti-theft functionality, BullGuard enables you to lock, locate, and wipe your device remotely in case it gets lost or stolen. It runs automatic scans so that your protection remains up to date. Moreover, it doesn't drain your battery.
5- Prey - It is a lightweight theft protection software that lets you keep an eye over your mobile devices in case you have more than one and you are leaving one in your home. Prey lets you recover the phone in case it gets stolen. After installing the software on your laptop, tablet or phone, Prey will sleep silently in the background awaiting your command. Once remotely triggered from your Prey account, your device will gather and deliver detailed evidence back to you, including a picture of who's using it – often the crucial piece of data that police officers need to take action.
As if the Internet of Things (IoT) was not complicated enough, the Marketing team at Cisco introduced its Fog Computing vision in January 2014, also known as Edge Computing for other more purist vendors.
Given Cisco's frantic activity in its Internet of Everything (IoE) marketing campaigns, it is not surprising that many bloggers have resorted to shock headlines on this subject, taking advantage of the IoT hype.
I hope this post helps you better understand the role of Fog Computing in the IoT Reference Model and how companies are using IoT intelligent gateways in the fog to connect the "Things" to the cloud, with some application areas and examples of Fog Computing.
As the Internet of Things proliferates, businesses face a growing need to analyze data from sources at the edge of a network, whether mobile phones, gateways, or IoT sensors. Cloud computing has a disadvantage: It can’t process data quickly enough for modern business applications.
The IoT owes its explosive growth to the connection of physical things and operation technologies (OT) to analytics and machine learning applications, which can help glean insights from device-generated data and enable devices to make “smart” decisions without human intervention. Currently, such resources are mostly being provided by cloud service providers, where the computation and storage capacity exists.
However, despite its power, the cloud model is not applicable to environments where operations are time-critical or internet connectivity is poor. This is especially true in scenarios such as telemedicine and patient care, where milliseconds can have fatal consequences. The same can be said about vehicle to vehicle communications, where the prevention of collisions and accidents can’t afford the latency caused by the roundtrip to the cloud server.
“The cloud paradigm is like having your brain command your limbs from miles away — it won’t help you where you need quick reflexes.”
Moreover, having every device connected to the cloud and sending raw data over the internet can have privacy, security and legal implications, especially when dealing with sensitive data that is subject to separate regulations in different countries.
IoT nodes are closer to the action, but for the moment, they do not have the computing and storage resources to perform analytics and machine learning tasks. Cloud servers, on the other hand, have the horsepower, but are too far away to process data and respond in time.
The fog layer is the perfect junction where there are enough compute, storage and networking resources to mimic cloud capabilities at the edge and support the local ingestion of data and the quick turnaround of results.
The variety of IoT systems and the need for flexible solutions that respond to real-time events quickly make Fog Computing a compelling option.
A study by IDC estimates that by 2020, 10 percent of the world’s data will be produced by edge devices. This will further drive the need for more efficient fog computing solutions that provide low latency and holistic intelligence simultaneously.
“Computing at the edge of the network is, of course, not new -- we've been doing it for years to solve the same issue with other kinds of computing.”
The Fog Computing or Edge Computing is a paradigm championed by some of the biggest IoT technology players, including Cisco, IBM, and Dell and represents a shift in architecture in which intelligence is pushed from the cloud to the edge, localizing certain kinds of analysis and decision-making.
Fog Computing enables quicker response times, unencumbered by network latency, as well as reduced traffic, selectively relaying the appropriate data to the cloud.
The concept of Fog Computing attempts to transcend some of these physical limitations. With Fog Computing processing happens on nodes physically closer to where the data is originally collected instead of sending vast amounts of IoT data to the cloud.
The OpenFog Consortium was founded on the premise that open architectures and standards are essential for the success of a ubiquitous Fog Computing ecosystem.
The collaboration among tech giants such as ARM, Cisco, Dell, GE, Intel, Microsoft and Schneider Electric defining an Open, Interoperable Fog Computing Architecture is without any doubt good news for a vibrant supplier ecosystem.
The OpenFog Reference Architecture is an architectural evolution from traditional closed systems and the burgeoning cloud-only models to an approach that emphasizes computation nearest the edge of the network when dictated by business concerns or the functional requirements of critical applications.
The OpenFog Reference Architecture consists of putting micro data centers or even small, purpose-built high-performance data analytics machines in remote offices and locations in order to gain real-time insights from the data collected, or to promote data thinning at the edge, by dramatically reducing the amount of data that needs to be transmitted to a central data center. Without having to move unnecessary data to a central data center, analytics at the edge can simplify and drastically speed analysis while also cutting costs.
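The "data thinning" idea described above can be sketched in a few lines of code. The following is a minimal, purely illustrative example (the function name, window size, and thresholds are all made up, not part of any OpenFog specification): an edge node summarizes raw sensor readings into compact per-window records and forwards only those summaries, plus any out-of-range readings, upstream to the central data center.

```python
# Hypothetical sketch of "data thinning" on a fog/edge node: raw readings
# are aggregated locally, and only compact summary records plus individual
# out-of-range alerts are relayed upstream to the cloud.

def thin(readings, window=5, low=0.0, high=50.0):
    """Reduce a stream of raw readings to window summaries and alerts."""
    upstream = []
    for i in range(0, len(readings), window):
        chunk = readings[i:i + window]
        upstream.append({
            "type": "summary",
            "count": len(chunk),
            "min": min(chunk),
            "max": max(chunk),
            "avg": sum(chunk) / len(chunk),
        })
        # Out-of-range values are still forwarded individually as alerts,
        # so anomalies are not lost in the aggregation.
        upstream.extend(
            {"type": "alert", "value": v} for v in chunk if not low <= v <= high
        )
    return upstream

# Ten raw readings shrink to two summaries plus one alert for the spike.
data = [21.0, 21.5, 22.0, 21.8, 21.9, 22.1, 80.0, 22.3, 22.2, 22.0]
out = thin(data)
```

In a real deployment the windowing, thresholds, and transport would of course come from the platform in use; the point is only that the volume of data crossing the network drops by an order of magnitude while the actionable signal survives.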
Read more: http://bigdata.sys-con.com/node/3809885
The applications of fog computing are many, and it is powering crucial parts of IoT ecosystems, especially in industrial environments. See below some use cases and examples.
See more at: Why Edge Computing Is Here to Stay: Five Use Cases By Patrick McGarry
What is the future of fog computing?
The current trend shows that fog computing will continue to grow in usage and importance as the Internet of Things expands and conquers new grounds. With inexpensive, low-power processing and storage becoming more available, we can expect computation to move even closer to the edge and become ingrained in the same devices that are generating the data, creating even greater possibilities for inter-device intelligence and interactions. Sensors that only log data might one day become a thing of the past.
Janakiram MSV wondered whether Fog Computing will be the next big thing in the Internet of Things. It seems obvious that while the cloud is a perfect match for the Internet of Things, there are other scenarios and IoT solutions that demand low-latency ingestion and immediate processing of data, and there Fog Computing is the answer.
Does the fog eliminate the cloud?
Fog computing improves efficiency and reduces the amount of data that needs to be sent to the cloud for processing. But it’s here to complement the cloud, not replace it.
The cloud will continue to have a pertinent role in the IoT cycle. In fact, with fog computing shouldering the burden of short-term analytics at the edge, cloud resources will be freed to take on the heavier tasks, especially where the analysis of historical data and large datasets is concerned. Insights obtained by the cloud can help update and tweak policies and functionality at the fog layer.
And there are still many cases where the centralized, highly efficient computing infrastructure of the cloud will outperform decentralized systems in performance, scalability and costs. This includes environments where data needs to be analyzed from largely dispersed sources.
“It is the combination of fog and cloud computing that will accelerate the adoption of IoT, especially for the enterprise.”
In essence, Fog Computing allows for big data to be processed locally, or at least in closer proximity to the systems that rely on it. Newer machines could incorporate more powerful microprocessors, and interact more fluidly with other machines on the edge of the network. While fog isn’t a replacement for cloud architecture, it is a necessary step forward that will facilitate the advancement of IoT, as more industries and businesses adopt emerging technologies.
Fog computing is far from a panacea. One of the immediate costs associated with this method pertains to equipping end devices with the necessary hardware to perform calculations remotely and independent of centralized data centers. Some vendors, however, are in the process of perfecting technologies for that purpose. The tradeoff is that by investing in such solutions immediately, organizations will avoid frequently updating their infrastructure and networks to deal with ever increasing data amounts as the IoT expands.
There are certain data types and use cases that actually benefit from centralized models. Data that carries the utmost security concerns, for example, will require the secure advantages of a centralized approach or one that continues to rely solely on physical infrastructure.
Though the benefits of Fog Computing are undeniable, the Cloud has a secure future in IoT for most companies with less time-sensitive computing needs and for analyzing all the data gathered by IoT sensors.
Thanks in advance for your Likes and Shares.
Thoughts? Comments?
Originally posted on Data Science Central
Printed electronics are being touted as the next big thing in the Internet of Things (IoT), a technology rightly regarded as a boon of advancing technology. Silicon-based sensors were the first to be associated with IoT technology. These sensors have numerous applications, such as tracking data from airplanes, wind turbines, engines, and medical devices, among other internet-connected devices.
However, these silicon-based sensors are not suitable for several other applications. Bendable packaging and premium items are some of the applications where embedded sensors do not work. For such applications, printed electronics fit the need. Using sensor technology, information is transferred to smart labels that can be attached to packages and tracked in real time.
Some Applications of Printed Sensor Technology
Grocery Industry: While the bar code is the standard technology used in the grocery sector, it is limited in the data it can store. Also, for some products, packaging can run up to 30-40% of the cost. For such needs, a printed sensor is the most apt solution, providing real-time information about a product’s temperature, moisture, location, movement, and much more while keeping packaging costs down. Companies can check these parameters to validate freshness and prevent substantial spoilage. Smart labels are also used to validate the authenticity of products.
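The freshness check described above amounts to comparing smart-label readings against acceptable ranges. Here is a minimal sketch of that logic; the parameter names and threshold values are hypothetical, not taken from any real product specification:

```python
# Illustrative freshness check against smart-label readings.
# The limits below are made-up example values, not a real cold-chain spec.

FRESHNESS_LIMITS = {
    "temperature_c": (0.0, 8.0),   # hypothetical acceptable transport range
    "moisture_pct": (0.0, 75.0),   # hypothetical acceptable humidity range
}

def check_freshness(readings):
    """Return a list of limit violations for one smart-label reading dict."""
    violations = []
    for key, (low, high) in FRESHNESS_LIMITS.items():
        value = readings.get(key)
        if value is not None and not low <= value <= high:
            violations.append(f"{key}={value} outside [{low}, {high}]")
    return violations

# A package that warmed up in transit would be flagged for review.
label = {"temperature_c": 12.5, "moisture_pct": 60.0}
issues = check_freshness(label)
```

In practice the thresholds would come from the product's packaging requirements and the readings from the label itself; the sketch only shows how a handful of printed-sensor parameters translate into an automated spoilage alert.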
Healthcare: The use of smart labels enables manufacturers and logistics firms to track the usage and disposal of pharmaceuticals and to control inventory. Smart labels on patients’ clothing make it possible to monitor body temperature, the dampness of adult diapers, or the state of bandages in assisted-living scenarios.
Logistics: Until recently, radio frequency identification (RFID) was the standard tag used by logistics companies to identify shipping crates carrying perishable products. RFID is increasingly being replaced by smart labels that enable tracking of individual items, allowing companies to track products at the item level rather than at the shipping-container level.
Biosensors Lead Printed and Flexible Sensors Market
As per the research study, the global market for printed and flexible sensors is estimated to grow at a fast pace, which is attracting investors to the market. This is expected to create potential opportunities for commercialization and product innovation. In addition, several new players are projected to enter the market in order to gain a competitive advantage. In 2013, the global printed and flexible sensors market stood at US$6.28 bn and is projected to be worth US$7.51 bn by the end of 2020. The market is expected to register a healthy 2.50% CAGR between 2012 and 2020, as per the study.
The rapid growth in individual application segments and several benefits over the conventional sensors are some of the key factors driving the global market for printed and flexible sensors. In addition, the developing global market for Internet of Things is further anticipated to fuel the growth of the market in the next few years. On the flip side, several challenges in conductive ink printing are estimated to hamper the growth of the market for printed and flexible sensors in the near future.
Biosensors are most extensively used with the largest market share in the global market for printed and flexible sensors. Glucose strips incorporated with a biosensor are one of the most sought after ways to track and monitor glucose levels among diabetics. Thus, it accounts as a multi-billion dollar segment in the global market for printed and flexible sensors. To evaluate and monitor working of the heart, kidney diseases, and cancer are the other emerging applications where printed biosensors technology is being utilized.
The expanding automobile industry holds promise for piezoelectric printed flexible sensors used in performance testing during production. Owing to these varied applications, the global market for printed and flexible sensors is projected to expand at a slow but steady 2.5% CAGR over the forecast period starting from 2012.