I am excited to launch the 2nd version of my Interactive Map of IoT Organizations. Thanks for all the support and encouragement from David Oro!
Here are the material changes from the first version:
- Each organization now has its specific address instead of being just city-based
- Now includes the Founder(s) of the organization and a link to more information about them. This is in addition to the “Founded” year which was in the first version
- Cleanup of categories. Folks are still trying to determine what it means to be an IoT Platform. For me, it’s most important to focus on standards and integration of systems as there will be organizations that specialize in one aspect of an IoT platform whether it’s the analytics, rules engine, device management, workflow, or visualization functions.
- The initial launch of the map had 246 organizations; this new map has 759. Thanks to the many people on LinkedIn and in blog comments who suggested their companies, which accounted for 180 additional organizations. The other 330+ organizations I have been finding on my own by trawling news, Twitter, IoT conference websites, and the “Partners” sections of each organization.
I set up a Twitter account @EyeOhTee and although I still need to tweet more, you may see some interesting news on there. Feel free to tweet out this post, plug plug!
Besides the basic data shown on the map, I also track many more attributes of each product. I will publish additional findings and analysis on this blog and here on IoT Central.
I hope you find the map useful, and I would love to hear if, and how, it has helped you. Whether you located a company in your area to collaborate with, found a supplier for a problem you are trying to solve, or are just learning like me, it will have made the time I spend on this worthwhile.
Air conditioning is an energy-hungry business, and air conditioners are expensive to run. In general, AC systems, older ones in particular, do not have any real temperature feedback. You set the temperature on your remote, but alas, it has absolutely nothing to do with the actual temperature in the room. Even when it gets colder outside, many aircons keep blasting cold air into your space. As a result, you have to constantly readjust the temperature for optimal comfort throughout the day.
No doubt, AC systems are improving day by day, but there are still old systems that cannot be updated. In some instances, it is simply impossible to invest in a new system. Ripping the old aircon out and installing a new one can be a catch-22: a basic aircon has many parts, typically split between an outside and an inside unit, so replacing it may require a drastic interior renovation. At the Tibbo office in Taipei, we are trapped in exactly this situation. We just have to get by with the AC system we’ve got. Our aircon is controlled with a dozen infrared remotes lying around.
Some time ago, we set out to create a management system for our dated HVAC system. We used Tibbo Project System (TPS) for this endeavor. Our spec for the aircon controller consisted of exactly two items:
- The aircon must run or not run depending on whether the lights are on or off. The formula is simple: no lights = no people = no need to run the AC.
- The device must monitor the room temperature and stop the aircon whenever the room cools down to the preset point.
To achieve our goal, we used a TPS2L system equipped with these Tibbits:
- Ambient temperature probe
- IR code processor Tibbit (#26)
- IR front-end Tibbit (#27)
- Ambient light sensor Tibbit (#28)
Let us tell you about the probe. The probe replaces the ambient temperature meter (Tibbit #29). It is nice to have the meter built right into the TPS. The problem is, the meter is affected by the internal heat of the TPS system itself. This influence is especially noticeable for the TPS2L device – its LCD really warms up the box! The new probe has the same circuit as the Tibbit #29, with the added benefit of being external to the TPS device. Now the measurements are accurate.
Here is a look at the items you need to set up in the menu:
IR commands. This is where you train your IR code processor to be able to transmit two commands: “On,” and “Off.” For the “On” command, use the lowest temperature that your aircon’s remote allows you to set (usually 16 degrees C). The logic here is that when you need to lower the temperature in the room you can use the coldest temperature setting, and when the room cools down to the preset temperature, the aircon is turned off. So really, you only need two commands.
Target temperature. You don’t need to set it here. There are dedicated buttons on the main screen.
Pre-cool start time. This is something we added along the way. Now it is possible to turn the aircon on once a day, even before we all arrive at the office. Our day starts at 9 am. We set this time to 8:30 am, and by the time we get in, the office is nice and cool (while the scorching Taipei summer keeps on raging outside). The pre-cool timer is hardcoded to 45 minutes. If the lights are still off at 9:15, the aircon is turned off.
Brightness threshold. This is the brightness that the TPS will consider to correspond to “lights on.” The value is not expressed in any standard measurement units; it’s just the value the Tibbit #28 returns. So, how do you know what number to set here? Simple: the brightness is displayed on the main screen, like this: “Light level: 718”. Note the value with the lights off and on, then set the threshold to a value midway between the two.
Temp. meas. adjustment. This is useful when you choose to use the Tibbit #29. As we’ve explained above, its measurements are affected by the internal heat of the TPS itself. You can use a regular thermometer to determine the measurement error. For example, if your thermometer reads 25C and the TPS shows 28C, then you must adjust the temperature by 3 degrees C. The data returned by the new external probe needs no adjustment.
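Taken together, the light rule, the temperature rule, and the two IR commands boil down to a small decision function. The sketch below is illustrative Python only; the actual device is programmed in Tibbo BASIC/C, and all names and threshold values here are assumptions, not our firmware.

```python
# Sketch of the aircon controller's decision logic (illustrative Python;
# the real device runs Tibbo BASIC/C). Names and values are assumptions.

LIGHT_THRESHOLD = 500      # midway between "lights off" and "lights on" readings
TARGET_TEMP_C = 25         # preset temperature, set via the main-screen buttons

def decide(light_level, temp_c, ac_is_on):
    """Return the IR command to send ("On"/"Off"), or None to do nothing."""
    if light_level < LIGHT_THRESHOLD:        # no lights = no people = no AC
        return "Off" if ac_is_on else None
    if temp_c > TARGET_TEMP_C and not ac_is_on:
        return "On"                          # "On" = coldest setting (16 C)
    if temp_c <= TARGET_TEMP_C and ac_is_on:
        return "Off"                         # room cooled to the preset point
    return None

# Example: lights on, room too warm, aircon currently off -> send "On"
print(decide(light_level=718, temp_c=28, ac_is_on=False))  # -> On
```

The pre-cool timer simply calls the same "On" command at 8:30 and forces "Off" at 9:15 if the lights never came on.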
In phase 2 of this project we will connect our aircon controller to an AggreGate server. It will be possible to control the system via a smartphone app, which we are going to design for this purpose. Now you know why our configuration menu has items like Network, AggreGate, etc. Stay tuned!
Tibbo Project System (TPS) is a highly configurable, affordable, and innovative automation platform. It is ideal for home, building, warehouse, and production floor automation projects, as well as data collection, distributed control, industrial computing, and device connectivity applications.
Suppliers of traditional “control boxes” (embedded computers, PLCs, remote automation and I/O products, etc.) typically offer a wide variety of models differing in their I/O capabilities. Four serial ports and six relays. Two serial ports and eight relays. One serial port, four relays, and two sensor inputs. These lists go on and on, yet never seem to contain just the right mix of I/O functions you are looking for.
Rather than offering a large number of models, Tibbo Technology takes a different approach: Our Tibbo Project System (TPS) utilizes Tibbits® – miniature electronic blocks that implement specific I/O functions. Need three RS232 ports? Plug in exactly three RS232 Tibbits! Need two relays? Use a relay Tibbit. This module-based approach saves you money by allowing you to precisely define the features you want in your automation controller.
Here is a closer look at the process of building a custom Tibbo Project System.
Start with a Tibbo Project PCB (TPP)
A Tibbo Project PCB is the foundation of TPS devices.
Available in two sizes – medium and large – each board carries a CPU, memory, an Ethernet port, power input for +5V regulated power, and a number of sockets for Tibbit Modules and Connectors.
Add Tibbit® Blocks
Tibbits (as in “Tibbo Bits”) are blocks of prepackaged I/O functionality housed in brightly colored rectangular shells. Tibbits are subdivided into Modules and Connectors.
Want an ADC? There is a Tibbit Module for this. 24V power supply? Got that! RS232/422/485 port? We have this, and many other Modules, too.
Same goes for Tibbit Connectors. DB9 Tibbit? Check. Terminal block? Check. Infrared receiver/transmitter? Got it. Temperature, humidity, and pressure sensors? On the list of available Tibbits, too.
Assemble into a Tibbo Project Box (TPB)
Most projects require an enclosure. Designing one is a tough job. Making it beautiful is even tougher, and may also be prohibitively expensive. Finding or making the right housing is a perennial obstacle to completing low-volume and hobbyist projects.
Strangely, suppliers of popular platforms such as Arduino, Raspberry Pi, and BeagleBone do not bother with providing any enclosures, and available third-party offerings are primitive and flimsy.
Tibbo understands enclosure struggles and here is our solution: Your Tibbo Project System can optionally be ordered with a Tibbo Project Box (TPB) kit.
The ingenious feature of the TPB is that its top and bottom walls are formed by Tibbit Connectors. This eliminates a huge problem of any low-volume production operation – the necessity to drill holes and openings in an off-the-shelf enclosure.
The result is a neat, professional-looking housing every time, even for projects with a production quantity of one.
Like boards, our enclosures are available in two sizes – medium and large. Medium-size project boxes can be ordered in the LCD/keypad version, thus allowing you to design solutions incorporating a user interface.
Unique Online Configurator
To simplify the process of planning your TPS we have created an Online Configurator.
The Configurator allows you to select the Tibbo Project Board (TPP), “insert” Tibbit Modules and Connectors into the board’s sockets, and specify additional options. These include choosing whether or not you wish to add a Tibbo Project Box (TPB) enclosure, LCD and keypad, DIN rail mounting kit, and so on. You can choose to have your system shipped fully assembled or as a parts kit.
The Configurator makes sure you specify a valid system by watching out for errors. For example, it verifies that the total power consumption of your future TPS device does not exceed the available power budget. The Configurator also checks the placement of Tibbits, ensuring that there are no mistakes in their arrangement.
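A power-budget check of this kind is easy to picture. The sketch below is a hypothetical illustration only: the budget figure and per-Tibbit current draws are made up, not Tibbo specifications.

```python
# Illustrative sketch of a configurator-style power-budget check.
# All numbers below are made-up assumptions, not real Tibbit ratings.

POWER_BUDGET_MA = 1000  # assumed available current at +5 V, in mA

# hypothetical per-Tibbit current draw, in mA
TIBBIT_DRAW_MA = {"RS232": 25, "relay": 90, "temp_probe": 5}

def validate(tibbits):
    """Return (ok, total_mA) for a proposed list of Tibbit names."""
    total = sum(TIBBIT_DRAW_MA[name] for name in tibbits)
    return total <= POWER_BUDGET_MA, total

ok, total = validate(["RS232", "RS232", "relay"])
print(ok, total)  # -> True 140
```

The real Configurator performs this kind of validation automatically as you place each Tibbit, alongside the socket-placement checks.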
Completed configurations can be immediately ordered from our online store. You can opt to keep each configuration private, share it with other registered users, or make it public for everyone to see.
Develop your application
Like all programmable Tibbo hardware, Tibbo Project System devices are powered by Tibbo OS (TiOS).
Use our free Tibbo IDE (TIDE) software to create and debug sophisticated automation applications in Tibbo BASIC, Tibbo C, or a combination of the two languages.
To learn more about the Tibbo Project System, click here.
OPC – «Open Platform Communications» – is a set of standards and specifications for industrial telecommunication. OPC specifies the transfer of real-time plant data between control devices from various producers. OPC was designed to bridge process control hardware and Windows-based software applications through a common interface. OPC aimed to reduce the amount of duplicated effort performed by hardware manufacturers and their software partners.
The most typical OPC specification, OPC Data Access (OPC DA), is supported by Tibbo OPC Server. Any device compatible with the Tibbo AggreGate protocol can be a data source. AggreGate is a white-label IoT integration platform using up-to-date network technologies to control, configure, monitor and support electronic devices, along with distributed networks of such electronic devices. It also helps you collect device data in the cloud, where you can slice and dice it in alignment with your needs. In addition, the platform lets other enterprise applications transparently access this data via the AggreGate server.
Tibbo OPC Server has the AggreGate network protocol built in. It can both interact with any Tibbo devices via the AggreGate agent protocol and connect to an AggreGate server. Open-source implementations of the AggreGate agent protocol are published for the Java, C#, and C++ programming languages, so your connection scheme is not restricted to the AggreGate server or Tibbo devices only.
A simple example: TPS reads Tibbit #29 (Ambient temperature meter) and forwards data to OPC server via AggreGate agent protocol.
A more complex example: we have a Windows-based PC controlling a wood-processing machine by means of an AggreGate server through the Modbus protocol. If Tibbo OPC Server is linked with the AggreGate server, the data from the machine is sent to Tibbo OPC Server, and therefore we can operate and monitor the machine via any OPC client.
- Compatibility with Windows XP/2003 or later (Microsoft Visual C++ 2013 redistributable is required - installed automatically)
- Support of DA Asynchronous I/O 2.0 and Synchronous I/O with COM/DCOM technology
Tibbo OPC Server transmits the information on the Value, Quality and Timestamp of an item (tag) to the OPC Client applications. These fields are read from the AggreGate variables.
The process values are set to Bad [Configuration Error] quality if OPC Server loses communication with its data source (AggreGate Agent or AggreGate Server). The quality is set to Uncertain [Non-Specific] if the AggreGate variable value is empty.
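These two rules can be sketched in a few lines. The quality constants below are the standard OPC DA quality codes; the function and argument names are assumptions for illustration only (the actual server is a Windows COM application, not Python).

```python
# Sketch of the quality-mapping rules described above, using standard
# OPC DA quality codes. Function/argument names are illustrative assumptions.

OPC_QUALITY_GOOD = 0x00C0               # Good [Non-Specific]
OPC_QUALITY_BAD_CONFIG_ERROR = 0x0004   # Bad [Configuration Error]
OPC_QUALITY_UNCERTAIN = 0x0040          # Uncertain [Non-Specific]

def item_quality(connected, value):
    """Map the AggreGate data-source state to an OPC item quality word."""
    if not connected:                   # lost the Agent/Server connection
        return OPC_QUALITY_BAD_CONFIG_ERROR
    if value is None:                   # AggreGate variable value is empty
        return OPC_QUALITY_UNCERTAIN
    return OPC_QUALITY_GOOD

print(hex(item_quality(True, 23.5)))   # -> 0xc0
```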
The following table shows the concordance between AggreGate variables and OPC data types:
| AggreGate Data Type | OPC Data Type |
| --- | --- |
| DATATABLE | OPC VT_BSTR (by default) |
To learn more about Tibbo OPC server, click here
When you think about consumer cloud platforms, which ones come to mind? Amazon AWS, Microsoft Azure and Google’s Cloud Platform are likely to be at the top of your list. But what about industrial cloud platforms? Which ones rise to the top for you? Well, GE’s Predix, Siemens’ MindSphere, and the recently announced Honeywell Sentience are likely to be on any short list of industrial cloud platforms. But they aren’t the only ones in this space. Cisco’s Jasper, IBM’s Watson IoT, Meshify, Uptake, and at least 20 others are competing to manage all those billions of sensors that are expected to make up the Industrial Internet of Things (IIoT). Which one do you think will end up dominating the market?
A Brief Overview of Cloud Computing
To answer the above question, let's start with a very brief overview of cloud computing to put industrial cloud platforms in their proper context. Cloud platforms are one of several services that cloud computing providers offer, with the main ones being: Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and Software as a Service (SaaS).
Three Main Services of Cloud Computing
Software as a Service (SaaS)
Platform as a Service (PaaS)
Infrastructure as a Service (IaaS)
Industrial Cloud Platforms by Industrial Companies
Industrial cloud platforms have a much deeper focus on operational technology than consumer platforms. They are designed to allow data gathering throughout manufacturing production processes, in order to improve performance as well as predict failures before they happen. Here are three industrial cloud platforms from long-time industrial companies:
Industrial Cloud Platforms by Software Development Firms
Industrial companies aren’t the only ones developing industrial cloud platforms. There’s already a long list of industrial cloud platforms available from software development firms. Here are a few worth mentioning:
Table 1: The List of Industrial Cloud Platform Providers
IBM Watson IoT
SAP Hana Cloud
What Do You Think Will Be The Future of the Industrial Cloud?
Will there be a dominant Industrial Cloud Platform? It's hard to say at this point. GE Predix is hoping for 500,000 connected devices by the beginning of 2017, while C3 IoT is said to have 70 million devices connected to its platform already.
The Crowded Cloud of Industrial Platforms
Will this market consolidate around a few big-name platforms, or will a lesser known provider be the winner and take all?
This post originally appeared here.
The Internet of Things is surrounded by a lot of buzz, and the buzz is there for a reason. It is one of the most revolutionary technologies, and it is the closest we’ve come to predicting our future. Of course, the IoT is not based on spells and witchcraft (it’s way scarier than that), but on machine-to-machine communication, cloud computing and networks of small sensors, which collect and analyze data. In this article we’ll share some of the things and processes that will change in the IoT era.
Home security systems
Today you can monitor home security cameras from your smartphone screen. More advanced home security systems go even further. They come with different types of sensors that control air quality, motion, sound, vibration and temperature. These systems use machine learning to determine the normal activity in your home and they send alerts to your smartphone when something out of the ordinary occurs. Because of their smart machine learning approach, home security systems based on the IoT concept drastically reduce the incidence of false alarms.
Even our beds will become smart. At the moment you can buy several types of sleep trackers, from ones that come in the form of a bracelet and measure your heart rate and blood pressure, to smart mattresses that can connect to home automation systems, prepare your bed temperature, track your heart and breathing rate and wake you up in the morning. These special mattresses also collect information about your sleep and give you recommendations for improving your bed rest.
Recently several companies released Wi-Fi enabled sensors that can connect to the home electrical panel and control and track your energy use. These small sensors recognize all appliances and gadgets by their “power signatures” and can monitor the energy use and break it down to every single device. They will allow you to have a deep look into your monthly energy use, to recognize and deal with critical points and to save money on utility bills. Like many other home security and home automation systems, these sensors learn to interpret the activity of your home devices and send warnings when incidents happen.
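As a rough illustration of the idea, identifying an appliance can be reduced to matching a measured draw against a table of known signatures. Real products use far richer features than steady-state wattage; the names and numbers below are made up for the sketch.

```python
# Toy sketch of "power signature" matching. A real system uses transient
# waveforms, not just wattage; all names and values here are invented.

SIGNATURES_W = {"fridge": 150, "kettle": 2000, "tv": 90}

def identify(measured_w, tolerance_w=30):
    """Return the appliance whose signature is closest, within tolerance."""
    name, watts = min(SIGNATURES_W.items(),
                      key=lambda kv: abs(kv[1] - measured_w))
    return name if abs(watts - measured_w) <= tolerance_w else None

print(identify(148))   # -> fridge
print(identify(600))   # -> None (no known signature is close enough)
```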
All home appliances and systems
All-in-one smart home automation systems can control several home appliances at once. People can use them to turn their porch lights on and off when they are on vacation and to preheat their home or their oven before they arrive home from work. These systems also control various conditions in your home and use smart sensors and machine learning to create the perfect comfort. Some home automation systems also come with a Bluetooth speaker and a microphone and they can work as voice assistants.
Self-storage monitoring protects stored goods from climate changes, theft and other unforeseen incidents. New storage monitoring systems based on the IoT concept control storage lighting, air-conditioning and security. They also use sensors to track variables that are critical for perishable goods like temperature and humidity. You can find these smart storages in many different cities around the world.
Construction site managers can use IoT solutions to monitor the work of heavy machinery and the movement of construction employees. This basically means that they don’t need to leave their trailer office. Sensors track the movement of supply and dumping trucks through geo-location technology and ensure that everything works as scheduled. If there are any irregularities in the work of heavy machinery, supply trucks or employees, the site manager will be instantly notified by smartphone push notification.
In many cities the only connection between emergency vehicles and their headquarters is established through old-fashioned radios. This offers limited control in emergency situations. Advanced telematics has already appeared in many emergency vehicles around the world. This technology allows lone drivers to receive real-time updates from the environment they are entering, including speeding, harsh driving events, or incidents involving other team members. Employees at headquarters also receive information about the emergency vehicle’s hours of service, speed, siren state and location. This way, they can easily schedule the vehicle’s regular maintenance and minimize its downtime.
Internet of Things is the biggest tech trend that is happening at the moment. It will completely rock our world and bring a lot of positive disruption to every segment of our lives. Soon, we’ll be able to control all of our possessions through one smart app, which will leave us more time to focus on ourselves and our friends and family.
Transportation has become one of the most frequently highlighted areas where the internet of things can improve our lives. Specifically, a lot of people are excited about the IoT's potential to further the progress toward entire networks of self-driving cars. We hear a lot about the tech companies that are involved in building self-driving cars, but it's the IoT that will actually allow these vehicles to operate. In fact, CNET quoted one IoT expert just last year as saying that because of the expanding IoT, self-driving cars will rule the roads by 2030.
On a much smaller scale, there are also some niche applications of the IoT that are designed to fix specific problems on the road. For instance, many companies have looked to combat distracted driving by teenagers through IoT-related tools. As noted by PC World, one device called the Smartwheel monitors teens' driving activity by sensing when they're keeping both hands on the wheel. The device sounds an alert when a hand comes off the wheel and communicates to a companion app that compiles reports on driver performance. This is a subtle way in which the IoT helps young drivers develop better habits.
In a way, these examples cover both extremes of the effect the IoT is having on drivers. One is a futuristic idea that's being slowly implemented to alter the very nature of road transportation. The other is an application for individuals meant to make drivers safer one by one. But there are also some IoT-related tools that fall somewhere in the middle of the spectrum. One is an exciting new app that seeks to make the roads safer for the thousands of shipping fleet drivers operating on a daily basis.
At first this might sound like a niche category. However, the reality is that the innumerable companies and agencies relying on shipping and transportation fleets have a ton of drivers to take care of. That means supervising vehicle performance, safety, and more for each and every one of them. That process comprises a significant portion of road activity, particularly in cities and on highways. These operations can be simplified and streamlined through Networkfleet Driver, which Verizon describes as a tool to help employees manage routes, maintenance, communication, and driving habits all in one place.
The app can communicate up-to-date routing changes or required stops, inform drivers of necessary vehicle repairs or upkeep, and handle communication from management. It can also make note of dangerous habits (like a tendency to speed or make frequent sudden stops), helping the driver to identify bad habits and helping managers to recommend safer performance. All of this is accomplished through various IoT sensors on vehicles interacting automatically with the app, and with systems that can be monitored by management.
The positive effect, while difficult to quantify, is substantial. Fleet drivers make up a significant portion of road activity, and through the use of the IoT we can make sure that the roads are safer for everyone.
Almost three years ago, I wrote in my IoT blog the posts “Are you prepared to answer M2M/IoT security questions of your customers?” and “There is no consensus how best to implement security in IoT,” given the importance that security has in fulfilling the promise of the Internet of Things (IoT).
And during this time I have been sharing my opinion about the key role of IoT security with other international experts in articles such as “What is the danger of taking M2M communications to the Internet of Things?” and at events (CyCon, IoT Global Innovation Forum 2016).
Security has always been a tradeoff between cost and benefit
I am honest when I say that I do not know how McKinsey calculated the total impact that IoT will have on the world economy in 2025, even for one specific sector, or whether they took the security challenge into account, but it hardly matters: “The opportunities generated by IoT far outweigh the risks.”
With increased IoT opportunity comes increased security risk, and a flourishing IoT security market (according to Zion Research, the IoT security market will grow to USD 464 million by 2020).
A decade of breaches and the biggest attack target yet is looming
We all know the negative impact that news about cyber-attacks has on society and enterprises. According to ICS-CERT (US), reported incidents have grown from 39 in 2010 to 295 in 2015, in less than a decade.
In a survey published by AT&T, the company logged a 458% increase in vulnerability scans of IoT devices over the last two years.
It is a temptation for hackers to test their skills on connected objects, whether connected cars or smart home appliances. But I am afraid they will go far beyond that, attacking smart factories, smart transportation infrastructure, or smart grids.
With the millions of unprotected devices out there, the multitude of IoT networks and IoT platforms, and developers lacking security expertise, I am one more who believes the biggest attack target yet is looming.
With the Internet of Things, we should be prepared for new attacks and we must design new essential defences.
The complex IoT Security Threat Map from Beecham Research provides an overlaid summary of the full set of threat and vulnerability analyses that is used to help clients shape their strategies. This Threat Map summarizes the top five features from each of those analyses:
1. External threats and the top internal vulnerabilities of IoT applications;
2. The need for robust authentication, authorisation, and confidentiality;
3. The features and interactions between multiple networks used together in IoT;
4. The complexities of combining Service Sector optimised capabilities of differing Service Enablement Platforms;
5. The implementation and defences of edge device operating systems, chip integration and the associated Root of Trust.
The OWASP Internet of Things Project is designed to help manufacturers, developers, and consumers better understand the security issues associated with the Internet of Things, and to enable users in any context to make better security decisions when building, deploying, or assessing IoT technologies.
The project looks to define a structure for various IoT sub-projects such as Attack Surface Areas, Testing Guides and Top Vulnerabilities. Below are the top IoT vulnerabilities.
A Subex white paper presenting their IoT solution adds some real examples of these vulnerabilities.
Insecure Web Interface: To exploit this vulnerability, an attacker uses weak credentials or captures plain-text credentials to access the web interface. The impact results in data loss, denial of service, and can lead to complete device takeover. An insecure web interface was exploited by hackers to compromise Asus routers in 2014 that were shipped with a default admin user name and password.
Insufficient Authentication/Authorization: Exploitation of this vulnerability involves an attacker brute-forcing weak passwords or poorly protected credentials to access a particular interface. The impact from this kind of attack is usually denial of service and can also lead to compromise of the device. This vulnerability was exploited by ethical hackers to access the head unit of a Jeep Cherokee via WiFi connectivity. The WiFi password for the Jeep Cherokee unit is generated automatically based upon the time when the car and head unit are started up. By guessing the time and using brute-force techniques, the hackers were able to gain access to the head unit.
Insecure Network Services: An attacker uses vulnerable network services to attack the device itself or bounce attacks off the device. Attackers can then use the compromised devices to facilitate attacks on other devices. This vulnerability was exploited by hackers who used 900 CCTV cameras globally to launch a DoS attack on a cloud platform service.
Lack of Transport Encryption: A lack of transport encryption allows third parties to view data transmitted over the network. The impact of this kind of attack can lead to compromise of the device or user accounts depending upon the data exposed. This weakness was exhibited by ToyTalk’s server domain, which was susceptible to the POODLE attack. ToyTalk helps the Hello Barbie doll talk to a child by uploading the child’s words to a server and providing an appropriate response after processing them. Though there was no reported hack here, such a vulnerability could easily lead to one.
Privacy Concerns: Hackers use different vectors to view and/or collect personal data which is not properly protected. The impact of this attack is the collection of personal user data. This vulnerability was exemplified by the VTech hack, wherein hackers were able to steal personal data of parents as well as children using VTech’s tablet.
Who owns the problem?
With the IoT we are creating a very complicated supply chain with lots of stakeholders, so it's not always clear 'who owns the problem'. By way of example, take a simple home application with no super-installers around: if you buy a central heating system and controller which requires you to push a button to increase the temperature, then if it stops working you contact the company who supplied it. But if you buy a central heating boiler from one company, a wireless temperature controller from another, download a mobile app from another, and have a weather station from yet another supplier, then whose job is it to make sure it's secure and reliable? The simple cop-out is to say 'the homeowner bought the bits and connected them together, therefore it's their responsibility' – well, I'm sorry, but that isn't good enough!
Manufacturers can't divest themselves of responsibility simply because the homeowner bought several component parts from different retailers. As a manufacturer you have a responsibility to ensure that your product is secure and reliable when used in any of the possible scenarios and use cases, which means that manufacturers need to work together to ensure interoperability – we all own the problem!
This might come as a shock to some companies/industries but at some level even competitors have to work together to agree and implement architectures and connectivity that is secure and reliable. Standardization is a good example of this, if you look at the companies actively working together in ISO, ETSI, Bluetooth SIG etc. then they are often fierce competitors but they all recognize the need to work together to define common, secure and reliable platforms around which they can build interoperable products.
If cybersecurity is already top of mind for many organizations, is the alarm about the lack of security in IoT justified?
In these last three years of IoT evangelization, there has been no event or article that did not collect questions or comments on IoT security and privacy.
The good news is that, according to the AT&T State of IoT Security survey 2015, 85% of global organizations are considering, exploring, or implementing an IoT strategy; the bad news is that only 10% are fully confident that their connected devices are secure.
Source: AT&T State of IoT Security survey 2015
And if we consider Auth0’s report, it scares me that only 10% of developers believe that most IoT devices on the market right now have the necessary security in place.
In a publication from EY titled “Cybersecurity and the IoT”, the company defines three stages to classify where organizations currently stand in implementing IoT security.
Stage 1: Activate
Organizations need to have a solid foundation of cybersecurity. This comprises a comprehensive set of information security measures, which will provide basic (but not good) defense against cyber-attacks. At this stage, organizations establish their fundamentals — i.e., they “activate” their cybersecurity.
Stage 2: Adapt
Organizations change — whether for survival or for growth. Threats also change. Therefore, the foundation of information security measures must adapt to keep pace and match the changing business requirements and dynamics otherwise they will become less and less effective over time. At this stage, organizations work to keep their cybersecurity up-to-date; i.e., they “adapt” to changing requirements.
Stage 3: Anticipate
Organizations need to develop tactics to detect and deter potential cyber-attacks. They must know exactly what they need to protect their most valuable assets, and rehearse appropriate responses to likely attack/incident scenarios: this requires a mature cyber threat intelligence capability, a robust risk assessment methodology, an experienced incident response mechanism and an informed organization. At this stage, organizations are more confident about their ability to handle predictable threats and unexpected attacks; i.e., they “anticipate” cyber-attacks.
What enterprises need to do
If you are thinking only about the benefits of IoT, without considering security as a key component of your strategy, you will probably regret it very soon. Below are some recommendations, whether you are about to start your IoT journey or have already begun. I hope it is not too late for wise advice.
With the proliferation and variety of IoT Devices, IoT Networks, IoT Platforms, Clouds, and applications, during the next few years we will see new vulnerabilities and a variety of new attacks. The progress in the security technologies and processes that prevent them will be key for the adoption of IoT in enterprises and consumers.
The future Internet of Things world will demand an end-to-end security approach to protect both physical and digital assets. The ecosystems of this fragmented market must understand the need for Security by Design and avoid the temptation to reduce cost at the expense of security.
Do not stop demanding security when you buy a connected product or use an IoT service; the pressures of time to market, competitive prices and scarce resources must not become an excuse for failing to offer secure IoT solutions to enterprises, consumers and citizens.
Thanks in advance for your Likes and Shares
Thoughts? Comments?
As if the Internet of Things (IoT) was not complicated enough, the marketing team at Cisco introduced its Fog Computing vision in January 2014; other, more purist vendors know it as Edge Computing.
Given Cisco's frantic activity in its Internet of Everything (IoE) marketing campaigns, it is not surprising that many bloggers have resorted to shock headlines on the subject, taking advantage of the IoT hype.
I hope this post helps you better understand the role of Fog Computing in the IoT Reference Model and, through some application areas and examples, how companies are using intelligent IoT gateways in the fog to connect the "Things" to the cloud.
The problem with the cloud
As the Internet of Things proliferates, businesses face a growing need to analyze data from sources at the edge of a network, whether mobile phones, gateways, or IoT sensors. Cloud computing has a disadvantage here: it can't always process data quickly enough for time-critical business applications.
The IoT owes its explosive growth to the connection of physical things and operational technology (OT) to analytics and machine learning applications, which can help glean insights from device-generated data and enable devices to make “smart” decisions without human intervention. Currently, such resources are mostly provided by cloud service providers, where the computation and storage capacity exists.
However, despite its power, the cloud model is not applicable to environments where operations are time-critical or internet connectivity is poor. This is especially true in scenarios such as telemedicine and patient care, where milliseconds can have fatal consequences. The same can be said about vehicle to vehicle communications, where the prevention of collisions and accidents can’t afford the latency caused by the roundtrip to the cloud server.
“The cloud paradigm is like having your brain command your limbs from miles away — it won’t help you where you need quick reflexes.”
Moreover, having every device connected to the cloud and sending raw data over the internet can have privacy, security and legal implications, especially when dealing with sensitive data that is subject to separate regulations in different countries.
IoT nodes are closer to the action, but for the moment, they do not have the computing and storage resources to perform analytics and machine learning tasks. Cloud servers, on the other hand, have the horsepower, but are too far away to process data and respond in time.
The fog layer is the perfect junction where there are enough compute, storage and networking resources to mimic cloud capabilities at the edge and support the local ingestion of data and the quick turnaround of results.
The variety of IoT systems and the need for flexible solutions that respond to real-time events quickly make Fog Computing a compelling option.
Fog Computing: oh my goodness, another layer in IoT!
A study by IDC estimates that by 2020, 10 percent of the world’s data will be produced by edge devices. This will further drive the need for more efficient fog computing solutions that provide low latency and holistic intelligence simultaneously.
“Computing at the edge of the network is, of course, not new -- we've been doing it for years to solve the same issue with other kinds of computing.”
The Fog Computing or Edge Computing is a paradigm championed by some of the biggest IoT technology players, including Cisco, IBM, and Dell and represents a shift in architecture in which intelligence is pushed from the cloud to the edge, localizing certain kinds of analysis and decision-making.
Fog Computing enables quicker response times, unencumbered by network latency, as well as reduced traffic, selectively relaying the appropriate data to the cloud.
The concept of Fog Computing attempts to transcend some of these physical limitations. With Fog Computing processing happens on nodes physically closer to where the data is originally collected instead of sending vast amounts of IoT data to the cloud.
The OpenFog Consortium
The OpenFog Consortium was founded on the premise that open architectures and standards are essential to the success of a ubiquitous Fog Computing ecosystem.
The collaboration among tech giants such as ARM, Cisco, Dell, GE, Intel, Microsoft and Schneider Electric to define an open, interoperable Fog Computing architecture is without doubt good news for a vibrant supplier ecosystem.
The OpenFog Reference Architecture is an architectural evolution from traditional closed systems and the burgeoning cloud-only models to an approach that pushes computation nearest the edge of the network whenever business concerns or the critical functional requirements of the application dictate it.
The OpenFog Reference Architecture consists of putting micro data centers or even small, purpose-built high-performance data analytics machines in remote offices and locations in order to gain real-time insights from the data collected, or to promote data thinning at the edge, by dramatically reducing the amount of data that needs to be transmitted to a central data center. Without having to move unnecessary data to a central data center, analytics at the edge can simplify and drastically speed analysis while also cutting costs.
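The "data thinning" idea can be sketched in a few lines of Python. This is a minimal illustration, not OpenFog code: the function name, threshold and record fields are all invented for the example.

```python
def thin(readings, threshold):
    """Aggregate raw sensor readings at the fog node; forward only a
    compact summary plus any anomalous values that exceed the threshold."""
    anomalies = [r for r in readings if r > threshold]
    return {
        "count": len(readings),
        "mean": sum(readings) / len(readings),
        "max": max(readings),
        "anomalies": anomalies,  # only these raw values travel to the cloud
    }

# 1,000 raw temperature samples reduced to one small record
batch = [20.0 + (i % 7) * 0.1 for i in range(1000)]
payload = thin(batch, threshold=20.5)
```

The central data center still sees the overall statistics and every outlier, but the bulk of the raw stream never leaves the edge.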
Benefits of Fog Computing
- Frees up network capacity: Fog computing uses much less bandwidth, so it doesn't cause bottlenecks and similar congestion. Less data movement on the network frees up capacity that can then be used for other things.
- It is truly real-time: Fog computing responds much faster than any cloud computing architecture we know today. Since all data analysis is done on the spot, it is a true real-time concept, making it a perfect match for the needs of the Internet of Things.
- Boosts data security: Collected data is more secure when it doesn't travel. It also makes data storage much simpler, because the data stays in its country of origin; sending data abroad might violate certain laws.
- Analytics is done locally: Fog computing lets developers access the most important IoT data from other locations while keeping piles of less important information in local storage.
Disadvantages of Fog Computing
- Some companies don't like their data being off their premises: With fog computing, lots of data is stored on the devices themselves (often located outside company offices), which part of the developer community perceives as a risk.
- The whole system sounds a little confusing: A concept involving a huge number of devices, located all around the world, each storing, analyzing and sending its own data, can seem utterly confusing.
Read more: http://bigdata.sys-con.com/node/3809885
Examples of Fog Computing
The applications of fog computing are many, and it is powering crucial parts of IoT ecosystems, especially in industrial environments. See below some use cases and examples.
- Thanks to the power of fog computing, New York-based renewable energy company Envision has been able to obtain a 15 percent productivity improvement from the vast network of wind turbines it operates. The company is processing as much as 20 terabytes of data at a time, generated by 3 million sensors installed on the 20,000 turbines it manages. Moving computation to the edge has enabled Envision to cut down data analysis time from 10 minutes to mere seconds, providing them with actionable insights and significant business benefits.
- Plat One is another firm using fog computing to improve data processing for the more than 1 million sensors it manages. The company uses the Cisco-ParStream platform to publish real-time sensor measurements for hundreds of thousands of devices, including smart lighting and parking, port and transportation management and a network of 50,000 coffee machines.
- In Palo Alto, California, a $3 million project will enable traffic lights to integrate with connected vehicles, hopefully creating a future in which people won’t be waiting in their cars at empty intersections for no reason.
- In transportation, it’s helping semi-autonomous cars assist drivers in avoiding distraction and veering off the road by providing real-time analytics and decisions on driving patterns.
- It also can help reduce the transfer of gigantic volumes of audio and video recordings generated by police dashboard and video cameras. Cameras equipped with edge computing capabilities could analyze video feeds in real time and only send relevant data to the cloud when necessary.
See more at: Why Edge Computing Is Here to Stay: Five Use Cases By Patrick McGarry
What is the future of fog computing?
The current trend shows that fog computing will continue to grow in usage and importance as the Internet of Things expands and conquers new grounds. With inexpensive, low-power processing and storage becoming more available, we can expect computation to move even closer to the edge and become ingrained in the same devices that are generating the data, creating even greater possibilities for inter-device intelligence and interactions. Sensors that only log data might one day become a thing of the past.
Janakiram MSV wondered whether Fog Computing will be the next big thing in the Internet of Things. It seems obvious that while the cloud is a perfect match for the Internet of Things, there are other scenarios and IoT solutions that demand low-latency ingestion and immediate processing of data, and there Fog Computing is the answer.
Does the fog eliminate the cloud?
Fog computing improves efficiency and reduces the amount of data that needs to be sent to the cloud for processing. But it’s here to complement the cloud, not replace it.
The cloud will continue to have a pertinent role in the IoT cycle. In fact, with fog computing shouldering the burden of short-term analytics at the edge, cloud resources will be freed to take on the heavier tasks, especially where the analysis of historical data and large datasets is concerned. Insights obtained by the cloud can help update and tweak policies and functionality at the fog layer.
And there are still many cases where the centralized, highly efficient computing infrastructure of the cloud will outperform decentralized systems in performance, scalability and costs. This includes environments where data needs to be analyzed from largely dispersed sources.
“It is the combination of fog and cloud computing that will accelerate the adoption of IoT, especially for the enterprise.”
In essence, Fog Computing allows for big data to be processed locally, or at least in closer proximity to the systems that rely on it. Newer machines could incorporate more powerful microprocessors, and interact more fluidly with other machines on the edge of the network. While fog isn’t a replacement for cloud architecture, it is a necessary step forward that will facilitate the advancement of IoT, as more industries and businesses adopt emerging technologies.
'The Cloud' is not Over
Fog computing is far from a panacea. One of the immediate costs associated with this method pertains to equipping end devices with the necessary hardware to perform calculations remotely and independent of centralized data centers. Some vendors, however, are in the process of perfecting technologies for that purpose. The tradeoff is that by investing in such solutions immediately, organizations will avoid frequently updating their infrastructure and networks to deal with ever increasing data amounts as the IoT expands.
There are certain data types and use cases that actually benefit from centralized models. Data that carries the utmost security concerns, for example, will require the secure advantages of a centralized approach or one that continues to rely solely on physical infrastructure.
Though the benefits of Fog Computing are undeniable, the Cloud has a secure future in IoT for most companies with less time-sensitive computing needs and for analysing all the data gathered by IoT sensors.
Thanks in advance for your Likes and Shares
Thoughts? Comments?
- Identity: name, location, gender, age and other demographic data
- Relationships: their influence, connections, associations with others
- Current activity: orders, complaints, deliveries, returns
- History: contacts, campaigns, processes, cases across all lines of business and channels
- Value: which products or services they are associated with, including history
- Flags: prompts to give context, e.g. churn propensity, up-sell options, fraud risk, mood of last interactions, complaint record, frequency of contact
- Actions: expected, likely or essential steps based on who they are and the fact they are calling now
- All customer touch point data in a single repository for fast queries
- Next best actions or recommendations for customers
- All key metrics in a single location for business users to know and advise customers
- Intuitive and customizable dashboards for quick insights
- Real time hyper personalized customer interaction
- Enhanced customer loyalty
Customer 360º helps achieve a Single View of Customer across channels (online, stores, marketplaces), devices (wearables, mobile, tablets, laptops) and interactions (purchase, posts, likes, feedback, service).
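The attribute groups listed above can be pictured as one consolidated record per customer. The sketch below is illustrative only; the field names and the next-best-action rules are assumptions for the example, not any vendor's schema.

```python
from dataclasses import dataclass, field

@dataclass
class Customer360:
    # Identity and demographics
    name: str
    location: str
    # Relationships: connections and associations with others
    connections: list = field(default_factory=list)
    # Current activity across channels
    open_orders: list = field(default_factory=list)
    complaints: list = field(default_factory=list)
    # Flags that give an agent instant context
    churn_risk: float = 0.0
    upsell_options: list = field(default_factory=list)

    def next_best_action(self) -> str:
        """Toy rule set: resolve complaints first, then retain, then sell."""
        if self.complaints:
            return "resolve-complaint"
        if self.churn_risk > 0.7:
            return "retention-offer"
        if self.upsell_options:
            return "propose-upsell"
        return "standard-service"

c = Customer360(name="Ada", location="Palo Alto", churn_risk=0.8)
```

Because every touch point lands in the same record, the "next best action" can be computed the moment the customer calls, rather than assembled from several systems.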
EDITOR'S NOTE: This story originally appeared on the A10 Networks blog.
A pair of distributed denial-of-service (DDoS) attacks against high-profile targets last week rank among the largest DDoS attacks on record. And a common thread has emerged: these attacks are leveraging botnets comprising hundreds of thousands of unsecured Internet of Things (IoT) devices.
OVH attack reaches 1 Tbps
European Web hosting company OVH confirmed last week that it suffered a string of DDoS attacks that neared the 1 Tbps mark. On Twitter, OVH CTO Octave Klaba said the attacks OVH suffered were “close to 1 Tbps” and noted that the flood of traffic was fueled by a botnet made up of nearly 150,000 digital video recorders and IP cameras capable of sending 1.5 Tbps in DDoS traffic. Klaba said OVH servers were hit by multiple simultaneous attacks exceeding 100 Gbps each, totaling more than 1 Tbps. The most severe single attack documented by OVH reached 93 million packets per second (Mpps) and 799 Gbps.
“Last days, we got lot of huge DDoS. Here, the list of ‘bigger that 100Gbps’ only. You can see the simultaneous DDoS are close to 1Tbps!” (Octave Klaba / Oles, @olesovhcom, September 22, 2016)
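A quick back-of-the-envelope check of those two peak figures shows what kind of traffic this was: 799 Gbps spread over 93 million packets per second works out to roughly a kilobyte per packet.

```python
bits_per_second = 799e9      # OVH's reported peak bit rate (799 Gbps)
packets_per_second = 93e6    # reported peak packet rate (93 Mpps)

# bits -> bytes, then spread over the packet rate
avg_packet_bytes = bits_per_second / 8 / packets_per_second
# ~1,074 bytes per packet, consistent with large payloads
# (video-capable devices) rather than a bare SYN flood
```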
SC Magazine UK quoted security researcher Mustafa Al-Bassam as saying the DDoS attack against OVH is “the largest DDoS attack ever recorded.”
Krebs gets slammed
The OVH attack came on the heels of another gargantuan DDoS incident, this one targeting respected cybersecurity blog Krebsonsecurity.com, which knocked the site offline for several hours.
“The outage came in the wake of a historically large distributed denial-of-service (DDoS) attack which hurled so much junk traffic at Krebsonsecurity.com that my DDoS protection provider Akamai chose to unmoor my site from its protective harbor,” Brian Krebs wrote, adding that he has since implemented DDoS protection from Google’s Project Shield.
The attack on Krebs clocked in at a massive 620 Gbps in size, which is several orders of magnitude more traffic than is typically necessary to knock most websites offline.
SecurityWeek reported that Krebs believes the botnet used to target his blog mostly consists of IoT devices — perhaps millions of them — such as webcams and routers that have default or weak credentials.
“There is every indication that this attack was launched with the help of a botnet that has enslaved a large number of hacked so-called ‘Internet of Things,’ (IoT) devices — mainly routers, IP cameras and digital video recorders (DVRs) that are exposed to the Internet and protected with weak or hard-coded passwords,” Krebs wrote.
Reports indicate that the attack was in response to Krebs reporting on and exposing vDOS, a service run by two Israelis who were offering a DDoS-as-a-Service play and were arrested after Krebs’ story was published.
Security researchers have warned that improperly secured IoT devices are more frequently being used to launch DDoS attacks. Symantec last week noted that hackers can easily hijack unsecured IoT devices due to lack of basic security controls and add them to a botnet, which they then use to launch a DDoS attack.
“Poor security on many IoT devices makes them soft targets and often victims may not even know they have been infected,” Symantec wrote. “Attackers are now highly aware of lax IoT security and many pre-program their malware with commonly used and default passwords.”
And while DDoS attacks remain the main purpose of IoT malware, Symantec warned that the proliferation of devices and their increased processing power may create new ways for threat actors to leverage IoT, such as cryptocurrency mining, information stealing and network reconnaissance.
- Increased need & desire among businesses to gain greater value from their data
- Over 80% of the data/information that businesses generate and collect is unstructured or semi-structured data that needs special treatment
- Typically requires mix of skills - mathematics, statistics, computer science, machine learning and most importantly business knowledge
- They need to employ the R or Python programming languages to clean data and remove irrelevant records
- Create algorithms to solve the business problems
- Finally effectively communicate the findings to management
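The clean, model, communicate steps above can be sketched in plain Python. This is a toy example with invented sales records; in practice a data scientist would reach for R or Python libraries such as pandas.

```python
raw = [
    {"region": "EMEA", "sales": 120.0},
    {"region": "EMEA", "sales": None},   # missing value: dropped in cleaning
    {"region": "APAC", "sales": 200.0},
    {"region": "APAC", "sales": 180.0},
]

# 1. Clean: remove records with missing values
clean = [r for r in raw if r["sales"] is not None]

# 2. A simple "algorithm": average sales per region
totals = {}
for r in clean:
    totals.setdefault(r["region"], []).append(r["sales"])
summary = {region: sum(v) / len(v) for region, v in totals.items()}

# 3. Communicate the findings to management
for region, avg in sorted(summary.items()):
    print(f"{region}: average sales {avg:.1f}")
```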
Any company, in any industry, that crunches large volumes of numbers, possesses lots of operational and customer data, or can benefit from social media streams, credit data, consumer research or third-party data sets can benefit from having a data scientist or a data science team.
- Kirk D Borne of BoozAllen
- D J Patil Chief Data Scientist at White House
- Gregory Piatetsky of kdnuggets
- Vincent Granville of Analyticsbridge
- Jonathan Goldman of LinkedIn
- Ronald Van Loon
For IoT and M2M device security assurance, it's critical to introduce automated software development tools into the development lifecycle. Although software tools' role in quality assurance is important, it becomes even more so when security becomes part of a new or existing product's requirements.
Automated Software Development Tools
There are three broad categories of automated software development tools that are important for improving quality and security in embedded IoT products:
- Application lifecycle management (ALM): Although not specific to security, these tools cover requirements analysis, design, coding, testing and integration, configuration management, and many other aspects of software development. However, with a security-first embedded development approach, these tools can help automate security engineering as well. For example, requirements analysis tools (in conjunction with vulnerability management tools) can ensure that security requirements and known vulnerabilities are tracked throughout the lifecycle. Design automation tools can incorporate secure design patterns and then generate code that avoids known security flaws (e.g. avoiding buffer overflows or checking input data for errors). Configuration management tools can insist on code inspection or static analysis reports before checking in code. Test automation tools can be used to test for "abuse" cases against the system. In general, there is a role for ALM tools in the secure development just as there is for the entire project.
- Dynamic Application Security Testing (DAST): Dynamic testing tools all require program execution in order to generate useful results. Examples include unit testing tools, test coverage, memory analyzers, and penetration test tools. Test automation tools are important for reducing the testing load on the development team and, more importantly, detecting vulnerabilities that manual testing may miss.
- Static Application Security Testing (SAST): Static analysis tools work by analyzing source code, bytecode (e.g., compiled Java), and binary executable code. No code is executed in static analysis; rather, the analysis is done by reasoning about the potential behavior of the code. Static analysis is relatively efficient at analyzing a codebase compared to dynamic tools. Static analysis tools also analyze code paths that are untested by other methods and can trace execution and data paths through the code. Static analysis can be incorporated early in the development phase for analyzing existing, legacy, and third-party source and binaries before incorporating them into your product. As new source is added, incremental analysis can be used in conjunction with configuration management to ensure quality and security throughout.
Figure 1: The application of various tool classes in the context of the software development lifecycle.
Although adopting any class of tools helps productivity, security, and quality, using a combination of these is recommended. No single class of tools is the silver bullet. The best approach is one that automates the use of a combination of tools from all categories, and that is based on a risk-based rationale for achieving high security within budget.
The role of static analysis tools in a security-first approach
Static analysis tools provide critical support in the coding and integration phases of development. Ensuring continuous code quality, both in the development and maintenance phases, greatly reduces the costs and risks of security and quality issues in software. In particular, it provides some of the following benefits:
- Continuous source code quality and security assurance: Static analysis is often applied initially to a large codebase as part of its initial integration as discussed below. However, where it really shines is after an initial code quality and security baseline is established. As each new code block is written (file or function), it can be scanned by the static analysis tools, and developers can deal with the errors and warnings quickly and efficiently before checking code into the build system. Detecting errors and vulnerabilities (and maintaining secure coding standards, discussed below) in the source at the source (developers themselves) yields the biggest impact from the tools.
- Tainted data detection and analysis: Analysis of the data flows from sources (i.e. interfaces) to sinks (where data gets used in a program) is critical in detecting potential vulnerabilities from tainted data. Any input, whether from a user interface or network connection, if used unchecked, is a potential security vulnerability. Many attacks are mounted by feeding specially-crafted data into inputs, designed to subvert the behavior of the target system. Unless data is verified to be acceptable both in length and content, it can be used to trigger error conditions or worse. Code injection and data leakage are possible outcomes of these attacks, which can have serious consequences.
- Third-party code assessment: Most projects are not greenfield development and require the use of existing code within a company or from a third party. Performing testing and dynamic analysis on a large existing codebase is hugely time consuming and may exceed the limits on the budget and schedule. Static analysis is particularly suited to analyzing large code bases and providing meaningful errors and warnings that indicate both security and quality issues. GrammaTech CodeSonar binary analysis can analyze binary-only libraries and provide similar reports as source analysis when source is not available. In addition, CodeSonar binary analysis can work in a mixed source and binary mode to detect errors in the usage of external binary libraries from the source code.
- Secure coding standard enforcement: Static analysis tools analyze source syntax and can be used to enforce coding standards. Various code security guidelines are available, such as SEI CERT C and Microsoft's Secure Coding Guidelines. Coding standards are good practice because they prevent risky code from becoming future vulnerabilities. As mentioned above, integrating these checks into the build and configuration management system improves the quality and security of code in the product.
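To make the tainted-data discussion above concrete, here is a minimal Python sketch of source-to-sink validation. The length limit and character whitelist are illustrative assumptions, not taken from any coding standard; the point is that both length and content are checked before the data reaches a sink.

```python
import re

MAX_LEN = 64                                 # assumed limit for this example
ALLOWED = re.compile(r"^[A-Za-z0-9_.-]+$")   # whitelist, not a blacklist

def sanitize(tainted: str) -> str:
    """Reject input that fails length or content checks before it can
    reach a sink (query, shell command, file path, ...)."""
    if len(tainted) > MAX_LEN:
        raise ValueError("input too long")
    if not ALLOWED.match(tainted):
        raise ValueError("input contains disallowed characters")
    return tainted  # now safe to pass onward

sanitize("sensor_42.log")           # passes the checks
# sanitize("rm -rf /; echo pwned")  # would raise ValueError
```

A SAST tool flags paths where such a check is missing between an input source and a sink; the fix is to route every such path through a validator like this one.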
As part of a complete tools suite, static analysis provides key capabilities that other tools cannot. The payback for adopting static analysis is the early detection of errors and vulnerabilities that traditional testing tools may miss. This helps ensure a high level of quality and security on an on-going basis.
Machine-to-machine and IoT device manufacturers that incorporate a security-first design philosophy, with formal threat assessments and automated tools, produce devices better secured against the accelerating threats on the Internet. Modifying an existing, successful software development process to include security at the early stages of product development is key. Smart use of automated tools to develop new code and analyze existing and third-party code allows development teams to meet strict budget and schedule constraints. Static analysis of both source and binaries plays a key role in a security-first development toolset.
- No Silver Bullet – Essence and Accident in Software Engineering, Fred Brooks, 1986
- SEI CERT C Coding Standard, Software Engineering Institute
- Outsource Code Development Driving Automated Test Tool Market, VDC Research, IoT & Embedded Blog, October 22, 2013
Using Mattermark's list of the Top 100 IoT startups in 2015 (ranked by funding, published in Forbes Oct 25, 2015), IPqwery has looked behind the analytics to reveal the nature of the intellectual property (IP) behind these innovative companies. Our infographic presents a general summary of the IP within the group as a whole, and illustrates the trailing 5-year trends in IP filing activity.
The vast majority of these companies have both patents (84%) and trademarks (85%) in their IP portfolio. There was a sharp and mostly linear increase in filings for both patents and trademarks, from 2011 through to 2014, with a slight decrease showing in 2015. 2016 looks to be on pace to meet or exceed last year’s filing activity as well. All this is consistent with the ever-expanding number of companies operating within the IoT ecosystem.
A closer look at the top 5 patent class descriptions amongst all patents granted or published yields close results between these classes. This is not surprising given the similar technologies behind many IoT products, such that their patents will incorporate the same or similar descriptions within their claims. Comparatively, there is a wider variance in the Top 5 Trademark classes used, but this speaks more to the wider variety of marketing and branding potential than to the underlying IoT technologies.
What’s striking in Mattermark’s original analysis of the Top 100 IoT Startups is that 30% of all funding raised by this group as a whole has been concentrated in only the top 5 companies: Jawbone, Genband, Silver Spring Networks, View Glass and Jasper Technologies. IPqwery’s analysis further reveals that only two of these companies (Silver Spring and Jasper) have Top 5 inventors within the group. In fact, Jasper actually has 2 of the Top 5 inventors. The other top inventors come from Hello and Kineto Wireless.
The broad-strokes approach of IPqwery’s infographic doesn’t directly illustrate the IP held by any one company, but certainly hints at where exactly this type of analysis could be very useful indeed. For where Mattermark sought to pinpoint where the greatest growth potential (momentum) was within the group of companies by looking at the overall IoT funding environment, IPqwery’s analysis of the general IP trends within this group sheds additional light on the matter, and perhaps raises some additional issues. Wouldn’t potential correlations between IP and funding also be a useful measure of momentum across metrics, and thus shouldn’t IP data be generally more integrated into business growth analytics, from the get go?
Here's a link to a new infographic by IPqwery summarizing the intellectual property held by the Top 100 IoT Startups (2015).
A smart, highly optimized distributed neural network, based on Intel Edison "Receptive" Nodes
Training ‘complex multi-layer’ neural networks is referred to as deep-learning as these multi-layer neural architectures interpose many neural processing layers between the input data and the predicted output results – hence the use of the word deep in the deep-learning catchphrase.
While the training procedure of a large-scale network is computationally expensive, evaluating the resulting trained neural network is not, which explains why trained networks can be extremely valuable: they can very quickly perform complex, real-world pattern recognition tasks on a variety of low-power devices.
These trained networks can perform complex pattern recognition tasks for real-world applications ranging from real-time anomaly detection in Industrial IoT to energy performance optimization in complex industrial systems. The high-value, high accuracy recognition (sometimes better than human) trained models have the ability to be deployed nearly everywhere, which explains the recent resurgence in machine-learning, in particular in deep-learning neural networks.
These architectures can be efficiently implemented on Intel Edison modules to process information quickly and economically, especially in Industrial IoT applications.
Our architectural model is based on a proprietary algorithm, called Hierarchical LSTM, that captures and learns the internal dynamics of physical systems simply by observing the evolution of the related time series.
To train the system efficiently, we implemented a greedy, layer-based parameter optimization approach: each device trains one layer at a time and sends the encoded features to the device at the next level up, which learns a higher level of abstraction of the signal dynamics.
Using Intel Edison modules as the layers’ core computing units, we can sustain higher sampling rates and frequent retraining close to the system we are observing, without the need for a complex cloud architecture, sending only a small amount of encoded data to the cloud.
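The Hierarchical LSTM algorithm itself is proprietary, but the greedy, layer-by-layer idea can be sketched with a much simpler stand-in. The toy below (NumPy only, all names and sizes are illustrative assumptions) trains one small linear autoencoder per “device” layer on sliding windows of a sensor signal, then forwards only the compact encoded features upward, mimicking how each node would send encodings rather than raw data to the next level:

```python
import numpy as np

def train_autoencoder_layer(x, hidden, epochs=200, lr=0.01, seed=0):
    """Train one single-layer autoencoder on this device; return the encoder weights."""
    rng = np.random.default_rng(seed)
    n, d = x.shape
    w_enc = rng.normal(scale=0.1, size=(d, hidden))
    w_dec = rng.normal(scale=0.1, size=(hidden, d))
    for _ in range(epochs):
        h = np.tanh(x @ w_enc)        # encoded features
        x_hat = h @ w_dec             # reconstruction of the input
        err = x_hat - x
        # gradient descent through the decoder, then the tanh encoder
        grad_dec = h.T @ err / n
        grad_enc = x.T @ ((err @ w_dec.T) * (1 - h ** 2)) / n
        w_dec -= lr * grad_dec
        w_enc -= lr * grad_enc
    return w_enc

# Simulated sensor time series: sliding windows over a noisy sine wave
t = np.linspace(0, 20, 500)
signal = np.sin(t) + 0.05 * np.random.default_rng(1).normal(size=t.size)
windows = np.lib.stride_tricks.sliding_window_view(signal, 16)   # shape (485, 16)

# Greedy stack: each "device" trains its own layer, then sends encodings upward
features = windows
for hidden in (8, 4):                 # two layers of increasing abstraction
    w = train_autoencoder_layer(features, hidden)
    features = np.tanh(features @ w)  # only these compact codes travel upstream

print(features.shape)
```

The key property the sketch preserves is that no layer needs the gradients of any other layer: each node optimizes locally and ships a 4-dimensional code instead of a 16-sample raw window, which is what keeps the cloud traffic small.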
With the global population expected to reach 9.7 billion people by 2050, innovation in the agricultural industry is more important than ever. This growth brings with it a need for increased food production even as the availability of arable land dwindles. Projections show that feeding a population near 9 billion people would require raising global food production by some 70%. To meet such steep demands, old farming techniques are simply no longer adequate. Thankfully, the agriculture industry is a burgeoning sector within the Internet of Things, and farmers globally are ready to reap the benefits.
Let’s look at a few ways IoT is helping the agriculture industry around the world.
Smart Ag Is Environmentally Friendly
Agriculture is responsible for a significant environmental footprint, accounting for 80% to 90% of US water consumption, releasing massive quantities of chemical pesticides, and producing more greenhouse gas emissions from animal agriculture alone than all transportation combined. IoT technology can maximize production capabilities and minimize required resources in order to reduce the industry’s environmental harm. Sensors can be deployed to test agricultural factors such as soil moisture and nutrient levels to ensure resources are being used as efficiently as possible. This way, water and pesticides are spared from unnecessary use and alternatives can be applied where needed.
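The sensor-driven conservation idea above can be made concrete with a tiny decision rule. This sketch (the threshold values and function names are illustrative assumptions, not any vendor’s API) waters a field only when the soil is genuinely dry and no rain is forecast:

```python
def irrigation_decision(soil_moisture_pct, rain_forecast_mm, dry_threshold=30.0):
    """Return True only when the soil is dry AND no meaningful rain is expected."""
    if soil_moisture_pct >= dry_threshold:
        return False          # soil already holds enough water
    if rain_forecast_mm >= 5.0:
        return False          # let the forecast rain do the work
    return True

# A dry field with no rain coming gets watered; otherwise the pump stays off
print(irrigation_decision(18.0, 0.0))    # True
print(irrigation_decision(18.0, 12.0))   # False
print(irrigation_decision(45.0, 0.0))    # False
```

Real deployments would replace the hard-coded thresholds with crop- and soil-specific values, but the structure — sensor reading in, actuation decision out — is the same.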
IoT Provides Precision Control for Farmers
The agriculture industry reaches far and wide, and each sector involves too much labor for one person to accomplish alone. As a result, much of the agriculture industry relies heavily on intuition and human judgment. This also means that workers are nearly irreplaceable during illness or absence. Implementing IoT technology can allow for real-time access to information that otherwise would take too much time or effort to obtain. Managers can have remote decision-making capabilities at their fingertips rather than having to wait for reports and then send out orders to workers. For example, rather than using valuable workers, drones are now available to monitor crops or apply treatment to specific areas. Also, companies like Leica have developed GPS-guided combines and other agricultural IoT technologies. Aeris is helping to make solutions like these possible through cellular connectivity, perfect for remote real-time access to critical data.
Farms Are More Productive With IoT
As agricultural businesses gain more insight and control over their operations, they are able to make better business decisions and thus increase productivity. If a farm can use drones or sensors to monitor fields or cattle, for example, then the experience of the farmer can be utilized to make decisions while the manual labor previously needed to monitor these areas can be better repurposed elsewhere. IoT technology can also be applied to agricultural machinery, allowing for preventative maintenance and more accurate reports in the case of a malfunction, saving time and money. As smarter decisions are made regarding resources, productivity will improve.
Smart Farming Saves Money
According to the USDA, some of the top expenses of agricultural businesses include feed, labor, fuel, chemicals, and farm supplies and repairs, all expenditures that can be reduced with the help of IoT. Implementing IoT technology can allow businesses to make better decisions about efficiently using resources including assets, labor, and capital, significantly reducing operational costs. Replacing and improving past techniques will be the only way to maintain a competitive advantage with rising demands and ever-improving technology.
IoT Provides Transparency for Consumers
As more data is made available to the public, consumers demand high quality products more emphatically than ever. Concerns continue to emerge regarding the environmental footprint, personal health effects, and other details surrounding food production. Utilizing IoT technology in production is the only way to provide consumers with the data and transparency that is now the standard expectation, and thus maintain a competitive advantage in the industry.
When you’re ready to connect your agriculture devices to the Internet of Things, contact Aeris for a customized IoT solution.
Originally Posted by: Mimi Spier
The Internet of Things (IoT) is here to stay—and rapidly evolving. As we try to make sense of IoT’s impact on our lives and businesses, we also continue grappling with the security challenges.
As the IoT security landscape evolves, here are five key insights for designing and implementing IoT deployments for your enterprise.
1. Protect Your People
IoT has opened up a world of possibilities in business, but it has also opened up a host of ways to potentially harm employees and customers. A security breach is no longer limited to stealing credit card data. Anyone with the right access could breach firewalls or steal health records. A key challenge of the IoT world is providing the right access to the right people at the right time.
2. Watch Your Things
As millions of “things” start joining the enterprise network, it also expands the surface area for hackers to breach your system. All these devices will be leveraging public Wi-Fi, cloud, Bluetooth networks, etc., which will create multiple points of vulnerabilities. Your system needs to be designed for security from the bottom up to account for:
A) Device level: better quality devices
B) Data level: encryption and cryptology
C) Network level: certificates and firewalls
D) Application level: login/authorized access
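One of the four levels above, the data level (B), can be illustrated with nothing but the Python standard library. This sketch (the key and payload are hypothetical placeholders) attaches an HMAC tag to each sensor reading so that any tampering in transit is detectable on the receiving end:

```python
import hmac
import hashlib

SHARED_KEY = b"device-provisioning-key"   # hypothetical per-device secret

def sign_reading(payload: bytes) -> bytes:
    """Data level (B): attach an HMAC-SHA256 tag so tampered readings are detectable."""
    return hmac.new(SHARED_KEY, payload, hashlib.sha256).digest()

def verify_reading(payload: bytes, tag: bytes) -> bool:
    """Constant-time comparison avoids leaking information via timing side channels."""
    return hmac.compare_digest(sign_reading(payload), tag)

reading = b'{"sensor": "temp-07", "value": 21.4}'
tag = sign_reading(reading)

print(verify_reading(reading, tag))                                   # True: intact
print(verify_reading(b'{"sensor": "temp-07", "value": 99.9}', tag))   # False: tampered
```

Integrity tags like this complement, rather than replace, the other levels: encrypted transport (network level) and authenticated access (application level) still belong in the same design.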
3. Poor Quality of Things
The standards for IoT hardware and software are still evolving, which means until we have any established guidelines, we need to account for a vast range in the quality of “things.” Some of these may be very sophisticated and hardy, while others may be of the cheap disposable variety. Which devices you pick may depend upon factors like cost, usage and the use case itself. However, be warned that lower-quality devices have been used to gain entry to a secure network.
“By 2020, more than 25% of identified attacks in enterprises will involve the Internet of Things (IoT), although the IoT will account for less than 10% of the IT security budget.” — Gartner
4. Is Your Network Ready?
One of the biggest challenges for any IT department implementing company-wide IoT projects will be assessing and managing bandwidth. As millions of devices join your network at increasing rates, scaling your network’s bandwidth will be an ongoing struggle. Your bandwidth needs must remain elastic so you can support your enterprise needs while minimizing costs. It is also critical to minimize the exposure of your networks by using, for example, micro-segmentation.
5. Data Is Your Friend
As with protecting any system, predictive maintenance is the way to stay a step ahead of breaches. The usual ways of pushing out timely security patches and software upgrades will continue to be helpful. However, one big advantage of IoT is the sheer amount of data it generates. You can track operational data to create alerts based on anomalies in the system. For example, if someone logs into the system from Atlanta and then, 30 minutes later, logs in again from Palo Alto, the system should raise a red flag.
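The Atlanta/Palo Alto example above is a classic “impossible travel” check, and it is simple to sketch. The function below (coordinates and the 900 km/h jet-speed threshold are illustrative assumptions) flags any pair of logins whose implied travel speed exceeds what a person could plausibly achieve:

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points on Earth, in kilometres."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

def impossible_travel(login_a, login_b, max_speed_kmh=900.0):
    """Flag a pair of (lat, lon, unix_time) logins whose implied speed beats a jet's."""
    (lat1, lon1, t1), (lat2, lon2, t2) = login_a, login_b
    hours = abs(t2 - t1) / 3600.0
    if hours == 0:
        return True                     # simultaneous logins from two places
    speed = haversine_km(lat1, lon1, lat2, lon2) / hours
    return speed > max_speed_kmh

# Atlanta login, then Palo Alto 30 minutes later (~3,400 km apart) -> red flag
atlanta = (33.749, -84.388, 0)
palo_alto = (37.442, -122.143, 1800)    # 1800 s = 30 min
print(impossible_travel(atlanta, palo_alto))   # True
```

The same two logins six hours apart would imply a speed under the threshold and pass quietly, which is exactly the anomaly-versus-normal distinction the operational data makes possible.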
You can view the original post here.