
The Internet of Things (IoT) concept promises to improve our lives by embedding billions of cheap purpose-built sensors into devices, objects and structures that surround us (appliances, homes, clothing, wearables, vehicles, buildings, healthcare tech, industrial equipment, manufacturing, etc.).

IoT Market Map -- Goldman Sachs

What this means is that billions of sensors, machines and smart devices will simultaneously collect volumes of big data, while processing real-time fast data from almost everything – and almost everyone!

The IoT vision is not yet reality

Simply stated, the Internet of Things is all about the power of connections.

Consumers, for the moment anyway, seem satisfied to have access to gadgets, trendy devices and apps which they believe will make them more efficient (efficient doesn't necessarily mean productive), improve their lives and promote general well-being.

Corporations on the other hand, have a grand vision that convergence of cloud computing, mobility, low-cost sensors, smart devices, ubiquitous networks and fast-data will help them achieve competitive advantages, market dominance, unyielding brand power and shareholder riches.

Global enterprises (and big venture capital firms) will spend billions on the race for IoT supremacy. These titans of business are chomping at the bit to develop IoT platforms, machine learning algorithms, AI software applications & advanced predictive analytics. The end-game of these initiatives is to deploy IoT platforms on a large scale for:

  • real-time monitoring, control & tracking (retail, autonomous vehicles, digital health, industrial & manufacturing systems, etc.)
  • assessment of consumers, their emotions & buying sentiment,
  • managing smart systems and operational processes,
  • reducing operating costs & increasing efficiencies,
  • predicting outcomes, and equipment failures, and
  • monetization of consumer & commercial big data, etc.

 

IoT reality is still just a vision

No technology vendor (hardware or software), service provider, consulting firm or self-proclaimed expert can fulfill the IoT vision alone.

Recent history with tech hype-cycles has proven time and again that 'industry experts' are not very accurate at predicting the future... in life or in business!

Having said this, it only makes sense that fulfilling the promise of IoT demands close collaboration & communication among many stakeholders.

A tech ecosystem is born

IoT & Industrial IoT comprise a rapidly developing tech ecosystem. Momentum is building quickly and will drive sustainable future demand for:

  • low-cost hardware platforms (sensors, smart devices, etc.),
  • a stable base of suppliers, developers, vendors & distribution,
  • interoperability & security (standards, encryption, APIs, etc.),
  • local to global telecom & wireless services,
  • edge to cloud networks & data centers,
  • professional services firms (and self-proclaimed experts),
  • global strategic partnerships,
  • education and STEM initiatives, and
  • broad vertical market development.

I'll close with one final thought: "True IoT leaders and visionaries will first ask why, not how!"

Read more…

Guest blog post by Peter Bruce

When Apple CEO Tim Cook finally unveiled his company’s new Apple Watch in a widely-publicized rollout, most of the press coverage centered on its cost ($349 to start) and whether it would be as popular among consumers as the iPod or iMac.

Nitin Indurkhya saw things differently.

“I think the most significant revelation was that of ResearchKit,” Indurkhya said. “It allows the iWatch to gather huge amounts of health-related data from its sensors that could then be used for medical research, an area that has traditionally been plagued by small samples and inconsistent and costly data collection, and for preventive care.”

Indurkhya is in a perfect position to know. He teaches text mining and other online courses for Statistics.com and the Institute for Statistics Education. And if you’ve ever wondered about the origins of a term we hear everywhere today – Big Data – the mystery is over. Indurkhya, along with Sholom Weiss, first coined "Big Data" in a predictive data mining book in 1998. (“I never anticipated Big Data becoming a buzzword,” he said, “although we did expect the concept to take off.”)

The ResearchKit already has five apps that link users to studies on Parkinson's disease, diabetes, asthma, breast cancer and heart disease. Cook has touted other health benefits from Apple Watch, including its ability to tap users with a reminder to get up and move around if they have been sitting for a while. “We've taken (the mobile operating system) iOS and extended it into your car, into your home, into your health. All of these are really critical parts of your life,” Cook told a Goldman Sachs technology and Internet conference recently.

That helps explain the media fascination over another new Apple product. But it also tells us the importance of learning about Big Data. Having access to large amounts of raw numbers alone doesn’t necessarily change our lives. The transformation occurs when we master the skills needed to understand both the potential and the limitations of that information.

The Apple Watch exemplifies this because the ResearchKit essentially recruits test subjects for research studies through iPhone apps and taps into Apple Watch data. The implications for privacy, consent, sharing of data, and other ethical issues, are enormous. The Apple Watch likely won’t be the only device in the near future to prompt these kinds of concerns. It all leads to the realization that we need to be on a far more familiar basis with how data is collected and used than we’ve ever had to be in the past.

“We are increasingly relying on decisions, often from ‘smart’ devices and apps that we accept and even demand, that arise from data-based analyses,” Indurkhya said. “So we do need to know when to, for example, manually override them in particular instances.

“Allowing our data to be pooled with others has benefits as well as risks. A person would need to understand these if they are to opt for a disclosure level that they are comfortable with. Otherwise the danger is that one would go to one or the other extreme, full or no participation, and have to deal with unexpected consequences.”

The Big Data questions raised by the Apple Watch are similar to the concerns over access to and disclosure of other reams of personal information. Edward Snowden’s leaks most famously brought these kinds of worries into play, publicizing the spying on ordinary Americans by the National Security Agency. There’s also commonly expressed fear that Big Data is dehumanizing, and that it’s used more for evil than for good.

These fears, Indurkhya noted, have seeped into the popular culture. Consider this list of Big Data movies: War Games, in which a supercomputer is given control of all United States defense assets. Live Free or Die Hard, in which a data scientist hacker hopes to eventually bring down the entire U.S. financial system. Even Batman gets into the act, hacking every cell phone in Gotham.

Little wonder people might shy away from studying big data. But that would be a mistake, said Indurkhya, who has a rebuttal for all the Hollywood-hyped fears.

First, he said, there are strong parallels between the Big Data revolution and the industrial revolution. Look at history. Despite all the dire predictions, machines aren't "taking over the world" and neither will Big Data.

Second, it’s also helpful to appreciate what Big Data gives us. It provides us with better estimates - they are more accurate and our confidence in them is higher. Perhaps more importantly, it provides estimates in situations where, in the absence of Big Data, answers were not obtainable at all, or not readily accessible. Think about searching the web for  "Little Red Riding Hood and Ricky Ricardo."  Even in the early days of the internet, you would have gotten lots of results individually for "Little Red Riding Hood" and "Ricky Ricardo," but it was not until Google had accumulated a massive enough data set, and perfected its Big Data search techniques, that you could reliably get directed to the "I Love Lucy" episode where Ricky dramatically reenacts the story for little Ricky. 

Data specialists can set policies and procedures that protect us from some of the risks of Big Data.  But we also need to become much more familiar with how our data is collected, analyzed, and distributed. If the Apple Watch rollout proves anything, it might be this: Going forward, we’ll all have to be as smart about data as our devices.


Read more…

Machine learning is the science of getting computers to act without being explicitly programmed. In the past decade, machine learning has given us self-driving cars, practical speech recognition, effective web search, and a vastly improved understanding of the human genome. It will play a big part in the IoT. Our friends at R2D3 have put together a very interesting visual introduction to machine learning. Check it out here.

Read more…

Keeping Up With Tech Trends

With the continuing evolution of technology, it's not surprising that trends are constantly changing as well. A large number of companies try to create new trends, or keep up and ride with the current ones, as they create new tech startups that will hook the public and keep them wanting more.

Take Flappy Bird, for example. Although the application was released in May 2013, it made huge waves in 2014 and even became the most downloaded free game in the Apple App Store. It even earned $50,000 a day! After feeling guilty about its addictive nature, the creator removed the game from application stores. But that gave other developers the opportunity to create similar applications of their own, and not long after, hundreds of Flappy Bird-like applications were released in the application stores. Now it seems the hype has died down, and Flappy Bird will just become another faded tech-trend memory.

If you're one of the companies making plans for a new tech product, or you're simply drafting ideas for one, make sure you pay attention to what we think are the four rising tech trends this year. Bring out your pads and take note, people!

Think Smart

Nowadays, people are getting smart. Companies are creating better smartphones. Don't know what the time is? Check your smartwatch. You can now even wear your computer with smart glasses, and smart homes are getting popular too. Yes, developers are finding new ways to bring smarter things into this world. Create your own application or device that will keep the momentum going.

Some Privacy, Please

With plenty of emerging news about invasion of privacy, people are becoming more conscious about keeping things to themselves or just among a few people. Private applications like Snapchat, or ones that secure your pictures and other information, have made waves because people favour them for their privacy-promising qualities. Look into creating or capitalizing on applications that cater to this consumer need.

Drones!

With Amazon launching the video for their Prime Air, you know that drones are coming. And with the great interest shown by everyone who watched the video, we can predict that drones will soon no longer be something we only see on TV but something we'll be experiencing very soon.

Data Forever

With the growing popularity of social media and websites, unlimited data became, and still is, a huge trend. Look into ways to use that data for better advertising, better products, or a better consumer experience. The possibilities are endless.

 

 

Read more…
The ‘connected’ car, not to be confused with the self-driving, autonomous car, is defined as any vehicle equipped with Internet access that allows data to be sent to and from the vehicle.

Since the automobile was invented, car makers have been trying to add features that reduce driver error. Today’s car has the computing power of 20 personal computers, features about 100 million lines of programming code, and processes up to 25 gigabytes of data an hour.

Digital technology is also changing how we use and interact with our cars, and in more ways than you probably realize.

The market for smart vehicles is certainly set for takeoff and many analysts predict they could revolutionize the world of automobiles in much the same way smartphones have changed the face of telecommunications.

Is your car connected to the Internet? Millions of vehicles around the world already have embedded Internet access, offering their drivers a multitude of smart options and benefits. These include better engine controls, automatic crash notifications and safety alerts, to name just a few. Owners can also interact with their connected vehicles through apps from any distance.

Vehicle-to-vehicle communications, for example, could help automobiles detect one another's presence and location to avoid accidents. That could be especially useful when it comes to driver-less cars - another advance already very much in development. Similar technology could help ensure that cars and their drivers slow down for school zones or stop at red lights.

Connected vehicle technologies provide the tools to make transformational improvements in safety, significantly reducing the number of lives lost each year through crash prevention applications.

The Connected Car will be optimized to track and report its own diagnostics, which is part of its appeal for safety-conscious drivers.

Connected cars offer superior infotainment services like navigation, traffic, weather, mobile apps, email and entertainment.

Auto insurers also have much to gain from the connected car revolution, as personalized, behavior-based premiums are already becoming the new industry standard.

OEMs and dealers must embrace the Big Data revolution now, so they’re ready to harness the plethora of data that will become available as more and more connected cars hit the roads.

Cloud computing powers much of the audio streaming capabilities and dashboard app functions that are becoming more commonplace in autos.

In the next five years, it seems, non-connected cars will become a thing of the past. Here are some good examples of connected cars:

  • Mercedes-Benz models introduced this year can link directly to Nest, the Internet of Things powered smart home system, to remotely activate a home’s temperature controls prior to arrival.
  • Audi has developed a 12.3-inch, fully digital dashboard with 3D graphics in partnership with NVIDIA.
  • Telematics company OnStar can shut down your stolen car remotely, helping police solve the case.
  • ParkMe provides real-time, dynamic parking information and guides drivers to open parking lots and meters. It is further integrating with mobile payments.

The next wave is the driverless, fully equipped and connected car, with no steering wheel, brakes, gas pedal or other major controls. You just have to sit back, relax and enjoy the ride!

This article originally appeared here.
Read more…

Node.js and The Internet of Things

Last year, we interviewed Patrick Catanzariti and asked him if JavaScript will be the language of IoT. It was one of our most shared Q&As. Charlie Key's talk at the Node Community Conference provides a nice overview of how Node is driving adoption of IoT. In software development, Node.js is an open-source, cross-platform runtime environment for developing server-side Web applications. Although Node.js is not a JavaScript framework, many of its basic modules are written in JavaScript, and developers can write new modules in JavaScript.
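
For readers new to Node, here is a minimal sketch of what server-side JavaScript looks like in practice: a tiny HTTP endpoint that accepts JSON readings from a device. The port, route and message shape are illustrative assumptions, not from the talk.

```javascript
// Minimal Node.js server using only the built-in http module.
const http = require('http');

const server = http.createServer((req, res) => {
  if (req.method === 'POST' && req.url === '/readings') {
    let body = '';
    req.on('data', chunk => { body += chunk; });
    req.on('end', () => {
      // e.g. { "deviceId": "sensor-1", "tempC": 21.4 }
      const reading = JSON.parse(body);
      console.log('received', reading);
      res.end('ok');
    });
  } else {
    res.statusCode = 404;
    res.end();
  }
});

server.listen(3000, () => console.log('listening on :3000'));
```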

Here's his presentation and a look at where the market of the Internet of Things is and how technologies like Node.js (JavaScript) and the Intel Edison are making it easier to create connected solutions. 

The major topics include: 
* What is the Internet of Things 
* Where is IoT Today 
* 4 Parts of IoT (Collect, Communicate, Analyze, Act) 
* Why JavaScript is Good for IoT 
* How Node.js is Making a Dent in the Internet of Things 
* What npm Modules are used for Hardware (Johnny-Five, Cylon.js, MRAA) 
* What is the Intel Edison 
* How to Best Work with the Edison 
* Tips for Edison (MRAA, Grove Kit, UPM) 
* Where the World of JavaScript and IoT is Going 

Read more…

The IoT Database

Phillip Zito at the highly resourceful blog Building Automation Monthly has consolidated multiple IoT frameworks and offerings into the IoT Database. You will see links to the frameworks and offerings below. He says that, over time, he will be working on providing summary articles on each framework and offering. He could use your help: if you have an offering or framework you would like added to the list, feel free to add it in the comments. You can find the IoT Database here.

Read more…

The IoT User Experience Urgency

As we evolve toward a software-defined world, there’s a new user experience urgency emerging.  That’s because the definition of “user” is going to be vastly expanded.  In the Internet of Things (IoT) era, users include machines.

Companies today are generating, collecting and analyzing more data than ever before.  They want to get better insights into their customers and their business operations.  This is driving substantial investments in new architectures that extend to cloud and mobility.

They’re also yielding to user demands for more and newer sources of big data.  They’re experimenting with data lakes to store this potential trove.  And they’re investing in data blending and visualization technologies to analyze it all.

In the IoT world of the near future, however, much of this analysis is going to be done by machines with deep learning capabilities.  With forecasts for as many as 50 billion connected devices by 2020, the experience of these “users” with the applications they engage with will be no less critical to achieving strategic objectives than customer experience is now – and will remain.

But how are companies going to get smarter if user experience sucks?  Where is this greater insight going to come from if whatever business intelligence software they’ve deployed is not performing to user expectations?

They’re not going to win customer satisfaction and loyalty by frustrating users.  And the risks involved with disappointing machine users could be catastrophic.

It's Time to Get Strategic

More companies have come to realize the strategic value of their data.  As such, they’re seeking ways to get a higher return on those data assets.  The databases – both transactional and analytic – they’ve invested in are critical to corporate strategy.

In order to maximize the performance of business-critical apps, companies must get strategic about user experience and application performance.  Monitoring technologies can no longer be implemented as short-term tactical bandages.

They might put out a brush fire temporarily, but they create more complexity and management headaches in the long run.  They often don’t work well together and generate more false positives than a smoke detector with a failing battery.  Annoying right?

IT teams are going to have to get more efficient with their ops data.   They will need a standardized approach to integrating diverse data sets, including those from SaaS applications and IaaS or PaaS clouds.  This is critical to gaining physical and logical knowledge of the computing environment across the entire application delivery chain.

Next-generation data integration technologies can unify ops data from traditional monitoring solutions with real-time streams of machine data and other types of big data.  They automate much of the cleansing, matching, error handling and performance monitoring that IT Ops teams often struggle with manually.

As this ops data grows with IoT, it can be fed into a data lake for analysis.  In fact, IT teams can kill two birds with one stone.  First, IT Ops data is a natural fit as an early test case for a data lake.  And by starting now they can hone skills sets for big data analytics and the coming IoT data deluge.

IT Ops, which are increasingly becoming a part of DevOps teams, can learn from and share their experiences with data management and analytics teams – as well as business teams.  It makes sense to bring application governance and data governance together because they share a common goal: ensuring that users have access to the highest quality data at the point of decision to optimize business outcomes and mitigate risks. 

The Path to ROI and Risk Management Objectives

This environment necessitates communication and collaboration among IT and business teams to proactively anticipate, identify and resolve application performance and user experience problems.  It also facilitates orchestration and management of both internally and externally sourced services efficiently to improve decision-making and business outcomes.

Through a unified approach to performance analytics, IT can help their companies leverage technology investments to discover, interpret and respond to the myriad events that impact their operations, security, compliance and competitiveness.  Ops data efficiency becomes actionable to facilitate strategic initiatives and positively impact financial results.

Successful strategy implementation manifests in return on investment (ROI) and risk management.  Multiple studies, including ours and the annual Puppet Labs State of DevOps report, confirm that companies taking a strategic approach to user experience and application performance outperform their respective peer groups in financial metrics and market performance.

Vendors in this space – usually referred to as application performance management (APM) – need to advance their thinking and technology.  Machine learning and predictive analytics are going to be table stakes in the IoT future.

APM vendors have a choice: they can maintain a focus on human user experience, which will always be essential.  Or they can think more broadly about user experience in the IoT world.  Because some of today’s enterprise customers – which produce everything from home monitoring devices and appliances to turbine engines, agricultural machinery and healthcare equipment – could one day well become competitors.

By capturing data from embedded sensors and applying advanced analytics to provide customers using their equipment with deeper insights, they could close out what will become the lion’s share of the IoT user experience market.  Leading manufacturers are already there.

 Photo: Gorbash Varvara

Originally posted on Big Data News by Gabriel Lowy


Read more…

IoT Dictionary and M2M Industry Terms


Here's a great resource from Aeris – an IoT Dictionary.

Aeris Communications has been in the machine-to-machine market for some time and is both a technology provider and a cellular network operator delivering comprehensive IoT / M2M services.

This glossary includes key terms of the IoT (Internet of Things) & M2M (machine-to-machine) communications industry, including wireless and cellular technologies spanning many different markets. It is updated to present current terminology and usage. It's a crowd-sourced resource, so feel free to contact Aeris with suggestions.

Also, if you need an IT-related dictionary, I just love WhatIs.com.

Read more…

This is an interesting resource for data scientists, especially those contemplating a career move to IoT (Internet of Things). Many of these modern, sensor-based data sets, collected via Internet protocols and various apps and devices, are related to the energy, urban planning, healthcare, engineering, weather, and transportation sectors.

Sensor data sets repositories

Originally posted on Data Science Central


Read more…

Eight IoT Analytics Products

Vitria IoT Platform

Vitria’s IoT analytics platform enables you to transform your business operations and boost revenue growth through Faster Analytics, Smarter Actions, and Better Outcomes.

Faster, unified analytics come via a Temporal Analytics Engine that works over all data types and cycles. Smarter Actions enable better outcomes by combining prescriptive analytics with intelligent actions. Self-service and automation capabilities empower teams to accelerate time-to-value and create analytics solutions in minutes rather than months.

Tellient

Tellient's IoT Analytics gives you the whole story with beautiful graphs for humans, organized data for machines, designed for the Internet of Things. As the only analytics platform built specifically for the Internet of Things, Tellient's IoT Analytics helps manufacturers of smart connected devices know what those devices are doing so they can make them better.

ParStream

ParStream’s Analytics Platform was purpose-built for scale to handle the massive volumes and high velocity of IoT data. The Platform helps companies generate timely, actionable insights from IoT data by providing more innovative and efficient ways to analyze that data – faster, with greater flexibility and closer to the source. The Platform uniquely queries at the source of data for real-time analysis as data is being loaded. It also provides unified analytics of real-time data in every query and generates more accurate insights for decision-makers with the continuous import of new data.

IBM IoT Platform

IBM Internet of Things Foundation provides simple, but powerful application access to IoT devices and data to help you rapidly compose analytics applications, visualization dashboards and mobile IoT apps.

Dell Statistica IoT Platform

Dell empowers its users with a powerful business data analytics tool named ‘Dell Statistica’, which is capable of delivering a wide range of solutions across sectors – from process optimization in manufacturing to fraud detection in banking – and even allows analytics on the gateway, providing faster local insights.

Splunk IoT Platform

Splunk offers a platform for operational intelligence that helps you search, monitor, analyze and visualize machine-generated big data from websites, networks and other IoT devices. In a recent announcement, Splunk committed to deliver real-time analytics and visualization for the AWS IoT service.

Intel® IoT Analytics Platform

This beta cloud-based analytics system for IoT includes resources for the collection and analysis of sensor data. Using this service, you can jump-start data acquisition and analysis without having to invest in large-scale storage and processing capacity.

Pentaho IoT Platform

Sensor, machine-to-machine, and network data are expected to play a larger role in analytics as the Internet of Things becomes a reality. However, these data types present significant challenges related to data volume and variety, as well as predictive modeling. Pentaho provides the ability to blend operational data with data from your IT systems of record and deliver intelligent analytics to those stakeholders who need them most.


Originally posted on Data Science Central



Read more…

Will JavaScript be the Language of IoT?


JavaScript has proven itself worthy for web applications, both client and server side, but does it have the potential to be the de facto language of IoT?

This is a topic I posed to Patrick Catanzariti, founder of DevDiner.com, a site for developers looking to get involved in emerging tech. Patrick is a regular contributor and curator of developer news and opinion pieces on new technology such as the Internet of Things, virtual/augmented reality and wearables. He is a SitePoint contributing editor, an instructor at SitePoint Premium and O'Reilly, a Meta Pioneer and freelance web developer who loves every opportunity to tinker with something new in a tech demo.

Why does IoT require a de facto language any more than any other system? Wouldn't that stifle future language evolution?

Honestly, I think it's a bit too much to ask for every single IoT device out there to run on JavaScript or any one de facto language. That's unbelievably tough to manage. Getting the entire world of developers to agree on anything is pretty difficult. Whatever solution the world of competing tech giants and startups comes to (which is likely to be a rather fragmented one if current trends are anything to go by), the most important thing is that these devices need to be able to communicate effectively with each other and with as few barriers as possible. They need to work together. It's the "Internet of Things". The entire benefit of connecting anything to the Internet is allowing it to speak to other devices at a massive scale. I think we'd be able to achieve this goal even with a variety of languages powering the IoT. So from that standpoint, I think it's totally okay for various devices to run on whichever programming language suits them best.

On the other hand, we need to honestly look at the future of this industry from a developer adoption and consistency perspective. The world of connected devices is going to skyrocket. We aren't talking about a computer in every home, we're talking dozens of interconnected devices in every home. If each one of those devices is from a different company who each decided on a different programming language to use, things are going to get very tough to maintain. Are we going to expect developers to understand all programming languages like C, C++, JavaScript, Java, Go, Python, Swift and more to be able to develop solutions for the IoT? Whilst I'm not saying that's impossible to do and I'm sure there'll be programmers up to the task of that - I worry that will impact the quality of our solutions. Every language comes with its quirks and best practices, it'll be tough to ensure every developer knows how to create best practice software for every language. Managing the IoT ecosystem might become a costly and difficult endeavour if it is that fragmented.

I've no issue with language evolution; however, if every company decides to start its own language to better meet the needs of the IoT, we're going to be in a world of trouble too. The industry needs to work together on the difficulties of the IoT, not separately. The efforts of the Open Interconnect Consortium, AllSeen Alliance and IoT Trust Framework are all positive signs towards a better approach.

C, C++ and Java always seem to be foundational languages that are used by all platforms; why do you think JavaScript will be the programming language of IoT?

My position is actually a bit more open than having JavaScript as the sole programming language of the IoT. I don't think that's feasible. JavaScript isn't great as a lower level language for memory management and the complexities of managing a device to that extent. That's okay. We are likely to have a programming language more suited to that purpose, like C or C++, as the de facto standard operational language. That would make perfect sense and has worked for plenty of devices so far. The issues I see are in connecting these devices together nicely and easily.

My ideal world would involve having devices running on C or C++ with the ability to also run JavaScript on top for the areas in which JavaScript is strongest. The ability to send out messages in JSON to other devices and web applications. That ability alone is golden when it comes to parsing messages easily and quickly. The Internet can speak JavaScript already, so for all those times when you need to speak to it, why not speak JavaScript? If you've got overall functionality which you can share between a Node server, front end web application and a dozen connected IoT devices, why not use that ability?

JavaScript works well with the event driven side of things too. When it comes to responding to and emitting events to a range of devices and client web applications at once, JavaScript does this pretty well these days.
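
To illustrate those two strengths – JSON as the shared message format, and event-driven code – here is a minimal sketch using only Node's built-in EventEmitter. The event name and message fields are illustrative choices, not a real device API.

```javascript
// "Device" and "server" sides sharing JSON messages over an event bus.
const EventEmitter = require('events');
const bus = new EventEmitter();

// Device side: serialize a reading once, and any consumer can parse it.
setInterval(() => {
  const message = JSON.stringify({ deviceId: 'thermo-1', tempC: 20 + Math.random() * 5 });
  bus.emit('reading', message);
}, 1000);

// Server/app side: the same JSON.parse works in Node and in the browser.
bus.on('reading', message => {
  const { deviceId, tempC } = JSON.parse(message);
  if (tempC > 24) console.log(`${deviceId} is running warm: ${tempC.toFixed(1)}`);
});
```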

JavaScript is also simpler to use, so for a lot of basic functionality like triggering a response on a hardware pin or retrieving data from a sensor, why overcomplicate it? If it's possible to write code that is clear and easy for many developers to understand and use without needing to worry about the lower level side of things - why not? We have a tonne of JavaScript developers out there already building for the web and having them on board to work with joining these devices to their ecosystem of web applications just makes sense.
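
For the hardware-pin and sensor cases, npm modules such as Johnny-Five (one of the hardware modules mentioned in the Node talk earlier on this page) keep the code at about this level of simplicity. A sketch, assuming an Arduino-compatible board with an LED on pin 13 and an analog sensor on pin A0:

```javascript
// Johnny-Five sketch: respond to a sensor by toggling a hardware pin.
const { Board, Led, Sensor } = require('johnny-five');

const board = new Board();

board.on('ready', () => {
  const led = new Led(13);
  const sensor = new Sensor('A0');

  // Trigger a response on a hardware pin whenever the sensor value changes.
  sensor.on('change', () => {
    if (sensor.value > 512) led.on();
    else led.off();
  });
});
```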

Basically, I think we're looking at a world where devices run programming languages like C at their core but also can speak JavaScript for the benefits it brings. Very similar to what it looks like IoT.js and JerryScript will bring. I really like the Pebble Smartwatch's approach to this. Their watches run C but their apps use JavaScript for the web connectivity.

When it comes to solutions like IoT.js and JerryScript, they're written initially in C++. However they're providing an entire interface to work with the IoT device via JavaScript. One thing I really like about the IoT.js and JerryScript idea is that I've read that it works with npm - the Node Package Manager. This is a great way of providing access to a range of modules and solutions that already exist for the JavaScript and Node ecosystems. If IoT.js and JerryScript manage memory effectively and can provide a strong foundation for all the low level side of things, then it could be a brilliant way to help make developing for the IoT easier and more consistent with developing for the web with all the benefits I mentioned earlier. It would be especially good if the same functionality was ported to other programming languages too, that would be a fantastic way of getting each IoT device to some level of compatibility and consistency.

I'm hoping to try IoT.js and JerryScript out on a Raspberry Pi 2 soon, I'm intrigued to see how well it runs everything.

What do developers need to consider when building apps for IoT?

Security - If you are building an IoT device which is going to ship out to thousands of people, think security first. Make sure you have a way of updating all of those devices remotely (yet securely) with a security fix if something goes wrong. There will be bugs in your code. Security vulnerabilities will be found in even the most core technologies you are using. You need to be able to issue patches for them!

Battery life - If everyone needs to change your brand of connected light bulbs every two months because they run out of juice - that affects the convenience of the IoT. IoT devices need to last a long time. They need to be out of the way. Battery life is crucial. Avoid coding things in a way which drains battery power unnecessarily.
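
One common pattern that follows from this advice – an illustrative sketch of mine, with readSensor() and transmit() as stand-ins for device-specific calls – is to buffer readings locally and wake the radio once per interval, since transmitting typically costs far more power than reading:

```javascript
// Batch sensor readings and transmit once per interval, instead of
// waking the radio for every single reading.
const readSensor = () => 20 + Math.random() * 5;        // stub sensor read
const transmit = payload => console.log('tx', payload); // stub radio send

const buffer = [];
setInterval(() => buffer.push({ t: Date.now(), value: readSensor() }), 1000);

setInterval(() => {
  if (buffer.length) transmit(JSON.stringify(buffer.splice(0))); // one wake-up
}, 60 * 1000);
```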

Compatibility - Work towards matching a standard like the Open Interconnect Consortium or AllSeen Alliance. Have your communication to other devices be simple and open so that your users can benefit from the device working with other IoT devices in new and surprising ways. Don't close it off to your own ecosystem!

What tools do you recommend for developing apps in IoT?

I'm a fan of the simple things. I still use Sublime Text for my coding most of the time as it's simple and out of the way, yet supports code highlighting for a range of languages and situations. It works well!

Having a portable 4G Wi-Fi dongle is also very very valuable for working on the go with IoT devices. It serves as a portable home network and saves a lot of time as you can bring it around as a development Wi-Fi network you turn on whenever you need it.

Heroku is great as a quick free platform to host your own personal IoT prototypes on too while you're testing them out. I often set up Node servers in Heroku to manage my communication between devices and it is the smoothest process I've found out of all of the hosting platforms so far.

For working locally, I've found that a service called ngrok is perfect. It creates a tunnel to the web from your localhost, so you can host a server locally but access it online via a publicly accessible URL while testing. I've got a guide on this and other options like it on SitePoint.

Are you seeing an uptick in demand for IoT developers?

I've seen a demand slowly rising for IoT developers but not much of a developer base that is taking the time to get involved. I think partially it is because developers don't know where to start or don't realise how much of their existing knowledge already applies to the IoT space. It's actually one of the reasons I write at SitePoint as a contributing editor - my goal is to try and get more developers thinking about this space. The more developers out there who are getting involved, the higher the chances we hit those breakthrough ideas that can change the world. I really hope that having devices enabled with JavaScript helps spur on a whole community of developers who've spent their lives focused on the value of interconnected devices and shared information get involved in the IoT.

My latest big website endeavour called Dev Diner (http://www.devdiner.com) aims to try and make it easier for developers to get involved with all of this emerging tech too by providing guides on where to look for information, interviews and opinion pieces to get people thinking. The more developers we get into this space, the stronger we will all be as a community! If you are reading this and you're a developer who has an Arduino buried in their drawer or a Raspberry Pi 2 still in their online shopping cart - just do it. Give it a go. Think outside the box and build something. Use JavaScript if that is your strength. If you're stronger at working with C or C++, work to your strength but know that JavaScript might be a good option to help with the communication side of things too.

For more on Patrick’s thoughts on Javascript, read his blog post “Why JavaScript and the Internet of Things?” and catch his O’Reilly seminar here.

Read more…

Guest blog post by Ajit Jaokar

By Ajit Jaokar (@ajitjaokar). Please connect with me on LinkedIn if you want to stay in touch and for future updates.

Cross posted from my blog - I look forward to discussion/feedback here

Note: The paper below is best read as a pdf which you can download from the blog for free

Background and Abstract

This article is part of an evolving theme. Here, I explain the basics of Deep Learning and how Deep Learning algorithms could apply to IoT and Smart city domains. Specifically, as I discuss below, I am interested in complementing Deep Learning algorithms using IoT datasets. I elaborate on these ideas in the Data Science for Internet of Things program, which enables you to work towards being a Data Scientist for the Internet of Things (modelled on the course I teach at Oxford University and UPM – Madrid). I will also present these ideas at the International Conference on City Sciences at Tongji University in Shanghai and the Data Science for IoT workshop at the Iotworld event in San Francisco.


Deep Learning

Deep learning is often thought of as a set of algorithms that ‘mimics the brain’. A more accurate description would be an algorithm that ‘learns in layers’. Deep learning involves learning through layers which allows a computer to build a hierarchy of complex concepts out of simpler concepts.

The obscure world of deep learning algorithms came into the public limelight when Google researchers fed 10 million random, unlabeled images from YouTube into their experimental Deep Learning system. They then instructed the system to recognize the basic elements of a picture and how these elements fit together. The system, comprising 16,000 CPUs, was able to identify images that shared similar characteristics (such as images of cats). This canonical experiment showed the potential of Deep Learning algorithms. Deep learning algorithms apply to many areas including Computer Vision, Image recognition, pattern recognition, speech recognition, behaviour recognition, etc.

 

How does a Computer Learn?

To understand the significance of Deep Learning algorithms, it’s important to understand how computers think and learn. Since the early days, researchers have attempted to create computers that think. Until recently, this effort was rules-based, adopting a ‘top-down’ approach. The top-down approach involved writing enough rules for all possible circumstances. But this approach is obviously limited by the number of rules and by its finite rule base.

To overcome these limitations, a bottom-up approach was proposed. The idea here is to learn from experience. The experience was provided by ‘labelled data’. Labelled data is fed to a system and the system is trained based on the responses. This approach works for applications like Spam filtering. However, most data (pictures, video feeds, sounds, etc.) is not labelled and if it is, it’s not labelled well.

The other issue is in handling problem domains which are not finite. For example, the problem domain in chess is complex but finite because there are a finite number of primitives (32 chess pieces) and a finite set of allowable actions (on 64 squares). But in real life, at any instant, we have a potentially large or even infinite number of alternatives. The problem domain is thus very large.

A problem like playing chess can be ‘described’ to a computer by a set of formal rules. In contrast, many real-world problems are easily understood by people (intuitive) but not easy to describe (represent) to a computer (unlike chess). Examples of such intuitive problems include recognizing words or faces in an image. Such problems are hard to describe to a computer because the problem domain is not finite. Thus, the problem description suffers from the curse of dimensionality, i.e. when the number of dimensions increases, the volume of the space increases so fast that the available data becomes sparse. Computers cannot be trained on sparse data. Such scenarios are not easy to describe because there is not enough data to adequately represent the combinations represented by the dimensions. Nevertheless, such ‘infinite choice’ problems are common in daily life.
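
A standard way to make the curse of dimensionality concrete (my illustration, not from the original article): covering the unit cube [0,1]^d at resolution ε along each axis requires (1/ε)^d cells, so the data needed to populate the space grows exponentially with the number of dimensions:

```latex
N(\epsilon, d) = \left(\tfrac{1}{\epsilon}\right)^{d}, \qquad N(0.1,\,2) = 10^{2}, \qquad N(0.1,\,10) = 10^{10}
```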

How do Deep learning algorithms learn?

Deep learning is concerned with ‘hard/intuitive’ problems which have little or no rules and high dimensionality. Here, the system must learn to cope with unforeseen circumstances without knowing the rules in advance. Many existing systems like Siri’s speech recognition and Facebook’s face recognition work on these principles. Deep learning systems are possible to implement now because of three reasons: high CPU power, better algorithms and the availability of more data. Over the next few years, these factors will lead to more applications of Deep learning systems.

Deep Learning algorithms are modelled on the workings of the brain. The brain may be thought of as a massively parallel analog computer which contains about 10^10 simple processors (neurons), each of which requires a few milliseconds to respond to input. To model the workings of the brain, in theory, each neuron could be designed as a small electronic device which has a transfer function similar to a biological neuron. We could then connect each neuron to many other neurons to imitate the workings of the brain. In practice, it turns out that this model is not easy to implement and is difficult to train.

So, we make some simplifications in the model mimicking the brain. The resultant neural network is called “feed-forward back-propagation network”.  The simplifications/constraints are: We change the connectivity between the neurons so that they are in distinct layers. Each neuron in one layer is connected to every neuron in the next layer. Signals flow in only one direction. And finally, we simplify the neuron design to ‘fire’ based on simple, weight driven inputs from other neurons. Such a simplified network (feed-forward neural network model) is more practical to build and use.

Thus:

a) Each neuron receives a signal from the neurons in the previous layer.

b) Each of those signals is multiplied by a weight value.

c) The weighted inputs are summed, and passed through a limiting function which scales the output to a fixed range of values.

d) The output of the limiter is then broadcast to all of the neurons in the next layer.
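
In symbols, steps a)–d) for a single neuron j can be summarized as follows (a standard formulation; the sigmoid is one common choice of limiting function):

```latex
y_j = f\Big(\sum_i w_{ij}\, x_i\Big), \qquad f(z) = \frac{1}{1 + e^{-z}}
```

where the x_i are the outputs of the neurons in the previous layer, the w_ij are the link weights, and y_j is broadcast to the next layer.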

Image and parts of the description in this section adapted from: Seattle Robotics site

The most common learning algorithm for artificial neural networks is called Back Propagation (BP), which stands for “backward propagation of errors”. To use the neural network, we apply the input values to the first layer, allow the signals to propagate through the network and read the output. A BP network learns by example, i.e. we must provide a learning set that consists of some input examples and the known correct output for each case. So, we use these input-output examples to show the network what type of behaviour is expected. The BP algorithm allows the network to adapt by adjusting the weights by propagating the error value backwards through the network. Each link between neurons has a unique weighting value. The ‘intelligence’ of the network lies in the values of the weights. With each iteration of the errors flowing backwards, the weights are adjusted. The whole process is repeated for each of the example cases. Thus, to detect an object, programmers would train a neural network by rapidly sending across many digitized versions of data (for example, images) containing those objects. If the network did not accurately recognize a particular pattern, the weights would be adjusted. The eventual goal of this training is to get the network to consistently recognize the patterns that we recognize (e.g. cats).
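
To make this train-by-example loop concrete, here is a small illustrative sketch in plain JavaScript (my own demo, not from the original post): a tiny 2-2-1 feed-forward network that learns XOR by back propagation. The network shape, learning rate and epoch count are arbitrary demo choices.

```javascript
// A tiny feed-forward network (2 inputs -> 2 hidden -> 1 output) trained
// with back propagation; an unlucky random start may need a re-run.
const sigmoid = z => 1 / (1 + Math.exp(-z));

const w1 = [[Math.random() - 0.5, Math.random() - 0.5],   // weights input i ->
            [Math.random() - 0.5, Math.random() - 0.5]];  // hidden j: w1[i][j]
const b1 = [0, 0];
const w2 = [Math.random() - 0.5, Math.random() - 0.5];    // hidden -> output
let b2 = 0;
const rate = 0.5;

// The learning set: "given an input, this is the correct output" (XOR).
const examples = [
  { input: [0, 0], target: 0 },
  { input: [0, 1], target: 1 },
  { input: [1, 0], target: 1 },
  { input: [1, 1], target: 0 },
];

function forward([x0, x1]) {
  const h = [0, 1].map(j => sigmoid(x0 * w1[0][j] + x1 * w1[1][j] + b1[j]));
  return { h, out: sigmoid(h[0] * w2[0] + h[1] * w2[1] + b2) };
}

for (let epoch = 0; epoch < 20000; epoch++) {
  for (const { input, target } of examples) {
    const { h, out } = forward(input);
    // The error value flows backwards: output delta first, then hidden deltas.
    const dOut = (out - target) * out * (1 - out);
    const dHidden = [0, 1].map(j => dOut * w2[j] * h[j] * (1 - h[j]));
    // Each weight is adjusted in proportion to its contribution to the error.
    [0, 1].forEach(j => { w2[j] -= rate * dOut * h[j]; b1[j] -= rate * dHidden[j]; });
    b2 -= rate * dOut;
    [0, 1].forEach(i => [0, 1].forEach(j => { w1[i][j] -= rate * dHidden[j] * input[i]; }));
  }
}

examples.forEach(({ input }) => console.log(input, '->', forward(input).out.toFixed(3)));
```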

How does Deep Learning help to solve intuitive problems?

The whole objective of Deep Learning is to solve ‘intuitive’ problems, i.e. problems characterized by high dimensionality and no rules. The above mechanism demonstrates a supervised learning algorithm based on a limited modelling of neurons – but we need to understand more.

Deep learning allows computers to solve intuitive problems because:

  • With Deep learning, Computers can learn from experience but also can understand the world in terms of a hierarchy of concepts – where each concept is defined in terms of simpler concepts.
  • The hierarchy of concepts is built ‘bottom up’ without predefined rules by addressing the ‘representation problem’.

This is similar to the way a child learns ‘what a dog is’, i.e. by understanding the sub-components of a concept, e.g. the behaviour (barking), the shape of the head, the tail, the fur, etc., and then putting these concepts together in one bigger idea, i.e. the dog itself.

The (knowledge) representation problem is a recurring theme in Computer Science.

Knowledge representation incorporates theories from psychology which look to understand how humans solve problems and represent knowledge. The idea is that if, like humans, computers were to gather knowledge from experience, this would avoid the need for human operators to formally specify all of the knowledge that the computer needs to solve a problem.

For a computer, the choice of representation has an enormous effect on the performance of machine learning algorithms. For example, based on the sound pitch, it is possible to know if the speaker is a man, woman or child. However, for many applications, it is not easy to know what set of features represent the information accurately. For example, to detect pictures of cars in images, a wheel may be circular in shape – but actual pictures of wheels may have variants (spokes, metal parts etc). So, the idea of representation learning is to find both the mapping and the representation.

If we can find representations and their mappings automatically (i.e. without human intervention), we have a flexible design to solve intuitive problems. We can adapt to new tasks and we can even infer new insights without observation. For example, based on the pitch of the sound, we can infer an accent and hence a nationality. The mechanism is self-learning. Deep learning applications are best suited for situations which involve large amounts of data and complex relationships between different parameters. Training a Neural network involves repeatedly showing it that: “Given an input, this is the correct output”. If this is done enough times, a sufficiently trained network will mimic the function you are simulating. It will also ignore inputs that are irrelevant to the solution. Conversely, it will fail to converge on a solution if you leave out critical inputs. This model can be applied to many scenarios as we see below in a simplified example.

An example of learning through layers

Deep learning involves learning through layers which allows a computer to build a hierarchy of complex concepts out of simpler concepts. This approach works for subjective and intuitive problems which are difficult to articulate.

Consider image data. Computers cannot understand the meaning of a collection of pixels. Mappings from a collection of pixels to a complex Object are complicated.

With deep learning, the problem is broken down into a series of hierarchical mappings – with each mapping described by a specific layer.

The input (representing the variables we actually observe) is presented at the visible layer. Then a series of hidden layers extracts increasingly abstract features from the input, with each layer concerned with a specific mapping. However, note that this process is not predefined, i.e. we do not specify what the layers select.

For example: From the pixels, the first hidden layer identifies the edges

From the edges, the second hidden layer identifies the corners and contours

From the corners and contours, the third hidden layer identifies the parts of objects

Finally, from the parts of objects, the fourth hidden layer identifies whole objects

Image and example source: Yoshua Bengio book – Deep Learning

Implications for IoT

To recap:

  • Deep learning algorithms apply to many areas including Computer Vision, Image recognition, pattern recognition, speech recognition, behaviour recognition, etc.
  • Deep learning systems are possible to implement now because of three reasons: High CPU power, Better Algorithms and the availability of more data. Over the next few years, these factors will lead to more applications of Deep learning systems.
  • Deep learning applications are best suited for situations which involve large amounts of data and complex relationships between different parameters.
  • Solving intuitive problems: Training a Neural network involves repeatedly showing it that: “Given an input, this is the correct output”. If this is done enough times, a sufficiently trained network will mimic the function you are simulating. It will also ignore inputs that are irrelevant to the solution. Conversely, it will fail to converge on a solution if you leave out critical inputs. This model can be applied to many scenarios

In addition, we have limitations in the technology. For instance, we have a long way to go before a Deep learning system can figure out that you are sad because your cat died (although it seems CogniToys, based on IBM Watson, is heading in that direction). The current focus is more on identifying photos, or guessing age from photos (based on Microsoft’s Project Oxford API).

And we do indeed have a way to go, as Andrew Ng reminds us when he compares Artificial Intelligence to building a rocket ship:

“I think AI is akin to building a rocket ship. You need a huge engine and a lot of fuel. If you have a large engine and a tiny amount of fuel, you won’t make it to orbit. If you have a tiny engine and a ton of fuel, you can’t even lift off. To build a rocket you need a huge engine and a lot of fuel. The analogy to deep learning [one of the key processes in creating artificial intelligence] is that the rocket engine is the deep learning models and the fuel is the huge amounts of data we can feed to these algorithms.”

Today, we are still limited by technology from achieving scale. Google’s neural network that identified cats had 16,000 nodes. In contrast, a human brain has an estimated 100 billion neurons!

There are some scenarios where back-propagation neural networks are well suited:

  • A large amount of input/output data is available, but you’re not sure how the inputs relate to the outputs. Thus, we have a large number of “Given an input, this is the correct output” type scenarios which can be used to train the network, because it is easy to create a number of examples of correct behaviour.
  • The problem appears to have overwhelming complexity. The complexity arises from a low rule base, high dimensionality, and data which is not easy to represent. However, there is clearly a solution.
  • The solution to the problem may change over time, within the bounds of the given input and output parameters (i.e., today 2+2=4, but in the future we may find that 2+2=3.8), and outputs can be “fuzzy”, or non-numeric.
  • Domain expertise is not strictly needed because the output can be purely derived from inputs: This is controversial because it is not always possible to model an output based on the input alone. However, consider the example of stock market prediction. In theory, given enough cases of inputs and outputs for a stock value, you could create a model which would predict unknown scenarios if it was trained adequately using deep learning techniques.
  • Inference: We need to infer new insights without observation. For example, based on the pitch of the sound, we can infer an accent and hence a nationality.

Given an IoT domain, we could consider the top-level questions:

  • What existing applications can be complemented by Deep learning techniques by adding an intuitive component (e.g. in smart cities)?
  • What metrics are being measured and predicted? And how could we add an intuitive component to the metric?
  • What applications exist in Computer Vision, Image recognition, pattern recognition, speech recognition, behaviour recognition, etc. which also apply to IoT?

Now, extending more deeply into the research domain, here are some areas of interest that I am following.

Complementing Deep Learning algorithms with IoT datasets

In essence, these techniques/strategies complement Deep learning algorithms with IoT datasets.

1) Deep learning algorithms and time series data: Time series data (coming from sensors) can be thought of as a 1D grid taking samples at regular time intervals, and image data can be thought of as a 2D grid of pixels. This allows us to model time series data with Deep learning algorithms (most sensor/IoT data is time series); see the sketch after this list. It is relatively less common to explore Deep learning and time series, but there are some instances of this approach already (Deep Learning for Time Series Modelling to predict energy loads using only time and temp data).

2) Multiple modalities: Multimodality in deep learning algorithms is being explored. In particular, cross-modality feature learning, where better features for one modality (e.g., video) can be learned if multiple modalities (e.g., audio and video) are present at feature learning time.

3) Temporal patterns in Deep learning: In their recent paper, Ph.D. student Huan-Kai Peng and Professor Radu Marculescu, from Carnegie Mellon University’s Department of Electrical and Computer Engineering, propose a new way to identify the intrinsic dynamics of interaction patterns at multiple time scales. Their method involves building a deep-learning model that consists of multiple levels; each level captures the relevant patterns of a specific temporal scale. The newly proposed model can also be used to explain the possible ways in which short-term patterns relate to long-term patterns. For example, it becomes possible to describe how a long-term pattern in Twitter can be sustained and enhanced by a sequence of short-term patterns, including characteristics like popularity, stickiness, contagiousness, and interactivity. The paper can be downloaded HERE
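
To make point 1 above concrete, here is a small illustrative sketch (in JavaScript, to match the code examples earlier on this page) of the usual framing step: turning a 1D stream of regularly sampled sensor readings into "given the last n readings, predict the next one" training pairs:

```javascript
// Frame a 1D time series (regular sensor samples) as supervised
// input/output pairs using a sliding window of length n.
function slidingWindows(series, n) {
  const pairs = [];
  for (let i = 0; i + n < series.length; i++) {
    pairs.push({ input: series.slice(i, i + n), target: series[i + n] });
  }
  return pairs;
}

// e.g. hourly temperature readings -> "last 3 readings predict the next"
const temps = [21.0, 21.4, 22.1, 23.0, 22.6, 21.9];
console.log(slidingWindows(temps, 3));
// [ { input: [21, 21.4, 22.1], target: 23 }, ... ]
```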

Implications for Smart cities

I see Smart cities as an application domain for the Internet of Things. Many definitions exist for Smart cities/future cities. From our perspective, Smart cities refer to the use of digital technologies to enhance performance and wellbeing, to reduce costs and resource consumption, and to engage more effectively and actively with citizens (adapted from Wikipedia). Key ‘smart’ sectors include transport, energy, health care, water and waste. A more comprehensive list of Smart City/IoT application areas is: intelligent transport systems (including automatic and autonomous vehicles), medical and healthcare, environment, waste management, air quality, water quality, accident and emergency services, and energy (including renewables). In all these areas we could find applications to which we could add an intuitive component based on the ideas above.

Typical domains will include Computer Vision, Image recognition, pattern recognition, speech recognition and behaviour recognition. Of special interest are new areas such as self-driving cars – e.g. the Lutz pod – and even larger vehicles such as self-driving trucks.

Conclusions

Deep learning involves learning through layers which allows a computer to build a hierarchy of complex concepts out of simpler concepts. Deep learning is used to address intuitive applications with high dimensionality. It is an emerging field and over the next few years, due to advances in technology, we are likely to see many more applications in the Deep learning space. I am specifically interested in how IoT datasets can be used to complement deep learning algorithms. This is an emerging area with some examples shown above. I believe that it will have widespread applications, many of which we have not fully explored (as in the Smart city examples).

I see this article as part of an evolving theme. Future updates will explore how Deep learning algorithms could apply to IoT and Smart city domains. Also, I am interested in complementing Deep learning algorithms using IoT datasets.

I elaborate on these ideas in the Data Science for Internet of Things program (modelled on the course I teach at Oxford University and UPM – Madrid). I will also present these ideas at the International Conference on City Sciences at Tongji University in Shanghai and the Data Science for IoT workshop at the Iotworld event in San Francisco.

Please connect with me on LinkedIn if you want to stay in touch and for future updates.


Read more…