Analytics has taken the world by storm, and it is the powerhouse behind the digital transformation happening in every industry.
Machine Learning (ML) has revolutionized the world of computers by allowing them to learn as they progress through large datasets, thus mitigating many previous programming pitfalls and impasses. Machine Learning builds algorithms that, when exposed to high volumes of data, can self-teach and evolve. When this unique technology powers Artificial Intelligence (AI) applications, the combination can be powerful. We can soon expect to see smart robots around us doing all our jobs – much quicker, much more accurately, and even improving themselves at every step. Will this world need intelligent humans anymore, or shall we soon be outclassed by self-thinking robots? What are the most visible 2017 Machine Learning trends?
2017 Machine Learning Trends in Research
In the research areas, Machine Learning is steadily moving away from abstractions and engaging more in business problem solving with support from AI and Deep Learning. In What Is the Future of Machine Learning, Forbes predicts that theoretical research in ML will gradually pave the way for business problem-solving. With Big Data making its way back to mainstream business activities, smart (ML) algorithms can now simply use massive loads of both static and dynamic data to continuously learn and improve for enhanced performance.
2017 ML Application Development Trends
Gartner’s Top 10 Technology Trends for 2017 predicts that the combined AI and advanced ML practice that ignited about four years ago, and has since continued unabated, will dominate Artificial Intelligence application development in 2017. This potent combination will deliver more systems that “understand, learn, predict, adapt and potentially operate autonomously.” Cheap hardware, cheap memory, cheap storage technologies, more processing power, superior algorithms, and massive data streams will all contribute to the success of ML-powered AI applications. There will be a steady rise in ML-powered AI applications in industry sectors like preventive healthcare, banking, finance, and media. For businesses, that means more automated functions and fewer human checkpoints. Forrester’s 2017 Predictions suggest that the Artificial Intelligence and Machine Learning Cloud will increasingly feed on IoT data as sensors and smart apps take over every facet of our daily lives.
Democratization of Machine Learning in the Cloud
The democratization of AI and ML through Cloud technologies, open standards, and the algorithm economy will continue. The growing trend of deploying prebuilt ML algorithms to enable Self-Service Business Intelligence and Analytics is a positive step towards the democratization of ML. In Google Says Machine Learning is the Future, the author champions the democratization of ML through idea sharing. A case in point is Google’s TensorFlow, which has championed the need for open standards in Machine Learning. This article claims that almost anyone with a laptop and an Internet connection can dare to be a Machine Learning expert today, provided they have the right mindset.
The provisioning of Cloud-based IT services was already a good step toward making advanced Data Science a mainstream activity, and now with the Cloud and packaged algorithms, mid-sized and smaller businesses will have access to Self-Service BI and Analytics, which was only a dream until now. Also, mainstream business users will gradually take an active role in data-centric business systems. Machine Learning Trends – Future AI claims that more enterprises in 2017 will capitalize on the Machine Learning Cloud and do their part to lobby for democratized data technologies.
Platform Wars will Peak in 2017
The platform war between IBM, Microsoft, Google, and Facebook to be the leader in ML developments will peak in 2017. Where Machine Learning Is Headed predicts that 2017 will experience tremendous growth of smart apps, digital assistants, and mainstream use of Artificial Intelligence. Although many ML-enabled AI systems have turned into success stories, self-driving cars may die a premature death.
Humans will Make Peace with Machines
Since 2012, the global business community has witnessed a meteoric rise and widespread proliferation of data technologies. Finally, humans will realize that it is time to stop fearing the machines and begin working with them. The InfoWorld article titled Application Development, Docker, Machine Learning Are Top Tech Trends for 2017 asserts that humans and machines will work with each other, not against each other. In this context, readers should review the DATAVERSITY® article The Future of Machine Learning: Trends, Observations, and Forecasts, where readers are reminded that as businesses develop a strong dependence on pre-built ML algorithms for Advanced Analytics, the need for Data Scientists or large IT departments may diminish.
Demand-Supply Gaps in Data Science and Machine Learning will Rise
The business world is steadily heading toward the prophetic 2018, when according to McKinsey the first void in data technology expertise will be felt in the US and then gradually in the rest of the world. The demand-supply gap in Data Science and Machine Learning skills will continue to rise till academic programs and industry workshops begin to produce a ready workforce. In response to this sharp rise in the demand-supply gap, more enterprises and academic institutions will collaborate to train future Data Scientists and ML experts. This kind of training will compete with the traditional Data Science classroom and will focus more on practical skills rather than on theoretical knowledge.
The Algorithm Economy will take Centre Stage
Over the next year or two, businesses will be using canned algorithms for all data-centric activities like BI, Predictive Analytics, and CRM. The algorithm economy, which Forbes mentions, will usher in a marketplace where all data companies will compete for space. In 2017, global businesses will engage in Self-Service BI, and experience the growth of algorithmic business solutions, and ML in the Cloud. So far as algorithm-driven business decision making is concerned, 2017 may actually see two distinct types of algorithm economies. On one hand, average businesses will utilize canned algorithmic models for their operational and customer-facing functions. On the other hand, proprietary ML algorithms will become a market differentiator among large, competing enterprises.
Some Thoughts to Ponder
If the threat of intelligent machines taking over Data Scientists is really as real as it is made out to be, then 2017 is probably the year when the global Data Science community should take a new look at the capabilities of so-called “smart machines.” The repeated failure of autonomous cars has made one point clear: even learning machines cannot surpass the natural thinking faculties bestowed by nature on human beings. If autonomous or self-guided machines are to be useful to human society, then current Artificial Intelligence and Machine Learning research should focus on acknowledging the limits of machine power, assigning tasks that are suitable for machines, and including more human interventions at necessary checkpoints to avert disasters. Repetitive, routine tasks can be well handled by machines, but any out-of-the-ordinary situations will still require human intervention.
To know more about High-End professional training on ML, AI, IoT, Big Data, Cloud, Analytics, Data Science and more, feel free to drop a line at: [email protected]
This article originally appeared here.
In the recent Cricket World Cup, every team had its own team of Data Analysts. They used various technologies like Cloud platforms and visualizations to predict scores, player performance, player profiles, and more. Around 40 years’ worth of Cricket World Cup data is being mined to produce insights that enhance the viewer's experience.
- What would be the most profitable food served at the concession stand?
- What would be the best prices to sell game day tickets?
- Which player on the team is the most productive?
- Which players in the draft will become all-stars, and which ones will be considered role players?
- How do fans behave at the stadium, and what relevant information should the app push to them accordingly?
Now is the time to move beyond just collecting, storing, and managing data, and to take rapid action on continuously streaming data – in real time!
- Digital makes customer self-service easy.
- Digitally engaged customers trust their utilities.
- Customer care, provided through digital technology, offers utilities both cost-to-serve efficiencies and improved customer intimacy.
- Digital technology brings the capability to provide more accurate billing and payment processing, as well as faster response times for changing addresses and bills, removing and adding services, and many other functions
- Using Mobile as a primary customer engagement channel for tips and alerts
- Predictive maintenance with outage maps and real-time alerts to service engineers helps reduce downtime and costs
- Smart meters allow utility organizations to inform their customers about their energy consumption and to tailor products and services to them, while achieving significant operational efficiencies at the same time
- Minimize maintenance costs – Don’t waste money on over-cautious, time-bound maintenance. Only repair equipment when repairs are actually needed.
- Reduce unplanned downtime – Implement predictive maintenance to predict future equipment malfunctions and failures, and to minimize unplanned disasters that put your business at risk.
- Root cause analysis – Find the causes of equipment malfunctions and work with suppliers to eliminate the reasons for high failure rates. Increase the return on your assets.
- Efficient labor planning — no time wasted replacing/fixing equipment that doesn’t need it
- Avoid warranty costs for failure recovery – for automakers this can mean thousands of recalls, and for assembly lines, lost production
Trenitalia has invested €50M in an Internet of Things project that is expected to cut maintenance costs by up to €130M while increasing train availability and customer satisfaction.
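The predictive-maintenance idea in the list above can be sketched in a few lines: instead of repairing on a fixed schedule, flag equipment only when its sensor readings drift from their recent baseline. This is a minimal illustration, not a production approach; the vibration readings and the three-sigma threshold are invented for the example.

```python
from statistics import mean, stdev

def needs_maintenance(readings, window=10, threshold=3.0):
    """Flag equipment whose latest reading drifts more than `threshold`
    standard deviations from its recent baseline (the previous `window` readings)."""
    baseline = readings[-window - 1:-1]  # readings just before the latest one
    latest = readings[-1]
    mu, sigma = mean(baseline), stdev(baseline)
    if sigma == 0:
        return latest != mu
    return abs(latest - mu) / sigma > threshold

# Stable readings trigger no repair; a sudden spike raises an alert.
stable = [1.0, 1.1, 0.9, 1.0, 1.05, 0.95, 1.0, 1.1, 0.9, 1.0, 1.02]
faulty = stable[:-1] + [4.5]
print(needs_maintenance(stable))  # False
print(needs_maintenance(faulty))  # True
```

A real deployment would replace the single threshold with a model trained on historical failure data, but the principle – repair on evidence, not on the calendar – is the same.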
- Identity: name, location, gender, age and other demographic data
- Relationships: their influence, connections, associations with others
- Current activity: orders, complaints, deliveries, returns
- History: contacts, campaigns, processes, cases across all lines of business and channels
- Value: which products or services they are associated with, including history
- Flags: prompts to give context, e.g. churn propensity, up-sell options, fraud risk, mood of last interactions, complaint record, frequency of contact
- Actions: expected, likely or essential steps based on who they are and the fact they are calling now
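The attributes listed above amount to a data structure. As a minimal sketch, the record could be modeled like this; the class and field names are illustrative, not a specific vendor's schema.

```python
from dataclasses import dataclass, field

@dataclass
class Customer360:
    """One consolidated customer record, following the attribute list above."""
    identity: dict          # name, location, gender, age, demographics
    relationships: list     # influence, connections, associations
    current_activity: list  # orders, complaints, deliveries, returns
    history: list           # contacts, campaigns, cases across channels
    value: list             # associated products/services, with history
    flags: dict = field(default_factory=dict)    # churn propensity, fraud risk, mood...
    actions: list = field(default_factory=list)  # expected or recommended next steps

profile = Customer360(
    identity={"name": "A. Shopper", "location": "London"},
    relationships=["no known connections"],
    current_activity=["order #123 delivered"],
    history=["summer campaign 2016"],
    value=["broadband plan"],
    flags={"churn_propensity": "low"},
)
print(profile.flags["churn_propensity"])  # low
```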
- All customer touch point data in a single repository for fast queries
- Next best actions or recommendations for customers
- All key metrics in a single location for business users to know and advise customers
- Intuitive and customizable dashboards for quick insights
- Real-time, hyper-personalized customer interaction
- Enhanced customer loyalty
Customer 360º helps achieve a Single View of the Customer across channels (online, stores, marketplaces), devices (wearables, mobile, tablets, laptops), and interactions (purchases, posts, likes, feedback, service).
- Increased need & desire among businesses to gain greater value from their data
- Over 80% of the data/information that businesses generate and collect is unstructured or semi-structured data that needs special treatment
- Typically requires mix of skills - mathematics, statistics, computer science, machine learning and most importantly business knowledge
- They need to employ R or Python to clean the data and remove irrelevant records
- Create algorithms to solve the business problems
- Finally, effectively communicate the findings to management
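The cleaning step above can be sketched in plain Python; real projects would typically reach for pandas or R's dplyr, and the records here are invented for illustration.

```python
# Drop records that are irrelevant (no customer identifier) or
# unparseable (spend that is not a number), and normalize the rest.
raw_records = [
    {"customer": "alice", "spend": "120.50"},
    {"customer": "  bob ", "spend": "80"},
    {"customer": "", "spend": "55"},        # no customer -> irrelevant
    {"customer": "carol", "spend": "n/a"},  # unparseable -> dropped
]

def clean(records):
    cleaned = []
    for rec in records:
        name = rec["customer"].strip()
        try:
            spend = float(rec["spend"])
        except ValueError:
            continue  # drop rows whose spend cannot be parsed
        if name:      # drop rows with no customer identifier
            cleaned.append({"customer": name, "spend": spend})
    return cleaned

print(clean(raw_records))
# [{'customer': 'alice', 'spend': 120.5}, {'customer': 'bob', 'spend': 80.0}]
```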
Any company, in any industry, that crunches large volumes of numbers, possesses lots of operational and customer data, or can benefit from social media streams, credit data, consumer research or third-party data sets can benefit from having a data scientist or a data science team.
- Kirk D. Borne of Booz Allen
- D.J. Patil, Chief Data Scientist at the White House
- Gregory Piatetsky of KDnuggets
- Vincent Granville of AnalyticBridge
- Jonathan Goldman of LinkedIn
- Ronald Van Loon
One of the main attractions of automated analytics appears to be the perception that it represents an automated process that is able to learn automatically from data without the need to do any programming of rules. Furthermore, it is perceived that the IOT will allow organisations to apply analytics to data being generated by any physical asset or business process and thereafter being able to use automated analytics to monitor asset performance, detect anomalies and generate problem resolution / trouble-shooting advice; all without any programming of rules!
In reality, automated analytics is a powerful technology for turning data into actionable insight / knowledge and thereby represents a key enabling technology for automation in Industrial IOT. However, automated analytics alone cannot deliver complete solutions for the following reasons:
i- In order for analytics to learn effectively, it needs data that spans the spectrum of normal, sub-normal, and anomalous asset/process behaviour. Such data can become available relatively quickly in a scenario where there are tens or hundreds of thousands of similar assets (central heating boilers, mobile phones etc.). However, this is not the case for more complex equipment / plants / processes, where the volume of available fault or anomalous-behaviour data is simply not large enough to facilitate effective analytics learning/modelling. As a result, any generated automated analytics will be very restricted in scope and will flag a large number of anomalies simply because the corresponding operating conditions do not exist in the training data.
ii- By focussing on data analytics alone we are ignoring the most important asset of any organisation; namely the expertise of its people in how to operate plants / processes. This expertise covers condition / risk assessment, planning, configuration, diagnostics, trouble-shooting, and other skills that can involve decision making tasks. Automating ‘decision making’ and applying it to streaming real-time IOT data offers huge business benefits and is very complementary to automated analytics, in that it addresses the very areas in point i above where data coverage is incomplete but human expertise exists.
Capturing expertise into an automated decision making system does require the programming of rules and decisions, but that need not be lengthy or cumbersome in a modern rules/decision automation technology such as XpertRule. Decision making tasks can be represented in a graphical way that a subject matter expert can easily author and maintain without the involvement of a programmer. This can be done using graphical and easy to edit decision flows, decision trees, decision tables, and rules. From my experience in using this approach, a substantial decision making task of tens of decision trees can be captured and deployed within a few weeks.
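To make the idea concrete, a captured expert decision tree behaves like nested rules. The sketch below hand-codes a tiny fault-diagnosis tree in Python; in a tool like XpertRule this would be authored graphically, and the mill parameters and thresholds here are entirely invented for illustration.

```python
# A hand-coded sketch of an expert fault-diagnosis decision tree.
# All parameter names and threshold values are hypothetical.
def diagnose(readings):
    if readings["particle_size"] > 150:          # microns, illustrative
        if readings["classifier_speed"] < 900:   # rpm, illustrative
            return "increase classifier speed"
        return "check screen for wear"
    if readings["motor_load"] > 0.95:
        return "reduce feed rate"
    return "no fault detected"

print(diagnose({"particle_size": 180, "classifier_speed": 850, "motor_load": 0.8}))
# increase classifier speed
```

The point is that each branch encodes one expert judgment, so a subject matter expert can read, validate, and extend the tree without touching general-purpose code.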
Given the complementary nature of automated analytics and automated decisions, I would recommend the use of symbolic learning data analytics techniques. Symbolic analytics generates rule/tree structures from data which are interpretable and understandable to the domain experts. Whilst rule/tree analytics models are marginally less accurate than deep learning or other ‘black box’ models, the transparency of symbolic data models offers a number of advantages:
i- The analytics models can be validated by the domain experts
ii- The domain experts can add additional decision knowledge to the analytics models
iii- The transparency of the data models gives the experts insights into the root causes of problems and highlights opportunities for performance improvement.
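A minimal sketch of what "symbolic learning" means in practice: induce a single human-readable rule (a one-level decision tree, or stump) from labelled process data. The readings and labels below are invented; a real system would induce many such rules over many variables.

```python
def best_stump(samples, labels):
    """Find the threshold split on one variable that best separates the labels
    (0 = normal, 1 = anomalous). Returns (threshold, correctly_classified)."""
    best = None
    for threshold in sorted(set(samples)):
        left = [l for s, l in zip(samples, labels) if s <= threshold]
        right = [l for s, l in zip(samples, labels) if s > threshold]
        # Try both orientations of the rule and keep the better one.
        correct = max(left.count(0) + right.count(1),
                      left.count(1) + right.count(0))
        if best is None or correct > best[1]:
            best = (threshold, correct)
    return best

temps  = [60, 65, 70, 88, 92, 95]  # inlet temperature readings (illustrative)
faults = [0,  0,  0,  1,  1,  1]   # 1 = anomalous batch
threshold, correct = best_stump(temps, faults)
print(f"IF temperature > {threshold} THEN anomalous ({correct}/6 correct)")
# IF temperature > 70 THEN anomalous (6/6 correct)
```

Unlike a black-box model, the output is a rule a domain expert can validate, refine, or reject on sight, which is exactly the advantage claimed in points i–iii above.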
Combining automated knowledge from data analytics with automated decisions from domain experts can deliver a paradigm shift in the way organisations use IOT to manage their assets / processes. It allows organisations to deploy their best practice expertise 24/7 real time throughout the organisation and rapidly turn newly acquired data into new and improved knowledge.
Below are example decision and analytics knowledge from an industrial IOT solution that we developed for a major manufacturer of powder processing mills. The solution monitors the performance of the mills to diagnose problems and to detect anomalous behaviour:
The fault-diagnosis tree below is part of the knowledge captured from the subject matter experts within the company.
The tree below is generated by automated data analytics and relates the output particle size to other process parameters and environmental variables. The tree is one of many analytics models used to monitor anomalous behaviour of the process.
The above example demonstrates both the complementary nature of rules and analytics automation and the interpretability of symbolic analytics. In my next posting I will cover the subject of the rapid capture of decision making expertise using decision structuring and the induction of decision trees from decision examples provided by subject matter experts.
The original article is published at Forbes.
Have you heard about the magic pill? Not sure how it works, but it helps you lose 20 pounds in a week while consuming the same calories as before. And you’ve probably also heard about the scary side effects of that pill. The need for magic pills is appearing in the IoT market as well. Thanks to the explosion of sensors to measure everything imaginable within the Internet of Things, enterprises are confronted with a never-ending buffet of tempting data.
Typically data has been consumed like food: first it is grown, harvested, and prepared. Then this enjoyable meal is ingested into a data warehouse and digested through analytics. Finally we extract the nutritional value and put it to work to improve some part of our operations. Enterprises have evolved to consume data from CRM, ERP, and even the Web that is high in signal nutrition in this genteel, managed manner from which they can project trends or derive useful BI.
The IoT and its superabundance of sensors completely changes that paradigm and we need to give serious consideration to our data dietary habits if we want to succeed in this new data food chain. Rather than being served nicely prepared data meals, sensor data is the equivalent of opening your mouth in front of some kind of cartoon food fire hose. Data comes in real-time, completely raw, and in such sustained volume that all you can do is keep stuffing it down.
And, as you would expect, your digestion will be compromised. You won’t benefit from that overload of raw IoT data. In fact, we’ll need to change our internal plumbing, our data pipelines, to get the full nutritional benefit of IoT sensor data.
That will require work, but if you can process the data and extract the value, that’s where the real power comes in. In fact, you can attain something like superpowers. You can have the eyesight of eagles (self-driving cars), the sonar wave perception of dolphins (for detecting objects in the water), and the night vision of owls (for surveillance cameras). If we can digest all this sensor data and use it in creative ways, the potential is enormous. But how can we adapt to handle this sort of data? Doing so demands a new infrastructure with massive storage, real-time ingestion, and multi-genre analytics.
Massive storage. More than five years ago, Stephen Brobst predicted that the volume of sensor data would soon crush the amount of unstructured data generated by social media (remember when that seemed like a lot?). Sensor data demands extreme scalability.
Real-time ingestion. The infrastructure needs to be able to ingest raw data and determine moment by moment where to land it. Some data demands immediate reaction and should move into memory. Other data is needed in the data warehouse for operational reporting and analytics. Still other data will add benefit as part of a greater aggregation using Hadoop. Instant decisions will help parse where cloud resources are appropriate versus other assets.
Multi-genre analytics. When you have data that you’ve never seen before, you need to transform data and apply different types of algorithms. Some may require advanced analytics and some may just require a standard deviation. Multi-genre analytics allows you to apply multiple analytics models in various forms so that you can quickly discern the value of the data.
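The "some may just require a standard deviation" point above can be made concrete: score each incoming sensor value against recent history, and only escalate to heavier analytics when the cheap check flags something. The readings below are invented for illustration.

```python
from statistics import mean, stdev

history = [20.1, 19.8, 20.3, 20.0, 19.9, 20.2, 20.1, 19.7]  # recent sensor values

def z_score(value, baseline):
    """How many standard deviations `value` sits from the baseline's mean."""
    mu, sigma = mean(baseline), stdev(baseline)
    return (value - mu) / sigma

print(round(z_score(20.0, history), 2))  # ordinary reading, near zero
print(round(z_score(25.0, history), 2))  # clear outlier, large score
```

A simple rule like "escalate when |z| > 3" is often enough for triage; the expensive, advanced analytics then run only on the data that earns it.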
The self-driving car is a helpful metaphor. I’ve heard estimates that each vehicle has 60,000 sensors generating terabytes of data per hour. Consider the variety of that data. Data for obstacle detection requires millisecond response and must be recognized as such if it is to be useful. A sensor on the battery to predict replacement requires aggregation to predict a trend over time and does not require real-time responsiveness. Nevertheless both types of data are being created constantly and must be directed appropriately based on the use case.
How does this work at scale? Consider video games. Real-time data is critical to everything from in-game advertising, which depends on near-instant delivery of the right ad at a contextually appropriate moment, to recommendations and game features that are critical to the user experience and highly specific to moments within the game. At the same time, analyzing patterns at scale is critical to understanding and controlling churn and appeal. This is a lot of data to parse on the fly in order to operate effectively.
From a data perspective, we’re going to need a new digestive system if we are to make the most of the data coming in from the IoT. We’ll need vision and creativity as well. It’s an exciting time to be in analytics.