


Guest blog post by Ajit Jaokar

Introduction

This two-part blog is based on my forthcoming book: Data Science for Internet of Things.

It is also the basis for the course I teach, the Data Science for Internet of Things Course. I will be syndicating sections of the book on the Data Science Central blog, and I welcome your comments. Please email me at ajit.jaokar at futuretext.com - also email me for a pdf version if you are interested in joining the course.

 

Here, we start off with the question: at which points could you apply analytics to the IoT ecosystem, and what are the implications? We then extend this to a broader question: could we formulate a methodology to solve Data Science for IoT problems? I have illustrated my thinking through a number of companies/examples. I personally work with an Open Source strategy (based on R, Spark and Python), but the methodology applies to any implementation. We are currently working with a range of implementations including AWS, Azure, GE Predix, Nvidia etc. Thus, the discussion is vendor agnostic.

I also mention some trends I am following, such as Apache NiFi.

The Internet of Things and the flow of Data

As we move towards a world of 50 billion connected devices, Data Science for IoT (IoT analytics) helps to create new services and business models. IoT analytics is the application of data science models to IoT datasets. The flow of data starts with the deployment of sensors. Sensors detect events or changes in quantities and provide a corresponding output in the form of a signal. Historically, sensors have been used in domains such as manufacturing. Now their deployment is becoming pervasive through ordinary objects like wearables. Sensors are also being deployed through new devices like robots and self-driving cars. This widespread deployment of sensors has led to the Internet of Things.

 

Features of a typical wireless sensor node are described in this paper (wireless embedded sensor architecture). Typically, data arising from sensors is in time series format and is often geotagged. This means there are two forms of analytics for IoT: time series and spatial analytics. Time series analytics typically leads to insights like anomaly detection, so classifiers are commonly used on IoT data to flag anomalous readings. But by looking at historical trends, streaming, and combining data from multiple events (sensor fusion), we can get new insights. And more use cases for IoT keep emerging, such as Augmented Reality (think Pokemon Go + IoT).
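To make the anomaly-detection idea concrete, here is a minimal sketch using scikit-learn's IsolationForest on a windowed sensor time series. The simulated readings, window length and contamination rate are illustrative assumptions, not a prescription from the book.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Simulated sensor readings (stand-in for a real geotagged time series)
rng = np.random.default_rng(42)
readings = rng.normal(loc=20.0, scale=0.5, size=2000)   # e.g. temperature in C
readings[1500:1510] += 8.0                              # inject a fault-like spike

# Turn the series into fixed-size windows so the model sees local context
window = 20
X = np.lib.stride_tricks.sliding_window_view(readings, window)

clf = IsolationForest(contamination=0.01, random_state=0).fit(X)
labels = clf.predict(X)                                  # -1 = anomalous window

anomalous_windows = np.where(labels == -1)[0]
print("anomalous windows start at indices:", anomalous_windows[:10])
```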

 

Meanwhile, sensors themselves continue to evolve. Sensors have shrunk due to technologies like MEMS, and their communications protocols have improved through new technologies like LoRa. These protocols lead to new forms of communication for IoT such as Device to Device, Device to Server, or Server to Server. Thus, whichever way we look at it, IoT devices create a large amount of data. Typically, the goal of IoT analytics is to analyse the data as close to the event as possible. We see this requirement in many 'Smart city' type applications such as transportation, energy grids, utilities like water, street lighting, parking etc.

IoT data transformation techniques

 

Once data is captured through the sensor, there are a few analytics techniques that can be applied to the data. Some of these are unique to IoT. For instance, not all data may be sent to the Cloud/Lake. We could perform temporal or spatial analysis. Considering the volume of data, some may be discarded at source or summarized at the Edge. Data could also be aggregated, and aggregate analytics could be applied to the IoT data at the 'Edge'. For example, if you want to detect failure of a component, you could look for spikes in values for that component over a recent span (thereby potentially predicting failure). You could also correlate data across multiple IoT streams. Typically, in stream processing, we are trying to find out what happened now (as opposed to what happened in the past), so the response should be near real-time. Sensor data could also be 'cleaned' at the Edge: missing values could be filled in (imputing values), sensor data could be combined to infer an event (Complex Event Processing), data could be normalized across sensors, time and devices, different data formats and multiple communication protocols could be handled, and thresholds could be managed.
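As an illustration of this kind of Edge-side preparation, the sketch below imputes missing readings, applies a simple threshold check, and summarizes data before it is forwarded. The column names, the 30-degree threshold and the one-minute summary interval are assumptions for the example.

```python
import pandas as pd

# Stand-in for a raw sensor feed arriving at an edge gateway
raw = pd.DataFrame(
    {"temperature": [20.1, None, 20.4, 35.2, None, 20.3]},
    index=pd.date_range("2016-01-01 00:00", periods=6, freq="10s"),
)

# 1. Impute missing values by interpolating between neighbouring readings
clean = raw.interpolate(method="time")

# 2. Flag readings that breach a managed threshold (assumed at 30 degrees)
clean["over_threshold"] = clean["temperature"] > 30.0

# 3. Summarize at the edge: only one-minute aggregates are sent upstream
summary = clean["temperature"].resample("1min").agg(["mean", "max", "count"])
print(summary)
```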

 

 

Applying IoT Analytics to the Flow of Data

Overview

 

Here, we address the possible locations and types of analytics that could be applied to IoT datasets.

Some initial notes:

  • IoT data arises from sensors and ultimately resides in the Cloud.
  • We use the concept of a 'Data Lake' to refer to a repository of Data.
  • We consider four possible avenues for IoT analytics: 'Analytics at the Edge', 'Streaming Analytics', NoSQL databases and 'IoT analytics at the Data Lake'.
  • For Streaming analytics, we could build an offline model and apply it to a stream.
  • If we consider cameras as sensors, Deep Learning techniques could be applied to image and video datasets (for example CNNs).
  • Even when IoT data volumes are high, not all scenarios need data to be distributed. It is very much possible to run analytics on a single node using a non-distributed architecture in Python or R.
  • Feedback mechanisms are a key part of IoT analytics. Feedback is part of multiple IoT analytics modalities, e.g. Edge, Streaming etc.
  • CEP (Complex Event Processing) can be applied at multiple points, as we see in the diagram.

 

We now describe various analytics techniques which could be applied to IoT datasets.

Complex event processing

 

Complex Event Processing (CEP) can be used at multiple points for IoT analytics (e.g. Edge, Stream, Cloud etc.).

 

In general, event processing is a method of tracking and analyzing streams of data and deriving a conclusion from them. Complex Event Processing, or CEP, is event processing that combines data from multiple sources to infer events or patterns that suggest more complicated circumstances. The goal of Complex Event Processing is to identify meaningful events (such as opportunities or threats) and respond to them as quickly as possible.

 

In CEP, the data is in motion. In contrast, a traditional query (e.g. against an RDBMS) acts on static data. Thus, CEP is mainly about stream processing, but the algorithms underlying CEP can also be applied to historical data.

 

CEP relies on a number of techniques applied to events, including pattern detection, abstraction, filtering, aggregation and transformation. CEP algorithms model event hierarchies and detect relationships (such as causality, membership or timing) between events. They create an abstraction of event-driven processes. Thus, typically, CEP engines act as event correlation engines: they analyze a mass of events, pinpoint the most significant ones, and trigger actions.

 

Most CEP solutions and concepts can be classified into two main categories: aggregation-oriented CEP and detection-oriented CEP. An aggregation-oriented CEP solution is focused on executing on-line algorithms in response to event data entering the system – for example, continuously calculating an average based on data in the inbound events. Detection-oriented CEP is focused on detecting combinations of events, called event patterns or situations – for example, detecting a situation by looking for a specific sequence of events. For IoT, CEP techniques are concerned with deriving a higher-order value / abstraction from discrete sensor readings.
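As a small illustration of the detection-oriented flavour, the sketch below watches a stream of discrete sensor events and fires when a specific sequence occurs inside a time window. The event names, the window length and the "pump fault" interpretation are assumptions made for the example.

```python
from collections import deque

# Hypothetical situation: a pressure spike followed by a temperature spike
# and then a vibration alarm, all within 60 seconds, suggests a pump fault.
PATTERN = ["PRESSURE_SPIKE", "TEMP_SPIKE", "VIBRATION_ALARM"]
WINDOW_SECONDS = 60

recent = deque()  # (timestamp, event_type) pairs inside the current window

def on_event(timestamp, event_type):
    """Feed each incoming event; return True when the situation is detected."""
    recent.append((timestamp, event_type))
    # Drop events that have fallen out of the time window
    while recent and timestamp - recent[0][0] > WINDOW_SECONDS:
        recent.popleft()
    # Check whether PATTERN occurs as a subsequence of the windowed events
    it = iter(evt for _, evt in recent)
    return all(step in it for step in PATTERN)

# Example stream
events = [(0, "TEMP_SPIKE"), (10, "PRESSURE_SPIKE"),
          (25, "TEMP_SPIKE"), (40, "VIBRATION_ALARM")]
for ts, evt in events:
    if on_event(ts, evt):
        print(f"Pump-fault situation detected at t={ts}s")
```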

 

CEP uses techniques like Bayesian networks, neural networks, Dempster-Shafer methods, Kalman filters etc. Some more background is at Developing a complex event processing architecture for IoT.

 

Streaming analytics

Real-time systems differ in the way they perform analytics. Specifically, real-time systems perform analytics on short time windows of data streams. Hence, the scope of real-time analytics is a 'window', which typically comprises the last few time slots. Making predictions on real-time data streams involves building an offline model and applying it to the stream. Models incorporate one or more machine learning algorithms which are trained using the training data. Models are first built offline based on historical data (spam, credit card fraud etc.). Once built, the model can be validated against a real-time system to find deviations in the real-time stream data. Deviations beyond a certain threshold are tagged as anomalies.
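A minimal sketch of that "build offline, score online" pattern is shown below, using a scikit-learn classifier trained on historical labelled windows and then applied to each incoming window. The window features, the stand-in labels and the alerting threshold are assumptions for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# --- Offline phase: train on historical, labelled feature windows ---
# Each row: [mean, std, max] of a sensor window; label 1 = known anomaly
X_hist = rng.normal(size=(500, 3))
y_hist = (X_hist[:, 2] > 1.5).astype(int)        # stand-in labels
model = LogisticRegression().fit(X_hist, y_hist)

THRESHOLD = 0.8                                   # assumed alerting threshold

# --- Online phase: score each window as it arrives from the stream ---
def score_window(window_values):
    feats = np.array([[window_values.mean(), window_values.std(), window_values.max()]])
    p_anomaly = model.predict_proba(feats)[0, 1]
    return p_anomaly > THRESHOLD, p_anomaly

is_anomaly, prob = score_window(rng.normal(loc=2.0, size=50))
print(f"anomaly={is_anomaly}, probability={prob:.2f}")
```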

                                                                       

IoT ecosystems can create many logs depending on the status of IoT devices. By collecting these logs for a period of time and analyzing the sequence of event patterns, a model to predict a fault can be built, including the probability of failure for the sequence. This model is then applied to the stream (online). A technique like the Hidden Markov Model can be used for detecting failure patterns based on the observed sequence. Complex Event Processing can be used to combine events over a time frame (e.g. the last one minute) and correlate patterns to detect the failure pattern.
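To show how an HMM could score an observed event sequence, here is a small forward-algorithm sketch in NumPy. The two hidden states ('healthy', 'failing'), the transition/emission probabilities and the log events are invented for the example; in practice these parameters would be estimated from the collected logs.

```python
import numpy as np

states = ["healthy", "failing"]
observations = ["OK", "WARN", "ERROR"]          # discrete log events

# Hypothetical parameters (in practice, estimated from historical logs)
start_p = np.array([0.95, 0.05])
trans_p = np.array([[0.98, 0.02],               # healthy -> healthy/failing
                    [0.10, 0.90]])              # failing -> healthy/failing
emit_p = np.array([[0.85, 0.12, 0.03],          # emissions from 'healthy'
                   [0.20, 0.40, 0.40]])         # emissions from 'failing'

def forward(obs_sequence):
    """Return the per-step posterior probability of the 'failing' state."""
    alpha = start_p * emit_p[:, obs_sequence[0]]
    posteriors = [alpha / alpha.sum()]
    for obs in obs_sequence[1:]:
        alpha = (alpha @ trans_p) * emit_p[:, obs]
        posteriors.append(alpha / alpha.sum())
    return np.array(posteriors)[:, 1]           # column 1 = P(failing)

seq = [observations.index(e) for e in ["OK", "WARN", "WARN", "ERROR", "ERROR"]]
p_failing = forward(seq)
print(np.round(p_failing, 3))                    # rising values suggest a failure pattern
```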

Typically, streaming systems could be implemented with Kafka and Spark.
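For context, a bare-bones consumer reading sensor events from a Kafka topic might look like the sketch below (using the kafka-python client). The broker address, topic name, record fields and threshold are assumptions; a production pipeline would more likely hand the stream to Spark Streaming or a similar engine rather than a plain Python loop.

```python
import json
from kafka import KafkaConsumer

# Assumed broker and topic names for illustration
consumer = KafkaConsumer(
    "sensor-events",
    bootstrap_servers="localhost:9092",
    value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
)

for message in consumer:
    event = message.value                      # e.g. {"device": "pump-1", "temp": 71.2}
    if event.get("temp", 0) > 70.0:            # assumed threshold check
        print("High temperature on", event.get("device"))
```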

 

Some interesting links on streaming I am tracking:

 

Newer versions of Kafka designed for IoT use cases

Data Science Central: stream processing and streaming analytics – how it works

IoT 101: everything you need to know to start your IoT project – Part One

IoT 101: everything you need to know to start your IoT project – Part Two

 

 

Part two will consider more technologies, including Edge processing and Deep Learning.

If you want to be a part of my course, please see the testimonials at the Data Science for Internet of Things Course.



Using Mattermark's list of the Top 100 IoT startups in 2015 (ranked by funding, published in Forbes on October 25, 2015), IPqwery has looked behind the analytics to reveal the nature of the intellectual property (IP) behind these innovative companies. Our infographic presents a general summary of the IP within the group as a whole, and illustrates the trailing five-year trends in IP filing activity.

The vast majority of these companies have both patents (84%) and trademarks (85%) in their IP portfolio. There was a sharp and mostly linear increase in filings for both patents and trademarks, from 2011 through to 2014, with a slight decrease showing in 2015. 2016 looks to be on pace to meet or exceed last year’s filing activity as well. All this is consistent with the ever-expanding number of companies operating within the IoT ecosystem.

A closer look at the top 5 patent class descriptions amongst all patents granted or published yields close results between these classes. This is not surprising given the similar technologies behind many IoT products, such that their patents will incorporate the same or similar descriptions within their claims. Comparatively, there is a wider variance in the Top 5 Trademark classes used, but this speaks more to the wider variety of marketing and branding potential than to the underlying IoT technologies. 

What's striking in Mattermark's original analysis of the Top 100 IoT Startups is that 30% of all funding raised by this group as a whole has been concentrated in only the top 5 companies: Jawbone, Genband, Silver Spring Networks, View Glass and Jasper Technologies. IPqwery's analysis further reveals that only two of these companies (Silver Spring and Jasper) have Top 5 inventors within the group. In fact, Jasper has 2 of the Top 5 inventors. The other top inventors come from Hello and Kineto Wireless.

The broad-strokes approach of IPqwery's infographic doesn't directly illustrate the IP held by any one company, but it certainly hints at where this type of analysis could be very useful. Where Mattermark sought to pinpoint the greatest growth potential (momentum) within the group by looking at the overall IoT funding environment, IPqwery's analysis of the general IP trends within this group sheds additional light on the matter, and perhaps raises some additional questions. Wouldn't potential correlations between IP and funding also be a useful measure of momentum across metrics, and thus shouldn't IP data be more generally integrated into business growth analytics from the get-go?

Here's a link to a new infographic by IPqwery summarizing the intellectual property held by the Top 100 IoT Startups (2015). 

 


Data analytics alone cannot deliver effective automation solutions for the Industrial IoT

By Akeel Al-Attar. This article first appeared here

Automated analytics (which can also be referred to as machine learning, deep learning etc.) is currently attracting the lion's share of interest from investors, consultants, journalists and executives looking at technologies that can deliver the business opportunities afforded by the Internet of Things. The reason for this surge in interest is that the IoT generates huge volumes of data from which analytics can discover patterns, anomalies and insights, which can then be used to automate, improve and control business operations.


One of the main attractions of automated analytics appears to be the perception that it represents an automated process that is able to learn from data without the need to program any rules. Furthermore, it is perceived that the IoT will allow organisations to apply analytics to data generated by any physical asset or business process, and thereafter to use automated analytics to monitor asset performance, detect anomalies and generate problem-resolution / trouble-shooting advice; all without any programming of rules!

In reality, automated analytics is a powerful technology for turning data into actionable insight / knowledge, and thereby represents a key enabling technology for automation in the Industrial IoT. However, automated analytics alone cannot deliver complete solutions, for the following reasons:

i- In order for analytics to learn effectively, it needs data that spans the spectrum of normal, sub-normal and anomalous asset/process behaviour. Such data can become available relatively quickly in a scenario where there are tens or hundreds of thousands of similar assets (central heating boilers, mobile phones etc.). However, this is not the case for more complex equipment / plants / processes, where the volume of available fault or anomalous-behaviour data is simply not large enough to facilitate effective analytics learning/modelling. As a result, any generated analytics model will be very restricted in scope and will flag a large number of anomalies for operating conditions that simply do not exist in the data.

ii- By focussing on data analytics alone, we are ignoring the most important asset of any organisation; namely, the expertise of its people in operating plants / processes. This expertise covers condition / risk assessment, planning, configuration, diagnostics, trouble-shooting and other skills that can involve decision-making tasks. Automating decision making and applying it to streaming real-time IoT data offers huge business benefits and is very complementary to automated analytics, in that it addresses the very areas in point i above where data coverage is incomplete but human expertise exists.

Capturing expertise into an automated decision-making system does require the programming of rules and decisions, but that need not be a lengthy or cumbersome process in a modern rules/decision automation technology such as XpertRule. Decision-making tasks can be represented in a graphical way that a subject matter expert can easily author and maintain without the involvement of a programmer. This can be done using graphical, easy-to-edit decision flows, decision trees, decision tables and rules. From my experience with this approach, a substantial decision-making task of tens of decision trees can be captured and deployed within a few weeks.

Given the complementary nature of automated analytics and automated decisions, I would recommend the use of symbolic learning techniques for data analytics. Symbolic analytics generates rules/tree structures from data which are interpretable and understandable to the domain experts. Whilst rules/tree analytics models are marginally less accurate than deep learning or other 'black-box' models, the transparency of symbolic data models offers a number of advantages (a minimal sketch of an interpretable tree follows the list below):

i- The analytics models can be validated by the domain experts
ii- The domain experts can add additional decision knowledge to the analytics models
iii- The transparency of the data models gives the experts insights into the root causes of problems and highlights opportunities for performance improvement.
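As a hedged illustration of the symbolic approach (not the XpertRule product itself), the sketch below trains a small scikit-learn decision tree on sensor-style features and prints it as readable rules that a domain expert could review or extend. The feature names, data and label rule are invented for the example.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(1)

# Invented process features: [feed_rate, mill_speed, inlet_temp]
X = rng.uniform(low=[0.5, 100, 20], high=[2.0, 400, 90], size=(500, 3))
# Invented label: "oversize particles" when feed is high and mill speed is low
y = ((X[:, 0] > 1.5) & (X[:, 1] < 200)).astype(int)

tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

# The learned structure is transparent: experts can validate or amend it
print(export_text(tree, feature_names=["feed_rate", "mill_speed", "inlet_temp"]))
```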

Combining automated knowledge from data analytics with automated decisions from domain experts can deliver a paradigm shift in the way organisations use the IoT to manage their assets / processes. It allows organisations to deploy their best-practice expertise 24/7, in real time, throughout the organisation, and to rapidly turn newly acquired data into new and improved knowledge.

Below are examples of the decision and analytics knowledge from an industrial IoT solution that we developed for a major manufacturer of powder-processing mills. The solution monitors the performance of the mills to diagnose problems and to detect anomalous behaviour:

The fault diagnosis tree below is part of the knowledge captured from the subject matter experts within the company.

Figure: Fault diagnosis tree



The tree below is generated by automated data analytics and relates the output particle size to other process parameters and environmental variables. The tree is one of many analytics models used to monitor anomalous behaviour of the process.

Figure: Automated data analytics tree



The above example demonstrates both the complementary nature of rules and analytics automation and the interpretability of symbolic analytics. In my next posting I will cover the subject of the rapid capture of decision making expertise using decision structuring and the induction of decision trees from decision examples provided by subject matter experts.


A smart, highly optimized distributed neural network, based on Intel Edison "Receptive" Nodes

Training 'complex multi-layer' neural networks is referred to as deep learning, as these multi-layer neural architectures interpose many neural processing layers between the input data and the predicted output results – hence the use of the word 'deep' in the deep-learning catchphrase.

While the training procedure for a large-scale network is computationally expensive, evaluating the resulting trained neural network is not, which explains why trained networks can be extremely valuable: they have the ability to very quickly perform complex, real-world pattern recognition tasks on a variety of low-power devices.

These trained networks can perform complex pattern recognition tasks for real-world applications ranging from real-time anomaly detection in Industrial IoT to energy performance optimization in complex industrial systems. These high-value, high-accuracy trained models (sometimes better than human at recognition) can be deployed nearly everywhere, which explains the recent resurgence in machine learning, in particular in deep-learning neural networks.

These architectures can be efficiently implemented on Intel Edison modules to process information quickly and economically, especially in Industrial IoT applications.

Our architectural model is based on a proprietary algorithm, called Hierarchical LSTM, able to capture and learn the internal dynamics of physical systems simply by observing the evolution of related time series.

To train the system efficiently, we implemented a greedy, layer-based parameter optimization approach, so each device can train one layer at a time and send the encoded features to the upper-level device, which learns higher levels of abstraction of the signal dynamics.
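The authors' Hierarchical LSTM algorithm is proprietary, but the general idea of greedy, layer-wise training of sequence autoencoders can be sketched with Keras as below: a first LSTM autoencoder learns to compress raw windows, and its codes are then windowed again and fed to a second autoencoder on an upper-level device. All layer sizes, window lengths and data here are assumptions for illustration, not the actual algorithm.

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

def build_lstm_autoencoder(timesteps, n_features, latent_dim):
    """One layer of the stack: sequence in -> code -> reconstructed sequence."""
    inputs = keras.Input(shape=(timesteps, n_features))
    encoded = layers.LSTM(latent_dim)(inputs)
    decoded = layers.RepeatVector(timesteps)(encoded)
    decoded = layers.LSTM(n_features, return_sequences=True)(decoded)
    autoencoder = keras.Model(inputs, decoded)
    encoder = keras.Model(inputs, encoded)
    autoencoder.compile(optimizer="adam", loss="mse")
    return autoencoder, encoder

def windows(arr, size):
    """Split a (time, features) array into non-overlapping windows."""
    n = (len(arr) // size) * size
    return arr[:n].reshape(-1, size, arr.shape[-1])

# Stand-in for a multivariate sensor time series observed at the lowest layer
raw = np.random.default_rng(0).normal(size=(10000, 3)).astype("float32")

# Layer 1: trained on the device closest to the sensors
x1 = windows(raw, 32)
ae1, enc1 = build_lstm_autoencoder(32, 3, 16)
ae1.fit(x1, x1, epochs=2, batch_size=64, verbose=0)

# Only the compact codes are sent upstream, where layer 2 is trained on them
codes = enc1.predict(x1)                    # (n_windows, 16)
x2 = windows(codes, 8)                      # sequences of codes for the upper layer
ae2, enc2 = build_lstm_autoencoder(8, 16, 8)
ae2.fit(x2, x2, epochs=2, batch_size=64, verbose=0)
```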

Using Intel Edison modules as the layers' core computing units, we can sustain higher sampling rates and frequent retraining close to the system we are observing, without the need for a complex cloud architecture, sending only a small amount of encoded data to the cloud.


Originally Posted and Written by: Michelle Canaan, John Lucker, & Bram Spector

Connectivity is changing the way people engage with their cars, homes, and bodies—and insurers are looking to keep pace. Even at an early stage, IoT technology may reshape the way insurance companies assess, price, and limit risks, with a wide range of potential implications for the industry.

Insurers’ path to growth: Embrace the future

In 1997, Progressive Insurance pioneered the use of the Internet to purchase auto insurance online, in real time.1 In a conservative industry, Progressive’s innovative approach broke several long-established trade-offs, shaking up traditional distribution channels and empowering consumers with price transparency.

This experiment in distribution ended up transforming the industry as a whole. Online sales quickly forced insurers to evolve their customer segmentation capabilities and, eventually, to refine pricing. These modifications propelled growth by allowing insurers to serve previously uninsurable market segments. And as segmentation became table stakes for carriers, a new cottage industry of tools, such as online rate comparison capabilities, emerged to capture customer attention. Insurers fought to maintain their competitive edge through innovation, but widespread transparency in product pricing over time created greater price competition and ultimately led to product commoditization. The tools and techniques that put the insurer in the driver’s seat slowly tipped the balance of power to the customer.

This case study of insurance innovation and its unintended consequences may be a precursor to the next generation of digital connectivity in the industry. Today, the availability of unlimited new sources of data that can be exploited in real time is radically altering how consumers and businesses interact. And the suite of technologies known as the Internet of Things (IoT) is accelerating the experimentation of Progressive and other financial services companies. With the IoT’s exponential growth, the ways in which citizens engage with their cars, homes, and bodies are getting smarter each day, and they expect the businesses they patronize to keep up with this evolution. Insurance, an industry generally recognized for its conservatism, is no exception.

IoT technology may still be in its infancy, but its potential to reshape the way insurers assess, price, and limit risks is already quite promising. Nevertheless, since innovation inevitably generates unintended possibilities and consequences, insurers will need to examine strategies from all angles in the earliest planning stages.

To better understand potential IoT applications in insurance, the Deloitte Center for Financial Services (DCFS), in conjunction with Wikistrat, performed a crowdsourcing simulation to explore the technology’s implications for the future of the financial services industry. Researchers probed participants (13 doctorate holders, 24 cyber and tech experts, 20 finance experts, and 6 entrepreneurs) from 20 countries and asked them to imagine how IoT technology might be applied in a financial services context. The results (figure 1) are not an exhaustive compilation of scenarios already in play or forthcoming but, rather, an illustration of several examples of how these analysts believe the IoT may reshape the industry.2

Figure 1

CONNECTIVITY AND OPPORTUNITY

Even this small sample of possible IoT applications shows how increased connectivity can generate tremendous new opportunities for insurers, beyond personalizing premium rates. Indeed, if harnessed effectively, IoT technology could potentially boost the industry’s traditionally low organic growth rates by creating new types of coverage opportunities. It offers carriers a chance to break free from the product commoditization trend that has left many personal and commercial lines to compete primarily on price rather than coverage differentiation or customer service.

For example, an insurer might use IoT technology to directly augment profitability by transforming the income statement’s loss component. IoT-based data, carefully gathered and analyzed, might help insurers evolve from a defensive posture—spreading risk among policyholders and compensating them for losses—to an offensive posture: helping policyholders prevent losses and insurers avoid claims in the first place. And by avoiding claims, insurers could not only reap the rewards of increased profitability, but also reduce premiums and aim to improve customer retention rates. Several examples, both speculative and real-life, include:

  • Sensors embedded in commercial infrastructure can monitor safety breaches such as smoke, mold, or toxic fumes, allowing for adjustments to the environment to head off or at least mitigate a potentially hazardous event.
  • Wearable sensors could monitor employee movements in high-risk areas and transmit data to employers in real time to warn the wearer of potential danger as well as decrease fraud related to workplace accidents.
  • Smart home sensors could detect moisture in a wall from pipe leakage and alert a homeowner to the issue prior to the pipe bursting. This might save the insurer from a large claim and the homeowner from both considerable inconvenience and losing irreplaceable valuables. The same can be said for placing IoT sensors in business properties and commercial machinery, mitigating property damage and injuries to workers and customers, as well as business interruption losses.
  • Socks and shoes that can alert diabetics early on to potential foot ulcers, odd joint angles, excessive pressure, and how well blood is pumping through capillaries are now entering the market, helping to avoid costly medical and disability claims as well as potentially life-altering amputations.3

Beyond minimizing losses, IoT applications could also potentially help insurers resolve the dilemma with which many have long wrestled: how to improve the customer experience, and therefore loyalty and retention, while still satisfying the unrelenting market demand for lower pricing. Until now, insurers have generally struggled to cultivate strong client relationships, both personal and commercial, given the infrequency of interactions throughout the insurance life cycle from policy sale to renewal—and the fact that most of those interactions entail unpleasant circumstances: either deductible payments or, worse, claims. This dynamic is even more pronounced in the independent agency model, in which the intermediary, not the carrier, usually dominates the relationship with the client.

The emerging technology intrinsic to the IoT that can potentially monitor and measure each insured’s behavioral and property footprint across an array of activities could turn out to be an insurer’s holy grail, as IoT applications can offer tangible benefits for value-conscious consumers while allowing carriers to remain connected to their policyholders’ everyday lives. While currently, people likely want as few associations with their insurers as possible, the IoT can potentially make insurers a desirable point of contact. The IoT’s true staying power will be manifested in the technology’s ability to create value for both the insurer and the policyholder, thereby strengthening their bond. And while the frequency of engagement shifts to the carrier, the independent agency channel will still likely remain relevant through the traditional client touchpoints.

By harnessing continuously streaming “quantified self” data, using advanced sensor connectivity devices, insurers could theoretically capture a vast variety of personal data and use it to analyze a policyholder’s movement, environment, location, health, and psychological and physical state. This could provide innovative opportunities for insurers to better understand, serve, and connect with policyholders—as well as insulate companies against client attrition to lower-priced competitors. Indeed, if an insurer can demonstrate how repurposing data collected for insurance considerations might help a carrier offer valuable ancillary non-insurance services, customers may be more likely to opt in to share further data, more closely binding insurer and customer.

Leveraging IoT technologies may also have the peripheral advantage of resuscitating the industry’s brand, making insurance more enticing to the relatively small pool of skilled professionals needed to put these strategies in play. And such a shift would be welcome, considering that Deloitte’s Talent in Insurance Survey revealed that the tech-savvy Millennial generation generally considers a career in the insurance industry “boring.”4 Such a reputational challenge clearly creates a daunting obstacle for insurance executives and HR professionals, particularly given the dearth of employees with necessary skill sets to successfully enable and systematize IoT strategies, set against a backdrop of intense competition from many other industries. Implementing cutting-edge IoT strategies could boost the “hip factor” that the industry currently lacks.

With change comes challenges

While most stakeholders might see attractive possibilities in the opportunity for behavior monitoring across the insurance ecosystem, inevitable hurdles stand in the way of wholesale adoption. How insurers surmount each potential barrier is central to successful evolution.

For instance, the industry’s historically conservative approach to innovation may impede the speed and flexibility required for carriers to implement enhanced consumer strategies based on IoT technology. Execution may require more nimble data management and data warehousing than currently in place, as engineers will need to design ways to quickly aggregate, analyze, and act upon disparate data streams. To achieve this speed, executives may need to spearhead adjustments to corporate culture grounded in more centralized location of data control. Capabilities to discern which data are truly predictive versus just noise in the system are also critical. Therefore, along with standardized formats for IoT technology,5 insurers may see an increasing need for data scientists to mine, organize, and make sense of mountains of raw information.

Perhaps most importantly, insurers would need to overcome the privacy concerns that could hinder consumers’ willingness to make available the data on which the IoT runs. Further, increased volume, velocity, and variety of data propagate a heightened need for appropriate security oversight and controls.

For insurers, efforts to capitalize on IoT technology may also require patience and long-term investments. Indeed, while bolstering market share, such efforts could put a short-term squeeze on revenues and profitability. To convince wary customers to opt in to monitoring programs, insurers may need to offer discounted pricing, at least at the start, on top of investments to finance infrastructure and staff supporting the new strategic initiative. This has essentially been the entry strategy for auto carriers in the usage-based insurance market, with discounts provided to convince drivers to allow their performance behind the wheel to be monitored, whether by a device installed in their vehicles or an application on their mobile device.

Results from the Wikistrat crowdsourcing simulation reveal several other IoT-related challenges that respondents put forward. (See figure 2.)6

Figure 2

Each scenario implies some measure of material impact to the insurance industry. In fact, together they suggest that the same technology that could potentially help improve loss ratios and strengthen policyholder bonds over the long haul may also make some of the most traditionally lucrative insurance lines obsolete.

For example, if embedding sensors in cars and homes to prevent hazardous incidents increasingly becomes the norm, and these sensors are perfected to the point where accidents are drastically reduced, this development may minimize or eliminate the need for personal auto and home liability coverage, given the lower frequency and severity of losses that result from such monitoring. Insurers need to stay ahead of this, perhaps even eventually shifting books of business from personal to product liability as claims evolve from human error to product failure.

Examining the IoT through an insurance lens

Analyzing the intrinsic value of adopting an IoT strategy is fundamental in the development of a business plan, as executives must carefully consider each of the various dimensions to assess the potential value and imminent challenges associated with every stage of operationalization. Using Deloitte’s Information Value Loop can help capture the stages (create, communicate, aggregate, analyze, act) through which information passes in order to create value.7

The value loop framework is designed to evaluate the components of IoT implementation as well as potential bottlenecks in the process, by capturing the series and sequence of activities by which organizations create value from information (figure 3).

Figure 3

To complete the loop and create value, information passes through the value loop’s stages, each enabled by specific technologies. An act is monitored by a sensor that creates information. That information passes through a network so that it can be communicated, and standards—be they technical, legal, regulatory, or social—allow that information to be aggregated across time and space. Augmented intelligence is a generic term meant to capture all manner of analytical support, collectively used to analyze information. The loop is completed via augmented behavior technologies that either enable automated, autonomous action or shape human decisions in a manner leading to improved action.8

For a look at the value loop through an insurance lens, we will examine an IoT capability already at play in the industry: automobile telematics. By circumnavigating the stages of the framework, we can scrutinize the efficacy of how monitoring driving behavior is poised to eventually transform the auto insurance market with a vast infusion of value to both consumers and insurers.

Auto insurance and the value loop

Telematic sensors in the vehicle monitor an individual’s driving to create personalized data collection. The connected car, via in-vehicle telecommunication sensors, has been available in some form for over a decade.9 The key value for insurers is that sensors can closely monitor individual driving behavior, which directly corresponds to risk, for more accuracy in underwriting and pricing.

Originally, sensor manufacturers made devices available to install on vehicles; today, some carmakers are already integrating sensors into showroom models, available to drivers—and, potentially, their insurers—via smartphone apps. The sensors collect data (figure 4) which, if properly analyzed, might more accurately predict the unique level of risk associated with a specific individual’s driving and behavior. Once the data is created, an IoT-based system could quantify and transform it into “personalized” pricing.

Figure 4

Sensors’ increasing availability, affordability, and ease of use break what could potentially be a bottleneck at this stage of the Information Value Loop for other IoT capabilities in their early stages.

IoT technology aggregates and communicates information to the carrier to be evaluated. To identify potential correlations and create predictive models that produce reliable underwriting and pricing decisions, auto insurers need massive volumes of statistically and actuarially credible telematics data.

In the hierarchy of auto telematics monitoring, large insurers currently lead the pack when it comes to usage-based insurance market share, given the amount of data they have already accumulated or might potentially amass through their substantial client bases. In contrast, small and midsized insurers—with less comprehensive proprietary sources—will likely need more time to collect sufficient data on their own.

To break this bottleneck, smaller players could pool their telematics data with peers either independently or through a third-party vendor to create and share the broad insights necessary to allow a more level playing field throughout the industry.

Insurers analyze data and use it to encourage drivers to act by improving driver behavior/loss costs. By analyzing the collected data, insurers can now replace or augment proxy variables (age, car type, driving violations, education, gender, and credit score) correlated with the likelihood of having a loss with those factors directly contributing to the probability of loss for an individual driver (braking, acceleration, cornering, and average speed, as figure 4 shows). This is an inherently more equitable method to structure premiums: Rather than paying for something that might be true about a risk, a customer pays for what is true based on his own driving performance.

But even armed with all the data necessary to improve underwriting for “personalized” pricing, insurers need a way to convince millions of reluctant customers to opt in. To date, insurers have used the incentive of potential premium discounts to engage consumers in auto telematics monitoring.10 However, this model is not necessarily attractive enough to convince the majority of drivers to relinquish a measure of privacy and agree to usage-based insurance. It is also unsustainable for insurers that will eventually have to charge rates actually based on risk assessment rather than marketing initiatives.

Substantiating the point about consumer adoption is a recent survey by the Deloitte Center for Financial Services of 2,193 respondents representing a wide variety of demographic groups, aiming to understand consumer interest in mobile technology in financial services delivery, including the use of auto telematics monitoring. The survey identified three distinct groups among respondents when asked whether they would agree to allow an insurer to track their driving experience, if it meant they would be eligible for premium discounts based on their performance (figure 5).11 While one-quarter of respondents were amenable to being monitored, just as many said they would require a substantial discount to make it worth their while (figure 5), and nearly half would not consent.

Figure 5

While the Deloitte survey was prospective (asking how many respondents would be willing to have their driving monitored telematically), actual recruits have been proven to be difficult to bring on board. Indeed, a 2015 Lexis-Nexis study on the consumer market for telematics showed that usage-based insurance enrollment has remained at only 5 percent of households from 2014 to 2015 (figure 6).12

Figure 6

Both of these survey results suggest that premium discounts alone have not and likely will not induce many consumers to opt in to telematics monitoring going forward, and would likely be an unsustainable model for insurers to pursue. The good news: Research suggests that, while protective of their personal information, most consumers are willing to trade access to that data for valuable services from a reputable brand.13 Therefore, insurers will likely have to differentiate their telematics-based product offerings beyond any initial early-adopter premium savings by offering value-added services to encourage uptake, as well as to protect market share from other players moving into the telematics space.

In other words, insurers—by offering mutually beneficial, ongoing value-added services—can use IoT-based data to become an integral daily influence for connected policyholders. Companies can incentivize consumers to opt in by offering real-time, behavior-related services, such as individualized marketing and advertising, travel recommendations based on location, alerts about potentially hazardous road conditions or traffic, and even diagnostics and alerts about a vehicle’s potential issues (figure 7).14 More broadly, insurers could aim to serve as trusted advisers to help drivers realize the benefits of tomorrow’s connected car.15

Many IoT applications offer real value to both insurers and policyholders: Consider GPS-enabled geo-fencing, which can monitor and send alerts about driving behavior of teens or elderly parents. For example, Ford’s MyKey technology includes tools such as letting parents limit top speeds, mute the radio until seat belts are buckled, and keep the radio at a certain volume while the vehicle is moving.16 Other customers may be attracted to “green” monitoring, in which they receive feedback on how environmentally friendly their driving behavior is.

Insurers can also look to offer IoT-related services exclusive of risk transfer—for example, co-marketing location-based services with other providers, such as roadside assistance, auto repairs, and car washes may strengthen loyalty to a carrier. They can also include various nonvehicle-related service options such as alerts about nearby restaurants and shopping, perhaps in conjunction with points earned by good driving behavior in loyalty programs or through gamification, which could be redeemed at participating vendors. Indeed, consumers may be reluctant to switch carriers based solely on pricing, knowing they would be abandoning accumulated loyalty points as well as a host of personalized apps and settings.

For all types of insurance—not just auto—the objective is for insurers to identify the expectations that different types of policyholders may have, and then adapt those insights into practical applications through customized telematic monitoring to elevate the customer experience.

Telematics monitoring has demonstrated benefits even beyond better customer experience for policyholders. Insurers can use telematics tools to expose an individual’s risky driving behavior and encourage adjustments. Indeed, people being monitored by behavior sensors will likely improve their driving habits and reduce crash rates—a result to everyone’s benefit. This “nudge effect” indicates that the motivation to change driving behavior is likely linked to the actual surveillance facilitated by IoT technology.

The power of peer pressure is another galvanizing influence that can provoke beneficial consumer behavior. Take fitness wearables, which incentivize individuals to do as much or more exercise than the peers with whom they compete.17 In fact, research done in several industries points to an individual’s tendency to be influenced by peer behavior above most other factors. For example, researchers asked four separate groups of utility consumers to cut energy consumption: one for the good of the planet, a second for the well-being of future generations, a third for financial savings, and a fourth because their neighbors were doing it. The only group that elicited any drop in consumption (at 10 percent) was the fourth—the peer comparison group.18

Insurers equipped with not only specific policyholder information but aggregated data that puts a user’s experience in a community context have a real opportunity to influence customer behavior. Since people generally resist violating social norms, if a trusted adviser offers data that compares customer behavior to “the ideal driver”—or, better, to a group of friends, family, colleagues, or peers—they will, one hopes, adapt to safer habits.

Figure 7

The future ain’t what it used to be—what should insurers do?

After decades of adherence to traditional business models, the insurance industry, pushed and guided by connected technology, is taking a road less traveled. Analysts expect some 38.5 billion IoT devices to be deployed globally by 2020, nearly three times as many as today,19 and insurers will no doubt install their fair share of sensors, data banks, and apps. In an otherwise static operating environment, IoT applications present insurers with an opportunity to benefit from technology that aims to improve profits, enable growth, strengthen the consumer experience, build new market relevance, and avoid disruption from more forward-looking traditional and nontraditional competitors.

Incorporating IoT technology into insurer business models will entail transformation to elicit the benefits offered by each strategy.

  • Carriers must confront the barriers associated with conflicting standards—data must be harvested and harnessed in a way that makes the information valid and able to generate valuable insights. This could include making in-house legacy systems more modernized and flexible, building or buying new systems, or collaborating with third-party sources to develop more standardized technology for harmonious connectivity.
  • Corporate culture will need a facelift—or, likely, something more dramatic—to overcome longstanding conventions on how information is managed and consumed across the organization. In line with industry practices around broader data management initiatives,20 successfully implementing IoT technology will require supportive “tone at the top,” change management initiatives, and enterprisewide training.
  • With premium savings already proving insufficient to entice most customers to allow insurers access to their personal usage data, companies will need to strategize how to convince or incentivize customers to opt in—after all, without that data, IoT applications are of limited use. To promote IoT-aided connectivity, insurers should look to market value-added services, loyalty points, and rewards for reducing risk. Insurers need to design these services in conjunction with their insurance offerings, to ensure that both make best use of the data being collected.
  • Insurers will need to carefully consider how an interconnected world might shift products from focusing on cleaning up after disruptions to forestalling those disruptions before they happen. IoT technology will likely upend certain lines of businesses, potentially even making some obsolete. Therefore, companies must consider how to heighten flexibility in their models, systems, and culture to counterbalance changing insurance needs related to greater connectivity.
  • IoT connectivity may also potentially level the playing field among insurers. Since a number of the broad capabilities that technology is introducing do not necessarily require large data sets to participate (such as measuring whether containers in a refrigerated truck are at optimal temperatures to prevent spoilage21 or whether soil has the right mix of nutrients for a particular crop22), small to midsized players or even new entrants may be able to seize competitive advantages from currently dominant players.
  • And finally, to test the efficacy of each IoT-related strategy prior to implementation, a framework such as the Information Value Loop may become an invaluable tool, helping forge a path forward and identify potential bottlenecks or barriers that may need to be resolved to get the greatest value out of investments in connectivity.

The bottom line: IoT is here to stay, and insurers need to look beyond business as usual to remain competitive.

The IoT is here to stay, the rate of change is unlikely to slow anytime soon, and the conservative insurance industry is hardly impervious to connectivity-fueled disruption—both positive and negative. The bottom line: Insurers need to look beyond business as usual. In the long term, no company can afford to engage in premium price wars over commoditized products. A business model informed by IoT applications might emphasize differentiating offerings, strengthening customer bonds, energizing the industry brand, and curtailing risk either at or prior to its initiation.

IoT-related disruptors should also be considered through a long-term lens, and responses will likely need to be forward-looking and flexible to incorporate the increasingly connected, constantly evolving environment. With global connectivity reaching a fever pitch amid increasing rates of consumer uptake, embedding these neoteric schemes into the insurance industry’s DNA is no longer a matter of if but, rather, of when and how.

You can view the original post in its entirety here.


Guest post by Ruud Wetzels. This article originally appeared here

This is a bold claim perhaps, certainly as it is made by a data scientist with no particular knowledge of machines, trains or planes, or any experience in running a factory. Yet I am convinced that in your data we can find clues for improving your maintenance process, be it in the form of improved up-time, reduced costs or extended operational lifetime of your equipment. The reason I'm confident is that nowadays the power of computers to find patterns and correlations in vast amounts of data far exceeds the ability of any human to detect such clues.

Killer-app

The technique is called predictive maintenance, and it has been named the killer-app of the Internet of Things. I could agree to that. Predictive maintenance is about something tangible and familiar from everyday practice, it's relatively easy to start with without a need for significant investments up-front, and the results become evident fairly quickly. In other words: predictive maintenance has the potential to quickly yield real benefits, both operational and financial.

It all starts with data

As most machines and equipment in use today are fitted with sensors that gather data, there is usually an abundance of data to work with. Even if this sensory data is not yet collected for maintenance purposes, often the amount of data that is available - say in maintenance logbooks or financial databases - is sufficient to get started. And of course external data about anything that might be relevant can be included, weather data for instance.

The art of data analytics

The next steps with predictive maintenance are to feed all available data into a model and to identify the appropriate algorithms to analyze those data with. This choice of algorithms, by the way, is really what constitutes the art of data analytics; it is the very step that determines the quality of the results. These algorithms then turn all those data into a single number: the probability that a certain event - in the context of maintenance, a failure or a breakdown - will happen within a future time slot, say the next 72 hours. Provided that the input data is cleaned up and ready for use, such a model could be up and running in a matter of weeks.

Making tradeoffs

What remains to be done is to establish a threshold for intervention: at what probability of failure will you intervene and perform the appropriate maintenance activity? This is how the maintenance process can be tailored to specific business objectives, which in my view is one of the major benefits that predictive maintenance offers. It allows railway companies and airlines, for example, to make an explicit trade-off between customer satisfaction (i.e. running on schedule) and safety (i.e. accepting delays when predictive maintenance is warranted), a trade-off that will probably be different in the two cases.
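As a hedged sketch of how the "single number" and the intervention threshold fit together, the snippet below trains a classifier on historical feature snapshots labelled with whether a failure occurred within the next 72 hours, then compares the predicted probability for a new snapshot against a business-chosen threshold. The features, labels and threshold value are illustrative assumptions, not a reference implementation.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(7)

# Historical snapshots: [vibration_rms, bearing_temp, hours_since_service]
X_hist = np.column_stack([
    rng.normal(1.0, 0.3, 2000),
    rng.normal(60, 5, 2000),
    rng.uniform(0, 500, 2000),
])
# Label: did a failure occur within the following 72 hours? (invented rule)
y_hist = ((X_hist[:, 0] > 1.4) & (X_hist[:, 2] > 300)).astype(int)

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_hist, y_hist)

# Business trade-off: a lower threshold means earlier, more frequent interventions
INTERVENTION_THRESHOLD = 0.30

new_snapshot = np.array([[1.5, 63.0, 410.0]])
p_failure_72h = model.predict_proba(new_snapshot)[0, 1]
if p_failure_72h >= INTERVENTION_THRESHOLD:
    print(f"Schedule maintenance: P(failure in 72h) = {p_failure_72h:.2f}")
else:
    print(f"No action: P(failure in 72h) = {p_failure_72h:.2f}")
```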

 A leap of faith or an evidence-backed claim?

At this point the maintenance process has in effect been formalized. It is no longer run based on experience, intuition or a fixed set of instructions. Instead it is governed by an algorithm that produces a single number. To many companies, this is a somewhat scary thought, at least initially. Whether companies are willing to adopt such a formalized process depends to a large extent on human factors: how much affinity and understanding do management and maintenance engineers have of data analytics and what it can accomplish? In case the step to predictive maintenance requires too big a leap of faith, it can be run in parallel to the traditional maintenance process for some time, without making any interventions based upon its predictions. This way companies can gather enough evidence that I'm confident will back up my claim: predictive maintenance is the superior way of doing maintenance.

Photo courtesy of Teruhide Tomori 


Hospitals and medical centers have more to gain from big data analytics than perhaps any other industry. But as data sets continue to grow, healthcare facilities are discovering that success in data analytics has more to do with storage methods than with analysis software or techniques. Traditional data silos are hindering the progress of big data in the healthcare industry, and as terabytes turn into petabytes, the most successful hospitals are the ones that are coming up with new solutions for storage and access challenges.

 

 

Reducing Reliance on Data Silos is Crucial for True Data Integration

 

Big data in any healthcare application is only as efficient as the technical foundation on which it is built. The healthcare industry is learning that data silos get in the way of analytical efforts. The ability to integrate independent, far-flung information sets is the backbone of successful analytics.

 

When operational data, clinical data and financial data, for example, are segregated in isolated silos, the information in each category remains segmented and dislocated. When these data are integrated, however, useful analysis can be conducted. Only then can analysts discover opportunities for cost reduction, locate gaps in patient care and find out where resources could be better utilized.

 

To get a feel for just how scattered different pieces of the data puzzle can become, some health-care operations are considering incorporating patient consumer data, such as credit-card purchases, to gain a more complete picture of a patient's lifestyle and choices. When these data are compartmentalized in silos, integration is impossible. The organizations that get the best results are the organizations that find a way to get their information out of silos and merge different sets into one fluid, cohesive network.

 

Data Triage and Tiering: Beyond Cloud Storage

 

But if organizations shouldn't store data in silos, what are their other options?

 

The obvious solution is cloud storage, which is an excellent choice for operations that measure data in terabytes, such as a marketing agency that monitors metrics for Gospaces landing pages. Smaller organizations or facilities that are just entering the data-storage ecosystem are naturally drawn to cloud storage, and for good reason — it gets their data out of silos and takes pressure off their own infrastructure.

 

But consider the case of Intermountain, a chain of 22 hospitals in Salt Lake City. With 4.7 petabytes of data under its management, cloud storage becomes cost prohibitive. The network estimates the hospital chain's data will grow by 25-30 percent each year until it reaches 15 petabytes five years from now.

 

With such massive data needs, Intermountain found ways to cut costs and streamline efficiency. One way was data tiering: the creation of data storage tiers that can be accessed at the appropriate speeds. Tiering is currently done manually through triaging, but several organizations are exploring auto-tiering, which automatically places data in a tier according to availability needs.
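A simple auto-tiering policy can be expressed as a rule that maps access patterns to storage tiers. The sketch below is a hypothetical illustration only: the tier names, age cut-offs and access-count thresholds are invented, and a real system would plug such a rule into its storage management layer rather than a standalone script.

```python
from datetime import datetime, timedelta

# Hypothetical tiers, fastest/most expensive first
TIERS = ["flash", "disk", "object_archive"]

def choose_tier(last_accessed: datetime, accesses_last_30_days: int) -> str:
    """Map a dataset's access pattern to a storage tier (illustrative rules)."""
    age = datetime.utcnow() - last_accessed
    if age < timedelta(days=7) or accesses_last_30_days > 100:
        return "flash"            # hot: clinicians/analysts hitting it daily
    if age < timedelta(days=180):
        return "disk"             # warm: occasional reporting queries
    return "object_archive"       # cold: retained for compliance/research

print(choose_tier(datetime.utcnow() - timedelta(days=2), 250))    # flash
print(choose_tier(datetime.utcnow() - timedelta(days=400), 1))    # object_archive
```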

 

 

From wearables to records sharing, big data is having more of an impact on the healthcare industry than perhaps any other field. But hospitals and medical centers face barriers to storage and access in the face of ever-growing data sets that so far have proven difficult to overcome — but not impossible. Large health networks are using advanced techniques like data triage and data tiering to deal with growing volume — all while reducing costs.

Originally posted on Data Science Central



Original article is published at Forbes: link

Have you heard about the magic pill? Not sure how it works, but it helps you lose 20 pounds in a week while consuming the same calories as before. And you've probably also heard about the scary side effects of that pill. The craving for a magic pill is appearing in the IoT market as well. Thanks to the explosion of sensors to measure everything imaginable within the Internet of Things, enterprises are confronted with a never-ending buffet of tempting data.

Typically data has been consumed like food: first it is grown, harvested, and prepared. Then this enjoyable meal is ingested into a data warehouse and digested through analytics. Finally we extract the nutritional value and put it to work to improve some part of our operations. Enterprises have evolved to consume data from CRM, ERP, and even the Web, data that is high in signal nutrition, in this genteel, managed manner, and from it they can project trends or derive useful BI.


The IoT and its superabundance of sensors completely changes that paradigm and we need to give serious consideration to our data dietary habits if we want to succeed in this new data food chain. Rather than being served nicely prepared data meals, sensor data is the equivalent of opening your mouth in front of some kind of cartoon food fire hose. Data comes in real-time, completely raw, and in such sustained volume that all you can do is keep stuffing it down.

And, as you would expect, your digestion will be compromised. You won’t benefit from that overload of raw IoT data. In fact, we’ll need to change our internal plumbing, our data pipelines, to get the full nutritional benefit of IoT sensor data.

That will require work, but if you can process the data and extract the value, that’s where the real power comes in. In fact, you can attain something like superpowers. You can have the eyesight of eagles (self-driving cars), the sonar wave perception of dolphins (for detecting objects in the water), and the night vision of owls (for surveillance cameras).

If we can digest all this sensor data and use it in creative ways, the potential is enormous. But how can we adapt to handle this sort of data? Doing so demands a new infrastructure with massive storage, real-time ingestion, and multi-genre analytics.

Massive storage. More than five years ago, Stephen Brobst predicted that the volume of sensor data would soon crush the amount of unstructured data generated by social media (remember when that seemed like a lot?). Sensor data demands extreme scalability.

Real-time ingestion. The infrastructure needs to be able to ingest raw data and determine moment by moment where to land it. Some data demands immediate reaction and should move into memory. Other data is needed in the data warehouse for operational reporting and analytics. Still other data will add benefit as part of a greater aggregation using Hadoop. Instant decisions will help parse where cloud resources are appropriate versus other assets.
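To make the landing decision concrete, here is a small, hypothetical Python routing sketch. The sink names and the metadata fields (latency_budget_ms, needed_for_reporting) are assumptions about how readings might be tagged upstream, not a reference to any particular product.

def route_reading(reading):
    """Decide where to land a raw sensor reading as it arrives."""
    if reading.get("latency_budget_ms", 1000) < 100:
        return "in_memory_store"   # needs an immediate reaction
    if reading.get("needed_for_reporting", False):
        return "data_warehouse"    # operational reporting and analytics
    return "hadoop_data_lake"      # long-term aggregation and batch analysis

readings = [
    {"sensor": "obstacle_radar", "latency_budget_ms": 20},
    {"sensor": "battery_temp", "needed_for_reporting": True},
    {"sensor": "cabin_humidity"},
]
for r in readings:
    print(r["sensor"], "->", route_reading(r))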

Multi-genre analytics. When you have data that you’ve never seen before, you need to transform data and apply different types of algorithms. Some may require advanced analytics and some may just require a standard deviation. Multi-genre analytics allows you to apply multiple analytics models in various forms so that you can quickly discern the value of the data.
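As a minimal illustration of the "standard deviation" end of that spectrum, the sketch below flags outliers in a window of sensor readings with a z-score rule; the two-sigma threshold is an assumption, and a richer model could be run over the same window alongside it.

import statistics

def z_score_outliers(values, threshold=2.0):
    """Flag points more than `threshold` standard deviations from the mean."""
    mean = statistics.mean(values)
    stdev = statistics.stdev(values)
    if stdev == 0:
        return []
    return [v for v in values if abs(v - mean) / stdev > threshold]

window = [21.1, 21.3, 20.9, 21.2, 35.7, 21.0, 21.1]
print(z_score_outliers(window))  # flags the 35.7 spike in this window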

The self-driving car is a helpful metaphor. I’ve heard estimates that each vehicle has 60,000 sensors generating terabytes of data per hour. Consider the variety of that data. Data for obstacle detection requires millisecond response and must be recognized as such if it is to be useful. A sensor on the battery to predict replacement requires aggregation to predict a trend over time and does not require real-time responsiveness. Nevertheless both types of data are being created constantly and must be directed appropriately based on the use case.

How does this work at scale? Consider video games. Real-time data is critical to everything from in-game advertising, which depends on near-instant delivery of the right ad at a contextually appropriate moment, to recommendations and game features that are critical to the user experience and which are highly specific to moments within the game. At the same time, analyzing patterns at scale is critical to understanding and controlling churn and appeal. This is a lot of data to parse on the fly in order to operate effectively.

From a data perspective, we’re going to need a new digestive system if we are to make the most of the data coming in from the IoT. We’ll need vision and creativity as well. It’s an exciting time to be in analytics.


Guest blog post by Bernard Marr

We are constantly generating increasing volumes of data with everything we do. During a recent business trip, I started thinking about how travelling presents a great example of this. With the explosion of the Internet of Things into our lives, the amount of analysable data we leave behind us as we go about our day-to-day lives is growing exponentially. So I decided to try and identify some key bits of data I generated and left behind on a trip from my home in the heart of the English midlands (actually, the “smart city” of Milton Keynes) across the western ocean to the Big Apple.


I am generating data before I even wake up – the sleep monitor on the fitness-tracking wristband I am wearing is uploading data on my movement, heart rate and skin temperature to a cloud somewhere, so I can analyze the quality of my rest. Once I am awake and moving about the house, my Nest thermostats, fire detectors and cameras detect this activity and register that I am home and awake.

Although my home city is host to the UK’s first driverless car trials, this technology isn’t available to me yet, so I will have to drive myself. As I head down the motorway to the airport, the GPS in my phone is constantly updating an entire conglomerate of mostly California-based software companies on my location. Telemetry fitted to my car is passing information on the speed I am travelling at and the route I am taking to my insurers, which use this to dynamically assess the premiums I am paying according to the risks I take. Crawling along at 5mph on the M1 motorway this morning, the risks seem fairly low so I smile to myself in anticipation of the cuts to my premiums I can doubtless expect (right?)

On top of that, the networks of CCTV cameras fitted with ANPR (automatic number plate recognition) catch me at key points in my journey – sometimes this information goes to highway authorities to help measure traffic flow, and sometimes to law enforcement, where red flags will pop up if my car is uninsured, untaxed or reported stolen, or if its driver is of interest to them (hopefully not!)

At the airport I check in using the smartphone app provided by the airline I am flying with. Now they know I am at the airport, but in reality they’ve been tracking me for far longer than that, and thanks to this they know that I like to sit in a window seat and the cabin crew may well know what refreshments to offer me on board thanks to data I’ve left behind on previous trips.

The aircraft I’m on will be generating a lot of data of its own, too. If (as is likely) it is one of the 50,000 which are fitted with a Rolls Royce engine, then data from over 100 sensors will be live-streamed to engineers at the company headquarters in Derby, England. Air pressure, operating temperature, speed and vibration levels are all sent in real time, and minor problems can be corrected, literally on the fly. More detailed data (several terabytes per flight) is stored on flight data recorders and transferred to HQ when the flight has landed.

After landing, I might be feeling a little peckish, and decide to grab a coffee and something to eat at the airport. For small transactions like these, I am increasingly becoming used to using digital payment services such as Apple Pay. But whether I go with that or decide to use a credit card, once again data on my location is being sent to whoever is handling my payment, as well as data on what I buy, which will be sold to other people to help them decide what to sell me. The franchise outlet selling me my food and beverage also adds this same information to the file it holds on me. Even if I don’t hold their loyalty card, it will most likely be building a profile of me using my credit card number or digital payment ID as an anchor. If I pay cash, I’m still generating data although it will only be tied more loosely to me as a relatively anonymous air traveller who has just arrived at New York JFK Airport.   

Next I need to get to my hotel, and like a lot of other people these days I am finding that Uber is a convenient and cost efficient way of getting about in big cities. So I’ll be adding to the data which that company holds about me, with information on my location, the distance I am traveling, the cost of my journey and any personal preferences I specify while using the service.

Arriving at my hotel the staff already have a good idea of who I am, thanks to information I’ve left behind on previous visits. The international hotel chain I am using today records data on every detail of how its customers interact with its services, from how and when they made their booking to their room service preferences and information left on customer feedback forms. This is then used to accommodate me and meet my needs in a manner which is likely to make me want to return.

And that’s just one journey. As I’ve arrived late in the day and I’m free to enjoy the city until tomorrow morning, I might choose to visit a theatre, museum or restaurant. All of these are likely to cause me to generate more data which will be recorded somewhere for someone to analyze. In fact there’s very little I can do without generating data! Even if I choose to wander the streets and sightsee, countless cameras will capture my image and my phone will transmit information which will enable my location to be recorded, from satellites in space through GPS and through radio waves directly to the nearby transmitter towers. RFID tags in shop window displays will register my presence, and street-level sensors may even pick up any sounds I make, to make sure I’m not shooting anybody.

Everywhere we go and everything we do generates data, and it seems likely that this will be increasingly true as time goes on. It’s important to remember that a lot of this data is anonymized, in fact it should be hoped that this is true for all of it, except in cases where you have explicitly given permission to be identified individually. But with the efforts some companies take to secure those permissions, you’ve probably given them away already to more people than you think!



The technology sector is buzzing with predictions and hype about the Internet of Things (IoT), but many people are still confused about what it means, what the real world opportunities are and why businesses should be looking into IoT.

At a fundamental and simplistic level, the Internet of Things refers to 'physical objects which are linked via wired or wireless networks'.

These physical objects could be anything (such as medical machines, vehicles, building systems, signage, toasters, smoke alarms, temperature sensors, weather monitors, intelligent tags or rubbish bins, for example). Almost any object, in any sector, in any location could potentially join the Internet of Things, so it's no wonder that Gartner predicts there will be 50 billion devices connected by 2020 (and other analysts estimate several orders of magnitude more).  

Typically the Internet of Things is used to gather data and insight, find efficiency, automate tasks or improve an experience or service. At Smarter Technology Solutions (STS) we put this down to a simple formula: with greater insight come better decisions.

I know what you're thinking: why would you connect an object like a rubbish bin to the Internet?

Well, it's a simple example, but it has tremendous flow-on effects. By simply tracking the fill level of a rubbish bin with a smart sensor, councils and waste providers can learn a few important facts, such as fill-level trends and how often the bin really needs emptying and when, to better plan waste collection services (e.g. timing bin collections near food outlets to avoid lunchtimes) and to identify areas that may need more or fewer bins (to assist with city and service planning). A minimal sketch of this fill-level triage follows the list below.
By collecting just the fill-level data of a waste bin, the following benefits could be attained:

  1. Reduction in cost, as fewer bin collections = fewer waste trucks on the road, no unnecessary collections for a bin that's 20% full, and less labour to complete waste collection. This also provides a level of operational efficiency and optimized processes.
  2. Environmental benefit - where waste is not overflowing and truck usage is reduced, the flow-on environmental impact, pollution and fuel consumption are minimized. By ensuring waste bins are placed in convenient locations, littering and scattered waste are also minimized.
  3. Service improvements - truck collection routes can be optimized, waste bins can be collected at convenient times and planning of future/additional services can be amended as the data to trend and verify assumptions is available. 
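Here is the promised sketch of the fill-level triage, written in Python. The 80 percent trigger and the bin identifiers are illustrative assumptions rather than any council's real policy.

# Hypothetical fill-level triage for smart bins.
FILL_THRESHOLD = 0.8  # schedule collection once a bin is 80% full

def bins_due_for_collection(latest_readings):
    """Given {bin_id: fill_fraction}, return the bins worth visiting today."""
    return [bin_id for bin_id, fill in latest_readings.items()
            if fill >= FILL_THRESHOLD]

readings = {"bin_park_01": 0.35, "bin_foodcourt_02": 0.92, "bin_station_03": 0.81}
print(bins_due_for_collection(readings))  # only the nearly full bins

The same readings, logged over weeks, give the fill-level trends used for route planning and for deciding where extra bins are needed.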

More complex examples of IoT include:

  • Intelligent transport systems which update digital signage on the highway and adjust the traffic lights in real time to divert traffic, optimise traffic flow and reduce congestion;
  • A farm which uses sensors to measure soil moisture, chemical levels and weather patterns, adjusting the watering and treatment schedules accordingly;
  • A building which draws the blinds to block out the afternoon sun, reducing the power needed to cool the building and keeping the environment comfortable;
  • Health-care devices which monitor patients and auto-alert medical practitioners once certain symptoms or attributes are detected;
  • Trucks which automatically detect mechanical anomalies and auto-schedule themselves for preventative maintenance once they reach certain thresholds (a minimal sketch of this kind of threshold rule follows this list);
  • Asset tracking of fleet vehicles within a services company which provides operations staff with fleet visibility to quickly dispatch the closest resource to a job based on proximity to the next task;
  • Water, gas and electricity meters which send in their own readings on a monthly basis, with trend analysis that can detect potential water or gas leaks; or
  • A retail store which analyses your in-store behavior or purchasing patterns and recommends products to you based on previous choices and your personal preferences.
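The threshold rule mentioned in the truck example can be sketched in a few lines of Python. The sensor names and limits below are assumptions for illustration, not real telematics parameters.

# Hypothetical maintenance thresholds; values are illustrative only.
MAX_ENGINE_TEMP_C = 110
MIN_BRAKE_PAD_MM = 3

def needs_maintenance(telemetry):
    """Return True if any reading crosses a maintenance threshold."""
    if telemetry.get("engine_temp_c", 0) > MAX_ENGINE_TEMP_C:
        return True
    if telemetry.get("brake_pad_mm", 100) < MIN_BRAKE_PAD_MM:
        return True
    return False

print(needs_maintenance({"engine_temp_c": 115, "brake_pad_mm": 8}))  # True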

At Smarter Technology Solutions we specialize in consulting with organizations to understand the benefits of IoT, design best-fit solutions, and engineer and implement those solutions, as well as supporting the organization's ongoing support needs. This results in 3 key outcomes:

  • Discovery of New Opportunities - With better visibility, trends, opportunities, correlations and inefficiencies can be understood. From this, products, services and business models can be adjusted or changed to achieve competitive advantage.
  • Improved Efficiency - By identifying inefficiencies in existing business practices, work-flows can be improved and more automated services can be provided.
  • Improved Services - With trends and real-time data, businesses are able to make smarter decisions and alter the way services are delivered.

www.smartertechnologysolutions.com.au


The Internet of Things (IoT) concept promises to improve our lives by embedding billions of cheap purpose-built sensors into devices, objects and structures that surround us (appliances, homes, clothing, wearables, vehicles, buildings, healthcare tech, industrial equipment, manufacturing, etc.).

IoT Market Map -- Goldman Sachs

What this means is that billions of sensors, machines and smart devices will simultaneously collect volumes of big data, while processing real-time fast data from almost everything and... almost everyone!!!

The IoT vision is not yet reality

Simply stated, the Internet of Things is all about the power of connections.

Consumers, for the moment anyway, seem satisfied to have access to gadgets, trendy devices and apps which they believe will make them more efficient (efficient doesn't necessarily mean productive), improve their lives and promote general well-being.

Corporations on the other hand, have a grand vision that convergence of cloud computing, mobility, low-cost sensors, smart devices, ubiquitous networks and fast-data will help them achieve competitive advantages, market dominance, unyielding brand power and shareholder riches.

Global Enterprises (and big venture capital firms) will spend billions on the race for IoT supremacy. These titans of business are chomping at the bit to develop IoT platforms, machine learning algorithms, AI software applications & advanced predictive analytics. The end-game of these initiatives is to deploy IoT platforms on a large scale for:

  • real-time monitoring, control & tracking (retail, autonomous vehicles, digital health, industrial & manufacturing systems, etc.)
  • assessment of consumers, their emotions & buying sentiment,
  • managing smart systems and operational processes,
  • reducing operating costs & increasing efficiencies,
  • predicting outcomes, and equipment failures, and
  • monetization of consumer & commercial big data, etc.

 

IoT reality is still just a vision

No technology vendor (hardware or software), service provider, consulting firm or self-proclaimed expert can fulfill the IoT vision alone.

Recent history with tech hype-cycles has proven time and again that 'industry experts' are not very accurate at predicting the future... in life or in business!

Having said this, it only makes sense that fulfilling the promise of IoT demands close collaboration & communication among many stakeholders.

A tech ecosystem is born

IoT & Industrial IoT comprise a rapidly developing tech ecosystem. Momentum is building quickly and will drive sustainable future demand for:

  • low-cost hardware platforms (sensors, smart devices, etc.),
  • a stable base of suppliers, developers, vendors & distribution,
  • interoperability & security (standards, encryption, APIs, etc.),
  • local to global telecom & wireless services,
  • edge to cloud networks & data centers,
  • professional services firms (and self-proclaimed experts),
  • global strategic partnerships,
  • education and STEM initiatives, and
  • broad vertical market development.

I'll close with one final thought; "True IoT leaders and visionaries will first ask why, not how..!"


Editor's Note: Members of IoT Central are encouraged to participate in Ventana Research's study. The author of the blog shares details below.

The emerging Internet of Things (IoT) is an extension of digital connectivity to devices and sensors in homes, businesses, vehicles and potentially almost anywhere. This innovation means that virtually any device can generate and transmit data about its operations – data to which analytics can be applied to facilitate monitoring and a range of automatic functions. To do these tasks IoT requires what Ventana Research calls operational intelligence (OI), a discipline that has evolved from the capture and analysis of instrumentation, networking and machine-to-machine interactions of many types. We define operational intelligence as a set of event-centered information and analytic processes operating across an organization that enable people to use that event information to take effective actions and make optimal decisions. Ventana Research first began covering operational intelligence over a decade ago.

In many industries, organizations can gain competitive advantage if they reduce the elapsed time between an event occurring and actions taken or decisions made in response to it. Existing business intelligence (BI) tools provide useful analysis of and reporting on data drawn from previously recorded transactions, but to improve competitiveness and maximize efficiencies organizations are concluding that employees and processes in IT, business operations and front-line customer sales, service and support also need to be able to detect and respond to events as they happen.

Both business objectives and regulations are driving demand for new operational intelligence technology and practices. By using them many activities can be managed better, among them manufacturing, customer engagement processes, algorithmic trading, dynamic pricing, yield management, risk management, security, fraud detection, surveillance, supply chain and call center optimization, online commerce and gaming. Success in efforts to combat money laundering, terrorism or other criminal behavior also depends on reducing information latency through the application of new techniques.

The evolution of operational intelligence, especially in conjunction with IoT, is encouraging companies to revisit their priorities and spending for information technology and application management. However, sorting out the range of options poses a challenge for both business and IT leaders. Some see potential value in expanding their network infrastructure to support OI. Others are implementing event processing (EP) systems that employ new technology to detect meaningful patterns, anomalies and relationships among events. Increasingly, organizations are using dashboards, visualization and modeling to notify nontechnical people of events and enable them to understand their significance and take appropriate and immediate action.

As with any innovation, using OI for IoT may require substantial changes to organizations. These are among the challenges they face as they consider adopting this evolving operational intelligence:

  • They find it difficult to evaluate the business value of enabling real-time sensing of data and event streams using radio frequency identification (RFID) tags, agents and other systems embedded not only in physical locations like warehouses but also in business processes, networks, mobile devices, data appliances and other technologies.
  • They lack an IT architecture that can support and integrate these systems as the volume, variety and frequency of information increase. In addition, our previous operational intelligence research shows that these data sources are incomplete or inadequate in nearly two out of five organizations.
  • They are uncertain how to set reasonable business and IT expectations, priorities and implementation plans for important technologies that may conflict or overlap. These can include BI, event processing, business process management, rules management, network upgrades, and new or modified applications and databases.
  • They don’t understand how to create a personalized user experience that enables nontechnical employees in different roles to monitor data or event streams, identify significant changes, quickly understand the correlation between events and develop a context adequate to enable determining the right decisions or actions to take.

Today’s fast-paced, 24-by-7 world has forced organizations to reduce the latency between when transactions and other data are recorded and when applications and BI systems are made aware of them and thus can take action. Furthermore, the introduction of low-cost sensors and the instrumentation of devices ranging from appliances and airline engines to crop management and animal feeding systems create opportunities that have never before existed. Technological developments such as smart utility meters, RFID and embedded computing devices for environmental monitoring, surveillance and other tasks also are creating demand for tools that can provide insights in real time from continuous streams of event data.

As organizations expand business intelligence to serve operational needs by deploying dashboards and other portals, they are recognizing the need to implement technology and develop practices that collect events, correlate them into meaningful patterns and use workflow, rules and analytics to guide how employees and automated processes should react. In financial services, online commerce and other industries, for example, some organizations have built proprietary systems or have gone offshore to employ large teams of technicians at outsourcing providers to monitor transactions and event streams for specific patterns and anomalies. To reduce the cost, complexity and imperfections in these procedures, organizations now are seeking technology that can standardize and automate event processing and notify appropriate personnel of significant events in real time.
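What "correlate events into meaningful patterns and notify appropriate personnel" can look like in code is sketched below: a sliding window of failure events per device, with a notification once several occur close together. The window length, the failure count and the print-based notification are all assumptions for illustration.

from collections import defaultdict, deque
import time

WINDOW_SECONDS = 60      # assumed correlation window
FAILURES_TO_ALERT = 3    # assumed pattern: three failures inside the window
_recent_failures = defaultdict(deque)

def handle_event(device_id, event_type, ts=None):
    """Return True and notify if the failure pattern is detected."""
    ts = ts if ts is not None else time.time()
    if event_type != "failure":
        return False
    window = _recent_failures[device_id]
    window.append(ts)
    while window and ts - window[0] > WINDOW_SECONDS:
        window.popleft()
    if len(window) >= FAILURES_TO_ALERT:
        print(f"ALERT: repeated failures on {device_id}")  # stand-in for a real notification
        return True
    return False

Production systems would express the same pattern in a stream-processing engine rather than in-process Python, but the window-plus-rule structure is the same.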

Conventional database systems are geared to manage discrete sets of data for standard BI queries, but event streams from sources such as sensing devices typically are continuous, and their analysis requires tools designed to enable users to understand causality, patterns, time relationships and other factors. These requirements have led to innovation in event stream processing, event modeling, visualization and analytics. More recently the advent of open source and Hadoop-related big data technologies such as Flume, Kafka, Spark and Storm is enabling a new foundation for operational intelligence. Innovation in the past few years has occurred in both the open source community and proprietary implementations.

Many of the early adopters of operational intelligence technologies were in financial services and intelligence, online services and security. However, as organizations across a range of other industries seek new competitive advantages from information or require real-time insight for risk management and regulatory compliance, demand is increasing broadly for OI technologies. Organizations are considering how to incorporate event-driven architectures, monitor network activity for significant event patterns and bring event notification and insight to users through both existing and new dashboards and portals.

To help understand how organizations are tackling these changes Ventana Research is conducting benchmark research on The Internet of Things and Operational Intelligence. The research will explore how organizations are aligning themselves to take advantage of trends in operational intelligence and IoT. Such alignment involves not just information and technology, but people and processes as well. For instance, IoT can have a major impact on business processes, but only if organizations can realign IT systems to a discover-and-adapt rather than a model-and-apply paradigm. For example, business processes are often outlined in PDF documents or through business process systems. However, these processes are often carried out in an uneven fashion different from the way the model was conceived. As more process flows are directly instrumented and some processes carried out by machines, the ability to model directly based on the discovery of those event flows and to adapt to them (either through human learning or machine learning) becomes key to successful organizational processes.

By determining how organizations are addressing the challenges of implementing these technologies and aligning them with business priorities, this research will explore a number of key issues, the following among them:

  • What is the nature of the evolving market opportunity? What industries and LOBs are most likely to adopt OI for IoT?
  • What is the current thinking of business and IT management about the potential of improving processes, practices and people resources through implementation of these technologies?
  • How far along are organizations in articulating operational intelligence and IoT objectives and implementing technologies, including event processing?
  • Compared to IT management, what influence do various business functions, including finance and operations management, have on the process of acquiring and deploying these event-centered technologies?
  • What suppliers are organizations evaluating to support operational intelligence and IoT, including for complex event processing, event modeling, visualization, activity monitoring, and workflow, process and rules management?
  • Who are the key decision-makers and influencers within organizations?

Please join us in this research. Fill out the survey to share your organization’s existing and planned investments in the Internet of Things and operational intelligence. Watch this space for a report of the findings when the research is completed.

Regards,

David Menninger

SVP & Research Director


The Role of Big Data In Medicine

Evolution or Revolution?

The role of big data in medicine is to build better health profiles and better predictive models around individual patients so that we can better diagnose and treat disease.

One of the fundamental limitations of medicine today, and of the pharmaceutical industry, is our understanding of the biology of disease. Big data comes into play in aggregating more and more information at multiple scales of what constitutes a disease: from DNA, proteins and metabolites to cells, tissues, organs, organisms and ecosystems. Those are the scales of biology we need to integrate and model with big data. If we do that, the models will evolve as they are built, and they will become more predictive for individual patients.

The life sciences are not the first field to encounter big data. Data-powered companies like Google, Amazon and Facebook apply many of the same machine-learning techniques to predict what kind of movie you would like to watch or what kind of food you are likely to buy. Those same kinds of methods, and the infrastructure for managing the data, can all be applied in medicine.

What big data means for patients, payers, and pharma

What I see for the future of patients is engaging them as partners in this new way of understanding their health and wellness better, and helping them make better decisions around those factors.

Most of their data collection will be passive, so people won't need to be active all the time, logging things for instance, yet they will stay engaged because they will get a benefit from it. They will consent to have their data used in this way because they receive some perceived benefit. Ultimately, that benefit shows up in the number of doctor visits you need, the number of times you get sick and the number of times you progress into a given disease state; all of these should decline. And there is a benefit to being presented with the information: patients can look at dashboards about themselves, so they are not blind to the information or dependent on a physician to interpret it for them; they are able to see it every day and understand what it means.

I believe payers are perhaps near the top of the chain in terms of who can benefit from this, since, ultimately, payers need to contain the cost of every patient. They care about the health of the patient, but they also want to do whatever they can to motivate both the patients and the medical systems that treat them to minimize cost through better preventive measures, better targeted therapies and increased compliance with medication use. Then there is the general risk profiling of patients: payers care a lot about understanding the overall risk of a patient and what they are likely to cost year over year.

For device makers, I see this as a transformation that is theirs to lose if they do not embrace the development of consumer wearable devices and sensors more generally, in a world where every individual in the US, or on the planet, is buying a device, rather than one of a handful of medical systems. That is a better business model, one that is going to generate a lot of revenue.

Finally, from the pharmaceutical perspective, I believe it is major. Just look at Regeneron Pharmaceuticals engaging the Geisinger Health System and sequencing everyone in that population to build a better understanding of disease, and protections against disease, in order to develop therapeutics.

Originally posted on Data Science Central


The Next Big Thing In Big Data: BDaaS

Guest blog post by Bernard Marr

We’ve had software as a service, platform as a service and data as a service. Now, by mixing them all together and massively upscaling the amount of data involved, we’ve arrived at Big Data as a Service (BDaaS).

It might not be a term you’re familiar with yet – but it suitably describes a fast-growing new market. In the last few years many businesses have sprung up offering cloud based Big Data services to help other companies and organizations solve their data dilemmas.


Some estimate that business IT spending on cloud-based, x-as-a-service activity will increase from about 15% today to 35% by 2021. Given that the global Big Data market is estimated to be worth $88 billion by that point, applying that 35 percent share suggests the forecast value of the BDaaS market could be roughly $30 billion.

So, here I will attempt to give a brief(ish) overview of the concept, as well as examples of how it is being put into practice in real life businesses and organizations around the world.

What is BDaaS?

Big Data refers to the ever-growing amount of information we are creating and storing, and the analysis and use of this data. In a business sense, it particularly refers to applying insights gleaned from this analysis in order to drive business growth.

At the moment, BDaaS is a somewhat nebulous term, which is often used to describe a wide variety of outsourcing of various Big Data functions to the cloud.

This can range from the supply of data, to the supply of analytical tools with which to interrogate the data (often through a web dashboard or control panel) to carrying out the actual analysis and providing reports. Some BDaaS providers also include consulting and advisory services within their BDaaS packages.

So, in many ways, BDaaS encompasses elements of what has become known as software as a service, platform as a service, data as a service, and so on – and applies them to solving Big Data problems.

Why is BDaaS useful?

There are several advantages to outsourcing or virtualizing your analytics activities involving large datasets.

The popularity of Hadoop has to some extent democratized Big Data – anyone can use cheap off-the-shelf hardware and open source software to analyze data, if they invest time learning how. But most commercial Big Data initiatives will still involve money being spent up front on components and infrastructure. When a large company launches a major initiative, this is likely to be substantial.

On top of upfront costs, storing and managing large quantities of information requires an ongoing investment of time and resources. When you use BDaaS, all of the techy “nuts and bolts” are, in theory, out of sight and out of mind, leaving you free to concentrate on business issues.

BDaaS providers generally take this on for the customer – they have everything set up and ready to go – and you simply rent the use of their cloud-based storage and analytics engines and pay either for the time you use them or the amount of data crunched.

Additionally BDaaS providers often take on the cost of compliance and data protection. When the data is stored on their servers, they are (generally) responsible for it.

Who provides and uses BDaaS?

A good example is IBM’s Analytics for Twitter service, which provides businesses with access to data and analytics on Twitter’s 500 million tweets per day and 280 million monthly active users.

As well as the “firehose” of tweets it provides analytics tools and applications for making sense of that messy, unstructured data and has trained 4,000 consultants to help businesses put plans into action to profit from them.

Another is agricultural manufacturer John Deere, which fits all of its tractors with sensors that stream data about the machinery as well as soil and crop conditions to the MyJohnDeere.com and Farmsight services. Farmers can subscribe to access analytical intelligence on everything from when to order spare parts to where to plant crops.

The arrival of Apple’s Watch – perhaps the device that will bring consumer wearables into the mainstream – will doubtlessly bring with it a tsunami of new BDaaS apps. They will soak up the data from the presumed millions of people who will soon be using it for everything from monitoring their heart rate to arranging their social calendar to remote controlling their home entertainment. Then they will find innovative ways to package it and sell it back to us. Apple and IBM have just announced their collaboration on a big data health platform.

In sales and marketing, BDaaS is increasingly playing its part, too. Many companies now offer customer profiling services, including Acxiom – the world’s biggest seller of direct marketing data. By applying analytics to the massive amount of personal data they collect, they can more effectively profile us as consumers and hand their own customers potential leads.

Amazon’s AWS as well as Google’s AdSense and AdWords are better known services that would also fall under the banner. They are all used by thousands of small to medium-sized businesses to host data infrastructure, and target their marketing at relevant niches where potential customers could be lurking.

The future of BDaaS?

The term may be rather unwieldy and inelegant (I’ve written before that I’m not even particularly a fan of the term Big Data, so BDaaS is a further step into the ridiculous) but the concept is rock solid.

As more and more companies realize the worth of implementing Big Data strategies, more services will emerge to support them. Data analysis can and generally does bring positive change to any organization that takes it seriously, and this includes smaller scale operations which won’t have the expertise (or budget to develop that expertise) to do it themselves.

With the growth in popularity of software as a service, we are increasingly used to working in a virtualized environment via a web interface, and integrating analytics into this process is a natural next step. We can already see that it is making Big Data projects viable for many businesses that previously would have considered them out of reach – and I think it is something we will see and hear a lot more about in the near future.

About: Bernard Marr is a globally recognized expert in big data, analytics and enterprise performance. He helps companies improve decision-making and performance using data. His new book is Data: Using Smart Big Data, Analytics and Metrics To Make Better Decisions and Improve Performance. You can read a free sample chapter here.


Trending Report On Micro-Mobile Data Center

A micro-mobile data center is a small, portable data center system designed to resolve or share the workloads of conventional data centers. Micro-mobile data centers come with a small set of servers and virtual machines compared to traditional data centers. Micro-mobile data centers provide secure and fast access to data and compute resources. They are further enhanced with built-in cooling systems, security systems, associated management tools, fire protection and uninterrupted power supplies (UPS). Thus, micro-mobile data centers offer ease of installation, simplicity of use and cost effectiveness. They are suitable for use in remote locations, disaster-prone areas and even for temporary deployments. In terms of rack size, micro-mobile data centers are classified as 5-25 rack units (RU), 26-50 RU and more than 50 RU. Various end users of micro-mobile data centers include government, telecom and IT, healthcare, oil and gas, smart cities, retail, education, energy, banking and others. On the basis of end user, micro-mobile data centers can be segmented into small and medium enterprises (SMEs) and large enterprises.

Adoption of cloud computing and cloud-based services by enterprises has been growing incrementally over the past few years. Increasing investment in cloud infrastructure has resulted in greater focus on cost-effectiveness and low power consumption. Today, high growth in data center traffic has increased the focus on cloud more than ever before. Growing demand for cloud services is a key factor driving the growth of micro-mobile data centers. Micro-mobile data centers help reduce latency between the cloud and devices, which improves the performance of cloud services. Along with this, micro-mobile data centers are highly energy efficient and portable, which makes them superior to conventional data centers for many uses. Also, with increasing digital traffic due to services such as the Internet of Things (IoT), micro-mobile data centers help reduce latency and enhance the security of data.

Lack of awareness regarding micro-mobile data centers is one of the major concerns that may restrain the market. Most businesses still use large, conventional data centers, which lack features provided by micro-mobile data centers such as portability. Further, installation of micro-mobile data centers faces challenges due to vendor lock-in, with issues such as lack of interoperability with previously installed traditional data center systems. In addition, software for traditional data centers needs to be adapted to efficiently distribute workloads to micro-mobile data centers and to address the software product gap created by this lack of interoperability.

The increasing number of IT services and social networking platforms globally is leading to rapid growth in the number of data centers. People are moving towards online services, which is increasing the data load on present data centers. With growing data consumption across the globe, lightweight scale-out workloads have also been increasing. This will increase demand for data centers, especially micro-mobile data centers, which are more sophisticated than conventional data centers.

Some of the key manufacturers of micro-mobile data center market are Schneider Electric SE, Rittal GmbH & Co., Huawei Technologies Co., Ltd., Panduit Corp., Dell, Inc., Zellabox, Silicon Graphics International Corp., Canovate Elektronik and Elliptical Mobile Solutions.

Read More: Micro-Mobile Data Center Market

Originally posted on Data Science Central


Transform Big Data to Smart Data

We are entering the age of IoT: lots of connected devices will talk to each other using sensors and signals, and it is expected that those devices will generate an enormous amount of data.

Handling that much data [big data] and generating actionable signals from it will be a challenge.

Companies are investing a lot of resources to build platforms that can process big data and retrieve some intelligence from it. But IoT will generate limitless data. Will these platforms still be able to handle that situation, and will it still remain cost effective to process the data of thousands of devices and then retrieve signals that are understandable to other devices?

For example:

Today, SkyServer has data on 200 million galaxies in its database. The SDSS data exceeds 150 terabytes, covering more than 220 million galaxies and 260 million stars. The images alone include 2.5 trillion pixels of original raw data.

As the accuracy of devices improves with the passage of time, their discovery capabilities will also improve, resulting in more petabytes of data. Will we continue adding servers, RAM, hard drives and processors to manage all this?

Why can't we invest those resources in making the devices themselves intelligent, so that every device does the necessary processing at its own end and its output interface exposes only actionable data to the outside world?

For example, the sky telescope could compare all the images itself and send to SkyServer only the images that contain some new insight. Some kind of machine learning built into the telescope could help it learn from the past and generate actionable insights based on that… in simple words, fit a human brain inside the telescope that transforms big data into smart data. Storing, managing and processing garbage [big data] at SkyServer does not make sense to me. Why should we bear the data transfer cost of garbage? Maybe the hardware device can keep it in its 'unconsciousness' and use it if needed to generate smart data.
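One way to read the telescope example as code: keep a running summary at the edge and forward only readings that look genuinely new. The sketch below uses a running mean and a relative-deviation threshold; both the threshold and this particular notion of "new insight" are simplifying assumptions.

class EdgeFilter:
    """Forward only readings that differ noticeably from what the device has already seen."""

    def __init__(self, threshold=0.2):
        self.threshold = threshold  # assumed 20% relative deviation
        self.mean = None
        self.count = 0

    def is_worth_sending(self, value):
        novel = self.mean is None or abs(value - self.mean) > self.threshold * abs(self.mean)
        # update the running mean either way
        self.count += 1
        self.mean = value if self.mean is None else self.mean + (value - self.mean) / self.count
        return novel

f = EdgeFilter()
stream = [10.0, 10.1, 9.9, 10.2, 14.5, 10.0]
print([v for v in stream if f.is_worth_sending(v)])  # keeps the first reading and the 14.5 jump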

In the world of IoT, if the output interfaces of all devices could expose smart data, it would tremendously decrease the volume of data that needs to travel over the IoT wire, and the software world would be far more comfortable handling those megabytes of data.

We also need to rethink the way we use the internet today; the data being generated on the internet is increasing tremendously with the passage of time, and software and hardware technology will not be able to handle the situation in the future.

Maybe we should think of a new language for the internet which can convey a message in a few signals instead of writing a story of 1,000 words. If you watch conversations between deaf people, they use a few signs and tell a full story in a few gestures. Consider internet users in the same way, and think of a language which could tell a story in a few signals.

We should think of a platform that could help reduce the size of the data we are generating on the internet today.

Let us transform the world towards smart data instead of wasting money handling the garbage [big data].

Originally posted on Data Science Central


As we move towards widespread deployment of sensor-based technologies, three issues come to the fore: (1) many of these applications will need machine learning to be localized and personalized, (2) machine learning needs to be simplified and automated, and (3) machine learning needs to be hardware-based. 

Beginning of the era of personalization of machine learning

Imagine a complex plant or machinery being equipped with all kinds of sensors to monitor and control its performance and to predict potential points of failure. Such plants can range from an oil rig out in the ocean to an automated production line. Or such complex plants can be human beings, perhaps millions of them, who are being monitored with a variety of devices in a hospital or at home. Although we can use some standard models to monitor and compare performance of these physical systems, it would make more sense to either rebuild these models from scratch or adjust them to individual situations. This would be similar to what we do in economics. Although we might have some standard models to predict GDP and other economic variables, we would need to adjust each one of them to individual countries or regions to take into account their individual differences. The same principle of adjustment to individual situations would apply to physical systems that are sensor-based. And, similar to adjusting or rebuilding models of various economic phenomena, the millions of sensor-based models of our physical systems would have to be adjusted or rebuilt to account for differences in plant behavior. We are, therefore, entering an era of personalization of machine learning at a scale that we have never imagined before. The scenario is scary because we wouldn’t have the resources to pay attention to these millions of individual models. Cisco projects 50 billion devices to be connected by 2020 and the global IoT market size to be over $14 trillion by 2022 [1, 2].
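A hedged illustration of what "adjusting a standard model to each individual plant" might look like: every device starts from the same global baseline, which then drifts toward that device's own readings as they arrive. The baseline value, learning rate and tolerance below are illustrative assumptions only.

GLOBAL_BASELINE = 75.0  # assumed fleet-wide expected reading
ALPHA = 0.05            # assumed rate at which each device personalizes

class DeviceModel:
    def __init__(self):
        self.baseline = GLOBAL_BASELINE

    def update(self, reading):
        # exponentially weighted adjustment toward this device's own behaviour
        self.baseline += ALPHA * (reading - self.baseline)

    def is_anomalous(self, reading, tolerance=10.0):
        return abs(reading - self.baseline) > tolerance

models = {"pump_A": DeviceModel(), "pump_B": DeviceModel()}
for r in [70, 71, 69, 70]:          # pump_A runs cooler than the fleet average
    models["pump_A"].update(r)
print(models["pump_A"].baseline < models["pump_B"].baseline)  # True: pump_A is personalized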

 

The need for simplification and automation of machine learning technologies 

If this scenario of widespread deployment of personalized machine learning is to play out, we absolutely need automation of machine learning to the extent that requires less expert assistance. Machine learning cannot continue to depend on high levels of professional expertise.  It has to be simplified to be similar to automobiles and spreadsheets where some basic training at a high school can certify one to use these tools. Once we simplify the usage of machine learning tools, it would lead to widespread deployment and usage of sensor-based technologies that also use machine learning and would create plenty of new jobs worldwide. Thus, simplification and automation of machine learning technologies is critical to the economics of deployment and usage of sensor-based systems. It should also open the door to many new kinds of devices and technologies.

 

The need for hardware-based localized machine learning for "anytime, anywhere" deployment and usage 

Although we talk about the Internet of Things, it would simply be too expensive to transmit all of the sensor-based data to a cloud-based platform for analysis and interpretation. It would make sense to process most of the data locally. Many experts predict that, in the future, about 60% of the data would be processed at the local level, in local networks - most of it may simply be discarded after processing and only some stored locally. There is a name for this kind of local processing – it’s called “edge computing” [3].

The main characteristics of data generated by these sensor-based systems are: high velocity, high volume, high dimensionality and streaming. There are not many machine learning technologies that can learn in such an environment other than hardware-based neural network learning systems. The advantages of neural network systems are: (1) learning involves simple computations, (2) learning can take advantage of massively parallel brain-like computations, (3) they can learn from all of the data instead of samples of data, (4) scalability issues are non-existent, and (5) implementations on massively parallel hardware can provide real-time predictions in microseconds. Thus, massively parallel neural network hardware can be particularly useful with high-velocity streaming data in these sensor-based systems. Researchers at Arizona State University, in particular, are working on such a technology and it is available for licensing [4].
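A rough sense of the learning rule such hardware exploits can be given in plain Python: update the model one sample at a time so that a high-velocity stream never has to be stored. The sketch below is an online linear model trained with per-sample gradient steps; it is a stand-in, vastly simpler than the massively parallel neural hardware described above, and the simulated stream is an assumption.

import random

class OnlineLinearModel:
    """Tiny per-sample SGD learner: one weight vector, updated as data streams in."""

    def __init__(self, n_features, lr=0.05):
        self.w = [0.0] * n_features
        self.b = 0.0
        self.lr = lr

    def predict(self, x):
        return sum(wi * xi for wi, xi in zip(self.w, x)) + self.b

    def learn_one(self, x, y):
        # gradient of squared error for a single streaming sample
        err = self.predict(x) - y
        self.w = [wi - self.lr * err * xi for wi, xi in zip(self.w, x)]
        self.b -= self.lr * err

model = OnlineLinearModel(n_features=1)
for _ in range(5000):                       # simulated sensor stream: y = 2*x + 1 + noise
    x = [random.uniform(0, 1)]
    y = 2 * x[0] + 1 + random.gauss(0, 0.01)
    model.learn_one(x, y)
print(round(model.w[0], 1), round(model.b, 1))  # close to 2.0 and 1.0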

 

Conclusions

Hardware-based localized learning and monitoring will not only reduce the volume of Internet traffic and its cost, it will also reduce (or even eliminate) the dependence on a single control center, such as the cloud, for decision-making and control. Localized learning and monitoring will allow for distributed decision-making and control of machinery and equipment in IoT.

We are gradually moving to an era where machine learning can be deployed on an “anytime, anywhere” basis even when there is no access to a network and/or a cloud facility.

 

References

  1. Gartner (2013). "Forecast: The Internet of Things, Worldwide, 2013." https://www.gartner.com/doc/2625419/forecast-internet-things-worldwide-
  2. 10 Predictions for the Future of the Internet of Things
  3. Edge Computing
  4. Neural Networks for Large Scale Machine Learning

 


Big data solutions: trends, innovation

We have seen the birth of a generation of enterprises that are data-rich and analytically driven, eagerly following trends in big data and analytics. Let’s take a closer look as I provide some use cases demonstrating how IBM is helping clients find innovative big data solutions.


1. Datafication-led innovation
Data is the new basis of competitive advantage. Enterprises that use data and sophisticated analytics turn insight into innovation, creating efficient new business processes, informing strategic decision making and outpacing their peers on a variety of fronts.


2. Sophisticated analytics for rich media
Much of the data produced is useless without appropriate analytics applied to it. Where does opportunity lie? According to the International Data Corporation (IDC), rich media (video, audio, images) analytics will at least triple in 2015 to emerge as a key driver for big data and analytics technology investment. And such data requires sophisticated analytics tools. Indeed, consider e-commerce–based image search: accurate, relevant image search analysis that doesn't require human tagging or intervention is a significant opportunity in the market. We can expect similar smart analytics capabilities to offer similar opportunities.


3. Predictive analytics driving efficiency
Applications featuring predictive capabilities are picking up speed. Predictive analytics enhances value by boosting effectiveness, providing measurability of the application itself, recognizing the value of the data scientist and maintaining a dynamically adaptive infrastructure. For these reasons, predictive analytics capabilities are becoming an integral component of analytics tools.


4. Big data in the cloud
Over the next five years, IDC predicts, spending on cloud-based big data analytics solutions will grow three times more quickly than spending on on-premises solutions—and hybrid deployments will become a must-have. Moreover, says IDC, with data sources located both in and out of the cloud, business-level metadata repositories will be used to relate data. Organizations should evaluate offerings from public cloud providers to seek help overcoming challenges associated with big data management, including the following:

  • Security and privacy policies and regulations affecting deployment options
  • Data movement and integration requirements for supporting hybrid cloud environments
  • Building a business glossary and managing data mappings to avoid being overwhelmed by data
  • Building a cloud metadata repository (containing business terms, IT assets, data definitions and logical data models) that points to physical data elements; a minimal sketch of such an entry follows this list.
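To make the last item above concrete, here is a hypothetical sketch of a single metadata-repository entry in Python; every field name is an assumption for illustration, not IDC's or any vendor's schema.

# Hypothetical metadata-repository entry linking a business term to physical data.
metadata_entry = {
    "business_term": "Customer Churn Rate",
    "definition": "Share of customers lost over a rolling 30-day window",
    "owner": "marketing_analytics",
    "logical_model": "customer.churn_rate",
    "physical_elements": [
        {"system": "on_prem_warehouse", "table": "cust_metrics", "column": "churn_30d"},
        {"system": "cloud_object_store", "path": "s3://example-bucket/churn/daily/"},
    ],
}

def locate(entry, system):
    """Find where a business term physically lives in a given system."""
    return [e for e in entry["physical_elements"] if e["system"] == system]

print(locate(metadata_entry, "cloud_object_store"))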


5. Cognitive computing
Cognitive computing is a game-changing technology that uses natural language processing and machine learning to help humans and machines interact naturally and to augment human expertise. Personalization applications using cognitive computing will help consumers shop for clothes, choose a bottle of wine or even create a new recipe. And IBM Watson is leading the charge.


6. Big money for big data
Increasingly, organizations are monetizing their data, whether by selling it or by providing value-added content. According to IDC, 70 percent of large organizations already purchase external data, and 100 percent are expected to do so by 2019. Accordingly, organizations must understand what their potential customers value and must become proficient at packaging data and value-added content products, experimenting to find the “right” mix of data and combining content analytics with structured data, delivered through dashboards, to help create value for external parties interacting with the analysis.


7. Real-time analytics and the Internet of Things
The Internet of Things (IoT) is expected to grow at a five-year CAGR of 30 percent and, in its role as a business driver, to lead many organizations to their first use of streaming analytics. Indeed, the explosion of data coming from the Internet of Things will accelerate real-time and streaming analytics, requiring data scientists and subject matter experts to sift through data in search of repeatable patterns that can be developed into event processing models. Event processing can then process incoming events, correlating them with relevant models and detecting in real time conditions requiring response. Moreover, event processing is an integral part of systems and applications that operationalize big data, for doing so involves continuous processing and thus requires response times as near to real time as possible.

8. Increased investments in skills
Many organizations want to combine business knowledge and analytics but have difficulty finding individuals who are skilled enough to do so. Leading companies in particular feel this talent gap keenly, for as they move to broaden skills across the enterprise, the need for combined skills becomes ever more apparent. Indeed, combined skills are of critical importance in speed-driven organizations, for such skills speed the translation of insights into actions through deep knowledge of the business drivers—and the data related to them—that are likely to affect performance.

Originally posted on Data Science Central
