


Unlocking Personal Data

Guest blog post by Daniel Calvo-Marin

Everyone interested in data is always looking for new data to integrate into their business, academic research, software solutions, and so on.

There are lots of data sets that are meaningful to some people and unimportant to others, but there is one kind of data that every person who is passionate about data will find interesting: personal data.

We leave logs of everything we do, often without knowing it. Here is a list of data sets we are generating that we can study to learn a little more about how we behave:

  1. Fitness data: if you use apps like Endomondo, Nike+, Adidas miCoach or MapMyRun, or you wear devices like a Jawbone, Misfit, Fitbit, Garmin and many more, you have data to study. Step counter, distance, speed, pace, calories burned and heart rate are some of the dimensions you can analyze to get a deep understanding of your exercise and improve on it. In some ways you can become your own coach by establishing realistic goals, scheduling exercise sessions more efficiently and planning rest days when you need them.
  2. Personal schedule: maybe this tool doesn’t sound as awesome as fitness bands or the latest gadgets, but it holds data that could help you a lot. If you are a heavy user of agendas and schedules, you can analyze them to predict your own future. Yes, I know it sounds crazy; you are the owner of your future. But how many times have you been late to an appointment? If you analyze your schedule, maybe you can define a “late index” for every entry in your agenda. Or what about reminders to get in touch with people you care about, based on when you last saw them? As for academics, maybe you can fit a model that sets the right time to start studying for an exam based on your past scores and study time, so the next time you schedule an exam it automatically schedules your study sessions as well.
  3. Chat history: depending on the chat service you use, you can export your chat history. First of all, it’s fun! You will definitely find things you don’t remember, and they will give you a laugh. After that, you can analyze your data to see how your relationships with others are going. Topics, number of messages, number of images and emoticons are some indicators to analyze. Another thing you can do is measure how much importance you are giving to a person. Are you being absorbed by your job? That is something you can learn from your chat history. What if you could apply sentiment analysis to your conversations? Are they pleasant, or is there someone you should avoid to prevent getting angry or upset? Go ahead and give “chat analytics” a try.
  4. Personal mail: in most cases this is a less intense communication channel than chat, but it will also help you understand how you behave. Analyze your vacation bookings, your order history in different online stores, blogs, forums, advertising and much more. You have a lot of information here! First of all, maybe you can create your own spam detector. Those used by the mail services are very good, but building one yourself is a good analytics exercise (a minimal code sketch follows this list). We all receive messages that are not important, so go ahead and make your own spam detector. After that, you can analyze your preferences for products or vacation spots to improve your next choice. And for the emails you never have time to read but might like to analyze, you could, for example, build a discount searcher that finds the promotions in your mail that interest you.
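To make the mail idea concrete, here is a minimal sketch of a do-it-yourself spam detector using a bag-of-words model and a naive Bayes classifier from scikit-learn. The example messages and labels are made up for illustration; in practice you would export your own mailbox and label a sample of it yourself.

```python
# A minimal, illustrative spam detector for your own mailbox.
# The tiny example corpus below is purely hypothetical.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

messages = [
    "Huge discount on flights this weekend only",
    "Dinner on Friday? Let me know what time works",
    "You have won a prize, click here to claim it",
    "Minutes from yesterday's project meeting attached",
]
labels = ["spam", "important", "spam", "important"]

# Bag-of-words features plus naive Bayes is a classic text-classification baseline.
model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(messages, labels)

print(model.predict(["Claim your discount prize now"]))    # likely 'spam'
print(model.predict(["Can we move the meeting to 3pm?"]))  # likely 'important'
```

With a few thousand of your own labelled messages instead of four, the same pipeline becomes a perfectly usable personal filter.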

This isn’t an exhaustive list. From now on, keep your eyes open so you can discover new sources of personal data you can analyze to improve your life. I think most of this kind of analytics could help you become more efficient in all of your activities, but I want to give you one piece of advice: despite all of this information, live every day as you want. Don’t feel restricted by your agenda, your chat-history analytics or your vacation-planning algorithm. You’re free to choose what you prefer at every moment of your life; this is just a help guide, not the final guide!

Originally posted here

Follow us @IoTCtrl | Join our Community

Read more…

Will 2016 be the Year you Clean up your Dirty Data?

Guest blog post by Martin Doyle

Forever, it seems, we’ve been warning about the dangers of low-quality data. Our warnings have been reinforced and echoed by some of the world’s biggest think tanks. Despite this, however, some organisations still haven’t acted to improve the quality of their data. And we’re left wondering why.

Over the last 12 months, we’ve blogged about business automation, and about cutting the waste that’s destroying your ROI. We’ve reminded you that your data is vulnerable to decay, and we correctly predicted that Google Now would become a bigger presence in our lives.

Despite our best efforts though:

  • 78% of companies have trouble getting emails delivered
  • 83% of companies are struggling with silos of data
  • 81% of retailers cannot leverage loyalty programs due to inaccurate data
  • 87% of financial institutions have difficulty obtaining reliable intelligence
  • 63% of companies still don’t have a coherent approach to data quality

These Experian data quality statistics prove that businesses are failing to take action. Their data quality challenges are growing, despite the fact that data quality software is getting better all the time.

Will 2016 be the year that the message finally gets through, or will we be singing from the same carol book this time next year?

What might you achieve in 2016?

The benefits of better data management are vast, and they extend to everyone who comes into contact with that data. For a profit-making business, or an efficiency-driven public sector organisation, we can divide the benefits into three distinct categories: Efficiency, Innovation and Experience.

  • There are benefits for the organisation in terms of better efficiency, which drives profit and reduces waste. Smart businesses are digitally transforming their processes, improving efficiency and automating certain tasks. They’re enhancing their data by joining it with intelligence from third-party sources, and they’re improving the usability of data by integrating separate systems and channels. For the public sector, there are massive cost savings in applying the same principles to the services they provide.
  • In turn, business innovations help to give employees the tools they need to do their jobs. Once you’ve digitally transformed a process, it will require less manual effort, and will usually be less reliant on problematic legacy IT systems. Your employees expect to be able to use cloud services, agile working practices and a reliable CRM, all of which hinge on having high data quality to work from. If you don’t implement better data management, staff will eventually circumvent your security policies and ‘go it alone’.
  • Better data management results in a better experience for customers, or service users. The improvements to the organisation, and its employees, have a positive knock-on effect for the people they’re serving. Public sector service users benefit from an experience that exceeds their expectations, with queries being answered more quickly and service levels being far improved. Paying customers will be more loyal, more trusting, and more inclined to spend.

Laying the foundations for 2016

Have you decided to get to grips with data quality in 2016?  Whether you’re planning cloud migration, digital transformation, or simply want to improve your bottom line, you can start putting the basics in place as soon as the Christmas tree is packed away.

Consider adding a Chief Data Officer to the organisation to act as a data quality ambassador. Why? Data is going to have to be valid and reliable all the time if customers are going to receive the quality of service they are looking for. This means that there has to be constant focus on data quality, rather than conducting occasional data quality reviews, and you need someone who can drive change in your processes and culture.

Additionally, look at the way the world around you is changing. A couple of years ago, tablet computers were on everyone’s Christmas list. This year, it’s wearable technology and products to automate the home. Data is already shifting towards centre stage, and this should provide all the inspiration you need to modernise your business accordingly.

Finally, imagine a world where your organisation was more streamlined and agile. Imagine the cost savings of automation and efficiency. Think about how much your staff could do if they didn’t have to duplicate their work. Consider how much more accurate your reports would be if you had access to reliable data, and how much money you’re currently gambling on data that doesn’t make sense.

By 2016, digital marketing will consume 35% of total marketing budgets. Experian says we’re already wasting £197 million because of bad data. How much more can you afford to waste? If you continue wasting money at the current rate, how long will it take for a more agile competitor to overtake you?

Turn over a new leaf in 2016 with better data

Data is the one constant in every business. It flows through every process and helps us make sense of what we do. We owe it to our staff, our users and our customers to manage data properly and improve its accuracy. The New Year presents a great time to change the way we’re managing data.

Data quality software is no longer a niche purchase, or something that can be pushed back into next year’s budget. It’s now an essential component in the workings of an efficient business. Not only that, but data quality is married to automation and integration, and your business needs both if it’s to survive.

From marketing to business intelligence, data quality is becoming a prime concern. Forrester predicts that 2016 will bring more personalisation, better customer experience, more ‘digitally savvy’ leaders and a requirement to make digital a “core driver of business transformation”. Will your organisation be one of the few that puts data quality at the heart of its strategy?

The original blog can be seen here.

Follow us @IoTCtrl | Join our Community

Read more…

The first prediction is that data and analytics will continue to grow at an astounding pace and with increased velocity

This is no big surprise as all the past reports have pointed towards this growth and expansion -

VentureBeat * notes that “Although the big data market will be nearly $50B by 2019 according to analysts, what’s most exciting is that the disruptive power of machine data analytics is only in its infancy. Machine analytics will be the fastest growing area of big data, which will have CAGR greater than 1000%.”

The move towards cloud-based solutions opens up opportunities, and it is not going to reverse. Following the trend of recent years, more and more companies are increasing their use of cloud-based solutions, and with this the opportunity to extract and collect data offers the potential to glean information and knowledge from that data.

Suhale Kapoor, Co-Founder and Executive Vice President, Absolutdata Analytics, * highlights “The fast shift to the cloud: The cloud has become a preferred information storage place. Its rapid adoption is likely to continue even in 2016. According to Technology Business Research, big data will lead to tremendous cloud growth; revenues for the top 50 public cloud providers shot up 47% in the last quarter of 2013, to $6.2 billion.”

It is not difficult to predict that in 2016 the cloud, and the opportunities it opens up for data, analytics and machine learning, will become huge drivers for business.


Applications will learn how to make themselves better

Applications will be designed to discover self-improvement strategies as a new breed of log and machine data analytics at the cloud layer, using predictive algorithms, enables continuous improvement, continuous integration and continuous deployment. The application will learn from its users; in this sense the users become the system architects, teaching the system what they want and how the system is to deliver it to them.

Gartner lists advanced machine learning among the top trends to emerge in 2016, * with “advanced machine learning where deep neural nets (DNNs) move beyond classic computing and information management to create systems that can autonomously learn to perceive the world, on their own … (being particularly applicable to large, complex datasets) this is what makes smart machines appear "intelligent." DNNs enable hardware- or software-based machines to learn for themselves all the features in their environment, from the finest details to broad sweeping abstract classes of content. This area is evolving quickly, and organisations must assess how they can apply these technologies to gain competitive advantage.” The capability to use advanced machine learning need not be confined to the information a system finds outside itself; it will also be introspective, applied to the system itself and to how it interfaces with human users.

A system performing data analytics needs to learn what questions it is being asked and how those questions are framed, as well as the vocabulary and syntax the user chooses to ask them. No longer will the user be required to struggle with the structure of queries and programming languages aimed at eliciting insight from data. The system will understand the user’s natural-language requests, such as “get me all the results that are relevant to my understanding of x, y and z”. The system will be able to do this because of its experience of users asking these questions many times in structured programming languages (a corpus of language that the machine has long understood) and matching them to a new vocabulary that is more native to the non-specialised user.

2016 will be the year these self-learning applications emerge, due to changes in the technology landscape. As Himanshu Sareen, CEO at Icreon Tech, * points out, this move to machine learning is being fuelled by the technology that is becoming available: “Just as all of the major cloud companies (Amazon Web Services, Google, IBM, etc.) provide analytics as a service, so do these companies provide machine learning APIs in the cloud. These APIs allow everyday developers to ‘build smart, data-driven applications’.” It would be foolish for these developers not to consider making their systems self-learning.

Our prediction is that through 2016 many more applications will become self-learning, thanks to developments in deep learning technology.


Working with data will become easier

While the highly specialised roles of the programmer, the data scientist and the data analyst are not going to disappear, the exclusivity of the insights they have been party to is set to dissipate. Knowledge gleaned from data will not remain in the hands of the specialist, and technology will once again democratise information. The need for easy-to-use applications providing self-serve reports and self-serve analysis is already recognised by business. According to Hortonworks Chief Technology Officer Scott Gnau, * “There is a market need to simplify big data technologies, and opportunities for this exist at all levels: technical, consumption, etc.” … “Next year there will be significant progress towards simplification.”

Data will become democratised, first from programmers, then from data scientists and finally from analysts as Suhale Kapoor, Co-Founder and Executive Vice President, Absolutdata remarks “Even those not specially trained in the field will begin to crave a more mindful engagement with analytics. This explains why companies are increasingly adopting platforms that allow end users to apply statistics, seek solutions and be on top of numbers.” … “Humans can’t possibly know all the right questions and, by our very nature, those questions are loaded with bias, influenced by our presumptions, selections and what we intuitively expect to see. In 2016, we’ll see a strong shift from presumptive analytics — where we rely on human analysts to ask the right, bias-free questions — toward automated machine learning and smart pattern discovery techniques that objectively ask every question, eliminating bias and overcoming limitations.”

 “Historically, self-service data discovery and big data analyses were two separate capabilities of business intelligence. Companies, however, will soon see an increased shift in the blending of these two worlds. There will be an expansion of big data analytics with tools to make it possible for managers and executives to perform comprehensive self-service exploration with big data when they need it, without major handholding from information technology (IT), predicts a December study by business intelligence (BI) and analytics firm Targit Inc.” *…“Self-service BI allows IT to empower business users to create and discover insights with data, without sacrificing the greater big data analytics structures that help shape a data-driven organisation,” Ulrik Pedersen, chief technology officer of Targit, said in the report.

We are able to confidently predict that in 2016 more and more applications for analysing data will require less technical expertise.


Data integration will become the key to gaining useful information

The maturity of big data processing engines enables agile exploration of data and agile analytics that make huge volumes of disparate and complex data fathomable. Connecting and combining datasets unlocks the insights held across data silos, and this will increasingly be done automatically in the background by SaaS applications rather than by manually manipulating spreadsheets.
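As a toy illustration of that kind of blending, the sketch below joins two hypothetical data sets (sensor readings and a device registry) with pandas. The column names, values and thresholds are assumptions for the example, not a reference to any product mentioned in this post.

```python
# Blending two hypothetical data silos: sensor readings and a device registry.
# All column names and values here are illustrative assumptions.
import pandas as pd

readings = pd.DataFrame({
    "device_id": ["d1", "d1", "d2", "d3"],
    "temperature_c": [21.5, 22.1, 35.7, 19.8],
})
registry = pd.DataFrame({
    "device_id": ["d1", "d2", "d3"],
    "site": ["Plant A", "Plant A", "Plant B"],
    "max_temp_c": [30.0, 30.0, 25.0],
})

# Joining the silos lets us ask questions neither set answers alone,
# e.g. which devices are running above their rated temperature and where.
combined = readings.merge(registry, on="device_id", how="left")
alerts = combined[combined["temperature_c"] > combined["max_temp_c"]]
print(alerts[["device_id", "site", "temperature_c"]])
```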

David Cearley, vice president and Gartner Fellow, postulates “The Device Mesh”, which “refers to an expanding set of endpoints people use to access applications and information or interact with people, social communities, governments and businesses”. He notes that “In the postmobile world the focus shifts to the mobile user who is surrounded by a mesh of devices extending well beyond traditional mobile devices,” devices that are “increasingly connected to back-end systems through various networks”, and that “As the device mesh evolves, we expect connection models to expand and greater cooperative interaction between devices to emerge”.

In the same report Cearley says that “Information has always existed everywhere but has often been isolated, incomplete, unavailable or unintelligible. Advances in semantic tools such as graph databases as well as other emerging data classification and information analysis techniques will bring meaning to the often chaotic deluge of information.”

It is an easy prediction, but more and more data sets will be blended from different sources, allowing more insights; this will be a noticeable trend during 2016.


Seeing becomes all-important: visualisations are the key to unlocking the path from data to information to knowledge

Having the ability to collect and explore complex data leads to an inevitable need for a toolset to understand it. Tools that can present the information in these complex data sets as visual representations have been getting more mature and more widely adopted. As Suhale Kapoor, Co-Founder and Executive Vice President, Absolutdata Analytics, * puts it: “Visuals will come to rule: The power of pictures over words is not a new phenomenon – the human brain has been hardwired to favour charts and graphs over reading a pile of staid spreadsheets. This fact has hit data engineers who are readily welcoming visualisation softwares that enable them to see analytical conclusions in a pictorial format.”

The fact that visualisations do leverage knowledge from data will lead to more adaptive and dynamic visualisation tools: “Graphs and charts are very compelling, but also static, sometimes giving business users a false sense of security about the significance — or lack of it — in the data they see represented. … data visualisation tools will need to become more than pretty graphs — they’ll need to give us the right answers, dynamically, as trends change …  leading to dynamic dashboards … automatically populating with entirely new charts and graphs depicting up-to-the-minute changes as they emerge, revealing hidden insights that would otherwise be ignored”*

We predict that in 2016 a new data-centric semiotics, a visual language for communicating data-derived information, will become stronger, grow in importance and be the engine of informatics.

Originally posted on Data Science Central

Follow us @IoTCtrl | Join our Community

Read more…

Top 5 Trends in Big Data Analytics

While many of us recognize that actionable data insights empower companies and help drive sales, loyalty and superior customer experiences, the thought of making sense of enormous quantities of information, and undertaking the task of unifying it, is daunting. But that is slowly changing. Experts forecast that this year most companies will allocate budgets and discover the best tools and resources to really harness their data, and that 2015 will undoubtedly be the year of big data.

Information gathering has developed radically, and both C-level executives and their teams now recognize they have to join the big data arms race to keep and grow their customer base and to stay competitive in today's data-driven marketplace. Terms like in-memory databases, sensor data, customer data platforms and predictive analytics will become more widely understood.

With terabytes of data being gathered by companies across multiple touchpoints, platforms, devices and offline locations, companies will start to focus more on owning their data, on being able to access, visualize and control it, and on monetizing their audience in real time with the right content. More emphasis will be placed on how ethically data is collected, how clean it is, and on not becoming an information hoarder that accumulates data you don't really need.

Here are the top 5 data trends that we predict will reign in 2015:

1. Data agility will take center stage


It's not sufficient to simply own quantities of customer data if that data is not agile. More companies are seeking simple, quick and easy approaches to offer unified and protected use of customer data across departments and systems. CMOs, CTOs, data scientists, business analysts, programmers and sales teams all have the same pressing need for tools and training to help them navigate their customer data. With the growing popularity of wearables, sensors and IoT devices, there's additional real-time data flooding in. Plus, having customer data stored on multiple legacy platforms and third-party vendor systems only makes data agility that much more challenging. Most firms only use about 12.5% of their available data to grow their company. Having access to the proper tools that make customer data more agile and easy to use is going to be a significant focus of businesses in 2015.

2. Information is the New Gold & Puts Businesses In Control
For many businesses, the most commonly faced data need is ownership and unification: volumes of data are being generated every second and stored on multiple legacy platforms that still use dated structures, and all this customer data cannot be accessed in a single place to get a "complete view" of their customers. But with technology that makes data unification easier and the introduction of new tools, businesses are beginning to appreciate the value of controlling and owning their customer data. The frustrations of working with multiple third-party vendors to gain possession of data, along with a lack of data rights that permit you to automatically pull data from those vendors, are major pain points that will be addressed. Companies can now select from a variety of systems like Umbel to help gather first-party customer data from multiple online and offline sources, platforms and vendors, own and unify the data, and make use of it in real time to power and optimize marketing and sales efforts.

3. The Rise of Customer Information Platforms

While DMPs and CRMs help fulfill many business needs, today's marketers want a centralized customer data platform like Umbel that analyzes their customer base and gives them deep insights into it. Very few businesses really have one genuinely complete, unified customer database solution. They're largely still using multiple systems and platforms that collect data separately.

A CMO's top priority will probably be to own a reliable customer data platform that collects accurate customer data from all online and offline touchpoints (including website visits and purchases, social interactions, beacon data, mobile and in-store interactions, etc.), removes duplicates and appends it with additional data (demographic, geographic, behavioral, brand affinity) from other trusted sources.

4. Info Democratization Across Departments
The abundance of customer data available to brands today is staggering, and yet many companies have yet to fully use that data to supercharge marketing and sales efforts. Among the biggest hurdles marketers face is that access to this data is quite limited at most firms. First, only larger companies with IT resources have had the capacity to gather, store, analyze and monetize this precious data. Second, even when data is being collected, the IT department and/or the business analytics teams have restricted access to it, and the sales and marketing teams that actually use this data must go through a convoluted, time-consuming procedure to get the data and insights they need.

With new tools like Umbel, teams don't need a data scientist to make sense of their data.

For data to be genuinely valuable to an organization, it is critical that it be democratized across teams and departments, empowering all employees, irrespective of their technical expertise, to access data and make more informed decisions. In 2015 more companies will start to use automated platforms that enable anyone in the organization to view, analyze and act on customer data.

5. Mobile Data and Strategy Will Become Vital to Marketing
According to eMarketer, mobile search ad spend in the U.S. grew 120.8% in 2013 (an overall gain of 122.0% for all mobile advertising). Meanwhile, desktop advertising spending went up by just 2.3% last year. Mobile apps and sites have become essential components of any retailer's marketing plan. For companies to remain competitive, a seamless, secure, fast and intuitive experience on mobile devices, and the ability to capture this mobile data and add it to a unified customer database, is critical. Having this unified view of customers from every touchpoint (including mobile and offline) will enable firms to identify trends and shape a better customer experience. More companies are becoming aware of how important it is to be able to unify their data and compare analytics across all platforms to help them create personalised marketing campaigns centered on a "complete customer view."

Originally posted on Data Science Central

Follow us @IoTCtrl | Join our Community

Read more…

Guest blog and great infographic from Matt Zajechowski

It’s no secret that analytics are everywhere. We can now measure everything, from exabytes of organizational “big data”  to smaller, personal information like your heart rate during a run. And when this data is collected, deciphered, and used to create actionable items, the possibilities, both for businesses and individuals, are virtually endless.

One area tailor-made for analytics is the sports industry. In a world where phrases like “America’s pastime” are thrown around and “the will to win” is revered as an intangible you can’t put a number on, stats lovers with PhDs in analytics are becoming more and more essential to sports franchises. Since the sabermetric revolution, sports franchises have begun investing time and money in using sports analytics from wearable technology to help their athletes train and even make more money from their stadiums.

Today, Sports Fans Prefer the Couch Over the Stadium

For decades, television networks have tried to create an at-home experience that’s on par with the stadium experience — and they’ve succeeded emphatically. In a 1998 ESPN poll, 54% of sports fans reported that they would rather be at the game than watch it at home; when that same poll was readministered in 2011, however, only 29% preferred being at the game.

While this varies by sport to some degree, the conclusion is clear: people would rather watch a game in the comfort of their own climate-controlled homes, with easy access to the fridge and a clean bathroom, than experience the atmosphere of the stadium in person. Plus, sports fans today want the ability to watch multiple games at once; it’s not unusual for diehard fans to have two televisions set up with different games on, plus another game streaming on a tablet.

However, fans could be persuaded to make their way back to the stadiums; 45% of “premium fans” (who always or often buy season tickets) would pay more money for a better in-person experience. That’s where wearable technology comes into play.

Wearable Data — for Fans Too

At first glance, the sole application of wearable technology and data science would seem to be monitoring and improving athlete performance. These tasks might include measuring heart rate and yards run, timing reactions and hand speed, gauging shot arc, and more, while also monitoring the body for signs of concussion or fatigue.

And that’s largely true. For example, every NBA arena now uses SportVU, a series of indoor GPS technology-enabled cameras, to track the movements of the ball and all players on the court at a rate of 25 times per second. With that data, they can use myriad statistics concerning speed, distance, player separation, and ball possession to decide when to rest players.
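As a rough sketch of how per-frame tracking positions turn into statistics like the ones above, the code below computes distance covered and instantaneous speed from hypothetical x/y samples captured 25 times per second. It is not the SportVU API, just the underlying arithmetic on made-up coordinates.

```python
# Distance and speed from positional tracking samples (25 frames per second).
# The coordinates below are made up; real tracking feeds are far richer.
import numpy as np

FPS = 25  # samples per second
# Player position in metres, one row per frame: [x, y]
positions = np.array([
    [0.0, 0.0],
    [0.2, 0.1],
    [0.5, 0.3],
    [0.9, 0.6],
])

steps = np.diff(positions, axis=0)         # displacement between frames
step_dist = np.linalg.norm(steps, axis=1)  # metres moved each frame
total_distance = step_dist.sum()           # metres covered over the clip
speeds = step_dist * FPS                   # metres per second, per frame

print(f"distance covered: {total_distance:.2f} m")
print(f"peak speed: {speeds.max():.2f} m/s")
```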

Similarly, Adidas’ Micoach is used by the German national soccer team during training to monitor speed, running distances, and heart rates of each player. In fact, this system is credited with the decision to sub in German soccer player Mario Gotze in the 88th minute of the 2014 World Cup final; in the 113th minute, the midfielder scored the World Cup-winning goal.

However, some sports franchises are using that wearable technology to benefit the fan sitting in the stadium. For example, the Cleveland Cavaliers’ Quicken Loans Arena (an older stadium) was retrofitted with SportVU; however, they don’t use it just for determining when LeBron James needs a break. Instead, the Cavs use the data tracked by SportVU to populate their Humungotron with unique statistics tracked in real time during the game. The Cavs then took this data to the next level by using the stats in their social media marketing and to partner with various advertisers.

How Analytics Are Improving the Stadium Experience

Besides sharing interesting statistics on the JumboTron during the game, stadiums are using data from athletes and fans to enhance the spectators’ experience. In fact, stadiums are actually mirroring the in-home experience, through various apps and amenities that reach the spectator right in their seat.

And at times, they’re going above and beyond simply imitating the in-home experience. Take the Sacramento Kings, for example. In 2014, the team partnered with Google to equip many of its courtside personnel (mascots, reporters, and even dancers) with Google Glass. Fans were able to stream close-up, first-person views of the action through their mobile devices, allowing them to feel closer than their upper-level seats would suggest.

Levi’s Stadium in Santa Clara (home of the San Francisco 49ers) boasts a fiber optic network that essentially powers every activity in their thoroughly modern stadium. The stadium contains 680 Wi-Fi access ports (one for every 100 seats in the stadium) and around 12,000 ethernet ports, allowing everything from video cameras and phones to connect to a 40 gigabit-per-second network that’s 10,000 times faster than the federal classification for broadband. 1700 wireless beacons use a version of Bluetooth to triangulate a fan’s position within the stadium and give them directions. And for fans who don’t want to leave their seats, a specially developed app can be used for tickets, food delivery to your seat, and watching replays of on-field action.

The Miami Dolphins, meanwhile, have partnered with IBM and use technology from their “Smart Cities” initiative to monitor and react to weather forecasts, parking delays, and even shortages of concessions at specific stands in Sun Life Stadium. The Dallas Cowboys’ AT&T Stadium features 2,800 video monitors throughout the stadium as well as more than five million feet of fiber optic cable, used for everything from gathering data to ordering food in-suite.

NFL teams aren’t the only franchises making use of sports analytics. The Barclays Center, home of the Brooklyn Nets, uses Vixi to display properly hashtagged tweets on multiple big screens throughout the arena. They also use AmpThink, a series of networking tools that require the user to submit some personal information before logging onto the arena’s Wi-Fi; that way, they’re able to collect data on how and where people are logging in, as well as what services they’re using while in the arena. Fans can already order food and drink from their seats and replay sequences from various camera angles, and in the future, they’ll be able to use an app that gives information about restroom waits and directions to the restrooms with the shortest lines.

To some, the increase of connectivity might seem to take away from the experience of watching a game live; after all, how can you enjoy live action if you’re constantly staring down at your phone? On the contrary: by employing these apps to judge the shortest bathroom lines or order food directly to their seats, fans are able to stay in their seat longer and watch more of the games.

While this technology certainly isn’t cheap (and will be reflected in increased ticket prices), those extra minutes of action may be worth the higher cost to some fans. Ultimately, it’s up to the fans to decide if paying more for tickets is worth the premium experience — and the time saved waiting in line.

Bringing Fans Back, One Byte at a Time

Sports teams aren’t going to lose their fans to television without a fight. And with the majority of sports franchises embracing wearable and mobile data in some form or another, it’s a natural transition for marketing departments to apply that data to the fan experience. With easy access to Wi-Fi, snacks, replays, and shorter restroom lines, sports fans can combine the atmosphere of game day with the comfort of being in their own homes.

Originally posted on Data Science Central

Follow us @IoTCtrl | Join our Community

Read more…

Big Data from Small Devices?

Predictions are in our DNA.  Millions of us live with them daily, from checking the weather to reading daily horoscopes.   When it comes to Big Data, the industry has shown no shortage of predictions for 2014.  In fact, you might have read about insights on women in data science, ambitions for Machine Learning or a vision for the consumerization of Advanced Analytics.

It is quite difficult to accurately assess when these predictions will materialize.  Some of them will see the light of the day in 2014 but many might take until 2020 to fully mature. 

Wearable Devices and Big Data

Take the case of wearable devices. There is no question that mobile phones, tablets and smart watches will become pervasive over the next 5 years. According to Business Insider, the market for wearables could reach $12B in 2018, and these devices have a strong potential for changing our habits altogether.

The only issue is how quickly we will adopt them and in turn get clear value from them.  Pioneers like Robert Scoble have made a great case for the opportunity but also have provided a down to earth perspective for the rest of us (his recent article on “Why Google Glass is doomed ” is a gem).

So I predict that, while the tipping point for such technologies might be 2014, the true disruption might not happen before 2020.  Why?  Definitions and Center of Design.

For starters, the definition of a “wearable device” is still very loose.  I’m a big fan of devices like the Jawbone UP, the Fitbit and the Basis watch.  In fact, I’ve built an analytical system that allows me to visualize my goals, measure and predict my progress already. My “smart devices” collect information I couldn’t easily understand before and offer the opportunity to know more about myself.  Big Data growth will primarily come from these types of smart devices. 

The wearables that are still confusing are the so-called “smart watches”.  These watches, in my opinion, suffer from a “Center of Design” dilemma.

Let me explain: the technology industry is famous for wanting new technologies to sunset old ones.  When Marc Benioff introduced Chatter, he said it would obliterate email.  When PC shipments went down, the industry rushed to talk about the “Post-PC” era.  Has either of these two trends fully materialized yet?

The answer is unfortunately not simple.  Smart watches, phones, tablets and PCs all have distinct use cases, just like email and social apps.  Expecting that one technology would completely overlap another would be disregarding what I call a product’s “center of design”.  The expression refers to the idea that a particular technology can be stretched for many uses but that it is particularly relevant for a set of defined use cases.  Let’s take the example of the phone, tablet and PC:

  • A phone is best used for quickly checking texts, browsing emails, calendar invites…and of course making phone calls (duh!)
  • A tablet is best used for reading and browsing websites, documents, books and emails.  Typing for 12 hours and creating content is possible but it’s not a tablet’s center of design…
  • A PC or a MacBook is best for creating content for many hours.  They might be best for typing, correcting and working on projects that require lots of editing.

When I see an ad like this on the freeway, I really question the value of an additional device.  What can a watch add in this case, if the wrist that wears it is also attached to a hand that holds a much more appropriate device?

Big Data from Wearables is a Predictive Insight for 2020 in my opinion, because I think that, by then, the broad public will have embraced them into use cases that truly add value to their lives.

--

Bruno Aziza is a Big Data entrepreneur and author.  He has led Marketing at multiple start-ups and has worked at Microsoft, Apple and BusinessObjects/SAP.  One of his startups sold to Symantec in 2008, and two of them have raised tens of millions and experienced triple-digit growth.  Bruno is currently Chief Marketing Officer at Alpine Data Labs, loves soccer and has lived in France, Germany and the U.K.

Originally posted on Data Science Central

Follow us @IoTCtrl | Join our Community

Read more…

Guest blog post by Mike Davie.

With the exponential growth of IoT and M2M, data is seeping out of every nook and cranny of our corporate and personal lives. However, harnessing data and turning it into a valuable asset is still in its infancy.  In a recent study, IDC estimates that only 5% of data created is actually analyzed. Thankfully, this is set to change as companies have now found lucrative revenue streams by converting their data into products.

Impediments to Data Monetization

Many companies are unaware of the value of their data, the type of customers who might potentially be interested in those data, and how to go about monetizing the data. To further complicate matters, many also are concerned that the data they possess, if sold, could reveal trade secrets and personalized information of their customers, thus violating personal data protection laws.  

Dashboards and Applications

The most common approach for companies who have embarked on data monetization is to develop a dashboard or application for the data, thinking that it would give them greater control over the data. However, there are several downsides to this approach:

  • Limited customer base
    • The dashboard or application is developed with only one type of customer in mind, thus limiting the potential of the underlying data to reach a wider customer base.
  • Data is non-extractable
    • The data in a dashboard or application cannot be extracted to be mashed up with other data, with which valuable insights and analytics can be developed.
  • Long lead time and high cost to develop
    • Average development time for a dashboard or application is 18 months. Expensive resources including those of data scientists and developers are required.  

Data as a Product

What many companies have failed to realize is that the raw data they possess could be cleansed, sliced and diced to meet the needs of data buyers. Aggregated and anonymized data products have a number of advantages over dashboards and applications.

  • Short lead time and less cost to develop
    • The process of cleaning and slicing data into bite size data products could be done in a 2-3 month time frame without the involvement of data scientists.
  • Wide customer base
    • Many companies and organizations could be interested in your data product.  For example, real time footfall data from a telco could be used in a number of ways:
      • A retailer could use mall foot traffic to determine the best time of the day to launch a new promotion to drive additional sales during off-peak hours.
      • A logistics provider could be combining footfall data with operating expenses to determine the best location for a new distribution centre.
      • A maintenance company could be using footfall to determine where to allocate cleaners to maximize efficiency, while ensuring clean facilities.
  • Data is extractable
    • Data in its original form could be meshed and blended with other data sources to provide unique competitive advantages.  For example:
      • An airline could blend real time weather forecast data with customer profile data to launch a promotion package prior to severe bad weather for those looking to escape for the weekend.
      • Real time ship positioning data could be blended with a port’s equipment operation data to minimize downtime of the equipment and increase overall efficiency of the port.

Monetizing your data does not have to be a painful and drawn-out undertaking if you view data itself as the product. By taking your data product to market, data itself can become one of your company’s most lucrative and profitable revenue streams. By developing a data monetization plan now, you can reap the rewards of the new Data Economy.

About the Author:

Mike Davie has been leading the commercialization of disruptive mobile technology and ICT infrastructure for a decade with leading global technology firms in Asia, Middle East and North America.

He parlayed his vision and knowledge of the evolution of ICT into the creation of DataStreamX, the world's first online marketplace for real-time data. DataStreamX’s powerful platform enables data sellers to stream their data to global buyers across various industries in real time, multiplying their data revenue without having to invest in costly infrastructure and sales teams. DataStreamX's online platform puts a plethora of real-time data at data-hungry buyers' fingertips, enabling them to broaden and deepen their understanding of the industry they compete in, and to devise effective strategies to out-manoeuvre their competitors.

Prior to founding DataStreamX, Mike was a member of the Advanced Mobile Product Strategy Division at Samsung where he developed go-to-market strategies for cutting edge technologies created in the Samsung R&D Labs. He also provided guidance to Asia and Middle East telcos on their 4G/LTE infrastructure data needs and worked closely with them to monetize their M2M and telco analytics data.

Mike has spoken at ICT and Big Data conferences including 4G World, LTE Asia and the Infocomm Development Authority of Singapore's IdeaLabs Sessions. Topics of his talks include Monetization of Data Assets, Data-as-a-Service, and the Dichotomy of Real-time vs. Static Data.

Originally posted on Data Science Central

Follow us @IoTCtrl | Join our Community

Read more…

Guest blog post by ajit jaokar

The Open Cloud – Apps in the Cloud 

Smart Data

Based on my discussions at Messe Hannover, this blog explores the potential of applying Data Science to manufacturing and process control industries. In my new course at Oxford University (Data Science for IoT) and community (Data Science and Internet of Things), I explore the application of predictive algorithms to Internet of Things (IoT) datasets.

The Internet of Things plays a key role here because sensors in machines and process control industries generate a lot of data. This data has real, actionable business value (Smart Data). The objective of Smart Data is to improve productivity through digitization. I had a chance to speak to Siemens management and engineers about how this vision of Smart Data is translated into reality.

 

When I discussed the idea of Smart Data with Siegfried Russwurm, Prof. Dr.-Ing., Member of the Managing Board of Siemens AG, he spoke of key use cases that involve transforming big data into business value by providing context, increasing efficiency and addressing large, complex problems. These include applications for oil rigs, wind turbines, process control industries and more. In these industries, the smallest productivity increase translates to huge commercial gains.

This blog is my view on how this vision (Smart Data) could translate into reality within the context of Data Science and IoT.


Data: the main driver of the Industrie 4.0 ecosystem

At Messe Hannover, it was hard to escape the term ‘Industry 4.0’ (in German, ‘Industrie 4.0’). Broadly, Industry 4.0 refers to the use of electronics and IT to automate production and to create intelligent networks along the entire value chain that can control each other autonomously. Machines generate a lot of data. In many cases, if you consider a large installation such as an oil rig, this data is bigger than traditional ‘Big Data’. Its use case is also slightly different, i.e. the value does not lie in capturing a lot of data from outside the enterprise, but rather in capturing (and making innovative uses of) a large volume of data generated within the enterprise. The ‘smart’ in Smart Data is predictive and algorithmic. Thus, data is the main driver of Industry 4.0, and it is important to understand the flow of data before it can be optimized.

The flow of Data in the Digital Enterprise

The ‘digital factory’ is already a reality: industrial Ethernet standards like Profinet, PLM (product lifecycle management) software like Teamcenter, and data models for lifecycle engineering and plant management such as Comos are already in place. To extend the digital factory to achieve end-to-end interconnection and autonomous operation across the value chain (as is the vision of Industry 4.0), we need a further component in the architecture.

The Open Cloud: Paving the way for Smart Data analytics

In that context, the cooperation of Siemens with SAP to create an open cloud platform is very interesting. The Open Cloud enables ‘apps in the cloud’ based on the intelligent use of large quantities of data. The SAP HANA architecture, based on an in-memory, columnar database, provides analytics services in the cloud. Examples include “Asset Analytics” (increasing the availability of machines through online monitoring, pattern recognition, simulation and prediction of issues) and “Energy Analytics” (revealing hidden energy-savings potential).

Conclusions

While it is early days, based on the above, the manufacturing domain offers real value and tangible benefits to customers. Even now, we see that customers who harness value from large quantities of data through predictive analytics stand to gain significantly. I will cover this subject in more detail as it evolves.

About the author

Ajit's work spans research, entrepreneurship and academia relating to IoT, predictive analytics and Mobility. His current research focus is on applying data science algorithms to IoT applications. This includes time series, sensor fusion and deep learning. This research underpins his teaching at Oxford University (Big Data and Telecoms) and the City Sciences program at the Technical University of Madrid (UPM). Ajit also runs a community/learning program through his company, futuretext, for Data Science and IoT.

Follow us @IoTCtrl | Join our Community

Read more…

Guest blog by Jin Kim, VP Product Development for Objectivity, Inc.

Almost any popular, fast-growing market experiences at least a bit of confusion around terminology. Multiple firms are frantically competing to insert their own “marketectures,” branding, and colloquialisms into the conversation with the hope their verbiage will come out on top.

Add in the inherent complexity at the intersection of Business Intelligence and Big Data, and it’s easy to understand how difficult it is to discern one competitive claim from another. Everyone and their strategic partner is focused on “leveraging data to glean actionable insights that will improve your business.” Unfortunately, the process involved in achieving this goal is complex, multi-layered, and very different from application to application depending on the type of data involved.

For our purposes, let’s compare and contrast two terms that are starting to be used interchangeably – Information Fusion and Data Integration. These two terms in fact refer to distinctly separate functions with different attributes. By putting them side-by-side, we can showcase their differences and help practitioners understand when to use each.

Before we delve into their differences, let’s take a look at their most striking similarity. Both of these technologies and best practices are designed to integrate and organize data coming in from multiple sources in order to present a unified view of that data for consumption by various applications, making it easier for analytics applications to derive the “actionable insights” everyone is looking to generate.

However, Information Fusion diverges from Data Integration in a few key ways that make it much more appropriate for many of today’s environments.

• Data Reduction – Information Fusion is, first and foremost, designed to enable data abstraction. So, while data integration focuses on combining data to create consumable data, Information Fusion frequently involves “fusing” data at different abstraction levels and differing levels of uncertainty to support a more narrow set of application workloads.

• Handling Streaming/Real-Time Data – Data Integration is best used with data-at-rest or batch-oriented data. The problem is that the most compelling applications associated with Big Data and the Industrial Internet of Things are often based on streaming sensor data. Information Fusion is capable of integrating, transforming and organizing all manner of data (structured, semi-structured, and unstructured), but specifically time-series data, for use by today’s most demanding analytics applications, bridging the gap between Fast Data and Big Data. Another way to put this: Data Integration creates an integrated set of data in which the larger set is retained. By comparison, Information Fusion uses multiple techniques to reduce the amount of stateless data and provide only the stateful (valuable and relevant) data, delivering improved confidence (a minimal sketch of this kind of stream reduction follows this list).

• Human Interfaces – Information Fusion also adds in the opportunity for a human analyst to incorporate their own contributions to the data in order to further reduce uncertainty. By adding and saving inferences and detail that can only be derived with human analysis and support into existing and new data, organizations are able to maximize their analytics efforts and deliver a more complete “Big Picture” view of a situation.
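To make the data-reduction idea more tangible, here is a minimal sketch of reducing a stream of raw sensor readings to the "stateful" records an analyst might actually care about: per-window aggregates plus individual out-of-range events. The window size, threshold and reading format are assumptions for the example; this is a generic illustration, not a description of any particular fusion product.

```python
# Reduce a raw reading stream to windowed aggregates plus notable events.
# Thresholds, window size and the reading format are illustrative assumptions.
from statistics import mean

WINDOW = 5          # readings per summary window
THRESHOLD = 80.0    # readings above this are kept as individual events

def fuse(readings):
    """Yield compact summaries and events instead of every raw reading."""
    window = []
    for value in readings:
        if value > THRESHOLD:
            yield ("event", value)          # stateful: worth keeping as-is
        window.append(value)
        if len(window) == WINDOW:
            yield ("summary", min(window), mean(window), max(window))
            window = []

raw_stream = [71.2, 69.8, 70.5, 95.3, 70.1, 69.9, 70.4, 70.2, 70.0, 70.3]
for record in fuse(raw_stream):
    print(record)   # ten raw readings become one event and two summaries
```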

As you can see, Information Fusion, unlike Data Integration, focuses on deriving insight from real-time streaming data and enriching this stream with semantic context from other Big Data sources. This is a critical distinction, as today’s most advanced, mission-critical analytical applications are starting to look to Information Fusion to add real-time value.

Originally posted on Data Science Central

Follow us @IoTCtrl | Join our Community

Read more…

Brontobytes, Yottabytes, Geopbytes, and Beyond

Guest blog post by Bill Vorhies

Now that everyone is thinking about IoT and the phenomenal amount of data that will stream past us and presumably need to be stored, we need to break out a vocabulary well beyond our comfort zone of mere terabytes (about the size of a good hard drive on your desk).

In this article Beyond Just “Big” Data author Paul McFedries argues for nomenclature even beyond Geopbytes (and I'd never heard of that one).  There is a presumption though that all that IoT data actually needs to be stored which is misleading.  We may want to store some big chunks of it but increasingly our tools are allowing for 'in stream analytics' and for filtering the stream to identify only the packets we're interested in.  I don't know that we'll ever need to store Geopbytes but you'll enjoy his argument.  Use the link Beyond Just “Big” Data.
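As a tiny illustration of filtering the stream rather than storing everything, the generator below keeps only the packets of interest from a simulated feed and discards the rest. The packet format and the "interesting" predicate are assumptions invented for the example.

```python
# Keep only the packets we care about from a (simulated) unbounded stream,
# instead of persisting every byte that flows past.
import random

def packet_stream(n=1000):
    """Simulated feed: each packet carries a source and a measurement."""
    for _ in range(n):
        yield {"source": random.choice(["pump", "valve", "fan"]),
               "vibration": random.uniform(0.0, 2.0)}

def interesting(packet):
    # Hypothetical predicate: only pump packets with high vibration matter.
    return packet["source"] == "pump" and packet["vibration"] > 1.8

kept = [p for p in packet_stream() if interesting(p)]
print(f"kept {len(kept)} of 1000 packets for storage and analysis")
```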

Here's the beginning of his thoughts:

Beyond Just “Big” Data

We need new words to describe the coming wave of machine-generated information

When Gartner released its annual Hype Cycle for Emerging Technologies for 2014, it was interesting to note that big data was now located on the downslope from the “Peak of Inflated Expectations,” while the Internet of Things (often shortened to IoT) was right at the peak, and data science was on the upslope. This felt intuitively right. First, although big data—those massive amounts of information that require special techniques to store, search, and analyze—remains a thriving and much-discussed area, it’s no longer the new kid on the data block. Second, everyone expects that the data sets generated by the Internet of Things will be even more impressive than today’s big-data collections. And third, collecting data is one significant challenge, but analyzing and extracting knowledge from it is quite another, and the purview of data science.

Follow us @IoTCtrl | Join our Community

Read more…

Guest blog post by ajit jaokar

Often, Data Science for IoT differs from conventional data science due to the presence of hardware.

Hardware could be involved in integration with the Cloud or Processing at the Edge (which Cisco and others have called Fog Computing).

Alternately, we see entirely new classes of hardware specifically involved in Data Science for IoT (such as IBM's SyNAPSE chip for deep learning).

Hardware will increasingly play an important role in Data Science for IoT.

A good example is from a company called Cognimem, which natively implements classifiers (unfortunately, the company does not seem to be active any more, judging from its Twitter feed).

In IoT, speed and real time response play a key role. Often it makes sense to process the data closer to the sensor.

This allows for a limited / summarized data set to be sent to the server if needed and also allows for localized decision making.  This architecture leads to a flow of information out from the Cloud and the storage of information at nodes which may not reside in the physical premises of the Cloud.
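A minimal sketch of that pattern: an edge node keeps raw samples locally, makes a local decision the moment a limit is exceeded, and only forwards a compact summary upstream. The class name, thresholds and batch size are hypothetical, not tied to any of the vendor stacks discussed below.

```python
# Edge-side processing: decide locally, send only a summary to the cloud.
# Names, thresholds and batch size are illustrative assumptions.
from statistics import mean

class EdgeNode:
    def __init__(self, limit=75.0, batch_size=10):
        self.limit = limit
        self.batch_size = batch_size
        self.buffer = []

    def ingest(self, reading):
        # Localized decision making: react immediately, no cloud round trip.
        if reading > self.limit:
            self.actuate(reading)
        self.buffer.append(reading)
        # Only a summarized data set is forwarded to the server.
        if len(self.buffer) == self.batch_size:
            summary = {"min": min(self.buffer),
                       "mean": round(mean(self.buffer), 2),
                       "max": max(self.buffer)}
            self.buffer = []
            return summary      # in practice: publish this to the cloud
        return None

    def actuate(self, reading):
        print(f"local action: reading {reading} exceeded limit")

node = EdgeNode()
for r in [70, 71, 69, 80, 72, 70, 71, 73, 74, 70]:
    summary = node.ingest(r)
    if summary:
        print("send to cloud:", summary)
```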

In this post, I try to explore the various hardware touchpoints for Data analytics and IoT to work together.

Cloud integration: Making decisions at the Edge

Intel's Wind River edge management system is certified to work with the Intel stack and includes capabilities such as data capture, rules-based data analysis and response, configuration, file transfer and remote device management.

Integration of Google Analytics into Lantronix hardware allows sensors to send real-time data to any node on the Internet or to a cloud-based application.

Microchip's integration with Amazon Web Services uses an embedded application with the Amazon Elastic Compute Cloud (EC2) service, based on the Wi-Fi Client Module Development Kit. Languages like Python or Ruby can be used for development.

The integration of Freescale and Oracle consolidates data collected from multiple appliances across multiple Internet of Things service providers.

Libraries

Libraries are another avenue for analytics engines to be integrated into products, often at the point of creation of the device. Xively Cloud Services is an example of this strategy, through the Xively libraries.

APIs

In contrast, keen.io provides APIs for IoT devices to create their own analytics engines (for example, the Pebble smartwatch's use of keen.io) without locking equipment providers into a particular data architecture.

Specialized hardware

We see increasing deployment of specialized hardware for analytics, for example Egburt from Camgian, which uses sensor fusion technologies for IoT.

In the deep learning space, GPUs are widely used and more specialized hardware is emerging, such as IBM's SyNAPSE chip. More interesting hardware platforms are also appearing, such as Nervana Systems, which creates hardware specifically for neural networks.

Ubuntu Core and IFTTT spark

Two more initiatives on my radar deserve a space of their own, even though neither currently has an analytics engine: Ubuntu Core (Docker containers plus a lightweight Linux distribution as an IoT OS) and the IFTTT Spark initiative.

Comments welcome

This post is leading towards a vision for a Data Science for IoT course/certification. Please sign up on the link if you wish to know more when it launches in February.

Image source: cognimem

Follow us @IoTCtrl | Join our Community

Read more…