Join IoT Central | Join our LinkedIn Group | Post on IoT Central


Featured Posts (677)

Sort by

IoT Central Digest, August 15, 2016

Articles on wireless standards, finance, and medical devices are just some of the stories highlighted in this issue of the IoT Central Digest. If you're interested in being featured, we always welcome your contributions on all things IoT Infrastructure, IoT Application Development, IoT Data and IoT Security, and more. All members can post on IoT Central. Consider contributing today. Our guidelines are here.

How IoT Will Transform The Automotive Industry

Posted by Luke Ryan

Here’s a glimpse of how IoT connectivity, smart sensors and gadgets, edge computing, mobile apps and cloud services will revolutionize how you interact with and use your car.

Behold the great possibilities of the Internet of Medical Things (IoMT)

By Rick Blaisdell

Unlike other industries, healthcare has been relatively conservative and slow in embracing innovations like cloud computing and the IoT, but that is starting to change, especially over the past few years. Innovative tech products and services are an ever-larger part of our daily lives, making it harder for healthcare providers to ignore the potential advantages of connected medical devices.

Realizing the Elusive Value promised by the Internet of Things – An Economic Perspective

By Anirban Kundu

Much has been said about the value at stake and the new growth opportunities presented by the Internet of Things trend. A Cisco estimate puts this at a $14.4 trillion opportunity, whereas a new McKinsey survey values it at around $6.2 trillion by 2025. One thing that comes through undisputed across the analyst community is the significant addition to global GDP and trade volumes, and the new opportunities that would be created across sectors and industries. Most reports claim in unison the benefits of the Internet of Things and the far-reaching consequences it would have, from the cities we live in and the buildings we work and live in to the vehicles we drive. Every aspect of our experience with the physical world would be re-imagined: the way we work, our shopping experience, our medical services, even the purchase of insurance and banking services.

Global IoT Market Grows Again Says Machina Research

Posted by David Oro

UK-based Machina Research is adding to the mix of predictions for IoT with a new Global IoT Market research report. Their headline today: Global Internet of Things market to grow to 27 billion devices, generating USD3 trillion revenue in 2025.

Does IoT Need Wireless?

By Wade Sarver

Hell yeah! Don't get me wrong, you could use CAT 5 to connect most of this stuff, but the idea is to have the equipment everywhere and talking all the time, or at least when we need it to. Devices need to be wirelessly controlled for this to work properly and to be autonomous. What fun would a drone be if you needed a copper line connected to it? The FCC laid out its plan to sunset copper lines. I did a lot of work on them, but I won't miss them because wireless is so cool! If you like copper so much, then put that smartphone down and use a landline, if you can find one.

Thoughts on IoT and Finance

By Javier Saade

IoT, smart devices, wearables, mobile technology and nanotech - yes, nanotech - are forcing financial services incumbents and challengers to rethink every aspect of their value chains.  Those value chains are getting to be exponentially more distributed and automated.   Increased digitization means more data being generated, from all kinds of places at an accelerating rate.   IoT, regardless of your perspective, promises to enable the development of new value-added services to improve and automate user engagement, customer acquisition and service delivery - everywhere at all times.  

Data Analysis for Running the Business the Intelligent Way

Posted by Marcus Jensen 

Just a decade ago, we could not have comprehended the amount of information we are now exposed to on a daily basis. Everything from planners to weather reports is absorbed through technology. The amount of data circulating through our daily lives can be overwhelming; used intelligently, however, it can be a world of help in running a business.

Additional Links

Follow us on Twitter | Join our LinkedIn group | Members Only | For Bloggers | Subscribe

Read more…

How IoT Will Transform The Automotive Industry

The advent of smartphones, and the rise of mobile internet and mobile apps disrupted and transformed the way we live and do business. Thanks to the millions of mobile apps you can buy or download from app stores, you practically have your mailbox, office, photo album, TV, game console, shopping cart and much more at your disposal any time you like.

Now, thanks to the Internet of Things, the phenomenon that is already triggering the next digital revolution, your car will become integrated with your increasingly connected life and will be added to the collection of things that fit in that little gadget you carry in your pocket all the time. Already, the combination of IoT gadgets and mobile apps in vehicles is gaining popularity among consumers and fleet operators, providing functionality and opportunities that were inconceivable a few years ago: vehicles that are more efficient, safer to drive, more resistant to crime and theft, and less costly to maintain.

The current possibilities are virtually endless, and the future is even more exciting. Here’s a glimpse of how IoT connectivity, smart sensors and gadgets, edge computing, mobile apps and cloud services will revolutionize how you interact with and use your car.

IoT provides improved access and security

With every part of your vehicle being connected to the internet, you’ll have better remote access and control over your vehicle’s functionality with your phone. Ignition, windows, lights, trunk, everything can be manipulated through your smartphone while you’re busy elsewhere.

So you can start the engine with a tap on the phone and let it warm up in winter while you’re having breakfast and going over news headlines.

BMW puts this functionality on display with its My BMW Remote app, which enables car owners to remotely unlock or lock their cars, sound the horn, flash the lights, and turn on the auxiliary heating/ventilation system.

Viper SmartStart is an example of how you can integrate IoT with legacy technology. The kit, comprising IoT devices and a mobile app, gives you enhanced control over your vehicle. After installing the IoT devices in your car, you can use the SmartStart app to start, lock, unlock and locate your car with a swipe and a tap on your phone.

But mobile access surpasses convenience and also enters the realm of security.

Today’s mobile devices protect your data with state-of-the-art security and encryption features that are hard to hack even for government agencies. IoT will help you leverage this enhanced level of security in your car and improve theft prevention.

NFC door locks can relieve you of the nightmares linked to lost or stolen car keys. After registering the lock with your phone through its associated mobile app, you can unlock your car by tapping your phone against the handle. You can rest assured that only a person who possesses your phone and can unlock it can open the door to your car. And if you want to lend your car to a friend or family member, all you have to do is grant access to their phone through your mobile app.

TapKey has implemented this concept successfully, creating a mobile app that turns the smartphone into a car key and enables car owners to securely and easily grant vehicle access to others.

And if you lose your phone, registering the lock with another phone is simply a matter of logging into a cloud app and introducing your new device.
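
Conceptually, the delegation flow just described is a small cloud-side registry of authorized phones. Here is a minimal sketch of that idea in Python; the LockRegistry class and its method names are hypothetical illustrations of the concept, not TapKey's or any vendor's actual API.

```python
# Minimal sketch of a cloud-side registry for phone-as-key access.
# All names here are hypothetical illustrations, not a real vendor API.

class LockRegistry:
    def __init__(self, owner_phone_id: str):
        # The owner's phone is authorized from the start.
        self.authorized = {owner_phone_id}

    def grant(self, phone_id: str) -> None:
        """Lend the car: authorize a friend's or family member's phone."""
        self.authorized.add(phone_id)

    def revoke(self, phone_id: str) -> None:
        """Lost or replaced phone: drop it from the registry."""
        self.authorized.discard(phone_id)

    def can_unlock(self, phone_id: str) -> bool:
        """Checked when a phone taps the NFC door handle."""
        return phone_id in self.authorized

# Usage: lend the car, then replace a lost phone.
registry = LockRegistry("owner-phone-1")
registry.grant("friend-phone-7")    # lend the car to a friend
registry.revoke("owner-phone-1")    # old phone lost
registry.grant("owner-phone-2")     # introduce the new phone
assert registry.can_unlock("friend-phone-7")
assert not registry.can_unlock("owner-phone-1")
```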

Smart car alarms will quickly send an alert to your smartphone if your car is being broken into, and if your car does get stolen, your mobile app will help you find and track it through its GPS device, letting you report the theft and recover the vehicle much faster.

IoT provides improved control over vehicle status and driving

On-board diagnostics (OBD) and telematics devices are smart, cloud-connected IoT boxes installed in vehicles that provide insights and real-time information about vehicle health and driver habits. These devices work by communicating with a set of smart sensors installed on different vehicle parts, including doors, windows, engine and tires, and constantly monitor and report the status of the vehicle.

A mobile app interacting with the telematics system can act as a digital assistant which alerts drivers in real-time about measurable events such as speeding, sharp cornering, seatbelt usage and over-acceleration. The app can also communicate with the cloud service where historical driving data is stored in order to enlighten drivers about bad habits they should correct, and their driving improvements over time.

EcoDrive is an interesting app that monitors your driving habits in real-time, including acceleration, deceleration, changing gears and speed variation, and gives you a score (or eco:Index) which helps you assess your safe driving skills.

A more advanced use of IoT and telematics is to keep tabs on, and alert about, maintenance issues that can compromise passenger safety, such as low tire pressure, a malfunctioning engine, parts that need replacement and overdue services. Drivers can get a complete report on their vehicle with a tap and a swipe on their phone, without ever looking under the hood.

Chrysler’s UConnect app is an example of the efficient use of telematics and mobile technology. The app lets you remotely monitor and control your car’s maintenance, provides you with monthly health reports and alerts you about critical maintenance issues that need immediate attention.

The best part about telematics and on-board diagnostics is that they’re standardized across the industry and do not require vendor-specific integration, which means your mobile app and historical driving data can be migrated and ported when you switch vehicles.

IoT sensors improve vehicle safety

While the intersection of IoT and vehicles provides many opportunities, safety is perhaps the most important. If there's one thing IoT should be praised for, it's promoting safe driving and helping drivers avoid road incidents.

With more and more cities investing in smart infrastructure, IoT-powered vehicles are much better prepared to help drivers commute safely. Interacting with IoT sensors installed on roads, connected vehicles can detect when drivers are veering off the road as a result of distraction or fatigue, and alert them to steer back. In semi- and fully-autonomous vehicles, the car itself can take matters into its own hands and correct the vehicle's direction if the driver doesn't react.

Smart sensors and smart cement can also gather information about road-surface and bridge conditions. Connecting to cloud servers, mobile apps get real-time insights about road conditions and help drivers choose safer routes and avoid hazardous areas before heading out. If a driver ventures into a particularly dangerous zone, e.g. an ice-covered road, the connected vehicle will communicate directly with local gateways and sensors, retrieve data about road conditions, warn the driver about the danger and instruct them to slow down.

In 2007, the collapse of the I-35W Mississippi Bridge in Minneapolis resulted in 13 casualties and hundreds of millions of dollars’ worth of damage. Today’s IoT technology could’ve detected the bridge’s failing structure and warned both maintenance authorities and drivers about the dangers, saving lives and preventing damage.

IoT helps avoid traffic and congestion

Few things are as frustrating as getting stuck in a traffic jam when you’re late for work or want to attend an important event. Being able to avoid congestion and plan in advance can save you time and also reduce fuel consumption.

Fortunately, IoT can help in this sector as well. IoT sensors in roadways track and report commuting in real-time, which can help drivers better plan their trip and avoid crowded areas while also assisting city authorities in distributing congestion and pushing traffic toward the less frequented areas.

Mobile apps gleaning information from traffic sensors can estimate time of arrival based on the level of traffic and also provide alternative routes to drivers which will cut down the time and stress of the trip.

Controlling traffic through IoT technology will also help reduce car accidents considerably, and will collectively reduce pollution and make for greener cities.

IBM has a great post on how it's using apps and its IoT platform to collect traffic data, generate insights and control congestion.

Caveats and requirements

All the benefits of connected, IoT-equipped and mobile-controlled vehicles aren't without drawbacks. The vehicle industry is already dealing with several worries where vehicle IoT is concerned, chief among them security and privacy. There have already been several cases where connected cars have been hacked through mobile apps, infotainment systems and other insecure connected gadgets installed on the car.

While none of this diminishes the importance and impact that IoT will have on the future of cars, it does highlight the need to pay more attention to the security of IoT, especially in the vehicle industry.

This can be achieved by making sure the software is built by experts who have the know-how to deliver both functionality and security. Secure coding should be one of the main tenets of any software that will be installed in our cars and their peripherals, lest it be exploited by malicious actors and used against us.

The future of IoT in vehicles

For the moment, you have your car in your pocket. But this is just a taste of how IoT is transforming the automotive industry. Cars that can be parked with a single tap of an app button, circular economies where automobiles are shared and rented as a service through mobile apps, and the era of completely autonomous vehicles are not far away. Every day, the Internet of Things is conquering new summits. Who knows what tomorrow holds?

See how Mokriya develops solutions for IoT problems

(Photo courtesy of Faraday Future)

Read more…

By Rick Blaisdell. This article originally appeared here.

Unlike other industries, healthcare has been relatively conservative and slow in embracing innovations like cloud computing and the IoT, but that is starting to change, especially over the past few years. Innovative tech products and services are an ever-larger part of our daily lives, making it harder for healthcare providers to ignore the potential advantages of connected medical devices.

Moreover, a new term is increasingly used to describe this remarkable connection between the Internet of Things and healthcare: the Internet of Medical Things (IoMT). The IoMT is the collection of medical devices and applications that connect to healthcare IT systems through online computer networks. Medical devices equipped with Wi-Fi allow machine-to-machine communication, forming the basis of the IoMT.

At the same time, healthcare companies are renewing their operative models through digital health technologies and are focusing more on prevention, personalization, consumer engagement and improved patient outcomes to remain competitive. Here are some great examples:

  • An asthma inhaler with a built-in GPS sensor – Propeller Health has released an FDA-approved asthma inhaler with a GPS sensor. A tracking device is placed into the inhaler, providing support and helping reduce costs for health systems and thus for patients. Every time the inhaler is used, the time and location are recorded and imported into a personal profile, letting users track their inhaler use and even avoid areas that may trigger their asthma attacks.
  • New system for optimizing workflows in hospitals – In cooperation with Microsoft and Healthcast, the Henry Mayo Newhall Hospital in Valencia, California implemented a smart system that gives doctors access to a wide range of data: patient files, test results, prescriptions and much more. This was achieved by connecting 175 hospital devices, as well as the doctors' personal devices, to the hospital's computing systems. Thanks to the new system, doctors have secure access to examine laboratory tests, write prescriptions, or view patient files at any time. As a result, registration time was reduced by 95% – from two minutes to six seconds.
  • Digital contact lenses for diabetics – Jointly developed by Google and the Swiss healthcare group Novartis, these contact lenses will help diabetics measure their blood sugar levels through tear fluid and transfer the readings to a glucose monitor or a smart device such as a mobile phone.
  • Smart monitoring of medication – Vitality has been one of the pioneers in the medication area with its GlowCap system. These pill containers use light and sound to signal when it is time to take the medicine, and can also remind the patient automatically with a phone call. Moreover, a weekly report is sent to customers with information about how they have been taking their medication.

To drive adoption of IoMT systems and achieve more end-to-end solutions, hospital administrators, vendors and manufacturers must cooperate to lead healthcare through this important change. The impact is already visible as companies develop a collaborative culture around digital technology, and the next five to ten years will be essential as they manage patient data and incorporate it into physicians' workflows.

Photo source: freedigitalphotos.net

Read more…

Does IoT Need Wireless?

By Wade Sarver. This article originally appeared here

Hell yeah! Don't get me wrong, you could use CAT 5 to connect most of this stuff, but the idea is to have the equipment everywhere and talking all the time, or at least when we need it to. Devices need to be wirelessly controlled for this to work properly and to be autonomous. What fun would a drone be if you needed a copper line connected to it? The FCC laid out its plan to sunset copper lines. I did a lot of work on them, but I won't miss them because wireless is so cool! If you like copper so much, then put that smartphone down and use a landline, if you can find one.

So, back to IoT (the Internet of Things): these devices rely on wireless connections for more than convenience. This is how machine-to-machine (M2M) communication really takes off. Whether it's to control valves for a water company, read your electric meter or control natural gas flow, you need connectivity everywhere. We just need to define what that connectivity will be. It could be the standard carrier networks, LTE really, which is going to be key for so much of this. But most of these systems will need much less bandwidth.

Small data networks, that sounds crazy, right? NOT! You see, the new networks are built for larger packets, which makes them inefficient, and too expensive, for a simple command to open or close a valve. LTE and Wi-Fi seem like overkill for these applications, although they are everywhere and the most convenient to work with. Wi-Fi especially: it's in your house and would be a great way for a smart home full of IoT devices to talk to your smartphone and the real world.

That is why the LTE format may not be the best for IoT, although it would be everywhere, so by default it may be the technology of choice.

So how will wireless IoT work?

They need something for outdoor communication like LoRa, the low-bandwidth system. There is a LoRa Alliance if you want to read more about what they are up to. Another good article on LoRa is here, where they go into detail about how it works. What they explain is that they plan to use the spectrum that is left behind, with smaller bandwidth. The way the Semtech chip works is that it utilizes sub-gigahertz spectrum, like 109MHz, 433MHz, 866MHz, and 915MHz, where they have smaller amounts of spectrum. They need to stay away from the license-free spectrum because it might interfere.

There is another format called SigFox for outdoor communication. Again, it's made for very small packets of data. I found information here if you want more, but this is what I got out of it. They are using the 915MHz spectrum (license-free ISM band) with two types of phase-shift keying (PSK). This supposedly helps get the data through the noise. I am not sure what the coverage would be for something like this, but I would bet it's very limited. This is a low-power wide-area (LPWA) network. A good article on SigFox is here if you want to learn how they plan to deploy. I am told that they already have several deployments in the USA, although I don't know of any personally.

Now, for the smart home, inside a building, or the smart office, you could use Wi-Fi, ZigBee, Z-Wave, Bluetooth, or something proprietary. We all know Wi-Fi and Bluetooth, right? They're on your smartphones and in your homes. What we may not know is ZigBee and Z-Wave.

What is ZigBee for IoT? Well, according to the ZigBee Alliance it is a wireless language used to connect devices, which is such a generic explanation that I could use it for any wireless protocol. Come on!

So I went to Wikipedia at https://en.wikipedia.org/wiki/ZigBee, where they give a much better explanation. It is line-of-sight (LOS) and very short-range. It works in the ISM band, just like Wi-Fi (2.4GHz in most countries, but also 915MHz in the USA and Australia, 784MHz in China, and 868MHz in Europe). The data rate is very small; remember, I said smaller packets are all you need? This is made for very small and efficient bursts of data. They also support mesh networking. Mesh means that devices not only connect to the hub but can also repeat the signal to each other, forming a mesh. This is a great way to extend coverage if you don't need massive bandwidth.
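
To make the coverage-extension idea concrete, here is a toy reachability check in Python: a device counts as connected if any chain of in-range neighbors relays back to the hub. The topology below is invented purely for illustration.

```python
from collections import deque

# Toy mesh: each node lists the neighbors within its radio range.
# The topology is invented purely to illustrate multi-hop reach.
neighbors = {
    "hub": ["lamp"],
    "lamp": ["hub", "thermostat"],
    "thermostat": ["lamp", "door_sensor"],
    "door_sensor": ["thermostat"],  # out of the hub's direct range
}

def reachable_from_hub(graph):
    """Breadth-first search: every node some repeater chain can reach."""
    seen, queue = {"hub"}, deque(["hub"])
    while queue:
        node = queue.popleft()
        for nxt in graph[node]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

# door_sensor never hears the hub directly, yet it is still connected.
print(reachable_from_hub(neighbors))
```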

What is Z-Wave for IoT? Z-Wave takes ZigBee and makes some enhancements. It works specifically in the 908.42MHz range in the USA and the 868.42MHz band in Europe. For a great explanation go here, but it's made for very small networks in the home. Find more at http://www.z-wave.com/; I haven't heard much more on this except that they have a version that will work with the Apple Watch.

As you can see, there are many technologies available to roll out IoT. I don't really know if there is a clear winner; I think it depends on the need. The wireless backhaul will come down to a chip added to the device based on need, coverage, and cost. I could see someone using all of these technologies in one device to get the coverage they need, like maybe utility meters. That would make sense because it would be a one-time, up-front cost.

However, for the in-home stuff, cheap is what they need. I seriously don't see people putting a new network in their homes if they don't have to, but many companies will say you need a "hub", the special-format switch that, in theory, bridges their devices to the Wi-Fi in the home. I already see it, but it looks like they just want to sell more devices into the home. So maybe the high-end stuff will need the hub. I could also see the hub as another line of defense in security: if someone hacks your Wi-Fi and/or cable router, they would need to get past another device to reach your thermostat or light switches.

However, for an outdoor network I could see a dedicated network taking off for several reasons: cost, reliability, and security. It costs money to pay the carrier a fee every month for a small, low-data device when you could put one of the cheaper hotspots in the space to connect your devices. Again, it really comes down to cost and reliability. Many will say they want security, but how secure can these devices really be?

A few more articles that may interest you:

http://pages.silabs.com/rs/silabs/images/Wireless-Connectivity-for-IoT.pdf

http://postscapes.com/internet-of-things-protocols/

https://en.wikipedia.org/wiki/LPWAN

http://www.semtech.com/wireless-rf/internet-of-things/

https://www.micrium.com/iot/devices/

http://www.networkcomputing.com/internet-things/10-leaders-internet-things-infrastructure/1612927605

https://www.thethingsnetwork.org/

So let me know what you think, email wade4wireless@gmail.com when you think of something to say!

Photo Credit here.

Read more…

With the explosion of IoT in insurance, ensuring cybersecurity, providing automated services and improving overall customer engagement are integral to how the business develops.

Blockchain could be the key to unlocking all of these issues and to supporting insurers looking to adopt IoT and make their business plans make sense. So, where are insurers in adopting this technology, and what are the benefits?

To put a spotlight on the connection between blockchain and IoT for insurers, Insurance Nexus conducted exclusive interviews with Everledger, Guardtime and CGSC and created an exclusive white paper; you can access the document right here.

Read the white paper to gain a clearer perspective on how blockchain can help insurers set up IoT in their business, including how to:

  • Improve security and privacy: customer confidentiality and security concerns act as a barrier to insurers looking to employ IoT; get to grips with the way blockchain circumvents these challenges
  • Drive better customer engagement: for years, insurance has been regarded with little trust by customers; discover how IoT and blockchain open the door to innovative engagement tools in insurance
  • Implement automated services: lengthy policy approvals and claims authorizations deter customers and are operationally inefficient; learn how blockchain can support fast and efficient automated services

The white paper is complimentary and can be accessed right now.

I hope that you enjoy the read and please let me know if you have any questions.

Kind Regards,

Marsha

Marsha Irving
Head of Innovation
Insurance Nexus
T: 1 800 814 3459 ext 4353
E: marsha.irving@insurancenexus.com

Insurance Nexus is part of FC Business Intelligence Ltd. FC Business Intelligence Ltd is a registered company in England and Wales. Registered number 04388971, 7-9 Fashion Street, London, E1 6PX, UK

Insurance Nexus is the central hub for insurance executives. Through in-depth industry analysis, targeted research, niche events and quality content, we provide the industry with a platform to network, discuss, learn and shape the future of the insurance industry.

Read more…

By Anirban Kundu. This post originally appeared here

Much has been said about the value at stake and the new growth opportunities presented by the Internet of Things trend. A Cisco estimate puts this at a $14.4 trillion opportunity, whereas a new McKinsey survey values it at around $6.2 trillion by 2025. One thing that comes through undisputed across the analyst community is the significant addition to global GDP and trade volumes, and the new opportunities that would be created across sectors and industries. Most reports claim in unison the benefits of the Internet of Things and the far-reaching consequences it would have, from the cities we live in and the buildings we work and live in to the vehicles we drive. Every aspect of our experience with the physical world would be re-imagined: the way we work, our shopping experience, our medical services, even the purchase of insurance and banking services.

In the midst of all these far-reaching consequences lies the biggest dilemma for the early adopters of the Internet of Things. The promised value has proven a bit more elusive, and early adopters still have not found the silver bullet that unlocks the treasure trove outlined in the research. While we are confident about the promises of 2020, IoT early adopters working in 2016 seem to be in for a "cognitive dissonance". The journey to value realization is still distant and requires some fundamental restructuring of existing business processes and of industry structure as it exists today.

In this blog I take a detailed look at the value-realization dilemma using concepts from economics and analytics, and chart a path to this elusive value realization. This lays the foundation of a "Business Value Calculator" for IoT scenarios, which various entities can adopt to realize the potential of IoT. At the outset, we need to reexamine aggregate consumer demand in the context of the Internet of Things.

The Promise of the Infinity:

The "consumer demand" curve needs to be revisited in the context of the Internet of Things to bring the "Promise of the Infinity" to the fore. Today, our industry structure and the cost of production imply a physical limit on the profitable supply of the aggregate quantity demanded, bounded by the equilibrium quantity at the intersection of the supply and demand curves. As can be seen in the figure below, there are two major opportunities that have not been part of the company's revenue: consumer surplus, and the area of the curve beyond the equilibrium quantity.

Interestingly enough, the area beyond the equilibrium does not even rate a mention in the economics literature, due to constraints of profitability. However, in the context of the Internet of Things, this region, which until now has not been accounted for in any financial calculation, will be critically examined; it holds the key to the promise of the infinity.

Fig 1: Consumer Demand Curve/Equilibrium Pricing

Businesses today are built on this demand-supply structure, where we have spent elaborate efforts to reach the highest possible quantity demanded while continuously working to decrease prices and bring more customers into the fold. However, as with physical networks, this limit is still finite, and as such we never had to explore the "fat tail" of the consumer demand curve. This, however, is changing with new business models in which products are offered as services. This fundamental transition liberates product pricing from its current constraints and opens up the "Promise of the Infinity". Coupled with the power of the network, it is now possible for the ecosystem to drastically bring down the prices of products by converting them into usage-based services.

With the new pricing structure and the offering of products as services, we need to reexamine the demand curve, since the previous equilibrium was set by physical constraints. New value is now added by the much larger quantity demanded in the calculation of the value captured by the enterprise. While the prices of services are driven down, the decrease is more than compensated by an exponential increase in the quantity demanded at that price. This opens up two interesting analytical scenarios: the first is "price elasticity" analysis of consumers, and the second is resource-usage analysis.

The Power of Exponential

We are now in an era of transition in which more and more products will be offered as services, and as such we are moving to a completely new paradigm for computing the quantity demanded. In the earlier figure, the limits to quantity demanded were also bound by the limits of affordability: there is a finite number of people who could afford to "buy" a Ferrari or the most expensive jets. On the supply side, there are also limits on the units that can be produced profitably. This fundamentally changes the price elasticity of a product versus a service.

As product purchases are bound by physical limits, they show considerably higher price elasticity than "products as a service". This is a fundamental change that alters the slope of the demand curve, making it much flatter in the case of products as services and hence increasing the quantity demanded exponentially.

Earlier, the company recognized revenue at the time of purchase, plus any additional services paid for by users. In the case of products as services, the one-time product cost is converted into usage-based pricing, which implies that the number of transactions for a "product as a service" is exponentially higher than the number of products sold.

In a resource-sharing paradigm, the quantity demanded is based on the number of times the service is utilized, at a price reduced relative to the outright purchase price. Coupled with the net new users of the services, this takes the number of transactions to an exponential multiple of the previously constrained quantity supplied.

Fig 2: An exponential increase in the number of transactions resulting from the new business model of products being offered as services

This is the foundation for defining the IoT Value Calculator. The final revenue increase is produced by the interaction of the increased quantity demanded and the reduced price of the product when offered as a service. In the next blog we will give a more analytical treatment of the difference in price elasticity between the two models, along with usage-metrics analysis based on customer preferences. As is evident in the revenue calculation, we have two exponential effects set against the substantial decrease in product price. Given the nature of the inelastic demand curve for the "product as a service", the quantity effects far outweigh the effects of the price decrease. A mathematical treatment is available on request.
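
As a rough illustration of that interaction, a first cut of such a value calculator fits in a few lines of Python. The prices and quantities below are placeholder assumptions invented for the sketch, not figures from this article.

```python
# Sketch of the product-vs-service revenue comparison described above.
# All numbers are placeholder assumptions, not data from the article.

def revenue(price_per_unit: float, quantity: float) -> float:
    return price_per_unit * quantity

# Product model: one-time purchases at a high price, bounded quantity.
product = revenue(price_per_unit=20_000.0, quantity=10_000)

# Service model: the per-use price falls sharply, but transactions
# grow far faster than the price drops (the "power of exponential").
service = revenue(price_per_unit=20.0, quantity=50_000_000)

print(f"product revenue: {product:,.0f}")   # 200,000,000
print(f"service revenue: {service:,.0f}")   # 1,000,000,000

# The claim, restated: the quantity effect outweighs the price effect
# whenever (Q_service / Q_product) > (P_product / P_service).
assert (50_000_000 / 10_000) > (20_000.0 / 20.0)
```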

The analysis therefore lays the foundation for unlocking the elusive value of the IoT. Here we define it from an economic perspective; a follow-up paper will show how a company can simulate usage behavior, price elasticity and the increased number of transactions.

Finally the appeal of Consumer Surplus and Perfect Price Discovery

This is the sweet spot where advanced analytics meets economics to present the additional opportunity of mass personalization. We have seen the captured value moving down the "fat tail" of the demand curve. Advanced analytics, through segmentation, clustering and perfect price discovery, helps us transform consumer surplus into economic value.

While demand for products as a service gathers momentum, we will still see the need for mass personalization, driven by the ability of enterprises to transform their manufacturing facilities to enable lot-size-one production. Harley-Davidson has cut the lead time for developing customized production to less than six hours. This leads to the fragmentation of existing business models, fracturing along two paths: one path captures high-value consumer surplus through value-added personalized offerings, while the other exponentially increases the number of transactions offered at a lower price, made possible by orienting the product offering as services.

With advanced techniques in customer segmentation and the availability of personalized data per user, we are now able to offer personalized products that translate consumer surplus into economic value. While traditional segmentation-based pricing strategies (group, channel or regional pricing) have been employed successfully in the past to capture more of the consumer surplus, there was still potential to capture additional value specific to individual users. "Mass personalization" will help transform more of the consumer surplus into economic value.

By bringing in the additional value of the consumer surplus and combining it with the value of products offered as a service, companies will be able to extract a significant share of the elusive value of the IoT, setting us on the path to creating an Internet of Things "Value Calculator".

Read more…

Technology. Big Data. Internet of Things. Cloud. The promises are so big, the possibilities so great, but it’s often so difficult to know if progress is really being made in ways that will benefit the world we live in. Here’s a look at some of the activities and initiatives worth reading in the agriculture technology category.

Read more…

UK-based Machina Research is adding to the mix of predictions for IoT with a new Global IoT Market research report.

Their headline today: Global Internet of Things market to grow to 27 billion devices, generating USD3 trillion revenue in 2025

Key findings include:

  • The total number of IoT connections will grow from 6 billion in 2015 to 27 billion in 2025, a CAGR of 16% (a quick arithmetic check of this rate appears after this list).
  • Today, 71% of all IoT connections use a short-range technology (e.g. WiFi, ZigBee, or in-building PLC); by 2025 that will have grown slightly to 72%. The big short-range applications, which make it the dominant technology category, are Consumer Electronics, Building Security and Building Automation.
  • Cellular connections will grow from 334 million at the end of 2015 to 2.2 billion by 2025, of which the majority will be LTE. 45% of those cellular connections will be in the ‘Connected Car’ sector, including both factory-fit embedded connections and aftermarket devices.
  • 11% of connections in 2025 will use Low Power Wide Area (LPWA) connections such as Sigfox, LoRa and LTE-NB1.
  • China and the US will be neck-and-neck for dominance of the global market by 2025. China will account for 21% of global IoT connections, ahead of the US on 20%, with similar proportions for cellular connections. However, the US wins in terms of IoT revenue (22% vs 19%). The third-largest market is Japan, with 7% of all connections, 7% of cellular connections and 6% of global revenue.
  • The total IoT revenue opportunity will be USD3 trillion in 2025 (up from USD750 billion in 2015). Of this figure, USD1.3 trillion will be accounted for by revenue directly derived from end users in the form of devices, connectivity and application revenue. The remainder comes from upstream and downstream IoT-related sources such as application development, systems integration, hosting and data monetisation.
  • By 2025, IoT will generate over 2 zettabytes of data, mostly from consumer electronics devices. However, it will account for less than 1% of cellular data traffic; cellular traffic comes particularly from digital billboards, in-vehicle connectivity and CCTV.
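
As a quick sanity check on the headline connection figures, compounding the 2015 base at the stated rate does land near the 2025 projection:

```python
# Quick check of the first bullet: 6B connections (2015) growing to
# 27B (2025) implies a CAGR of (27/6)**(1/10) - 1, i.e. about 16.2%.
cagr = (27 / 6) ** (1 / 10) - 1
print(f"implied CAGR: {cagr:.1%}")                        # ~16.2%

projected = 6e9 * (1 + 0.16) ** 10
print(f"6B at 16% for 10 years: {projected / 1e9:.1f}B")  # ~26.5B
```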

In a prepared statement, Machina Research CEO Matt Hatton commented: "Through our regular ongoing work in our IoT Forecasts Research Stream we are constantly monitoring hundreds of different constituent applications across every country and adjusting our outlook for each. Every year we take a snapshot of the IoT market, pulling our latest forecasts to examine how the overall market has developed in the year. This year the top-line figures of 27 billion connections and USD3 trillion of revenue continue to be eye-catching, and the opportunity is substantial. However, it's not just a case of rising tides lifting all boats. To take advantage of the opportunities in IoT, suppliers need to understand the key market dynamics and their competitive environment, and develop best practice. Most of what Machina Research does is focused on helping various players understand and exploit the opportunities we outline in this study."

Machina Research focuses on Internet of Things, M2M and Big Data markets. Their ‘IoT Global Forecast & Analysis 2015-2025’ provides an overview of the global IoT market from 2015 to 2025, featuring forecasts of connections, applications, technology, traffic and revenue. It is based on data extracted from Machina Research’s IoT Forecast Database in August 2016. The report is a summary snapshot of the detailed country-by-country and application-by-application forecasts contained within the IoT Forecast Database.

Read more…

Thoughts on IoT and Finance

By Javier Saade. This post originally appeared here.

IoT, smart devices, wearables, mobile technology and nanotech - yes, nanotech - are forcing financial services incumbents and challengers to rethink every aspect of their value chains.  Those value chains are getting to be exponentially more distributed and automated.   Increased digitization means more data being generated, from all kinds of places at an accelerating rate.   IoT, regardless of your perspective, promises to enable the development of new value-added services to improve and automate user engagement, customer acquisition and service delivery - everywhere at all times.  

In insurance, for instance, user engagement is very low. Customers like it that way, because there is no incentive for them to interact beyond the annual policy renewal. But recasting this low-engagement environment through an IoT lens, insurers may be able to develop value-added services that give customers a reason to engage more frequently. One way to do it is by providing discounts, for example giving customers price breaks if they opt in to apps that monitor perspiration levels, body temperature and heart rate via smart clothing. Sounds far-fetched? Think again.

My friend David Bray, the FCC's CIO, once said this: "...in 1977, 4.2 billion people lived on earth and the first Apple II went on sale running at 1MHz w 4 KB of RAM (note, that is the first half of a second of your favorite MP3 song)." He continued, "today there are 7 billion people, about 850 million web servers online, and about 4 billion zettabytes of digital content worldwide. By 2022 there will be 8 billion people, 75-300 billion networked devices globally and 96 zettabytes of digital content is estimated to exist."

96 zettabytes, by the way, is 96,000,000,000,000,000,000,000 bytes = 96 billion trillion bytes. With this kind of exponential growth the opportunities are incalculable, because data is the building block of the digitized economy and information is its lifeblood; for that reason, billions are being deployed in IoT by players in almost every sector of the economy. Real money, to be sure, yet for some products and services, like wearables and smart-home devices, consumers themselves will bear the costs. For other products, including automobile driving-monitoring devices, smart-city clouds, connected cars, smart farming and industrial embedded devices, there is zero or very little incentive for consumers to bear the cost. So in applications like these, companies are expected to seek partnerships with OEMs and OEDs to embed technologies (e.g., RFID tags) into their products. Alternatively, innovators in the space may play a more integrated role, designing and inventing applications for incumbents delivering IoT-enabled services and products.

RFID involves wireless communication that uses radio waves to identify and track objects. It is analogous to a smart digital barcoding system that lets users uniquely identify items without direct line-of-sight, identify thousands of items simultaneously, and identify items within a defined proximity. It can tell you what an object is, where it is, and how it is, making the technology an indispensable IoT building block, applicable in everything from supply-chain and logistics finance to smart payments.

Another interesting technology being used is telematics. Telematics hardware uses GPS and wireless devices to collect real-time customer data. Think of a car insurer adjusting a customer's premiums based on a panoply of driving behaviors and vehicle usage. These devices are now able to measure a number of additional behavioral factors, most notably hard braking (a decline of at least 10 MPH per second), which allows insurers to deeply refine risk models. This refinement, if executed properly, could lead to pricing power and better margins.
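
Using the threshold cited above (a decline of at least 10 MPH per second), a telematics unit only needs successive speed samples to flag a hard-braking event. Here is a minimal Python sketch; the speed trace is invented, and real devices sample far more frequently.

```python
# Flag hard-braking events: a decline of at least 10 MPH per second,
# the threshold cited above. The speed trace is invented for illustration.

HARD_BRAKE_MPH_PER_SEC = 10.0

def hard_brake_events(speeds_mph, sample_secs=1.0):
    """Yield (second, deceleration) for each hard-braking interval."""
    for t in range(1, len(speeds_mph)):
        decel = (speeds_mph[t - 1] - speeds_mph[t]) / sample_secs
        if decel >= HARD_BRAKE_MPH_PER_SEC:
            yield t, decel

trace = [45.0, 44.0, 43.0, 30.0, 18.0, 17.0]  # one sample per second
for second, decel in hard_brake_events(trace):
    print(f"t={second}s: braked {decel:.0f} MPH/s")
# t=3s: braked 13 MPH/s
# t=4s: braked 12 MPH/s
```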

Other technology evolutions are expected to make IoT even more viable. One such evolution is miniaturization. The number of transistors per chip has increased from thousands in the 1950s to over four billion today. Single-atom transistors are the natural limit of Moore's Law, a limit that holds until a paradigm-shifting technology like quantum computing is able to perform at scale.

Computing power is fundamentally and physically limited by the number of transistors that can fit on a chip. Quantum computing promises orders-of-magnitude improvements in processing power, because each quantum bit can theoretically be in an infinite number of states at one time. In contrast, today there are two states: the well-known binary system, which allows only "1s" or "0s". Increasing the amount of information conveyed per unit is the most realistic hope of extending Moore's Law. And extending Moore's Law will give rise to whole new industries (e.g., everything becomes a computer) and supercharge others more specifically (e.g., nanomechanics). From a tech-enabled financial-services perspective, we can more effectively use real-time and uber-dynamic consumer data, perform individualized and highly contextualized analytics, and apply artificial intelligence to perform and deliver services across the entire value chain.

All of this potential raises security and privacy concerns. These are important everywhere, but especially when dealing with people's money or health. The IoT's infrastructure is vulnerable to hacking, almost by design. Researchers recently claimed that they could access a plane's satellite communications system during commercial flights via Wi-Fi or the plane's entertainment console. Other scary hack scenarios involve thermostats, webcams, insulin pumps, automobiles, pacemakers and refrigerators. As for distributed-ledger technologies, IoT-driven and blockchain-based systems require users to be both sophisticated and vigilant, which is not something to bet on. Any system used to process smart contracts therefore needs to be extremely robust and possess design redundancies to ensure it can withstand attacks. This is not a surprise, but in a world where breaches can occur through an infinite number of entry points or nodes, cybersecurity becomes exponentially more important to maintain and more difficult to manage.

The internet of things is an exciting frontier where potentially hundreds of billions of devices will be able to talk to the network and to each other.   This efficiency should lead to goods and services most of us can’t even conceptualize.  This includes how we finance, price, transact and pay for those exact goods and services – B2B, B2C, B2B2C, C2B, P2P, P2C, C2C, O2O, B2G – and every other permutation of effecting commerce and creating or transferring value.  I look forward to keeping a very close eye on developments at this important and evolving intersection.

Note:  The idea for this piece was sparked by a research project our intern Matt completed for our firm, Fenway Summer Ventures

Read more…

IoT Central Digest, August 1, 2016

Here is the latest issue of the IoT Central Digest. This digest links you to a three-part series entitled IoT 101, well worth a read. We also include articles about software tools for IoT device security, dive into fog computing and look at who holds the intellectual property in IoT. If you're interested in being featured, we always welcome your contributions on all things IoT Infrastructure, IoT Application Development, IoT Data and IoT Security, and more. All members can post on IoT Central. Consider contributing today. Our guidelines are here.

IoT 101 – Everything You Need to Know to Start Your IoT Project

By Bill Vorhies

Summary: This is the first in a series of articles aimed at providing a complete foundation and broad understanding of the technical issues surrounding an IoT or streaming system so that the reader can make intelligent decisions and ask informed questions when planning their IoT system. Visit www.iotcentral.io to read the entire series.

Intellectual Property Held by the Top 100 IoT Startups

Posted by Mitchell Schwartz 

Using Mattermark's list of the Top 100 IoT startups in 2015 (ranked by funding, published in Forbes on Oct 25, 2015), IPqwery has looked behind the analytics to reveal the nature of the intellectual property (IP) behind these innovative companies. Our infographic presents a general summary of the IP within the group as a whole, and illustrates the trailing five-year trends in IP filing activity.

Automated Software Development Tools for Improving IoT Device Security

Posted by Bill Graham 

For IoT and M2M device security assurance, it's critical to introduce automated software development tools into the development lifecycle. Although software tools' role in quality assurance is important, it becomes even more so when security becomes part of a new or existing product's requirements.

How IoT can benefit from fog computing

By Ben Dickson

Something I'm mentioning a lot these days (and hearing about as well) is the chaotic propagation and growth of the Internet of Things. With billions of devices slated to connect to the internet every year, we're going to be facing some serious challenges. I've already discussed how blockchain technology might address connectivity issues for huge IoT ecosystems. But connectivity accounts for only a small part of the problems we'll be facing. Another challenge will be processing and making sense of the huge reams of data that IoT devices generate. Close on its heels is the issue of latency, or how fast an IoT system can react to events. And as always, security and privacy issues will remain among the top items on the IoT challenge list. Fog computing (aka edge computing) can help mitigate, if not overcome, these challenges.

Additional Links

Follow us on Twitter | Join our LinkedIn group | Members Only | For Bloggers | Subscribe

Read more…

How IoT can benefit from fog computing

By Ben Dickson. This article originally appeared here.

Something I'm mentioning a lot these days (and hearing about as well) is the chaotic propagation and growth of the Internet of Things. With billions of devices slated to connect to the internet every year, we're going to be facing some serious challenges. I've already discussed how blockchain technology might address connectivity issues for huge IoT ecosystems.

But connectivity accounts for only a small part of the problems we'll be facing. Another challenge will be processing and making sense of the huge reams of data that IoT devices generate. Close on its heels is the issue of latency, or how fast an IoT system can react to events. And as always, security and privacy issues will remain among the top items on the IoT challenge list.

Fog computing (aka edge computing) can help mitigate, if not overcome, these challenges. As opposed to the cloud, where all the computation takes place in a central location, fog computing pushes the computation of tasks toward the edge of the network and distributes it among smart routers and gateways. The term and concept were coined by networking giant Cisco before the IoT even became a buzzword, but it was the advent of the Internet of Things that provided it with true, legitimate use cases.

Here are some of the domains where fog computing can deal with the challenges of IoT.

Computation and data processing

Naturally, computation problems will be one of the main reasons we’ll descend from the cloud and wade into the fog. A problem lying ahead of us is the sheer amount of computation and data processing that IoT ecosystems will require.

With machine-to-machine (M2M) communications accounting for most of the exchanges in IoT ecosystems, the amount of traffic generated will be incomparable to what we're used to dealing with in human-machine settings. Pushing all of these tasks to the cloud will overburden centralized computation nodes and require bigger and stronger cloud servers.

The cloud is best known for its huge storage and analytics capacities. Meanwhile, many of the tasks and events that take place in IoT ecosystems do not require such capabilities, and sending them to the cloud wastes precious resources, bogging down servers and preventing them from performing their more critical duties.

Fog computing can address this issue. Small computational tasks can be performed at the edge (on IoT gateways and routers), while valuable data continues to be pushed to the cloud. This way, precious cloud resources can be saved for more suitable tasks such as big data analysis and pattern recognition. Reciprocally, the functionality and policies of edge devices can be altered and updated based on insights gained from cloud analytics.
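
One way to picture that division of labor: the gateway answers time-critical events locally and batches everything else for cloud analytics. The sketch below is a minimal illustration in Python; the event names, batch size and handler functions are invented, not any platform's real API.

```python
# Sketch of the fog/cloud split described above: time-critical events
# are handled at the edge; the rest is batched for cloud analytics.
# Event names, batch size and handlers are invented for illustration.

cloud_batch = []

def actuate_locally(event):
    # Placeholder for a real-time edge response, e.g. closing a valve.
    print(f"edge response: {event['type']}")

def upload_to_cloud(batch):
    # Placeholder: heavy analytics and pattern recognition live upstream.
    print(f"uploading {len(batch)} events for analysis")

def on_sensor_event(event):
    if event["time_critical"]:
        actuate_locally(event)       # milliseconds matter: stay at the edge
    cloud_batch.append(event)        # everything is still logged
    if len(cloud_batch) >= 100:      # ship in bulk, not per event
        upload_to_cloud(list(cloud_batch))
        cloud_batch.clear()

on_sensor_event({"type": "overpressure", "time_critical": True})
on_sensor_event({"type": "temperature_reading", "time_critical": False})
```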

This model also helps address response-time and latency issues, which are discussed next.

Response times and latency

Rather than requiring huge computational resources, many of the transactions and decisions made in IoT systems are time-critical. Imagine a telemedicine scenario, or an IoT-powered hospital, where seconds and milliseconds can make a difference to a patient's health or life. The same can be said of industrial settings and work areas, where a quick response can prevent or mitigate damage and safety issues. A simpler example would be parking lights that have to respond to the passage of cars and pedestrians, and must do so in a timely fashion.

Other settings that require large bandwidth, such as IoT ecosystems involving many CCTV cameras, would also be hard to deploy in environments that have limited connectivity if they rely on cloud computation.

In many cases, it's funny (and outright ridiculous) that two devices standing a few feet apart have to go through the internet and the cloud to exchange simple messages. It's even more ridiculous to have to cope with the fact that your fridge and toaster stop working because they're disconnected from the internet.

A round trip to the cloud can take seconds, or even minutes in poorly connected areas, which is more than can be afforded in many of these scenarios. At the edge, meanwhile, IoT ecosystems can make decisions at lightning speed, ensuring everything gets a response in time.

A study by IDC FutureScape shows that by 2018, some 40 percent of IoT-created data will be stored, analyzed and processed at the edge.

Security and privacy

As Phantom CEO Ken Tola mentioned in a previous post, encryption isn't a panacea for IoT security problems. And as a study by LGS Innovations showed earlier, hackers don't necessarily need to crack your encrypted communications in order to carry out their evil deeds. In fact, just eavesdropping on your IoT internet traffic, whether it's encrypted or not, will provide malicious actors with plenty of useful information, such as giving away your living habits.

Moreover, some forms of attack, such as replay attacks, don't require the attacker to have access to encryption keys at all. All they need to do is replicate packets that are being exchanged on the network. For instance, with a bit of network monitoring, an attacker might figure out which sequence of packets unlocks your home's smart lock.

Of course, there are ways to mitigate each of these threats, but robust security practices aren't the greatest strength of IoT device manufacturers, which is why we see these spooky IoT hacks surface every week.
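
To make the replay example concrete: one standard mitigation is to bind a fresh, monotonically increasing counter (or nonce) into each authenticated message, so that a recorded packet is rejected the second time it is seen. The following is a minimal Python sketch of the idea, not any product's actual protocol.

```python
import hashlib
import hmac

# Sketch of counter-based replay protection for a smart lock.
# An illustration of the idea, not any product's real protocol.

SECRET = b"shared-device-key"  # provisioned out of band

def make_unlock_packet(counter: int) -> bytes:
    tag = hmac.new(SECRET, str(counter).encode(), hashlib.sha256).digest()
    return str(counter).encode() + b"|" + tag

class Lock:
    def __init__(self):
        self.last_counter = 0  # highest counter ever accepted

    def accept(self, packet: bytes) -> bool:
        counter_bytes, tag = packet.split(b"|", 1)
        expected = hmac.new(SECRET, counter_bytes, hashlib.sha256).digest()
        if not hmac.compare_digest(tag, expected):
            return False  # forged packet
        if int(counter_bytes) <= self.last_counter:
            return False  # replayed packet: counter is not fresh
        self.last_counter = int(counter_bytes)
        return True

lock = Lock()
packet = make_unlock_packet(counter=1)
assert lock.accept(packet)       # the legitimate unlock succeeds
assert not lock.accept(packet)   # an identical replay is rejected
```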

Fog computing will reduce many of these risks by considerably decreasing the amount of dependency on internet connections. Moving data and command exchange into the local area network will make it much harder for hackers to gain remote access to your data and devices. Moreover, with device-cloud exchanges no longer happening in real-time, it will be much harder to discern life and usage patterns by eavesdropping on your network.

Overcoming the challenges

Despite all the mentioned advantages, fog computing does have its own set of caveats and difficulties. For one thing, edge devices can’t match the power of cloud in computing and analytics. This issue can be addressed by distributing the workload between the cloud and the fog. Edge devices such as smart routers and gateways can mimic cloud capabilities at the edge location, making optimal use of their resources to respond to time-critical and lightweight tasks, while the heavier, analytics-intensive requests that don’t necessarily need to be carried out in real-time can be sent to the cloud.

Meanwhile, edge software should be designed and developed with flexibility in mind. For instance, IoT gateway software that controls industrial equipment should be able to receive policy and function updates produced by machine learning solutions analyzing big data in the cloud.
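To make the cloud/fog split concrete, here is a minimal sketch of gateway-side dispatch logic in Python. The event format, field names, and latency threshold are all hypothetical illustrations, not a reference implementation:

    import queue
    import time

    LOCAL_LATENCY_BUDGET = 0.05   # seconds; tighter deadlines stay at the edge

    cloud_queue = queue.Queue()   # heavier analytics get deferred to the cloud

    def respond_locally(event):
        # e.g. trip a relay, raise a local alarm, unlock a door
        print("edge action for %s at %.3f" % (event["sensor"], time.time()))

    def handle(event):
        if event["deadline_s"] <= LOCAL_LATENCY_BUDGET:
            respond_locally(event)      # time-critical: act at the edge
        else:
            cloud_queue.put(event)      # batch to the cloud for deep analysis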

Read more…

Using Mattermark's list of the Top 100 IoT startups in 2015 (ranked by funding, published in Forbes on Oct 25, 2015), IPqwery has looked behind the analytics to reveal the nature of the intellectual property (IP) behind these innovative companies. Our infographic presents a general summary of the IP within the group as a whole, and illustrates the trailing 5-year trends in IP filing activity.

The vast majority of these companies have both patents (84%) and trademarks (85%) in their IP portfolio. There was a sharp and mostly linear increase in filings for both patents and trademarks, from 2011 through to 2014, with a slight decrease showing in 2015. 2016 looks to be on pace to meet or exceed last year’s filing activity as well. All this is consistent with the ever-expanding number of companies operating within the IoT ecosystem.

A closer look at the top 5 patent class descriptions amongst all patents granted or published shows little variation between these classes. This is not surprising given the similar technologies behind many IoT products, whose patents will incorporate the same or similar descriptions within their claims. Comparatively, there is a wider variance in the top 5 trademark classes used, but this speaks more to the wide variety of marketing and branding potential than to the underlying IoT technologies.

What's striking in Mattermark's original analysis of the Top 100 IoT Startups is that 30% of all funding raised by this group as a whole has been concentrated in only the top 5 companies: Jawbone, Genband, Silver Spring Networks, View Glass and Jasper Technologies. IPqwery's analysis further reveals that only two of these companies (Silver Spring and Jasper) have Top 5 inventors within the group. In fact, Jasper has two of the Top 5 inventors. The other top inventors come from Hello and Kineto Wireless.

The broad-strokes approach of IPqwery's infographic doesn't directly illustrate the IP held by any one company, but it certainly hints at where this type of analysis could be very useful. Where Mattermark sought to pinpoint the greatest growth potential (momentum) within the group by looking at the overall IoT funding environment, IPqwery's analysis of the general IP trends within this group sheds additional light on the matter, and perhaps raises some additional questions. Wouldn't potential correlations between IP and funding also be a useful measure of momentum, and shouldn't IP data therefore be more integrated into business growth analytics from the get-go?

Here's a link to a new infographic by IPqwery summarizing the intellectual property held by the Top 100 IoT Startups (2015). 

 

Read more…

By Bill Vorhies

Bill is Editorial Director for our sister site Data Science Central and has practiced as a data scientist and commercial predictive modeler since 2001. This article originally appeared here

Summary:  In this Lesson 3 we continue to provide a complete foundation and broad understanding of the technical issues surrounding an IoT or streaming system so that the reader can make intelligent decisions and ask informed questions when planning their IoT system. 

In Lesson 1:

  • Is it IoT or Streaming
  • Basics of IoT Architecture – Open Source
  • Data Capture – Open Source with Options
  • Storage – Open Source with Options
  • Query – Open Source with Options

In Lesson 2:

  • Stream Processing – Open Source
  • What Can Stream Processors Do
  • Open Source Options for Stream Processors
  • Spark Streaming and Storm
  • Lambda Architecture – Speed plus Safety
  • Do You Really Need a Stream Processor
  • Four Applications of Sensor Data

In This Article (Lesson 3):

  • Three Data Handling Paradigms – Spark versus Storm
  • Streaming and Real Time Analytics
  • Beyond Open Source for Streaming
  • Competitors to Consider
  • Trends to Watch

Continuing from Lesson 2, our intent is to provide a broad foundation for folks who are starting to think about streaming and IoT.  In this lesson we’ll explain how Spark and Storm handle data streams differently, discuss what real time analytics actually means, offer some alternatives for streaming beyond open source, and suggest some trends you should watch in this fast evolving space.

 

Three Data Handling Paradigms:  SPARK Versus Storm

When users compare SPARK with Storm, the conversation usually focuses on the difference in the way they handle the incoming data stream.

  • Storm processes incoming data one event at a time – called Atomic processing. 
  • SPARK processes incoming data in very small batches – called Micro Batch.  A SPARK micro batch is typically between ½ second and 10 seconds depending on how often the sensors are transmitting.  You can define this value.
  • A third method called Windowing allows for much longer windows of time and can be useful in some text or sentiment analysis applications, or systems in which signals only evolve over a relatively long period of time.

 

Atomic (aka one-tuple-at-a-time):  Processes each inbound data event as a separate element.  This is the most intuitively obvious but also the most computationally expensive design.  It is used, for example, to guarantee the fastest processing of individual events with the least delay in transmitting the event to the subscriber.  It is often seen for customer transactional inputs, so that if some element of the event block fails, the entire block is not deleted but moved to a bad-record file that can later be processed further.  Apache Storm uses this paradigm.

Micro batching:  The critique of this approach is that it processes in batches (not atomic-level streaming), but typically those batches are extremely small, encompassing only the actions that occur within a few seconds.  You can adjust the time window.  This makes the process somewhat more efficient.  SPARK Streaming uses this paradigm.

Windowing:  A hybrid of the two approaches, Windowing maintains the atomic processing of each data item but creates pseudo-batches (windows) to make processing more efficient.  This also allows for many more sophisticated interpretations such as sliding windows (e.g. everything that occurred in the last X period of time). 

All three of these approaches can guarantee that each data element is processed at least once.  Only the Atomic paradigm can guarantee that each data element is processed only once.
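As a concrete illustration of micro batching and windowing, here is a minimal PySpark Streaming sketch.  It assumes a local Spark installation and sensor readings arriving as "sensorId,value" lines on a socket; the port number and window sizes are illustrative:

    from pyspark import SparkContext
    from pyspark.streaming import StreamingContext

    sc = SparkContext("local[2]", "SensorStream")
    ssc = StreamingContext(sc, 1)   # micro batches of 1 second

    lines = ssc.socketTextStream("localhost", 9999)
    readings = (lines.map(lambda line: line.split(","))
                     .map(lambda p: (p[0], float(p[1]))))

    def mean(vals):
        vals = list(vals)
        return sum(vals) / len(vals)

    # Windowing: per-sensor average over a 30-second window, sliding every 10.
    averages = readings.groupByKeyAndWindow(30, 10).mapValues(mean)

    averages.pprint()
    ssc.start()
    ssc.awaitTermination()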

 

Consider this Example 

Suppose your sensors are FitBit-like and sample data every 10 seconds.  They transmit in bursts whenever the sensor is cued to dump its data into a Wi-Fi stream.  One user may monitor the results of the stream many times during the day, valuing low latency and causing his sensor to upload via Wi-Fi frequently.  Another user may not be near a Wi-Fi connection or may simply not bother to download the data for several days.  Still a third user may have trouble with a network connection or the hardware itself, causing the sensor to transmit incomplete packets that are repeated later or are simply missing from the stream.

In this scenario, data from sensors originating at the same time may arrive at the stream processor with widely different delays and some of those packets that were disrupted may have been transmitted more than once or not at all.

You will need to evaluate carefully whether guaranteeing 'only once' processing, or the marginally faster response time of atomic processing, should be a factor in your selection of a Stream Processor.

 

Streaming and Real Time Analytics

It's common in IoT to find references to "real time analytics" or "in-stream analytics", and these terms can be misleading.  Real time analytics does not mean discovering wholly new patterns in the data in real time while it is streaming by.  What it means is that previously developed predictive models deployed into the Stream Processor can score the streaming data and determine, in real time, whether a known signal is present.

It's important to remember that the data science behind your sophisticated Stream Processor was developed in the classic two-step data science process.  First, data scientists work in batch with historical data with a known outcome (supervised learning) to develop an algorithm that uses the inputs to predict the likelihood of the targeted event.  The model, an algebraic formula represented by a few lines of code (C, Python, Java, R, or others), is then exported into a program within the Stream Processor and goes to work evaluating the passing data to see if the signal is present.  If it is, an action alert is sent to a human or machine, or a visual signal is sent to a dashboard.
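In code, that exported model really is just a formula.  Here is a hedged sketch of the scoring step, assuming a hypothetical logistic-regression model fitted offline; the field names, coefficients, and threshold are all illustrative:

    import math

    # Hypothetical coefficients exported from a model trained in batch.
    COEF = {"temp": 0.042, "vibration": 1.7, "power": -0.003}
    INTERCEPT = -6.2
    ALERT_THRESHOLD = 0.8

    def score(event):
        # Logistic-regression style probability that the signal is present.
        z = INTERCEPT + sum(COEF[k] * event[k] for k in COEF)
        return 1.0 / (1.0 + math.exp(-z))

    def process(event, send_alert):
        if score(event) >= ALERT_THRESHOLD:
            send_alert("signal detected for sensor %s" % event["sensor_id"])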

Recently, the first indications have emerged that some new discoveries can be made in real time, but such cases are exceedingly rare.  See more in this article.

 

Beyond Open Source for Streaming

Why would you want to look beyond open source for your IoT system?  Largely because, while open source tools and packages are practically free, they are free in the same sense as a 'free puppy'.

Yes, these packages can be downloaded for free from Apache, but the most reasonable sources are the three primary distributors, Hortonworks, Cloudera, and MapR, all of whom keep the code up to date and add certain features that make it easier to maintain.  Even from these distributors, your total investment should be in the low five figures.  This does not, of course, include implementation, consulting, or configuration support, which is extra, whether from the distributors, from other consultants, or from your own staff if they are qualified.

With open source, what you also get is complexity.  Author Jim Scott, writing about SPARK, summed it up quite nicely: "SPARK is like a fighter jet that you have to build yourself. The great thing about that is that after you are done building it, you have a fighter jet. Problem is, have you ever flown a fighter jet? There are more levers than could be imagined."

In IT parlance, the configurations and initial programs you create in SPARK or other open source streaming platforms will be brittle.  That is, every time your business rules change you will have to modify the SPARK code, written in Scala (though Python is also available).

Similarly, standing up a SPARK or Hadoop storage cluster comes with programming and DBA overhead that you may want to avoid, or at least minimize.  Using one of the major cloud providers and/or adding a SaaS service like Qubole will greatly reduce your labor at only a little incremental cost.

The same is true for the proprietary Stream Processors, many of which are offered by major companies and are well tested and supported.  Many of these come with drag-and-drop visual interfaces that eliminate the need for manual coding, so that any reasonably dedicated programmer or analyst can configure and maintain the internal logic as your business changes.  (Keep your eye on NiFi, the new open source platform that also claims drag-and-drop.)

 

Competitors to Consider

Forrester publishes a periodic rating and ranking of competitors in "Big Data Streaming Analytics Platforms" and, as of the spring of 2016, listed 15 worthy of consideration.

Here are the seven that Forrester regards as leaders, in rank order:

  1. IBM
  2. Software AG
  3. SAP
  4. TIBCO Software
  5. Oracle
  6. DataTorrent
  7. SQLstream

There are eight additional ‘strong performers’ in rank order:

  1. Impetus Technologies
  2. SAS
  3. Striim
  4. Informatica
  5. WSO2
  6. Cisco Systems
  7. data Artisans
  8. EsperTech

Note that the ranking does not include the cloud-only offerings, which should certainly be included in any competitive comparison:

  1. Amazon Web Services’ Elastic MapReduce
  2. Google Cloud Dataflow
  3. Microsoft Azure Stream Analytics

Here’s the ranking chart:

 

It's likely that you can get a copy of the full report from one of these competitors.  Be sure to pay attention to the detail.  For example, here are some interesting observations from the numerical scoring table.

Stream Handling:  In this presumably core capability, Software AG got a perfect score, while Impetus and WSO2 scored decidedly below average.

Stream Operators (Programs):  Another presumably core capability.  IBM Streams was given a perfect score.  Most other competitors had scores near 4.0 (out of 5.0), except for data Artisans, which was given a noticeably weak score.

Implementation Support:  data Artisans and EsperTech were decidedly weaker than the others.

In all there are 12 scoring categories that you’ll want to examine closely.

What these 15 leaders and 3 cloud offerings have in common is that they greatly simplify the programming and configuration and hide the gory details.  That’s a value well worth considering.

 

Trends to Watch

IoT and streaming are a fast-growing area with a high rate of change.  Witness the ascendance of SPARK in just the last year to become the go-to open source solution.  All of this development reflects the market demand for more and more tools and platforms to address the exploding market for data-in-motion applications.

All of this means you will need to keep your research up to date during your design and selection period.  However, don’t let the rate of change deter you from getting started.

  • One direction of growth will be the further refinement of SPARK to become a single platform capable of all four architectural elements:  data capture, stream processing, storage, and query.
  • We would expect many of the proprietary solutions to stake this claim also.
  • When this is proven reliable you can abandon the separate components required by the Lambda architecture.
  • We expect SPARK to move in the direction of simplifying set up and maintenance which is the same ground the proprietary solutions are claiming.  Watch particularly for integration of NiFi into SPARK, or at least the drag-and-drop interface elements creating a much friendlier UI.
Read more…

For IoT and M2M device security assurance, it's critical to introduce automated software development tools into the development lifecycle. Although software tools' role in quality assurance is important, it becomes even more so when security becomes part of a new or existing product's requirements.

Automated Software Development Tools

There are three broad categories of automated software development tools that are important for improving quality and security in embedded IoT products:

  • Application lifecycle management (ALM): Although not specific to security, these tools cover requirements analysis, design, coding, testing and integration, configuration management, and many other aspects of software development. However, with a security-first embedded development approach, these tools can help automate security engineering as well. For example, requirements analysis tools (in conjunction with vulnerability management tools) can ensure that security requirements and known vulnerabilities are tracked throughout the lifecycle.  Design automation tools can incorporate secure design patterns and then generate code that avoids known security flaws (e.g. avoiding buffer overflows or checking input data for errors). Configuration management tools can insist on code inspection or static analysis reports before checking in code. Test automation tools can be used to test for "abuse" cases against the system. In general, there is a role for ALM tools in the secure development just as there is for the entire project.
  • Dynamic Application Security Testing (DAST): Dynamic testing tools all require program execution in order to generate useful results. Examples include unit testing tools, test coverage, memory analyzers, and penetration test tools. Test automation tools are important for reducing the testing load on the development team and, more importantly, detecting vulnerabilities that manual testing may miss.
  • Static Application Security Testing (SAST): Static analysis tools work by analyzing source code, bytecode (e.g., compiled Java), and binary executable code. No code is executed in static analysis; rather, the analysis is done by reasoning about the potential behavior of the code. Static analysis is relatively efficient at analyzing a codebase compared to dynamic tools. Static analysis tools also analyze code paths that are untested by other methods and can trace execution and data paths through the code. Static analysis can be incorporated early during the development phase for analyzing existing, legacy, and third-party source and binaries before incorporating them into your product. As new source is added, incremental analysis can be used in conjunction with configuration management to ensure quality and security throughout. 


Figure 1: The application of various tool classes in the context of the software development lifecycle.

Although adopting any class of tools helps productivity, security, and quality, using a combination of these is recommended. No single class of tools is the silver bullet[1]. The best approach is one that automates the use of a combination of tools from all categories, and that is based on a risk-based rationale for achieving high security within budget.

The role of static analysis tools in a security-first approach

Static analysis tools provide critical support in the coding and integration phases of development. Ensuring continuous code quality, both in the development and maintenance phases, greatly reduces the costs and risks of security and quality issues in software. In particular, it provides some of the following benefits:

  • Continuous source code quality and security assurance: Static analysis is often applied initially to a large codebase as part of its initial integration as discussed below. However, where it really shines is after an initial code quality and security baseline is established. As each new code block is written (file or function), it can be scanned by the static analysis tools, and developers can deal with the errors and warnings quickly and efficiently before checking code into the build system. Detecting errors and vulnerabilities (and maintaining secure coding standards, discussed below) in the source at the source (developers themselves) yields the biggest impact from the tools.
  • Tainted data detection and analysis: Analysis of the data flows from sources (i.e. interfaces) to sinks (where data gets used in a program) is critical in detecting potential vulnerabilities from tainted data. Any input, whether from a user interface or network connection, if used unchecked, is a potential security vulnerability.  Many attacks are mounted by feeding specially-crafted data into inputs, designed to subvert the behavior of the target system. Unless data is verified to be acceptable both in length and content, it can be used to trigger error conditions or worse. Code injection and data leakage are possible outcomes of these attacks, which can have serious consequences. (A short sketch of this source-to-sink pattern follows this list.)
  • Third-party code assessment: Most projects are not greenfield development and require the use of existing code within a company or from a third party. Performing testing and dynamic analysis on a large existing codebase is hugely time consuming and may exceed the limits on the budget and schedule. Static analysis is particularly suited to analyzing large code bases and providing meaningful errors and warnings that indicate both security and quality issues. GrammaTech CodeSonar binary analysis can analyze binary-only libraries and provide similar reports as source analysis when source is not available. In addition, CodeSonar binary analysis can work in a mixed source and binary mode to detect errors in the usage of external binary libraries from the source code. 
  • Secure coding standard enforcement: Static analysis tools analyze source syntax and can be used to enforce coding standards. Various code security guidelines are available such as SEI CERT C [2] and Microsoft's Secure Coding Guidelines [3]. Coding standards are good practice because they prevent risky code from becoming future vulnerabilities. As mentioned above, integrating these checks into the build and configuration management system improves the quality and security of code in the product.
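To illustrate the source-to-sink pattern that taint analysis traces, here is a minimal, hedged sketch in Python; the table name and validation rules are hypothetical, and the same flaw and fix apply equally to C code on an embedded device:

    import sqlite3

    def lookup_device(user_input, db):
        # BAD: tainted data flowing from the source (user_input) straight
        # into the sink (the SQL string) is the classic injection flaw
        # that taint analysis is designed to catch:
        #   db.execute("SELECT * FROM devices WHERE name = '%s'" % user_input)

        # GOOD: check length and content, then bind the value as a parameter.
        if len(user_input) > 64 or not user_input.isalnum():
            raise ValueError("rejected untrusted input")
        cur = db.execute("SELECT * FROM devices WHERE name = ?", (user_input,))
        return cur.fetchall()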

As part of a complete tools suite, static analysis provides key capabilities that other tools cannot. The payback for adopting static analysis is the early detection of errors and vulnerabilities that traditional testing tools may miss. This helps ensure a high level of quality and security on an on-going basis.

Conclusion

Machine-to-machine and IoT device manufacturers that incorporate a security-first design philosophy, with formal threat assessments and automated tools, produce devices better secured against the accelerating threats on the Internet. Modifying an existing, successful software development process to include security at the early stages of product development is key. Smart use of automated tools to develop new code and analyze existing and third-party code allows development teams to meet strict budget and schedule constraints. Static analysis of both source and binaries plays a key role in a security-first development toolset. 

References

  1. Brooks, F., "No Silver Bullet – Essence and Accident in Software Engineering," 1986
  2. SEI CERT C Coding Standard, Software Engineering Institute, Carnegie Mellon University
  3. Microsoft Secure Coding Guidelines
  4. Outsource Code Development Driving Automated Test Tool Market, VDC Research, IoT & Embedded Blog, October 22, 2013

 

Read more…

By Bill Vorhies.

Bill is Editorial Director for our sister site Data Science Central and has practiced as a data scientist and commercial predictive modeler since 2001. This article originally appeared here

Summary:  In this Lesson 2 we continue to provide a complete foundation and broad understanding of the technical issues surrounding an IoT or streaming system so that the reader can make intelligent decisions and ask informed questions when planning their IoT system. 

In Lesson 1:

  • Is it IoT or Streaming
  • Basics of IoT Architecture – Open Source
  • Data Capture – Open Source with Options
  • Storage – Open Source with Options
  • Query – Open Source with Options

In This Article (Lesson 2):

  • Stream Processing – Open Source
  • What Can Stream Processors Do
  • Open Source Options for Stream Processors
  • Spark Streaming and Storm
  • Lambda Architecture – Speed plus Safety
  • Do You Really Need a Stream Processor
  • Four Applications of Sensor Data

In Lesson 3:

  • Three Data Handling Paradigms – Spark versus Storm
  • Streaming and Real Time Analytics
  • Beyond Open Source for Streaming
  • Competitors to Consider
  • Trends to Watch

Continuing from Lesson 1, our intent is to provide a broad foundation for folks who are starting to think about streaming and IoT.  In this lesson we'll dive into Stream Processing, the heart of IoT, then discuss the Lambda architecture, ask whether you really need a Stream Processor, and offer a structure for thinking about what sensors can do.

 

Stream Processing – Open Source

Event Stream Processing platforms are the Swiss Army knives that can make data-in-motion do almost anything you want it to do.

The easiest way to understand ESP architecture is to see it as three layers or functions: input, processing, and output.

 

Input accepts virtually all types of time-based streaming data, and multiple input streams are common.  Within the main ESP processor, a variety of actions called programs or operators are applied.  The results of those programs are passed to the subscriber interface, which can send alerts via human interfaces or trigger machine-automated actions, and can also pass the data to Fast and Forever data stores.

It is true that Stream Processing platforms can receive data streams directly, but recall that they are not good at preserving accidentally lost data, so you will still want a Data Capture front end like Kafka that can rewind and replay lost data.  It's likely that many stream processors will resolve this problem in the near future, at which point you should revisit the need for a Kafka front end.

 

Stream Processing Requirements

The requirements for your stream processor are these:

  • High Velocity:  Capable of ingesting and processing millions of events per second, depending on your specific business need.
  • Scales Easily:  These will all run on distributed clusters.
  • Fault Tolerant:  This is different from guaranteeing no lost data.
  • Guaranteed Processing:  This comes in two flavors: 1) process each event at least once, and 2) process each event only once.  The 'only once' criterion is harder to guarantee.  This is an advanced topic we will discuss a little later.
  • Performs the Programs You Need for Your Application.

 

What Can ESP Programs Do

The real power is in the programs, starting with the ability to do data cleansing on the front end (a kind of mini-MDM), then to duplicate the stream of data multiple times so that each identical stream can be used in different analytic routines simultaneously, without waiting for one to finish before the next begins.  Here's a diagram from a healthcare example used in a previous article that illustrates multiple streams being augmented by static data and processed by different logic types at the same time.  Each block represents a separate program within the ESP that you need to create.

 

There are a very large number of different logic types that can be applied through these ESP programs including:

  • Compute
  • Copy, to establish multiple processing paths – each with different retention periods of say 5 to 15 minutes
  • Aggregate
  • Count
  • Filter – allows you to keep only the data from the stream that is useful and discard the rest, greatly reducing storage.
  • Function (transform)
  • Join
  • Notification (email, text, or multimedia)
  • Pattern detection (specify events of interest, EOIs)
  • Procedure (apply advanced predictive model)
  • Text context – could detect for example Tweet patterns of interest
  • Text Sentiment – can monitor for positive or negative sentiments in a social media stream

There is some variation in what open source and proprietary packages can do, so check the details against what you need to accomplish.  The sketch below illustrates how a couple of these operators behave.
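As a rough illustration only (not any particular ESP product's API), here is how a Filter and a Count operator might look as composable Python generators; the event fields are hypothetical:

    from collections import defaultdict

    def filter_op(stream, low=0.0, high=100.0):
        # Keep only out-of-range readings; discard the rest to cut storage.
        for event in stream:
            if not (low <= event["value"] <= high):
                yield event

    def count_op(stream):
        # Emit a running count of events per sensor.
        counts = defaultdict(int)
        for event in stream:
            counts[event["sensor"]] += 1
            yield event["sensor"], counts[event["sensor"]]

    # Operators compose like the blocks in an ESP diagram:
    #   for alert in count_op(filter_op(sensor_stream)): ...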

 

Open Source Options for Stream Processing

The major open source options (all Apache) are these:

Samza:  A distributed stream processing framework. It uses Kafka for messaging, and YARN to provide fault tolerance, processor isolation, security, and resource management.

NiFi: This is a fairly new project still in incubation.  It is different because of its user-friendly drag-and-drop graphical user interface and the ease with which it can be customized on the fly for specific needs.

Storm:  A well tested event based stream processor originally developed by Twitter.

SPARK Streaming:  SPARK Streaming is one of the components of SPARK, which is the first platform to integrate batch and streaming in a single enterprise-capable stack.

 

SPARK Streaming and Storm Are the Most Commonly Used Open Source Packages

SPARK has been around for several years, but in the last year it has seen an amazing increase in adoption; it is now replacing Hadoop/MapReduce in most new projects, and many legacy Hadoop/MapReduce systems are migrating to SPARK.  SPARK development is headed toward being the only stack you would need for an IoT application.

SPARK consists of five components all of which support Scala, Java, Python, and R.

  1. SPARK:  The core application is a batch processing engine that is compatible with HDFS and other NoSQL DBs.  Its popularity is driven by the fact that it is 10X to 100X faster than Hadoop/MapReduce.
  2. MLlib:  A powerful on-board library of machine learning algorithms for data science.
  3. SPARK SQL:  For direct support of SQL queries.
  4. SPARK Streaming:  Its integrated stream processing engine.
  5. GraphX:  A powerful graph database engine useful outside of streaming applications.

 

Storm, by contrast, is a pure event stream processor.  The differences between Storm and SPARK Streaming are minor except in the area of how they partition the incoming data.  This is an advanced topic discussed later.

If, after you've absorbed the lesson about data partitioning, you determine that this does not impact your application, then SPARK / SPARK Streaming is the most likely open source choice.

 

Lambda Architecture – Speed Plus Safety

The standard reference architecture for an IoT streaming application is known as the Lambda architecture, which incorporates a Speed Layer and a Safety Layer.

The inbound data stream is duplicated by the Data Capture app (Kafka) and sent in two directions, one to the safety of storage, and the other into the Stream Processing platform (SPARK Streaming or Storm).  This guarantees that any data lost can be replayed to ensure that all data is processed at least once.
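A hedged sketch of this duplication using kafka-python, assuming a local broker and a hypothetical 'sensors' topic: two consumer groups read the same topic independently, so one copy feeds the speed layer and the other the safety layer.  In practice each consumer would run in its own process.

    from kafka import KafkaConsumer

    # Each group_id receives every message independently of the other.
    speed_layer = KafkaConsumer("sensors", group_id="speed-layer",
                                bootstrap_servers="localhost:9092")
    safety_layer = KafkaConsumer("sensors", group_id="safety-layer",
                                 bootstrap_servers="localhost:9092")

    # for message in speed_layer: ...   # hand events to the Stream Processor
    # for message in safety_layer: ...  # write events to Forever storage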

 

The queries on the Stream Processing side may be extracting static data to add to the data stream in the Stream Processor or they may be used to send messages, alerts, and data to the consumers via any number of media including email, SMS, customer applications, or dashboards.  Alerts are also natively produced within the Stream Processor.

Queries on the Storage (safety) layer will be batch queries, used for creating advanced analytics to be embedded in the Stream Processor or to answer ad hoc inquiries, for example to develop new predictive models.

 

Do You Really Need a Stream Processor?

As you plan your IoT platform you should consider whether a Stream Processor is actually required.  In scenarios where messages to the end user are required only infrequently, or for certain sensor uses, it may be possible to skip the added complexity of a Stream Processor altogether.

 

When Real Time is Long

When real time is fairly long – for example, when notifying the end user of any new findings can occur only once a day or even less often – it may be perfectly reasonable to process the sensor data in batch.

From an architecture standpoint the sensor data would arrive at the Data Capture app (Kafka) and be sent directly to storage.  Using regular batch processing routines today’s data would be analyzed overnight and any important signals sent to the user the following day.

Batch processing is a possibility where 'real time' is 24 hours or more, and in some cases perhaps as short as 12 hours.  Any shorter than this and Stream Processing becomes more attractive.

It is possible to configure Stream Processing to evaluate data over any time period including days, weeks, and even months but at some point the value of simplifying the system outweighs the value of Stream Processing.

 

Four Applications of Sensor Data

There are four broad applications of sensor data that may also impact your decision as to whether or not to incorporate Stream Processing as illustrated by these examples.

Sensor Direct:  For example, reading the GPS coordinates directly from the sensor and dropping them on to a map can readily create a ‘where’s my phone’ style app.  It may be necessary to join static data regarding the user (their home address in order to limit the map scale) and that could be accomplished external to a Stream Processor using a standard table join or it could be accomplished within a Stream Processor.

Expert Rules:  Without the use of data science, it may be possible to write rules that give meaning to the inbound stream of data.  For example, when combined with the patient’s static data an expert rule might be to summon medical assistance if the patient’s temperature reaches 103°.

Predictive Analytics: The next two applications are both within the realm of data science.  Predictive analytics are used by a data scientist to find meaningful information in the data.

Unsupervised Learning:  In predictive analytics unsupervised learning means applying techniques like clustering and segmentation that don’t require historical data that would indicate a specific outcome.  For example, an accelerometer in your FitBit can readily learn that you are now more or less active than you have been recently, or that you are more or less active than other FitBit users with whom you compare.  Joining with the customer’s static data is a likely requirement to give the reading some context. 

The advantage of unsupervised learning is that it can be deployed almost immediately after the sensor is placed since no long period of time is required to build up training data. 

Some unsupervised modeling will be required to determine the thresholds at which the alerts should be sent.  For example, a message might only be appropriate if the period of change was more than say 20% day-over-day, or more than one standard deviation greater than a similar group of users. 

These algorithms would be determined by data scientists working from batch data and exported into the Stream Processor as a formula to be applied to the data as it streams by.
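For illustration, such an exported formula can be as simple as the following hedged sketch; the 20% day-over-day and one-standard-deviation thresholds come from the example above, while the function and field names are hypothetical:

    import statistics

    def should_alert(today, yesterday, peer_counts):
        # Threshold 1: more than a 20% day-over-day change.
        if yesterday and abs(today - yesterday) / yesterday > 0.20:
            return True
        # Threshold 2: more than one standard deviation above the peer group.
        mu = statistics.mean(peer_counts)
        sd = statistics.stdev(peer_counts)
        return today > mu + sd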

Supervised Learning:  Predictive models are developed using training data in which the outcome is known.  This requires some examples of the behavior or state to be detected and some examples where that state is not present. 

For example we might record the temperature, vibration, and power consumption of a motor and also whether that motor failed within the next 12 hours following the measurement.  A predictive model could be developed that predicts motor failure 12 hours ahead of time if sufficient training data is available. 

The model in the form of an algebraic formula (a few lines of C, Java, Python, or R) is then exported to the Stream Processor to score data as it streams by, automatically sending alerts when the score indicates an impending failure. 

The benefits of sophisticated predictive models used in Stream Processing are very high.  The challenge may be in gathering sufficient training data if the event is rare as a percentage of all readings or rare over time meaning that much time may pass before adequate training data can be acquired.
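The batch-side training step might look like the following hedged scikit-learn sketch; the file name, the columns, and the 'failed_within_12h' label are hypothetical stand-ins for your Forever-store extract:

    import pandas as pd
    from sklearn.linear_model import LogisticRegression

    history = pd.read_csv("motor_history.csv")   # extract from Forever storage
    X = history[["temperature", "vibration", "power"]]
    y = history["failed_within_12h"]             # known outcomes = training labels

    model = LogisticRegression().fit(X, y)

    # The "few lines of code" exported to the Stream Processor are just
    # the learned coefficients and intercept:
    print(dict(zip(X.columns, model.coef_[0])), model.intercept_[0])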

Watch for our final installment, Lesson 3.

Read more…

By Bill Vorhies.

Bill is Editorial Director for our sister site Data Science Central and has practiced as a data scientist and commercial predictive modeler since 2001. This article originally appeared here

Summary: This is the first in a series of articles aimed at providing a complete foundation and broad understanding of the technical issues surrounding an IoT or streaming system so that the reader can make intelligent decisions and ask informed questions when planning their IoT system. 

In This Article (Lesson 1):

  • Is it IoT or Streaming
  • Basics of IoT Architecture – Open Source
  • Data Capture – Open Source with Options
  • Storage – Open Source with Options
  • Query – Open Source with Options

In Lesson 2:

  • Stream Processing – Open Source
  • What Can Stream Processors Do
  • Open Source Options for Stream Processors
  • Spark Streaming and Storm
  • Lambda Architecture – Speed plus Safety
  • Do You Really Need a Stream Processor
  • Four Applications of Sensor Data

In Lesson 3:

  • Three Data Handling Paradigms – Spark versus Storm
  • Streaming and Real Time Analytics
  • Beyond Open Source for Streaming
  • Competitors to Consider
  • Trends to Watch

In talking to clients and prospects who are at the beginning of their IoT streaming projects, it's clear that there are a lot of misunderstandings and gaps in their knowledge.  You can find hundreds of articles on IoT, and inevitably they focus on some portion of the whole without an overall context or foundation.  This is understandable since the topic is big and far-ranging, not to mention fast-changing. 

So our intent is to provide a broad foundation for folks who are starting to think about streaming and IoT.  We’ll start with the basics and move up through some of the more advanced topics, hopefully leaving you with enough information to then begin to start designing the details of your project or at least equipped to ask the right questions.

Since this is a large topic, we’ll spread it out over several articles with the goal of starting with the basics and adding detail in logical building blocks.

 

Is It IoT or Is It Streaming?

The very first thing we need to clear up for beginners is the nomenclature.  You will see the terms "IoT" and "Streaming" used to mean different things as well as parts of the same thing.  Here's the core of the difference:  if the signal derives from sensors, it's IoT (Internet of Things).  The problem is that there are plenty of situations where the signal doesn't come from sensors but is handled in essentially the same way.  Web logs, click streams, streams of text from social media, and streams of stock prices are examples of non-sensor streams that are therefore not "IoT".

What they share, however, is that all are data-in-motion streams.  Streaming is really the core concept, and we could just as easily have called this "Event Stream Processing", except that focusing on streaming leaves out several core elements of the architecture, such as how we capture the signal, store the data, and query it.

In terms of the architecture, the streaming part is only one of the four main elements we’ll discuss here.  Later we’ll also talk about the fact that although the data may be streaming, you may not need to process it as a stream depending on what you think of as real time.  It’s a little confusing but we promise to clear that up below.

The architecture needed to handle all types of streaming data is essentially the same regardless of whether the source is specifically a sensor or not, so throughout we're going to refer to this as "IoT Architecture".  And since this is going to be a discussion that focuses on architecture, if you're still unclear about streaming in general you might start with these overviews: "Stream Processing – What Is It and Who Needs It" and "Stream Processing and Streaming Analytics – How It Works".

 

Basics of IoT Architecture – Open Source

Open source in Big Data has become a huge driver of innovation.  So much so that probably 80% of the information available on-line deals with some element or package for data handling that is open source.  Open source is also almost completely synonymous with the Apache Software Foundation.  So to understand the basics of IoT architecture we're going to start by focusing on open source tools and packages.

If you're at all familiar with IoT you cannot have avoided learning something about SPARK and Storm, two of the primary Apache open source streaming projects, but these are only part of the overall architecture.  Later in this series we'll turn our attention to the emerging proprietary, non-open source options and why you may want to consider them.

Your IoT architecture will consist of four components: Data Capture, Stream Processing, Storage, and Query.  Depending on the specific packages you choose some of these may be combined but for this open source discussion we’ll assume they’re separate.

 

Data Capture – Open Source

Think of the Data Capture component as the catcher's mitt for all your incoming sources, be they sensor, web streams, text, image, or social media.  The Data Capture application needs to:

  1. Be able to capture all your data as fast as it’s coming from all sources at the same time.  In digital advertising bidding for example this can easily be 1 million events per second.  There are applications where the rate is even higher but it’s unlikely that yours will be this high.  However, if you have a million sensors each transmitting once per second you’re already there.
  2. Must not lose events.  Sensor data is notoriously dirty.  This can be caused by malfunction, age, signal drift, connectivity issues, or a variety of other network, software and hardware issues.  Depending on your use case you may be able to stand some data loss but our assumption is that you don’t want to lose any.
  3. Scale Easily:  As your data grows, your data capture app needs to keep up.  This means that it will be a distributed app running on a cluster as will all the other components discussed here.

Streaming data is time series so it arrives with at least three pieces of information: 1.) the time stamp from its moment of origination, 2.) sensor or source ID, and 3.) the value(s) being read at that moment.

Later you may combine your streaming data with static data, for example about your customer, but that happens in another component.

 

Why Do You Need a Message Collector At All?

Many of the Stream Processing apps including SPARK and Storm can directly ingest messages without a separate Message Collector front end.  However, if a node in the cluster fails they can’t guarantee that the data can be recovered.  Since we assume your business need demands that you be able to save all the incoming data, a front end Message Collector that can temporarily store and repeat data in the case of failure is considered a safe architecture.

 

Open Source Options for Message Collectors

In open source you have a number of options.  Here are some of the better known Data Collectors.  This is not an exhaustive list.

  • FluentD – A general-purpose, multi-source data collector.
  • Flume – A large-scale log aggregation framework; part of the Hadoop ecosystem.
  • MQ (e.g. RabbitMQ) – A number of lightweight message brokers in the lineage of the original IBM MQ (message queuing), which also gave rise to the MQTT (message queuing telemetry transport) protocol.
  • AWS Kinesis – The other major cloud services also offer comparable data collectors.
  • Kafka – A distributed publish-subscribe queuing system for large amounts of streaming data.

 

Kafka is Currently the Most Popular Choice

Kafka is not your only choice, but it is far and away today's most common choice, used by LinkedIn, Netflix, Spotify, Uber, and Airbnb, among others.

Kafka is a distributed messaging system designed to tolerate hardware, software, and network failures and to allow segments of failed data to be essentially rewound and replayed, providing the needed safety in your system.  Kafka came out of LinkedIn in 2011 and is known for its ability to handle very high throughput rates and to scale out.

If your stream of data needed no other processing, it could be passed directly through Kafka to a data store.
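For a sense of what that front end looks like in code, here is a minimal kafka-python producer sketch; the broker address, topic name, and event fields are illustrative.  Note the three pieces of time-series information called out above:

    import json
    import time
    from kafka import KafkaProducer

    producer = KafkaProducer(
        bootstrap_servers="localhost:9092",
        value_serializer=lambda v: json.dumps(v).encode("utf-8"),
    )

    # A time-series event: origination timestamp, sensor ID, and the reading.
    event = {"ts": time.time(), "sensor_id": "pump-17", "value": 73.2}
    producer.send("sensors", event)
    producer.flush()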

 

Storage – Open Source

Here’s a quick way to do a back-of-envelope assessment of how much storage you’ll need.  For example:

Number of Sensors:  1 Million
Signal Frequency:  Every 60 seconds
Data packet size:  1 KB
Events per sensor per day:  1,440
Total events per day:  1.44 Billion
Events per second:  16,667
Total data size per day:  1.44 TB
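The same arithmetic as a few lines of Python, so you can plug in your own sensor counts and packet sizes:

    # Back-of-envelope storage sizing (mirrors the figures above).
    sensors = 1000000
    interval_s = 60          # one reading per sensor per minute
    packet_kb = 1

    events_per_sensor_day = 24 * 3600 // interval_s      # 1,440
    events_per_day = sensors * events_per_sensor_day     # 1.44 billion
    events_per_second = events_per_day / (24.0 * 3600)   # ~16,667
    tb_per_day = events_per_day * packet_kb / 1e9        # ~1.44 TB

    print(events_per_day, int(events_per_second), tb_per_day)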

 

Your system will need two types of storage, ‘Forever’ storage and ‘Fast’ storage.

Fast storage is for real-time lookup after the data has passed through your streaming platform, or even while it is still resident there.  You might need to query Fast storage in just a few milliseconds to add data and context to the stream flowing through your streaming platform: for example, the min, max, or average readings for sensor X over the last 24 hours or the last month.  How long you hold data in Fast storage will depend on your specific business need.

Forever storage isn't really forever; you'll need to assess exactly how long you want to hold on to the data.  It could be forever, or it could be a matter of months or years.  Forever storage will support your advanced analytics and the predictive models you'll implement to create signals in your streaming platform, and it will serve general ad hoc batch queries.

RDBMS is not going to work for either of these needs, based on speed, cost, and scale limitations.  Both of these stores are going to be some version of NoSQL.

 

Cost Considerations

In selecting your storage platforms you’ll be concerned about scalability and reliability, but you’ll also be concerned about cost.  Consider this comparison drawn from Hortonworks:

 

For on-premise storage, a Hadoop cluster will be both the lowest-cost and the best scalability/reliability option.  Cloud storage, also based on Hadoop, is now approaching 1¢ per GB per month from Google, Amazon, and Microsoft.

 

Open Source Options for Storage

Once again we have to pause to explain nomenclature, this time about “Hadoop”.  Many times, indeed most times that you read about “Hadoop” the author is speaking about the whole ecosystem of packages that are available to run on Hadoop. 

Technically, however, Hadoop consists of three elements that are the minimum requirements for it to operate as a database: HDFS (the Hadoop file system – how the data is stored), YARN (the scheduler), and Map/Reduce (the query system).  "Hadoop" (the three-component database) is good for batch queries but has recently been largely overtaken in new projects by SPARK, which runs on HDFS and has a much faster query method. 

What you should really focus on is the HDFS foundation.  There are alternatives to HDFS, such as S3 and MongoDB, and these are viable options.  However, almost universally what you will encounter are NoSQL database systems based on HDFS.  These options include:

  • Hbase
  • Cassandra
  • Accumulo
  • SPARK
  • And many others.

We said earlier that RDBMS is non-competitive based on many factors, not the least of which is that the requirement for schema-on-write is much less flexible than the NoSQL schema-on-read (late schema).  However, if you are committed to RDBMS you should examine the new entries in NewSQL, which are RDBMS with most of the benefits of NoSQL.  If you're not familiar, try one of these refresher articles here, here, or here.

 

Query – Open Source

The goal of your IoT streaming system is to be able to flag, in real time, certain events that your customer/user will find valuable.  At any given moment your system will contain two types of data: 1) data in motion, as it passes through your stream processing platform, and 2) data at rest, some of which will be in Fast storage and some in Forever storage.

There are two types of activity that will require you to query your data:

Real time outputs:  If your goal is to send an action message to a human or a machine, or if you are sending data to a dashboard for real-time update, you may need to enhance your streaming data with stored information.  One common type is static user information.  For example, adding static customer data to the data stream while it is passing through the stream processor can enhance the predictive power of the signal.  A second type might be signal enhancement.  For example, if your sensor is reporting the current reading from a machine, you might need to compare that to the average, min, max, or other statistical variations from that same sensor over time periods ranging from the last minute to the last month.

These data are going to be stored in your Fast storage and your query needs to be completed within a few milliseconds.

Analysis Queries:  It’s likely that your IoT system will contain some sophisticated predictive models that score the data as it passes by to predict human or machine behavior.  In IoT, developing predictive analytics remains the classic two step data science process: first analyze and model known data to create the predictive model, and second, export that code (or API) into your stream processing system so that it can score data as it passes through based on the model.  Your Forever data is the basis on which those predictive analytic models will be developed.  You will extract that data for analysis using a batch query that is much less time sensitive.

Open Source Options for Query

In the HDFS Apache ecosystem there are three broad categories of query options.

  1. Map/Reduce:  This method is one of the three legs of a Hadoop Database implementation and has been around the longest.  It can be complex to code though updated Apache projects like Pig and Hive seek to make this easier.  In batch mode, for analytic queries where time is not an issue Map/Reduce on a traditional Hadoop cluster will work perfectly well and can return results from large scale queries in minutes or hours.
  2. SPARK:  Based on HDFS, SPARK has started to replace Hadoop Map/Reduce because it is 10X to 100X faster at queries (depending on whether the data is on disk or in memory).  Particularly if you have used SPARK in your streaming platform, it will make sense to also use it for your real time queries.  Latencies in the millisecond range can be achieved, depending on memory and other hardware factors.
  3. SQL:  Traditionally the whole NoSQL movement was named after database designs like Hadoop that could not be queried by SQL.  However, so many people were fluent in SQL and not in the more obscure Map/Reduce queries that there has been a constant drumbeat of development aimed at allowing SQL queries.  Today, SQL is so common on these HDFS databases that it’s no longer accurate to say NoSQL.  However, all these SQL implementations require some sort of intermediate translator so they are generally not suited to millisecond queries.  They do however make your non-traditional data stores open to any analysts or data scientists with SQL skills.

Watch for Lessons 2 and 3 in the next weeks.

Read more…


By Akeel Al-Attar. This article first appeared here

Automated analytics (which can also be referred to as machine learning, deep learning, etc.) is currently attracting the lion's share of interest from investors, consultants, journalists and executives looking at technologies that can deliver the business opportunities being afforded by the Internet of Things. The reason for this surge in interest is that the IOT generates huge volumes of data from which analytics can discover patterns, anomalies and insights, which can then be used to automate, improve and control business operations.


One of the main attractions of automated analytics appears to be the perception that it can learn from data automatically, without the need to program any rules. Furthermore, it is perceived that the IOT will allow organisations to apply analytics to data being generated by any physical asset or business process, and thereafter to use automated analytics to monitor asset performance, detect anomalies and generate problem-resolution / trouble-shooting advice; all without any programming of rules!

In reality, automated analytics is a powerful technology for turning data into actionable insight / knowledge and thereby represents a key enabling technology for automation in Industrial IOT. However, automated analytics alone cannot deliver complete solutions for the following reasons:

i- In order for analytics to learn effectively it needs data that spans the spectrum of normal, sub-normal and anomalous asset/process behaviour. Such data can become available relatively quickly in a scenario where there are tens or hundreds of thousands of similar assets (central heating boilers, mobile phones etc.). However, this is not the case for more complex equipment / plants / processes, where the volume of available fault or anomalous-behaviour data is simply not large enough to facilitate effective analytics learning/modelling. As a result, any generated analytics will be very restricted in scope and will flag as anomalous a large number of operating conditions that simply do not appear in the data.

ii- By focussing on data analytics alone we are ignoring the most important asset of any organisation; namely, the expertise of its people in how to operate plants / processes. This expertise covers condition / risk assessment, planning, configuration, diagnostics, trouble-shooting and other skills that involve decision-making tasks. Automating decision making and applying it to streaming real-time IOT data offers huge business benefits and is very complementary to automated analytics, in that it addresses the very areas in point (i) above where data coverage is incomplete but human expertise exists.

Capturing expertise in an automated decision-making system does require the programming of rules and decisions, but that need not be lengthy or cumbersome with a modern rules/decision automation technology such as XpertRule. Decision-making tasks can be represented in a graphical way that a subject matter expert can easily author and maintain without the involvement of a programmer. This can be done using graphical and easy-to-edit decision flows, decision trees, decision tables and rules. From my experience with this approach, a substantial decision-making task of tens of decision trees can be captured and deployed within a few weeks.
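As a flavour of what such captured expertise looks like once deployed, here is a hedged sketch of a single diagnostic rule as code; the thresholds and field names are hypothetical, loosely modelled on the powder-mill example below:

    def diagnose(reading):
        # One branch of an expert-authored fault diagnosis tree.
        if reading["particle_size_um"] > 150:        # output too coarse
            if reading["feed_rate"] > reading["feed_rate_setpoint"]:
                return "Reduce feed rate"
            if reading["classifier_rpm"] < 900:
                return "Check classifier speed"
            return "Inspect grinding media for wear"
        return "Normal operation"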

Given the complementary nature of automated analytics and automated decisions, I would recommend the use of symbolic learning data analytics techniques. Symbolic analytics generates rules/tree structures from data which are interpretable and understandable to the domain experts. Whilst rules/tree analytics models are marginally less accurate than deep learning or other 'black-box' models, the transparency of symbolic data models offers a number of advantages:

i- The analytics models can be validated by the domain experts
ii- The domain experts can add additional decision knowledge to the analytics models
iii- The transparency of the data models gives the experts insights into the root causes of problems and highlights opportunities for performance improvement.

Combining automated knowledge from data analytics with automated decisions from domain experts can deliver a paradigm shift in the way organisations use IOT to manage their assets / processes. It allows organisations to deploy their best practice expertise 24/7 real time throughout the organisation and rapidly turn newly acquired data into new and improved knowledge.

Below are example decision and analytics knowledge from an industrial IOT solution that we developed for a major manufacturer of powder processing mills. The solution monitors the performance of the mills to diagnose problems and to detect anomalous behaviour:

The Fault diagnosis tree below is part of the knowledge captured from the subject matter experts within the company

[Figure: fault diagnosis tree]



The tree below is generated by automated data analytics and relates the output particle size to other process parameters and environmental variables. The tree is one of many analytics models used to monitor anomalous behaviour of the process.

[Figure: decision tree generated by automated data analytics]



The above example demonstrates both the complementary nature of rules and analytics automation and the interpretability of symbolic analytics. In my next posting I will cover the subject of the rapid capture of decision making expertise using decision structuring and the induction of decision trees from decision examples provided by subject matter experts.

Read more…


By Ben Dickson. This article originally appeared here.

A recent DDoS attack staged against a brick-and-mortar jewelry store highlights just how devastating negligence of IoT security can become. The attack, as reported by SC Magazine, involved a 35,000 HTTP-request-per-second flood carried out by an IoT botnet of more than 25,000 compromised CCTV cameras scattered across the entire globe, causing the shop's servers to go down.

As detailed by cybersecurity firm Sucuri, the attack is unusual because it used only IoT devices and because of its uncommonly lengthy duration. After the initial wave, when the servers were brought back online, a second, bigger attack of 50,000 HTTP requests per second was conducted, which lasted for several days.

A separate report by Computer Weekly details how the LizardStresser malware is creating IoT botnets by exploiting vulnerable devices, and is mounting massive 400 gigabits-per-second DDoS attacks without using amplification techniques.

This is just a glimpse of the opportunities that the Internet of Insecure Things is providing for malicious actors who are always looking for new ways to break into networks to defraud organizations of their cash and valuable assets, or to harm opponents and competitors.

You’ve been warned about IoT botnets before

While the rise in DDoS attacks based on IoT botnets is new, it wasn’t unexpected. In fact, after 2015 became the year of proof-of-concept attacks against the Internet of Things, it was predicted that IoT devices would become a very attractive target for bot herders in 2016.

As Dark Reading’s Ericka Chickowski said in this post, “2016 is going to be the year that attackers make a concerted effort to turn the Internet of Things (IoT) into the Botnet of Things.”

Researchers from Incapsula first warned about IoT botnets last year, after detailing an attack they discovered and traced back to CCTV cameras at a retail store close to their office. And with insecure IoT devices coming online at a chaotic pace, hackers have good reason to give up on general-purpose computing devices, such as desktop and laptop computers, and go after easier targets.

What makes IoT devices such easy prey for botnet malware?

There are many reasons that IoT devices – and in this case CCTVs – make very attractive targets for bot herders. As Igal Zeifman, senior digital strategist from Imperva, detailed in the Incapsula blog post, “Security cameras are among the most prevalent and least protected IoT devices. Moreover, many have high upload connections, meant to support their remote streaming functionality.”

What makes it easy to conscript CCTVs – and other IoT devices, for that matter – into botnets? According to Chris Hodson, CISO for the EMEA region at cloud security company Zscaler, who spoke with SC Magazine, it’s because the security development lifecycle for IoT devices is often expedited or bypassed under the pressure of time-to-market deadlines and hardware cost constraints.

This is a point I’ve raised on several occasions: one of the fundamental problems with IoT security is that developers often come from an unconnected background, such as embedded systems, which means they have the know-how to deliver functionality but aren’t versed in the principles of writing secure code for connected environments. In other cases, security is knowingly neglected for the sake of meeting release deadlines or cost requirements.

Researchers at Arbor Networks attributed the prevalence of IoT botnet malware to four factors:

  • The operating system of IoT devices is usually a stripped-down version of Linux, which means malware can easily be compiled for the target architecture.
  • IoT devices usually have full access to the internet and aren’t subject to bandwidth limitations or filtering – which is especially true of CCTVs.
  • The minimal operating systems running on IoT devices don’t leave much room for security features such as auditing, which lets attackers compromise and exploit the devices without leaving a trace.
  • There’s a lot of hardware and software reuse in IoT development, which means many security-critical components are shared across devices. (Just take a look at the “House of Keys” research by SEC Consult, which shows how the reuse of HTTPS certificates and SSH keys endangers millions of devices.)

The part that should concern consumers is the carelessness with which IoT device security is treated. Since IoT devices aren’t as personal as, say, smartphones or PCs, users tend to “install and forget” them. Bad practices such as never changing passwords – or worse, leaving devices running with factory-default passwords – are epidemic in IoT ecosystems, which makes it very easy to gain administrative access to a device and install botnet malware on it.
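On the defensive side, here is a minimal, hypothetical audit sketch for flagging devices that still accept factory-default credentials. The admin endpoint, the credential list and the addresses are illustrative assumptions; real cameras vary, and this should only ever be run against hardware you administer.

```python
# Hedged sketch: audit your own devices for factory-default credentials.
# The endpoint, credential list and addresses are illustrative assumptions.
import requests

DEFAULT_CREDS = [("admin", "admin"), ("admin", "12345"), ("root", "root")]

def accepts_default_password(host):
    """Return the first default credential pair the device accepts, if any."""
    for user, password in DEFAULT_CREDS:
        try:
            resp = requests.get(f"http://{host}/", auth=(user, password), timeout=5)
        except requests.RequestException:
            return None  # host unreachable; stop probing it
        if resp.status_code == 200:
            return (user, password)
    return None

for host in ["192.168.1.64", "192.168.1.65"]:  # cameras you administer
    creds = accepts_default_password(host)
    if creds:
        print(f"{host} still accepts default credentials {creds}: change them")
```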

What can be done about IoT botnets?

I just wanted to raise the challenge of IoT botnets in this post; the response will be the subject of a future article. Very briefly, though, a lot can be done to mitigate the threat of IoT botnets. For one thing, security should become a major factor in IoT development. As Cesare Garlati, chief security strategist at the prpl Foundation, told SC Magazine, “The very fact that patching isn’t high on the priority list for admins is testament to why security in devices like CCTV cameras needs to be ‘baked in’ at the chip or hardware layer.”

We’ve already seen the effectiveness of hardware security in the headaches that Apple gave the FBI in the San Bernardino iPhone case. Having devices that are secure at the hardware level will go a long way toward hardening our defenses against exploits, including IoT botnets.

Moreover, we should recognize that some IoT devices can’t be secured at the device level and therefore must be secured at the network level. Deploying network security solutions, like the ones I’ve described in this TNW article, can help a lot in protecting inherently insecure devices from being conscripted into IoT botnets.
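As a rough sketch of what that network-level layer can look like (the thresholds and the connection feed are assumptions, not any product’s API), a monitor can count outbound connections per device and flag sources whose rate suggests they have been drafted into a botnet:

```python
# Hedged sketch: network-level watchdog for devices that can't be secured
# on-device. Window length and connection budget are illustrative values.
import time
from collections import defaultdict, deque

WINDOW_S = 10.0   # sliding window, in seconds
MAX_CONNS = 200   # outbound connections allowed per device per window

recent = defaultdict(deque)  # source IP -> timestamps of recent connections

def record_connection(src_ip, now=None):
    """Record one outbound connection; True means src_ip looks abusive."""
    now = time.time() if now is None else now
    stamps = recent[src_ip]
    stamps.append(now)
    while stamps and now - stamps[0] > WINDOW_S:
        stamps.popleft()
    return len(stamps) > MAX_CONNS

# Example: a compromised camera suddenly opening hundreds of connections.
for _ in range(250):
    flagged = record_connection("192.168.1.64")
print("quarantine 192.168.1.64?", flagged)
```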

These are just two ways of fighting back against the rising tide of IoT botnets. I’m sure many of you readers have ideas and innovations that can help deal with this situation. Since I’ll be writing about this very soon, I’m eager to know what you’re doing to counter the IoT botnet threat. Leave a comment, or better yet contact me, to share your ideas.

FEATURED IMAGE: SAVASYLAN/SHUTTERSTOCK

Read more…
