Hello and welcome. My name is Shannon Kemp and I'm the Executive Editor of Data Diversity. We'd like to thank you for joining the current installment of the monthly Data Diversity Smart Data Webinar Series with host Adrienne Bowles. Today Adrienne will be discussing Sense and Sensors: From Perception to Personality. Just a couple of points to get us started. Due to the large number of people attending these sessions, you will be muted during the webinar. For questions, we'll be collecting them via the Q&A in the bottom right-hand corner of your screen. Or if you'd like to tweet, we encourage you to share highlights and questions via Twitter using hashtag #smartdata. If you'd like to chat with us or with each other, we certainly encourage you to do so; just click the chat icon in the top right for that feature. As always, we will send a follow-up email within two business days containing links to the slides, the recording of the session, and additional information requested throughout the webinar.

Now let me introduce our series speaker for today, Adrienne. Adrienne is an industry analyst and recovering academic (I love that), providing research and advisory services for buyers, sellers, and investors in emerging technology markets. His coverage areas include cognitive computing, big data analytics, the Internet of Things, and cloud computing. Adrienne coauthored Cognitive Computing and Big Data Analytics (Wiley, 2015) and is currently writing a book on the business and societal impact of these emerging technologies. Adrienne earned his BA in psychology and an MS in computer science from SUNY Binghamton, and his PhD in computer science from Northwestern University. And with that, I will hand the floor to Adrienne to get us started with today's webinar.

Hello and welcome. Thank you, Shannon. It's always good to be here. I was saying to people before this started that this is really kind of a fun topic, getting into perception and personality and things like that. So I do want to thank Shannon for allowing me to do this as a two-hour webinar today. Just checking to see if you're still there. Okay, they wouldn't give me two hours. So we're going to try and do it all in 40 minutes and leave time for Q&A. I'll start by getting some definitions out there and then get into the real meat of the presentation.

When we talk about sensors, we're talking about any sort of device that detects the presence of a signal and reports it to something else. We all probably use sensors every day, or are impacted by them, whether we think of them as sensors or not; just use that as a simple definition. A sensor is looking for the presence of some signal, or a change in a signal. These things are event-driven, and they can be either stationary (you might have something that's counting cars as they go past a traffic light) or mobile (we have lots of sensors in our smartphones). We're going to talk about that quite a bit today, using the smartphone as the typical platform for sensors.

The first thing I want to do is establish that you can't really consider sensors in isolation. You're looking at three things; there's my little mnemonic device, and it's SAD: sensors, algorithms, and data. You have to have the algorithms and the data to make the sensors have any value at all. And in each of the cases that we're going to look at today, basically we have a sensor.
We've got some input signal; either the sensor itself will strip out the noise and detect the signal, or there could be a second processor out there. But basically, by the time the sensor is done processing the signal, it's feeding data (refined, if you will) into an algorithm, and the output of the algorithm is where you get your results: something that's an actionable insight, hopefully. And you can substitute analytics here; I'm always talking about algorithms. What we're looking at is a way of taking data that's typically time-sensitive (the time element, when something occurred, is important) and trying to act on it, often in real time, although that's not a requirement.

You'll see a lot of numbers being thrown around in the press: there are billions or trillions of devices. If you're new to this series, I typically try to bring you some of the relevant news items as we're kicking off, and here's one from just last week, where Ericsson says that by 2018 Internet of Things devices (the Internet of Things is all sensor-based, and we've talked about that in a previous webinar) are going to overtake mobile phones as the largest category of connected device. Whether or not 2018 turns out to be the year, the important thing is that we're looking at billions and billions of devices. And I'm going to make a distinction as I go through the slides today between a device and a sensor, because a device may have more than one sensor (typically you wouldn't have more than one device per sensor). A device could be a phone, and we'll see there are half a dozen different sensors in a typical smartphone. So if we're going to have 28 billion devices, each with perhaps multiple sensors, we're talking about a really big number.

And since this is in the Smart Data series, you probably won't be surprised that the way we want to position these is as tools: technology-based tools that are going to help you leverage data and create value from it. The purpose of this slide is just to set the stage for how many there are. If we're thinking about 16 to 28 billion devices, it doesn't matter whether you're talking 16 or 28; that's a lot of devices. We're talking about more devices than people. So people are interacting with multiple devices, and systems are talking to each other. What I want to get into today as much as possible is the interface between these devices and people, and we'll talk a little bit about machine-to-machine communication.

With that in mind, even though we're going to be looking at some fairly sophisticated usages and applications, I wanted to show a quick scan of Internet of Things starter kits. If you go online after this conversation and think, hmm, that's interesting, maybe I'll try it: a lot of folks in the computer industry tend to start with a device called a Raspberry Pi, but there are kits out there starting in the $20 or $30 range and going up to a couple hundred dollars. You can start to put something online, look at it as a sensor or a collection of sensors, and then start to communicate. So even though we'll get into some things that are pretty large and all-encompassing, you don't need to be dealing with that scale to get started.
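That sensors-algorithms-data loop is easy to make concrete in a few lines of code. Here is a minimal sketch, with a made-up smoothing window, threshold, and simulated readings, of an event-driven pipeline: a noisy signal comes in, a sensor layer strips out the noise, and a simple algorithm turns the refined data into an actionable event, such as counting a passing car.

```python
# Minimal sketch of the sensor -> algorithm -> data (SAD) pipeline:
# a noisy signal is smoothed, then an event is detected and reported.
from collections import deque

class Sensor:
    """Wraps a raw reading source and strips out noise with a moving average."""
    def __init__(self, window=5):
        self.readings = deque(maxlen=window)

    def read(self, raw_value):
        self.readings.append(raw_value)
        return sum(self.readings) / len(self.readings)  # smoothed signal

def detect_event(smoothed, threshold=50.0):
    """The 'algorithm': turn a refined signal into an actionable insight."""
    return smoothed > threshold

# Simulated stream: quiet, then something drives past the sensor.
stream = [2, 3, 1, 2, 60, 65, 70, 62, 3, 2]
sensor = Sensor()
for t, raw in enumerate(stream):
    if detect_event(sensor.read(raw)):
        print(f"t={t}: event detected")  # e.g., count a passing car
```

The split matters: the sensor layer only cleans the signal, and everything of value happens in the algorithm that consumes it, which is the point of the SAD mnemonic.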
This one from Credit Suisse I thought was interesting, because when people look at connected devices, when we talk about the wearables industry, it's really pretty variable. I'm going to use a Fitbit as an example in a minute. But the range of devices out there, the types of data being collected, and the ways the data is being used are what we want to get at today, because the opportunity you'll find now is so much richer, if you will, than just a couple of years ago.

With that, I always have to bring in popular culture. If you've ever seen the TV show Person of Interest, which I guess is about to finish its run, this is an example of what some might call the ultimate scenario and others might call sensor-based technology run amok: their premise was that all of the closed-circuit TV cameras in the world are basically connected. I'm not going to dispute that, because I've seen examples where things you wouldn't expect to be connected are connected. But there's a real network effect when you have something like this and one system starts talking to another. The folks who come to our webinars are generally technology-oriented; you're looking at how you're going to create a new competitive advantage using technology. This is maybe the extreme case, where you could connect everything, but the reality is that with almost any sensor there are opportunities to go beyond the small network you start with.

So to put it in context for IT, for enterprise systems and for consumer systems, let's look at the actual interface. And here's about the simplest one. I want to make the point that how we imagine and implement a system's user interface, and what parameters we allow, really limits or constrains the types of applications we can build: the user interface constrains the human-computer interaction. This is about the simplest device there is; there are no parameters going in. It's just a meter. I was telling Shannon before the call that I stuck it in my son's car over the weekend when his fuel gauge failed; it's a calibrated ohmmeter that is now acting as the fuel gauge. It's only got one signal in, you can't change that, so there's only one type of system that's going to use its output.

If we have a more complicated system... this happens to be a picture of the original IBM Watson system that I took at the Watson labs. Where sensors come in is that you've got input devices and output devices, and by devices I just mean anything that creates a signal, which can be humans or machines. On the input side, we can be producing input that has to be interpreted or put into a form the machine can use; that upper-left-hand quadrant is where we're going to focus today. But thinking ahead, when we're dealing with humans, the output is also something we want to make a rich environment, if you will. If you've seen Watson on Jeopardy, just as an example, or seen the ad where Bob Dylan talks to Watson (I'm going to come back to that in a minute), you know that one of the features of this system is that the input goes beyond the traditional batch, command line, GUI, et cetera: it will now take natural language in speech form.
We're going to look at the difference between natural language in speech and natural language in text, and then, as we get into more sophisticated types of input, at systems that can process gestures and read emotions. Today we're going to focus on those as input, but I'll give you a little hint as to how systems are going to use gestures and emotions to display a range of desired outputs. I don't want to say that the machine has emotions; I'm certainly not positing that at all. They don't. But if it's appropriate for the system to interact with the user by demonstrating some affect, then I think we're going to see the interaction change dramatically.

So having said that, a couple of examples. My 2x2 matrix is the complexity of the input data versus the type of sensor. Stationary sensors stay in one place and the events happen around them. A low-complexity signal there would be something that's simply doing a count: you drive over a strip, or a light beam is broken as the car goes past. That's very simple; there's only one way to interpret it. A low-complexity signal from a mobile device might be a Fitbit: maybe it's monitoring your pulse, maybe blood pressure or acceleration. There are going to be a couple of sensors in some of these, but typically we're dealing with fairly crude technology, if you will. As you move up the stack, high complexity for a stationary device would be the video stream from closed-circuit TV. Now, closed-circuit TV has been around for a very long time; if you've been a regular visitor to London, you know they've been monitoring just about every part of the city for many years. What's changed is what we can now do with that video signal. It was always there; the rich data was always there. But now the systems analyzing that feed can do much more with it, and we're going to see an example or two of that. On the mobile side there's telematics, and we'll look at how things are starting to change in the automobile industry as devices and sensors go in; you can think of the device as the automobile itself, or maybe just a subsystem that's being monitored in real time. And at the high end here we have the smartphone. Certainly we're going to see more and more sensors, and more and more data being collected by smartphones, but right now they're really the high end of what we can do with a mobile device.

I'm going to move for a second into what I think of as the market signals that give me confidence in the predictions I'm going to make, with a look at just a couple of news items. Here we've got Google unveiling Google Assistant, a virtual assistant that's a big upgrade to Google Now. If you've been following the technology over the last couple of years, you probably saw things like Google Glass come and go; that was a device with multiple sensors in it, and we could spend an hour on Google Glass alone. But the idea is that they're experimenting; they're putting things out there. And Google, as you probably know, will in most cases know more about you than your parents or your children. It's very difficult to go through a week without interacting with some app that feeds information into Google. So here we've got a virtual assistant.
And one of the things we'll see with a lot of these sensor-based devices is that they're acting as personal assistants; this is just one example. Then there's artificial intelligence on a stick. Here the idea is that they're creating intelligence, or learning software, as an add-on to existing systems that will go into mobile devices. What's cool here is that this is a company known for thermal imaging, and now we're looking at being able to add neural network capability to conventional systems. These are all fairly recent; this is a late-April announcement. And this one: if you've been with us in previous webinars, I've been talking about Qualcomm and their Zeroth chipset for a while. It's a neurosynaptic, brain-inspired architecture chipset that goes into telephone handsets among other devices, and just last month they opened it up to developers, so the SDK is out there now.

There was one announcement I want to spend just an extra minute on, which is last week's announcement of a partnership between IBM and Cisco. This is really where I think the world is going. It's marrying the edge-of-the-network devices that Cisco has (it owns a good chunk of the network market at the edge, if you will) with the analytics that IBM has developed with Watson Analytics, and the partnership brings those analytics to the edge of the network. One of the issues when you're dealing with a large network, or with cloud computing, is where the data is and where the analysis is being performed. That has always been an issue for performance, and the idea of this partnership (it's just been announced; we're still getting details like pricing) is that with all of this data out there, if you can process it as it's being captured, rather than pulling it all in and then doing the processing, you get a much more efficient system. In some cases it's not just more efficient; it's simply not feasible unless you do it at the edge. So it's an interesting partnership, leveraging Cisco's networking capability at the edge with the analytics that Watson provides. And that was just last week.

Now I'm going to take a quick look at some devices you've probably used or seen advertised, and talk about telematics, specifically instrumented vehicles. Right now a lot of what's being done with telematics is being done by the insurance industry, looking to understand driver behavior by monitoring it. As the parent of teenagers, I always appreciate the ads that say you can monitor how your teenager is driving. But beyond that, you can get feedback from a device like this on things like how you drive and how you brake. The one on the left is an older device; the one on the right is more recent. A writer for the New York Times tried one of these devices and got a scorecard. What's interesting is that it includes a grade for turns, which is measured by looking at the accelerometer and the G-force (the force of gravity) as you're making turns. So you don't get penalized for making a lot of turns; you get penalized for making turns abruptly, if you will. The way this is being used today by the insurance companies is to offer incentives: discounts for people who improve their scores.
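To make the turn-grading idea concrete, here is a toy version of that calculation. The 0.4 g cutoff for an "abrupt" turn and the grade bands are made-up numbers for illustration; the insurers' real scoring models are proprietary.

```python
# Toy version of a telematics "turn grade": sample the peak lateral
# acceleration for each turn, flag the abrupt ones, and grade on the
# abrupt-turn rate. Thresholds and grade bands are assumptions.

G = 9.81  # m/s^2 in one g

def grade_turns(peak_lateral_ms2, abrupt_g=0.4):
    """peak_lateral_ms2: one peak lateral acceleration reading per turn."""
    if not peak_lateral_ms2:
        return "A"
    abrupt = sum(1 for a in peak_lateral_ms2 if abs(a) / G > abrupt_g)
    rate = abrupt / len(peak_lateral_ms2)
    # You aren't penalized for how many turns you make,
    # only for the fraction of them taken abruptly.
    if rate < 0.10:
        return "A"
    elif rate < 0.25:
        return "B"
    return "C"  # reportedly no grade lower than a C

print(grade_turns([1.2, 2.0, 5.5, 1.8, 2.4]))  # one abrupt turn in five -> "B"
```

Notice that grading the rate rather than the count is what makes this a measure of style rather than of how much you drive.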
It is analogous to the way insurers reward students with better grade point averages, although I'm not sure how exact the analogy is. But if you can see your performance and you get feedback, like biofeedback, you can improve, whether your goal is to be a safer driver or a thriftier motorist. So this has been around for a while, and people are getting used to it. Just about every car manufactured in the last several years has had a port on the onboard computer system where you can plug something like this in, but typically that data hasn't been shared with a third party like your insurance company.

The interesting part for me is that right now the insurance companies are promising: you can use this and get a discount, and we won't penalize you if you get bad grades. In fact, on this one, which happens to be State Farm, my understanding is there's no grade lower than a C, because they don't want to offend anybody. But the fact is that once that data is out there, it can be used for other things. The same sort of data could be used for warranty information. Say you're getting a good grade on acceleration and deceleration. Deceleration in particular means you're probably going to go greater distances before you need replacement brakes. You could take the same information that was intended for the insurance company, share it (with permission) with the company that manufactures the parts, and maybe be offered an extended warranty based on your driving habits, or have it used for predictive maintenance. So one of the themes throughout today's talk is that the same sensor providing the same data may be analyzed in different ways as it's aggregated for different applications, and to me that's where the real opportunity comes in. One of the issues, and this is the one from Chicago, is that people are still reluctant to do it. Everybody wants a discount, but there's still a trust issue there. So we're seeing the adoption; we're seeing people getting used to the idea of these sensors.

What I think is really cool, and this is a Tesla example, is that as cars become rolling IT departments, rather than having the technology as an add-on, the actual features of cars are changing based on the data they're getting from these sensors, with that data feeding back into a closed-loop system. In the case of the Tesla, their latest software update (and this is a complete mind shift if you've been working with cars over the last 10 or 20 years or longer) delivers software updates to the car that change the features of the car. And this isn't a promotion; I have nothing to do with Tesla, I don't have one, and I don't own the stock. But they have a combination of cameras, radar, and ultrasonic sensors, and they've been collecting data from all of them. Now they can feed that history into the other systems in the car and start to get closer to an autonomous vehicle. That's going to be interesting, because even if you bought one a couple of years ago, you may not have been thinking, I'm going to be adding these features going forward. But it's all made possible by the analysis of the data, and by the fact that you can now take near-real-time input from sensors, have it analyzed, and use it.
We've got sensors, analysis, and data, the SAD that I talked about. Now you can use that to actually have the car respond without direct human intervention. And here's one that a friend of mine posted just the other day. You can tell it's European because the date reads 7/6/2016; read the American way that would be July 6, which would make this predictive analytics. It's June 7. A race car driver walked away from a 46 G impact, 46 times the force of gravity. Two things about this. One, we know all of that because the car was fully instrumented; it had lots and lots of sensors keeping track of these things full time. Obviously it didn't have a collision avoidance system that would work in this environment. But what was interesting to me is the combination of sensors. They have cockpit-mounted cameras, new this year, recording the driver at 400 frames a second. That provides too much data to be analyzed in real time, so it gets stored in what you would think of as the black box in an airplane. But some of the data is actually going back to the pits and being used to change parameters during the course of a race. We talk about fly-by-wire or drive-by-wire when you don't have a direct line from the gas pedal to the injection system, when there's electronics in the middle. Well, now we've got signals based on these sensors all over the car going out to a third party, in this case the pit crew, the management, and that provides feedback that can go to the driver, but could also go directly to the car. So we're getting much closer to autonomous vehicles, even in areas where it may not be obvious if you're just looking from the outside.

All right, I'm sorry if I'm going fast today; we've got a lot of different examples I wanted to get in, because the whole idea of sensors is just so cool. There are so many different ways to look at it. One of the areas people are talking about a lot right now, a term that's getting a lot of coverage, is spatial computing. I would venture that most of us have used a GPS at one time or another, and many of us have seen the maturation of GPS devices: from a separate device, to something that was perhaps very expensive to have installed as an option in your car, to a free app on your phone. It all comes back to the fact that there are these sensors that can position you, and now we can have applications that know where you are and what's happening in the environment around you. This happens to be from a Coursera course by some folks at the University of Minnesota, but I just wanted to get people thinking that we're really looking now at time and space (as in physical space, not outer space) as dimensions that are important in creating value within an application.

With that in mind, what I'm going to do for the next little section is look at my favorite example, because most of us probably have one of these in our pocket: the sensors in the smartphone, what types there are, what they're being used for now, and how they might be used differently in the future. Just like the telematics example: once you start collecting data, you'll likely find other ways to use it. So let's look at the iPhone, a typical modern smartphone.
These are some of the sensors in there. You've got an accelerometer, which detects motion. I'll just use the iPhone because it's the one I'm most familiar with. If you turn your iPhone from portrait to landscape mode, for example, in an application where you have to input text, the keyboard is going to switch, and it does that because the accelerometer is reporting that you've moved the device. The ambient light sensor is typically just used at this point to adjust screen brightness. But think about it: if you combine the ambient light sensor with the geolocation system in there, I may be able to infer from the two of them whether you're inside or outside, based on the brightness, the time of day, and where you are. There's a barometer in the modern iPhone; I didn't realize that at first. The barometer is used primarily by the basic apps to measure altitude. So if you have an iPhone and you've used one of the health monitoring apps, and it says you've climbed so many flights of stairs, it's using the barometer to check the pressure and see that you're going up and down steps, not just that you're moving. In a second we'll see that the barometer can be used for other things too; it's just a matter of taking that data and repurposing it. The gyroscope detects rotation, so you can start to use your phone as an input device for a game; there are a lot of things you can do with that.

Right now, the cool thing in my view is that for most of these, if you're a software developer and you want to develop a new app, you have access to the APIs. You can go in and say, hey, I'm going to build something that runs on the iPhone and uses this data. The beauty of it is you can create something that's never been done before, because you never had access to this kind of data before. The proximity sensor is one you've probably seen in all the ads: when you pick up the phone and put it to your ear (I can't actually remember the last time I did that with a cell phone), it turns off the screen, because it "knows", and I'll put that in air quotes, based on what you're doing with the phone and the context, that it's now against your ear and you can't be looking at it. So it turns off the screen. The last one, of course, is the Touch ID fingerprint sensor, which was introduced a couple of generations ago, and now a number of apps that I use give you the option of using that touch as security instead of putting in a passcode.

Where this gets interesting to me is when you start to combine the data from these sensors across a larger population, likely anonymized, and crowdsource that information, again hopefully with permission. All of a sudden there's a whole new range of opportunities for that data, and your data has new value; it has what we think of as a network effect. Just to give an idea, this is from Qualcomm: their iZat location technologies are currently in over three billion devices. Think about what that means if you're creating an app that takes data from the iZat location technology. All of a sudden you have a potential market that's very large, but you can also start to gather data from a very large population, and as we've seen in other talks, that creates new opportunities.
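Going back to the barometer for a second: converting pressure to altitude is a one-line formula, and counting flights of stairs is a small step beyond that. Here is a minimal sketch using the standard international barometric formula; the three-meters-per-floor figure and the sample readings are assumptions for illustration, not how Apple's health framework actually does it.

```python
# Sketch of how a phone could turn barometer readings into "flights climbed".
# Uses the standard international barometric formula; the meters-per-floor
# figure and the sample pressures are assumptions for illustration.

def pressure_to_altitude_m(p_hpa, p0_hpa=1013.25):
    """Convert barometric pressure (hPa) to altitude above sea level (m)."""
    return 44330.0 * (1.0 - (p_hpa / p0_hpa) ** 0.1903)

def floors_climbed(pressure_samples_hpa, meters_per_floor=3.0):
    alts = [pressure_to_altitude_m(p) for p in pressure_samples_hpa]
    # Sum only the upward changes between consecutive samples.
    climbed = sum(max(0.0, b - a) for a, b in zip(alts, alts[1:]))
    return int(climbed // meters_per_floor)

# Pressure drops as you go up: roughly 0.12 hPa per meter near sea level.
samples = [1013.2, 1012.9, 1012.5, 1012.1, 1011.8]  # walking up ~12 m
print(floors_climbed(samples))  # -> 3
```

The same readings, repurposed, are what make the crowdsourced weather example coming up in a moment possible.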
So for the next few slides, these are actually screenshots from my own phone. And if you were listening to Shannon and me chatting before the webinar, you know that I'm moving in a couple of weeks, so I don't care that you know where I live. We've got two screens here. The one on the right is Uber. What's interesting there is that when you use Uber, not only does the app get your location automatically using the sensors, but it sends you an active map that's using the sensor in your driver's phone. So you can actually see where they are and track them coming up the road. It says somebody could be here in five minutes; well, you can look and see where they actually are. In my case, I often use Uber to pick up one of my kids when I can't drive them, and it's nice to be able to see that and get that information in advance.

The one on the left is a screen from Waze. If you haven't used Waze (and again, I've got no relationship with the company), it's a navigation application that crowdsources data. When I'm driving and using Waze to get somewhere, it compares the data from my phone with data from all the other people using Waze and dynamically adjusts the route it suggests based on behavior. What's also interesting with Waze is that you can manually input information. If you see an accident, or a police car, or construction, or even a large pothole, you can put that information in, and drivers in the area, as they pass through, will get it for a period of time. So it can calculate from driver behavior that there's a traffic hold-up in New Rochelle, but it also has these hard alerts, if you will. And we'll see where that works and where it has some unintended consequences.

I put this next one in because of something I think is really interesting when you get into sensors and communicating systems. This is an alert I got the other day. It said that I have a reservation at a restaurant; yeah, okay, that's true, so it's reminding me. But it also said there was a club ride on the 12th, and I had no idea what that was until I remembered that on Facebook I'm still a member of a motorcycle group, even though I sold my motorcycle a couple of years ago. Apparently they were having a ride that I hadn't noticed on Facebook, but Waze was getting all that data from Facebook. Now, that can get interesting, it can get creepy, and we'll see where it can just be annoying.

I mentioned the barometer, so let me show you one more app here, Dark Sky. Their app was designed to give you very accurate information on the next 24 hours, or even the next hour, at very tight locations, instead of turning on the TV and finding out that somewhere in your zip code you're likely to get rain in the next 24 hours. If you crowdsource this data (by using the application, you're giving permission for your phone's barometer data to be shared and analyzed), all of a sudden you can get forecasts that are much, much more accurate, because you're getting real data. It's not based on somebody's opinion; it's not based on looking out the window; it's based on actual conditions. And the more data you have, the more accurate it's going to be.
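This is not Dark Sky's actual pipeline, which is proprietary; it's just a sketch of the crowdsourcing idea: bucket pressure reports from many consenting phones into coarse grid cells and summarize each cell, so a hyperlocal forecast can start from real observed conditions. The cell size and sample reports are made up.

```python
# Sketch of crowdsourced weather sensing: bucket barometer reports from
# many phones into coarse grid cells and average them per cell.
from collections import defaultdict

def grid_cell(lat, lon, cell_deg=0.1):
    """Truncate a location to a coarse grid index (~10 km at these latitudes)."""
    return (int(lat / cell_deg), int(lon / cell_deg))

def aggregate(reports):
    """reports: (lat, lon, pressure_hpa) tuples from consenting users."""
    cells = defaultdict(list)
    for lat, lon, p in reports:
        cells[grid_cell(lat, lon)].append(p)
    return {cell: sum(ps) / len(ps) for cell, ps in cells.items()}

reports = [
    (41.05, -73.55, 1009.8),   # three phones in roughly the same cell
    (41.06, -73.54, 1009.6),
    (41.04, -73.56, 1010.0),
    (40.71, -74.01, 1012.3),   # one phone in another cell
]
print(aggregate(reports))
```

The network effect is visible even in the toy version: each added phone sharpens the estimate for its cell without anyone deploying a weather station.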
And just this one, where I'm going to end this section: one of the things that I think is very cool, if you're thinking about apps for mobile devices using sensors right now, is that as app developers we can have access to the geolocation sensors. When I used to commute to Manhattan, I always thought it would be nice to have an app that knew when I was at the stop before mine and gave me an alert there, rather than saying, well, the train's supposed to get to my station at 8 o'clock, so let me know at 7:45; if the train's late, I'd rather just keep sleeping. In this particular example, I had a leaky tire this week, so I said: when I get in the car (that's my location), tell me to check the tire pressure. The screen on the right shows that indeed, when I got in the car, I had an alert to check the tire pressure. The next one is an example of how you would use it on the train. Norwalk happens to be the station before mine. You can go in and set a reminder at a location, and choose whether you want the alert when you're arriving or leaving. In this case it says: when I'm riding on the train and it's arriving at Norwalk, that's when I want this reminder. In this case that's just information for me; I don't know that it would be worth sharing with anybody.

But now, as we start to pull all of this together: in the title, we talked about perception and personality. We know the phone, or the app on the phone, knows your location, your acceleration, all sorts of good stuff. My question is, at what point does it know your motivation? Because then it can give you much more personalized service. The little chart on the right is from one of the early talks I did on cognitive computing. Right now we're doing a lot with machine learning and getting into perception; motivation is down the road. But as perception improves, as we can take in more than direct input from a person (text or numbers, whatever it is) and start to read things the way you and I would if we were having a conversation, we get more information. Then maybe we can determine motivation, why people are asking this, and maybe we can respond appropriately.

You may have seen a diagram like this in some of the reports we put out. These are the different categories of technology that we cover at Storm Insights. The reason I've highlighted the ones on the left (voice, gestures, emotions, and text with natural language processing) is that they're on the human input side, and that's what we want to focus on. The ones in bold, primarily voice, gestures, and emotions, are bold because we have a new report out with brief profiles of companies in each of those areas. If you're interested, just send me an email after the talk and I'll be happy to send it to you; there's no charge. We're building a database of profiles of companies in this area, and those happen to be the ones in our latest report.

Now let's get into personality, sentiment, emotion, and theme and concept analysis. One of the things that has been getting a lot of coverage in the AI world recently is a new IBM commercial where Bob Dylan talks to Watson, and Watson says: I've read all your lyrics, and I think your main themes are that time passes and love fades.
There's this whole debate in the industry about whether those really were the themes. My feeling, having looked at this and at the criticisms: yes, sure, Bob Dylan wrote a lot of protest songs. But if you imagine an intelligent reader encountering the lyrics in 2016, say a college freshman, rather than someone who was listening to them during the Vietnam War, I think Watson is actually doing pretty much as good a job as a reasonably intelligent human, based on the context and the words themselves. I think this is interesting because it fits with the whole idea of sentiment and emotion analysis. There's a lot of work that's been done in that area, but here we're taking it up a level and asking what the theme or the concept is.

Where this fits in with sensors is that as we build these richer human-computer interfaces, we can automatically, as we're processing input, try to understand what the person is doing (we know who the person is if we have access to their phone), by looking at the concepts in what they're saying, but also by looking for properties of the input that give you insight into the sentiment or the emotion being expressed. Then we have a whole different range of options for what we can produce and deliver to the user.

One more slide here, also in the IBM arena. This is from Bluemix, where you can see you have access to emotion analysis using natural language, in this case text. In Bluemix you can go in and build an app using their APIs, use them to analyze the content and understand what the emotion is, and then have a different range of responses. If you think about systems over the years, one of the frustrating things is that they're very deterministic: if I give a computer the same input, it always gives the same output. Well, I may be in a different mood, I may be in a different place, I may have different needs, but use the same words. If we can start to analyze these other dimensions, we've actually got something pretty cool.
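This isn't the Bluemix API (their services do far more); it's a toy lexicon-based sketch of the underlying idea: the same request can be routed differently depending on the emotion detected alongside the literal words. The word lists and canned responses are made up.

```python
# Toy emotion-aware routing (not the Bluemix API): score a message
# against tiny made-up lexicons, then vary the response by mood.
FRUSTRATION = {"again", "still", "broken", "ridiculous", "third"}
SATISFACTION = {"great", "thanks", "perfect", "love"}

def emotion_score(text):
    words = set(text.lower().replace("!", "").replace(".", "").split())
    return len(words & FRUSTRATION) - len(words & SATISFACTION)

def respond(request):
    if emotion_score(request) > 0:
        return "Escalating to a human agent."   # same topic, different mood
    return "Here is the standard answer."

print(respond("My password reset is broken again. Third time!"))
print(respond("Quick question about password reset, thanks!"))
```

Both messages are about a password reset; only the detected frustration changes what comes back, which is exactly the non-deterministic, mood-aware behavior described above.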
So we're going to look at two categories. The first, fairly quickly, is gesture recognition, and these are two of the companies profiled in that report. There's a big movement, probably most advanced in gaming, where the interface allows the system to use a camera, which may be your computer's camera or the camera that's now built into smart TVs, and your input is the movement of your hands. If you've ever played a Wii game with a controller that had an accelerometer in it (Wii golf or tennis or whatever), moving that device created a signal. This is doing the same thing, except you don't need a device, because the camera is evaluating your motion. This particular example is from a company called eyeSight, but again, we profiled a few of these in our report.

The most interesting and most promising one to me is the idea of emotion recognition, or emotion analytics, based on video. We're looking at several companies in this space right now; I'm just going to look at Affectiva, because I've gone through their demo and can show you some data. Again, this is not a client and this is not an endorsement; it's just representative of the advances in the industry and where we are right now. They're selling a solution, or a piece of a solution, that uses your device's camera, whether on your computer or your mobile device, and, with your permission, watches you as you're looking at the screen and tries to understand the emotions you're displaying on your face. Think about having a conversation with someone: the same words are going to be interpreted differently depending on the expression on your face.

I'll give you an example. I did one of their demos, and you may recognize this: if you happened to watch the Super Bowl in the U.S., there was an ad for beer that had a puppy, and everybody loves puppies. Basically, in this demo I watched the commercial while the application watched me. This graph, which is actually from me using their demo, shows the measure I selected, Smile. You can see that at 34 seconds in there's a spike; that was one of the places I smiled, and this is what was on the screen at the time. Think about that in terms of the sensors: it's using the camera, it's using all sorts of data, and now it can be aggregated. Obviously if you're running commercials, you'd be happy to use something like this to get information. But you could also use it in a kiosk in a retail store: I'm looking up information and I frown, and maybe you look at that and say, you know what, we've found that when people frown at this point in our presentation, they're less likely to buy unless we offer them a 10% discount. All of a sudden it becomes interactive, based on your emotions, or their perception of your emotions. And this slide shows the seven basic emotions that they tend to track.
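Here is a sketch (not Affectiva's API, whose output is much richer) of the kind of post-processing you might do on a per-frame smile score like the one in that demo: find the moments where the viewer's smile peaks, so they can be lined up with what was on screen. The threshold, frame rate, and simulated trace are made up.

```python
# Sketch of post-processing a per-frame smile score from an emotion-
# analytics feed: locate local peaks above a confidence threshold and
# report them as timestamps to line up against the video content.

def smile_peaks(scores, fps=30, threshold=0.7):
    """scores: smile confidence in [0, 1], one value per video frame."""
    peaks = []
    for i in range(1, len(scores) - 1):
        if scores[i] >= threshold and scores[i] > scores[i-1] and scores[i] >= scores[i+1]:
            peaks.append(i / fps)  # convert frame index to seconds
    return peaks

# Simulated trace: neutral viewer, then a smile around the puppy shot.
trace = [0.1] * 1020 + [0.3, 0.6, 0.9, 0.8, 0.5] + [0.1] * 200
print(smile_peaks(trace))  # -> [34.07]: "34 seconds in, there's a spike"
```

Aggregate those timestamps across thousands of viewers and you get exactly the second-by-second engagement curve an advertiser, or that retail kiosk, would act on.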
One more, on the output side this time; I said earlier that these examples are all on the input side. One technology I think is very cool is coming out of the University of Auckland and is being commercialized: a project called BabyX, where the interface to the applications they're building uses neural modeling. It's a virtual infant that's actually modeled on the daughter of Mark Sagar, who leads the project; Mark has won Academy Awards for his work on Hollywood productions. The idea is that as you interact with an application whose interface is BabyX, what you're shown is reacting to you: you're seeing the baby, and the baby is seeing you, and it can change. If you've ever talked to your children, you know that you'll change your facial expression, maybe your vocal tonality, based on the feedback you're getting. So now we're looking at interfaces that use neural models to change the way they look at you based on what they're seeing. To me, that's one of the coolest things.

We're getting short on time, so I'm going to wrap up with what I call findings and recommendations: some general themes that I think would be good to look into and discuss further in later webinars. Number one: I mentioned Waze, and I use Waze all the time. I used to have dedicated GPS systems; there's one in my wife's car that we haven't updated in years, and we both just use Waze. It's done a good job, and I've generally learned to trust it. But it's creating some problems. This particular example is in LA, where all of a sudden quiet neighborhoods are experiencing very high volumes of traffic, because Waze is directing drivers to take back roads and side roads that people had never used before. The consequence, obviously, is that it's unpleasant to have your quiet neighborhood overrun by cars being directed by a computer. But now what's happening is that people in those neighborhoods are getting Waze accounts and putting in fake accidents to try to get traffic directed away from them. So it's a very interesting behavioral situation that we're going to keep monitoring.

This next one is not, strictly speaking, sensor-based, but it's another one with unintended consequences: as we do things with geolocation, it's really important to get it right. This news story came out a few weeks ago. If you're dealing with IP addresses and looking for physical addresses, there are companies like MaxMind that will give you a physical address based on an IP address. The problem is they can't always tell where an address is, so they approximate. If they can figure out what state you're in but not where in the state, they'll generally go to the center of the state; if all they can figure out is the country, they'll go to the center of the country. And that means that at the exact longitude and latitude they determined to be the center of the U.S., a lot of things default to that physical address. So this poor family living on a farm in Kansas is, according to MaxMind, the home of 600 million IP addresses, and they've been getting people showing up: anyone tracing a cloaked IP address that defaults to that location ends up coming after them. That's all I'm going to say on that one.

The last point I want to make pulls it all together. I was following this Ford recently, stopped in traffic, and realized that on that bumper there are four sensors being used with the backup camera. I almost got run over the other day: I was at a gas station, somebody was backing up, I honked the horn and they stopped, then they backed up again and I honked again, and they stopped and got out, because I was in my son's car and it was too low for their camera. All those four backup sensors are doing is feeding proximity information to let the driver know they're about to run over something, if they're pointed in the right direction. But you could start to aggregate that and get all sorts of new opportunities. So the final bit is this: where there are sensors, there's data, and where there's data, there should be sensors. Every aggregation of data, every place data is passing in a stream, is an opportunity. The recommendation: if you're in the IT industry, look at all your data sources for new applications, and look to expand your data portfolio based on sensors.

I think I'm five minutes over, but let's hand it back to Shannon and see if I can answer a question or two. Anything we don't get to today, I'll be happy to address by email.

Adrienne, thank you so much for another fabulous webinar. Very interesting information there. I actually didn't know that Bob Dylan did a commercial with Watson; I'm going to go look that up. When worlds collide, right?
So the most common question we receive, of course, is whether people will get a copy of the presentation. Just to remind everyone, I'll be sending a follow-up email to all registrants by the end of Monday with links to the slides, links to the recording, and anything else requested. Right now everyone's being fairly quiet in terms of Q&A. We have one comment: that this was fascinating and well presented. I love that. If you have any questions you want to ask Adrienne before we wrap up the webinar, you can just submit them in the Q&A in the bottom right there. Six hundred million IP addresses. That's just interesting.

Yeah, this poor family. Think about it: if you're the target of some internet scam and you want to go off the deep end and find out who's doing it, you look it up, and it gives you a physical address that turns out to be a farmer's lawn. They ended up with the sheriff guarding the place, and they really didn't understand what it was until somebody dug into it and figured it out. That company MaxMind, which provides the physical addresses, has now made the default location the middle of a lake somewhere, but that family went through a really rough period.

I bet. Yeah, it's amazing how much everything is connected, and certainly a lot of questions are coming up around security in terms of all the sensors out there.

So as I said, if there are no questions, that's fine. But if people want to get in touch, you'll get these slides; there's an email address for me on there, there's my Twitter handle and my Skype. I'd love to have a more in-depth conversation about this with anybody who's interested.

There's a comment that came in: great webinar, it opened my eyes to great opportunities, and I'm in a state of amazement; maybe after going through the webinar one more time I'd be able to ask questions. Absolutely. Yeah, and I'll include Adrienne's information when I send out the follow-up email as well, so if you do think of questions afterwards, you can certainly contact him.

Adrienne, thank you so much for this great presentation. It really is just so interesting where we're headed with these technologies in this society, and how people are reacting to them. It's just fascinating. And thanks, everyone, for attending today; we appreciate your being engaged in everything we do. Next month we have a special event on July 13th, and this webinar series will be part of that event, Smart Data Online. Adrienne will be joined by C. Vardieri to discuss modern AI and the future of work. We hope you can join us for that full-day online event; we have a great lineup for everybody in terms of smart data, and I really look forward to you talking about modern AI, Adrienne. Thanks for everything, and I hope everyone has a great day.

Thanks, Shannon. Take care. Cheers.