Hello, and welcome. My name is Shannon Kemp, and I'm the Chief Digital Manager of DataVersity. We'd like to thank you for joining the current installment of the Monthly DataVersity Smart Data Webinar Series with Adrienne Bowles. Today, Adrienne will discuss artificial intelligence at the edge. Just a couple of points to get us started. Due to the large number of people that attend these sessions, you will be muted during the webinar. For questions, we'll be collecting them via the Q&A in the bottom right-hand corner of your screen, or if you'd like to tweet, we encourage you to share highlights or questions using the hashtag SmartData. If you'd like to chat with us and with each other, we certainly encourage you to do so. Just click the chat icon in the top right-hand corner for that feature. And as always, we will send a follow-up email within two business days containing links to the slides, the recording of the session, and additional information requested throughout the webinar. Now let me introduce to you our speaker for today, Adrienne Bowles. Adrienne is an industry analyst and recovering academic providing research and advisory services for buyers, sellers, and investors in emerging technology markets. His coverage areas include cognitive computing, big data analytics, the Internet of Things, and cloud computing. Adrienne co-authored Cognitive Computing and Big Data Analytics, published by Wiley in 2015, and is currently writing a book on the business and societal impact of these emerging technologies. Adrienne earned his BA in psychology and MS in computer science from SUNY Binghamton and his PhD in computer science from Northwestern University. And with that, I will give the floor to Adrienne to get today's webinar started. Hello and welcome. Hi, Shannon. Thank you. Thank you. And welcome to everybody who's online with us wherever you are. Yeah, so today's topic, AI at the Edge, subtitled Intelligence in the Fog. 
We're going to talk about distributed intelligence, working with fog and edge computing, and in particular, look at the business impact. So I'll tell you now there'll be a bonus point at some point for recognizing the scene in the picture, which I took on a foggy day a couple of years ago. But right now, let's just dive right into it. So I'm going to begin with some context for the remarks today and some definitions, because a lot of these things are a little on the fuzzy side, especially when you're talking fog computing. I'll talk on the technical side about some architectural issues, things that you have to decide when you're distributing intelligence. And then a little bit about the business implications and some thoughts on how to get started creating business value, building applications with distributed intelligence. I'll give a couple of recommendations. So let's dive right into it. Okay. The focus today really is to bring together a few technology areas and show how, by leveraging pieces of each, we can create something of value for businesses. And so the three are the Internet of Things or IoT, cognitive computing, and cloud computing, plus a derivative of cloud computing that I'll explain in a little more detail. So let's start with the Internet of Things. The IoT starts with sensors and ends with actions. Sometimes you think about it as going from sensors to insights, but really if we're not doing anything with it, it's not much use. So I want to focus on how we can use sensor-based systems at the edge of our network to help us take some action. A lot of times the action is making a business decision or making a technical decision. But let's look at the technology to begin with and then how we put it all together. And then we can have a discussion about what are the business implications of building a system using this technology, using this approach, using these architectures rather than more conventional systems of the past. 
So just in terms of terminology, a sensor, we're going to say, is simply a device that detects the presence of or senses something. So there's a signal. The sensor gets the signal in and either reports or signals something about that event. So it's an event-driven device. And a sensor can either be stationary or mobile. By stationary, I mean something that's static, something that doesn't move relative to the rest of the world. It may move within a small area, but in general, these are the two main categories. So if I look at a jet engine here, this happens to be an SR-71 Blackbird, but more modern jet engines today typically have about 5,000 sensors per engine. And those sensors collectively are generating about 10 gigabytes per second, per engine. So if you think about it that way, the sensors that are on an engine are static with regard to the engine. They're not moving around the engine. They're attached to the engine. But assuming that the engine is on a plane that's flying, we can think of those sensors as being mobile. And that's an important distinction, because the environment the sensors are working in is something that can also be static or mobile. And so if we have a static sensor on a mobile device, or in a mobile environment, then we can think of it in both ways, depending on what we're trying to get it to do. So when we look at the IoT in aggregate, it's a collection of applications and appliances or devices. The key thing is that each of the devices is connected to the internet. And so you can think of it as the internet of things or the internet of everything. One of my colleagues, Delaney, is starting to talk about IoT as integration of things, which I think is a good way of looking at it. What we're going to do here specifically today is look at how we can take devices that are attached to the internet, that are distributed in that way, and make them smarter. So we're not just looking at getting more data or getting it faster. 
We want to have a little more intelligence. And that brings us to the next point. Sorry, just to finish up on mobile versus static, because I think it's important to have this classification in the back of our minds as we look at the different applications. Vehicles, individuals, people. We have a lot of examples from the military, whether you're dealing with an individual, a troop, a soldier, or a device that has sensors within it that don't move relative to the device, say within a tank, but the tank moves. And so we have to have a connection between that and another system. And so those we would think of as the mobile case. Sensor-based devices, IoT types of devices, in relatively stable or static environments would include things like sensors on a machine on an assembly line in what we think of as a smart factory, or in a retail store. We may have sensors that are detecting something as simple as how many people are coming in and going out. It may get a little more sophisticated, with sensors that are looking at attributes of the people that are coming in and going, like doing some facial recognition perhaps, or something that looks at their height, weight, other attributes. But the sensor itself is static. The people move past it and then that data gets collected. Smart cities are another area. I covered this once in another webinar looking at smart cities in detail. But basically we'll have sensors in most modern cities today for things like transportation, services, emergency services, lighting control, traffic control. Almost any sort of device that's out there that's owned by a municipality is something that can probably be instrumented. In Chicago, for example, the street lights: there's a set of so-called smart street lights connected to the IoT that, besides providing light, are also points for data collection on air quality. So this would be static. 
And although they may be checking the condition, looking for events of things going past them, we can think of a lot of different ways that we might observe important events using these types of sensors as the event basically moves past the sensor. What I want to do is kind of look at this in the context of what I normally cover in this series, which is artificial intelligence and cognitive computing. So now let's look at what we're going to use for simple definitions there today. This is a pretty busy diagram, but I'm going to try and focus on just one part of it. If we think about the center, the second ring here in the circle that has understand, reason, and learn, those are the three defining characteristics of what we think of as cognitive computing. The important part here is that on the left side we're looking at input. The right side is output. The top is interaction with humans and the bottom is with machines. So if you look at the lower left, we've got machine input, and that's where we're getting data from sensors or from the IoT. So we're just going to narrow the focus a little and look at that bottom. So what we want to do today is look at how we take this type of functionality, and the most important part of this from my perspective, you'll probably figure out why later, is the idea of reasoning, and put that in the field in a device that is doing the sensing, so that we're distributing logical reasoning, not just distributing data processing. Now the next part in the center definitions here: I said we're going to look at IoT, cognitive, and cloud and its derivatives. So "fog" is meant to be funny, but it's actually the truth here in terms of what fog computing is. It's a cloud that can't get off the ground. So we're looking at something where we're not sending data or processing or signals up to some amorphous space in the air. 
We're dealing with something on the ground, metaphorically (you can be dealing with fog for planes that are in the air, too), where it's actually at the very end point. You can't get any more concrete, if you will, rather than abstract. The cloud is theoretical and abstract; fog is on the ground. This is where we have the endpoint. So fog computing is looking at devices that are at the end or the edge of the network. And it's often used interchangeably with this next term and set of definitions, edge, which is fairly synonymous. I put this one in for Shannon. So when we're dealing with the distinction between the fog and the edge, obviously the Edge is the lead guitarist for U2. But more importantly, anything outside the data center is generally thought of as the edge if we're dealing with network computing or telecom. In general usage, it's the edge if it's a node on the network that's terminal, that doesn't go any further. It's the furthest from the data center or the furthest from the cloud. And so in practical terms, we're going to use fog and edge pretty much interchangeably. Edge is specifically at the very edge. It's the last thing. There's nothing beyond that on the network. In the fog, we may have multiple nodes, each of which are edge nodes that are aggregated. And that's what we're going to see in terms of the architecture. One other area that we need to bring in when we're looking at distributing all of this is the idea of autonomy. Because right now, when we look at artificial intelligence, one of the hot areas is autonomous systems, whether we're dealing with the Navy's ship that can sail oceans autonomously or a self-driving car. And so I just want to make the distinction between autonomy and intelligence, because we do see them interact. You can be autonomous without being intelligent. You can be intelligent without being autonomous. 
But autonomy is specifically the ability to make independent decisions and take independent actions, whereas intelligence fits with our definition of cognitive. Intelligence means that you have to be able to learn. You have to be able to understand. And we don't really have time in this webinar to get into deep coverage of things like knowledge representation, how we know that something is understood, but it combines those three: understanding, reasoning, and learning. In general, we also think that characteristics of intelligence are the ability to abstract, which fits with logical reasoning, and to generalize. Again, we can use logic whether we're dealing with inductive, deductive, or abductive reasoning. All of those fit together in the general idea of intelligence. What we want to look at today is moving the intelligence part of this to the edge of the network, whether or not the actual node, the device that has one or more sensors, is autonomous. We want to give it the information and the power to be autonomous even if it's not used in that way. So when we're looking at autonomy and automation, we're going from giving advice to giving the authority to act on the device. I use this example because we've got a Grand Prix car that has lots and lots of sensors. And as a general rule, when the car is running properly and is under the control of the driver, those sensors are providing data and information to the driver, who is going to adjust accordingly. They're also providing real-time data to the car's crew, which is in relatively close proximity. And that's happening in real time or near real time. They're also aggregating and collecting data that will be used at the completion of the race. So you can think of the sensors that are on the car as being static with regard to the car, but mobile with regard to the environment. And the functions of these sensors, how those things are being reported, and who has control based on the events can change quite rapidly. 
So in this case, it was a horrific crash. Fortunately, the driver did walk away from it, which is pretty amazing with a 46G impact. But if we look at it and say, okay, do we want the car to detect through this that the driver is no longer in control? We have sensors that can tell that the driver is unconscious, for one thing. And although it's very dramatic in this case, think about it. We could also have the same kind of sensors in our personal transportation cars, if we still have cars, before we get to fully autonomous vehicles. Do you take control away from the driver when you recognize that the driver has basically abdicated control by being unconscious? So you have to start to look at where decisions are made based on a number of factors and different values for the data that we get. And so the question that all of this is leading up to is this: if we have systems where, at a remote point at the edge of the network, if you will, we're getting data, we're getting a report of events, do we need to take that information, that data, that signal, if we go back to one of the earlier diagrams, and send it somewhere else, perhaps to the cloud, perhaps to a data center, and have the actual decisions made somewhere else? Are we just collecting data, or are we going to operate on that data at the point where the data is collected? And you can probably tell from the title of the talk and from where I'm going with this that the general rule is going to be: you push the control, you push the ability to make the decisions, to the point where it's going to have the most utility or where it can first be done. And we can think of this in sort of the old military model, where data would come from lower-level individuals at the front. 
It gets passed up, and higher-level folks further back or further up in the chain of command, or higher up on an org chart in an organization, make a decision. The decision goes down, gets passed back down, and then the actual action is taken out again, in graph theory terms, at the leaf nodes, at the edge of the network. Well, what we're trying to do here is look at what are the characteristics that allow us to act at the edge without having to pass that control back and forth. And that's what we're looking at today. So the question becomes, and this is something that's been debated, and in different applications you'll get different answers for this, there will be different architectures, but the question is: do we move the computation to where the data is collected, or do we collect the data and send it to a compute engine? It's an optimization question, really, and a question of bandwidth and expediency and cost. So in this slide, what I'm getting at here is if you have a data center and you have sensor-based devices that are providing data and updates on events to the data center, what are the characteristics of a problem you would solve where you would not have to go to the data center for an answer, but you would just be reporting the events and what you did about them to the data center? And the gateway here represents a device that aggregates information from multiple devices, each of which has one or more sensors. So we're going to look at that part in just a little more detail. But first, let's kind of dive into this problem of partitioning the problem and distributing data, processing, and ultimately intelligence. If you built application software years ago, one of the early questions, one of the early battles among people that were trying to formalize or improve the process of software development, was: do we start by modeling the processes? Is it data processing? 
Well, it is data processing, but is it the process that we want to model first and then get the data to the process? Or is it the data that we want to model, which is more stable, which is more subject to change? Do we start by building an entity relationship diagram of the data and then figure out how the processes fit with it? Or do we do a process model and then tie that? Then of course we got through the object wars and we had objects that have processing and data associated with them. And now we want to look further into how do we distribute processes, data, and what we're calling intelligence. And that's the subject of the next few slides. Pardon me if I lose my voice here. Still getting over a cold. So we have to decide what we're going to move or distribute and when we're going to do it. And the next few diagrams, the rectangles represent data or data source. The circles just labeled with P for processing represent a processing element and the more complex circle, if you will, represents an intelligent element. And the reason I'm making this distinction is a processing element could be something as complex as a supercomputer. It could be something as simple as a Raspberry Pi. It could be even more simple than that. It could be basically an atomic element, if you will, that just performs one operation on one piece of data. So what we need to do is look at the attributes of the process that we need to perform and the types of data that we're going to work with. Again, remember this is all in the context of data is collected by sensors, so it's event-driven. So one thing to do on the left here, we've got a number of different data sources that are sharing a processor. 
So in this representation, I'm saying we're going to take a processor and we're going to bring the data to the processor, perform some calculations, some computation, make some decision, if you will, and then perhaps update those data sources, or perhaps the output from that processor is going to go to another processor. It could be a completely different system. Architecturally, an alternative is that each data source is going to have its own processor. We could also have it so that the processors, rather than the data sources, move logically or physically to the data. And when we start to get into different data architectures, that's kind of what's being done in some architectures, things like in-memory databases, where the movement of the data is such that it's more efficient to move the processing closer to the data than it is to deal with the bandwidth and throughput issues of moving the data itself. So the two extremes, if you will, are what I'm trying to get at here. Do we bring the data to the processor, meaning that the processor is going to be relatively static, or do we try not to move the data but move the processing element towards it? And the general approach that I'm advocating here going forward is that in the middle we're going to have something that includes a processing element and not just data but reasoning, which would include processing with sort of goal-directed processing, I guess, would be one way to think of this. So the intelligence is that not only are we performing some set of operations on data, but we're doing it with knowledge about that data that could be adaptive. And that's the whole central thesis, if you will, behind what we're doing with cognitive computing. So we're not just taking a deterministic approach, or a rule-based approach, where, based on what the input is and the current state, there's only one possibility for the next state. That would be a deterministic approach. 
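To make that contrast concrete, here's a toy sketch, purely illustrative (the function names, threshold, and probabilities are invented for this example, not from the talk): a deterministic rule maps each input to exactly one next state, while an evidence-based approach combines per-sensor probabilities before it acts.

```python
def deterministic_next_state(temp_c):
    # Deterministic rule: one input and one current state map to
    # exactly one possible next state.
    return "ALARM" if temp_c > 90 else "OK"

def evidence_based_decision(sensor_probs, threshold=0.8):
    """Combine per-sensor estimates of P(fault), assuming independence,
    and act only when the combined evidence crosses the threshold."""
    p_no_fault = 1.0
    for p in sensor_probs:
        p_no_fault *= (1.0 - p)
    p_fault = 1.0 - p_no_fault
    return ("ALARM" if p_fault >= threshold else "OK"), p_fault

decision, p = evidence_based_decision([0.5, 0.5, 0.5])
# Combined P(fault) = 1 - 0.5**3 = 0.875, above the 0.8 threshold.
```

With three sensors each reporting only a 50% chance of a fault, the combined evidence (87.5%) triggers the alarm even though no single sensor is confident, which is exactly the kind of weighing that a purely deterministic rule can't express.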
If there are multiple possibilities, if we're dealing with something that's evidence-based and we're dealing with probabilities, then you have to have that reasoning. You either have to send the data to a reasoning engine or, and this is what we're promoting here in terms of the architectural idea, have a small reasoning engine and the knowledge to actually interpret and process intelligently at the device, and learn from that, and then just report back. So you're not necessarily, for each event, passing that data and condition back to the cloud or to a data center. You're processing it at the edge, which would go along with the definition of actually processing intelligently in the fog. And to do that with any degree of efficiency, we need to have at least a three-tier architecture. Let me explain what we mean by that. The two-tier approach would be that every device is at the sensor level, so there's a one-to-one correspondence between sensor and device, and those have to correspond, as in communicate, with your compute engine. They're just gathering and reporting. The compute engine could be a data center. It could be something in the cloud. Obviously, that doesn't make any difference. The data center or the cloud or a cluster network, whatever it is, that's where the computation is happening. But the idea is that you get a huge amount of overhead if every little device just takes a small piece of information and has to communicate. So every time there's an update, you have a bandwidth issue. What you would like to do, and it's a general engineering principle anyway, is to be able to process things at the lowest level where you have the full context to make the right decision. And so the three-tier approach basically introduces a device between individual sensors and your compute engine that aggregates. It'll do some processing. It'll do the aggregation and packaging. 
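A minimal sketch of that middle tier might look like the following. This is hypothetical (the class name, threshold, and report format are invented for illustration): a gateway ingests raw readings from several sensors, does the aggregation locally, and ships upstream only a compact summary plus any outliers, instead of every raw reading.

```python
class Gateway:
    """Illustrative middle tier: aggregates sensor readings at the edge
    and forwards only what the upstream compute engine needs to see."""

    def __init__(self, threshold):
        self.threshold = threshold   # deviation that warrants reporting
        self.readings = []           # raw data stays at the edge

    def ingest(self, sensor_id, value):
        self.readings.append((sensor_id, value))

    def flush(self):
        """Aggregate and decide locally; return a compact report."""
        if not self.readings:
            return None
        values = [v for _, v in self.readings]
        mean = sum(values) / len(values)
        outliers = [(s, v) for s, v in self.readings
                    if abs(v - mean) > self.threshold]
        self.readings.clear()
        # A summary and the outliers cross the network, not the stream.
        return {"mean": mean, "count": len(values), "outliers": outliers}

gw = Gateway(threshold=10.0)
for sid, v in [("t1", 20.0), ("t2", 21.0), ("t3", 40.0)]:
    gw.ingest(sid, v)
report = gw.flush()  # mean 27.0; only t3 deviates by more than 10
```

The bandwidth win is the point: three raw readings collapse into one small report, and at jet-engine data rates (gigabytes per second per engine) that reduction is what makes the architecture workable at all.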
And so you can have any arbitrary number of sensors, or devices with sensors, that are then working with the gateway. The gateway is typically a smaller hardware device; it could be something as small as, or even smaller than, your Raspberry Pi. You've got a gateway in your smartphone, for example. You've got a number of sensors in a smartphone, and each of those doesn't communicate directly with the outside world. There's the gateway. There's some processing internally. And then the aggregated result communicates through the gateway to the system. And in the case of a cellphone, you can think of that as being yet another tier, because the cellphone signal is going to a tower, the tower is doing the handoff and all that, and the actual back-end processing is at yet another location, so you've got a fourth tier there. But the basic idea is that whenever you have a lot of these sensors, you need to add in this step, this piece of hardware, so that you can have more power at the edge, if you will. The gateway is still at the edge in the sense that it's outside the data center, it's outside the major processing area, but the gateway is controlling or coordinating, collecting the data from each of the actual sensors. And now what I'm suggesting here is that at this point what we want to be doing is having smarter gateways, so that we can add reasoning and some level of understanding there rather than having that data have to come back. So one place that this is already being done, where there have been some advances in the last couple of years: one of the issues when you get into some big models with deep learning is that you need a lot of training data and it's impractical to do the training at the edge. 
You need a lot of computational power, but if you can have a system that will do the training and then take a trained model, if you will, so it's a much smaller amount of data and knowledge that goes into your model at the edge, then we can do the training in the cloud or at the cluster, compress the model, and run that deep learning model on a much smaller piece of hardware. And that's sort of a general trend that we're seeing now. There have been some advances in the last couple of years. If anybody's interested in those, we can follow up offline. I was talking earlier about the idea of mobile, or mobile and autonomous, which again tend to go together, but they don't have to. You can be mobile but have no autonomy at all. Everything gets directed from above or pre-programmed. But when we're dealing with autonomous systems that have to collaborate, that's when it gets even more interesting, because if we start to have multiple systems, let's say you have one autonomous vehicle that has to deal with the human-controlled vehicles, but if you have multiple autonomous vehicles, they have to communicate and collaborate with each other. And so each one of them could be thought of as an edge device. Within each one, you're going to have multiple sensors, and those are going to be aggregated. So you may have multiple levels of these gateways. Think about the systems that are in your automobile today, even if it's not automated at all. I mean, frankly, any car, I believe it's 1996 and later, that's sold in the U.S. has to have an OBD, an onboard diagnostic system, and that's collecting data from a variety of sensors. Some of those are mandated by law so that you can go into your local emissions test run by the government and they can check data that has been collected. But in addition to what's being collected for the emissions tests, there are a number of sensors. 
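Returning for a moment to the train-in-the-cloud, run-at-the-edge pattern: the sketch below is purely illustrative, with invented data and names, using plain logistic regression to stand in for a heavyweight training job. The point it demonstrates is that only the fitted weights, a couple of floats rather than the training set, need to cross the network to the edge device.

```python
import math

def train_in_cloud(samples, epochs=500, lr=0.1):
    """Plain logistic regression by gradient descent; stands in for an
    expensive central training job over lots of data."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, y in samples:
            p = 1.0 / (1.0 + math.exp(-(w * x + b)))
            w -= lr * (p - y) * x
            b -= lr * (p - y)
    return w, b          # the whole "deployed model" is two numbers

def edge_predict(model, x):
    """Cheap inference a gateway-class device can run per event."""
    w, b = model
    return 1.0 / (1.0 + math.exp(-(w * x + b))) > 0.5

# Cloud side: learn from labeled readings that high values mean fault.
model = train_in_cloud([(1, 0), (2, 0), (3, 0), (7, 1), (8, 1), (9, 1)])
# Edge side: only the two weights crossed the network.
edge_predict(model, 8)   # True: reading in the fault range
edge_predict(model, 2)   # False: reading in the normal range
```

Real deployments go much further, with model compression, quantization, and distillation shrinking large networks to fit small hardware, but the division of labor is the same: expensive learning happens centrally, and the compact result does the reasoning at the edge.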
For example, each of your tires today has to have a TPMS, a tire pressure monitoring system. That's a sensor. The sensor data from each tire gets aggregated by the TPMS, and that also gets aggregated at the computer level. And for some of these things you are expected, as the driver, to respond to changes. In some cases signals are being sent to the automobile manufacturer for things like wear sensors. So if the instances, if you will, which would be individual cars, have to communicate with each other for things like navigation, then we have to have this intermediate level. If they don't, if it's one car, let's say the car is still technically under the control of a driver, then you may be collecting things at the edge. Certainly today there are a number of cars that will take control from a driver if they detect certain events, cars that do automatic braking based on sensors that can use anything from radar to lidar to other technologies. So it's a question there of how much intelligence goes at the edge so that that system can take control, if you will, based on this knowledge and learning. Just a couple of quick things here in terms of attributes that we want to consider when we're deciding what gets pushed to the edge and what gets kept inside, if you will, the cloud or the data center. If we have something that's high complexity on the data side, like CCTV, where you're close to the video images and you want to be doing something like monitoring, looking for people in a database, if you're monitoring at an airport, for example, the device itself is static and the data is high complexity. That's a different kind of problem than if you're dealing with something that's mobile. The mobile part of it adds difficulty to the processing, but the fact that the data itself is low in complexity makes up for it. 
So low-complexity mobile is easier to allocate to the edge than the actual reasoning on a high-complexity data source that's static. That's just one set of issues to look at. Another one is the complexity of the data in terms of structure. And I don't like the term unstructured, but what I'm getting at here is that if something is deep or dense versus shallow, then it's more difficult to put that level of intelligence at the edge. It's easier, regardless of how fast the data is coming in, whether it's in batch, something that's static, or whether it's a high-volume, high-frequency trading app: if the data itself is easily interpreted, then we can probably keep that intelligence at the edge. Now I'm going to turn to the business end of things and look at the kinds of impact that we have as we get more and more data coming in from the IoT. So my comment here that it's a complex, data-rich world and sensors are everywhere should be fairly obvious, but I'm going to try and give some examples of things that you might not see in your daily life. There's streaming data like market data, news data, things that are coming off traffic signals, traffic cams, the E-ZPass system, that sort of thing. Those are all fairly straightforward. If we're dealing with traffic or E-ZPass, which is called different things in different parts of the country, you have a transponder, so your mobile device in your car is communicating with a static device as you pass by it, but the actual data itself is very simple. What I want to start to look at here is things where there are devices all over the place, like in this picture here, the Seaport Hotel in Boston, where you can actually go online and see how many open slots there are on the bike rack. And that might sound trivial, but hopefully in the next 10 minutes we'll see why that could be the start of a completely new business for you. 
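One way to summarize the allocation attributes just discussed is as a rough heuristic. The rules and labels below are my own illustrative sketch, not a formula from the talk: low-complexity data favors reasoning at the edge even when the device is mobile, while dense data on a static device favors the data center.

```python
def place_reasoning(data_complexity, mobile):
    """Rough placement heuristic for where the reasoning should run.
    data_complexity: 'low' (e.g. a transponder ping) or
                     'high' (e.g. CCTV video frames)."""
    if data_complexity == "low":
        return "edge"            # easy to interpret in place, even mobile
    if data_complexity == "high" and not mobile:
        return "data center"     # heavy analysis, stable connectivity
    # High-complexity and mobile is the hard case; split the work, e.g.
    # detect events at the edge and do the deep analysis centrally.
    return "hybrid"

place_reasoning("low", mobile=True)    # 'edge' (E-ZPass-style transponder)
place_reasoning("high", mobile=False)  # 'data center' (airport CCTV)
```

In practice this would be a weighted trade-off across bandwidth, latency, and cost rather than three branches, but it captures the shape of the decision described above.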
So once we start to recognize that there are all these things out there, each individual is producing a lot of data in general. If you're carrying a smartphone, have other devices, you've got your computer. You're also generating data just by your mere presence, as static and some mobile sensor-based devices are noting your presence. How are we going to turn that into a business opportunity? So when everything is connected and we're going to combine the IoT and cognitive computing, just the IoT alone means that we've got new sources of data and new sources of value from existing data. We have to start challenging these assumptions. In terms of putting them together, we've got some new technologies that are out there, new models, and we're building new ecosystems. It's actually a pretty cool time. So I'm going to give five or six examples here of how to leverage this type of data and how to distribute intelligence to do it more effectively. So we're going to look at the five rules here: know your customers, know what they want, know when their wants change, know where your customers are and where they'll be. And then the last one is anticipate opportunities. For each of those, the general advice is we've got to be looking at it and saying, okay, do we have data that's related to this? Do we want to detect things that are going to change our relationship with our customer? Or do we have data about our own products, our own services that are going to change based on detecting a person, their presence, or some behavior? So let's take a quick look and start with know your customers. So the idea here is we're getting into know your customers, and we put that little intelligence bubble in the middle. We can use technologies ranging from natural language processing to analytics and machine learning. 
Right now I just want to say: whatever we're using here, the important characteristic is that I want to be able to reason based on the context and make a decision at the edge. So we've got information about our customer that we have gathered from a combination of sources. It can be our previous interactions with them, our history; they may have given us profiles in the past; we could be collecting them now. But basically it's looking at the sensor-based systems, where the sensing can be active or passive. It can be something where they have to be wearing a Fitbit when they come into your store and you're going to know something about them, or the sensor itself can be passive: it's not doing anything until a person walks past. What we want to do here is start building systems that understand the customers so that when certain events transpire, we can react to them. But we can also go further than that. If we know what they want, and we know what's causing them to want it, then when those things change, and we'll know about it because of the sensors, we can react in real time, or right time if you will. It doesn't necessarily have to be what we formally think of as real time, but we can provide the right offer or the right response or the right signal, responding to an event that the customer may not even be aware of. So, know what they want. We're going to get that, again, using the same kinds of sensors, based on tracking their behavioral history. And if you think about it, hopefully we're looking at this in an opt-in kind of fashion: people are trading that privacy in pursuit, if you will, of better engagement with their providers or with companies in general.
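To make the idea of reasoning on context at the edge concrete, here is a minimal sketch of a gateway-level decision rule. Everything in it is illustrative: the `CustomerProfile` fields, the event shape, and the offer string are hypothetical names, not from any product mentioned in the talk. The point is only that the decision uses a locally cached profile plus a local sensor event, with no round trip to a data center, and that it honors opt-in.

```python
from dataclasses import dataclass, field

@dataclass
class CustomerProfile:
    customer_id: str
    opted_in: bool                       # privacy: no opt-in, no offers
    recent_categories: list = field(default_factory=list)

def decide_at_edge(profile, event):
    """Make an offer decision locally at the gateway.

    `event` is a dict produced by a local sensor, for example
    {"type": "entered_store", "zone": "hair_care"}. The profile is
    cached on the device, so no data-center round trip is needed.
    """
    if not profile.opted_in:
        return None
    if event.get("type") == "entered_store":
        zone = event.get("zone")
        if zone in profile.recent_categories:
            return f"offer:restock_discount:{zone}"
    return None

p = CustomerProfile("c42", opted_in=True, recent_categories=["hair_care"])
print(decide_at_edge(p, {"type": "entered_store", "zone": "hair_care"}))
# → offer:restock_discount:hair_care
```

The same rule could later be pushed down from a central site as policy, while the per-event evaluation stays at the edge.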
So if you allow an exchange between your sensor-based devices and retailers, just as one example, then what's being tracked is current history: it's tracking you as you're doing something, so we can provide a more tailored solution. But in general, only if we can act at or near the time the event occurs. So we may have to do that at the edge rather than passing it back. It may have to be generated within a store, within a mall, within your car, if you will, rather than gathering all this information and deciding tonight that we should have offered you a special deal. Offering you a special on an oil change may be more appealing if you happen to be within five miles of the dealer and you're getting closer to it rather than driving away from it. And if the system has collected information and knows, or can calculate, that you're probably on your way to work, then by offering you this deal and including an Uber to your office, you're going to get a better response as the retailer. So all of this can be enhanced, if you will, by doing it at the edge, collecting this data and acting on it with that intelligence about the customer. But here's where I think it gets more interesting: when the wants change. We're going to understand that by having sensors that are looking at the history and the environmental factors. So now we have a smarter system. Not only do I know where you are and where you're going, maybe because you put a route into Waze, the collaborative sensor-based GPS system, but I can also tell, based on your history and the real-time weather, which I may be getting as updates from the weather service but could also be getting from the barometers on all my subscribers' phones, that a change in an environmental factor will change your wants and needs. And I can provide that offer to you.
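The oil-change scenario above comes down to a small geofencing check that an in-car or phone gateway could run locally: am I within the radius, and is the last pair of GPS fixes showing me getting closer rather than farther away? Here is a minimal sketch under those assumptions; the dealer coordinates and the five-mile (about 8 km, shown here as a configurable radius) threshold are illustrative.

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two (lat, lon) points in km."""
    r = 6371.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def should_offer(dealer, prev_fix, curr_fix, radius_km=5.0):
    """True only if the car is inside the radius AND approaching the dealer."""
    d_prev = haversine_km(*prev_fix, *dealer)
    d_curr = haversine_km(*curr_fix, *dealer)
    return d_curr <= radius_km and d_curr < d_prev

dealer = (41.30, -72.92)                       # hypothetical location
print(should_offer(dealer, (41.35, -72.92), (41.32, -72.92)))  # approaching → True
print(should_offer(dealer, (41.32, -72.92), (41.35, -72.92)))  # driving away → False
```

Because both fixes and the decision live on the device, the offer can fire at the right time without a data-center round trip; only the accepted offer would need to travel upstream.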
But again, only if I can get it to you in a reasonable time, which may preclude sending that information back to a data center. So it's one more case of pushing the intelligence out to the edge in order to have better engagement. Know where your customers are and where they're going: this is really an extension of the earlier examples. But here we can also figure it out using mobile technologies, Wi-Fi, and beacons, all of them sensor-based, and using technology like facial recognition, which we already have in multiple phones and can build into sensors in the car. Some interesting work is being done right there. So by combining edge-based collection and processing, all of these things can start to work together and collaborate without going through a central site. And then, anticipate opportunities. This is where, with more data and with communication between devices that doesn't necessarily need to go through a central site, we can have my agent talk to your agent, if you will. And all of that can still be done based on the logic and the reasoning engines in each individual's device at the gateway level. It's not going to be done at the sensor level, because in general you need data from more than one sensor to make a decision. And the last suggestion here: if you're building applications, you need the application to be what I call aware everywhere. The idea is that your application needs to be constantly getting data from wherever the best source is and making it available for action using the inherent intelligence, though there's nothing inherent about it; you've got to put it in there. So it's about combining these technologies to make applications more effective by leveraging the mobile aspect. In this example I'm talking about mobile sensors providing that analysis, if you will, at the point where the data is collected rather than aggregating it.
And so, to start thinking about that, and I'll try to wrap up in a minute here to leave time for questions, start thinking about your customers, whatever business you're in, and think about what data they're already producing. I'm going to start with the typical cell phone. This is a list of sensors from an iPhone: you've got an accelerometer, an ambient light sensor, a barometer, a geolocator, et cetera. The idea is that each of these is providing data to the gateway in your phone, and there are actions the phone takes that don't require it to interact with a data center or the tower. The proximity sensor is the one that knows, when you've dialed and put the phone to the side of your head, that it can turn the screen off at that point. I'm not sure what else I would use that for, so let's look at the barometer. The barometer in your phone is used to calculate things like your altitude. So a change in the barometer, combined with the accelerometer, can be used to start building fitness apps: the phone knows how many steps you've taken, but also how many flights of stairs you've gone up or down, based on the barometric pressure. But now, if you start to aggregate barometric data from a pool of people, together with their geolocators, you can say: all right, I'm going to look at changes in barometric pressure across this pool of 1,000 people who are all within three city blocks. Now we can start to derive weather data from that. And with a system that can alert us, if we know the characteristics of the geography, we may be able to start making specially priced offers. We can know that the barometric pressure is changing, and based on historical data we know that's going to change buying behavior. One pattern that's been well researched: as humidity goes up, sales of hair care products go up.
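The aggregation step described above, pooling barometer readings from phones within a few blocks and watching for a pressure change, can be sketched in a few lines. This is a toy illustration, not a real weather model: the block identifiers, the hourly bucketing, and the 1.5 hPa threshold are all assumptions, and real readings would need calibration and altitude correction.

```python
from collections import defaultdict
from statistics import mean

def detect_pressure_drop(readings, threshold_hpa=1.5):
    """Flag city blocks where mean barometric pressure fell sharply.

    `readings` is a list of (block_id, hour, pressure_hpa) tuples from
    opted-in phones. A block is flagged when its mean pressure drops by
    more than `threshold_hpa` between its two most recent hours.
    """
    by_block = defaultdict(lambda: defaultdict(list))
    for block, hour, hpa in readings:
        by_block[block][hour].append(hpa)

    alerts = []
    for block, hours in by_block.items():
        ordered = sorted(hours)
        if len(ordered) < 2:
            continue                      # need two time buckets to compare
        prev, curr = ordered[-2], ordered[-1]
        if mean(hours[prev]) - mean(hours[curr]) > threshold_hpa:
            alerts.append(block)
    return alerts

data = [("blk-7", 9, 1015.0), ("blk-7", 9, 1014.8),
        ("blk-7", 10, 1012.9), ("blk-7", 10, 1013.1),
        ("blk-8", 9, 1015.2), ("blk-8", 10, 1015.0)]
print(detect_pressure_drop(data))  # → ['blk-7']
```

Averaging across many phones is what makes this viable: any single barometer is noisy, but a drop that shows up in the block-level mean is a real local signal, and the per-block logic can run at a neighborhood gateway rather than a central site.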
So if I look at this, and I have a profile of the individual and I have the sensor-based data, then I may be able to offer something tailored to an individual who is going to need conditioner in the next 20 minutes, when the humidity changes dramatically. And I'll get that information not from the weather service, though I could get general information there, but from looking at the behavioral patterns of all the opt-in users, or all the anonymized users, to see that they're changing where they're going and that the weather has changed. I'm deducing this from their own sensors, which have, again, been aggregated through those gateways. The other thing you want to do, and this goes back to the same diagram from before, is start looking for new uses for existing data sources. You'll look for new data sources too, but there's no shortage of data. The idea is to look at what's out there at the edge that's creating data, collect it with an IoT-enabled system, if you will, and use that data to present offers near where the data is being created. Just the two diagrams here: I first captured this one at the Seaport Hotel when I was staying there a couple of years ago and realized that you could start combining data from things like bike racks with traffic data, which most cities will provide as a reasonable stream, with weather data, with things like bike availability. If you were an Uber driver, you might want to start outsmarting your competition, the other drivers, by understanding what's happening in your neighborhood at a greater level of detail based on all this sensor-based data. The diagram on the right happens to be my hometown here in Connecticut, where you can start to capture everything from weather to traffic to behavior data. It's out there.
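The driver example above amounts to fusing a few public feeds into one local signal. Here is a deliberately simple sketch: the three inputs (open bike-share docks, rainfall, traffic delay) stand in for feeds a city might publish, and the weights and caps are illustrative assumptions, not a calibrated demand model.

```python
def demand_score(bikes_available, rain_mm, traffic_delay_min):
    """Toy 0-10 heuristic for local ride-hailing demand.

    Intuition: few shared bikes left, rain, and heavy traffic each push
    people toward hailing a ride. Each input is normalized to [0, 1]
    with an assumed cap, then combined with illustrative weights.
    """
    bike_pressure = 1.0 - min(bikes_available / 20.0, 1.0)   # few bikes left
    rain = min(rain_mm / 5.0, 1.0)                           # cap at 5 mm/h
    traffic = min(traffic_delay_min / 30.0, 1.0)             # cap at 30 min
    return round(10 * (0.3 * bike_pressure + 0.4 * rain + 0.3 * traffic), 1)

# Rainy rush hour with almost no bikes left nearby:
print(demand_score(bikes_available=2, rain_mm=4.0, traffic_delay_min=15))  # → 7.4
```

None of these feeds belongs to the driver, which is the point of the "you don't need to own the data" argument: the value is in combining open sources close to where the decision is made.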
So, the quick recommendations: you want to process, analyze, and put the intelligence as close to the source as possible, moving as much as you can to the edge. If you're in a state where you've got a lot of processing to do, then rather than passing data back and forth, you can cut down on the overhead, the bandwidth, and the time by exploiting more advanced hardware, things like GPUs, at the edge. And my key insight here, as you're thinking about building these applications: you don't need to own all the data. You need to own the ideas that will give you the insights from the data. It's all coming from the outside world. And so I'm going to wrap with this one. There are new opportunities to put intelligence at the edge and benefit from publicly available data. This screen is from a website called Thingful that is trying to build the world's largest repository of publicly available IoT data. And with that, let's see if we have any questions, and I would encourage folks to keep in touch. I've got my email on there, and there's a plug: Shannon mentioned that I'm working on a new book, and we'll have more information on that next month. It's really looking at how all this comes together in what I call the age of reasoning, as machines at the edge start to have that capability. So Shannon, back to you. Adrienne, thank you so much for another fantastic presentation. I think the U2 song "I Still Haven't Found What I'm Looking For" has a whole new meaning. There you go. So if you have any questions, feel free to submit them in the Q&A. And to answer the most commonly asked question: I will be sending a follow-up email for this webinar by end of day Monday, with links to the slides, the recording, and anything else mentioned. And as Adrienne has up there, next month we're going to be talking about a pragmatic AI maturity model. Very exciting. Everyone's pretty quiet today. It's the new year; everyone's been kind of quiet in the new year.
There's lots going on, I'm sure. But if anybody's got any questions, feel free to submit them to Adrienne; you've got a couple of options there. And again, I'll get that follow-up email out by end of day Monday. Adrienne, thank you so much. Hope you feel better. And happy new year, everyone. Thanks to all of our attendees for being engaged. Thanks. Take care. Enjoy. Bye.