I'd like to go ahead and introduce our next speaker, Sarah Cooper. She is the general manager of IoT Solutions at Amazon Web Services. Sarah has over 15 years of experience in the IoT space, and she's going to be talking to us today about making IoT implementation easy. She's very accomplished, so I want to do the mouthful of accolades for her. She's the vice chairwoman of the Internet of Things Community, dedicated to education and information sharing among the IoT practitioner community. Formerly, she was M2Mi's chief operating officer. She was named an IS 50 Most Empowering Woman in Business, recognized as a top 100 wireless technology expert by Wireless World, and is a National Academy of Engineering Frontiers of Engineering awardee. So that is a mouthful. Please join me in welcoming Sarah.

I think if I'm lucky enough to get invited back in another year, when they ask for a bio I'm just going to put down "mic drop." Thanks, Tim. Guy really set us up very nicely to start talking about how we experiment with IoT. And I would like to point out, now that I'm an Amazon employee, we do talk about gesture control. When an Italian manufacturer asked about connecting Alexa to the supply chain, we immediately went out and got those robotic arms and started looking at how we could do some gesture control with Alexa.

So one of the key areas that has been challenging for products across the board, as the world changes at an ever-increasing rate, is: how do you make product changes after you've already installed something? The last thing you want to do is buy the greatest smart home, fully integrated, completely closed software system. Everything is, a la 1990, wired up, and your home works really well on day one. And then six months later, out come completely wireless, gesture-controlled robots that will go around and pick up your clothes for you.
But they don't work with the wired system that you just installed and spent 10 grand breaking open the walls for. So figuring out how to sell products into that space, and how to build an architecture around both the devices that are part of that product and the back-end systems that support it, is absolutely critical, even more so today than it was in 1991.

And because I generally like to make people raise their hands: how many folks are part of the product design cycle? Then I can focus on the technology. Oh, we see some hands. Very good. Generally, even if you don't think of yourself that way, you probably are if you touch the device product chain somewhere.

Design is generally where products start, and this is a beautiful example of design, in my humble opinion. Design can be timeless, but technology rarely is. What's under the hood here is beautiful from a design standpoint, but from a technology standpoint, it's very rudimentary. If you were going to develop a car today, you would not use the same system under the hood, mainly because you would not be in compliance with the EPA, among other things. Nor was the design really all that good, certainly as a first attempt, although it's a super cool car. Not something that, six months off the line, people actually wanted to be driving. And just to humble ourselves, that would be the first Amazon website. A few things have changed since then, usually through a bunch of usability studies.

So if we look at how technology has been changing: starting in 1873, it took 46 years for electricity to reach 25% of the American population. Starting in 1991, it took seven years for the internet to hit that same 25%. Things move a lot faster, and more things move completely decoupled from each other, mainly because of the variety of technologies that we have today. The minute you go to connect a device, how many different ways can you connect it?
Let's see: you've got cellular, you've got LoRa, and next week you'll have something else. That breadth of technology at your fingertips makes it even more difficult to get things right up front. So think of a product: you've got to make a decision at some point about what you're going to put into it. You launch it, fabulous. The pace of innovation keeps changing. Somebody buys your product on day one, and they accumulate technology debt until they hit the end of the product's lifetime and are ready to buy version 2.0. Obviously, nobody likes to be in this particular place right here, and it's one of the reasons why we refresh technology.

The other piece, though, is that the price tag you can actually charge your customer is based on the value they think they're going to get, over the length of time they think they're going to get that value. So if they realize that technology change is going to mean they have to refresh to stay competitive somewhere in the middle, they're not going to be willing to pay as much for the product stack.

So: experimentation. Being able to learn about your products in the field and make changes to them while they're in the field. Being able to do things like personalization, which is great in consumer and smart home, but also very important when you start talking about supply chains. In many of those, the robotics are still managed side by side with a workforce that's there to help maintain the robot, and you're figuring out how to adjust what somebody needs to see to diagnose a problem. That often comes down to the background of that individual, which is also true when you've got a boiler to install. I don't know how many people here are from industrial companies, but one of the things that's happened over the last 10 years is that the workforce of service technicians has aged.
So now you've got your 50-year-olds who know boilers really well, and that workforce is aging out. And then you've got your 20-year-olds who know boilers because they read the manual; they know computers really well, though. So figuring out which of those two classes just walked up to fix this boiler might mean that you want two modes. That's not something that was put into the boiler's software or technology stack from day one; that's something that needs to be an update or an upgrade to the system.

Basically, if you can figure out how to up your technology game through the lifetime of the product, your customer maintains the value from that product, and they're happy with their purchase. Customers are starting to come to expect this. They don't buy static products. They love the fact that Tesla did an upgrade for them and gave them Autopilot overnight. It's amazing how many times that comes up in a week: hey, can you help me make my product do this? Autopilot? Yeah, maybe I can.

This next one was sort of an ideation slide for myself; I would love feedback on whether it really holds water. This is me thinking too much while doing slides, which is always a bad idea. At some point, it feels like we reach the point where the product lifecycle, from early adopters to maturity to "hey, this is old tech," is actually shorter than the product lifespan. When we hit that point, and for most technologies we haven't gotten there yet, we have to figure out how to better assess the risk of that product no longer being useful through its entire lifetime. And it seems like there are a couple of things happening in the marketplace, maybe not necessarily to address this specific problem, but that inherently do. In IoT, we're seeing a lot of what's called the outcome economy, i.e., I'm not going to pay you for fertilizer; what I will pay you for is the nitrogen fixation in the soil.
That's where I get my value. The outcome I want is better plant growth, not the fact that you literally spread fertilizer over my field. The other piece, and this one's a bit of a stretch, I think, is that the sharing economy also distributes that risk: I can move all of the value I'm getting to the front end by sharing that value and maximizing the possible use. Cars are a good example. If a car's lifetime is 200,000 miles or 10 years, and I can do 200,000 miles in two years (I'm not sure you could actually drive that much; you'd have to drive fast), then basically you've shortened the window in which you build up that technology debt, and you get a new car after two years. I'm not huge into new cars, personally; I keep forgetting where things go.

So if this is the new paradigm for devices and products, then really, the minute you install something is when it starts living. That's when you can start collecting data about how customers are using it, how it's being deployed, where within the processes you thought people were going to use it you were wrong, and where you can adjust. And this is where you have to start building in those experimentation layers.

So, a little tongue in cheek: we're not psychic. I personally have made plenty of design decisions that, six months later, you look back on and think, oh God, next time I'm not doing that again. It makes a lot more sense if you can go in knowing that, yes, I'm going to make those decisions, but I'm going to figure out what the opportunities are within my architecture and within my tooling to be able to make those changes. So, three key areas: architecture, tools, and culture. The hardest one is culture; I don't cover that one here. We do a lot of work with customers at Amazon talking about how we're able to innovate so quickly, and I'm happy to have that conversation with anyone who wants to talk about it.
But generally, architecture and tools are where you really need to start. It's not "if you build it, they will come," but "if you build it right, they will play." Clearly, you've got to know that something has failed: that the performance of something has degraded, that you missed an opportunity to engage a customer in a new way, that you missed a cross-sell opportunity. To know any of that, you need data. Thank you, Internet of Things, connected to the internet.

Similarly, for an experiment that goes wrong, you need a small blast radius. That sounds fairly straightforward, and we'll talk a little bit about how you do that in an architecture and how you design it into an experiment. But it actually has a greater impact: it's not just the blast radius within the software, it's also the blast radius across the customer set that you might be doing an experiment with. I say with, not on, mainly because I have a crash test dummy up there. When customers are part of a beta program, you need to make sure they are the right customers. We'll talk a little bit about cohorts, how you make that decision, and how you create an architecture that lets you apply a change across just a subset of your devices out in the field. The culture part is being able to shrug it off when the experiment fails.

So: physicist, hence the "doctor." For those of you who forgot, I don't know if this was fifth or sixth grade, the scientific method. We do this a lot in product design, but we do it a lot in service design too, on the software side as well as the hardware side. You've got to have an idea: your question, your query. You've got to get some background information before you go and just willy-nilly experiment on things. You want a pretty good assessment that this might work. Sometimes that's instinct; sometimes that's looking at related products; sometimes that's data from your last set of versions.
Your hypothesis is what you think you're going to get out of this. For example: I think customers will have an easier time installing boilers if I give them an indicator that tells them whether they completed the last section correctly. Then comes your experiment, and you have to have tests that say whether the experiment itself worked. Not necessarily "did I get the results I wanted," but "did I actually test what I was trying to test?" If not, you need to go back to the drawing board on the experiment, though not necessarily on your hypothesis. Then, obviously, you analyze the data, draw some conclusions, and figure out whether you were correct or wrong.

But these next couple of things, these teal ones down here, are not the scientific method; they're product design components. You also have to make sure that the change is worthwhile for your customers and figure out whether it's something you can really introduce. That rollout phase is part of the design.

So, when you look at how you're going to measure impact and how you're going to create a baseline for your experimental change: data comes with different temperatures. This slide isn't particularly clear, but obviously there are some things you need to know very quickly; we call that hot data. Things you're looking at over a population of historical data we call cold data. This is purely to give you a sense of where these things line up from a device standpoint. A lot of diagnostic and contextual data has a cooler temperature, but it's all very important in figuring out exactly what went on in your experiment. It's one of the reasons why putting Easter eggs into products is kind of nice from an architectural standpoint: sometimes that contextual information comes in handy when you see something you didn't expect. That correlation pattern is much more fun when you see something you didn't expect. Not necessarily more fun to explain to the boss.
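The hot/warm/cold idea above is really a routing decision: how quickly must a reading be acted on, and therefore where should it go? A minimal sketch, with invented thresholds, tier names, and event shape (none of these come from the talk):

```python
# Hypothetical sketch of routing telemetry by data "temperature".
# Thresholds and tier names are illustrative assumptions.

def classify_temperature(max_latency_s: float) -> str:
    """Map how quickly a reading must be acted on to a data tier."""
    if max_latency_s < 1.0:
        return "hot"    # act immediately: alerts, safety interlocks
    if max_latency_s < 3600.0:
        return "warm"   # near-real-time dashboards, diagnostics
    return "cold"       # historical/contextual data for later analysis

def route(reading: dict) -> str:
    """Pick a tier for one reading; a real system would send each tier
    to a different sink (stream processor, warehouse, object store)."""
    return classify_temperature(reading["max_latency_s"])
```

The useful property is that diagnostic and contextual data can flow through the same pipeline as hot data, just landing in a cheaper, slower tier where it waits until an experiment needs it.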
The only reason I throw up this slide is to point out that on the product analytics side, there's streaming data and there's experimentation data. I'll talk a little bit about Lambda. By that I really just mean something that is discrete, something that is boxed off and has a separation of concerns, so it gives you a place to experiment. And then, of course, there's your heavy lifting, some of the machine learning and AI capabilities, which gives you a broader spectrum of pieces to experiment with.

So again: hot, warm, cool. Kinesis is streaming analytics; you can set up different experiments with different types of data. Most of the experimentation that we see in the IoT space is Lambda, so I'll skip walking through how this works. In behavioral analytics, Lambda here is being used really just to do some data cleansing and enrichment. Again, serverless architectures: these are function-based components of your architecture. They can be used for things that are more of a heavy lift, more on the cooler data side; here it's EMR. But where we really see them playing the largest part from a product innovation standpoint is in things like this, where Lambda is used as the experiment holder.

So we have customers that actually have an incentive program for their engineers: you pitch your idea, you get 10 grand worth of serverless credits, and you get a chance to prove your hypothesis. In this instance, we did an experiment with solar panels, looking at cleaning regimes. The basic setup was three solar panels. One is actually dirty; one you throw your hand over to shadow it. You create a set of conditional statements and algorithms asking: am I in shadow, or am I actually dirty? And then you look at a couple of different attempts to clean, if cleaning is what needs to happen. We did three basic experiments.
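The shadow-versus-dirty decision described above could live in a small Lambda-style function. A minimal sketch; the field names and thresholds are invented assumptions, not the real experiment's code:

```python
# Sketch of a Lambda-style "experiment holder" for the solar panel
# example: decide whether a panel is shadowed or actually dirty, and
# whether to trigger a cleaning cycle. All field names and thresholds
# are illustrative assumptions.

def handler(event, context=None):
    panel_w = event["panel_output_w"]          # panel under test
    reference_w = event["reference_output_w"]  # known-clean reference panel
    ambient = event["ambient_light"]           # 0.0 (dark) .. 1.0 (full sun)

    # Low ambient light means everything is dark: shadow, don't clean.
    if ambient < 0.2:
        return {"state": "shadow", "clean": False}

    # Output well below the clean reference panel suggests soiling.
    if panel_w < 0.7 * reference_w:
        return {"state": "dirty", "clean": True}

    return {"state": "ok", "clean": False}
```

Because the logic is boxed off in one function, swapping in a different cleaning heuristic (or rolling the whole experiment back) means redeploying one function, not the product's whole stack, which is the separation-of-concerns point being made here.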
The first one was really just to pass a sheet of water over the panel, which turns out not to be terribly effective. Another had a squeegee that comes in over the water: more effective. And another just had a squeegee: not so effective. The point was really to show that we could run a very quick experiment (this one isn't really about user behavior), roll it back, and look at the architectural differences between the approaches. Lambda makes that really easy. That's generally in the cloud, not on the device, although this one did actually involve different components of the device.

We've recently come out with something called Greengrass, which takes that same ability to the device. Again, I'm using the AWS components here because that's my new sandbox and playground, but this is really an architectural design. Having functional components on your device means you can make a change in the cloud, simulate what that change might do on your device, then push it down and look at a group of devices out in the field with actual people using them. There are lots of different ways to do that; probably many of you do it today without Greengrass. What Greengrass is, is Lambda plus the AWS IoT communication stack on a device. Generally Linux; go Linux.

But here again we've got our same steps and processes: hypothesis, measure your baseline, determine your control and test groups. All of this can be done as part of your back-end architecture; it doesn't need to be a separate subsystem. There are mathematical tests for figuring out your best baseline, looking at historical data, and determining who makes your best control group.
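One common way to determine stable control and test groups for a fleet is deterministic hashing. This is an illustrative pattern under my own assumptions, not something Greengrass itself provides; the salt format and split ratio are invented:

```python
# Sketch: deterministically split a device fleet into control and test
# groups. Hashing device id + experiment name makes assignment stable
# across runs and independent between experiments. Illustrative only.
import hashlib

def assign_group(device_id: str, experiment: str,
                 test_fraction: float = 0.1) -> str:
    digest = hashlib.sha256(f"{experiment}:{device_id}".encode()).hexdigest()
    bucket = int(digest[:8], 16) / 0xFFFFFFFF  # roughly uniform in [0, 1]
    return "test" if bucket < test_fraction else "control"

def split_fleet(device_ids, experiment, test_fraction=0.1):
    groups = {"test": [], "control": []}
    for d in device_ids:
        groups[assign_group(d, experiment, test_fraction)].append(d)
    return groups
```

Keeping `test_fraction` small is one concrete way to get the small blast radius mentioned earlier: a bad change only ever reaches that slice of the fleet.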
You can promote the change back down to the device, collect the data, analyze the results, and identify where it would actually be productive. That's the other nice thing about doing in situ experimentation: you can understand who from your customer base, or what types of devices from an install standpoint, make the right targets for this improvement. It doesn't mean you have to do a blanket update across the stack. We see a lot of suppliers dealing with localization, trying to figure out how to maintain centralized control over what could be thousands of SKUs.

The other key piece to experimentation is having a very diverse, interesting set of tools to draw on. There's image recognition and facial recognition over here; there's QuickSight; there's Alexa. These are all things within the AWS stack, but wherever you play, you want to make sure you have access to the next generation of tools without always having to build them yourself. Obviously, this is the Amazon playground, but there are incredibly diverse open source capabilities in all of these areas as well.

So, I will generally end with one of my favorite cartoons. Given the pace of technology, I propose we leave it to the math machines and go play. From a cultural standpoint, the more you can incentivize engineers, but really everyone across the spectrum, to go out and play, the better. The Amazon Dash button, speaking of not having a screen: that button, literally a single button, all it does for the most part is trigger a Lambda function in the cloud. Look, I can do something! It's amazing how many people have gone out and experimented with that. There is something to that tactile sense of, hey, I did this. It was bite-size; it didn't take learning an entire... oh, thesaurus. Sorry, thesaurus is one of those words you don't get to throw into presentations very often. So, I'm happy to talk to anybody afterwards about the culture of innovation and how to do that on the culture side.
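The single-button pattern is about as bite-size as an experiment gets: one press, one small function. A sketch of what such a handler might look like; the event shape mimics the AWS IoT button's click types, but the actions themselves are invented for illustration:

```python
# Sketch of the one-button-one-function pattern. The "clickType"
# values (SINGLE/DOUBLE/LONG) mirror the AWS IoT button event format;
# the reorder actions are invented assumptions.

def button_handler(event, context=None):
    click = event.get("clickType", "SINGLE")
    if click == "SINGLE":
        return {"action": "reorder", "quantity": 1}
    if click == "DOUBLE":
        return {"action": "reorder", "quantity": 2}
    return {"action": "cancel_last_order"}  # e.g. LONG press undoes
```

The appeal is exactly the tactile feedback described above: the whole behavior fits in a dozen lines, so someone can change it, press the button, and see their experiment work without learning an entire platform first.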
But come check out how we play at Amazon and what we do. Thanks.