Hey everybody, thanks for coming. This is one of my favorite shows. This is my second year now at Big Data Spain. This is me, that's not my laser. Oh, that's my laser, can you see it? That's me, that's my company. And that's where you find me everywhere; that's a zero on the end. This is the link to the slide deck. These are Google Slides. You don't have to take a picture of the screen yet. I put the link at the end so you can decide at the end whether it was worth taking a picture of the slide, okay? Everything else good? Can you hear me fine? We're all good? How many of you actually already work with satellite images? You're gonna be bored. They told me this was the business track, so I'm doing business. How many of you are developers? Whoa, they told me business track. I'm sorry, but I will show — well, the network is kind of sketchy, so if the network holds up, I can show you some Jupyter notebook stuff. Would that be good? Yes? No? Thank you. I didn't know Spanish people were quiet. That's not the impression that I have of Spaniards. Okay, let's get started. The goals for today: one, teach you a bit about remote sensing — sorry for those who already know. Two, show you some of the value you could be getting from using remote sensing. And three, have fun, okay? Sound good? Okay, this is the part. I know the difference between sí, you know, right? It's just one syllable, sí or no. Sounds good? You don't even have to say yes. I can speak that much Spanish, okay? All right, so what do we all wanna do? Like, what's the purpose of big data? Why do we all do big data stuff? It's cool, that's why I do it, it's fun. But the reason why our companies pay us to do it is because we wanna make timely, knowledgeable decisions about things in the world. Is that a pretty good summary of why we do big data? For the most part, yeah? And so why should you care about satellites, right? Well, are we all agreed that's a good reason for doing big data? Yes?
And most of you are not using satellites, though some are, and I'll show you why: actually, satellite imagery can help you get more timely data than you can get otherwise, and it can cover larger areas than you would have access to otherwise. So how many have one of these in their — not this exact model, because this is probably like 20 years old — but how many have one of these in their possession? Okay, how many of you have a smartphone on you? Then everybody's hands should have gone up, because all of us basically have digital cameras now, right, in our phones. So there's not much difference between that and this, except in some very few key areas. This is our WorldView-3 satellite, right? So DigitalGlobe actually flies the birds up in the sky. We have a license from governments to fly it, which means we have some restrictions and stuff, but we actually have our own constellation of satellites. So let's talk about some of those differences. One pixel in one of our images represents 30 centimeters on the ground, right? So something like this is one pixel. Does everybody know what a pixel is? Is there a special Spanish word for it? Pixelo? Pixiamento? Sí, sí, pixiamento. So it represents 30 centimeters on the ground. That means that satellite up there, when it takes its digital picture of the ground, each little square is 30 centimeters — enough to see the lines in the middle of a highway or street lights, okay? If you took your iPhone and you put it at the same height as our satellite and took a picture, each square would be 40,000 centimeters on a side, right? So at best, you could probably see an aircraft carrier. Okay, does everybody know what an aircraft carrier is? Sí, I'm sure you know what it is. If I say it in English, do you know what I'm saying? That's the real question, says the ugly American abroad. Okay, next. The satellite is moving at about 27,000 kilometers an hour around the Earth.
That's fast enough to get from London to Paris in under a minute, right? But the image is not blurred. And I have a feeling that if you were moving your phone at 27,000 kilometers per hour, there might be a little blur effect, right? It knows a pixel location to within five meters of its true location. So what I mean by that is: if I tell you that this pixel is at these coordinates, and I went out with a GPS unit to the middle of that pixel, it would be plus or minus five meters from the center of that pixel compared to the GPS. Make sense? And that's without any manual correction. There are actually algorithms where you can go in, using real known landmarks and GPS measurements, and get it to sub-meter accuracy. But just straight off the satellite, it's within five meters. I don't think the geotagging of a photo on your iPhone can do that either. And each image is about 60 to 80 gigabytes, okay? Which is a — wait, am I supposed to say giga? Is it giga here? Or is it giga? It's giga. Sí? Sí, like "wee-fee"? I know. It took me a long time to figure out what "wee-fee" was. So the other thing about this, and this will come back later, is that each time I pull down a scene, that's 60 to 80 gigabytes of data that you're gonna have to process, right? Also — how many of you actually do have a digital camera? Yeah, and do you have like a 16-gig card in it, or maybe a 32-gig card in it? And how fast do you fill up your terabyte hard drive? Like this, right? So think about how fast that satellite is constantly taking pictures. We actually don't store all the pictures the satellite takes, because despite everybody saying disks are cheap, at our scale, disks are still not cheap enough. If we stored everything from all the satellites — well, I'll show you how much data we have already, and that's without storing every single picture we take. Some stats from our constellation — so this is all the satellites that we run.
We have 18 years of satellite data, right? Going back over the entire globe. With that, we have over 100 petabytes of data. Amazon loves us. So most of our platform is hosted in Amazon. We have a very large storage bill. But that also, I guess, is why we get invited to AWS re:Invent every year — look what they're doing on the Amazon platform. Our constellation collects about three million square kilometers of the Earth each day, and most of that's not over the ocean. We basically tend to throw the ocean away, because not many people are interested in the ocean unless someone asks for it. And we have ground resolution — our coarsest is 10 meters on a side, and that's for some of our atmospheric products, all the way down to the 30 centimeters I told you about. Up to 29 bands of spectral resolution. So I'm assuming most of the people who do GIS or spatial work know what I mean when I say 29 bands of spectral resolution, is that true? For the rest of you, do you know what that means? Sí or no? No. So let's go — tell us, thanks Will. So let's go ahead and see what we can do with that. Okay, when the image comes back, it comes back with multiple bands. Right, so what that looks like is a whole bunch of 2-D arrays stacked on top of each other. So our more advanced cameras on our satellites will give you back eight bands from each camera, which means you have eight pixels for each location on the Earth. Pixiamentos, does that make sense? Sí? When you get these back, they can be processed in — how many of you are Python developers or work with Python? So these are all NumPy arrays. Right, you can do all of your normal NumPy, Matplotlib, all that stuff with our image arrays. But what does it mean that there's eight bands? Everybody remember high school? With the radio — this is the electromagnetic spectrum? Everybody remember that? Sí or no? Sí. Don't make me ask you every time whether you remember. Because I will, I will.
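The stacked-band structure just described maps directly onto NumPy. A minimal sketch with synthetic data (the real platform returns much larger arrays, but the operations are the same):

```python
import numpy as np

# A multispectral scene is a stack of 2-D arrays, one per band:
# shape (bands, rows, cols). Synthetic stand-in for a real scene.
rng = np.random.default_rng(42)
scene = rng.random((8, 256, 256))  # 8 bands, 256x256 pixels

# "Eight pixels for each location": the spectral values at one ground point.
spectral_values = scene[:, 128, 128]
print(spectral_values.shape)  # (8,)

# Ordinary NumPy operations work band-wise, e.g. mean reflectance per band:
band_means = scene.mean(axis=(1, 2))
print(band_means.shape)  # (8,)
```

Everything later in the talk — band ratios, false-color composites, thresholding — is just arithmetic on arrays shaped like this.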
I'm a dad and I was a professor, so. So what I'm talking about with bands is: when I say a band, what we've done is we've compressed all the reflectance off the surface of the Earth. The way the satellite works is, light comes off the Sun, hits the Earth, reflects back up to the satellite, and we capture that in the camera. So a band is where we take the reflectance and squeeze it — all the reflectance that comes back over this wavelength range gets put into that band. And then that gets translated into the pixel value in that band. Does that make sense? Sí — nice job for those who answered. Good job. So when I say we have — these are our multispectral bands. Here are all these ones in colors. This means that camera on the WorldView-3 satellite takes one, two, three, four, five, six, seven, eight bands. Some are where the human eye works. Some are where the human eye cannot see things. And this makes you awesome, right? Because you can actually do things like this. So again, this is our electromagnetic spectrum on the bottom, just like we had before. And here is how much light is reflected at those different wavelengths off different objects. Make sense? Sí — everybody should be saying sí, otherwise you're not paying attention. Did you eat lunch? Is that the problem? All right, so what you can see here is: this is dry grass, like Spain in the summertime, right? Yellow everywhere, so there's no chlorophyll in it. These — the grass, the walnut tree, the fir tree — what's the word for walnut in Spanish? Is there one? It's the one you put in brownies. Exactly. Okay, so what you can see here is the chlorophyll actually reflects here — what color do you think that is right there? Remember, the color you see with your eyes is the color that's reflected off. So what do you think this little peak is — this is the visible spectrum right here. What color do you think that is? Green, right? It's reflecting in green, so it looks green to our eye.
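The "squeeze reflectance over a range into one pixel value" idea can be sketched numerically. The wavelength bins below are rough visible/near-infrared ranges I've assumed for illustration — not the sensor's exact band definitions:

```python
import numpy as np

# Toy reflectance spectrum sampled every 10 nm, peaked in the green (~550 nm),
# like the chlorophyll peak being described.
wavelengths = np.arange(400, 1000, 10)
reflectance = np.exp(-((wavelengths - 550) / 60.0) ** 2)

def band_value(wl, refl, lo, hi):
    """All reflectance in [lo, hi) nm gets squeezed into one number --
    that becomes the pixel value for the band."""
    mask = (wl >= lo) & (wl < hi)
    return refl[mask].mean()

# Assumed illustrative bins (not the real sensor's band edges):
bands = {"blue": (450, 510), "green": (510, 580),
         "red": (630, 690), "nir": (770, 900)}
values = {name: band_value(wavelengths, reflectance, lo, hi)
          for name, (lo, hi) in bands.items()}
# The green band dominates, so this pixel would look green to the eye.
```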
Plants love blue light, and they love red and infrared. That's where they're absorbing all their energy. But then, if you go into the other parts of infrared — this is near-infrared right here — they all start reflecting really strongly. If I had a plot of water, it would look something like that: water reflects a lot in this part of the spectrum, and then basically drops off in the infrared. So this allows us to do really interesting things. And then you can also notice, grass is a monocot — monocotyledon, do you remember that from biology class? Sí, at least some of you; your teachers would be proud. These are a deciduous tree, the walnut, and the fir tree. And notice up here, they all have different reflectance. So we can use the ratios between what they reflect here and what they reflect here to try to figure out different species and what's happening. This reflectance is due mostly to the water content in the leaves. So if you do that — these are three pictures of an agricultural field. How many of you work in agriculture or insurance? Right, and maybe you insure crops. So what you're seeing here for agriculture — oopsie, wrong way — in all of these, we're using that reflectance and we're using different colors to visualize it. These are supposedly plants growing and healthy; to us, they'd all look green. This is water content in the leaves. This is plants that are stressed. So one of the big applications of this — how many of you have heard of something called precision agriculture? Three, two — really, only two people have heard of precision agriculture. For those who have heard of it: is there a different word in Spanish? It's the same word, and only two people have heard of it.
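Stressed-versus-healthy maps like these are band-ratio products. As an illustration (the talk doesn't name the exact index used, so this is an assumption), the classic NDVI compares near-infrared and red reflectance:

```python
import numpy as np

def ndvi(nir, red):
    """Normalized Difference Vegetation Index: near +1 for healthy
    chlorophyll (strong NIR reflectance), low for stressed or bare soil."""
    nir = np.asarray(nir, dtype=float)
    red = np.asarray(red, dtype=float)
    return (nir - red) / (nir + red + 1e-12)  # epsilon avoids 0/0

# Toy 2x2 field: top row healthy plants, bottom row stressed/bare ground.
nir = np.array([[0.60, 0.55],
                [0.20, 0.18]])
red = np.array([[0.08, 0.10],
                [0.15, 0.16]])
index = ndvi(nir, red)

# Flag pixels below an assumed cutoff as stressed:
stressed = index < 0.4
```

A GPS-guided tractor could consume exactly this kind of mask to decide where to apply nitrogen.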
You guys need to get up on this; it's huge in data science. The idea is that with precision agriculture, tractors now can be driven by GPS alone, and they're accurate to within several centimeters as they're driving — they know where they are in the field to within several centimeters. So what that means is you feed the tractor this satellite image and it says: oh, anything in red is stressed; I will put nitrogen here, I will not put nitrogen here. Or you're going out and you don't want to drive every single one of your fields and take soil water measurements. You say: oh, these fields need to be watered and there's something wrong with my irrigation system; I will go drive here, I can ignore here. So it's a much more specific application of resources — much more efficient, much more profitable — knowledge about the world that helps us make timely decisions. Good? Okay, so when we went to that shortwave infrared — the one all the way up there on the far right — that's this part of the spectrum here, right? Up there's the — there's blue, green; we're up in here. These are all different kinds of minerals. I think these are all ones that are iron-based. How many of you are in mining? We're involved with a mining company. Okay, after this, you're gonna start going and doing mining. The middle one is a true color image — red, green, blue, right? This is what it looks like if you took a picture with your digital camera. This one is looking at the different ratios of reflectance. Right, and you can see some of the things stay the same, but these dark patches — these are green and these are red — those are actually different mineral compounds in the ground. You can't tell that from the visual image, but you can tell it from here. So then based on that, you can say: ah, these are high-gypsum fields; these actually have iron-bearing ores in here.
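Mineral mapping like this is the same kind of band arithmetic. A sketch of one classic illustrative ratio — red over blue for iron-oxide-bearing ground (an assumed textbook example, not necessarily the product shown on screen):

```python
import numpy as np

# Toy reflectance bands. Iron oxides absorb blue and reflect red,
# so a high red/blue ratio flags iron-bearing ground.
red  = np.array([[0.40, 0.35],
                 [0.20, 0.22]])
blue = np.array([[0.12, 0.11],
                 [0.19, 0.21]])

iron_ratio = red / (blue + 1e-12)

# Pixels above an assumed cutoff become candidates for a field crew visit,
# instead of bore-sampling the whole area:
candidates = iron_ratio > 2.0
```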
And then based on that, you can go out and extract the things you want in just the areas you need, rather than digging up this entire area. Make sense? Okay — plus also, it's much quicker to do something like this than to send out multiple field crews doing bore samples across the top of the surface, right? What you may do is run this analysis first and say: ah, I don't really trust this part. I'm not gonna send my crew here; I'll just send them here and here to see what's the grade of that ore, right? This is all big data analysis, though. Basically, this is all processing NumPy arrays and doing manipulations on them. So you can do this. Okay, this isn't some special — I mean it is, but it's not. Once you learn the basics, this is basic data science. So you wanna see some notebooks? Sí? Thank you. So we'll see if the internet stays with us, because what I'm actually using here is a hosted notebook on our platform. How many of you have used Jupyter? How many of you like Jupyter? Well, I disagree with you. But okay. What I'm gonna show is how easy it is to bring in one of our satellite images. Let me get rid of this announcement that my GeForce drivers are ready. So we're gonna bring in an image. So we bring up this search box. Where do we wanna get an image from? Somewhere in Spain. Okay, that's good. We have to get a little closer than Spain. It doesn't have to be Spain — it can be anywhere on Earth. Okay. We're gonna run out of time, because nobody's answering. Okay, yeah, great. Say something I don't know how to spell. Nice job. Spell it. Sí? Oh, there? Sí? What was that? That's Italian, right? Okay. So what I can do is say: oh, this is the area I want. Let's zoom into that port. Because let's say we wanna — and I'll show this later — we wanna do something with the ships in the port. We don't wanna have to go out there every single time and count the ships. We don't wanna sit there and wait for the reports to come in from the shipping agency.
Maybe we're actually a hedge fund. Is that the same word in Spanish? Does anyone know what I'm saying when I'm saying hedge fund? Right, so we're basically looking for the fastest information as soon as possible. So we don't wanna wait for all that stuff. We're gonna actually use remote sensing to tell us what's in the ports. Or maybe we're the fishing agency, and we wanna know how many boats are coming in and who's in port. Okay, or maybe we're humanitarian — is this on the Mediterranean? Yeah, so maybe we're looking for refugee vessels, although probably not here; it's probably a far journey, but there are refugees coming in here. So we could be looking for refugee vessels coming in, right? So then what we do is we say: okay, here's the port. Ah, this looks like loading docks. This is perfect. So what we can do is do this. I love how friendly the Spanish are — you're so nice. And then we say search. And then what that's gonna do is go search our archive. And I'm gonna have to make this screen smaller, because otherwise we won't see it. And then down here — oh, I put it in my pocket — down here, you can see we've returned all the satellite images that we have. Right now it's showing the satellite images from our platform. How many of you have heard of the European Sentinel space program? We also have Sentinel-2 images, and we have Landsat images, and we have IKONOS in there. So I can change this to all, and now you can see we're also getting the Sentinel and Landsat images in there. And you can use this platform to bring them in. But I wanna show you one of the nice ones, so let's go back to WorldView-3. Whoops. So this one — now you can see it's got 100% coverage. This one is from June of this year with no cloud cover. And if you wanna see — oh, is this actually what I want? Or was that cloud actually covering something? Like a 2% cloud cover — let's preview that one. So that's gonna pull down the preview.
There's the preview, and we can see that wherever that 2% of clouds is, it's not here. So we're good. Okay, so we can say insert. And what that does is automatically insert into your notebook the ability to call out this little piece of the larger image. If I zoom out now, I'll show you what the satellite actually brought back. Remember when I said preview? That strip from here to here — that's the 60 to 80 gigabyte image. Yes, is that a question? No, you're waving to your friend to say you're sitting here. Is that what you were doing? Good job. A satellite could have helped with that, if only we'd had a satellite in the room. So that's the 60 to 80 gigabyte image. But for most of us, we don't want that, right? If you're just interested in looking at the boats in the port, I don't want all that. I just want that little piece that I got there, right? And then from here, we can do all sorts of other analysis, right? But there you can see — look, there's cars and shipping containers in the lots. And I'm not even that zoomed in. I mean, if we really want to zoom in, let's go back. So we know there's a boat right here. Do you want to look at the boat or do you want to look at the docks? Boat, thank you. Bonus points for you. So let's just do this, because the boat's right there. Search. And that was the — I don't remember which one — the one with 2%, right? So let's preview — no, let's just insert it. And that'll come in soon enough. So there we can see the boat right there. So that to me looks like a cruise vessel. Yeah? Why? Who knows why I'm saying that? Yeah, there's a swimming pool. And those look like chairs. Now, this is not really sharp right away. Since we're using the visible bands, this is about 0.5 meters on a side. If I want to make that sharper, what we're going to do is take our 0.3 meter camera and merge it with our 0.5 meter pixels. Does that make sense? Then we have a better resolution image.
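That merge is pan-sharpening. One classic method is the Brovey transform, sketched below on toy arrays — this is a textbook technique shown under my own assumptions, not necessarily the algorithm the platform runs server-side:

```python
import numpy as np

def brovey_pansharpen(ms, pan):
    """Brovey-style pan-sharpening: scale each multispectral band by the
    ratio of the high-resolution panchromatic band to the multispectral
    intensity. Assumes `ms` has already been upsampled to the pan grid."""
    intensity = ms.mean(axis=0) + 1e-12  # epsilon avoids divide-by-zero
    return ms * (pan / intensity)

# Toy data: a flat 3-band patch and a brighter sharp band on the same grid.
ms = np.full((3, 4, 4), 0.2)
pan = np.full((4, 4), 0.4)
sharp = brovey_pansharpen(ms, pan)  # spatial detail from `pan` is injected
```

The appeal of doing this server-side, as in the demo, is that the notebook never has to touch the full-resolution arrays.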
So what I can say here is pansharpen equals — I think this is it, if jet lag hasn't robbed me of all my memory — nope. Yes, thank you. Good pair programming. There. Now watch what'll happen to that ship. So what we're doing is, on our server side, we're doing that merging and then returning the image back, so you don't actually have to do that processing in your notebook. And this is where we wait for the internet. There — but now it should seem to you like the boat is even sharper. Does it look sharper now? A little bit — is it hard to tell from back there? If you were zoomed in, we would be able to see these small boats better, and we can see the actual outline of the pool more clearly. It was easy to see there was a pool there before; it wasn't as easy to see how big it was, okay? Any questions? I like taking questions in the middle. Any questions? Okay, so the rest of this talk is basically just going to be showing you more stuff. Does that sound good? We have about 15 minutes. Was that — was that cheering that we only have 15 minutes left? Yeah, that's what that was. I know — go faster now, even, you're saying. Oh, I thought Spaniards were nice, and now you've offset it for him. Way to go. All right, the next one I'm going to show you is about revealing the hidden — that was all visible, right? Visible stuff would be easy for us to do. But if we wanted to reveal some of the hidden things — so let's hide the tips. This is one that I've written. Again, remember the bands. So this is an area in Pueblo, Colorado. All right, I picked that of course because I wanted to use Spanish words. So this is Pueblo, Colorado — good, both Spanish words. I heard that. Okay, does anybody know what this is up here? Baseball fields, exactly. What happens to baseball fields in dry areas? What do we do to them in the United States? Yes, we put lots of water on them so that they are healthy vegetation, right?
Also, where are the water bodies in this picture? Like, where are the areas of water? It's pretty easy to tell that this is water, right? Right away you're probably saying that's water. This looks like a river, so we'll probably go with that being a river, and that's a river too, probably, right? Do you see any others? Has anybody seen any others? Yes? No? Watch what happens now. So this is going to be using some of those bands and the ratios between them. First, I want to show you: this is the normal one we were looking at before. In this one, what we've done is we've said near-infrared reflectance is now going to be red pixels, red reflectance is going to be green pixels, and green reflectance is going to be blue pixels. We've basically shifted our normal picture up the spectrum. Does that make sense? We do this a lot in remote sensing just to get a good idea of what's there. So remember, look at all these pictures down here. This is blue, green, red — where we usually see things. There's not much difference, just more reflectance. But look what happens when I plot the infrared. What happens to the baseball fields? They're very bright, because — you remember — healthy chlorophyll reflects infrared really brightly. So first off, we can see much more easily here that the baseball fields are green vegetation. We can see the vegetation growing along the sides. We can see all the vegetation growing in the — this is a park — we can see the vegetation and trees around people's houses, right? All of which are much harder to pick out right here. So this is what you get with the extra bands. Now let's go for our water problem. We're trying to find the water. So remember I said before, water drops off in the infrared. So what we're gonna do is use the ratio of infrared to green to find water bodies. And what that gives us is this, right? There are our lakes from before, good. There's that other lake, there's that other one. But we missed this stream.
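Before going on: that infrared-to-green comparison is often packaged as a normalized difference water index (the "NDWI" name and the exact normalization here are my assumptions — the talk just says "ratio of infrared to green"), and the masking step is a simple threshold:

```python
import numpy as np

def ndwi(green, nir):
    """Water index: positive over water (green reflectance stays high
    while NIR is absorbed), negative over vegetation and dry ground."""
    green = np.asarray(green, dtype=float)
    nir = np.asarray(nir, dtype=float)
    return (green - nir) / (green + nir + 1e-12)

# Toy scene: left column a lake, right column vegetation/roads.
green = np.array([[0.30, 0.20],
                  [0.28, 0.15]])
nir   = np.array([[0.05, 0.45],
                  [0.06, 0.40]])
index = ndwi(green, nir)

# Picking the cutoff is the manual-segmentation step: too low and roads
# leak in, too high and shallow streams disappear. 0.2 is an assumed value.
water_mask = index > 0.2
```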
There is probably only a little bit of water in it, and there are probably actually some ponds down here, and we missed those as well. So this is one of the things — if you were somebody in charge of water resources for Pueblo, Colorado, this image helps you immensely. This also gets us into a segmentation problem, though. Right, because what else is blue in this? The roads, right? They're all blue. And then this is the river, but look how light that is. There's sediment right there underneath the water; it's very shallow water. So in the rest of this notebook — which, if you sign up for our services, you can go play with yourself — we actually go through manual segmentation to figure out where to threshold that value. Right, and then I picked — this one I think is picking up too much noise down here, this one's cutting away too many rivers, we've lost that entire thing up there — so I picked the 0.2 threshold, and then I plot it back on top of our original image. Okay, so this is some of the basic analysis you can do. Here — how many of you have used SageMaker or have heard of SageMaker? Amazon's big machine learning platform. Here we're gonna actually do object detection in this next one. So this is the Palu tsunami. You remember, it was about a month and a half ago, the tsunami in Indonesia, right? So when this happened — let me go to the top. Let me first show you the level — I mean, I'm just gonna show you a quick bit of the destruction. That's actually one of the bad parts about working at a company like this: because we give this data away for free when there's a tragedy like this, you start looking through it and you get to understand how bad the destruction is on a totally different level. So there's before and there's after. The bridge is gone, this whole area of land is gone, and there's a whole bunch of sediment draining out into the area out there.
So we're gonna focus on this area to try to help relief efforts look for destroyed housing and where people might be. And you can see there's quite a bit of difference. What we did — and I'm showing this for a very specific reason — is we took a machine learning algorithm we had built for burnt houses in California, right? And we said, okay, let's just naively apply that to this image as well. And so here you can see that machine learning step right here, where we're using SageMaker to do the object detection. And when that comes out, we get this before image. So this is the before image. You can see all the building objects it thinks it's found. And you can tell it's not doing a very good job, right? Because it's saying this is one building, right? So why did I tell you this? Why did I show you this? No reason. The reason is because your same data science skills that you use now still apply to remote-sensed images. If I build a training data set on the customers of a new cutting-edge bank, and I build a model on that based on their saving and investment preferences, should I use that same model on a bank that's been around for a couple of centuries? That's not a trick question. Would that model be as valid? Especially with something like machine learning, where there are no mechanisms under it? No — you can say no. I won't pick on you, I'm a nice guy, unlike some people. And so that same thing applies here with remote sensing. Just because you have a building model, or a ship model, or a car model, does not mean that you can apply it everywhere on the globe. So once you've built a model, your work is not done, especially if you wanna spread it to other places. But let's just keep going with this, knowing that it's pretty rough. This is the after model, okay? I don't know why it thinks it found — maybe this building here makes it think it found a building here.
And I don't know why it thinks that's a building. So what we end up with is: these are the buildings that supposedly remained, and then this is our final image. Can you see the green boxes there? All the green boxes are buildings we think have been destroyed. So as a first-cut heuristic, this is way faster than actually going through and trying to digitize and count them all. It's actually not a great model, but I'm showing you that you can do all this kind of object detection on remote-sensed images. Any questions on that one? We good? All right, I'm gonna quickly go through some other use cases. This one's exactly related. This is a hurricane in Oklahoma, in the United States. Within a couple of hours after the hurricane — or tornado, sorry, tornado, not hurricane. Do you know the difference between a hurricane and a tornado? Obviously I do not. Yeah, hurricanes are these big, huge storms; tornadoes are those diablos that spin through, right? So everything in orange here is destroyed, everything in blue is possibly damaged, and the rest are basically okay. So within a couple of hours, the insurance companies in this area were able to know where they should send out adjusters and where they were gonna have to do work — not sending someone out into the field going individually to each house. So someone over here said they work for an insurance company; this is the kind of efficiency it gets you beforehand, right? And you can tell the people over here to ignore it, right? We don't have to send anybody over there unless they call. Okay, another example. So this is back to that mineral stuff and other things you can do with that shortwave infrared. This is what comes off a Landsat — Landsat 8 has shortwave infrared. And then this is what comes off of our satellites for the same area. Because we have smaller pixel sizes, you can pick things out. Yes, that was the five minute warning. I got seven over here. Who do I believe?
Oh, that says five over there, though. But that's American time — it's not 4:55. Here, let me show you another. Oh, here you're actually — that's building roof material. Right, this is probably tar and these are other materials right here, right? So we can actually use SWIR to pick out different kinds of building roof material. You could look at change over time. You could do it for solar panels — of which I'm shocked there are so few in Spain. Why is that? I haven't seen any. Okay, no one knows why. The other thing that SWIR is good for is looking for water, like I said before. This looks like a big dry field, yeah? Except for this pond right here and maybe some of these right here. That's actually what the soil moisture looks like using that other infrared, right? And there'd be no way to do this without going out and sampling in the field. Another example: this is a forest fire. Where's the fire? If I use SWIR, I can actually pull out the vegetation, and I can pick out the fire edges and where our hotspots are. And I know Spain has a lot of fires. You're very similar to the habitat where I live, which is Santa Cruz, California, right? Where we're having a bunch of fires right now. For your insurance people, for your firefighters, for your government agencies — all of this is really helpful stuff. It's also really good for building detection. This, again, was a project we did with the Bill and Melinda Gates Foundation in Africa. OpenStreetMap had these little blue houses. We used some AI or machine learning stuff and got it to look like that, automatically. So the numbers on this — well, automatically, right? The numbers on this were: we mapped almost 18 and a half million houses over 945,000 square kilometers in Tanzania. Or Tanzania, depending on how you pronounce it. So this is good for the building industry, this is good for civil planners, and again, good for insurance. Let's see, what else have we got here? Ship detection.
If you want to look at counting passenger vehicles from satellite imagery, these links are all in the slides. This one actually talks about the differences between approaches — I put this one in because I think you guys would really like it. We talk about different algorithms. So there's a LeNet classifier; we compare that to a CNN. And then we also compare that to doing segmentation and morphology. And then we compare that to single-shot detectors. And they all give really interesting counts on cars. So this is automatic counting of all the cars in this lot. I think I made my point, huh? The only other thing is this one also shows off using SageMaker in the middle. These links are all in there. So let me wrap it up. What does our platform get you? Let me actually go back to presenting quickly. So what our platform gets you is Jupyter for exploration and collaboration. We host it for you. You can certainly bring the notebooks down and work on your own laptops if you want, but you can also work on them remotely on our platform, save them there, and collaborate. We have REST APIs for image retrieval. We have a tasking framework, and it also integrates with ML and AI and vector and numeric storage. Oh, and you can mix in other data formats on the platform. The tasking framework takes your notebook — so you develop a notebook, and you can then say: make this into a task and run it over an entire area. So for example, when we saw those baseball fields: suppose you were interested in all the baseball fields in California, because you own a taco truck, and you want to start doing your little taco truck tour to sell your tacos. You could develop that in Santa Cruz and maybe — let me think of another — in Los Angeles, and then you say: take this notebook, now make it into a task and run it in the background on our infrastructure across all of California, and give me back the vectors for just the baseball fields. Make sense?
And it's on our infrastructure, and we do it for you — using containers, of course. Coming soon — soon as in, like, sometime this winter — is the ability to train, validate, and analyze machine learning models, along with storing the model and the results, all in the platform itself. Like a whole platform for doing that, especially since imagery is somewhat harder to do that with. That's the end. There's the slide deck again, so now you can take your picture if you thought it was worth it, and then that's me. And I think I've used up all my time for questions, yeah? I will be around, though — I'm doing an Ask the Expert session over by theater 25, and then I think I'm done for the day, although I'm going to Will and Sophie's talk — or it's actually just Will's talk — but that's later. You should go too; Will's a good speaker. And he's nicer than me. Thank you.