I feel like I need to talk like George Will in complete paragraphs. OK, good. Back to colloquial me. All right, we want to talk today about seeing machines think. Artificial intelligence is becoming an increasingly interesting thing that we all think about. At the same time, it's been an enduring concern for a long time. In fact, in the old days, people would try to create smart machines with handcrafted algorithms, collections of very specific rules, brute-force calculations. And even then, it could lead to very, very interesting things. One thing that I've been interested in for a long time is the game of chess. And I just want to show you a relatively old piece because it has a point to make. So this is a game of chess. You can play it against the computer. And as I play, you can see every move that the computer is thinking about. The computer is not playing a great game here, by the way. But it's thinking hard. And we give it credit for that. And what it's doing is simply looking at every possible move and trying to figure out what's going to be good or bad according to a pre-made formula for good or bad moves. And this really represents, I think, sort of an old view of intelligence on computers: that it just sits there and calculates by brute force.

However, what's exciting is that we're seeing a very different role for artificial intelligence today. And so today we're going to talk about neural nets. And one of the things that sets them apart from the kind of old-world computing that Martin was talking about is the fact that these systems learn over time. They're modeled after the human brain and nervous system. They're also probabilistic, so there are many sort of shades of gray there, no pun intended. And they are not completely predictable. And that makes for a really interesting opportunity for visualization, because one of the big challenges is understanding exactly what is being learned by these systems. And experts don't always know what these systems are learning.

So this is a very simple sort of artistic rendering of what a neural net might look like. You feed it some input data. So over there, we're inputting a picture. And this picture will pass through all these layers. Each one of those vertical things is a layer in the neural net. And each layer is made up of a bunch of neurons. And they are firing up and trying to understand things about this picture. What you see at the bottom are filters and activations. They're trying to take the picture apart and understand what it is that the machine is looking at. Eventually, as this goes through all these layers, it comes up with a prediction. OK, so what kind of picture is this? Is it a horse? Is it a deer? Is it a plane? And the machine will come up with different probabilities for each one of these options. And then eventually it decides, OK, horse is the one I'm most sure about, so I'm going to go with horse.

So how does a real deep learning network work? It's a mystery. In fact, people go on Twitter to make jokes about it. And if you search for things like black box neural nets, you get tons of answers right away. And this is, again, one of the hooks for us in visualization: trying to open up the black box a little bit and understand what it is that's happening inside these systems. So we want to suggest that there are a couple of reasons to stay calm about this mystery. One is that it's actually normal for technology to be kind of mysterious.
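By the way, just to put a tiny bit of code behind that forward-pass cartoon from a minute ago: here's what "the picture flows through the layers and comes out as class probabilities" looks like at its smallest. This is a made-up sketch in plain numpy, with random weights and invented layer sizes, not any real trained network:

```python
# A toy "picture in, probabilities out" pass, in plain numpy.
# The layer sizes and random weights are made up for illustration;
# a real classifier has learned weights and many more layers.
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0.0, x)

def softmax(z):
    e = np.exp(z - z.max())          # numerically stabilized softmax
    return e / e.sum()

image = rng.random(32 * 32 * 3)      # a flattened 32x32 color picture
W1, b1 = rng.standard_normal((64, 3072)) * 0.01, np.zeros(64)
W2, b2 = rng.standard_normal((10, 64)) * 0.01, np.zeros(10)

hidden = relu(W1 @ image + b1)       # one layer of neurons "firing"
scores = W2 @ hidden + b2            # one score per class
probs = softmax(scores)              # probabilities for horse, deer, plane, ...

classes = ["plane", "car", "bird", "cat", "deer",
           "dog", "frog", "horse", "ship", "truck"]
print(classes[int(np.argmax(probs))], probs.max())
```

A real network has many more layers and learned weights, but the shape of the computation is the same: multiply, squash, repeat, then turn the final scores into probabilities.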
One of the things I did was go and read a little bit about old kinds of technology. Think about something as simple as metal: for thousands of years, people were using iron. And they would basically extract it using these very complicated, dangerous blast furnaces. So here you see images of Chinese iron blast furnaces, the one on the left, I think, from the 1600s and the one on the right from the 1300s. The truth is, when these were created, this was super high technology. And nobody really knew all the chemistry involved in burning these things. And it was OK. It was fine. So I think, in a way, it's like the normal human condition that sometimes the technology runs a little ahead of our understanding. And the other thing is that we don't really know how brains work either. And we're pretty comfortable with them. So that's one reason to stay calm.

There's another thing, which is that visualization really can help. And in fact, this is an amazing opportunity, I think. Taking an MRI of a human brain is very, very difficult, obviously. It's like this big honking machine you have to get in. And it's horrible. When we have artificial neural networks, we can actually start inspecting them in interesting ways. And it's really a great opportunity to understand what's going on. So we want to talk about some of that today.

And I want to start with a little bit of a warm-up, actually, before we leap to neural networks. I just want to talk a little bit about machine learning and how you could sort of understand it in general. And so let's start to teach a computer to drink wine. Let's have a computer tell red and white wine apart. So the first step is to get data for that. I was able to find some great data. There's a UCI site that has a nice archive of data sets. And I found data, conveniently, for wines: the amount of chlorides and total sulfur dioxide. I wish I could have gotten data on oaky flavors or whether there's a cherry note in there. That would be a little more appetizing, but we're going to just deal with the chemicals that we're given here.

So you have a data set like this. And there's one particular maneuver that is critical to understanding this whole field, I think, which is that you look at something like this where you've got two columns of numbers. So each row has two numbers. And the key mental maneuver is to think about those pairs of numbers as points in space. That transition from just a data table to geometry has a lot of power to help you think about this. So for example, I can plot the red wines as points in this two-dimensional space. I can plot white wines. And just by plotting, I can start to see, yeah, all right, there is a difference there. And I can start to think, how could I quantify that difference in a way that a computer could understand?

So one thing you could do is say, I'm just going to look at one of these features. So, to use machine learning jargon, the chlorides are a feature. The amount of sulfur dioxide, that's also a feature. I'm labeling the axes x1 and x2 instead of x and y, just to get you used to the weird jargon that happens in this field. So how could I tell the difference? I could look at just one feature. For example, I could say, let's look at just the chlorides. And I could write down this little equation here, our formula. And I could say, well, when that is positive, I'm going to guess that this is a red wine. And if it's negative, I'm going to guess that it's a white wine.
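In code, a rule like that is about as small as a classifier gets. Here's a sketch; the threshold and weights are made up for illustration (in practice you'd fit them to the data), and the second function jumps ahead to the combined, two-feature version we're about to look at:

```python
# A sketch of the wine rule, assuming x1 is chlorides and x2 is total SO2.
# All numbers below are invented for illustration, not fit to the UCI data.
def guess_single_feature(x1, threshold=0.065):
    # positive score -> guess red, negative -> guess white
    score = x1 - threshold
    return "red" if score > 0 else "white"

def guess_linear(x1, x2, w1=10.0, w2=-0.01, b=-0.2):
    # combine both features linearly, with more weight on chlorides
    score = w1 * x1 + w2 * x2 + b
    return "red" if score > 0 else "white"

print(guess_linear(0.08, 20.0))   # high chlorides, little SO2 -> likely red
print(guess_linear(0.03, 150.0))  # low chlorides, lots of SO2 -> likely white
```

That's all a linear classifier is: a weighted sum and a sign.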
And since this is a visualization crowd, I've done a visualization thing here and placed a heat map in the background, showing you the value of this formula. And you can see that actually it's not terrible. It kind of gets a lot of the white wines right. They're in the white area. The red wines are in the red area. OK, maybe it's not great, though. I could also try sulfur dioxide. You can see that's actually much worse if I look at just that feature. Not good at all. So what you really want to do is combine the two of them. And the simplest thing you can do is combine these in a linear way. So I've created a sort of diagram on the right that gives you an indication of what's going on. I've taken the two features, the x1, the x2. And then I've combined them, putting more weight on the x1 feature, the one at the top, to get this linear classifier here. And if you look, you actually see most of the red dots for the red wine are in the red area. Most of the white dots are in the white area. It's not perfect, but the truth is it's actually better than most humans at this task. A lot of people think it's very easy to tell these apart, but in fact, studies show that people are much worse than you would expect. There are many ways you could find this particular formula: you could do it analytically, or you could try to sort of teach the computer gradually to find it. We won't go into details about the exact training, but the point is that this formula exists and this is a way to tell these apart.

Now, as I said, the formula does better than most people, but we cannot conclude from that that this is some hyper-intelligent wine computer. On the contrary. Did we get that back? Good. And Peter, yes, I'm about to insult it. Or, who knows, maybe it is hyper-intelligent. Okay, so what would happen, for example, if you gave it a glass of pure sulfur dioxide? If you actually look at this diagram and plug into the formula, the model will happily say that is definitely white wine, whereas a human would say, argh, I'm choking, because it's actually poison gas. So that's sort of an interesting point to keep in mind as you think about machine learning: it's designed to tell you stuff about the data it's been trained on.

Okay, so now let's look at a demo that sort of makes that point. Let me see if I can make this a little bit bigger. So this over here is a neural net, a very, very small, simple and transparent neural net that we launched a couple of weeks ago and that you can play with in your browser. It runs locally in your browser. This is part of the TensorFlow project. We're gonna talk about TensorFlow in a little bit, but this is all open source. So please play with it. The whole point here is to start to do what Martin was walking you through: to play with different scenarios and see how tweaking the network will give you better or worse answers for the scenario you have. So to mimic what Martin was looking at, I have here a very simple network that's looking at these dots over here. These are data points, and we have two sort of rough clusters. We have the blue cluster, which are the positive points, and we have the orange cluster, which are the negative points. And we're trying to ask the machine to separate these in the best way it can. And so, starting very much along the same lines as Martin, we have different features here. The first feature just partitions the space vertically. The next feature just partitions the space horizontally. Okay, so this is our very simple system. It has no hidden layers.
It's very straightforward. And now I'm gonna play it, and very quickly it gets the answer, right? This might be the best partition for this set of points. Okay, so this is all good and straightforward. So now let's give it a slightly more challenging data set. I have now two concentric circles of points. The negative points are on the outside, the positive points are on the inside. Okay, if I run the same network with the same features, what does it do? It doesn't. It doesn't find a good answer. It can't, right? It's too complicated a data set for it to figure out with these features. So what I can start to do is I can start to add a couple of layers here. And these are hidden layers. And now I'm gonna add different neurons to these layers. And now let's see if it gets it. Okay, it does.

Okay, so let's talk a little bit about what's going on here. I've made my system more complex. I've added neurons to it. Now each neuron here, each little box, is telling me how it sees the world and how it thinks the world should be partitioned, given the data set it's given, right? And I see that over here it starts to form these curves and starts to sort of follow the shape of the data better. The other thing is that now we have these lines that are both blue and orange. These lines are weights. It's how much weight we're giving to each one of these neurons, okay? So the thicker the line, the more weight that neuron has on the output, or on the input to the next layer. They're also different colors. The orange weights are negative weights and the blue weights are positive. A positive weight is a straightforward weight: you're gonna take whatever I give you and you're going to pass that to the next layer. A negative weight means that you're going to take exactly the opposite of what I give you, okay? So if we look, all of these final neurons here are feeding negative weights to the output. If I look at the output, everything on the outside is orange and everything on the inside is blue, just like our data set, which is great. But if I look at this neuron, it's flipped, right? And that's why we get these negative weights, okay?

And then we can start playing with different things. So now I'm putting in a very different kind of input data where I have sort of a checkerboard setup. And I'm going to play the same exact network and see what I get. Not bad. I get a pretty good, straightforward answer. But the other game I can play is I can just fiddle with these features, right? And sometimes I may choose a feature that actually is really good at predicting the data I have. So if I run it now, it's slightly better. And this is a lot of the art of machine learning: choosing your features wisely, playing with your data, tweaking your network so that you can come to the best outcome. And then you start to see things like this: this neuron at the top here is almost deactivated, because we don't really need it anymore. We don't care about that. And then there is stuff like this, a much more complicated data set. And let's see, if I just run this network the same way, it tries. It's trying really hard, but it's not getting there. So let's play with this. Let's give it a full set of features. Let's make its brain bigger. Very complicated. And let's see how far we get with this, see if this is any better. And it's trying so hard. It'll figure it out.
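While it trains, here's a rough sketch of what a tiny network like this actually computes: two input features, a handful of hidden neurons, one output, with tanh activations like the playground's default. The weights below are hand-picked for illustration (the playground learns its own by gradient descent), but they carve out an inside-versus-outside region much like the circle data:

```python
# A sketch of the kind of tiny network the playground trains.
# Weights are hand-picked for illustration, not learned.
import numpy as np

def tiny_net(x1, x2, W_hidden, b_hidden, w_out, b_out):
    h = np.tanh(W_hidden @ np.array([x1, x2]) + b_hidden)  # hidden neurons firing
    return np.tanh(w_out @ h + b_out)                       # >0 means "blue", <0 means "orange"

# Four hidden neurons, each one a line through the plane; the output combines
# them. Negative entries in w_out would flip a neuron's contribution, which is
# exactly what the orange connecting lines mean in the demo.
W_hidden = np.array([[ 3.0,  0.0],
                     [-3.0,  0.0],
                     [ 0.0,  3.0],
                     [ 0.0, -3.0]])
b_hidden = np.array([2.0, 2.0, 2.0, 2.0])
w_out    = np.array([1.0, 1.0, 1.0, 1.0])
b_out    = -3.0

print(tiny_net(0.0, 0.0, W_hidden, b_hidden, w_out, b_out))  # near the center -> positive
print(tiny_net(3.0, 0.0, W_hidden, b_hidden, w_out, b_out))  # far outside -> negative
```

The sign and thickness of each weight is exactly what the blue and orange connecting lines in the demo are showing: whether a neuron's opinion gets passed along as-is or flipped, and how strongly.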
And it will figure it out eventually. And then there are all sorts of questions that you wanna ask of your network. When it gets to a stable state, if it ever does, is it overfitting the data? Is it just looking really, really hard at the data that you have, and then that's it? Or is it general? You want something that is general. But anyway, the point here, I'm gonna stop it for a minute. The point here is that this is a toy network that you can play with and start to develop some intuition for. It was really interesting, because we developed this tool internally at Google at first, and even experts were surprised by some of the things that they could see on this very, very simple network. So again, the role that visualization can play here is really important, because even people who deal with these kinds of systems every day have trouble developing a really deep intuition for what's happening here.

So now that we've looked at a toy network, let's look at real networks. And the point is that these networks are very complex. They are sometimes massive, and they have lots of high-degree nodes. This is work I'm about to show that was done with Ham over here, who's in the audience. He was an intern with us last summer and started this amazing work. So the goal is to turn something like that on the left, where you have all the low-level nodes and all the edges in the network, into something that is a lot more structured, that takes into account hierarchies, that takes into account high-degree nodes and how you should deal with them in the topology of the network.

The other point here is that it's really important to be able to show these systems. It's really hard, but it's very important. So to give you a sense: every time a new breakthrough comes up in neural nets, you will have publications, you will have academic papers, and the first thing people will do is draw diagrams such as the one you're looking at right now. And these are very high-level diagrams. It's the level at which we as humans want to think about these systems. This is my system. It's made of 20 layers. This is layer one. This is layer two. This is a convolutional layer. This is a fully connected layer. This is the level at which you want to talk and think about these things. However, the structure of these graphs is incredibly low-level, where if you wanna add nodes, you're literally adding one node and then another node and creating edges. So you can imagine the difference between the data you have to play with and this kind of manually drawn diagram, right? And yet, this is the lingua franca. This is what everybody will draw in papers if they wanna talk and communicate about their networks. So this was our challenge: how do we get as close as we can to something like this automatically?

And so, to show you what we have, this one here, okay. This is something we did: a visualization that has been open sourced together with TensorFlow. TensorFlow is Google's open source machine learning platform. And when it launched, we launched this visualization of the graphs in TensorFlow as well. And what this does is it automatically reads your graph and visualizes it. And so this is a sample network, the CIFAR tutorial network. And it does a couple of things. One of the things it does is that it shows you these big blocks, and these are hierarchical blocks. These are clusters of operations that are happening.
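Those blocks aren't magic, by the way: the grouping comes from name scopes in the code that builds the graph. Here's a rough sketch of the idea in TF 1.x-style calls; this isn't the actual CIFAR tutorial code, and the exact API names shift between TensorFlow versions:

```python
# Sketch: the graph visualizer groups operations by tf.name_scope.
# TF 1.x-style API; illustrative only, not the real CIFAR tutorial.
import tensorflow as tf

images = tf.placeholder(tf.float32, [None, 32, 32, 3], name="images")

with tf.name_scope("conv1"):                     # shows up as one collapsed block
    kernel = tf.Variable(tf.truncated_normal([5, 5, 3, 64], stddev=0.05))
    conv = tf.nn.conv2d(images, kernel, strides=[1, 1, 1, 1], padding="SAME")
    relu1 = tf.nn.relu(conv)

with tf.name_scope("pool1"):                     # another block
    pool1 = tf.nn.max_pool(relu1, ksize=[1, 2, 2, 1],
                           strides=[1, 2, 2, 1], padding="SAME")

# Write the graph out so TensorBoard's graph tab can draw it.
writer = tf.summary.FileWriter("/tmp/cifar_demo", tf.get_default_graph())
writer.close()
```

Everything created inside a name scope collapses into one box in the graph view, and the writer at the end is what the visualizer reads in order to draw it.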
So if I open one of them, you can see they have a lot of stuff inside, a lot of computation that happens. Another thing we're doing here is whenever we find blocks that share the same structure, we highlight that for you by color. So blocks that share the same color are blocks that share the same structure. To give you a peek at that, this is convolutional layer two and convolutional layer one. And you can see that, yeah, they share the same structure, okay?

Oh, there's one thing I should have said from the beginning, so let me go back. The way these things flow, this is like a data flow graph, it flows from bottom to top. So the input to the system is at the bottom. The output is at the top. This is just a convention in neural nets. And the other thing you see here are these lines of varying thickness. Whenever you have an edge, a lot of times these are tensors in these systems. And these are the things that are carrying all the data. Remember that artistic rendering where you saw the beautiful things flowing from one layer to another? These are the tensors. And the tensors have different shapes and sizes as they go through the system. I'm gonna say something as the pedantic mathematician. When I first heard the name tensor, I was like, oh my God, it's like differential geometry. What the heck? Turns out tensor is just used in this sense as a multi-dimensional array, but it has two syllables, so it's a better word than multi-dimensional array. You wouldn't wanna call this thing multi-dimensional array flow. I just wouldn't. So the other thing that people care a lot about when they are looking at their systems is what shape are these tensors, what size? And so we label the tensors themselves and tell you what shape they are.

There are other things we're doing here. We can't show the entire graph at once, because otherwise it would be completely impossible to read. There are always nodes in these systems that connect to everything else, like gradients. And you just can't read a graph like that. So we did a couple of tricks. One of the tricks is that if I go back to the original view, you have the main graph on the left, and this is the core of your system. And then you have auxiliary nodes on the right. And these auxiliary nodes are actually part of the graph. They are all together, all of this exists together, but we had to cut them off and show them separately because otherwise you couldn't read the graph at all. And this was something that was sort of a bold move on our side. We're like, oh, we're just gonna cut off these high-degree nodes. And so we started talking with users and saying, does it make sense at this point? When you look at a graph like this, does it match your mental model of what you're doing, or is it completely bust at this point? And people were like, no, this is great. In fact, can you give us control, so I can ask to trim even more nodes out of my network if I want to? And so we're like, okay, this is a good feature. All right, so now you can click on any node and say, oh, add back to main graph or... You can see why we take them out. Yes, exactly, or take it out. There are other fun things. Because we're trimming things, whenever you see these side nodes, these are things that have been trimmed. If you click on any of them, you're taken there, and you can open them up. And again, you start to see the complexity of what's happening in a somewhat simple network like this one. All right, so let me go back and tell you a couple of things.
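A quick footnote on those shape labels, since a tensor here really is nothing more than a multi-dimensional array and the shapes written on the edges are just its dimensions. A tiny sketch, with illustrative sizes only:

```python
# The shape of the data changes as it moves through the graph.
import numpy as np

batch = np.zeros((128, 32, 32, 3))       # 128 images, 32x32 pixels, 3 color channels
print(batch.shape)                        # (128, 32, 32, 3)

after_conv = np.zeros((128, 32, 32, 64))  # a conv layer with 64 filters widens the last axis
after_pool = after_conv[:, ::2, ::2, :]   # a 2x2 pooling step halves height and width
print(after_pool.shape)                   # (128, 16, 16, 64)
```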
So yeah, so this is all on TensorFlow. It's been open sourced. This is work done with a lot of people, including Ham, who started the project here. Why is this hard? I'll let you take a look at this, but as visualization folks, I'm sure you can relate. This is what one of the networks looked like without any of the trimming that we did. So it's impossible to visualize. And then, again, some of the tricks that we played here: high-degree nodes being trimmed and set to the side, the core structure kept in the main graph.

The reaction has been incredibly positive. One of the things to realize about this community is that even though they're dealing with tons of data and very complex systems, there's a dearth of visualization. There are no good visualization tools that are used across the board. People were always building their own little systems. So it was a real boon when TensorFlow came out and you could just visualize your graph like that. People have been editing their graphs. The other thing is, if your code is a mess and your graph is a mess, the visualization will show that very quickly. And so people have been actively editing their graphs to make the visualization better, so that they can understand the systems that they are building. And to us that validates how useful some of this is. And again, the whole inspiration for doing something like this was communication. When people wanna talk about these systems, they wanna talk at a high level. And we are seeing this sort of communication happening, in screen sharing and so forth.

Okay, so far we've talked about visualizing the networks themselves, the systems. There are two main ingredients in machine learning systems. One is the algorithms and the network that you build. The other is the data that you're feeding it. Because it's learning, right? And it's only gonna learn depending on the data you feed it. And so we are also starting to visualize that input data. So one of the things that we did is look at, again, a simple system, CIFAR-10. And CIFAR-10 is an image classification data set. It's a bunch of 32 by 32 images, and it has 10 classes. Classes being: is it a car? Is it an airplane? Is it a deer? Is it a dog or a cat? Okay, so it has 10 of those. And all the network does is try to classify the pictures. Each picture has been labeled by humans, so we have a ground truth to go back to and say, is this really a dog? Is this really a cat? And so forth.

So now, let me show you this demo that visualizes, oh, God, what am I doing? That was interesting. All right, now we... Hang on, hang on. Let me just go here. And what I wanna do is actually reload this thing. See if it's bigger. Okay. So this is a visualization of a sixth or a fifth of the data set in CIFAR. Each one of those squares is an image. And so being able to see that all at once is already huge for people who are dealing with these systems. You can zoom in and actually look at the pictures that you're going to be working with. We are separating these pictures into the different classes, like I said. So the airplanes, the automobiles, the birds, and so forth. And then we can start playing interesting tricks on it. So I'm gonna look at the same images, but I'm going to organize them by hue now, just because I can. And I can see right away that there are some classes of images that have a bunch of blue, which makes sense, right? Airplanes and ships, I would imagine, have a bunch of blue.
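You can reproduce that kind of slicing on the raw data in a few lines, before any model is involved. Here's a sketch that counts roughly how blue each class is; it assumes the tf.keras CIFAR-10 loader is handy (any other way of getting the images and labels works just as well), and the "blue" hue band is an arbitrary cutoff:

```python
# Slicing the raw CIFAR-10 input data by hue, before any model touches it.
import numpy as np
from matplotlib.colors import rgb_to_hsv
from tensorflow.keras.datasets import cifar10

(x_train, y_train), _ = cifar10.load_data()   # x_train: (50000, 32, 32, 3) uint8
classes = ["airplane", "automobile", "bird", "cat", "deer",
           "dog", "frog", "horse", "ship", "truck"]

hsv = rgb_to_hsv(x_train / 255.0)             # hue is channel 0, in [0, 1]
bluish = ((hsv[..., 0] > 0.5) & (hsv[..., 0] < 0.7)).mean(axis=(1, 2))  # rough "blue" band

for c, name in enumerate(classes):
    mask = (y_train[:, 0] == c)
    print(f"{name:10s}  fraction of bluish pixels: {bluish[mask].mean():.2f}")
```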
Another thing that was interesting to me was frogs: almost no blue frogs at all, which was good to know. And the other thing is that automobile and truck are the only classes where you have a really nice distribution across the whole spectrum. But now I can start to play. So this is only the input data. It hasn't even touched any machine learning system yet. I'm just looking at my data.

Okay, just a small anecdote about why input data is so important. We were talking to a professor at MIT who was doing this sort of thing, trying to understand images. And he was like, we scrape the web, we look at tons of images, but you can't look at millions of images. So one of the problems in their system was that it couldn't recognize mugs very well. They're like, why mugs? What's the problem with mugs? Well, it turns out that the input data for mugs, for some reason on the web, tends to be mugs with the handle to the left, and not to the right. And so when it got images with the handle on the right, it didn't know what to do with them, right? So this is why input data is incredibly important. The bias in your data is incredibly important to understanding what your system is gonna do.

Now I'm going to start touching the machine learning system and try to look at this data through the eyes of the system. So what I'm showing you now is a confusion matrix. This is a comparison between what the system thinks a picture is and what the picture has been labeled as, okay? So it's good news to me that the diagonal here is the one with the biggest counts, because the diagonal is where what I thought were cats really are cats. But now let's do one thing. Let's take away the correct predictions. Let's take away everything that's right, that diagonal. And now I have everything that's wrong, everything that my system got wrong. And I can see that there are two big chunks here, for instance: things that I'm thinking are dogs but are really cats, and vice versa. Or things that are birds, for instance, that the system thought were airplanes, okay? So this is the one here where the system was thinking they were airplanes. And again, this will allow you to start tweaking your system to understand what's happening.

I wanna show you one more thing here. This distribution shows, for each class, how certain the computer is that all of these images, which are of airplanes, are indeed airplanes. So the computer is very, very sure that all of these pictures here on the right are truly airplanes, which is great. But it's also very certain these are not airplanes, and they are. So you can see from the distribution, again, that cats and dogs it does not do very well on. These are cats that it's very certain are not cats. It's very, very certain. It's insulting to the poor cats. Or maybe the other way around. But here's the other thing. This is a system that has been out for a while. It's a proven data set. It's one of those model data sets. And by doing this visualization, we found a mistake in this data set. So again, I'm gonna zoom into cats, cats that it's very certain are not cats. Okay. These have all been labeled by humans as cats. Look at this guy here. Right? The machine was right for once, but it's not good. Maybe it's a cat in costume. Yeah. So this is exactly the kind of exercise you wanna do when you're training these systems. And just being able to look at things and slice and dice is incredibly useful. So, okay.
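The confusion matrix itself is easy to recreate once you have the model's predictions. A sketch, with toy labels standing in for a real run; y_true and y_pred are just integer class ids, however you produced them:

```python
# A confusion matrix plus the "take away the diagonal" trick from the demo.
import numpy as np

def confusion_matrix(y_true, y_pred, num_classes=10):
    cm = np.zeros((num_classes, num_classes), dtype=int)
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1          # rows: what it really is, columns: what the model said
    return cm

# Toy labels using CIFAR-10 class ids: 0=airplane, 2=bird, 3=cat, 5=dog.
y_true = np.array([3, 3, 5, 0, 2, 3, 5, 0])
y_pred = np.array([3, 5, 5, 0, 0, 5, 3, 0])

cm = confusion_matrix(y_true, y_pred)
print(cm)

errors_only = cm.copy()
np.fill_diagonal(errors_only, 0)   # remove the correct predictions, keep the mistakes
print(errors_only)
```

Zeroing the diagonal is the "take away everything that's right" step: what's left in this toy run is exactly the cat/dog confusions and the bird that got called an airplane.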
Let me just get you to the right point. Oh, okay. Yeah, so the CIFAR-10 classifier that was used to create that is actually the open source tutorial inside TensorFlow. So you can go play with this yourself. And I wanna talk quickly about what neural nets are learning, because you saw that input data can be tricky, but let's talk about some interesting phenomena. So one is, what happens if we give it random images? Okay. So it's mildly confident, like we forced it to say something. And it said the left one's a bird, the right one's a cat. You can't fault it too much so far. But you can actually play a little game where you take a random image and you say, how could I tweak that to make it more confident about its classification? And if you keep doing that, you actually can come up with what are called rubbish examples. So here it is 100% confident that the thing on the left is a frog and that the thing on the right is a bird. And actually I've had a couple of people claim they can see a frog in the left one. I personally cannot.

You know, it's funny. I looked at these images when we created them and they reminded me of something. And I realized what it was: it was actually an art piece that I'd seen. There's something about, like, these are not random colors here. It's the blacks and whites with some other carefully chosen colors. And I realized, oh, it's just like that Gerhard Richter painting that sold for 6.9 million. And so if you put them there, you can see these rubbish examples are kind of halfway in between, which I think raises an important question, which I'm sure is in all your minds right now. And just to answer it: that's a frog, and that's 6.9 million. It's a frog. In all seriousness, I actually think this is a really nice way to appreciate why that painting is so good. If you spend a long time looking at these, you realize, wow, that actually is a very subtle painting that is not random at all. It is quite nice.

There's something more strange than rubbish examples. There's something called adversarial examples. And the idea here is that you take an image that it classifies correctly and confidently. So here's a frog, 99% confident it's a frog. It turns out that with very subtle changes, you can create an image, the image on the right, that is different from the one on the left, and that it is 99% confident is a bird. Do you see the difference? Okay, we'll work on the difference. We're gonna put one right after the other. If you look very carefully at the colors, you'll see them shifting. You see it? Yeah, subtle. We can do this another way. You can take the difference in Photoshop and barely see the difference. We say, computer, enhance. And we can prove, okay, that's not actually just a black image there. So making these tiny changes makes a difference. Here's one: airplane on the left; on the right, 99% confident automobile. You can kind of see a difference in the sky. I don't know. Here, horse on the left, 95% confident; on the right, 99.9% confident it's a truck now. So these things are learning something. This is a good classifier, and yet it is doing something very different from what a human is doing. It's hard to understand what could possibly be going on. And in fact, there's a very nice paper I'd refer you to, by Goodfellow et al., which has a really interesting explanation of what's happening. And their explanation, and I think there's probably more to be said about this, is that this is a fact about high dimensional space.
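For the curious, the recipe from that paper, the fast gradient sign method, fits in a few lines. Here's a sketch in TF 2-style code; `model` is assumed to be some trained Keras image classifier that outputs logits, and epsilon is the size of the per-pixel nudge:

```python
# A sketch of the fast gradient sign method from Goodfellow et al.
# `model` is assumed to be a trained Keras classifier with logit outputs,
# and `image` a float image with values in [0, 1].
import tensorflow as tf

def adversarial_example(model, image, true_label, epsilon=0.01):
    image = tf.convert_to_tensor(image[None, ...], dtype=tf.float32)  # add batch dim
    label = tf.convert_to_tensor([true_label])
    with tf.GradientTape() as tape:
        tape.watch(image)
        logits = model(image)
        loss = tf.keras.losses.sparse_categorical_crossentropy(
            label, logits, from_logits=True)
    grad = tape.gradient(loss, image)
    # Nudge every pixel a tiny amount in whichever direction increases the loss.
    perturbed = image + epsilon * tf.sign(grad)
    return tf.clip_by_value(perturbed, 0.0, 1.0)[0]
```

Every pixel moves by only epsilon, but the nudges are coordinated, and that is enough to flip the prediction.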
So if I think about images as points in a space, just as we turned the data about wine into points in a two-dimensional space: these images are specified by about 3,000 numbers, so they're points in 3,000-dimensional space, okay? And tiny pixel changes in each little coordinate can actually make a big difference. You can think about this, just to do a little geometry here: in two dimensions, if you take a tiny step up and to the left, it doesn't make that much of a difference in where you are. There's maybe a factor of the square root of two if you go one unit in each direction. But in n dimensions, it becomes the square root of n. And if n is very high, those little tiny steps can really add up. We can sort of think about this geometrically. Here's an actual 3D cube. And if we had a six-dimensional cube, well, this is not a six-dimensional cube, but its diagonal has been stretched to the same length, relative to the sides, that an actual 6D cube's diagonal would have. And we can really dial this up and see that for a 32-by-32-by-3-dimensional cube, its corners are actually very, very far apart. So there's something happening in high-dimensional space here that's very mysterious.

And I think I wanna end by talking a little bit about that mystery. Okay, we've talked about geometry, we've talked about math. There's one other thing that we could do with these things, which is to say, okay, computers are seeing something here. What are they seeing? Let's give them a Rorschach test. Let's actually treat them like humans. And so I'm just gonna quickly give you the results of a Rorschach test we did. We found four public image recognition APIs, which we'll refer to as robots one, two, three and four here. And we just gave them inkblot images. And let's see if you can spot the personalities. Like robot one here, very literal. Robot two, kind of interesting: a barrette, that could be. Robot three is just like, it's art. Robot four is a little snarky. It's like, that's a Rorschach image. Dude. We gave it another one. Robot one, very literal: it's like a jigsaw puzzle. Robot two is creative: it's a fleur-de-lis. That's good. Robot three, the artsy one, it's like, oh, that's design. Robot four is like, come on, it's just some black ink splotch. I'm just gonna go through these. Robot one, literal: a mask. Robot two, it's a pin. I don't know what it's thinking, but it's creative. Robot three is showing some emotional stress here. It's like, I'm isolated, I'm an isolated artist. And robot four is like, come on, it's a freaking Rorschach image. Another one. Robot one again: it's a hook, a claw. Not very creative, but we'll go with that. Robot two is like, it's a handlebar mustache. I can see that; that feels human to me. Robot three is just like, it's a print. It's very art focused. Robot four says it's face paint, a print. So it's trying to work with us a little bit. Rorschach again: robot one, very literal. You're seeing the personality. Robot two, brass knuckles. Robot three again is like, I'm isolated. Robot four is trying to say something smart, and actually it's coming close. And then this was the one we ended on. Robot one, literal as always. Robot two, kind of clever. Robot three is getting into the project; it's like, I'm isolated, and they're doing nothing but showing me that. And robot four is like, it's a black art splat. You could see this machine roll its eyes, if it had any. So one ending note here is that maybe these Rorschach tests, maybe treating them as humans, is a decent way to go after all.
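One quick numerical footnote on that square-root-of-n point from a moment ago, since it's easy to check; the epsilon here is arbitrary:

```python
# Moving a small epsilon in every one of n coordinates moves you
# epsilon * sqrt(n) overall -- tiny per pixel, large in total.
import numpy as np

epsilon = 0.01
n = 32 * 32 * 3                     # the ~3,000 numbers that specify one image
step = np.full(n, epsilon)
print(np.linalg.norm(step), epsilon * np.sqrt(n))   # both about 0.55
```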
So I think we'll end there and thank you very much.