Artificial intelligence is inevitable. As more and more AI algorithms enter our lives, I want to make sure those algorithms are good. That means they need to be effective and measurable: they need to do what we want them to do, and we need to be able to tell how wrong they are when they fail.

At its core, intelligence is all about identifying similarity. I know that's a chair because it looks like every chair I've ever seen, and I know that these chairs and these stools are different things. When we're talking about artificial intelligence, we need to identify similarity between data points. I can represent each chair as a data point, the green dots here, by measuring its qualities: how wide is it? How many legs does it have? Then every chair corresponds to a point, and points that are close together have similar qualities, so they represent similar things.

We're going to look at two algorithms that identify similarity: Algorithm one and Algorithm two. Algorithm one basically says: I have a bunch of points that are close together, so I'm going to put a circle around them. Everything inside that circle is similar, and things in different circles are dissimilar. You'll notice that all of the chairs end up here and all of the stools end up here, so this circle captures the essence of what it means to be a chair. And that's what I mean by measurable. Now you give me a new data point, say over here, and you tell me, "I think it's a chair." Well, I can tell you exactly how wrong you are: you were that far away from my idea of a chair.

So that's nice. It's measurable, but it's not very effective. To see why, look at this. We know these are all the same chair, but real data is messy; it doesn't come in nice, tight clusters.
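The circle idea can be sketched in a few lines of code. This is only an illustration of the concept, not the talk's actual algorithm: the centroid-plus-radius construction and the names `make_circle` and `how_wrong` are assumptions made for the sketch.

```python
# Illustrative sketch of "Algorithm one": a cluster is a circle drawn
# around nearby points, so a claimed member's error is measurable as
# its distance beyond that circle.
import math

def make_circle(points):
    """Enclose a cluster in a circle: the centroid, with the radius
    reaching the farthest member."""
    cx = sum(x for x, _ in points) / len(points)
    cy = sum(y for _, y in points) / len(points)
    r = max(math.dist((cx, cy), p) for p in points)
    return (cx, cy), r

def how_wrong(point, circle):
    """How far outside the cluster's circle a claimed member lies
    (0.0 means it is inside, i.e. consistent with the cluster)."""
    center, r = circle
    return max(0.0, math.dist(center, point) - r)

chairs = [(0.0, 0.0), (1.0, 0.5), (0.5, 1.0)]
circle = make_circle(chairs)
print(how_wrong((5.0, 0.0), circle))  # a point far from the chair cluster
```

The key property the talk calls "measurable" shows up in `how_wrong`: the circle gives the cluster an explicit boundary, so any new point's disagreement with the cluster is a single number.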
So if I asked Algorithm one to identify all of these as the same thing, I would have to draw one big circle across them, and that circle would also capture these points in the middle, which might be a totally different thing, tables for instance. Algorithm one is measurable, but it's not very effective.

Algorithm two, on the other hand, is effective. It says: I have these points that are close, and if they're close, they link together. Links form chains, and everything on a chain is similar. Now I'm able to separate this string of points from the cluster in the middle. But in doing so, I've lost the ability to measure when I'm wrong. If you give me a point here and say, "I think it's that chair," I can't tell you how wrong you are, because we've lost the essence of what it means to be that chair.

That's where our research comes in. What we show is that if, instead of measuring distance as the straight line between two points, like this, we measure it along the string, in just the right mathematical way, then Algorithm one and Algorithm two are identical. That means Algorithm two is measurable too. So yes, AI may be inevitable, but bad AI certainly isn't. Thank you.
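Both halves of that argument can also be sketched in code. Linking every pair of points closer than some threshold forms chains (connected components), which is Algorithm two; and one standard way to measure distance "along the string" is the minimax path distance, where a path's length is its single longest hop. The talk's actual mathematical construction is in the underlying paper; the threshold `eps`, the function names, and the choice of minimax distance here are assumptions made for illustration.

```python
# Illustrative sketch of "Algorithm two" (chains of linked points) and
# a path-based distance measured along those chains.
import math
from itertools import combinations

def chains(points, eps):
    """Group points into chains with union-find: points closer than
    eps link together, and links form chains (connected components)."""
    parent = list(range(len(points)))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path compression
            i = parent[i]
        return i
    for i, j in combinations(range(len(points)), 2):
        if math.dist(points[i], points[j]) <= eps:
            parent[find(i)] = find(j)
    groups = {}
    for i in range(len(points)):
        groups.setdefault(find(i), []).append(i)
    return list(groups.values())

def minimax_distance(points):
    """All-pairs distance where a path's cost is its longest single
    hop, computed with a Floyd-Warshall-style update. Two points on
    the same tight chain end up close, even if the straight line
    between them is long."""
    n = len(points)
    d = [[math.dist(points[i], points[j]) for j in range(n)] for i in range(n)]
    for k in range(n):
        for i in range(n):
            for j in range(n):
                d[i][j] = min(d[i][j], max(d[i][k], d[k][j]))
    return d

# A string of points: the endpoints are far apart in a straight line,
# but every hop along the string is short.
string = [(float(x), 0.0) for x in range(6)]
print(len(chains(string, eps=1.5)))      # one chain
print(minimax_distance(string)[0][5])    # 1.0 along the string, not 5.0
```

The point of the sketch is the last line: measured along the chain, the two ends of the string are close, so chain-based clustering gets back a notion of distance it can be wrong by.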