Good morning, folks. For me, it's a tremendous pleasure to be here talking to such an interesting audience. I loved the presentations I've seen so far. The DNA archive idea is brilliant, it's awesome, so I loved seeing that. And I also loved the presentation from the folks from Amazon. So, very happy to be here. I only have 10 minutes, so I'm going to focus my presentation on how we can combine machine learning and user experience. We've been talking a lot about machine learning and how we can do incredible stuff with it. But today I'm going to focus more on how we can use this incredible technology to improve your life, to improve the life of the user who experiences this technology. That will be my approach today.

So I'm going to start with this guy. This is Stephen Malinowski. He's a pianist and an engineer, and he had an idea in the 70s. Actually, he had a hallucination in the 70s, probably with the help of something artificial, not artificial intelligence, but some artificial help. He had a hallucination while he was playing the piano at his house. He was playing Bach, go figure, and he saw the notes come off the paper and come to life. He saw the notes dancing on the page. He was so delighted with this experience that he decided to develop something to replicate this hallucination, a user interface where a user could see the music happening. He bought a computer, he learned how to code, and he developed the user interface that I'm going to show in the next video. He changed the experience: not just listening to the music, but also seeing the music, seeing the flow of the music. For example, from my own home: this is my son, Miguel, he's six years old, and he now loves to listen to classical music because he can see the flow of the music. And most of all, he can foresee what will happen in the music. In the last clip, for example, there were some ups and downs in the music, and my son gets really excited when he sees that something is about to happen. So through a user interface, this guy, Stephen Malinowski, changed the way we can consume music.

Another example, and I know we probably have folks here who like basketball. I know that Greg, I don't know if Greg is here, is a big fan of basketball. This guy, Kirk Goldsberry, used to be a cartographer doing research at Harvard, and then he decided to develop a map of how basketball players were shooting from the court. He had the idea to figure out who was the best shooter in the NBA using maps, using where they shoot from on the court. And by the way, Steve Nash was the best shooter at the time, in 2011. When he combined all this data, he realized that in terms of points per attempt, shooting from three or doing a layup was the best shot. If I analyze points per attempt, the probability of converting the point from a three-pointer or from a layup in the paint is greater. With that, in 2012, he presented this at the MIT Sloan Sports Analytics Conference, and he left the conference as a San Antonio Spurs employee. Because of this paper, and of course because of the studies that came after it, the NBA changed completely.
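To make the points-per-attempt idea concrete, here is a minimal sketch in Python with made-up shooting percentages; the numbers are purely illustrative assumptions, not Goldsberry's actual data.

```python
# Points per attempt = shot value x field-goal percentage.
# The percentages below are hypothetical, purely to illustrate the arithmetic
# behind the shot charts; they are not Goldsberry's numbers.
shots = {
    "layup (paint)":    {"value": 2, "fg_pct": 0.60},
    "mid-range jumper": {"value": 2, "fg_pct": 0.40},
    "three-pointer":    {"value": 3, "fg_pct": 0.36},
}

for name, shot in shots.items():
    ppa = shot["value"] * shot["fg_pct"]
    print(f"{name}: {ppa:.2f} points per attempt")

# With these hypothetical numbers, the layup (1.20) and the three (1.08)
# beat the mid-range jumper (0.80), which is the economics the talk refers to.
```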
If you've been watching the NBA in the last few years, you've noticed that people either shoot from three or try to do a layup or a dunk. It's all about that because, in terms of economics, that's the best shot you can take in the NBA. And it's funny, because the blue dots here on our left are the poorest shots in the NBA right now. That's exactly what made Michael Jordan, for example, immortal: Michael Jordan completely challenged the economics of the game. And this, for example, is Brook Lopez. He is a center, but basically he shoots a lot from three, because he changed the way he plays because of this graph.

The reason I'm talking about this is to show you how, through a user interface, we can change the way people interact with data, the way people interact with their assets. And that's exactly what we're doing with our platform. We developed a platform that is 100% visual, because we understand that people deal better with their assets, deal better with the data, when they can see the data, when users can navigate the data, can see it, can manipulate it, and, if you're using the right device, can touch the data. On your left here, for example, you can see all the metadata, all the metadata tagging, all the good stuff from machine learning. But beyond just using machine learning, you need to see the data, you need to touch the data, you need to be able to navigate it.

And of course, I understand you're probably thinking: yeah, that's awesome, but I have so many assets. How am I going to browse 25 million assets, as the folks from Amazon mentioned earlier? And of course, here we need machine learning. We know that we need machine learning to help us filter the noise, to help us bring structure to this huge pile of unstructured data. But at the same time, and I'm rushing here, guys, because I think I'm running out of time, we know that machine learning is awesome, but we're still evolving with the technology. This is one of the famous paradoxes in artificial intelligence, Moravec's paradox. We know that machine learning can solve very complex problems: it can beat the best Go players, it can beat the best chess players. But, for example, in this video here, the caption I'm getting is a train coming out of the wall. What machine learning is lacking in the picture on the right is context, the same context that my four-year-old daughter has. She understands that a train does not come out of a wall; machine learning doesn't have that context yet. And as obvious as I may sound here, what drives us, what drives our product, is how we can combine machine learning and user experience, so that you can combine all the machine learning intelligence with the context that you bring to the equation, and how we can design products that solve problems with this combination of machine learning and user experience. I thought this slide could sound so obvious that I tried to make it a little cooler with this headset and this chamber. Anyway.

And here, this is probably going to be my last slide. I love what the Amazon folks said about working backwards, because that's exactly the way we think we should solve these problems. Here I have the gap between you, a user who wants to find some content, and here I have some content begging to be found.
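As an aside on that machine learning step, here is a minimal sketch of what "filtering the noise and bringing structure to unstructured data" can look like in practice, assuming a generic pretrained image classifier; the model choice and file name are illustrative assumptions, not the platform's actual tagging pipeline.

```python
# A minimal sketch of the "bring structure to unstructured data" step:
# run a pretrained image classifier over a frame and keep the top labels
# as searchable metadata. Model choice and frame file name are assumptions
# for illustration only.
import torch
from PIL import Image
from torchvision.models import resnet50, ResNet50_Weights

weights = ResNet50_Weights.DEFAULT
model = resnet50(weights=weights).eval()
preprocess = weights.transforms()
categories = weights.meta["categories"]

def tag_frame(path, top_k=5):
    """Return the top-k (label, confidence) pairs for one video frame."""
    img = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        probs = model(img).softmax(dim=1)[0]
    scores, idxs = probs.topk(top_k)
    return [(categories[int(i)], float(s)) for i, s in zip(idxs, scores)]

# print(tag_frame("frame_0001.jpg"))  # hypothetical frame pulled from an archive clip
```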
The way we go here is literally backwards. I need to understand, from the user's perspective, how the user wants to navigate the data, which kinds of searches the user wants to apply: whether the user wants to search by asking questions, by using natural language, whether speech is more important than the visual. I want to understand from the user how I can make the data available to them. Then, of course, I need to use machine learning; I need some power to filter the noise here. And, by the way, in some cases I have physical data that I need to digitize, that I need to preserve, and we can do this end to end.

For example, in this case, the user experience is you wanting to see all the clips from a particular camera cut. Here I'm using machine learning to do image similarity, to find all the clips with this image from this camera. Here, for example, I have the act of pitching: the user experience is you trying to find players pitching the ball, and I probably want to combine this with logos from companies. You're giving me the feedback of what you expect from the data, and I'm using machine learning in the backend to provide the data.

And finally, this one, I think, is awesome. This one is search: some archivist, for example, can have the idea of searching for Willie Mays talking about Sandy Koufax, the biggest rivalry of the sixties. And by doing facial similarity, for example, and by searching by keywords, I can find this clip of Willie Mays, and in this clip he's talking about what he thinks about Sandy Koufax, and his answer was "strikeout". Which is pretty awesome. And that's all, guys. That's all that I have for you.
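For the curious, here is a rough sketch of the image-similarity step behind the camera-cut example: embed each frame with a pretrained CNN backbone and rank the archive's frames by cosine similarity to a query frame. The model and file names are assumptions for illustration only, not the platform's actual implementation.

```python
# A rough sketch of image-similarity search: embed frames with a pretrained
# CNN and rank them by cosine similarity to a query frame (e.g. one shot
# from the camera angle you care about). Model and file names are assumed.
import torch
from PIL import Image
from torchvision.models import resnet50, ResNet50_Weights

weights = ResNet50_Weights.DEFAULT
backbone = resnet50(weights=weights)
backbone.fc = torch.nn.Identity()   # drop the classifier head, keep the 2048-d feature vector
backbone.eval()
preprocess = weights.transforms()

def embed(path):
    """Embed one frame as a unit-length vector."""
    img = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        vec = backbone(img)[0]
    return vec / vec.norm()

def most_similar(query_path, candidate_paths, top_k=10):
    """Rank candidate frames by cosine similarity to the query frame."""
    query = embed(query_path)
    scored = [(path, float(query @ embed(path))) for path in candidate_paths]
    return sorted(scored, key=lambda item: item[1], reverse=True)[:top_k]

# most_similar("camera_cut_query.jpg", ["clip_001.jpg", "clip_002.jpg"])  # hypothetical frames
```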