Okay, so tonight I'm going to talk about conversational agents — what I often call chatbots. I'm going to talk about the technology behind them and the different types of them, because I think there's a lot of confusion around conversational agents: what they can do, how you make them, and the various kinds there are. About me: I spend most of my time working on deep learning that supports algorithmic trading. I also have a startup, a children's app company, that I don't really have much to do with nowadays — it has about four million users, and the team runs it itself. For me, deep learning is by far the most interesting thing to be working on right now. So, I'm going to propose that there are four key types of conversational agents, and you may recognize some of these. The first type is the real old-style chatbot. I don't know if anyone's ever played with these — they're generally made with a language called AIML. Has anyone played with AIML? Yeah, generally in an audience some people have. IBM Watson's public interface also uses a similar system. Then we've got what I call the simple modern chatbot, which is generally what you see on Facebook nowadays for any sort of bot interaction. Then we have generative chatbots, which are things like Twitter bots. And the last one is the intelligent agent — things like Google Assistant. Now, the big things to think about when you're building conversational agents. The first is open domain versus closed domain, and this is one of the main reasons people get so frustrated with these things.
By open domain, I mean you can talk to the agent about anything, whereas a closed-domain agent is very specific about what topics you can speak to it about. People get very frustrated when they want to talk about a wider domain than the chatbot supports, and it's probably one of the main reasons why, when you talk to people out on the street, they kind of hate these things. Another thing to think about is pre-made answers versus dynamic responses. A lot of chatbots, especially the old ones, just use fixed answers: the person says A, it responds with B, with no dynamic element going on. The next thing to think about is: is this just for chat, or is it triggering some sort of action as well? One of the big advances with the Facebook bots is that there's an API that can call out to different things — if you want to order something, it can interact with other services. And the last thing to think about is access to information: does it have access to the web or not? The more advanced ones, as I'll talk about later, have a lot of access to be able to do things. So let's look at the old ones. AIML was basically an XML language. The most famous chatbot in this style, the first one, was called ELIZA, from the mid-1960s — I've forgotten the exact date, but it's a long time ago. It basically tried to hold a psychotherapy-style conversation with people. I don't think it was that great, because it didn't take off that much. Nowadays, about the only place you see these things is a site called Pandorabots, which is worth looking at to see how things were done in the past — and Watson, which is maybe also worth looking at for the same reason. I won't say much more about my experiences with Watson.
These things often use regex to filter strings and look for patterns in them. In the sense of AI and deep learning, they're very dumb bots — they really don't know anything; you have to script out every single little thing. Now, that said, when someone does script out every little thing, they're really impressive, and you still see these around a lot, because there are people who will spend six months scripting out one conversation in all the different directions it could go. When you talk to those bots, they actually do sound kind of interesting. There's a really good mobile one called Angry Bot, whose whole aim is to get you into an argument with it — and it's very good at that. But they're outdated, and they're very fixed and set in the way they work. Here's an example of AIML. You can see that this whole part is just to say hello — and trust me, it was probably two or three times longer; I just cut it down to what would fit on the screen. Still, it's good to know the past. Okay, the modern ones: simple modern chatbots. These started taking off probably a year ago, with Facebook opening up its platform for bots. One of the smart things Facebook did was understand that, in this day and age, people would rather press a button than type everything, so they optimized the UI around that. So how do these things work? Generally — and I'm going to stay quite general — the simplest modern chatbots work on what I call an intent-and-slot system, or an intent-and-variable system. Everything you say to it, it's looking for the intent of what you said, and then it's looking for the variables that fit into that intent. I've put some examples here: "Can I order a pizza?"
"Can I order a hamburger?" Both those statements have the exact same intent; it's just the slot — the variable in the slot — that's changed. The slot there would be something like what the product being ordered is. You'll find that the way a lot of these systems work nowadays is that you define intents, and then you define the variables that are allowed to appear in those intents. Of course, if you've built a pizza-ordering system and you suddenly say, "I want to order a BMW 5 Series," it won't have a clue what you're talking about. The company that I think is best at this sort of thing happens to have been bought by Google — I did think they were the best before the acquisition, too. It's a company called api.ai, and they really mastered this kind of system. Their interface is probably the best out there by far; it makes it very simple. If anyone ever asks you, "hey, can you make me a chatbot to do something quite simple?" — go there. But you can also do it yourself, and this is where the fun starts. You'll find that actually building this is not that difficult — people think it is, but it's not. Everything I'm going to talk about tonight builds on the previous three talks. Kathleen made a good point about building a classification system, and this is exactly the same: you're building a classifier to predict the class of the intent. Most chatbots will only have a very limited number of intents — usually they max out at fifteen or twenty, and often there are only four or five. So from a deep learning perspective, it's not that difficult to build a classifier that only has to distinguish those.
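To make the intent-and-slot idea concrete, here's a minimal sketch in plain Python. The intent names, patterns, and slot names are all invented for illustration — real systems like api.ai learn these from examples rather than hand-written regexes:

```python
import re

# Hypothetical intent-and-slot table: each intent has a pattern, and
# named groups in the pattern capture the slot values.
INTENT_PATTERNS = {
    "order_item": re.compile(r"can i order an? (?P<product>[\w ]+)\??", re.I),
    "check_status": re.compile(r"where is my order\??", re.I),
}

def parse(utterance):
    """Return (intent, slots) for an utterance, or (None, {}) if nothing matches."""
    for intent, pattern in INTENT_PATTERNS.items():
        match = pattern.search(utterance)
        if match:
            return intent, match.groupdict()
    return None, {}

print(parse("Can I order a pizza?"))       # same intent as the hamburger one
print(parse("I want to order a BMW 5 Series"))  # out of domain: no match
```

Both the pizza and hamburger utterances resolve to the same `order_item` intent with a different value in the `product` slot, while the BMW request falls straight through — which is exactly the closed-domain failure mode described above.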
And so obviously, the thing we need to do is define what the classes are going to be, and then pick some algorithms to use. Now, here's the thing: you can just use logistic regression. Everyone knows scikit-learn, right? Just go into scikit-learn, open up logistic regression, and build something like this with a handful of intents. You'll find very quickly that you can get to 85–90% accuracy at predicting the intent. You then need to play around with how you're going to handle the variables. If you wanted to, you could even combine this with the old style: use this for your intents, and regex for your variables. But we can use a whole bunch of different techniques here. Next, we need a pipeline. Again, building on what the other speakers said: if we're going to build something good, we want a pipeline, and you'll find that when we're dealing with text, all these things fit together in the same way — every single step here is something that was mentioned in the first three talks. You generally tokenize your text to get it into separate words, or separate sub-word elements. You vectorize your words by turning them into embeddings — the old-fashioned way is one-hot encodings, but you'll find embeddings do much better; that's word2vec and GloVe. So here's a diagram of a very, very simple pipeline. You could go home, do this in TensorFlow, and very quickly build something that could handle your own chatbot. Again, these are simple chatbots — very much closed domain.
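As a sketch of that tokenize → vectorize → classify pipeline, here's what it might look like in scikit-learn. The toy utterances and intent labels are invented; a real system would train on many more examples per intent (this uses TF-IDF rather than the embeddings mentioned above, for brevity):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy training data (invented): utterance -> intent class.
utterances = [
    "can i order a pizza", "i want to order a hamburger",
    "where is my order", "has my order shipped",
    "what are your opening hours", "when do you open",
]
intents = ["order", "order", "status", "status", "hours", "hours"]

# Tokenize + vectorize + classify in one scikit-learn pipeline.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(utterances, intents)

print(model.predict(["could i order a pizza please"])[0])
```

Swap the `TfidfVectorizer` for pre-trained embeddings and the logistic regression for a small network, and you have the closed-domain intent classifier described in the talk.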
api.ai and Google make these things pretty simple, and I think they're probably the best at it. They only really work on closed domains. They're good for things like ordering a pizza or booking a flight — though even for booking a flight, they're often not that great. Really simple, very transactional tasks, where you know the person is going to say this, then this, then this. These kinds of chatbots are very good when you can totally predict what the conversation will be. Okay, the next type of chatbot are the generative chatbots. You've probably seen these around the net, and a lot of them are very fun to play with. This one was quite well known during last year's election: it tweets as if it were Donald Trump. But it doesn't seem to say the word "loser" enough — that's my complaint. There are a few ways you can build these. Generally, you need a reasonably large corpus. For someone like Trump, you'd build it on all his past speeches, and perhaps his past tweets as well. Then there are a number of approaches. You can do it directly with an RNN/LSTM; there's a quite well-known code example called char-rnn — anyone heard of that? I'll describe it. char-rnn was made by Andrej Karpathy, and what he did was work at the character level: it learns a corpus and then predicts the next character. You feed it, say, ten characters; it looks at those ten characters and works out what it thinks the next character should be, and from that it starts to generate text. Generally, you want to train it on something reasonably large — I think he uses the complete works of Shakespeare. I did it one time with the whole of Alice in Wonderland.
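char-rnn itself is an LSTM, which takes real training time, so here's a toy stand-in for the same next-character idea: a character-level n-gram table that maps the last few characters to the possible next ones. This is a Markov sketch, not the neural version — the corpus line is just a stand-in for something like Alice in Wonderland:

```python
import random
from collections import defaultdict

# Map every `order`-character window in the corpus to the characters
# that followed it; generation then walks this table.
def build_table(corpus, order=4):
    table = defaultdict(list)
    for i in range(len(corpus) - order):
        table[corpus[i:i + order]].append(corpus[i + order])
    return table

def generate(table, seed, length=80, order=4):
    out = seed
    for _ in range(length):
        choices = table.get(out[-order:])
        if not choices:  # window never seen in the corpus: stop
            break
        out += random.choice(choices)
    return out

corpus = "alice was beginning to get very tired of sitting by her sister"
table = build_table(corpus)
print(generate(table, "alic", length=30))
```

The neural version replaces the lookup table with an LSTM that generalizes to windows it has never seen, which is why char-rnn can produce readable text that is not in the original corpus.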
And you'll find that very quickly it can generate text that is very readable, but when you search the initial corpus, that text is not in there — it creates the illusion of writing in that style. That's how they do it for things like the Trump bot. Like I said before, you need a large amount of training data to do this — ideally, the more the better. If you've only got a small amount of data, you'll be very limited in what it can generate. One thing you can add to this is called beam search. It's a bit more advanced; maybe next time we can look at it properly. What beam search basically does is, out of a whole bunch of possible continuations, keep track of which ones are more likely overall. If you've seen Westworld, there's a great scene — do people watch Westworld here? — where a character is holding a tablet showing five or six words the girl could say next, and it picks out what she's about to say. That's actually a perfect illustration of beam search. The other thing you can do with these generative chatbots concerns how you train them. I built one that was trained on a corpus of conversations on Reddit — very graphic language. The funny thing is, you'd ask it something like, "What's your name?" Think about what "what's your name" looks like in that corpus — I think the whole thing was five gigabytes of text files, so it's pretty big. There are a lot of names in there, and probably a lot of times someone on Reddit has asked "what's your name" and someone else has replied. So the classic thing that happens with these chatbots is: you type "What's your name?" It says, "My name is Fred." You ask again: "My name is Sam." Again: "My name is Joanna."
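A minimal sketch of the beam search just described, over a toy next-token model. The model and its probabilities are invented for illustration; in practice `next_probs` would be your trained network's output distribution:

```python
import math

# Beam search: instead of greedily taking the single best next token,
# keep the `beam_width` highest-scoring partial sequences at each step.
def beam_search(next_probs, start, steps, beam_width=3):
    """next_probs(seq) -> {token: probability} for the next token."""
    beams = [(0.0, [start])]  # (log-probability, token sequence)
    for _ in range(steps):
        candidates = []
        for logp, seq in beams:
            for token, p in next_probs(seq).items():
                candidates.append((logp + math.log(p), seq + [token]))
        # Keep only the best `beam_width` expansions.
        beams = sorted(candidates, reverse=True)[:beam_width]
    return beams[0][1]

# Toy language model (invented): same next-token distribution every step.
def toy_model(seq):
    return {"the": 0.5, "cat": 0.3, "sat": 0.2}

print(beam_search(toy_model, "<s>", steps=3))
```

With a real model the distributions change as the sequence grows, which is where beam search beats greedy decoding: a token that looks second-best now can lead to a much more likely sentence overall.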
That's because it's answering purely out of randomness. A way to get around that, if you do want to play around with something like this, is to take your initial corpus, filter out all the names with NER — some sort of named entity recognition search — and replace them with a token. Then you can teach your bot that every time it sees that token, it should substitute a particular name, and every time it sees some other kind of token, it means something else. That tends to give it a lot more coherence. Now, Anusha talked earlier about sequence-to-sequence models. Sequence-to-sequence models are extremely interesting. They're also very complicated, especially when you factor in a mechanism called attention, which makes them a lot better. I heard someone asking before whether these are like autoencoders — they're not. The blue part here is the encoder and the orange part is the decoder. Think of it as two separate networks — you can almost think of it like the GANs session, for those of you who were there when I taught GANs, where we had a generative network and a discriminating network. It's not quite like that, but you have one network that takes an input and produces a matrix, which it passes to the decoder, and the decoder takes that matrix and generates an output from it. The key bit here is the tags: you'll often see a start tag, telling the system this is the start of a sentence; your question or sentence; an end-of-sentence tag, the EOS tag; and a GO tag, which basically signals that the decoder should kick in.
I don't have any padding tags in the diagram, but I should mention those as well. These models are also very good for training generative chatbots, and they're a preliminary example of how you train a neural translation system. I'll mention them a bit more when I talk about the advanced stuff. Okay — so generative chatbots are great for fun, for messing around. They're not really ideal for business cases, unless you have some amazing dataset and customers with a very good sense of humor, because you're always running the risk that the thing will say something really weird — especially if it was trained on Reddit, where out of the blue it might suddenly start swearing at you. They've still got a long way to go, but I do think it's very interesting to know these and play around with them, because in the future we're going to get a lot better at them as we incorporate things like masking and other techniques that allow networks to understand more about concepts and the context they're in. I think you'll start to see these take off a bit more. Actually, last night I programmed one of these up. I'm not going to go through the code because we don't have a lot of time, but I'll put it up on GitHub, send out the link, and maybe even make a video walkthrough of the code. It's a very simple generative chatbot done in Keras using LSTMs; I'll send something out about it over the next few days. Okay — hybrid agents. For me, this is where all the interesting stuff is. I'll give you a couple of examples of hybrid agents. This is one that I've made, called Kate — some of you who have been here recently may have seen her before.
"Kate, can you tell me how many new articles were published on Tech in Asia in the last three hours? Are there any articles in the past two days on AI?" "There are two articles that were published in the last day. Would you like to know their titles?" "Yes, please." "First article: nine Southeast Asian startups made it into Google's new Launchpad class. Second article: Google Launchpad picks six startups from India, shows love for AI." So that's — I'll cut this really short. This is another one, and I'll explain what's going on in it in a bit. "Kate." "What is your name?" "I am Sam." "I remember you, Sam." "Kate, what's on my schedule today?" "You have nothing scheduled today. Your next event is a phone call meeting with the team tomorrow at 8 a.m." "Okay, Kate, go into learning mode." "Okay, Sam, I'm going into learning mode." "Kate, create a category called sports teams with the subcategory basketball teams." "Okay, I have created a subcategory basketball teams under the category sports teams." "Kate, please add the entity the LA Lakers to basketball teams and to my likes." "Is that the Los Angeles Lakers team, sometimes known as the Lakers or the LA Lakers?" "Yes, the Los Angeles Lakers." "Is their website nba.com/lakers?" "Yes, that's the correct site." "Okay, Sam, I have added the basketball team Lakers to your likes." "When are they playing next?" "The LA Lakers' next game is later today." "Kate, what are we currently talking about?" "We are currently talking about the Los Angeles Lakers' next game." Okay, I'll just stop there. So let's talk about how these work — because they don't always work, all right? For a start, hybrid agents have speech-to-text. And I would say straight out: for speech-to-text, just use an API. There's no point in building your own model for that. These agents have the ability to access wider information, and they use custom NER and POS — part-of-speech — tagging for extracting variables.
Now, remember what Martin showed you earlier tonight about building your own NER system? Martin showed a very simple example of building a NER for recognizing names. Both Martin and Karthik pointed out that often the challenge with this stuff is that it will work fantastically on, say, an American corpus but not on an Asian corpus, or it'll work fantastically on a corpus about one topic but won't carry over to another. So the big thing you always want to do is train your own systems for unique stuff. Anything you're doing that's slightly unusual, build your own model — which can, in turn, build on someone else's model, like Martin did building on NLTK. If you want to get really tricky, you could even ping Google's model and use its output to train your own model. I probably shouldn't say too much about that. Anyway, that sets things up so you can start to recognize more of the things you care about. Okay, the next thing is the whole concept of a graph system. Computers suck at learning concepts — one of the biggest stones on the road to AGI is getting algorithms that can really understand concepts. The way I've done it in that video is, I've kind of tricked you: it's a graph system, so the agent "understands" certain things already, and then I can add on to it — add another node to that graph. If it understands sports, I can add a node for sports teams, and underneath that there can be different types of sports and different sports teams. It has no clue what those things actually are, but it's very good at creating the illusion that it does.
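A rough sketch of that graph trick: categories and entities are just nodes with parent links, which is already enough to answer "what is X?" style questions. The structure mirrors the Kate demo, but the implementation is my own guess at how such a thing might be wired up:

```python
# Toy concept graph: the agent "understands" only parent/child links,
# but walking them is enough to fake category knowledge.
class ConceptGraph:
    def __init__(self):
        self.parent = {}    # category node -> parent category (or None)
        self.entities = {}  # entity name -> category node

    def add_category(self, name, parent=None):
        self.parent[name] = parent

    def add_entity(self, name, category):
        self.entities[name] = category

    def categories_of(self, entity):
        """Walk up the graph from an entity to the root category."""
        chain = []
        node = self.entities.get(entity)
        while node is not None:
            chain.append(node)
            node = self.parent.get(node)
        return chain

g = ConceptGraph()
g.add_category("sports teams")
g.add_category("basketball teams", parent="sports teams")
g.add_entity("LA Lakers", "basketball teams")
print(g.categories_of("LA Lakers"))
```

Asked what the LA Lakers are, the agent can now answer "a basketball team, which is a kind of sports team" — without having any idea what basketball is.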
The other thing you want with these things is multiple ways to check the intent. I think a lot of you understand what an ensemble model is — I suspect api.ai uses ensemble models a lot for the way it does things. You want a number of ways to do this check. There's only one technique I'm mentioning here that we haven't covered tonight, and that's character-level CNN classification. Karthik mentioned a model from OpenAI where they trained at the character level, and for some seemingly bizarre reason it was then able to learn all this extra information. We can do useful things at the character level. Think about it: when you train something word-based, any time someone misspells a word, it messes up your entire system. At the character level, even if the person spells a word, say, 10% wrong, your system has a decent chance of working out what it's about. So character-level CNNs are a fairly new technique — and using CNNs for text at all is maybe something we can talk about in the advanced session. Here's an example of the architecture of an intelligent agent. It's a little bit simplified, but it's mostly there. You've basically got the speech — STT is speech-to-text. I've put a first layer of intent detection and a check layer of intent detection; really, you could just use an ensemble. You've got entity extraction, for knowing at any point what the entities are, and some sort of verb extraction for knowing what the actions are — even though those are partly part of the intent as well, I think it's good to have them separate. Then you've got your logic, with a whole bunch of things going on in it. Then you can also do a thing called translate.
You can start to translate things — I'll come back to that in a minute. You then access information, and the way you generally want to do that is through some sort of knowledge graph, which might be a graph database; I'll show you a few of those in a moment. Actions — just think of those as API calls, when you want to call out to something to find information. And it's not that difficult to build APIs that do things nowadays. It's not that difficult to scrape Tech in Asia and then build an API that, whenever I ping it, tells me what the unread articles are and what their text is — then I can just pull that in and process it. That's an example of an action. Then you've also got unstructured retrieval. The ultimate API, if you really think about it, is HTML. Most people don't think of it like that, but at the end of the day it is: you ping a website and it sends you stuff back. If you have a decent understanding of how the HTML works, you throw out all the CSS junk, throw out the images if you're not using them, and then you can start to look for things in that data. One trick here is to look at two or three different sources and look for commonality — if there's a commonality there, there's a decent chance it's what you're after. Then you basically generate a response — you can beam-search the response — and finally convert text back to speech. Knowledge bases: Google has a fantastic one, the Google Knowledge Graph. Most people don't know about it, but when you run a search and type in someone's name, the information that comes up in the panel on the right side generally comes from the Google Knowledge Graph. And you can actually access it through an API.
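Querying the Knowledge Graph Search API might look something like this. The endpoint is Google's public kgsearch endpoint; `API_KEY` is a placeholder you'd replace with your own key from the Google developer console:

```python
import urllib.parse

# Placeholder -- obtain a real key from the Google API console.
API_KEY = "YOUR_API_KEY"
ENDPOINT = "https://kgsearch.googleapis.com/v1/entities:search"

def kg_search_url(query, limit=1):
    """Build the request URL for a Knowledge Graph entity search."""
    params = {"query": query, "limit": limit, "key": API_KEY}
    return ENDPOINT + "?" + urllib.parse.urlencode(params)

# With a real key you would fetch it with, e.g.:
#   urllib.request.urlopen(kg_search_url("Taylor Swift")).read()
print(kg_search_url("Taylor Swift"))
```

The JSON that comes back includes a name, a description, and URLs — exactly the fields the Taylor Swift example on the next slide shows.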
You can search that API, and once you've got the results back, you can lift information off them. Some other ones you can use are Freebase, which I think has been discontinued now, and DBpedia, which is basically a database built from Wikipedia. A Google Knowledge Graph result looks something like this — this is an example for Taylor Swift. You can see very quickly that it gives us a few URLs where we can find out more about this person, and a sort of article body. It wouldn't be that hard to pick up from here that Taylor Swift is a singer, or a singer-songwriter — in fact, it's right there in the description. And you'd be amazed: if you query things like buildings around the world, you get all this really useful information back. For DBpedia, down at the bottom, you'll see it's got her birthplace and her birth date, plus a description of her, which is less useful. But if we wanted to build a bot that could tell you anyone's birthday or anyone's age, it's not that difficult to do once you're using a system like that. Does this make sense to people? Now, the translation bit — this is still cutting edge, and one of the directions I'm very interested in, and I think Martin is certainly interested in too. Think about translation not just as translating from English to French or English to Chinese, but as translating from English to SQL. If you think about it, mentally you can do that, right? Everyone understands what SQL is, or some sort of database language. If you start thinking about that as a translation problem, you'll get a lot further along. Okay, the really, really hard stuff — and to be honest, this is the stuff I spend hours thinking about, asking Martin questions, proposing things, testing things out. There are at least three hard things. The first is: how do you have stateful conversations?
You noticed that at the end I asked her, "What have we been talking about?" — and she can answer that. At the moment, all she's doing is storing a rough variable; there's nothing very impressive there. Google, though, has a very impressive way of persisting state in Google Assistant: you can be talking about Barack Obama and then, without mentioning his name, just ask, "Who's his wife?" — and it will know you're still talking about Barack Obama. I haven't quite worked out exactly how they're doing that, and they don't seem to want to tell anyone in the world how. If anyone at Google knows, please come and tell me. The second big thing is context. Knowing the context is really important — if we know the context of certain things, we can do a lot. With mobile devices, we can know things like your location, so if you're building some sort of chatbot app where you ask, "What are the top ten restaurants near me?", then with your location I can probably work that out in a number of different ways. But Karthik gave a bunch of really good examples of how language itself has context: if you're watching something quite funny and someone says something, you're more likely to take that next statement as a joke than as something really serious — because of the context of the conversation. That's something we've still got a long way to go on. The third thing is telling different users apart. Kate can't really do it — I basically have to ask, "Is this Sam?", that kind of thing — but Google Home can. That's something that's really hard to do unless you're a multi-billion-dollar company.
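As a very rough sketch of the kind of state-keeping involved — nothing like whatever Google actually does, this just remembers the last entity mentioned and substitutes it for pronouns in follow-up questions:

```python
# Toy conversational state: track the last entity mentioned and rewrite
# pronouns in later utterances to refer back to it.
PRONOUNS = {"he", "she", "they", "it", "his", "her", "their", "its"}
POSSESSIVES = {"his", "her", "their", "its"}

class ConversationState:
    def __init__(self):
        self.last_entity = None

    def resolve(self, utterance, known_entities):
        """Record any known entity mentioned; rewrite pronouns to the last one."""
        for entity in known_entities:
            if entity.lower() in utterance.lower():
                self.last_entity = entity
        if self.last_entity is None:
            return utterance
        words = []
        for w in utterance.split():
            if w.lower() in POSSESSIVES:
                words.append(self.last_entity + "'s")
            elif w.lower() in PRONOUNS:
                words.append(self.last_entity)
            else:
                words.append(w)
        return " ".join(words)

state = ConversationState()
known = ["Barack Obama"]
state.resolve("Tell me about Barack Obama", known)
print(state.resolve("Who is his wife?", known))
```

A single "last entity" slot already handles the Obama example; real systems have to juggle many candidate referents, topic shifts, and ambiguity, which is where it gets genuinely hard.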
So, some tips if you do want to build any sort of conversational agent. The biggest, biggest thing: try to limit the domain of the conversation. I use Google Cloud for speech — it's probably one of the best out there. Make use of whatever is off the shelf. In my example, I'm using a lot of things like spaCy, which came up earlier tonight, and a bunch of other things. Use what's off the shelf, but build your own models for anything that's going to be unique or unusual. And try to get as many real conversations as possible. If you're doing this for any sort of business case — say, a support bot — and you can get access to a million support conversations the company has already had, you can predict very accurately what sort of questions people are going to come up with. Okay, that's it. I'm happy to take questions, but next time we're thinking we might get you to vote on some of the topics, so I think we'll send out a link for people to vote. Neural machine translation is definitely a really interesting topic in text, because it basically uses sequence-to-sequence with attention. Some of the newer approaches to advanced summarization also use attention, and you'll find it increases their abilities a lot. CNNs for text classification — I talked about that. In advanced topics, there's bAbI — do people know bAbI, Facebook's question-answering dataset? We could do that. Or ParseySaurus — how many have heard of ParseySaurus? It's the new set of pre-made models from Google that do what Parsey McParseface does. Or, you know, hotdog / not hotdog. So I'm curious — do people have a preference?
Is there anything people are really interested in? One of the big things for both Martin and me is that we're constantly trying to work out what the big things are that people want to know. The cool thing with text is that probably 70% of what you end up doing is the pipeline stuff that all four of us spoke about — but once you've got some stock code for it, or once you've learned how to do it, you can get through that part very quickly. Then it's just a matter of trying out different models, trying out different things. So — any questions? No? I know it's late. All right, everyone's just waiting for the swag we've got. A question on the old chatbots: if someone has such a chatbot, how difficult is it to move it over to the generative type? Well, you've got to ask yourself first: do you want to move it to the generative type? Because the generative type, most of the time, is not going to be good for something like a business. If you're talking about moving over to the modern type, though — yeah, that's actually very simple. Really simple. Please, all of you, just go and spend an hour and build a chatbot in api.ai. You will see how simple it is, and it will also give you a really good understanding of how these things work going forward. Any other questions? The output would be a vector, right? Yes. How do we convert that into a word — how do we find the corresponding word? In the example that I'll upload over the next couple of days, what I'm doing is basically going from a word to word2vec and pulling down the vector — in the case I'm using, I think it's the 300-dimensional vectors. You run that through your network, and at the end, you pull out the vector that it predicts.
And then you use an equation to look up the closest word in word2vec to that particular point in space. That could be cosine distance; it could be a whole bunch of different measures. And no — you don't have to search through the entire vocabulary naively; there are ways to do it faster. That's one of the big things Facebook has been working on lately: taking a very big embedding and being able to search it a lot quicker, because you're right, it can be quite slow for certain things. But it's actually very simple to convert back and forth, and for most things you're not going to have a problem. Where Facebook has problems with it is scale: Facebook runs every single thing on Facebook through a CNN that produces an embedding — whether it's a video, an image, whatever it is — and puts it into a vector space. From that, they can see very quickly that if the image you uploaded lands in the part of the vector space where all the porn images are, they won't let the photo go live, and they can do that very simply. They can also do pretty amazing things beyond that: they can see that the picture you uploaded is similar to a picture someone else uploaded, and deduce that you two may have been at the same event and might have similar interests. Stack enough of these signals and you can predict whether two people are likely to be friends, and send them a hint to connect — all those sorts of things. You're going to see this happen a lot more. For me, embeddings are not just for text; embeddings are for everything.
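Going back to the question — the vector-back-to-word step can be sketched in plain Python like this. The embedding values are made up for illustration; in practice you'd load real 300-dimensional word2vec vectors and, at scale, use an approximate nearest-neighbor index instead of a full scan:

```python
import math

# Toy embedding table (invented values) standing in for word2vec vectors.
EMBEDDINGS = {
    "king":  [0.9, 0.8, 0.1],
    "queen": [0.9, 0.7, 0.3],
    "pizza": [0.1, 0.2, 0.9],
}

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

def nearest_word(vector):
    """Map a predicted vector back to the closest word in the vocabulary."""
    return max(EMBEDDINGS, key=lambda w: cosine(vector, EMBEDDINGS[w]))

print(nearest_word([0.85, 0.75, 0.2]))
```

This is the brute-force version — one cosine computation per vocabulary word — which is exactly the slowness the answer mentions and what approximate-search libraries are built to avoid.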
If you really start thinking about using embeddings for everything — and I think going forward, that's going to be a big thing. Yes? Ah, that was the last question. Sorry, it's so late — we all tend to ramble, I guess.