Hello. Hello, my friends. What's up? My name is James. Unfortunately, my Spanish is no good, so I'm switching to English. Apologies. Thank you all for coming.

I work on fault-tolerant graph databases. It's a really fun day job, but today I'm going to talk a little more about graph theory and how we can build intelligent systems using graphs, so I'm not really going to talk about my day job. It's not as fascinating as it sounds, obviously. I'm only 23 years old, and my day job is so damaging it's made me look like this. I wouldn't want to inflict that on you lovely people.

We are going to talk about machine learning. Now, I can't see you very well, but some of you are going to be developers: give me a wave. All right, for you I'm going to talk about machine learning. You folks in marketing, give me a wave. Oh, you're not owning up. For you guys I'm going to talk about AI, OK? That's the way this thing works.

So we're going to talk about machine learning, and I'll give you some definitions. I think there's a lot of promise in the field of machine learning. There's also an awful lot of hype and bullshit, and I haven't got time for that, so we're going to stick with my definitions. Then we're going to talk about graphs, my first experiences of building systems that seemed intelligent, and how I once thought I'd built Skynet. I hadn't built Skynet, it turns out, but it was a bit scary and wonderful. Then I'm going to talk a little about graph theory, because it turns out that 300 years of mathematics have already given us a bunch of tools that enable us to build systems that seem intelligent, before we even reach for the fancy data science toolkits. We'll then step it up a gear, look at what happens in contemporary machine learning, and take a look forward into the future of graphs and AI.

All right, so: my bluffer's guide to the AI acronyms. Humor me and stick with these definitions; they're a good set. ML, machine learning: finding functions, guided by historical data, that shape future interactions within a given domain. AI is the property of a system that it appears intelligent to its users, often, but not always (as we'll see in this talk), by using machine learning techniques. Or we might choose to think of AI as machine learning implementations that can be cheaply retrained to address neighboring domains: if you have a computer vision system that can distinguish animals, you may be able to cheaply change it to distinguish shoes. These things are often conflated with predictive analytics, which uses the past to predict the future, and conflated with general-purpose AI, which is machine learning plus transfer learning, so that experiences learned in one domain can be applied elsewhere. That's the human-like kind of AI: if I bash my head on a wall and learn that it hurts, I don't need to subsequently bash my head on a table to know that will hurt too. For some of you developers I know that's a challenge, but trust me, humans do this stuff.

So what do we do today in ML? Who's doing ML today? Give me a wave. All right: literally five people, at a conference that's AI-centric. You are the thought leaders; you should be down here giving this talk. What do we do today? We take tables, we laboriously extract features from them, and we ram them through our ML pipelines. But I'm a graph person.
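To be fair to that status quo, here's roughly what it looks like: a minimal sketch in Python with scikit-learn. The CSV file and column names are hypothetical stand-ins invented for illustration, not anything from a real pipeline.

```python
# The tabular status quo: flatten the world into a table of features,
# then push the vectors through a classifier.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

df = pd.read_csv("customers.csv")                     # one dull table
X = df[["age", "monthly_spend", "visits_per_month"]]  # hand-picked features
y = df["bought_upgrade"]                              # the label to predict

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression().fit(X_train, y_train)
print("held-out accuracy:", model.score(X_test, y_test))
```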
I think we can do better. Tables? Meh. What we do today is good, but it's not great. I don't think it's the limit of our technological ambition to say that we'll take a table, extract features, turn them into vectors, and pump them through a regression or classification model. That's not a bad thing, but we can do so much more with data if we stop representing it as dull tables and start representing it as rich graphs. That's the fundamental hypothesis of this talk: we can do more with graphs than we can with other kinds of data models.

And to demonstrate this (sorry, another definition): the thing on the left is a graph; the thing on the right is a chart. I don't want to catch people saying "look at my bar graph". That makes me cross. Don't say that. We're going with the thing on the left. Credit to Hodler and Needham, who wrote the graph algorithms book, for this. If you want a copy of that book, which has an excellent chapter on AI, it's available for free at neo4j.com. Do grab a copy; it's a great book.

Now, this is a graph. Anyone not seen graphs before? Of course you've all seen graphs before. This one happens to represent the London Underground network. Say you're coming to London to visit the Neo4j office here at Southwark, just south of the city centre, and you land at Heathrow, down here in the bottom-left corner. How do you get from Heathrow to Southwark? Even if you've never been to London, and even if you've never had the pleasure of smelling another human being's armpit on the tube for the 50 minutes it takes to travel in from Heathrow (it's not very nice), you could figure this out. We know the rules: you may traverse from one circle to another if those circles are connected by a line. It's a really simple idiom; at a basic level, the idioms of graphs are trivial. If I asked you to find a short path, or a fast path, between Heathrow Airport and Southwark tube station, you could roughly work it out. You almost certainly wouldn't pick a route through the north of London; that would just seem wrong, too long. You can run a human version of Dijkstra's algorithm in your head and advance your way along the tube system until you find yourself at the Neo4j office. For what it's worth, the fastest path is typically to go up the blue line until you meet the grey line, then take the grey line down to Southwark.

So graphs are super simple, and as a basic data modelling technique they are lovely to work with. The label property graph model, which has risen to prominence over the last decade or so, is ridiculously simple. When we learned databases at university, we had a book this thick, full of normal forms we had to memorize. When you learn graphs, this is it: one slide. OK, it's a widescreen slide, but it's one slide. In the label property graph model we have nodes, typically representing entities, which carry properties: key-value data. Nodes may be labeled to indicate their purpose in the network. We also have named, directed relationships, which can have properties too; in the tube case, that might be travel time or distance between stations. Each relationship has exactly one start node and exactly one end node, and those might be the same node, so you can have relationships that loop back on themselves.
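To make the tube example concrete, here's a minimal sketch in Python with networkx. The stations, lines, and journey times are illustrative guesses rather than real timetable data; the point is that once the data is a graph, an off-the-shelf shortest-path routine (Dijkstra's algorithm under the hood) does the reasoning you just did in your head.

```python
# A toy slice of the tube as a property graph: nodes are stations,
# relationships carry 'line' and 'minutes' properties. Times are made up.
import networkx as nx

tube = nx.Graph()
tube.add_edge("Heathrow", "Acton Town", line="Piccadilly", minutes=18)
tube.add_edge("Acton Town", "Green Park", line="Piccadilly", minutes=20)
tube.add_edge("Green Park", "Westminster", line="Jubilee", minutes=2)
tube.add_edge("Westminster", "Waterloo", line="Jubilee", minutes=2)
tube.add_edge("Waterloo", "Southwark", line="Jubilee", minutes=1)

# Dijkstra's algorithm, weighted by the travel-time property.
route = nx.shortest_path(tube, "Heathrow", "Southwark", weight="minutes")
cost = nx.shortest_path_length(tube, "Heathrow", "Southwark", weight="minutes")
print(" -> ".join(route), f"({cost} minutes)")
```

Note how the blue-line-then-grey-line answer falls out of the weights, not out of any special-cased knowledge of London.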
You are now all graph experts. I bless you as graph experts by the power of Neo4j. There you go. And when you've got a good graph database, querying it does not give you fear; the join fear you have with relational technology doesn't appear. That asterisk you can see at the top of this Cypher query means "match at any depth", or in relational speak, "as many joins as it takes". The reason I'm not scared of that is that a join, a traversal, takes about one forty-millionth of a second in a steady-state database, even on my overpriced MacBook Pro, which is a piece of crap and I hate it. Traversals are very cheap in a graph database, because mechanically we do clever things, like pointer chasing, to get fast results.

So let's take a step back. You now know graphs, and you now know that graph databases are performant at exploring them. I think we can be smarter about the way we build intelligent-seeming systems, and I'm going to tell you a little anecdote from my own history. That's great, isn't it: you pay to come to a conference and you get some boring old geezer telling you about his life. Good life choices, people.

Anyway, remember this? Remember that period of optimism we had before the world turned to shit? Yeah, me too. 2008, right? Things seemed OK. The planet's all right; we might get through this together. I went to a conference in southern Sweden, and I bumped into this weird, tall Swedish guy, and we got talking about what I was doing. I was working at a telecoms company, on retail, on recommendations: trying to sell you more telecoms stuff, a bigger mobile phone package, more home broadband, that kind of thing. And I explained to him that our strategy was to buy a piece of software for about $10 million and then spend three to five years customizing it, because that's what you do, right? You spend a load of money and a load of time. That's just modern software development all over. (I'm being sarcastic, people, in case you couldn't tell.)

And this weird guy starts telling me: hey, that's a graph problem. And I'm like, honestly, you're an idiot. Databases have rows in them; did you not learn this at university? He's like, no, no, it's a graph problem, and I've got this thing called Neo4j. And I'm like: OK, firstly, you don't know what a database is. Secondly, your name's shit, dude. Who would call a piece of software Neo4j? That's the worst name I've ever heard. Anyway, this guy keeps talking at me, and I'm like, OK, whatever, dude.

So I go back to work the next week, and I realize: that weird guy was right. This is a graph problem. Because it's just about things that depend on other things, and I can walk up those dependencies and sell the thing at the top of them, knowing it's something the user needs or can use, and I'll never try to sell them something they don't need. So in one long afternoon I poured our transactional and product data into a graph and ran a query. I downloaded this thing called Neo4j, the database with the world's worst name, and back then, in 2008, Neo4j was a Maven-and-Java hellscape. It was a build-your-own-database kit, and it was horrible. Absolutely horrible.
I mean, give me a woo if you like Maven. Uh-huh. Give me a double woo if you love Java. All right, there's some muttering. Maven plus Java is not a combination that makes many of us feel happy about our career choices, right? You've got a job doing Maven and Java, and oh my God, it's awful. So there I am with all this XML going on, and then I write a query like this Java for-loop here. Oh my word. But here's what it did: given a starting point in the graph, where I told it which things I'd already bought, it was able to crawl over the graph and figure out other things it could upsell or cross-sell to me. And when it gave me my first answer, I thought: holy shit, I've invented Skynet. The system seemed intelligent. It gave me a very intelligent response to the stimuli I fed it, and it blew me away. I was hooked on graphs from that moment, by the way.

That guy who was weird (and I keep saying he's weird, and I'm being videoed saying he's weird, and one day he might watch this video of me saying he's weird) is my boss. He runs Neo4j. So that's a career-limiting move. Don't follow my example. But of course it wasn't Skynet. It wasn't AI, and it wasn't Arnold Schwarzenegger. It was just a graph traversal: a breadth-first search from my current context across the graph of products I could be upsold. And it absolutely blew me away. This stuff, it turns out, is incredibly powerful. I could now see how to build systems that appeared intelligent just by applying a small bunch of simple graph-theoretical rules, and I was hooked.
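In modern terms, that afternoon's hack was a breadth-first reachability walk over a product dependency graph. Here's a hedged reconstruction in Python with networkx; the products and the dependency edges are invented for illustration, not the telco's actual catalogue.

```python
# Upsell by walking dependencies: anything that builds on what the
# customer already owns is a candidate to sell them next.
import networkx as nx

catalog = nx.DiGraph()  # edge A -> B reads "B depends on A"
catalog.add_edge("phone_line", "basic_broadband")
catalog.add_edge("basic_broadband", "fibre_broadband")
catalog.add_edge("fibre_broadband", "tv_bundle")
catalog.add_edge("phone_line", "mobile_sim")

def upsell_candidates(graph, owned):
    """Breadth-first reachability upward from what the customer owns."""
    reachable = set()
    for product in owned:
        reachable |= nx.descendants(graph, product)
    return reachable - set(owned)

print(upsell_candidates(catalog, {"phone_line", "basic_broadband"}))
# -> {'fibre_broadband', 'mobile_sim', 'tv_bundle'}
```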
So then I joined Neo4j, because this stuff had to get better; as a usability thing, it sucked. And over time Neo4j matured into the database many of us know today. While I was there we took on a customer who was interested in retail analytics, which is a real challenge. We're in the UK, and the UK has a very mature supermarket sector, world-class in many respects, but one thing that was pretty poor back then was the way the supermarkets did recommendations. You would swipe your loyalty card at the checkout, and a week or so later some vouchers would arrive through the post. You'd take those vouchers, say "oh, that's spam from the supermarket", and drop them into the recycling. Zero impact. And the supermarkets knew this. What they also knew is that it's far more impactful to deliver something in line, as part of the transaction flow, and far more impactful for a human touch to be involved.

And now I'm going to run an experiment which has never failed me before, but we're about to see something. Hey, how you doing? Hey, there you go. Thank you. You're welcome. Thank God he's not a psychopath. OK, that's never failed me: it turns out, thank you, sir, most humans aren't psychopaths, even when they know they're being set up by the speaker with the microphone. (Hey, actually, you could pass for me, right? Like a younger, handsome version of me. This is great.) The gentleman at the front took the thing, which he kind of knew was my name card, and he still looked at it. Bam: that's when I've got you. And that's when the supermarket's going to get you, as you swipe your loyalty card: they give you your vouchers, the cashier puts them in your hand, and unless you are a horrible person you will look at them. Even if you don't want to, you will look at them, because that is part of our protocol. And at that point you've got a much stronger way of selling to that person, provided you're giving them good recommendations.

So what did we do? We put the product data and the transactional data, the purchase data, in a graph. Part of this graph is taxonomical: categories and products. Part of it is the baskets a particular individual bought. So we can both do global analytics about purchasing across whole populations, and look at you, individually, in the clickstream as you come through the shop or the website. We stuffed all this into a graph, because that's where data lives, and we went looking for this "young fathers" pattern. For this particular proof of concept we were enamored of a myth we'd learned from the United States: that on Friday nights, if you put six-packs of beer next to the diapers, you sell more of both. Interesting. So the story is there are young fathers who buy some diapers for the baby; they're not going out, because they've got a baby, so they buy a pack of beers, and they sit there watching TV, because that's what your life is as a parent, right? That's as good as it gets. It's a terrible thing. So we had this pattern: a young father is someone born within a certain range of years who's bought a games console, some nappies (diapers, that is), and something beer-related. We thought that was typifying.

What's interesting is that if you run this person through your modern AI/ML pipeline, this slide is what it thinks you are. It thinks you're a very, very keen gamer. (There are people taking photographs of this. I can't believe I'm being photographed in front of this slide.) But that's what your current ML framework thinks is going on: someone so keen on drinking beer and gaming that they never move. I do like that I can lower the tone of this fine, fine event. But there's a real business opportunity here, because you'll find people who partially match the young fathers pattern, and we may be able to prompt them to change their buying behavior, say, to buy a games console.

So that's great: we can take that graph pattern, draw the equivalent picture in ASCII text, flatten it out, add some Neo4j boilerplate, some Cypher boilerplate, and suddenly we've got an answer from our database. We've literally taken a picture, written it down as text, put it to the database, and got results back. And notice the time on this: zero milliseconds. That's important. Being able to do things in line is important.
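For flavour, here's roughly what that picture-flattened-into-text query might look like when run from Python through the official neo4j driver. The labels, relationship types, and property names are a hypothetical schema invented to match the story, not the customer's actual model.

```python
# The 'young fathers' pattern as a Cypher query: a hedged sketch.
# Hypothetical schema: (:Customer)-[:BOUGHT]->(:Product)-[:IN_CATEGORY]->(:Category)
from neo4j import GraphDatabase

query = """
MATCH (c:Customer)-[:BOUGHT]->(:Product)-[:IN_CATEGORY]->(:Category {name: 'Nappies'}),
      (c)-[:BOUGHT]->(:Product)-[:IN_CATEGORY]->(:Category {name: 'Beer'})
WHERE c.yearOfBirth >= 1980
  AND NOT (c)-[:BOUGHT]->(:Product)-[:IN_CATEGORY]->(:Category {name: 'Games Consoles'})
RETURN c.customerId AS candidate  // partial match: prompt them towards a console
"""

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))
with driver.session() as session:
    for record in session.run(query):
        print(record["candidate"])
driver.close()
```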
The previous versions of these systems were batch processed, and a batch process can do the one thing the psychologists at the supermarket hate. Beep, frozen peas; beep, tomatoes; beep, Xbox One; thank you, I'll pay. And the voucher I'm handed says 20% off a PS4. That doesn't feel very nice. The psychologists call it a contraindication. In common English we call it "you pissed me off". It's an insult: I've just spent two or three hundred quid on an Xbox One, and now I see I could have had 20% off a PS4. Why did that happen? Because we didn't take into account the clickstream, the current transactional data flowing through the system. If we had, we'd have known it was a contraindication and we wouldn't have offered that voucher. Not only will it not change my buying behavior; it now makes me hate you, because I feel you've ripped me off in some way.

When I was young I worked at a company called Toys R Us, which sadly is no longer with us. I know, can you believe it, me working at Toys R Us? You'd send your kids in and I'd be like: ah, no. But at Toys R Us I learned that if you upset a customer, that customer will go and tell 17 people in their social network. I know, right? Some of you developers don't even have 17 people in your social network, and Python doesn't count as a person. And nowadays, with social media, you can still only tell 17 people (social media doesn't make you popular, boys and girls), but you can tell them a lot faster about how shitty your experience was. So we have to do this stuff in real time.

And this stuff works at scale. Facebook does it: Facebook Graph Search implements exactly this pattern for questions like "which sushi restaurants in New York do my friends like?" I have a graph here: that's me; I'm friends with Andreas and Michael; Andreas and Michael like these particular sushi places; both places are in New York. This stuff is super cheap. The query is easy to write (it's not acres of SQL, it's a simple graph query), and the search structure is highly scalable. The lovely thing about well-written graph databases is that the latency of a query is proportional to how much of the graph you choose to explore, not to the overall dataset size. So you can start to do some amazing things.
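The friends-like-sushi query is worth sketching too, because its shape explains that latency claim: the pattern anchors on me and fans out over my friends and their likes, so the work is bounded by my neighbourhood, not by the size of the whole social network. Again, the Person/FRIEND/LIKES/Restaurant schema is my guess at such a graph, not Facebook's actual model.

```python
# 'Which sushi restaurants in New York do my friends like?'
# Hypothetical schema: (:Person)-[:FRIEND]->(:Person)-[:LIKES]->(:Restaurant)
from neo4j import GraphDatabase

query = """
MATCH (me:Person {name: $name})-[:FRIEND]->(friend)-[:LIKES]->(r:Restaurant)
WHERE r.cuisine = 'Sushi' AND r.city = 'New York'
RETURN r.name AS restaurant, count(friend) AS friendsWhoLikeIt
ORDER BY friendsWhoLikeIt DESC
"""

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))
with driver.session() as session:
    for record in session.run(query, name="Jim"):
        print(record["restaurant"], record["friendsWhoLikeIt"])
driver.close()
```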
So that's the basics. This is what blew me away; this is what got me hooked on graphs. And once you're in, once you've had the gateway drug, you want more: you want to mainline graphs. That's when I discovered there's this whole field of network science and graph theory, and it is amazing. These are my two favorite books. OK, obviously the O'Reilly Graph Databases book is my favorite book, because I wrote it, but apart from that, these books are acceptable: Ginestra Bianconi's book on multilayer networks, and Networks, Crowds, and Markets by Easley and Kleinberg. They're incredible books. For those of you who want a pay rise: read those books, then rebrand yourself from data scientist to network scientist. That's 20,000 euros right there. You're welcome.

It turns out that graph theory operates across a lot of domains, and there are a lot of off-the-shelf algorithms we can use to process this stuff. Very low barrier to entry, and amazingly powerful. We're going to start with one technique based on local properties. Graphs are weird, right? Small changes in a graph can have huge repercussions for the overall structure. (By the way, I can see myself on the monitor here, and I can see the camera guy struggling to keep up with me. I'm not a sedate speaker, camera guy.)

First thing I'm going to teach you: triadic closure. Fancy name, but all it really means is "make triangles". Particularly in graphs involving humans, there's a tendency to form triangles. Here you've got a small social graph with Kyle, Stan, and Kenny. Kyle is a friend of Stan and a friend of Kenny. So far, so nice. You know this from your own life: if you've got two friends, you provide a kind of transitive trust between them, a signal that they'll somehow be compatible. Those friends are likely to meet eventually, and because you like both of them, they're probably a good match for each other, so they may well become friends too. That's what often happens in human graphs: eventually the graph forms what's called a stable triadic closure, three friends.

There's also a notion of structural balance in triadic closures. Relationships can carry not just positive sentiment, like friendship, but negative sentiment, like enmity. Here we've got Cartman, who's a friend of Craig and hates this guy Tweek. Cartman's a really awful character; he really wants to make Tweek's life miserable. Kind of like your manager, really. So you could form a closure where Cartman is a friend of Craig, Cartman is an enemy of Tweek, but Craig and Tweek are friends. And this is awkward, right? It feels awkward to us as humans, because Cartman will say, "hey Craig, let's go and be horrible to Tweek", and Craig's like, "no, Tweek's my friend; why would you say that? That's a real dick move." It's not a balanced closure. There is a balanced closure where Cartman and Craig are friends and they both hate Tweek, and now they can persecute him. Look, I'm not saying this is good, all right? I don't think it's nice. But it's a thing: a stable closure. And of course there's another kind of stable closure where they're all just friends, yay, which is much nicer. Both are low-energy states, structurally stable closures in our graph. (You know what I love? Playing videos of Minions to a room full of computing professionals. With your intellectual capabilities, you're really enjoying that gift, aren't you?)

This idea of structural balance is a key predictive technique in graph theory: you can use it to generate predictions that turn out to be quite accurate, and I can demonstrate that. Here is a graph of the great powers of Europe in the late 1800s. (Forgive me, Spanish folks, you're just off the edge of this graph, but I'm sure you're in there somewhere.) The black relationships represent friendships and the red ones enmities. Parts of this graph contain stable triadic closures, and parts are unstable. Russia, Austria, Germany: very stable, all friends. But the UK, France, and Russia: unstable. Two of those should gang up on the third, and then it would be stable. And of course it's going to be on the French. Definitely. So what happens? We can roll the graph forward, creating more balanced triadic closures. Italy joins in with Germany and Austria (history calls that one the Triple Alliance), and then Russia and France, just as the graph indicates, start to make friends. Now Russia and France can gang up on the United Kingdom, as it ever was. And then the weirdest thing in British history happens: the French make friends with us. Who'd have thought? They're like, "hey, let's be friends."

And we're like: OK, that seems reasonable. What's brought this on? (Are there any French speakers in the room? Hello, my friends.) My French is not terribly good, but that relationship is called the Entente Cordiale, a famous historical document. As I remember, it translates as "we surrender". Something like that, yeah? At school they taught me to conjugate it: je capitule, tu capitules, il capitule. I surrender, you surrender, he surrenders. OK, I'm not going to labor this. Anyway, now we have a weird thing, an unbalanced closure: the French and the Brits are friendly, but the Brits and the Russians aren't. That can never last. So the Russians and the Brits become friends. And as you iterate over this graph, just looking for opportunities to create stable triadic closures, the graph eventually bifurcates into this pattern. And this pattern is remarkable, because we know it's what actually happened: it's the starting alignment of the First World War. So boo, humans: we suck. But kind of yay, graph theory, for being able to predict that while knowing nothing about humans and their war-like tendencies. We can apply the same technique to our own data to see how a graph might evolve. If you're interested, that example comes from the Easley and Kleinberg book, which is brilliant; grab a copy and it will walk you through this in much more depth than I can here. Surprising, right? (Aw, it's not my kid; I don't care. I'm a monster.)
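Structural balance is mechanical enough to check in a few lines: sign each relationship +1 for friends or -1 for enemies, and a triangle is balanced exactly when the product of its three signs is positive. A toy sketch in Python, with a few hand-signed edges rather than the full great-powers dataset:

```python
# A triangle is balanced iff the product of its three edge signs is +1.
from itertools import combinations
import networkx as nx

G = nx.Graph()
G.add_edge("UK", "France", sign=+1)      # Entente Cordiale
G.add_edge("France", "Russia", sign=+1)  # Franco-Russian friendship
G.add_edge("UK", "Russia", sign=-1)      # ...which can't last
G.add_edge("Germany", "Austria", sign=+1)
G.add_edge("Germany", "Italy", sign=+1)
G.add_edge("Austria", "Italy", sign=+1)

for a, b, c in combinations(G.nodes, 3):
    if G.has_edge(a, b) and G.has_edge(b, c) and G.has_edge(a, c):
        product = G[a][b]["sign"] * G[b][c]["sign"] * G[a][c]["sign"]
        print(a, b, c, "balanced" if product > 0 else "unstable")
```

Iterating, flipping signs to resolve the unstable triangles, is exactly the rolling-forward the slides showed: here the UK-Russia edge is the one under pressure to turn positive.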
There's also a notion of strong triadic closure, which says that relationships have not only a sentiment, positive or negative, but also a strength: strong or weak. This strong triadic closure property lets us pick our network apart and decompose it into neighborhoods. Here, for example, we might have a triadic closure where Stan and Cartman are only weakly connected, more acquaintances than friends: the sentiment is positive, but it's weak. It turns out that in graph theory we can pick these weak relationships out, because they are the relationships that bridge neighborhoods, subgraphs, in our overall graph. That structural role matters: it gives us a clue about how the graph might evolve in the future. This is the local bridge property. Here you can see a primary school: the boys at the top left, the girls at the bottom right, and any information that passes between the boys and the girls has to cross that one link between Stan and Wendy. That weak link is super important. You'd see the same thing if you mapped your organization: the organizational hierarchy or, better still, who actually communicates with whom. You'll find these weak links in the graph, and they are organizational hotspots where information is transferred. Very useful. And we can also use these weak links to predict how a graph will evolve.

This example comes from Zachary's karate club study, published in the Journal of Anthropological Research in the late 1970s. (Fascinating, yes; I am a very interesting person. Thank you for asking.) What happened is that a karate club formed and grew around node number 1, a student instructor. As the club grew and became popular, it took on another instructor, node 34, a professional. And, as so often happens in human societies, there was a schism. The club began to bifurcate: some students preferred the professional instructor, some preferred the original student instructor. You can see with your clever human eyes and brains that two clumps are forming. So, using the local bridge property, can we predict how the club will split? We know that it did split, because it happened; the question is whether we can predict the split. We can do graph partitioning by picking apart the weak links, and if we predict by graph theory we get this: the pink nodes are students who would stay with the original instructor, and the white nodes are students who would go with the professional. Do you want to know the truth? Graph theory gets it wrong, but you have to watch really closely, because it's one node different. Node number 9. Node 9 stayed with the original student instructor, even though node 9 was far more embedded in the professional instructor's side of the graph. Why? Because node 9 was about to complete their black belt with the original instructor and didn't want to switch clubs at that critical point in their training. So graph theory is good; it can make great predictions, but only from what it knows. It's not always perfect: a good indicator, but don't bet your house on it.
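Happily, Zachary's club ships with networkx as a built-in dataset, and the Girvan-Newman algorithm does precisely the pick-apart-the-weak-links move: it repeatedly removes the edge with the highest betweenness, which is to say the most bridge-like link. (The prediction in the talk used Zachary's own method; this off-the-shelf routine recovers essentially the same two factions.)

```python
# Predict the karate club split by cutting bridge-like edges.
import networkx as nx
from networkx.algorithms.community import girvan_newman

club = nx.karate_club_graph()          # Zachary's club, nodes numbered from 0
factions = next(girvan_newman(club))   # first bifurcation into two groups

for i, members in enumerate(factions, start=1):
    print(f"predicted faction {i}: {sorted(members)}")

# The recorded outcome is stored on the nodes, so we can compare:
stayed = [n for n, d in club.nodes(data=True) if d["club"] == "Mr. Hi"]
print("actually stayed with the original instructor:", sorted(stayed))
```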
Now, if you want to do this stuff in Neo4j, go ahead: boot up Neo4j, put your graph in, and run these algorithms over it. They come out of the box as part of your data science toolkit, they are brilliant, and they're pretty quick. If you wanted to run, say, PageRank over a large, internet-scale payment graph, ordinarily you'd spin up your monstrous Hadoop cluster, phone your local power station ("I'm switching it on; put more coal on the fire now"), and rack up an incredible carbon footprint. With Neo4j's graph algorithms, 20 iterations of PageRank over a graph with around 20 billion relationships takes the number of milliseconds on this slide (so, hours), which is impressive considering it's running on what is essentially a laptop-scale machine. Why is it so fast compared to Hadoop? Locality: Neo4j knows how to store and process graphs very efficiently. (Ha, thank you, whoever laughed. It's you again; you are the best audience member by far. Ten out of ten. The rest of you are about eight out of ten at this point.)
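Not at twenty-billion-relationship scale on a laptop, obviously, but the algorithm itself is a one-liner from Python, which makes it easy to play with the idea on a toy payment graph; the accounts and amounts below are made up.

```python
# PageRank over a toy 'payment graph'. In Neo4j you'd call the equivalent
# graph data science procedure; networkx shows the shape of the idea.
import networkx as nx

payments = nx.DiGraph()
payments.add_weighted_edges_from([
    ("alice", "bob", 120.0),
    ("bob", "carol", 80.0),
    ("carol", "alice", 50.0),
    ("dave", "carol", 300.0),
])

rank = nx.pagerank(payments, weight="weight")  # power iteration to convergence
for account, score in sorted(rank.items(), key=lambda kv: -kv[1]):
    print(f"{account}: {score:.3f}")
```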
Now, wave at me if you're millennial enough not to know this gentleman. (Get out of here! Put your hand down.) When we were kids, this is how we were told AI was going to be: in our cars. Actually, in this car, only the bottom-left corner is engine; the rest of it is graphics cards all the way along, NVIDIA slots all the way along. We were told AI would be in our cars, and that we would also look like this. It didn't quite work out that way. But graph AI is getting really interesting.

There was a paper I think of as seminal, from KDD 2015, by Fakhraei et al. The problem they were looking at was spam in social networks (a Twitter-like network, with followers and so on), and what they wanted to do was stop spam getting through. Their insight was to ask: can we use features from the graph itself to detect whether a message is likely to be spam? So you have a social graph and, layered on top of it, the graph of message transmissions; can you extract features from that structure and predict spam? The answer was yes. They didn't need to look inside the messages for cheap Viagra or please-send-money scams. They looked at the structure of the graph and extracted features like PageRank, triangle count, in- and out-degree, centrality, graph coloring, labels, and so on. And from those structural features alone, they were able to identify 70% of the spammers with 90% accuracy. No textual analysis. So the contextual information in the graph structure alone gives you a huge leap in the features you can feed your ML pipelines for gain. If you then mix in the traditional data (the textual data, the numerical data, and so on), you can produce models that are uncannily accurate. I don't think Fakhraei and his team get enough credit for this. What they did was eye-opening for what we can do with graphs.
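The first half of that recipe, turning each account's position in the graph into a feature vector while leaving the message text untouched, is easy to sketch. The graph below is synthetic, and the real paper's features and classifier are considerably richer, but the shape is this:

```python
# Graph-structural features per account, in the spirit of the KDD 2015
# spam work: no message content, just position and shape in the graph.
import networkx as nx

G = nx.fast_gnp_random_graph(200, 0.03, seed=42, directed=True)

pagerank = nx.pagerank(G)
triangles = nx.triangles(G.to_undirected())

features = {
    node: [
        pagerank[node],        # influence in the network
        triangles[node],       # how embedded in closed triads
        G.in_degree(node),     # followers
        G.out_degree(node),    # followees; spammers often skew this ratio
    ]
    for node in G.nodes
}
print(features[0])  # one account's feature vector, ready for a classifier
```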
Now, much of modern graph machine learning is still about vectorizing: taking a graph, extracting features, and creating vectors with graph2vec and friends. And that's OK; these are complementary techniques to the machine learning pipelines we already know. But we don't always have to vectorize. Graphs give us other opportunities beyond doing the same old things again.

One thing that's become popular over the last couple of years is knowledge graphs: semantic domain knowledge used for inference over, and understanding of, a particular domain. My favorite example is the eBay ShopBot. If you're ever in the US, you can have a voice conversation with eBay, and it will give you a highly curated shopping experience based on your history, on what it learns from the conversation, and on the combined histories of other eBay shoppers, all over eBay's huge and dynamic catalog. For example, if I chat with ShopBot and tell it I'd like to buy a bag, it figures out that with high probability I'm looking for a laptop bag. My guilty secret is that I really like laptop bags. I've got so many of them and I love them all; I've got one for every occasion (that one's for Mondays). I'm a nerd for laptop bags. But if my wife spoke to eBay, she wouldn't get recommended laptop bags, given her context, her buying history, and her demographic; she's much more likely to be recommended handbags. These semantic knowledge graphs provide context for a user journey, and that richly connected behavioral data, coupled with the product catalog at scale, lets eBay make really prescient-seeming recommendations about what to do next. What does it mean for them? They sell more stuff, because they get you to the right stuff more quickly, so you'll click buy, or, in this case, say "buy; ship it to me". To me it seemed like magic. The first time I saw the eBay ShopBot running, I had that "oh my God, they built Skynet" moment again. But because I'd been through this once before in my own stupid life, a moment later I thought: aha, that's not Skynet. That's just data and algorithms; but data and algorithms that provide such a compelling, intelligent interaction that it felt close to magic. It was amazing.

These knowledge graphs are becoming widespread across many, many domains. My other favorite is NASA. NASA built a knowledge graph because they were leaking knowledge as people got old and died. The lovely story from the NASA folks is that by using a knowledge graph they were able to recover knowledge that was otherwise lost to them: knowledge that will get humans to Mars two years earlier. Which is great, right? Planet Earth is turning to shit, so it's good to have a plan B.

And if you're getting into hardcore ML, there's the notion of graph convolutional neural networks. This is a general architecture for predicting node and relationship attributes in graphs, from Kipf and Welling in 2017. The idea is that you can have a k-partite graph, not just a bipartite graph like "user likes movie" but something like "user likes movie, movie is in genre", any general k-partite structure, and you effectively train your neural network recursively across those layers, user to movie to genre and back up again, until you have a model that understands your recommendation domain very well. And this isn't just speculative graph-AI thinking. This architecture is the thing that powers YouTube. If you're wondering what makes YouTube so engaging (and in some cases so radicalizing and dangerous), it's this kind of algorithm feeding our dopamine hits. My seven-year-old is a victim of it: he consumes daft things from YouTube all the time, and YouTube always seems to know what he's going to want next. Somehow they've got the annoying filter switched up to ten for when you're an adult. So graph convolutional neural nets are a very popular way of doing recommendations at scale.
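For the curious, the core of the Kipf and Welling convolution is compact enough to sketch in plain NumPy: one layer mixes each node's features with its neighbours' features via the normalized adjacency matrix. Toy sizes and random weights throughout; a real system stacks several such layers and trains the weight matrices.

```python
# One graph-convolution layer: H' = ReLU(D^-1/2 (A+I) D^-1/2 . H . W)
import numpy as np

A = np.array([[0, 1, 1, 0],   # adjacency matrix of a 4-node toy graph
              [1, 0, 0, 1],
              [1, 0, 0, 1],
              [0, 1, 1, 0]], dtype=float)
H = np.random.rand(4, 8)      # 8 input features per node
W = np.random.rand(8, 4)      # weights (random here, learned in practice)

A_hat = A + np.eye(4)                           # add self-loops
D_inv_sqrt = np.diag(A_hat.sum(axis=1) ** -0.5) # symmetric normalization
H_next = np.maximum(0, D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W)  # ReLU
print(H_next.shape)  # (4, 4): a new representation for every node
```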
Looking forward: the big brains in AI (this is Google, MIT, the University of Edinburgh) have put forward a position paper, "Relational inductive biases, deep learning, and graph networks" (Battaglia et al., 2018), arguing that graphs themselves will be a fundamental representation for future general AI. Their reasoning, boiled down, is that we need hierarchy, but we also need to be able to move across that hierarchy for transfer learning. Humans get this: we arrange our knowledge in hierarchies, we reason up and down them, and we also reason across them. As I said earlier: if I headbutt the table, it hurts, and I already know that headbutting the wall will hurt too, because the wall is solid like the table and my head is soft. I can reason across that hierarchy. So some of the brightest people in the world are saying that graphs, as a structure, are going to be fundamental to the next generation of truly human-like AI systems. They haven't done all the research yet; these are early stages, but their hypothesis is that graphs will be it.

Now look: here's a chart, not a graph, from my colleague Mark Needham. If you've never seen graphs before, oh my God, they seem difficult at first, because you have to unlearn so much. But once you're over this hump and you're down here, trust me: once you get graphs, you are never going back, because they are wonderful. So we have an opportunity with graphs, and with graphs and ML, to build systems that are truly powerful, truly wonderful. But with great power comes great responsibility. Please don't build Skynet for real. Ladies and gentlemen, thank you very much.

If you want to come and chat to me about any of this, I'll be outside this room tomorrow at 2:30. I would love to hear about what you're doing. Thank you very much.