You all know him, so please let's welcome Paco Nathan. Paco, how are you?

Thank you very much, and I'm really grateful to be here at Big Things.

Yes, you are a classic at this conference. We can't do this conference without you now. You are so kind to join us all the way from California, I believe.

Yes, yes, out here in the redwoods.

Thank you. I don't know if it's going to be a long night for you, or if you're going to go to bed quite late. Sorry for keeping you waiting, Paco. All yours, looking forward to listening to you.

Wonderful, thank you very much. Let me see, can you see my screen okay here?

Can we see his screen okay? I'm not sure, Paco, to be honest, because I don't see everything. Just start, carry on, carry on, and we'll take care of everything.

Thank you. So all of the slides are online, and I'll post a link later on. There are a lot of great links within the slides: materials, open source repos, tutorials, and other things that might be helpful as background information here. And I must say I'm grateful to get to present at Big Things. I miss coming to Spain, to all my friends in Madrid and elsewhere in Spain. I just want to say I miss you, and I hope I get to come back soon.

Looking forward to that, definitely next year and hopefully much sooner.

This talk is about... yes, I got a note about sharing the presentation. Let me know if it's not showing yet. I think I have to do one more thing. Quick, quick, quick, let's go to Keynote. There we go. Okay.

Perfect, Paco. I think we're on yours now. Yes, go ahead.

Great. So this talk here is about two big ideas. One is called graph thinking and the other is called thinking sparse and dense. The graph thinking part is really more a message about business than technology, more about how to think about the business problem and then be able to apply the technology. And the point there is to render the complexity of a business problem into value.
And we do this by leveraging graph technologies. The other idea, thinking sparse and dense, is about how to really take advantage of the hardware for complex data workflows when you're working with graphs. So these are two ideas that go hand in hand; they really need each other. I'll also have a few words about leveraging open source.

In my role at Derwen, I help lead a team working on very large graph infrastructure. In particular, we're working on some use cases in manufacturing in Europe. It's really been a pleasure to get to work in some of these areas, and especially to see where open source can be applied. And the reason I want to present this talk is because of this word "graph". I think you'll be hearing it a lot throughout the conference. I'm sure there'll be plenty of talks referencing graph neural networks and how important they are for deep learning, for AI. But there are other aspects of graph as well, and I want to show how some of these parts of graph technologies are complementary and work together.

So, graph thinking. Let's start out with an illustration. Imagine we're somewhere in the woods, in a small village. Let's say it's a medieval village back in the Black Forest. I'll show a little map here. In this village there's a pub, and Pat is the person who tends the local pub. Pat has a friend named Hannah and a friend named Thomas. Hannah works the fields, grows the grain, and also has a friend named Aidan. Thomas raises poultry, and Thomas buys grain from Hannah. Thomas also has a friend named Brenda. Aidan operates the mill right there on the stream and buys grain from Hannah. Aidan also has a friend named Chris. Then Brenda works the brewery. She buys grain from Hannah, produces beer that she sells back to the pub for Pat, and she has a friend named Kim. Now Chris, Chris works the bakery.
In the bakery, Chris buys eggs from Thomas, and he buys flour from Aidan at the mill. He makes bread that he sells back to Pat, who works the pub. And then Kim works the recyclery. She collects the organic waste from these different businesses, composts it into fertilizer, and sells that back to Hannah to use out in the fields.

So in the scope of seven people and seven small businesses in a medieval village in the Black Forest, we have a circular economy. And so there are these relationships between the people, as well as the businesses, and the producer-consumer flows involved in those businesses: the different types of products and how they're moving through the graph. We see these relationships.

But there's a problem, because if we were to take this and move it into a relational database, then it looks very different. On the left-hand side you can see an entity relationship diagram, the schema; on the right-hand side you can see six tables. These have been normalized; they have primary keys. And the thing is that by virtue of normalizing this into relational form, we lose the perception of pattern: the key aspects of the relationships in that village, between those people and those businesses. It's no longer something that's immediately available. So for a complex business context, network views would bring the data closer to the people who can make sense of it.

And of course, this is a simple example; there are seven people in this graph. Imagine if there were seven billion people, seven trillion business relationships maybe. As you increase the scale, as you increase the complexity of the relationships represented in the graph, network views make it so that the people who really understand the domain can make sense of it. This involves acknowledging the complexity of the business context. And the goal is that we need to identify patterns in the graph to be able to use them and make informed decisions.
So thinking about patterns, here are some examples. Hannah is relatively new in the village and she'd like to expand her business. She's noticed that her customer Brenda buys a lot of grain. So which other villagers are similar to Brenda? Well, if you look at the graph and you look at some of the patterns there, you find that Chris also sells product to Pat, as Brenda does. And Chris also sells organic waste to Kim, as Brenda does. So maybe the bakery is a good candidate, maybe the bakery is one that's very similar to Brenda. It could be that the bakery could buy unmilled grain and make sprouts or malt or something like that.

Next, Hannah is interested in sponsoring a co-marketing campaign to try to drive demand for grain. Who are the customers of Hannah's customers? If you look at the graph and do some traversal, it turns out that Chris, Pat, and Kim are each a minimum of two hops away. Chris shows up a couple of times by different measures.

Okay. And then a tech billionaire uses time travel to relocate to a medieval village in the Black Forest. And the important question, of course, is: which are the acquisition targets? So you can use this graph. If you run a graph algorithm such as betweenness centrality, which is maybe not so far away from PageRank, a kind of similar centrality measure, with this exact data, you'll see that both Hannah and Chris rank the highest in terms of centrality. They're really the key players, the key businesses in this medieval village.

We could go on, but I want to point out there's a larger article, and I want to shout out to my good friend Jürgen Müller at BASF. Jürgen and I put together this illustration of a village, and we have an article on Medium that goes into more detail.

Now, the background on this is that if you look back to the late 1990s, Dave Snowden and others working at IBM in their consulting business defined something called Cynefin.
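As a sketch of that "customers of customers" traversal, here is the village's producer-to-consumer graph in plain Python. The edges are reconstructed from the illustration above; a graph database or graph library would express the same thing as a query.

```python
# The village's producer -> consumer edges, as described in the illustration.
# Plain dicts stand in here for a real graph store.
sells_to = {
    "Hannah": ["Thomas", "Aidan", "Brenda"],  # grain
    "Thomas": ["Chris"],                      # eggs
    "Aidan":  ["Chris"],                      # flour
    "Brenda": ["Pat", "Kim"],                 # beer to the pub, waste to the recyclery
    "Chris":  ["Pat", "Kim"],                 # bread to the pub, waste to the recyclery
    "Kim":    ["Hannah"],                     # fertilizer back to the fields
}

def customers_of_customers(graph, seller):
    """Everyone exactly two hops downstream of `seller`."""
    direct = graph.get(seller, [])
    two_hop = set()
    for customer in direct:
        two_hop.update(graph.get(customer, []))
    return two_hop - {seller}

print(customers_of_customers(sells_to, "Hannah"))
```

Running this for Hannah returns Chris, Pat, and Kim, matching the two-hop result described in the talk.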
It's a framework for assessing a business context, trying to understand the challenges that the leaders in that business face. And this has been the genesis for certain terms. For instance, if you have a simple case, it's called the known knowns. In a simple business context, you just establish what the facts are and you follow the rules, the best practices. It doesn't take a lot of training to do that. But then you get into a complicated kind of business context: these are the known unknowns. This is where you need some experts to go in and analyze the situation, analyze the data, try to determine cause and effect, and solve for the known unknowns. Through this analysis, you then determine what kinds of trade-offs there are for the decisions to take. And then there's the complex context, the infamous unknown unknowns. This is the context where you really can't determine cause and effect through reductionist techniques. Instead, you have to go in and perceive patterns, build a probe of the situation, and perhaps do some experimentation; understanding what patterns are emerging there is how you make an informed decision.

These are the complex situations in business. This is what we face these days, right? When we're talking about climate, when we're talking about the complexities of global supply chains and the problems there, when we talk about pandemics. These are complex issues. And these are the kinds of situations that leaders and organizations face when they're approaching these kinds of problems.

So let me shift gears a little bit and talk not about technology, but about how it is that people learn. There's a great book by Susan Ambrose from about a decade ago where she talks about the journey that people go on when they're learning a new subject, when they're progressing from being a novice in a particular subject to becoming more advanced, eventually becoming an expert in that field.
Ambrose talks about this in terms of cognitive structures: how people represent what they have learned, really in terms of what you can think of as geometry. When a novice is first beginning to learn a subject, they probably start with some simple facts. These are probably not well connected, maybe without a lot of context. But as they begin to learn more and more about the field, they start to piece the facts together. They build chains of association, and they can start to ask some questions and get more interaction with their environment. As they become a competent practitioner in the field, then what you see are cognitive structures very similar to what we call decision trees in machine learning. And then as a person really gains expertise in a field, the thing you notice is that they understand the category busters. They know how and when and where to break the rules that come out of tree-based decisions. And by virtue of that, their cognitive structures are graphs. This should really be a lesson for those of us who are interested in artificial intelligence going forward: when humans develop expertise in a topic, their cognitive structures for it are graphs.

So to bring this together: when you think about the kinds of challenges that people face in business, and when you think about the journey that people go through when they're learning a new subject, gathering expertise and knowledge in a particular field, this leads toward graphs as an answer for handling complexity, for sensemaking by leveraging graph patterns.

Now, conversely, there's an anti-pattern. From the field of behavioral economics, there's something called ambiguity aversion. This is where some people, when faced with complexity and uncertainty, will do everything they can to avoid it, to sweep it under the rug. This is a known problem in psychology. It's also a known problem in financial markets.
And it's something to be aware of as we address more and more complex contexts in business, especially as we address those with artificial intelligence.

So let me change gears here a little bit and talk about knowledge graphs. I'm sure you've probably heard the term. I want to talk about this in a very general sense at first. You have the notion of a graph where many different entities are represented. Each entity will have a name and some attributes, and then there are links, the relationships among those different entities. There may be some vocabularies that are used to help describe these: effectively, shared definitions that we agree on as standards to describe what kinds of entities, relationships, and values there can be. It's a really flexible way to represent a complex data scenario.

Now the thing is, if you've written code in object-oriented languages, and especially worked with class hierarchies, you've already heard almost all of these terms, probably by different names. So it should be relatively familiar. The other thing is just thinking in terms of shapes. Data objects within a graph are represented as shapes, and so notions from geometry, and looking further, from topology, about how to recognize patterns, become very important when you're working with graphs.

For great primary sources, I would especially reference Claudio Gutiérrez and my friend Juan Sequeda; they had an excellent article at the ACM this year. And you can go back to Natasha Noy and Deborah McGuinness, who of course wrote a foundational paper in this field about 20 years ago. And you can even go back into the 19th century: Charles Sanders Peirce was really describing knowledge graphs back in 1882.

Now, there are different types of graphs. Certainly there's the whole constellation of technologies at the W3C, the semantic web technologies as a category: RDF and OWL and all that.
But then there are also property graphs; labeled property graphs are typically what a lot of graph database vendors use. There's certainly some work in progress to align these two approaches with something called RDF-star, and a lot of progress has been made on that. But one of the points I want to make is that it's not just these two categories. There are actually other types of graphs that we really need to understand to get the full picture and have complementary technologies.

Starting with the Knowledge Graph Conference and that community, I've helped to curate a list: there are more than 40 different graph database vendors currently. Even so, when our team is out in industry working in this area, what we hear from industry customers is that they really prioritize scalable graph compute, more so than, say, database features. Most enterprise organizations already have databases; they already have SAP, IBM, Oracle, SQL Server, etc.

And this recalls, to me, the first time that I returned to Madrid for my second year at Big Things, circa 2015. We had MPP vendors who were doing distributed databases based on Hadoop, and then Spark came along and just cleared out that field. The thing was, at the time, what businesses needed, what people needed, was more horizontal scale-out for compute. Spark provided something that supported the business use cases very directly, and it didn't come along with a lot of the heavyweight management that the MPP vendors were typically requiring. I think that we're at a very similar point in time. There are a lot of graph database vendors, very interesting technology, very amazing work. But the demands from industry are for more of this kind of horizontal scale-out, and so we'll probably see more emphasis on that.

To add to it, and I know I've shown this slide in a talk before: Eric Jonas and other folks at UC Berkeley, from the same lab that came out with Spark and Ray.
They did this paper really looking at the physics driving trends in cloud computing, and then the economics driving trends that follow from the physics. And one of the major things they've said is that there's a large, long-term arc of decoupling computation from storage. They scale differently, they're priced differently, they're used differently. And the bottom line there is that if you're trying to wrap computation in the guise of a database, you're probably going against the trend.

Also, I really like this paper from Google introducing PathQuery, from their work on their graph infrastructure. In part of it they do a compare and contrast of different graph query languages, so SPARQL, Cypher, Gremlin, against their PathQuery work. It's notable that Google has introduced some trade-offs and constraints into their language, which then provide better guarantees for the behavior of graph technologies at scale on distributed systems. Having worked on implementing SPARQL query engines and Cypher query engines, I've got to tell you that I think we'll see more and more projects along the lines of what PathQuery has been doing.

Now, another question. When you ask people what the shape of their data is, they'll typically describe something that's a rectangle, right? It's a table, it has rows and columns, it's a matrix, it's a spreadsheet. And that's a good way to look at data, except when it's not. The problem is that if you look under the hood at a spreadsheet, the thing that makes a spreadsheet work internally is its dependency graph. That's how it's calculated. And the thing that makes a SQL query work is the query plan, which is a directed acyclic graph. On top of that, it also has a schema, which is an entity relationship diagram, another kind of complex graph.
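The spreadsheet point can be made concrete with Python's standard library: the cells form a dependency graph, and recalculation is just a topological sort of that graph. The cells and formulas below are made up for illustration.

```python
from graphlib import TopologicalSorter  # stdlib, Python 3.9+

# A toy spreadsheet: each cell maps to the cells it depends on.
# This dependency graph (a DAG) is what actually drives recalculation,
# even though the user only ever sees a rectangle of rows and columns.
deps = {
    "C1": {"A1", "B1"},   # C1 = A1 + B1
    "D1": {"C1"},         # D1 = C1 * 2
    "E1": {"C1", "D1"},   # E1 = C1 + D1
}

# Any valid recalculation order puts A1/B1 first and E1 last.
order = list(TopologicalSorter(deps).static_order())
print(order)
```

The same idea, with cost estimates attached to the nodes, is what a SQL query planner walks when it executes a query.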
So there are tools that simplify data for us, but they actually have graphs internally. And one of the problems that we run into is that by obscuring the graph, we are obscuring the metadata and the business rules, which are so important for leveraging data. And that creates a kind of tech debt.

And of course we see this. I don't know if you've heard of her, but there's a researcher, Felienne Hermans, out of the Netherlands, who did her work on understanding spreadsheets. Really fascinating: out of the Global 2000, about 95% of firms do the last part of their tax reporting based just on spreadsheets. And those spreadsheets change from quarter to quarter. There's not a lot of consistency. There's a lot of tech debt, a lot of hidden information in terms of business rules and metadata, information that is so crucial, that gets obscured.

Now, Gartner had been kind of lukewarm talking about graphs for a long while, but they did an abrupt about-face in February of this year. They say that by 2025, graph technologies will account for 80% of data and analytics innovations, up from 10% this year. And 50% of the inquiries that Gartner receives about artificial intelligence involve the use of graph technologies. So there's been a real change in the thinking there. And what it is, is this point about exposing metadata and business rules, not hiding them. The very thing that gets obscured by relational databases and spreadsheets, and which has led to a lot of tech debt, is what is so valuable in complex business contexts.

Now, when you talk to people about use cases... I like to study and produce case studies in this field. But when you talk to people about use cases for graph technologies, they'll probably say, well, that's for Facebook, that's for Google, for Amazon. It's true that the tech giants all have large graph practices. But what I want to show is that really the larger graph opportunities are elsewhere, outside of technology.
You find that these kinds of applications are very strong, especially in verticals like finance and pharma and manufacturing. And think about it: the data exhaust from one factory alone can be measured in exabytes per day. So there are a lot of opportunities in industry, not just in tech firms.

I want to provide a sampler of some of the public use cases. I'm really super impressed with Barabási's team and what they're doing with network medicine; I think that out of everything I've seen, this probably has the most impact for people long term. Certainly in terms of drug retargeting and drug discovery, you see this work at Novartis and AstraZeneca and Roche and others. I really like this video from Stephan Reiling talking about how they use graph-based machine learning to guide some of their discovery and research priorities. Then in manufacturing, there's BASF and Siemens and Bosch and others. I really like this video from my colleague Janice Ellis talking about BASF and uses for very large graphs in manufacturing. And then in fintech and finance, you've got Refinitiv and Bloomberg and others, all making really large-scale use of graphs. And also, to talk some about the tech companies: at Lyft, there's a great case study by Mark Grover in an article on Medium about data context and compliance and what they learned. And there's Luna Dong talking about the challenges of the product graph at trillion-node scale at Amazon.

When you're working in graph data science, it's relatively similar to the typical workflows that you would see, with a few changes, so I'll point these out. One is that you start out by integrating data sources, probably across different business silos. Integrating the data into a graph is really one of the big challenges, but also one of the big opportunities. And it probably comes from other databases.
So this is why I think that graph technology is maybe not so much about being a system of record; it's more about data integration and overlay. And this is where I differ, maybe, from some of the graph database vendors.

Once you get to the stage of preparing and cleaning up the data, which of course is a big part of data science, then you see some real differences with graphs. You're worried about different things, in addition to what you were already worried about with data quality and cleanup. Things like transitive closure for inheritance: that's very problematic to do in a relational database, but now you can do it with graphs. Cycle detection is also a kind of data quality problem in graphs, and it's important to remove cycles that are there by error before you go and run some of the algorithms. That's a very common kind of data preparation. Similarity analysis, and with that, deduplication: if you try to do these with SQL, it's going to bring the database to its knees, but with graphs you have much more efficient ways to accomplish them.

Once you've prepared your data, you start to apply perhaps more semantic overlays. These can be used for quality checks. But then when you get into use cases, there's some similarity and some difference with what you see in data science generally. Certainly there's a lot of great work with visualization. When you can put interactive graph visualizations in front of a business unit leader, someone who knows the domain that they're working with, the patterns just really pop out of the graph. Also dashboards, of course, related to that, and other areas too in terms of the modeling that we do. But working with graphs, you also need to be able to work with graph algorithms.
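As an example of that kind of data preparation, here is a minimal depth-first cycle check in plain Python. The small parts graph at the bottom is hypothetical, standing in for an "is-part-of" hierarchy where an erroneous loop would break a transitive-closure pass.

```python
def find_cycle(graph):
    """Depth-first search for a cycle in a directed graph (dict of
    node -> list of successors). Returns one cycle as a list of nodes,
    or None if the graph is acyclic."""
    WHITE, GRAY, BLACK = 0, 1, 2          # unvisited / in progress / done
    color = {node: WHITE for node in graph}

    def dfs(node, path):
        color[node] = GRAY
        path.append(node)
        for nxt in graph.get(node, []):
            if color.get(nxt, WHITE) == GRAY:      # back edge -> cycle found
                return path[path.index(nxt):] + [nxt]
            if color.get(nxt, WHITE) == WHITE:
                found = dfs(nxt, path)
                if found:
                    return found
        color[node] = BLACK
        path.pop()
        return None

    for node in graph:
        if color[node] == WHITE:
            found = dfs(node, [])
            if found:
                return found
    return None

# An erroneous "is-part-of" loop that would break a transitive-closure pass:
bom = {"wheel": ["axle"], "axle": ["chassis"], "chassis": ["wheel"]}
print(find_cycle(bom))   # ['wheel', 'axle', 'chassis', 'wheel']
```

In practice a graph library's built-in cycle detection would be used; the point is that this check runs as an ordinary graph traversal, rather than as a punishing self-join in SQL.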
And there are a lot of very, very useful techniques there, ways we can gain insights through graph data science by leveraging particular kinds of algorithms. Certainly there are graph neural networks and geometric deep learning; for building deep learning models, you can think of graphs as a kind of feature store for training those models. But also in industry, it's very interesting to see so much work in operations research. So think of graphs also as a feature store for the parameters that go into optimization models: linear programming, dynamic programming, these kinds of things, what you need to be able to control a factory. Operations research, actually, is where AI and OR start to align more.

In terms of understanding these kinds of business use cases, I like to use this as a lens. There's a triangle with three vertices: know your business, know your data, know your customer. When you look at the use cases in the different verticals... well, know your customer is a regulatory requirement in fintech; it's a pretty big deal, KYC. Market intelligence is somewhat related to that, risk analysis too, and you can go around the circle here. It's a way of understanding these use cases, and really, what is the goal that you're trying to reach with graph technologies.

So, in summary of graph thinking: if you have a simple business context, you can just establish the facts and follow the rules, the best practices. It doesn't take a lot of training to do that. If you have a more complicated kind of business scenario, you need to go in and do some analysis; you're probably using a data lake. But when you have a truly complex kind of context, this is where real experts need to be able to go in and discern patterns, to probe the situation, sense what kinds of patterns are emerging, and then make informed decisions about that. And this is where you probably need a graph. This is how teams of people and machines learn, and so it's how organizations learn.
This is where graph thinking plays such a vital role.

Okay, real quick, let's shift over to thinking sparse and dense. This is sort of the flip side: how do you apply the technologies? Not to go into too much math, but in algebraic graph theory we have this notion of transforms between rich, complex graphs and what we call algebraic objects. So we can go from a graph and transform it into a vector, or into a matrix, or into a tensor, but we can also transform back from these objects to put information into the graph. And there's a lot of great work that gets done that way. Non-negative matrix factorization, factorization methods in general, are where you take a graph, represent it as a matrix, run some graph algorithms on it, and then you can get a lot of insights that way. This has been how people have worked with graphs for a long time, because working with tensors used to be very expensive. But of course you've probably heard of tensors being used for deep learning and other areas now.

The point that I want to make is that sometimes you must blend the symbolic representation with a numeric representation. Certainly, when you're working with graph algorithms or deep learning or visualization, that requires the numeric representation. And other times you need the symbolic: when you're working with natural language, regulatory compliance, human-in-the-loop, domain expertise, explainability, these are quite symbolic. So the two have to go together, not just get shoved into a matrix.

So I like to think of this as thinking sparse and dense, with all apologies to Daniel Kahneman. This is where, when you're in a data workflow, you really take into account the trade-offs in how you represent the different stages. Sometimes you're working more sparse, and sometimes the data must be put into a more dense form, and then other times transformed back out. This is crucial for taking advantage of hardware accelerators.
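As a concrete picture of these transforms, here is a minimal round trip in plain Python between a graph's symbolic form (an edge list, reusing names from the village illustration) and an algebraic form (an adjacency matrix). A real workflow would use a NumPy array or a SciPy sparse matrix rather than lists of lists.

```python
# A tiny graph in symbolic form: named nodes and an edge list.
nodes = ["Hannah", "Brenda", "Pat"]
edges = [("Hannah", "Brenda"), ("Brenda", "Pat")]

index = {name: i for i, name in enumerate(nodes)}

# symbolic -> numeric: build the adjacency matrix
A = [[0] * len(nodes) for _ in nodes]
for src, dst in edges:
    A[index[src]][index[dst]] = 1

# numeric -> symbolic: recover the named edges from the matrix
recovered = [(nodes[i], nodes[j])
             for i in range(len(nodes))
             for j in range(len(nodes)) if A[i][j]]

assert recovered == edges   # the transform loses nothing; it changes form
```

The matrix form is what factorization methods, spectral algorithms, and GPU kernels operate on; the symbolic form is what people, rules, and vocabularies operate on.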
For instance, if you're doing deep learning, when you're training your convolution layers, you pack all the data, vectorize it into a very dense representation, and crunch it on GPUs. But then when you go to calculate your loss function, that's aggregation, and that's typically bandwidth-limited, not compute-limited, and that's more sparse. And we see this throughout workflows in data science. When you're doing data preparation, it's probably very sparse; when you're training a model, it's probably dense; but then when you go back out to the customer, you have to bring it back into a sparse, symbolic space.

So I'll just leave a pointer here. Earlier this year, Dean Wampler and I wrote a mini-book, working with the engineering leads from the open source machine learning projects at NVIDIA, and I'm really grateful to get to work with those folks. There's a free download for it, and we explore this idea of thinking sparse and dense.

And if you haven't heard, definitely watch this space: there's something called Legion that's been developed at Stanford over the past few years, and also Legate, more at the application layer. It's a next generation of cluster scheduling for PyData. The idea is, how can we take supercomputing and make it accessible just with Python for data science? It's much more aware of memory objects and task flows, so it's a next generation beyond, say, Dask as a scheduler, so you don't waste so much time moving data around between different CPUs, GPUs, that kind of thing. There are some great resources here, especially from Michael Bauer.

And the reason we care is because graph neural networks have become a very big deal, this notion of geometric deep learning. Michael Bronstein and others, I think, have really written the canonical papers in this field. There's also really great work in terms of motif mining and motif prediction, and I'm enjoying the work coming from several groups in that area. And also, talking about graphs.
There are a lot of things you can do with graphs. You can work with the W3C technologies, query languages, visualizations, graph neural networks. The problem is that these different camps don't talk to each other very much, and their software doesn't really talk to each other very much either. This is a major hurdle to overcome.

So I've been working on a project for the past year. Part of this is for commercial reasons, working in manufacturing as I mentioned, but it's possible to leverage open source to produce really large-scale, distributed, multi-tenant graphs using Ray, using Arrow, and to leverage the hardware to accelerate this. Not all of this is open source yet, but we're working toward that.

There is an earlier project that we started last October that is open source. It's called kglab, and it's our work toward integrating many different areas of graph data science and building graphs, making it more Pythonic, if you will, to do this kind of work. We have this all on GitHub, with Jupyter notebooks for each different kind of area that show examples you can get in and use. Certainly, we do a lot of serialization in different formats, and we've found that Parquet is a couple of orders of magnitude faster than the others. We do a lot of work with visualization, and certainly our friends at Graphistry are doing some integrations toward that. You can query with SPARQL, and then you get back a pandas DataFrame, so again, very Pythonic. For validation, SHACL shape descriptions are like unit tests for graphs; I'm really excited about SHACL. There are graph algorithms with cuGraph and NetworkX, but also training graph neural networks with PyTorch Geometric and DGL. And I definitely want to shout out to my friends building the really excellent open source work for integrating PyTorch Geometric with Hugging Face and others. We also have probabilistic work.
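To illustrate the "unit tests for graphs" idea, here is a toy sketch in plain Python. To be clear, this is not SHACL itself nor the kglab API; real validation would declare SHACL shapes and run them with a library such as pySHACL. The shape, the node properties, and the names below are all hypothetical, reusing the village for flavor.

```python
# A toy stand-in for SHACL-style validation: a "shape" declares which
# properties a node of a given type must carry, and validation reports
# every node that violates its shape -- much like unit tests for data.
shapes = {
    "Business": {"required": {"name", "owner", "product"}},
}

graph_nodes = [
    {"type": "Business", "name": "brewery", "owner": "Brenda", "product": "beer"},
    {"type": "Business", "name": "mill", "owner": "Aidan"},   # missing "product"
]

def validate(nodes, shapes):
    """Return (node name, sorted missing properties) for each violation."""
    report = []
    for node in nodes:
        shape = shapes.get(node.get("type"))
        if shape is None:
            continue                      # no shape declared for this type
        missing = shape["required"] - set(node)
        if missing:
            report.append((node.get("name"), sorted(missing)))
    return report

print(validate(graph_nodes, shapes))   # [('mill', ['product'])]
```

The appeal is the same as unit testing: the constraints live alongside the data, and a validation report tells you exactly which nodes fail which shapes.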
We've done integrations with PSL, probabilistic soft logic, for applying probabilistic rules and measuring uncertainty in different regions of a graph. We show some examples of that.

And finally, to wrap this all up: the idea is that there are different kinds of inference that you can use in AI, and we want to provide ways to integrate these, so you can mix and match, bring them together, and make complementary solutions for more kinds of hybrid AI, if you will. Really bring together what you can do with the query languages, as well as with graph neural networks, as well as visualizations and probabilistic graphs, etc. So, I wish you well in working with graphs. And if you want to get a hold of me, here are some ways. I'd love to talk to you further. Thank you very much.

Paco, thank you so much. Can you hear me there, Paco?

Yes, yes, yes.

What a fascinating presentation. I loved it, especially the beginning with that village, which was so clearly, so well explained, you know. And I'm a lawyer by profession, so imagine. Fantastic. Paco, I guess in that village, you are the cider brewer.

Exactly, yes.

So for those who don't know, Paco brews cider in his free time in California. So Paco, next year, when you come to the Big Things conference in person, bring a couple of bottles of that cider of yours. It was fascinating, so clearly explained, but there's so much to take away, very intense. In this sense, Paco, we don't have much time for questions, but one question they ask you... and please, I'm reminding the audience to keep sending your questions in the chat well in advance; otherwise, we don't have time at the end. Paco, they ask: when does a graph become a knowledge graph?

Yeah, interesting. I think the short answer there is that we have these ideas of standard vocabularies, and this is a way for people to have common, shared definitions.
So that when we start to describe data in a graph, we're measuring it with the same units, or we can describe which units we're using. So once you take the data and start to connect it together and have relationships, you get a graph. But then when you start to apply some of these vocabularies, this metadata, that's when you start to have more of a knowledge graph, because that's when you can operate on it with different types of AI tools for inference and really gain those kinds of capabilities. Okay. Paco, they also want to know what you are currently working on, if you could advance some of the news you will bring us for next year's edition, some of the latest things that fascinate you and don't let you sleep at night. Wonderful. Well, you know, we're working with a manufacturing firm in Europe. And certainly, we have a lot of colleagues in Madrid, so I want to shout out to my friends in North Madrid. But we're looking at these use cases in manufacturing where graphs are extremely vital for understanding things like waste mining, and understanding sustainability and carbon footprint across a complex supply chain of many, many vendors worldwide. So as you get into these really difficult kinds of industrial problems, this is where graphs can really have a lot of real-world impact. Okay. Fascinating. And you said it can be applied in any vertical, I mean, in every sector. You mentioned examples in finance. You also gave us, I have written down here: pharmacy, finance, manufacturing. In any sector? Or is there a sector where graph thinking couldn't be applied and is not recommended? Well, I think that the litmus test for that is: if you take some people who are experts in a field, and you put them in a room in front of a whiteboard, and ask them to describe the problem they work on every day in their business, I guarantee they'll start to draw some bubbles on the whiteboard and connect them. They'll start drawing a graph.
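The distinction drawn in the knowledge-graph answer above, that connected data becomes a knowledge graph once it is described with shared vocabularies, can be made concrete with a small Turtle sketch. The entities come from the village story in the talk; schema.org is used here purely as an example of a standard vocabulary, and the URIs are made up for illustration:

```turtle
@prefix schema: <http://schema.org/> .
@prefix ex:     <http://example.org/village#> .

# The edges alone are just a graph. The schema.org terms are the
# shared vocabulary (metadata): any tool that understands
# schema:Person and schema:knows can now reason over this data.
ex:Brenda  a schema:Person ;
           schema:knows    ex:Kim ;
           schema:worksFor ex:Brewery .

ex:Brewery a schema:Organization .
```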
So this is how people represent domain expertise; this is typically how experts think. And I think that's the virtue of why graphs are so powerful: because they're so close to how human experts think about the problem. Absolutely amazing. Actually, we should send those experts, those businessmen and women, that explanation of your village at the beginning, because when you compare how Hannah and Aidan and Chris are related, and then you put it on a piece of paper, which is a square, you lose the perspective. You said at the beginning, and I quote, because to me that was clearly the definition: we lose the perception of pattern, which is what we're looking for at the end of the day, right? Paco, just a couple more; I think we still have a bit of time, but not much. You mentioned a few books, references from you and colleagues of yours. You've given us a lot of information and we didn't have time to catch them all. So they ask you: which one do you think gives the best overview, or is the one to begin with? Because our viewers are quite technical, quite techy. Which one would you choose? If you go to your desert island, you remember that program, Desert Island Discs, where you can only take one book to a desert island: which book would you take? Well, I will post the link to my slides; all these links are in the slides. There are a lot of references. Excellent. And I would definitely point out there are a few communities in the world. There's certainly the Knowledge Graph Conference, KGC, and there's a community of 2,000 people on their Slack, so a lot of experts who want to interact and help you out with graph problems. Also, there's Connected Data World, based in London, a similar community and events. In terms of climate science, there's something called ESIP, which is part of the Earth observation agencies, and there's a lot of use of knowledge graphs there. And some other communities as well.
I also moderate a graph data science group on LinkedIn, so see us there too. Wow. And I also recommend everybody check out Paco's website. He has a fantastic website, full of all these quotes and comments. A totally crazy scientist, as he says. He is not the normal speaker; he says what he thinks, all of the time. So check out his website. He has a fantastic drawing of himself, done by somebody else, I guess. So check his website; it's fascinating. We'll stay tuned with you, Paco. Thank you so much for joining once again at this Big Things conference. Next year, we have to meet in Madrid, as promised, hopefully. So thank you, Paco, for joining us all the way from California. And we'll see you next year, if not before then. Thank you, Paco Nathan, and bye-bye. Thank you very much.