Yes, thank you so much, and thank you for having me. It's indeed the last talk of the day, it's nice outside, and I saw a lot of people already drinking beers, so I hope there are still people in the room. Thank you for listening.

Last year my colleague was here and presented Weaviate for the first time: what we're trying to do. We've done a lot in the year since, so today I want to give you an update and show you much more about Weaviate, because a lot of things have changed.

So quickly, the agenda. First I want to talk about what Weaviate is, because we're new on the scene. Secondly, I want to talk about what has changed between last year's presentation and this year. Then I want to talk a little bit about the technology, and last but not least, of course, I want to give a demo and show it in action. All right, that's what I'm going to do.

First, about Weaviate. Weaviate is an open-source smart graph. What do we mean by that? The open-source part is the usual suspects: the source lives on GitHub, and you can run it using Docker, Docker Compose, and Kubernetes; we have all of that out of the box for you. It is smart because of something we call the Contextionary, and if you've never heard of the Contextionary, that's fine, because we invented it, and I'm about to explain what it is and what it does. And I'm going to use one buzzword, sorry for that: it has a serious notion of AI. Sometimes people talk about "AI-first" architectures: can we build systems that have these machine-learning models built into them, so that we can build new solutions
and actually new ideas? That's what we try to solve with semantic models. What makes Weaviate, in our case, what we like to call smart is the built-in semantic model. And the graph part — well, in this room I don't have to walk through what a graph is, but we've chosen to use GraphQL. The reason is that while there are of course a lot of graph experts here who know SPARQL and those kinds of things, sometimes developers find those a little more difficult, and we see that they really like to work with GraphQL. So we fully embrace GraphQL; the query interface is completely, 100% GraphQL-based. That's how we define our smart graph.

With this smart graph you can do three things. The first is semantic search, which is also what I will demo. With traditional search, if I may call it that, we search for keywords: if we write about the company Apple, we actually need to search for "Apple". In Weaviate you don't necessarily have to. If you add a Company with the name Apple, but you search for, say, "the business related to the iPhone", it will still find your Company data object with the name Apple. The second thing, building on that, is automatic classification: we can automatically create edges in the graph based on the semantics of your data objects. And last but not least, knowledge representation. Nowadays this is often referred to as a knowledge graph; I wouldn't say we are necessarily a knowledge graph, but you can create similar representations. Those are the three use cases that we can help you with.

Now, something important to share, based on what we did last year. The best way to explain it is along a bit of a timeline. We saw, of course, a lot of databases in the past that were relational,
right? Just row-column structures with tables. Then of course we got these graph databases — some of those folks are in the room, and they made beautiful databases. A year back we chose to store our information with JanusGraph, and we had that semantic element, the Contextionary, which I will explain, as a feature on top. But last year we decided to just double down on that semantic element. So we got completely rid of the JanusGraph implementation and basically created everything ourselves, which means that we now only have the Contextionary to store that information in. And, spoiler: we're really happy that we made that decision, because now we could really bring something new to the stage, something different, a different way to handle your data objects and work with them. We store everything in a semantic space. Now you might go, "Whoa, semantic space?" — I'm going to explain what we mean by that. Just keep in mind that when I talk about semantic space,
I'm talking about the Contextionary.

Imagine it like this. Say you go to a grocery store and you have a shopping list with four items: a banana, washing powder, an apple, and a piece of bread. If you go into the supermarket and you find the banana, you know that the apple is going to be closer by than the bread or the washing powder. And if you move towards the bread, you know that you're actually getting closer to the washing powder and moving further away from the fruit. That's how we store data in the space; that's the metaphor for the problem that we solve, and we do that with something we call the Contextionary.

I want to give you a little background on where we're coming from and what's different from other solutions out there. If you go all the way back to the 1950s, there was a famous quote from the linguist J.R. Firth: "You shall know a word by the company it keeps." That basically means that the word Paris would be more closely related to France than it would be to, for example, Holland or the US, and New York would be more closely related to the US than to, say, Spain. That went for all words, and a lot of research was done there. Then we jump forward, and with the whole machine-learning boom we saw a lot of work being done on word embeddings: first we got word2vec, then GloVe, and nowadays what the academic realm calls the state of the art, BERT. But if we put on our engineering hat, we really fell in love with GloVe. Why did we fall in love with GloVe? Because BERT has multiple representations of a word in the data set, but GloVe doesn't: GloVe has one vector representation for every word. And the critique that it
often got was this: take the name Apple. Apple can of course mean the fruit, but it can also mean the company. We wanted to solve that problem in a different way, and as engineers we were very happy with the one-vector property, because now we could index those words in a storage mechanism.

So if you start a Weaviate, you can imagine it like this: it's an empty space. You choose a language — not a programming language, but a spoken language — let's say English, and the space is filled with all those words. For example, near "Apple" you find "fruit", you find "company", and you might also find "iPhone". The representations that we store have 600 dimensions, which sounds very fancy, but it mostly has to do with compression and that kind of stuff.

And then the thing that we did is this. Say you store a data object like the class Company with the name Apple, founded in 1976, etc. (in the demo I will show how that actually works). When you send that information, Weaviate creates a string of the words and concepts that are in there, takes those concepts, and funnels them down. How does it do that? It takes the Euclidean centroid of the word vectors and combines it — this sounds fancier than it is — with a logarithmic weighting based on the occurrence of each word. For example, "company" might occur less often than "apple", so we say the word "company" is more important. And we even support word boosting, so that you can say certain words are important in my data object. With that we create our first object in our graph, and it gets its own vector representation. So now we have an empty Weaviate with those words in it, and we store our first data object, and as you can see, there it is: it lives in the vector space.
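The funneling just described can be summarized in one formula. This is a sketch of the idea, not the exact implementation; the symbols are mine, not Weaviate's:

```latex
% Sketch: an object's vector as a weighted centroid of its word vectors.
%   \vec{v}_i : Contextionary (GloVe-style) vector of word t_i
%   occ(t_i)  : occurrence count of the word (frequent words weigh less)
%   b_i       : optional user-supplied boost for the word
\vec{v}_{\mathrm{object}}
  = \frac{\sum_i b_i \, w_i \, \vec{v}_i}{\sum_i b_i \, w_i},
\qquad
w_i \propto \frac{1}{\log\!\big(\mathrm{occ}(t_i)\big)}
```

So "company", occurring less often than "apple", gets a larger weight and pulls the object's position in the space towards it; a boost does the same explicitly.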
That's what we mean. So now if we query for, let's say, a company and "iPhone", it can look in the nearby space and find the data object. Without "iPhone" even being in the data object, we can still find it. And that's the thing we created where we thought: hey, this is our thing, right? This is our golden goose egg, because now we have different ways of creating graphs and actually querying through them.

This example might look something like this. If I have a data set with companies, my GraphQL query would say: get things which are companies, I want to have their names, but explore them by "iPhone". I might get back the result Apple, and as you see, "iPhone" is nowhere in the data object. And if I take that same data set and say, a little more abstractly, explore these companies on the concept of "Redmond", I might get back Microsoft. That's how we structure our graph.

So that's basically what Weaviate is, and as a developer Weaviate comes with a few features. The first is that the Contextionary comes out of the box: you don't have to do your own training, you don't have to set it up, whatever — it comes out of the box, or rather in the container, I should say. Adding data happens through the HTTP API, and querying data through the GraphQL API. It's completely containerized, so you can run it wherever you want. And because we use that vector space, it's very, very scalable: you can grow that space tremendously big. I think the biggest one we ever tried was a few billion objects, so it gets pretty big. And something that we have in the pipeline — maybe I can show you this next year — is that we can also create peer-to-peer networks of Weaviates, so that we can point to semantic elements in different graphs and don't have to agree anymore on our ontologies or our schemas. But that's in the making.

So, a little bit about GraphQL.
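To make the company example above concrete, the query might look roughly like this in Weaviate's GraphQL API as it was at the time; the exact filter syntax may differ between versions:

```graphql
# Find companies related to the concept "iPhone" — this may return
# Apple, even though "iPhone" appears nowhere in the stored object.
{
  Get {
    Things {
      Company(explore: { concepts: ["iPhone"] }) {
        name
      }
    }
  }
}
```

Swapping the concept for `"Redmond"` would, per the example, surface Microsoft instead.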
When I demo, this is how our graph is structured; this is the UX, if you will, of our GraphQL. We have a Get function (the other one is an Aggregate function, but I'm going to talk about Get). We have a semantic kind, where we make a distinction between Things and Actions — so nouns and verbs. Then we have the class, the class's properties; a property might be a reference, and then the property itself. And on top of that you can have these semantic filters, like the explore filter you see there: you can search for concepts, and you can even move away from or towards concepts. I will demo that to you in a bit.

So, the demo. Now you might want to see how that actually looks. What I did is spin up a Docker container. If you want to do it yourself, go to our website, semi.technology, click Weaviate, and you'll find all the documentation. The installation gives you just a bare Weaviate, but what I'm going to demo is this news-publications data set, and if you click that one, there's just one simple docker-compose command that you can run to play around with it yourself. There's a meta endpoint, which I'm just hitting to make sure — yes, it is running.

So let's first look at a schema. This is an example of a schema. Here you see I have the class Publication, and it has the property name — the name of the publication — but also, for example, the headquarters geolocation, which is our geo-coordinates type, has-articles, etc. This is how we structure the schema, and that's important because we will see it come back when we use GraphQL to query. And the things that we actually store look like this. For example, you see an Article; the article has an ID, and a beacon, which is a reference in our graph. Why do we call it a beacon? Because we do it in the space;
it's a beacon in the space. Then you see, for example, a summary of the article, the title of the article, and the URL the article actually comes from. That's how it's structured. We've created a simple GUI that you can use to look through and search the graph and visualize things, but I want to dive into the GraphQL queries.

Let me show you a simple query. If I say: get Things, Publication, and I want to see the name, that would be a valid query, and you see Vogue, Financial Times, Wired, The New Yorker, The Economist, etc. Now what I can do is say: I want to explore for the concept, let's say, "business", and I'm going to limit it to three results just so it's easier to read. So it's the same query, but exploring on the concept of business. If I run this query you see Financial Times, The New York Times, the International New York Times, etc. But the word "business" is nowhere in there; that's what comes from the Contextionary. Or if I say "fashion" and run it, you see it starts with Vogue, for example. That's how we've structured it and how it works.

And that doesn't only go for small strings, but also for larger text objects. For example, if I say: get Things, Article, show me the title of the article, and run the query, you see all these articles about a variety of topics. You see Brexit, so you can tell when we actually pulled this data, but it's just a variety of topics. Now I say: I want those articles, but I'm going to use the explore function again, with the concept, let's say, "music", and limit the results again for the sake of readability. So it's the same query, but now on the articles.
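The article query from the demo, sketched in the same approximate syntax (exact argument names may vary by version):

```graphql
# Articles semantically closest to "music", limited to three results.
# Matches do not need to contain the word "music" itself.
{
  Get {
    Things {
      Article(explore: { concepts: ["music"] }, limit: 3) {
        title
      }
    }
  }
}
```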
So if I run this you see — fair enough, the first one has the word "musical" in it — but then there's one about Gwen Stefani, and one about John Lennon. The word "music" is not necessarily in there, but it organizes the results like that automatically.

Now, if you want to filter further in this graph, there's a question we had: how do we do pagination? Because if you have a 600-dimensional space, what would be "the next page"? What we've done is this. We can, for example, move towards a concept. So I can say: move towards the concept of, say, "Beatles" — I guess you already know what will happen if I do that — and I give it a certain force. The force is how strongly you want to push towards this concept inside the vector space. It's a little arbitrary, but let's say 85%. If I do that, you now see that the article about John Lennon comes first. And if I'm more of a Stones person and hate the Beatles, then of course you can also move away from the concept of "Beatles": same query, and John Lennon is gone.

Now the question is: in the traditional graph sense, people are going, "Yeah, but I haven't really seen the graph in action yet." That's simple too, because you can say, for example, has-authors, and then on Author, the author's name. This is how we structure the graph. So now you see the graph object here: first the title of the article, then has-authors, and then the actual authors that are related to this article.

I think, if time allows — can I still quickly show one more thing?
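The steering and the traversal just shown can be sketched like this; the `force: 0.85` is the 85% from the demo expressed as a fraction, and the syntax is approximate:

```graphql
# Explore "music", pushed towards "Beatles", then follow the
# HasAuthors reference to get each article's authors.
{
  Get {
    Things {
      Article(
        explore: {
          concepts: ["music"]
          moveTo: { concepts: ["Beatles"], force: 0.85 }
          # use moveAwayFrom with the same shape to push results away
        }
        limit: 3
      ) {
        title
        HasAuthors {
          ... on Author {
            name
          }
        }
      }
    }
  }
}
```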
Yes. So there's one more thing I wanted to share with you, which I completely forgot, because there's another problem that we also tried to solve, and I want to show you that. Quickly going back to the publications: get Things, Publication, and then the name of the publication. When you glance over this, you might have noticed that among all these publications we have the International New York Times, The New York Times Company, and The New York Times — the same thing three times. That's a problem, because of course you want to represent concepts, but in the database we have the same concept represented three times. So we have something for that: we can group concepts together. We say: group, with type merge, to merge them together, and we give it a force — how widely do we need to look in the vector space before merging. Let's say five percent. If I now run this query, you'll see that it merged the International New York Times, The New York Times Company, and The New York Times together. And if I now do a graph query — has-articles, on Article, the title of the article — it even merges the articles from those different publications into the same concept. So that's what we have; that's how it works.

Oh, there are way more features — the automatic classification I didn't even get to show you — but you can play around with it yourself, because this software is open source. You can set it up yourself and create your own graphs, semantic graphs I should say. That's my story in a nutshell. Thank you all for listening. And if you like it, I have just one question: this is our website; if you go there, you can sign up for the newsletter if you want to learn more, or you can click the GitHub star button, if I may promote that a little bit.
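For reference, the grouping query from the end of the demo, in the same approximate syntax:

```graphql
# Merge publications whose vectors sit within 5% of each other,
# collapsing e.g. the three New York Times variants into one.
{
  Get {
    Things {
      Publication(group: { type: merge, force: 0.05 }) {
        name
      }
    }
  }
}
```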
Thank you very much for listening.

[Audience question, partially inaudible: about how this works with longer texts if GloVe only covers single words]

Great question. Let me start with the answer to the first part — and sorry if I went over that too quickly; everything I told you is also on the website in detail. What's happening here is that we always use the same algorithm. If you have a data object with a longer sentence, like the summaries or titles of the articles that you've seen, it applies the same algorithm. First, take all the individual words. Then find the center position between those words. Then weigh them based on occurrence, so certain words are seen as more important than others, and the centroid moves towards those. And then there's the optional word boosting, where you can say: in this specific case this word is very important, so move more towards it. That's how we create those vector positions, regardless of whether you're querying or adding data. That's also why we became agnostic about the fact that GloVe is about single words.

The first prototype we did, way back, was very simple. We said: okay, I have the word "apple", show me what's nearby. And as you would expect, GloVe says: I found "iPhone", but I also found "fruit". Then we did something very simple: okay, now go and sit in the center between "apple" and "iPhone", and show me again what happens — and you see that that's actually successful. If I may, I can quickly show that; I'm the last speaker, so... go ahead? Thank you.

There's a Contextionary endpoint where you can ask for concepts. If I now literally do what I just said — if I ask for "apple" — then you see Apple, iTunes, Google, Preview, and so on. Now, of course, in this example
we don't see the fruit. But let's say I do "apple" not as the company but as the fruit, so I concatenate "apple" and "fruit": you see how the results now start to get better and better. That's how the algorithm works, and you can play around with this yourself on this endpoint.

[Audience question, partially inaudible: about running queries at scale]

Yeah, sure. We are of course also a business, so the core is open source, but we've built a shell around that. We currently have six companies using this at large scale, in a variety of industries: retail, oil and gas, those kinds of things. These graphs get pretty big, and especially if you scale the Kubernetes cluster, it's fast — which is something I could now claim we cleverly architected, but we actually got it for free, because the data model is just vectors, only vectors, and that's very fast to scale and to search through. So the answer to that question is yes.

[Audience question, inaudible]

I don't know, and I love the idea, so we're definitely going to try that out. It's a great idea; we haven't tried it yet. What we currently do is that you have a Weaviate in one language — Dutch, French, English, of course — but we haven't tried that. So if you don't mind, that would be fantastic; or of course you can try it yourself, or we can do it together. It's a great idea, and I don't know.

Thank you. Thank you.