Thank you everyone. I want to welcome you all to this online edition of Open Belgium. I want to quickly thank our three main sponsors, Monodesign, Microsoft and Agents Hub, but now I want to give the floor to Ad for this very interesting session. Cool, thank you Astrid. So, practical steps towards distributed data. I'm going to work on the premise that we all know that having our data stored in centralized databases is inherently a bad thing. As the open data preacher says, data is power, and if we as users want to be able to use our data, then we need access to that data. Having our data stored in systems over which we have no control — systems where the data is virtually held hostage in an attempt to control us, to steer us towards using as much of the platform as possible, or at least to create as much value for the platform as possible — is vulgar at best, and at worst it's a risk to our society as a whole. So let's say that that is a bad situation. It kind of looks like this right now: there are a few databases, they contain a whole lot of data, and the users — well, they have nothing. The users currently have nothing. And we want to progress to a world where, as a user, by default at least, we would have access to the data directly. We would have it readily available to us, under our control. It should be our data. And that world looks a bit different. It looks a bit like this, where there are many, many users and they all have their own tiny little database. And that might sound like a pipe dream. If we look at our current world with all of these very large databases, how could it ever be that we can go back to smaller systems? And "back" is the essential word there. If we move back somewhat over a decade or so, we'd all have our own systems.
We'd all have our own — we probably had CDs, and most of us would rip those CDs and play them on our own computer. And you could play the same ones in your car. So we had standards, and we could choose which applications we wanted to use. So although it may sound like a pipe dream, if we go back 15 years, this was just reality. So we should be able to progress back to it. And we've noticed that we can. So in this talk, I want to go over the various steps that we took, or that we are still taking, to move towards that — to move towards having the data with the end users. So what are those steps? Let's just cut to the chase. Right now, most of us — even this whole group of people — when we build applications, we actually build them as data silos. That's the reality of how we normally build applications. And although our data silos may be smaller than some of the large social networks, and maybe our intentions are better, users still don't really get control over that data. We still build silos, kind of. So taking that into account, how do we move forward? These are the fairly simple steps that we figured out, steps that we've encountered and that, at least for us, seem to work. They do come with an asterisk, probably many of those, but they seem to work fine. The first thing that we do is identify the data that we want to talk about. And we identify it in a way that would allow us to share that data. We search for some sort of ground truth, a truth that we can all agree on. How do we do that? We do that by creating a semantic model. And a semantic model is basically a way of defining all the information in the way that others have described it, so that we're not the only ones describing it that way. So, kind of, no bike shedding. Once we have that semantic model, we start building applications on top of the semantic model directly.
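The idea of a semantic model as shared ground truth can be made concrete with a tiny sketch. Below, plain Python tuples stand in for RDF triples; the predicate URIs come from the public FOAF and schema.org vocabularies, while the `example.org` identifiers are made up for illustration:

```python
# Represent facts as (subject, predicate, object) triples, reusing
# vocabulary URIs that other parties already understand.
FOAF = "http://xmlns.com/foaf/0.1/"
SCHEMA = "https://schema.org/"

triples = [
    ("http://example.org/people/1", FOAF + "name", "Ada"),
    ("http://example.org/people/1", SCHEMA + "worksFor", "http://example.org/orgs/42"),
]

def objects(triples, subject, predicate):
    """All objects for a given subject/predicate pair."""
    return [o for s, p, o in triples if s == subject and p == predicate]

print(objects(triples, "http://example.org/people/1", FOAF + "name"))  # prints ['Ada']
```

Because the predicates are shared URIs rather than private column names, any other application that knows FOAF or schema.org can interpret these facts without coordination.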
We use that to build just regular applications, just straightforward applications. And by doing that, we get to learn all of the standards that come with distributed knowledge. We get to know the standards of linked open data. And it doesn't have to be open data that we talk about at this level, not at all, but it can be. You should know the standards, though. Standards are important. We ourselves have built many large applications on top of this. And sometimes we just stay with that one application on top of that one data set. We build it ourselves using microservices, and I think that helps. I think it helps to build these structures with microservices at this stage. But I don't think it's a requirement. So if you don't like microservices — and there are a lot of downsides to microservices as well, you could have very good reasons not to want them — well, you have a good excuse. You don't have to use microservices. As we progress with these applications, we notice that because we have the semantic model, and because we've defined it in a very clear way that's understandable both for us and for business, and because we've basically thought about it for a long time and figured out what the extension points of this thing would be, we start building smaller and smaller applications on top of it. And this makes sense, right? We still have that one large database, and we still have that one big shared model. But because multiple questions pop up, because people have their own specific constraints — can I have an admin interface? I would just like to see a summary of the data — it becomes much more feasible to build that, because we know this model will stay put. So we know that if we build it today, it will keep running tomorrow. And as we make these smaller applications, there's this realization that these smaller applications work on a bigger domain model, but they don't always consume the full data set.
We have a larger domain model, but we only consume parts of it. Why is this all in one database? So naturally, as humans — as people using the applications, as people architecting them and building them — it becomes more and more obvious that we can split off data. We can split data into separate stores. Most often it starts by us saying: well, actually, we have this data, we want to share it with another party. And in reality, the parties we communicate with initially generally aren't the most linked-data-minded parties. That's not what you start with, because the broader community generally doesn't start with that sort of linked data. But we start sharing data. We push data outwards and we receive new data. And there we now do often encounter semantic data sets. So we start importing the semantic data sets. And then it may be that we decide to import the data set, or it may be that we choose to query it live. Both options exist. And we gain experience with that. And as we gain experience and as this thing evolves, we notice that the data sets become smaller and smaller. There are specific data sets about contacts now, for instance, that gain attention at the Flemish government. And with these data sets becoming smaller, you notice that you're querying more separate data sets. And you get to choose, and you get to learn on a very, very safe basis what the challenges are of working with such distributed knowledge. When do I cache it? When do I go and fetch it live? What are the issues with it? And they're all very feasible issues. But the only way in which we'll achieve this is by gaining the experience ourselves. So who am I to tell you this? Because it could be anyone, right? I'm going to jump forward. All right. Who am I to tell you this?
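The import-versus-live-query trade-off just mentioned is often settled with a middle ground: fetch live, but not more often than needed, via a time-bounded cache. A minimal sketch, where `fetch_dataset` is a hypothetical stand-in for the real HTTP call to a remote registry:

```python
import time

CACHE_TTL_SECONDS = 3600  # refetch remote data at most once per hour

_cache = {}  # url -> (fetched_at, data)

def fetch_dataset(url):
    # Hypothetical stand-in: a real system would do an HTTP request
    # to the remote base registry here.
    return {"source": url}

def get_dataset(url, now=None):
    """Return cached data while fresh; otherwise fetch live and cache."""
    now = time.time() if now is None else now
    entry = _cache.get(url)
    if entry is not None and now - entry[0] < CACHE_TTL_SECONDS:
        return entry[1]
    data = fetch_dataset(url)
    _cache[url] = (now, data)
    return data
```

Tuning `CACHE_TTL_SECONDS` per dataset is exactly the kind of gut-feeling decision the talk argues you only get right with experience.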
So I'm the CEO of redpencil.io, which I co-founded with four other people. And after we started, we realized that the web was in kind of a sad state. It was unhealthy. And we realized that there were maybe some threats, threats that maybe even threaten our democracy as a whole. And, you know, we've all grown up with a very healthy web, a young web that opened up a whole lot of possibilities. And maybe it's just a nostalgic idea that we want to save that web. Or maybe it's really for the better. And it's very hard for me, at least, to figure out which of the two it is. But in any case, we decided that we wanted to make the web healthy again. And I know that we cannot make the web a healthy system by ourselves. It's not the way it works. It's a grand community, after all. But what we did realize is that if we do this right, we'd be able to kind of push it forward. And if multiple people push it forward, then it will become better. So the only reasonable move is to act and make it better on your own, the best you can do. And that means cooperation. And very similar to distributed technologies, or these knowledge bases that we want individuals to have — the stuff we are promoting, the ideology we have at redpencil.io — there may be downsides, and we have to learn them. And from a distance, from where we're at right now, for the vast majority of cases, we cannot estimate correctly whether it would be good or bad. If something is new and you don't have experience with it — and that holds for redpencil.io as a company and for distributed technologies as a whole — you do not know what the downsides are going to be. So jumping in blindly and saying we're going to make everything distributed, that's probably the very best thing to do if you are a large social media platform, because most likely it will fail.
And we won't say that it was bad because we didn't really know what we had to do ourselves; we'll blame it on the technology, because it's the easiest thing to blame. And right now, it's very hard to find reasons why — what the downsides of a company like redpencil.io would be. It's extremely hard to indicate what the downsides of these distributed technologies would be. Surely, on a logical, mental level, we can think about this, and we can think: oh, maybe we'll have some data latency. But that doesn't help us in getting a gut feeling for these cases. Looking at how IT people work, we are off by factors of 100. It's like asking how far the nearest post office is. Someone will tell you it's about three kilometers, and then you know it's going to be maybe one and a half or six kilometers. That's the sort of leeway you have. You ask an IT person how long it would take to calculate something, and it's like that scene from Rain Man where he's at the doctor's office and they ask him questions. You ask an IT person: hey, how long would it take to make this calculation? Oh, I don't know, maybe 50 milliseconds. But it can happen on a server. Oh, it can happen on a server? Then 15 milliseconds — not 50, 15. But it will be accessed by a user on a 2G network. Oh, then it's going to be 500 milliseconds. This is the post office not being three kilometers away, but 300 kilometers away. Our gut feeling is kind of that far off. So that means that when we work with these technologies — and we've experienced it with OSOC as well, where we guided a project where we put data in people's Solid pods. At some point, we wanted to have a view. The view covered maybe 40 people, which means 40 Solid pods we had to go and fetch from. Hence, it takes a while to fetch the data. And we wanted to have it quicker. No problem, you can easily solve this — but you need to get the gut feeling of what will work and what will not work.
And we ourselves — the ones listening to this talk, the ones working at redpencil.io, everyone at Open Belgium maybe even, the people in your company, your CIO, your CTO, your developers and your architects — we all need to figure out what these upsides and downsides are and in what way we can work with them. We need to get this gut feeling. So we have learned how to use these technologies. And we know that the gain is humongously high. But if we're going to blindly stare towards the new world without knowing the ups and downs, we'll probably lose track. And we will probably conclude that the technologies are bad, even though it was our experience that was missing. And so that's basically this talk. We know these things ourselves because we have executed with them. And because we've worked on it, we kind of know what the pains are. So — knowing who I am, a civil engineer with a very big interest in databases and maybe slightly too much love for the web — the question then becomes, in more detail: what are these steps? What am I going to do in order to move us forward to that new world? And so it's the same slide again, which I now didn't even cut up, because this is basically the essence. I'm going to go over it multiple times, because this should stick. We realize that we've reached that endpoint. And we realize with projects that we've achieved that that is where we can land. So the first thing we do when we build a new application is we go around and about and check what data there is in the application. That's the very first thing we do. We start wondering about that data. And we don't even blindly go through that. We actually go around and about with the people that will use the application, and we discuss it. And if you think about it, thinking first about the model is super, super important, and it makes a whole lot of sense.
If the data model is broken in your application, then you know that it's going to take you ages to fix it, and it's going to be hyper expensive. And we don't really want to accept the fact that we may be wrong about the data. And we don't want to accept the fact that we have to study the data before we actually start building. But in reality, I think we can realize that that is just something we have to do. We even try to mitigate it. We try to build business object layers and reasoning layers in between. But what we do with these layers is just guessing at how the model could be extended. And when we define a semantic model — and now it sounds way more academic, but in practice it's not at all a highly academic thing — what we do is just as if you went around the world with your application, talked to various people, and asked them: hey, if you see this application, what would you do with this data? What do you think would have value? And then you collect all of that information. You get loads of interviews and information — way too much, actually — in the process. But then you say: you know what, it would be very nice, dear world, if you could transform this into the prettiest data model that you could figure out from this. Could you give me that instead, instead of all of these records and documents? And the world says yes. The world actually says yes, you can get that model. That's what you can have. And so, sure, it might be that this model does not perfectly fit your case, because you might have a special snowflake. And in reality, sometimes it is effectively the case in practice that something doesn't match. But for the things that you haven't foreseen, the model has been thought about so much longer and so much more extensively that most likely it's going to be way better than what we'd have figured out on our own. It's not cheap. Building the semantic models is expensive, but it's about the best investment you can make.
Fixing the data afterwards is always more expensive. So learn what you can learn from others. The perfect scenario is that you get that awesome model. In practice, in Flanders, especially in a government context — if you're in a government context, there's an agency for this, Informatie Vlaanderen (AIV). And what they do, similar to what Europe does on a higher level, is actually support you in finding the community and creating semantic models together with that community. They provide you guidance, they help you construct the models, and they ensure that you can agree with others on the models, including in the whole international setting. So it's not cheap — it's still expensive to build the models — but it is hyper important. These semantic models that we have here, they are the equivalent of the IBM-compatible computer. You could play games, if only it was an IBM-compatible computer. It was a standard, basically. Or, even more apparent but slightly more distant from our world as IT people, maybe: pick up any sort of device, and you'll notice that it has screws. And the screws in the device, and the nuts and bolts, have all been standardized. Part of the industrial revolution was figuring out how we could share such knowledge. And that made everything move way, way faster. If you get the bicycle of a friend of yours, you won't, at any point in time, doubt what the brakes are. You know what the brakes are. If you borrow a car from someone, you know that if you steer clockwise, you will go to the right. There's no doubt about that. These are the standards. And semantic models are the thing that builds the standards for us. This is the thing that makes sure that when you take data from one application to the other, the other application knows: if I turn clockwise, it will go right. That's what it does. If I pull this lever, that will be a brake. These models, by definition, by the way they operate, are extensible.
So you can extend the data, and it will not destroy the data. This is the equivalent of automatically getting backwards compatibility, which is kind of valuable. It tends to be hard, and now we kind of get it for free. So, once we have the semantic model — I elaborated on it quite extensively, but it is a hyper-important point — we want to build applications, of course. So how do you build an application on this magical semantic model that apparently solves everything? Well, in theory, there is no difference at all. In theory, you can just use the classes that are in there, and you can use their properties, and you can model the domain that way, and you can store it in any system you want. Semantic models in practice are a bit more flexible in some regards, but we tend to limit that in practice when we build the semantic models. That helps us to communicate with current communities. So, with that in mind, we can very easily just build our applications the way we build them now, and we can build them on top of a SQL database. There is nothing wrong with that. You can have the linked data stored through some abstractions, basically. But what we have learned is that there is a benefit to storing the linked data in a graph store — a store that understands linked data, that understands the semantic web, or at least semantic technologies in general. It helps us to understand the technologies at this stage, and it's also a very flexible way of working with the database. So you get to know the benefits of these technologies, and where they're weird or different, you get to experience that in a fairly well-constrained setting. So if later on you want to use these semantic technologies to communicate with other applications, you already have that data ready. And so the very best way forward is to start building directly on top of these semantic technologies at this point in time, and to build applications.
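What a graph store buys you, compared with a fixed relational schema, is that every fact is a triple and any position can be queried; adding new kinds of facts never breaks existing queries. A toy in-memory version, with `None` playing the role of a SPARQL variable (the `ex:` identifiers are made up for illustration):

```python
class TripleStore:
    """Toy in-memory graph store: a set of triples plus wildcard
    pattern matching, the basic operation a SPARQL engine builds on."""

    def __init__(self):
        self.triples = set()

    def add(self, s, p, o):
        self.triples.add((s, p, o))

    def match(self, s=None, p=None, o=None):
        # None acts as a wildcard, like a SPARQL variable.
        return [t for t in self.triples
                if (s is None or t[0] == s)
                and (p is None or t[1] == p)
                and (o is None or t[2] == o)]

store = TripleStore()
store.add("ex:decision1", "ex:decidedBy", "ex:council")
store.add("ex:decision1", "ex:subject", "ex:newTrees")

# Extending the data is just adding triples; existing queries keep working.
store.add("ex:decision1", "ex:motivation", "ex:happierPeople")
print(store.match(s="ex:decision1"))
```

This is a sketch, not how a production store works internally, but it shows why the model is extensible by construction: there is no schema to migrate.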
So, that's the end thing here. There's an application. There's nothing very special about it. You could build this application as a big monolithic application. I put many cogs in there because we use microservices, and I think the microservices help you reason about this data, or about these technologies, in a certain way. It helps to think about applications as a sort of data transformation system. Because if you think about applications logically, what they do is they either manipulate some data, or they visualize some data. An application doesn't really do anything else. If you go to an IT shop and buy a new hard drive, then what happens is you pay for it. And so, in some system, some bits and bytes will flip to say: you now have less money on your account, and it went to this other account. And that's all that needed to happen. You can then later on visualize that in your own app, but basically it's just bits and bytes that have switched. And so, you can build monoliths, and it works fine, technically. We tend to use microservices, and I think it helps, but I'm not saying it's some sort of holy grail. It's not a silver bullet. I strongly doubt that it is a silver bullet, because if you look at software in general, doing good compartmentalization — high cohesion, low coupling, that sort of thing — is going to be way more important. It just grossly overshadows all the other problems. And if you work together with people that don't believe in the technologies, then, from experience, they will probably make the technologies not work. That's reality. So we have to convince people, and we have to figure out what the lessons are, and take smaller steps. So if microservices are not your cup of tea, then don't use them. We do.
And because we have these services, and because we can just reuse code — which you could do in other systems as well — we start building more applications on top of this same database. And as long as you can easily share code, it becomes apparent that you can easily make custom user interfaces. We tend to call this an application cluster, because we have many applications on the same sort of data set, or most often even the same database. We build specific user interfaces over time for specific cases, like summarized views; sometimes it's public views, sometimes it's sharing the data with external actors. But the general idea is the same. We have these smaller interfaces, and because we notice that we have these smaller interfaces, it becomes clear that sometimes we only discuss a part of the data. And that's where it really, really gets interesting. The software couldn't care less where the data comes from, as long as it can get access to the data. And so, in fact, what we encounter there is that we can easily split off parts of the database. We can now create smaller databases with structured knowledge in them. We tend to call them base registries, as in: this is where you can find the ground truth. And sometimes we just store that in one shared place. And once we split off these databases into separate sections, we notice that this important information gets shared. And it gets shared between applications either by importing and exporting, or by live querying — we do both, often even at the same customers. And because we can experience the two, we get to know what works well and what doesn't work well there. So we split off these databases, and we get to figure out: if this data lived separately, could we more easily transfer it elsewhere? Would this make it easier to manage? And it's a growth cycle. We notice that it's growth. We're not always there.
When we define these models, we more and more see that it becomes apparent to business people that this is going to be something separate. And as we notice that multiple applications would manipulate that data, it sometimes becomes more obvious to store it in a separate place. We don't always store that data remotely. Sometimes we pull it in. And sometimes we query it live. For instance, in the editor in Gelinkt Notuleren — where we make meeting minutes for local governments, I'm going to get to that soon anyway — we pull in data from legislation, from base registries that are not ours, and we do not store that data in our systems, we just query it live. And as these databases become smaller and smaller, it becomes more and more apparent for business: hey, actually these contacts should be a separate thing, and there's extra data about these contacts — I don't think I want to have this in the same database. We can make the databases smaller and smaller and smaller, until there's not really that much of a jump to having people have their own databases. And part of the reason why I can state this so dryly is because we have done this at the Flemish government, with the Agency for Domestic Affairs, ABB — they gave a presentation on this actually, I think it was the first presentation of Open Belgium, someone picked it up. We went through this whole process, start to finish, with local governments. And what we did with local governments is we started discussing with them how to share information. And if you think of a local government, that means your city — so your mayor and all the entities connected to that. When they make decisions, they make decisions about everything; anything that could potentially be political will be a decision. If it's renewing a road, that's going to be a decision. If they're planting new trees, it's probably going to be a decision.
If they're changing the speed you can drive somewhere, that's probably going to be a decision. They decide on just about anything. Even if you want to renovate your house partially and need to take up a part of the street for a container or something like that, it's going to be in a decision. But it actually goes further than just writing down all of these decisions. I mean, that has its value too. But what actually happens in reality? In order to make a decision, you have to make clear that you're allowed to make that decision. So your local government has to prove that it can make a certain decision. They'll look up the laws indicating: we are allowed to plant trees on our public domain. So we're going to do this, and then they'll motivate that decision. They'll write out: we want to plant more trees because it makes people happier and because it's good for the environment, and that's one of the key points we want to work on. And they make their decision, stating: we're going to plant the new trees. Same thing with, for instance, appointing a new mayor. If they appoint a new mayor, that would be a decision. The council would have to agree on that, and they have to agree on a decision that's written down. And when they make these decisions, they write everything down meticulously. But other entities, other agencies, need to validate whether your local government is actually doing everything legally. They're not allowed to do just anything. They're not allowed to appoint a new king for Belgium; it's not their responsibility. They can write it down, of course, they can kind of decide on it, but it wouldn't be valid. So what happens when they make a decision? They also have to inform the Flemish government about the decisions that they've made. So in practice, what they do is first write out the legal text so that everything is structurally correct, and then they go to an application — which actually worked like this until recent years.
So it went through these same stages too, this digital counter system for local governments. And they come and tell us: by the way, I made a decision. You can find that decision here, I've attached it. And what we've decided is — so, I was distracted by Niels, sorry Niels — what they have decided is to appoint a new mayor, and they inform us of that. This is the decision, and this is the content of the decision. We decided on that date, in that meeting, these people were there — and they write all sorts of things down about the decision, so that the back office systems of the Flemish government can efficiently crunch through all of those decisions. And that makes sense. But why do they do that, one would wonder. And we took them on a trip. So we went with them a few years ago, and we started discussing with them: how do we talk about this? How do you use your decisions? What is the data that is in there, for you? And we asked them: how would we express this? What other uses are there for this? And we asked other actors what sort of use cases there would be for that. We did that in one of these OSLO tracks, together with Agentschap Informatie Vlaanderen, and we worked through all of these standards. And we discussed with them long — very long, actually — about what this whole thing is that we're talking about. And we learned a lot from the applications that we have built. And we defined the semantic models, and we standardized those, and we had public review of them, and we started building applications on top of them. And together with them, we figured out how to make the community move forward. And there are laws in place now saying that some of these things have to be followed and some of these things are optional. And we've made the systems this way, right?
So we first did it — we built it in the same way as the regular applications, and we started building more of these applications, and we started splitting the applications into multiple sections. And now they don't come to us and fill in a large form anymore. They don't fill in all sorts of details about what they had already written in the decisions. They annotate their decisions with RDFa — linked data, semantic technologies, the models that have been defined here — and they annotate each of their decisions and each of their publications with that semantic data. And now they don't come to us and say: please verify this, this is all of the data. They come to us and say: here's my tiny little database, can you please verify it for me? Can you please verify that the decisions that I have made have been made correctly? And this is actually amazing. They don't even fully realize that they've gotten back control of their data. They can use this data for anything. Our software goes and says: oh, you've published the decision there, very interesting. It goes and downloads the decision. It verifies everything that is in there — everything that we can verify programmatically, at least. And it fills in any of those forms that they used to need to fill in. And then all of these forms have been filled in as if they were filled in manually. There is literally no difference. If some data is missing, they can enter it manually. The back office system will receive all of the necessary information from those forms, where it gets validated, and the decision follows the full flow it needs to follow in order to be fully ratified — so that no one can, after the fact, say: oh, this was not legal. And what we've experienced there too, with all of these systems, is that we don't go and fetch this data directly each time. Because we went through all of these steps, and we noticed with these smaller and smaller databases that it makes so much sense to further split up a database.
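Harvesting such an annotated publication can be sketched with the Python standard library alone. This toy harvester only collects RDFa `property` attributes and their text; real RDFa processing handles much more (`vocab`, `resource`, `typeof`, nesting), and the `ex:` terms here are made up for illustration:

```python
from html.parser import HTMLParser

class RDFaHarvester(HTMLParser):
    """Toy harvester: collects (property, text) pairs from elements
    carrying an RDFa `property` attribute."""

    def __init__(self):
        super().__init__()
        self._prop = None
        self.facts = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if "property" in attrs:
            self._prop = attrs["property"]

    def handle_data(self, data):
        # Attach the text content to the most recent `property` seen.
        if self._prop and data.strip():
            self.facts.append((self._prop, data.strip()))
            self._prop = None

html = ('<p>Appointed <span property="ex:mayor">Jane Doe</span> on '
        '<span property="ex:date">2021-01-04</span>.</p>')
h = RDFaHarvester()
h.feed(html)
print(h.facts)  # [('ex:mayor', 'Jane Doe'), ('ex:date', '2021-01-04')]
```

The point of the sketch: the published human-readable page and the machine-readable data are the same document, which is why the local government keeps control of it.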
We figured out that it makes most sense for us to cache that data. So we download it only once, we have it for a longer time, and we can search across many of these decisions in a timely manner. We cache a subset of the data that we download — a smaller subset, because this is also something that is still growing — but it is effectively live. It is being actively used in production. And no one seems to realize that this has happened. And it is amazing that no one realized it, because it means it's a super natural process. If you want to know what data is in there, there's an Open Belgium talk about that. There's a separate website containing links to what data is in there right now; we can post the link in the chat. If you have ideas of things you could do with the legislation, feel free to check out the challenge and submit your idea of what you could do. Maybe it wins, whatever the challenge prize really is. But it is interesting to browse through it. So if you have an idea, do just submit it. And we notice this in other domains too, by the way. These are cases specifically in a government context, and in the government context they work fine. But we use the same sort of applications for simple things like webshop-like stuff. And there you actually notice the same things. You notice that you first build a shop, but then they realize they actually want to have back office applications as well. So this seems to be a very natural process. We planned to land here, but we didn't plan that we'd already be here. We kind of expected ourselves to be at the beginning steps, but it turns out it's not that big of a leap. So all of the data — I'm going to repeat that — all of the data is coming from remotely harvested sources. From people that now, today, have control over their own data. And they don't realize it, but they are using it. And that is gorgeous.
So, summarizing the steps we take to achieve this goal: we first define a semantic model. Then we start building the software directly on top of that semantic model. We build more and more interfaces on the semantic model. We start splitting up the databases into smaller and smaller databases. We start sharing information between our application and other applications. And we start creating base registries. This brings us closer and closer to gaining all of the knowledge needed to work in an even more distributed world, where we can actually give control to the end users by default, where it's their data by default. So we've kind of learned how to work with this distributed data, and I think at RedPencil.io we now know how to put control in end users' hands. We know how to work with these technologies, and we have frameworks like semantic.works that we use to build this efficiently. These are not rocket-science steps; they're more of a plea for something we basically have to do. It's not because we know these steps that the world will change; it's because we all know these steps that we all have to follow them, and we have to learn what the pains will be, and what the benefits will be too. If you follow these steps, as far as we've seen so far, you land there. Maybe there are other paths, I don't know; we figured out one good path, one good way. These are the steps we noticed, and they're not very challenging. They're super feasible, they're a lot of fun to do, and they actually move our world forward. So if you don't have any questions afterwards, and you don't know what to do right after this talk, you even have a few minutes left, take the leftover three minutes or so and just disconnect. If you don't know what to do, you don't have any questions, and you don't want to continue talking, that's fine.
Disconnect, take the leftover three minutes, and think about it. Think about: what is my next application? Just relax in your chair or go for a walk and think: given this talk, what are the steps I would need to take to start moving forward? Because that's where we want to be, and we don't know what the impact will be. So you're very free to either press the disconnect button, or if you want to talk, we're still here; you can put your questions in the chat. I'm quite certain that Astrid or Niels can unmute people too if you want to speak up verbally, but then say in the chat: I want to speak up verbally. And let's move the world forward, let's make this happen. Niels, do we have questions from somewhere? Niels is now frantically looking for his mic. Yeah, for the unmute button. Yes, we had a question from Begel Talita on whether first moving towards a graph database such as Neo4j would be a good step. We discussed it a bit in the chat so far, but perhaps it's good if we go into it live now. Go ahead. I mean, I saw what you mentioned in the chat, so yeah, a very sane opinion. I think what we've concluded is that it is a bit nuanced. First off, I don't really think it matters what database you use; you can store RDF in any database, though obviously querying is going to be a lot easier if you use an RDF database. And in that regard a graph database is an advantage, because most graph databases are pretty close to RDF, and you can implement SPARQL on top of their query languages; some, such as Neo4j, provide this. We then dove a bit deeper into what you should do first: should you first combine your data in a database, or first focus on the model? I think our position there is pretty clear: we really value the model and we think it's important that you think about it. Given that, it is interesting to combine your data to see what's there, so you can take that as a piece of information when building the actual model.
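The claim that any database can hold RDF can be made concrete with a toy in-memory triple store. A real setup would use a SPARQL endpoint and a proper RDF store; this sketch, with a made-up `ex:` vocabulary, only shows that the data model is nothing more than (subject, predicate, object) rows plus pattern matching.

```python
class TripleStore:
    """Toy in-memory triple store. Any database that can hold
    (subject, predicate, object) rows can store RDF; a dedicated RDF store
    mainly makes querying (SPARQL) easier and faster."""
    def __init__(self):
        self.triples = set()

    def add(self, s, p, o):
        self.triples.add((s, p, o))

    def match(self, s=None, p=None, o=None):
        """Basic graph pattern: None acts as a variable, like ?x in SPARQL."""
        return [t for t in self.triples
                if (s is None or t[0] == s)
                and (p is None or t[1] == p)
                and (o is None or t[2] == o)]

store = TripleStore()
# Example data; the vocabulary is illustrative, not an official model.
store.add("ex:decision42", "ex:decidedBy", "ex:councilGhent")
store.add("ex:decision42", "ex:title", "Roof renovation")
store.add("ex:decision43", "ex:decidedBy", "ex:councilGhent")

# "SELECT ?d WHERE { ?d ex:decidedBy ex:councilGhent }" in miniature:
decisions = [s for s, _, _ in store.match(p="ex:decidedBy", o="ex:councilGhent")]
print(sorted(decisions))  # ['ex:decision42', 'ex:decision43']
```

This is also why the model matters more than the database choice: once the data is expressed as triples against an agreed model, moving it between storage back ends is a mechanical transformation.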
So it is nuanced indeed; a very nice nuance. I think it also raises an important point that we shouldn't ignore, and it's definitely something we underestimated at some point: you have to pick technologies that people want to use. If there's a strong pull towards Neo4j, if your team, your architect, or whoever in the organization really, really wants to use Neo4j, then pushing for something else when you notice they don't want it is pointless. They will prove that what you suggested will not work, and it doesn't matter what you give them; it's irrelevant. So if there's a current drive to go for Neo4j, then it's a step forward, so why not? It's better to take the step forward than not to take it. An important stepping stone, though, is figuring out how you are going to progress further from there, and you can always transform data. I mean, if the data is in the right model, then the hard part has been solved. I think everyone is also unmuted now. Is that possible? They can unmute themselves, but Meet did say you might have to refresh to get your microphone to work. Okay, challenging. Yes, because most people are in listen-only mode, so they have to connect to the audio first if they want to. And how does that work? Do you have to refresh, or can you press the mute and unmute button now? You have to press the telephone icon, then do the echo test, and then you're in. Yeah, so it takes a moment. Yeah, that makes sense, because you're becoming the equivalent of the presenter at that point. Yes. Cool. I don't know if there are other questions, given that it's a high-level talk, etc. The point on graph databases makes a lot of sense; it's a very nice starting point. Now we land at the dreaded silence. You can also ask other questions via the chat if you want to, or reach out to us later on; we'd be happy to chat about it more. Yeah, we kind of breathe this, I think, to a large extent, and so most of the stories we've had so far are: it just works.
So there's a certain fear there, but it turns out that it just works. It also works without reasoning, which is something that tends to be a fear, and we're very surprised that it has worked without reasoning so far. But that is probably because we have larger entities pushing certain ontologies forward. For instance, in Flanders you have Informatie Vlaanderen, which does a lot in that regard with OSLO. But in Flanders specifically we also just have the right atmosphere; I think we have a lucky generation of researchers who are busy with these technologies, and that helps a lot too. And in Europe, the European Commission also pushes this forward very heavily. Is the problem that there is no center? Yeah, that makes a lot of sense. There are two places where this problem shows up, and it's an intrinsic problem, right? When you work with distributed technologies, you may not know where your users are, for instance. So in the end, if local governments in this situation don't come to us and say, hey, by the way, this is my data, please verify it, then we don't have the data. Once they do come by, then we do have it. The logical semantic models, as well as the data itself, can be described, and normally are described, as a semantic model, so they behave in the same way. Finding the currently existing semantic models, although there's work being done on it, can be challenging; sometimes it's just experience, and that is at best very sad. Finding the data can also be a problem in that regard, but that is where we notice that the base registries really help, where we state within an organization: hey, if you want some information, for instance contact details, which is the case we literally have in front of us today, please go to that base registry and update it there. And that really helps.
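The base-registry idea can be sketched as a single agreed-upon lookup point per kind of data. The registry name and record fields below are made up for the example; the point is only that consumers stop keeping their own copies and instead read and update one shared entry point.

```python
class BaseRegistry:
    """Toy base registry: one agreed-upon place per kind of data, so every
    application knows where to look something up or update it."""
    def __init__(self, name):
        self.name = name
        self._records = {}

    def register(self, identifier, record):
        self._records[identifier] = record

    def lookup(self, identifier):
        return self._records.get(identifier)

# A contact-details registry, as in the talk's example: instead of each
# application keeping its own copy, everyone reads and updates it here.
contacts = BaseRegistry("contact-details")
contacts.register("ex:cityGhent", {"email": "info@example.org"})

# An update in the registry is immediately what every consumer sees.
contacts.register("ex:cityGhent", {"email": "contact@example.org"})
print(contacts.lookup("ex:cityGhent")["email"])  # contact@example.org
```

In the distributed setting, "knowing where your entry points are" means knowing which registry is authoritative for which kind of data, rather than knowing where every copy lives.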
So it's about knowing where you have your entry points. And, as Marek points out in the chat with a smart response, it works the same way as the internet: you need to have entry points, and you can build indexes on top of them, and you can build browsable sites; all of that exists, but you still have to do it. Marina says that it's a bit far from her current reality, and my gut feeling was the same. When we started building with semantic.works, one of the reasons we initially built this way was that we thought it would simply be a faster way of building web applications, and if you want to use microservices that seems to hold: it seems to be a faster and structurally sound way of building them. And we wanted to achieve this end goal, a world where people would have their own data, but my god, that's far away, it's never going to happen. At least, that was 100% my thought when we started RedPencil, which Niels co-founded together with me and three others. I think we all wanted to achieve that, and I may be one of the more positive people in that regard, but at least my gut feeling was: if we have proofs of concept in 10 years, we've done a good job. We started in late 2017. So yes, reaching this situation sounds super weird and risky, and I fully agree, like, how will we ever reach that? But it turns out that if we just take the steps, if we just cross that river step by step, there are stepping stones, then it is more realistic than we would have thought, it goes way faster than we would have thought, and also much more naturally. So far nothing has exploded yet. There hasn't even been a huge party to say we've achieved it; it's just, oh, it turns out we've achieved it. That's weird.
When I was plotting this talk, when I was thinking about what argument I wanted to make, that's when I came to the realization: we actually have this. That is weird. So yes, don't be afraid to move; it turns out we can, though maybe it will take us 10 years, sorry, that is a cost. Also feel free to unmute if you want. And you know, in 10 years we won't have moved the full community, and that's also okay. I mean, things move; the applications we build, too, we plan them to run for 10 to 15 years, that has to be the horizon at which applications need to run. So it's not that we're going to remove everything that exists; we're not going to pause the world for a moment. The "someone is typing" indicators are super curious for me right now; I don't know how Niels experiences that. It is. I'll take the mic then. Yeah, I just wanted to say that I'm currently working on launching open data in Brussels for a regional organization, but it's really slow, because people are not aware of it.
Technical teams don't really know how to launch it, so we are currently really talking details and trying to make all the platforms match. It's quite tricky, and we're going really tiny step by tiny step. There's a recommendation to develop open data among public institutions, but many institutions I try to work with are just not aware of it; there are no budgets for it, the people are not skilled, and they are quite frightened as well. No, the fear is definitely there too. There are talks on what we did in that regard, and we did do a lot of sessions with people to really support them. The people, in this case local governments, have the data, and the vendors we work with publish it; we don't build their software, we give them a backup solution in case they can't publish it in the right way, but it's up to the market to create it correctly and to support people, and we helped them. So we first discussed with them so that they'd realize what the models would be and that they knew they had the data, but after that we also helped them a lot with the question: how should I write this down? There are interesting thoughts on how we've done that and how that came to fruition. And now they still publish data, and sometimes they publish it incorrectly because they don't know the intricacies of the semantic vocabulary, which makes sense, because currently the vendors are mostly publishing data while consuming much less than they publish. You have to start somewhere; it's a chicken-or-egg problem, and apparently we chose the chicken, so we helped. Yeah, go ahead. No, I was just thinking the egg must have been there before, because of, well, whatever came before it; actually that makes sense, logically it has to be. Yeah, so we picked wrong, but still,
it worked. And, silly me, I just lost my train of thought for a moment. But the thing is, we also helped them a lot with: how do I write this down, how do I use these standards? Including giving them example SPARQL queries and examples of how to encode things in RDFa. And then, one day, when they've encoded it and said, we think it's this, we verified it, we validated it, and not only us, BWC did a bunch of work on that too, we looked at what is currently wrong and how we can help them do it right. Because most often they're willing to do it right. Most people aren't against sharing data; they're just afraid that they'll lose their job, or that they'll come across as ridiculous because they don't know that one thing. Everyone's afraid when they encounter something they don't know. Yeah, right, you're right. But I think, feel free to unmute, as we encounter smaller and smaller semantic models, we will more likely also need reasoning, and reasoning in a performant manner, from as much as I see of it in the world currently, would need some attention: research into how we can do this efficiently and how it will actually work, and that's also just a matter of experience with it. I'm extremely surprised that we have gotten this far without reasoning and that we're not hitting roadblocks. When we initially started building semantic.works, maybe six years ago or so, we thought: let's just dump the data in a triple store without reasoning and we'll see; in half a year we'll get stuck and we'll figure out how to solve it then. But it turns out that because we think so long and hard about the models, you can get surprisingly far without reasoning. It's good to know, though, that the base techniques to match semantic models are at least there, so that in case they'd be created disjointly, solutions
would exist. I see a question from Lynn that I hadn't read yet. Yeah, Lynn, I fully agree with what I've seen so far. So these people use that data automatically, and I'm thinking of local or smaller businesses: they don't realize how important it is for them, but if we can't help them at least move towards some form of data governance, something that is distributed, you can bet that the whole thing will collapse into maybe five web shops. I'll pick a case that maybe some have heard: I recently bought an air conditioning system, a movable one, because it was too hot in the bedroom, a disaster, and apparently then you budge and go for it. But what we did in practice was search across various websites. I prefer not to buy from Amazon or any of the very large vendors; I prefer to buy locally if possible, which is nearly impossible, but okay. So we searched for maybe half a day or a day for what sort of systems exist, and we cataloged them, with pictures, prices, and warranty periods, so we knew what we'd be getting. And by the end of that we were so bored of choosing that we just said, okay, it's going to be this one because it looks okay, and that's like the worst way to decide. What I had to do was simply gather information from various sites. And if you look at a web shop, especially from a smaller supplier, if you go to a smaller electronics store, they most often say: please go to the supplier's site, then ask us what the price is and we'll tell you. But it's not rocket science to calculate the price automatically; they tend to have a fixed uplift, like, it's going to be this percentage with a minimum of that amount of money. Yet they cannot catalog all of that information on their web shop. If the suppliers would publish the data semantically,
that would, I think, be the starting point, and if not the suppliers, then some open data sites like Wikidata. Then we would have the starting points to build these web shops more easily, and that would also be the starting point to embed the data semantically. Much of the data they already embed in a semantic way, in slightly lacking structures; there's a tiny bit of identifiers missing, but it's literally only the identifiers that are missing in order to be searched well on Google, etc. And for Google they moved, so maybe for this they could move too. But it is an extremely challenging project, because I cannot even talk about linked data to my parents, who have a local shop, or to my brothers; I guess if they were drunk, but that would be about it. So, goodbye, and awesome questions, by the way; I appreciate it. And yes, there are challenges, but I'd like to discuss them further at some point, so I suggest we stop the recording, but I can stick around, everyone.