Okay, quickly sharing my screen. There we go. Thank you everyone for joining this session on linked data event streams, and more specifically on what I think ought to be the base API for open datasets. And this is really the base question that I've been trying to answer with my research team for the past, I think, eight years already: what's the best API to publish an open dataset? And I'm not the only one trying to find an answer to that question. For example, you have the geospatial people that started with the WFS specification, the Web Feature Service, which is the geospatial web querying API. You have the linked data people that started SPARQL endpoints, with a query language that can query over graph data. And you have many, many other specific APIs that you can host over your dataset. But then of course, from all these great querying APIs, which one do you host?

Well, if you look for example at Flanders, they have the address database, and they already have quite a couple of APIs online. They have 17 on their website, and there is a tool, a wizard, to help you pick the right API to start using to do something with the address registry. But if you are looking for simple autocompletion functionality, that's not included in the set of APIs, so that wouldn't work. And right now, if you ask them how much it costs to keep these 17 different, very specific APIs online, they would answer that it costs a lot, even just to maintain them. So this situation where, on top of a dataset, you keep creating more and more APIs to try to keep up with recent trends: that is what I call a maintenance hell, because you will just keep on creating legacy APIs that need to be maintained.

The fallback approach to that is: we will share a data dump, and then, if you're interested, you can just create an autocompletion API yourself on top of that dataset, and everyone is happy. Although with the address registry, what we noticed is that, for example, local governments started making local changes to their own copy of the dataset, and those changes didn't flow back to the master dataset. So in that sense, we started creating out-of-date, hard-to-synchronize copies of the dataset that are maintained everywhere, by everyone and by nobody at the same time. And this is what I call the replication hell. I think these are two really big problems that we see when we try to design the best API possible for an open dataset, and neither of these two solutions is good, because they both have problems.

So how do you define what your priorities should be when trying to publish a new dataset? Well, the idea that we positioned is that you, as a publisher, should do the least amount of effort possible: you try to get everyone to set up their own querying API, but of course you need to make sure that everyone can sync with the latest changes on top of your master data, so that you can really claim and advertise to everyone: I'm the master source of this data, and if there are updates happening in the real world, you should just come and fetch them from my event stream.
And this is, I think, the next thing that we need to convince everyone about: to make sure that we do life cycle management for our objects, that we do life cycle management for our datasets, because that way we're going to make sure that this autocompletion API is always going to be able to work on the latest version, and this geospatial interface as well, and this linked data interface as well, and so on and so forth.

But there was no specification yet for a stream of linked data objects. And you may say, well, there are of course news feeds like JSON Feed, or Atom and RSS, and so on. I like the ideas behind those specifications, but they are not linked data specifications: they don't allow arbitrary linked datasets to publish their latest objects to the outside world. So we designed one; over the last two years we've been designing this interface. It's called the Linked Data Event Streams specification, and we define a linked data event stream as an always-growing collection of never-changing objects. Never-changing objects: these are objects that live at a specific moment in time. For example, an air quality observation: an observation that once was made, and that you will never change in the future, because that was the truth at that point. You could also see them as version objects, like a version of a specific street name, or a version of a specific address. And if you want to go back in time, you will be able to find that object as it was at that timestamp.

We designed this interface with the simplicity of Atom and RSS in mind, and one page of such an event stream contains a dataset description, just like RSS. It contains links to other pages, and it contains, of course, all the items in that page. So you can navigate through the event stream by just following links; a concrete sketch of such a page follows below. If you're a technical person and you want to dive deeper into this specification, you can find it at w3id.org/ldes/specification. We are proud to announce that this is getting adopted by the SEMIC programme at the European Commission, so this will really become a European specification for publishing linked data.

So with linked data event streams, you can just replicate the data, and that's interesting, but you cannot query the data. You can only copy all the data, and if you have a question like "give me all the air quality observations over the past 10 years", you will just have to download all that data. You will not be able to immediately ask "give me an overview of the yearly summaries of that time series", for example, or "give me all the exceptional results over that period of time"; that's also not possible. You actually need to download it and do all the processing on your own machines. And that's also interesting. Why? Because the right effort, the right investments, are happening at the right place. For example, if I am interested in a very specific processing of a specific dataset, then it's me who will have to invest in that processing, and I don't think it should be the data publisher who invests in my specific use of their dataset. But I do think that we will be able to grow towards a better and more efficient data ecosystem where we all work together and share the effort of indexing datasets. And this indexing, this is what I think can happen with fragmentations. Let's look at that.
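To make the page structure just described concrete, here is a minimal sketch of what one page of an event stream could look like, written as a JSON-LD object in TypeScript. All identifiers and values are invented for illustration; the tree and ldes vocabulary terms follow the specification at w3id.org/ldes/specification, which remains the authoritative reference for the exact shapes.

```typescript
// A minimal, hypothetical page of a linked data event stream.
// URLs, timestamps and values are invented for illustration only.
const page = {
  "@context": {
    ldes: "https://w3id.org/ldes#",
    tree: "https://w3id.org/tree#",
    dcterms: "http://purl.org/dc/terms/",
  },
  "@graph": [
    {
      // 1. The dataset description, like the channel metadata in RSS:
      "@id": "https://example.org/airquality/feed",
      "@type": "ldes:EventStream",
      "tree:view": { "@id": "https://example.org/airquality/feed?page=3" },
      "tree:member": [{ "@id": "https://example.org/observations/42" }],
    },
    {
      // 2. Links to other pages, which a client can simply follow:
      "@id": "https://example.org/airquality/feed?page=3",
      "@type": "tree:Node",
      "tree:relation": {
        "@type": "tree:GreaterThanOrEqualToRelation",
        "tree:node": { "@id": "https://example.org/airquality/feed?page=4" },
        "tree:value": "2021-03-01T00:00:00Z",
      },
    },
    {
      // 3. The items themselves: immutable, timestamped objects.
      "@id": "https://example.org/observations/42",
      "dcterms:created": "2021-02-28T23:00:00Z",
    },
  ],
};
```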
How do we then, in the end, get to a specific geospatial interface? I think you can first just geospatially fragment that linked data event stream. You can create tiles from that event stream, so that every geospatial tile is an event stream in itself. And if, for example, you then need to calculate the route from A to B, you can just download the right fragments of that event stream just in time. The data will be smaller to download — it will still be a considerable amount of data, but you will be able to download it just in time to answer your query. And this downloading just in time is also interesting because it means you can do it on a server as well. And if you do it on a server, then you can again expose a querying interface on top.

So I see a three-level architecture for open data APIs in the future, and it looks like this. At the core, I see the linked data event stream. If you don't do the linked data event stream, then you're not hosting a proper open dataset; I believe that if you don't host this base open data event stream API, you're cutting corners, and you're not giving your end users the flexibility they need on top of your data. On top of that linked data event stream — not something you must do, but something you can do optionally — you can build some reusable indexes yourself. This is the second tier. You can fragment your linked data event stream by geospatial area, or you can fragment it by prefix, saying that everything starting with the letter A will be found in that part. You can also fragment it by time period: you could say, all my data from this year is in this fragment, while all the rest is in the other. And there are many, many other fragmentations you can think of. These are indexes that can be created by you, the publisher, to stimulate the ecosystem to reuse your dataset, but they can equally be created by third parties; in the next presentations we'll see some interesting use cases where third parties actually have good incentives to do exactly that. Then, on top of these reusable indexes, and only then, I think we will see querying interfaces. There, if you really want to do developer enablement as a data owner, you can for example host a SPARQL endpoint, a WFS service, a GraphQL interface, a Cypher interface, whatever. But I do think these things are less stable if you host them as a data provider yourself. Why? Because they should always be the last priority to keep online. So if they go offline, people should always be able to fall back to the reusable indexes, and even if those go offline, they should be able to fall back to the linked data event stream at the core.

Good. So this ecosystem then looks something like this: we will have multiple fragmentations, which you can see as tree structures on top of the event stream. And then, for example, the geospatial APIs are going to download the right fragments of the data when a specific question comes in, but the specific APIs will mostly be hosted by third parties themselves. For these fragmentations we've also built the TREE specification; in fact, the Linked Data Event Streams specification is built upon the TREE specification. The TREE specification allows you to fragment a collection of objects, and you can specify different relations on top of your fragmentations: geospatial relations, suffix relations, substring relations, time-based relations, and so on and so forth. A sketch of how a client can use such relations follows below.
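As an illustration of how a client could exploit those typed relations, here is a small hypothetical sketch: given the relations found on a fragment, it keeps only the links that can still lead to results for a query like "all observations since a given date". The data structures are invented for clarity; only the relation names mirror the TREE vocabulary.

```typescript
// Hypothetical sketch: prune which fragments to visit, based on the typed
// relations of the TREE specification. Structures are invented for clarity.
type Relation = {
  type:
    | "GreaterThanOrEqualToRelation"
    | "LessThanOrEqualToRelation"
    | "PrefixRelation"
    | "GeospatiallyContainsRelation";
  value: string; // e.g. a timestamp, a string prefix, a WKT polygon
  node: string;  // URL of the fragment the relation points to
};

// For a "give me everything since <date>" query, only follow links that
// can still contain matching members.
function relevantNodes(relations: Relation[], since: Date): string[] {
  return relations
    .filter((r) => {
      if (r.type === "GreaterThanOrEqualToRelation")
        return true; // everything behind this link is >= r.value: may match
      if (r.type === "LessThanOrEqualToRelation")
        return new Date(r.value) >= since; // only if it overlaps the window
      return true; // other relation types: follow unless provably irrelevant
    })
    .map((r) => r.node);
}
```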
That's it. If this sounded somewhat interesting, know that my team is hiring. This is my current team, and I'm looking for more people, so feel free to send me an email if you're interested in doing something with linked data event streams; that's going to be the vision for the next four years of research. And that's it. This was a very general, very abstract and very difficult introduction to linked data event streams and TREE and everything that we're doing in that realm. But I want to make it more concrete, and that's why, from this moment on, I'm going to keep my mouth shut. I have invited way more interesting people than myself — people that actually do things in the real world. So, yeah, without spoiling anything of their presentations, I'm just going to give the floor to Erwin.

Okay. Thank you, Pieter. Also, thank you for the invitation. You suggested, a little bit, that we are doing stuff in the real world and that you are doing nothing really important, but I would say it's rather the opposite. But anyway, let's see how I can get this working. Share my screen. Okay. Are you now seeing my screen? Not yet, but I suppose it's coming. Oh, well, sorry, I have to push another button. It's working now. We don't see it yet. No, okay. For everyone in the meantime... yes, now it's coming, Erwin. Oh, it's moving. And it stopped, and Erwin is gone. Okay, we'll wait a moment. It's crashing. Oh no, your sound is still here, Erwin. Yeah, Chrome is crashing. Okay, now he's gone. Okay. Just to let the audience know: there will be room after all the talks to ask questions. We've foreseen quite some time for questions to specific speakers, so please write them down on a piece of paper, or write them down in the chat; we'll get back to them after all the talks, and we'll try to moderate that discussion. Hi Erwin, you're back. Yeah, Chrome crashed. Speaking about technical problems... Now let's see if I can... I will try to share my full screen right now and see if that improves things. It seems to take a while. Do you see my screen now? Yes, now it works. Okay, great. Thanks. Sorry for all the technical problems.

Yeah, so Pieter asked us to tell you a little bit about what we are doing in the Netherlands. So together with Wouter, I'm going to talk a little bit about what we did so far and what our plans are, and we'll actually show a demo of a linked data event streams proof of concept that we are currently working on. Well, my name is Erwin Folmer. I work at the Dutch Kadaster and also at the University of Twente. At Kadaster, I'm leading the data science team, and we are trying to bring linked data and knowledge graphs a step further. At Kadaster, we have quite a long-standing history of dealing with linked data. I think the first dataset, the Key Register Addresses and Buildings, we published already more than five years ago. That's what we call production linked data: it's really a production effort, it's available and you can use it. And we have used this approach for many other datasets as well. However, we found out, maybe last year, that we had a lot of problems in this workflow of dealing with linked data; especially, we had a lot of datasets standing in the queue to be published by us as linked data.
But the team was not really capable of handling all those datasets, because it simply took quite a while to publish a dataset as linked data; it took several months to publish these kinds of key registers. Meanwhile, the technology had developed much further: when we started five years ago, we had to develop much of our own tooling, but by now a lot of improvements have been made in linked data tooling, at least. So last year we set up a new approach: can we make linked data more easily and more cost-effectively, so that it doesn't have to take months to publish a key register as linked data, but so that we can do it within maybe five or ten weeks — much faster and with much simpler tools? And, well, we kind of succeeded. We have now published a new key register with this new approach: in January or February we released BAG 2.0, the Key Register Addresses and Buildings again, but now as a new version, and now it takes us only five weeks to publish these kinds of large key registers. The key register large-scale topography is also published through this new linked data registration architecture, as we call it.

You see the links over here. I won't go into the details in these 10 minutes, but basically we get the data from a Postgres database, and then we have some very simple tooling, software components like the announcer and the microservice, and in the end they load the triples into the TriplyDB triple store. And then we put different views, APIs, on top of it. Most relevant for our new approach is that we actually divided the linked data into two parts. We have what we call the registration view — this is the lower part, sorry for the Dutch on this slide — where we just publish a dataset as-is as linked data: the data model stays as close as possible to the original data model of the non-linked dataset. And on top of that we put the knowledge graph, where we get a kind of object view on the data, where we integrate the data from the different datasets below, but where we also get a more customer-friendly view on this data. And, in line with the previous picture, on top of this knowledge graph we put all kinds of APIs for the different kinds of users that we have.

So this is now working quite okay. Why, then, are we now moving on and also participating in this linked data event streams proof of concept? Well, we believe it might be one of the APIs that we put on top of this knowledge graph infrastructure, and also on the datasets themselves. We have to find out, of course — these are just the first steps — but it already fits with the idea we have that linked data should be much more cost-effective and much more simplified, so there we have a good connection, I think. What we also have to find out is: okay, it might fit in the technical architecture, but for what kind of use cases does it really make sense to use this approach? So we have to figure that out. And finally, we also find it very interesting that Belgium and the Netherlands can learn much more from each other. We are dealing with the same kinds of data, the same kinds of problems, so we also see this as a very nice start of cooperation between Flanders, Belgium and the Netherlands on this topic.

But what did we already do on these linked data event streams? That's the point where I would like to hand over to Wouter. Wouter, can you show us what we did so far? Yeah, thanks, Erwin. Let me see, I'm going to share my screen.
If it's okay, you can now see the triple store at Kadaster. Specifically, you can see the two datasets that we currently expose through linked data event streams: that is the key register large-scale topography, BGT, on the left-hand side — and you can see that these are fairly large datasets: the BGT large-scale topography key register is over 1.3 billion triples — and we also have the base registry of addresses and buildings, BAG, and that is slightly over 850 million triples. So these are relatively large linked datasets, one slightly below and one slightly over a billion triples. And of course we now want to expose them using linked data event streams.

We did an implementation of linked data event streams, which is currently running here on localhost. This is basically the implementation where I have currently configured the BAG, the base registry of addresses and buildings, and I can search for different times: we implemented a time index, so when I change the date, I also get a different part of the key register back. So this is, say, the low-level API. And because it's JSON-LD, I can also easily put it in our triple store, and in the context of our triple store it becomes a little bit easier to process. What I just showed you was really the raw JSON endpoint. You can also see a little bit of the structure: how the data is fragmented into different nodes that are part of the same bigger collection. Specifically, for our key register we have the nodes that you can retrieve; the nodes have relations to other nodes, and those relations are semantically qualified. So you can follow a less-than-or-equal-to relation, but you can also follow a greater-than-or-equal-to relation. The pagination is basically semantically meaningful, which I think is one of the key innovations of linked data event streams. You can also take a look at the node itself: the key register collection, which gives you, in this case, 10 members, and those are places of residency in the Netherlands. And this is the actual content of the key register: in this case we are looking at something with one function, so this is a place of residency, not a store or an office space.

So this is really great. It's now running on localhost, as I already mentioned. It took us only one sprint to implement, and our sprints in the Kadaster data science team are three weeks, so in three weeks we were able to implement it comfortably and also to give some feedback on the Linked Data Event Streams specification. I would say it's a very good specification, and it's very easy to implement. In the next sprint, the next three weeks, we will make this part of our standard way of exposing the key registers. This means that three weeks from now it will no longer run on localhost: it will run from the online triple store, and it will be one of the ways in which our key registers are disclosed. One final point I want to make here: even if you expose this amount of triples, you hardly need any memory; you can use very low-level means to expose the data. So it's also a very cost-effective way to publish such very large key registers. That's the demonstration of what we did and how easy it was, and I would actually say: everybody should do this. This is a great way to expose your data as linked data.
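To illustrate why this is so cheap for the server and still complete for the client, here is a naive, hypothetical harvester: it walks the pages by following their links and collects the members, while the server does nothing more than serve static, cacheable documents. The URLs and property access are simplified for illustration; a real client would process the JSON-LD properly instead of guessing keys.

```typescript
// Naive, hypothetical LDES harvester. Walks pages via their links and
// collects the members. Older pages are immutable and cacheable; in a real
// client you would re-poll only the newest page to pick up changes.
async function harvest(entryPage: string): Promise<unknown[]> {
  const members: unknown[] = [];
  const seen = new Set<string>();
  const queue = [entryPage];
  while (queue.length > 0) {
    const url = queue.pop()!;
    if (seen.has(url)) continue; // never fetch the same page twice
    seen.add(url);
    const page = await (await fetch(url)).json();
    members.push(...(page["tree:member"] ?? []));
    for (const rel of page["tree:relation"] ?? []) {
      queue.push(rel["tree:node"]["@id"]); // just follow the links
    }
  }
  return members;
}
```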
Erwin, did you have some closing words? Otherwise I'll give it back to Pieter. No, I have no closing words. Good. Yeah, this makes me very happy. This is one of the first implementations of the Linked Data Event Streams specification, and I'm very happy to see that it's easy to implement for data publishers, even when it's the first time they look at the spec. So, yeah: when do you think the audience here will be able to download the entire BGT and the entire BAG — so all the addresses in the Netherlands — as a linked data event stream? Will that be before the end of the month? Before summer? Can you already make some guesstimates? Can I answer this, Erwin? Yeah. Okay, so next sprint — the new sprint actually starts today — so three weeks from now we will have the public endpoints available. That means you can basically go to the datasets that were also communicated in the chat, and then go to /feed: that will be our new standard path for exposing linked data event streams whenever the data contains a time dimension. Just like you have /sparql or /graphql, you will also have /feed. And of course you should be able to work through all of the data; it scales very well, even for these large key registers of over a billion triples. Yeah, I would say April 1st, and if we don't make it, we can always say it was an April Fools' joke. That's a good one, and then we're always on the safe side, yeah, definitely. But happy to hear this. It means that, if I add some buffer, certainly before summer there will be a new HTTP endpoint where you will be able to fetch all the data and build your own intermediary indexes on top of it. There's a small question by Retmar in the chat: were these 10 objects the only changes at this point in time, or was this just a subset? I think this is indeed just one page, so if you follow the links to the next page, you will see more objects that may have changed at that time. You always need to download more pages than just the last one, I think.

Good, so let's move towards the next presentation, because it's a bit weird to kick off Open Belgium with a great presentation by people from the Netherlands. Let's show that we don't have to be shy in Belgium, and that we can also do great things here — and immediately from the most beautiful city in Belgium as well. No, I will probably get some comments in the chat... one of the most beautiful cities in Flanders and Belgium, of course, is Ghent, and Olivier, you're doing a great project there, right? Hi, thank you Pieter, first of all, thank you for having me. So I'm Olivier Van D'huynslager, I'm a digital strategist at Design Museum Gent.
I previously worked at meemoo, and before that at PACKED, as it was still called then — the centre of expertise for the digitisation of cultural heritage — and now I'm working on a big project called the Collections of Ghent. Maybe first of all: I'm not a developer, I'm an art historian and curator by background, so I'm not the most technical profile in the panel, but nonetheless I'm going to try to give this presentation for you. Let's try and share my screen. I also have an error. Please elaborate on your error, maybe there's something we can help you with. Yes, let's try it. If it's the screen: it says permission for access to the screen is not given. Yeah, it's your browser, you should normally be able to give access. Hi, Boris from Cloud68 here. Olivier, if you go to the top of your browser, where the URL of the page is: on the left side of the URL, next to the flashing microphone, you should have an icon with — I guess I can describe it as two squares on top of each other. Yeah, so you're going to want to click on that, and a pop-up should come up, and for the screen share it might say "temporarily blocked". Is that the case? No, I don't see it. So which browser do you currently have? Chrome, a new one. Not working. Perhaps you can upload it as a PDF, Olivier, is that a possibility? Yes. If you click on the plus actions in the left corner, then you should get the option to upload a PDF. Can you open your shared screen, because there's a video in it? Yes, that's possible. Where did you send it? I sent it to Brecht as well. I already have it. Okay, Brecht, can you share it? Yeah. There we go, Olivier, the floor is yours. Yes, next slide please. I'm just going to let the video play. Okay. Is there no sound? Is there no sound?
So if it's a YouTube video, you can share it directly, Brecht, by going to the plus icon and pasting the link of the YouTube video there — through plain screen sharing, there is no sound coming from your computer. Share it directly, Brecht. Yes. That's, in brief, the project we're working on right now. Next slide, Brecht. Yes. What the video isn't saying is that we're opening up the shared collections, and we're doing so as linked, open, very importantly usable data, connected to an interoperable image and asset framework. But we're also making it open, so it's not just us creating cool things: we also want to make sure anyone can use it, and use it in their own way. Next slide.

Opening up isn't enough. We started the whole project from the basic idea that there are already a lot of cultural datasets out there, but they are not always connected to each other, or they are isolated; and if they're not published already, they remain in closed silos, each governed by an institution on its own, each following its own logic, and not really interoperable from the start. Next slide. What we want to do at the core of the project is connect the cultural collections of four museums — STAM, the Industry Museum, the Design Museum — and we also want to add an archive to the pool, which is quite new, because they all use a different standard to describe their data. The data is also registered in different ways, which makes it hard for them to connect or communicate, unless we translate it into one standard. But we want to take it one step further: we also want to allow citizen heritage to be published in the same dataset, in the same data stream. So we're looking at how people can crowdsource their objects and how they can use the same model to do so, so that everything becomes interoperable and everything can be queried, whether it comes from an established collection or from a citizen of Ghent, creating new heritage as well.

So we're talking about event streams, and I think the big idea here is: heritage isn't static, so why should our data be? If you look at the people who are registering today in museums, they do so in a static way: when something changes — say a piece moves to the depot, or it goes on display somewhere, or into a collection space — the previous data is always lost, and it's very hard to retrieve unless you start working with backups. So we started thinking, together: what if we would approach cultural heritage objects as time series? What if we would not only have one single truth, but allow for a multitude, and expose change as well? Maybe next slide. So we can publish cultural heritage objects by using caching headers, or by making sure they're all timestamped; we can then go back in time, and that would allow us to create new applications with them. We could track where a piece was through time; we could also track how the description of an object changes, maybe because the creator is working on it — and we try to do that with every value that has been entered by the registrar. A sketch of such version objects follows below.
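The "heritage as time series" idea boils down to publishing version objects: every change to an object's description becomes a new immutable member with its own timestamp. A minimal sketch, with invented identifiers and properties, in the spirit of what Olivier describes (CoGhent's real data follows the OSLO application profiles discussed next):

```typescript
// Two hypothetical versions of the same museum object, as immutable members
// of an event stream. Identifiers and property names are invented.
const versions = [
  {
    "@id": "https://example.org/object/123/version/2020-05-01",
    "dcterms:isVersionOf": { "@id": "https://example.org/object/123" },
    "dcterms:created": "2020-05-01T10:00:00Z",
    location: "depot",
    description: "Chair, designer unknown",
  },
  {
    "@id": "https://example.org/object/123/version/2021-02-15",
    "dcterms:isVersionOf": { "@id": "https://example.org/object/123" },
    "dcterms:created": "2021-02-15T14:30:00Z",
    location: "exhibition room 4", // the move is not lost, just appended
    description: "Chair, attributed to a Ghent workshop", // curator update
  },
];
// Replaying the stream up to any timestamp reconstructs the object as it was
// known at that moment; nothing is overwritten, so no previous state is lost.
```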
But of course this is not enough to be useful for the people, or for the professionals already working in cultural heritage institutions; it would be interesting to try and figure out what a cross-domain model for cultural heritage could be. So we're working with OSLO Cultural Heritage, which is a new application profile by Flanders, part of a bigger set of standards, and we're using that to describe, to model, our cultural heritage data too. But we also have to put something on top of it, called Records in Contexts, because we're also working with archives, and OSLO at this time does not really allow for describing archives. And the third big framework we're working with is IIIF, the International Image Interoperability Framework. Next slide.

By doing this, and by publishing a data stream of objects that contains both the texts coming from a standardized view and all the possible reproductions, you can query them, authenticate and reuse them. We're also starting several, let's say, streams of funding in the project. One of them is a financial scheme where we have 200,000 euros to subsidize, let's say, start-ups to reuse the data and make new applications; and we're also starting a cultural data lab where we want to explore, together with cultural heritage professionals and end users, what we can do with that data. Which brings us to the next slide: this is the technical architecture we're using. I'm not going to go really deep into this, but as you can see, it has two major components. The data comes from the CMS — a collection management system where they register all the data about an object — and then we also have all the images, the reproductions that depict the object. We are trying to put them all in a data stream, which we will then publish via a triple store, in this case Virtuoso, and from there on people can use SPARQL queries to query the whole dataset — both the data originating from the cultural heritage professionals and that from the citizens of Ghent — and we try to encourage reuse from there. And we will also build an immersive room that will feed on the data in the event stream, as a demonstration of part of the technology we're using. So, thank you, that was my presentation.

All right, cool, Olivier. So there is a big project, the CoGhent project, that by summer is going to publish a lot of collections — really the raw data, the core data of the museums in Ghent; five museums, I think, are going to be published as open data there. This open data will be linked data thanks to OSLO Cultural Heritage. OSLO is Open Standards for Linking Organisations in Flanders, a data standardization initiative; they set the standard for cultural heritage, and it will be used to make sure that everyone can again create derived services and derived indexes on top of these datasets. And also within the CoGhent project — which is really a big project with a lot of different partners — linked data event streams will make sure that everyone always has the latest version of the data, and that they always have all the flexibility they need to do anything they want with the data. This is really important, and I think that's really the driver behind every linked data event stream. So thanks a lot, Olivier, for that presentation. We're a little bit behind schedule due to the technical difficulties we had; I hope we will be spared from technical difficulties in the next presentations. The next speaker is Britt Lonneville. Britt, are you there? Yes, I am. Super. Can you try to share your screen, and let's hope it goes more smoothly than with Olivier. Are you looking for the buttons, or did you find them?
No, I was going to... It works. The floor is yours, Britt. Ta-da, technology. So, good afternoon everyone. My name is Britt Lonneville and I work at the Flanders Marine Institute, also known as VLIZ. I'm part of the data centre, working on the Marine Regions project, and today I'm going to present to you the Marine Regions gazetteer. Let me first introduce you to our wonderful team: we have Lennert Schepers, our team leader, and myself — we are the two geographers of the team; then you have Salvador and Patricia, the two biologists of the team; and finally there's Bart Vanhoorne from IT, who supports us in every way he can with the IT-related stuff.

Now, what is Marine Regions all about? Let me try to explain it by using a very recognizable case: Mora moro, which is a deep-sea fish, and you can find it around Archimedes Seamount in the Mediterranean Sea. Now let's say you went out on a dive and you saw quite a lot of Mora moro, and you want to talk about this to your American colleague. So you call him up and you say: man, I saw Mora moro around Archimedes Seamount. And he starts laughing his ass off and asks whether you were on drugs during this dive, because there is no way Mora moro can be found around Archimedes Seamount, and you're definitely mistaken. So what was your crucial mistake here? Well, you did not go to marineregions.org. Because if you had gone to Marine Regions, you would have seen that there are actually two Archimedes Seamounts: there's an Archimedes Seamount in the Mediterranean Sea, and there's an Archimedes Seamount in the North Pacific Ocean.

So what is Marine Regions, or how does Marine Regions come into the story? We try to improve access to, and clarity of, marine georeferenced place types, names and areas. We do this by giving a geo-object an MRGID, so that you can clearly see the difference between Archimedes Seamount A and Archimedes Seamount B. That's not the only thing we do. We also provide you with the coordinates, of course: a latitude and longitude, a bounding box if that's available, and if there's a WMS link to the data, we can even show it on a small map. We're also going to give you a little bit of contextual information through the place type: this will tell you, is this a seamount, is it a sandbank, or are you looking at something completely different? And of course we give it a name — and now, two objects can have the same name, but one object can also have multiple names in multiple languages, and we store all of this information in our database as well. And of course we also provide you with the source: where did we get this information? Marine Regions integrates a lot of other authoritative gazetteers, such as the GEBCO Gazetteer of Undersea Feature Names, but we have also created some datasets of our own, such as the maritime boundaries geodatabase containing exclusive economic zones and territorial seas; we have included all that information in the Marine Regions gazetteer as well. And finally, a very important feature of our gazetteer that I have not talked about yet: the relations. We have a hierarchy in our gazetteer, with parents and children, so we link objects to their parents, and in this way you can easily go through the hierarchy and learn about Marine Regions objects. A hypothetical sketch of such an entry follows below.
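Pulling those elements together, a single gazetteer entry can be pictured as the hypothetical record below. The field names are invented for illustration, but each one corresponds to something Britt lists: the MRGID, the coordinates, the place type, the multilingual names, the source, and the parent relation.

```typescript
// Hypothetical shape of one Marine Regions gazetteer entry, for illustration
// only; the real services expose richer records.
interface GazetteerEntry {
  mrgid: number;                               // the unambiguous identifier
  preferredName: string;
  altNames: { lang: string; name: string }[];  // many names, many languages
  placeType: string;                           // "Seamount", "Sandbank", ...
  latitude: number;
  longitude: number;
  boundingBox?: [number, number, number, number];
  source: string;                              // where the information came from
  partOf?: number;                             // MRGID of the parent in the hierarchy
}

const archimedesMediterranean: GazetteerEntry = {
  mrgid: 1234,                                 // invented value
  preferredName: "Archimedes Seamount",
  altNames: [],
  placeType: "Seamount",
  latitude: 35.6,                              // invented coordinates
  longitude: 21.1,
  source: "GEBCO Gazetteer of Undersea Feature Names",
  partOf: 5678,                                // e.g. the Mediterranean Sea entry
};
```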
Now, how can users access this data today? What is the current state? There are several ways to access this data. You can just go to our website, to the search page; you can browse through the hierarchical browser, so through the parent-child relations; for our most important datasets we also have OGC web services, WFS and WMS; we have REST services on our website; and then there is also the Marine Regions R package, mregions, if you want to check that out. So that's the current state. But as we have heard already from the people presenting before me, we can do more for our users — and our users include, among others, big biogeographic databases such as WoRMS or EurOBIS, which are using Marine Regions. So, in order to go one step beyond what we have been doing so far, we are currently involved in a project with the team of Pieter, with Harm Delva, and also with the VLIZ open science team of Marc Portier, in which we try to open up this data even more and make it even less ambiguous by — yes — opening it up as linked open data. Basically, this is a six-month project; we are currently at the end of month number two, and what we have done until now is link all our data to vocabularies, to make sure that it's very clear what we are describing — so that everybody, and especially divers in the Mediterranean Sea, can clearly explain to their colleagues where they found their fish. Thank you very much for having me, and I hope this was clear for everyone.

Thank you a lot, Britt, for the very swift presentation. I already forgot the name of the fish, but I really liked the example you gave. Although I have a hard time picturing myself scuba diving in a warm Mediterranean atmosphere somewhere; that's something for post-corona to look forward to. I find these three presentations really interesting: the first presentation was about address registries and topographical registries, the second one was about cultural heritage, which is something completely different, and now the third presentation was again about something completely different — and still, these are all just collections of objects that can be managed as a linked data event stream. And this becomes really interesting: we can start to build tools that just work across all these things at the same time. And speaking of a tool that works on top of these three different things and is generic in that sense — that's something Brecht is now going to present.

Brecht's screen should be loading. Can you see my screen? Yes. So, I am Brecht from IDLab Ghent, and I'm working on the linked data event streams client — the LDES client, in short. So what does this do? This is being made in the CoGhent project: like Olivier already said, you have a few museum partners in Ghent that are publishing data, so these are the data providers, and what you want is to build services on top of that. For example, Olivier wants to build a dashboard to see what the status of the institutions is: how much data they have published, whether their mappings are correct, etc. Another service or application is the enrichment application, where you want to add a link in the object metadata to the image itself, or to a presentation manifest following the IIIF specification that describes how the image should be displayed. Another application would be a text search index, so that users can easily find objects by typing in a search bar. And of course, to do this, you need to harvest data from these institutions into your application, and for that I'm creating the LDES client, which is simply a harvester of linked data event streams, so that you can copy the data into your own database system. Now, you may have been wondering: don't we have OAI-PMH — difficult name —
to do that? It stands for the Open Archives Initiative Protocol for Metadata Harvesting, which sounds exactly like that — and indeed, it does similar things. You have a data provider with metadata and a repository, and on the other side you have a service provider running a client-side application called the harvester, which performs certain requests over the web and receives XML-encoded metadata about the things. To do this it has specific request types, for example the verb ListRecords: give me the records from after this time. Okay, we have this, but what do we need now? This OAI-PMH thing was invented in 2001 — and, yeah, linked data didn't really exist yet at that time — so it really needs an update to be more performant. For example, it works on XML; nowadays people don't use XML that much anymore — it still exists a lot, but we have JSON now, and we have linked data coming up, like JSON-LD, so we need to go a step further there. Also, it uses hard-coded requests, while with linked data event streams you have a more hypermedia-driven approach: the client needs to follow relations to new documents, to new nodes, and these relations can be geospatial or time-based, so the client and the server become more loosely coupled, and you can even use indexes to publish the data. Also, OAI-PMH is focused on retrieving the latest version of an object, while with event streams you maintain all the versions yourself and choose when you will drop old versions. For example, you could maintain only one year of data, and after that people should already have harvested it into their own archive — so you can use retention policies for how long you maintain certain versions. Also, on the server side, OAI-PMH works with a resumption token: every client is served separately with a token, and the server needs to process this token and then give that client the next objects. With event streams you have cacheable fragments instead, which makes it very lightweight for the server to host them. So all these ideas have evolved over the last 20 years and are now applied in the linked data event streams specification.

The LDES client is available on GitHub; you can find the link here. It's implemented with an actor-based architecture — Comunica, it's called, a new architecture we've been developing at IDLab: a query engine focused on querying different linked data sources. You can find the installation instructions in the actor-init-ldes-client package, and you can use it as a command-line interface or as a JavaScript library. So let's take an example: for the CoGhent project we have published data of the Design Museum at this link, and now I want to retrieve all of the data. What I can do is, for example, give a MIME type — I want to retrieve it in JSON-LD; then here is the context I want to apply to it, which can for example provide a translation into English; then I say I only want to harvest from this time on — after January 1st of this year, I want all the data from there on; and then the URL of your event stream. Now, to see this live in action, I've run it here in my command line with these parameters, and you see the objects of the event stream come floating in, so I can process them in my own system and create my own services on top. So feel free to try it out in your command line, I would say, and if you have questions, I would like to hear them afterwards. Thanks.
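For reference, the invocation Brecht describes would look roughly like the sketch below when using the client as a JavaScript/TypeScript library. The option names paraphrase what is shown in the demo (a MIME type, a JSON-LD context, a from-date, and the event stream URL) and are written from memory of the package, so treat them as assumptions and check the README on GitHub for the exact interface.

```typescript
// Hypothetical usage of the LDES client as a library; option names are
// assumptions based on the demo, not a verified API reference.
import { newEngine } from "@treecg/actor-init-ldes-client";

const options = {
  mimeType: "application/ld+json",                  // retrieve members as JSON-LD
  jsonLdContext: { "@context": ["https://example.org/context/en.jsonld"] }, // e.g. English labels
  fromTime: new Date("2021-01-01T00:00:00Z"),       // only harvest members after this
};

const client = newEngine();
const stream = client.createReadStream(
  "https://example.org/designmuseum/feed",          // invented event stream URL
  options
);
stream.on("data", (member) => {
  // each member "floats in" and can be loaded into your own database
  console.log(member);
});
```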
Thank you very much indeed. Really nice to see the history from OAI-PMH towards linked data event streams: we indeed have the same idea in mind, but our solution is more tailored towards HTTP and linked data, and really brings it into 2021 instead of being a 2001 version of the idea. Really interesting work. Just a side note: this is alpha-version code; we are sharing it for the first time at a conference, so please open a lot of issues when you encounter problems with it. I pasted the link to the linked data event stream client in the chat. Please forgive us if something goes wrong, open issues, and we will try to fix them as soon as possible — and give it a second chance if you can. With openness comes the idea that you cannot publish early enough, but on the other hand, you only have one chance to make a first impression, and these two things need to be balanced; well, I try to balance them by adding that side note. I like to release early, but then please forgive us if there are still mistakes in it.

Good. The next presentation is, again, from our northern neighbours. Can you hear us? Yes. Perfect. Feel free to take control of the presentation. This is going to be hard, of course, having the previous experiences in mind. Yeah, it appears to be an IQ test or something — by which I don't mean that... I think... there it is. Share. All right, the famous question: can you see my screen? At this moment we cannot. Oh wait, okay, it's loading. All right, in a few seconds... there it is. Yes, there it is. Perfect.

I'm involved in the Dutch Digital Heritage Network, and I'm going to talk about autocompletion, which is a very specific use case for using — or potentially using — linked data event streams. So this is not a talk from the perspective of a data publisher, but rather of a data consumer. First off: the Dutch Digital Heritage Network is a partnership founded in 2015, a partnership of Dutch cultural heritage institutions — in the Netherlands, of course — and our aim is to develop a system of national facilities and services for improving the visibility, usability and sustainability of digital heritage. One of these national facilities that we are working on is called the Network of Terms, and I'll explain it in a little bit. But first, a little bit about cultural heritage. This is a painting, a rather unknown painting; I have yet to meet the first person who knows who painted it. It's called The Drawbridge in Nieuw-Amsterdam, and it was actually painted near the place where I was born, which probably explains my rather grim nature. If I show the next painting, then you'll probably know who painted this one. This one is called Almond Blossom, and the creator of this painting, of course, is Vincent van Gogh.

However, there is a problem. Two institutions in this example own a painting by Van Gogh, on the left-hand side and on the right-hand side, and both institutions use different terms to refer to the same information: in this case, they use different notations to refer to Vincent van Gogh, for instance written as "V. van Gogh" or "Vincent van Gogh". This, of course, is a problem for findability, because people know that "V. van Gogh" and "Vincent van Gogh" are actually the same person, but machines do not, unless you tell them. So, of course, we can encourage institutions to use the same terms to refer to the same information — for example "Vincent van Gogh". Easy, right? Yes, but then there's another problem: the problem of identity. Of course, Vincent van Gogh as we know him is the famous painter, but there's also another Vincent van Gogh, his nephew, who has exactly the same name. So how can you distinguish between these two persons? Of course, linked data offers a solution: we can use
identifiers — URIs — to refer to this one creator of these paintings. These URIs stem from what we call terminology sources, and here are a bunch of them. "Terminology source" is basically an umbrella term for thesauri, classification systems, reference lists, authority files, etc. There are national terminology sources, but also international ones; you probably know a couple of these. So we want to encourage cultural heritage institutions to start using terms from terminology sources — especially to start using URIs to refer to terms from terminology sources — in order to improve the findability of their information.

But then new problems arise. Of course, user applications such as collection management systems can connect to the systems of these various terminology sources, but those systems use different API endpoints — for example a SPARQL endpoint, or some custom web API — and these terminology sources use different data models for exposing their information. This makes it rather hard for user applications to connect to these sources, because they have to understand the various API protocols and the various data models of all these terminology sources, and this of course hinders the ease of connecting to them.

So we conceived a solution, and this solution is called the Network of Terms. This is an application that is basically a gateway between user applications, such as collection management systems, and terminology sources. User applications do not have to connect to the systems of terminology sources anymore; they can connect to the Network of Terms, or rather to its API. A user application sends a search query to the Network of Terms; the Network of Terms then repackages this query and sends it in real time to one or more terminology sources, in parallel; it then collects the results of these individual sources, repackages them into one result set, and returns this to the user application. And this one result set contains the matching terms, including, of course, the crucial URI that a collection manager can then store in his user application, his collection management system. The Network of Terms offers a uniform API, so that user applications do not have to know the specific API protocols and data models of the various terminology sources. The gateway behaviour is sketched below.
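That gateway behaviour can be sketched in a few lines: fan the query out to the selected sources in parallel, wait for the answers, and merge them into one result set. All names and shapes below are invented for illustration; the real Network of Terms API will differ.

```typescript
// Hypothetical sketch of the Network of Terms gateway: query several
// terminology sources in parallel and repackage the answers into one result
// set. Source adapters hide each source's own protocol and data model.
interface Term { uri: string; label: string; source: string }
interface SourceAdapter { name: string; search(query: string): Promise<Term[]> }

async function searchTerms(
  query: string,
  sources: SourceAdapter[]
): Promise<Term[]> {
  // Send the query to all selected sources at the same time...
  const perSource = await Promise.all(
    sources.map((s) => s.search(query).catch(() => [] as Term[]))
  );
  // ...and return one merged result set, crucial URIs included.
  return perSource.flat();
}
```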
The API is working perfectly, but it's rather hard to show it to collection managers or less technical people, so we also developed a so-called demonstrator: a rather simple visual interface that you can use to fill out some kind of search query — for example "Vincent van Gogh" — selecting one or more terminology sources. When you hit the search button, this demonstrator calls our own API, searches the sources and presents the results. This is also working perfectly, but there's one problem — one big problem. Users do not want to fill out an entire search query, in this case "Vincent van Gogh". Users are lazy; collection managers are lazy. They want some kind of Google-like experience, where matching terms pop up instantly as they start typing — in this case, Vincent van Gogh.

So how can we add this feature, this functionality, to our Network of Terms in a way that makes sense, that fits into our architecture? We could of course put in place some kind of autocompletion server, which would harvest data from all the terminology sources and store it in some kind of index; user applications would then be able to query this autocompletion server. This, however, doesn't really fit into our architecture, because it's not a decentralized solution: it's a centralized solution, where you have to collect all the terms from all the terminology sources into one index. It's also not a really scalable solution: in this picture there are just three terminology sources, but there are a lot more, and the number is ever-growing, so our autocompletion server would have to grow too. And it's not a very lightweight solution: our current Network of Terms implementation is lightweight because it directly queries the sources, but this autocompletion server would force us to maintain the data that we harvest, and to re-harvest it periodically in order to keep the information current.

Luckily, we stumbled upon something called TREE — and of course this has been explained before — and TREE offers a solution for our autocompletion problem. So we teamed up with IDLab, with Pieter and Harm Delva, and asked them to develop a prototype for us that demonstrates the autocompletion functionality using the TREE vocabulary. What it basically looks like is this: there is a terminology source, and this terminology source publishes its terms, its data, as RDF. This is rather common, so this is not really new; this is something that most sources in our network already do. This data gives us the opportunity to introduce a new component, the so-called fragmenter. The fragmenter grabs the data and creates a lot of tiny fragments of the original RDF data file. For autocompletion, this basically means that a fragment — for instance, in the case of Vincent van Gogh — consists of the "v" of "van Gogh"; this "v" has a relation with the "a" — so "v", "a", "n", etc. — and in this way you can build up an entire tree of relations between the characters of terms. In the end, all the parts of terms have their own fragments, and such a fragment is basically just an RDF file — a very small RDF file, but an RDF file nonetheless. Then there's a fragment server, because all these tiny fragments by themselves do not do anything: you need a server to make the fragments accessible — a fragment server.

Interestingly, the source data must be provided by the source; if you cannot do that, then you're not a really good data publisher. You can of course provide some kind of RDF dump. However — and this is of course the interesting part — you could also offer your data as some kind of linked data event stream, so that it becomes a continuous stream. That would fit into a pipeline where, as soon as updates arrive from a terminology source — for instance, a new term has been added — the fragmenter can create or recreate a fragment for the new term and make it accessible for the fragment server to serve to whoever is interested in these fragments. The other components, the fragmenter and the fragment server, can either be provided by the terminology source, if the source has the resources for maintaining this kind of infrastructure, or a service provider can do this — and this fits rather well into the picture that Pieter painted at the beginning, the tier approach: there's something that you must do yourself as a data publisher, and there are things that you can do, but that other people or other parties can also do for you.

This is all good, but there's still no autocompletion functionality, right? We have a bunch of fragments and a fragment server, but that's it, so we need more than this. The fragment servers are very, very simple servers: they just serve plain RDF files. To offer the actual autocompletion functionality, we need an
autocompletion client, and this is a smart client. It knows how to request fragments from one or more fragment servers; it knows how to interpret the data inside these fragments; and it knows something about autocompletion, for instance about ordering the results, ordering the terms in a way that makes sense for autocompletion. Interestingly — tapping into what Brecht already mentioned — this autocompletion client, too, has been developed using the Comunica framework. So our autocompletion client is not just a simple client: it's really a small query engine that understands the vocabulary that we use for making autocompletion work. At the end, there's of course a user application — or, as Pieter mentioned in his talk, an awesome application. This is the part where the user interface resides; this is where end users, for instance collection managers of cultural heritage institutions, work; and this is where they fill out their search query and where the autocompletion functionality starts to kick in. The user application operates the autocompletion client, which then queries the fragment server or servers — and there we have it: autocompletion functionality.

But what does it look like in practice? We also asked IDLab to develop not only a prototype of the fragmenter and the fragment server, but also a demonstrator, a visual interface, to show that it actually works. And this is it: you can go to this URI and try some terms yourself. The demonstrator currently searches two quite different sources, for instance on cultural heritage and on the Second World War. So this is my query, "fir", and it results in terms like "fir", "first", "fires", "firewolf", etc. Is there something special about this demonstrator? No. This is exactly the functionality that you would expect of an autocompletion function: a search bar and autocompletion results. From an end-user perspective, this is precisely what we want. Underneath it all, however, there's this fragmenter, there's the TREE specification, and there's — hopefully, eventually — a linked data event stream in place to make this work. And it fits perfectly into our architecture of having a decentralized approach, with linked data as the core method for publishing and using data. In essence, the client descends the character tree as the user types; a rough sketch follows below.
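A rough, hypothetical sketch of that client-side descent. Each fragment is a small, statically served file, reduced here to "the terms on this fragment" plus relations to deeper fragments keyed by a longer prefix; the shapes are invented, only the idea mirrors the prototype.

```typescript
// Hypothetical autocompletion over a prefix-fragmented collection.
interface Fragment {
  terms: { uri: string; label: string }[];
  relations: { value: string; node: string }[]; // prefix -> deeper fragment
}

async function autocomplete(rootUrl: string, typed: string) {
  const q = typed.toLowerCase();
  let url = rootUrl;
  let depth = 0;
  while (true) {
    const fragment: Fragment = await (await fetch(url)).json();
    // Descend into the deepest fragment whose prefix matches what was typed.
    const next = fragment.relations
      .filter((r) => q.startsWith(r.value.toLowerCase()) && r.value.length > depth)
      .sort((a, b) => b.value.length - a.value.length)[0];
    if (!next) {
      // Deepest matching fragment reached: these are the suggestions.
      return fragment.terms.filter((t) => t.label.toLowerCase().startsWith(q));
    }
    depth = next.value.length;
    url = next.node;
  }
}
```

Because deeper fragments are only fetched as the user types more characters, the server never does more than serve small static files, which is exactly what keeps the approach lightweight and decentralized.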
So what's next? This is a prototype, so we need to test it thoroughly. For instance, we want to measure performance — especially user-perceived performance: is it good enough for users to use? Another perspective is the exact quality of the autocompleted terms, so we need to look at the contents of the terms that are found. If this is all successful, then we would like to bring it into production. We don't know yet when — but, thinking of Erwin here, probably on April 1st; we'll have to wait and see. That's it, thank you very much. If you would like to know more about this autocompletion functionality, please contact us: tech at netwerk.tv. Thank you.

Awesome, thanks. So we've now seen three datasets being presented: the address registry and topographical registries, the cultural heritage dataset, and the Marine Regions gazetteer. And we've seen two clients being presented: one that just takes a copy of the entire linked data event stream, and a second client that does autocompletion across a fragmentation. I've put all the links you need to recreate this for your own datasets in the chat. All the tools we create are — as they should be at an Open Belgium event, of course — open source, and all the datasets are open data, so please try to get your hands dirty. I'm an academic, so I'm afraid that sometimes what I say is a little bit too early for market adoption, but I promise you this is pretty close to market adoption — so if you want to be one step ahead, start playing with these tools. The next presentation is again another dataset use case. So far we've mainly talked about base registries, datasets that really need to be reused as-is and that are the reference for many other datasets, but I think Olivier also already mentioned time series, and we should try to find the overlap with time series as well. This is exactly what Philippe Michiels from imec is going to hint at — right, Philippe?
You're on mute. That's another one for the bingo. Let's see if I can pass the IQ test... if you see something, it is loading... I think... yes, you passed the IQ test. Great.

So yes, my name is Philippe Michiels and I'm working for imec, but we are also working together on digital twins with the Flanders Information Agency, and I wanted to explain in this presentation how we see linked data event streams supporting digital twins. For those of you who don't really know what digital twins are: they are basically decision support systems for city planners and policy makers; that's how we look at them. So it's more than just the fancy visualizations that you typically see in demos. The idea is that we try to understand the dynamics of cities by merging data streams and by applying models to those data streams to perform simulations. For example, if we want to implement circulation plans in cities, we may want to simulate the effects of several different circulation plans, see what the effects are, and then choose the one that best fits our criteria.

The thing with digital twins is that the computational models running behind them are data-hungry machines, and what we basically want to do is correlate and process data from different sources and of a different nature. Peter was hinting towards it: for a large part this can be sensor data, which is very important for us, but it can also be other data; for instance, I will show a case where demographic data plays a role as well. And you want to correlate all of this, so you will need something else as well.

So what is the concept of what we are doing in the DUET project, a European digital twin project? The idea there is that we try to see data sources, models, and visualizations as components that can be fitted generically to a central data broker. That's the concept we are exploring. Now, that is all very nice, but if you want to connect these data sources, and they are all in different formats and their semantics are not well defined, that is easily going to bring us into problems. It is going to be an integration nightmare, and it is going to be very costly to adapt these data sources to make sure they work with our models; and since we want to connect any data source with any model, that makes it difficult.

So back to the fancy demos: the fancy demos of digital twins are what you see, the tip of the iceberg, but below that there is a whole lot going on, and a big part of that is actually getting data to be interoperable, breaking the silos typically found in these data sources, addressing issues of data quality, and doing things like data analysis. That is our biggest problem, and we found it out the hard way. One day we had a very good idea, and we naively tried to figure out the quality of life in a certain part of the city. Street by street we wanted to assess the quality of life, and we had a kind of equation that would take into account different aspects of the city: are there shops nearby, is there public transportation in your neighbourhood, how busy is the street you are living in, are there parks, et cetera. Very, very diverse data sources. Most of these data sources were available somehow, in some form, but usually they were not semantically well defined, and they were not published as open linked data. And we quickly found out that it was way too hard to try and manually
hook up every part of every data set to each other; it was almost impossible. And this was really typical for a lot of use cases that we tried to implement using digital twins. With DUET we have the intent to create a digital twin platform for Flanders. So the question is: where is the data in Flanders? There is quite a lot of data. There are the authentic data sources hosted by the Flanders Information Agency, but there are also tons of other data sources spread across countless organizations. Most of these data sets are unfortunately not published, so they are not accessible, and even when they are accessible, they are not always interoperable. That is the main issue we face, and onboarding them one by one just does not scale; it is too expensive. So we were somewhat disillusioned: how do we go about this? There is lots and lots of data, but it is way too hard to just onboard it and use it in digital twins.

What we saw earlier today is really giving us hope, giving me hope, and I was really enthusiastic to see it: it is actually very easy to implement the linked data event stream standard. I think linked data event streams can help in many ways. First of all, they remove obstacles that we typically face when we try to publish open data. We have seen that the standard is very easy to implement, and it is a strategy where we can separate publication from the actual management of the data, which I think makes it much more feasible to achieve: the requirements are fairly limited, well written, and easy to implement.

What also helps is that it is a more robust approach to building time series around sensor data. The typical approach today for keeping time series of sensor data is to subscribe to the sensor event stream, but the problem there is that you need to specify in advance what kind of data you want to keep historically. With linked data event streams, if we onboard everything as event streams, not only the sensor data but also the context, we have a robust way of keeping the history of everything, which is essential in digital twins. We can make sure we can time travel, not only through the values of the sensors but also through the context of the sensors and the context of the digital twin itself. Because if you are doing evidence-based policy and you want to validate the results of a simulation, you want to be able to go back in time, not only in terms of the results but also in terms of the surrounding conditions. If you run an experiment with a circulation plan in a city, but you don't have the situation of the actual streets as they were at the time you ran the simulation, then you cannot recreate the experiment and the results are not worth much.

Another good thing is that we can have reusable building blocks where we can do things like reconciliation, which is essential: we want to be able to link elements of one data stream to elements in another one, so a uniform way of referring to records and linking to other sources is essential to us. And of course there is the possibility to create derived data streams, where we can subscribe to the raw data stream, apply calibration models, do aggregations, and apply anonymization; these are all essential tools for us in building digital twins.
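[Editorial aside: to make the time-travel idea concrete, here is a hedged sketch of what a versioned member could look like. The property choices (dcterms:isVersionOf plus a creation timestamp) follow common LDES conventions, but the URIs and exact model shown are assumptions, not the platform's actual payload.]

```typescript
// Illustrative LDES member: one immutable version of a street object.
// Hypothetical URIs; dcterms:isVersionOf links the version to the object.
const streetVersion = {
  "@id": "https://example.org/street/123#2021-03-01T00:00:00Z",
  "@type": "Street",
  "dcterms:isVersionOf": { "@id": "https://example.org/street/123" },
  "dcterms:created": "2021-03-01T00:00:00Z",
  "rdfs:label": "Stationsstraat",
};

// Time travel then means: for every object, take the latest version whose
// timestamp is not after the moment you want to rewind to.
function versionAt<T extends { "dcterms:created": string }>(
  versions: T[], moment: string,
): T | undefined {
  return versions
    .filter((v) => v["dcterms:created"] <= moment) // ISO strings sort correctly
    .sort((a, b) => a["dcterms:created"].localeCompare(b["dcterms:created"]))
    .pop();
}
```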
A good example of such a reusable building block is address matching, an existing service of the Flemish government that takes addresses as they were entered into some system and tries to resolve them to an actual address record, which is very similar to the demo we just happened to see, where we were looking for Vincent Franko in all its possible spellings. What we basically want to achieve is to not just have data sources, but eventually to have everything as linked data sources, and if all of these data sources can be published as linked data event streams, that is going to speed us up considerably. So I'm very hopeful, and also very thankful to everyone who is working on that. Thank you.

Thank you very much, Philippe. One thing I particularly liked in your presentation is that you said it is really important, even if you want to prove something later on towards your government... say you got a certain certificate: because of that street, and this artwork that at that moment was at the museum, that's why you get a certificate that you actually visited that artwork and that street - just to combine different data sets here - then you still want to be able to rewind your data set, to go back in time and prove that that certificate is correct, or was correct at that time. So that's indeed... it's called data traveling capabilities... time traveling data... time traveling capabilities, I'm sorry. And it's considered essential in big data processing too, and certainly with digital twins, where you have forecasting models: based on different data sets you make a certain forecast, and if later someone comes to you and says, why did you do that, in hindsight look what happened, we could have predicted this, then you can say: could we? Okay, let's go back and check. That's also a very nice aspect of LDES, I think, because it includes archiving specifications, so you can be specific about what your intent is in terms of retention of data. That is really important.
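[Editorial aside: the archiving remark above can be illustrated with a hedged sketch. In the LDES specification, a view of the stream can carry a retention policy stating how long members are served; the shape below loosely follows the spec's duration-based policy, but treat the exact terms as an assumption and check the specification itself.]

```typescript
// Hedged sketch of a view description with a retention policy: this view
// only promises to keep members from the last ten years, while another
// (archival) view could keep everything. Terms loosely follow the LDES spec.
const timeBasedView = {
  "@id": "https://example.org/stream/by-time",
  "@type": "tree:Node",
  "ldes:retentionPolicy": {
    "@type": "ldes:DurationAgoPolicy",
    "tree:value": "P10Y", // an xsd:duration: keep roughly ten years of members
  },
};
```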
Yeah, okay, great. So let's move to the last presentation, and then there is some room for questions, I hope. It's Annalies de Kraan. Are you with us? We had some technical difficulties in the beginning; I hope you were able to resolve those by now. Hi, we can hear you, that's already a good thing. Yeah, I had some difficulties, but that's solved right now; I should be able to share my camera and my screen. No webcam for me, though; I'm on the laptop of my daughter, so I think we blocked that earlier. Otherwise, Peter, maybe you can share your screen and we can switch to that. I'll give it one more... no, I of course need to be able to open a PowerPoint presentation, and I'm on a computer that's not mine. Peter, if you want, I can show the presentation. Okay, yes, that would help me. Okay, I also have it ready. Astrid, do you want me to do it instead? I'll just try to do it; it was loading, it said... I think I now share my screen. Yeah, okay, perfect. Okay, thank you. So I will just shout when we switch. Perfect. Yes, thank you so much for this collaboration.

So thank you, Peter, and all the colleagues, for having us here as Informatie Vlaanderen, now Digitaal Vlaanderen; maybe Digitaal Vlaanderen is the new name. What I really want to share with you today are the experiences we have and the capacity building we did at Digitaal Vlaanderen. So I won't go into technical details, but I'll give you a short tour of our learning curve around linked data event streams, and you will also see the link with the digital twin project and the items Philippe already mentioned in his slides. So you can move on to the next slide, please.

Yes, so maybe I just wanted to point this out, because Digitaal Vlaanderen is the very recent new name of Agentschap Informatie Vlaanderen, or the Flanders Information Agency for the international community. It's actually the same organization, with some more IT departments now involved, and it's now called Digitaal Vlaanderen; but we're actually the same people. Okay, next, thank you.

Within Digitaal Vlaanderen we have different programs working on the digitalization of the Flemish government and its stakeholders, and within Digitaal Vlaanderen there is the program I work for, the Authoritative Data Sources program. Within this program we have built up experiences and insights about linked data event streams, and that's what I would really like to share with you today. So we have one aspect on the smart data track, then linked data event streams for the building registry and the address registry, really a short thing about OSLO, and the linked data event streams for the large scale reference database, also called GRB or Basiskaart Vlaanderen for the Dutch-speaking people.

First of all I want to zoom out a little bit, from open data to smart open data. We started the smart data track, I think, half a year ago, to open up our scope, with an open mind and an open, innovative thought, on how we can manage all the data and the data streams that are coming at us. Certainly in the smart city landscape, but also in governments and in evidence-based policy, data is becoming more and more the foundation for designing society and the future of the citizens and policymakers of tomorrow; that's also the link to what Philippe just told us. And we have multiple data sources: we have the slow-moving data as we know it, which is the data from, for example, the base registries and the authentic data sources, and we also
have new data streams, like the fast-moving data from sensors, or real-time updates we need from different data sources. And those things are very challenging: those huge amounts of data, how can we cross those data over the different domains, how can we link all those data with their context, how can we make data more reusable in that context, how do we deal with scalability and the once-only principle, and all those kinds of challenges? Can you skip... So that's why we have what we call the smart data trajectory, where we really want to focus on the findability, accessibility, interoperability, and reusability of data, and on its ease of use. For that, linked data is key for us. What we understand by smart data is actually that it can make the connections between the different data sets: if you consider a smart region, a smart city, or something like that, you need to be able to connect the objects in the field, like a road, a building, an address, with potential fast-moving data sources like sensors, and the smoother that link, the more information you can extract from it. This can help or enable solutions in the fields of mobility, healthcare, and environment. So that's the zoom-out I wanted to make; and to have all this, qualitative authentic data sources are really fundamental, so that these data are available to link everything to. Okay.

So, in our learning curve, step one: last summer we started a prototyping phase on linked data event streams and linked data fragments. And, to make the story complete, it was together with the team of imec IDLab, but also some enthusiastic students from open Summer of Code, and of course our Flanders Information Agency, Informatie Vlaanderen. We started a prototype on linked data event streams, and... if you know... thank you, I'm showing it... I also wanted to make clear to the community at Open Knowledge Belgium that we did this at open Summer of Code, so the foundations of linked data event streams come from within Open Knowledge Belgium itself. Okay.

So: prototyping linked data event streams for fast- and slow-moving data. The first hurdle we wanted to take was of course: how can we manage sensor data, so fast-moving data? Are we able to publish it in a sustainable way, what knowledge do we need for that, and how do we relate it to the slow-moving data sets? So some capacity building was needed, and we started with the prototype with the team of Peter on linked data event streams. It also gave us some really interesting insights on the architectural part - you will see it later on in the presentation; it really is the blueprint for something big we're doing today - and really valuable insights on scalability, load, and the possibilities on the query side for re-users, et cetera. What I at least found really valuable and learned from the prototype was that we could publish a fast-moving data set and a slow-moving data set - sensor data, on which we were quite immature, and also our base registry, the address registry - as linked data event streams, and in quite a similar way: we didn't have to do anything exotic, it could fit both types of data sets. And there were some really interesting insights on the linked data fragments side, where you can have your query module and query efficiently across different data sources; that was really interesting for me.
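[Editorial aside: a hedged reconstruction of what "nothing exotic" means in practice. A fast-moving sensor observation and a slow-moving address version can be published as members of the same kind of stream, differing only in their body; the URIs and the exact OSLO-based model are assumptions for illustration.]

```typescript
// Two illustrative LDES members with the same envelope style:
// a fast-moving sensor observation (immutable by nature) ...
const sensorMember = {
  "@id": "https://example.org/observations/42",
  "@type": "sosa:Observation",
  "sosa:resultTime": "2021-03-04T10:15:00Z",
  "sosa:hasSimpleResult": 17.3, // e.g. NO2 in µg/m³ (illustrative)
};

// ... and a slow-moving address, made immutable by versioning it.
const addressMember = {
  "@id": "https://example.org/address/99#2021-02-01T00:00:00Z",
  "@type": "Address",
  "dcterms:isVersionOf": { "@id": "https://example.org/address/99" },
  "dcterms:created": "2021-02-01T00:00:00Z",
};
```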
The second step was then the opportunity we had to step into the SEMIC project with Peter's team, to turn a few of our authentic data sources into linked data event streams, and you can see them already up front: the building registry, the address registry, and the large scale reference database. It's all based, of course, on the semantic interoperability model of OSLO. So, really a quick overview: for the building registry and the address registry we already work with event streams, but in this SEMIC pilot we move... okay, thank you... we move to linked data event streams. The ongoing work right now - I'm not going to go into detail - is that the linked data event stream is actually ready to be published; it is now in a test phase, as a projection on top of the raw data set. What is the link pointing to? Well, that... that should be an error, that's not a link. Yeah, okay, it's the Flemish URI standard. Okay, but I will put a link to the pull request that Dwight prepared in the chat; maybe you can copy it, please. Okay. And key in this realization of linked data event streams, for us and for our colleagues of OSLO, Open Standards for Linked Organisations, is the semantic interoperability model: the vocabularies and the application profiles - the URL is also on the web page - are key to achieving a linked data event stream for our data sources. We can switch to the next one. The standards being used for this realization are the adressenregister - I'm sorry, it's in Dutch - and the gebouwenregister, also available on data.vlaanderen.be. Move to the next, Peter.

Okay, and then we have the large scale reference database, also called GRB or Basiskaart Vlaanderen, which is a large-scale topographic map containing lots of information about buildings, parcels, roads, and so on. Today this is available as WFS, and the data set can also be downloaded as a dump; a colleague is still working on this one. You can move to the next slide. This is in the phase where we now need to run an OSLO standardization project to make it fit, and then we can move on with the linked data event stream implementation. So, quite brief, about this one.

And then step three. It's always a little bit bigger: step three is that linked data event streams and the concepts around them are really at the core of the new architecture that we are now going to propose to the policy makers in Belgium in our relance project. Relance is a really big thing in Flanders: we are now preparing - how do you call it in English - a note for the Flemish government in which we present our architecture and all our objectives and goals for this relance project, and really at the core of it we also use linked data event streams in our technical solution. This is actually a zoom-out of the relance project we are now working on, a sensor data platform, and what you see here are four silos, or four key components, of this trajectory. Why are we doing this? That's maybe the first question to answer... Annalies, you are muted all of a sudden, can you unmute? And she's gone. Okay, great. It was done automatically, but, as I was saying - because I'm really enthusiastic about the relance project and the sensor data platform we are going to build there - we really want to enable data publishing and data reuse for a bigger and brighter data flow of all kinds of data, and specifically sensor data. What we try to do is bridge the gap for the data suppliers and the data sources, also sensor data sources, to unlock them from the silo they are perhaps in, and, by moving them into the architecture we are proposing here, with open source building blocks, we want to enable these data to flow and
to be much easier to reuse by other partners, by new business models, that kind of thing. The relance block you see in the middle consists of all kinds of open source components - we sometimes also call it publication streets of components - with linked data event streams at the core, with which all those data suppliers can more easily publish their data, perhaps as linked data, and unlock it from the domain, the supplier, or the location where it originated. That's the first part. And - maybe not that relevant here in this meeting - we really focus on standards for all of this; we want an ecosystem that works with it, and governance on top. This is just an image of our functional architecture in more detail, but what I really wanted to show you is that linked data event streams are really at the core of our solution. It is a draft, but I think it's a beautiful way forward to unlock the data. Thank you.

Thank you very much, Annalies. I would like to ask all the speakers to share their webcams again, so that we are visible to the audience once more. Well, I think the really nice thing about your presentation, Annalies, is showing that this really was the beginning of linked data event streams last summer during open Summer of Code, but that it evolved, and we got more and more people on board along the way, and now it's really at the core of what we want to do regarding data management at the Flemish government. That will of course translate into multiple tools, generic tooling that will be created. And not to be underestimated: you also said that there are going to be two linked data event streams that we're going to start with - one, the address registry in Flanders, and two, the GRB in Flanders, the Grootschalig Referentiebestand, which is exactly the BGT in the Netherlands. And the address and building registry in the Netherlands is also going to be published this way, as is the address registry in Flanders. So this will cater for some really interesting cross-member-state use cases, and I'm really looking forward to seeing how two completely different backends with the same kind of data, but with slightly different data models - because the Netherlands has its own data models and Flanders has its own - can still be found later on, so that we can seamlessly query over the two data sets in parallel. If we can give a demo like that, that would be great. And if at the same time we can, for free, give a demo where we also query over marine regions, over the five collections from the museum in Ghent, over digital twin models, over... what else did we talk about? I almost forgot... all these different data sets, I think, will become queryable by the masses this way, and we will have an automatic ecosystem: when people set up a new intermediate index, it will be added to the ecosystem, and if they take it down, it disappears, but the ecosystem will still be able to recover, just fetching the right data just in time.

We have about two minutes left for questions. Luckily I don't see too many questions popping up in the chat, but if there are any questions, it is now or never... and I will only take one question, which was not according to plan, but still, I think we had an interesting session. Any questions in the chat? Three, two, one... feel free to immediately take the floor. Thanks, Peter, I have one question: how do you publish your data set information if you use linked data
event streams? For instance, how does LDES fit into the DCAT model? Erwin said something about /feed for publishing your event streams, but how can we discover your event streams? Yes, this is also part of the TREE CG specification: you can just go to the specification and read about DCAT compatibility, where that is mentioned. And indeed, I do see these collections being added as distributions of data sets; the DCAT model is catalog, data sets, and distributions, and I think the data set is, for example, the address registry, and the feed is then your specific distribution describing how to retrieve the data set. By also adding the metadata about the specific collection and the views created on top of it, you will have even richer metadata, which you can use in your DCAT catalog to say: I just want all the collections, or all the distributions, that for example use the property sosa:resultTime - and for me, sosa:resultTime would indicate that I have a time series, so I can use that. And, just as Geraldine also posted in the chat, the TREE CG specification indeed points to dcterms:conformsTo to indicate that you conform to the specification, so that you can also immediately find all the data sets that use this way of publishing data. Okay, thank you. Good, and that was immediately a question for me as well, so thank you very much.

It's six o'clock, and my kid is screaming downstairs for my attention, so I will not take more of your time. I want to wrap up by saying: dear speakers, please send your presentations to me if I don't already have them; I will make sure they are posted online so the audience can read up on them and click the links in your presentations. The video itself will also be shared on the Open Belgium website. If you in the audience want to get your hands dirty with linked data event streams: ping me, go to the GitHub repository, send us an email, or just immediately go to the specs and get your hands dirty; we will be more than happy to help you out if you bump into anything. And last but not least - or rather, penultimate but not least - I would like to thank my speakers: Brecht, Brits, Annalies, Philippe, Olivier, Erwin, and Wouter, who already left. Thank you very much for being part of this, and thanks for making linked data event streams credible: thanks to your implementations and your projects using it in the real world, it is becoming a reality, and not just some vague specification. So thank you very much for your enthusiastic presentations. And truly last but not least, I would like to thank the people from Clothe86, who have been great at providing support when things didn't always work out technically. With these final words: thank you everyone, and see you online with your next question about linked data event streams. Bye, thank you.
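[Closing editorial note: the DCAT answer above could look roughly like this. The catalog entry is a hedged sketch with assumed URIs; only the dcterms:conformsTo target is taken from the talk.]

```typescript
// Hedged sketch of DCAT metadata for an event stream: the stream appears as
// a distribution of the data set, and dcterms:conformsTo makes every data
// set published this way discoverable in a catalog. URIs are illustrative.
const catalogEntry = {
  "@type": "dcat:Dataset",
  "dcterms:title": "Address registry",
  "dcat:distribution": {
    "@type": "dcat:Distribution",
    "dcat:accessURL": "https://example.org/addresses/feed",
    "dcterms:conformsTo": { "@id": "https://w3id.org/ldes/specification" },
  },
};
```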