I'm Agis, and I'm the CEO of a company called Covenso. We are based in Belgium, already two and a half years old — relatively new, but with quite some adventures — and we are 15 engineers. I would say we are AI-driven software integrators, in the sense that we combine knowledge engineering with linked data technologies. Recently we have also started, very discreetly, some machine learning adventures with the University of Brussels. Our projects of the last two and a half years are around the labour market and legislation; these are the two cases I'm going to show later on. We have also started two experiments on digital badges, based on Open Badges and electronic portfolios, with some future wishes, let's say, for blockchain technologies, which is quite hyped.

In this perspective, I chose the heavy word "production-based" to differentiate ourselves from research data. We are not going to show things that have to do with research data; we are going to talk about linked data and open data applied in production-based systems. Unfortunately, or fortunately, the private sector is not very keen on open data, so don't expect any industrial or manufacturing data and stuff like that.

We are going to talk about the problem. But before doing that, and in order to set up my story in a better way: this is a photo and diagram that I use quite a lot, which shows what amount of data is produced per minute — four million views on YouTube, half a million tweets, and so on. The question is: while data is king, the numbers for open data are not so good. This is the Open Data Barometer, which shows with the red areas where data is really closed, and with the green areas where data is really open.
So it's quite interesting to see that despite the huge quantity of data generated every minute, only a very small percentage is open — and, as you will see in the next slide, this is really important. If I had prepared the slides after this morning's presentations I would have been even more transparent, because the speakers gave me a lot of hooks today, whether from the open data portal or the European Data Portal — it reminds me of the old days, but that's another discussion.

What I want to point out is that there are many, many criteria by which you can say "my database is very open". We focus on two sets of principles that already exist. One is the International Open Data Charter, from some years ago, and a relatively new one from 2016, from the G20 meeting in China. What these principles say, in the end, is: open by default — of course, that's not rocket science — and comprehensive, according to the Open Data Charter. And then there is FAIR: findable, accessible, interoperable, reusable. These are the criteria we are going to use throughout the presentation, to show how fair our openings are.

Findable and available: in 2015, the Open Data Barometer showed that of all the data available online, only a small percentage, around 10 to 12 percent, was really open. And the new report that has just been published shows that open data has decreased to 7 percent, and machine readability has decreased to 53 percent. The good thing, of course, is that 26 percent of datasets now carry an open licence, which means that the practice of using a proper open licence is getting better and better.

Accessible and usable — just quoting the Barometer: 23 percent of the datasets were relatively easy to find, and 10 percent of all the datasets surveyed were not available without charge.
Only a quarter of the datasets analysed were available under an open licence, meaning licensing remains a big barrier. Interoperability is something that we believe in, and something that is not very easy to implement, because interoperability stems from many, many criteria. But institutions are producing more and more information in forms that prevent data publishers and data users from communicating with one another. If the data is not available in an interoperable way, how can you communicate? On usability, there is a very nice figure in the report which shows that the datasets are still hard to work with: if you look at how much effort is typically required to make the data reusable, a lot of datasets need "a lot of effort" or at least "some effort". So you see, it's already quite a lot. And on reusability: data is published in many forms, from static maps to interactive bubble charts, but you cannot reuse a bubble chart the way you can reuse the data behind it.

Now we are going to show, without marketing ourselves, two use cases, and I'm going to be quite fast, because there is quite a lot of work behind them from the last two and a half years. I am not going to present panaceas; our architects and developers can tell you all the details if you ask them. We have two very interesting projects that we've worked on in the last two and a half years: the legislative data in Luxembourg, which is basically open data and quite an interesting ecosystem, and opening up the qualifications data in Europe, which I'm going to describe in the next 10 to 15 minutes.

The first set of open datasets is about legislation. By the way, according to the Barometer, for legislation machine readability is at 14 percent and bulk access at 66 percent. The best way to describe what we have done: we had to make the red colour green.
What we built is an ecosystem of applications for the government of Luxembourg — we have not developed everything in it, but we are part of it. There are two basic distinctions. First, the back-end applications: that is where the complicated stuff lives, from URIs to linked data to the ontologies, and all of that must be transparent to the end user. These back-end applications support the portals, which are more user-friendly. But developers need flexible APIs; developers need endpoints — we get a little bit technical here — and developers also need to know about the access control mechanisms, authorization details and so on. So it is a very nice supporting ecosystem of back-end and front-end applications, with some annotators in there as well, because you always need annotation.

For the front-end portals, the goal was to offer open web applications to visualise and discover all sorts of Luxembourg legislation — to facilitate the exchange of Luxembourg legislative information between different stakeholders within the country, but also across Europe. The back-end applications manage different pieces of the legislation for the relevant ministries and organisations; it's not one ministry, it's not one organisation, it's a whole set of stakeholders around the legislation. They facilitate the coding of legislative data for internal and open data usage, and the sharing of information across different organisations. And of course, for sharing you need ontologies: there is a very interesting ontology that provides the sharing, the metadata, the richness, the labelling and all the stuff you need to run very nice queries on top of the search layer and so on.
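To make the back-end side more concrete, here is a minimal sketch of how one piece of legislation might be described as linked data, serialised as JSON-LD. It uses the European Legislation Identifier (ELI) ontology as the example vocabulary; the URI, dates and property values are invented for illustration, and the talk does not say which ontology the Luxembourg system actually uses.

```python
import json

# Hypothetical JSON-LD description of a legislative act, ELI-style.
# A real portal would mint stable URIs under its own domain and use
# its own ontology; everything below is illustrative.
act = {
    "@context": {"eli": "http://data.europa.eu/eli/ontology#"},
    "@id": "http://example.lu/eli/loi/2017/05/15/a123",  # invented URI
    "@type": "eli:LegalResource",
    "eli:type_document": "loi",
    "eli:date_document": "2017-05-15",
    "eli:is_realized_by": {
        # One language expression of the act (French, as in Luxembourg).
        "@id": "http://example.lu/eli/loi/2017/05/15/a123/fr",
        "eli:language": "fr",
    },
}

doc = json.dumps(act, indent=2)
print(doc)
```

The point of such a structure is exactly what the talk stresses: stable URIs for each resource, plus machine-readable metadata, so that portals, APIs and annotators can all work against the same identifiers.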
The second case was, and still is, a little bit tricky, because it involves a lot of politics: it has to do with the member states, not just one country. Even one country is difficult, but when you have 24, 26 countries it gets a little bit more complicated. What was, and is, the idea of the European Commission? The national qualification databases host the metadata about the qualifications and certifications of every member state of Europe. The EQF, the European Qualifications Framework, wants to more or less align the levels of these certifications. But how can you align them when there are many countries, many players, and different semantics?

So the best idea was, first of all, to convince the member states to open up their databases. Of course, just opening up a SQL database doesn't make sense unless you semantify it — whether you call it RDF-ifying, SKOS-ifying, whatever — and then start harvesting the metadata.

The result was two tangible things that we have developed over the last two and a half years. First, a metadata schema that describes everything around qualifications: from the provenance side, who is issuing; from the accreditation side, who is accrediting; from the context and content side, what the building blocks of the qualification are; and some parts of the workflow, and so on. Once this common language, let's say, had been established, and some crosswalks with other existing schemas had been done — some countries are in a very good situation there — then a register of registers, a qualifications dataset register, was implemented to host the metadata of these qualifications.
The idea is not to host the qualifications per se, but to have a very powerful search engine: first of all, to route searches to the national qualification databases that have been opened up; and secondly, to help member states publish their qualifications data themselves, if they have not yet finished their national qualification database implementation, or because they just want to make some data available quite soon. The third goal was to support the existing portals of the European Commission that have to do with qualifications, skills and jobs — basically ESCO and LOQ, where LOQ stands for Learning Opportunities and Qualifications.

The idea was, and still is, quite simple: there are providers of qualifications — the member states, and of course private and awarding parties as well — and there are consumers; in this case the consumers are the two portals of the European Commission. So there is an API which can publish, let's say, the datasets to the portals, but of course this can be expanded into providing search and end-user search functionalities.

At the core, the QDR, the qualifications dataset register, standardises data by using standardised technical formats — JSON, XML, RDF, or whatever is supported at this moment. There are validations, which we have fortunately been able to base on the W3C Recommendation for data validation that appeared just last year, which is quite interesting. And there is something very, very important that in our opinion is missing — not from these projects, but from the regime in general: data catalogues, DCAT application profiles and other extensions of the DCAT vocabulary are very, very fundamental for data publishers to start using in order to have versioning.
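The metadata schema plus validation step described above can be sketched in miniature. The field names below (title, awardingBody, country, eqfLevel) are invented for illustration — the real QDR schema is much richer, and the real validation uses W3C shape-based tooling rather than hand-written checks — but the shape of the workflow is the same: a record either passes, or you get a list of problems back.

```python
# Toy validator for a qualification metadata record.
# Field names are hypothetical; the EQF genuinely has levels 1-8.
REQUIRED = {"title", "awardingBody", "country", "eqfLevel"}

def validate(record: dict) -> list:
    """Return a list of problems; an empty list means the record passes."""
    problems = [f"missing field: {f}" for f in REQUIRED - record.keys()]
    level = record.get("eqfLevel")
    if isinstance(level, int) and not 1 <= level <= 8:
        problems.append("eqfLevel must be between 1 and 8")
    return problems

record = {
    "title": "Bachelor of Science in Informatics",
    "awardingBody": "Example University",  # hypothetical provider
    "country": "BE",
    "eqfLevel": 6,
}
print(validate(record))  # prints []
```

Harvesting then becomes simple: the register only accepts records for which `validate` returns an empty list, which is exactly the interoperability guarantee the common schema is meant to provide.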
Challenges. We went to the member states over the last few years — we visited 15 of them — and, not that it's a problem, you have to see the glass as half full, but there were some challenges regarding the level of technology and the level of understanding. We found some countries that were way ahead of what we would ever dream — we can talk about that during lunch — and we found some countries that were a little bit behind. So we had to prepare; we were quite proactive, I must say, with the help of the Commission. We had to be proactive about producing documentation, slides, webinars, whatever you can imagine, in order to foster adoption. We found countries that were already familiar with linked open data, and we were happy to be part of that; we also found countries that were not that far along, but had already created a centralised identifier scheme, so we could do a crosswalk, we could do mapping, and that went really smoothly.

For marketing reasons I would say that we didn't have challenges, but of course we did — everyone has challenges. The truth is that in the case of Luxembourg we only had challenges which we were expecting, and that kept us from getting bored: technology challenges, not understanding ones, because we have been developing, let's say, RDF platforms for 16, 17 years. For example, there were challenges like legacy data that had to be transferred — scripting here, scripting there — which of course is a challenge, but a nice challenge for nerdish people. On the second project we had more of the understanding of the technology as a barrier, and the implications that opening data meant and means — which is very, very important to understand — and we saw how officials from the member states are either very positive or quite reluctant to open up their data.

From the projects of these two and a half years, the lessons learned are these. Indeed, guys: don't publish data in PDFs if you really want to foster the usability of the data — that's not
rocket science. Please add metadata, so that you make the data comprehensive and thus interoperable. Make the data findable: put in intuitive links, whether to existing schemas like schema.org or just to some simple metadata that you define in a controlled vocabulary — and publish the controlled vocabulary, so that people can understand it. Don't move your datasets around without updating the URLs; encode some stability into your URIs, so that the data stays accessible. And use licences — really, use licences; I might say it three times — but most important, also try to start implementing licence conflict mechanisms. I was personally surprised last year when I found out that languages that in theory could handle this, or that tools could be built on top of, have already existed for around 10 years.

We believe — we have seen it, we have implemented it — that the open standards and technology stacks are mature. Whether it's SPARQL in its newer versions, or Linked Data Fragments, or stuff like that, these things can work, so use them.

And I left two very important things for the end. There were also hooks this morning about the quality of open data. Suppose I find datasets that are open and I want to implement my use cases and my workflows on them, but the completeness, the schema compliance and so on are not there — then there are quality issues. There is a researcher from AKSW in Germany, Zaveri, who has implemented a really A-to-Z quality framework; we should start using it. And last but not least: the EIF and the solution architecture templates of the European Commission are sometimes a little bit difficult to read, but they provide a very, very nice first roadmap for interoperability. ISA has been working on this for quite a lot of years already; let's help them by providing feedback.

That is where it all started. That's all — thank you very much.

Thank you very much for finishing a bit ahead of schedule,
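As an aside before the Q&A: the "licence conflict mechanisms" recommended in the lessons learned can be sketched very simply, assuming a hand-maintained compatibility table. When two datasets are combined, you look up whether their licences can coexist and, if so, which licence the combined work must carry. The table below covers only a couple of Creative Commons licences and is illustrative, not legal advice.

```python
# Minimal licence-compatibility lookup for combined datasets.
# The table is a tiny illustrative subset; real tooling would cover
# many more licences and directional rules.
COMPATIBLE = {
    ("CC-BY-4.0", "CC-BY-4.0"): "CC-BY-4.0",
    ("CC-BY-4.0", "CC-BY-SA-4.0"): "CC-BY-SA-4.0",  # share-alike propagates
    ("CC-BY-SA-4.0", "CC-BY-SA-4.0"): "CC-BY-SA-4.0",
}

def combined_license(a: str, b: str):
    """Return the licence of the merged dataset, or None on a conflict."""
    return COMPATIBLE.get((a, b)) or COMPATIBLE.get((b, a))

print(combined_license("CC-BY-4.0", "CC-BY-SA-4.0"))    # CC-BY-SA-4.0
print(combined_license("CC-BY-SA-4.0", "CC-BY-ND-4.0"))  # None -> conflict
```

Even a lookup this crude makes the conflict explicit at publishing time, which is the point of the recommendation: detect the clash before the mashup ships, not after.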
so we'll use some time for questions.

Question: In the legislative sector, what vocabularies are emerging, beyond the typical vocabularies?

Well, we use quite a lot of the ISA core vocabularies, for example. For legislation they are quite generic — they talk about location, they talk about services — and then we use some other vocabularies, but, you know, XML is still around, so some conventional stuff is already in there. There was a variety of vocabularies. I don't remember off-hand exactly how SKOS was used; I think there was a sort of parsing of vocabularies with Skosmos, which I mentioned — whether for the text itself or for, you know, the structure, the background, the idea, the direction. You would have to look at the publications.

Question: What layer are you doing the multilingual handling in — the versioning layer, or somewhere else?

That's a good question. For Luxembourg it's mainly French at this point, but in general: if there is multilingual content, it should be linked. You saw one layer, which was versioning, and there you had the editing facilities. The multilinguality works in the sense that if the same dataset exists in different languages, then we usually use so-called language packs: every language pack is a different dataset, and the multilinguality is linked at the versioning level. Because each one is a different dataset — it's not the same dataset, we split them — you understand clearly what works at each level.

Question: Does this work for all the terms?

It depends, because there are so many terms. It depends also on the way that you structure your taxonomy — your SKOS taxonomy — and the terms; I mean, there are different labels, for example. It's how you
structure your taxonomy and what you put into it, and all this stuff.

Question: I guess you have a model somewhere?

Yes, there are quite a lot of models on the website, where everything is open, so we can provide them to you.

Question: You looked at 15 different member states, but did you only look at the national level, or also at levels below that — regions or cities, for example?

Well, in Italy, for example, the regional level is quite complicated: there are 21 regions that behave like 21 countries, and whether we like it or not, we had to address the regional level too. In Belgium there are also regions. So of course things need to work at a national level, but the national level doesn't work in all the countries, so we also had to address the regional level.

Question: So how would you link the regions to each other?

Well, there is not a lot of linking there. I have a very recent example: Italy has implemented its qualification database, which is called Atlante, and the ambition is to foster Atlante throughout all 21 regions. But from what I know so far it has been heavily used by a couple of regions — Piedmont, Emilia-Romagna, and a third one I don't remember. So it's an ongoing thing.

Let me add to that answer: the thing is that in every country they need to link the national qualifications, and it is mandatory for them to link those qualifications to a framework. Take the case of Belgium, for example: there is one framework for Flanders, one for the French-speaking part of Belgium, and one for the German-speaking part. The institutions that award qualifications need to link those qualifications to one of those specific frameworks, and then those qualifications are assembled nationally across all the regions of Belgium, and we retrieve that information from the databases. But even if there are regional qualifications, if they are not linked to a framework, those qualifications are not in the dataset.
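The language-pack arrangement described in the Q&A — each language version is its own dataset, linked to the others only at the versioning level — can be sketched as a small data structure. All identifiers below are invented for illustration.

```python
# Each "language pack" is a separate dataset; what ties them together
# is a shared abstract dataset id at the versioning level.
datasets = {
    "qualifications-2017-fr": {"language": "fr", "versionOf": "qualifications-2017"},
    "qualifications-2017-nl": {"language": "nl", "versionOf": "qualifications-2017"},
}

def language_packs(abstract_id: str) -> dict:
    """Map language -> dataset id for all packs of one abstract dataset."""
    return {
        meta["language"]: ds_id
        for ds_id, meta in datasets.items()
        if meta["versionOf"] == abstract_id
    }

print(language_packs("qualifications-2017"))
```

The design choice this illustrates: because the French and Dutch packs are distinct datasets, each can be versioned, licensed and published on its own schedule, while a consumer who wants "the same data in another language" resolves it through the shared `versionOf` link rather than through one monolithic multilingual dataset.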
We have one more session before lunch, so if there is any other question, keep it very short.

Question: Very short. Two weeks ago, after a meeting held by the Flemish region, Jean-Claude Juncker announced that there will be a new European regulation imposing that once a piece of information is available in one public institution, anywhere in any country, other public administrations will not be allowed to re-ask the question of the citizens; they will have to find the information inside the administration themselves — the once-only principle. Do you see any link between that and open data?

Absolutely. And I would say I see a link between that and linked open data — maybe I'm playing with words, but this is the reality. Absolutely. Thank you.