We work at redpencil.io, where we do things like this and try to make them work. You could call what we build a transitional architecture for linked data — at some point you could even argue it's not really linked data anymore, and that's fine. What we aim for is to get linked data adopted in companies.

When we look at linked data, we see that how a business solution, a software solution, is chosen is quite different from how we've chosen things so far in the linked data world. A business solution is basically chosen when there is a low total cost of ownership — and that's not really the case with linked data yet; we don't really have those benefits. It has to be easy to adapt and extend — and linked data is really rigid: you think about things for a long time, so it's not very changeable. Businesses want predictable performance, which is something SPARQL endpoints don't really have. It needs to be easy to maintain, whatever that really means, and there should be a low initial cost, which isn't necessarily the case either. So there's definitely a gap between linked data and what businesses actually select for.

Something that does check all those boxes is microservices. From a distance, from a marketing perspective, microservices are awesome. They're tiny, so you can understand everything about them; they're very easy to debug, because they're tiny amounts of code; and they're very easy to reuse. This would be a perfect solution — something you can sell. Only, of course, in practice there's a bit of an inverse: it sells easily, but it's hard to actually get running. In practice we have data model dependencies: two microservices that basically talk about the same data can't necessarily communicate. Assume that I have a first name and a last name, and I have the two of them in my database.
I can basically choose to store them together or store them separately; for many applications it's basically an arbitrary choice. If I store them separately, I have more detailed information; if I store them in one field, I have less detailed information. The important thing is: if two microservices assume a different model — one assumes they're separated, the other assumes they're together, one field versus two fields — they won't be able to communicate correctly, because they have different assumptions.

So microservices people say: well, it's easy, just drop an API in between. And sure, you can build an API in between and say: look, this API understands both of them. But now you have a different problem: your API has to be standardized. If you want to reuse that microservice three years from now and you've changed your API in the meantime, it won't work anymore. You've basically shifted your requirements from a data model to an API model, which is arguably even worse.

Lastly, there's failure analysis. Microservices are terribly hard to do failure analysis on. Assume that I have a Facebook registration — very simple. My client says: hey, I want to register using Facebook. Then I have this registration service, and the registration service says: I first want to check whether you already exist. So it checks whether you already exist in the database — you don't — comes back, and then goes to this Facebook registration service, which in turn calls yet another service. By now we're several hops deep. And that microservice has to figure out what information Facebook has to share about you. Now imagine that something fails somewhere along that whole path.
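The first-name/last-name mismatch above can be sketched in a few lines. This is a hypothetical illustration, not code from the stack; the object shapes and function names are made up. It shows why one direction of the conversion is trivial while the other is lossy guesswork:

```javascript
// Hypothetical sketch: two microservices assume different shapes
// for the same person data, so they cannot interoperate directly.

// Service A stores the name as a single field:
const personA = { name: "Ada Lovelace" };

// Service B stores it as two separate fields:
const personB = { firstName: "Ada", lastName: "Lovelace" };

// Going from B's model to A's model is easy and lossless:
function joinName(p) {
  return { name: `${p.firstName} ${p.lastName}` };
}

// Going the other way is guesswork: splitting on the first space
// mangles multi-word family names such as "Vincent van Gogh".
function splitName(p) {
  const [firstName, ...rest] = p.name.split(" ");
  return { firstName, lastName: rest.join(" ") };
}

console.log(joinName(personB));                 // name: "Ada Lovelace"
console.log(splitName({ name: "Vincent van Gogh" }));
// firstName: "Vincent", lastName: "van Gogh" — but is "van" part of
// the first name or the last name? The model alone cannot tell.
```

The point is not that the conversion is hard to write, but that each service's hidden assumption about the shape of the data only surfaces once two services disagree.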
It's not necessarily complex to say "we can't contact Facebook", but you somehow need to standardize all of these error messages and get them back to the frontend, which in the microservice world is not really the easiest thing to do. We like easy; we don't like hard. This isn't easy. This isn't what they sold us with microservices.

But we can actually do better. Assume that our microservices would have a direct connection to a database — which in itself is a bit of a weird idea, but okay — and assume that we would use a semantic model to talk to that database. Then our initial problems go away, provided we really thought about how to structure the data. And that's where linked data plays a role — in a totally different field than how we choose business solutions. This has nothing to do with microservices per se, but if we first think about that data model, and standardize it in a way that we know distinct parties will be able to cooperate with, then we have something that can work. The linked data model works between totally different entities on different continents, producing data without necessarily knowing each other, and in the end the data integrates. I'd say we can make that work for microservices within one company, or across companies within a region; it's not the hardest thing to do.

You get some extra things from that. If you embrace a semantic model, you actually really know what you're talking about: you have documentation about the fields you want to talk about. You also get functional microservices — benefits very similar to those of functional programming. Because if these concerns are separated, a frontend call asks you to do one thing — please register me using Facebook — and that microservice has that responsibility.
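The "each service focuses on the part it understands" idea can be sketched with a toy in-memory graph. This is a hypothetical illustration — the triple store here is just an array of subject-predicate-object triples, and the URIs are invented — but the predicate names loosely follow the real FOAF vocabulary:

```javascript
// Hypothetical sketch: one shared graph, two services that each only
// read the predicates they understand. The data still integrates.
const graph = [
  ["ex:aad",      "foaf:firstName",    "Aad"],
  ["ex:aad",      "foaf:holdsAccount", "ex:account1"],
  ["ex:account1", "ex:password",       "<hashed>"],
  ["ex:session1", "ex:hasAccount",     "ex:account1"],
];

// Generic lookup: all objects for a given subject/predicate pair.
function objects(subject, predicate) {
  return graph
    .filter(([s, p]) => s === subject && p === predicate)
    .map(([, , o]) => o);
}

// The registration service only cares about users and accounts...
const account = objects("ex:aad", "foaf:holdsAccount")[0];

// ...while a login service independently links sessions to the same
// account, without knowing anything about how registration works.
const sessionAccount = objects("ex:session1", "ex:hasAccount")[0];

console.log(account === sessionAccount); // true — the data integrates
```

Because both services agree on the shared model rather than on each other's APIs, neither needs to know the other exists; they meet only in the graph.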
Now it knows how to write the result to the database, because we have standardized the way of doing that — with FOAF, for instance. We've used FOAF from the beginning; we initially thought it was complex, but in the end realized we really needed it, so it makes sense. And you get standardized APIs that went through the whole W3C standardization track. If you talk SPARQL to your database, you know that people have thought carefully about what expressivity really needs to be in there.

This way of working makes it extremely easy to share microservices between applications, because you have the semantic model — and that apparently works. It also makes it very easy to debug them, because they're functional, so you can easily check what happens. For people like us this probably feels natural, but it's something that has to be learned, and it's something that has largely been ignored so far.

Oh yes — you get these nice fancy pictures too. If you want to explain what data is to non-data people and you show them a graph, you can actually work with that graph. On the left, for instance, these are your microservices that operate. You have a person, and you can explain that this person goes drinking, uses a bus, has a diploma; some other person has called about a diploma; they went to a party together and had a boat trip at the party. You can make a whole story about this. Even though non-technical people might not necessarily care about linked data, the model in itself is expressive enough for them to understand — it was built for them to understand — so you get something that you can actually explain and work with.

We didn't set out to build this awesome architecture that would do all this and actually be productive.
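To make the "standardized APIs" point concrete, here is a hypothetical sketch of the kind of SPARQL query such a microservice might send to the shared triple store. The user URI and the query-building function are invented for illustration; only the FOAF prefix and `foaf:holdsAccount` come from the real, W3C-era standardized vocabulary:

```javascript
// Hypothetical sketch: a microservice building a SPARQL query against
// the shared triple store, using the standardized FOAF vocabulary.
function accountQuery(userUri) {
  return `
    PREFIX foaf: <http://xmlns.com/foaf/0.1/>
    SELECT ?account WHERE {
      <${userUri}> foaf:holdsAccount ?account .
    }`;
}

// The query text would typically be POSTed to a SPARQL endpoint over
// HTTP. Because SPARQL is a W3C standard, any conforming triple store
// can answer it — the service is not tied to one vendor's query API.
console.log(accountQuery("http://example.com/users/aad"));
```

The service's "API contract" with the database is therefore a standard, not a bespoke interface, which is exactly what makes the services portable between projects.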
Aad will go into the details later, but the short history is that Erika and I essentially needed to share code. We were really happy with Rails, except for the fact that Rails had become terribly complex, so we realized Rails probably wouldn't remain a productive choice. We started searching: in what way can we build something, with the knowledge we have, that would allow us to share code? We quickly noticed that if you don't yet know what you want to choose, you basically have to stay light: you have to pick something small, something that can change over time. We knew that with one big framework we wouldn't be at the right spot at the right time to make the right choice, so we almost had to go microservices. And then we realized: if we ever want to share data, the only way of really doing it is linked data. So the platform basically arose out of necessity — but it does work, and it works well.

So we built mu.semte.ch, and I'm going to try to explain it in 60 seconds; I'll go into more detail later, because 60 seconds is really short. What we went for is user-facing microservices, which means that one request from your frontend is handled by one microservice — one microservice, exactly. That microservice talks to a triple store where data is shared. We use Docker to deploy our services and our frontends; it takes care of everything — download, build, deploy. Everything is integrated in a single-page app, a JavaScript frontend that talks to our microservices. And we have very well-known requirements, which makes it easy to integrate all these things. We build on standards — HTTP, JSON, SPARQL — and that makes this framework really work. That was a bit more than 60 seconds, I think.

As Aad already mentioned, we really focused on keeping it super simple. We aren't all microservice experts, we aren't all UI experts, and we just want something to get stuff done — to build the software
we want. We don't want to build a super-effective, complex system that no one understands. No — we want the freedom to experiment, to improve gradually, and that's what this framework is about. We focus on orthogonal features, so that it's easy to mix and match microservices as we need them. Minimizing requirements greatly helps reusability, and by focusing on a simple mental model, our services become very easy to understand and talk about.

So what is this mental model? Well, for starters, we limit the base technologies each microservice is built on. We focused on JSON:API, a very well-written open standard for building a CRUD REST API. Services talk HTTP with the frontend — hence JSON:API — and SPARQL to query and store data in the triple store. This provides both restrictions and freedom: because we focus on these few technologies, it's very easy to reason about things and very easy to mix and match. It's truly, fantastically quick to build things, because you have these constraints: you know you're going to be talking JSON:API, and you know this is very easy to use in your frontend, because you keep using the same APIs.

And then there are the semantic models in the backend. As I mentioned, we use SPARQL, but the awesome thing about these semantic models is linked data. Linked data always sounds awesome — you can link things, you can find things — but what it's really about is that you can also just focus on the part that you understand. Take user registration, for example. We have a service that registers new users, and that's all it has to understand: users and accounts. Another service, a login service, can link a current user session to those accounts. If you look at the triple store, it looks a bit like this: at the top you have the triples stating that we have a user called Aad, that he has an account and a password — and that's all the registration service cares
about. The login service has a bit more information: here we see the session linking to that account, and it will of course also validate the password and the username. That's the awesome thing about the semantic model: you can focus on the part you're interested in, and still keep comprehensibility for bigger services, for linking services.

A huge advantage of using FOAF in this example — and of using linked data in general — is the ontologies that already exist. They are well thought out, and we found that FOAF, for example, is made so well that we can reuse it even across very different services. We can use the FOAF model to do regular username/password logins, but also to support OAuth logins, or other providers like ACM/IDM, an identity provider from the Flemish government. So we can have many implementations but still keep the same model.

I already mentioned Docker — a quick show of hands, who is familiar with Docker? About half the room, so I'll quickly explain it. It's actually two things. On one hand, you have the Docker container, which you can just run locally; you can compare it to a lightweight virtual machine, a bit like a chroot — it still shares the kernel of the host, so it's a lot lighter than an actual virtual machine. On the other hand, there's Docker Compose, which allows you to combine these different Docker images into a platform. It's very easy to set up — I have a small example later on — you write a small description of your platform, you run one command, and it downloads the necessary containers and boots them up. We host our containers on hub.docker.com; if you're interested in Docker, just look it up on docker.com.

What we try to do with mu.semte.ch is really to reuse everything. I've already mentioned mixing and matching: we've built services like login and registration that we can reuse across different projects, we have a lot of them, and we can just keep mixing and matching. But what we also found is that for some projects this is not
enough. So we started building templates: basically a base Docker image that you can build your service on. This greatly reduces the amount of code you need, because a template provides all the base technologies we defined before. By building your microservice on a template, you already have support for SPARQL and JSON:API; you don't have to think about this yourself — it's already provided.

Then there are some patterns we use a lot, and we're not going to build a new service every time we need them, so we created some configurable services. mu-cl-resources is a service that you provide a model to, and it provides a CRUD API for the resources you define in that model — an out-of-the-box JSON:API, really minutes to set up, not more.

And for services where we find it useful, we've provided Ember add-ons, which are like plugins for your frontend that you can just plug in. An add-on links directly to the service and provides what you need. For login, for example: no need to think about it — just set up the login service, include the login add-on, and you have login in your application. A similar one is the data table add-on, for when you need listings; I'm sure everyone has built data tables in a website or application at some point, and we have an add-on for that.

That's also a thing about the mu.semte.ch stack: you don't have to use the entire stack. If you stick to the standards — JSON:API, SPARQL — you can just pick out pieces and use them. If you're already doing JSON:API, feel free to use our data table add-on; it will work, provided you follow JSON:API.

So, a quick look at how this all combines. That's a lot of code, I'm sorry — I know most of us are business people here — but let's start at the left. We have a base template here; as I mentioned, we have templates, and this is the Ruby template, which provides the basics we need. Then there are, I don't know, six lines of code, not more, just doing a query and exposing the result as an API. On the right we have the
frontend application, which will display the value returned by our microservice. Then there's a small bit for the dispatcher — I didn't really go into that, but of course, if you have these microservices, you still need to map each request to the specific microservice, and that's what we do with the dispatcher. And on the left you have the docker-compose file; as I said, it's really, really small. Once you have that, you can just run one command — docker-compose up — and you will have a running application. Admittedly not the sexiest application, but it works. And what we find is that by using this Ember.js frontend, we can build really snappy, good-looking apps really quickly.

This is an example of a mu-cl-resources configuration. As I mentioned, you just describe your model. The one I took here is from the LBLOD project, a Flemish project about publishing local legislation. You'll notice the prefixes of our model, a resource with its type, some properties and some relationships; it has a resource base, and we provide it on a specific path. And that's really all there is to it. Once you have this, you have a bunch of API calls that you can make, and your frontend developers don't even need to know about linked data — all they get are these specific properties. They can create new resources, they can request them, but they can also filter them and include relationships if they want. It's all provided by mu-cl-resources.

Then the add-ons, in a bit more detail. Once we've defined a resource, you can just install ember-data-table into your application, and this bit of code is all you need to display a list of the resources we defined earlier. The example here is about books, but the same principle applies: just a bit of code. We include the component — which is similar to a web component; if those ever get standardized, we will move to web components, but for now we're using Ember
components — and a mixin, and that's all you need.

So, now that we've built the framework and have been using it for, I think, about two years now — correct, even longer than that — what have we noticed? It's extremely productive. The services that we've built well — not all of them, but the services we've built well — are completely reused; we've never had this level of reuse before. It's very easy for juniors, because the linked data space, as you may know, is a bit hard to get into, and by providing this framework we abstract that away a bit for them. I think Hans mentioned in his talk that the freedom of linked data can also be constraining, because people have too many possibilities; I think our framework solves that somewhat by providing APIs they can use. We've had juniors build something extra in a day, because they can just focus on their microservice — the part of the world they need to understand. The same goes for frontend developers: they don't really need to understand linked data at all — of course it helps if they do — they can just talk to APIs, and that's it.

Customers really like the frontends: because we're using Ember.js, we can build really snappy JavaScript applications that are interactive and work well. We work with good designers, which also helps. And what we found — and I think this is the most surprising part — is that the database performance is actually quite okay. We can handle up to 100 concurrent users without issue, and we can scale up to more once we add caching; this caching is also fairly easy to include in our application stack.

The last thing — and personally what I like most about mu.semte.ch — is that it makes you very conscious about playing with alternative solutions. Because you can just mix and match, you can just pick out the things you need, and if you think "perhaps this can be done better in another language, or even another paradigm", we can just do that: we can focus on that one bit we need and play around
with it. If it doesn't work, we just throw it away and put the old thing back, or even write something new entirely. This is a lot harder when you're working on monolithic builds. So I really think we've solved part of the microservices space by relying on a semantic backbone. For us it's been tremendous, and I hope it will also be tremendous for you. Now Aad is going to warm you up about the future of this.

Yes — so, we're kind of reluctant to embrace new things in the stack, because we're afraid we'll break what we have so far. But you can find more of these orthogonal patterns. If you have a space — like a mathematical space — and you have two orthogonal directions, dimensions, then they never impact each other. So what you want is to create features in the base framework that you can basically pick and choose, and that don't destroy the other features. Let me explain.

Take reactive programming, for instance, which is a very nice pattern. The idea is that as we write something to the database, a certain state arises, and based on a certain state you might want to trigger a specific service that tackles a specific problem. Thinking about it mentally: if you write an email to an outbox, it's fairly obvious to assume that some service should pick it up and mail it. And that's exactly what you can do here: you can send an email or a tweet just by writing into the triple store. The nice thing is that sending an email and sending a tweet become very similar — if your developers understand the email case, which again is just writing data to the triple store, they can also easily send out tweets, which makes the whole thing easier. It also helps with long-running processing. For instance, you add a dataset and you need a certain set of KPIs — key performance indicators — to be calculated. If that KPI calculation takes about ten minutes, then it's very boring to wait for it, and it definitely shouldn't be triggered from the front
end. This reactive programming is especially interesting in combination with other things. We're also working on performance. We know that the database is okay now, but we also know it's the trickiest part — it's the one single point of failure, and that's hard to get around. So what we do is monitor the changes — that's also necessary for the reactive programming bit — and if you monitor the changes, you can also, theoretically (we haven't done it), write to multiple triple stores, so you can have failover more easily than you get now with commercial solutions. Furthermore, because we have this, we can do caching on specific resources. Caching a specific resource means you enable a cache, and that cache never has to go back to the triple store when that request is made again. This sort of performance work really helps us mitigate this potential issue with the triple store.

Lastly, we said that we hold all data in this triple store, in the semantic model, and that it's consistent, so all your microservices can be freely combined. What that doesn't say, though, is that you're forbidden from mirroring that data in the structures you need. If you have a service that needs a completely different access pattern than what a triple store can offer, you're okay to mirror the data — and because you can monitor the changes, you can actually keep that mirror up to date.

If you have all of this caching, then obviously the thing becomes faster, but you still have the issue that was raised earlier: what if we have slow clients? We are still loading a single-page app, and Ember.js isn't known to be the smallest — it's not a huge framework for how much functionality it offers, but it does offer all of it. There is something in there that you can enable, though — it does require some setup — and that is FastBoot. FastBoot essentially renders your page on the server side and gives a view of that page to the
client. Of course, now you're just serving a JavaScript-rendered page, which you could have rendered on the server side without all these frontend frameworks. So what it then does is generate that page and rehydrate it: you wait three to four seconds, depending on your device — a really slow device takes longer, a fast one shorter — and once the whole page has loaded and the JavaScript has been parsed, the page becomes interactive. That gives us a very fast first page render, and it really helps for applications with a lot of read-only pages, because most of those can just be cached.

We're also working on authorization, and this might be one of the more impressive things we can do with linked data, I think. All of our services write to one graph in one triple store: we basically require you to write SPARQL, and we guarantee that we give you back the responses. Now, what if a service could wrap around that SPARQL endpoint and figure out what your current user can actually access? One of the problems you otherwise have with microservices is this: if two microservices each bake in a certain authorization pattern — what users are allowed to see — you will not be able to move them across projects unless they use the same pattern. But if you do it this way — if you solve authorization in the triple store itself — then all of the microservices you've written before are automatically scoped to your specific user.

And it has an extra benefit. Currently, in our code, we always add these things that say: this is what you need in order to know what the user can access. All of those authorization rules are written in your specific services, so you basically know there's going to be an error here or there, but you don't necessarily know where. If we do this in one specifically configured microservice — a service that is meant to do authorization — then you know
that you'll configure it right, and you know you have cleaner patterns to work with afterwards, because you know security is going to be done right.

And the last one — and this one works really well with the previous case — is interactivity. If we have all these building blocks right, and we already have caching enabled — so you have caching and cache invalidation, meaning you know when resources have been updated — then it's not really complex to do. If our frontend also understands how to update data on screen when the underlying model changes — and the building blocks in our frontend are basically ready for that already — then just by sending push updates (there are loads of standards and APIs for that), we can proactively push content to a client. Proactive pushing allows you to, for instance, update the prices of your web shop live for some market. And if you combine that push-based interactivity with the authorization story, you can have applications in which users cooperate.

Now, this is free in terms of developer time — it doesn't cost you extra development time. It does, of course, put a strain on the server: no matter how you slice it, you will have more content to keep in sync and more complex queries. But the fact that we can experiment with these applications — that we can play with them today, rather than having to spend yet another tender on yet another idea — allows us to experiment with new types of applications. And that's essentially what we're working on for the future. The mu.semte.ch core services are consistent and good; we know they're stable. For these new things, we're experimenting, figuring out what they do. We have running experiments with most of them, but we're not necessarily certain we'll keep them: if something turns out to be hard to work with as developers, it'll probably be removed.

We have a bunch of links for all
sorts of images that we used — apparently you have to do that if you do good research. So — are there any questions?

We probably have time for one or two questions, if you can contain yourselves; otherwise you can catch the speakers afterwards at a table. Any questions? I think everybody's done, then. Thank you.