or all of them. One of the things handled in the configuration of a policy is the remediation action. If it's set to inform, the framework just tells me whether the cluster matches what I think my policy state should be. If it's set to enforce, it will actually detect that something is different and create it, update it, or just confirm it. Another feature of the policy add-on design is centralized policy reporting. I'll show this in the demo: we handle distribution of policies from the hub out to the clusters. We also wrap other policy engines as policy add-ons — for example, if you run the Gatekeeper policy add-on, we take the results it produces, wrap them, and report the violations from those policy engines back to the hub. We also have a collection of sample policies that can be used by engineers; it's currently curated by Red Hat, and we're in the process of moving it into the community. Before the demo, I just want to go over some core concepts for policies. The first building block is really the policy template. A policy template performs a specific action or check; one example that I'll show you in the demo is a configuration policy, where I want to apply some piece of configuration to some of my clusters. A policy then includes those policy templates and puts them in a common context — it brings those templates together. So for example, say I'm installing an operator on a managed cluster: I might have one configuration policy that creates the operator subscription, and another configuration policy that checks whether the install was successful — you can examine, say, its status, and that ties back to the cluster to see whether it reached the desired state. The next concepts are placement and placement binding. Placement allows you to select the managed clusters that you want a policy to apply to. So say we have a fleet of five clusters:
you can bind the policy to just a subset of them, and that's all done through label selectors on the clusters. And then lastly — I won't go into this much today — policy sets let you bundle policies together. That's helpful if you have different authors or different personas writing the policies; then you can have, say, an SRE-focused bundle of policies. Policy sets are nice in that way. So I'll start off with the first demo: we have a policy that creates a ConfigMap, but only on clusters in the Boston datacenter. Here's what the demo policy looks like. You can see first that it's kind Policy, with a namespace. We're calling it devconf-demo. Then here are some annotations that put this policy in context for the standards that you're trying to comply with — that's helpful if you have a formal standard that you're trying to meet. Compliance reporting tools can aggregate policies by those annotations; Red Hat has a user interface in the product version for exactly that, so it can be really easy to say: here are my policies, grouped by standard. Then here is the policy-templates array, which I mentioned before. We just have one policy template, which is a configuration policy, and here I have it set to inform, saying that we must have a ConfigMap called devconf-demo, with just a field or two in there. Besides musthave there are other options for the compliance type: mustonlyhave, which is the same except it will remove any additional content in the object so it matches exactly as written — musthave merges instead — and there's also mustnothave. For the policy to be applied, we have a placement here, and again, this is what determines which clusters the policy applies to. In this case I'm saying that I want my policy applied to all my clusters that are in the datacenter of Boston.
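For readers following along, a minimal sketch of what such a policy might look like — names like devconf-demo and the namespaces are placeholders from the demo, not canonical values:

```yaml
apiVersion: policy.open-cluster-management.io/v1
kind: Policy
metadata:
  name: policy-devconf-demo
  namespace: policies
  annotations:
    # Optional annotations mapping the policy to a formal standard
    policy.open-cluster-management.io/standards: NIST SP 800-53
    policy.open-cluster-management.io/categories: CM Configuration Management
    policy.open-cluster-management.io/controls: CM-2 Baseline Configuration
spec:
  remediationAction: inform   # switch to "enforce" to have the framework create the object
  disabled: false
  policy-templates:
    - objectDefinition:
        apiVersion: policy.open-cluster-management.io/v1
        kind: ConfigurationPolicy
        metadata:
          name: devconf-demo
        spec:
          remediationAction: inform
          severity: low
          object-templates:
            - complianceType: musthave   # alternatives: mustonlyhave, mustnothave
              objectDefinition:
                apiVersion: v1
                kind: ConfigMap
                metadata:
                  name: devconf-demo
                  namespace: default
                data:
                  message: "hello from the hub"
```

The outer Policy provides the context (standards annotations, remediation action); the inner ConfigurationPolicy is the template that does the actual check.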
So I'll go ahead and apply that now, and I'll show you how the flow works. Right now I'm on the hub cluster — you can see on the tab that this one is the hub, and then I have my two managed clusters. First I'll just show the ManagedCluster objects; they represent the clusters registered to the hub, and I have cluster one and cluster two. Unfortunately the text on screen isn't very readable, but what's important here is that there are labels associated with each cluster. So I've added labels to my managed clusters: a datacenter label on cluster one, and datacenter equals boston on my cluster two. That's what allows me to use the placement to select which clusters I want to target. These are arbitrary labels — open cluster management doesn't mandate any particular labels or scheme that you must use; it's really up to you how you want to categorize your clusters. So for this one, I'll go ahead and apply the policy. And we can see that it's non-compliant, because the ConfigMap hasn't been created anywhere yet, as I'd expect. If I look at it again and check the status, we can see that cluster two is non-compliant. That's not a lot of information, right? It's the same story for the others. So the policy framework replicates the policy. What I mean by that is that on the hub there's a namespace for every single managed cluster, and the policy is copied into the namespace of each cluster it's placed on. So we have a cluster one namespace and a cluster two namespace. If you recall from before, we only bound it to datacenter boston, which is cluster two. So I can go and look at the replicated policy in the cluster two namespace, and there cluster two reported back saying that it's non-compliant, and why: the ConfigMap devconf-demo was not found in any namespace.
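As a sketch, the placement mechanics described here — selecting clusters labeled datacenter=boston and binding the policy to that selection — might look like the following (resource names are illustrative):

```yaml
apiVersion: apps.open-cluster-management.io/v1
kind: PlacementRule
metadata:
  name: placement-devconf-demo
  namespace: policies
spec:
  clusterSelector:
    matchExpressions:
      - key: datacenter        # arbitrary label set by the admin
        operator: In
        values:
          - boston
---
apiVersion: policy.open-cluster-management.io/v1
kind: PlacementBinding
metadata:
  name: binding-devconf-demo
  namespace: policies
placementRef:
  name: placement-devconf-demo
  apiGroup: apps.open-cluster-management.io
  kind: PlacementRule
subjects:
  - name: policy-devconf-demo  # the Policy being placed
    apiGroup: policy.open-cluster-management.io
    kind: Policy
```

The label itself lives on the ManagedCluster object on the hub, e.g. `kubectl label managedcluster cluster2 datacenter=boston`.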
So what we can do then is set the remediation action to enforce. With the enforce action, the ConfigMap actually gets created for me — before, I had the remediation action set to inform, so it would just find out and report. So now I can go to cluster two and see the ConfigMap, exactly as I specified in my policy. If I do the same thing on cluster one, I see that it's not there, because the policy wasn't placed on it. Say I now want to target all of my managed clusters. What I can do is edit the placement that the policy is bound to: I scroll down to the label selector for boston, go ahead and delete it, save, and then watch the policy — it now applies to both clusters. Now that we've done one demo, I'm just going to show a little bit of what's behind the scenes. We define a root policy on the hub, and then a copy of that policy is made for each targeted cluster's namespace. On the managed cluster side, those policies are pulled down. That allows the managed cluster to process the policies locally and evaluate compliance even when it's disconnected from the hub; then when it reconnects to the hub, it sends the status back up, so the admin has all that status information in one place. And then additionally there are the policy controllers, which are the ones that enforce or inform on the policy templates — the configuration policy controller and others. They watch the resources within your cluster to find out what's going on. So the next thing I'm going to talk about is templated policies. Templates allow dynamic content in configuration policy definitions. What I mean by that is: if you have configuration that needs to exist on multiple clusters, but one small piece of that configuration is different per cluster, you don't want to have to create a policy per cluster. Templates allow you to use dynamic values that get resolved at runtime. There are two places for this to happen.
The first is hub templates: those get executed before the policy makes its way to the managed cluster. And then similarly there are managed cluster templates, and those get executed every time the policy is evaluated on the managed cluster. Some examples: here we read a field from a ConfigMap and use that value instead of a static value. In the second example, we're actually getting a field from a Secret on the hub and using that value — there's some protection added behind the scenes for that, which I'll show in the demo. And lastly, for managed cluster resources, you can do a generic lookup; in the last example we're getting a Service called metrics and pulling out its cluster IP. So the next demo is really about copying a secret value from the hub to all the clusters, where the value is different for each cluster. I first created a secret on the hub. Hub templates are restricted so that you can only query objects in the same namespace as the policy. This is to prevent anyone who has access to create policies from being able to escalate privileges by reading resources outside of that namespace. So there's that limitation. In that secret I have two fields — a cluster one app password and a cluster two app password — and I'm basically going to be creating a secret on each managed cluster with an application password whose value is different per cluster. So I have a policy with a configuration policy set to enforce, saying that we must have a secret called devconf-demo. Here's the template, which runs on the hub side: before the policy gets distributed, we call fromSecret, reading a secret from the same namespace as the policy.
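A hedged sketch of the two template styles just described — a hub template resolved before distribution, and a managed cluster template resolved on the cluster at evaluation time. The secret, namespace, and service names are stand-ins from the demo:

```yaml
object-templates:
  - complianceType: musthave
    objectDefinition:
      apiVersion: v1
      kind: Secret
      metadata:
        name: devconf-demo
        namespace: default
      data:
        # Hub template ({{hub ... hub}}): resolved on the hub before the
        # policy is replicated; reads a per-cluster key from a Secret that
        # must live in the policy's own namespace.
        app-password: '{{hub fromSecret "policies" "app-passwords" (printf "%s-app-password" .ManagedClusterName) hub}}'
      stringData:
        # Managed cluster template: resolved on the managed cluster every
        # time the policy is evaluated; generic lookup of a Service.
        metrics-ip: '{{ (lookup "v1" "Service" "default" "metrics").spec.clusterIP }}'
```

`.ManagedClusterName` is what makes the hub template resolve to a different value in each cluster's replicated copy of the policy.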
And then the key is dynamic: I'm just taking the managed cluster name, dash, app password, so the right value gets applied on each of the clusters. So now we can look at our policies, and if I go to the secret on the managed cluster, we can see that we're getting the value from the hub that we wanted. So we're able to distribute per-cluster secrets with a single policy. One of the security aspects going on behind the scenes: if we look at the replicated policy for cluster one, with the resolved app password, when it gets sent over to the managed cluster it's actually encrypted — if you go look at it in the cluster namespace, it's just unreadable binary data. What's happening behind the scenes is that each managed cluster has its own encryption key, and the hub has access to it, so the hub encrypts that data before it inserts it into the replicated policy, and it gets decrypted once it's moved over to the managed cluster. Now, one thing you've probably noticed is that policies wrap the objects they manage, so there's a lot of YAML nested within YAML. One way to simplify this is the policy generator, a Kustomize plugin. It's especially useful for GitOps, because GitOps tools that can run Kustomize plugins can execute the generator for you. So real quick, going back to the first example from the first demo, where we were applying the ConfigMap: I have a kustomization.yaml where I'm just specifying the generator that I want to use — the policy generator plugin, a plugin that generates the wrapped YAML content for you. My policy generator manifest is basically this: the top is boilerplate to identify which plugin you want Kustomize to use, so it's not really important for what we're trying to accomplish, but
here I'm saying my policies should be in the policies namespace, I want them to apply to the datacenter of Boston, and I have one policy — I called it devconf-demo-configmap — that I want to enforce. Then for the manifest that I want wrapped in my configuration policy, I can specify a directory or a file; in this case I specify this configmap.yaml, which is the same one that I had for the first example. So if I go ahead and run Kustomize with the plugin against that directory, you can see it generates the placement, the placement binding, and then the policy itself, and you can see that the configuration policy wraps this ConfigMap. So all you really have to do in a GitOps scenario, if you have the policy generator installed, is commit those files I just showed you, and then at execution time the GitOps tooling runs Kustomize for you. So, some coming improvements. The first one, unfortunately, is a disclaimer: open cluster management has a CLI tool that makes it easy to register clusters and such, and there's currently an issue in the way it interacts with the policy add-on. It's being fixed, but for now we'll work around it through documentation. Next is per-cluster policy enforcement. What that means is that you might want a single policy that you enforce on some clusters but only inform on others — so if you want to roll out a change, for instance, on select non-production clusters, just to see how things go before you apply it to all your clusters, that's a mechanism that's being added. We also want to make integration with the framework quicker, and to document the single-cluster use case apart from the full multicluster setup, since I think having one cluster manage itself with policies is something people will want. So, we'd love for you to participate in the community. We have a website here that shows how to get started, how to set up your clusters, how to install the policy add-on, and how
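A sketch of the generator input being described — file names and field values are taken from the demo and are illustrative:

```yaml
# kustomization.yaml — boilerplate that points Kustomize at the plugin
generators:
  - policy-generator.yaml
```

```yaml
# policy-generator.yaml — the PolicyGenerator manifest
apiVersion: policy.open-cluster-management.io/v1
kind: PolicyGenerator
metadata:
  name: demo-policy-generator
policyDefaults:
  namespace: policies          # where generated policies land
  remediationAction: enforce
  placement:
    clusterSelectors:
      datacenter: boston       # generates the placement + binding too
policies:
  - name: devconf-demo-configmap
    manifests:
      - path: configmap.yaml   # the plain ConfigMap from the first demo
```

Running `kustomize build --enable-alpha-plugins` in that directory should then emit the Policy (wrapping the ConfigMap in a ConfigurationPolicy), the placement, and the placement binding.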
to contribute on GitHub. We have a Slack channel — it's up there — we have community meetings, and lots of people who are happy to help with getting-started questions. Any questions? [Audience member] Hey, so you showed an example where you queried a metric from your cluster — say you were administering a cluster, could you alert on that? [Speaker] Yeah, so there's currently a mechanism for that: there's another add-on, for observability, where you get alerts based on certain metrics that happen. Alternatively, [inaudible] any time a policy goes non-compliant. Are there any other questions? Okay, thank you.

[Break between sessions: off-mic discussion about the projector resolution, microphone feedback from the previous talk, and the stream setup, mostly inaudible.]
[More off-mic discussion between the organizers about microphones and the stream.]

[New speaker] Thanks for being here — I'm happy to be here. A bit about me: I was a consultant for many years, and I ended up pretty disappointed after all those years of consulting, because companies always choose a technical solution to try to resolve organizational issues. So after that many years I decided: hey, let's do developer advocacy instead. Anyway — who here has already designed web APIs? Okay, great. So I assume the first time you did that, you were very focused on designing your entities, finding the right boundaries, finding the right types; then you had to think about the semantics when it comes to POST, PATCH, or whatever; and you didn't think about stuff that was seemingly not really required for the project to be successful — like versioning. So basically, when the business comes to you and says: hey, we need to deploy a v2, you might have this kind of reaction. My idea is just to give you a couple of tricks, so that the next time you design something, you might already incorporate this kind of stuff from the ground up.
So here we have the initial situation: your clients go directly to your upstream and its endpoints. Okay — then you fell into my trap, because my solution is: hey, just put an API gateway in front and it will solve all your problems. Thanks, that's the end of the story. No — of course not. First, you probably lied a little because you wanted to please the speaker: this is probably your actual situation. You probably never exposed your back end, your upstream, directly; you probably had a reverse proxy in front. And my solution is to have an API gateway, and that looks pretty similar. So, since we have some time, first let me explain what I feel is the difference between a reverse proxy and an API gateway. I wouldn't say it's the truth — let's say it's my experience. I like to tell stories, and this story is the following: I started using the internet more than 30 years ago, and I created my own website. At the time there were no "build your website in 5 minutes" tutorials; tutorials were hard to come by. So basically, when you wanted to build an HTML page, what you did was look at the source of other HTML pages, and you had to understand how they worked. And at the time it was really nice, because we already had images, which was not the case a couple of years before.
We had audio files — MIDI files — so it was super hip: a MIDI file that played automatically when somebody opened your website; that was very important for the experience, obviously, so everybody did it. No CSS, so in order to create, say, a rollover effect on a button — at the time it was super cool to have a visual effect on a button when you hovered over it — you did it with what we had. And, well, no JavaScript at the time either. And I used this super nice software called HoTMetaL Pro — who knows about HoTMetaL Pro? That dates our user experience. I don't know about you, but in my version you could click and it inserted a tag. There was no WYSIWYG — that came afterwards — but HoTMetaL Pro just allowed you to insert the tags by clicking a button, and of course you had to click the button again to close the tag, but still, it was a great way to learn. At the time I didn't give it too much attention: I just put the files in a folder, and the folder on the server, and what's the result? Static content. Everybody was super happy. And then some people decided that was not enough, because actually we couldn't display the date and the time. That was really a huge issue — I mean, the computer knows the date and the time, just put it into the web page — even if it was also completely useless. So people decided: instead of writing the HTML by hand, let's have a script generate the HTML. In that case it was CGI scripts, so you had to learn Perl. I didn't learn Perl, and a lot of people said that's not super great — you have to learn another language, Perl — and the problem is basically that it was not super maintainable, because you had to understand what the script did and what the generated HTML would be. So the improvement was to write HTML and just add tags inside for the dynamic parts — and then you had PHP. PHP was much better. PHP originally meant "personal home page"; later they rebranded it with the recursive acronym "PHP: Hypertext Preprocessor", because actually "personal home page" doesn't
look super professional, so they had to come up with a better name — but anyway, today it's super successful. Then the internet, and especially the World Wide Web, got more and more popular, and companies — not only geeks but real companies — went onto the web. So we added a load-balancing layer in front of the web servers. When you do load balancing, basically all the nodes are equal; but perhaps you have different kinds of web apps, and you need to serve different content, so that first layer had to do routing depending on the request. So the evolution of the web server went through several stages: first serving static content, then serving dynamic content, then doing stuff that is completely unrelated to serving content, like load balancing and routing. And because of this, some people started to think: hey, now we have a single point of entry into the information system — we can probably do much more there: authentication, authorization, logging, whatever. And on the other side, because of the rise of the internet, we needed to share data across information systems. I assume most people here are Java developers, so you remember CORBA: everything was great, but the problem is it really only worked inside your own stack, and some companies are using, I don't know, Python, .NET, whatever. So the most basic approach at the time, if you wanted to share data with another company — or sometimes even another department — was: you uploaded a file to a remote FTP server; a batch job on the other side watched the server, read the data, did what it had to do, and wrote a file back. Pretty basic, but it works — only if you don't need close-to-real-time data. If you do, you want HTTP endpoints, because HTTP is the most widespread protocol around. So here comes my definition of an API gateway: it's a reverse proxy that can do stuff that reverse proxies were not meant to do, and generally this
stuff is focused on APIs. For example, I mentioned rate limiting: any reverse proxy can do rate limiting. The thing is, now you don't only want rate limiting to protect your upstream from being overwhelmed; you also want rate limiting depending on the profile of the client — if you pay this much, you can make that many calls per hour, and if you pay more, you have a higher limit. That is business logic, and in that case the API gateway is a way to put business logic into your reverse proxy, into your entry point. Billing is the same kind of stuff: you might want to log every request. Any reverse proxy can log requests, but if you bill on them, you must be sure that the log is actually accurate and that you don't lose any — so perhaps you want to send them to Kafka or whatever; you want something more. For all those reasons, the API gateway for me is the next generation of reverse proxy. There are a lot of API gateways available on the market; you probably know some of them, perhaps most of them. I work on the Apache APISIX project, so I'll have a couple of slides on it and my demo will be using it, but of course you can use any API gateway for the rest of the talk. APISIX is an Apache project, governed by the Apache Software Foundation. You can think of it as a layer on top of OpenResty: OpenResty allows you to write Lua on top of Nginx, and APISIX gives you a plugin-based, API-driven way to configure all of that. So, coming back: now that I have defined what an API gateway is for me — that was the introduction, and I've talked a lot — let's do a demo. And I'm sorry if you're in the back, the screens are pretty small, so I will try to make the font bigger. Is it good enough?
Perfect, thanks. I'm happy there are young people here, because if I were sitting in the back I wouldn't be able to read it. So here's the setup: I have a legacy app — it's a Spring project, in Java, with tons of annotations — and it works, that's pretty good. It has two endpoints: hello, and hello with a parameter. And my idea is: hey, I want to rewrite everything and do much better — I want to rewrite it in Kotlin, using declarative endpoints, because this new version is obviously much better. Now, the deployment. I'll be using Docker Compose, because it's good enough for a demo. First, I deploy APISIX. APISIX depends on etcd, because it stores its configuration in etcd — etcd is the same key-value store that Kubernetes uses, so it's pretty solid. I also want to monitor everything, so I deploy Prometheus and Grafana. And finally, I deploy the old API and the new API at the same time — just because I'm lazy and want to deploy everything at once; of course, normally you'd first deploy the old API and only afterwards the new API. So let's start everything... and now we can check. I curl the hello endpoint: yes, it works. And the second version is deployed on another port, and it's much better. Now the idea is: I want to go through the API gateway — normally you shouldn't be able to access these ports directly; you should only go through the API gateway. So I try it, and obviously it doesn't work. The reason is that I didn't configure anything; I just started the stack. So I need to create the configuration in etcd. With APISIX that's quite easy: it can be managed through an API itself. So here I have scripts — since everything runs on Docker Compose, and I don't want you to need to install anything, everything is on GitHub so you can do it at home. I am also
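The Compose stack being described might be sketched like this — the image tags, ports, and service names are assumptions, not the talk's actual files:

```yaml
version: "3.9"
services:
  etcd:                       # APISIX stores its configuration here
    image: bitnami/etcd:3.5
    environment:
      - ALLOW_NONE_AUTHENTICATION=yes   # demo only, never in production
  apisix:
    image: apache/apisix:3.1.0-debian
    depends_on: [etcd]
    ports:
      - "9080:9080"           # gateway traffic
      - "9180:9180"           # admin API
  prometheus:
    image: prom/prometheus
  grafana:
    image: grafana/grafana
  old-api:                    # legacy Spring app
    build: ./old-api
  new-api:                    # Kotlin rewrite
    build: ./new-api
```

In a real rollout the old-api and new-api services would be deployed at different times; here they start together purely for demo convenience.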
using a Docker image for the client, actually, just to be on the same network — the demo has its own Docker Compose network. So here is the first route I create, through the admin API. The admin API requires a key, because it's a pretty sensitive operation; here I'm using the default one, which of course you shouldn't do, but it's a demo. And then I have the payload. The name is just optional, but the gist is: if a request matches this method and this URI, forward it to this upstream node. Normally you'd probably have a cluster of similar nodes, and you can have different algorithms to dispatch requests between them — here, for example, it's round robin with weight 1; you could give nodes different weights. And here I'm also saying I want to monitor it. So I run it, and now if I curl through the API gateway, I reach the old API — which means I'm back at square one: I didn't change anything for users, I just made the same thing work through the gateway. The next step is a bit more interesting: I want to deploy several versions, with the version in the path — it could also be a parameter, a header, whatever. On the first route I created everything inline in one object, but actually there's an abstraction that makes sense here: the upstream. So first I'll create the upstream, and then I'll be able to reuse it in the routes. Then I create the plugin configuration, and finally I create the route for the version, using the upstream ID that I previously created and the plugin config that I previously created. So let's run this. And now there's a little trick, and the trick is the following: I would expect to expose v1/hello, but if I just forward like previously, I'd send v1/hello to the upstream — and the upstream doesn't know it; it only knows about hello. So I say: when I receive something like this, just use what comes after the v1. This gives the following behavior for the routes. And if you are not a fan of the command line, we also have a dashboard — I'll let you guess
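The objects created through the admin API here can be sketched declaratively — APISIX accepts this shape in its standalone `apisix.yaml` mode, and the same fields appear in the admin API's JSON payloads; ports and service names are assumptions:

```yaml
upstreams:
  - id: 1                      # old API
    type: roundrobin
    nodes:
      "old-api:8080": 1        # node -> weight
  - id: 2                      # new API
    type: roundrobin
    nodes:
      "new-api:8080": 1
routes:
  - uri: /hello*               # unversioned route, forwarded as-is
    upstream_id: 1
    plugins:
      prometheus: {}           # expose metrics for this route
  - uri: /v1/*                 # versioned route
    upstream_id: 2
    plugins:
      proxy-rewrite:
        regex_uri: ["^/v1/(.*)", "/$1"]   # strip /v1 before forwarding
#END
```

The proxy-rewrite `regex_uri` pair is the "little trick": the client sees /v1/hello, while the upstream still receives plain /hello.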
what the password is. Of course, it's a demo; you shouldn't use that. And here we can see our two routes. So if you prefer to configure things through the dashboard — to create a new route, say — it's possible. Here you can see the JSON configuration directly, and you can change it, and it's stored straight into etcd. And we can see the upstream that we created. So depending on what your approach is, you might favor the UI or the API. Okay, great. Next: we have a v1 route that we want people to use, and we have an unversioned route that people are still calling — and the first time someone used your API there was no registration, so there's no way to send people an email saying: hey, we will deprecate this endpoint, blah blah blah. So let's use HTTP for that. What we're doing now is: on the unversioned route, we're going to send a 301 with a new location, and the new location will be the v1 route. That's easy — again, we have a plugin for that, and the plugin in this case is the redirect plugin, which is basically an easy way to tell people to use the v1 routes. Here I'm using PATCH — I'm just adding the plugin to the same route that we created before. So let's do this. Okay, and now with this curl command, if we look at the headers, it sends the right Location. Now, on the client side, there are two options — and first, even if you are not an API provider, always monitor your response codes on the client side, please. The two options in that case: either you don't follow the redirect, and suddenly nothing works anymore — and you need to know that before your business knows it; or you do automatic redirects, in which case it will still work, but every call is now two requests — the first request returns the 301, then a second request goes to the new endpoint — so your infrastructure will pay for it and consume more resources. Okay. Now, to get more advanced, we
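The redirect configuration PATCHed onto the unversioned route might look like this sketch, using APISIX's redirect plugin:

```yaml
plugins:
  redirect:
    uri: /v1$uri      # new location: the same path under the /v1 prefix
    ret_code: 301     # permanent redirect
```

A client calling /hello then receives a 301 with `Location: /v1/hello`; whether it follows automatically (doubling the request count) or breaks is exactly the client-side trade-off described above.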
can do a bit better. The problem I told you about is that we returned an HTTP code, but we still can't reach our users — we didn't think about making our users register at the beginning. And it's normal: who here likes to register? Who loves to give out their email? Yeah, I'm not kidding you — I think nobody here loves to receive emails from marketers, especially when they're not relevant. But I assume most of us like small benefits, free stuff. So in that case you can be smart: okay, we limit you by default, but if you register, you will have a higher limit, or an unlimited number of calls, or whatever. So that's the good stuff. Again, at this point we still don't know our users, so we need to return a code asking people to register. Unfortunately, there is no plugin out of the box that does this, so what I did is I created my own plugin, and basically it's written in Lua. Am I a Lua developer? Not at all. What I did is what every developer does: I copy-pasted an existing limit plugin — there's a plugin that limits the number of calls — and I just added the logic that I wanted. And the logic is: hey, if the client is authenticated when this plugin is called, just return; otherwise, continue with the normal rate-limiting logic. Is anybody interested in the Lua code?
That's pretty solid. And that's good, because I wouldn't be able to extend it much; I did it, it works, and I've forgotten everything since. The idea is you can configure it like this: here is the number of calls in this time window, and if you go above that, you get this code and this message. So let's run it. One call is fine; on the second call, now I'm limited: "don't try again, register". The registration system itself is outside the scope of this talk, so do whatever you want; in the end you should have people register. So here I will register a dummy user in the simplest way possible, and basically it's through a key. Here I can create what APISIX calls a consumer, and this consumer's name is johndoe. And how do you know it's johndoe? It's johndoe because it has a header named apikey with that value. If I have this header with this value, I will be considered johndoe; because I'm considered johndoe, I will be authenticated; because I'm authenticated, I won't be limited. So let's create this user. We can check: here I have a consumer created, johndoe, and now I need to pass the header, and it's apikey: my-key. Very original. It works. I see you frowning, but... here I could also replace the key... I don't understand the problem. OK, is it here, on one line? No?
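That consumer setup, sketched against a local APISIX admin API (the ports, admin key, consumer name, key value, and route path are placeholders, and the admin endpoint varies a little between APISIX versions):

```shell
# Create a consumer identified by an API key (key-auth plugin).
curl -X PUT "http://127.0.0.1:9180/apisix/admin/consumers" \
  -H "X-API-KEY: ${ADMIN_KEY}" -d '{
    "username": "johndoe",
    "plugins": { "key-auth": { "key": "my-key" } }
  }'

# Anonymous call: counted against the limit.
curl "http://127.0.0.1:9080/v1/hello"

# Authenticated call: APISIX matches the key, resolves the consumer,
# and the custom limit plugin lets it through.
curl "http://127.0.0.1:9080/v1/hello" -H "apikey: my-key"
```

This is a config sketch against a running gateway, not something runnable standalone.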
Come afterwards and I'll try it on this side. OK, it's good. So, the next step: now we have everything, it's tested. Who does unit testing? I don't want to do that to people... OK, great. Testing, good. Testing in production? Yeah, everybody does testing in production in the end, so at least you are very, very open about it. In general you're testing in production whether you admit it or not, but I would encourage you to mirror the traffic and send it to the new endpoint, discarding the response, just to check that your code works with a real production workload. Because whatever your tests are, however great they are, there is a high chance that in production something happens that you never, never saw before. So before doing a general release you might want to do this: mirror it. In APISIX it's called mirroring traffic: I will take all the calls and also send them to the new endpoint. And just to manage your expectations: at this point there is a bug that will be fixed in the next release; the fix has been done, it just needs to be incorporated. Let's do a couple of calls and let's look at the dashboard. So here is a graph in a dashboard for APISIX. What I did is I just added two new widgets. It's small, I'm sorry, but the idea is that we monitor everything. Here we already have some data and, normally, after some... yes, here we started having some data as well. You should see two curves, and the curves should be similar: if the curve on the right is not similar to the curve on the left, there are probably issues. So here I have issues, and the reason is a priority. You didn't want me to show you the code, and I agreed, but there is something you should know: every plugin has a priority. That's how APISIX knows the order in which it should apply the plugins. So before, when I said there is an issue, the issue could be that two plugins both want to rewrite something, in the wrong order. So in the next version you will be able
to override the priority on each plugin, and you can probably solve the problem. And well, now we have 404s, of course, it's logical. So now let's finally do some canary release. You can do canary release based on a header, you can do canary release based on your location, you can do canary release just randomly. Here, for demo purposes, I do it randomly, basically as a percentage. So I have it here and now I can call. Normally it's a proportion of the traffic; of course at the beginning it shouldn't be 50/50 as I'm showing. You'd restrict it to a number of users, like 10% or 5% or 2%; you are conservative, depending on how many users you have. But basically, great. And what's the last step? Well, the last step is: now you have a v1 and a v2, and probably at some point you'll have a v3 and a v4, and so on. Ah yes, sure, yes? So the question is: is it possible to restrict it to a number of users? I don't know if you can restrict it to certain consumers; I know you can restrict it to certain headers, so basically you could have a dedicated header with a dedicated value. I need to check the documentation for the consumer case, but for sure for headers you can do it, so it would be a similar way to solve the same issue. Now we have parallel versions. If you are a product manager you know that's not a great idea, because it means you need to split the budget and maintain all those parallel versions. So you probably want to deprecate some endpoints, or even remove them. You could do it in an ad-hoc way, but there is an IETF draft about deprecating endpoints, the Deprecation HTTP header draft. So basically you have a header, and this header is called Deprecation. It can be a boolean or a date; in the demo I am using a boolean. I would really encourage you to use a date, because it warns your users in advance instead of just force-stopping them soon. But I am lazy, so I am using a boolean.
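One plausible way to emit such headers from the gateway is the response-rewrite plugin; a sketch only (the header-setting syntax differs between APISIX versions, and the route id, successor URL, and dates are invented):

```shell
# Advertise deprecation on the v1 route: the Deprecation header (boolean or
# HTTP date per the IETF draft), a Link to the successor, and a Sunset date.
curl -X PATCH "http://127.0.0.1:9180/apisix/admin/routes/v1" \
  -H "X-API-KEY: ${ADMIN_KEY}" -d '{
    "plugins": {
      "response-rewrite": {
        "headers": {
          "Deprecation": "true",
          "Link": "<https://api.example.com/v2/hello>; rel=\"successor-version\"",
          "Sunset": "Sat, 01 Mar 2025 00:00:00 GMT"
        }
      }
    }
  }'
```

Clients that monitor response headers then get the deprecation signal, the replacement endpoint, and the cutoff date in one place.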
And then you have another header, Link, that points to the endpoint that you, or your client, should use instead. And there is also an optional header called Sunset, which takes a date: after that date, the old endpoint won't be served at all. So it's just like in Java APIs: first you deprecate, and if you've worked in the Java world you know things typically stay deprecated for ages, and at some point you finally really remove the API. It's just the same. Actually it's quite easy: we can just add some headers. So first I create the routes, then I duplicate a few more routes, and here it's just about adding some headers. As I mentioned, it can be a boolean, and here I am reusing the same definition so I don't need to write everything again. So here, when I call it with the boolean, and I want it to be verbose so you can see: the Link points to the version to use. And now, because you are monitoring your calls, every time you see that endpoint you know it's time to finally migrate to the new API. OK, that's it; enjoy, and apply it step by step. Well, thanks for your attention. You can read my blog if you are intrigued; if you are interested in the code, come to me afterwards or you can check it from here. And of course, if there is interest, I will share the slides with you. And now we have a couple of minutes, more than a couple of minutes, for questions, so I will be very happy to answer. There is a mic, so it will probably come around, or perhaps just speak loudly, even better. Question from the audience: is the performance good enough to transform requests and responses from one format to another format, if the structure is different? I don't know the answer to that; it depends. It depends on your requirements, it depends on the complexity of the transformation. And as I mentioned, I was a consultant for a long time, so when the question is super generic, I will need more requirements, more constraints, to properly
answer it. But as an educated guess, what I would advise is not to transform automatically unless your requirement really is not to break anything. If the requirement is not to break anything, then you will need to pay the cost anyway. If you are allowed to break things, then a proper new route to handle the same thing would probably be better. So, as I mentioned: first make it clear to your customers, "please use the new endpoint"; then, after they register, you have their email, so you can contact them directly and tell them, "hey, dear customer, you are using this endpoint, please move". Does that answer it? A follow-up: we have very complex application life cycles in engineering, so I was trying to work out if it's better to do transformations. In essence, the life cycle is not the issue; the issue is whether you accept to break things or not, and once you answer that, whatever the performance cost is, you will need to pay it. The other option is maintaining the old back end, and that is a product management decision. "I was more concerned you'd say don't do that." To be honest, again, I've been a consultant, I've seen a lot of stupid stuff, so nothing fazes me anymore. But no, you can transform. If your requirement is "don't break anything, very conservative, we want to keep our customers, every one of them", please do it. Then of course monitor, check that the cost is not too high, and if it is too high, then you know the answer. In general my approach to these things is trial and error: try, monitor, check. More questions? Thanks. There was a question: sorry, I was just wondering if there are any client libraries that react when there's a deprecation header in the response? That's a good question. So first, my question back to you is: a library in which language? No, I mean, checking for the existence of the header, if you use something like Quarkus or Spring Boot or whatever, should be pretty easy, so I don't think there is
such a thing, but perhaps it's a good idea to create such a library that reacts by itself: if I see this header, I check for the Link if possible. But to the extent of my knowledge, no. And I'd remind you it's a draft, so basically people are not super eager to build something on a draft. Other questions? Then I thank you a lot for your attention, I wish you a good rest of the conference, and I have stickers and a lot of pins if you want. [The rest of the exchange is an informal, largely inaudible conversation about scripting the demos and performance numbers; the speaker mentions he is not a Python developer, that if he needed a script he would just write one, and that he only entered this field about six months ago, so he doesn't claim to be an expert.]
[More largely inaudible conversation between the talks; one recoverable remark is that, at scale, even half a percent of users might be 5,000 people.]

So, let's start. I'm Matt Young. I have been at Red Hat for the last eight years and I've been working on Podman and related container tools. Today I'm here to talk about Netavark and Aardvark, Podman's new network stack. I'm going to be talking about what they are, why we wrote them, and how we wrote them. This talk assumes a bit of knowledge about Podman, containers in general, and how container networking specifically works, so let's go a bit into that. We're going to go over rootful networking specifically; rootless networking is its own complicated subject, and we didn't really change it at all as part of these recent changes. On top of that, cluster networking is generally going to be similar to what I describe, but with a bit more SDN on top to steer traffic. So, let's talk about networks. A network is a construct of your container engine, of your Podman, and it doesn't really directly correspond to one program.
It's more of a software construct: it gives a way for containers to talk to each other and to the outside world, and that's basically what a Podman network provides. The basis of a Podman network is the bridge. The bridge is a virtual switch: it has an interface plugged into the host, it has an IP address and a subnet associated with it, and the interface plugged into the host is set as the default gateway for all traffic from containers on that network. Then there are veth pairs: basically a pair of virtual Ethernet interfaces connected by a virtual cable. One end goes into the container, the other end goes into the bridge. That gives the container and the bridge the ability to talk, so all the containers on the bridge can reach each other as if they were on a little Ethernet segment. Next, we have firewall rules. The veth pair lets the container talk locally to other containers, but we need a way out of the container onto the real network. For that we have masquerading, a one-to-many NAT, so containers can reach the Internet. But we also have port forwarding, a one-to-one NAT: we want a specific port on the host to go to a specific port on the container. So there are two different types of NAT that can occur, and on top of that we need to add extra firewall rules to make sure traffic actually gets through: on a real system people have probably added firewalls, and we still want Podman containers to be able to access the Internet. So how did networking work in the first phase of Podman? We used something called CNI. CNI stands for the Container Network Interface. It's a standard developed for Kubernetes. It allows a series of plug-ins to run in a specific order, each doing a little bit of configuration, with a working network as the end result. And there are also the CNI plug-ins: a reference set of plug-ins for CNI that accomplish the basics. They let me get out to the Internet, they let me talk to other containers; they set up all the stuff that I just talked about. The CNI plug-ins are the default for Kubernetes.
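That bridge-plus-veth construction can be sketched by hand (illustrative only: these are not the commands Podman actually runs, the names and addresses are made up, and root is required):

```shell
# A bridge with an IP on the host; this address becomes the containers' gateway.
ip link add podman-demo0 type bridge
ip addr add 10.88.0.1/16 dev podman-demo0
ip link set podman-demo0 up

# A network namespace standing in for a container.
ip netns add ctr1

# A veth pair: one end stays on the host and joins the bridge,
# the other end is placed into the "container" namespace.
ip link add veth-host type veth peer name eth0 netns ctr1
ip link set veth-host master podman-demo0 up

# Address the container side and route its traffic via the bridge.
ip -n ctr1 addr add 10.88.0.2/16 dev eth0
ip -n ctr1 link set eth0 up
ip -n ctr1 route add default via 10.88.0.1
```

At this point `ip netns exec ctr1 ping 10.88.0.1` should work; Internet access additionally needs the NAT rules described above.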
As such, they were also the default implementation for CRI-O. And CRI-O was, at the time of Podman's creation, built by the same team that made Podman. So we had a lot of experience with them, we knew how to use them, and they worked. We knew they were production-ready, and we knew they had good Go libraries. All of that combined to make us think: we can use these to rapidly produce Podman and get it shipped quickly. And that was, at the time, what was most important to us. The appeal of the plugin model is that it can do all these various different things, and the reference CNI plug-ins were entirely adequate for our uses without us having to write much. On top of that, they were maintained by somebody else. At the time this was important, because we were trying to share code between Podman and CRI-O. That did not pan out, but at least for the initial days it made sense. Years passed and friction began to set in. Kubernetes and CNI have a model where there is a cluster with a bunch of different nodes. Podman has a single-node model: one node, one set of containers. There's some friction inherent between what we were trying to accomplish and what CNI did, and as time went on, those architectural difficulties only got worse. The first thing we really noticed was DNS: Podman aims to be compatible with Docker, and Docker provides DNS so containers can resolve other containers by name. CNI does not have a plug-in to do that, so we had to write our own. We wrote a simple implementation; it was per-network and it worked okay. But when we tried to submit it back upstream to the CNI team and the reference plug-ins, it was not accepted. The next issue was IPv6. CNI had support for v6, but it was complicated: it required a lot of configuration, and it required users to provide a publicly routable IPv6 subnet.
What we were looking for was more of a zero-configuration solution: if you want IPv6, we auto-generate a subnet and add it to the containers, something that just works for our users. And then, more generally, complexity: CNI has quite a bit of it. We care about only a very few CNI plug-ins, but there are a lot of plug-ins out there, and what happens when someone uses one we don't know about? CNI will happily run it. But we also have Podman commands to create networks and inspect networks; we added those, and we wanted to provide a Podman-native experience, as we do today. That means when we inspect arbitrary CNI networks with plug-ins we don't know about, we can't really do it: we throw our hands up and say, okay, we can't interpret this, we're just going to show you the raw CNI network config. That was not an optimal solution. We didn't like it at the time; we never grew to like it. The final big issue is support. The CNI maintainers' job is to support Kubernetes, so when we have problems, we largely have to solve them ourselves, and that's not a great option: none of the Podman maintainers are CNI experts. We built up some understanding over time, but we never really had a true understanding of what we were operating. And the CNI maintainers' best efforts aside, their day job is to support Kubernetes. If they have spare time, they can help us, but even then our issues are complicated, and usually they were not able to help. So, in the second half of 2021, things came to a head. There was a serious conflict between where CNI wanted to go, driven by Kubernetes clusters, and where Podman had to go; there was just too much divergence at that point. The CNI maintainers came to us and indicated they were having serious discussions about deprecating plug-ins, either all of them or some of them, and the ones Podman needed were among those being discussed.
If CNI deprecates what we're using, we need to have an alternative, because we can't just tell our users: sorry, no networking. And the CNI maintainers' position is also understandable: it isn't their job to support us, and they felt they were spending too much time on our support. At the same time, we were also looking at a major rewrite of the rest of Podman's network code, largely because of tech debt and legacy issues. Podman 1.0 made some naive decisions: containers are only ever going to be in one network, they're only ever going to have one IP address, they're only ever going to have one MAC address. These held for a while, but as the range of use cases Podman supports grows, as customers use it more and more, they start asking for things, and that code needed to grow. So we knew we were going to have a major rewrite of our network code, especially on the database side, and it seemed like an optimal opportunity to add on the extra work of replacing CNI. We also needed a rewrite of our DNS plug-in: the DNS plug-in, as I said earlier, only watches a single network. Customers come to us and say, "I have containers connected to two networks; I would like DNS on both of them." Okay. So we needed to do something, and we were in a good position to modernize the whole network stack in one go. What exactly were we going to do? The decision we came to was: rewrite from scratch the parts we need. We looked at the pros, we looked at the cons; it was a hotly contested decision within the team, but eventually we settled on it. We wanted something tailored to what Podman needs. No plugin model: the plugin model is wonderful for Kubernetes, because so many different network stacks touch it; you can't guarantee that K3s and OpenShift are going to use the same network stack, and CNI covers that. The plugin model keeps things open for all those vendors. For Podman, all the plugin model does is add exec time and make binary sizes bigger.
We were doing some performance tuning, and we found that, on container start, one of the single longest parts was CNI setup. So there was a real opportunity: by dropping the plugin model we could get real, user-facing performance. Also, we could change the language. Podman is written in Go because of the existing ecosystem: the authors of Kubernetes chose Go way back in 2015, and since then the entire container ecosystem has been written in Go for that reason. But this is an independent network tool. It doesn't seem like it has much to share with the rest of the project, and it really doesn't. Rust seemed nice; a lot of team members were excited about it, and it's well suited to building low-level tools. And we could take ownership of the code. With CNI, we never had maintainers there; we never really had ownership over that code. Here, we're shifting the burden to us, but it effectively already was on us, and by writing this ourselves we'd have a much better understanding of what we need to support. The downside is obvious: we were looking at a major duplication effort, because we'd be rewriting all that core plugin code, all that core network logic CNI had. But we looked at the CNI code and concluded we weren't going to be able to reuse much of it anyway; it was heavily tied to their plugin model. So altogether it looked like a massive investment, but the potential gains made it worth it, with CNI retained as the compatibility path in the meantime. At the time it was late 2021, and we had a target of Podman 4.0 in early 2022. We had about a quarter: some time, but not really that much to work with. So, what we came up with was Netavark and Aardvark.
We decided early on that we were going to need two binaries, and the reason for that is DNS: DNS needs a persistent service, something that's always running as long as containers are up. But Podman in general wants to stay as far away from persistent services as possible; it's a daemonless container engine, unlike the daemon-based engines on the other side. So we decided to split: a DNS server that is persistent while containers are running, but optional, so you can turn it off if you want; and a guaranteed-running, small, very minimal, quick-executing network setup tool. That lets us continue our edge focus: very, very few resources at idle, because not much is running at idle. So from there we have our network setup tool, that's Netavark, the name being a transliteration of "network", and on top of that Aardvark, the DNS server; the names sound fun and are easy to tell apart. We decided to write both in Rust; again, the team was excited about it. So, Netavark handles network setup: it creates bridges and veth pairs, it writes firewall rules, it sets sysctls and routing, and it handles teardown of all those things. And it's written as a monolith: again, we don't like plugin models, so everything is pinned down into a single binary. It's fast. And we have Aardvark, the DNS server, handling container lookups. Anything external, it forwards; anything internal, for a specific container, it answers. It's based on an existing Rust DNS library, and we have a single instance of Aardvark for everything: all networks, all containers, one process. Though if you're running rootless, each user gets their own separate instance. Well, let's talk about how we made this happen. We weren't just implementing Netavark and Aardvark; we were also integrating them into Podman. We could have done these as separate changes, but both of them had serious effects, which would be breaking.
So we could have done a Podman 4.0 and a 5.0, each with its own breaking changes, but we decided to bunch them all together: one round of breakage, one Podman 4.0, and hopefully happier users. From there we needed to make some decisions. First decision: are we going to completely drop CNI, or do we retain and support it? The answer we came to is: we retain CNI. Netavark is not going to support everything CNI does; deliberately, we only want to support the main use cases Podman has. But there are cool things you can do with CNI. You have a Flannel network in your Kubernetes cluster and you want to connect your Podman containers to it? Sure, you can do that with CNI. That's cool, I like that, I don't want to break that. So it made sense for us to retain CNI. Which led to the next major decision: do we want to support both at the same time on one system? This one was fairly easy: no. There's a massive amount of firewall configuration involved in making traffic work, especially port forwarding, and I don't want to make guarantees that Netavark's firewall rules and CNI's firewall rules won't step on each other. I don't want to have to write code that deconflicts the rules the two stacks create. It seemed a lot simpler to say: you can run CNI, or you can run Netavark. And that forced another decision: we're going to need a network backend setting, so the system knows which stack it's on, because you're not allowed to run CNI and Netavark at the same time. Final decision: do we migrate everyone to Netavark by default? Easy answer again: no. As I already said, we deliberately don't support everything CNI does, which means that if you have a fancy CNI config using something we can't support, migrating it would break it when you try to run it.
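On Podman 4.x that backend choice surfaces as a setting in containers.conf; a sketch (the path and key are from memory, so treat them as assumptions and check your distribution's documentation):

```shell
# Pin the network backend explicitly (fresh installs default to netavark,
# upgraded installs keep cni). Requires root for the system-wide file.
mkdir -p /etc/containers
cat >> /etc/containers/containers.conf <<'EOF'
[network]
network_backend = "netavark"
EOF

# Ask Podman which backend is actually in use:
podman info --format '{{.Host.NetworkBackend}}'
```

Note that switching backends on a system with existing networks is the unsupported case the talk warns about; this is for choosing, not for live migration.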
So the easiest decision: don't migrate anyone by default. If you're a fresh install, you get Netavark. If you are an existing install, you stay on CNI by default. From all of these decisions, we had to modularize our network code so that both backends sit behind one interface used by Netavark, CNI, and the rest of Podman. It moved into a separate library, in containers/common, for everything to build on. And then we had to make some changes to our database. As I said, it had made assumptions about a single network, a single IP address, et cetera. My co-worker Paul did this, and he did the migration in a fashion where we actually move stuff in the database to new locations that old versions of Podman do not look at. So you can take a Podman 4 install and downgrade it to Podman 3, and a few things break, sure: the network configuration has moved, so you lose your static IPs. But containers start, which is better than you would expect. So, let's talk about Netavark development. Netavark's biggest problems are the ones you would expect: we can't regress on features versus CNI, and we can't have big, gigantic bugs in critical paths. And then experience: we obviously had experience dealing with CNI, we'd been developing on it for years, but we had never developed a network stack. That was the most interesting part. Fortunately, the core network setup is not that bad: bridges, veth pairs, virtual interfaces, sysctls; it all generally matched what CNI was doing, and it's relatively sane. Where we saw opportunities to improve, we did. But the firewall rules: generally speaking, understanding what CNI was doing there is impossible. If you have ever done an iptables -L while a container is running, you know what I'm talking about. There is a massive wall of rules. Fortunately, there are a relatively limited number of scenarios that need to work. Port forwards are the biggest candidate, and there is really only: single port versus range of ports, TCP versus UDP, same destination port versus changed.
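The two NAT flavors behind those scenarios look roughly like this in iptables terms (an illustrative sketch, not the exact rules Netavark or CNI emit; the bridge name, subnet, and ports are made up):

```shell
# One-to-many NAT (masquerade): containers in 10.88.0.0/16 share the host's
# address when they talk to the outside world.
iptables -t nat -A POSTROUTING -s 10.88.0.0/16 ! -o podman-demo0 -j MASQUERADE

# One-to-one NAT (a port forward): host port 8080 goes to port 80 of the
# container at 10.88.0.2, here for TCP only; UDP would need its own rule.
iptables -t nat -A PREROUTING -p tcp --dport 8080 \
  -j DNAT --to-destination 10.88.0.2:80
```

Each knob the talk lists (single port vs. range, TCP vs. UDP, same vs. changed destination port) is one variation of that DNAT rule, which is why the scenario space stays testable.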
So it's possible to construct a set of scenarios that tests all the cases, and we did. From there, we have a test suite to validate Netavark's firewall rules, and we were able to identify what rules CNI was generating. Basically, we do a very similar thing to what CNI does, with some simplifications in places. Next, we're going to look at what comes after iptables, and the big ask there is support for nftables. If you've ever done iptables programmatically, there is a lot of pain: everything with iptables goes through exec'ing the iptables binary and parsing its output. nftables has a proper programmatic API, it's higher performance, and it resolves all of that. We didn't think we'd have time for it in this cycle, but we built a modular firewall interface, so someone can come along and build an nftables firewall driver behind it. So, let's talk a bit about Aardvark development: network-scoped DNS. It's fairly obvious why we need this: we're replacing CNI, and our dnsname plug-in goes away with it. But DNS servers are a dime a dozen; why couldn't we use one of those? Why was this hard? Aardvark is not just a DNS server like you'd find anywhere else: its answers for a specific container depend on who is asking. Answers depend on what networks you are on; containers can only see other containers on networks they have joined. Look at the example here: container one is on all three networks, A, B, and C, which means it can see everyone. Look at container four: container four can only see container one, because container four is only on network C, and it can only see other containers on network C. Container five, say, can see containers one, two, and three, but not four, because container five is not on network C.
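You can observe that scoping on any Podman 4 machine using the netavark backend; a sketch (the network names and images are arbitrary):

```shell
# Two networks; DNS answers are scoped to the networks a container has joined.
podman network create netA
podman network create netC

# A web container on netA only.
podman run -d --name web --network netA docker.io/library/nginx

# Same network: the name "web" resolves.
podman run --rm --network netA docker.io/library/alpine nslookup web

# Different network: the lookup fails, because "web" is not on netC.
podman run --rm --network netC docker.io/library/alpine nslookup web
```

This is a demo against a live Podman install, so it's not runnable as a standalone script in an arbitrary environment.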
So, what we need is a server smart enough to know who is asking and, based on what networks they're on, decide what they can see. Our initial thought was to reuse something existing. The problem is that the resolver effectively takes a single answer: say container one wants to look up container four, and the query goes out via the network A server; container four is not on a shared network, so the query fails, and the client just stops there. It takes that answer as authoritative. So we looked at potential alternatives to writing our own server, and they got maybe 90% of the way: you can define what are basically access lists, but not answers that change based on which network the query arrives from. It made sense to do a proper server that actually understood which container is asking and what networks it is on. So, let's talk about the process. Aardvark was the last thing we built, because it's the least essential: if we had shipped only Netavark, without DNS, it would have been a loss, but not a huge one; Podman 4.0 would just have had a bit of an odd network experience. And Aardvark is a bit unique in our world, because everything else in Podman exits quickly, while Aardvark keeps running as long as you have containers up. It's also performance critical in a different way. The rest of Podman is performance critical in the sense that you, the user, run a command and expect it to come back quickly. DNS is a bit more performance critical: it cares about the latency of individual queries. So writing a DNS server from scratch was a very scary prospect. But we found an existing Rust DNS server implementation called TrustDNS. It seemed to work and it did everything we needed, so we adopted it, and it's been in service quite a while. TrustDNS handles the caching and the forwarding: any query that is not for a container it knows about gets forwarded upstream, and the cache means performance is fine most of the time. So, in this case, all we really wrote was the logic resolving queries into containers.
We started off with configuration files that are written out by Netavark. So, as a container comes up or goes down, Netavark updates those files with the container's IP addresses, the networks it's on, and the total set of networks on the system. When the files change, Netavark signals Aardvark, which causes it to re-read everything, pick up any changes in the networks available, and bind to any additional networks. So if, say, a new network is created, we'll bind to that new network's gateway address. And from there, since we now have a cache in memory of which container has which IPs on which network, answering queries is relatively easy. Netavark and Aardvark were released publicly with Podman 4.0 on February 17. We didn't make them the default for existing installations at that time: we thought they were pretty stable, but we wanted to make sure we weren't making a mistake. And beyond that, switching to Netavark is a breaking change; we can't convert existing network configuration in place, so upgrades keep what they have. It's only the default on fresh installations, and in RHEL 9 it's the default. Netavark went in as a tech preview in RHEL 8.6 and should come out of tech preview in 8.7. It's in the process of being packaged for Debian, and it's been picked up by a number of other distributions. Overall, the rollout has been pretty smooth. I would say at this point, based on public bug reports, two to three in ten Podman users are using Netavark, which is about what we expected, since it's fresh installs only. We were expecting a bunch of public reports when Fedora 36 came out with Netavark; we were worried, but it's been fine. Now, just a brief bit on the next steps. We aren't finished here yet. With Podman 4.2, released last Thursday, we added the ability for Aardvark to bind a port other than 53.
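The reload step can be sketched minimally, assuming an invented one-line-per-container file format (the real Netavark files are laid out differently): on each signal, a server structured this way would simply re-run a parser like this over every network's file and swap the resulting in-memory table in, which is cheap because the files are tiny.

```rust
use std::collections::HashMap;
use std::net::Ipv4Addr;

/// Parse one network's config into a name -> address table.
/// Format assumed here: one "name address" pair per line. This is
/// illustrative only, NOT the real Netavark/Aardvark file layout.
fn parse_network_file(contents: &str) -> HashMap<String, Ipv4Addr> {
    contents
        .lines()
        .filter_map(|line| {
            // Skip malformed lines rather than failing the whole reload.
            let (name, addr) = line.trim().split_once(' ')?;
            Some((name.to_string(), addr.trim().parse().ok()?))
        })
        .collect()
}
```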
So, we had complaints that Aardvark grabs port 53 on interfaces. If you're running a DNS server that just wants to bind 0.0.0.0:53, Aardvark already sitting on the bridge will break that. So, now we have support to bind Aardvark to a port other than 53, and the natural follow-up was for Netavark to add a firewall rule forwarding port 53 traffic to whatever port Aardvark actually bound. Also new is the ability to isolate networks. This is a Docker feature we hadn't had, with a lot of requests for it in our issues and pull requests; it basically means containers in a network that is set to isolated cannot talk to containers in other networks. We're also working on bringing back some functionality that was dropped in the move to 4.0; hopefully that lands in our next release. Another thing is DHCP support for macvlan networks; we didn't have that when we did the port. Expect DHCP in the next release, which should be out soon. Now, on to questions. Are you going to put the DHCP support into Aardvark, or will it be a separate daemon? DHCP support is part of Netavark, so it should come along with it. Next, a question about container names across networks: there is no differentiation. We're using the Docker model, which means you can have containers on three networks, all of them with the same name, and we will just return all of the matching records. So, absolutely no differentiation; it's a mess, and the owners can decide to use different names. Please don't do that. Do you support [inaudible]? Yes, actually. That was a horror show, because I thought we were going to have to write it ourselves, but then we found TrustDNS and it supports it already. Thank God; I didn't want to take that on. Are you using iptables, firewalld, or nftables? At present, we're using iptables. I'm working on a dedicated driver that uses the actual nftables API; that should land soon, and hopefully it will be even faster, because shelling out to iptables takes some time. Thank you.
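The port fallback described above is easy to sketch with a plain UDP bind. The port numbers here are parameters rather than Aardvark's actual defaults, and in the real stack Netavark additionally installs the firewall redirect from port 53 to whatever port was bound; that part is omitted.

```rust
use std::net::{Ipv4Addr, SocketAddrV4, UdpSocket};

/// Try to bind the preferred DNS port; if something else already owns it,
/// fall back to an alternate port (0 = let the kernel pick a free one).
/// Sketch only: a real setup would log the failure and then configure the
/// matching port-53 firewall forward to the port actually bound.
fn bind_dns(addr: Ipv4Addr, preferred: u16, fallback: u16) -> std::io::Result<UdpSocket> {
    UdpSocket::bind(SocketAddrV4::new(addr, preferred))
        .or_else(|_| UdpSocket::bind(SocketAddrV4::new(addr, fallback)))
}
```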
So, I think you kind of answered this already, but if we have containers on different nodes, do you use any sort of encapsulation on top, like VLAN or VXLAN? We are strictly single-node, and that makes things easy. If you want multi-node networking, stick with CNI; there are various plugins that will help you with that. Netavark is only intended to work single-node. The next question was whether we're going to rewrite Podman itself in Rust. We talk about it jokingly, but no, there is no chance we're rewriting that much domain-specific knowledge. How was the experience coming from Go? I like Rust a lot, honestly more than Go, but I've always been a person who likes a compiler that keeps things consistent, even when it disagrees with me. I will say that our initial attempts at Netavark and Aardvark were kind of atrocious, because we were still very early in the learning process. This was our first serious Rust effort, and there is a lot of refactoring going on that fixes our early sins. I was curious about the names: where did they come from, and how did you choose them? Netavark was, I think, the best of the iterations we went through; from there, I just chose Aardvark because it sounded sort of similar. How do you run the DNS server rootless, without any privileges? That actually works natively, because we do it inside the user namespace. Since we are root inside the unprivileged user namespace, we get to bind port 53 there, and it only works for containers inside that namespace. Basically, the host can't reach that DNS server unless it enters the user namespace, say with nsenter. So, let's say you have two rootless users plus a privileged one: does that mean three Aardvarks running at that point? Yes. The rootless containers of different users run in different user namespaces, they cannot talk to each other, and you can rely on that to keep them from interacting.
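The "root inside the namespace" point can be seen in /proc/self/uid_map. The helper below just parses that file's format as an illustration of the mapping; it is not how Aardvark or the kernel decides anything.

```rust
/// Each uid_map line reads: <uid-inside> <uid-outside> <count>.
/// In the initial namespace the map is "0 0 4294967295": uid 0 really is
/// host root. In a rootless user namespace, uid 0 typically maps to the
/// user's own uid (e.g. 1000), yet the process can still bind port 53
/// there, because privilege checks happen against the namespace.
fn inside_uid0_is_host_root(uid_map: &str) -> bool {
    uid_map.lines().any(|l| {
        let mut fields = l.split_whitespace();
        matches!((fields.next(), fields.next()), (Some("0"), Some("0")))
    })
}
```

Reading the real file would just be `std::fs::read_to_string("/proc/self/uid_map")` fed into this function.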
If you have two different unprivileged user namespaces, they can't talk to each other; but for a single user, all the containers are within the same unprivileged user namespace. In that user namespace we can run Netavark, which means we can create bridge interfaces so they can talk to each other. So, for a single user, your containers can talk to each other even if they're rootless. But two different rootless users' containers can't talk to each other, even if they want to? Yes, that's actually a problem.