Hello everyone. We are here to talk about Kuryr-Kubernetes, so please check that you're in the right room. My name is Irena Berezovsky and here with me is Antoni Segura Puimedon. We are both part of the Kuryr team, and we are here to talk about the recent addition to the Kuryr portfolio: the integration with one of the most popular container orchestration engines, Kubernetes. I'll provide a short intro to the project, introduce you to the currently supported functionality and explain the overall architecture, and then Toni will dive into a much more detailed explanation of the integration we did and will show you a demo of how Neutron networking can serve a Kubernetes cluster.

So what was the motivation for this project? With cloud-native workloads and microservices-based applications gradually taking the place of VM-based and monolithic applications, we see different options popping up — different container orchestration engines, such as Kubernetes, that try to provide a solution for them. But if you want to deploy your containers alongside an existing OpenStack VM deployment, these options currently don't provide a proper answer. Even though Kubernetes is quite an extensible platform and there are many network providers you can plug in to support Kubernetes networking, these options are usually limited to the container-based environment, so you need to figure out how to put your bare-metal or VM parts onto this network if they need to be part of your application, and usually there is no unified network that can serve all of them.

In cases where better security and isolation between tenants is required, the way to do it is to deploy containers inside virtual machines: each tenant gets its own virtual machines and can deploy the containers inside them. But then you end up with one networking layer that connects the virtual machines and another networking layer that connects the containers, and this has two major problems. One is a degradation in latency and bandwidth; this part can be resolved with some technical solutions. The more severe problem we see is the management problem, because now the operator has to troubleshoot, debug and monitor not one networking system but two: one that connects the virtual machines and another that connects your containers.

We actually see that Neutron can be the unified platform that provides a good solution for the case where you have mixed environments — virtual machines or bare metal — and you want to add containers to these applications. Together with the options we are currently exploring with Fuxi-Kubernetes, which will allow adding storage to bare-metal Kubernetes clusters, this solution becomes quite complete: Neutron-based networking plus Manila- or Cinder-based storage in the Kubernetes cluster. So what is the mission of this project, given all these challenges that we see in the container environment?
We believe that Neutron will be able to provide a real unified platform to combine VM-based, bare-metal-based and container-based workloads under one networking solution, and it will allow a smooth transition to the containerized world. We already see this happening in OpenStack, where OpenStack services get containerized and orchestrated by Kubernetes. It allows both VMs and containers to share the same virtual topology and use either layer-2 or layer-3 networking to connect your VMs and your Kubernetes cluster. And if there are existing applications already deployed in your data center in virtual machines and you want to transition to microservices, you can do it at your own pace, rewriting certain parts of the application while keeping the rest in the VMs, combining part of the application in virtual machines and part of it in containers.

So what are the different use cases we're trying to cover? One use case is bare metal, where containers are deployed on physical machines. This use case is common when people care about performance, because you don't have the additional layer of a virtual machine in which you run your containers. From the deployment point of view, in this case we usually have the OpenStack controller node where all the OpenStack services are running; for the Kuryr-Kubernetes integration the important ones are actually two of them, the neutron-server and Keystone. So there will usually be a dedicated node that hosts all the OpenStack controller services. On the Kubernetes master node, in addition to the Neutron infrastructure, we will have the major component, the kuryr-controller. I'll explain a little later what exactly it does, but its main purpose is to translate the Kubernetes data model into Neutron resources and bind the two together. And on each of the worker nodes we will have the kuryr-cni, which is called by the Kubernetes kubelet agent; this component does the real binding of the pods and plugs them into the host networking via the Neutron infrastructure.
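To make that layout a bit more concrete, the Kuryr components are wired to Keystone, Neutron and the Kubernetes API through a kuryr.conf along these lines. This is only a sketch from memory: the section and option names are approximate, the endpoints and UUIDs are placeholders, and the kuryr-kubernetes documentation is the authoritative reference.

```bash
# Illustrative only: rough shape of a kuryr.conf (option names approximate).
cat > /etc/kuryr/kuryr.conf <<'EOF'
[kubernetes]
api_root = http://k8s-master:8080          # Kubernetes API watched by kuryr-controller

[neutron]
auth_url = http://openstack-controller:5000/v3
auth_type = password
username = kuryr
password = secret
project_name = service
user_domain_name = Default
project_domain_name = Default

[neutron_defaults]
pod_subnet = <pod-subnet-uuid>             # subnet pods get their IPs/ports from
service_subnet = <service-subnet-uuid>     # subnet used for service VIPs
pod_security_groups = <security-group-uuid>
project = <project-uuid>
EOF
```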
Another use case that we support is the nested case, where we run the pods inside virtual machines. This case is typically used when we need tenant isolation: as I mentioned before, each tenant gets its own virtual machines and can deploy its workloads inside them. With OpenStack you can, for example, deploy such a cluster with Magnum. In this case we will have some node running the OpenStack services as before, and we will have a compute node where we deploy the Kuryr virtual machine, which actually belongs to the administrator; it hosts the kuryr-controller component, which has to connect to the Kubernetes API server and also to the neutron-server to do the translation of the model. It is actually the only point where access to these APIs is needed, so on the compute nodes that host the worker VMs no such component is required; the only thing we need there is the kuryr-cni deployed together with the kubelet agent.

There are of course mixed environments, where some workloads require containers, some virtual machines, and sometimes bare metal. This is also possible with the Kuryr-Kubernetes integration. In this case we will usually have a number of networks: one connects the VMs that host containers (in case you use Magnum, this network is usually deployed as part of the Kubernetes cluster), and we will have additional networks that connect the tenant containers running inside the VMs. This is done by trunking the container subports through the hosting VM's port onto the network.

So what is the functionality we support right now? First, the networking mode we decided to support is the Kubernetes native networking mode, where there is one subnet and all pods are deployed on this subnet. If some isolation is required, it should be achieved through Kubernetes network policies; we don't have isolation at the network layer right now, so there is a single tenant and full connectivity, which is enabled by default. We support the ClusterIP service type of Kubernetes; this one is implemented through the Neutron LBaaS load balancer API: we disable kube-proxy and use the Neutron load balancer abstraction and API to support the Kubernetes services. Our default provider for this is HAProxy, and we plan to move to Octavia in the next release. And we support, of course, the bare-metal and the nested cases, and the mixture of different workloads if you have bare metal, VMs and nested containers.

Just a quick intro to the overall architecture that we have in Kuryr-Kubernetes. The main, centralized service is the kuryr-controller. This service is responsible for watching changes that happen in the Kubernetes API: it watches for events on the pods, on the services and on the service endpoints. Once a new pod, service or endpoint is created, or there is some modification, the kuryr-controller translates these changes into the Neutron model: for pods it creates ports and allocates IPs from the cluster subnet; in the case of services it goes and creates a load balancer, a listener and a pool; and once endpoints are added, because new pods are added through the replication controllers attached to the service, it also creates the members in the Neutron load balancer instance.
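As a rough illustration of that translation, here is a plain ClusterIP service with comments indicating which Neutron object each part maps to. The names and selectors are made up for the example, and the actual names Kuryr gives the Neutron resources follow its own convention.

```bash
# Illustrative mapping of a ClusterIP service to Neutron LBaaS resources.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: frontend
spec:
  type: ClusterIP          # -> Neutron LBaaS load balancer, VIP allocated from the service subnet
  selector:
    app: guestbook
    tier: frontend
  ports:
  - port: 80               # -> a listener on TCP/80 and a pool behind it
    targetPort: 80         # -> each endpoint pod becomes a pool member on this port
EOF
```

Every pod matched by the selector shows up in the corresponding Endpoints object, and for every such endpoint the controller adds a member to the pool.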
The kuryr-cni component is an essential part that has to be deployed on each worker node. Because there is such a component on every worker node, we decided that, from the integration point of view, it shouldn't have access to the Neutron API and should instead use the APIs that are already there, because the kubelet is there. So the kuryr-cni can get information from two sources: either from the kubelet and the environment where it's deployed, or from the pod entities themselves, because the kuryr-controller annotates them with the information required to do the local binding — this includes the MAC address, the IP address and the port type it will need. The kuryr-cni then performs the local binding of the pod by creating a virtual interface (veth) pair, plugging one end of it into the pod namespace and the other end into the host networking stack through the Neutron infrastructure.

We use two essential OpenStack services, Neutron and Keystone — Keystone for authentication — and we communicate with Neutron through the Neutron client. We also try to reuse as much as possible from various OpenStack libraries in our integration: we use oslo.config for configuration management and oslo.versionedobjects to keep the format of the annotations that we propagate from the controller to the CNI. This allows much easier rollbacks and upgrades, and it helps in case you need to upgrade the version of your deployment.

A bit about how we translate the Kubernetes service. In a native Kubernetes implementation there is kube-proxy, which uses iptables to program the rules required to do the load balancing. Because we have the Neutron back end we don't need this component, so it is disabled, and we can choose whichever component is there to back the Neutron load balancer API — for now we have HAProxy, and in the future we plan to move to Octavia. From the translation point of view, a Kubernetes service usually has a cluster IP, which maps to the virtual IP (VIP) in Neutron load balancer language. We use the same subnet that Kubernetes defines for the cluster IP range, and this subnet is used for the VIP allocations. So once a new service is created, a pool is created in Neutron with a listener, and then for every pod that belongs to the service a member is added in the Neutron load balancer instance. So now I'll pass it over to Toni.

All right, so I'm going to give you a bit of a deep dive. Just to get a feeling of the room: how many of you have tried Kubernetes? That's a very good show of hands. And Kuryr-Kubernetes, just in case? Okay, we have a couple; that's good. So let's get started. As Irena said, there is the kuryr-controller. I'll spare you the details, since Irena already covered it, but maybe I'll add some information: thanks to stevedore, which is also an OpenStack project, we are completely pluggable. So if you were to say, okay, I like what Kuryr-Kubernetes does, but I want to do something extra for the ports, or I want to use this other load balancer application that I have, or some provider that has some hardware that also integrates with Neutron — as long as it plays nice with Neutron ports, you could do that, or you could even replace our drivers entirely.
You can change how it sees the events and what it does with them, so in that sense you could say: okay, I don't want to use Neutron at all, and I'll use ODL or some SDN directly. You could do that just by making your own modules available on the system and pointing to them with stevedore. In this sense it is very pluggable — sometimes we think it's a bit too pluggable, because you don't know which things play well with each other, so we're going to add a feature that says: this is the vanilla configuration, and it will load all the drivers that play well together. And finally, we use os-vif for the interface plugging, so the contract between the controller and the CNI component of Kuryr is the os-vif VIF objects, which are standard OpenStack. The good thing about that is that if you were to say "I'm only interested in the CNI part", you could make your pod creations already include the VIF information, the kuryr-cni would pick that up, and you would not even need to run the controller. But you should — it's a nice project.

With Kubernetes 1.6, role-based access control (RBAC) became very important, and this is what I used for the demo. I cut the role down a bit because I had some extra permissions for some OpenShift objects that I removed last minute, but basically this is what the controller needs. As you can see, what it does the most is get and watch objects; the ones we really use now are the pods, the services and the endpoints, but I added some more in case we want to do something later. For example, somebody from Samsung sent a patch about SR-IOV, so if you want to manage a pool of virtual functions you could use the node object and put information there about which virtual functions are available — that's why it's in the role; it's not used now. Then, in terms of patching, which is what we do so that the CNI finds the information the controller put there, we list all those resources, but mostly we patch pods, services and endpoints. The service status is a special resource that allows you to implement the LoadBalancer service type, so that when you have a service you want to make available outside of the cluster you use the LoadBalancer type and, in our case, it gets a floating IP — or rather it will, because so far that has only been prototyped.

All right, for the CNI part. On the left-hand side — wait, your right-hand side — is the information that the CNI expects to find. I cut some of the versioning information because otherwise it would be about three pages long, but you can see it has all the information needed for a VIF, a virtual interface. The thing is, when it runs on bare metal — and I'm going to show a very detailed flow — it needs this information to perform the OVS binding, or OVN, or Dragonflow, and so on; as long as something can bind with Nova, it should be able to bind with Kuryr-Kubernetes. We support CNI 0.3.0, and as soon as another version comes out we will support it; by default we output the latest one, but if the kubelet asks for an older one we're able to convert it ourselves. The way it works now is that the kubelet calls the CNI, which is an executable: it watches the Kubernetes API, and once it sees the annotation put by the controller it can already proceed. As for its permissions, since it doesn't need to change any information, it only needs to get and list.
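The exact ClusterRole from the slide isn't reproduced here, but a minimal sketch of what was just described would look roughly like this — the resource list is trimmed and the names are illustrative, so treat it as an approximation of the real role shipped with kuryr-kubernetes rather than a copy of it.

```bash
kubectl apply -f - <<'EOF'
# Approximate controller role: mostly get/list/watch, plus patch for the annotations
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: kuryr-controller-role
rules:
- apiGroups: [""]
  resources: ["pods", "services", "endpoints", "nodes"]
  verbs: ["get", "list", "watch", "patch"]
- apiGroups: [""]
  resources: ["services/status"]
  verbs: ["patch", "update"]
---
# The CNI side is read-only: it only has to find the annotation the controller wrote
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: kuryr-cni-role
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list"]
EOF
```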
All right, so let's look at this in more detail. I hope I won't bore you too much with details, but let's go a bit through the flow so you understand how this all works. The user creates a pod. The kuryr-controller and the kubelet are applications whose main business is watching what happens at the API level, what the user defines, and the result of creating the pod is that events are sent out to all the watchers. So, in parallel, the kuryr-controller and the kubelet perform their duties: the kubelet creates the infrastructure container and then asks the local CNI to get it networked, and waits for it to return. In the meantime the kuryr-controller creates the Neutron port; it then checks the port, gets the port information, annotates the pod, and puts it on the Kubernetes API server with a patch request. That triggers another "modified" event that is picked up by the kuryr-cni, which at this point is already watching for the VIF information, and then it calls the plug method. That triggers the OVS agent, if you're using the reference implementation, to start the dance of getting the information from Neutron to perform the binding, set up the flows and so on. Once the controller sees that the port is active — you see there that there is a loop showing the port all the time until it sees it's active — it annotates the pod saying it is active, and then the kuryr-cni returns control to the kubelet, which finishes its job, and the pod can already work. Because the port is active, we guarantee that no packet will be lost due to the networking not being ready.

If you're paying attention, you will say: okay, there's a rather big number of round trips here. We're aware of it, and the way we're solving that is that we're going to pool resources. We have some patches in flight for that, and what we're going to do is have already-bound ports on the worker nodes of Kubernetes, so when you need a resource the controller will just say "use this one from the pool", and then the kuryr-cni will just move it inside the container, and that's it.
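To make the "plug" step in the flow above a bit more concrete: on a bare-metal node the binding boils down to something like the following. This is heavily simplified, with placeholder IDs, names and addresses; the real work goes through os-vif, driven by the data in the pod annotation.

```bash
# Simplified sketch of bare-metal pod plugging (placeholder values throughout).
PORT_ID="db8d1f9c-0000-0000-0000-000000000000"   # Neutron port UUID from the annotation
MAC="fa:16:3e:12:34:56"                          # MAC address Neutron allocated for the port
NETNS="cni-0000"                                 # network namespace of the pod's infra container

# veth pair: one end stays on the host, the other goes into the pod namespace
ip link add veth-host type veth peer name eth0-pod
ip link set eth0-pod netns "$NETNS"
ip netns exec "$NETNS" ip link set eth0-pod address "$MAC"
ip netns exec "$NETNS" ip link set eth0-pod up   # the IP and routes from the annotation go here too

# the host end is attached to br-int tagged with the Neutron port id, so the
# OVS agent recognizes it and sets up the flows and security groups
ovs-vsctl add-port br-int veth-host -- set Interface veth-host \
    external-ids:iface-id="$PORT_ID" \
    external-ids:iface-status=active \
    external-ids:attached-mac="$MAC"
ip link set veth-host up
```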
All right, so now let's look at what happens when the Kubernetes cluster runs inside Nova instances. This is a very important use case for us, because we want to make OpenStack the best platform to run Kubernetes — and its derivatives, of course — on. What we use for that by default is Neutron trunk ports; this started with Ocata. Basically, this is what you usually have, but as you can see there is a bridge in the middle that you're probably not familiar with, called the trunk bridge, and the tap device, instead of being attached to br-int, is attached to this bridge. The reason is that this port has been made a trunk port, which means it can be split into a parent port and several subports. The VM's eth0 is untagged (VLAN 0), so it maps onto the parent port that you have in purple there. When you start launching pods, what happens is that even if the pods are in the same network they get different VLANs, which means that in the trunk bridge they can be split into several separate ports — there you have the turquoise, green and pink ones, sorry for the color choice. On the upper side you can see that the pod has a device that comes with a VLAN tag; in the middle layer that VLAN tag is stripped and the traffic is separated into internal ports of the trunk bridge, which are used to patch into the OVS br-int, and there you already have the regular Neutron subnets you're all familiar with, corresponding to the pod subnet. What this means is that when a pod wants to talk to another pod, the traffic still goes all the way down to br-int, even if they're running on the same VM, and this allows you to apply things like policy between pods that are running on the same subnet. With other implementations you could get rid of that hop, but if you need the security group model, this is what allows you to apply it. If there is any question at any point — because this is the deep-dive part — just go to the microphone and ask; I'm willing to spend the necessary time for everybody to be confident in their knowledge of this.

So this is how it looks when you're creating a pod inside Nova instances. You can see it looks almost the same; the difference is that after the port is created, the controller allocates a VLAN ID: it checks which VLAN IDs are already in use on that Nova instance and selects another one of the 4096 that are available, and it adds the new port it created as a subport of the trunk with that VLAN ID. Then it waits for the trunk service of Neutron — in your Neutron deployment you have to enable the trunk service plugin, which usually doesn't come enabled by default, so remember to do that; otherwise you will spend time, like me, figuring out what is going on. Once that is done and the port is active, the controller sends the annotation saying it is active and the kuryr-cni can return. But as you can see, on the kuryr-cni side this time we don't need to perform any OVS plugging, because it just creates a VLAN device inside the VM, and that's all.
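If you want to poke at this part yourself, the controller's work maps roughly onto the following CLI calls. The port names and VLAN ID are placeholders, and Kuryr of course does this through the Neutron API rather than the CLI; the trunk service plugin has to be enabled first.

```bash
# Neutron side: enable the trunk service plugin (not on by default), e.g. in neutron.conf:
#   service_plugins = router,trunk
# then restart neutron-server.

# Make the worker VM's port a trunk parent port (done once per worker VM)
openstack network trunk create --parent-port worker-vm-port trunk-worker0

# For each nested pod: create a port on the pod network and attach it as a
# subport with the VLAN id chosen for that pod (placeholder values)
openstack port create --network pod_net pod-frontend-port
openstack network trunk set trunk-worker0 \
    --subport port=pod-frontend-port,segmentation-type=vlan,segmentation-id=101
```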
Okay, so now, as Irena said, with this release we added the service support. Since this is our architecture, the flow is again quite similar, but it doesn't involve the CNI, obviously. This time the controller sees that the service is created — and logically the endpoints follow — and with that information it creates a load balancer and waits for the load balancer to be up, so there is some polling there; that also makes load balancers a candidate for pooling, so you can use them faster when you need them. Then it creates a listener. We use listeners very plainly for now, just for the port; when we support Ingress — sorry, Ingress objects — in Kubernetes, then we'll probably use them with layer-7 policies to be able to differentiate between paths and so on, but for now it's just a plain listener. Then the pool, and then, every time we see a change on the endpoints, we compare the list of members we have with the list we get from the endpoints, and we remove and add whatever is necessary.

All right, so as I said, there is this work going on thanks to the folks from the Super— yes, go ahead. Sorry, so the question was whether we are doing the load balancing for all the endpoints, or only for the external endpoints. The answer is that this is just for the services to work with the pods, so it's for all the endpoints. The external ones, the ones you would expose with an external IP, would be processed separately by having a floating IP assigned to them. This is something we don't do yet — we don't have support for external IPs, because most people just wanted to use the LoadBalancer type — but if people came and said "look, we want external IP support", we would add it without much cost to the flow we saw before. Thank you for the question.

So, the Superfluidity folks are interested in deployments where OpenStack and Kubernetes run side by side, with Kubernetes using Neutron networking, and they are interested in the creation of resources going very fast. The current state of things, where you need to go to Neutron many times and so on, is not something that is satisfactory for anybody, but they care a lot about it, so they sent some patches. As I said, they already work; we're just bikeshedding a bit about how they should look. But you can see the performance difference between how it is now, for pods in VMs, to create 50 containers, and how it is once you have the resources pooled — the difference is almost eightfold in favor of the pooled resources. We hope this will be available in Pike; we'll see, it's almost ready to merge.

Yes, go ahead. This doesn't depend on anything special: the pods-in-VMs part, yes, it needs Ocata, but the resource pooling part could work even with earlier releases. To get the pooling you will need the Pike release only for Kuryr; for Neutron you will be able to use older releases, down to Newton. For pods in VMs it will not work on Newton because of the lack of trunk ports — unless you use another approach: we have some people from Intel submitting a patch so that, instead of Neutron trunk ports, it uses allowed address pairs and macvlan devices, and in that case you would be able to use Newton as well. Do you have more? Okay, thanks. And the way that we're doing the pooling is burst-tolerant: if you have the typical use case of nine in the morning, everybody gets to the office and starts firing up services and containers — say you have a pool of 50 and suddenly 100 containers are created — what we do is double the amount of pooled resources, and as they get less and less used we clean up and reduce the size of the pool again. We don't aggressively try to keep the pool at the same size, so if you have a lot of container churn we should be able to keep up with it.
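Coming back to the service flow described a moment ago: for reference, it corresponds more or less to this LBaaS v2 sequence, shown with the deprecated neutron CLI that also appears in the demo. Names and addresses are placeholders, and Kuryr talks to the API directly rather than shelling out like this.

```bash
# Roughly what a ClusterIP service translates to (placeholder names and addresses)
neutron lbaas-loadbalancer-create --name demo/frontend \
    --vip-address 172.30.194.45 service_subnet          # VIP = the service's cluster IP
neutron lbaas-listener-create --name demo/frontend:TCP:80 \
    --loadbalancer demo/frontend --protocol TCP --protocol-port 80
neutron lbaas-pool-create --name demo/frontend:TCP:80 \
    --listener demo/frontend:TCP:80 --protocol TCP --lb-algorithm ROUND_ROBIN

# One member per endpoint pod; the member list is re-diffed whenever the Endpoints change
neutron lbaas-member-create --subnet pod_subnet \
    --address 10.0.0.9 --protocol-port 80 demo/frontend:TCP:80
```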
All right, so we get to the time of the demo. You have two options: you can ask me to run the demo live and I will pray to the demo gods, or we can have a recording. Anyway, if the live one were to fail we could fall back to the recorded one; it's exactly the same deployment.

The deployment I did uses two bare-metal servers; you can see them as r510-2 and r510-3. r510-3 was deployed with Packstack, on Ocata, and there you have the typical stuff that gets deployed with Packstack — Glance, Nova, Cinder, Neutron, Keystone and all that jazz. The other one I added as a compute node afterwards. Then I used a project called kuryr-heat, which we are incubating and which will move inside kuryr-kubernetes, so I could use Heat resources to create the VMs, the trunk ports and so on, instead of creating them one by one. And then we have the — I don't know how many people are familiar with the Kubernetes guestbook example application? Okay, some people. It's a very basic two-tier application: it has a front end that is some JavaScript, with PHP to find the backend services, and underneath, as storage, it has Redis, with a master and as many slaves as you want.

So let's get that going. Do you want me to try to run it live? Yeah, okay. All right, let me check I'm on the right machine. Should I make it a bit bigger for the people in the back? Is this size okay? Okay, cool. I have a script that sends the commands for me, but it's happening live — I'm just too tired to remember all the commands by heart. This is the same thing that will be posted on YouTube afterwards, and it has some explanations about what it is doing, but I already made those explanations earlier. This is just the picture you saw before; when I edit it on YouTube I'm going to make a fancy overlay showing the diagram at the same time, and also give details about what I used — that it was Packstack. I also want to tell you that I'm going to make a blog post about this so you can do it at home. But the big important thing is: after running Packstack, you should make sure that the firewall driver for Neutron is the native OVS one, because otherwise, with the hybrid driver, there is a problem with the trunk transport and the communication doesn't go anywhere, and you're going to spend three hours, like me, wondering what is going on. All right.
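Concretely, the switch I mean is on the compute nodes running the OVS agent; on a Packstack/RDO layout it amounts to something like the following (adjust the paths and service names to your deployment — treat this as a pointer rather than a copy-paste recipe).

```bash
# Use the native OVS firewall driver instead of the iptables_hybrid one,
# otherwise trunked (VLAN-aware) traffic from the nested pods goes nowhere.
crudini --set /etc/neutron/plugins/ml2/openvswitch_agent.ini \
    securitygroup firewall_driver openvswitch
systemctl restart neutron-openvswitch-agent
```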
So here I'm going to show the guestbook application. Typically the guestbook application comes with three services, and each of them — the front end and the Redis slaves, I think — runs with three replicas, but since I was using a machine that is not very powerful I reduced it to one replica; it will be scaled later so you can see that scaling still works. As you can see, I modified it to one replica, but otherwise it's exactly the same application. One other difference is that, since I'm not running kube-dns — you could run kube-dns, I just chose not to — I changed, near the bottom, the value to env (the GET_HOSTS_FROM setting in the guestbook manifests), which means that in order to find the services, instead of using DNS resolution, it just checks the environment variables that the kubelet sets for the services. So let's clear that.

The subnets we use for this demo are three: one is the pod subnet, another is the VM subnet and the final one is the service subnet. You can imagine they are all connected to the same router so that the communication works, but it also gives you the idea that you can have several Neutron networks running inside the instances — so if you wanted a driver that puts different Kubernetes namespaces in different subnets for more isolation and security groups, you would be able to do that without any difficulty.

All right, so now we start getting to the really interesting part that can actually fail. This is all in a namespace I created called demo. That went fast. Now we're going to show that the pods are actually being created. As you can see, they are still creating; what this means is that right now there is the dance between Neutron, the kuryr-controller, the Kubernetes API and so on — which, as I said before with that graph, goes much faster with the pooling — and it creates the subports. So we're going to show how the subports are actually being created, and here is something interesting: usually with Kuryr you will see that the device owner of the port is compute:kuryr, just like you have compute:nova, but when you create a trunk subport, the trunk code overwrites whatever device owner you put, so they show up as trunk subports. And as you can see here, the front end already has an IP from the subnet, the slave too, and so on.

One useful thing for debugging, which is quite powerful for those managing these environments, is that we have deterministic names for the ports, so that you can find your pod when you're doing an openstack port list. This gives you visibility even in this case, where the pods are running inside virtual machines. And this is just to show that the Kubernetes side gets reported from the CNI the right information about the IPs and the other things that Neutron and Kuryr set up for it — just to confirm.
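If you want to follow along at that point, these are the kinds of checks being run; the namespace and network names come from this particular setup.

```bash
# Kubernetes view: pod IPs as reported by kuryr-cni
kubectl --namespace demo get pods -o wide

# Neutron view: one (sub)port per pod, with a name derived from the pod,
# so the two lists can be matched up
openstack port list --network pod_net

# device_owner shows up as trunk:subport in the nested case
openstack port show <port-id> -c name -c device_owner -c fixed_ips
```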
All right, so let's look at the service side of things. Here I'll apologize for not using the openstack command line interface, but that is because it doesn't support load balancers yet, so we had to use the neutron CLI for that, and you're going to see a lot of "this is deprecated" messages.

We check the services; we see that they are created, with their cluster IPs. Of course, you have to make sure when you make such a deployment that the cluster subnet range you give Kubernetes and the one you create in Neutron are the same, otherwise you're going to have a lot of fun. Another important aspect I want to remind people of, because I've fallen into this trap before, is that when you give Kubernetes the cluster IP range it doesn't allow you to give it anything other than a CIDR, so you cannot say "just reserve this IP for the router of this subnet". So make sure the router port you put on that subnet is on the last IP — in this case I think I have it on .254, the last part of the subnet — just to be careful. Some people who want to be extra careful create a fake service definition with the IP of the router so that Kubernetes will not allocate it. In the pod case we allocate the IPs and report them, but in the service case it is Kubernetes that dictates the IPs, because when you define a service it is your prerogative to define the IP you want for that service, so Kubernetes is right in doing things that way.

You can see that the front end has a single IP, and what we're going to do now is check that the load balancers were actually created and have their members. As you can see, this is using HAProxy, and I see that I have an old front-end load balancer from a previous demo run that I probably aborted before it had the chance to clean up. The listeners will probably also have an extra one as well — no, this one was cleaned up. So these are the listeners; also pay attention that they have a deterministic name, namespace slash name of the service, so it's easy to find things. And then the pools — okay, so we have demo/frontend.

So now what we're going to do is use the Kubernetes API to scale the deployment, and as I said before, the Kuryr controller will just check the difference between the endpoints it knows about and the pool members, create the port, and also add it to the member list. Right, you can see that there is a new one — right now I don't know which is the new one and which is — yeah, the .9 is the new one — and hopefully the member list will already be there. Yes, it's already updated.

Now what we'll do is verify that the pods can talk, that one service can talk to the other services. So let's get the IP of the front-end service — okay, it's 172.30.194.45 — and we will just ping it from another pod, which is the simplest test we can do. Good; that's always good when it works up to this step. As you can see, this is running OVS, so the first ping takes longer than the rest because the traffic actually has to go to user space and install the flows.

Now what we will try is the actual application, and I just hope it all works fine. In order to do that, since this is running on a server that is on my company's VPN network, I'm going to have to assign a floating IP to it. I've made a mistake in the script, so maybe I'll have to do this part by hand — and yeah, we're short on time, so I'm going to cut this part out of the video because I'm being told we are out of time.
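For completeness, since this part gets cut from the recording, the manual steps amount to roughly the following. The IDs and names are placeholders; the VIP port ID comes from the load balancer details.

```bash
# Find the load balancer's VIP port and a free floating IP (placeholder names/IDs)
neutron lbaas-loadbalancer-show demo/frontend   # note the vip_port_id field
openstack floating ip list                      # pick an unassociated one

# Associate the floating IP with the VIP port so the service is reachable externally
openstack floating ip set --port <vip-port-id> <floating-ip-address>
```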
Let's just move to it. So, floating IP list — you can see here that I had a free floating IP that I had to assign, and here I had the port, so you can just take the port of the service's VIP and assign it like that. Okay, so once it is associated we can already go to the guestbook application. This is the floating IP I assigned, there in the address bar, and then what I try to put is "let's go Celtics", to see if they manage to win tonight. And, as you can see, the application works.

Now what we're going to show is how, when you delete the cluster resources, the Neutron resources are cleaned up. Do we have time, or should I speed it up? Okay — so, I use the label selectors to delete the resources, and you can see here that the deployments and services are being deleted. And now what we're going to see is that when we do an openstack port list, very fast it deleted four of the ports; one was still being deleted, but eventually they all get deleted and the load balancers are all gone. So thank you for watching the demo, and if you have any question, please go to the microphones and just ask us. Let's go to the last slides to finish up.

All right — yeah, I forgot that we actually have some slides about future features. We have Ingress support and network policy support coming up, then the resource management, meaning the pooling I was talking about, so this will be sped up. Then other improvements: we want to split the CNI into a daemon and an executable, so the daemon will be able to keep information about what's going on. If you imagine that you have a lot of worker nodes all connected to the same Kubernetes API server, and all of them get a lot of pods scheduled, that's a lot of connections happening to the Kubernetes API and more load; we're going to reduce that by having a daemon. And then we want to have HA for the controller.

We would like you to join us, whether you're an operator or a developer — everybody has things to contribute, even if it's just requirements or feature requests. Bring them up on IRC or on the mailing list. Here you can see some blog posts that help you get started trying the nested feature; as I said, I'm going to tweet soon about another way of doing it with kubeadm and so on. And yeah, this was the old slide for the services, from another demo we had prepared. So if you have some questions, maybe we have some time for them; otherwise you can catch us afterwards. All right, thanks a lot — I hope to see you soon.