Yeah, I think we can start. So hello everybody, thank you for coming and choosing this presentation; we will try to make it as good as we can. My name is Michal Jura, I'm a Linux cloud developer, and with me is my colleague. Hello, I'm Flavio Castelli, I'm an engineering manager for the container team at SUSE. So we would like to invite you to our presentation; it's called "OpenStack Magnum and Kubernetes for everyone".

So there are some challenges nowadays. As you know, you have to go through really rapid development cycles. You are using agile development methodologies, you are using continuous integration and continuous delivery, all of that just to react quickly to the changes that you face on a daily basis. To help yourself you also rely on cloud environments, to have the flexibility to scale out when you have traffic spikes, and to be everywhere by leveraging different data centers. But for all that there is a price: you have to deal with the high complexity of cloud environments, and probably also with different cloud providers, be they public or private ones. You are trying to cope with that by adopting a microservices architecture, so you find yourself dealing with tons of containers, and you need help to sort all of that out.

Okay, so what do you want to do? You want to focus on the application you are developing, not on the machines. You want to manage applications, not machines. To do that, you can resort to Kubernetes. Kubernetes is an orchestration engine; we're going to talk more about that now.
It allows you to be portable: once you package your application inside containers, you can move it really everywhere. It doesn't force you to use certain directives in order to deploy and create your application. It is also friendly with legacy applications, applications that were not born respecting the Twelve-Factor App manifesto, so it's really flexible. It avoids vendor lock-in, because once you have Kubernetes it doesn't really matter whether it's running on top of a public cloud, on top of a private one, or on bare metal. It allows you to focus just on your application, as I said before.

It's self-healing: if something fails, it will automatically recover. It can also be instructed to automatically scale, so if there's a peak of requests it can scale up, and when the peak is over it can scale back down so that you're not wasting resources. And the really nice thing is that once you try to run stuff inside containers and deploy distributed applications, you will start facing a lot of challenges, a lot of problems to solve. Kubernetes has an opinion about how to solve these problems, but despite being opinionated it allows you to swap out implementation details, thanks to its plugin architecture that supports different types of drivers for the different types of problems you are going to face. It allows you to solve things like persistent storage, which, despite everyone speaking about stateless applications, is still a problem: you're still going to have stateful applications and you still need to manage them. Kubernetes can help with that, as it can help with secret management, keys, certificates, credentials, and it can also help you with doing deployments; we'll see that later on.

The architecture of Kubernetes is this one. You have a cluster of etcd nodes. etcd is a key-value database, a distributed one, and it keeps the state of the cluster inside of it.
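The role of etcd just described can be pictured with a tiny sketch: cluster state stored as keys and values that the other components read and watch. The key layout below is simplified and invented for illustration, not the real Kubernetes etcd schema.

```python
# Toy picture of cluster state in a key-value store like etcd.
# Keys and values are invented, not the real schema.
cluster_state = {
    "/registry/replicasets/guestbook": {"replicas": 5},
    "/registry/pods/guestbook-1": {"node": "worker-1", "phase": "Running"},
}

def watch(prefix):
    """Return every key under a prefix, the way components watch etcd."""
    return {k: v for k, v in cluster_state.items() if k.startswith(prefix)}

print(watch("/registry/pods"))
```

The masters never talk to each other directly about state; they all read and write this shared store, which is what makes the rest of the architecture below possible.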
You have one or more masters, which are interacting with etcd. On the master you have the scheduler, which figures out where a certain workload has to be placed; you have the API server, which is the entry point for your clients, meaning for the operator to manage the cluster; and you have the controller manager, which is enforcing the desired state. If I say I want to have five instances of the guestbook container running, it ensures that at any time there are always five instances: the controller manager makes sure that the desired state matches reality.

Then you have a number of workers. On each worker you are actually running the containers. The containers are grouped together into a pod, which is the smallest unit of Kubernetes. These containers are really tied together; it's up to the designer to specify that these containers have to be co-located into a pod, because by doing that you remove some isolation features of containers, but that's done by design. The containers are created with a container engine like Docker, or it can be something else like rkt, and there are current efforts to use runC for that. The container engine is managed by the kubelet, a Kubernetes process which receives directives from the API server to perform the different operations. And then there is kube-proxy, which is in charge of figuring out some details of Kubernetes networking.

On a higher level, this is how the architecture looks. Let's say you have a guestbook application running inside a container, deployed on a Kubernetes cluster. How do you expose that to the internet?
So the internet traffic will go through a set of load balancers, traditional ones, and then these load balancers will redirect the request to one of the worker nodes of your cluster. On each worker node, as you can see, there is a port number, like 88 in this case. This port number is common across all the workers and is specific to the guestbook container. The request goes to this port and then it is forwarded to one of these containers.

So how is Kubernetes deployed, what does it require? As I said before, it requires an etcd cluster, it requires one or more worker nodes, it requires one or more master nodes. There is a software-defined network which links all the containers together, and you are going to need a load balancer to handle the ingress traffic. Plus you will need a lot of work to bring everything up together. Upstream is aware of this problem, because they know that while it's really a pleasure to use Kubernetes as a user, it's a pain to deploy it as an operator. So Kubernetes upstream is currently working on a tool called kubeadm, introduced with 1.4 and currently in beta stage, that makes the deployment easier, but it's not yet there. So in the meantime, what can we do, Michal?

Yeah, I think we have to combine these two worlds. On one side we have the OpenStack world, which is the perfect infrastructure-as-a-service framework, and on the other side we have Kubernetes, which is the tool for scheduling applications. We are thinking that this is the perfect solution for everyone, I mean OpenStack plus Kubernetes, and this can be brought to users with the new OpenStack service called Magnum.
This service was introduced with the Liberty release, and it's containers as a service. It supports different Linux images and integrates, of course, the different components like Kubernetes, Docker and flannel, and the OpenStack services: Keystone, Glance, Cinder and so on. OpenStack Magnum also provides a new API, used for managing the container orchestration engines, and it's a perfect management tool for orchestrating the cloud resources and instances with Heat. So we can just clone, for example, our development environment and have the same network setup for alpha, beta and production environments, all with the same network configuration. We can just launch different Kubernetes clusters among the different projects. We will be talking only about Kubernetes, but you can still use different container orchestration engines like Swarm or Mesos. And in the end, for example, when your Kubernetes cluster is up and running, you can still communicate with the tools you know best, like Docker, for getting access to the containers or the host, and you can still use the Kubernetes client for creating the pods or replication controllers.

How does it look from the architecture point of view? An operator of OpenStack can access the Magnum API using the Magnum client and just send a request about creating a new object. This request will be passed to the Magnum conductor, and the conductor will pick up the Heat templates: there is a chosen driver which will be using the Heat templates, these templates will be passed to OpenStack Heat, and Heat will create the cloud resources for us.
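That request flow can be pictured with a toy sketch: the client asks for a cluster, the conductor picks the Heat template matching the requested orchestration engine, and Heat turns it into cloud resources. All names here are invented for illustration, not the real Magnum internals.

```python
# Toy sketch of the Magnum request flow: client -> conductor -> Heat.
# Template contents and resource names are invented.
HEAT_TEMPLATES = {
    "kubernetes": ["network", "volumes", "master-instance", "minion-instances"],
    "swarm": ["network", "master-instance", "agent-instances"],
}

def conductor(request):
    """Pick the template for the requested container orchestration engine."""
    return HEAT_TEMPLATES[request["coe"]]

def heat_create(resources):
    """Pretend to be Heat: create each resource and report its status."""
    return {res: "CREATE_COMPLETE" for res in resources}

stack = heat_create(conductor({"coe": "kubernetes"}))
print(stack)
```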
For example, it will create the network for us, it will create volumes on the storage, and it will pick up the specific image. This image will be used by Nova to launch the Nova instance, and this instance will be built from a specific image where we already have installed the cloud-init package, the Kubernetes or Swarm packages, and the Docker package. Of course, it can also be delivered by different vendors, on different operating systems.

So how does it look on the OpenStack architecture? The Kubernetes cluster will deploy two different types of instances. On one side there will be the Kubernetes master, which will be running the controller services like the API server, the controller manager and the scheduler; these services are designed to control the Kubernetes cluster. And we will also have the workers, the minions, which will host the containers and the whole application for us. In the end, when we decide to deploy our application, we will be able to expose it to the end user, to the internet, using the OpenStack Neutron load balancer.

Why is OpenStack Magnum so awesome? Everybody can have their own Kubernetes cluster, and the deployment will take only a few minutes. For example, last week we had a user who tried to deploy a Kubernetes cluster manually; it took him something like seven days, and we will do this in our demo in a couple of minutes. Of course, the whole configuration is done automatically. We can autoscale our cluster, we can autoscale our platform on demand, and we can start our containerized application on this ready environment and just expose it to the internet using the load balancer.

Why should we pick Magnum with Kubernetes?
Of course, it's based on Google's experience with running containers in production. Once we migrate our application to the Kubernetes pod manifests, we will always have the same deployment process, and this makes our application very portable: we will always be able to migrate it between different clouds. We will take care only of the application; this is what we defined at the beginning. Kubernetes, of course, is ready for really big cluster deployments, so we can deploy hundreds or a couple of thousands of nodes, and we can choose between virtual nodes and bare metal.

What will the future be for us as developers? For example, we would like to focus on bringing the support for Kubernetes to different platforms like ARM or s390, and we will also get autoscaling and autorestarting features. We would like to give our data center more artificial intelligence, to make our data center more autonomous, like the autonomous cars which will drive us through our production workloads, and we would also like to support some other container engines.

So let's sum up a little bit. We are thinking that Magnum is a really big thing which is making OpenStack more complete. We can build, for example, our own library of applications and just launch them. We can manage the different projects, which are aware of the containers topology, and we make OpenStack a first-class citizen for container technology. And all of this is only to make our work easier and better.

So I think that right now we are ready for the demo. First we would like to start by showing OpenStack with the Crowbar project; this is what we are using for deploying OpenStack. We created a special Magnum barclamp to deploy Magnum.
We only have to drag and drop the node on which we would like to deploy the Magnum service. So right now we are switching to Horizon, and we have the ready OpenStack cloud with different resources. What we will use for our demo is the Magnum SLES 12 SP1 based image, which will be used to deploy the Kubernetes cluster.

To do this we only need to create the bay model, which is a kind of template, a bunch of parameters describing how to deploy Kubernetes. We can also provide some options, some recipes: we have to choose our image, and we have to choose the flavors, a different flavor for the minions and a different flavor for the master. We can provide Cinder as a backend, and we will also create a volume which will be used for our containers' data. To set up the networking for Kubernetes, we chose flannel as the network driver, so we only have to provide the name of the external network. For our demo we will also use our local Docker registry, and we will show how we are going to use it as we go.

Once we have created the bay model with all our parameters, we only have to click one button: we just create the bay, and in this way we are ready to deploy the Kubernetes cluster. The only thing left to define is the number of master nodes and minion nodes, and that's all: one click, and we are starting the provisioning process for deploying the Kubernetes cluster. As I said, Magnum is using Heat templates, so our whole deployment will be shown in the Heat stack tab. We can see that Heat is taking care of deploying the whole Kubernetes cluster; all the blobs you can see are the different components, the different cloud resources which normally you would have to configure manually. And right now Heat is starting the Kubernetes master.
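For reference, the bay model and bay we just clicked through in Horizon correspond roughly to two CLI calls of the Mitaka-era Magnum client. The flag names below follow that era's python-magnumclient as far as we recall, and all the values (image name, flavors, sizes) are invented for illustration.

```python
# Roughly the Mitaka-era Magnum CLI calls behind the Horizon clicks.
# Flag names are from that era's client; all values are made up.
baymodel_flags = {
    "name": "k8s-baymodel",
    "image-id": "magnum-sles12sp1",
    "coe": "kubernetes",
    "flavor-id": "m1.small",           # flavor for the minions
    "master-flavor-id": "m1.medium",   # flavor for the master
    "network-driver": "flannel",
    "external-network-id": "floating",
    "docker-volume-size": "5",         # Cinder volume (GB) for container data
}

def build_cmd(command, flags):
    """Assemble a CLI invocation string from a flag dictionary."""
    return command + " " + " ".join(f"--{k} {v}" for k, v in flags.items())

baymodel_cmd = build_cmd("magnum baymodel-create", baymodel_flags)
bay_cmd = build_cmd("magnum bay-create",
                    {"name": "k8s-bay", "baymodel": "k8s-baymodel",
                     "master-count": "1", "node-count": "2"})
print(baymodel_cmd)
print(bay_cmd)
```

The bay model carries everything that is reusable (image, flavors, drivers), while the bay itself only adds the counts, which is why spinning up another cluster is one click.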
We can just go and check on the console how this node is booting and which services are configured and started. Right now we see that the Kubernetes master is already launched, and the cloud-init service will take the metadata and configure the rest of the Kubernetes services needed on the master for us. We can see that there is etcd. So Flavio, do you recognize the services?

Yeah, sure. Heat is going to enable the services that are needed on the node, like etcd, the kube API server, the scheduler and the controller manager. It's going to generate SSH keys for the node, of course, and once it's done with that it's going to move on to the minions, the worker nodes, which are the nodes where we are supposed to start the containers. It's going to create them using the flavor that was specified inside the bay model that we created, and again it's going to customize everything.

Yeah, so right now the minions are starting. We decided at the beginning that we would like to have a Kubernetes cluster of two minions, so we have the two workers. We also specified that we would like to get two volumes, which will be created from Cinder and mounted on the minions for the containers' data. On the network topology we can already check that all the instances are in the same network, connected together and routed to the outside world; in this way we can also use the upstream Docker registry for the containers, or something else. One more click on the console of a minion to check the status, what is done there: we still have to wait for cloud-init, which is just launching the kubelet and starting our Docker service.

So now everything is operational. Yeah, and it took us what, three minutes? I think even two, and it was done on my development machine. Okay, so that's pretty cool. So now we can play with it, right?
Yeah, I can just pass the ready Kubernetes cluster to you, and maybe you can show something more with it. Yeah.

So in the remaining part of the demo we are just going to play with this Kubernetes cluster. We're going to work from a laptop. You can point kubectl, which is the command line tool for Kubernetes, to your cluster by just copying the IP address which is shown on the bay page, and then you can get information about the cluster. You can see, for example, how many nodes are part of the cluster; like that, you can see that we have two nodes running.

So now we can do a deployment. Here we have some Kubernetes manifests; these are from one of the quick start guides of Kubernetes. It's a guestbook application which is using Redis as a database, but Redis is not deployed in the simple way: it's deployed in master and slave mode. So we have one master; we are creating a pod running the Redis master. The master is going to receive all the read and write requests to the database, and then everything is going to be propagated to two other instances of Redis which are configured to be slaves. The slaves are going to be there to act as a backup and also to respond to read requests.

So the master is up and running. We also created a service. A service is a way to expose a program running inside a container to other containers inside the cluster; it's one of the clever ways Kubernetes solves problems like service discovery, how to handle multiple instances, and how to recover from failures. Now we are creating the slaves. As you can see, some containers are being created; it's taking some time because this is the first run, so Kubernetes has to download the images from the Docker registry that we are using. Now one of the slaves is running, and I think the other one should be running too. Yeah, right now.
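The service idea just described can be modeled in a few lines: a stable name in front of a changing set of pod endpoints, so clients never track pods directly. The service names and IPs below are invented for illustration.

```python
# Toy model of a Kubernetes service: clients resolve a stable name,
# not individual pods. Names and IPs are invented.
endpoints = {
    "redis-master": [{"ip": "10.0.0.5", "healthy": True}],
    "redis-slave": [{"ip": "10.0.0.7", "healthy": False},   # a crashed pod
                    {"ip": "10.0.0.8", "healthy": True}],
}

def resolve(service):
    """Return a healthy endpoint for a service; failed pods simply
    drop out of the pool, which is how clients survive pod failures."""
    for ep in endpoints[service]:
        if ep["healthy"]:
            return ep["ip"]
    raise RuntimeError(f"no healthy endpoints for {service}")

print(resolve("redis-slave"))   # the crashed pod at 10.0.0.7 is skipped
```

This is why the guestbook frontend only needs to know the names "redis-master" and "redis-slave", no matter how many pods back them or where they run.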
Now we will scale our Redis backend. Yeah, we scale the replicas. A replica set means that you want a certain number of a certain type of container always running. We decided that one slave was not enough for us, we wanted to be more resistant, so we scaled the number of slave pods from one to two. Kubernetes noticed that and immediately started to create a new container for the slave. This time it created it on a different node, because, you know, it's better than having all the containers on the same node.

Another really nice thing is that you can do debugging straight from your laptop, as you have seen. You can get the logs of a container, you can even kind of SSH into a container, all of that from your local machine; no need to figure out which node of the cluster is running the container and then SSH into it.

Now what are we doing? We just created the frontend controller; the frontend is the guestbook application. We also declared it to be a public service, meaning that this time this is a service which is reachable also from outside of the cluster. We highlighted a port number here: as you have seen in the third slide of the presentation, if you want to expose something, Kubernetes will allocate a port on each worker node of your cluster, and then you just have to point your load balancer to this port number and it will work.

So what we are doing now: we are making our service, our application, available to the internet. We have to go back to Neutron and create a new load balancer pool. Right now we are adding the virtual IP from the subnet of the Kubernetes cluster, and as you can see we chose port 80, because it's our web application and we would like to make it available on the internet. This is nice, but it's quite some steps. Can we do something better?
Yeah, of course. There is already a Kubernetes OpenStack driver which will take care of automating the creation of the load balancer rules for your cluster. You just define that in your Kubernetes manifest, and then Kubernetes will go and deal with all these manual steps for you.

Yeah, right. So right now we added the minions to our load balancer pool as members, and we are almost done with the whole deployment. We can just go and check the status of our web application containers, and then try to access the application. Yeah, they are up and running, so we can go to our browser, type the domain name, and we are done. Everything is working now; we are entering messages into the guestbook, which is storing everything into Redis.

But what if we want to do a further iteration on the development of this application? Like, we want to change the graphics a bit. Is there something in Kubernetes that can help us with that? Yeah, Kubernetes has many different tools. Maybe we would like to rebuild our web application once again and upload it to the cluster. So now what we are going to do: we are editing the HTML file of this web application and introducing some new graphics inside of it, then we are going to rebuild the Docker image with a traditional docker build command, and after that we're going to push the image to our registry. But then we have the problem of performing the deployment. We could just shut down everything and move to the new application, or we can do something cooler: we are going to use the Kubernetes deployment feature, which allows you to do a blue-green style deployment. It will start a rolling update of your application.
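The rolling update just mentioned can be sketched in a few lines of Python: bring up one pod with the new version, then retire one old pod, and repeat. The versions, readiness flags and replica count below are invented for illustration.

```python
import copy

# Sketch of a rolling update: surge by one new pod, retire one old pod,
# repeat until only the new version remains. Pods are (version, ready) pairs.
def rolling_update(pods, new_version, replicas=3):
    """Return the pod versions after each step of the update."""
    pods = copy.copy(pods)
    steps = []
    while any(v != new_version for v, _ in pods):
        pods.append((new_version, True))          # start a new pod first
        old = next(i for i, (v, _) in enumerate(pods) if v != new_version)
        del pods[old]                             # then retire one old pod
        # the invariant behind "no downtime": never fewer ready pods
        # than the desired replica count
        assert sum(1 for _, ready in pods if ready) >= replicas
        steps.append([v for v, _ in pods])
    return steps

start = [("v1", True)] * 3
steps = rolling_update(start, "v2")
print(steps)
```

Each intermediate step still serves at least the desired number of ready pods, which is exactly why the demo shows no downtime while old and new versions briefly coexist.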
So in production now, we are going to deploy a new pod running the new version of the image. As soon as the new pod is up and running and behaving correctly, Kubernetes will remove one of the pods running the old application, and it will keep adding a new pod running the new application and then removing an old pod running the old application, until you're only running instances of the new application. In the meantime, production is up and running. If you noticed, we just did a reload and we happened to be redirected to one of the pods running the new application, so we saw the nice logo there. This is really nice, because there is no downtime, and if something goes wrong you can always roll back; everything is built into Kubernetes.

Yeah, exactly. This is why we are thinking that this is really a perfect tool for developers and also for use in production environments; it has many good features. Right now you can see that when we refresh our browser, some of the requests are still going to the old version of our application and some others go to the new version. This is because we picked the round-robin algorithm for our load balancer. And what is happening right now? You can see that one instance is terminating.
This is an instance of the pod which is running the old code, so Kubernetes is getting rid of it, and now all of production is running the new version of the code. And when you decide, for example, that you would like to roll back because there are still some bugs, you can still roll back to the previous version of your application; it will also be fully automated and almost invisible to the end user. One more thing: we just made this rolling update, and you can see that the backend data, the state, is always the same. It means that the backend was preserved, always untouched; all the backend data was saved and kept in a consistent state.

Okay, so I would say that the key points of this talk are that Kubernetes can really help you with your development, but setting up Kubernetes is not so easy, and you can really leverage Magnum there to simplify everything. At the same time, with Magnum you can satisfy the requests of the different people inside your organization: if multiple tenants want to have their own Kubernetes cluster, this is really simple to do with OpenStack and with Magnum. And in the future there is going to be really cool integration between the two of them. As I said in the beginning, Kubernetes can automatically scale out when you have certain traffic spikes, but what happens when you saturate your Kubernetes cluster and there's no more room?
There is going to be an integration with OpenStack: once Kubernetes is maxed out, you can scale out the underlying platform. You can add new worker nodes that will join the cluster, and then Kubernetes can start to allocate new pods on top of them, and when everything is over you can just scale down to consume fewer resources.

Yeah, and I believe that we at least tried to show this combination: on one side we have the framework for managing our infrastructure, and on the other side we have the perfect platform for orchestrating our applications, and these two technologies, these two tools together, are just creating a perfect fit for the end users. And yeah, that's all, so we managed to finish in time; we have room for some questions. There is a microphone.

Thank you very much for that. What release of Magnum are you using? It looks like you're running Mitaka there.

Yeah, that's right. The demo was prepared on the Mitaka version, but after the summit we are switching to Newton, so no worries.

Just a quick remark here: there are some key differences in Magnum in the Newton release. Bays are not called bays anymore, they're just called clusters. So everywhere that you saw "bay" in this presentation, as of the Newton release it's just called a cluster.

Yeah, we know about this, and we were also a little bit concerned about how to name these two things in our presentation. We are now calling the bay a Kubernetes cluster, a cluster, so thank you for the comment.

A question about Magnum: can you span workers across regions?
To answer that, and I'm not a Magnum expert: Kubernetes has a project that was initially called Ubernetes, which is about federated deployments of Kubernetes. This is something which is still in progress inside Kubernetes upstream; I wouldn't define it as production ready. So I think Magnum has to wait a bit. Maybe they can start to experiment with it, but I wouldn't consider it production ready.

I thought Magnum could solve that problem with load balancing and things in the OpenStack environment, perhaps.

Yeah, when I talk about a federated deployment, I mean having something like a Kubernetes cluster with its own master in one region, and then another Kubernetes cluster with its own master in another region. The load balancer you saw there is used to expose services which are running inside of it. When you want a federated Kubernetes cluster, you want to put different workers in touch with each other, to address scheduling across different workers.

Okay, there was also another question. Would you like to come to the microphone? It's for the recording. Yeah, thanks.

Thank you, that was really interesting. I'm wondering how you're dealing with security. For example, can I enforce security groups or firewalls at the container level, something like that?

So Kubernetes has gained a set of network security policies, so you can define, for example, that certain containers are not allowed to talk to other containers. This is a new feature of Kubernetes which is not supported by all the different network drivers; the Calico network driver, for example, can do that. So far I don't know if there is an integration between OpenStack and Kubernetes for this, but I would go in that direction. Thank you. And from the OpenStack side, of course, when we are deploying the Kubernetes cluster, there are security groups created which are assigned to the Kubernetes master nodes and the Kubernetes minions.

Do you also handle upgrading Kubernetes itself?
When a new version of Kubernetes is released, and you have already deployed one cluster and everything's working great, how do you handle that? Okay. But maybe could you give me a rough workflow of how that might look?

Okay, it all depends. Starting from version 1.4 it's possible to do a self-upgrade of Kubernetes, but it depends on how you actually deploy Kubernetes itself. There is work in progress upstream to do a Kubernetes deployment through Kubernetes itself. Inception, yeah, exactly. It's work in progress, but something is already possible right now.

But will Magnum, for example, help with that? I mean, essentially, I understand that after a certain release Kubernetes is planning to support in-place upgrades, but is Magnum going to provide that? Because for an end user, they're just using Magnum, they're never directly installing Kubernetes, so for them the installation and setup of Kubernetes is completely opaque; they don't know about it. All right. Thank you.

Okay, so the last question is related to the possibility of linking resources hosted on the OpenStack side, on Nova instances, with Kubernetes and containers. Do you need to rely on floating IPs to connect those resources together? What is the deployment pattern to deal with that?

I didn't get which resources you mean. You have resources, for instance a database running in a virtual machine, and a typical application running in a container. How do you deal with that connection?
So during the deployment, okay, you can specify the network configuration, and I don't think there's anything special you have to do, because from the container you can access other machines which are inside the OpenStack installation. If you have a virtual machine deployed on top of OpenStack with its own IP address and a database running inside of it, you can access that from the containers. The containers also have a separate network for their own special purposes, but that doesn't matter in this case. So there's no need to have an elastic IP.

I had a quick question: is Magnum restricted to bringing up Kubernetes clusters only in VMs today, is that right? Or can I bring up Kube masters and Kube minions on actual bare metal servers instead of bringing them up in VMs? Can you do bare metal deployments, that's the question. Yes, you can do bare metal deployments, but we have to improve it a little bit.

Okay, and the second part of the question is: if I'm bringing up minions, is there a way I can specify different plugins? Like, if I want to use a specific CNI plugin, can I configure that now?

Okay, in our case we decided to use flannel, so we bundled everything to work with flannel. The question is whether you can use different types of networking and choose them from Magnum, from the bay. Yeah, right now we are supporting, or rather we implemented, flannel as the network driver, but we are open to our users' voices. I'm looking at him; I guess you are a Magnum developer. I think the answer to the question is yes, right? So it's pluggable. Okay. Yeah, so thank you. Thank you very much.