Good morning, everyone, and thank you for joining us this early — it's good to see at least some people after last night's party. My name is Serguei Bezverkhi; I'm part of the services organization at Cisco Systems, and I'm a core reviewer and developer on the Kolla-Kubernetes project.

I'm Pete. I work for Accenture, and I've been involved in running OpenStack on Kubernetes for some time — I built one of the earliest implementations that worked, sometime before Barcelona, and got involved in this project after that. I'm a core on Kolla-Kubernetes, and I also work on OpenStack-Helm and the new LOCI image project.

My name is Steve Wilkerson, I'm with AT&T. I got involved in the Kolla project a little after Barcelona, primarily with Kolla-Kubernetes; I'm a core reviewer on the project and I also work on OpenStack-Helm, so I'm getting some broad experience with OpenStack managed and orchestrated by Kubernetes.

I think before we get into what Kolla-Kubernetes is and how it works, we should probably talk a little about what OpenStack is — and, more importantly in some ways, what it is not. People often describe OpenStack as a cloud operating system, but in many ways it's more accurate to describe it as a cloud ecosystem of complementary projects that together provide all the services you need to run a multi-tenant environment, offering resources on demand to your users and applications. What it has traditionally been very bad at is being easy to use: easy to deploy, to scale, to provide high availability for your services at all levels of the stack, from the APIs to the messaging buses. And it has been fundamentally very hard to upgrade and manage once you have it in production. So people have looked at multiple orchestration platforms and solutions, from Ansible and Puppet
and other tools. But over the last couple of years Kubernetes has really come to the forefront as a way of managing processes running across multiple hosts. It provides good declarative interfaces for automating the deployment, scaling, and operation of containers across clusters of hosts, whether they're on bare metal or in virtual machines, and it lets you deploy services quickly and easily, scale them on the fly, roll out new features on demand, and roll them back when you need to.

This complements the feature set of OpenStack, because Kubernetes is not currently very good at multi-tenancy, doesn't provide the sort of large surrounding ecosystem of block storage and related services that you want, can't really run virtual machines very well, and isn't capable of supporting the complex networking topologies a lot of people are looking for. That said, it's worth bearing in mind that Kubernetes is definitely not developing slowly: a lot of these things are starting to change with projects like Virtlet bringing in virtual machine support, and improved CNI backends enabling much more complex networking than was originally possible.

So with these two orchestration platforms, rather than seeing them as being in competition with each other, I think you can bring them together and start treating OpenStack as just another application that runs on top of your infrastructure. This provides several benefits: Kubernetes gives you an abstraction layer for your hardware, which means you can remove all the hard edges and start treating your infrastructure as just a pool of resources you can push things out to.

Kolla, when it originally started about three years ago, actually tried targeting Kubernetes back then as its original orchestration
mechanism. However, the feature set required for that wasn't in place yet, so the project moved to an Ansible-based deployment. After some time the necessary features started to land in Kubernetes, and development started again. It was around Barcelona last year that we had the first really viable deployment of OpenStack using Kolla-Kubernetes, which relied on a customized Jinja2 templating engine. At Barcelona it was actually some people from SAP and AT&T who were really pushing the idea of using Helm as the templating engine, as Kubernetes-native tooling had progressed to the stage required to manage these sorts of complex applications, and so the change was made to move Kolla-Kubernetes over to Helm.

This really demonstrates one of the best things about running your infrastructure on Kubernetes: the resultant manifest you deploy, which describes how your application runs, is the important part. While you can have many different opinions on how to build these templates — what should go into them and how they're structured — it means you can entirely change that process, and make quite radical changes under the hood to how you manage your applications, without changing how the application runs on your infrastructure, all while continuing to support customers and active workloads.

In this cycle the things we've added to Kolla-Kubernetes include Fernet token rotation and management for Keystone, as part of a security drive; Ironic for bare metal support, managing the full life cycle of machines — from loading them into your data center, to getting applications up and running on them, to eventually decommissioning them and taking them out of production; and Prometheus for advanced monitoring capabilities. Cool.
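To make the Fernet rotation idea concrete: a periodic key rotation can be expressed natively in Kubernetes as a scheduled job. The following is a minimal, hypothetical sketch — the image name, script path, schedule, and secret name are all illustrative assumptions, not the project's actual manifest.

```yaml
# Hypothetical sketch: rotating Keystone Fernet keys on a schedule.
# Image, script, and secret names are assumptions for illustration only.
apiVersion: batch/v1beta1        # use whichever CronJob API group your cluster provides
kind: CronJob
metadata:
  name: keystone-fernet-rotate
spec:
  schedule: "0 */12 * * *"       # rotate twice a day
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
            - name: fernet-rotate
              image: kolla/centos-binary-keystone-fernet:latest
              command: ["/usr/local/bin/fernet-rotate.sh"]
              volumeMounts:
                - name: keystone-fernet-keys
                  mountPath: /etc/keystone/fernet-keys
          volumes:
            - name: keystone-fernet-keys
              secret:
                secretName: keystone-fernet-keys
```

The point of expressing rotation this way is that the orchestrator, not an external cron host, owns the schedule — it survives node failures and is versioned alongside the rest of the deployment.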
So I'm going to step back from the OpenStack part a little and actually talk about Helm. I know it's been talked about a lot over the last few days, and I've had people come up to me and ask if I can tell them a little more about it. I got involved with Kolla-Kubernetes first, then with OpenStack-Helm, and I'm also involved in the Helm project itself. I come from a software engineering background, so infrastructure was really new to me when I joined the organization I'm with now, but I really love hacking away at code, and being able to contribute to Helm has been great: it's a really cool application serving a very real need, and the community that supports it and brought it to us to consume is very open and welcoming.

At the end of the day, Helm is really just the Kubernetes package manager, but it's built to manage the entire life cycle of an application, which is really cool. Beyond just packaging up — in our case OpenStack, or for someone else maybe a web app or some other Kubernetes workload — into a package, Helm handles installing it, upgrading it, managing your releases, and rolling a release back. So it's not just fire and forget; Helm actually helps you through the entire life cycle, which, if you're using it to manage something like OpenStack, is pretty fantastic. I personally don't want to have to develop a lot of custom tooling for this, and when there's a first-class citizen for Kubernetes such as Helm, I think it's a great choice.
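At its simplest, the "package" here is just a directory of metadata plus templates. A minimal, hypothetical sketch of the two core files (the names and values below are illustrative, not an actual Kolla-Kubernetes chart):

```yaml
# Chart.yaml — the package metadata Helm uses to identify a release.
# Everything here is an illustrative sketch, not a real project chart.
name: keystone
version: 0.1.0
description: A hypothetical chart wrapping the Keystone API service
---
# values.yaml — defaults an operator can override at install time,
# e.g. `helm install ./keystone --set replicas=3`.
replicas: 1
image: kolla/centos-binary-keystone:latest
```

Once installed, the release name is the handle that `helm upgrade` and `helm rollback` operate on, which is where the whole-life-cycle management described above comes from.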
Another thing Helm really helps us with is improving our CI/CD workflows. With the life-cycle hooks Helm has, it's not only individuals who can use Helm to install our application — OpenStack in this case — we can also have software or other machines deploy and manage it, and verify that the changes and configurations we're pushing are actually valid, so we don't take something broken and try to install it. It's great having those hooks to help maintain that workflow, because anyone deploying OpenStack is of course concerned about their CI/CD pipeline: you want to make sure that what you release and make available to your customers or your employees is actually functional.

Helm also has the ability to support plugins. If you want to add something custom, instead of having to push a feature into Helm itself, you can develop a Helm plugin for it, and there's a pretty decent-sized list of plugins available now.
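A plugin is declared with a small `plugin.yaml` file placed in Helm's plugin directory. Here's a hedged sketch of the general shape, using the "nuke" example discussed below — the field values and the `nuke.sh` script are assumptions for illustration, not the real plugin's source:

```yaml
# plugin.yaml — a hypothetical Helm plugin definition.
# Helm runs `command` when you invoke `helm nuke`; the script itself
# (nuke.sh, assumed here) would loop over `helm list` and delete each release.
name: "nuke"
version: "0.1.0"
usage: "helm nuke"
description: "Delete every Helm release in one shot"
command: "$HELM_PLUGIN_DIR/nuke.sh"
```

The nice property is that plugins are just external executables: nothing custom has to be merged into Helm itself for a team to extend its CLI.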
Some examples of those are fairly simple. One of them, called Helm Nuke, is a basic plugin that goes through and deletes every single Helm release you have running — a release being an actual installation of the application you're trying to run. So instead of manually running `helm delete --purge` for each release name, you can just run Helm Nuke and it takes care of all of that for you. In the CI/CD space, if you want a quick and dirty way to get rid of all your releases without going through them manually — and I know that's probably pretty bad from a CI/CD perspective — you can do that. Another cool one is Helm Diff: it shows you the differences between two releases without actually installing them, which is good — someone in the CI/CD space, or a chart developer, can go through and see what the differences would be.

Helm is, at the end of the day, also an advanced templating engine. It's not just about templating out values; you can use what are called partials — partial templates that generate values for the Helm charts you're developing. They're kind of like functions in a programming language, I think that's a fair way to put it. For OpenStack, for example, you can write a partial template for handling Keystone endpoints, the service catalog, and other things, which is pretty neat. That's something we've really managed to take advantage of across the OpenStack-on-Kubernetes projects using Helm, where we've split different parts of the deployment into their own charts, with associated functions that let us template out these values. When you look at OpenStack projects generally, they all follow a very similar set of
characteristics when you're installing them: you want to create a database, perform a schema upgrade on that database, create Keystone endpoints, a service, and an associated user. All of these can be split out into their own very simple functions and jobs, which is why, once you've got the initial framework in place, instantiating a service via Helm becomes very easy to expand out to other services and manage.

(Answering an audience question:) Right, this would be a Kubernetes manifest. Full disclosure, I haven't worked with Puppet, so I probably couldn't answer that — you might be able to.

It also eliminates ambiguity around numerical values in the Kubernetes manifests, which is pretty helpful, because when you're converting YAML to JSON and then marshaling or unmarshaling it, that can present issues — in the past there have been issues reported on GitHub for Kubernetes where this was causing problems for deploying applications. And, as I mentioned, Helm with its life-cycle hooks supports rolling back and upgrading a release — in our case an OpenStack service, or another application we want to use.

So, the current state of Kolla-Kubernetes: when I joined, as Pete mentioned, we were still using Jinja2, so between Barcelona and now it's been about seven months, a little shy. Right now there are charts in Kolla-Kubernetes for the most common OpenStack components — Keystone, Cinder, Nova, Neutron, Glance, Horizon — and, as mentioned, we have a chart for Ironic now that can actually stand up a new compute host. We also have the various infrastructure components: RabbitMQ, Memcached, MariaDB. These charts were designed with flexibility in mind: instead of creating a single chart for an entire OpenStack service, they were designed to encapsulate these
smaller components of the services, to give operators the flexibility they need. It's also aiming to be a single solution for containers, virtual machines, and bare metal.

And then, what's next — this is the exciting part for me. If you look at where, for example, Kolla-Ansible is versus Kolla-Kubernetes today, there are fifty-plus roles in Kolla-Ansible — I was trying to count them the other day, but there were a lot and I was getting distracted by the background noise; I know I got to fifty — and right now we have fourteen services in Kolla-Kubernetes. So if we're looking for parity between the two, there's room for additional contributions, which is fantastic, because as we look to identify new contributors for the Kolla-Kubernetes project, there's work to do. And regardless of which solution you're looking at, it's very clear that OpenStack running with Kubernetes is a hot topic — I think it was mentioned there are forty-eight sessions at this summit dealing with Kubernetes and OpenStack working together. So it's important, not only for Kolla-Kubernetes but for the OpenStack community in general, to get people contributing to these projects, so we can identify the common pain points and gotchas that might arise regardless of which architectural path you go down.

Also, in my experience the Kolla-Kubernetes project was very open and welcoming when I first got involved after Barcelona. I volunteered for some work while I was there; I actually think I wrote my first blueprint for Kolla-Kubernetes in the airport when I was stuck in Barcelona because of the flight delays. That's also how I met Pete and Serguei and some of the other individuals in the community, and it was the first time I had contributed to an OpenStack project where I really felt excited and welcomed by the people there.

I think something else is worth bearing in
mind: while the evolution of the actual deployment in Kolla-Kubernetes is pretty advanced, there's a lot of work going on at the moment around configuration management and the mechanism by which configuration files are produced. Until recently we were sharing code with Kolla-Ansible, using it to generate configuration files that were then converted into ConfigMaps. That code has now been brought into the Kolla-Kubernetes repo so we can start customizing it for our needs, and we're actually having conversations during this summit about potential future ways to both advance and simplify this — potentially by collaborating with other projects such as OpenStack-Helm to produce a standard mechanism for generating configuration files for OpenStack components.

Right, just to expand on this topic: the goal is to make configuration generation as native to the Kubernetes environment as possible. One of the goals is to eliminate the dependency on Ansible in this area, and in that case any other tools — Puppet, Chef, or anything else — could use the same approach and the same charts to generate configuration and then consume it in their own workflow.

(A comment from Kevin, one of the major developers on the project.) Sorry, I don't know if everyone could hear that, but what Kevin was saying is that one of the major things Kolla-Kubernetes tries to do, with the very discrete architecture it applies to everything, is to not limit operators in what they want to do — whether configuration is generated by Kolla-Kubernetes or brought in externally, it tries to support all of those options.

All right. There was a little bit of confusion about the Kolla-Kubernetes architecture — the way we package and map between microservices, services, and so on — and I think it's worth spending a couple of minutes to clarify that, and maybe to
explain to the wider audience why we went in that direction. Right after Barcelona we had lots of discussions on the IRC channel, and the question was: to microservice or not to microservice. We found a lot of compelling reasons to go in the microservice direction, because it gives you the greatest level of flexibility, and it also maps well onto the existing Kolla images. The Kolla images are thin images — at the end you have a single process running — so in our case a microservice maps to an existing Kolla image. For example, the Neutron Open vSwitch agent is a microservice, even though it's part of something bigger, and tasks like creating the database or creating the endpoints are all treated as microservices too.

This gives you the greatest flexibility in combining services. If you deploy vanilla OpenStack, the list of tasks is fairly standard, but probably not too many operators deploy vanilla OpenStack — there are always corner cases, and what's a corner case for some people is, for others, exactly how it's supposed to work; they cannot function otherwise. So in Kolla-Kubernetes we try to address what operators want rather than how developers would see it work, and I think we achieve that by going in the microservice direction.

One case we use a lot when describing why this architecture is viable and right: sometimes, to optimize their network, operators need to run RabbitMQ as a separate instance just for Neutron, to optimize the communication between the agents. In our architecture that's fairly simple — you don't have to change the chart, you don't have to change any code; everything is done through configuration. The configuration tells the deployment mechanism that, yes, Neutron will have and use its own RabbitMQ, and it's going to talk to that
instance, while everything else uses a completely separate RabbitMQ instance.

When we developed the microservice charts there were lots and lots of them — even though at the gate we run in microservices mode, it was huge, and people were complaining that it's too complex. I agree: for the gate it's fine, but in real life you probably don't want to deploy OpenStack using raw microservices. So we added the next level on top of the microservice charts — service charts — where we consolidate the required microservices into a single package, and then you can deploy just Neutron, or just Cinder, or just Nova control or Nova compute.

One key detail: in Kubernetes, each microservice is translated into a pod, and if you tell Kubernetes to deploy, say, five pods, it will try to deploy them all at the same time. That causes a problem for OpenStack, because you cannot deploy the Neutron server before you have the database created, or before Keystone has its endpoints and service account created. So within the service chart we use a piece of software called kubernetes-entrypoint, which lets us define rules and dependencies — for example, the Neutron server will not start unless MariaDB is up, Keystone is up, all the endpoints are created, and the user has been created. Only after all those checks complete does the Neutron server start, and then it's basically ready to function.

We ended up with probably ten service charts, and again some people complained: that's still too complex, we don't want to deploy ten charts, we want a single command to deploy the whole of OpenStack. That's where the compute-kit chart was developed. The compute-kit chart uses the same kubernetes-entrypoint mechanism — choreography, they call it — within the service charts; you start the compute-kit chart and it deploys what it has inside: service charts
for all the components, and inside the service charts you have the microservices — so it's kind of a Russian-doll scenario. That's how we answered the request for a single command to deploy OpenStack.

Another key component is the cloud.yaml. When you want to deploy OpenStack, you need to provide certain configuration to Helm, because we cannot hard-code the configuration into the chart; something needs to feed your actual configuration into the chart before it gets started, and we use the cloud.yaml for that. The cloud.yaml lets you describe your specific environment — the IP addresses, interface names, everything. Personally I really like the cloud.yaml because it gives me reproducible results: when I start a chart with the same cloud.yaml, 99.99% of the time I'll get exactly the same result, which makes things very portable. All you need is to clone the Kolla-Kubernetes repo and have your customized cloud.yaml, and you're good to go to deploy Kolla-Kubernetes.

Right, now the demo. As Steve mentioned, we see Kolla-Kubernetes as a single point for bare metal, VMs, and containers: Kubernetes gives us the native way to run containers, and OpenStack deals with the VMs and bare metal. During this demo I'd like to show that you use the same tool to expand your existing cluster — say you're so successful that you need to expand your cluster on the fly; it can be done with the same tooling you're already running on your cluster with Kolla-Kubernetes. I have two versions: the long version, the live demo, takes about fourteen minutes, most of which is the bare metal node going through the BIOS boot-up procedure; and I have a video as a backup, condensed to about five minutes. I think we're okay for the live demo — it would put us right at 9:40. Excellent. All right, so let me switch over. So here I
have my five-node cluster. It was so successful that I really want to expand it with the brand-new UCS box I just bought.

I think what's worth highlighting here is how using Kubernetes as the deployment and management mechanism for OpenStack is such a great option, because it allows you to go from a single, low-availability node and expand out to however large or small you need your resulting infrastructure to be — it becomes very adaptable.

Right. As you can see, I'm already running — sorry, already running Kolla-Kubernetes on top of that Kubernetes cluster, and I have a bunch of compute nodes. These are the compute nodes used to start VMs, and at ID number 10 I have an Ironic compute node, which I'll be using to deploy a bare metal instance. Okay, here I have a CIMC console to my bare metal server. So let me start: this is just a regular OpenStack Ironic command, the one pretty much everybody uses to instantiate an Ironic instance — very simple. All right, it started. One thing I've noticed working with OpenStack and Ironic is that it takes time before Ironic talks to Nova — there's a little bit of delay — so, for the audience, don't get too excited that the live demo is going to fail: no, it's just an expected delay. The console is still partway through, so let's see what's happening with the node. As you can see, it's in the deploying state — that's when the Ironic compute node talks to Nova and they do their internal magic. Okay, so now Ironic has brought up the bare metal server, and unfortunately this is the longest part: it's going to go through the BIOS checks and everything before it actually starts doing something interesting and useful, and it's going to do it twice. That's the normal workflow; there's no way around it, unfortunately — which is why I came up with the idea of the video, but a live demo is always better. Now let's check the status of the Nova
instance. All right, it's in the spawning state. Next, after it finishes its magic — BIOS magic, I call it — it will communicate with the TFTP server and grab the image that was placed in Glance. That image has Kubernetes binaries already baked in, and as part of the command I used to spawn the bare metal instance I'm passing a script that will be executed by the cloud-init infrastructure. So when the node comes up with the image, it will automatically join the existing cluster — it becomes a real, native Kubernetes node — and after that completes it will also become a new Nova compute node, which you can then use to run other virtual machines or something else.

So at this point — yeah, now it's going to get the IP address and the image, and it will contact the Ironic conductor for the image. An interesting thing we discovered is that it uses the iSCSI interface to mount the image and do the copy: it takes the disk space available on the bare metal node and basically does a dd from the Glance image to that disk, and that's pretty much it. So there are going to be a few seconds here while it copies the image; as soon as it's done there will be another power cycle, the node should change state in Ironic, and then the nova list should show that the bare metal instance is active. This is the time it takes to copy the image — I don't recall the exact size, but it's basically a CentOS-based image with the binaries. Okay — all right, it completed the copy, and now a final reboot before the real operating system comes up on this node. Let's see what's going on here. From the Ironic perspective the deployment is completed: the power state is on and the provisioning state is active. Now let's see what Nova thinks about it — okay, Nova also sees that the bare metal instance is active
and in the running state. Now we can switch — I'd like to demonstrate some of the dynamics of adding this node into the Kubernetes cluster, and for that I use the `watch` command, highly recommended by one of our team members. Hopefully I typed fast enough — okay, it's not there yet. So I'm switching back to the CIMC console. Yeah, plenty of time; as I said, servers don't come up as fast as we'd like. Exactly — but you have more CPU, more horsepower, behind that one. One more time — I'm positive it's a marketing trick; I guess my co-workers at Cisco decided, let's show the Cisco logo twice.

All right, so now it's going to bring up the operating system, which is Linux, and then it will go through adding this node to the cluster. Then there's a not-very-lengthy but noticeably visible step where the Kubernetes state is synchronized: the existing API server pushes a bunch of, let's say, virtual file systems for the already-running Kubernetes pods, because when the node joins, things like the kubelet and the proxy start running on the newly brought-up compute node. Around this time we should see a new — yeah, all right, this is the new compute node; it has appeared in the `kubectl get nodes` output. As you can see it's in the NotReady state, and that's exactly what it's doing now: synchronizing state between the existing cluster and the new node. Normally it takes about thirty seconds to get into the Ready state; this time for some reason it's taking longer, but — okay, all right, we're in the Ready state now.

Now we can get out of here and use another command. In Kolla-Kubernetes we make heavy use of Kubernetes labeling, which lets you direct or steer certain pods to run, or not to run, on a specific node — I'll talk more about that in just a second. Get pods — oh no, sorry, wrong flag. Okay, so as you can see, we have several pods which are in the init state
and the reason is that all these microservices are DaemonSets. What happens is that when we label the node as a Kolla compute node, Kubernetes knows which set of pods needs to be automatically pushed in that direction. I think that's the beauty of Kubernetes: without any manual operation, you can deploy a certain set of services onto your new compute node. That gives great flexibility, and I think it's a great advantage Kubernetes brings to the OpenStack environment. Okay, this process might take somewhere between three and four minutes, depending on how busy the internet link is, because all these images get pulled from Docker Hub — hopefully not too many people are using the internet at this hour.

I think it's also worth mentioning — because this is a great example of how Kubernetes has evolved over the last year or so — things like the init containers that are running and being processed at the moment. We have one here just checking the environment, seeing whether there's communication with the database, the message bus, and other things, and then we have others that prepare certain aspects of the host's file system before letting the primary pod launch. That's both cleaner and also allows us to do very basic ordering of how a service should come up, at a micro level. Right — and before init containers, the boot-up time for a Kubernetes cluster was significantly longer, because the pods would crash every time, and each time a pod crashes it restarts with an exponential back-off delay, so if it crashes too many times you might have to wait five or ten minutes before it gets retried.

Okay, all right, that should be pretty much done now. Ironic node-list — okay, so it's active. Oh, oops — yeah, this mixture of OpenStack and Kubernetes makes you type weird commands; I was thinking of cinder list. Anyway,
as you can see, now we have a new node here, and we have another compute resource which is in the up state — basically it's ready to start serving, to start being used to run VMs. All right, that's pretty much it — thank you.

Oh, okay, what do we have left? So, obviously, if you have any questions now, feel free. There's also a Kolla onboarding session running downstairs — it'd be great if you wanted to go to that after this — which will tell you how you can start getting involved in this project. As we touched on earlier, there are obviously lots of things going on, with a big emphasis currently on how we configure these services in a viable fashion moving forward, as we move away from Ansible-based configuration by default and enable people to do whatever weird and wonderful things they want with their own clouds. Okay, so that's room 105 downstairs.

(Answering audience questions:) No, it's basically a pre-built image — I was using diskimage-builder, the tool the Ironic people usually suggest, with slight modifications to add the Kubernetes RPMs, because we need those binaries to join the cluster. Yes — Kolla-Kubernetes doesn't install the Kubernetes cluster itself, so you need to have a cluster already; exactly, yes.

So, both Steve and I work on both projects, and this is what I was touching on earlier: the resultant manifests output by Kolla-Kubernetes and OpenStack-Helm are in many ways very similar. We do things like Fernet token rotation quite differently — which is something Intel has actually been working on with us very recently on OpenStack-Helm — and we manage configuration very differently at the moment. But I'd say the biggest difference between the projects is that Kolla-Kubernetes provides almost what I think of as cloud Lego, in the sense that it provides very low-level
components that can be built up into a larger structure using Helm's dependency modeling, which works very well for some deployers. For other people, we're looking to produce a manifest that describes a service in its entirety, which for many use cases makes it a bit easier to manage, orchestrate, and upgrade, because you can handle a service as a whole rather than as discrete steps. However, the work Kolla-Kubernetes has been doing recently on producing service-level charts provides a rather different approach to achieving, at the end of the day, a very similar end result: OpenStack on Kubernetes. So I don't really see a conflict between them, because we're coming from such different angles with both projects in respect to how these charts are manufactured and built.

Okay, so how does Kolla compare to LOCI? Well, that's a tricky question, and in many ways it touches back on the previous point. Kolla views the image as the source of truth for how the service should be run, so it has a rather prescriptive entrypoint script that determines where configuration files go and what entry command is run. With LOCI, we view the image as just a bucket, a repository for code, and leave it up to the orchestrator to decide how that's managed — how the container itself starts up and where configuration lives within it. So we try to view it as a much lighter-weight project than, say, Kolla. What's quite interesting is that both of these questions are related in some ways: with LOCI — and I think Sam, the PTL of LOCI, would say it's fair to put it this way — we provide the building blocks for services, whereas with Kolla-Kubernetes the chart is viewed as the building block.

I guess we're done — yeah. Oh, thanks. Thank you.