Okay, we will run another talk. The title of this talk is "OpenStack with Kubernetes: Better Together", presented by Pete Birley from Port.Direct, a consultancy specializing in OpenStack and container technologies.

Right. Unfortunately I can't use my laptop at the moment, which is a bit tricky, but we'll see how we get on. As I said, my name is Pete Birley. I do quite a lot of work combining OpenStack and Kubernetes. I'm currently a core developer on the Kolla-Kubernetes project, and I also contribute to several other efforts combining OpenStack and Kubernetes, including OpenStack-Helm and some other things in the Kubernetes space.

To put this in a bit of context: I initially got involved in all of this while working at a university, doing PhD research into solar thermal systems. As part of that I was running a relatively small OpenStack cluster for CFD work, and I found more and more that configuration management was a problem. Like most PhD students, I got very distracted from what I was doing and started playing with the toys around me; I got into Docker and began containerizing our infrastructure there, and that work has expanded and grown since.

I imagine everyone here is fairly familiar with OpenStack, so it's worth going only briefly over where it is at the moment. It's very much concentrated on virtual machines, now with some bare metal provisioning and other services as well. It aims to be hypervisor agnostic, although in my experience most people test against KVM or Xen. It supports a huge number of networking models, which I'll come on to in a bit. State is stored primarily in a database backend, with complex orchestration via Heat, Mistral, and services like that. But the main thing is that it was built for multi-tenancy from the ground up, and it is supremely good in that realm.

Kubernetes, on the other hand, is oriented solely towards containers. Again, it tries to be agnostic about how it runs them, although Docker is currently dominating that scene, with rkt and other OCI-compliant runtimes starting to appear. Typically it works on a single flat layer 2 domain, with state storage and messaging done via etcd. And this brings us to the main difference between Kubernetes and OpenStack.
Kubernetes has a very narrow but laser-sharp focus on what it does. We get very simple yet powerful abstractions for creating replication controllers, services (essentially VIPs), and abstractions for things like persistent storage. The main issue with systems like Kubernetes is that they tend to be very good at running applications but not very good at multi-tenancy, so clusters tend to be occupied by just a single tenant at the moment.

We can summarize this quite effectively by looking at the differences between the two. There's a huge amount of overlap, but the overall takeaway I get is that OpenStack is very broad in its base, yet generally very hard to install and maintain. The barrier to entry is high, and there is a huge amount of ongoing engagement needed for day-two activities. That is something Kubernetes is still improving at itself, but it is incredibly good at managing applications with exactly this level of complexity.

So traditionally people have looked at OpenStack as a very good foundation layer for launching applications like Kubernetes, which has mostly been launched on top of quite heavyweight cloud infrastructure such as OpenStack, EC2, or another public cloud. But it became apparent very early in the days of Kubernetes that there was potential for using it to orchestrate OpenStack services. The initial feature set did not really allow this, however: although Kubernetes was capable of running the API layers and the rest of the control plane quite effectively, we didn't yet have features like host networking and host PID and IPC namespaces, which we needed in order to enable things like Lustre, hypervisors, and other networking components. But around six or seven months ago we found that we actually had all of the features in place to start making the transition to running both the control and data planes fully within Kubernetes.
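As a rough illustration of what those host-namespace features look like in a pod spec (this is a minimal, hypothetical example, not something from the actual deployment):

```bash
# Hypothetical pod using the host namespaces mentioned above; the image and
# command are placeholders, not a real control plane component.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: host-namespace-demo
spec:
  hostNetwork: true      # share the host's network namespace
  hostPID: true          # share the host's PID namespace
  hostIPC: true          # share the host's IPC namespace
  containers:
  - name: shell
    image: centos:7
    command: ["sleep", "infinity"]
    securityContext:
      privileged: true   # components like hypervisors typically also need this
EOF
```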
So at this stage we needed to ask where to begin on this journey of running these two systems together, because there are obviously a huge number of moving parts in both: the hosts we need to run things on; the actual container images we're going to orchestrate with our platform; the networking to connect both hosts and containers together; the security considerations we need to pay attention to; and the orchestration layer that plugs all of these things together. It could be argued that I'm about to do this the wrong way around, because I'm going to build it very much in the order of that list. But to give a slight preview of where this leads: essentially, you want to get to the stage where, to deploy a basic OpenStack cloud, you can just set up a master node, edit some basic configuration for that system, write out a set of configuration for the hosts from that, then start the Kubernetes cluster and have it build itself on from that point. This is something we've achieved in prototypes, in a way I definitely wouldn't recommend deploying in production (we're quite a long way from that), but there are definitely signs of promise in the approach we've been taking.

I'm going to start with the host operating system I've personally been using for a lot of this development work. While looking around for a host to base this on, we initially used very traditional operating systems like CentOS and Ubuntu, and we also did some experimentation with CoreOS, but I kept coming back to Atomic Host and its various flavors. Its main strength, from my point of view, is its adaptability: at stages we needed to run custom kernels and build against them, and Atomic gave us the flexibility to do that. Essentially it gives you many of the same advantages as Docker or another container system, a bit like Git for your operating system: we can easily control what goes into the artifacts we produce, upgrade systems very simply, distribute those upgrades around a cluster in a very efficient manner, and specify everything pretty well. The only real criticism I have of Atomic is that you have to write the configuration file that produces the image in JSON, which I find a little unpleasant; if it were YAML I'd be totally happy.

For this we devised a slightly convoluted build system that lets us build Atomic hosts entirely through the docker build command, using four Dockerfiles that ultimately create the images we use. The first contains a pristine rpm-ostree repository and all of the assets needed to build the rest. The second runs on a host with the Docker socket open and listening, unsecured, on the docker0 interface for building; it launches the first container in order to compose the final rpm-ostree that we can then deploy out to different systems. A third Dockerfile takes this and runs image-factory to produce an ISO that it can serve out as well. The advantage of doing it this way was that the end images produced at each stage were quite small and easy to produce; roughly, the flow looks like the sketch below.
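At a very high level, and with entirely hypothetical directory names and tags, the chain of builds looks something like this:

```bash
# Sketch of the four-Dockerfile Atomic build chain described above; the
# context directories and tags are illustrative, not the real build layout.
docker build -t atomic/assets   01-assets/         # pristine rpm-ostree repo plus build assets
docker build -t atomic/compose  02-compose/        # run on a host with the Docker socket exposed;
                                                   #   launches stage 1 to compose the final rpm-ostree
docker build -t atomic/iso      03-imagefactory/   # runs image-factory to produce a bootable ISO
docker build -t atomic/images   04-cloud-images/   # images for AWS, bare metal, or OpenStack clouds
```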
All those intermediate images really contained was a copy of the Docker client and nginx or Apache, in order to serve the end asset out to the cluster. And the very last Dockerfile gives us a set of images we can use to build images either for AWS, for imaging straight onto bare metal, or for OpenStack clouds.

Okay, so once we have our base operating system, we need to look at the containers we're going to launch onto it, and there are essentially three projects I'd like to talk about on that front. There's OpenStack Kolla, very much the heavyweight in this area, which has been going for some time; it proved the initial viability of running OpenStack in Docker, and right when it started it actually attempted to orchestrate through Kubernetes before reverting to an Ansible-based deployment method. Then there's Harbor, covering some of the images I've built, and then Yaodu, a new packaging system we're working on as well.

As I said, Kolla was the first project really to start on this. It began as a complete system, both building images and deploying them, and it's now split into three deliverables: Kolla, which does just the images themselves; Kolla-Ansible, which concentrates on Ansible-based deployment of those images; and Kolla-Kubernetes, which is currently being developed. It follows a fairly complex build flow using Jinja2-templated Dockerfiles, which gives a huge amount of flexibility: from one set of Dockerfiles we can run the build system to produce images either from source or from distribution-provided binary packages, for a number of distributions. The main downside of this method is that it results in very large images, typically between 250 and 350 megabytes for a service image, and with a large number of layers too.

So the first thing I started experimenting with was how to reduce this and make it more efficient to distribute these images out to larger systems. I actually used the very first version of the Kolla build system, which was based around Bash, to run a very simple hierarchical tree build structure where we record the parent images just through the directory structure. I also explored using Alpine Linux and other musl-based Linux systems for building out the control plane, falling back to either CentOS or Fedora for components we couldn't viably package within Alpine, and this resulted in a very similar build system with a slightly smaller end result. One of the things we did there was install and then remove all of the build tooling within each layer, so we weren't carrying it between parent and child images. With that process we got the image size down from 280 to 300 megabytes to about 67 megabytes, in this case for the Neutron API image, and with about half the number of layers, so when it came to pushing images out to a large number of nodes for rolling updates, that was a lot quicker. A minimal version of that pattern looks something like the sketch below.
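Here is a minimal sketch of that single-layer install-and-strip pattern, assuming an Alpine base (the package and project names are purely illustrative):

```bash
# Illustrative Dockerfile: build tooling is added and removed within one RUN,
# i.e. one layer, so it never persists into child images.
cat > Dockerfile <<'EOF'
FROM alpine:3.6
RUN apk add --no-cache python2 py2-pip \
 && apk add --no-cache --virtual .build-deps gcc musl-dev python2-dev linux-headers \
 && pip install --no-cache-dir neutron \
 && apk del .build-deps
EOF
docker build -t neutron-api:slim .
```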
Something I started working on literally in the last two or three months is another method of packaging OpenStack within Docker. This draws on a lot of the experience that I and another guy, Sam Yaple, have; we got into an argument about the best way to package images and then went very much back to basics, and some of the work he's done in this field is seriously impressive. All of the previous methods have generally relied on some sort of templating or external tooling for the build. Here, instead, we take advantage of the OpenStack Requirements project, which gives a list of all the Python dependencies a particular OpenStack project may use. We use this to build a Dockerfile containing all of those packages compiled for an operating system as wheels; then, from another Dockerfile, directly accessing the Docker Hub API, we pull in just the layer we want and use it as a local repository for building the service image we're after.

The advantages of this are pretty massive. It means we can produce a complete image for an entire service within about two to three minutes on a developer's laptop, versus the much longer times it was taking before, and we don't need any external tooling installed on any of the machines. Quite a few organizations are now starting to look at integrating this into their CI/CD pipelines, because as well as building directly from a source repository we can go straight in and pull a particular commit, or anything we need, from Gerrit, and that's going quite smoothly. The end result is an incredibly compact image that's inherently very auditable, can be pulled around very easily, and strives as much as possible to be orchestration-engine agnostic: unlike previous attempts, we don't load any configuration data or configuration helpers into the image, and instead offload that job to the orchestrator itself.
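The overall shape of that flow, very loosely sketched (the file names are hypothetical, and this uses a multi-stage COPY --from as a simplification of the Docker Hub layer-pull trick described above):

```bash
# Stage 1: compile every dependency listed by openstack/requirements into wheels.
cat > Dockerfile.wheels <<'EOF'
FROM ubuntu:16.04
RUN apt-get update && apt-get install -y git build-essential python-dev python-pip
RUN git clone https://git.openstack.org/openstack/requirements /requirements \
 && pip wheel --wheel-dir /wheels -r /requirements/upper-constraints.txt
EOF
docker build -f Dockerfile.wheels -t example/openstack-wheels .

# Stage 2: build a service image, resolving packages from that wheel layer
# instead of compiling anything locally.
cat > Dockerfile.keystone <<'EOF'
FROM ubuntu:16.04
COPY --from=example/openstack-wheels /wheels /wheels
RUN apt-get update && apt-get install -y python-pip \
 && pip install --find-links /wheels keystone
EOF
docker build -f Dockerfile.keystone -t example/keystone .
```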
Okay, so now that we have an idea of the images we started playing with, we can look at the networking layer. Something I've been working on there is using Neutron as the basis for networking in Kubernetes, which offers some potential advantages. Again, this is far from production-ready; a lot of the work here is based on prototypes that Toni and other people working on the Kuryr project produced, which I extended slightly to enable them to be used for OpenStack deployment. Providing Neutron as a networking layer for Kubernetes allows us to use hosts and Kubernetes pods together, gives improved security control, and lets us apply quality of service and scale these services very easily. However, it's not all roses: what we found quite quickly is that the reference architecture really wasn't suitable for operating at this sort of scale. We found huge bottlenecks in the layer 3 nodes and the DHCP nodes, which were giving us long provisioning times, so we started looking at alternatives.

Alternative Neutron backends generally fall into two categories: routed solutions, like Calico and Romana, and tunneled solutions, like OVN and MidoNet. For the work I was doing I found the tunneled solutions generally better, because they offered feature parity with what a lot of OpenStack operators expect, things like supporting overlapping subnet ranges and floating IPs. So we did quite a bit of work using OVN as the backend, which has the advantage of distributed routing, removing the bottlenecks we'd been experiencing with the network nodes, and it offers incredibly fast provisioning, which is really quite significant when you're trying to scale a large cluster or deployment up or down. The other big advantage we found is that it's easier to put into containers and orchestrate with Kubernetes: without network namespaces and similar machinery there are far fewer moving parts to deal with, which made it much easier to load in.

To make this work there's a component called Raven, the prototype of what is now becoming Kuryr-Kubernetes, which is a very simple Kubernetes API watcher that takes Kubernetes objects and converts them into Neutron constructs. With this we got to the stage where we actually found that kube-proxy was no longer relevant within our cluster, because we were able to replace everything it did with Neutron elements. Briefly, the mapping works by taking pods and creating ports for them within Neutron, replacing services with load balancers, and applying security groups directly to the containers.

Something else that started to creep in here is FreeIPA as a common DNS backend. In order to provide this sort of infrastructure, where you have multiple orchestration systems working together, we found it was quite essential to have a common DNS base for these things to talk to each other, and FreeIPA was really very good at this: it let us tie together both external DNS and DNS driven by Kubernetes and by Designate on the OpenStack side, and distribute that out to hosts, to virtual machines where necessary, and to control plane pods running within the cluster. Then, to allow these systems to talk to the outside world, it was initially quite complex to work out how to provide a scalable edge, but we eventually started using uplink pods, which bind to the Docker socket on a host and fire up a router container; that in turn creates a link-local address which allows traffic to very easily ingress into the cluster and access services as required.

Putting this together, we can see how it starts to look in practice, with a service, in this case Mistral, running inside Kubernetes, orchestrating pods that are held on Neutron-backed networks. That also lets us pull up all sorts of details about the pod itself, connect it into other parts of our OpenStack infrastructure, and tie it into existing virtual machines.
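Concretely, because the pods sit on Neutron-backed networks, the same workload can be inspected from both the Kubernetes and the OpenStack side; a rough illustration, with hypothetical network and security group names:

```bash
kubectl get pods -o wide                      # pod IPs are allocated by Neutron
openstack port list --network k8s-pod-net     # ...so each pod shows up as a Neutron port
neutron lbaas-loadbalancer-list               # services map to Neutron load balancers
openstack security group rule list k8s-pods   # policy is applied directly to the containers
```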
Okay, another part of getting these systems to work together was working out how to manage authentication between them. There have been efforts to get Keystone talking directly to Kubernetes via a token-based authentication system, but we found the models of the two systems aren't directly comparable, so a federated setup, with our users held externally, proved to be very useful and gave us a common base between them. For this, FreeIPA was again an incredibly useful tool: it allowed us to deal with user CRUD, act as a CA backing our systems, and provide the DNS layer tying things together. For a real-world deployment you'd obviously want an external FreeIPA installation, but we found it worked reasonably well for testing when launched within a container. The main problem we found there was, again, adapting it to work with Kubernetes networking, so we bind-mounted the Docker socket into a pod and launched it via that method. We then installed the FreeIPA client into our Atomic hosts, which allowed us to perform host registration, generate all the certificates we need, and distribute them out to the hosts to allow them to talk to our cluster.

The other thing we did was start using PKI and TLS to authenticate all services internally within the cluster: we used certificates for RabbitMQ, MariaDB, and the APIs when they talk to each other within the Kubernetes cluster, which works incredibly well and even reduces load on certain elements. When it came to tying user accounts together between systems, Fedora's Ipsilon project was really quite helpful: it allowed us to use SAML2 authentication to tie users into Keystone, with our groups held in LDAP, which we accessed directly. Essentially we were holding all of our user information outside the cluster, which again works quite well for multi-region or multi-site deployments. This culminates in a very seamless experience for end users: they select an identity provider and very quickly get straight into Horizon, or, via a wrapper, get a token that lets them use the OpenStack CLI.

So, putting it all together: as I said at the start, we can package this up in one of a couple of ways. What I was working on with my Harbor project was a very simplistic and very opinionated deployment that gives you the ability to set up a cluster very quickly, and we should hopefully be able to have a look at that in a second. The installation method we came up with for this was very much a prototype: very simple, very opinionated, and it relies quite heavily on some pretty ugly tools, using a lot of sed, Bash, and then systemd to actually drive services. It's not DRY by any means, but what it does do is provide quite a nice testbed and development platform for these sorts of systems. You can see how the services are constructed here: we have bootstrapping jobs that bring up the initial service, doing things like managing the database, creating and populating the database layer, and running all the migrations we need, and they then fire up the actual replication controllers for things like Apache or the Python service running within. Roughly, that flow looks like the sketch below.
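Loosely, assuming one manifest per step (the file names here are hypothetical), the pattern is:

```bash
# Bootstrap jobs run first and must complete before the service starts.
kubectl create -f keystone-db-create-job.yaml   # create the database and user
kubectl create -f keystone-db-sync-job.yaml     # run the schema migrations
kubectl create -f keystone-rc.yaml              # then launch the replication controller
kubectl get jobs,rc --namespace keystone        # watch the pieces come up
```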
The end result, sorry about the pretty low resolution, is that we have all of the services split out into individual namespaces, operating either in the host network namespace or, for the API services, on the OVN-backed Neutron layer. If you've not used it before, I can highly recommend Cockpit as a really nice web interface for hosts in general, but especially in combination with Kubernetes: it allows you to very quickly get in, look at a service, pull up information about the pods running within it, scale it, get in for debugging, view the actual containers running for a service, get the logs for a particular thing, or just very quickly open up a shell and see what's running in there. Honestly, it makes the management of these sorts of systems so much easier. We also get all of our database volumes for individual services, which can be mounted either the way we have it set up at the moment, on host paths, or via Cinder volumes once the initial cluster has been brought up, and we can get a quick overview of what's running (again, I'm sorry about the resolution).

This then means we can get a huge number of services running very effectively and pretty reliably. For example, on this all-in-one cluster I'm running at the moment, we can go in and find something pretty critical, say our Keystone API, delete that pod, and have it reschedule very quickly. Obviously in reality you'd want to run multiple copies of it, but within a couple of seconds you can be back up and running, hopefully. It really offers a level of resilience that you normally wouldn't see without much more effort and work.

However, as I've suggested, this solution is not quite what you'd want to use in production by any means. When I started work on the Harbor project there wasn't a package manager available for Kubernetes, but since then Helm has come onto the scene, which potentially solves a lot of these problems. So Kolla-Kubernetes is now using it, transferring from its original Jinja2 system to Helm. It takes a very extreme approach to microservices, where every component is wrapped within its own Helm chart. It currently shares its configuration and deployment setup with Kolla-Ansible, which is something the project is trying to eradicate as quickly as possible so it can move to its own configuration, and it's intended to be consumed as packaged units driven by operators that interact with Helm. It's very heavily in progress at the moment, aiming to be capable of installation and basic day-two operations by May.

The other major project appearing in this area is OpenStack-Helm, which is written totally from the ground up for this package manager. It takes the idea of one chart per service, it performs all of its configuration management within Helm, and it runs today for both development and proof-of-concept deployment at scale. It's rapidly iterating towards a stable, image-agnostic base; we've been testing it with all of the images I mentioned previously and it runs capably with all of them. It's currently stewarded by AT&T community development, but they're a really friendly bunch of people, and I'd highly recommend checking the project out and getting involved, because it's at the stage where that's really becoming viable.

Looking at how to actually deploy a Helm-based system: it's fantastically easy when you compare it to any other OpenStack deployment I've come across before. Literally within thirteen or fourteen lines and commands you can go from a fresh Kubernetes cluster, in this case just Minikube for a development system, to a fully operational OpenStack deployment up and running. The sketch below gives a flavor of what that looks like.
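From memory, the early OpenStack-Helm flow looked roughly like this; the chart names and steps are indicative rather than exact:

```bash
minikube start --cpus 4 --memory 8192        # fresh development cluster
helm init                                    # install Tiller into the cluster
git clone https://github.com/openstack/openstack-helm && cd openstack-helm
make                                         # lint and package all of the charts
helm serve &                                 # serve them from a local chart repo
helm install local/mariadb   --name mariadb   --namespace openstack
helm install local/rabbitmq  --name rabbitmq  --namespace openstack
helm install local/memcached --name memcached --namespace openstack
helm install local/keystone  --name keystone  --namespace openstack
helm install local/horizon   --name horizon   --namespace openstack
```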
And something you can do just now to demonstrate that is take the namespace I started this morning, where I launched Horizon and Keystone, and actually deploy Glance out to it, which then creates all of the API services, the load balancing between them, and a set of jobs in order to initialize it.

Okay, so does anyone have any questions about any of that? Great.