Hello. Can you hear me? Yeah, good. Welcome, welcome, please have a seat. This is one of the sessions that all the PTLs in OpenStack have: an opportunity for every project in OpenStack to give an update on what they are working on. This session is about TripleO, so this is going to be about what's going on in TripleO, what we are doing now, and what we will do next. If you are interested in TripleO, or just curious, I think this is a good place to be.

My name is Emilien Macchi. I'm French, as you can hear, I live in Canada, and I work for Red Hat. I'm the current project technical lead, and I'm here with my friends because I don't like talking alone. Flavio, do you want to introduce yourself?

Yeah, so my name is Flavio. I also work at Red Hat. I do many things in OpenStack, and being on the TC is one of them. Right now I'm working on deployment, specifically putting OpenStack in containers as part of the TripleO project.

Yeah, I'm Steve Hardy. I also work for Red Hat. I've been working on TripleO, and previous to that mostly on Heat for a number of years. Today I'm going to talk a bit about composability and also containers, as Flavio mentioned.

So, the agenda. I will start with what's going on in Pike, what we are doing. The cycle is not finished yet; we are in the middle, I think, so I'm just going to give an overview of what we are doing, what we have already done partially, and what we aim to finish in Pike. Then I will give an update on what's going on with deployment tooling in OpenStack, and try to mention some of the things the TripleO project is working on with the other projects in OpenStack, so you can also understand how we collaborate in the OpenStack community to make deployments better. The next topic will be containers: everything we are doing around containers, which is an interesting topic at this time.
The next one will be about upgrades and all the composability work; Steve will explain how with TripleO you can compose your own architectures and deployments, and how upgrades still work with all of that. During the discussion today we are going to talk about the roadmap all along, but at the end we have a slide summarizing the next steps for us; keep in mind this talk is always about what's next, so the last slide will be more of a summary of what we are doing. I think we will have time for Q&A; if you have any questions or feedback during the presentation or at the end, there is a microphone and you're very welcome to ask anything.

So let's start: what's going on in Pike? When I did my part of this section, I wondered, should we list all the blueprints? So I tried to classify the blueprints instead. The first category is security: what we are doing in TripleO to improve security. The first big thing coming is the TLS everywhere blueprint. It was started in the Ocata cycle, I think, and the work didn't all finish, so it was postponed to this cycle, Pike. We have made good progress in this area; it basically allows you to deploy OpenStack, and also the infrastructure services, with TLS enabled. So that's something interesting.

We also started to deploy etcd for some networking tools in OpenStack; we deployed etcd at the end of the Ocata cycle, and in this cycle, in Pike, we are working on how to secure etcd, which, by the way, uses the TLS everywhere work, so this is connected to TLS everywhere. etcd is going to be used more in TripleO and in OpenStack over the next cycles (I will come back to this topic), so before the service is in production and widely used, we want to make sure it's secure. That's something we are working on right now.
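For context, enabling the TLS work at deploy time looks roughly like passing extra environment files to the deploy command. This is only a sketch: the environment file names and paths here are from memory and may differ between releases, so treat them as illustrative, not authoritative.

```shell
# Sketch only: file names and paths are illustrative and may vary by release.
openstack overcloud deploy --templates \
  -e /usr/share/openstack-tripleo-heat-templates/environments/enable-tls.yaml \
  -e /usr/share/openstack-tripleo-heat-templates/environments/tls-endpoints-public-dns.yaml \
  -e ~/cloud-names.yaml   # your own overrides, e.g. the public endpoint DNS names
```

The general pattern is the point here: TLS is opted into by stacking environment files onto the same deploy command, rather than by a separate tool.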
We also have some people working on AIDE, the Advanced Intrusion Detection Environment. We are introducing this new service in TripleO so users can run some audit tools. And, as ongoing work over the cycles, in Pike we aim to deploy all the OpenStack services using the Keystone authtoken plugin with the Keystone V3 API, which is interesting for using all the V3 features in Keystone.

Moving on to networking. We integrated OpenDaylight, I think, in the previous cycle; I can't recall exactly, but this cycle, in Pike, we have people working on HA for the OpenDaylight services. We have also integrated the Neutron BGPVPN service in TripleO. We have some progress on Octavia integration. We have people working on OVS 2.6 and the DPDK features; I think we had SR-IOV in the previous cycle, and people are now working on the features in the new version of OVS and how TripleO can use DPDK. And we also have L2 gateway integration in this cycle. Those are the major things for networking. Steve has an update about composable networking in this cycle, but he will talk about it later.

Oops, there is some... okay. We have some user interface changes. The first came from feedback from TripleO users and operators: it was hard for users to discover all the services you can deploy with TripleO. So we listened to you, and we worked on a tool in TripleO that discovers all the services and exposes them to the end user, so they can better understand what they can deploy with TripleO and how. All the services and all the roles you can compose: that's something we are going to expose in a friendlier way for users in this cycle. The second thing is about the UI: you will be able to import and export deployment plans directly from the user interface.
And we have something about split stack, and I think James is going to talk about it if you give him the microphone.

Yes, okay. In a normal TripleO deployment, you usually have to use Ironic to deploy the bare metal nodes themselves. What split stack allows you to do is use pre-provisioned nodes: you can use a different tool such as Cobbler or Foreman to deploy the initial nodes, and then we can actually orchestrate the OpenStack services on top of that. We've been able to do this since Ocata, and in Pike we're looking at making it a little bit easier to use, so that you can pre-configure the agents on each of the nodes and you don't have to wait for Heat to start to create the initial stack. So hopefully it'll be easier to use in Pike. Thanks.

Okay, let's try to go to the next slide. Yeah, okay. So TripleO is involved in a kind of deployment working group. We are trying to solve some problems not only in TripleO but outside TripleO, so the community can benefit. At the last PTG in Atlanta, TripleO and the other deployment tools got together in a room, trying to list all the challenges that we have and to work together on those challenges, instead of trying to fix them just in TripleO. One of those challenges is configuration management. All the projects in OpenStack have their own way to take parameters and apply them to the config files, and that might sound easy, but it's not: you have to maintain so many things, all the interfaces and all the parameters. For example, in TripleO we have the Puppet modules, and every cycle there are more and more parameters we need to maintain. So we were looking for a unified way to manage all those parameters, and that's something we are working on. We had a session on Monday afternoon this week; there is an Etherpad with the output.
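To illustrate the maintenance burden being described: today a single option typically has to be wired through several layers, each with its own name for it. The following is a rough, illustrative sketch; the parameter and class names are made up for the example and are not the actual templates.

```yaml
# 1) A Heat template parameter exposed to the operator
#    (hypothetical name, in the style of tripleo-heat-templates):
parameters:
  KeystoneDebug:
    type: boolean
    default: false

# 2) ...mapped into hieradata that the Puppet profile consumes:
outputs:
  role_data:
    config_settings:
      keystone::logging::debug: {get_param: KeystoneDebug}

# 3) ...which the Puppet module finally writes into keystone.conf:
#    [DEFAULT]
#    debug = False
```

Every new service option means touching each of these layers, which is exactly the duplication the cross-project configuration-management work is trying to reduce.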
I will send an email to the mailing list after the summit, probably next week, about what's going on and what the next steps are, but basically TripleO is highly involved in this work. We are going to investigate how we can use etcd to store the configuration, and how we can make the OpenStack services get their config from etcd instead of from files. That's the big thing for configuration management, and it's cross-project work with other projects in OpenStack.

There is one more thing I mentioned on the slide. We are not actively working on it, but we have some people investigating how we can do the Keystone Fernet key rotations in Keystone itself, or somewhere in OpenStack. I was looking at doing it in TripleO and realized that everyone in OpenStack needs this feature. We didn't have the resources yet to start this work, but if there is someone in the room looking at this and interested in it, that's something we are interested in: the Keystone Fernet key rotations. I think that's it for the deployment working group. Now Flavio will talk about containers, if you want to go ahead.

Yeah, sure. I'll try not to talk too much so there's time for questions. The containerization effort in TripleO is a multi-cycle, actually multi-year, effort where we're trying to move the OpenStack services from bare metal into containers, and it's something that started as part of the Pike development cycle. The first two cycles, Pike and Queens, are going to rely on the Docker runtime, so we're going to containerize the services and they're going to run on the Docker daemon, basically, and eventually the goal is to move all these services to run on Kubernetes.
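As a rough sketch of what "running on the Docker daemon" means in practice, a single containerized service might be started something like this. The image name, tag, and mounts are illustrative assumptions, not the exact TripleO artifacts.

```shell
# Illustrative only: image name, tag, and mount paths are assumptions.
docker run --detach --name keystone \
  --net=host \
  -v /etc/keystone:/etc/keystone:ro \
  -v /var/log/keystone:/var/log/keystone \
  kolla/centos-binary-keystone:pike
```

The host network and the /var/log bind mount keep the containerized service looking like the bare metal one from the outside, which is the theme of the next couple of slides.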
We're collaborating, or trying to collaborate, more and more with other projects upstream. As Emilien mentioned, there's a deployment working group, and as part of that we'd like to reuse as much as possible, so we're using kolla-build to create images. That means the Kolla images that the Kolla project uses are the same ones we're using for deploying the services inside containers. We've been working with those folks, but we're not using anything else from Kolla, and I want to make that very clear because it can be confusing: Kolla, Kolla-Kubernetes, and TripleO are covering pretty much the same space. We're only consuming the images generated with kolla-build, which is why I wanted to be very explicit there.

The other thing is that we want to introduce the minimum number of changes possible to the architecture in the first two releases, so that we can focus on containerizing the services and don't have to worry about changing the architecture as you know it today. I'll go into more detail about that in the next slide. And since we don't want to change the architecture, and we want to make this switch as simple as possible, we're also working (well, Steve is actually working on this more than I do) on upgrades from bare metal to a containerized deployment. So this is not going to be greenfield-only: you don't have to stand up a brand new environment to have your containerized services. If you have a bare metal deployment of OpenStack, we'd like to provide a way to migrate from that bare metal deployment into a containerized one, using TripleO directly.

More on what I mean by not changing the architecture: in the first two releases we're going to use the host network, so we're not relying on any of the virtual networks provided by the Docker daemon, for different reasons, and we can go into more detail on that later if you're interested. Using the host network allows us to maintain the current networking architecture, providing network isolation, IPv6 support, and all the things you have probably already been consuming if you're using TripleO.

And we're logging to /var/log, basically; again, there are different reasons for that. We'll also log to standard output, but within the container we're going to keep logging to /var/log. Again, we don't want to introduce many changes to the architecture: there are tools that rely on /var/log to find the logs for the services, and if we stop logging to /var/log right now, we're going to break all those tools. Since we can keep logging to /var/log (we're running on the Docker daemon, we know where the services are running, and the controller and compute nodes are going to stay the same), we decided to just go down that road, avoid breaking all the tools, and avoid having everyone update their tools. Eventually that might change after Queens, but at least for the next two releases we don't really need to worry about it, and there's more time for tools to update, change, and evolve with TripleO as we make these changes.

We still use Puppet to generate config files. We don't use Puppet to run the services anymore, or to install them, but we do use it to generate the config files, so yeah, we still depend on Puppet, basically. That's exactly what I wanted to say: we're reducing the use of Puppet within TripleO, but for the config files we will depend on it, and it runs from within the container image itself. And one thing I wanted to mention is that this was all implemented using composable roles, so if you're familiar with composable roles and you were relying on them in some way, in the APIs and so on, you can still do that; we implemented all of this by reusing the composable roles that were introduced a
couple of releases ago. So yeah, that's it from the container side. If you have any questions, I'd prefer you ask them at the end of the talk, or you can catch me afterwards and I'll be happy to answer. Thank you.

Thanks, Flavio. So yeah, I was going to go into a bit more detail on composability and also the way in which we're handling upgrades. Some of this covers improvements that happened over the last two to three cycles, but it all fits together in terms of the implementation for containerization and the roadmap for Pike and beyond.

Back in Newton we introduced this concept of fully composable roles. Prior to that, TripleO had a fairly static architecture where we expected specific groups of nodes to be deployed: you had a controller, a compute, and then certain types of storage. From Newton onwards, that's now fully customizable. There were kind of two parts to that. One was that we had to decompose the configuration of all the services, and I'll show a diagram in a second which shows how that fits together, but basically we now have one Heat template per service (TripleO is heavily dependent on Heat as an orchestration tool), so we've decomposed the whole definition of the configuration of each service, and you can consume that data more flexibly. Then we have one Puppet profile per service, which lives in the puppet-tripleo repository, and that's again just a nice way to encapsulate each service's configuration, so you have more flexibility at deployment time. And the final piece of this was enabling custom roles. This basically allows you to have a YAML file with a list of roles, so you can have custom node types, which could be something like an SDN controller, or a special type of storage, or you might want to break out your Neutron services to run on a separate node, or perhaps have Keystone
running on a separate node, or group of nodes. That's all now possible, whereas prior to Newton it wasn't.

That presents an interesting problem when it comes to upgrades, in that you no longer know where the services are running, as in which services are running on which nodes, so you can't take a monolithic approach to upgrades. So during Ocata we introduced a new model for upgrades, which we call composable upgrades. It basically goes with the general theme of decomposing the logic that we had prior to the Ocata release, and we did that by using some Ansible tasks in the service templates; I'll talk about that a little bit more in a second. This is all working out quite nicely for us. It has also provided an easier interface for integrators with TripleO: now, if you're integrating a new service, or you need to modify the configuration of an existing service, you really just have to look at one Heat template, whereas before there were some quite big Puppet manifests and a fair degree of complexity that some people found a bit of a barrier to entry. So I think that's made the story quite a lot nicer from an integration point of view.

In terms of networking, that's kind of the next step in composability for us. A lot of people want to define custom network names and arbitrary custom network topologies. Currently we support a model for network isolation where there's a fixed number of networks which you can either enable or disable; in the future we are seeking to enable operators to define that however they like, which I think is particularly interesting for certain SDN and NFV use cases. So that's something we're working towards for Pike, and I'm hoping we're going to make progress towards enabling a fully customizable networking layer during Pike. So this is just a quick
diagram to hopefully reinforce what I described in terms of composable services. This is basically what enables the service plug-in model for TripleO: we have a Heat template per service, and the data within that gets merged together at deployment for each group of nodes. So let's say you have your controllers: we merge together all of the data for the services you've assigned to the controller role, and that gives us the data we need to configure the services at deployment time. If anyone's interested in more details, there are some notes on my blog, and in the TripleO docs there's a walkthrough of how this all fits together.

Then, as I mentioned, the next part to this is enabling arbitrary, operator-defined groups of nodes ("role" is our term for a group of nodes). The way we did this is to use Jinja2 templating as a pre-processing step, which basically enables us to generate some things in the Heat templates which previously were more of a hard-coded system. It's quite flexible, and the other thing to mention is that within the plan (the plan is the term for the Heat templates that we upload into TripleO to do the deployment), if you need to do Jinja2 templating based on the roles in your custom templates, that's also possible. So this is quite a flexible system now. One thing worth mentioning is that if you're interested in playing with this feature, there's a roles data file in the Heat templates now; you can just copy that and then pass -r roles_data.yaml, so it's quite an easy interface to experiment with.

So that brings us on to upgrades. As I mentioned, we really had to make this a more composable solution, due to the flexibility afforded by custom roles and composable services, and the way we did this is that now, in each of the service templates,
there's just a list of Ansible tasks. We chose Ansible for this because it's a bit more of an imperative tool, and generally during an upgrade you want to do a sequence of steps, whereas Puppet has proven a nice solution for configuration management but is more of a declarative system. So we went with Ansible in this case, because you need to do things like disable the service, then perhaps do some database operations, and sometimes there are other migration steps that you need to do for each release. This again makes it quite user-friendly: if you're maintaining a service, you can just modify those tasks, and we apply them in a series of steps. When we do a deployment with Puppet, we apply Puppet in a series of steps, which controls the order that the services come up, and we do the same on upgrade: on step one we might disable the services, on step two you can do a package update, on step three you can do some migrations, and so that all fits together quite nicely.

That's probably going to change a bit during the Pike cycle as we move to containers, because containers are going to enable us to do potentially more things, like rolling upgrades, and running mismatched versions of services, and so on. So there's probably going to be a bit less use of the Ansible tasks, because we'll be able to treat the upgrades more like a minor update in many cases. So that's the current status; it fits together quite nicely, and yeah, that was all I had to say about composability, so I can hand back to Emilien to go through the roadmap.

Okay, thanks. Yeah, so this slide is a summary of things we mentioned during this discussion, but the next big thing for us is the integration with Kubernetes. Everyone talks about it, and if you went to the summit sessions, all the slides
have the Kubernetes word at least once, so we had to include it too. We also have people working on improving performance at scale. Like Steve said, we are moving upgrades to containers, with rolling upgrades, and I hope this will help reduce the downtime that you might have with the current architecture. So that's something we are working on with the container upgrades. We also have different blueprints and features that will hopefully improve usability, including usability for debugging TripleO when something's wrong. We got the feedback from users, and we understand it's very complex to debug, so we have some features coming in Pike to improve this user experience. The list of blueprints is huge; we just put up the link if you want more details on what's going on, and again, if you have any question about specific features or things we talked about during this discussion, please ask; we have the time, I think.

Just a last slide about how you can get involved, if you're not already, in TripleO. Of course we have an IRC channel; it's on Freenode. We use the openstack-dev mailing list, with the tripleo tag. There is the link to the official documentation. And Steve has a blog, other people are blogging too, and we have a planet with all the TripleO-related posts. So those are the major ways to get involved. As a user, you can get involved by giving feedback, filing bugs, showing up on IRC and complaining; that's already a good contribution, because yeah, that's what we need in OpenStack, just feedback. If you're a developer, of course, you can help us stabilize TripleO; if you're working on features, we have a bunch of bugs that have been open for a long time, and there are many ways to contribute to TripleO. I'm not going to list all of them, but if you have any interest in
contributing to TripleO, you can come to us directly, by email or IRC, whatever, and we will of course welcome new contributions. I think that's it. We have time for questions, and there is a microphone here if you have any questions about TripleO, about the roadmap, about OpenStack.

Yeah, I have a request. I'm here to complain. Just kidding. First things first, congrats, because from Newton onwards the software is becoming very flexible, I must say. And one feature that would be very handy: sometimes when you start the deployment, it's very easy to make a mistake when you're creating the templates, and then what you need to do is delete and restart the installation, unless you are probably at the phase where Puppet is running, so you can probably rerun it and complete the installation. One thing that would be handy: when you fail, it would be nice if the software would let you restart from that step, so you can fix your template, for example, and restart from that step.

So in theory, that should already be possible. The command to do the overcloud deployment is openstack overcloud deploy, and you should just be able to rerun that every time, because Heat supports doing updates from a failed state. What should happen is that anything that has already completed is left alone, anything that failed (the Heat resources) is replaced, and then you continue from that point. So if that's not happening, it would probably be a good idea to raise a bug. In terms of the configuration layer, one of the complaints has been that the series of Heat-applied software configuration resources can be a bit hard to debug, and so during Pike some of that is moving more towards Heat driving an
Ansible playbook, and when that fails, it may well be possible (in fact, one of my aims is to make it possible) to run that manually, using a dynamic inventory that already exists for the TripleO validations. That's probably going to be more of an advanced user interface, and you wouldn't need to do it by default, but for debugging the software configuration layer, I think that's going to make for a bit of an easier feedback loop. So that may possibly help, but if there are specific issues where you can't just rerun the deploy command, let me know, because it may well be a bug. Yeah, so the main thing is: if it fails, you don't have to delete the stack, you can just rerun the command and it should pick up where it left off.

It depends on the state of the nodes being deployed. If, for example, the Pacemaker cluster was not bootstrapped correctly, and you run it again and it's failing again, it's maybe something on the node that failed to be deployed.

Yeah, I mean, there are certain failures that you can't recover from, like if you're requesting the wrong number of nodes and you can't satisfy that based on what is in Ironic, or if the Ironic nodes are tagged incorrectly. Those kinds of things you also have to fix, but even in that case, you should just be able to rerun the deploy, and it shouldn't have to delete any of the nodes that have already been built.

What's going to be done in the case of failures of certain nodes? We often have it that we're deploying, I don't know, 10 or 20 nodes, and three or four have a problem; they will not be deployed, and then my stack fails. But intentionally I don't want it to fail, because 15 are there and I'm happy with that, and I can just repeat the command, and the other five, whatever problem they have, will come up later
on. So is there any work being done on that?

So there's been some discussion on that, and there's an open bug; I think James has been looking at that as well. I mean, it kind of ties into my comment about the move towards having a single piece of configuration: if we have a failure, we can then rerun the configuration on those nodes that actually got built, so the move towards more Ansible usage may help with that. But James, did you have anything to say?

Yeah, so that has been a common pain point for folks, and we're aware of that. The main problem is that it's not really a good fit for the kind of declarative Heat model, so we need a way to get beyond that error state using the underlying tool. So yeah, that's hopefully something we can resolve during Pike as well, as a result of the refactoring I've been doing for containers and upgrades, which involves moving more towards Ansible.

I have a question regarding this container approach. Previously, diskimage-builder was used to generate the image, the OS plus the overcloud OpenStack RPMs, and now we have these Kolla images, but underneath we still have bare metal, right? And we need an OS. So how does it work now: do you still use diskimage-builder for the OS, plus these Kolla images?

The images haven't changed; kolla-build is used only to build the images the containers are going to run from, so the previous stuff stays as it is. We kind of took a step back: we're saying, hey, let's not break people on the way towards containerization, and then, you know, longer term I think we will converge on a leaner version.

You mentioned Octavia before, and I was just a bit curious what the status is at the moment: is it already fully implemented, or just partially?

I think, well, I had a discussion yesterday
with Asaf (I don't know if he's here), okay. We are still figuring out how to handle the Amphora image in the post-deployment. Right now it's manual: when you deploy Octavia on the control plane, you just have the API, and you have to deal with the Amphora image afterwards and upload it, you know, into Nova, and this workflow is not automated yet. But as far as I know, right now you can deploy all the Octavia services on the control plane, so you have the API, the engine, and all the services (I don't remember the names), but that's the status; we haven't figured out the post-deployment thing yet. Okay, thank you.

I think we have time for one more question.

So I often face a problem with network config. It often happens that I lose connection because I had a wrong template. So is there any plan to do some validation before actually applying this network configuration?

There is some, but it's not foolproof and it won't work in some cases. It works in the trivial, no-network-isolation case; it doesn't work if you have multiple links on the system. It's honestly a bug I haven't been able to look into for a long time. So what's the issue you've seen?

It tries to DHCP interfaces that don't have a DHCP server, so they don't get an address, and then it fails.

Okay, so that's a different issue, but I thought we didn't DHCP the interfaces by default; there was a patch to do that, anyway. Okay, well, that may have changed, but it was broken for a long time. So then it might be that you're missing something there; I guess there is some complexity with the network isolation. What release are you using?

Mitaka and Ocata.

Okay, okay, so I will check it, thank you.

Okay, thank you. Thank you for being here. If you have more questions or more feedback, I'll stay here a little bit longer. So thanks.