Hello, thanks for coming. A quick update about myself: I'm Thomas, and I'm now a full-time employee of Mirantis, who employs me to do OpenStack packaging in Debian as my full-time job. I used to live in China; I moved to Grenoble last December, after eight years over there. So packaging OpenStack is my daily job. This presentation will mainly tell you what happened over the last year in the OpenStack community, as well as some of the new features and new projects we have in OpenStack. Let's dive directly into it. There's a bunch of features that have been added to many projects; I'll go through some of them, not all. Erasure coding, if you know about it, is a kind of redundant encoding so that you can recover your files if some of the parts of a big file are missing. That has been added to Swift, thanks also to storage policies, and we can hopefully use it soon in Debian; I haven't finished working on the latest Swift. Also, SR-IOV has been added to Nova, so that we have virtualized network interfaces. Another very nice thing that happened during this last year is that we have IPv6 support in Neutron. Currently it assigns one /64 to every tenant and that's it, no more, no less, but hopefully it's going to improve over time. Glance now has support for the Murano catalogue; I'll talk more about Murano later. And Horizon previously had all of its libraries embedded in Horizon itself, which, as you know, is not nice at all for distributions. They decided to use a new mechanism they call XStatic, which lets people use either a virtualenv or system libraries. So all of that is behind us already. What everybody enjoyed so much during the last two cycles of OpenStack is the new federation system for Keystone, so that you can have Keystone-to-Keystone federation or go through the SAML 2 protocol.
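To make the Swift erasure-coding feature a bit more concrete, here is a sketch of what such a storage policy looks like in swift.conf. The policy name and fragment counts are illustrative, not a recommendation, and the matching object ring still has to be built separately:

```ini
# swift.conf (sketch): an erasure-coded storage policy alongside the
# default replicated one.  10 data + 4 parity fragments means any 10
# of the 14 fragments are enough to rebuild an object.
[storage-policy:0]
name = default
default = yes

[storage-policy:1]
name = ec104
policy_type = erasure_coding
ec_type = jerasure_rs_vand
ec_num_data_fragments = 10
ec_num_parity_fragments = 4
ec_object_segment_size = 1048576
```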
And then finally, after three iterations, we now have a Python OpenStack client, which is a unified command-line interface to control OpenStack. So hopefully in a few release cycles we'll type "openstack something something" and not "neutron something", "nova something", etc. For example, Keystone already has full support for it, and it already deprecates the keystone command-line utility. There are also some new packages that appeared in Debian which I worked on. There's Designate, which is DNS as a service. When it was released for jessie it was kind of a preview release: it wasn't fully working as one would have expected, but I think it has grown into an adult project. And now we have Sahara, which is big data as a service. This is primarily a Mirantis project, but it has now been integrated with all the other projects, so it's released together with them. Murano is application catalogue as a service: you can imagine that you have a list of applications in Horizon, you click on one big icon with, let's say, Apache, and it will deploy that on your cloud. That's what Murano does. Rally is also now part of OpenStack, and it helps you test your cloud and profile it. And finally, Ironic is usable: just before the jessie release I decided to remove Ironic from jessie because nothing had proper support for it yet, but this has changed, and it's now fully working with the Kilo release. Another thing that happened in the Debian archive is that I started maintaining the latest OpenStack release directly in official backports, so right now you can use jessie-backports to install Kilo. I did that because I wasn't really satisfied with having a non-official repository to provide these to our users. Of course, I would still love to have PPAs when they become available, so that I could have a PPA for every OpenStack release.
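For reference, consuming Kilo from the official jessie backports as just described amounts to something like this (the mirror URL and package name are examples):

```
# /etc/apt/sources.list.d/jessie-backports.list
deb http://httpredir.debian.org/debian jessie-backports main

# Backports are never installed automatically; you opt in per package:
#   apt-get update
#   apt-get -t jessie-backports install nova-api
```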
Because currently, when the next version of OpenStack, which is called Liberty, is released, I will have to override Kilo in the backports. So that's not ideal. I guess you can still use snapshots.openstack.debian.org, but maybe not everyone is aware of it, so PPAs would still be a huge improvement for me. Now, a few pain points I had to face during this last year. Before the release of jessie, Raphaël upgraded Django to version 1.7, and this broke lots of things in Horizon. Though I'd like to use this room to tell everyone that Raphaël has been really helpful and helped me fix nearly everything. All of this has been upstreamed into Horizon, so they benefit from all the Django 1.7 fixes. Another major pain point has been SQLAlchemy, a new version of which was uploaded a few months ago. This broke a bunch of things in the Kilo that is currently in sid. Obviously it didn't break the backports, because SQLAlchemy there is still 0.9. Though it's my understanding that mainly tests are broken and not the applications themselves, so that's still fine. And then there's another thing which is becoming harder and harder for me: there are more and more projects to package. It's not the packages themselves, but every new OpenStack package comes with its own set of new dependencies, and I have to work those out. There are a few persons who are hopefully going to contribute to the OpenStack packaging in Debian, so it's going to be smoother for me. And then finally, about Python 3. Upstream is very aware of the Python 2 deprecation, and there's an ongoing effort upstream to support Python 3, especially by guys from Red Hat, like some colleagues of Sebastian, like Victor Stinner and Cyril, what's his family name? Okay. And I've been adding Python 3 support to the packages for the dependencies of OpenStack as much as I could.
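To give a flavor of what this Python 3 porting work looks like in practice, here is a tiny, self-contained sketch (my own example, not code from any OpenStack project) of the bytes-versus-text handling that accounts for a lot of the churn:

```python
# Runs unchanged on Python 2 and 3: normalize whatever we get
# (bytes off the wire, text from config) into a text string.
from __future__ import print_function


def to_text(value, encoding="utf-8"):
    """Return `value` as a text (unicode) string on both Python 2 and 3."""
    if isinstance(value, bytes):
        return value.decode(encoding)
    return value


print(to_text(b"keystone"))   # bytes in, text out
print(to_text("keystone"))    # text passes through untouched
```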
Unfortunately, some upstreams didn't make it, and for example, a few... yeah, I don't have an example in mind. Well, tablib, for example, has lots of embedded Python-3-incompatible packages that would need to be reworked. If you want to participate in that effort, you can have a look at wiki.openstack.org; there's a page that gives an overview of all the Python 3 related issues we are currently facing. There are also blueprints currently being implemented for porting Nova, Neutron and Keystone to Python 3 completely. I'm not so sure about when I'm going to do the switch in the Debian distribution. I will probably wait until I have enough projects fully ported to Python 3 so that I can do a complete switchover; then you wouldn't have to have both Python 2 and Python 3 libraries installed on your system, which would be some overhead. That's the main idea. Another thing that happened upstream is what we call the Big Tent. I think it started with a blog post by Monty, am I right? So he wrote what he thought... Before the Big Tent, we had the concept of the integrated release, meaning that we had a subset of projects that were integrated, security supported, and scheduled to have release dates together. Then, as new projects came by, we thought it was more and more important to include everyone. So Monty Taylor wrote that post about having a big tent where everybody would be welcome to join, everybody caught up on the idea and thought it was the correct thing to do, and now we are effectively welcoming everyone, even maybe projects that are not as mature as the others. So what's happening is that we have a set of tags telling you the state of a project: for example, does it have stable releases? Does it have security support? Things like that.
That's what I, as a package maintainer, look at to see if a project is OK enough to reach the Debian archive. Just checking the time. Oh, stupid stuff. So the effect now is that there are so many new projects that came by. Here's an overview of the packages that are not yet in Debian. In fact, I already worked on most of them, like Congress, Zaqar, Barbican, Manila; I already have some Git repositories there for them. Maybe I should give a bit more detail about them. Congress is policy as a service, as in general policies; it doesn't restrict you to any particular kind of policy. Though I could give you an example: firewalls. You could have some closed firewalls as a policy. Or, I don't know, "you've used that much bandwidth, so after some time we'll restrict you to a slower throughput", things like that. It has a policy engine in it that will compute this as a general thing, with its own language. Zaqar is queuing as a service, so it's easier to describe: it's just like RabbitMQ as a service, mainly. Mistral is workflow as a service, so it's about scheduling things in the correct order. Barbican is, I believe, in very good shape right now, and it enables you to store critical things for crypto in its secure storage; for example, it's a good place to store your private SSL keys. And then there's Manila, shared file systems as a service, which, as you can read, describes itself. So that was the past, what happened over the last year since the last DebConf. Now I need to tell you about what's going to happen. Previously we had integrated releases and point releases, meaning that after a stable release of OpenStack happened, we had scheduled point releases every two or three months.
Okay, so the release team of OpenStack decided that they would no longer do that, meaning that there is still a stable branch maintained in upstream Git, but it's up to me as a package maintainer to decide when I release from it and what I want to call it. The solution I've adopted so far is to name it after the date when I do it, plus some kind of Git SHA or something. It makes version numbers that big, but that's okay, I guess. The other thing is that I'm not so sure about when I should do a point release and upload it to Debian. But that's the way it is; I have to deal with it. Then, more and more projects are releasing out of sync, meaning that they decide by themselves when they want to release a new version. That's really, how can I say, not easy for me to manage, because I have to decide by myself which versions of which components are going to work together. For Swift it hasn't been a pain point, it has always been done like that. But for things like Ironic, I have to make sure it works together with the version of Nova. The idea was that they would still have versions in sync with the rest of the projects, but they don't really have to do that. We'll see how it goes, and hopefully it's going to work well. Another thing that is happening, a bit more on my company's side, which is Mirantis, is that we are doing a derivative of CentOS and Ubuntu, which we call MOS, as in Mirantis OpenStack. It used to be a derivative of Ubuntu, so basically I was doing the Python module dependencies in Debian, they would be synced into Ubuntu, and my company would pick them up from Ubuntu and release MOS with them. After a few meet-ups, we decided that they would pick things up directly from Debian when they can, while still supporting Ubuntu. So that's going to happen.
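A minimal sketch of the date-plus-SHA naming scheme I just described; the upstream version and commit SHA here are made up for illustration:

```python
from datetime import date


def snapshot_version(upstream, sha, when=None):
    """Compose a Debian version for a Git snapshot of a stable branch:
    last upstream tag, plus snapshot date, plus short commit SHA,
    plus the Debian revision."""
    when = when or date.today()
    return "{0}+{1:%Y%m%d}.git.{2}-1".format(upstream, when, sha)


# Example: a snapshot of the Kilo stable branch taken on 2015-08-15.
print(snapshot_version("2015.1.1", "3f5a9c1", date(2015, 8, 15)))
```

Since the date component always grows, each new snapshot sorts as a higher version than the previous one, which is the property the scheme needs.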
So effectively, after a few of these iterations, we'll have support for Debian directly in MOS, even though that's not what we are selling to our customers yet. Another thing that happened: during the last Vancouver summit, I had some discussions with Canonical about how we could improve our relations and work more together on the OpenStack packages. This kind of discussion has been ongoing for two years, and I've always pushed for more collaboration with Canonical. I'm not saying they don't collaborate, because they do want to do some collaboration, but there are still some blockers to full collaboration and to having the same packages in both distributions. For the Juno release, I did all the package dependencies that they used. For the Kilo release, it was done separately, because jessie was frozen and therefore I couldn't upload directly into sid without overwriting both Juno and Icehouse. So when jessie was released on the 25th of April, I used the five days before the release of OpenStack to upload all of Kilo, but by then Ubuntu had already worked out those dependencies. For Liberty, the very good thing is that we decided we would work together on these dependencies and then... Can I use that? And then the Ubuntu people decided that they would start maintaining these dependencies directly in Debian experimental. Why experimental? Because, again, I don't want to overwrite Kilo, which is the current stable release of OpenStack. So I upload all of Liberty into experimental. I have to do that because many packages go through NEW, so they need to be in Debian before Liberty is released upstream. That is the good thing about the discussion: Ubuntu does work in experimental together with me. And during this discussion, we decided that we would do the packaging on Stackforge. Who here doesn't know what Stackforge is? Okay, a few.
In OpenStack, we have github.com/openstack/something, and we also have Stackforge, which is a forge where we put things that are not completely ready yet. The idea was to put the packaging of the server packages, like Nova, Neutron, Cinder, all of that, into Stackforge, and use the upstream Gerrit to gate the packaging artifacts there. Canonical wanted to do it, and then, after I opened that thread on the upstream mailing list, they retracted that idea and decided they wouldn't do it anymore. Though my proposal was already ongoing, and people from Red Hat caught up on the idea and moved what was their OpenStack packaging into the /openstack namespace. Since the Red Hat guys are doing it, I don't see why we shouldn't do it in Debian, so I'm still pushing for it. The way it works in OpenStack is that you send a patch to the governance Git repository, and the technical committee approves the new project or not. So I'm very happy to announce that on the 13th of August, so during this camp, this project was approved by the upstream OpenStack technical committee. So effectively we'll be able to start doing that work. Unfortunately, it's mostly some people from Mirantis who are willing to work on it, and I hope there will be others; I would very much welcome people from HPE to work on this upstream OpenStack packaging effort. So what does it mean to do upstream packaging? Currently, what I do is that I have a Git repository on Alioth, which has a receive hook that sends things to Jenkins, which builds the package for the backport in jessie. What it means to move upstream is that I get mostly unlimited computing resources; those are the words of the infra team at the Vancouver summit. So it means I'm going to be able to do a lot, a lot more testing.
For example, a full deployment of OpenStack using the packages on every commit that we do on the packaging Git repositories. It's going to go on review.openstack.org, meaning that we're going to use Gerrit, and anyone will be able to propose patches without fear of breaking anything. It also ends the story of "should I grant ACLs for writing to the Git repositories or not?". Now everybody is going to be able to provide patches, they are going to be peer reviewed, so we don't have this security problem anymore. Also, I hope I'll be able to do some more testing with piuparts, lintian and things like that. In fact, upstream already does this kind of test with what they have internally: for example, Gerrit, no, sorry, Zuul and Nodepool are already built this way upstream, as Debian packages for trusty. Because we agreed to work with Canonical on the OpenStack dependencies, what is going to move to this review system is only the server packages and not all the dependencies. By the way, that doesn't really matter, because I see more people interested in contributing to those than to, I don't know, the oslo.utils package that nobody really cares about. As long as it's there, there's no controversy about what functionality it should bring or not; it's just a library. Also, probably at some point we'll be able to introduce some non-gating tests for upstream patches, to make sure that a patch doesn't break whatever is packaged, because sometimes patches do not break the gate but they do break packaging. So we also hopefully improve quality and avoid breakage in the long term. And another long-term goal is to be able to package from trunk, meaning from the master branch of the Git repositories. Currently I package every beta release of OpenStack, meaning that as a package maintainer I get a new version every two months or something like that.
Like the next beta version of Liberty, due early September. Another thing that would be nice would be to have Gerrit packaged in Debian. I already talked about that during a session about continuous integration for packages. As I said during that session, two weeks ago I didn't know anything about Maven packaging in Debian, and indeed it's... how can I say it in nice words? I'm not a fan of it. But I still want this to happen. I'm sure I'm not the only one who wants to have Gerrit in Debian, but I'm not sure I won't be the only one working on it, and I have to warn everyone that if I'm effectively the only one working on this packaging, I will probably give up. There's already Johan; Stan, where are you? Over there. They helped me a lot in understanding things for doing Java packaging. If I can have, I don't know, two more volunteers to do the Debian packages, then maybe we can have something in a not-so-long amount of time. So, we do need Gerrit packaged. Let me check the time. Where was I? We do need Gerrit within Mirantis for our own infrastructure, because currently what we do, everyone, even upstream, is just use the WAR file from Gerrit upstream and install Gerrit with that. Obviously that's not what we do in Debian: we want every Java library to be packaged, and also the real dependencies, meaning that for packaging Gerrit we first need to package Buck, which is the build system that Gerrit is using. By itself that's already a lot of dependencies, right, Johan? And once we have Buck, then we have to do Gerrit, which is again maybe as many packages or more. Take a mic, take a mic. So we started working on Buck. Buck uses Ant for its build, and most of the dependencies are already in Debian, I think; there are a few missing. Jetty is the most complex one, but after that I think we can package Buck. And hopefully after packaging Buck, we have a straight line to package Gerrit.
Gerrit also has a lot of dependencies already available in Debian, so hopefully it will be okay. Feel free to get in touch with me directly if you want to join that effort: I'm zigo@debian.org. Also with me... Can you tell your email address? "Yes, it's stan.iujin@gmail.com. I know a lot about Java, but I don't know a lot about Debian packaging, and especially Java packaging in Debian. So yeah, thanks." So the idea of packaging Gerrit is to have it for me at Mirantis and upstream in OpenStack. And then the idea is also to have it in Debian: we already have something like dgit to interface the whole Debian archive with Git. What we could do, if we had a Gerrit package, would be to interface Gerrit with dgit, for example, or something else, but dgit seems like a nice idea. Anyone could submit a patch against a package; there would be a CI that would run a build of the package, piuparts, lintian and such, with a voting gate like we have in OpenStack, and then whoever is the uploader or the package maintainer would be set as the core reviewer for the package and would be able to approve the patch or not. We already have the list of core maintainers in Gerrit. If you already use Gerrit then you know; is there anyone who doesn't know what a core reviewer is in Gerrit? Can you raise your hand if you don't? Okay, so mostly everyone knows. So that's one of the big reasons motivating me to do it. Another thing I worked on is porting MOS and Fuel to Debian. Fuel is a web interface that helps you provision an OpenStack deployment. It's mainly maintained by Mirantis, but it's also a community project maintained in upstream OpenStack, and in fact we're pushing it to be more and more of a community project. Basically, you have a discovery bootstrap image that the computers PXE-boot on.
This image reports the type of CPU, the amount of memory and the number of hard drives; you get that reported in your web interface, and then you can select the compute nodes. I have a few screenshots to show you what it is. First you create a new cloud deployment with some values you can select: here, for example, you want to use VLAN or GRE, and newer versions also have support for VXLAN and so on. In the top left corner you can see what the discovery bootstrap image has reported. In my case, in the Debian port, it's using Debian Live: instead of the huge initrd that is currently used in Fuel, I use Debian Live with the standard Debian kernel, which then fetches the SquashFS, and that is so much faster than a huge initrd. At the top you can see the list of nodes, then you can select them and assign a role, which you see at the bottom of the screen. For example, you select one machine and say: this one is my cloud controller, or my compute node, or my storage node, something like that. So that's basically what Fuel is, and I've tried to port it to Debian. Unfortunately this isn't really on my company's agenda, and I'm not pushed to do it in my everyday job, but I still hope to be able to provide it in Debian, because OpenStack is still very hard to install, and having this would help end users a lot. So, that's about it for my presentation.
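As an aside, the Debian Live PXE boot I described for the discovery image boils down to a pxelinux entry along these lines (the server address and paths are hypothetical):

```
# pxelinux.cfg/default (sketch): boot a small kernel/initrd pair and
# fetch the SquashFS root over HTTP instead of shipping a huge initrd.
DEFAULT bootstrap
LABEL bootstrap
  KERNEL vmlinuz
  APPEND initrd=initrd.img boot=live fetch=http://10.20.0.2/bootstrap/filesystem.squashfs
```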
So, if you want to contribute, I suggest that you first read the page at openstack.alioth.debian.org, that you join IRC and register to the mailing list. Then, if you want to help, one thing you can do is either go through the list of bugs and try to fix some, or package something new. Like I already mentioned, for example, I haven't worked on Mistral at all, so that would be a good exercise if you want to join; afterwards you will understand how everything works. Though I have to admit, Andreas Tille did a "mentoring of the month" with someone he helped to package Manila, and the learning curve is a little bit steep for the first package. But once you have done one, it should be okay, because there are some specific things, like packaging an OpenStack server, that you don't really see in other packages. Then, because there are lots of security fixes, having help backporting security fixes to jessie would be very helpful; if someone here in the room wants to take care of this, I would very happily delegate that role in the team. Another thing you could work on is joining docs.openstack.org and contributing to the Debian install guide. And then finally, we are hiring at Mirantis. I have a few candidates, but basically it would be either someone who would work on upstream code, someone who would work with me on Debian packages directly in Debian, or somebody working with Puppet for the deployment. There are many roles, but me, I'm mostly interested in finding somebody working with me on packaging OpenStack in Debian. Anyway, I'm finished, so I'm open for questions. There are 10 minutes; I was hoping to leave more time, so maybe 10 minutes will be short.

Q: Actually, the slide about the documentation and workflow still has you building Debian packages against the private Mirantis Debian repo. Where's the updated documentation about the upstream OpenStack Gerrit workflow?

A: The OpenStack Gerrit workflow is not there yet; what you see on openstack.alioth.debian.org is for maintaining the packages in Debian, not in Mirantis. Let's talk about it afterwards. All right, are there any other questions? Would anyone be interested in having a BoF later on? If you are interested, please raise your hand. 1, 2, 3, 4, 5, okay, enough. So I'll add a session on the blackboard in the lobby then, or maybe we can decide just after this when to do it. Any question?

Q: This might be a little bit naive question, but assuming I would start packaging, how do I test OpenStack packages? Should I have many computers and build my own cloud environment, or should I do something else?

A: Yes, you can do that, though every OpenStack package has a set of unit tests, and a big part of the packaging work is also making sure the unit tests run properly. Once you have done that, you can run Tempest; do you know what Tempest is? Tempest is a set of functional tests. In the archive there's a package called openstack-deploy, and these are scripts to deploy a full OpenStack cloud inside a single machine, using QEMU and not KVM. That's what I use after a release to make sure that OpenStack is functionally working. I'm not gating every patch on that, but I hope it's going to happen eventually. So currently what I do is package everything for a release, run that test, and if I don't have too many failures I consider it passed, and then I say: okay, I release, say, OpenStack Kilo. You could do all of that in Debian: just look at the sources of openstack-pkg-tools and openstack-meta-packages, and openstack-deploy, which is in openstack-meta-packages, and you'll find out. Any other question? Who in the room has already deployed OpenStack?
Oh, I'm surprised, nice. Who here in the room has deployed it on Debian? Okay, nearly the same amount. So maybe you can tell me what kind of pains you had using the packages I've worked on; I'd be very happy to hear about that. It's not a question, just some feedback; I'm sure there are loads of issues. No one wants to tell about any problems you found? We still have plenty of time, so some feedback, please. Looks like nothing, so I would say thanks again... ah, there's a question.

Q: Hello. Back then we didn't use the Debian packages but the Ubuntu packages, and the debconf questions of the Debian packages were quite annoying.

A: Why don't you use the non-interactive mode, then?

Q: Afterwards we told it: okay, just get out of our way, use the non-interactive mode. But then still, sometimes when you do an upgrade, questions I didn't have the opportunity to answer get asked again during updates. So that was one minor issue we had; we used dpkg-reconfigure debconf to always stay in non-interactive mode. It might be an option later on, but this was our first setup and it was quite a bit annoying.

A: One of the reasons I maintain all these debconf things is also for my CI, because I want to be able to deploy the full OpenStack without using Puppet scripts, and that's how I configure OpenStack to be able to do functional tests. One of the things I've been thinking about was having a file called /etc/openstack-debconf or something, and if it was there, then just disable all the debconf questions. So that could be something you'd want to use; it could be an option, yes. Anyone else?

Q: My question is: how fast can you install OpenStack with the current packages on Debian right now? For example, for someone reading the documentation and having general knowledge about networks and stuff, someone new to OpenStack who doesn't know anything about it and wants to set it up?
A: Assuming he has a basic understanding of cloud and how stuff works, I think it would take that person some amount of time just to make sense of the architecture: you have to understand which component is for what, and that can take some time reading the documentation. Once you have done that, maybe a few weeks to make sure you know what to set up and where; then you could try one day to set it up, and one week to debug it. The most painful part is always the network. I'm maintaining some meta-packages to try to help people know which components go where; for example, an openstack-compute-node package that deploys the neutron agent and the compute agent on the node. And one of my long-term goals is to make it super easy with Fuel.

Q: Okay, thanks. The Kilo installation guide for Debian upstream in OpenStack was removed because it's broken, because it doesn't work. Will it be back in Liberty, or is there still a lot of work? There's a set of install guides that the OpenStack project links to, and they link to SUSE, Red Hat, CentOS/Fedora and Ubuntu, but not Debian anymore.

A: In fact, how can I put it in a nice way... I'm not happy about what happened, because someone from Rackspace opened a bunch of bugs against the OpenStack documentation which were more rants than real actionable bugs I could act on, and finally it was removed because I wasn't watching the list; one of my colleagues was supposed to work on it, and then he was assigned by his boss to do something else. Stuff happens.

Q: Mainly I just want to know, for Liberty, do we have a good list of things that we need to get done?

A: When Kilo was released, they started the conversion of the documentation from the Maven-built DocBook XML to RST, and I couldn't work on it at all, it was frozen. Now the RST conversion has happened, I believe, for the install guide, and a blueprint for implementing the Debian documentation has been approved as well, so hopefully for Liberty we will have the Debian documentation for the install guide back again on docs.openstack.org. That's something you could help with, because you don't need much experience to try out the install docs and see if they work or not. You could also use the Juno documentation, which is still online, though of course it would need updates for Kilo and Liberty, which will never happen. So that's one thing that needs a lot of help and that everybody can work on; now it's RST format, not the DocBook XML Maven thing we used to have, so it's easy to work on. So our time has run over; one more question.

Q: Is there a preferred network stack in Debian OpenStack, or are you just using Neutron as the network component? Because I've read, I haven't experienced it myself, but I've read that there are scalability problems with Neutron, and I don't remember the name of the alternative software from Juniper.

A: That's one thing I didn't mention: in Neutron, we used to have all the drivers contained in Neutron itself, and now all the drivers for specific network vendors have been taken out of the tree and packaged separately. I haven't worked out all of them, because there are so many; I believe maybe 10 or 12, something like that. Neutron is the definitive stack that you should use, and then you can plug some hardware-vendor drivers on top. The scalability problem you are referring to is probably with GRE tunnels: as you get more and more compute nodes using GRE tunnels, you have a kind of full mesh of tunnels; with 9 nodes that would be 9 times 8, so 72 tunnel endpoints, and this can also bring a huge load on your CPU. But now we have VXLAN, and VXLAN is the recommended way to do networking these days with Neutron if you don't use a network-vendor driver. Off-the-shelf hardware with VXLAN is the best option if you want to go cheap; otherwise you can use Juniper, Mellanox or that kind of vendor with specific hardware designed for OpenStack.

So thank you, Dan, we will do the BoF, and thank you everyone for attending. Thanks, Zigo, for the presentation.
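A quick footnote on the GRE-mesh arithmetic from that last answer: in a full mesh, each of the n nodes keeps a tunnel to every other node, so the number of tunnel endpoints grows as n(n-1), which is roughly quadratic:

```python
def full_mesh_tunnel_endpoints(n):
    """Tunnel endpoints in a full mesh of n nodes: each node
    peers with the n - 1 others."""
    return n * (n - 1)


# The growth is what hurts: a small cluster is fine, a big one is not.
for nodes in (9, 50, 200):
    print(nodes, full_mesh_tunnel_endpoints(nodes))
```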