So many projects had improvements. Some of them are more important than others, and this is just my opinion of what I think is very important. I think IPv6 is now completely usable in Neutron, as well as QoS, which was kind of incubating a year ago. We now have DNS support through Designate, so when you create some IPs, they can be associated with DNS records. There are also some general improvements with OVS and such.

Nova got more drivers. One very important bit is that previously you had the choice of implementing cells or not. A cell is a group of physical nodes that you set up together, with its own Nova services installed in that cell; that way you can scale to a larger number of compute nodes. Now in Nova you have no choice: you have to do a one-cell deployment at minimum. That's what cells v2 is about. There's also new UEFI support, and a new disk scheduler that will check disk space before selecting where your VM will be spawned.

Ceilometer is the thing that provides metering of your deployment. It has been split into two smaller projects: Aodh, which is an Irish word, is for alarming, and Gnocchi takes care of the time-series database.

Keystone has now fully migrated to OpenStack Client, so you don't have a keystone command line anymore: you just use openstack and all the identity subcommands. The admin token is supposed to be gone, and now you use keystone-manage bootstrap, which is what the package is already doing.

Cinder got non-disruptive backups, so you don't need to shut down the VM before doing a backup. There are more general improvements too: many vendors have been pushing new drivers into the Cinder code base over the last year. And one very important thing is rolling upgrades, meaning that you can have an always-up Cinder service and upgrade it from one version of OpenStack to the next. Many projects are slowly moving to that.
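The bootstrap mechanism that replaces the old admin token is driven from the command line; a minimal sketch could look like this (the password, URLs and region are example values, not anything from a real deployment):

```shell
# Create the initial admin user, role, project and endpoints directly
# in Keystone's database, instead of relying on a shared ADMIN_TOKEN.
# All values below are examples.
keystone-manage bootstrap \
    --bootstrap-password s3cr3tpass \
    --bootstrap-username admin \
    --bootstrap-project-name admin \
    --bootstrap-role-name admin \
    --bootstrap-admin-url http://controller:35357/v3/ \
    --bootstrap-public-url http://controller:5000/v3/ \
    --bootstrap-region-id RegionOne
```

This is what the Debian package runs for you at install time, so you don't have to do it by hand.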
It takes a long time, but we already have Nova and Cinder doing so, and other projects like Heat are also catching up. There's also new Cassandra support in Trove. Another important bit is that Horizon used to carry lots of plugins by itself; now they have been split out of the tree into separate repositories.

So, in Debian: last year I told you about Barbican, Congress and Mistral, which I was working on packaging. I'm very happy to announce that, since a year ago, we have Barbican. Barbican does secrets as a service: basically, it keeps your private keys, and it can generate SSL certificates for your deployment and such. I was able to package Congress because there is now ANTLR 3.5 in Sid, so that could be done. Another interesting project, which I think is worth looking at, is Watcher. It's been created partly by some public cloud providers. What it does is shut down some compute nodes if you don't use them enough: it will migrate your VM workloads to another compute node and then shut down the machine.

So that brings us to 22 services which are now packaged in Debian. Debian is the only operating system that contains as many services. Other distributions don't have, for example, Senlin, Zaqar or Watcher, which are new projects. And when these are available in Ubuntu, Ubuntu syncs them directly from Debian. So you will see some of them, but they are probably not fully up to date in Ubuntu.

Maybe if you were at the Python BoF earlier this week, you'll have noticed that we are pushing to get rid of Python 2. This is something that upstream is very aware of, and a lot of work has been done, especially by Victor Stinner, on porting all of OpenStack to Python 3. Currently we're at a stage where all the 19 Oslo libraries, the four development libraries, the 22 clients and six cross-project libraries, plus the 29 services, have all been ported to Python 3. When we say ported to Python 3, it means that all unit tests are passing.
Remaining is Nova, which is 72% ported to Python 3: about a quarter of the code base is still not fully working under Python 3, especially some unit tests. And unfortunately Swift is really lagging behind and not accepting Python 3 contributions fast enough; hopefully that is going to improve slowly. Another thing that is currently blocking is that we don't have functional testing using Python 3. Until that happens, I don't think it's reasonable to have Debian packages of OpenStack running Python 3. Yeah, go ahead.

Yes, so I repeat the question: he's asking what's blocking functional testing. The functional testing is a special test suite which is also written in Python, and it also has to be ported to Python 3. DevStack has been ported to Python 3, but Tempest not fully. And anyway, since Nova is not ported to Python 3, it doesn't make a lot of sense right now to start gating on functional testing using Python 3.

With regard to Debian, I kind of followed all of what upstream was doing, and I'm very proud to tell you that right now all the client libraries, and the other libraries too, are supported with Python 3 in Debian. Meaning that if you want to use an OpenStack deployment right now, you can write your code using Python 3 and the client libraries. But no services are currently running under Python 3 by default, because I don't think it's reasonable to do so until I can also test it myself using functional tests.

So the general plan for Python 3 support in Debian for these services is: first, as I've been doing for the client libraries, have every binary in /usr/bin installed on your system as both a python3-something and a python2-something, and then, in the postinst, select one of the two using update-alternatives. Once this is done, I'll be able to activate Python 3 functional testing by selecting the Python 3 alternative instead of the Python 2 one, and then test on that.
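The alternatives scheme just described could be sketched roughly like this, assuming a hypothetical service binary shipped as both nova-api-py2 and nova-api-py3 (these names are illustrative, not the actual package layout):

```shell
# In the package's postinst: register both interpreters' entry points
# for the same generic name, with Python 2 as the default (it has the
# higher priority).
update-alternatives --install /usr/bin/nova-api nova-api /usr/bin/nova-api-py2 20
update-alternatives --install /usr/bin/nova-api nova-api /usr/bin/nova-api-py3 10

# Later, a CI job could flip the machine over to Python 3 before
# running the functional tests:
update-alternatives --set nova-api /usr/bin/nova-api-py3
```

The nice property is that nothing breaks for Python 2 users: the default stays on Python 2 until the functional tests prove the Python 3 side works.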
So I hope that this way I'll be able to switch slowly to Python 3 without breaking anything for Python 2 users. Hopefully we'll get there within a year; that's what has been said on the mailing list. And after Stretch is released, I think I'm going to switch the services and daemons so that they use Python 3.

Now, every six months there's a user survey going on in the OpenStack community. I think there are a few things we can learn from it from the Debian perspective, which I'll try to sum up here. Many of the cloud users are using PHP, seemingly with LAMP stacks, or Java, but mostly PHP. So it's very important for the cloud that we have these running correctly. Mostly, OpenStack is used on premise, meaning that companies set it up in their own environment. This matches what we see at my company too: Mirantis has something like 70% of the global market share for doing OpenStack consulting for big companies, and most of our customers, if not all, are deploying in their own data centers.

A lot of people are using Puppet, then Ansible, then Fuel. We were very happy to see these numbers: Fuel is a web GUI that Mirantis created to deploy OpenStack, and it went from the seventh position to the third, so that's very nice for us. Because everybody is using Puppet, I also package the Puppet modules in Debian. Currently in Sid you can find the OpenStack Mitaka Puppet modules, so you can use them. Also, Puppet is now fully working with Debian, which was not the case a year ago: there were some problems because of differences between Ubuntu and Debian, which have been fixed in the Puppet manifests.

A lot of users are using Ceph, which is also not very surprising; this hasn't changed over the year. Ceph does block storage, mainly, for OpenStack. And on this slide you see which operating systems users are running.
So Debian is only at 3%, but with these numbers you have to realize that Mirantis uses Ubuntu server as a base OS, on which they run the packages which I created in Debian, on top of Ubuntu. So counted as a whole, the set of Debian users of the OpenStack packages is a lot bigger. And users have a variety of scales of deployment, from very small deployments of less than 10 nodes to more than 1000; it's spread quite evenly.

One big evolution: if you look there, the dark blue was production deployments and the very light blue was proof of concept. Over this last year we've seen a lot of people go from proof-of-concept deployments to production, so that's a very good thing. And it looks like Debian doesn't have as good an image as it could have: OpenStack users mostly still continue to use Ubuntu and don't even consider Debian. So there's some progress that needs to happen there.

So that's about it for OpenStack itself, its ecosystem and such. Now I have a proposal for the Debian infrastructure which I think is interesting for everyone. In order for you to understand this proposal, you've got to understand how packages are done in upstream OpenStack. First you clone a repository, then you modify it and commit as usual, and then you do a git push through Gerrit. When it's pushed, the review system takes it, and what happens from there? There's a bunch of tests that normally happen for Python code, but the same can also apply to Debian packages. So I've been trying to push for doing the OpenStack packaging directly on the OpenStack infrastructure, meaning that building the packages happens there and all the checks happen there. Zuul is the job scheduler: it will pick up the new patch proposal and then hand it to Nodepool, which will pick up a running Debian VM. On that VM, the build happens using sbuild.
So sbuild is installed beforehand — if we were doing that inside the Debian infrastructure, we could have it already pre-set up in the image — then the package is built, and feedback is given in the web GUI, or in the ncurses GUI if that is what you use. On top of just building the package, we could add two parts, adequate, check-all-the-things and such, so that we would make sure a proposed patch for a package is in good shape.

At what stage in that process do you run any of the unit tests for that package, or for packages that might be affected by the change you've made?

Yeah, so currently I'm not up to that stage yet; it's still a little bit experimental. What it does is just build the package, which runs its own unit tests at build time.

Right, so the in-build tests: are they the full set of tests for that package, or do you have a second set that are more intrusive or need more hardware support?

After I have built all the packages, there's functional testing that happens, but no, I'm not yet checking reverse dependencies and such. That's something that could happen if we were setting up the same kind of infrastructure in Debian, I believe, and that's really something I would enjoy seeing happen.

So once all the packages are built for a given release of OpenStack, I have a job that installs everything on an all-in-one machine. That's a first test in itself, because I make sure OpenStack is installable, and then functional tests are run: it really spawns VMs, creates networks and such. Once all of that has run successfully, it takes the unofficial backports repository that Jenkins has created and moves it to a second-stage repository which is marked as -tested. So effectively, right now, you can use the latest version of the packages that I created and be sure that they can be installed and that they work.
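Put together, the per-patch job described above would roughly amount to something like this on the throwaway VM (the package name and version are examples; sbuild, check-all-the-things and adequate are the existing Debian tools mentioned):

```shell
set -e

# Build in a clean unstable chroot with sbuild; the package's own unit
# tests run during the build (the package name here is an example).
sbuild --dist=unstable python-example-pkg_1.0-1.dsc

# Run the extra QA checks on the unpacked source and on the installed
# binary package.
( cd python-example-pkg-1.0 && check-all-the-things )
dpkg -i python-example-pkg_1.0-1_all.deb
adequate python-example-pkg
```

The point of doing this in CI rather than on a developer laptop is that every proposed patch gets built and checked the same way, in a clean environment, before any human reviews it.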
Have they been merged into the branch at that point, or are they still patches in Gerrit? When does the actual +2 from Gerrit into the branch happen?

Just right after. Once this has all run, you have a review process that happens, so anyone can review the patch. The patch can depend on another one — let's say, from the Debian perspective...

But the things you're building: are they real packages, or are they just artifacts, like what you were building in the previous slide?

So currently the depends-on mechanism only happens on the more general Python jobs, but it can be applied to packaging as well. Only the core reviewers can +2 a patch, like usual, and once the patch is approved, an automatic merge happens with a check job, and the package is rebuilt once more, to make sure nothing broke between reviewing the patch and merging it. The built files are pushed onto tarballs.openstack.org, and hopefully soon they'll be picked up by a job that pushes them into a Debian repository. All of that happens inside the OpenStack infrastructure.

What I would like to see happen is having all of that also available for every Debian developer. The idea would be to interface this with dgit, so that we would be able to take any package from the Debian archive, git clone it, and then push it for review. What we could do is have a kind of wrapper around dgit, so that it would get any package and then artificially add a .gitreview file, so that it would push it to a, I don't know, review.debian.org or .net, and then apply all of that.

To run these tests we would need something comparable to what we have upstream. There are a few ways to do that. We could set up an OpenStack infrastructure inside the DSA data centers; there have been talks about that already. There's some new gear coming to UBC, so probably there's going to be an OpenStack deployment somehow happening in DSA.
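The wrapper idea could be sketched like this — a hypothetical helper that generates the .gitreview file pointing at an equally hypothetical review.debian.net Gerrit. Neither that host nor the workflow exists today; only the .gitreview format itself is the one git-review actually reads:

```shell
# Emit a .gitreview file for the given source package, pointing at a
# hypothetical Debian Gerrit instance (review.debian.net is made up).
make_gitreview() {
    cat <<EOF
[gerrit]
host=review.debian.net
port=29418
project=${1}.git
EOF
}

# Sketch of the wrapper itself (commented out: dgit needs network
# access, and the Gerrit instance above does not exist):
#   pkg=hello
#   dgit clone "$pkg" && cd "$pkg"
#   make_gitreview "$pkg" > .gitreview
#   git add .gitreview && git commit -m "Add .gitreview for CI"
#   git review   # pushes the change to Gerrit for review
```

With a wrapper like that, any package in the archive becomes reviewable through the same gate, without the maintainer having to set anything up per package.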
And that deployment will maybe have other kinds of uses than just doing the packaging the way I described. So that's one option: setting it up ourselves. Or we could use third-party donated infrastructure, like they do in the OpenStack infrastructure. They use compute power from Rackspace, OVH — it used to be HP, but not HP anymore — and some others; there are other names in there: Blue Box, Bluehost and such. That's very nice, because that way we have many pools of providers, so we don't need to take care of redundancy ourselves.

Are these a single architecture? Are they all amd64, or are you looking at multiple architectures?

It's only amd64, though we could have the Linaro cloud join the pool. These are not donated hardware; they're actually donated public cloud compute time, so that kind of abstracts away from the hardware. I can't tell you exactly, but I would bet that pretty much all of it is amd64. There may be some i386... maybe, but I doubt it.

I have a question from IRC. "What do you think about the proposal of Zack at DC14 of work on the free-service concept inside the Debian project, due to cloud computing services' implications for the freedom of users and the power of the big companies to increase software as a service?" I didn't get it. Can you rephrase it in your own words? Not really. Can you maybe read it on IRC yourself?

I suspect this was Stefano Zacchiroli's talk. Yes, at DC14. So, one of the motivations I have for doing OpenStack packaging is that I refuse to accept that we rely only on things like AWS. When I hear that Debian is using AWS, I'm happy that they are providing us compute power, but I don't like that we are dependent on such non-free software. It's even worse than non-free software, right? It's a non-free service, which we don't even have access to. We couldn't run AWS even if we had its binaries.
So anyway, whatever happens, whether we use our own infrastructure or a third-party resource, we would have to set up Zuul, Gearman, Nodepool and Gerrit somewhere — probably at UBC.

I have a question, which is: I like the idea of automated CI/CD on Debian packages, but I'm not really sure we need Gerrit. That's a very culture-specific way of doing it.

I don't even mind if it's Gerrit or GitLab. There have been a lot of talks about using GitLab in the Debian infrastructure. As long as we can hook Zuul, Gearman and Nodepool to it, I'm fine with whatever code review software we use. I know that DSA is not so much against setting up Gerrit. I just know Gerrit; I don't know any other review software.

I think setting up Gerrit or GitLab with dgit is a super awesome idea. It would make contributing to Debian very easy, especially if it's Gerrit with the .gitreview file: anyone could just clone any package and send a patch super easily. It's super hard to send a patch right now, because you have to send patches by email, which people don't ever do anymore — so they are not used to contributing to Debian. With just a clone and git review, you have a patch. It's really good.

For all of that, the technology is already there: Zuul, Gearman, Nodepool, Gerrit. I have scripts to build on the upstream infrastructure, so we can reuse that. The only things that would need a bit of glue are dgit, whatever review software we use, and making it so that whoever is set in the Uploaders or Maintainer field gets the core-reviewer rights in that review software. Once we have that, we could have anyone from anywhere in the world propose a patch, have it up for review, have anyone review it, and have just the maintainer of the package hold the rights to approve it. Ultimately, if we trust that infrastructure enough, we could even have it upload directly to dak, if the FTP masters agree to it — which I'm not sure they will.
But yeah, we could do that. I would trust an infrastructure set up with sbuild and a clean chroot and such more than any random DD just building on his laptop, maybe polluted with all sorts of software. But that's just me for the moment; maybe this kind of thinking will slowly spread in Debian, I don't know. One more question?

Yes, but here you're talking about an upstream patch.

When I was saying a patch — maybe I didn't express it correctly — I mean any packaging modification is a patch as well. Let's say you edit the short description of a package: that's a patch too. It can go through the patch review process.

Yeah, so if it's a packaging patch, that's fine, that's good stuff. But it's not going to be clear from this kind of system that you're limiting the scope to the debian/ directory, because there's nothing in here that actually requires such a limitation, so it would potentially be seen as artificial if you did put that limitation in. So you definitely need the link to upstream via the maintainer, because you need to make sure these changes actually do get upstream. Otherwise we end up with loads and loads of patches that aren't going anywhere, and that would be a backward step. So it needs to be looked at: how do we get these properly integrated?

Okay, so I think that's it. We already started the Q&A; if there's more, I'm happy to answer your questions. He asks if I can read it directly on IRC. Yeah, okay, I can try, we have enough time. I'm not sure I can even start IRC at this resolution. What's the channel? debconf16-10? Can you read the question? I have the backlog, we can follow up if you want. Okay, I'm lagging, I'm not really up to date here. So the question is: what do you think about the proposal of work on the free-service concept, due to the cloud computing services' implications for the freedom of users and the power of big companies to increase software as a service? So I'm not very much — I'm not more opinionated than others about software as a service.
I also think it's a very evil way to provide software. What I'm fighting for here is to have infrastructure as a service, which is the foundation on which to build software as a service on top. I very much would like software as a service to also be free software — running things like ownCloud and such. But first things first: let's build infrastructure as a service, and then consider software as a service. There's not enough of this available, probably. Okay, the other question.

A reply to that one: a lot of the conversation about freedom and cloud makes the assumption that cloud means a public cloud, where someone else owns the hardware, someone else owns the software, the software is all proprietary, and you have given up all control. But cloud is really just a level of abstraction in infrastructure, and if it's all free software all the way down — the infrastructure is free software, the application is free software, and it's running on your own hardware — then that makes it no different from any other, more traditional free software application running on a server. And that's the big difference in terms of freedom. So don't assume that cloud means AWS, and you'll have a much better understanding of what OpenStack especially is trying to get at: making it cloud, but free software all the way down to the very bottom, and still respecting your freedom. Thank you.

Okay, any more questions? No, there's only the one; he repeated the same question multiple times. Okay, then I think we're up for the conference dinner.