OK, good. Thanks for coming. Here's the program for these 45 minutes. I've been a Debian developer since 2010. I've always been interested in hosting solutions; I built some of my own, and I run a hosting business. I've been doing the packaging of OpenStack since the beginning, starting with the very first releases. By now it has grown to more than 150 packages, and most of the Python dependencies are imported into Ubuntu. I'm one of the few people who is paid to do this packaging, and I'm very happy with what I do. I'll start with an update on OpenStack. I already gave two talks on it in previous years, and I'm not going to repeat them: if you want to understand what OpenStack is for, you can go to the archive and watch the videos. Today I want to give you a short update on the new features of OpenStack. The first piece of bad news is about XenAPI, which I packaged for Wheezy. It was in Wheezy, but we had to remove it because of RC bugs: it didn't support newer OCaml versions. What happened is that Citrix changed course with their XenAPI solution, and the version we had couldn't be kept going. They know this is not a great situation, and they are working on it. So, I'm afraid, for Jessie there will be no XenAPI support: OpenStack in Jessie has no XenAPI support. The second piece of bad news is that it's difficult for anyone running the Wheezy version to upgrade to the version that will be in Jessie. Skipping releases when upgrading is not supported upstream, and there are four versions in between, so you would be on your own. It's not that upstream doesn't want to address it; it's that everyone in the cloud world upgrades every six months. To give you an idea, here are the OpenStack releases: you would have to go Essex, Folsom, Grizzly, Havana, Icehouse, upgrading one version at a time. Jumping straight across is not possible.
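The one-release-at-a-time constraint can be made concrete with a tiny sketch. This is a hypothetical helper, not a real OpenStack tool; the release list is the real sequence mentioned in the talk.

```python
# Sketch (not an official tool): compute the step-by-step upgrade path
# that upstream supports, one release at a time, no skipping.
RELEASES = ["essex", "folsom", "grizzly", "havana", "icehouse"]

def upgrade_path(current, target, releases=RELEASES):
    """Return the ordered list of releases you must pass through."""
    i, j = releases.index(current), releases.index(target)
    if i > j:
        raise ValueError("downgrades are not supported")
    # Each adjacent pair in the result is one supported upgrade step.
    return releases[i:j + 1]

# Going from the Wheezy release (Essex) to the Jessie one (Icehouse)
# means four separate upgrades, not one.
path = upgrade_path("essex", "icehouse")
```

So a Wheezy-to-Jessie jump hides four full upgrade cycles, which is exactly why nobody tests or supports it upstream.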
Hardly anybody runs that in production. People expected those upgrades to work, but they didn't. You can only upgrade one version at a time; upstream said this would be addressed, and two years later it still isn't really. What was done is that you can get to Havana by going through the intermediate releases. OK. And there are a lot of changes along the way. Yes, that's right, there are also renamings: if you used nova-volume before, it's now Cinder; if you used nova-network, you now use Neutron. Even if there are ways to convert your database from one to the other, you have to change projects, so the upgrade will be complicated. Enough with the bad news; on to the more interesting parts. There are new projects in incubation, which I've listed here for those who thought we were slowing down. There's Trove, which is database as a service: you call an API and you get a database you can use. There's Designate, which is DNS as a service. There's Ironic, which is bare metal as a service, where you can provision hardware on demand: you use a dedicated physical machine as if it were a virtual machine. And there's TripleO, which is OpenStack on OpenStack. All of these projects are very new, except perhaps Trove, which may already be ready for production. The last three are in testing and sid, but I'm going to ask for their removal from testing, because it would be difficult for me to provide security support for them. If you want to use them, it's possible, but don't be surprised. [Question:] Do you have Ironic packages? Yes. [Question:] Will there be security support for three years? I'm going to ask for removal of the Ironic packages; I'll see what I can do there. OK. I also built... well, a while ago a colleague of mine told me: "OK, why don't you write a script to build Debian images?
" So I started doing that, and in the end it's a package you can download. I tried to design it to be very small, so that it's easy to hack on: it's about 400 lines of script, with hooks you can plug into. I used it to provide the official Debian images on the HP cloud, which is a KVM-based OpenStack cloud. My plan is also to have it used in the CD creation process. That's Steve McIntyre: he told me he will try to add a call to the Debian image-building script in the process that creates the Debian CD images. The goal would be to have a qcow2 image for the cloud at the same time as the CD images. Everything works fine there, except that in OpenStack you need to be able to log in on the serial console, ttyS0, and on the normal console at the same time. There has been a bug about this since 2005, with a patch attached since April 2006, to allow logging in on both consoles at the same time. I refreshed that patch to get this supported, and it has now been in sid for four days. With the help of the release team, I hope it will be adopted; maybe, we'll see. I'm not sure about that, but I will try. OK, new features. Nova has support for RBD. RBD, sorry — who knows Ceph? OK. Ceph is distributed storage designed to be installed on many servers; it provides object storage and block storage. It is, according to many people, myself included, one of the nicest ways to provide storage for an OpenStack cloud. Nova, which is OpenStack compute, lost some patches that were needed to fully support Ceph, and since Havana I've maintained those patches by hand in the Debian packages for Nova. They're still in the packages for Icehouse, which are currently in Jessie. And with Icehouse, from a Ceph cluster you can boot straight from the image without going through a Cinder volume, which speeds things up a lot.
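The hook mechanism of a small image-building script like the one described above can be sketched in a few lines. This is an illustration in Python, not the actual shell script from the talk; the hook names and messages are invented.

```python
# A minimal sketch of a hook mechanism: every executable file in a
# hooks directory is run in sorted order, the way a small
# image-building script might do it. Hook names here are made up.
import os
import stat
import subprocess
import tempfile

def run_hooks(hook_dir):
    """Run each executable file in hook_dir in sorted order; return outputs."""
    outputs = []
    for name in sorted(os.listdir(hook_dir)):
        path = os.path.join(hook_dir, name)
        if os.path.isfile(path) and os.access(path, os.X_OK):
            result = subprocess.run([path], capture_output=True, text=True)
            outputs.append(result.stdout.strip())
    return outputs

# Demonstration with two throwaway hooks in a temporary directory.
hook_dir = tempfile.mkdtemp()
for prefix, msg in [("10", "install-extra-packages"), ("20", "enable-serial-console")]:
    path = os.path.join(hook_dir, "%s-hook.sh" % prefix)
    with open(path, "w") as f:
        f.write("#!/bin/sh\necho %s\n" % msg)
    os.chmod(path, os.stat(path).st_mode | stat.S_IXUSR)

results = run_hooks(hook_dir)
```

The point of such a design is that the core script stays tiny (a few hundred lines), while site-specific customization lives entirely in the hooks.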
We now have per-tenant quotas: before, we could only set global quotas, and now a given tenant can have so many instances, so much RAM, and so on. We have Docker support. Another very nice thing is host aggregate filters: if you have compute nodes with a given type of hardware — say SSDs, or SATA hard drives — you can filter and say "I want this instance to run on that type of compute node." So, yes, I've compiled all the new features here; it covers two releases, starting from Grizzly, because my last talk was a year ago. In Grizzly we had cells, but they weren't really usable; now they're in good shape and useful. A cell is a group of compute nodes: in large deployments with thousands of nodes, it's now possible to divide them into cells so that you get a distributed workload. Horizon now supports many more of the projects. It supports the auto-scaling features, so you can say "when my web server gets too many requests, I want more machines." There are many ways to express when you need more VMs; it's up to you to define that, and you write it in a Heat template. Horizon has support for Heat, and for Ceilometer, which does the metering in an OpenStack cloud. There's Trove, database as a service, which you can also manage in Horizon. On the Neutron side — Neutron does the networking — there are a lot of new features like VPN as a service, firewall as a service, and load balancer as a service. Firewall as a service is not security groups: it sits inside your own little network that you run in the cloud. Load balancer as a service is kind of HAProxy as a service, so that instead of running HAProxy in your own instance, you run it on the cloud itself and it does that for you. And there are some interesting things that happened in the network core, I'd say.
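What an aggregate filter does can be shown with a toy example. This is only an illustration of the filtering idea: real Nova scheduler filters are classes inside the scheduler, and the host names and metadata keys below are invented.

```python
# Toy sketch of aggregate filtering: keep only the compute nodes whose
# aggregate metadata matches what the flavor asks for. Hostnames and
# metadata keys are invented for illustration.
def filter_hosts(hosts, wanted):
    """hosts: {name: metadata dict}; wanted: required key/value pairs."""
    return sorted(
        name for name, meta in hosts.items()
        if all(meta.get(key) == value for key, value in wanted.items())
    )

hosts = {
    "compute-1": {"disk": "ssd", "rack": "a"},
    "compute-2": {"disk": "sata", "rack": "a"},
    "compute-3": {"disk": "ssd", "rack": "b"},
}

# "I want this instance on SSD-backed compute nodes."
ssd_hosts = filter_hosts(hosts, {"disk": "ssd"})
```

The scheduler then places the instance on one of the surviving hosts, which is how you steer workloads onto SSD versus SATA nodes.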
So a year ago, the only solution we had to provision a cloud without some specific hardware was to use GRE tunnels. GRE tunnels are made between all the compute nodes: if you have 10 compute nodes, you end up with a mesh of tunnels between all of them. It's quite CPU intensive, and if you have a lot of network activity it's hard for the hardware to keep up. So there's a new feature, available since Linux 3.12 and really usable since 3.13 and on, which is VXLAN. With VXLAN you take a packet and add roughly a 50-byte header, supported by the kernel itself, and in there you have the VLAN information. It's quite new, even in the kernel, and it's really nice because you don't have all the overhead that you would have with GRE tunnels. The thing is that your hardware needs to support the size of the packet plus these 50 bytes: if you use jumbo packets of 9K, it has to carry 9K plus those 50 bytes. So when you buy hardware, you have to make sure it has that capability; most recent hardware, from Cisco and such, has it. For Ceilometer, we now have alarming, so it can tell you about issues in your own cloud. Glance has multiple-location capabilities for images, and it has a Cinder back end, so that you can store an image on a Cinder volume.
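The "roughly 50 bytes" figure checks out if you add up the encapsulation layers, since VXLAN runs over UDP. Here is a back-of-the-envelope sketch (assuming an IPv4 outer header, which is where the usual 50-byte figure comes from):

```python
# VXLAN encapsulation overhead, term by term.
OUTER_ETHERNET = 14  # outer MAC header
OUTER_IPV4 = 20      # outer IPv4 header
OUTER_UDP = 8        # UDP header (VXLAN is carried over UDP)
VXLAN_HEADER = 8     # VXLAN header carrying the 24-bit VNI

VXLAN_OVERHEAD = OUTER_ETHERNET + OUTER_IPV4 + OUTER_UDP + VXLAN_HEADER

def required_mtu(tenant_mtu, overhead=VXLAN_OVERHEAD):
    """MTU the physical network must carry for a given tenant-side MTU."""
    return tenant_mtu + overhead

# 9000-byte jumbo frames inside the tenant network need 9050 on the wire.
wire_mtu = required_mtu(9000)
```

That is why the physical switches have to handle slightly-larger-than-jumbo frames, and why you check for this when buying hardware.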
Another thing is that I worked on the documentation: what you see on docs.openstack.org is the official documentation, plus some patches that I added for OpenStack in Debian. I don't maintain the wiki much, though there may be some information you'll find interesting over there. The process in OpenStack is very open, so if you want to contribute to the documentation, you're very welcome to do so. Checking the time... OK. So I already explained what Ceph is. As I said, I continue to take the patches from Dimitri, who is in the room and contributes the backports of the patches for Ceph in Nova, so it should still be working even though it's not supported upstream. This year Red Hat bought Inktank, so we have good hope that it's going to be maintained in a much better way, meaning upstream, with some testing in the gate of OpenStack, so that we make sure it always keeps working in the stable branches. We already have support in DevStack. DevStack is a huge — how can I put it in nice words — not-so-clean set of shell scripts that sets up OpenStack locally, and it's used by what we call the gate; the gate checks that your patch doesn't break the rest of OpenStack. Thank you. So that's for development and testing. We now have Ceph support in DevStack, which was the first step, so that DevStack is able to deploy it. The next step is going to be adding Ceph functional testing in DevStack; once we have that, the world is great, the sun is shining, and Ceph will be supported forever in OpenStack — and not broken, yeah, that's the point. Oh, I'm sorry, I should have replaced the question mark by your name, sorry. We have also solved the problem with RBD support in QEMU, so hopefully that problem is solved for Jessie. RBD is the RADOS Block Device — I can never say it; thank you. So hopefully QEMU has the support for it in Jessie. I've been running the Tempest functional tests on OpenStack for about a year, to prove that the
packages are working correctly. I'd like to thank all the people from eNovance, especially Emilien, who helped me a lot with all this testing, and others too. What we have in Wheezy — in my Wheezy backports repository — is tested. I hope to be able to test it for Jessie before the freeze, to make sure everything works as expected; it should be roughly the same, because these are the same packages, so I do not expect big surprises. The security support for Essex has been kind of sloppy; I'm really not proud of what I did in Wheezy. Though there was a consensus that it was not really for a public cloud provider, and for private use I don't think you were affected by any of the CVEs that were reported, so it's kind of fine — but I don't want this to happen again for the Jessie release. The good news is that Red Hat announced three years of security support for Icehouse, so they will produce patches for it. I hope to be able to maintain these for the life of Jessie, but I'm not 100% sure people will use Jessie and Icehouse. So it's up to you to tell me if you think you will use that; I very much welcome your comments and feedback about it. If everybody tells me "no, I won't use what's in Jessie, I'll always run the latest version of OpenStack," then it's not worth the trouble and I can ask for removal. So you tell me. OK. A colleague of mine told me: "Oh, you did so much Python packaging, you should share — you must know a lot about it." The thing is, I don't pretend that I know so much about Python packaging, even if I do a lot of it. I think that people like Barry or Piotr, here in this room, must know a lot more than I do. But I think we don't do enough sharing of packaging experience in Debian, and it's probably one of the things Debian people are the most interested in, which is why I wanted to share with you the way I do things. So I hope that if you think I'm not doing it right, you will tell me, and if you are doing it another way which you think is more efficient, then please tell me as well. The
goal is to automate, because it's always the same kind of package that I do: OpenStack is made 100% of Python, it's very repetitive, so I try to find ways to automate that process. 100% of the packages of OpenStack come from PyPI — it's a requirement, you cannot add a new module if it's not on PyPI. So I created a small script, called deppypy, that downloads the DOAP XML from PyPI and extracts information from it using XPath on the command line; it's a small templating thing for me. After that I have a bit of manual work: I have to review the debian/copyright, of course, and make sure the build dependencies and dependencies are correct — I do that by grepping the imports, basically. And because I want Python 3 and the Sphinx documentation, which most of the time aren't set up, I take care of that, then manage the test suite, and that's about it. So in roughly an hour or so I can have a Python module packaged and working. You can have a look at that script, customize it, do what you want with it. Feel free to stop me anytime. [Question:] What's the reason for not doing that work inside the Python modules team? [Answer:] Because I use Git, and up to two days ago the team was using SVN; I use a workflow which is different. [Question:] And what's the reason for not using the existing Python helpers? [Answer:] I'm not familiar with the Python helpers — you mean to create the debian folder? [Question:] Yeah. Maybe the python-stdeb package does what you're trying to accomplish with this script: it takes a module from PyPI and generates a debian directory from it, so it's already there. [Answer:] It must be very similar to my stuff, you know; I started doing it very slowly, and then it grew, and then — oh, I have a script — things like that. So once I have that, I use the Git repository from upstream. I download it from GitHub — for maybe 95% of the packages I do, there is a Git repository — and I use the tags from the repository and just merge the tag, or that Git reference. Some upstreams don't
like me to do that, because they say that in their process of running "python setup.py sdist" they do things which I may miss. Sometimes that's right, but most of the time it's not, in my experience, and when there is such a thing, I can figure it out. I like this way because it's very efficient: I don't even need to think about a tarball — I know it's there, my Jenkins generates it from Git. I don't need to care where to download it, because I already have the Git repository, which is defined in my debian/rules file, so I just fetch the upstream remote and it's there. I don't have any trouble with pristine-tar, and I have one single branch which is self-contained and has everything. I'm aware that upstream may not like it, but I find it so much more efficient that it's very hard for me to use any other type of workflow. I will try, though, to use what we have decided in the Python team, and try to push more packages using pristine-tar and such in the Python team; we'll see how it goes. So, we had a discussion — in a place which I can't name — about the fact that we shouldn't treat the BTS as a to-do list. I'm on the opposite mindset: I try to lower the bug count as much as possible, in any way possible, because I have so many packages to maintain that if I don't do that, there are just too many bugs to care about, it overflows, and I can't manage it. Every two to three weeks I try to go through all of them and close as many as I can; in roughly two to three days I can kill, I don't know, 20 bugs. So I do sessions like that, and yes, for the moment it has worked. I hope I won't ever have too many bugs for so many packages, because I do maybe four or five new packages in Debian every week or so. I also had to deal with the fact that there's a long waiting time in the NEW queue. I have no magic solution for that; the only thing you can do is upload as fast as possible. In OpenStack there's the requirements repository, so I update often, check if there are new
things, and try to be as reactive as possible. I also insisted upstream that they don't add too many new dependencies just before the next stable release of OpenStack is out. The other thing I've been working on: I did lots of Python 3 work. There's a consensus in OpenStack that we want to move off Python 2, but there are some things keeping us there, like python-memcached, eventlet and more; hopefully we'll be there in a year or two, with Python 3-only things. So, like the rest of the Python team is doing, I'm trying to add support for Python 3 in all my packages, and hopefully we'll be at that point for jessie+1. Up to now, all the packages I maintain I also make backportable for Wheezy, which means I have Python 3.2 to support. That's kind of annoying, because there's the whole u"" unicode-literal thing which is not supported in Python 3.2, and upstream is not interested in Python 3.2 either, so I'm pretty much on my own; but I still find it easier to do it like that. For example, for python-babel I added a big patch for 3.2 — I'll be very happy to drop it as soon as we have Jessie out — but that's a bad example, because most of the time the patches are very, very small. Maybe a year and a half ago I didn't know about Python 3 at all, and as I do this I learn more and more about it; I still don't consider myself a Python 3 programmer, but I know about these things. So here's the kind of "cool" things you may find in a Python package: Python 2.2.1, I checked on the internet, was released in 2002, and you still see the same kind of compatibility cruft — you can just remove it. Some things I do in my Python packages: I find that doing it this way is nice, because you don't have to deal with the clean target. And here as well, instead of cleaning the egg-info, which always has differences from upstream and is very annoying, doing it this way is efficient: you just add that in all packages, and then you're never bothered again. I also disable some dh targets
to make the package building faster. I've had to deal with lots of version issues. Upstream sometimes says a package needs this version or that version, and when I send a patch upstream, they reply: "yes, but in our requirements.txt we have another version — why do you want to be compatible with that?" So I have to explain, teach them, that they are not alone in the distribution, that their package must integrate with the rest of Debian. Sometimes it works, sometimes it doesn't; I get many kinds of reactions, but mostly it goes all right if you take the time to explain. One very nice thing that happened is with python-migrate. It was maintained by a Debian developer who decided to stop working on it; it has been taken over by the OpenStack community, and support for SQLAlchemy 0.8 and then 0.9 has been added to it. So that worked quite well for that package: upstream has been very understanding with the SQLAlchemy support, and now there is SQLAlchemy 0.9 support. The person who is upstream for SQLAlchemy is now a Red Hat employee, and he is doing things specifically to support OpenStack in SQLAlchemy and Alembic, so the future is bright. Then there are other issues, like upstream wanting to use a new version of jQuery which is incompatible with what we have in Debian. They have been very understanding too, and they stick with the old version; there is going to be a new version for jessie+1, I guess. We are trying to push for Django 1.7, but as with many other packages in Debian, there are issues with it, so we'll deal with the jQuery part and get Django 1.7 into Debian. Also, a lot of upstreams like to do vendorizing. The word "vendorizing" means — yeah, Barry reacts like that, "this sucks," I agree — vendorizing is a nice way of saying "whatever version of any package, I put it in my source code." They embed a lot of other projects in their source code and just release it that way. If you do apt-file search
six.py, you will find many copies of six in many packages. So we still have some things to address in Debian, and explaining to upstream why it's a bad thing to do is kind of hard: they always have some wrong reasons to do it, and it has unpredictable consequences. I had one with wsgi_intercept which was kind of funny. First, they tried to use mechanize, which is in Debian but embeds a lot of other projects as well — I'm not even sure that one should stay in Debian — but that was dealt with and they removed it. Then we had a funny issue with requests and urllib3. wsgi_intercept is a piece of software that is there to test requests to the internet when you have no internet, for running tests. It uses requests, and requests embeds urllib3 in its source code. Yes, Barry — yes, I'm getting to it, you'll see, it's funny. In Debian, because we are smart people, we removed the embedded urllib3 from requests. This is nice and fine, except that wsgi_intercept expected to find the vendorized version of urllib3 inside requests. It took a long, long time to find out what was going on. So it's a problem, and we have to convince upstream not to do it, because otherwise you run into all sorts of issues. So — what's the time — everything I do is backported. I thought about uploading everything to backports, but it's kind of complicated, so I hope we're going to have something like PPAs in Debian somehow. OK, my last slide, and after that it's open for questions: how OpenStack can help Debian. We had talks about CI in Debian. One of the things we have in OpenStack is that when you send a patch, there is a battery of tests that are run, and only after that is the patch merged upstream. We could reproduce this kind of thing within Debian: we would have a Gerrit — or another patch-review system, if you find something better — connected to gates, so that we'd have Git packaging repositories, and then somebody would send a patch using git-review, and it would be built and tested with piuparts, adequate and so on. We
could even rebuild reverse dependencies and things like that, and add some package-specific functional testing. Then we'd have reviews from anyone, not only DDs, who could say "this thing is bad, this thing is good," and so on. Then, depending on the ACLs we would have — I don't want to define them here, we could discuss that later — somebody could approve the change on the package, and the package would be automatically rebuilt and uploaded to Debian. Where OpenStack could help would be on the build, the piuparts and adequate tests, and the package-specific tests: we could run all that on VMs, and it could provide the infrastructure. More generally speaking, we could provide OpenStack for our own developers' use. And that's it. [Question:] Great talk, thank you. One of the things I run into sometimes is Python packages that are within the OpenStack team — I joined the OpenStack team so I could help out with that — but it seems arbitrary whether some general packages are in the OpenStack team or in the Python modules team, and I wonder if we can, maybe not here but offline, have a discussion about how we can, you know, not divide and conquer, but have the two teams work together. [Answer:] My problem was that I wanted to continue my workflow in Git, and if that is accepted, I'll move them all to the Python team. I know we have some differences around the exact workflow, but for most Python modules it doesn't matter much to me whether we use pristine-tar or something else. What matters is the bigger projects, where you have hundreds of tarballs, sometimes megabytes of them — there it becomes problematic; if it's tiny, it doesn't matter. [Question:] Regarding your earlier question, whether or not we want to keep OpenStack in Debian: I think it would be a unanimous "hell yeah," and thanks very much for doing this so far. One more question: what do you think about packaging upstream master, a.k.a. currently Juno, for experimental? [Answer:] Yeah, I have that already; I always package the beta
releases, so the latest was B2; I have B2 currently on my Jenkins repository, and if you want to use it and provide feedback, you're very welcome to do so. [Question:] The same question, or more or less the same thing he just said: with my Debian system administrator's hat on, I would like to eat my own dog food when installing the OpenStack cluster which is planned for our development stuff, so I would like to see OpenStack packages in Debian Jessie. [Answer:] OK, I'll do my best — that's at least two requests. [Question:] You mentioned using Gerrit with Debian packages — have you already done that? I'm asking because I tried it, and it works, but Gerrit doesn't deal very well with the fact that in Debian we usually maintain packages with many branches; it's quite easy to get confused in the workflow, because you need to validate and submit commits in a particular order for it not to break. [Answer:] Which is why I think it should be connected to dgit, because — if I'm not mistaken; I've never used dgit, but as I understand it — it uses only a single branch, right? [Question:] I don't know, I've never used it; I use Gerrit with Debian packages. [Answer:] Yeah, dgit would use only one branch, if I'm not mistaken, and therefore it would be easier, and then we would have a Git repository for all packages by default. [Question:] To clarify: if we do get OpenStack in Jessie, it will be Icehouse, not Juno? [Answer:] Yes, because Juno will be released on the 16th of October; that would give me like 20 days before the freeze, and if I did that — yeah, no comment. And also there won't be LTS support for Juno, and I don't want to be on my own for the security support. I hope that at the next OpenStack summit, in Paris, we will have a discussion with the people involved in security, so that instead of the 15 months of security support — since there are some people interested in doing long-term security support — we will have integrated security support for Icehouse for, like, three years. I don't know if
you'd get that officially approved by the TC, but I think there are probably enough different distros that we could find the people to do it. The consensus so far was that nobody was interested in doing it, but this has just changed. Is there any other question, or discussion? Time is over? OK, perfect, then.
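Coming back to the de-vendoring trap mentioned earlier (wsgi_intercept expecting the urllib3 copy vendored inside requests, which Debian had removed): the usual fix is a small shim that aliases the system-wide module into the vendored import path. The sketch below demonstrates the idea with dummy modules built from scratch, so nothing real is imported; all module contents here are invented stand-ins.

```python
# Sketch of a de-vendoring shim, using dummy modules. A library written
# against the vendored path ("requests.packages.urllib3") keeps working
# after the distro de-vendors urllib3, because we alias the real module
# into the vendored location. Everything here is a stand-in.
import sys
import types

# Stand-in for the system-wide (de-vendored) urllib3.
real_urllib3 = types.ModuleType("urllib3")
real_urllib3.__version__ = "1.8"  # invented version number
sys.modules["urllib3"] = real_urllib3

# Stand-ins for requests and its vendored "packages" directory.
requests_mod = types.ModuleType("requests")
packages_mod = types.ModuleType("requests.packages")
requests_mod.packages = packages_mod
sys.modules["requests"] = requests_mod
sys.modules["requests.packages"] = packages_mod

# The shim itself: the vendored import path points at the real module.
packages_mod.urllib3 = real_urllib3
sys.modules["requests.packages.urllib3"] = real_urllib3

# Code written against the vendored path now gets the real module.
from requests.packages import urllib3 as vendored
same_object = vendored is real_urllib3
```

Had such a shim been in place, wsgi_intercept's import of the vendored path would have resolved to Debian's system urllib3, and the long debugging session described in the talk would have been avoided.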