OK, [opening remarks unintelligible in the recording]. [Several sentences unintelligible.] ...We didn't really execute very much on those plans, but I would love to be corrected if people have done massive amounts of things and just carefully kept them hidden from the rest of us. I think Noah did a lot of... Yeah. I think Noah did a lot of things for AWS. [Several sentences unintelligible; the speaker appears to be talking about work on the GCE images.] [Several sentences unintelligible; Emmanuel is asked something, and FAI appears to be mentioned.] And we also have a test suite and a script to automatically upload to the Vagrant Cloud backend, but I'm waiting to see how the cloud provider credentials will be handled on the Amazon side. I have some ideas about how we should do it on the Vagrant side. [Several sentences unintelligible; tokens and Azure appear to be mentioned.] So, I've merged Waldi's work into the openstack-debian-images script, though the result hasn't been tested, and as I had no Azure credentials I couldn't run the tests. Credentials? You guys need Azure accounts? If you want an Azure account, the easiest thing to do is I can give you what used to be called an MSDN subscription; it's now called a Visual Studio subscription, it doesn't really matter.
It comes with like $150 of Azure time that's recycled every month, and it's good for one year. So if anyone wants one of those, just let me know. Sorry, can we get back? Because first we want to talk about building images and so on, but it would also be good to talk about accounts and credentials and so on, because it seems like different people have different access to different platforms, and recently we had the beginnings of discussions about AWS and IAM profiles, so it would be good to at least touch on it. Hello. If you attended the DSA BoF, what we had proposed there was that, with AWS at least and maybe the other two platforms, we could do either SAML or OIDC integration so that you could use your Debian credentials to authenticate against the platform. We would expose LDAP groups, and some group, the cloud team, whatever that is, could then be authorized to manipulate those LDAP groups, and those would translate into IAM roles in AWS. That way, if we happen to deactivate an account in Debian for whatever reason (they retire, they go MIA), their credentials for these other services, since they're tied to Debian, are also turned off. So this is possible; I've done it for work with IAM, AWS IAM. I don't know the status of Google Cloud Platform or Azure. So for the Google images, there would be no need to have any Google credentials whatsoever to build the image. There's no need for a dependency on Google to build the image; it can be a purely free build. For uploading the image to an account, we would probably want to create a sort of service account, which is basically just a non-human account. We can sort that out with individual accounts, and for the service accounts to upload from the automated build system, it's basically a revocable OAuth secret, something like that, tied to some account. That's easy to solve though. So kuLa is watching us, I know, so I'm going to ask him. I'm asking the question. I'm assuming that he and Noah have been working using FAI rather than continuing on with bootstrap-vz. Is that true? Yeah. Ah, and Zach's just joined as well. So the thing that still bothers me slightly is that we haven't really made any progress, not just on switching over to new tooling, which is something we all agreed last year we wanted to do, but also on testing our images against the very, very early set of tests that we defined last year. I would like to go back to creating the images, because I think we agreed on testing FAI for image creation but not finally agreed on using FAI for image creation. So if we can make that decision now, Bastian told me that he would then directly start working on redoing the image using FAI for Azure. Okay. Sorry, I'm typing away and not talking. So Thomas kind of volunteered to me in private, but I'm going to expose him now. Sorry for stealing all your spoons. So, Thomas is going to start working on a test suite. Well volunteered, it's now public, and now recorded and live, and that's it. Absolutely. Do you have any ideas on how to go about it yet? Probably I will start, at least, with some of my private scripts that I was using for testing my own packages on AWS. Basically I'm managing GPU-related Python packages, so I have a set of scripts that runs, installs dependencies, runs tests and so on. It already tests the ability to SSH in, to run apt install and apt upgrade, and to run some GPU tests. It needs to be polished a bit, and it probably needs to be made more generic.
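As a rough illustration of the kind of smoke test described above (SSH into a freshly booted instance, run apt update and upgrade, fail on any non-zero exit status), a minimal Python sketch might look like the following. It assumes the paramiko SSH library and uses a placeholder instance address and key; it is not the actual private script being referred to.

```python
# Minimal image smoke test sketch: SSH in, refresh apt, upgrade, check exit codes.
# Host, user and key path are placeholders, not taken from the scripts discussed here.
import os
import sys
import paramiko

HOST = "203.0.113.10"                              # freshly booted test instance
USER = "admin"                                     # placeholder login user
KEYFILE = os.path.expanduser("~/.ssh/id_ed25519")  # key injected at launch

CHECKS = [
    "cat /etc/debian_version",
    "sudo apt-get update",
    "sudo apt-get -y upgrade",
]

def run_checks() -> bool:
    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    client.connect(HOST, username=USER, key_filename=KEYFILE)
    try:
        for cmd in CHECKS:
            _, stdout, stderr = client.exec_command(cmd)
            status = stdout.channel.recv_exit_status()
            print(f"{cmd!r} exited with status {status}")
            if status != 0:
                sys.stderr.write(stderr.read().decode())
                return False
        return True
    finally:
        client.close()

if __name__ == "__main__":
    sys.exit(0 if run_checks() else 1)
```

An image that fails a check like this would simply never be published, which is the QA bar discussed later in the session.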
And it uses python-apt and not apt on the command line, but I guess that because behind the scenes they both use common libraries, it should be enough, at least for the beginning. Any ideas or objections to this plan? I would be interested in what Google or Azure are currently testing. What are the really important things? I think one thing is: does the image boot? But maybe we should get input from the current cloud providers. So for Azure we test a bunch of things; we have all our test cases up on GitHub. There are BVTs where we test and make sure that it runs and that the image configuration is correct, and we can do more specialized testing like network performance, or InfiniBand and GPU and SR-IOV and all these things that we've added. Some of those can be abstracted, but then of course you have the logic, which is probably going to be using the CLI tools for whatever AWS uses. So there will be things like how to spin up an instance and how to add a NIC and all those things. You have to abstract that out into some kind of framework, unless you had some idea of how to do that already. Or what we could do, as a short-term thing, is open up our CLI automation for use with the official Debian images, which we're building anyway, and just test them. Then we can kind of go from there, just so you have some baseline showing that they're at least equivalent to what we're publishing today. And about testing the images: there's a link I added in the Gobby document pointing to work from a French startup that does cloud comparisons, some kind of procurement work, and they did some work on comparing Debian images across providers. That's more focused on identifying the differences between providers, but it could be relevant to your work. All right. And with regard to Google, I of course won't speak for my former colleagues, but they can speak for themselves; at least one of them is on IRC right now. I'll give a quick summary: they do run a test suite. The actual infrastructure is too tied into their internal systems, but they can probably release, I'm guessing, some descriptions of test cases that we might want to apply. And they do test a lot: they SSH in and check various settings, performance settings, making sure that their integration works nicely. They test a lot of things, yeah. Well, from what I remember from the Debian Cloud sprint, and what I also see from the notes we took there, we already agreed on testing the cloud images on the 20 items that are there, and the test framework is also, I think, something like eight items or so. So for OpenStack, the whole test suite is in Debian, and the script to set it up is in Debian as well. The thing I would need for it to be run is some hardware. I already asked DSA for such a thing, but received no reply to my mail. But I could set that up. For those who are not looking at IRC, Zach from Google just said: we test platform integration; number one, any and all metadata interaction; number two, that the image configuration is correct; number three, any new infrastructure changes that may not be released yet. We have some plans to externalize all of this, but it's not quite there yet. That's what he just said. I have a question for Zach: are there any plans for when they'd like to move to the new toolchain? Maybe you can ask him on IRC. He is watching the live stream; he can hear us, we can't hear him. The internet is one-way.
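The "abstract the per-provider logic into some kind of framework" idea above could be sketched roughly as below: a tiny interface with one EC2 backend using boto3. The class and method names are invented for illustration and are not part of any existing test framework mentioned in the session.

```python
# Sketch of a thin provider abstraction so one test suite can drive several clouds.
# Only an EC2 backend is shown; CloudProvider/launch/terminate are illustrative names.
import abc
import boto3

class CloudProvider(abc.ABC):
    @abc.abstractmethod
    def launch(self, image_id: str) -> str:
        """Boot one instance from the given image and return its instance id."""

    @abc.abstractmethod
    def terminate(self, instance_id: str) -> None:
        """Destroy the instance once the tests have finished."""

class Ec2Provider(CloudProvider):
    def __init__(self, region: str = "eu-west-1", instance_type: str = "t2.micro"):
        self.ec2 = boto3.client("ec2", region_name=region)
        self.instance_type = instance_type

    def launch(self, image_id: str) -> str:
        resp = self.ec2.run_instances(
            ImageId=image_id,
            InstanceType=self.instance_type,
            MinCount=1,
            MaxCount=1,
        )
        return resp["Instances"][0]["InstanceId"]

    def terminate(self, instance_id: str) -> None:
        self.ec2.terminate_instances(InstanceIds=[instance_id])

# A GCE or Azure backend would implement the same two methods with its own SDK
# or CLI tooling, and the test runner would not need to know the difference.
```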
I think the main thing is when do we, as Debian, collectively move to it, and when do we know that it works to produce bootable images and things like that. As a side note, I tried to run some tests for my packages, compiling on different sizes of Amazon instances. FAI was working; I was just building the image locally, so FAI built the image on a machine with one gigabyte of RAM, the second smallest one. I had some problems with t1.micro or t2.nano, the smallest one, with slightly below 500 megabytes of memory, but I don't remember the details of what the problems were. Still, one gigabyte is not much required memory. Sure. It's moving on and I'm losing track. So, what are we doing next? We want to get tests running on our images, for all of the platform images. We have an idea of at least the very basic set of tests that we think should be run for sanity. Now, from the discussion last year, that was so that we would be happy to do at least basic QA, so we can continue calling things official images. We can make sure that anything that doesn't pass those tests never gets published, basically. We can continue adding more and more tests as time goes on; as we get more inspiration and we find bugs, we should be adding regression tests and all that kind of stuff. What I think could be useful, though, would be maybe at a future sprint to actually have a bunch of people working on exactly those tests and making sure that we can run them. In terms of building things and running tests, obviously the platform providers are building their images at the moment. We're going to be building more images. We had been talking about building them all on central machines and then sharing them out. That has been an ongoing discussion that I know Noah was pushing back on; kuLa and I were continuing to push that we should be building official images on our hardware, but we haven't had a real discussion about that again. What do people think? I think we should build the images. It just takes a second, yeah. I think we should build the images on debian.org hardware, probably at the same moment when we start using FAI for that. If we commit to the FAI repository, we start building more or less automatically, based on what's in that repository, on the debian.org machine. I think it's actually okay to build on cloud machines once we actually have reproducible builds for those images. That's a little bit into the future. The not-cloud-related work we've done on making the ISO images build reproducibly was surprisingly easy. It's not such a crazy, ten-years-away idea; it could take a few weeks, maximum, for a couple of people who want to do that. You can do it. Sledge, you're building the images at Bytemark, right? Where is the hardware where the Debian images are built physically located? Right now, today, it is pettersson, which is a machine in Sweden. We have a new machine, casulana, hosted at Bytemark in the UK. That is the machine that we're migrating onto. It is a really big machine with plenty of scope for lots and lots of VMs, and lots of things to run. Does that answer your question? So, will there be some hardware available to do bare-metal testing with that image at Bytemark, or no? No hope for that, I don't think so. What machines do we have at Bytemark? We have blades and things, don't we? Could a blade be provisioned for bare-metal testing? No. So, the two-minute synopsis is that Debian likes to... DSA likes to have redundant equipment, redundant partners, redundant geographies, et cetera.
So, UBC and Bytemark are the two primary locations for many services. UBC was refreshed last year. Bytemark is getting a little bit long in the tooth and is hurting, so it needs a major refresh. We're in discussion with HPE about pricing, and we're in discussion with Bytemark about their vendor (they use Supermicro white boxes) about getting that equipment, but there is no capacity at Bytemark really for anything particularly huge. At least no disk capacity at the moment, because that storage is completely full IO-wise, and the other disks are quite filled up. I'm not sure about the blades, whether the blades are actually fully used. So, for the Vagrant boxes, we want to do builds on your infrastructure after that as well. From what I've seen, that work has started inside virtual machines. So, after we switch to fai-diskimage, we also want to do builds on your infrastructure, because we probably want to expand the number of different builds we do, and we always have an upload step. We're building now six different... four... three different hypervisors on two different releases. We want to do continuous integration, and we want to upload automatically, because we currently upload all the builds we do from local laptops and it takes time; a DSL connection is painful. So, I hope we can do that afterwards. So, could we agree today on using FAI? Because if we have the full agreement of the whole team, then we could just go forward and start building images using FAI. We already had that last year, in fact. No, we agreed that we'd try, but we did not agree to switch. So, definitely the next thing we should be doing in that case is actually building and testing and validating FAI for all of the platforms. That's the first thing we have to do. We fully expect it to work; there's no magic here. Zach did mention on IRC about ten minutes ago that he tried briefly and had problems with stretch images that didn't boot; he didn't have much time to debug. I'm sure between us we can solve that, you know. If we can't, we have worse problems. If we can make all that work, then absolutely that's what we should be committing to. We can move to builds on casulana. That nice big build box that I got for doing CD builds on is about to get busy, isn't it? We can then sort out automatic publishing and all of that kind of stuff. We want to get to a significantly better CI story than we have today. I think we will agree on that. There's also the question of how to organise the builds on casulana. Is it one VM per provider, or one global VM? What do we use to schedule the builds? That's quite a lot of work to do. One of those. Exactly. At the moment this is all up in the air, definitely. I'm trying to get notes down in Gobby. What do you think? Returning to FAI: I was involved with bootstrap-vz, and I tried to run FAI. I didn't have problems. As far as I remember, it was easier to get started with FAI, just to build, than with bootstrap-vz. I haven't tried to add modules or anything like that, so I don't know how complex that will be. Concerning your question, Luca, whether we should use just one VM for the builds: the question is, can we start FAI builds in parallel? I'm not sure if it was designed with that in mind, because I suppose it creates... you have a hard-coded path to a /dev loop device, and it was not designed with that in mind, but VMs are cheap. There's no hard-coded path; it just uses the next loop device. But there was some locking mechanism inside FAI that may be problematic if we want to run several fai-diskimage processes on one machine.
However, the VM isolation layer can be very useful here. I would suggest that we run each individual build, for a single image or a single run of the build tool even if it produces one or more images, in its own temporary VM that is created and destroyed around that build. That way there's no detritus to care about, and it's all automated and, somewhat, reproducible. Absolutely, that's what we should be trying to do for all of this. I'll be honest, I've been crap about doing this so far for the OpenStack images and the live images that I'm building on the central debian.org machines. There are always other things to work on, so we have a persistent VM, or a set of persistent VMs, that I'm using for those builds. It's not beyond the wit of man to fix that and have generated VMs, and then we do the builds in those; it's just a matter of coding. The fun thing that we have at the moment for the installer images is that I'm currently building on a host in Sweden and then we're publishing them from the same rack, because cdimage.debian.org, which is the same as get.debian.org and whatever, currently points at storage hosted very happily by the folks at Umeå University. As we move over to building on casulana, that's a much, much bigger, much, much faster machine, but we shouldn't be looking to publish directly from that machine. We'll then have to deal with syncing things back over to Sweden. Frankly, we have very fast networking in between them, so it shouldn't be an issue, but be aware there will be a non-negligible amount of time needed for that. We've been talking already in the past about having, the word is failing me, about having more than one site for publishing, so we can actually have some redundancy. That might be something we do in the future; I'm bringing it up now and I don't know why. But we may end up syncing images across to multiple different sites, as well as obviously up into the platform providers for those guys to use. I have a question about the infrastructure to create and boot the VMs where we then want to run fai-diskimage inside. Is there something that you are already using for the CD images that we can then enhance, or will this be a completely new script? It's a trivial homegrown thing that I wrote on pettersson which just runs an existing persistent VM. If we want something that will generate one on demand and run in a temporary VM, ignore that and start again; it will be easier. So this infrastructure has also been written for... Sure. Vagrant could be a starting point, but I think it's important to focus on something that can be run outside casulana as well; we might want to be doing CI builds at some point and we might not be willing to do that there. Exactly. The nice thing about having something that generates a VM and runs things in it is that we can run that on your laptop just as easily as we can on a central machine. Obviously we'll want to be running production on a central machine, but if we can get the tooling end-to-end to do the right thing, it's more flexible, it's more useful. We don't want to be tied to that central machine for initial development. For the VMs, are you using KVM? One idea would be to use FAI to create the virtual machines, which then run FAI to create the disk images. That might be more awkward. You need root to generate that VM image, don't you? Yes.
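A rough sketch of the "temporary VM per build" scheme discussed above follows. The VM lifecycle helpers are left as deliberately hypothetical stubs (no such tooling exists yet in this discussion), and the fai-diskimage arguments are quoted from memory only; they should be checked against the fai-diskimage manpage before use.

```python
# Sketch of a build driver: create a throwaway VM, run fai-diskimage inside it,
# copy the result out, destroy the VM. create_build_vm/destroy_vm are hypothetical
# stubs; the fai-diskimage options and class list are indicative only.
import subprocess

def create_build_vm() -> str:
    """Boot a fresh short-lived build VM and return an SSH-reachable address.
    Left unimplemented here: could be libvirt, a cloud instance, or similar."""
    raise NotImplementedError

def destroy_vm(address: str) -> None:
    """Tear the VM down so no build detritus is left behind."""
    raise NotImplementedError

def run_in_vm(address: str, command: list) -> None:
    subprocess.run(["ssh", f"root@{address}"] + command, check=True)

def copy_from_vm(address: str, remote: str, local: str) -> None:
    subprocess.run(["scp", f"root@{address}:{remote}", local], check=True)

def build_image(classes: str, size: str, output: str) -> None:
    vm = create_build_vm()
    try:
        # Options shown from memory; verify against the fai-diskimage manpage.
        run_in_vm(vm, ["fai-diskimage", "-u", "debian", "-S", size,
                       "-c", classes, "/tmp/image.raw"])
        copy_from_vm(vm, "/tmp/image.raw", output)
    finally:
        destroy_vm(vm)

if __name__ == "__main__":
    # Example invocation; class names are placeholders for a real FAI config space.
    build_image("DEBIAN,STRETCH,AMD64,CLOUD", "8G", "debian-stretch-cloud.raw")
```

The same driver could run unchanged on a laptop or on casulana, which is the flexibility argued for above.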
Explicitly, something that we've done for a very long time, and the reason why I'm doing the live builds and the OpenStack builds in that VM in the first place, is so that we do not have any elevated privilege on the build machine itself. DSA are rightly very leery of us having root on those boxes, especially for things running out of cron, and especially because we're pulling from Git and running it; we don't really want that to be running on the bare hardware. Zach raised a question on IRC that relates to something we discussed at the sprint last year: any thoughts on the fast-moving package problem we discussed during the sprint, where cloud software needs to be updated at faster rates than traditional Debian packages do? For example, having an edge repository for cloud or something like that, not necessarily changing what's in stable. I talked to Adam two days ago, because we want to update WALinuxAgent in stable, because Microsoft recently published Azure Stack, which is essentially new hardware for Azure, and I agreed with Adam that I'm going to upload the new Azure Linux agent to stable in the next few days. Maybe the answer is some clarification or tweak to the policy as it applies to this type of thing. He still wanted to review it. Fundamentally, yes. I spoke to Adam and the other guys on the release team after the sprint last year, like I promised. Of course they're open to things going into stable updates or however we want, so long as they're sane. No one is about to give us a catch-all, a wildcard to say of course, if it's a cloud thing you can have it; we'll need to convince them. I agree that Google's current distribution mechanism might not be ideal for that, but if the policy were welcoming of a slightly retooled distribution mechanism, maybe that would be an incentive for them, or a better deliverable that we could more happily ship. The question is, does this also apply to the toolchain? I think I will make some little changes to FAI which are not then in stable, and I think currently the cloud image config on GitHub is not yet an official Debian package. I don't think we have a problem with that; services usually run their own software stack, so we just have that somewhere on disk, so you could easily have your new version of FAI for image creation under /srv. We typically end up running from Git for quite a lot of the packages in that CD building setup, the live image building setup and whatever, because these are fast-moving things; obviously they go into the archive too, but we might not be running exactly the same version as in the archive on any given day. As I said, the cloud image config space, I think it's on GitHub but it's not a Debian package currently, so that should also be done. I hope people are keeping track of this as well. I have a question: do we have any plans for Docker images? That would be nice. As far as I remember, the Docker images are built by two Debian developers, but we never got in touch with them. It's Tiago, and I don't remember the name of the other one. Could we agree that, when we do the next cloud meeting, which will maybe happen mid or end of October, we explicitly try to invite one or both of them? That is exactly what I was going to say. Did anybody else, did anybody actually go to the cloud-init session yesterday? I remember we had discussions about cloud-init quite a lot last year, as a tool that we all depend on, and there was talk about forking it and getting a load of patches integrated.
The guy presenting yesterday, of course, is the upstream maintainer at Canonical. Can anybody give us an update on the status there? The one thing that I heard: he talked a lot about cloud-config, so another config management thing; there was not that much about cloud-init itself, more about cloud-config. My opinion is, if it's currently working for us, we can use it, but if there are major problems, I think most, maybe all, of the things that are done inside there can also be done with shell scripts. They can, yes. Just leave it as it is now; we do not have problems currently. I know Waldi was already doing some work, even at the sprint last year, on cloud-init. Do you know where he got to? I've been the only person touching that package since; in the last six months I did a few QA uploads. Where are things up to? Are we in sync with the Ubuntu folks? No, it's an old version. We need to upgrade to whatever is the latest upstream. Do we want to follow them as upstream still? I personally just do the things that I think are relevant for my use case; I don't care about the latest upstream for cloud-init. If somebody does, then they can contribute to it, to the package. As far as I understood yesterday's presentation, and as far as I remember our discussion, our most nagging issue with cloud-init was that it was stale and not updated. It looks like they have started updating: since our sprint they have had two releases, and they promised to have another one by the end of the year. Possibly something is changing, but we will need to see, and we will need to see how much it is in accordance with our needs. Are they taking the patches that we were talking about, the ones that were wanted? No idea yet, so we need to check on that. On the other hand, I forgot his name, he was showing that they are working on decreasing the number of outstanding pull requests, so they might be accepting some new features. There is a cloud-init sprint at the end of August. Is anyone going to that? That is the first I've heard of it. From the Azure perspective, we want cloud-init in all the images, but we couldn't get it working; Bastian threw up his hands. We couldn't get it working in time for stretch. That is something we definitely want. I think part of it is the patches, getting the packages updated and getting support, probably from Canonical. If they have a sprint, definitely, who wants to volunteer to go? We should definitely get people involved if there are changes that we want. I think there are folks from different distributions; probably somebody from Debian is going. It was a small thing; it is actually hosted at Google, I think. I wasn't sure if it was announced or not. Do you know where? At Google? In Seattle? In Seattle, okay. Was that just mentioned? He said who to talk to to attend, I think. Isaac. That would be interesting, but definitely we want to bring that up. It is something Microsoft wants to bring up, because better cross-distro support is not something that Canonical has been great at; they are really scratching their own itch. We understand that, but we definitely need them to do more work in that area. Right, we're down to about two minutes left in the session; I can feel the video team glaring at the back of my neck right now. I have a question about cloud-init. The packaging is currently using git-dpm, which I hate, but it's there. Would you mind if I switched it to gbp pq? Any opposition? Anybody care? Okay. No, fine.
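Tying the cloud-init discussion back to the earlier testing thread: one cheap way to check that cloud-init actually works in an image is to boot it with a small cloud-config and verify the result. A hedged sketch follows, assuming boto3 and a placeholder AMI; the cloud-config keys shown (package_update, packages, ssh_authorized_keys) are standard cloud-init directives, everything else is illustrative.

```python
# Sketch: boot an instance with a tiny cloud-config so cloud-init applies it on
# first boot. AMI id, region and key material are placeholders.
import boto3

USER_DATA = """#cloud-config
package_update: true
packages:
  - htop
ssh_authorized_keys:
  - ssh-ed25519 AAAAC3... test@example.org
"""

def boot_with_cloud_config(image_id: str = "ami-00000000000000000") -> str:
    ec2 = boto3.client("ec2", region_name="eu-west-1")
    resp = ec2.run_instances(
        ImageId=image_id,
        InstanceType="t2.micro",
        MinCount=1,
        MaxCount=1,
        UserData=USER_DATA,  # consumed by cloud-init on first boot
    )
    return resp["Instances"][0]["InstanceId"]

# A test would then SSH in (as in the earlier sketch) and assert that the package
# was installed and the key ended up in authorized_keys.
```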
Plans, last thing. We had a really successful sprint last year, hosted by the folks at Google. Steve and the Azure team at Microsoft would love to host us for a repeat this year. We briefly talked about dates and we were thinking, I've forgotten the exact dates, was it the 16th to 18th of October? These dates are not yet set in stone, I assume. That seems to be the best time, at least for those of us who expressed a preference. That seems to work. Tentatively, the 16th to 18th of October; that's a Monday to Wednesday, at the Azure offices in Redmond. Absolutely. We can get people involved as well. Also, one thing we talked about during the sprint we had last year was that we want to have an image locator. Martin, thankfully, started doing work on that two days ago and already showed me a prototype yesterday, which looked quite nice. Hopefully we will have something to locate the images, including CD images, if that's okay, Steve. I'll let you write something about that in the Gobby document; I've just been writing about the sprint. Fine. Now we have one minute, so go for it. Do we have anything else that we should talk about right here, right now? As I always do, I will write up the notes; I will go through the video of what we have here and send a summary to the list, so that especially those people who couldn't be here today have a chance of at least seeing what we've been talking about. Please argue with me if you think I've mischaracterised or misunderstood anything I send in that mail, and we will start planning for the sprint. Clearly, there are plenty of other things here that we can and should be working on as well before we get there. Don't wait for the sprint before doing work, but I would love it if at that sprint we can spend more time this year actually working on these things; we had a really productive four days of discussion last year, but more time working on things and on the plans will be even better. Those discussions were needed. Absolutely, they were needed, yes. Implementation is good, too. Thank you, everybody. I hope that was productive. Bye.