I would say, let's start with a little BoF. I intentionally didn't take that seat, because this is a BoF, not a talk. I would like to make sure that we get to the topics we want to cover, and that everyone gets a fair share of speaking time. I will also send long minutes afterwards to the Debian wanna-build mailing list.
I have two main topics on my list so far. The first is a summary of what we have achieved within the last year or year and a half, so everyone gets up to speed. The second is individual topics, which currently means per-person bikeshed (PPA) integration, and the question, already raised before the session started, of how sbuild and wanna-build could be used for cross-building. First I would like to get our topic list complete, and then we can go into the individual items. Any more topics right now? We can always add topics later. Any more things you think we should talk about here?
Yes, good topic. We should also speak about new volunteers for wanna-build, because I saw several people interested; I should be able to answer that. More topics right now? Otherwise, I think we should start by looking back on what we have achieved. You have more to say about that than I do, so if you want.
Just the things we're not sure everyone in this room already knows about, which is, I think, the announcement that we had made. Maybe just summarize the announcement.
I saw an announcement out there, yes, during the week.
I have to admit that I missed it, and some mailing lists missed it as well.
It depends on the announcement. There were a couple of things in there where it said we might get these done within the week, and the question is whether any of those actually happened.
Ah, okay. So, just listing the points: the arch:all packages, the ports machines merge, the [inaudible], the uploads from the build daemons, and the buildds upgraded to jessie. Can you remember more? And new people have volunteered to help. Thank you.
So, basically, we're now almost at the end of DebConf, so I think we really should move on to where we are going. Of the further topics that we had, the discussion about sbuild and wanna-build for cross-building already more or less started before the BoF, so I think we should probably continue with that one first. You wanted something, so you should explain what you want; maybe it's support for more than just the two things we already said.
Okay. This came from a discussion with Matthias Klose, the GCC maintainer. We'd like to get better coverage of what the state of the archive is in terms of cross-building. I did some one-shot cross-builds of 1,000 packages, and I plan a further one-shot build of about 3,000 packages, but it would be much better if we had real-time results on which packages support cross-building and which don't. So the idea we had was: would it be possible to integrate this into wanna-build in some way, adding another build daemon running on amd64 as the build architecture, with a host architecture like armhf, which is well supported for cross-building in principle, and have it throw away all the binaries so as not to occupy space (or save them, if you prefer), but in particular publish the logs in some place accessible enough to be included in tracker.debian.org, maybe.
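As a side note on the one-shot cross-builds mentioned here: sbuild itself supports cross-compiling via its `--host` option, so such a run can be driven by a small script. The sketch below only constructs the command line; the `--no-arch-all` choice, the distribution, and the package list are illustrative assumptions, not a description of the actual runs.

```python
# Hypothetical sketch of driving a one-shot cross-build run with sbuild:
# amd64 as the build architecture, armhf as the host architecture.
# Only command construction is shown; a real driver would execute the
# command, discard the binaries, and keep the build log.

def cross_build_command(source, host_arch="armhf", dist="unstable"):
    """Return the argv for cross-building one source package with sbuild."""
    return ["sbuild",
            "--dist", dist,
            f"--host={host_arch}",  # cross-compile for this architecture
            "--no-arch-all",        # only probe arch:any cross-building
            source]

for src in ["hello", "zlib"]:       # placeholder package list
    print(" ".join(cross_build_command(src)))
```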
So the question is: is this something that's doable within the wanna-build setup on debian.org, or should we run a setup outside, and how can we integrate it in a way that it pulls source packages uploaded to unstable automatically and builds them immediately?
It's possible to do this in wanna-build. Throwing away the binaries is more on the buildd side, but it's going to be difficult to integrate with the current wanna-build, because it means two architectures with the same name. What you could do is use another name for the architecture, and on the buildd side map it to the real architecture name and build architecture. We have a mechanism, not used currently, to map an architecture name; we used to have that in place for other things. That's already not that nice, but you could even do it reasonably cleanly: when wanna-build gets called, it actually sends back a YAML file to the buildd saying what it wants to have. So you could put it even in there, and say the architecture is such-and-such, but in reality it's still amd64.
The thing is that wanna-build internally also checks whether the package is installable or not...
Yeah, it doesn't really check. I'm just thinking, maybe I said something wrong, because the installability check is different when you're cross-building than when you're building natively.
It is.
So basically what we would need to do: it seems more like what we do for the ports machines, which we don't have in the normal Debian setup. Basically, there are different trigger runs depending on which arch it is.
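The name-mapping trick described above (wanna-build tracks a distinct pseudo-architecture, and the buildd maps it back to a real build/host pair before invoking sbuild) could look roughly like this. The pseudo-architecture name `cross-armhf` and the mapping table are invented for illustration; this is not actual wanna-build or buildd configuration.

```python
# Hypothetical sketch: map a wanna-build pseudo-architecture, used only
# for cross-build tracking, onto the real (build, host) architecture
# pair on the buildd side.

PSEUDO_ARCHES = {
    # pseudo-arch as wanna-build sees it : (build arch, host arch)
    "cross-armhf": ("amd64", "armhf"),
}

def resolve_arch(wanna_build_arch):
    """Return (build_arch, host_arch) for a wanna-build architecture name.

    Native architectures map to themselves; pseudo-architectures are
    looked up so the buildd can pass --host to sbuild.
    """
    return PSEUDO_ARCHES.get(wanna_build_arch,
                             (wanna_build_arch, wanna_build_arch))

build, host = resolve_arch("cross-armhf")
extra_args = [f"--host={host}"] if host != build else []
```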
Basically, we would need to redo that part from fresh, to say: you actually have a different dependency situation. And I think that also needs quite a few places patched which are involved in this installability check.
Wouldn't it be possible to add another suite, a virtual suite, where you say: this is a cross suite rather than another architecture? So it's not the same architecture, but...
Yeah, that's a possibility, but you still hit, at the beginning, the problem of all the installability checks everywhere. The installability check has to be written differently.
Stop, stop. Someone has been raising his hand for quite some time, so I should give him a chance to say something.
Just a question: what's the purpose of this? What do we do with the cross-built binaries; what's the purpose of these builds?
To get an idea of structural problems with cross-building. Until doing the initial 1,000-package cross-build run, I was unaware that Debian's CMake cross-build support was essentially absent. The idea is to figure out this kind of problem.
It's not completely absent. It's been fixed once; it may not work very well; the fix is pending.
Whatever. So currently we are already facing structural problems with cross-building, and it would also help if maintainers could see whether their package cross-builds or not, and have that on the tracker as additional information: your package cross-built successfully, or your package did not, maybe you want to investigate. So as to give maintainers an idea of whether that works.
Okay. So the actual question is: is it worth trying to integrate this into wanna-build, which probably requires some changes, or should we just use one of the various one-shot setups running once a week, which is probably easier? That's a separate...
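On the remark that the installability check has to be done differently for cross-building: dose-builddebcheck does distinguish the architecture you build on from the one you build for, via options such as `--deb-native-arch` and `--deb-host-arch`. Below is a rough sketch of constructing such an invocation; the file names are placeholders, and the exact option set should be treated as an assumption to verify against the dose documentation.

```python
# Hypothetical sketch: build the argv for a cross build-dependency
# satisfiability check with dose-builddebcheck. File paths are
# placeholders; verify the options against your dose version.

def crosscheck_command(build_arch, host_arch, packages_files, sources_file):
    """argv for checking which sources are cross-satisfiable."""
    return (["dose-builddebcheck",
             f"--deb-native-arch={build_arch}",   # architecture we build on
             f"--deb-host-arch={host_arch}",      # architecture we build for
             f"--deb-foreign-archs={host_arch}",  # archs usable as foreign
             "--failures", "--explain"]
            + packages_files                      # binary Packages files
            + [sources_file])                     # Sources list to check

cmd = crosscheck_command("amd64", "armhf",
                         ["Packages_amd64", "Packages_armhf"], "Sources")
```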
Technically, any buildd that we have could be pointed, via its configuration, at more than one wanna-build. So that would work; however, in that case you would need different suite names. And the dependency checks: dose-builddebcheck supports that, but you have to feed it the right information.
I have done the work for the arch:all support, and it's not trivial; there are plenty of places where it's used. So yes, there is some work to do.
If you're just doing one-shot builds, you don't need to worry about the daemon part. Exactly: wanna-build is the continuous flavour. All our other build tools, except for pybit, which will take triggers, work from a list you give them.
Then, you have been holding your hand up for a while, I think.
What I wanted to say about installability has been covered already. So, from what I understood, your need is just to have the packages as soon as they are uploaded, so you get a chance to build them, and to have the results as soon as possible?
Yes. Well, with a normal delay, like six hours or a day.
Sure. Because if that's the initial need, what we could possibly do is ask ftp-master to do a push to another machine, saying: hey, there are some new packages being uploaded, so we can get them built there. It doesn't have to be linked to wanna-build at all.
Sure. I'm asking whether the point is to integrate it. We have access to the files that were included in the upload; waiting until publishing is not an issue. I think it also makes sense to integrate this into wanna-build, because that makes it the same thing we're using everywhere.
Otherwise, you've got this for cross-building and this for that other architecture, and then somebody else comes up with something else.
The other argument is that people want to do builds of stuff for all sorts of reasons, and a lot of those people don't really need wanna-build, because they don't need the continuous part. They actually want to do a one-off build, for example, or they just want to build their package for more than two architectures every so often, or every time there's a check-in. So there is a need for other sorts of building. But I think in the long term you're right that doing it the same way as everything else is better, and maybe it's a good question whether it would be useful, possible, whatever, to add one-shot builds to...
We already have a package for doing a set of cross-builds.
Sure, it's called xbuilder. My question is: would it be useful, possible, whatever, to add support to wanna-build to say "I want everything that's in the archive currently to be rebuilt with this set of options, please": add a temporary suite for that to your database and build things. I don't know, maybe that could be a good way forward: you get that feature and still everything is integrated. I don't know the wanna-build code well enough to know whether that's a feasible way forward, but maybe it could be.
So the reason I was wondering whether this should be integrated into wanna-build is that having cross-build support in wanna-build has a benefit of its own, if we ever want to use it to do architectures which are being cross-built, possibly 32-bit architectures or something like that. That was my reason to approach this with the question of what we can do outside the archive as well, and which is the right place.
I guess we should have a look at what's missing at the moment. For a separate instance, where we don't have the architecture name clash, it's not hard.
Not hard?
Well, I think there is an idea.
It's just slightly non-trivial. Just changing the code is, I guess, two or three days, to figure out where to put things. Integrating it in the archive as a flavour, so that there is a notion of "this architecture, but crossed", is probably more work, because we'll have to change the database. The thing is that you currently don't really have any information about this in the tables: cross-building means you have two architectures together, the build architecture and the host architecture, and right now we don't store that information.
Is that a problem, really, though? We could have a suite where part of it is the build architecture and part of it is the architecture you're building for.
You'd have a suite; it's confusing.
We could have a suite saying "amd64-cross", meaning: I built this on an amd64 system. Does it have to be specific to cross-building? I guess the clang issue is the same: a clang-built suite is a different build-essential and a lot of common context. We have the same goal of saying we just want to build; we don't necessarily want a full archive rebuild.
It's probably different from saying we want to cross-build an architecture, because if you want at some point to say "this architecture is cross-built", there are some more implications there. If you keep the binaries, you run into the problem that you can have the archive completely building, but if you have a problem, you can still end up with nothing actually working.
Maybe just the concept of a shadow architecture: this is the same as a host architecture we already have, but a different build, and it's not the real one. That would cover a clang rebuild, GCC 5 rebuilds, and cross rebuilds.
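The shadow-architecture (or flavour) idea amounts to widening the key under which build state is stored, from (suite, architecture) to (suite, architecture, flavour). A toy in-memory sketch follows; all names are invented and bear no relation to the real wanna-build schema.

```python
# Hypothetical sketch: package build state keyed by (suite, arch,
# flavour), with "" as the default flavour for the normal archive, so
# that clang, GCC 5, or cross rebuilds can coexist with the real one.

from collections import defaultdict

class PackageStates:
    def __init__(self):
        # (suite, arch, flavour) -> {source package: state}
        self._db = defaultdict(dict)

    def set_state(self, suite, arch, source, state, flavour=""):
        self._db[(suite, arch, flavour)][source] = state

    def get_state(self, suite, arch, source, flavour=""):
        return self._db[(suite, arch, flavour)].get(source)

db = PackageStates()
db.set_state("unstable", "amd64", "hello", "Installed")      # real archive
db.set_state("unstable", "amd64", "hello", "Needs-Build",
             flavour="cross-armhf")                          # shadow build
```

The two entries coexist under the same suite and architecture, which is exactly what a plain (suite, architecture) key cannot express.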
What I'm thinking about, which would be a larger change but I think more worthwhile in the future, would be to basically add an option to architectures, where the default is just the empty option. Then we could more easily add multiple variants of architectures for different things: so we don't have only "architecture", but something like "architecture/flavour", or "architecture:flavour", or an additional value in the YAML that is transmitted. But it's definitely a very large change; we have discussed it before, and I'm not sure it's feasible as a first step.
I think it's the way to go, but we have to do the code, and probably rewrite part of wanna-build, because there are things to change. Even without that, there are parts of wanna-build that have to be redone anyway, so if you want to do this kind of thing, that's probably the way to do it.
There is still a bunch of breakage?
Yeah, there is of course breakage, and the triggers are still external; there are plenty of things to change there. And mostly, if you look at it: we are talking about wanna-build, but the service on buildd.debian.org is wanna-build on one side, the web interface, the Python code that takes in the logs, and then the SQL database. It's like five or six different pieces, and some of them have their own configuration files, so when you add an architecture you have to add it in four or five different places. It's getting improved over time, but that's where we are.
Okay, you have been holding your hand up for quite some time. Can you briefly summarize the initial steps for getting cross-build support into wanna-build, what it needs initially?
Well, I'd have to go into the details of the code, but for me, if you really want to do that, you have to add a new column in the
database: the host architecture, next to the build architecture. Then you have to go through the code everywhere both are concerned and fix it up. We probably also have to redo the trigger part, because when you start to pull in other architectures, more than one architecture, it's difficult to know which Packages file to use, which Sources file to use, and it probably means integrating the trigger code inside wanna-build.
So, if you want to have, and I would prefer to call them flavours now, because it's not cross-specific: what you really need to do is extend the database. Currently we have two things which key the state, suite and architecture, and then for each package the individual version. So where we have (suite, architecture), we need to extend that to (suite, architecture, flavour), and this needs to go through all of the database. If that's done, then there needs to be, and I could imagine that we have a different set of triggers for such a case, that would be fine with me, or we extend the current triggers. The triggers are code you need to be quite careful about touching, because it's very timing-dependent. A trigger run currently takes something like 8 or 9 minutes, and every 15 minutes there is a pull from ftp-master, so it definitely needs to stay short enough within the cycle, or we'd miss some of those.
What do the triggers do? I don't understand that part.
They import the new Packages and Sources files, the changes from the outside, into one of those databases, and then mangle them a bit to keep only the latest version, because when you are building in experimental, you have unstable as a base. So there is merging, and it depends on the suite you are building: if you are building backports,
for security, or for stable updates, the code is slightly different: it's merging these files, but differently depending on the suite.
Where is the code? The trigger code?
Yes, the triggers.
And if that's done, then you need to take care of the code of wanna-build which hands out packages, so that a buildd can say: I am interested in packages for unstable, for amd64, please give me the cross packages. If that's done, you need to adjust the buildd a bit, and then it should mostly be working. But then you still don't see the logs, because the logs will just be eaten, and that's the good case; the bad case is that the logs are not eaten and are filed as normal unstable logs, which would be even worse. The point is that you have to modify the code parsing the logs to take that into account, and I think you don't currently have that information in the mails as they're written. We should at that point also start to include the version of sbuild in the log file, so that we could detect: oh, this is a newer log.
Just a quick one: I like the idea of flavours, because it also enables us to do profiles, like default build profiles for certain suites, in the future without much further work. We should also note that if we are doing cross-builds, we need to pass the cross build profile to the actual build. It promotes the idea that we could collect clang builds, cross builds, and similar things under the common term "flavours". The easiest thing would be to just say we add an architecture "amd64-cross", but then we would have the same issue again with the next architecture coming up. Not as easy, but I think better in general, is to say we have architectures which can have more than one flavour. Then we do the work in the code only once and can use it later: amd64, clang, cross, multiple flavours.
But anyway, something I'm a bit worried about is that if we use the
official wanna-build for that: currently we have, what, 20 or more architectures, and I'm afraid the code won't scale any more.
Yeah, I don't want to have too many flavours either.
We'd have to get bigger machines; that's one thing that will probably help us scale a bit more. We'll probably also have to rewrite some parts of the code so it's more efficient. Most of these are priority things as well: we don't really want to take away from our normal build time to find out how well clang and cross-building work.
On the wanna-build side that's easy: you can prioritize one suite or one architecture over another today already. You can just say: build unstable first, and if you find nothing there, build experimental. I do that today.
But that's not the scaling issue we were talking about.
Right. So, to give those who haven't spoken a chance: Helmut, what's the archive coverage for cross-building?
I think we have about 20,000 source packages; of them, about 3,000 are cross-satisfiable, so those are the ones whose build dependencies don't end up uninstallable. Of those I cross-built 1,000, and about half of them built; not all of them correctly, since some succeeded despite reduced coverage.
Do you think it makes sense to have some trigger from a separate instance and get better coverage in the archive first, and once there's more coverage, then have a proper architecture, cross-m68k for example? Then we can really rethink the code of wanna-build, and maybe refactor and adapt it for newer times.
I was asking whether one of those places should do that. I want to have the feature somewhere; maybe that place is wanna-build, maybe not.
For me, I think it makes sense to do it externally at the moment, and maybe in the future it would be nice to get it integrated, but not with the current wanna-build: it's quite old code, and it needs a lot of changes. Actually, one way to do it would be that we try
to write a new wanna-build from scratch that gets used for that, also as a test bed, and when it works, we use it for the rest.
If someone can do the work, that would be great; I'm happy with anyone picking up on that. However, I really don't think that's realistic. Yes, if someone wants to do the new wanna-build, it would be totally okay for me, but of course the priority would always be lower than the real buildds.
The other thing I just want to add: if we speak about flavours, flavours are not flags you can combine. It's just a string, which could be an empty string for the normal archive, or some other string, but it's not a set. Being able to combine different flavours would be unmanageable in the database design.
Okay. As I said, I would like to stop this topic now, for the per-person bikesheds and for the sbuild maintenance; I think we need some time for those. So, for the PBS, per-person bikesheds, PPAs, or whatever: I think it's kind of the same issue. The current code is not very flexible for adding new axes like architectures or suites.
In what way is it not flexible? Sorry, I was thinking about making a special suite, like "sid-pbs", and feeding in the packages. But wouldn't it be better to just have very many flavours, which automatically start with "pbs-" plus whatever they are, on top of sid, so you can fall back to the sid one?
The thing is that you run the same installability check each time. It takes, I think, three to four minutes per suite and architecture. Of course you can run that in parallel if you have multiple CPUs, but if you have 500 PPAs, you need 2,000 minutes of CPU to run that.
I'm not sure it's that bad, because we'd have a very short run for a suite with just a few packages.
No, you need to take the whole set into account. If you have five packages on top, the time is more or less the same; it doesn't shorten much. The other thing is that we wouldn't need to run it every 15 minutes; we would run it much more
seldom, on demand.
Yeah, I agree, but it's still probably a scalability issue.
I don't quite understand the scalability issue you describe. Is it that most of the run time is spent running dose-builddebcheck?
Partly; I would say definitely most of it.
So, Johannes Schauer has been working on dose-builddebcheck for the rebootstrap thing. I already asked him whether he could pre-process source package lists and binary package lists into some internal format and then have dose work from those. Since we are checking multiple packages, that could save the parsing step, which is really the step that takes most of the time.
So parsing takes most of the time?
Parsing, and building the representation in the internal format. So indeed, if you use the same check on the same background set, this could save time. However, we change the background a bit: we run it against, let's say, unstable, but then we have a certain set of overlay packages which differ from time to time. Is that bad?
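The pre-processing idea raised here (parse the large, mostly-static background set once and re-parse only the small overlay) can be illustrated with a deliberately simplistic Packages parser. This is not dose's internal format; it only demonstrates where the saving would come from.

```python
# Hypothetical sketch: memoise the parse of the big background package
# list, then merge a small overlay on top for each check run.

def parse_packages(text):
    """Toy Packages-file parser: package name -> stanza fields."""
    pkgs = {}
    for stanza in filter(None, text.split("\n\n")):
        fields = dict(line.split(": ", 1)
                      for line in stanza.splitlines() if ": " in line)
        pkgs[fields["Package"]] = fields
    return pkgs

_background_cache = {}

def background(text):
    """Parse the background set once; reuse the result afterwards."""
    key = hash(text)
    if key not in _background_cache:
        _background_cache[key] = parse_packages(text)
    return _background_cache[key]

def effective_set(background_text, overlay_text):
    """Background plus overlay; overlay packages win."""
    merged = dict(background(background_text))   # cheap copy of cached parse
    merged.update(parse_packages(overlay_text))  # only the overlay is parsed
    return merged
```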
So basically you can say: I keep what I produced before in memory, and just merge a new group with only five additional packages into it. So yes, I think that can be done. It's not just the same set with different things; there is a large core which stays the same, and we just add some other packages on top. So doing the parsing of the unstable background set once, as a pre-parsed thing, could be a significant time saver.
Okay, let me get in touch with Johannes, who has been working on that.
Then there is still also the wanna-build-internal side; maybe that can be improved too. Internally, wanna-build also has to parse all the Packages and Sources files and look up the status in the SQL database.
But in that part you only go over the packages and source files which are for this one PBS, so it contains just, say, five source files.
I was talking more about the cross-building case.
The cross-building case of course takes the same amount of time as a normal one. For the PBS case, wanna-build is more or less directly proportional to the number of packages in the suite that are to be built, whereas dose looks to me like it's roughly proportional to the number of packages which are available as build dependencies, which in the PBS case is the much larger number.
Why do we have a monolithic design, with one wanna-build no matter how many suites we have, and one database? Wouldn't it make more sense to split it up into individual tasks that do this job for each suite separately?
But that's what we do; it's not running as one monolith. Everything can run in parallel as long as the same suite and architecture isn't affected. For example, the imports for Debian and for debian-ports run in parallel. The only thing you can't run in parallel
is an import into one suite and architecture at the same time as something else touching that part. So it runs in parallel; we are actually using four cores, and we can go higher if we get a faster machine.
That's good, but I think if we do plenty of new architectures or PPAs, that's a lot more than what you can gain by scaling across the number of suites. And isn't there just one database?
It is one database, but the database is not the bottleneck. There are two things to say about the database side: it used to be flat .db files, and it's now a real SQL database, which is way better than the .db files were. I'm not claiming that all the queries are perfect; however, we profile the bad ones. Of course, if somebody who is a really good database engineer went over it, the design could probably be improved, but it's not that bad.
Maybe another stupid question: do we run the dose-builddebcheck for each source package that we try to build, or do we pass multiple source packages?
We pass all of them. What we basically do (we have ten minutes left) is make a fake Packages list which says: package "builddep-<name>", with the build dependencies as runtime dependencies on it. Then we take these fake packages and say: for all packages in this list, check installability, the same as a regular installability check. That's what we currently do. And those lists contain, for example, if we build in unstable: unstable plus unstable incoming; if it's experimental: experimental, experimental incoming, unstable, unstable incoming. So the list of packages you could build against is very large, and it's kept as separate lists, because otherwise it would be just incredibly slow.
Slightly crazy question: if one was considering the possibility of rewriting, would we in fact want to use something
like OBS, and use somebody else's system, so we wouldn't have to maintain it all ourselves? We have this horrible thing that is wanna-build; I don't know whether, on balance, maintaining that horribleness in one place might be a reasonable plan. I wonder.
It works for us now; it's difficult to make it evolve, but porting to OBS... there are also some fairly hard-coded assumptions built into it that we may well not agree with. We would have to make some significant changes. Would that be easier than a whole rewrite? No, actually. If I look at the specifics: what does wanna-build contain? There is the part which figures out which packages need building; each source package has a cycle of state changes we want to follow, so we have a state machine included. And yes, that's a state machine, nothing special about it. It used to be lots of ugly Perl code, and it has gotten better: it's now more of a proper state machine, not just lots of Perl if-then statements, which is an improvement. Then we have the interface to say "give me a package" and transfer some information about it, and we have the dependency check. And that's mostly it.
All of it?
Yeah. My point is that there are the other parts, as we talked about before, and they communicate through weird paths; for example, mails are sent to the buildds in advance, so it's kind of complicated, and sometimes the buildd chokes on a mail. I'm all for rewriting the thing from scratch, taking all of that into account, trying to use only one communication channel, secure if possible.
The problem there is that when we want to rewrite something in Debian and we don't have the manpower, for some period of time we end up doing nothing. And people have started rewriting it several times in the past.
I think it's the way to go, if possible.
I don't think it's possible right now. Do you want to comment on that?
For example, what I would prefer, if someone has a
bit of rewriting time: currently the build daemon forks sbuild away and just basically waits until it's done. What I would also like is if sbuild and the buildd checked every five minutes or so: am I still supposed to build this, or should I give up? Because that goes together with the wanna-build-to-sbuild channel; that's where we're doing improvements.
When rewriting, it's probably also worth redesigning the way they communicate with each other, trying as much as possible to make the build daemons stateless, in the sense that the build daemon asks wanna-build "okay, what should I do?" without knowing which suite it is supposed to build or whatever; wanna-build answers "there is, in unstable or in wheezy, this package to build"; the build daemon builds the package if possible, sends the log continuously while it's being built, and when the build is finished, it uploads the binaries, and then there is no state left. Right now the logs are still kept on the build daemon, and there is a separate process which has to keep state to upload the binaries. Basically, you start your job, and when it's finished you say: okay, I can forget about everything I have done; I release the logs, the package, and so on. That would help a lot, and it would also mean that if the machines crash or reboot, there is no problem any more; right now we have to clean that up manually sometimes.
Yeah. For a lot of these things, parts of the code are in place, so if someone says "I want to take this on", I'm happy about it. For example, for the log transfer, I'm not sure: just use syslog for it?
Syslog for build logs? Syslog is not made for that kind of stuff. Syslog will happily throw things away when too much is coming in. Syslog is also not made for communicating state to a configuration management system. Anyway, five minutes left, and I would
So for that topic, I would like to use the last minutes on schroot maintenance, if that's okay. I'm just saying I think it's a bad idea to use syslog for that; I have better ideas. I know.

Yeah, so, okay, schroot maintenance. I think we brought it up... well, I was just wondering: I think there was a BoF which I wasn't able to go to, so I don't know whether it was decided there that the maintainer has officially retired, or... there are a lot of things we're interested in.

The idea there is that the schroot maintainer still wants to be upstream, so there are going to be some people taking care of the packaging part, and on schroot there were some people interested. But we have to make sure they follow the buildd developments, so that for the stretch release we can say, "oh, just install sbuild from stretch", and for that we have to get some communication going between the people running the actual code on the build daemons and the people doing the development.

Okay, so now I'm wondering what I should put into the minutes about the maintenance of that part. I guess, as long as there are enough people actually doing work, it will carry on like this as a sort of team maintenance, without one person being the single maintainer.

So, I've seen someone who raised their hand, which I always prefer. You want to do a bit of work on schroot, then apply some different patches to experiment a little, and I think you would like to know whom to get in touch with, who's running that, and with whom to communicate. Can you maybe write that down for the meeting minutes?

Is Roger actually still happy to talk to us about it? Because he left somewhat unhappy with the system.

Yeah, he's still communicating with us.

Who's that?
Roger Leigh; he's maintaining schroot. I did an NMU so that we had a version in jessie with a reasonable number of important fixes in it, and the other two people have finally moved off their fork from something like ten years ago, so we put a load of their stuff back in, and they're now using sbuild to build. Suddenly everybody's on the same page, which is good; we just need to not let it rot.

Basically, I would say most of the maintenance for sbuild, as far as sbuild on the buildds is concerned, is done by ourselves anyway.

Yeah, but besides the features for the build daemons, people want more features in sbuild for running it on their desktop or on their own machines, so we have to keep that in mind. So we probably want one team that takes care of adding those features, and other people looking at the build daemons. What I wanted to say is basically that the team trying to make sure things work on the build daemons can do that better than anyone else, but I'm not sure they are the right people to do it for the desktop use case. And we have one minute left, so I think that's the last remark.

You previously just did that by forking a version, right?

Well, right now we are forking the git version, but... well, it's a fork, but all the patches are in master anyway, so we just have something like five patches left, and we are trying to get them merged. But anyway, on the buildd side people are also around, so if there's some need, there are like ten or twenty people.

And I think the git repository is well maintained; you shouldn't need to be maintaining a fork, right?
No, there are specific patches in it, like, for example, something along the lines of "don't even start the build if this conflict is not set". Okay, so that's a specific patch. The point also is that the build daemons are running the stable version; that's one of the reasons why we are forking: we just fork and try to keep the important patches on both sides. Now we might want to run the unstable version of sbuild on the build daemons, but at some point there were features needed on the schroot side too; there are dependencies between the two, and we probably don't want to have to backport both sbuild and schroot and so on whenever there are new features.

And as a topic, I think that's the last remark, because we should now end the BoF. For example, there are times when we want to add features to wanna-build, like we just discussed before, and we need to add patches to sbuild but can't: basically we want to add features for the buildds that we can't put into a stable release like this, because it's really new code, but we just need it, and then we basically need to roll out the same version of sbuild alongside wanna-build. Otherwise... anyway, time is over.

So thank you for your participation in the BoF. I will try to finish writing up some minutes of the last things we discussed, and then I will send them out to the Debian wanna-build mailing list. So thank you very much, all, for joining, and yeah, I think that was one of the almost-last DebConf events for all of us. So thank you for joining DebConf, have a good and safe trip home, and I hope to see you again. Thanks.