Welcome. We'll now have Philip Kern and Adam Barratt. They are the stable release managers and they will talk about stable release management. Please welcome them.

So, hello everyone. We wanted to talk about how we are currently managing stable, some upcoming updates to how it might be managed in the future, some proposals, and how volatile will be handled, or hopefully will be handled, in the future. So, the agenda is to talk a bit about what stable release management is in general. What are we doing? What's the output of it? What are the tools we use? How can a random developer look at what we are doing and how we are doing it? The second thing is point release intervals: what we are planning on doing for squeeze, what we planned to do for lenny and etch, which might or might not have worked out that well. Then policies: what can go into stable, update-wise. And then I hope that you have questions and comments about how we can do it better or differently than we have been doing it, because we try to be open-minded about it. If there's something we can do to improve stable, especially from a user's perspective: we don't always know what users expect from stable, so it's interesting to know what people expect from us.

So, the first thing is: what are we actually doing? We are updating stable, which means that there are certain defined dates where we copy over packages from a suite called proposed-updates into stable proper. This also means that at this point we are pushing packages, not security updates, out to the users. It also means we are building new CDs people can use to install from, DVDs and other installation media. So, what we do to prepare is to review security advisories, actually look at the diff (most are accepted as-is, though), and then review other fixes targeted at stable, which means requests we get on debian-release, or at least uploads that go directly to the p-u NEW queue, which should have landed on the list beforehand.
And for the actual point release, we prepare an announcement which people can see on debian-announce and on the website. We coordinate with the stable kernel team to get the kernel updated. We coordinate with the Debian installer team to get the recent fixes into the Debian installer and maybe roll a new Debian installer release into stable. And then we need to contact the various teams that are involved in the point release, which are the FTP masters, press team, security team and CD team, to set a date for it, because it's quite an act. It requires many people to be present and that sometimes involves quite a bit of management. Of course, when we are accepting updates into proposed-updates, we also have to be sure that we have all builds available for all architectures, so that no architecture actually falls behind. So, we mostly close the suite a week or two before the point release to ensure that we have all builds available. And the same of course goes for oldstable while it's still in the archive.

So, what's proposed-updates? Some people didn't seem to know back then that the suite is nowadays reviewed. Every package that is in proposed-updates has been reviewed by us. There's a separate NEW queue for it, which means that we are actually looking at what we're accepting, and it's not random stuff some maintainer decided to upload. And we actually have to decide to accept it at some point. Yeah, I will come to that later.

So, what we are actually using for managing this is a queue viewer that's running on release.debian.org. It's even linked from there. It creates a diff against the current version, which means either the stable version or the proposed-updates one, and it also does automated checks for version and installability problems, to ensure that we don't create uninstallability in stable, because we don't have something like britney ensuring it for us.
We are of course tracking missing builds, and that tool also helps us to create the point release announcements. And it also lists to-do items. So, I've opened it here for people who wonder about that. It's actually reachable from the release.debian.org site under useful links. So, you can see to-do items, what's still left to do for the point release; some packages that were not yet accepted into proposed-updates, together with the corresponding diffs; problems and missing builds, if that's the case; and the ones that will be part, or will probably be part, of the next stable release. There are sometimes cases where some packages that are not ready yet are skipped. That's mainly due to the fact that packages that are not security updates are autobuilt in proposed-updates, which means that for them to get built, we need to accept them. And if they are not up to date at point release time, we need to skip them to have them actually in sync.

So, what I am really curious about is: who is using proposed-updates? Who is using stable? So, where do you use proposed-updates?

I was told at one point, probably years ago now, back before you had proposed-updates NEW, that there used to be all sorts of random cruft that would show up in proposed-updates and then sometimes disappear again. So, for the systems on which we run stable, and where proposed-updates would therefore be interesting, we didn't want to have that cruft appearing and disappearing. Now that I know that you are checking every package that goes into it, that makes it much easier for us to start using it on a wider basis, so that we can test updates for you and let you know if they don't work. So, now I have an action item as soon as I get back to Stanford: to point all of our test and dev systems that are running stable at proposed-updates by adding it to the sources list.
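As a concrete illustration of the step just mentioned, enabling proposed-updates boils down to one extra sources.list entry. This is a hedged sketch: the mirror URL is an assumption (any Debian mirror works), and the entry is written to a scratch file here so the sketch is runnable without touching a real system.

```shell
# Sketch: enable proposed-updates on a lenny test machine.
# On a real system this line goes into /etc/apt/sources.list instead.
SOURCES=$(mktemp)
cat > "$SOURCES" <<'EOF'
deb http://ftp.debian.org/debian lenny-proposed-updates main contrib non-free
EOF
cat "$SOURCES"
# Then, on the real system:
#   apt-get update && apt-get upgrade
# Versions in proposed-updates are higher than the ones in stable,
# so a plain upgrade pulls them in; no pinning is needed.
```
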
That would be cool, because we are not really sure who is actually testing the things in proposed-updates. We really hope that they don't break anything; that's why we review it, after all, and we do some tests on them. But in general it would help if more people that can afford it, meaning on a machine that doesn't need to be really, really stable, activate it and report back if there are any problems.

You already almost replied to my question, but am I supposed to use proposed-updates on production machines? No. So, I need a testing system, I mean a system that I can use for testing. So, this may not uncover all the possible problems, actually. Right, true, true. Well, it's basic update testing, or upgrade testing, and testing whether the result still works. The problem is that if it's not done, it might happen that something broken gets copied into stable, and at that point we cannot easily update it currently. So, it would really help us. I'm talking to developers here, so it's also common that they have something to test with. But for real production machines it does make sense not to enable it. Any more remarks about how you think we could possibly test these?

You said you do some installability testing and I'm wondering what you're using for that? Installability is mainly a distcheck. Okay. What about something like piuparts? We could try that. What I heard about piuparts is that some packages fail on it, which is also the case in unstable, and that it's pretty hard to get a bug distilled out of it. That's at least what Lucas said. But on the other hand I think he also either wrote or got a tool that does installability testing, which he uses for unstable and for lenny-to-testing upgrades, that might be more useful, and I think we will look into that. Yeah. That will of course catch the obvious installability problems.
It won't catch anything that depends on the runtime behaviour of the program, or on the actual patch if it's not to the Debian packaging. Yeah. So, it's a bit of the QA of proposed-updates we're wondering about, and maybe at some point we can get to a state where people could report that they have actually tested it. Currently we are mostly relying on the maintainer testing it, probably, or on the security team for those updates which come through security.

How do you want to receive reports? Is there a tag or something we can use in the BTS for this? If we are running these proposed updates and we run into problems, how should we report them? You can report them to the BTS. Normally they would of course be attached to the package, and we would appreciate a CC, which is mostly enough for us to keep track of it, so that we see that there's actually an issue in stable. Just make sure it's a version in stable, and that it mentions that it's actually in stable that you're seeing that bug.

Are you interested in using "affects release.debian.org" on bugs that are filed against the package, as a way of getting them to show up in your bug list? Again, please? So, the BTS supports "affects", which lets it list a bug under multiple packages. Do you want us to say "affects release.debian.org" if it affects a proposed-updates package? I think that would be nice, yes, if the BTS supports that for unversioned pseudo-packages. I think it does. Okay, I guess I will note that.

Okay. So that was the stable release management in general part. I guess we can do questions on that later if there are any. The next point would be point release intervals. We are currently looking at doing stable updates every two months and oldstable updates every four months. That doesn't currently apply, and we screwed it up with etch, its update being a bit too late.
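The bug-reporting workflow discussed above (stating the stable version and adding an `affects` marker) can be sketched as a message to the BTS control server. The bug number and version below are invented for illustration; only the command names (`found`, `affects`) are real BTS control commands.

```shell
# Sketch of a control mail for a bug observed in a proposed-updates
# package: record the affected stable version and tag the bug as
# affecting release.debian.org so it shows up in the SRMs' bug list.
# Bug number 123456 and version 1.2-3 are invented.
MSG=$(cat <<'EOF'
To: control@bugs.debian.org

found 123456 1.2-3
affects 123456 + release.debian.org
thanks
EOF
)
echo "$MSG"
```

The same commands can also go as pseudo-headers at the top of a normal report sent to submit@bugs.debian.org.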
Actually, the fixes that people pushed in after the lenny release were delayed until etch was nearly end-of-life, which wasn't really great, and we are working on improving that by setting up a timeline for when we actually intend to do the releases. It might be that point releases will be more frequent immediately after squeeze is released, which is also what fits with past history. An interesting note: when you look at the woody release, which is very old, there were point releases every year. So this should still improve the situation, and of course we have to ensure that we are not doing two point releases at the same time, or doing a point release and a release at the same time like we did with lenny and etch; that messed up the CD team's infrastructure a bit.

So, what are we doing in between those point releases? There's testing, sorry, stable proposed-updates. There's also volatile. So there are some packages which require timely updates outside the normal point release schedule that are not security updates. One is tzdata, which ships time zone information. Some countries actually think of introducing daylight saving changes one week before they actually do it, like Argentina. So it might make sense to push that out earlier, and it's not a security update, so it's not appropriate to use the security channel for that. And there's this other package, some people disagree with its use, but it is still used by quite some people, which is ClamAV. This package is interesting in that it did in the past bump its API on bug fix releases, because upstream thought it would be a great idea to bump the API on bug fix releases. And sometimes they also deactivate older versions through their signatures. So we do have to address this in some way, which we currently did through volatile.
And then there were also the things that led to the introduction of volatile-sloppy back then, which were Pidgin and libpurple updates to cope with instant messaging protocol changes, like ICQ; and for MSN I think a security update even deactivated the whole protocol. So we also need to address this in some way to keep stable usable.

So, to get back a bit to why volatile was introduced, at least as far as I could find out, because it was created around the sarge release, I think: volatile proper was a set of packages everyone can basically update to, which meant clamav-data back then, and tzdata, which are safe to upgrade. Now clamav-data is something we are going to phase out, because there are some concerns with it being built, signed and uploaded automatically, and there are other ways, with proxies, to handle those specific issues. Volatile-sloppy, which we currently don't really use, was introduced for packages that actually need larger version bumps to get useful again, which was, I guess, the Gaim IM client that needed this.

So what was done back then was a separate team and a separate infrastructure, with its own archive host, run completely separately from everything else. So what we have now with volatile is that it's run by a single person. It uses an ancient dak version that has no support for source format 3.0. Granted, the security archive has the same problem, but we hope to get that solved, or rather the FTP masters have to solve it. And currently there's no easy ability to copy volatile builds over into proposed-updates, which would make sense for tzdata, because we want to push it into stable anyway. The only relief is that the mirroring is actually handled by the same people that mirror FTP master. So that's one burden less, because we used to have a whole unofficial mirroring network for volatile, with hosts that are not the official ones. So yeah, it was a bit tricky.
So the proposal is actually to run it on the normal infrastructure, which means on FTP master. Some work has already been done on this; it's just not exported onto the mirrors yet. The idea is to use it as a suite to push updates to users more quickly than point releases can. Whether it's called "updates", like on other distributions, or "volatile" is basically a bikeshed issue. We will see what we do about that, but we will encourage users to activate it by default, which is, I think, also what the Debian installer will do. And then we copy into volatile from proposed-updates, and the volatile bits will also be installed into stable at point release time. The goal is of course to keep stable as usable as possible. This does mean for some packages that we will integrate new upstream versions into stable, but I will come to that later.

So now, that's a bit of a topic switch. I guess some people remember the old rules for stable updates, what had to be fulfilled for a stable update to be accepted, which was codified in an interesting way. Basically it said: if you fix a security problem, you need to go to the security team; the security team needs to accept it and do an advisory for it; then it's accepted into stable. Nowadays we have the problem that for some security issues the security team doesn't deem it critical enough to publish its own advisory, and points people to us so that we push it through a regular point release, which is basically shifting the workload, but well, we have to cope with that. Then, the package could fix a critical bug which can lead to data loss, data corruption, or an overly broken system, or the package being broken or not usable anymore. We are also currently opting to accept somewhat less critical bugs that are still affecting and annoying users to some degree. Then there's the bit about the stable version being uninstallable, which obviously needs to be fixed.
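For maintainers wondering what an upload against those rules looks like in practice, here is a sketch of a stable-targeted changelog entry. The package name, version, bug number and author are all invented; the real points are that the changelog targets the stable distribution and uses a version number below the one in unstable, conventionally +<codename>N.

```shell
# Sketch: a changelog entry for a stable update (everything invented).
DIR=$(mktemp -d)
mkdir -p "$DIR/debian"
cat > "$DIR/debian/changelog" <<'EOF'
hello (1.2-3+lenny1) stable; urgency=low

  * Backport fix for data-corruption bug #123456 from 1.3-1 in
    unstable; minimal diff against the version in stable.

 -- Jane Developer <jane@example.org>  Sat, 07 Aug 2010 12:00:00 +0000
EOF
head -n 1 "$DIR/debian/changelog"
```

The +lenny1 suffix keeps the version above stable's 1.2-3 but below any 1.2-4 or 1.3-1 in unstable, so machines upgrading to the next release still get the newer package.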
And there's the bit about the released architectures having to be in sync, or an update basically getting all architectures back in sync, which is what we do if we notice that things are out of date, introduced by some updates; then we maybe schedule binNMUs or something. This has happened in the past. So there was this rule about what needs to be fulfilled for an update to be accepted. And there was also the bit about packages which will most probably be rejected: packages that fix non-critical bugs, packages that fix an unusable minor part of a package.

So what we are now settling for is a bit different. Of course, we still let in most of the security advisories, unless we really know they are broken. And what's likely to be acceptable is if you fix a security issue, or fix a bug of at least severity important, or fix an uninstallability or FTBFS bug, or bring architectures back in sync. FTBFS bugs aren't directly user-visible in a stable release, but it makes sense that the security team is actually able to recompile packages for their updates, so we also accept those fixes. Then we try to apply common sense to those updates. I mean, we are also involved in testing updates, so we regularly review code, and mostly you can see whether a change is sane or not. It of course makes sense to have minimal diffs to review, because it makes our life much easier. Then, it's still the case that if we update a package, it must have a reason, of course. It's not that the wishlist bug of your choice can be fixed in stable, because we don't want to risk breaking anything. So if there's a stable package that should get fixed, and you're, for example, the maintainer of it, don't hesitate to contact us if you think that issue should really be fixed in stable. Normally, with common sense, we can see whether it warrants an update or not.

So there are some packages that will get newer upstream versions. We settled on that in the past. This is probably not an exhaustive list.
I don't know what will happen in squeeze, but what we did is: PostgreSQL gets newer bug fix releases, because we trust upstream not to screw it up; that's actually Martin Pitt working on it most of the time. ClamAV, as I already said. tzdata also basically gets upstream releases. And the Mozilla-related packages, which are also pushed through security as new upstream versions, because it's unfeasible to actually backport the security fixes.

There's still a problem, though: how to mark packages of which we know that they will get larger updates. There's a mechanism currently in use that nobody knows of, I guess; nobody in this room, that is. There are tags used, a limited-support tag and an unsupported tag, in relation to security. Does anybody know that packages are tagged like that? One person. Interesting. So there was a tool planned when this was introduced, to actually list the packages you have installed that are unsupported, which never got implemented. So it's basically tagged through the security tracker and nobody sees it; well, sometimes during a release cycle it's announced, of course, on the security list, but there are also some affected packages which might go into a release and which are not supported. So that's a similar case we should solve in the near future, to actually get the information to people.

So I would really like to know what you are expecting from stable. It is, of course, hard to get; we had the etch-and-a-half thing for the driver updates. Those are, of course, a bit harder, and mainly rely on people doing the work, which we currently lack, I suppose, and on backporting stuff into the kernel. What was done for lenny was backports of some drivers into the lenny kernel, instead of a full lenny-and-a-half release. But it would be interesting to know what issues you are having with stable. You're currently the only one. So, I will say that I'm torn.
The etch-and-a-half release, from the perspective of a stable user, was nice: we got more driver updates that could be reasonably backported. And we missed that a little bit with lenny, in that we have some new Dell servers for which we're running the backports kernel, because the firmware for something or other was just not available in the lenny kernel. But that being said, from a package maintainer perspective, the etch-and-a-half kernel was really hard, because there's actually a whole bunch of out-of-tree kernel modules in Debian, and almost none of them got rebuilt for etch-and-a-half. And I maintain one of them, OpenAFS. The way that kernel development tends to work, and the way that upstream on those out-of-tree modules tends to work, you're gonna need significantly newer upstream releases of the out-of-tree kernel module in order to work with a new version of the kernel, and it all turns into a big mess. So I think it provides a more consistent stable experience to not have that etch-and-a-half style release, but I do think we should realize that when we go that route, it means that some of our users will have to use the backports kernel.

Right, right. I mean, there's also the issue of X drivers, where I don't know how much is currently in backports, or if anything at all. I don't either. I mean, we mostly use stable on servers; I actually run testing and unstable on the desktop side. Yeah, that's something we need to address at some point, I mean, out-of-tree modules. If the maintainers would actually bother to somehow provide a co-installable version of them, or tell us what to rebuild, it might work, but of course, you need newer upstream versions. And with OpenAFS, the problem is that it's a whole lot of software; there's a user-space part and a kernel part, and it gets really hard.
I mean, one hope is that maybe that kernel version gets more backports, but then it also depends on the maintainers doing them, probably, or fetching the patch somewhere. We didn't settle on anything yet for squeeze; we just said we won't do it. Any other questions, any other needs you have, any concerns you have about what we are doing? Did we actually manage to break your system already?

First, I wanted to say thank you for helping to keep stable usable. I use it in a lot of places, and one of the things I find is that as time goes on, I end up backporting more and more packages and uploading those to backports. One thing that I had heard about a while back, but I don't know the status of, and maybe this is relevant to the work you're doing: I think dpkg-shlibdeps was gonna be made more intelligent about how it determines dependencies, such that it minimizes dependencies, so that you might actually be able to use packages from testing and unstable on top of stable if all they depended on was an older libc or something like that. Does that help you out at all with stable proposed-updates, to just be able to pull things in faster?

Not currently, because we don't have the technical ability to actually pull things from there, except at point release time, where we sometimes manage to push versions up the tree, which means push some things from stable to testing and unstable because they weren't updated there yet. The other way around only happened with debian-archive-keyring, I think, where something was uploaded to stable directly. Currently it doesn't help us. I mean, you see many build logs where it tells you that a dependency is actually useless, but the program still links against it. I guess you're referring to symbols files? Yeah, so I guess that's just a matter of fixing packages in the archive to allow for that, before that will be useful.
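A sketch of the difference being discussed, using an invented library name: a strict shlibs entry pins every dependant to the current upstream version, while a symbols file records the version in which each symbol first appeared, letting dpkg-shlibdeps compute the minimal dependency a program actually needs. Both fragments below are illustrative, not taken from a real package.

```shell
# Invented library "libfoo1" for illustration.
# A strict shlibs entry forces dependants onto the full current version:
SHLIBS='libfoo 1 libfoo1 (>= 1.2.3)'
echo "$SHLIBS"
# A symbols file records when each symbol appeared, so dpkg-shlibdeps
# can pick the lowest version that provides the symbols actually used:
SYMBOLS=$(cat <<'EOF'
libfoo.so.1 libfoo1 #MINVER#
 foo_init@Base 1.0.0
 foo_frobnicate@Base 1.2.0
EOF
)
echo "$SYMBOLS"
```

With the symbols file, a program using only `foo_init` ends up depending on `libfoo1 (>= 1.0.0)` rather than on the latest upload, which is what would let testing/unstable binaries run on stable more often.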
Also, some packages, and that's now from a testing point of view, call dh_makeshlibs -V, which causes packages to get a shlibs that matches the current upstream version of the package. Those are really annoying, because it's just guessing whether there are new symbols at that point. So symbols files would help with that from a testing point of view, and that would help significantly with migrations. From the stable point of view there's not that much we can currently do. We could do installability checks, but then you're stuck with testing all architectures, whether they picked up the same dependencies, and then putting it in, which might work, but then you might have to binNMU some of them, so it's not really easy at that point. Well, and you guys still have to review all the changes no matter what anyway, so it would just maybe speed things up a little bit. Yeah, actually, if you can take the patch and just put it with an appropriate version number into proposed-updates, that also works well. If you have to review it anyway, okay, it gets rebuilt, but that's a price you pay.

So, I use volatile on my stable boxes, and so also: thank you. ClamAV is a lifesaver. I was just curious what your philosophy was on actually putting more stuff in there. I mean, I think about, if we're talking about content and not code, things like CA certificates. Like what? CA certificates, so you can get fresh CA certs on the box. Just curious; some of those things seem like they could be useful, but maybe I'm not seeing the whole picture. That's certainly useful. The problem with ca-certificates is that its packaging is fragile, to say the least. So perhaps it could be split up. Yeah, it should actually be split, and there should actually be a sane solution where all packages use the same certificates, instead of Mozilla using its own certificate store, Java updating it from ca-certificates, Konqueror using its own certificate store. Yeah, I agree on that.
There's some infrastructure work that should be done on PKI. It would be an obvious candidate if it weren't that fragile, and I think we did do one update of ca-certificates for volatile, actually. Yeah, that seems worth looking into. But the general philosophy of it: I mean, some packages are really just content. Which ones? That would be interesting to know. I mean, clamav-data is a pretty special case, but... I'm thinking about iso-codes, which we are maintaining, which is basically content, and I think the packaging is correct. What's using that data? I think GNOME is using that data. We have some input from them, but I think we mostly don't know who is using that data. We would need to check the dependencies, actually. I mean, especially if users request an update of such data packages, or developers for that matter, we can do that. They seem to be little risk at that point, but yeah, there needs to be some use to it, of course. I mean, with tzdata, in some instances it's pretty critical to have current timezone data. I wouldn't mind such an update.

Mostly the content of iso-codes, for instance, is made of translations. So that brings up another topic: it could be interesting, how about some updates to localization stuff for stable, if that happens to be doable without upgrading to a new upstream version, for instance? Right, so are you referring to program translations or to debconf translations? Actually, I think neither of them is doable with what we currently have and how the packaging is done. That was the idea underlying the TDeb proposal, but nothing is ready for either squeeze or squeeze plus one or whatever. I think TDebs would help with that. This is just ideas popping into my mind, things that could easily be updated for a stable release as soon as the infrastructure is ready for that. What we're currently doing is accepting some translation updates when the package is updated anyway.
So if there's a bug to fix, then if you ask us, you can also squeeze in translation updates. If there's a bug fixed and there are some debconf translations, for instance, then that would be acceptable for an update. In the general case, we don't have the infrastructure to do it for all packages.

I have a minor bit of almost bikeshedding, but not quite. So, historically we've not had a particularly well-communicated, standardized, written-down standard for versioning your uploads to proposed-updates, and currently we're basically using suite names, mostly appended after a plus. The problem with the suite names, of course, is that they don't guarantee a sort order, and when we go beyond squeeze they're really not gonna guarantee a sort order, because "s" is kind of late in the alphabet. Maybe it's time to switch to version numbers, now that we have a stable version-numbering policy; it used to be that we weren't sure what version number the next release was gonna be, but now I think we've decided we're always gonna increment the major number.

We can of course communicate such a policy. Indeed, it's nice that "squeeze" is greater than "lenny" and "etch" was less than "lenny", so it actually worked for those three. Basically, what we currently use is: if it's the testing or unstable version basically backported, because there was just a maintainer upload or NMU in between, then we use tilde plus the suite name; and we use plus if we add patches. Sometimes it even gives collisions with the security team, so... And old-style binNMUs in unstable, which we love. Old-style binNMUs are our pain. Because, yeah, because x.0.1 sorts higher than x+lenny1. So thankfully... sorry, I had to interrupt you.
So, thankfully, I think binNMUs are now using +b uniformly, which means that if we sort things out between stable updates and security updates, we either coordinate on using the same version number, or give security something that sorts earlier than the stable updates. But it's probably easier to coordinate on the same version number, because you guys mix: I mean, sometimes proposed-updates is newer, sometimes security is newer, and it kind of drags back and forth; it's a little bit hard. But yeah, if we can sort that all out, and give all of that stuff something that sorts after binNMUs, I think this all starts looking more reasonable. Yeah, yeah. And I think there was a proposal circulated along those lines that even included handling the binNMU-of-a-stable-update case and the whole thing. Mostly they were good ideas; they seemed to solve all the problems, but we never declared one of them official, and therefore no one knew to follow it. So I think it seems like now is a good time to do that. Right, so I will note down that we need to solve this. Actually, what security does, if there's an update in proposed-updates, is they usually merge it in. So you can actually get proposed-updates changes sometimes earlier than a point release. But that's mainly to save work on one side of the pond.

Right. So we still have time left, but there don't really seem to be questions, and I also see... oh, there's the Tor package. Right. That's also a package that needs updates to still provide anonymous access to the network. I could imagine updates to that, to be honest, but we would need to be contacted by the maintainers about it. Didn't we actually fix the specific issue? The key rollover one, I'm fairly sure we did. I think we... I know weasel wanted to upload packages. Indeed. I'm fairly sure we did fix that specific one. I think so too.
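Going back to the versioning discussion above, the sort-order behaviour can be checked directly with dpkg. The version strings here are invented examples of the patterns mentioned: old-style binNMUs appended .0.1 to the revision, new-style ones append +b1, and stable updates append +lenny1. The block skips silently where dpkg is not installed.

```shell
# Sketch: compare invented version strings with dpkg's ordering rules.
if command -v dpkg >/dev/null 2>&1; then
    # Old-style binNMU suffix .0.1 sorts ABOVE a +lenny1 stable update
    # ('.' sorts after '+'), so the stable update would be shadowed:
    dpkg --compare-versions 1.2-3.0.1 gt 1.2-3+lenny1 \
        && echo "old-style binNMU shadows the stable update"
    # New-style +b1 sorts BELOW +lenny1 ('b' sorts before 'l'),
    # so the stable update supersedes the binNMU as intended:
    dpkg --compare-versions 1.2-3+b1 lt 1.2-3+lenny1 \
        && echo "stable update supersedes the binNMU"
fi
```
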
But in general the Tor people are also folks that want to get new upstream versions in, for strong anonymous access and whatnot. So yeah, if we are approached, we might be open to that.

There was a question earlier on over there: so, can we expect more severity-important bugs to be fixed in stable? Some maintainers don't seem to have much interest in that. I can offer my opinion, which is that I think it's gonna depend a lot on the maintainer, because it can be a fair amount of work to backport an important bug fix to the stable version. I mean, I know I feel kind of guilty, because DSA actually asked me to do that for one of my packages and I just never ended up having time. But I've done this process and it works great. So certainly, for any Debian package maintainer who wasn't aware of it: stable updates are great if you have a bug like that that you wanna fix.

I mean, there was this bug on kernel-package which basically said you cannot build kernels, I think, greater than 2.6.35 or so on stable, and the maintainer basically said, I can't do anything about that, that's the stable release team's department, without actually CCing us. So we didn't see that bug, and somebody actually replied, please contact the release team at that address, and he didn't bother to. So it really depends on the maintainer, and it's somewhat different if some random guy shows up providing a patch than if the maintainer actually says: okay, I did this, I know it works in unstable, it was tested in unstable, I'm pretty sure the backport will work; and then we can do it. Yeah. I think I can confirm that; we did that for Samba a few times already, I think four or five times, fixing important bugs and asking for permission in some way by posting a patch. It worked very well. So I think the policy you explained, about important bugs being allowed to be fixed in stable, is already working. As far as I've seen, for some time it's been working very well.
Not all maintainers know it, though, despite it being announced on debian-devel-announce. We need to say it louder: it's working. It's actually sometimes even more reassuring than updating a package in unstable, because you get all these people reviewing your patch and making sure it looks right. We sometimes have the case that the patch in question turned out to be uploaded to unstable only later, but yeah, it should be tested first. If the bug is in unstable too, then for the unstable patches we would really prefer that you have actually tested them in unstable first. And maybe some aging applied, like we do for testing. Yeah, mainly because if it turns out to have huge issues, it's much easier to just re-upload to unstable. We have had cases in the past where people have only noticed stuff after we've released a point release, and then people have turned around and said: that broke something. Which we did, definitely.

Okay. So you're all happy about stable otherwise, or are you just ignoring it? Okay, any more questions? Nothing. Oh, there's one. As long as there's time, I'll ask. Some of the things I find myself backporting the most: I still actually run a stable desktop, and the things that I find myself backporting are things where the technology is moving at a pretty fast rate. So things like VoIP clients, like Ekiga or Twinkle or some of these other things. Or even web frameworks and that kind of stuff. Yeah. In a lot of cases, there are a lot of library dependencies, and I end up backporting a dozen packages or something to make it work, in order to get this new functionality. But a lot of that's desktop-specific. Is that something that volatile could address? You mentioned Iceweasel and Icedove as things to get in. There's a bit of a technical necessity to do it there. And it's not new branches; it at least remains in the same branch, which you probably can't say of that other software.
I mean, I guess it's unavoidable for users to either use backports for some packages, and maybe we make that official in some way and also get more QA on it, or to install or backport them by themselves. I think we want to provide a stable system, which means the basic services will all be there and they will all be supported. But if you're running a website and you need a new version of a web framework, we cannot accommodate that in the same way, sadly.

That's the question. Right, so the question is whether the volatile takeover is only a proposal, or if it's going to be there for squeeze. I really don't want to maintain the ancient dak anymore. So the work that has been done was creating a separate suite that we will be able to manage. So I think, with the help of the FTP masters, it will be doable for squeeze. I don't know how it will be called, and whether the installer will enable it by default, but I'm pretty sure it will be there for squeeze.

Yeah, I just wanted to remind people that lack of hardware support is also considered a severity-important bug, and also acceptable for a stable release. So if you have a case where you just need a new driver that's not backported, or something like that, feel free to file a bug; we're definitely interested in enabling new servers, new desktops and stuff like that, if we can do so without risking regressions. And that's actually the guy that does most of the kernel stuff in stable. So, anything else? Then thanks for coming. That's it.