Okay, so welcome to the etch-and-a-half session. Some things I want to talk about are: what exactly are we going to update, when are we going to update it, and who is going to be doing each part of the work. Some of the other things other people are going to be talking about. Do most of you know what etch-and-a-half is all about? For the record, the idea is to add a new upstream kernel, possibly a new X server, and maybe other things come up, but basically it's new hardware support as an update to etch.

New features, or just new hardware support?

New hardware support. New features, I don't necessarily know about — only if they are very, very interesting, but don't count on it. So basically it's a no. Of course, if you add a new module and you load it, a new feature is available; if it's not loaded, it isn't. In that sense I would say yes, you could do it, but I expect in most cases it's just a no. At least that's my personal opinion at the moment. I think that's a good place to be. Although you will get screwed over with new versions of drivers if they don't work.

Basically, one of the things is that we don't want to drop any existing drivers at all. So we are not going to drop the existing kernel, we are just adding a new kernel, and we are not making automatic updates. So if something in the new kernel is screwed, people can go back to the kernel they used to have that used to work for them. Which is of course not what I really want, but it's still better than totally screwing them.

What about the installer?

I don't think there's a final decision yet.
Part of the job is to figure out what we should do with the installer. One idea is that somehow we do a new installer disk, or a new disk with the old installer plus the new kernel, or, if you need the new installer bits already because your hardware is so broken with the old ones, you use the lenny installer to install etch, or whatever. It's not yet decided, and we can't really decide it at this moment, because we need to know more about what's in the new kernel and how widespread the hardware is that can't even use the current installer.

I think that's my critical desire today: to make sure that we have a stable install disk that I can install on Intel hardware that has been built in the last year, because the current stable kernel can't boot on that. You just can't install on one.

By the way, doesn't it boot? AHCI? Really? The SATA drivers? Oh, okay.

The lack of E1000 and AHCI drivers. I'm talking about the current version: the current stable installer cannot boot on these machines. They might have E1000, but they don't have updated AHCI drivers. AHCI moved from per-device IDs to class-based matching, which means I can take the new AHCI driver and run it on hardware whose chips it has never seen. That happened around 2.6.21, I think. I'm not exactly sure which kernel it was. It's a self-contained change; I think it can be easily backported.

Okay, so that's something I'd like to do earlier. I think we've talked about doing etch-and-a-half between nine and twelve months after the release of etch, so starting this fall or most likely this winter. But if there are things that are safe to backport into 2.6.18, I'd rather do that sooner than later — things that you can show don't break other hardware. One of the things that we definitely will add in 2.6.18 are things like new PCI IDs, for example.
Just so that we can separate what we want to do now from what would be done for the etch 2.6.18 kernel. Have you read the latest LWN article on this? Basically the other vendors are having the same problems; there's a discussion among the kernel developers about it. Basically Red Hat is shipping, within their enterprise lifetimes, kernel updates with heavy backports. So they are shipping 2.6.9 now and constantly backporting stuff, and this has become a burden. So the discussion now is that maybe the way to go is to ship the latest upstream kernel, as is. So it may be well worth considering what other distributions do with that. We're not the only ones to have the problem.

Yeah, we'd have a monster on our side, but it's still a problem. I think of the big transitions they did over the last years: Intel has gone from PATA to SATA, and that makes it impossible to install. That's kind of a showstopper for a stable distribution, when we can't ship a working install disk. So I think looking at other distributions and seeing what they're planning to do — Maks on the kernel team wants to look at what kernels other distributions are using — is going to be a good guiding point.

But specifically, is there hardware besides the Intel stuff you already know about? Is there other hardware that's become popular since the release that you already know about?

I think the NVIDIA ones seem to be among the ones that have changed, depending on which chipset you're using. But does that mean you can't install on the hardware, or does it mean you can't really use the hardware? You can install on one, you just don't have the device. Which is pretty bad, but it's not as bad, because you could for example take a new kernel on a USB stick and install it later, or whatever. It's not something I like, but it's not as bad, I would say, as the Intel ones. It's just a bit less bad.
But with 100 machines, it really sucks. I agree.

So the NVIDIA drivers — do you know what the details are? Is it new NVIDIA PCI IDs? No, it's a whole new driver. There are new motherboards out there where, if you're not running the most recent kernels, it's just not working.

So maybe we should come up with a more general process, like some way of tracking new hardware that a lot of people are using that doesn't work, and trying to figure out an approach. Backporting everything to 2.6.18 may not be the best approach. I think it might be. So maybe it would be worth having something that lets people track new hardware, or do we just go by: let's go to a new upstream kernel and figure out what we're doing otherwise? It depends. If you're talking about the main controllers, things that go in through a new upstream kernel might be the better approach.

Well, like you were saying, we're never going to drop the 2.6.18 in etch. That'll always be there and be the default. The new upstream kernel would just be on the side. I think it's a question of: would it be on the install media then? That's to be determined. What problems are we talking about with those? How easy would it be to keep the kernels in sync? Do we run a risk with having a new upstream and the old kernel? Right, how easy would it be to keep them in sync and such. And if you have a patch where you can make a change, like the PCI ID stuff, you can do a small backport and install it with the old kernel.

So, Frans, to explain more about the reasoning for the lenny installer there: if the issue is not wanting to maintain two branches, or whoever has to maintain two branches, is it still reasonable maybe to build a copy of the lenny installer branch with the etch kernel? I mean, that can get really tricky. We do try to support old versions of the kernel for as long as we can.
It's really hard to say what we're going to have to do if there are too many changes, and whether the way we support the modules in the old process still works.

Another thing you need to consider is that newer kernels many times require newer userspace, like udev or something. Yeah, we had that problem with sarge, which didn't have the needed udev; I'll be testing soon. But you may need updates to other things. Currently the new kernels work pretty okay on etch, so it's not really a problem right now. So as long as that's not going to be a problem in the foreseeable future, for this time around you don't need to worry about it. We need to look at it on a case-by-case basis, I think, because every time it can be totally different. And of course, if we just notice that there's been such a change in the kernel, it's not set in stone, because the kernel packaging can in some cases just revert that change. Yeah, it's really a case-by-case basis. When you're near the time where you're selecting a kernel, then you can say what you wish to do.

Okay, so I guess all we can really do is figure out what exactly we can't do. What do we know we can't do today? So, for example, would we want to do this without vserver and Xen support? Do we consider that a must? Do we have vserver and Xen in our kernel?

Yeah — could I just ask you to speak a bit slower and louder today, because I'm really not too well still; normally I can best understand American people who speak slower and with a somewhat lower voice, like Troy, for example.

So yeah, my question was trying to figure out things that we wouldn't do.
Is one of those things a kernel without vserver — too many negatives in there — if we do an etch-and-a-half with a kernel that doesn't have vserver and Xen, do we consider those a must?

I would not like to drop the virtualization techniques, especially on newer hardware which you buy in order to virtualize; you want to have those techniques available. I think it might be a better approach to get rid of some architectures, because Intel is the architecture which mostly gets new hardware, and maybe we can drop some of the other architectures for the etch-and-a-half kernel.

Actually, coming back to vserver: if it's only a small amount of extra work to have vserver, I would say yes, we keep vserver, of course. But if it turned out that we basically can't do vserver, because there's no vserver development happening in the next year — which I don't expect — then we would have to weigh what the difference means for us, so it depends. We'd need to decide at that point, but I wouldn't say dropping vserver is a total no-go. It's a thing I would like to avoid, but other things might happen. You don't want to have big regressions — that's not even a question. Vserver is a question mark; it's not just another flavor: if we ship a vserver kernel which doesn't work, that would be a regression. The same if we don't provide vserver kernels at all — a regression in the feature set. I haven't thought much about the future of vserver. There's no doubt that I like vserver, and I use vserver myself, so I'd profit from it.

On dropping some architectures: it seems to me that there's a decent chance that some ports don't have the same need for vserver and Xen anyway. To be honest, I've never really worried too much about the other architectures being there. If the newer kernels don't work very well there, then you still have the older one there.
So the problem may just be: if it doesn't build for some other architecture, we might consider dropping it — or is there another reason you're thinking of? I'm just thinking: is it even worth it? It seems like most of the problems are on the Intel hardware, and the others are a much smaller problem, so is it even worth the effort to have to release everything, with those kinds of problems?

There are two reasons why we would like to have the same kernel everywhere. One is, of course, to be able to say we are doing things consistently over all architectures. And if there are no issues, why should we drop them? It's probably even easier to just build it if it works. My question is just what's more important: if a problem comes up, do we just drop that architecture, or do we make the extra effort to fix it even though it might not be worth it?

What I would like to know from the kernel team is: how many issues do we have for these other architectures? Do we have to manage them specially, or is it mostly just working? Because I really don't know; my view from outside the kernel team doesn't give me much input on that.

Basically, the build failures are already visible, because you have the kernel in unstable anyway, so the build failures are going to show up soon enough, I think. Beyond build-time failures there are bugs, regressions and such. If someone has a kernel that builds but doesn't work, someone with the hardware can always fix that. I mean, the build-time failures, yeah, we fix them anyway. And some of the recent build failures were toolchain issues. I still think we'll be building with the etch toolchain. So we're going to be building the etch-and-a-half kernel with the etch toolchain, just to make it rebuildable.
It might be a good idea to push that kernel out somewhere before the point release, so that we have more people testing it, even if you build it with the etch toolchain. Yeah, it would probably need to be out earlier, before we include it entirely. I think you need that for an etch-and-a-half anyway.

Well, for the kernel it'll certainly be on buildserver.net, and that'll be built into the daily builds, so you can test it even now. And if you include it in etch, it needs to go through the stable point updates anyway; nothing gets in there that isn't reviewed by the stable release team. So that would be the test base for it, of course. But I'm not too sure whether it wouldn't also help to have it in unstable for more testing, because we know that unstable is a large test base as well — people who don't mind booting issues as much as stable users do.

Simple questions — well, they're not simple: what do we name the source package? Because we want to have linux-2.6 there too. Do we do linux-2.6.18 and linux-2.6.23, or whatever? I think that would make more sense. That's the most obvious solution for me.

And we'll need to deal with the meta packages. So right now there's linux-image-2.6-686 or something; it would be linux-image-2.6.23-686, something like that. The meta packages are there to provide a default kernel, and if the convention is to keep the default as it is, you don't need to provide new meta packages. But if you're running a 2.6.23 kernel you'll want to follow ABI changes through security updates. Don't we already have linux-image-2.6-686 meta packages, as part of the main meta package? No. Say that again? The linux-image-2.6-686 meta package already exists. Oh, I don't think so. Does it? No. I don't know. I think so.
So probably we would just need to invent new meta packages. Yeah, some 2.6.23 meta packages.

So I guess now would be a good time for the X people to talk about what they want to do, and then I'd like to come back afterwards and start talking about time frames. Who is interested in X server or X driver updates?

The new X server is 1.3, and everything is centered on it, so we really can't backport the server itself. The only driver updates I've noticed that matter are the Intel driver 2.0 and the new nv driver, both of which provide new hardware support. The Intel driver can build on server 1.2? Correct. So that could be backported. And nv should be backportable fairly trivially, probably with few modifications to the code. As for the server: etch has 1.1, and 1.3 is current.

So if you say we backport, how large is the change? Do you think we should do a new package, or do you think we should just put the changes into the current package? nv we could probably just update — it's not a huge change. Intel should be a new package, because it got renamed anyway, so it's almost begging to be a new package, and that way people don't get screwed.

Basically, I don't necessarily see a correlation between the kernel changes and the X changes; whether the X bits are available earlier or later, I don't see a reason not to just do them when we have them. Of course, I don't think we need to update the kernel and X at the same moment in time; we can just do each whenever it's available. I mean, thinking about the testing experience: if we add a new kernel, and we add a new X server as just a new package, I don't see really what we'd be testing separately. Maybe we should synchronize the kernel packages and the X packages, because kernel changes are required for the new... Okay.
So we need to have the kernel changes at least before the X changes. Unfortunately, in order to talk to new Intel hardware we need a kernel driver change to map memory to address the hardware. Is that a change that we can backport to 2.6.18? I think it was supposed to be in etch; I hope we can add it before we are ready to ship.

So we need the DRM bits — but that would be for etch r2 or whatever. We need the DRM for the 965 and G33 as well. The stack only goes so high; that's about as deep as it goes. At the absolute maximum you are going to need a couple of packages, which are trivial: DRM, Mesa, X server, and drivers. That's absolutely the maximum that you need. It sounds bigger than it is. Mesa is not small.

So would those all be getting new package names? No — the Mesa changes, the kernel changes and the libdrm changes are hardware support, new PCI IDs effectively. And they don't depend on a new kernel? No, they depend on it. Is that something that's a no-brainer anyway — should we just do it as soon as possible? The question is actually: do we depend on the new kernel? We'd need a new package name, I guess, to prevent people from breaking their current kernel packages. They don't need a new kernel — they haven't changed anything needing the new kernel — but they need the kernel driver update.

So you mean we can do the X changes after we push the kernel updates out. But doesn't that mean that you would need to update the current 2.6.18 kernel anyway? Because if you push out the new X packages without updating the current 2.6.18 kernel, you would not be able to use X anymore with that current kernel, and then we in Debian would be forcing our users to update the kernel. Which part do you mean?
We are going to update the kernel in a point release anyway, and my basic assumption is that we will still put out an update of 2.6.18 regardless, with things like new device IDs. Yeah, I agree. It depends whether the X bits require the etch-and-a-half kernel or not. If they depend on the etch-and-a-half kernel, then for these X changes you also have to update the 2.6.18 kernel, because you cannot force people to switch to a 2.6.23 kernel just because you want to update X.

Well, these are backwards-compatible changes. The new X can work with the old kernel; there will just be some functional limitations. As far as I understood, updates to the 2.6.18 kernel are enough for the new X packages? I don't think so. The changes that are necessary for the new hardware support are new PCI IDs and small amounts of code that are dependent only on those PCI IDs. So there will be a switch statement for the new PCI IDs, and the new PCI IDs will take different code paths. So those changes are purely new hardware support. So there will be no regressions for any current users? I think that's the goal: the kernel driver changes do not introduce regressions for existing drivers. I think that's something we can do with 2.6.18. I don't see a reason to make anything special out of the X drivers either — if new X software introduces no regressions at all for current users, then I don't see a problem.

So: the new kernel modules. The other two pieces that we need are changes to the 2D X driver, and we have two routes we can take there, because the changes for the new hardware are on top of the X.Org 2.0 driver at this point. Ubuntu, Red Hat and Novell have all taken those changes and applied them to their old X server 1.1 based drivers. We can steal those patches and ship new packages on top of etch. But then we take over the maintenance, because all of a sudden now we have a fork.
Now we have the old packages and we have the new packages, and we would be maintaining essentially two parallel trees of packages. I don't know if we want that. The other piece that has been updated in similar fashion is the Mesa GL library. Again, the hardware support for the new Intel chips is on top of a year and a half, two years of development of Mesa, and so backporting it has been done by other distributions. Do we want to take their patches, review them, and incorporate them into our stable series as well?

How safe is that? Mesa has been pretty solid, I think. The server is worse — the server has been more sensitive than Mesa and breaks stuff accidentally. We have a bunch of regressions in the current Mesa that are breaking the test suite.

The other option to backporting is to ship the new desktop bits wholesale, but to some extent that doesn't seem appropriate to me. It seems appropriate for us to get basic hardware functionality in, to make sure that people can get their systems up and running, and then decide that if they want new feature support, including fancy hardware support, they need to move to a newer distribution. I think that's a reasonable position, because obviously Novell and Red Hat are in a different space: they guarantee five years of support, and those developers are in a world of hurt right now, and while they have staff — in fact Intel has paid staff to help them with that — I don't think we're in a position to offer that kind of support.

What new hardware do we get from this? Some real hardware enablement for our users — but of course we can't say we will always have the latest and greatest, so I think the etch-and-a-half thing is just a good middle position between everything and nothing.
I think my goal for the ongoing stable support is to make sure that hardware that ships today is hardware we can install on. That's a good thing. I think that's a basic level of functionality: people can use the system in some limited capacity and decide whether they want to go to a more recent distribution, without having to...

If I were to describe in one sentence what I think this is about, it is to show that Debian can add hardware support. The first time around we want to add hardware support while definitely not risking breaking anything, just to show that we can do it right. But I think that needs to be tempered by the cost of what the hardware support is going to be. For something like adding the generic class-based AHCI support, absolutely, that's cheap hardware support. But when you talk about adding Intel 965 graphics driver support, we could do it, but it would need a new X stack.

What if we just shipped vesa for it? I mean, it's crap, but it's an existing package, and you can say: well, at least you got this if your hardware is really new. I think that might be appropriate, where we offer limited functionality to the point where people can use an interface they are familiar and comfortable with to get through installing. Being able to get to a useful 2D desktop from etch, or whatever, so that you can then make the decision to do the testing, seems like a reasonable place to try to go. I know that one of the challenges we have as a distribution is the fact that, as recently as now, there's a whole new generation of hardware that you just can't install etch on, so...
So Zovell asked on IRC what the security team thinks about these updates. As for the kernel, I'm happy with maintaining a second one, assuming we can keep it as close to upstream as possible. I won't be happy about maintaining four kernel source trees at the same time, but once we're down to two, or just a short overlap... What do you think about etch? Well, the patches we're talking about now are mostly benefiting people on desktops; the security problems recently have mostly been in the common code, which wouldn't differ between the kernels, so I don't think etch-and-a-half is the main problem here. With this kernel we should probably go for something less than the full term of security support provided for the etch kernel; this should only be a stepping stone towards lenny, the eventual next release. And honestly, it'll be easier to maintain the etch-and-a-half kernel than the etch kernel.

So you're happy with it security-wise? Security-wise, yes.

I know there are a lot of institutions that have not yet switched to etch, do not plan to switch to etch in the foreseeable future, and are going to keep running sarge. Is limiting security support for a kernel like that going to have a major effect on the usability of Debian for large institutions? Well, then you have to switch to etch, but not to etch-and-a-half. If we simply don't have enough resources and you asked me to choose, I would say: okay, let's not include a new kernel in etch-and-a-half, but let's continue security support. Wasn't the proposal to have a shorter security support cycle for the etch-and-a-half kernel?
Okay, keep that timeframe for the etch kernel. Yeah, and my comment was: once we get past right now, doing the sarge support is going to be very difficult, because I have to manually build for every architecture; at some point it's just too much. And the etch-and-a-half kernel will be easier to backport to in this new world, so I think it won't be too bad to keep going with it as long — but, you know, I think 2.6.18 gets priority.

Now, jumping back to the X issue — it's probably too early, but what is the situation with the new ATI driver? Julien, I believe you mentioned that we're going to add Intel, ATI and nv. Oh, it's not ready — but probably by the time of etch-and-a-half? No, I don't really know. I haven't heard much about it; it's really early days. It was just announced like a week ago, so it's really early days. I don't know that it's ready to ship to people; it's very much not ready.

Do you know about the nouveau driver? That one really needs proper maintainers. No one in the X Strike Force has a card supported by it — none of the X Strike Force members has one — so we haven't really followed the work. I've heard a call for nouveau maintainers; two people responded, but neither has really followed up on that. Do you want hardware? Sure — if you have PCI Express hardware, send something out to the X Strike Force. And if you have staff to do the maintenance for this, I can buy you hardware. Okay, I'll pass that on. If it's a hardware problem I can fix that; I can't fix the staffing problem. Right, we'll have to work on that. So yeah, that's been an issue: we haven't had the hardware. That's easy then — mostly just PCI Express cards. Okay, so that's your answer.

Is there a list of drivers that was present in the etch kernel that is no longer present in the current one? I've had problem reports because a module was lost in a transition — like "the old firmware I have doesn't work anymore", that kind of problem. Hmm?
Well, I didn't check for the hand-made backports, so no, I don't have one. But there is an issue I know of: the bcm43xx module, which is for the Broadcom wireless found in PowerBooks. They changed it — it basically uses a newer firmware from Broadcom now, so the new driver depends on the newer firmware. I'm very familiar with that one; that's an issue. But no, I don't have that list; we should probably make that list. We got a complaint from a user that his old controller doesn't work anymore, and then we have the TV input: it seems like the old kernel module is no longer present in the current kernel, and we have no idea what the new name is, or whether it even still exists. You could just make a diff. Well, sure, if I had the machine. You're talking about an actual diff of the module file lists — sure, that should exist.

But let me tie this back to the conversation a little more. What if 2.6.2x-whatever doesn't have a certain driver that 2.6.18 had — do we consider that acceptable? If it's required to support the hardware — but 2.6.18 is still there, so the hardware still works even if the new kernel dropped the driver. So basically we're also talking about the reverse: a new system, like a newer Intel one, that ships hardware that isn't supported in the newer kernel. I don't think that's a different question, actually; I think it's the same. It's the same in that we hope to avoid it if we can afford to, but not at all costs. It's a big gain when we go to a newer kernel version, and if it's a kind of niche piece of hardware — for widespread hardware like PCI Express graphics I would say yes to the question, for niche hardware I would say no. If I have a piece of hardware that I expect hardly anyone ever uses, it might be enough to document it in a release note.

So my personal perception is that the transition from sarge stable to etch stable was difficult for some people because of things that we decided to change in what
we're building and delivering, like the keyspan firmware. The real question is: are there any meaningful reductions in the list of delivered drivers from 2.6.18 to whatever has been proposed for etch-and-a-half? I don't know the answer. If some truly obscure thing got dropped by kernel.org upstream between those two versions, I wouldn't sweat that too much. But if there's something where we've made a decision, or we fixed one thing and took something else away for etch-and-a-half — yeah, of course. Because with the keyspan serial situation, for example, the motivation for taking that action was quite clear, and everyone who understands the rationale sees the point. But it was a situation where one day hardware that many people have and use in their systems used to work, and then after they did an update it didn't. That's the kind of regression that I think you have to be careful about sort of doing intentionally.

By the way — I don't know if you were already here when it was discussed — we are definitely not intending to change the defaults. People only get the new kernels by making a voluntary choice about it, and we are still supporting the old kernels, so I'm not as concerned as I would be otherwise. Now, there may be a choice to make between "do I want this new thing or do I lose something", but if the new one breaks, they can just go back.

Okay, now back to your question, and the action item: I mean, somehow you make a list of what we lose, and there's not that much to it. Is that as simple as a diff of the configs or something? No — the configuration options can change names, so I don't think we can blindly rely on the difference. It would be great if someone who cares about this could get involved, at least now. Yes, that would be good.

Basically, if someone has an old system that doesn't work with the new kernel, because the new kernel doesn't support it, how can we
support him? There is a kernel that supports his hardware; how do we get that to him? I don't think anybody on the kernel team is interested in developing drivers themselves, so for all of these we have to make a hard decision: we either support them or we don't, and then we just have to decide which is the lesser evil. Of course you would prefer to just support everything, but we can't say we support every user with old hardware; we support most, and some minor stuff we can't. That's not easy to answer. I think we have to have a discussion about it when we really come to the decision: what is our user base, what is at risk for us, what is there to be gained. I can't see what will happen with the kernel in the next half year, so I can't make any decisions about it now.

Well, there is a document inside the kernel source tree, the feature-removal schedule, and it only lists really, really obscure junk. Upstream is extremely cautious about dropping existing drivers; they even maintain the whole completely obsolete Open Sound System from the old nineties, just because some obscure legacy hardware nobody knows anymore is not supported by ALSA. The main thing is whether there are rewrites of drivers, where the new driver no longer supports really ancient hardware that the old driver handled, basically because the guy who did the rewrite did not have access to that ancient hardware anymore, or felt something like "I don't care enough about twenty-year-old hardware to support it". It was just the same with PCMCIA: there was a major rewrite of PCMCIA and its userspace stuff. Those are big transitions, and stuff does get lost in them, but that's upstream's choice, and if people want those regressions fixed, then basically, if they have the hardware, they have to get involved. And of course it's not just the one case; it's the next case, and then it's the
next case, and so on. One thing I'm not sure about is whether the new PATA drivers should just be enabled for a point release like this; I don't think we want to break upgrades between etch and etch-and-a-half, especially with the hda-to-sda naming changes, which we're not prepared for at the moment. Speaking of that, this might be slightly off topic, but do you think the persistent device naming stuff will help solve that problem? Colin told me earlier this week that he has big plans for that in the installer. We need to work together with you guys, the kernel team, on supporting that transition in upgrades, and it may even involve getting something into an etch point release at some point, to have it on people's systems as early as possible, but it will be a major topic for etch-to-lenny upgrades.

So, ten minutes left; is it too early to start talking about time frames for all this? All I know right now is 2.6.21, but beyond that... If it's supposed to be etch-and-a-half, nine to twelve months after the release, then we need to start quite quickly. The other thing I've been mentioning is just doing it continuously: there are two things we could include in etch r2 which we don't have, so I think we should do that as soon as possible. One of them is now mostly closed, so it's really just r2, which would be okay, and then we could at least get a version of etch with a little more hardware support. I hope we are able to get etch r2 out in two to three months; that would be good. And if we have that, etch-and-a-half would be another, whatever, five months after that, eight months from now.

I think my default would be to take 2.6.23 as soon as it's in unstable and start making a backport branch from it, probably 2.6.23. But those releases can break things separately; 2.6.21 broke a lot of stuff because of the dynticks changes, and you don't know what's going to be in 2.6.23, or whether it will be a pretty stable release
for example; we just don't know that yet. It might be that we see other distributions going with 2.6.22 by the time we decide. Actually, I've learned not to trust that too much: when upstream tells us that a given release is going to be a nice stable one that everybody should focus on, or when they say the next one is going to be very unstable, you can't really believe them. They said that for 2.6.21, and actually we are going to take 2.6.21 anyway. When do you expect 2.6.23 upstream? They've been pretty regular; I believe it's a release every two to three months. So 2.6.22 in late summer, I would guess. I've seen some talk of it coming in September; it would seem likely to be before or right after the Kernel Summit, and they won't want to drop a new release just before that, so, I don't know, September. The Kernel Summit is in Cambridge this year, by the way, so all of those changes land after it; we will see the changes anyway, so we don't have to care about that now.

Are we going to try to get the same kernel into unstable for a decent time first? Yes, I'd like to use unstable to get the early architecture builds done, and maybe delay a possible next version in unstable to get an extended stabilization period for the kernel that we target for etch-and-a-half. Either way, obviously watching unstable and what emerges there, what seems logical to me is: whatever needs further testing goes into unstable, and the stuff that has proven stable eventually gets into testing. Yes, but then we really have to work with the kernel team on this to make sure things happen as we want, because basically
the kernel team currently ignores testing completely, which sucks. That's not true: Waldi is also part of the kernel team, and the people who manage the releases don't really ignore testing. I'm not sure they always know how to manage it, because a kernel is not an ordinary package, but yes, they know what you mean. Are there any efforts going on to make sure that the sid kernels are actually backportable? At the moment they are not, because they depend on the new coreutils and things like that. Yeah, I mean, we're certainly not going to take the etch kernel and just recompile it; there will be a branch of the sources to deal with that stuff. We have done this kind of thing before, dropping the kernel packaging back to the old build system that we used in etch, and just dropping in 2.6.21 or 2.6.23 or whatever; that's all to be determined, and we'll figure out how to make it buildable and installable. I would suggest making them backportable, so that a plain rebuild of the unstable packages actually works. When you talk about the sid kernels versus the etch kernels, you can talk to Waldi about that.

A lot of things have been blocked for a long while because of the libc transition, so testing hasn't been as up to date as it would normally be. That's quite normal just after a stable release, when major changes are happening all over the distribution, so I don't think the fact that the unstable kernel isn't backportable for a while is a major issue. As far as I can see from the package, it's the different semantics of readlink that force the new coreutils dependency, and I'm pretty sure it's possible to write a script that works with both the etch and the unstable readlink. We're out of time now. I'm sorry, I don't have any more questions, but I think the only question that is still open, though we may just have to leave it open for now, would be
how to do this in the installer. I have one small question: where is the discussion on this happening, or going to take place? Is it debian-release? Do you care? Actually, we prefer not to have too many discussions on the list, so we could take them elsewhere, but if it's necessary, since there are not so many people involved, debian-release or debian-kernel or wherever you like. We like to avoid the discussions happening on the list because it's hard to tell when a discussion there is going to stop. But I agree that just having a new kernel is probably not the whole story, because the installer has a real impact on quality. Well, I guess that's it. Thank you.
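On the earlier "is it as simple as a diff of the configs" exchange: a first, rough approximation of a dropped-modules list can be had by diffing sorted module lists from the two kernel images. This is only a sketch; the helper name is mine and the way the lists are produced is an assumption, and renamed modules show up as false positives, which is exactly the caveat raised in the discussion.

```shell
# Sketch: given two files that each list the modules shipped in a
# kernel image (one module name per line), print the modules the new
# kernel no longer carries. Such a list could come from something like
#   dpkg-deb -c linux-image-<version>.deb | grep -o '[^/]*\.ko$'
# (illustrative, not a confirmed procedure).
lost_modules() {
    old_sorted=$(mktemp); new_sorted=$(mktemp)
    sort -u "$1" > "$old_sorted"
    sort -u "$2" > "$new_sorted"
    # comm -23 prints lines unique to the first (old) list.
    comm -23 "$old_sorted" "$new_sorted"
    rm -f "$old_sorted" "$new_sorted"
}
```

Renames, like the bcm43xx firmware change mentioned above, still need a human eye; this only narrows the search.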
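On the readlink remark above (writing a script that works with both the etch and the unstable coreutils): one way to sidestep the differing `readlink -f` semantics entirely is to canonicalize paths with plain POSIX shell, so that no particular coreutils version is assumed. The function name and approach here are mine, a sketch rather than whatever the speaker had in mind.

```shell
# Sketch: resolve the directory part of a path to its physical
# location using cd and pwd -P, avoiding readlink altogether.
# Works in any POSIX shell; note the final path component is not
# itself dereferenced if it is a symlink.
canonicalize() {
    dir=$(dirname "$1")
    base=$(basename "$1")
    (cd "$dir" 2>/dev/null && printf '%s/%s\n' "$(pwd -P)" "$base")
}
```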