Okay, let's get started. I'm not sure what the agenda should be today, so maybe I'll just give a quick update on what's happened in the past seven days. Unfortunately, master hasn't opened for 4.6 yet, so we can't merge any of the MCO PRs. On the installer side, Vadim and, what's his name again, the CRC guy, sorry, I forgot, Praveen I think, have done a lot of work, and Charo and so many other people have been working on getting single-node clusters ready. That's awesome; I think we'll be there very soon. I don't really have any more big updates, so maybe Charo and Vadim can give a more detailed update on the single-node work. Yeah, sure, some more news. First of all, Josef has fixed the Azure installer, so there is now a decent way to make it run. I didn't get a chance to try it yet, but eventually I'll have my Azure account and will give it a try, which is pretty cool. More on single node: Praveen and I have been working on rebasing CRC (I think you cut out, Vadim, or is it just me?), and with that CRC would probably be OKD-based in future releases. I also have a few patches for the installer to fix a few things in the libvirt installer, to automatically detect that we're running on a single node and inject a few manifests for etcd and ingress, plus a patch to the MCO to scale down the etcd quorum guard, so single node should be able to install and run without additional hacks. Charo has a great repo describing how to achieve that today, but it needs scaling down a few operators, and we're trying to avoid that. On a related note, I created a beta 5 release today, but it fails to update again, and I'm thinking it's an actual bug, so we will have to track it down.
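The quorum-guard workaround just mentioned can be sketched as a single scale-down; with one control-plane node, the etcd quorum guard's three replicas can never all be satisfied. This is a hedged sketch only: the deployment name and namespace below are the 4.4-era defaults and may have moved in later releases.

```shell
# Manual single-node workaround the MCO patch is meant to automate:
# scale the etcd quorum guard to zero so a lone master passes checks.
# Namespace/deployment names are 4.4-era assumptions; verify on your cluster.
ns=openshift-machine-config-operator
scale_cmd="oc scale --replicas=0 deployment/etcd-quorum-guard -n $ns"
echo "$scale_cmd"
# Only attempt it when the oc client is actually available:
command -v oc >/dev/null 2>&1 && $scale_cmd || true
```

The installer/MCO patches discussed above aim to make this unnecessary by injecting the right manifests at install time.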
I don't think we'll be taking down beta 5 because of that, since it's been quite a long time since the previous beta, but we can discuss it today. Yeah, as Christian mentioned, the master branches are still locked for us, so we can't merge fixes and start working on 4.5 and 4.6; we'll use that time to improve 4.4 for now. That's pretty much all I've got. All right, great, thanks Vadim. Do we have anything else to talk about? I'm looking through the board here, but none of this is really current. I think k3s became a proposal to the CNCF. Yes, but k3s isn't real Kubernetes because there's no etcd. But yeah, sure. Well, now we just have to come up with some cool wacky name for OKD at that size. I mean, we could easily just compile etcd in and make it not do Raft consensus, which is essentially what k3s does. I just don't know what the hell is going on in k3s at all, because all of their other choices are stupid too, but whatever. Or replace the etcd operator with our own custom SQLite operator and have a similar thing, just like k3s. The point is, well, why? Actually there is a point: the main reason to switch to SQLite was to get rid of Raft consensus, that's it, and also because SQLite is written in C it's way smaller at runtime (Go is fat). That's the meat and potatoes of it: cutting the Go fat makes it small enough to fit on a Raspberry Pi. Because everybody wants to run Kubernetes on a mobile phone, right? Oh my god, no, please no. I wouldn't mind, I have quite a few spares, but the point is joining them into a cluster, and the other question is what's the use of it. Which containers can I run there? Well, it bundles containerd in there too, so clearly we're just going full-on for stupid. But what's the payload? You can barely find arm7 base images, maybe some Alpine nonsense.
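For what it's worth, the SQLite point can be seen directly on a k3s box: a single server keeps its whole datastore in one SQLite file behind k3s's "kine" shim, so there is no Raft quorum at all. A hedged sketch, assuming a default single-server install; the paths come from k3s's public documentation and the `kine` table name from kine's schema, so verify both on your version.

```shell
# Default single-server k3s install keeps the cluster state in SQLite,
# not etcd. Path below is the documented default; adjust if customized.
state_db=/var/lib/rancher/k3s/server/db/state.db
echo "curl -sfL https://get.k3s.io | sh -   # default single-server install"
# If a k3s server is present, the entire datastore is this one file:
if [ -f "$state_db" ]; then
  sqlite3 "$state_db" 'SELECT count(*) FROM kine;'
else
  echo "no k3s datastore at $state_db"
fi
```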
Although I think there are even Helm charts for things like Home Assistant, things to automate your home with a little Raspberry Pi cluster. Okay, that'll work, but arm64 sounds better, because we'd probably have Fedora CoreOS built for that. We can use Fedora containers, which brings us to the idea of reusing Fedora build containers as image streams. We should definitely tackle that sometime. Yeah, are you talking about a Fedora-based version of the UBI? Yeah, basically like that. We could inject a few image streams right into the default OKD installation and have them based on Fedora. That would be useful. Absolutely. Yeah, because right now one of the things you have to do is remove the samples operator, since it can't actually get to the images it's trying to pull. Yeah, that's a long-standing problem. There are at least three wrong ways to fix it. The most correct one is actually making the samples operator detect that it's an OKD system, that there is no registry.redhat.io in the pull secret, and inject CentOS content instead. But we cannot start with an official OKD enhancement, so we cannot officially push people to merge that. And I want to stay as far away from anything involving CentOS as possible. I mean, maybe CentOS wouldn't be too bad a solution for this, though, because in the medium term we'll also have a CentOS Stream based OKD. No, no, no, I personally don't want that to exist for as long as possible. Well, we've been pretty clear we're focusing on Fedora and we're not going to focus on that, but if we chose Fedora images for all the containers, or all the operators on OperatorHub, we would still need CentOS-based containers for the CentOS version, while we can use CentOS containers on FCOS-based OKD. Right, but I'd be totally fine with just having Fedora containers and pushing Fedora everywhere.
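The "remove the samples operator" step mentioned above is usually done by marking it Removed rather than deleting anything; the resource and field below are the documented samples-operator configuration API, though the exact command is a sketch of the workaround, not part of any official OKD flow.

```shell
# Current OKD workaround: the samples operator cannot pull from
# registry.redhat.io without credentials, so mark it Removed.
patch='{"spec":{"managementState":"Removed"}}'
echo "oc patch configs.samples.operator.openshift.io cluster --type merge -p '$patch'"
# Apply only when the oc client is available:
command -v oc >/dev/null 2>&1 && \
  oc patch configs.samples.operator.openshift.io cluster --type merge -p "$patch" || true
```

The Fedora image-stream idea would make this unnecessary by shipping content the cluster can actually pull.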
Because I just don't want to see a repeat of what happened with OpenShift 3, a repeat of what happened with OpenStack. I just don't want to see all the oxygen completely sucked out again and again. That's why I'm heavily resistant to it. And I do think we should probably split the OperatorHub thing into a second enhancement proposal, Vadim, because the OKD enhancement proposal right now, the one that I still haven't published, is really focused on just getting the core operators merged: getting rid of the FCOS branches and forks, and being able to use one build of the operators for both OKD and OCP. OperatorHub is a little bit external to the core of OKD, even though it's definitely part of the experience. We don't have to mention every single change we will make there. We just have to discuss how to detect that it's an OKD system, because we definitely want some customization, and once we settle on the way to do that, be it a flag in the infrastructure resource or just detecting that you don't have access to registry.redhat.io, we can make patches to the samples operator, patch the installer to inject custom content, and so on. We don't need a full-blown enhancement for that; at least I really hope we don't. Yeah, that seems like a lot of overhead for not a lot of gain. There's just way too much discussion in an enhancement; these take months to get merged, and it's not clear whether we can start working in the meantime or have to wait until every single detail is settled. I would definitely choose the former, but we'll see what happens. Yeah, the way I want to approach the first enhancement is to have the code all ready to go and then just need it merged. That's at least what we're planning on doing in the 4.6 time frame, which is coming up, with the MCO.
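One of the detection heuristics floated above (no registry.redhat.io entry in the global pull secret) can be sketched as a small check. This is purely illustrative, assuming the standard dockerconfigjson layout and the usual `pull-secret` location in `openshift-config`; `has_rh_auth` is a hypothetical helper name, not anything shipped.

```shell
# Hypothetical heuristic: treat the cluster as OKD when the global pull
# secret carries no registry.redhat.io credentials.
# has_rh_auth reads a dockerconfigjson document on stdin.
has_rh_auth() {
  jq -e '.auths["registry.redhat.io"] != null' >/dev/null
}

# Against a live cluster this would be fed from the real secret, e.g.:
#   oc get secret pull-secret -n openshift-config \
#     -o jsonpath='{.data.\.dockerconfigjson}' | base64 -d | has_rh_auth
if echo '{"auths":{"quay.io":{"auth":"..."}}}' | has_rh_auth; then
  echo "OCP-style pull secret"
else
  echo "likely OKD"    # prints this for the quay.io-only sample above
fi
```

An explicit flag in the infrastructure resource, the other option mentioned, would avoid this kind of inference entirely.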
The installer will probably have to wait a little longer, but even that we can maybe get merged in 4.6, and even if not, I don't think we'll block OKD going GA on the installer being merged with master, because once we have aligned all the operators, that's enough. The installer just runs once at installation time, and it's not that big of a deal to maintain that one fork for a little bit longer. All right, are there any other questions or topics you want to discuss today? Yeah, I just saw Mike's question: is OKD still on track for a 4.6 GA? I think we're still on track there; we might even be releasing in the 4.5 OCP time frame, because we'll merge the FCOS MCO with master during the 4.6 development time frame, which is way before the release, and once we have everything in one master we can just backport the OKD-specific things that are already in master to an FCOS 4.5 branch, and that would be good to go as GA. From 4.6 on we'd really have just one master branch, and eventually one release branch, to build off of, so I don't expect a lot of problems. I have a PR open that adds dual support for Ignition spec v3 and spec v2: there will be a config knob in the controller config of the machine config controller to tell it whether it should be rendering spec v2 or v3 (it'll support both), and the installer will just have to tell it at the beginning which one to serve. That still doesn't solve the problem of OCP migrating from v2 to v3, but it's already a framework; the only thing missing is the transition, the reconciliation between a system that has v2 on it and goes to v3. Luckily for us, that's not a problem we'll have to deal with in OKD, because we are already on spec 3, and we'll just have an MCO that supports spec 3, and that'll be good enough. So I don't
really expect a lot of problems on that side. Yeah, exactly: the biggest tasks are getting the MCO merged and then creating an installer fork for 4.5 where we just carry the OKD-specific things we haven't figured out how to merge yet. And even for those, like disabling the pull secret check, we'll eventually need to find a way to get them into master as well; if we decide on adding a build flag there, we may even be able to merge that in the 4.6 window too. But to be honest, I don't expect a lot of problems. It's really just that we now have to align with the OCP releases, so we can take the 4.5 OCP GA, backport all the OKD patches to it, and then release that as OKD 4.5 GA. Do you agree, Vadim, am I saying anything that's not true? Well, you're saying correct things. The main point is that we are independent of OCP release time frames. We could have a release as soon as possible, we could have a release tomorrow, if you like broken upgrades, and why not? Hey, sure, if everybody has fun that way, right? We'll just call it a release, good enough, ship it. Yep, that ship has just sailed. So what we're targeting is to get things into master and have the enhancement merged, so that every single team in Red Hat would be bound by it and would have to fix bugs for OKD as well. Oh, I kind of wish that was already true, but it's a hard fight knowing that it isn't. I mean, that is exactly the goal we're all working toward this year, right? That is true, but we cannot go to the samples operator folks and make them merge PRs. Oh, you could just click the merge button, right? I mean, you're in the org. No, it's a bot; we can't do anything like that. There has to be an approved label and an lgtm label on there, and then the bot merges it automatically, if all required tests pass, of course. Right. And then, beyond fixing the broken infrastructure, we also need
signing keys, so that your upgrades won't have to be forced. So none of these problems are tied to the OCP release time frame, but by approximately the 4.6 time frame we should be confident enough to just call it GA. That's pretty much the plan right now. Yeah, and the one thing where we do depend on OCP: in the future we of course want to deliver a stable system, so we don't want to release off of branches that haven't really been released, like the master branch; instead we want builds off of release branches. Each OCP release gets branched off of master, so there's a release-4.2 branch, a release-4.3 branch, and so on for every release, and we want to be building off of those same release branches. Then, when the next branching or the next release happens, we'll transition very quickly, with an update of course, from the previously current branch onto the new current branch. So we won't be maintaining many release branches side by side for OKD, always just one, and maybe they'll overlap a little bit, but we really want people to upgrade quickly with OKD. Of course, if you're paying for the product you can stay on 4.2 forever. Well, that may not literally be true, but there's longer support, and security patches and everything get backported a lot longer than we will do in OKD. We'll essentially do it like Fedora, where a maximum of one past release is still kept up to date. Well, we still have the nightlies, which are built automatically, and there is nothing stopping you from using them; the issue is that they live just 48 hours, so if you mirror them quickly you get a pretty much stable release. We don't test their upgrades, so you're on your own there. But the official stable branch, yeah, it would be rolling, definitely; we'd switch to 4.5, 4.6 seamlessly, unless we have any
other ideas, but that's pretty much the goal right now. That's how it would be different for most people, and you'd want to align with those time frames if you want to roll out some new feature or bug fix. Yeah, exactly. We'll always have the builds off of the master branch, of course, and you'll be able to use them, and they'll be upstream of what's in the product right now, but the usual OKD user will be getting builds off of the same release branches. And yeah, Josef, you asked whether OKD 4 will ever be a real upstream. It's more of a sibling stream: the same code base, but built for a different base operating system. And of course, it's OKD, so there's no paid support; there's community support, which is what we've been doing here a little bit, and that will hopefully grow once more people get to use it and can actually participate in helping each other out. But yeah, it's more of an unsupported OCP. Like Fedora? Well, maybe, it's kind of like RHEL and Fedora, but different. I mean, the packages are also kept up to date, but the kernel isn't, and that's kind of the same here: we will have the same operators, maybe built on two different container images, but they're essentially the same code; it's just that the base operating system is different and has a different kernel. And of course, OKD will be sort of the testing ground for all kinds of new features we want to bring in, so it may be that we'll build OKD with cgroups v2 enabled. I'm pretty certain we will do that before it lands in OCP, but that should really be just a configuration thing and not require any huge differences in code at that point. Well, it will initially lead to differences in code, but the idea is that it would eventually propagate. Yeah, exactly: all of the cgroups v2 code would be in Kubernetes, so in 1.19 there are a few more fixes which
need to land for full support, and you'd even be able to run it in nested containers. But yes, we'll definitely roll out the cgroups v2 switch in OKD much sooner than it happens in OCP, mostly because of the better kernel support. Yeah, exactly. I expect OCP will handle this transition much more conservatively than we will in OKD, so I think we'll see cgroups v2 enabled in OKD before we see it in OCP. And that's not really on the cluster side; it's more because of the base operating system, RHEL CoreOS as opposed to Fedora CoreOS: RHEL CoreOS maybe wouldn't support it yet, while Fedora CoreOS theoretically already supports it. Well, the funny thing is, I'm pretty sure that in this particular example it'll be backwards: RHEL CoreOS will get to it first, because Fedora CoreOS keeps turning it off, because of OKD and Docker. So it's going to be interesting. I'm pretty sure RHEL CoreOS will do it first in this particular circumstance, because it doesn't have to care about anything else except OCP, yeah, the OpenShift thing. Well, we're not really blocking cgroups v2 on OKD's behalf, but more for the Docker use case in Fedora CoreOS, right, which I find very unfortunate; I've been pushing toward just focusing on the Podman and kubelet use cases, essentially. As to the question whether OKD 4 will be tested on all available platforms: yes, it will. I can't promise we'll do it for the nightlies, but certainly all the releases will have to pass on all platforms; at least that's what I expect. Yeah, I'm not sure how a cgroups version mismatch between a container and a host behaves, to be honest. It shouldn't be a problem if systemd in the child isn't instantiating its own cgroups, which I don't think it would be in this case. The problem most likely is that systemd isn't handling being PID 1 (init) very well inside
of a container, because it's not really built for that, and there are certain things that have to happen for it to behave properly. So if you can, James, I would suggest filing a bug with Red Hat, filing a support ticket, because that really has to be fixed, since RHEL 8.2 now fully supports cgroups v2. Yeah, I agree, please do file an issue. And to tackle this from a different angle, and I've been sort of fighting for this in Fedora for some time, we should not include systemd in containers, I think. Then I will actually be the lone guy who says we need a way to have systemd in containers, because I work with a lot of PHP-based web applications, for example, and PHP has no good mechanism to do the same things that, say, Python and Java do. You inherently need a multi-service container to do the same thing you do with a single WSGI server in Python: PHP requires you to have nginx or Apache and a FastCGI server of some kind connected to each other to do the right thing. So there is a valuable reason to have it; whether it's universally required is a different matter. But personally, as part of my work, I deal with PHP web applications out the wazoo, and as we build out containers we're going to heavily rely on multi-service container stuff, because quite frankly it's the only reasonable way to do this. I have personally explored the other architectures, and it sucks, it really, really sucks for those types of apps. So I agree with you, it'd be nice if we didn't need them, but the reality is that there are enough architectures out there that I and others have to deal with. PHP is still, like, two-thirds of the internet; it's just a reality. We need a proper way to do it, otherwise people hack bad solutions together for everything else, and that's worse. Yeah, I agree, there's definitely a huge use case for having
that, and I think Red Hat essentially enabled systemd in containers in the first place for services like FreeIPA. And even just PHP, that's multiple services inside a container, and that's totally fine. I mean, I'm not a great fan of the Docker mantra of one process per container; that doesn't make sense to me. But sometimes, if you only really need one process, say you run an email server, Postfix or something, that already handles supervising properly enough, then you don't need a service manager on top of the container supervisor. Yeah, exactly, and right now it's just not really possible unless you sort of create your own container from scratch; even if you just install an RPM, it always pulls in systemd. And yeah, starting up services inside the container without systemd is always kind of ugly, but that's a different discussion, I guess. I mean, what I would personally like to see, after we've gotten OKD 4 out the door, is that we start figuring out a way to build some kind of ecosystem where Fedora-based containers and language stacks and things like that help build bonds between the OKD people and the Fedora communities, so that we can populate language stacks, operators, things like that. Like the Koji operator Fedora folks are working on for us: wouldn't it be nice for that to be on OperatorHub, something people could just click and have installed? Definitely. So after we get OKD 4 out the door enough that people can actively start using it, this is where I think we should go next with the working group: we should swiftly move to make it so the Fedora community can be better enabled to leverage OKD for things, especially
since there's a community OpenShift; it'd be good to start making Fedora a valuable part of our ecosystem. Oh, I agree wholeheartedly here. I really want to move the OpenShift and Fedora communities closer together, because, you know, OpenShift hasn't been a prime example of how to encourage a community to grow and contribute; it just hasn't really been possible in the past. But with OKD 4, I think we should really go in the direction that Fedora has gone and try to get people from Fedora over here, and also, as I'm from the Fedora community myself, I'd like to encourage people from OKD to go try out Fedora. Right, but I want an avenue that doesn't end up in what happened with RDO. I want to completely avoid the end state of RDO, which is that they advertise being about the Red Hat ecosystem, and they're not; they're just about RHEL and CentOS, and there's no real strong community buy-in in the sense of community contributions and active work by people who aren't just Red Hat employees on the OpenStack team. I want us to be way more successful than that. Yeah, definitely. I want OKD to be here to stay and not, you know, disappear again, and that's why I don't want to focus on CentOS either; Fedora is the future, I think, for this and for OKD. So yeah, I'm happy to see how things have been moving forward, and the fact that OKD has been focusing on Fedora CoreOS, moving toward the future with an expanded scope, has certainly made OKD, and even OpenShift, an easier and easier thing to work with, from my perspective. So we're doing the right things right now, and I hope we continue doing so. Thanks, Neil, I'm glad you see it that way. Yeah, now we just need people to be able to play
with it. Yeah, I know, we'll get there. You only need 32 GiB of RAM now, and four vCPUs, but you can do it. Oh good, now I just need to buy a server, because I don't have one at home. Does that mean I can run it on my laptop? My laptop's not big enough. I managed to run it on a single AWS instance with 16 GiB; 8 GiB is not enough. I think 12 should be fine, but the last time I ran CRC it completely fell over with the default configuration of 8 GiB VMs; I had to double it to make it stop failing. Sixteen gigabytes of RAM is what I've been using for the master node. You only need the extra RAM while the bootstrap is running, and then you shut it down, and actually you can shut down the master VM at that point, edit its configuration to give it the RAM the bootstrap had, and you've got a fully functional single-node cluster. You also don't really need a bootstrap VM; you can run it in a container with a few hacks. Oh really? Yeah, because all the bootstrap machine does, as far as the other nodes are concerned, is serve the Ignition files, and you can totally cheese that. You can cheese all the things it does: the MCD supports a mode called once-from, which emulates proper Ignition parsing and applying of files (we use it when we add a RHEL 7 node), so you can create a CentOS 8 container, run the MCD inside of it, give it the bootstrap Ignition, and that becomes a full-blown bootstrap node; you just need the right network connections. That would be a fun experiment to play with. I think I have some code pieces to do that, but your machine, the container, and the masters all have to be on the same network, which is usually pretty tricky. Yeah, I'll add a card, probably to the project board items, and have a play with that. And, adding to the chat, I think we'll have lots of interesting things to look at. Right now we're
really still in this phase of getting to GA, and I can't really say what we'll be focusing on after that, but it won't get boring. There's cgroups v2, there are more operators we should try to get running, facilitating the community actually being able to install them without a hassle and providing them as a catalog on OperatorHub and so on, and then there are interesting things on the host side again, like Cilium, the eBPF filtering stuff. I'm not sure what makes sense to focus on next, but I think we'll just see as we go, and we'll survey the community continuously, which is why it's important that you keep coming back here; that's going to be a way for us to find out what to focus on next. For me, the most interesting thing is that once we have OKD, I want to see us merge with the Fedora Container SIG and actually start driving workflows, patterns, and practices to help bring server software packaged in Fedora to people as applications they can use on an OpenShift system. There is a whole world of applications that people maintain as RPMs, and it'd be good to give those high-quality things, with all their cleanups and fixes and improvements, a good path to run on OpenShift as well, and to enable a true, proper ecosystem supporting containerization of applications and services. Maybe we can even help people make operators and things like that, because, hell, I want to make an operator, for example for Pagure, but I don't know how to get started, I don't know what I'm supposed to do; I barely have a clue about anything, but I want to be able to do it. So these are the kinds of things I think we should do next. That's an excellent idea; merging with the Container SIG makes a lot of sense to me. I mean, it's
inactive, or has been kind of inactive for a while at least, and the Fedora containers haven't really taken off, mostly because the maintenance is hard and the tooling is kind of crappy. I mean, we do have all of this in OpenShift, in OKD, in an automated manner, so it just makes sense to set it up for Fedora and go from there. Yeah, that makes a lot of sense to me, and I think that's a really great point: once we go GA, we should really invite the Container SIG to join us here and create some high-quality content, essentially, to run on OKD. Yeah, this is the real opportunity to not only make OKD more broadly useful as a community, but also to help bring Fedora to the next level in terms of this stuff. Everyone keeps talking about how containers are eating all the things, and whether we care about operating systems and whatnot, but there's still this whole groundswell of people working to make applications available for people to consume, and us providing a way to bridge that gap in a reasonably automated fashion would be good, and it would help. And part of that is that I also want to revive the Kubernetes support in Fedora itself; it's in bad shape. We've got to shore that up and make it a first-class citizen in the Fedora ecosystem, and that will, I think, make it a lot more appealing and a lot more useful to a lot more people. Yeah, I completely agree with that. All right, cool. I think today's been more of a free-flow meeting; when Diane's not here, we just ramble. Yeah, I mean, it went better than last time, which was just Charo, Sri, and myself basically ranting about how our systems weren't working because everything kept breaking every time we tried to set it up. Yeah, I'm definitely glad about that too. Yes, there's been progress on that front, because I don't
think, Charo, you've actually had a problem with getting your systems up and running in the last month or so, and I know Sri's cluster has been pretty stable since early March. Yeah, in fact, in the testing stream of Fedora CoreOS they've now got a fix for multiple NICs with fixed IP addresses. I'm getting ready to fire up a cluster build with dual NICs, to put one of the NICs on a storage network with iSCSI-provisioned PVs, so that will be interesting. And if I get that up and running, I want to try out KubeVirt. That one is interesting to me. I like the idea of being able to declare a definition that includes a group of containers and a virtual machine, because the virtual machine is kind of special, and bringing those together as a single packaged thing. I certainly have applications where, for example, one of them requires loading a custom kernel module and things like that, and you just really can't do that without a virtualization layer; I'm not going to inject random kernel modules into the hosts of OpenShift, that's just not going to happen, sorry. So being able to bring those two together would actually be a really compelling and interesting story. Kata might work, but I don't actually know anything about how Kata Containers work; they've changed quite a bit since Clear Containers, and while I did review all the packages to make that runtime work in Fedora, I still have no idea how to use it. Yeah, just from an infrastructure-management perspective: if operations folks just learn OpenShift, and we can migrate all of those other special ecosystems into it, it's a much-reduced workload for them. They're not having to worry about VMware ESX and vMotion, and RHV, and OpenStack; it just becomes OpenShift. Well, I think that for RHV and OpenStack it becomes a little harder of a proposition, but I think RHV is still valuable, because RHV is the thing
that is not designed for you to just spawn machines for no reason at all; they very much stick around, they kind of live forever. So RHV to me still makes sense. If we're talking about 99% of my workload being containers and then maybe 1% of it requiring some virtual machines to go with it, then OpenStack starts making a bit less sense. But then there's the whole "I want to autoscale my OpenShift cluster" thing, and to be quite frank, OpenStack is pretty much the only thing I've seen that does that nicely. And don't even talk to me about Airship; Airship just makes my brain hurt, it's just weird. Airship, for those who don't know, is where you run parts of OpenStack on top of Kubernetes to orchestrate the rest of OpenStack. It's kind of nuts. Definitely interesting times, for sure. Yes. Yeah, great. You were sharing your chat screen; it was nice to see that. Oh, apparently even CoreOS has just too many Slacks. I mean, at least it's not 15 like mine; my Slack application has way too many, and it makes me sad when I look at it. Yeah, I had the screen sharing on for too long, I forgot about it. No, it's okay, nothing secret leaked here today, no product roadmaps or anything like that. No, just lots of hopes and dreams here, and I can't wait till Diane watches this recording. Take your meeting back, Christian. We should do more meetings like this too; this is fun. I mean, yeah, usually we just talk about what's not done yet, and I really hope we'll get to, not just free flow like this, but I really want to survey the community more and find out what you guys want to use OKD 4 for, and get feedback like "we should merge with the Container SIG" (I think that's an excellent idea), and also the Koji
operator, of course; that should be on OperatorHub, just click and install. So yeah, those kinds of things, and also just what's the new tech you want to see in OKD, so we really should ask. Oh yeah, running Copr as an operator would be cool, if people want to have their own Copr instance. Red Hat's instance of Copr is actually already containerized, so it would be interesting if people who wanted to run their own version of the Fedora Copr internally, with all of its fun multi-distro support and the fact that it has a nicer abstraction for projects and packages, could run it on their own infrastructure through OpenShift. That would be really neat. And obviously, as I said earlier, I want Pagure in there, because my thought is: well, Pagure is a really minimal git server, so what about pairing that with things like Tekton and other Kubernetes-native CI/CD things? It's not like it's competing with everything, and it'd be just kind of fun to start using that stuff. I want to use that in the Fedora community too, because if we can do this with a Pagure that can run in OpenShift, we can certainly use it for pagure.io or src.fedoraproject.org, start using Tekton on community OpenShift; there are a lot of opportunities there for the kinds of interesting things that can be done. So I really want us to get there, because I think there's a lot of opportunity here for this to be one of those rare times where a Red Hat product turns into a breakout community project, because that doesn't happen so much anymore, and it makes me very sad. Okay, cool. Yeah, we've done the hour. Yes. All right, thank you all for joining in, and I'll see you in two weeks. Thank you.