So, the main loop in LibreOffice is quite interesting. There's a main loop which has all of the incoming IO — window manager messages, events posted by the operating system, keystrokes, this sort of thing comes in — and then there's one timer. This timer is used when we build the main loop abstraction on top of that: we set that timer to the shortest duration of all of the timers that we have, we wait for the timeout, then emit stuff, and we start again. And this is how it's worked for a long time. But what this meant was that in order to order events — when you want some kind of ordering: this should happen first and that should happen later — we just picked random numbers. Now it's fine to pick random numbers, but these random numbers are also times. So if you wanted to repaint the window, you invalidate the window, you ask for it to be repainted, and then you wait 30 milliseconds. Why not? If you want to resize the window and re-lay it out, then that's 50 milliseconds. Nobody quite knows why, right? These are from the dawn of time; it's just been like that. And what we found when we did this analysis — and this was done with some students in Munich I was mentoring, as various people will remember — was a complete zoo of different-length timers: 250 different timers with different random numbers in them, and nobody knew why. So then we sort of fixed it. We still have timeouts; some things are periodic. For example, your cursor blinks, and we can make your cursor blink, but it's not going to help, right? 
But other things we pretty much want to get done as soon as possible — your spell checking, for example. Spell checking happens in the background; we don't want it to get in the way of the application. So we do a little bit of spell checking and give control back to the application, then we do another hundred paragraphs or so and hand back, and so on. But we still want it to spell-check as quickly as we can. So now we're doing this in an idle handler instead of a — I forget what the number was — 50 or 100 millisecond timeout. So we can get work done and get the CPU asleep as well. Because from a power perspective, the old way is also a disaster. What you don't want your CPU to do is constantly wake up, drag all this infrastructure up to these high power states, do a little bit of work and then go to sleep again — while the processor is desperately trying to shut everything down and clock-gate all these bits of logic, at which point up it all comes again, and this is just terrible. So this should also save us power and let us get to low power states, so LibreOffice is not consuming your laptop battery. That's the hope. So we have this idle concept, and this is stuff that happens when nothing else is going on — do stuff later — and of course we need to prioritise that. So we get rid of all these timeouts, but we say that resize is a higher priority than paint: do this one first, then do this one. So we keep that ordering constraint, and it's now a strong ordering constraint instead of a weak one, because in the past the timers that had expired were executed in whatever order they happened to be added in. So you think you have a nice strong ordering — and you probably do, most of the time — but if something happens that makes them both expire, it's very unclear which one will happen first. So you get these sort of mis-ordered events that cause glitches. 
So now when you invalidate a widget, it should get re-rendered as soon as we hit the main loop, and that's very important for double-buffered rendering, for GL rendering and so on. There's another feature of Windows that I didn't believe possible until I read the documentation. On Windows there's a very nice call, which is the core of the main loop, which is WaitForMultipleObjects. It's basically a poll system call, waking up when something happens. And these calls typically have a timeout in them: wait for this long, and if nothing happens then wake me up anyway and I'll go do some other event. Unfortunately, Windows rounds up the timeout you put in here to 10 milliseconds. The number is just arbitrarily increased: you can say "I want to sleep for 1 millisecond" and you get 10. So, great. This is not good for performance. It turns out there is a call you can make as an application that alters the behaviour of Windows for every other application too, and that makes your timers actually work at high resolution. The problem is that everyone else's timers also start working, which could be really bad: small changes in timing behaviour, we have seen, can have a big impact on stability. So why does my app work — or not work — only when some other app is installed? Yeah, well, this is bad. So thankfully these high resolution timers were implemented for us — and Windows spawns a lot of threads for it; I don't know why it does, but when you ask for these things you get about 8 threads, and the function of these 8 threads is apparently to allow you to sleep for less than 10 milliseconds. So there you go. Awesome. Anyhow — thanks to the students, Jennifer, Tobias and the others; there's a whole write-up about the work that they did there on the main loop. There are some problems that come up from this, and maybe some of you have seen them. So previously we would have code that would re-render something every 30 milliseconds. 
But that's not a huge problem — I mean, it's bad, but it's not terribly bad; people didn't even notice it, and there was plenty of time to do other work in between. Then we had Writer: when you typed, if you had typed a badly spelt word but you hadn't pressed space yet — I think that's right; it may have been fixed recently — there's this use case where you end up with a word that we would really like someone to add more to, to make it correctly spelt, and we're just going to wait for a bit and hope they do something. Unfortunately the "wait for a bit" used to be a sleep of 50 or 100 milliseconds, so it was re-checking this pointlessly only ten times a second. But now suddenly we do that really fast — a thousand, ten thousand times a second — and suddenly your CPU spikes to 100%, busy-waiting for this keystroke. The right thing is to wait for a keystroke and then check again, rather than polling. So the right fix is: where we see these 100%-CPU burning hot loops, we just fix them. For now there's a bit of a hack in there, which is a bit horrible: if you have a very low priority idle handler, we don't let it run more than once every 5 milliseconds. We should kill that eventually and fully tune this up, but as a hack it helps. The other thing we often see — and we've fixed a number of these — is you get LibreOffice and it's working perfectly, but something isn't rendering: you get a window but nothing is drawn in it; it's black or garbage. And often this is just starvation: there is some kind of high priority idle handler that's banging very hard on the processor, and no lower priority handler can get in — and the lower priority handlers include the guy that renders the screen. This is not good. Again, another stupid bug, just very silly — but luckily events still come in, so the app is still usable and there's just one dialogue that doesn't render. Pretty silly. 
Anyhow — so arguably it's better to see these things, to fix them, to fix the power problems, to get rid of the races. Here's one that we've now fixed: a configure event is something the operating system sends you to say a window has been resized or otherwise dealt with, and there was a race between getting this event and a paint timer. Previously the paint timer fired 30 milliseconds later, so most of the time the operating system had time to send us the configure event, we'd process it, and then paint — and so all was well; it didn't crash. Unless, of course, you were on a slow machine, or under heavy load, da da da — insert reason why you might trigger this race. Now of course we do the paint immediately, so we have to actually catch these problems that have been lurking there for a while. And still — if you have a question or a heckle, do just throw things at me.

So, VclPtr is the next big thing; this is all stuff we did in the last year. VclPtr was my brainwave. It was intended to be a minimal fix, not a complete one: the smallest unit of change that we could do, the basics. So Noel and I boldly set out, trying to avoid getting too stuck to the tar baby. Anyway — in the end, when we merged the initial branch, before the bug fixing afterwards, it was 276 commits, two and a half thousand files, 24,000 lines added, 41,000 lines removed, which is pretty good; and make check passed — at least, two days before we merged. The problem is that rebasing this kind of patch is extremely problematic, so actually when we merged it, it didn't pass any more; but we soon fixed that. It's pretty hard to keep these things working, and we've written new unit tests to check some of them systematically. Although we have zero open VclPtr bugs today — you can check — we tracked and fixed something like 61 regression bugs from this, in different places around the code. And actually I was really impressed that the QA team are 
really running master builds — that was just a revelation to me. There were people using weird corners of the app that I didn't know existed, which is great, and finding bugs. To a degree, we left some very paranoid assertions on — ones we'll eventually no longer need — to try and find where we'd screwed up some of these things; so a dbgutil build will assert, but the product build shouldn't. And of all the 61, only five actually escaped to users: three were fixed in 5.0.1 and another two in 5.0.2. There will be more.

So what was the change? Previously there were a whole lot of different ways the life cycle of a window could be handled — and a window or a widget, they mean the same thing to me. You could have a widget that was a member, so a whole button lives as part of its parent window: it's a member, it's sort of inline there; when you create the window you create the button, and when you destroy the window it just magically goes away. You could have these things allocated on the stack: you create a message dialog, for example, and when you click exit you just leave the function and it gets freed. You could have them heap allocated, so you whack them on the heap. And it was absolutely normal to have some mix of these going on: a heap-allocated parent, and then a whole load of stack- or member-allocated children inside it, so you delete the parent and the children go away too. Unfortunately this can make the life cycle pretty difficult to follow: you have a pointer or a reference to a widget somewhere, and you don't know anything about it — what you can count on, when it will be there, when it will go away. And it gets worse. A window can have an UNO peer alongside it, which has a strong UNO reference count — so there's a very nice life cycle semantic here for the peer — and depending on how this is handled, it could actually control the life cycle of the window. So when you unreference the last one of these, it could delete 
this guy — or maybe not. And if the window is on the stack, obviously you don't want that. So this is kind of an impenetrable mess, and that's before you start wrapping the pointers in boost, so that now you have yet another means of tracking these things; depending on who has taken a copy of the pointer, again, the life cycle can get pretty confusing. Impress is a particularly good example — Impress, because it was trendy new code (and normally a shared pointer is a good sign), had a mix of, you know, life cycles and shared pointers around the place, and just by changing a little bit of ordering you could completely break Impress in really embarrassing ways that no one could predict — unless they were an expert in understanding how all the shells interacted, how the framework worked, and how the events were propagated. Impress has particularly interesting stuff there. So this just made it extremely fragile and hard to see what's going on. Partly because of this — this is the assertion I mentioned before — the windows are very paranoid that their children are destroyed before they are, because otherwise very bad things would happen: you'd have dangling pointers all around the place. So yeah, if you see those asserts they're, I guess, bad; we should fix them, but they're not terribly helpful. The other problem that we saw in the code is that lots of it is not really very safe. For example: in order to try and work out if a window or widget had been destroyed, there was this listener pattern. The punchline is that the window was created at some point and deleted at some point by someone else; but if you're running code, you're on the stack, and you're about to emit, say, a key press — well, that key press could be Alt+F4, which is going to close your window and destroy it, but it's emitted on the window you're about to destroy. So when you come back from that callback, your 'this' pointer has actually been deleted already. You end up inside a class which has already been 
deleted by the time you get back into that code. And the problem is that this pattern could happen almost anywhere; the re-entrancy hazards here, particularly when you start running main loops as children, are a disaster. So in theory, good code — every time it called out to something where you didn't know what it would do — should use a pattern where you have a listener: we track this, we add ourselves as a listener, then we check: are we deleted, is the 'this' pointer invalid? And if so, we just stop doing things, because otherwise bad things will happen; if it's still valid, we carry on. You can see this pattern in some places in VCL — the places where it actually crashed and someone came and fixed it, like the keyboard handler and the mouse event handler — but there are loads of other places. And this is really unexpected, really unpleasant, and hard to maintain. The beautiful thing about VclPtr is that it makes it very trivial to do this: just hold a reference count on the thing. If you have a VclPtr parameter that's passed to you, that object will be alive; it will be valid in memory; the pointer will not disappear while you're executing a method. Eventually, of course, it will, when the reference count gets to 0. So this makes a whole lot of code paths safe. Now of course we have this dispose pattern alongside the reference count — I'll explain that in a minute — but we made really dozens of code paths much, much safer. I don't know how many bugs in the bug tracker we can close that were intermittently hitting this race. Anyway, I'm an idiot, as you know, but here's Philipp Lohmann's take on it — he maintained VCL for many years, and I just told him what we'd done: it always seemed like a good idea; part of the reason it wasn't done was that it was a huge ABI change, and in the old setup it was hard to know when to do a huge ABI change, because there was always commercial pressure. But also, just the building 
infrastructure made it very hard to do a big load of changes, so you ended up digging deeper and deeper holes that you couldn't get out of. So now all children of all widgets are VclPtrs. A VclPtr is null, or it points to valid memory — and it may point to a disposed object. So let's look at that. One of the problems here is that reference cycles are everywhere: VCL has lots of references to itself. Every widget has a VclPtr to its parent and to its children, so any widget you look at will have a big reference count, like 6 or 7, before you've done anything with it. Lots of references around the place, no weak references; they're all strong. So how do we free anything? How can we get that reference count down to 0 and actually release anything? This is where dispose comes in — and of course this is familiar to people: the UNO XComponent interface has exactly that. (Have we run out of time or something? No? OK, perfect.) So here we are. These arrows are ownership pointers: this guy owns a pointer to this one, and this one owns a pointer to that one. And we want to destroy this window and tear it out of the hierarchy. Do you do coloured lights too? 
I can dance. So: we want to dispose Peter, and we call dispose on him. What dispose does is throw away all of the platform resources and logic that go with the window — and even better, it also unlinks him. So this gets rid of all of the references that Peter owns: instead of Peter having a reference here, he no longer has it — you see, the arrow is gone — and instead of having a reference to something else, he's dropped that as well. He's basically tidying up all of his pointers. And then we dispose Jane, for example. Jane had a pointer out, another pointer out, and at this point a pointer to herself — which would particularly irritate things and really screw up a lot of stuff — created during construction. Anyway, she gets rid of all of her pointers, and now no one points to her and she doesn't point to anyone, so we can take her away; we're sure it's safe to get rid of her. Jane would be deleted at this point. Peter is kept around, because somebody has still got a reference to him, but Peter is in a pretty sad place: he doesn't actually have anything inside himself really; he's now just an empty shell. He doesn't have any real resources backing him; his pointers are mostly null; there's not much there — but there is some valid memory. You can ask him: are you alive, have you been disposed? You can call methods on him — you can call "give me your title" or something, and you'll get an empty string, not a segfault; you can call "get me your parent", and it returns null. So hopefully we've armoured the code and made it a lot more defensive in a lot of these places, so that this empty state is already relatively safe, and it trickles through existing code without causing problems. We only call dispose once — we have a little disposeOnce wrapper that sets a bool to make that fly — and ideally methods called on a disposed object don't segfault. That's the ideal; it's not all there yet, but we've done a lot of the most common ones. So usually we fix the bug if 
it causes this to happen, and we also fix the methods so they don't crash. So the dog-tag stuff is still there, but we don't really need it any more, so we'll be cleaning that up in the future — that's my hope. But there are some problem types that jump out here. We moved a whole load of the window subclasses — take a button — we moved most of the logic from the destructor into a dispose method, right? So we have this dispose method, and then the destructor calls disposeOnce, and we have a clang plugin to check that you're doing this right. So if you're getting warnings saying your destructor doesn't have a disposeOnce call in it, your tinderbox will fail and bad things will happen. There are various checks around here to make sure that every VclPtr member you have is either cleared or disposed in your dispose method. So you need a dispose method, you need to clear these guys, and you need to have a destructor that works — and this catches a lot of the people doing this stuff wrong. The problem is that there can be other members of the class that are not window subclasses, and we can't see and catch those. Previously these would have been destroyed as part of the class's destructor — the destructor, as well as running its own code, also destroys all of its members first, right? 
It destroys them all at the end — and that, of course, no longer happens in the right place, because we're now doing most of what was in the destructor in a dispose method. So in occasional cases we need to add new dispose methods to other object types whose destruction ordering is important. That was a very complicated statement, but you can read the slides later, and I can say "I told you so" when you hit the bug. The other interesting thing is that vtables are mutated as you destroy stuff. Here's another complicated example: we have a base class with a virtual function, doFoo, which prints "hello world", and an inherited class which prints "whatever" down here. The destructor of the base, when it calls doFoo, doesn't do so on an inherited object: if you have one of these and you delete it, the base destructor is going to call the base method, not the derived one. As the destruction happens, the vtable is tweaked to make the object look like the kind it currently is — and this is kind of obvious to any C++ programmer, et cetera. But the problem comes when you move to a dispose pattern: if you move all of this out of the destructor and put it in a virtual dispose method, that's no longer true — suddenly you get "whatever" out instead of "hello world", and this also creates problems. So there are a number of problem types we fixed there, and a number of ways we try to detect that this is happening and deliberately not call into children. There were a number of problems there, but again, these I think are mostly fixed by now. Anyhow, some benefits: I hope we can implement UNO interfaces directly inside VCL windows. In the past we've often had to have peers that are separate objects, and then again you have another life cycle problem chaining these two things together — it's more code and more pain. Accessibility peers are a great example of objects that are created and life-cycle-managed unhelpfully; there's no good way to do that at the moment. I hope it's all more stable and reliable. And lots of people can call dispose — it doesn't actually matter; you can 
do it multiple times. There are a number of ways this makes life easier, and we found and fixed quite a few leaks whilst walking through this — there were a number of dialogues that were never destroyed. We created a few more leaks whilst doing it: at one point every dialogue was leaked. I think Caolán fixed that the other day — there was a thing that wasn't a VCL Window holding a VclPtr, and it wasn't checked by the clang plugin. The other thing is that everything is my fault: previously Caolán had touched every dialogue, and there are bugs in anything that you do — but now that's forgotten, and it's amazing the regressions that my work has caused. In fact one particular "regression" was in 4.2, predating the work by a year. So there you go. So that's VclPtr; do come and see me if you don't understand it — I'm sure you've got some fun questions for the end.

Another thing we did in the last year was create a small test app for VCL that would actually exercise the API in a reliable way and allow us to see how it's behaving. As a test app you can click on various bits, and you can test virtual devices — can you render to them and get the contents back, and so on. This was really very important for the OpenGL work, to be able to be sure of what we're doing. There are even unit tests in there, and benchmarking tests, so you can run these things rapidly in a loop to see what's happening.

OpenGL rendering: two and a bit man-years in here — lots of fun. There's code in various places in VCL: the OpenGL context, which is used outside VCL by the 3D transitions in Impress and 3D chart rendering in Calc — and the GL canvas, I think, uses it too; then there's the VCL backend, which effectively implements the graphics drawing that VCL does; and then there are some Windows-specific bits. And why did it take so long to get this to work? Well, you know, I'd argue that OpenGL is just an appalling 
programming API — it's really unbelievably bad. It has all this global state: behind it there are global variables everywhere, so you can't read the code unless you're mentally tracking the global state along every call path that comes into a method. And this encourages the most appalling construction, because once someone has nailed down global variables as the first design premise, everyone builds on it and creates an even bigger mess. You would think it would be possible to do something about this — wrap the state, and da da da — but the problem is that changing the state is unbelievably expensive. It would be nice to create an API on top of it, but if you're talking 5 or 10 milliseconds to switch context, or to sporadically read all the state back, you might as well give up. So this encourages some horrendous programming and hides the horrible mists of the drivers underneath — but hopefully we've got many of the fixes in. I hope we'll get a demo of apitrace, and how we can start to trace what's going on and see how it works. Getting it into shape has been fun, and I'm pretty happy with what we've got — a lot improved. We're heavily using framebuffers to avoid GL context switching: when you have virtual devices, which we use a heck of a lot, we try to pull them all into the same GL context. Thanks so much to all the guys who've done fantastic work on this — Markus, here at the front, spearheaded lots of it.

So what else, right, in terms of understanding OpenGL? Your GPU is just this massively parallel computer. It's very, very different to how a CPU works, and it can do so much, so quickly — but only work of a certain kind. For example, there's a whole lot of what I call free work: the GPU has a certain fill rate and memory access rate, and actually, once it's got the memory, it can then do really a lot of work on it — and if you don't do much work, you don't get any better performance. So say you have a particularly funky 
algorithm: it can perform almost the same as a really very simple algorithm, just because the bottlenecks are elsewhere in the work. So we can do all of our alpha blending — and, as we'll see, various other things — on the GPU and get it effectively for free. One of the slight problems we have is a lot of texture traffic, constantly changing little textures: imagine a button which has an icon on it, you know, 16x16. So aggregating those together really makes a lot of sense: have a single big texture and then cut little bits out of it for rendering — a great job was done there, handing out patches of pixels to whoever wants them. There are various funky algorithms for shading individual pixels. Of course the GPU comes from the games world, so you can imagine it's really optimised for working out normals and complicated lighting algorithms: every pixel in the scene you see, at 60 frames a second, has very significant floating point maths happening to calculate its colour in a 3D world. So we can use that to make our pixels pretty: anti-aliased lines, for example, run quite a funky gradient algorithm, doing the work on each pixel again and again. And font rendering — there was some great work here too, splitting the rendering. Previously the initial pass used the operating system to render text into a buffer, then uploaded that to the GPU and stuck it on the screen. But this doesn't really use the GPU as well as it could: texture uploads are potentially slow — they potentially mean a DMA across a PCI Express bus — so it's much, much better if we can compose and pre-render all these glyphs into a big series of bitmaps, and then copy bits of bitmap across. The rendering of text then becomes copying small chunks and alpha-blending them on top of each other to build the text up. So that's really pretty nice. Then there's the next funky stuff, which Markus has reserved — he has a passion for this — 
and that's to do the glyph rendering on the GPU, which is cool. I'm really looking forward to seeing that: using up some of that great spare capacity in the GPU's calculation engine to render the glyphs, with signed distance fields. CRC calculation — it turns out that when you load a big Impress slide deck, or a Writer document with lots of images, one of the big costs as you load it is actually checksumming all the images to make sure they're not duplicates of other images. Which is a questionable choice at the best of times, but possibly useful: if you're loading a document with a dozen of the same image, you don't want to have lots of copies of it around. Previously we used CRC32, which is really not ideal collision-wise for that, so we switched to CRC64. And just as an example of the difference between CPU and GPU: the cost of this on the CPU is virtually negligible — using 64-bit numbers doesn't really impact the cost of the calculation, because the CPU has all this spare resource lying around, 32-bit or 64-bit or whatever — but on the GPU it actually makes a measurable difference, a 64- versus 32-bit calculation. Anyway: for a very small picture we get some kind of factor-of-2-ish performance win; for a slightly bigger image — but still small in terms of modern cameras — a 4x, I think. As these images get bigger and bigger, and you can bring the GPU's bandwidth to bear, we'll get much better numbers — particularly since the previous approach pulled the image back from the GPU: we'd transferred it all there, then brought it all back to run the CPU algorithm on it, which is not good. So now we do a double-pass reduction: we shrink it by 16, and then by 16 again, and then we do the rest on the CPU.

GTK3 — I'm not going to say anything about that, because Caolán has a talk on Friday which should be awesome, but there's a big amount 
of work gone in there. So, my unfunded ideas — my wish list. One of the biggest holes in VCL is that we still have an alpha transparency design that comes from the 70s or 80s: we have a separate alpha layer. And this is a disaster, because the GPU itself is using an almost uniform RGBA underneath — you allocate the memory anyway, and then you have to have another texture which you're constantly looking up to find the alpha in. So this is just really stupid — and it's worse than stupid, because you miss the fast paths in the GL drivers, which you don't want to do. We could drastically simplify the code, improve performance, and save memory. It's kind of a big thing: it means removing BitmapEx's separate bitmap-plus-mask and rationalising it — I'd love someone to volunteer to do that. Another thing we discovered was that floating windows were copying the area under the window when they pop up, and then trying to restore it when they pop down — which, again, was probably a good idea in the 80s — and also tracking any rendering that happens to that area and re-rendering it off-screen. Nasty stuff. So there are some easy hacks there to pull rubbish out. Another thing VCL has is this idea that it's a reusable toolkit — and it isn't; it would be nice if we could push some of the stuff in the upper layers down into VCL itself, so we have less of this: you can inherit something, but actually there's only one class all the way down the inheritance chain, which just adds complexity for no real use. The slideshow: there are a lot of horrible hacks in the slideshow working around old VCL problems. Now that we have high resolution timers we don't need our own thread and our own custom main loop inside the slideshow; it would be really good to get rid of that, and at least know when it's safe to present — there are some nasty problems there. Then we need to finish the idle rework — I guess there's more we can do there to unify it and make 
it more clean, and we need to move more things to lower-priority idle handlers — I think there are some bugs to shake out there. LibreOffice Online has a very different use model: we want to save memory while also wasting it in great gobs. LibreOffice Online does a lot of pre-initialisation and then forks, so you can load lots of documents in small children, and it makes a lot of sense to do as much work as you can before you fork, because then your children will share it. For example, rendering every glyph on the system, at every size people think is sensible, into bitmaps, so you can very rapidly compose them afterwards: wasting a gig of RAM could actually save you significant CPU time — and, shared across the children, lots of memory. So it's a slightly different use case; it's hard to get your head around, maybe. The other thing I'm really eager to do is stubbing out the font layout stuff so we can actually unit-test the layout; I think that's critical for the next round of fixing problems. Currently, across platforms, we can't test whether we have a good layout or a bad layout — we can't get a consistent glyph layout — and we hopefully want to put out a tender for that, if the board agrees, and get some really good work there. OpenGL is riddled with opportunities. The virtual device API loves to create one-by-one-pixel virtual devices, throw them away, and resize them — it's really hard not to do this — so there's just this ridiculous churn of creating one-pixel-square textures, setting them up, and throwing them away again, which is really stupid. Glyphy — yeah, there's some great stuff there; I'm hoping Markus can get to that. Keeping geometry on the GPU: we constantly rebuild and re-tessellate and redo work that we could keep in a GPU-side cache and just move around, which would be awesome. Image desensitising: your toolbar icons, when you can't click on them, visually change in a way that you can see, and it would be great to do that in the free cycles we have on the 
GPU, instead of pulling them back to the CPU, banging on each pixel, and pushing them back again — which, again, makes no sense. It would be nice to get double buffering on — it's good to get it on — but also to stop duplicating that work: if you look at Writer now, the whole core of what goes on in a Writer paint is not only double-buffered when you turn that on, but there's also a virtual device that it's all rendered into and then copied to the screen. It would be great to unwind some of these older optimisations and get it rendering directly — and, yeah, reduce typical work, call glFlush less, blah blah blah. So — I don't want to go too far over; I've already gone a long way. Lots and lots of work, just a huge amount of work, has gone into VCL in the last year. There will be some problems: please be patient, and help us get it right. There are big performance wins, lots of long-term swamp drained — dry land we can stand on — and a really improving cross-platform future, done along with you guys as a team. If you're interested in helping out, come and see me. Thank you.