And we are recording. So let's talk about Koji. Just a show of hands from the audience here: raise your hand if you use Koji. Now, raise your hand if you have run a Koji instance or installed a Koji instance. Raise your hand if you wanted to pull your hair out. Fair enough. All right. So Koji's been around for a while. Let me start with a bit about me real quick, in case we haven't met before. Hi, I'm Mike. And I have been at this, yes, far too long. Far too long, yes. With Red Hat since 2001, not as long as some people in this room, but a while. I actually wrote most of Koji, most of the core stuff. I'm not the only person that worked on Koji; there are lots of other people, and I don't mean to minimize the work that they have done. I work for the Release Configuration Management team at Red Hat, but everybody calls us Release Engineering, because that's what we used to call ourselves. And a long time ago, in a past life, I actually worked on installer QA. You can find me in Fedora as mikem, on freenode as mikem, at Red Hat as mikem, and a few other places with variations like that. You all know about Koji. You may not realize that a lot of other people use Koji besides Fedora, such as Amazon, and there's a bunch more. Go to the Koji project page and look at the "Runs Here" page; there's a list, and that's almost certainly not all of them, because it's just self-reporting. So Koji's kind of old. I wrote the first line of code more than ten years ago. It wasn't originally public; it was internal-only at Red Hat for a while, and we used it to build Fedora Core 6, back when we were still building Fedora Core inside of Red Hat's walls, before the merge with Extras, if anybody remembers that. Shameful past. We publicly released it in 2007 so that Fedora could use it for its build system, and that's when it became called Koji. Before that, there was no such thing as Koji. Well, there was, but it was used to make sake.
And if you want to read more about the history, I don't want to dwell on it here, but there's an article I wrote for opensource.com a few years ago; Google "Koji" and "sake" and you'll find it. Also, stop me if you have questions, because I love answering questions. As I mentioned before, there are plenty of pain points in Koji. You've felt them. I've felt them. And they're not all Koji itself. Limited documentation is one of the big ones. The code's getting old. I'm lying on the slide here: it says it's written for Python 2.4. It's currently written to run on RHEL 5. When we initially wrote it, it was written to run on RHEL 4. Python 2.4? No, it was 2.3.5, I think. Is that right? I think 2.3.5. Yeah, fair enough. Decisions that seemed like a good idea at the time. At the time, we only cared about RPMs. Very shortly afterwards, we started caring about more than that, but Koji is very RPM-centric. We've added on features for other things that aren't RPMs, but they do have a bit of a bolted-on feel, just because the basic data structures in there weren't written with other stuff in mind. It can be a pain to deploy initially; see the earlier note about documentation. And there are a number of restrictions that made a lot of sense when we were writing a build system to build RPMs for a Linux distro, but some of those restrictions don't necessarily make sense in all workflows, and they're kind of hard to get around when they are unique key constraints in the database. So there are pain points. Don't get me wrong, I love Koji, but it's got its problems. And we're facing new challenges. Dot-next, whatever dot-next you care about, dot-next is going to be a challenge. We care about more than just RPMs now. That's been true for a while, and it's getting more and more so. The workflows in Koji can be sort of a hassle; I hear complaints that they get in the way of developers getting their work done sometimes.
Continuous integration is a big concern. Koji is not a continuous integration system, and probably never will be, but it could really play nicer with systems that do that sort of thing. And overall, we have more people using Koji than we ever did before, more than we ever thought we would, and they have different needs. So Koji needs to grow and change to adapt. Over the years, we've talked about things we'd like to do and said, well, that would be nice, but if we do that, we're going to have to change everything. We're going to have to change deep core stuff in the database and make for a heck of a migration. So we'll put that off to 2.0. Pretty soon, you realize you have to finally do 2.0. So we're doing 2.0. Let's do 2.0. We posted a draft roadmap to the koji-devel list last year. There's a link; it's the very first email to the koji-devel list, as you can tell from the URL there. Then we discussed it more at the Fedora Activity Day that we had in June. And the conversation is still going. If you're interested in Koji 2.0, if you are interested in the future directions of Koji, then now is a good time to hop on the list, or talk to me, or talk to other people, and make your voice heard. So let's talk about what we'd like to see in Koji. I don't think I have time to talk about everything, so here are some of the highlights. At a high level, you could put these goals on almost any project, but we really want better documentation. I'm not just giving that lip service here. I really feel like that's a big barrier to entry for Koji, not just for people to use it, but for people to deploy it and for people to contribute code. So do take that top one seriously; that's one of the reasons we need better documentation. I'm going to write some, and if you have expertise, I would love to see contributions.
Even if it's not something you wrote, even if it's just something that you found helpful, you could point me at it and say, hey, this is much better than that stuff you have on your website, why can't we use that? More community involvement. We've gotten some very helpful patches from the community, and we've definitely gotten very helpful feedback from the community, but I want more. There's a lot of work to do for Koji 2.0 and I can't do it alone. As I mentioned, Koji is getting a little gray around the muzzle, so it's time to refactor a lot of code that was written a long time ago. I knew less about Python then than I do now. We need to modernize; we need to get rid of some old dependencies that don't make sense anymore. Yes, thank you. So, for the refactoring, what Python modernization are you aiming for? I'll get to that. Because we still care about some older RHEL releases to some extent, we can't just target Python 3. As you pointed out, even RHEL 7 doesn't really have Python 3. So, 2.6? Isn't that what's in RHEL 6? I think I picked 2.6 because of RHEL 6. RHEL 6 has a bit of life left in it, and personally, the Koji instance I care most about still uses a lot of RHEL 6, and that's not likely to change. Yeah. That is a possibility. Yes. But there's a lot of code that needs to run in all these places, a lot of code that's used all over. So for that code, I think 2.6 plus one of the adaptation layers is really the way to go. But I think I have that covered in a later slide. There's one thing I would like to see: okay, let the backend be in whichever version of Python, but provide client support for Python 3. Because nowadays, if I want to write code which will interact with Koji, I am limited to Python 2. Well, we will certainly have that, yes. We will certainly support Python 3. But we're doing it in a way so that we don't cease to support Python 2.
And that's a bit of a trade-off, because if you support both Python 2 and Python 3 at the same time, you sort of say, hey, there are all these really great features in Python 3 that I am just not going to use. And at the same time, if I have to support three versions of RHEL, it's really a pain to support all the versions. We're mostly in violent agreement here. Okay, well, so in five years, in five years I'm hoping that we'll be talking about Koji 3.0. If not... if. Well, thank you. And maybe that will be our chance to finally fix the Python situation; we haven't been great at looking that far forward, so we'll take on the big questions when we get there. Yeah. I will come back to Python 3 when I get to that slide. All right. Different types of build processes, and I'll get to what that means a little later. But right now, what do we support? We support building RPMs with mock, building jars with Maven, building images with ImageFactory. I wanted to throw in a Family Feud reference about "deprecated" there, but I don't know if everybody would get that reference. All right. But yes, each of those different types of builds has been sort of difficult to add into Koji, and they've always sort of felt bolted on. The schema is built around RPMs, again, so we want to try to open that up, and we'll get more into what that means a little later. Different types of build output, relatedly: if you have a different build process, it might produce a different type of output. Hardwired restrictions need to be more configurable. For example, NVR uniqueness. We have such a thing, and there are plenty of workflows where NVR uniqueness makes no sense.
For example, if you are doing the CentOS build system and you need to rebuild the same NVR over and over and over again until you get it right, or if you're doing a CI system and you rebuild the same NVR over and over again. Or various other access controls that are just hardwired to be a certain way and you can't really change, like which users can do garbage collection. Or garbage collection policy: if you want to say, I don't really care about keeping all these reference builds around because I don't really care about reproducing buildroots from five years ago, you can't really do that. The GC one? No. Oh, the NVR one? Yeah. Yes. Yes. No, I'm not going to pretend I have solutions to all of these. This is still planning. We want to make it easier to deploy. Having better docs would get that started, but it would also be nice to just have fewer steps in between getting the code on your server and getting the system running. There are lots of little knobs and configuration files to twiddle. I know that there are ways. A better QA process, which would really mean having a QA process at all, because right now that's not really true. I do a lot of testing myself, but I don't really know who else does, and that's not enough. Yeah, we kind of need CI for Koji. Yeah, right. Yeah, QA on the server. Yeah. And a better release process, because we don't really have a formalized one. I mean, I put out releases, but if you look at the history of them, they've gotten further and further apart. I will talk about scratch builds later. There are a few big-ticket things that I believe are going to land in 2.0. Python 3 support: see, I told you there was a slide. And I guess you were looking ahead, because you saw that coming. As I said, older systems still matter. My plan right now is to target Python 2.6 and use the six library as a glue layer to make it work in all places.
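To make the "2.6 plus a glue layer" plan concrete, here is a minimal sketch of the kind of shim the six library centralizes (six.PY2, six.string_types, and six.iteritems are the real names there); it is written inline here so the snippet is self-contained, and it is illustrative, not Koji code.

```python
import sys

# The kind of glue that six centralizes: decide the version-specific
# spelling once, then shared code uses the shim everywhere.
PY2 = sys.version_info[0] == 2

if PY2:
    string_types = (str, unicode)  # noqa: F821 -- name only exists on py2
    def iteritems(d):
        return d.iteritems()
else:
    string_types = (str,)
    def iteritems(d):
        return iter(d.items())

# Shared code then stays identical on Python 2.6+ and Python 3:
def describe(build_info):
    parts = ["%s=%s" % (k, v) for k, v in iteritems(build_info)]
    return ", ".join(sorted(parts))

print(describe({"name": "kernel", "release": "201.fc24"}))
# name=kernel, release=201.fc24
```

The point of the pattern is that only the shim block knows which interpreter it is on; everything downstream is version-agnostic.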
That's not set in stone. If there's a giant upswelling of people saying no, no, no, 2.7, then we can talk. Me personally, I'm not going to stop caring about RHEL 5 immediately, so I'm probably still going to have some sort of basic client lib that will work there; it's just going to be separate. It'll be a complicated migration. I'm not saying we won't be able to migrate the data; I think we will. It might not be as clean a migration as you'd like. I mean, it's the difference between migrating from, I don't know, Mercurial to Git, versus migrating from Git to CVS. Or, I don't know. The important thing is not to lose what's in the database. Yes. Whereas right now, all our migrations so far have been: here, apply this relatively painless schema update. Sometimes they take a while because they update a big table, but I don't remember them ever taking more than 20 minutes on my slow server, and it's just SQL code. The migration for this will probably be a Python application that reads in your database and does a whole lot of calculation to figure out how this data maps to that data, probably tracing through the entire history that's in the Koji database and rewriting it as new history in the new one. It will be a pain. It's not going to lose anything, though? Right, well, that's me. The Koji instances I care about, I don't want to throw away the history. Other people may be fine with saying, okay, here's our new Koji 2 instance. Start building here. It's empty, have fun. So, I sort of punted on the Python 3 conversation before. Was there anything left you wanted to get off your chest? All right, awesome. I think that says it all. Well, we figure we're going to have a call center; we'll subcontract the builders out and they'll do manual data entry. I like JSON. I think we'll do some sort of JSON-based RPC.
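Purely for flavor, since the wire format is explicitly still open, a JSON-RPC-2.0-style envelope around an existing hub call might look like this sketch. The framing here is hypothetical; only the method name getBuild is a real Koji hub call.

```python
import json

def make_request(method, params, req_id=1):
    """Build a JSON-RPC 2.0 request envelope.

    Hypothetical framing for illustration only; the actual Koji 2.0
    wire format is still under discussion.
    """
    return json.dumps({
        "jsonrpc": "2.0",
        "method": method,
        "params": params,
        "id": req_id,
    })

# A call shaped like today's getBuild hub method:
req = make_request("getBuild", ["kernel-4.7.2-201.fc24"])
print(json.loads(req)["method"])  # getBuild
```

Whatever shape wins, the appeal over XML-RPC is the same: a self-describing, widely supported envelope that any language can produce and parse.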
Exactly what form that takes is a matter of discussion. I'm open to suggestions; I'm open to arguments there. And I really hope we can dodge having to write some sort of external compatibility layer for the old RPC interface, because I don't want to do it. But if anybody out there has a giant ton of scripts that they think they can't port and just have to keep working, then, well, you're probably on your own. Yeah. Build namespaces. There's actually a partial implementation out there. Where this gets painful is not so much adding a namespace field in the database. That's pretty easy to do: tweak the uniqueness constraint, boom. The problem is that we have this filesystem layout in Koji, where builds live at /mnt/koji/packages/<name>/<version>/<release>. As soon as you do this, that path isn't unique anymore. So you have to change it, and I feel like that's a pretty big deal. The patch that I have, which is experimental and buggy, preserves that filesystem path for the default namespace, namespace zero. So as long as you're in namespace zero, if you were only using that one namespace, then it works just like it did before. But when you move anything to another namespace, then it changes. So, I touched on this before. It's useful for a number of workflows, and it's useful for anybody that cares about not having NVR uniqueness enforced. And an interesting side effect, depending on how you implement it, and the implementation I have allows for this, is that the namespace can be null. And builds in the null namespace are like bosons: you can cram as many of them into the same NVR as you want. So scratch builds would become null-namespace builds, which would mean that a scratch build would be a real build, just in a null namespace. So you'd have all the same metadata.
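The database side of the uniqueness tweak can be sketched quickly. SQLite stands in for Koji's PostgreSQL here, and the table and column names are simplified, not Koji's real schema; but both databases treat NULLs as distinct in a unique index, which is exactly the "bosons" behavior described above.

```python
import sqlite3

# SQLite stands in for PostgreSQL; both treat NULLs as distinct in
# unique indexes. Simplified schema, not Koji's actual one.
db = sqlite3.connect(":memory:")
db.execute("""
    CREATE TABLE build (
        name TEXT, version TEXT, release TEXT,
        namespace INTEGER,   -- NULL would mean a scratch-like build
        UNIQUE (name, version, release, namespace)
    )
""")

def add_build(nvr, namespace):
    name, version, release = nvr
    db.execute("INSERT INTO build VALUES (?, ?, ?, ?)",
               (name, version, release, namespace))

nvr = ("kernel", "4.7.2", "201.fc24")
add_build(nvr, 0)           # default namespace: fine
try:
    add_build(nvr, 0)       # same NVR, same namespace: rejected
except sqlite3.IntegrityError:
    print("duplicate NVR rejected in namespace 0")

add_build(nvr, None)        # null namespace: fine
add_build(nvr, None)        # ...and again; NULLs never collide
count = db.execute(
    "SELECT COUNT(*) FROM build WHERE namespace IS NULL").fetchone()[0]
print("null-namespace copies:", count)  # 2
```

So "promoting" a scratch build would just mean setting its namespace to a real value, at which point the unique constraint starts applying to it again.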
Scratch builds are already just like regular builds, except they don't get imported. This would actually import them, into the null namespace. And yes, garbage collection has to account for this; the default garbage collection rules would probably be very aggressive about null-namespace builds. But you could save one, move it out of the null namespace, and boom, it's a real build again. It's a real boy, like Pinocchio. No, no, so before I go on to the next thing, does that make sense? Anybody have any burning questions about build namespaces? Actually, there's an interesting open question here, which I don't yet feel 100% sure about the answer to, and it is this: does it make sense, once you have multiple namespaces, for a single build to occupy the same NVR in multiple namespaces? A namespace is just a namespace, just an extra key in the uniqueness constraint. Sure, but the contents can be completely different: you could have the same NVR in two namespaces with totally different contents, and you don't think you want to mix those up. Although, one thing I do have in the implementation is that a tag has a namespace associated with it, and what that means is that when you build into a particular tag, that's the default namespace that the build gets, if it's not a scratch build. Because otherwise we'd have a Koji with both tags and namespaces, and you'd be like, okay, does this go to some namespace, or some tag, or... So here's the thing: right now we have builds in Koji that are multiply tagged. It happens all the time, right? Yet with namespaces, it's not clear whether or not we would have them multiply namespaced. And when we have builds multiply tagged, often we really do want those to be the same build, living in the same namespace, if you know what I'm saying.
You don't necessarily want namespace boundaries to cut across related tags. You may have a whole set of tags for, let's say you've got Fedora 25. Say you have a set of Fedora 25 tags: you might have a candidate tag, an override tag, a build tag. You don't want to be in the position of having different builds occupy the same NVR in those very related tags. So tags and namespaces are separate things. It's a tricky thing to introduce. It opens up a lot of questions; it's a little bit of a can of worms, but I really think we need it. Another open question is this: do we actually need more than one namespace? It depends on what you want to do. I think in Fedora we might not. It might be enough to have just the default namespace and null. What? That's two? Well, null is not so much a namespace as the lack of a namespace. For a Copr workflow, if you wanted to use Koji as an organizational backend for Copr, then yes, you'd want multiple namespaces, one per user or project. Well, I was thinking of CI, using Koji as the backend for CI. So if you would have Tito doing the builds; Tito, if we did the builds in mock, I think we'd want them in Koji. Anyway, I'm just... Yes, yes, the namespace would definitely be one of the factors that garbage collection would look at. All right, so let's talk about content generators. This is a big one, and it might not be obvious to everybody what it means. Content generators are actually not a Koji 2.0 feature; they're a Koji 1.11 feature. All right. But... well, they don't know what it is yet. I'm not the person who's freaking out about this. But in 2.0, I want to really embrace content generators. So let me tell you what content generators are. A content generator is, in a nutshell, a mechanism in Koji that allows something that isn't Koji to act as a build source. So in a sense, it's a glorified import.
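To give a flavor of what such an import carries, here is an abridged sketch of the kind of metadata a content generator submits alongside its files: a build, the environment it was built in, and the output, cross-referenced by buildroot id. Field names follow the content generator metadata format as I understand it, but treat the details as illustrative; the linked spec, not this sketch, is authoritative.

```python
import json

# Abridged, illustrative content-generator import metadata.
metadata = {
    "metadata_version": 0,
    "build": {
        "name": "mypackage",
        "version": "1.0",
        "release": "1",
        "source": "https://example.com/mypackage.git#abc123",
        "start_time": 1467100000,
        "end_time": 1467100500,
    },
    "buildroots": [{
        "id": 1,
        "host": {"os": "fedora-24", "arch": "x86_64"},
        "content_generator": {"name": "my-cg", "version": "0.1"},
        "components": [],   # everything installed in the environment
        "tools": [{"name": "my-builder", "version": "0.1"}],
    }],
    "output": [{
        "buildroot_id": 1,  # which environment produced this file
        "filename": "mypackage-1.0.tar.gz",
        "filesize": 12345,
        "checksum_type": "sha256",
        "checksum": "0" * 64,
        "type": "tarball",
    }],
}

# The payload is plain JSON-serializable data:
print(json.dumps(metadata, sort_keys=True)[:60])
```

The key idea is the cross-referencing: every output file points back at a described buildroot, so Koji can track what was built where, not just which files showed up.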
So as we grow and handle different types of build processes and different types of workflows, we may have cases where somebody needs a radically different system for creating builds. We'd still want that to get into Koji for tracking, garbage collection, shipping, releasing, et cetera. Yes, I mean, at an extreme... no, I'm not going to guess at what all people might want to use this for. But yes, that's a distinct possibility. But we're trying not to make it a free-for-all. The idea here, and I don't know if you can read the text, is robust metadata import. The idea is to hold these content generators to some sort of standard that matches what we expect from Koji. That is, you're not just chucking a bunch of files into a database and saying, here, I built this, have some artifacts and some logs, and be done with it. The metadata format, and you can look at the link later if you're interested, specifies details about the build environment, mappings between the contents of the build and which build environment each piece was built in, which content generator built it, et cetera. It's a complex, robust metadata import that is similar in some ways to, and more robust than, what Koji does now when it imports builds. So, in 1.11, we're going to have this added as a new set of hub calls that allows a content generator to do these sorts of imports. And there are content generators already running in some places. In 2.0, I want everything in Koji that generates builds to use the same import calls and the same data structures. I want it unified in 2.0. We won't have that in 1.11, but... say again? Right, right, in a sense. If I were going to do that, basically the content generator would be inside Koji. Literally, yes: Koji would become a native Koji content generator, exactly. A question along those lines: could one Koji instance put up something that another system could import from?
Well, I mean, technically, right: a Koji instance could serve as a content generator for another Koji instance, if you set that up, yes. You may or may not want to do that. Koji federation is one of those line items on the wish list that I do not have in the slides, but it's definitely something people are thinking about. There was a follow-up suggestion about wrapping this up in tooling like fedpkg so it could drive those commands. I'm following, but... It's fine. It's fine. My IQ drops with every slide. So. Unify build types. I think we've touched on this before. We have a bunch of build types: RPM builds, which were the original build type, and we also have Maven builds, and Windows builds, which, yes, I realize it's there; don't ask me how to use it. Yeah, I debated even putting that line in there. But it's in the code. It's off by default; you have to turn it on. It's not on in mine. Also image builds, and whatever else might come up in the future. Right now these are all handled separately. The RPM build type is very different from the other three. The other three are sort of unified, but it's all still a little awkward, and adding new ones feels even more awkward. So I want it more unified, because we know we're going to have more. One of the things that I was debating about when I initially wrote the roadmap, but that became very clear during the Fedora Activity Day, is that this pretty much has to happen: Koji needs to grow a message bus. And that does not mean Koji talking on fedmsg; Koji already talks on fedmsg, we have a plugin that does just that. This means Koji having its own message bus. And why would Koji want its own message bus? Because right now Koji does a lot of polling.
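Concretely, "a lot of polling" means loops shaped like this sketch. The get_task_state callable here is a stand-in for a hub call, not Koji's actual client code, though the task states named are Koji's real ones.

```python
import time

def wait_for_task(get_task_state, task_id, interval=5.0, timeout=300.0):
    """Poll until a task reaches a final state.

    This is the pattern the client and builders effectively use today;
    `get_task_state` stands in for a hub call. A message bus would
    replace this loop with a subscription and a push notification.
    """
    deadline = time.time() + timeout
    while time.time() < deadline:
        state = get_task_state(task_id)   # "hey Koji, what's happening?"
        if state in ("CLOSED", "FAILED", "CANCELED"):
            return state
        time.sleep(interval)              # ...ask again in a bit
    raise RuntimeError("timed out waiting for task %s" % task_id)

# Demo with a fake hub whose task finishes after a few polls:
states = iter(["FREE", "OPEN", "OPEN", "CLOSED"])
print(wait_for_task(lambda tid: next(states), 42, interval=0.01))  # CLOSED
```

Every client and every builder running a loop like this is constant background load on the hub, and the interval is pure added latency, which is exactly the noise a bus removes.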
Every time you run a build in Koji, your client is sitting there going, hey Koji, what's happening with this build? Hey Koji, what's happening with this build? Instead, it could get on the message bus and say, hey, tell me when something happens with this build. Similarly, all the builders are doing something very similar to polling. A question: if you run fedmsg, does it only send to the Fedora fedmsg, or is there a way to turn that off? So let's distinguish between the fedmsg software and the fedmsg instance that Fedora runs. I'm not specifically speaking about the fedmsg software here. Okay, so it may well be the fedmsg software, yes. But we don't necessarily have to do scheduling in the same way; we probably won't do scheduling the same way that we have been. The sort of ad hoc racing that kojid currently does to decide who gets what task is probably going to go away too, but I don't really have details. This is something where I have much less of a sense of what it's going to look like in the end than some of the others. But it became very clear to me that something like this has to happen, so I think it's going to happen. It may or may not be fedmsg or ZeroMQ, but it will be a message bus, because we really do have to get rid of some of the noise in here. And it'll make people a lot happier, because you'll get much better responsiveness out of the client and much better responsiveness out of the builders: we won't have to wait 30 seconds for a builder to notice that it has a task assigned to it. Yeah, I'll touch on that in a minute. But yes, the scheduler needs some help. It's just not smart enough, and the only way it's going to get smart is if we centralize it. Plugins. We already have plugins in Koji: we have plugins for the hub, plugins for the builder. The web UI has themes, but no plugins. So I want plugins in more places.
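As a sketch of what a cleaner, more Pythonic hook API could look like, here is a minimal decorator-based plugin registry. This is an entirely hypothetical interface for illustration, not Koji's current plugin mechanism, and the event name and values are made up.

```python
from collections import defaultdict

# Hypothetical sketch of a decorator-based plugin hook registry --
# not Koji's actual plugin interface, just the general shape.
_hooks = defaultdict(list)

def callback(event):
    """Register the decorated function to run when `event` fires."""
    def register(func):
        _hooks[event].append(func)
        return func
    return register

def fire(event, **kwargs):
    """Invoke every plugin registered for `event`, collecting results."""
    return [func(**kwargs) for func in _hooks[event]]

@callback("postTag")
def announce(build, tag):
    return "tagged %s into %s" % (build, tag)

print(fire("postTag", build="kernel-4.7.2-201.fc24", tag="f24-updates"))
```

The attraction of this shape is that adding behavior means writing one decorated function, with no edits to core code, which is what "plugins in more places, done cleanly" amounts to.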
I want them done cleaner and consistently and more Pythonically, because people need to extend this in a reasonable and documented way. So, still more. Like I said, I can't go over everything we want to do, but there you go: smarter task scheduling. Modular authentication. Right now, we support multiple types of authentication, but if you wanted to add a new one, it would be hard. I know that some people in this room would really like to have Koji support OpenID; at least, people have talked to me about that. So, make it easier to do something like that. The web UI could use a lot of updates. Yeah, I mean, for example, the URLs: you have to know a numeric build ID instead of going to koji slash build slash NVR, or a package ID instead of the package name. Yeah. I've got a whole list of gripes like that. The web UI is something that, I'll get to it in a minute, but yeah, it's the front face of Koji for a lot of people. Not for me, I just use the command line. But... I mean, we could write the Twitter plugin tomorrow. We don't need 2.0 to have a Twitter plugin; we could do that with fedmsg. For the web UI, we're probably going to use Flask as a lightweight framework, instead of the ad hoc, no-web-framework thing that we have now. I don't want to use anything too big, but Flask seems like it's probably about the right size. And again, these are sort of the best of the rest. Transient dynamic builders would make it easier to set up and tear down builders in a cloud. Right now, setting up builders is kind of heavyweight: you have to add them to the database, you have to get the credentials, you have to do all these things. Transient dynamic builders would probably take the shape of a content generator that spins off these transient builders and acts as a proxy for the talk between them and Koji, not unlike how Copr does things, actually. Oh, keeping the SSL auth, you mean?
Well, sure, if we make it easier to add more authentication methods, we probably will add more; I'd love to add more authentication methods. Using SSL certs feels a little clunky, sure, so we don't have to keep doing it that way. Is this just an idea, or is it something you're committed to doing? Yeah, it would be, and that would dovetail nicely into making it easier to set up an instance, because one of the big pain points when you're setting up a new Koji instance is that you either set up a whole Kerberos infrastructure to do your authentication, which is nontrivial, unless you already have one, and then it's easy. Or you plow through the SSL docs and try to figure out how to set up the keys, and then you have to have a system for handing them out, which Koji doesn't have. Because you only turn authentication off if you don't care about any authentication. Well, yeah, if you want to use password auth, you can do that too. Normally, for my own instances, I just do SSL. Once you do the little bit of legwork of creating, or stealing, a few scripts to set the certs up, it's not too bad. We use SSL for auth because Plague used SSL for auth. When Koji was purely internal to Red Hat, the software did not support SSL auth. We used Kerberos; that's what we had inside of Red Hat. That was it: Kerberos, period. When Fedora adopted Koji and we released it under an open source license, one of the very first things we did was add SSL auth support. And that code hasn't really changed much since then. Totally open to new authentication methods, yes. So, is this ambitious? Yes, it's very ambitious, but it needs to be done, and I think we can do it. And some of the things that we've gone through on the list do not necessarily have to fully land in 2.0. 2.0 will have many releases: 2.1, 2.2, many happy releases. So some of these features may start a little small.
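For reference, the client-side half of the SSL cert setup described above ends up as a few lines of Koji client config. This is an illustrative snippet, with example paths and an example server URL, of the general shape such a config takes today:

```
# Illustrative ~/.koji/config for SSL-certificate auth.
# Paths and the server URL are examples, not real endpoints.
[koji]
server = https://koji.example.com/kojihub
weburl = https://koji.example.com/koji
topurl = https://koji.example.com/kojifiles
authtype = ssl
# your user certificate (and key):
cert = ~/.koji/client.crt
# CA that signed the hub's certificate:
serverca = ~/.koji/serverca.crt
```

The clunky part he describes is not this file; it is generating the CA, signing the per-user certs, and distributing them, which is exactly what the setup scripts have to do for you.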
But we'll get them right. The important thing is to build the framework and make the big core changes to Koji that we need, do the big shaking-up one time, and not have to do it again for a few years, so that we can build incrementally from there. So it is ambitious, and that's why I need your help. Here's how to help. If you're interested in helping, join the koji-devel mailing list. It's a Fedora-hosted mailing list, not a Fedora project mailing list. We need everything: we need coding, we need testing, we need docs, we need feedback. Really, anything you want to give. If it's just comments, that's fine too. So, questions. Maybe answers. What sort of timeline are we looking at? That's a tricky question. It depends. Right, right. It's early; we're still planning, as I said. All of this is speculative. Some of it has code written; a lot of it does not. Some of it I feel like I could write myself, given sufficient time, which isn't always easy when you have a day job that only cares about one Koji instance, not ten of them. But if we're back here again next summer and I don't have something on a Koji 2.0 branch that you can run, well, if we don't have something workable within the year, I'll be very disappointed. So, does that help? Is that enough guidance? We're not going to drop the 1.x line very quickly, because we're likely to be pinned on it for a while, and the migration is going to be tough, even once we have 2.0 working. So 1.x will be supported for a little while. No, this is not a from-scratch rewrite, but it's going to be a big rework. A lot of code is going to change, but I'm a believer in not throwing out the accumulated knowledge of ten years of running this build system. So we want to be careful not to throw out the smarts. Yeah. You mentioned Koji 1.11; when is that coming? Koji 1.11, ballpark October.
I've gotten that question before. Every time I look at OBS, my brain starts to crunch in on itself a little bit. I think it's because it's ten times the code size of Koji, and it's written in three different languages, and the docs look nice until you try to read them. I'm like, oh hey, it has docs... wait a minute. Not that I should cast any stones about docs. I was just curious, because people ask me that, and I don't know it in depth. So, the thing is that OBS, as I understand it, and feel free to correct me, because there's very likely somebody in here who understands OBS way better than I do, but as I understand it, OBS is a very different sort of build system than Koji. It's a build farm manager in which you can run a wide selection of build processes: you can build in a VM, you can build natively, you can cross-compile. Yeah. But when I've looked at it, it didn't seem to have quite the level of organizational structure that Koji does, maybe not quite the level of data interconnectedness that Koji does. And there are lots of things in it which Koji doesn't need. Right, there's that too. But here's an interesting idea: could OBS be a content generator? If, you know, why not? If you don't like yourself very much and you wanted to turn it into a content generator, then that's a possibility. I mean, the concrete version of this question is when somebody comes across an IRC channel and says, why don't we just use OBS? And the answer is that it doesn't cater to certain requirements; a lot of what OBS does caters to a different world than the one we work in. Oh, yeah, in fact, I'm over time. So, thank you. Were you holding up a flag that I was supposed to notice? Okay, so luckily that was my last slide. You mentioned reproducibility: I'm giving another talk at 2:30 on reproducibility in Koji.
So, if you're interested in that, you can come to that and learn about what we do and what that means. Thanks, everyone.