Hey everybody, welcome to another Monday OpenShift Commons briefing with another wonderful upstream project, this one Clair. We have Louis De Los Santos here, a principal software engineer working on the Clair project for Red Hat, and he's going to take us inside the indexer. So I'm going to let Louis introduce himself and tell us everything that's going on in the Clair world today. If you have questions, throw them in the chat and we will answer them at the end of the presentation. So take it away, Louis, and thanks for your persistence.

Yes, thanks for the introduction. Today we're going to do a talk called "Inside the Indexer: how Clair v4 extracts and persists the contents of containers." What this talk is really trying to do is get the community, or the watchers of this presentation, more acquainted with the internals of how Clair works. Clair's fundamental goal is to provide insights about containers to the client, whether that's a developer or an operations team. We want to show you exactly what is inside the container and what might be vulnerable, so your teams can patch those things or act accordingly. To do this, it becomes obvious that we need to understand what's inside the container, extract the contents, and place them into some kind of schema which is searchable. That's what this talk focuses on: the indexer is the service which takes all the layers from a container, looks inside them, pulls out the contents, and creates a report.

So what is indexing? Indexing is the term we use for the process of extracting the contents of the container itself. It is the first step in Clair's analysis pipeline. Inside Clair's pipeline we're trying to take a container and understand what content is vulnerable. We split this pipeline into several phases, and indexing is the very first phase. It's responsible for creating an index report, which we're going to go into in detail in just a bit.

If we're looking at the complete Clair pipeline for creating a vulnerability report, this is the 30,000-foot view. I've highlighted the portion of the pipeline which we're going to cover today in this talk. What you'll notice is that we take a container manifest, we feed that to the indexer, the indexer performs a bunch of work which we're going to go into in detail, and then it generates an index report, which is the findings of the work it just performed on the container manifest. So there are a couple of key components here.

Now, if you'd like to follow along, or you come back to this talk later: claircore is our project, the engine, what's really doing the scanning in the Clair project. If you do want to follow along in our source tree, the indexer code lives in the internal package, in the indexer directory. Almost everything we're going to cover in this talk is laid out within this indexer directory, and there will be a lot of references back to it. So if you're interested in following along, or you're looking at this talk at a later date and trying to map what we're talking about to the code, this is the directory of interest.
So, the key components. In this section I'm going to cover the data models: basically how we structure our data to accomplish this goal of extracting the contents and reporting what we found inside the container.

First we have a manifest. The manifest represents a container image for us. You'll notice it's made of a slice of layers, and those are order dependent. If you go and create a container with Docker or Podman, those layers are created with a parent/child relationship, and we represent that with the slice. So when you submit a manifest to us, you're expressing the same concept as the container's hierarchy of layers, represented as a slice of layers, plus the hash digest of the manifest: the content-addressable hash signifying the manifest as a whole.

Next is the index report. This is how we communicate to clients exactly what we found inside containers. We'll start again with the hash. This is a hash of the manifest as a whole, so you can think of it as a unique identifier for the container and its layers in that unique ordering. How do we obtain this hash? You might know about this if you know how Docker images are built, but if you don't: this hash is computed by taking the hash over each individual layer inside the container, which produces a final content-addressable hash.

The state is used internally, and it is also exposed to HTTP clients who might want to query the indexer. The way this works is that when you submit a job to the indexer and then try to submit the same job again, we actually give you back this structure with the current state of the index. Clair is smart enough to know, hey, we're working on this right now, but here's the state. You might want to poll the state: just wait until you see an error, or you see that it's successful. We don't do this in Quay right now, but as a usability factor, you could write clients that sit there and poll on their job. It's part of the design specification for the indexer itself.

Packages, distributions, and repositories: let me take a step back. You'll notice these are actually maps: a map from a string ID to the actual package, distribution, or repository structure. The index report is really acting like a portable database. We do this for deduplication reasons. It would be unfortunate if we kept writing the same package strings for every layer we found them in, or had to duplicate that information. So when you're looking at the index report, you actually want to treat it as a database with key values that you can string together to understand where certain packages were found. You look at the packages map, call it a database, and it has an ID and then the package itself. Picture it as a deduplicated database of all the packages found inside your container.

Same thing with distributions. We could technically identify more than one distribution. We typically don't if it's a normal container, but sometimes there are dist-upgrades, or sometimes there will be more than one file that gives us a hint about the distribution of the actual container.
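To make that concrete, here is a minimal sketch in Go of the two structures described above. Field names and types are approximations of the claircore data model rather than the exact definitions, and the Environments map is included because it's what ties the other maps together, which is covered next.

```go
package sketch

// Manifest represents a container image: a content-addressable digest plus an
// order-dependent slice of layers, mirroring the image's parent/child layer
// hierarchy.
type Manifest struct {
	Hash   string  // digest of the manifest as a whole
	Layers []Layer // ordered exactly as the image's layers are ordered
}

// Layer is one filesystem layer the indexer can fetch and scan.
type Layer struct {
	Hash string // content-addressable digest of this layer
	URI  string // where the layer's blob can be fetched from
}

// IndexReport is the indexer's output. Packages, Distributions, and
// Repositories behave like small deduplicated databases keyed by ID;
// Environments strings those IDs together with the layer and filesystem
// path where each package was found.
type IndexReport struct {
	Hash          string                   // manifest hash, a unique ID for the container
	State         string                   // exposed so clients can poll indexing progress
	Packages      map[string]Package       // ID -> package found in the container
	Distributions map[string]Distribution  // e.g. CentOS, Debian
	Repositories  map[string]Repository    // e.g. pip, npm
	Environments  map[string][]Environment // package ID -> where it was found
	Success       bool                     // bookkeeping for polling clients
	Err           string                   // detailed error message when Success is false
}

type Package struct{ Name, Version string }
type Distribution struct{ Name, Version string }
type Repository struct{ Name string }

// Environment records that a package was introduced in a given layer at a
// given filesystem path, and which distribution it should be attributed to.
type Environment struct {
	IntroducedIn   string // layer digest
	PackageDB      string // filesystem path of the package database/directory
	DistributionID string // key into the Distributions map
}
```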
The distribution is telling you whether the container is CentOS, whether the container is Debian; that's what it represents. The repositories are usually language repositories: if we find pip, if we find npm, they'll be represented here. And the environments are what string this all together. When you're looking at environments, they basically let you say: okay, we found this package, in this layer, at this filesystem path. We needed this to support language packages, because once you start supporting language packages you have the predicament that the same package could exist in multiple directories across the filesystem. For instance, if you're using npm and you have a forms library, you might use it in five projects scattered around the container's filesystem. So we record each occurrence uniquely without having to duplicate the package's identity, by compressing everything into these small databases. That's really the bulk of what the index report is providing you.

Then there's just some bookkeeping: whether the index succeeded or not. Again, if you're a client that's polling and you want to know whether we had a successful index, you can poll for that. And if success is false, we'll give you a detailed error message, which is just helpful for debugging. That's the index report, the output of the indexer.

Next we have the scanner interfaces. This is a very important concept when dealing with the indexer, because this is the externally implemented section of code. Each scanner is in charge of taking a container layer, parsing through it, and finding the particular content it's interested in. We wrote these as interfaces, allowing other teams and other upstream contributors to come in and say, okay, I want a JAR package scanner. You'd come in, implement this interface, which takes the layer, looks for JARs, parses them into packages, and returns them to claircore. Very simple, and it has been proving itself useful with the CRDA integrations we've done: I was working with one of the folks on that integration, and when I asked how the interfaces were holding up, he showed us the PR and it was all added code, nothing needed to be changed. So this abstraction has been working pretty well for us. The same goes for the distribution scanner and the repository scanner. This plays an important role later in the talk, but as you can imagine, the indexer is taking container layers and trying to understand what's inside them, and this does the bulk of that work. We hand each implementation a layer, and it can scan through it with its own business logic; claircore proper isn't too concerned about what's happening in there. So there's a lot of flexibility to perform package scanning, distribution scanning, and repository scanning however the ecosystem sees fit, whether that's npm or Python or whatever.
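Here's roughly what those extension points look like in Go. This is a simplified sketch: the real claircore interfaces carry more detail, and the small local types here exist only so the sketch stands on its own, but the shape is the same.

```go
package sketch

import "context"

// Minimal stand-ins for the types a scanner works with.
type Layer struct{ Hash string } // the layer handed to the scanner
type Package struct{ Name, Version string }
type Distribution struct{ Name, Version string }
type Repository struct{ Name string }

// VersionedScanner lets Clair record exactly which scanner (name, version,
// kind) produced each artifact, so a changed scanner triggers a re-scan of
// only its own results.
type VersionedScanner interface {
	Name() string
	Version() string
	Kind() string
}

// PackageScanner parses a single layer and returns any packages it finds;
// a JAR scanner, an npm scanner, or a Python scanner would all implement this.
type PackageScanner interface {
	VersionedScanner
	Scan(ctx context.Context, l *Layer) ([]*Package, error)
}

// DistributionScanner and RepositoryScanner follow the same pattern.
type DistributionScanner interface {
	VersionedScanner
	Scan(ctx context.Context, l *Layer) ([]*Distribution, error)
}

type RepositoryScanner interface {
	VersionedScanner
	Scan(ctx context.Context, l *Layer) ([]*Repository, error)
}
```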
And then there's the coalescer. This is probably my favorite part of the indexer, and it took a little bit of time to figure out how to do it right, so let me think of the best way to explain it. A normal container is a series of discrete tarballs that represent filesystem layers. Inside those layers there might be what's called a whiteout file, which is how deletions between layers are handled. Now, Clair doesn't necessarily need to apply those deletions to a filesystem, but in some way, shape, or form Clair needs to understand: I have these layers, and there might be situations where packages I found in layer one don't even exist in later layers. We don't want those in the final index report, because they were deleted in some intermediate layer.

The coalescer is another interface which handles this business logic. It looks at layer artifacts, which are similar to the index report but represent the individual packages, distributions, and repositories found inside an individual layer. The coalescer takes a list of these artifacts, and with its own business logic it decides whether particular artifacts should be kept or removed from the final index report. It's a similar process to a container runtime applying a set of layers on top of each other to get the final container filesystem that's going to run on the host; we just do it with the end goal of producing an index report rather than a filesystem.

A little bit of in-depth detail there: there are two implementations of the coalescer currently. There's one specifically for RHEL, and then there's a generic one. If you go into our root directory and then internal/indexer/linux (because this is a Linux-focused coalescer), the coalescer is in there, and it's a really valuable piece of code for understanding how we actually go about creating the final index report. There's a little bit of heuristics in here: we have to identify distributions in a very piecemeal way. While it's not the common case, what could happen is that you have layer 0, layer 1, and then finally in layer 2 we find something that gives us a hint about the distribution of the container. We now have to backfill that information onto the previous layers and attribute the packages found in those layers to the distribution information we found later on. And this is just the nature of Clair; it's kind of what makes Clair a unique application: it's dealing with piecemeal information the entire way through, and we're finding novel ways to stitch that information together and create a cohesive result that represents the final image.
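A minimal sketch of that coalescer idea, again with simplified stand-in types rather than the exact claircore definitions: per-layer findings go in, one merged report comes out.

```go
package sketch

import "context"

// LayerArtifacts are the partial results recorded for one layer: the packages,
// distributions, and repositories the scanners found in that layer alone.
type LayerArtifacts struct {
	Hash          string // the layer's digest
	Packages      []Package
	Distributions []Distribution
	Repositories  []Repository
}

type Package struct{ Name, Version string }
type Distribution struct{ Name, Version string }
type Repository struct{ Name string }

// IndexReport here stands in for the merged, deduplicated view of the image.
type IndexReport struct {
	Packages      map[string]Package
	Distributions map[string]Distribution
	Repositories  map[string]Repository
}

// Coalescer applies its own business logic, in layer order, to decide what
// survives into the final report: dropping packages deleted in later layers
// and backfilling distribution information discovered late onto earlier ones.
type Coalescer interface {
	Coalesce(ctx context.Context, artifacts []LayerArtifacts) (*IndexReport, error)
}
```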
Now, the architecture of the indexer itself. It has a RESTful HTTP API, and we've written it in such a way that if your application's need is simply "I want to know what's inside the container," and you don't care about vulnerabilities or matching them against anything, you could take the indexer and use it as a discrete service. It has no other dependencies. So if for some reason you had the idea of, okay, I'll do my own vulnerability matching, given that I have this little service that can give me the contents of a container, you can simply use this alone, and there's a RESTful HTTP API to do that.

It is also architected and modeled as a finite state machine. If you're not quite sure what a finite state machine is, it's a set of logical steps, states if you will, that house business logic, and as you move through that business logic you transition between states. What this allows you to do, and why we did it when we were re-architecting Clair v4, is to quickly say, okay, there's something else we need to do, without refactoring the entire application. If we model it as a state diagram like this (don't worry about knowing all of it, we're literally going to go over every step), and then we decide, hey, after "scan layers" we actually need to do this new thing, we just pop it into the diagram and do the plumbing necessary. Almost no code has to be refactored, which has worked out very well for us. When we were re-architecting, for instance, the index manifest state came as a requirement much later in our development cycle, and almost no refactoring was necessary, because we just created a new state and popped it into the state diagram. It's a common pattern; I'm just explaining it in case you're not aware of what it looks like.

So I have a little snippet of code here showing how the state machine runs, and let me go back to the source code in case you want to follow along. The actual state machine is in the same directory, internal/indexer, and we call it a controller, to follow the semantics used throughout Clair. Maybe a small aside: a lot of times you'll see interfaces and then controllers. The way we architected Clair as a whole is that upstream individuals simply implement interfaces, and the controller handles most of the business logic. This separation has made contribution pretty seamless, because contributors don't need to worry about databases or how Clair actually stitches things together; all they really worry about is implementing interfaces, and we have controllers which drive those interfaces. So if you are interested in following along, this controller is the actual implementation of the state machine, and the guts of it are really in controller.go.

I'm just going to go over the actual run method here, because if you are trying to follow along it gives you good insight. We have a dictionary, a map, of state names to state functions. As you can assume, the state functions are the actual business logic of each of these states; you'll see, for example, a fetch-layers function. What we do is get the current state of the state machine and then run that state function. The state function returns a new state; it's a very recursive algorithm. We check if we need to do any error handling, and we check whether we're at the terminal state, which is the canonical way of saying everything's done, you can halt the machine. Once we determine it's not the terminal state, we set the machine's state to the one that was just returned. We do a little bit of bookkeeping here, which writes the new state to the database; this goes back to a client polling the indexer — it's the hook that says, hey, there's been an update to the index report, you probably want to be aware of that. If we can't do that, we do some error handling. And then finally we recursively call run, which does the whole thing again with the new state. So it's a recursive algorithm.
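Here's a toy, self-contained version of that pattern, with invented state names and a stub controller; the real controller.go in claircore does more (error states, logging, persistence), but the shape of the recursion is the same.

```go
package main

import (
	"context"
	"fmt"
)

// State identifies one step in the indexing pipeline.
type State string

const (
	CheckManifest State = "CheckManifest"
	FetchLayers   State = "FetchLayers"
	Terminal      State = "Terminal"
)

// Controller is a toy stand-in for the indexer's controller.
type Controller struct{ currentState State }

// stateFunc holds the business logic for one state and returns the next state.
type stateFunc func(ctx context.Context, c *Controller) (State, error)

// stateToStateFunc is the transition table mapping states to state functions.
var stateToStateFunc = map[State]stateFunc{
	CheckManifest: func(ctx context.Context, c *Controller) (State, error) { return FetchLayers, nil },
	FetchLayers:   func(ctx context.Context, c *Controller) (State, error) { return Terminal, nil },
}

// run looks up the current state's function, executes it, records the new
// state (the real code writes it to the database so pollers see progress),
// and recurses until it reaches the terminal state.
func (c *Controller) run(ctx context.Context) error {
	fn, ok := stateToStateFunc[c.currentState]
	if !ok {
		return fmt.Errorf("no state function for %q", c.currentState)
	}
	next, err := fn(ctx, c)
	if err != nil {
		return err // the real controller transitions to an error state here
	}
	if next == Terminal {
		return nil // everything is done; halt the machine
	}
	c.currentState = next
	return c.run(ctx) // recurse with the new state
}

func main() {
	c := &Controller{currentState: CheckManifest}
	fmt.Println(c.run(context.Background())) // <nil>
}
```

Adding a new state is then just one more entry in the map plus one new state function, which is exactly the "no refactoring" property described above.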
Again, what's nice is that when we update the state diagram, we don't really refactor anything: we just add a new state function and update the state map.

Okay, so now I want to dig into each of these states individually to give you an idea of what the indexer is doing. The very first state we enter when you submit a container manifest to the indexer is called check manifest, and it's exactly what you think it is: we determine whether we've ever seen this manifest before. We can do this because of content addressability. I don't know if anyone has watched previous Clair talks with me, but we're always hammering on this content-addressability aspect. What it really means is that if we see a manifest with a particular hash, it's content addressable, it's the same content: no matter when we scan it again, no matter when we see it, we can be sure that the same layers with the same content make up that manifest. Therefore, if Clair has seen it, it can go, oh, okay, I've seen this manifest, I don't need to do anything else, I can literally just return the index report I've already computed for it. In this case, we're going to say Clair has not seen this manifest, so it's going to go, okay, move it forward through the pipeline.

Now there's a bit of a subtlety here that's nice to know when you're working with multiple scanners. Let's say Clair has seen the manifest, but the implementer of the JAR scanner made some changes to it, and it might detect things a little differently. The check manifest state is smart enough to say, oh, that JAR scanner has changed, so I'm going to send this manifest down the pipeline to the next state, but I'm only going to scan it with the new JAR scanner. This adds to the ability to do as little work as possible. You can see this in the source code: there's a little section where we clip off scanners if they have already scanned the manifest before. Just a little tidbit of information that I think is nice to bring along in case you see it in the source code.

So now, once we've said, okay, we haven't seen this manifest before, we move it down the pipeline. The next state is what we call the fetch layers state. In this state, Clair is trying to determine which layers it actually needs to spend system resources on: to fetch, download, possibly decompress, and then scan. In this diagram I show the common case of Clair deciding, "this base layer, I've seen it already, I don't need to go grab it." It might be a UBI 8 base layer, it might be an Ubuntu base layer; it's a very common case because a lot of containers are built from the same base in the Dockerfile. So this is another example of doing less work when it's possible to do so.

When Clair fetches the layers, it buffers them to disk. This is a small change from Clair v2 to Clair v4 that adds a lot of benefit. Clair v2 would do all the work in memory, which can be problematic if you're trying to pull down gigabytes of layers. Now we buffer to disk when you're running Clair. We advise having at least 100 gigs of scratch space, and SSDs will help, because we really do use the disk quite a bit for buffering data, especially for very large layers.
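As a rough illustration of that change, and only an illustration (this is not claircore's fetcher, and the URI below is a placeholder), streaming a layer straight to a scratch file instead of memory looks something like this:

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"os"
)

// fetchLayerToDisk downloads a layer blob and buffers it on the scratch
// volume instead of holding it in memory, which is what makes multi-gigabyte
// layers manageable.
func fetchLayerToDisk(uri string) (string, error) {
	resp, err := http.Get(uri)
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return "", fmt.Errorf("unexpected status %s", resp.Status)
	}
	f, err := os.CreateTemp("", "layer-*.tar") // lands on the scratch disk
	if err != nil {
		return "", err
	}
	defer f.Close()
	if _, err := io.Copy(f, resp.Body); err != nil { // stream, never buffer in RAM
		os.Remove(f.Name())
		return "", err
	}
	return f.Name(), nil
}

func main() {
	// Placeholder URI; in Clair the layer URIs arrive in the submitted manifest.
	path, err := fetchLayerToDisk("https://registry.example.com/v2/blobs/sha256/deadbeef")
	fmt.Println(path, err)
}
```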
Now we go into the next state. We have the layers, they're local on the filesystem, and we take the scanners which were computed in the check manifest state. We say, okay, we have this list of scanners, we know what we want to scan for inside the container, now we do that work. The way the scanning state works is that it takes that list of scanners and concurrently, via goroutines, fans out the scanning business logic. The controller does this: it knows which implemented scanners are configured, it fans them out and hands each of them layers, they begin scanning the layers, they return their contents back to the controller, and the controller writes those contents to the database. So the scan layers phase is when we actually compute what's inside each layer and store the partial results for each layer in the database.

To touch on this a little, I went over it in the components section, but just as a refresher, because there's a lot of material throughout the talk: this is what the package scanner, the distribution scanner, and the repository scanner look like. As you can tell, when you call their scan methods, they're given a layer. If we go back just a bit, you'll remember the layer was buffered to disk. So it's a little abstracted, but the scanner can get a tar handle to the layer if it wants, or we have some abstraction methods on the layer that say, hey, just give me this file. As an example, if I were implementing an npm package scanner, I would implement the scan method to look at the layer, grab a tar handle, look for all the node_modules directories I can find, parse any packages found into claircore packages, and return them; then your implementation is done. That's the level of abstraction you can expect if you're trying to implement these scanners yourself for your own purposes inside Clair. And when we get back these claircore packages, distributions, and repositories, we simply write them to the database with our own database-handling logic, all internal to Clair, so implementers don't have to worry about that.

This is a look inside the Clair data model at how we stitch the items found during scanning together into an ERD for searchability. We created the idea of scan artifacts inside the Clair database. What this really does is tie together the ability to search, saying: okay, we found this package, in this layer, and it was found by this scanner. This data model makes it possible to say "a scanner has changed, let's scan that layer again," because we record exactly the scanner name, version, and kind which found these artifacts. So that's a little detail on how we stitch everything together inside Clair's database.
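To make the npm example above concrete, here's a toy sketch of that kind of scan logic. It doesn't use the real claircore Layer API; it just takes an io.Reader over a layer's tar stream (an assumption made so the example is self-contained) and pulls name and version out of any node_modules package.json files it finds.

```go
package sketch

import (
	"archive/tar"
	"encoding/json"
	"io"
	"path"
	"strings"
)

// FoundPackage is what our toy scanner reports back for each package.json.
type FoundPackage struct {
	Name     string `json:"name"`
	Version  string `json:"version"`
	Location string `json:"-"` // path inside the layer where it was found
}

// scanNodeModules walks a layer's tar stream and collects every
// node_modules/<pkg>/package.json it can parse.
func scanNodeModules(layerTar io.Reader) ([]FoundPackage, error) {
	var found []FoundPackage
	tr := tar.NewReader(layerTar)
	for {
		hdr, err := tr.Next()
		if err == io.EOF {
			return found, nil // end of the layer
		}
		if err != nil {
			return nil, err
		}
		// Only care about package.json files living under a node_modules directory.
		if path.Base(hdr.Name) != "package.json" || !strings.Contains(hdr.Name, "node_modules/") {
			continue
		}
		var p FoundPackage
		if err := json.NewDecoder(tr).Decode(&p); err != nil {
			continue // unparseable file; skip it rather than fail the layer
		}
		p.Location = hdr.Name
		found = append(found, p)
	}
}
```

A real scanner would sit behind the PackageScanner interface shown earlier and convert these results into claircore packages, but the parsing work is essentially this.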
And then we get into the coalescing step. I mentioned coalescing already: this is how Clair combines all this partial data into a final index report. The way it works is that the business logic in the controller will ask for the scan artifacts for both of the layers we scanned. It gets this layer-artifacts structure, which has the packages, the distributions, and the repositories for one layer, then it gets the layer artifacts for the other layer, and both of these layer-artifacts structs get fed to the coalescer.

What the coalescer wants to figure out is how to attribute packages to distributions. We touched on this a little: the distribution information might be in, say, layer 10, and the package database might be in layer 2, so we have to coalesce this and backfill distribution information. We also have to figure out which packages should remain in the final index report and which should be dropped, based on the state of each individual layer. The coalescers work similarly to the scanners in that the business logic in the controller spawns coalescers on goroutines and runs them in parallel. They each create their own representation of the final index report, and then we merge them together to get the final index report with the final set of contents left inside the image.

The index manifest state is where we make the contents of a container searchable. It's not a super complex data model or ERD diagram; basically we have a giant link table that says we found this package in this manifest, we found this distribution in this manifest, we found this repository in this manifest. Where this comes in handy is when a new vulnerability enters the Clair system. Vulnerabilities are usually tied to packages and distributions, right? So if you have the RHEL security database, when you look at a vulnerability it's going to say something like "OpenSSL, RHEL 8." You take that vulnerability and ask the indexer: hey, which manifests have OpenSSL and are of the distribution RHEL 8? This index manifest step makes it possible to give you that answer, and it happens after coalescing, so we work from the final computed results of what's available inside the container image. We index that, hence the name indexer, and then it becomes searchable in the aforementioned way when a vulnerability is attributed to a particular package and distribution. And this is exactly what the data model looks like; at the end I'll go through the ERD diagram and the database code.

And then finally we have index finished, and this is a very simple state. It basically just massages the state and success values in the index report and writes them to the database.

Deferring work: we touched on this throughout the talk, but one of Clair's main goals is to do as little work as possible, so we can compute results and give them to the client as fast as possible. So I want to review real quick some of the ways Clair v4 defers work. The big first part is just the "manifest seen" check. Because of content addressability, if we have seen a manifest before, we're simply not going to do any work: we go right to the database and say, okay, I have an index report for this manifest hash, I'm just going to return it. Again, this excludes the case where scanners might have changed, or Clair is configured differently; when Clair can tell its configuration has changed, it will go ahead and scan the manifest again.
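In code terms, that short-circuit is roughly the following. The Store interface here is invented for the sketch; the real persistence layer also records which scanner name, version, and kind produced each artifact, which is what lets a single changed scanner re-run without redoing everything.

```go
package sketch

import "context"

// VersionedScanner identifies a scanner by name, version, and kind.
type VersionedScanner interface {
	Name() string
	Version() string
	Kind() string
}

// IndexReport is a stand-in for the stored report.
type IndexReport struct {
	Hash    string
	Success bool
}

// Store is an invented persistence interface for this sketch.
type Store interface {
	// ManifestScanned reports whether the manifest hash has already been
	// indexed by every scanner in the currently configured set.
	ManifestScanned(ctx context.Context, hash string, scanners []VersionedScanner) (bool, error)
	// IndexReport returns a previously computed report for the hash, if any.
	IndexReport(ctx context.Context, hash string) (*IndexReport, bool, error)
}

// checkManifest short-circuits the pipeline: the same content-addressable
// hash plus the same scanner set means the stored report is still valid and
// can be returned immediately instead of re-indexing.
func checkManifest(ctx context.Context, s Store, hash string, scanners []VersionedScanner) (*IndexReport, bool, error) {
	done, err := s.ManifestScanned(ctx, hash, scanners)
	if err != nil || !done {
		return nil, false, err // fall through to the rest of the pipeline
	}
	return s.IndexReport(ctx, hash)
}
```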
Another way of deferring work is determining which layers to actually scan. This is fundamentally the same idea as the check manifest state, just on an individual layer basis. Again, content addressability means that if I see this hash, any time I see it again the contents haven't changed, therefore I don't need to re-scan it. It's another way we're able to do less work, and another reason why indexing large numbers of images might not be as scary as it sounds, as long as they share several layers. A very common pattern is a base layer, then a dependency layer, and finally a third layer that just changes your application. If that's the case, Clair is really only doing work on a single layer every time you push an application update, as long as your dependencies aren't changing and you write your Dockerfiles in a way that utilizes that separation between base, dependencies, and application. And when we do decide we're only going to scan particular layers, we just go to our own database, which already has the information for the other layers, grab it, and bring it with us through the rest of the pipeline.

Cool, so that is inside the indexer. I have some information here for you: my email address if you'd like to get in contact with me, the Clair GitHub repository, and the claircore GitHub repository. That's all I have for the presentation. I'm up for either doing a little bit of code digging or we can go right to questions. What do you think, Diane?

A little Q&A here, because there's one question, and the other thing I'd have you do is go to your site and show the schedule for the community meetings, because you've just done an amazing run-through giving insight into how to contribute and how it all works, and I want to make sure people know how to find you, get into the community, and get started. While you're doing that, I'll read off Andre's question: which phase of a general CI/CD pipeline is the appropriate position for Clair scanning? After a deployment, somewhere in a testing, linting, or health-check phase of CI/CD, or as part of the security vulnerability management and QA process? I know you have opinions about that, but I think everybody has an opinion about that.

Yeah, definitely, there are opinions about that. Me personally: if I have a build system and I'm performing staging builds, then when those staging containers get pushed to a repository, that's really your time to do the scanning and understand the vulnerabilities that might be inside your container before they ever hit production. If you don't have a staging environment and you simply build containers and deploy them to production, you still have that period of time where you've built a container and pushed it to a registry; it's available for Clair to analyze, so do it right before you actually deploy your code. So in the CI/CD pipeline, as a general best practice, do it as early as possible: as soon as you have the container built, and obviously before you push it out to an environment, do the scanning as early as possible in your CI/CD pipeline.

All right, and Andre says thank you very much for that. You're welcome. And now there's another question posted, and I'm sure it's an interesting one: wait, there's a state management solution for Golang? Can you talk a little bit about that? You must have mentioned it earlier, and I'm not exactly sure what you're referring to.
We wrote that state machine code in pure Go as our own creation; we're not using a library for state management. But if you are interested in state management and you'd like to see how Clair does it, I would definitely check out that code. It's probably a decent representation of what an FSM, a finite state machine, implementation in Go could look like. It was written to serve a purpose; it might not be the shiniest, cleanest thing, but it works and it works well. So if you do want to take a look at how we worked that finite state machine architecture in, you can go to our source, claircore, in the internal/indexer directory under the controller package. What would really be of interest to you is controller.go and state.go, and that's basically how we created the state transition tables, which map states to functions. But no, no library, no off-the-shelf state management solution for that; we just coded it.

Yeah, that's usually how things get done, and then eventually someone says, hey, that could be useful somewhere else, and it turns into modularity and libraries. Absolutely, yeah, someday. But I think what you've just done, which I wish I could get every upstream project to do, is to really explain how Clair works internally. When it comes to contributing to a project, that's often one of the missing pieces, and having the engineers from Red Hat and elsewhere who have been contributing over and over take the time to really explain how it works is wonderful. I can't thank you enough. I'm hoping it will drive people who watch this, and who want to use Clair in whatever projects or products they have, to come to these community meetings and take a look, whether they want to study the state management code base or contribute to the project. That would be a lovely thing. And thank you for powering through your power outage to make this happen. My pleasure.

Anything else you want to add in terms of what's next for Clair and the Clair community? Yeah, maybe just a couple of touch points on what's coming up on our internal agenda. Right now we have the 4.1 release baking, and this release has a pretty paramount feature we're calling enrichments. You might have noticed that when we redesigned Clair v4, we wanted to remove false positives as much as we could, and in doing so we removed NVD as a vulnerability data source. That was a somewhat opinionated decision, and I think a lot of people share our opinion that NVD might not be the best source of data. However, when we did that, we removed a lot of the severity information people had become accustomed to. So the enrichment specification and the 4.1 roadmap goal are all about allowing auxiliary data to enrich our vulnerability report. We took kind of a best-of-both-worlds approach, in my opinion: we're sticking with the official upstream vulnerability data, but now we're enriching that data with NVD metadata. It's a little different from going to NVD and trusting all of it; instead, we have the trusted source, and then we add information on top of the information we already trust. So this is a 4.1 goal, and you'll notice that a lot of information about vulnerabilities will become richer.
And if you'd like to follow that development in any way, shape, or form, you can go to quay/clair on GitHub. Is this big enough to see? A little bit bigger would be better. There you go, perfect. So you can go to our discussions, and inside this design tab right here you'll see the Clair enrichment specification. By the way, we practice open design, so any big-ticket changes that are going to happen to Clair will be in this section; it's just a good area to watch. This is the Clair enrichment specification, and there's a link to our GitHub repository, so the spec is here and the implementation details are here. I'm working mostly on the implementation, but community contributions are completely welcome. Every single detail, to the best of my ability, is outlined here. Some things may come up along the way, because when you're implementing software it's not always so easy to foresee everything that's necessary, but the majority of the work that needs to happen is all here and open for community development. So if you'd like to speed up the rate at which NVD data winds up back in Clair, it's a good one to stay abreast of and take a look at.

Other than that, just be aware that we have a community development meeting every second Tuesday of the month. Actually, I'm hesitating about that: is it the second Tuesday of the month, or every second Tuesday? Every second Tuesday. I'm not sure. That's good, peer review, this is why we do this. And we didn't even have to do a pull request for it, that's great.

So cool. I'm looking to see if anyone else has any questions, whether you're out there in Twitch land, or on BlueJeans, or watching on YouTube, or even on Facebook; post your questions. Otherwise, we're all clear on questions and we'll let you go back to your day, Louis. If you can share your slides with me, I'll share them with the community as well, and we'll upload this to YouTube. Hopefully, now that everybody understands how the indexer works, they'll be excited about contributing to it and come to a community meeting. So thanks again for taking the time today. My pleasure, my pleasure. It was very fun.