Hey Alina. Hey Carter, how are you? Good, how are you? Good. Hopefully they're going to show up today. We'll see. I know 8 a.m. is the time that works for everybody in the EU, China and the US, but I fear that sometimes it can be too early for people in the United States. Yeah, I think especially people in engineering — some of them actually start working a little bit later, maybe nine or ten. Hi there, how are we doing? Hi Adrian. Can you hear me okay? Yeah, we can hear you. Let's give it another couple of minutes and see if somebody else shows up, and then we can get started. Let me see if I can find the minutes. Yeah, I just posted them in the Zoom chat. Okay, so I think we can start — people are maybe going to show up a little bit later. So, thank you for taking the time to present Trow. Excited to be here. The presentation will be recorded, so other people can watch it later. And then we're happy to learn more about how you implemented a container registry in Rust. Yeah, cool, that's good to hear. Do you want me to start now, or do you have any other business? Okay, cool. I did prepare a few slides; I'll share those. I'm quite keen to have a discussion as well though, so feel free to stop me and ask questions — I'm really keen to get feedback on the ideas and so on. Okay, how do I... there we go, share screen. I'm a bit worried you'll lose it when I click present. Can you see the slides? Yep. Cool. So, I work for Container Solutions, and we have a product called Trow, which is an open source registry implementation. And as I was saying, I do want this to be a discussion. I thought it might be interesting to first talk about what's going on with registries in general.
I kept this talk fairly technical, given it's SIG Runtime, which I assume was the right decision — so we can talk about standards and stuff. There are a few things going on in the registry world which I think are quite interesting. It's a space that kind of needs some updating, because not a lot has really been happening until the last year or so. It hasn't changed much since the initial versions of Docker Distribution, or at least since it moved to v2, which is a good few years now. So what's happened recently? Well, Docker Distribution, as I'm sure you know, was donated to the CNCF. It'll be interesting to see where that goes next, because it wasn't something Docker were doing much with. It was in a state where people were asking for updates and, to be frank, there was very little progress on it. So it'll be interesting to see what happens now that it's part of the CNCF. And a lot of registries — I'm particularly thinking of things like Google's Artifact Registry, and Azure have one — have started moving towards supporting multiple artifacts. So registries are no longer just for container images; they're also for things like Helm charts, OPA config files, CNAB bundles — cloud native stuff, I guess. There's probably an argument that we've really recreated FTP servers and added a REST front end on top, but that's maybe a bit cynical. There is, as I'm sure you're also aware, a standardization process at the OCI. So you've got the — I'm struggling to remember the official title — effectively the distribution spec. There is a spec, and there's also a conformance suite, so you can verify whether or not a particular implementation conforms to the standard. And they're starting to talk about doing extensions.
In particular, one of the most important things being talked about is Notary v2. You're probably aware of Notary v1, which was how we did signing — or rather, an implementation of signing — for container images. People could sign an image, and then people who had that image could verify it came from who it claimed to come from. It was quite an impressive implementation, because it had all The Update Framework (TUF) stuff in it, so it could also verify that an image was up to date and things like that. Now, there were a lot of problems — or there are problems — with the first version of Notary. It hasn't seen as much uptake as we'd like, and there are issues like signatures not travelling with images. One thing we'd really like is that if an image is moved from one registry to another, you can still somehow move the signatures with it and tell where it came from. That's the sort of stuff being worked on in Notary v2, and I know people like Justin Cormack at Docker and Steve Lasker at Microsoft are working heavily on this. The interesting thing is that Notary v2 is very different from Notary v1 — they've kind of moved away from the first version, which I thought was an interesting decision, but it maybe makes sense given the problems they were trying to solve. The reason I'm bringing this up in this context is that it's going to be very important for registries, and I think signing is going to affect how things develop going forward. That leads me on to the other point that I think is going to become very important over the next few years, and that's supply chain security. Some people have started looking at this — you've probably seen projects like in-toto and Grafeas, and I think Google Cloud Platform has even integrated some of those solutions.
I've not really played with what exactly they've done yet, but I think this is going to be quite a big thing, and you'll see more solutions and more people talking about this, particularly in the light of things like — what was it — I forget the company that recently had an enormous breach that affected Microsoft and everybody. SolarWinds, yes. Not sure why I was saying something else — you're right, SolarWinds. A lot of that was all about supply chain security: knowing where stuff came from and being able to prove where it came from, basically. Okay. So the other question you can ask is: what's the point in building another registry, given we have Docker Distribution and so on? Well, the first thing I'd argue is that there aren't actually that many open source registries. The two big ones are Docker Distribution and Quay — however you're going to pronounce it — which was CoreOS and is now Red Hat, and I believe they open sourced it. I don't know how many people are running the open source version of Quay. I struggle to say "quay", but I guess that's the official pronunciation — in the UK and Europe we'd naturally pronounce it "key". That one's written in Python. Docker Distribution is obviously very popular, particularly because it's used in Harbor. So if you're using Harbor, you're really using Docker Distribution plus a few other things. Yeah, a comment there: we had Quay also present a few months ago, and Harbor is another CNCF project. Yeah — and I'm not knocking any of them, they're all fantastic. I actually wanted to dig into Quay a bit more and figure out how they did some of this stuff, because they did some interesting things with distributed downloads.
Yeah, and Harbor has added on a whole bunch of stuff that's very important, like vulnerability scanning, a nice GUI, and the things that are essential for enterprise. But with Trow I've focused on slightly different things. I'm still working on how I describe it and how I think about it, so I'm very interested in feedback. But I've started talking about the working set. Most registries at the minute are designed to store all your images for all time, if you like. You push all your test images, you have old versions dating back to v0.1 of the software, and they all live in your registry; you can go back and get them and check whatever's going on. With Trow, I'm thinking about things a bit differently. What I want to focus on is the working set of images, and being able to securely and efficiently deliver those to the nodes within a cluster. By working set, I mean just the bare set of images required to run your applications or your system. It's not the full history, not all the things going back in time — it's just what you need to get your application up and running, maybe a version back for rollback or whatever. It's a much more constrained problem. Leading on from that design, you can of course still use Trow as just a registry. You can do what you like with it: you could store everything, and there's absolutely no reason you can't. But the way I've designed it, it will normally run inside a cluster — typically a Kubernetes cluster. So if you have a system of multiple clusters, you'd have multiple instances of Trow, and those instances could then talk to another registry which might be storing all your images, for example. So in a lot of cases Trow might not replace Harbor or whatever it may be.
It could work alongside it, for example. There is no choice of storage back ends in Trow at the minute; it just saves to file. That was a deliberate decision. I might revisit it at some point, but I definitely want to keep the simplicity of it. There are a lot of problems in Docker Distribution because they support S3 and things like that, and the guarantees that S3 gives you are very different, creating a lot of complications. That's one of the reasons that deleting stuff is so hard in Docker Distribution. I also really want to think about security and auditing. One thing I want to talk about a little bit later, hopefully, is that if the registry runs inside the cluster, then the registry should really give you a good overview of what's happening in the cluster. I should be able to look at it and see what images are currently used in the cluster, how they've changed over time, who made what changes, and so on. I think there are a lot of benefits to an approach like this for auditing and security. And finally, lightweight. It's intended to run in the cluster, so I don't want it to consume a lot of resources — and I'm thinking CPU as much as anything else here. That's one of the reasons I chose Rust, as you pointed out earlier. So what are some of the current features? At the minute it's OCI standard compliant. It does have the catalog API, which is the thing that lets you list all the repositories and images within the registry. I also added what I call the tag history API. So say you've got the image redis:3.4 — if you ask for that tag, it will give you all the digests that tag has ever pointed to, which again can be interesting for history.
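The tag history idea above can be sketched quite compactly. This is a toy in-memory model of the concept, not Trow's actual implementation or API — the class and method names are my own illustration:

```python
from collections import defaultdict

class TagHistory:
    """Toy model of a tag-history API: every time a tag is pushed,
    remember which manifest digest it pointed to."""

    def __init__(self):
        # repo -> tag -> list of digests, oldest first
        self._history = defaultdict(lambda: defaultdict(list))

    def push(self, repo, tag, digest):
        self._history[repo][tag].append(digest)

    def current(self, repo, tag):
        entries = self._history[repo][tag]
        return entries[-1] if entries else None

    def history(self, repo, tag):
        # Full record of every image the tag has ever pointed to
        return list(self._history[repo][tag])

h = TagHistory()
h.push("library/redis", "3.4", "sha256:aaa")
h.push("library/redis", "3.4", "sha256:bbb")  # tag moved to a new image
print(h.current("library/redis", "3.4"))   # sha256:bbb
print(h.history("library/redis", "3.4"))   # ['sha256:aaa', 'sha256:bbb']
```

The point is that a standard registry only keeps the last mapping, while keeping the whole list is what makes the audit use case possible.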
One of the things I've been thinking about is how you can integrate better with clusters. So one of the first things I did was add some image controls. What I've got at the minute is an admission controller that you can spin up. The admission controller talks to Trow, and if you create a new deployment, it will check the images in the deployment. By default, it will say: if this image does not exist within the registry, then disallow it. You can also expand this with regexes. So you can say things like: if the image exists in this local registry, allow it, but also allow official images from the Docker Hub and not user images, and things like that. So I've tried to make it easier to add controls like that. You can also do some very similar things with OPA, so that's another way to go. Proxying the Docker Hub — that's a thing I implemented at the end of last year, and it's another feature I see as essential to taking Trow forward: being able to proxy and cache images. This goes back to working alongside other registries, if you like. As you're probably aware, the Docker Hub added limits on how often you can download images, so proxying the Docker Hub just allows you to have a local copy and therefore control or reduce the number of times you need to go to the Hub. A question: when you proxy the Docker Hub, do you keep track of the limits? Say somebody keeps on pulling images of different versions — does it have a mechanism to throttle, or not yet? No, there's nothing like that. Having said that, you can associate a user, and that works both for pulling private images and for the per-user limits. If you authenticate to the Docker Hub you get higher limits, so you can use a specific user.
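The allow/deny rules described above boil down to prefix and regex matching on image references. Here is a minimal sketch of that kind of rule set — the patterns and the local registry name are assumptions for illustration, not Trow's actual configuration format:

```python
import re

# Hypothetical rule set, loosely modelled on the behaviour described above:
# allow anything in the local registry, plus "official" (single-segment)
# Docker Hub images, and deny everything else.
ALLOW_PATTERNS = [
    r"registry\.local/.*",       # anything in the local registry
    r"docker\.io/[^/]+:[^/]+",   # official Hub images, e.g. docker.io/nginx:alpine
]

def image_allowed(image, patterns=ALLOW_PATTERNS):
    """Return True if the image reference matches any allow pattern."""
    return any(re.fullmatch(p, image) for p in patterns)

print(image_allowed("registry.local/myapp:1.0"))   # True
print(image_allowed("docker.io/nginx:alpine"))     # True: official image
print(image_allowed("docker.io/someuser/tool:1"))  # False: user image
```

The "official vs user image" distinction falls out naturally here: official Hub images have no `user/` segment, so a `[^/]+` pattern excludes them.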
The limits are a bit odd — I can't remember, is it per hour? And they don't enforce them strictly. But there is a way you can ask the Docker Hub what's left of your current quota, so I could actually do something like that. It's not something I'd really thought about, to be honest. Got it. Yeah. And I think the limits are more for free users — if you have the paid version, then... I mean, if you were proxying with something like Trow, I think you'd be hard pushed to hit the limit, to be honest. I could be wrong there. Thanks. The other thing is, yeah, it's written in Rust. I chose Rust at the start; I nearly went with Go, but to be honest I'm not a big fan of Go. And I am happy I chose Rust, for the safety and the speed. I think it will potentially give us the ability to create a very efficient solution. It's not that efficient at the minute — I have a lot of work to do there — but the potential, I think, is pretty good. The issue with choosing Rust was web frameworks and such; the libraries, especially at the start, were not that strong. It's actually getting there now for a lot of them. But I think it's the right choice for low-level common components, and I think you'll see a lot of cloud native infrastructure and new stuff being written in Rust. So, to install it, I created a couple of different methods. There's a quick install method, which I can demo if you're interested. There are also standard install methods. The first one I did was with Kustomize, which I really quite like, but everybody wants to use Helm, so I've had to start trying to support that properly, and there is a Helm install now.
But the quick install is quite interesting, because normally when you install a registry you have to faff about setting up a domain name and pointing it at the registry and so on. If you just have a development cluster, you probably don't have a domain name, or can't be bothered pointing one at it. So the quick install has some hacks that kind of get around that. Yeah, we'd be happy to see the quick install demo. Yeah, let's do it. Cool, it won't take too long, and it might give you a chance to ask any other questions. Right, let me see if I can get this right. Just before I started this, I did spin up a Kubernetes cluster — I've not even connected to it yet. Let me see if I can share my... I think it's this one. Let me just grab the gcloud command. Oh, Jesus — it's kind of a bigger cluster than I meant to create. So, can you see my terminal? Yep. I see you're actually using Rust 1.52 — a nightly rather than the stable release, I think. Yeah, that comes back to the fact that I've been using Rocket, which is a Rust web framework. It's been pretty good, but I'm actually in the middle of trying to move off it, possibly to Actix, because it's a lot faster. One of the problems with Rocket — maybe it's changed in the latest versions — is that it was nightly-only. So I had to be on nightly, which was a bit frustrating; I would much rather have been on a stable version of Rust. Okay, that's me connected. So this is a completely fresh cluster — I hope it works. There we go. Oh, you get a bunch of stuff; there used to be less there, I think. Apparently it's got Stackdriver — I'm sure that's costing me a fortune. Anyway, that's only the stuff that comes by default; I don't have anything else going on there at all yet. Right. So this is a fresh checkout.
I created a new checkout of the repo, a fresh git clone, just so I didn't affect the branch I was working on. Inside here there's a quick-install directory, which has this install.sh. If we run that, it tells you a little bit about what it's going to do: create a service account, associate roles for Trow, create a Kubernetes service and deployment. The interesting thing is it handles all the TLS certificates, and actually uses the Kubernetes CA. It will then copy the certificate to the nodes and also to the local laptop, and that's what lets you get around not setting up a domain name and a proper certificate. There are a couple of ways you can handle certificates in Kubernetes, but it tends to be annoyingly complex, especially for just developing. If you're not on GKE, you don't need to run this next bit — and I've done it before, so I don't need to. One of the steps is just to open the port on the firewall for kubectl; the other is to do with rights. So this script is very hacky, but it's kind of cool. You can set the namespace you want to install into, and for some silly reason I made it install to kube-public by default — I think that was a mistake; I should probably have created a trow namespace. This step — I should have started it a minute ago — takes a little while. I've never really figured out why, but for some reason, when you submit a certificate to the Kubernetes CA, it takes a little while before it approves it. Oh yeah, it's already created a bunch of things: the deployment, the service, role bindings. This is what we sometimes complain about with Kubernetes — you can kind of see it there: all I'm really trying to spin up is a single container, but you end up with a whole bunch of config around it as well. Okay, there's the certificate.
So what we insert into the nodes is really a bit of a hack — it arguably shouldn't be allowed, but you can do it. And also this bit: installing the cert on each host and setting up /etc/hosts so it points to the remote cluster. So, where do these certificates live — in the container, in the pod, or...? Yeah — this hacked version only works with Docker. If you have a containerd-based Kubernetes distribution, it's actually going to break; I need to figure out a way to make things easy on containerd as well. It's to do with where the certificates live, basically. What I've done here is configure all the nodes to know this trow.kube-public address, and I've copied the certificate to the Docker certificates directory. In containerd that changes — it's not even a directory, actually; you set it in the config, and it's a bit of a mess. But in Docker, you can just put the certificate in a specific Docker directory and Docker picks it up at runtime, so I don't have to restart Docker or anything. That's not the same with containerd, so it's a bit of a problem, to be honest. I think they're actually changing that, but I'm not 100% sure. But anyway, it's copied the certs to the Docker directory — I can't remember the exact name of it. One question here: when you modify /etc/hosts on the nodes, I guess you have to assume you have that sort of access to those nodes, right? From the installation — I think it's even worse than that. I can't remember exactly what I did; I think it creates an emptyDir... it's possibly a security hole, actually. I mean, I've not done anything special to this Kubernetes cluster — this is a default cluster — and it's actually surprising what you can achieve. Yeah, okay, that's what it is. That's with GKE, right? Yeah.
It's a default GKE setup. I wish I could remember the details, basically — you can edit /etc/hosts for the node. Could you just mount it? I think so; I can't remember the details — it's a while ago since I wrote it, but it's kept working for a couple of years now. But it's a hack. This is purely for setting up a development cluster; it's not something you should ever do in production. Okay, the other bit down here — we've also added it to the local laptop, so trow.kube-public can be routed from here too. But this bit is a bit more interesting: install Trow with the validation webhook? That's the admission control I was talking about earlier. I'm going to say yes. If I said no, it would let any image run; if I say yes, it's only going to allow images from the Trow registry to run in this cluster. Oh, and actually I lie a little bit — I had to special-case the system images, because I didn't do that once and you couldn't update anything. That was a very bad scene; I think you couldn't even add new nodes. So you have to special-case some stuff. Okay, so what can we do next? Say we pull an image and tag it. I've run this before, so it should be in my history. Right, so I've retagged nginx:alpine as trow.kube-public:31000/test — that's where Trow is running in this new GKE cluster, set up to route via my /etc/hosts. So I should be able to push that now. Oops — I've got two things in there, that was a bit silly; I should have just searched for it in my history. Okay, so that looks like it's pushed. That looks right. So we can create a test deployment using this image, and that should come up pretty much immediately. Yeah, so that's up and running already.
Now, because I put that admission controller on, we should find that if I create a second deployment, this time pointing at an image on the Docker Hub — I can just say redis, or let's make it docker.io/redis — I'm expecting it to be refused, because it's not in my registry. Yeah, so that looks like it has been refused. It's a little bit clunky how you get the error message, because I think it's actually on the replica set, not on the deployment. We can look at that and you'll see what I mean. If I go to the replica set, then you get quite a good error message: error creating pods, admission webhook denied the request — remote image docker.io/redis disallowed, as it's not in this registry and not in the allow list. That's the error message Trow sent back to Kubernetes when it denied the request. But if you look on the deployment — yeah, it'd be nice if it was there as well, wouldn't it? You've got to dig into the replica set to see the proper error message. So, how many replicas of Trow do you have running on a cluster? You said that it's redundant, right — or is it a single instance? At the minute it's a single instance. Again, my ambitions kind of get ahead of me sometimes — when I designed it, I did design it with the idea of being distributed, so you could have multiple instances for HA and so on, but I've not really got that far yet. It's actually quite nice all the same, because it's all based off disk at the minute.
So if you pause it or restart it, or just move the disk somewhere else, it will all just start up and work. It's quite nice, so it is fairly reliable, and you can just put the data on a volume and attach it. So, any other questions about Trow itself? I've got a couple more slides, but they're not particularly interesting. Is there anything you want to see in the demo? I can always come back to it as well; I'll leave it open. Where is Trow deployed — is that in kube-public? Yeah, it was. Can we see the logs? Sure — it's just been a while since I played with Kubernetes directly; I'm normally just working in VS Code. I think it was kube-public. Oh yeah — sorry, that's going to be in the video now. So that's it there. You were asking about how I copied the certs — it's actually a job that goes and copies the certs onto each node, in a slightly hacky way, and those are the ones you can see completed there. And there's our deployment. How do we get logs again? kubectl logs, yeah. Ah — apparently this error message isn't an issue; I actually opened a bug about it. It's apparently nothing to worry about, but it goes everywhere, which is very annoying. But yeah, when it starts up, what you see is: starting Trow, the version, the port. And because we turned on the admission controller, it makes explicit what's allowed. So we can push anything, and images with certain prefixes at gcr.io are allowed. Oh, and Trow itself — I think that was another problem I had: you need to be able to pull the registry's own image in some cases, for updates and so on. You can also say things like: this specific image is explicitly allowed — that would be a full image name — or an image with this prefix is allowed.
So you could say this repository on the Docker Hub is allowed, and any image underneath it, if that makes sense. Yeah. Where's the admission hook? I mean, in the previous example you specified Redis, right, and it was actually denied. What makes it so that every deployment goes through that admission controller and can be prevented from pulling? Yeah, that's pretty standard Kubernetes stuff. Will describe work here? Validating admission... I hit the wrong button there — I've got a new keyboard I'm not used to. Here we go. Nice, there it is. So you've got this ValidatingWebhookConfiguration, and that should be pointing — you have to go and look at the YAML. Here we go. It's got the CA bundle in there, and it just points at the service, I guess, and then it calls the validate-image endpoint, on port 443. And that's quite interesting: if it can't reach it, then you fail. Got it, got it. Cool, thanks. No, that's a good question. That's one of the interesting things in Kubernetes, I think: the second people start adding admission controllers, especially mutating ones, you end up with very different clusters, so something that works in one cluster doesn't necessarily work in another. Is there something else you'd like to see, or shall I finish off the slides? I don't have any more questions here — I don't know, I'm asking all the questions. I like the questions. Okay, I'll tell you what, I'll go back to the slides for the minute. Okay, my Zoom has gone. One question I had was regarding untagged images — if image manifests were to remain untagged, would Trow help purge them? So I think that's a good question.
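The webhook flow being described — Kubernetes POSTs an AdmissionReview to the registry's validation endpoint, which answers allowed or denied — can be sketched as plain logic. This is a toy version under my own assumptions (registry host name, message wording); it is not Trow's actual handler:

```python
# Minimal sketch of admission-review logic: Kubernetes sends an
# AdmissionReview object to the webhook (e.g. a /validate-image endpoint),
# and the webhook replies with allowed/denied plus a status message.
LOCAL_REGISTRY = "trow.kube-public:31000"  # assumed local registry address

def validate_image(admission_review):
    request = admission_review["request"]
    images = [c["image"] for c in request["object"]["spec"]["containers"]]
    # Deny any image that is not hosted in the local registry
    denied = [i for i in images if not i.startswith(LOCAL_REGISTRY + "/")]
    response = {"uid": request["uid"], "allowed": not denied}
    if denied:
        response["status"] = {"message": f"Remote image(s) {denied} not in registry"}
    return {"apiVersion": "admission.k8s.io/v1",
            "kind": "AdmissionReview",
            "response": response}

review = {"request": {"uid": "123", "object": {"spec": {"containers": [
    {"name": "redis", "image": "docker.io/redis"}]}}}}
print(validate_image(review)["response"]["allowed"])  # False
```

The `failurePolicy` point from the discussion is important here: because the configuration fails closed, an unreachable webhook blocks pod creation entirely, which is exactly why the system images had to be special-cased.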
Or is there some policy an administrator could set for that? To be honest, that's not something I've implemented — well, actually, it is a little bit. It's a little bit strange, because in the distribution standard there is a delete command: from the REST API you can do an HTTP DELETE and give it a digest, and that will delete the associated blob. Now, that's obviously quite a low-level way of doing things. It's also a bit dangerous, because you can delete blobs that are used by more than one image, for example. For that reason, several registries don't actually support that method of deletion, and I think it's probably going to be changed a bit in the standard — I should go and check exactly what the situation with it is at the minute. But it arguably makes more sense to do pretty much the same thing Docker does. If I delete an image in Docker, I give it an image tag, and it will only actually delete the underlying resources if there are no tags pointing to them. So if there are two tags pointing to a resource and I delete one of them, it doesn't delete that resource until the second tag is deleted. And really, I think we should probably do the same thing in the distribution standard. I think the reason they wanted to allow things to be deleted by SHA was for when you upload some sensitive content by accident and want to be able to delete it immediately. I think that was the thinking, but it was possibly overthinking things rather than an actual good idea.
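The Docker-style semantics described above — only remove a blob once the last tag referencing it is gone — amount to reference counting by tag. A minimal sketch, with invented names, just to make the two-tags example concrete:

```python
# Toy model of tag-refcounted deletion: a blob survives as long as
# at least one tag still points at it.
class BlobStore:
    def __init__(self):
        self.tags = {}     # tag -> digest
        self.blobs = set() # digests currently stored

    def push(self, tag, digest):
        self.tags[tag] = digest
        self.blobs.add(digest)

    def delete_tag(self, tag):
        digest = self.tags.pop(tag)
        # Keep the blob if any other tag still references it
        if digest not in self.tags.values():
            self.blobs.discard(digest)

store = BlobStore()
store.push("redis:3.4", "sha256:aaa")
store.push("redis:latest", "sha256:aaa")  # second tag, same underlying blob
store.delete_tag("redis:3.4")
print("sha256:aaa" in store.blobs)        # True: still tagged as latest
store.delete_tag("redis:latest")
print("sha256:aaa" in store.blobs)        # False: last tag gone, blob removed
```

Contrast this with the spec's digest-based DELETE, which removes the blob directly and can therefore break other images that share it.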
If you see what I mean. But I really would like to add methods for automatically cleaning stuff up and deleting old images and so on. One thing we're working on at the minute is actually a GUI, and I think that would be a nice place to surface things like old images, stuff that could be cleaned up, how much disk space you could save, how much you're using, and so on. Yeah, absolutely. And then there was something I saw at an OCI meeting regarding registry benchmarking — I don't know if you've come across a project like that. I wanted to go to that — it was only a week or two ago, right? Yeah, it was fairly recent. I just briefly looked over the results, and I was wondering if they'd done anything with Trow specifically; I was curious to see the performance there. Well, I'm not sure they have. At the minute, like I mentioned before, we've been using Rocket, and I'm in the middle of trying to change to a different front end and refactor a few things. So at the minute Trow is pretty slow, purely because of some arguably bad decisions early on — it's quite reliable, but it's slow. But what I'm working on now would be an order of magnitude speed-up, or maybe actually two orders of magnitude. In which case I think Trow will be very competitive — I think it should be possible for Trow to become one of the faster implementations once I've done a few changes. At the minute it won't be, though, so I'm quite happy it wasn't included. Okay, thank you. Sorry, my son's with me here in the background. Nice — no, thank you, appreciate the insight. No worries. Okay, I think I'll share my last couple of slides and then we can see if there are any more questions or things you want to talk about.
Alina, you saw me for a second, did you? I just wanted to say that I had the same situation; he's in the background, so I had to turn off the video and audio. Thank you for the presentation, Adrian. It's pretty cool. No worries, thanks. Okay. Yeah. So what's happening in the future? One thing is vulnerability scans. That's actually one of the things that Harbor already has, and one of the very nice things they did was to basically create a standard: there's a standard in Harbor for how to connect new vulnerability scanners. So the idea is that I can implement pretty much the same interface in Trow, and then you can plug in whatever vulnerability scanner you'd like, assuming it implements the interface. And a GUI, that's actually being worked on at the minute by one of my colleagues, so I'm very keen to get that working and usable. There is a question about whether or not we should do a GUI that's usable by any sort of OCI-compliant registry; at the minute it's just compatible with Trow, but it's definitely something to think about. And a full audit log. Yeah, I talked about that earlier. I'm very keen on this idea that we can look at Trow and see what's happening in Trow, and that gives us a very good idea of what's happening in our clusters, or at least a very good idea of what's happening in clusters. Immutable tags. That's actually really interesting in relation to Kubernetes. The first time you spin up a cluster, at some point you probably thought, right, I want to update this image. So you just push a new image, and then you're sitting thinking, hang on, how do I do that? You do the kubectl apply, and then nothing happened, because the YAML hadn't changed, because the image name hadn't changed. And that's because Kubernetes effectively sees tags as immutable. Right.
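Going back to the pluggable-scanner point for a moment, the shape of such an interface can be sketched as a Rust trait. Everything here is an illustrative assumption, not Harbor's actual adapter API (which is HTTP-based): any scanner implementing the trait can be plugged into the registry.

```rust
/// A hypothetical vulnerability finding.
struct Vulnerability {
    id: String,   // e.g. a CVE identifier
    severity: u8, // 0 = unknown .. 5 = critical
}

/// The common interface any plugged-in scanner would implement.
trait VulnerabilityScanner {
    fn name(&self) -> &'static str;
    fn scan(&self, image_digest: &str) -> Vec<Vulnerability>;
}

/// A registry holding any scanner behind the common interface.
struct ScanningRegistry {
    scanner: Box<dyn VulnerabilityScanner>,
}

impl ScanningRegistry {
    /// Highest severity found for an image, or 0 if clean.
    fn worst_severity(&self, digest: &str) -> u8 {
        self.scanner
            .scan(digest)
            .iter()
            .map(|v| v.severity)
            .max()
            .unwrap_or(0)
    }
}

/// A stand-in implementation, as a plugged-in scanner might look.
struct MockScanner;

impl VulnerabilityScanner for MockScanner {
    fn name(&self) -> &'static str {
        "mock"
    }
    fn scan(&self, _image_digest: &str) -> Vec<Vulnerability> {
        vec![Vulnerability {
            id: "CVE-0000-0001".to_string(),
            severity: 3,
        }]
    }
}
```

The registry only depends on the trait, so swapping one scanner for another is a one-line change at construction time.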
If you give something a name, it expects that name to only ever point to one thing, which isn't the same as Docker, where you can have the latest tag, which changes over time and so on. But it would be nice to at least support immutable tags in registries. I think Harbor already has this, though I'm not quite sure how they did it, because I don't think Docker distribution does. But yeah, it'd be nice to have some support for immutable tags, so that anything under a given prefix, or anything with certain names, can't be changed once you push something to it. The other thing, and this was really the main thing I was thinking of when I started Trow, and I've still not got there, is ahead-of-time image distribution and faster distribution of images. That goes back to the idea of the working set. If I push a new image and I know what's going to be needed in my cluster, why not send it to the nodes in the cluster before they even do the kubectl deploy and start pulling stuff? And you can do nice things there using BitTorrent or similar algorithms. As you're probably aware, there are already a couple of projects for that, one of which, Dragonfly, is in the CNCF, and there's also Kraken from Uber. They both seem quite large-scale projects, though. They're both around this idea of distributing images quicker using BitTorrent-style distribution, but they seem aimed more at extremely large clusters. I would like to try and keep things a bit simpler, if anything, so they're also useful at the smaller scale. I could be wrong there; I might be disparaging Dragonfly, for example, and I don't want to do that, because it's certainly an interesting project, so I take that back. But yeah, that's pretty much all I have. So thank you very much for listening, and if there are more questions, I'm interested to hear them.
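The immutable-tags idea above can be sketched very simply: reject a push to an already-existing tag when the tag falls under a protected prefix. The names (`TagPolicy`, `push_tag`, the `release/` prefix) are hypothetical, not any registry's real API.

```rust
use std::collections::HashMap;

/// Hypothetical policy: tags under these prefixes cannot be repointed.
struct TagPolicy {
    immutable_prefixes: Vec<String>, // e.g. "release/"
}

struct Repo {
    tags: HashMap<String, String>, // tag -> digest
    policy: TagPolicy,
}

impl Repo {
    /// Push a tag, refusing to overwrite an existing protected tag.
    fn push_tag(&mut self, tag: &str, digest: &str) -> Result<(), String> {
        let protected = self
            .policy
            .immutable_prefixes
            .iter()
            .any(|p| tag.starts_with(p));
        if protected && self.tags.contains_key(tag) {
            return Err(format!("tag '{}' is immutable", tag));
        }
        self.tags.insert(tag.to_string(), digest.to_string());
        Ok(())
    }
}
```

Under this policy an ordinary tag like `latest` can still be repointed at a new digest, while a second push to `release/v1` fails, which matches the Kubernetes expectation that a name points to exactly one thing.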
I'm also very interested to hear your thoughts on Trow and what you think of the direction. Adrian, thank you for the presentation. I have a question about your users: who's currently using your service? Yeah, that's a good question. There's not a huge number of users; there is a handful. I actually think a lot of people have tried it out and played with it in development, because it's very quick to spin up. And I think a few people are using it in CI/CD. I've not really got to the bottom of why people needed a registry in CI/CD, but they did. I guess it lets people push an image and test it in a later part of the pipeline. But no, there's not a large number of users. I'm interested in thoughts on how I can get more users. I think there's a couple of features that really need to be implemented first, especially around proxying, before I can really address the use cases that I've been talking about, if you see what I mean. I have one more question. Yeah, it's more about the differentiation from some of the other projects, right? So you have written Trow in Rust, so one of the things is that maybe it's faster. And then you're saying that because it's in Kubernetes, with the admission controller, it has those capabilities, and some of the other registries don't necessarily have that. So are those the only things where Trow is actually trying to differentiate itself? I'm just asking because there are a lot of different projects, and all of these projects want to bring some value and make users want to use them, right? So do you have any ideas on what might be other differentiation factors for Trow? Yeah. I'm slightly scared to repeat myself, but, going back to when I started:
I had a previous project called ImageWolf that was just about proving you could use BitTorrent to speed things up. It used Docker distribution and was basically a hack that did that. My intention with Trow was to do a sort of production version of that, and unfortunately I've still not quite got as far as I would like with it. And now, of course, we have projects like Dragonfly and Kraken; I should really look into those. I guess the main differentiator I see is from Harbor: Harbor is a lot larger and more heavyweight, whereas Trow is intentionally lighter, and I guess that's why people are picking it up for things like CI/CD. The other thing I want to look at is security and auditing. I think we're missing stuff there around auditing and the supply chain and security and so on. It's not clear to me if anybody really shares my concerns, though. Certainly people are starting to talk about supply chains and Notary a lot more. Yeah, so one question about that. A lot of these other container registries use third-party scanning tools for container images, right? So maybe one way to target different types of users is to have that integrated into the registry. And then you talked about Notary v2, and maybe that could be something that could be used more directly with a container registry to sign or verify legitimate images, right? Yeah, I think that's actually potentially quite a big area: if we can offer better Notary integration, excuse me, or support than other registries, I can see that being quite a big thing. Yeah. And then, just to put my advisor role on here a little bit. I appreciate that. Yeah, what would be best for the project to differentiate itself, right? To make people want to use it, right?
So do you have any plans to at some point donate this project to the CNCF or some nonprofit? We'd certainly be up for that, yeah. I did consider starting the process, but I think we really need to have a user base before considering that. Yeah, because we have the sandbox stage, and that's actually more of a playground; I don't remember exactly if there's a requirement for a number of users there. Eventually, when it goes into the next stage, incubation, there obviously needs to be some number of users. But sandbox may be a good place to get traction. I don't know, though; you have to look at what benefits you could get from that yourself, right, as a project? No, I think it might be a good way to find new users and stuff, so I think it's a good suggestion. It's something I've thought about, and perhaps I should have been more proactive and done it already, I'm not sure. Yeah, but some of the questions will be about differentiation from other projects. I mean, Harbor is already a CNCF project, a graduated project, right? And then the question also comes up: how is this going to be better in the context of the CNCF having all these projects to help end users? When you have too many projects doing the same thing, it may not actually be beneficial for end users, because it may end up confusing them, right? Like, which one should I use? If there are more distinct features, then there's more of a story behind it, like: okay, you can use Trow for this type of thing, right? So it may actually not even be about the technology; it may just be about the messaging, about how you position it. Yeah, that's my thinking as well. At the end of the day, any registry is going to store images; that's the basic use case in some ways.
It's just about specializations and use cases, I guess. Cool. All right, so thank you very much. I don't think I have any more questions. Thank you everyone for attending. Do you have any questions for us or anything? You answered my questions; I was looking for some advice and you gave it, so thank you very much for that. Yeah, feel free to reach out, I mean, we're all on Slack and everything, so if you need me. Thank you, we're happy to help you. Cool. All right, so enjoy the rest of your day, and thank you. Okay, see you at KubeCon maybe. Yeah, I've been to quite a few; looking forward to being in person again. Yeah, me too. I'm waiting for this pandemic to be over. Okay, cheers. Bye.