Hello, I'm Suley. I'm an engineer at Weaveworks, and I'm also a Flux maintainer.

And I just wanted to add a little shout-out to Suley: he was instrumental in getting Helm OCI support into Flux, and he's also the person who implemented Helm Cosign verification in Flux. So thank you very much. You might be walking in for the first time, so I'll just say I'm Scott Rigby from Weaveworks. I'm also hosting the event, but I'm giving this talk as well. I'm a Helm and a Flux maintainer. So thanks for joining us, everybody.

Today we're going to be talking about the integration of OCI artifacts as sources in Flux. Flux has the concept of sources: Git repositories can be a source, S3-compatible buckets can be a source, Helm repositories can be a source, and now OCI repositories can be a source for everything. We're going to start with an intro where we'll see what OCI artifacts are, what their advantages are, and why we'd even want to use them. We'll then cover the new features introduced in Flux around OCI artifacts. And finally we'll do a demo and give you some links at the end.

Okay. So thanks, Suley. What is an OCI artifact and why would we want to use it? What we're showing on the screen right now is something many of you should be familiar with, because you're here at KubeCon. But either way, we're going to do a comparison between OCI artifacts and OCI images. This is the actual JSON for an OCI image manifest. OCI, for everyone, stands for Open Container Initiative, and this is essentially what you get when you build a Docker image, or use any other OCI-compatible tool to build and distribute images. The image manifest describes the components that make up an image: basically everything you need to run a container down the road.
The config that you see at the top contains layer-ordering information. You've seen that images have multiple layers, and generally speaking they need to be applied in a certain order to produce the root filesystem. The digest key is a cryptographic hash of the compressed content of the target component, and the media type tells us what format and compression standard is used; the application/vnd.oci.image.config.v1+json media type is what gives you that information. The digest is a unique hash, so if the content changes, the digest also changes. When fetching a manifest, a config, or a layer, we apply the hash function to verify that the content is consistent. This makes the content immutable and addressable by its digest.

On the other side, you have an OCI registry, which is where we store images. The registry gives us a nice API for handling OCI images. If you look at the top here, you can see we have an endpoint for every kind of component. In the endpoints you have a resource identifier, the name, and you can do access control based on that name. At the top you have the tags endpoint, which we can use for content discovery. We said that digests are immutable; tags are mutable, and you can use tags for human-readable versioning of your images. With the tags endpoint we can do content discovery and retrieve the tags for a given image. Then we can use the manifest endpoint: given a reference (a reference is either a tag or a digest), we can retrieve the manifest. Once we have the manifest, we have everything we need to pull the config and the blobs of the given image using the blobs endpoint.
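As a minimal sketch of the structure being described (the digests and sizes here are illustrative placeholders, not values from a real image), an OCI image manifest looks roughly like this:

```json
{
  "schemaVersion": 2,
  "mediaType": "application/vnd.oci.image.manifest.v1+json",
  "config": {
    "mediaType": "application/vnd.oci.image.config.v1+json",
    "digest": "sha256:aaaa1111...",
    "size": 7023
  },
  "layers": [
    {
      "mediaType": "application/vnd.oci.image.layer.v1.tar+gzip",
      "digest": "sha256:bbbb2222...",
      "size": 32654
    }
  ]
}
```

The config entry and each layer entry carry the same three keys (mediaType, digest, size), which is what makes every component content-addressable and verifiable on fetch.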
We wanted to show you this because an OCI registry gives us nice features: content deduplication, resumable push and pull, and garbage collection. If a blob is no longer referenced at all, it gets garbage collected.

So at some point, users really wanted artifacts: basically, a way to store things other than container images in an OCI registry. Do we show the artifact type? Yes, it's here on the config, as a media type. This is the registered artifact media type for Helm: application/vnd.cncf.helm.config.v1+json. That allows tools querying an OCI registry to say, "I just want the Helm charts that are stored there." In this case it's Helm charts, but other artifacts can be stored as well, and you don't want to have to parse through all of the container images in that same registry; this media type is what lets you filter for them. And what was then a Microsoft project, and is now a CNCF vendor-neutral project called ORAS (OCI Registry As Storage), has a really good tool set for working with these things. So you don't have to do it through the raw API, although you could.

Ultimately, the idea is to use the config media type for this. But there's also a field that was just added to the spec called artifactType, which is a way to categorize artifacts so that you don't have to register every type. You could say: I have six different artifact types; one is Kustomize manifests, one is Helm charts, one is OPA policy, and so on.
And I want to be able to get only those things through an API. That wasn't really easy to do unless you specified the media type, and generally you have to register a media type if you want to use it for anything besides internal use. With that addition to the spec, very soon you'll be able to not only use a different media type, but also use the standard media type for artifacts with different artifactType keys.

So why would you use OCI artifacts? The first use case is colocating your code and your configuration. For example, for those of you who know about Flux's image automation controllers: today we use those to automatically reconcile our application code. Every time we have a new version of the code, the image automation controller reconciles it into our Kubernetes cluster for us. With OCI artifacts, you can do the same thing with your configuration. The second use case is consistently referencing your images: with digests you get immutability, with tags you get versioning, and you can do access control as we showed with the registries.

Right. So here we're just showing you how you migrate, if you're a Flux user, from using a Helm repository to using your charts stored in a repository in an OCI registry. Really it's just these two things, the ones in the red squares: the type in the spec, and the URL. If you're a Helm user, you know you can do this with the Helm CLI; it's the exact same URL you would pass for where your OCI charts are stored. It just needs to use the oci scheme in the URI. On the left is a Helm repository with an index, served over HTTPS, and on the right is the same thing moved to OCI. Yeah, good distinction.
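As a sketch of that migration (the names and registry URL here are illustrative), the only changes to the HelmRepository custom resource are the type field and the URL scheme:

```yaml
# Before: a classic HTTPS Helm repository backed by an index.yaml
apiVersion: source.toolkit.fluxcd.io/v1beta2
kind: HelmRepository
metadata:
  name: podinfo
  namespace: flux-system
spec:
  interval: 10m
  url: https://stefanprodan.github.io/podinfo
---
# After: the same charts stored in an OCI registry
apiVersion: source.toolkit.fluxcd.io/v1beta2
kind: HelmRepository
metadata:
  name: podinfo
  namespace: flux-system
spec:
  interval: 10m
  type: oci
  url: oci://ghcr.io/stefanprodan/charts
```

Everything else in the resource, and everything that consumes it, stays the same.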
We should probably say that an HTTP Helm repository, for anyone who doesn't know, has an index file that lists all the information about all of the charts in that repository and all of the versions of each chart. This is a pretty important point for people concerned with bottlenecks. For example, you may have run into this if you're a legacy chart user of Bitnami. Bitnami is one of the major chart maintainers; they rigorously test their charts and images, they've been doing this for a really long time, and a lot of folks trust them. They had to truncate all chart versions older than a certain point, because the index file had grown to something like gigabytes; it was just ridiculously huge. The reason it needs that index is that it doesn't have something like OCI, which has its own API for listing tags and so on. So that's the only actual functional change you need to make in your HelmRepository CRDs.

The other change: that was from a Helm repository to an OCI repository. If you're moving from a Git repository as your source, it's really just as simple. You change the URL: instead of pointing to your Git URL, you point to the URI for OCI, and semver and everything works. The semver constraints, the ranges, and all of that work exactly the same.

We have also implemented an integration with Cosign in Flux. Just to tell you how Cosign works: Cosign is used to sign OCI artifacts. You create a key pair and keep the private key secure. You sign your artifact manifest and store the signature alongside the OCI artifact. To do the verification, you retrieve the signature from the registry, verify the signature against the public key, and verify the claims as well.
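A minimal sketch of that Git-to-OCI switch on the Flux side (the name and URL are illustrative): an OCIRepository can track a semver range of tags, just as a GitRepository source would track a semver range of Git tags.

```yaml
apiVersion: source.toolkit.fluxcd.io/v1beta2
kind: OCIRepository
metadata:
  name: podinfo
  namespace: flux-system
spec:
  interval: 5m
  url: oci://ghcr.io/stefanprodan/manifests/podinfo
  ref:
    # Same semver range syntax as with Git tags
    semver: ">=6.0.0 <7.0.0"
```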
Verifying the claims means the signature carries a payload saying which manifest digest was signed, and you verify that the digests correspond. There's also an experimental feature, a transparency log, that can be used. That's how it works.

So let me show you how we do the Helm release reconciliation. For this, we assume you have running Flux controllers (the Helm controller and the source controller) and that you have applied the given manifests. How does it work from a user's perspective? You push your Helm chart to a repository in your OCI registry. Flux's source controller manages the HelmRepository and HelmChart custom resources. The first thing it does is pull the signature; this step is optional, but if you have enabled verification, it pulls the signature and verifies the chart. If the verification succeeds, it can move on. If not, you get a nice condition in the custom resource when you run kubectl describe, showing you why the verification failed. After verification, it pulls the Helm artifact and builds it with the given optional value files (in source-controller, the default values file is not included unless you specify it), then packages the chart and stores it as an artifact at a location that can be consumed. The Kubernetes API server then notifies all watching controllers, one of which is the Helm controller. The Helm controller first fetches all the values you have set in your resources (those being ConfigMaps and Secrets), merges them together, then fetches the artifact and deploys the release, the deployment being either a Helm install or a Helm upgrade. From there, it's the same process that exists today.
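A sketch of the chart resource involved (the names are illustrative; the verify block is the optional signature-check step described above, and the secret is assumed to hold the Cosign public key):

```yaml
apiVersion: source.toolkit.fluxcd.io/v1beta2
kind: HelmChart
metadata:
  name: podinfo
  namespace: flux-system
spec:
  interval: 5m
  chart: podinfo
  version: "6.x"
  sourceRef:
    kind: HelmRepository
    name: podinfo
  # Optional: source-controller pulls the Cosign signature and
  # verifies it before building and storing the chart artifact
  verify:
    provider: cosign
    secretRef:
      name: cosign-pub
```

If verification fails, the failure reason shows up as a condition on this resource when you describe it.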
While integrating OCI, we have also integrated contextual login. And I think this is you. This is something we already had with the image automation controllers. What it does is use your cloud provider's identity service: if you specify that you want to use a given provider, it will use the workload identity attached either to your controller's service account or to the node. So instead of providing a secret with authentication tokens, you can just set a provider field with the name of your cloud provider, and Flux will ask your cloud provider for a token in order to authenticate and be able to pull the given artifact. Go on, Scott.

Yeah. The only other note I wanted to mention is that it's a key on the OCIRepository custom resource definition that just says provider. We didn't put it in the earlier slides because we didn't want to add that new functionality while we were talking just about migration. But you simply say gcp, aws, et cetera, and it works over OpenID Connect. We support three providers: GCP, AWS, and Azure.

And the Flux CLI has some handy commands for this. Again, you don't need to use the Flux CLI to be a Flux user, but it's really handy, so most people like to use it for things like this. For example, you could use the ORAS project directly to build your artifacts, but Flux has built-in tooling for that, just like Helm does for build, push, and pull. So these are the commands: flux build artifact. You specify a path to where the manifests are that you want to package up. And since this is for the Kustomize controller, your manifests in this case can be plain YAML or Kustomize overlays.
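A sketch of contextual login (the registry URL is illustrative): the provider field replaces the authentication secret entirely.

```yaml
apiVersion: source.toolkit.fluxcd.io/v1beta2
kind: OCIRepository
metadata:
  name: app-config
  namespace: flux-system
spec:
  interval: 5m
  url: oci://123456789012.dkr.ecr.us-east-1.amazonaws.com/app-config
  ref:
    tag: latest
  # No secretRef needed: Flux exchanges the workload identity
  # (controller service account or node) for a registry token
  provider: aws
```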
Then you just give it an output path where you want the packaged artifact to live on your filesystem. Some of you may have seen Pinky's talk earlier, where she talked about terraforming Flux. That's an example of using a controller that reads manifests packaged or represented in a different way, like your Terraform resources: your path would point to the directory where you keep those, or to the single file.

Then you can use the Flux CLI to push as well; it uses the same API under the hood. You do your docker login to get your config, or however you want to authorize to the registry you want to use, and then you just do a flux push. You give it the URI of the OCI registry and repository you want, and the path where your manifests should come from. Optionally, you give it your source and revision, and those get put on the artifact as annotations. Right, exactly: because this is from Git, we take the current Git commit and set it as the revision annotation when we push the artifact, so you can see it on the Flux side in the custom resource. It complies with the OCI annotation conventions, so you can parse it with your tooling. Suley knows more about this than I do, obviously; that's really great. And then there's also the pull command, which is pretty much the opposite of the push command.

Now for the demo. We will be using a fork of podinfo from GitHub.
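The build-and-push flow can be sketched as shell commands (the registry, repository, and tag here are illustrative; running these requires a real registry login and a Git checkout):

```shell
# Package the manifests at ./deploy into a local artifact archive
flux build artifact --path ./deploy --output ./artifact.tgz

# Authenticate however your registry expects (e.g. docker login),
# then push, recording the Git source and revision as OCI annotations
flux push artifact oci://ghcr.io/example/app-config:v1.0.0 \
  --path ./deploy \
  --source "$(git config --get remote.origin.url)" \
  --revision "$(git rev-parse HEAD)"
```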
And we have two scenarios. In the first, we migrate the podinfo Kubernetes manifests from Git to OCI: packaging and pushing the manifests to GitHub Container Registry, signing them with Cosign, and deploying them to Kubernetes. The second is the same thing, but using the provided Helm chart and migrating it to OCI: again packaging, signing, and deploying to Kubernetes.

So, as I said, we use a fork of podinfo. podinfo gives us manifests: we have a kustomization, an HPA, a service, and a deployment. Here we see the deployment; I'm just going to move on. What we have added is a SOPS-encrypted secret, just to show you how we can package many different manifests together.

What we're going to do is package our manifests as a tar archive and then push them with ORAS. First we package them: we put what is in the shared and kustomize directories into the archive, and we show you what's inside. Then we push with ORAS. When you push with ORAS, you give the location and name of the archive with the tag, and you also provide the media type, which is going to be a tar layer compressed with gzip. Then we push it. That's just to show, again, that you can push with either the Flux CLI or ORAS. We get back the digest, and we can see on GitHub that we have the new tag, 6.2.2.

Then we sign it. To sign it, we use the Cosign CLI, which generates an asymmetric key pair for us, a private and a public key, and we sign our artifact with the private key. When you sign, you specify the location of the artifact, and Cosign pulls it, retrieves the manifest, signs it, and then pushes the signature colocated with the manifest.
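The signing steps can be sketched as shell commands (the image reference is illustrative; running these requires push access to the registry):

```shell
# Generate a key pair: writes cosign.key (private) and cosign.pub (public)
cosign generate-key-pair

# Sign the artifact: Cosign fetches the manifest, signs its digest,
# and pushes the signature next to the artifact in the registry
cosign sign --key cosign.key ghcr.io/example/podinfo-deploy:6.2.2
```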
When it pushes the signature, what we can see here is that it sets a tag for the signature ending in .sig, and that tag is actually derived from the digest of the OCI manifest. So now this is done. We need the public key to do the verification on the Kubernetes side, so we create a secret with it.

Then, coming back, we declare an OCIRepository for podinfo, with the location of our OCI artifact, and we provide a semver range. At the end, we set a verify field with the provider set to cosign, and we give it the secretRef containing our public key. We apply this, and once it's applied, we can look at our OCIRepository custom resource. Here you can see that we have an artifact containing our manifests, and in the conditions you can see a condition of type SourceVerified that has succeeded: it has verified the signature of the given revision.

Now, using the Flux CLI, we create a Kustomization that fetches the artifact produced by the OCIRepository and applies the given manifests to Kubernetes, and we provide the SOPS decryption key for the SOPS-encrypted secret. So this is applied. If you look at the status, the Kustomization gives us an inventory, and in the inventory you can see what it has applied: a secret, a service, a deployment, and an HPA. Those are all the manifests that were part of the archive we pushed at the beginning. We check that all of them are ready, including the deployment. And we can also see that the secret has been decrypted; it's plain configuration in the cluster now.
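The two resources from this first scenario, sketched together (names and URL are illustrative; the secrets are assumed to hold the Cosign public key and the SOPS private key respectively):

```yaml
apiVersion: source.toolkit.fluxcd.io/v1beta2
kind: OCIRepository
metadata:
  name: podinfo-deploy
  namespace: flux-system
spec:
  interval: 5m
  url: oci://ghcr.io/example/podinfo-deploy
  ref:
    semver: "6.x"
  verify:
    provider: cosign
    secretRef:
      name: cosign-pub      # secret created from cosign.pub
---
apiVersion: kustomize.toolkit.fluxcd.io/v1beta2
kind: Kustomization
metadata:
  name: podinfo
  namespace: flux-system
spec:
  interval: 10m
  sourceRef:
    kind: OCIRepository
    name: podinfo-deploy
  path: ./
  prune: true
  decryption:
    provider: sops
    secretRef:
      name: sops-key        # secret holding the SOPS private key
```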
It's no longer a SOPS-encrypted secret. That was the first scenario, so let's move on to the second one. The second one is podinfo again, but with a chart. Here we have the Helm chart; the version is 6.2.1. What we're going to do is package it with the Helm CLI (helm package), and then push it. This time we are using a GCP Artifact Registry, and we're going to use the experimental feature of Cosign: keyless signing. So we package it, we push it, and we check that we have the new version, 6.2.1.

Then we do the experimental Cosign keyless signature. What it's going to do is authenticate us with an OIDC scheme, and it uses a special certificate authority to generate a really short-lived certificate to do the signing. Short-lived meaning about ten minutes here, and when the certificate is generated, one of its subject alternative names will be your email address. That's something to know. So now this is done, and at the bottom we can see that it has signed and pushed the signature to our registry.

Then we do the same thing we showed before: a HelmRepository pointing to our location, and we have set provider: gcp here. That means we're doing contextual login: we're asking Flux to use our workload identity to fetch our artifact from that registry, without providing any authentication token or anything else. Then we declare a HelmRelease that points to this repository, and it fetches podinfo and applies it for us. So let's see everything get applied. First we declare the HelmRepository, and we see that it has succeeded.
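The second scenario's resources, sketched (the GCP project, registry path, and names are illustrative):

```yaml
apiVersion: source.toolkit.fluxcd.io/v1beta2
kind: HelmRepository
metadata:
  name: podinfo
  namespace: flux-system
spec:
  interval: 10m
  type: oci
  url: oci://us-east1-docker.pkg.dev/example-project/charts
  provider: gcp          # contextual login: no secretRef needed
---
apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
  name: podinfo
  namespace: flux-system
spec:
  interval: 10m
  chart:
    spec:
      chart: podinfo
      version: "6.2.1"
      sourceRef:
        kind: HelmRepository
        name: podinfo
```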
So it has succeeded in logging in and authenticating to our repository. Then we see the release, and we see that the release has succeeded: it has installed the podinfo Helm chart, and it has done the verification. You can see here that we set the verify provider to cosign, and if you look at the HelmChart itself, you can see that it has produced the artifact, and in the conditions the SourceVerified condition has succeeded as well. And that was the end of what we wanted to show. Can you just do the conclusion?

Yeah, absolutely. We're maybe running a minute and a half, two minutes into the coffee break, so let's just wrap up with a couple of links and some info on how you can get in touch with us. Again, why would you transition to OCI from a Helm repository? To adopt the same delivery mechanism for applications and configs. There are also no scaling issues related to the index.yaml I mentioned before with the Bitnami example, and that's just one example of many: every time you call out to the index file, you're making another round trip, and OCI takes that away. It also enables chart signature verification the same way you do it for your images, and as Suley just showed in the demo, you can leverage your provider's identity service for auth.

And why would you transition to OCI from Git? The first reason is very similar: adopting the same delivery mechanism. You also avoid distributing objects with different lifecycles within the same repository. Similarly, you enable signature verification of your config objects using Cosign, you get better access control over a given artifact, and the same identity-based auth as well. Here are some links if anyone wants to screenshot them; we'll also upload the PDF with these links.
And please stop by the Flux booth. We've got a lot of good talks coming up on this topic and other topics. So, thank you very much.