Good morning, good afternoon, or good evening, depending on where you are. We're very happy that you could attend this session. Today, my co-presenter and I will talk a little bit about the Harbor project. This is meant to be an introductory session, so it's going to be fairly light. We'll give a high-level overview of the Harbor registry: what it is, why you might need it, the current status of the project, and some of the things we've worked on over the last couple of releases. There's also a deep-dive session where our engineers will go into more detail on some of the specific features coming out in the upcoming 2.1 release. We have a demo prepared for you today as well, and we'll leave the last 10 minutes for Q&A. A quick introduction: I'm Alex Xu, a product manager on the cloud-native team at VMware, leading the Harbor effort. I'm responsible for understanding the requirements around the registry, driving the roadmap, collecting feedback, and making sure we're constantly improving and seeing sustained growth within the community. With me today is my co-presenter, Stephen Ren. Stephen? Hi, everyone. Nice to meet you, and welcome to our Harbor introduction. I'm Stephen Ren. I work on Harbor as an engineering manager at VMware, mainly responsible for developing features and managing releases. At VMware I also manage the Tanzu Kubernetes Grid Integrated product. All right, thank you, Stephen. So what is Harbor? Harbor is a trusted cloud-native registry that can store, sign, and scan content. A registry is basically just a place to host and manage your artifacts. When we started this project, we set out to build an on-prem registry leveraging the Docker distribution, which was the de facto standard for storing container images at the time.
But we also wanted to address the issues we came across while using Docker Hub and some of the other alternatives. Over time, we've added more and more services and features: lifecycle management features, and security features like scanning and signing. Our mission is still to be the best cloud-native registry for Kubernetes. We started with support for Docker images, then expanded to Helm charts in the 1.6 release. And now with OCI, which is the main theme for the 2.0 release that I'll talk about in a little bit, we can support any OCI-compatible artifact. Here's a quick timeline of the project. The biggest announcement this time around is that we have officially reached graduated status in the CNCF, becoming the 11th project in the CNCF to do so, alongside projects like Kubernetes, containerd, Helm, and Prometheus. You can see that we started this journey back in 2014, so we've put a lot of work and attention into it. We're very happy and very proud of what we've accomplished, but we could not have done this without everyone in the community, including all of you on the call today. The community plays a huge role in helping us understand use cases, test, give feedback, and drive the roadmap with us, so we're very appreciative of everyone's contributions. A project reaching graduated status means it has hit a certain level of maturity, but it's really the potential and the roadmap that drive this decision. Next is a quick overview of the various pieces or capabilities that make up Harbor; we'll just go over a couple of these real quick. First is access control. You can use a local DB, LDAP, or OIDC to connect to Harbor and manage your users and their permissions on Harbor projects. If you're not familiar with Harbor projects, think of a project as a namespace.
A project is a unit of tenancy and has its own set of repositories and images that are completely isolated from other projects. We also have robot accounts, which are service accounts for CI/CD-type scenarios, to automate the pushing and pulling of images. You can also configure replication, which refers to the ability to configure push-based or pull-based replication policies to and from other registries. That could be another Harbor instance, or it could be something like Docker Hub or Quay, or any of the other popular SaaS registries like GCR, ACR, and ECR. Then we have the ability to scan images from within Harbor, leveraging popular open source image scanners like Clair, Anchore, Trivy by Aqua Security, and a couple of others. Once you have the list of vulnerabilities discovered from the image scans, you can create security policies around them. So you can create a policy that says: disallow pulling of images with certain vulnerabilities. You can also sign images, leveraging Docker Content Trust, which comes with Harbor, and likewise create image pull policies based on the signature. For example, you can say something like: prevent any unsigned image from being deployed. We also support managing Helm charts. Previously this was done through a third-party extension we had added called ChartMuseum, but with 2.0 you can now push charts directly to the registry, and they sit alongside your container images. We have a single web console where you can do everything: manage your artifacts, your users, and your policies, and run scheduled tasks like garbage collection. And everything you can do in the UI you can do through an API. Finally, we have a couple of deployment options for Harbor: there's a Docker Compose format, you can deploy Harbor onto a Kubernetes cluster using our Helm chart, and there's BOSH, another container orchestrator that's used very heavily in Cloud Foundry.
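To make the robot account idea concrete, here is a minimal sketch of how one might be used from a CI job. The registry host, project name, and robot name are placeholder assumptions (Harbor robot account usernames are prefixed with `robot$`):

```shell
# Hypothetical CI step: authenticate with a Harbor robot account.
# "harbor.example.com", "myproject", and "ci-pusher" are placeholders.
docker login harbor.example.com -u 'robot$ci-pusher' -p "$ROBOT_TOKEN"

# Tag a locally built image into the project the robot is scoped to, then push
docker tag myapp:1.0 harbor.example.com/myproject/myapp:1.0
docker push harbor.example.com/myproject/myapp:1.0
```

Because the robot's permissions are scoped to a single project, a leaked CI token cannot touch other tenants' repositories.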
This is an overview of the latest Harbor 2.0 architecture. You can see that we're still based around the Docker distribution; it's still a crucial piece of Harbor. Going from the top, we have a bunch of clients: you can interact with Harbor through the Docker CLI, kubectl, or the Helm client. And since 2.0 adds support for all these additional cloud native artifact types, there are other clients that different artifact authors have built for interacting with the registry. For example, ORAS is one such client that's very popular for pushing generic OCI artifacts, like OCI indexes. You can also push Helm charts and Open Policy Agent bundles. ChartMuseum is still here, so you can push charts to ChartMuseum as well as to the Docker distribution, but we do have plans to deprecate ChartMuseum in the future. On the right-hand side, we have a list of supported scanners. Scanners are added in an out-of-tree fashion to Harbor through our pluggable interrogation services framework. We talked a lot about this in the 1.10 release, but we do have some changes related to the default scanner that I'll talk about in a later slide. Finally, we've expanded the list of replication targets, so you can replicate not just to Harbor and Docker Hub but to a lot of the other SaaS registries as well. And you can now replicate all these other artifacts beyond just container images and Helm charts, though obviously that also depends on the supportability of those artifacts on the target registry. If you've seen the CNCF announcement and the release blog around Harbor 2.0, it was all about Harbor becoming an OCI-compliant registry. So we should talk a little bit about what OCI is before we talk about what it means for Harbor to be an OCI-compliant registry. Following this picture, we started with Docker on the left-hand side, or the Docker distribution to be more specific.
The Docker distribution is essentially a content store for storing your Docker images, and Harbor, as a registry built around the Docker distribution, is an HTTP API backed by that content store. Up until very recently, before 2.0, the only artifacts you could push to Harbor and manage on Harbor were container images. Yes, you could push Helm charts, but like we said earlier, those were managed through ChartMuseum separately from the images, and they didn't get the same set of features, like tag retention and tag immutability, that container images have. OCI, the Open Container Initiative, is a group that came along to define specifications around format, runtime, and distribution, so that a broader set of cloud native artifacts can get the same features as container images, and so that any distribution can deliver a registry that is secure, scalable, and interoperable. It took the Docker v2.2 image spec as a starting point and created its own specifications around image format and image runtime. What this picture is attempting to capture is a little bit of that process or history, where OCI formalized these specs around image format and image distribution, and that work was merged back into the Docker distribution so that it fully supports OCI images. In turn, Harbor can now support hosting all these other cloud native artifacts that we've heard so much about at these conferences, such as Helm charts, CNAB bundles, Open Policy Agent bundles, and others. The second icon here is supposed to be a CNAB, what's known as a cloud native application bundle. It's another application-deployment-oriented artifact that's worked on by folks over at Microsoft. Being able to host the Docker manifest list is one of the things that came out of this support for OCI. The Docker manifest list is something that has existed for a while but was previously unsupported in Harbor.
The manifest list is essentially just a packaging of manifests. As you can see in this picture, we start with the manifest list, which itself has a SHA digest, and it acts as a pointer to all these other images that are each built for a specific architecture. It holds these together, and that's what allows you to use the same image name for all these different images built for different architectures. So just keep in the back of your head that there is this index structure, called an OCI index; we'll come back to it later in the demo. Docker Hub is actually a really good example of something that is OCI compliant, probably the first registry that's OCI compliant, and it can fully handle the Docker manifest list we just talked about. In fact, pretty much all the official images you see on Docker Hub today use the manifest list, allowing multiple platforms to be supported under the same image name and tag combination. Shown here is the busybox image with the tag latest, and you can see that it references multiple SHA digests for different architectures. You have Linux, you have ARM, you have MIPS, and you even have different variants of these architectures. So when you do a docker pull of busybox from Docker Hub today, it will actually fetch the version of that image that matches your client. Unless you're pulling specifically by digest, if you pull by tag, Docker Hub will resolve and fetch the right version of that image. That's the experience we wanted to deliver with Harbor as well: users can push and pull manifest lists, manage these as a whole, but also manage the images within them individually. And that's just one use of the OCI index, this multi-architecture image on Docker Hub that we just looked at; there are other artifacts leveraging it that are shown here.
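You can look at this index structure yourself from the Docker CLI. A quick sketch (the digest below is an illustrative placeholder, not a real value):

```shell
# Print the manifest list behind busybox:latest. Each entry pairs a
# sha256 digest with a platform (os / architecture / variant).
docker manifest inspect busybox:latest

# Pulling by tag lets the registry resolve the right platform for you.
# To pin one specific variant instead, pull by its digest:
docker pull busybox@sha256:<digest-from-the-list>
```

The JSON printed by `docker manifest inspect` is exactly the pointer structure described above: a top-level list whose `manifests` array references each per-architecture image by digest.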
Here we're looking at a project within Harbor, and at a specific repository within that project. You can see that we have a container image, a Helm chart, a cloud native application bundle, and another Docker image in the same project, each recognizable by its logo on the left-hand side. We put this here to draw a contrast with the 1.x versions of Harbor, where you only had container images and nothing else; now you have all these other artifacts that you can manage in the same project. So that's really the biggest piece of the 2.0 release. The other work we did for 2.0 was replacing the Clair image scanner with Trivy, another open source image scanner by a company called Aqua Security. Clair has been our default scanner going back to probably the first version of Harbor, but we started looking at other scanners because users have been asking us: I've set up such-and-such scanner, I've paid for such-and-such scanner, can I get it to work with Harbor? In the previous 1.x versions we had done a lot of work to open up those partnerships with different image scanners through our pluggable interrogation services framework, but this takes things one step further with regard to the embedded default image scanner in Harbor. Very simply, we landed on Trivy because it's simple, comprehensive, fast, and accurate. It's also easy to set up: there's no need to manage a separate DB instance or run it in some daemon mode. Trivy also has wider coverage for scanning different operating systems and application dependency managers, a lot of which are listed here, but there are more. We found Trivy to be superior in conducting deeper scans and capturing more vulnerabilities across all the different operating systems we've tested, including Debian and Ubuntu, SUSE, and Photon OS.
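Scans don't have to be triggered from the web console; they can also be kicked off over Harbor's v2 REST API. A hedged sketch, where the host, credentials, project, repository, and tag are all placeholder assumptions:

```shell
# Hypothetical example: trigger a scan of one artifact via the Harbor v2 API.
# "harbor.example.com", "library", "nginx", and "latest" are placeholders.
curl -u admin:Harbor12345 -X POST \
  "https://harbor.example.com/api/v2.0/projects/library/repositories/nginx/artifacts/latest/scan"

# Once the scan finishes, fetch the vulnerability report for the same artifact
curl -u admin:Harbor12345 \
  "https://harbor.example.com/api/v2.0/projects/library/repositories/nginx/artifacts/latest/additions/vulnerabilities"
```

This is the same interrogation-services machinery the UI uses, which is why anything scannable from the console is also scannable from CI.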
So please check them out if you haven't, and I believe they have some sessions at this conference as well. That's 2.0 in a nutshell, and I just want to spend a few quick minutes giving a preview of the upcoming 2.1 release. The first thing I'll talk about is the proxy cache, which is the ability for Harbor to act as a pull-through cache for another remote registry; we can call that registry the target, the upstream, or the remote. This is useful in situations where you have Docker nodes that have limited or no connectivity to that target registry. It could be for any number of reasons: security or compliance reasons, or pure connectivity issues and limited egress options. Docker Hub is a really good example, and that's the case that was raised to us most often. Docker Hub is a registry that Docker clients from all over the world are attempting to pull images from; if you pull too fast or too frequently, your connection gets dropped, or you might even get IP banned. Harbor as a proxy cache is meant to address this very problem. In this case, if you have Harbor deployed, it will serve as the middleman: it pulls the images from the remote registry you're trying to hit, caches them locally, and then serves them to you. So it's much faster, it minimizes traversal over the network, and it prevents you from getting IP banned. The way you set this up is to create a project in Harbor. If you're familiar with how you create a project in Harbor, it's the same process, except there's an option to enable it as a proxy project, which requires you to enter the target registry endpoint and the credentials. Then, when you want to do a docker pull from that remote registry, you can do a docker pull from the proxy project instead.
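In practice the change is just a prefix on the image reference. A sketch, assuming a hypothetical Harbor host and a proxy project named "dockerhub-proxy" that points at Docker Hub:

```shell
# Before: every node pulls straight from Docker Hub
docker pull library/nginx:latest

# After: pull through the Harbor proxy project instead. Harbor fetches
# the image from Docker Hub once, caches it, and serves it locally
# to every subsequent pull.
docker pull harbor.example.com/dockerhub-proxy/library/nginx:latest
```

The first pull pays the upstream round trip; every pull after that is served from Harbor's cache, which is what keeps you under Docker Hub's rate limits.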
You will have to modify your docker pull commands and your pod manifests to hit that proxy cache instead. In the interest of time, I'm just going to give a one-or-two-sentence summary of the remaining items rather than go into too much detail; you can learn about these in the deep dive. Essentially, we're improving garbage collection so it's non-blocking: you can still push, pull, and delete images while it's running. This is not the case right now, because the registry is put into read-only mode. We also have an integration with P2P providers like Uber's Kraken and Alibaba's Dragonfly. There's going to be an action called pre-heating, which moves images from Harbor to the P2P side, where they can be geo-distributed to hundreds or thousands of nodes more efficiently by leveraging P2P distribution. So that's 2.1; it's coming out in August. Please give it a try, and we'd appreciate any feedback you have when it comes out. A final word about the community: the community is doing really well, and we appreciate everyone's contributions, whether it's raising requirements, fixing bugs, submitting PRs, doing code reviews, attending the community meetings, or attending conferences like KubeCon. We're at 12.4K stars with a strong and growing contributor base, and lots of companies are using Harbor or have partnerships with us. All right, I'm going to let Stephen do the demo, because I think we're at almost 20 minutes. Over to you, Stephen. Okay, thank you. Thanks, Alex. Let me share my screen; I assume everyone can see it. As Alex introduced, we have several major features, and in the interest of time I'm going to demonstrate the two major features from 2.0 and 2.1. The first one is the OCI support we provided in 2.0. As Alex introduced, in 2.0 we introduced OCI artifact support.
With this feature, we can manage not only images but also Helm charts, CNAB bundles, and other formats that follow the OCI distribution standard. First, let's have a demonstration of the manifest list. As we know, the manifest list is a feature that can store images for all kinds of different architectures in a single repo. I will demo how to upload a manifest list through these steps. First, we need to create a manifest list. Docker provides a manifest command that allows you to create one; you need to specify the target URL as well as the images that belong to the manifest list. Here I have included two images: one for the AMD64 architecture and one for the ARM architecture. Once we have created the manifest list, we need to annotate those two separate images with their different architectures. Once the annotation is done, we can use docker manifest push to push it to Harbor. Once we have pushed it to Harbor, we can check the UI and look at the functionality we have provided in Harbor. As you can see, there's a hello-world manifest list uploaded, and for the manifest list there's a folder icon you can click into to see which images are inside. As you can see, there are two images: one is the ARM architecture and the other is the AMD64 architecture. The next thing I'm going to demo is that, with OCI artifact support, we can also upload Helm charts. As you know, Harbor already supports users uploading and managing Helm charts through ChartMuseum. In Helm 3 there's an experimental feature where users can upload Helm charts to an OCI-compatible registry, and this is what I'm going to demo.
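The manifest-list steps just described can be sketched as the following command sequence, where the registry host, project, and image names are placeholder assumptions:

```shell
# Assemble a manifest list from two per-architecture images that have
# already been pushed to the registry (names are placeholders).
docker manifest create harbor.example.com/demo/hello-world:latest \
  harbor.example.com/demo/hello-world:amd64 \
  harbor.example.com/demo/hello-world:arm64

# Annotate each entry with its platform
docker manifest annotate harbor.example.com/demo/hello-world:latest \
  harbor.example.com/demo/hello-world:amd64 --os linux --arch amd64
docker manifest annotate harbor.example.com/demo/hello-world:latest \
  harbor.example.com/demo/hello-world:arm64 --os linux --arch arm64

# Push the assembled manifest list to Harbor
docker manifest push harbor.example.com/demo/hello-world:latest
```

After the push, clients on either architecture can pull `demo/hello-world:latest` and receive the image built for their platform.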
As the command shows, you need to set this experimental flag before you run any commands for this experimental feature. As I just showed, we have two Helm charts, which I saved and tagged before the demo, so we can use helm chart push to push them to Harbor. Once a chart is pushed, we can check the UI in the Harbor portal and see that the Helm chart is uploaded, with a Helm icon attached to the artifact. If you click into this Helm chart, you can see all the additional information and the introduction of the chart; this is the same as what we provided through ChartMuseum. At the same time, you can also pull the Helm chart you've uploaded back from Harbor, and this is just the demo of that. We are not removing the ChartMuseum feature, because there are also existing Helm 2 users who cannot upload charts to the OCI registry and are still consuming ChartMuseum. So we keep this feature in 2.0 and 2.1, and we hope that eventually we can remove this dependency and store all charts in the OCI registry. The third thing I would like to demo is the cloud native application bundle scenario. As we know, in the cloud native world it's not just images: we have charts, we have CNAB bundles, we have OPA bundles, and we have other things like YAML files we need to upload. With the OCI support, we can manage all of those cloud native artifacts. This demo uses the cnab-to-oci command to upload a CNAB bundle to Harbor. As we can see, this is a sample CNAB bundle JSON file. Once the upload is done, we can see from the UI that the bundle is there with the CNAB icon, and if we click in through the folder icon, we can see there's one image uploaded and also one file uploaded.
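The Helm workflow shown above can be sketched as follows, using the experimental OCI commands that Helm 3 shipped at the time of this release (registry host, project, and chart names are placeholder assumptions):

```shell
# Enable Helm 3's experimental OCI support for this shell session
export HELM_EXPERIMENTAL_OCI=1

# Authenticate against the registry (host is a placeholder)
helm registry login harbor.example.com

# Save a local chart directory under a registry reference, then push it
helm chart save ./mychart harbor.example.com/demo/mychart:1.0.0
helm chart push harbor.example.com/demo/mychart:1.0.0

# Pull the chart back out of Harbor on another machine
helm chart pull harbor.example.com/demo/mychart:1.0.0
```

Note that these `helm chart save/push/pull` subcommands belong to the Helm 3 experimental feature demoed here; later Helm releases reworked the OCI commands, so the exact syntax depends on your Helm version.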
Let's quickly go through the power of OCI artifacts: besides the official formats, we can also upload a tar file or a folder, using the oras command line to manage other OCI artifacts. Here I'm going to show you how to use oras to upload a tar file. You can simply use oras push, giving a target URL and the location of your tar file. Once you have uploaded it, you can also use oras to upload a folder. Once we have uploaded both, we can check the UI portal, see the artifacts, and also pull them down to our machine. I'm just changing to a different directory and removing the previously downloaded version, and I will use the pull command to pull the tar-file artifact, as we can see. When you pull an oras artifact, oras automatically restores the format you uploaded it in, so you get the folder back and you get the tar file back. Okay, as Alex mentioned, besides pull and push there are other features, like scanning, replication, and tag retention; all of that functionality works very well with the new OCI artifacts. Here I'm just showing that scanning works. With that, let's move on to the proxy cache feature. With the proxy cache, we're actually solving two problems. One problem is that when you have an isolated environment, you can configure Harbor as your proxy to pull images from outside the isolated environment, and Harbor then caches the pulled images for future use. Let me go directly to how a user can configure this. First, the user needs to create a registry endpoint. This registry endpoint tells Harbor where you would like to pull your images from. For this demo, I'm choosing a Docker Hub registry endpoint. As you can see in the list, we try to support as many replication registry endpoints as possible.
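The oras push and pull steps demonstrated above can be sketched like this, with the registry host, project, repository, and media type as placeholder assumptions:

```shell
# Create a file to use as a generic artifact
echo "hello harbor" > hello.txt

# Push it as an OCI artifact using oras's file:mediaType syntax
# (host, project, and the custom media type are placeholders)
oras push harbor.example.com/demo/hello:v1 \
  hello.txt:application/vnd.example.file.v1+txt

# Pull it back; oras recreates the file with its original name
# (older oras releases may need a flag such as -a to allow custom media types)
rm hello.txt
oras pull harbor.example.com/demo/hello:v1
```

This is the same mechanism that lets any tool, not just Docker or Helm, store content in an OCI-compliant registry.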
Once you have configured this Docker Hub registry endpoint, what you further need to do is create a proxy cache project. You create a project, check the proxy cache option, fill in the endpoint you have just configured, and click OK. Once this is done, you can see a new project is created, and you can switch to the command line to pull images. Here, as you can see, I removed the previously downloaded image, and I will pull a new image through this proxy cache project. Besides Redis, I can also pull MySQL, and I can also pull hello-world; whatever you have in Docker Hub, you can pull it. Once you have pulled it successfully, you can see the new repository is cached in Harbor in this proxy cache project. I think that's all of my demo, given the time. For the other features we've introduced, please join our deep-dive session, where we will go into more detail.