who's joining us today. Welcome to today's CNCF webinar, Harbor: the Trusted Cloud Native Registry for Kubernetes. I'm Jeffrey Seca. I'm a senior software engineer at Red Hat and a CNCF ambassador, and I'll be moderating today's webinar. We would like to welcome our presenter today, Michael Michael, maintainer of Harbor and co-chair of Kubernetes SIG-Windows at VMware. A few housekeeping items before we get started. During the webinar you are not able to talk as an attendee. There is a Q&A box at the bottom of your screen; feel free to drop your questions in there and we'll get to as many as we can at the end. This is an official webinar of the CNCF and as such is subject to the CNCF code of conduct. Please do not add anything to the chat or questions that would be in violation of that code of conduct; basically, please be respectful of all of your fellow participants and the presenters. Please note the recording and slides will be posted later today to the CNCF webinar page. The URL will be posted in the chat, but it is cncf.io/webinars. With that, I'll hand it over to Michael to kick off today's presentation. Yeah, thank you so much for the introduction, and hello everybody. Good morning, good afternoon, or good evening depending on where you are in the world. There's a lot going on with Harbor lately, so I'm super excited to come here in front of all of you and talk a little bit about the latest news on Harbor, our latest release, and also give you a deep-dive demo of some of the capabilities of Harbor. I want to mention one thing about Q&A: please use the Q&A that Zoom provides here. If I see a question as I'm talking about a specific topic, I'll interrupt myself and try to answer it; otherwise I'll catch up later on. All right, so let's go ahead and get started here.
I'm assuming all of you should be able to see my screen; we've turned off video so that it's easier for you to see a full view of the presentation. So we'll be talking about Harbor today, the trusted cloud native registry for Kubernetes. When we talk to users about Harbor, as well as the bigger community, a lot of things come up right at the beginning of that discussion. What are some of the challenges that folks have when managing their cloud native artifacts? When we look at the landscape today and the CNCF survey that came out for 2019-2020, we saw one of the biggest jumps in the use of containers in production: the actual numbers were 15% to 84%. That's super dramatic. That's huge. It means the technology is maturing; it means more enterprises, more users are putting containers in production. Now, as deployments get larger and more ubiquitous and cloud native adoption becomes mainstream, you also need a registry. You need a repository for all your cloud native assets. You can't really operate Kubernetes without a registry, and that makes Harbor a key ingredient in any cloud native environment. On top of that, according to Sysdig and their annual report, 52% of container images scanned by Sysdig have known vulnerabilities. Think about that number for a second: 52%. Now, if I'm an enterprise or a user, what are some of the problems I have, given that I'm putting a lot of containers in production and a lot of them could potentially have vulnerabilities? Well, I want consistency of policy and access for my registry. I want to know what my policies look like and how they're deployed, and I want to know who has access to my registry, who is able to push or pull images. I want a common way to describe that policy for consistency and security. Really, I want to be able to enforce my organization's compliance policies as they relate to my artifacts.
And last and most important, I want peace of mind that when I'm deploying artifacts in production, they're free from vulnerabilities and secure before I push them to Kubernetes or any other container runtime. So what is Harbor? Harbor is a Cloud Native Computing Foundation incubating project. We actually have 12,000 stars on GitHub, the common metric for gauging projects today. In a nutshell, Harbor is an open source registry that secures artifacts with policies and role-based access control, ensures images are scanned and free from vulnerabilities, and signs images as trusted. Our mission as a project is to be the most secure, performant, scalable, and available cloud native repository for Kubernetes, and we do that by delivering compliance, performance, and interoperability to help you consistently and securely manage artifacts for Kubernetes. Consistency, compliance, and security are super key words in terms of how we view Harbor, our vision for the project, and how we develop it. So let's take a look at some of our core tenets, keeping in mind what I mentioned earlier: our vision is to give you consistent image management for Kubernetes. We start with ownership and deployment. We give you the ability to deploy Harbor as a packaged offering in your own data center, on your own compute nodes. You get to own it. You get to integrate it with existing tools and services you have in your own data center. We give you multi-tenancy, and that comes in two different modes: role-based access control with a flexible model for defining users and their permissions, and isolation at the project level, so every project can be tailored to the needs of the individual users who are going to be using it. At the core of Harbor is policy. We can push images, we can pull images, we do all the standard things that registries do.
But the most important thing that we enable is that policy. We give you the ability to create quotas, so you don't allow any project to run away from you. Retention policies, so you can enforce compliance; for example, I don't want any image older than one year to be in my production repository. Immutability: don't allow anybody to override stable images. We're able to sign: we use the Notary capability from the CNCF to allow you to sign images and verify provenance. And vulnerability: being able to set up your vulnerability policy in terms of which apps can be deployed in production from container images, depending on whether they have critical, high, medium, or low vulnerabilities. At the core of security and compliance is identity and access management: being able to federate your identity as a user from any identity provider you own, whether that's LDAP, Active Directory, or anything else; the ability to do scanning, so you can scan your images and verify they're free of vulnerabilities; and CVE exceptions, the ability to create exceptions for certain CVEs that you don't have a solution for yet. The last tenet is extensibility. We need to allow our users to deploy Harbor within their own infrastructure and make it compatible with existing investments in infrastructure and services. For example, if you're using specific CI/CD tools, we need to give you webhooks so you can integrate with those tools, or integrate the webhooks with reporting or configuration management databases. We give you replication so you can push and pull images from Harbor to any other container registry. We give you pluggable scanners so you can bring your own static analysis scanner; I'm going to show that to you in a demo later on. And we have a full REST API: anything you can do in the user interface of Harbor, you can also do through the API.
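As a rough illustration of that API surface (this is a sketch only: the host, the credentials, and the `v2.0` path prefix are assumptions, so check the Swagger spec bundled with your Harbor version), a headless client might assemble its calls like this:

```python
import base64

# Hypothetical Harbor host -- replace with your own installation.
HARBOR = "https://harbor.example.com"

def api_url(path: str) -> str:
    """Build a Harbor v2.0 API URL (prefix assumed from the 2.x docs)."""
    return f"{HARBOR}/api/v2.0/{path.lstrip('/')}"

def basic_auth_header(user: str, password: str) -> dict:
    """Harbor's API accepts HTTP Basic auth for users and robot accounts."""
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    return {"Authorization": f"Basic {token}"}

# Example (not executed here): list projects with the requests library.
# import requests
# resp = requests.get(api_url("projects"),
#                     headers=basic_auth_header("admin", "password"))
# print(resp.json())
```

The same pattern covers anything the web portal does, since the portal itself is built on this API.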
We have robot accounts so you can do headless configuration and execution against Harbor, for example in CI/CD, and CLI secrets. If you look at this core schema, this is what we have put down as table stakes in Harbor, and we're executing all our features and functionality against these tenets. Let's take a look at the Harbor architecture here for a second. This is a little bit of a busy slide, and I'm going to use my mouse here to navigate it. In the middle, we have all the core capabilities of Harbor. Harbor can be deployed using Docker Compose or a Helm chart. Essentially, Harbor consists of a number of services that can be individually scaled in a Kubernetes cluster and that deliver the core functionality of Harbor: everything from configuration management, quotas, the signature manager, retention, notifications, replication, et cetera. So a collection of services builds the foundation of what Harbor is. Underneath that, we have our garbage collection controller; integration with ChartMuseum, which, up to our latest release, is how you could push and pull Helm charts into Harbor; our integration with Notary for signing; and then Docker Distribution for pushing and pulling container artifacts. On the bottom layer is our data access. We have three data access capabilities in Harbor. The first one is our key-value storage, which is built on top of Redis, where you can deploy Redis standalone or in a highly available manner. We have our local or remote storage, which is where we actually store the artifacts: anything you push into Harbor goes into our remote or local storage. It could be a persistent volume in Kubernetes, for example, it could be filesystem-based, or it could be object storage. And last is the configuration management database for Harbor. This is where we keep our policy, our access permissions, role-based access control.
Everything Harbor has in terms of knowledge, the brain of Harbor, goes into this Postgres database. On the left, we have our identity providers. We support Active Directory and LDAP, as well as OIDC. This is how you can federate your identity so that you don't have a different user model for your registry compared to the rest of the tools and services in your organization. At the front, we have NGINX, which front-ends our API routing and acts as a reverse proxy for Harbor. And then on the consuming side, we have a variety of tools that we support for interacting with Harbor. We have our web portal, where you can view all the information for Harbor, do configuration management, and define and implement policy; that's built on top of the REST API I mentioned earlier. The kubelet from Kubernetes can pull artifacts from Harbor. We have our Helm integration so you can push and pull Helm charts, and the Docker and Notary clients for being able to not only push and pull images but also sign them and verify the provenance of your images. And last, and this is new in the latest release, is ORAS, for pushing and pulling OCI-compatible artifacts. Speaking of integrations, one of our key pillars: we have integrated with a variety of external tools and projects to enable both the replication and the scanning capabilities of Harbor. Let's take a look at replication here for a second. Harbor can define replication where you can push or pull artifacts from Harbor and interact with Docker Distribution, Docker Hub, Huawei Cloud, Amazon, Google, Azure, Alibaba, Quay, Artifactory, GitLab. That list is huge. We started with just a couple of these replication providers, and then we've expanded the list with every single release of Harbor. Why do we do that? Because it's important to our users.
They've told us that they want Harbor to be at the center of a hub-and-spoke model, where Harbor can be in charge of the policy, Harbor can be in charge of scanning images for vulnerabilities, but then be able to push the images where they need to run. So for example, if you're running your workloads on Azure, you need to be able to scan your images in Harbor and then push them to the edge, or to the region where Azure is running, and have your images be locally available to your Kubernetes clusters. It's all about offering choice to our users. And the scanning providers: that's a new feature that shipped with Harbor 1.10 at the end of 2019. We essentially created a pluggable adapter engine where different security companies can come in and introduce their static analysis scanner into Harbor. So we've opened it up. We're all about extensibility, all about flexibility. If an organization is using, for example, Anchore Engine or Anchore Enterprise and they would like that to be the scanner for their Harbor projects, they get to use that. If they're using Aqua Trivy, you have choice there as well. You can use DoSec as well. And we're actually working with Sysdig right now so that we can enable their scanner too. So let's take a look at some of the features and progress that Harbor has made up to version 2.0, and I want to specifically call out that I'm stopping at 2.0 because I have a different slide for that. I'm going to demo pretty much all of these capabilities later on. Image retention is the ability to say: hold an image for the last 90 days, or keep only the latest 10 images in my Harbor project and delete everything else. Project quotas: be able to enforce a policy that says, don't allow any project to exceed a terabyte of storage, so that no user creates a situation where you run out of space at your data tier.
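The "keep only the latest 10" retention rule can be sketched as pure logic. This is a hypothetical helper showing the policy semantics, not Harbor's implementation (Harbor evaluates retention rules server-side against artifact metadata):

```python
def retain_latest(tags, keep):
    """Split tags into (kept, deleted) under a 'retain the most
    recently pushed N' rule. Tags are dicts with a push_time field;
    the newest `keep` survive, everything else is deleted."""
    ordered = sorted(tags, key=lambda t: t["push_time"], reverse=True)
    return ordered[:keep], ordered[keep:]

tags = [
    {"name": "v1", "push_time": 1},
    {"name": "v2", "push_time": 2},
    {"name": "v3", "push_time": 3},
]
kept, deleted = retain_latest(tags, keep=2)
```

An age-based rule ("nothing older than one year") would be the same shape, filtering on `push_time` against a cutoff instead of slicing.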
Webhook events: being able to be notified via HTTP callbacks when an event happens in Harbor, for example scanning completed, scanning failed, an image was uploaded, an image was deleted, and so on and so forth. New replication targets: as I mentioned earlier, we'll be adding more and more replication targets so Harbor can interoperate with as many container registries out there as possible. CVE exception policies allow you to define a CVE as an exception, so that if the CVE is a critical vulnerability that would prevent your image from being pulled, it temporarily grants you permission to pull the image until you get the ability to fix the library that introduced the CVE. Immutability: being able to define an image as immutable. Don't change that image because it's important; it's my stable v2 release, I don't want anybody to ever update it. Pluggable scanners, which I covered a second ago. And the last is identity and access management improvements: we added OIDC group support in 1.10, we had limited guest support in 1.10 as well, and many more. Which brings us to some of the Harbor news. A couple of weeks ago, we shipped our latest release of Harbor, and probably one of the largest to date: Harbor version 2.0. We actually changed the major release number from a semantic versioning perspective. We've revamped the website; we worked very closely with the CNCF and their team to build a brand new site. So if you go to goharbor.io, you're going to notice a brand new website with documentation built in, including search, making it easier for our users to find content and learn more about Harbor. We released the Harbor operator about a month and a half ago, in collaboration with OVHcloud, one of the public cloud providers in Europe. Basically, the Harbor operator paves the way for lifecycle management for Harbor.
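A webhook receiver for those HTTP callbacks only needs to parse a JSON body. The payload shape below is an approximation of Harbor 2.x's webhook format (the field names `type`, `occur_at`, `operator`, and `event_data` are assumptions based on the docs; verify against your version):

```python
import json

# Sample body approximating what Harbor POSTs for a push event.
sample = json.dumps({
    "type": "PUSH_ARTIFACT",
    "occur_at": 1590000000,
    "operator": "admin",
    "event_data": {
        "repository": {"repo_full_name": "library/hello-world"},
        "resources": [{
            "tag": "latest",
            "resource_url": "harbor.example.com/library/hello-world:latest",
        }],
    },
})

def summarize(body: str) -> str:
    """Turn a webhook body into a one-line summary, e.g. for a
    chat notification or a CMDB entry."""
    event = json.loads(body)
    res = event["event_data"]["resources"][0]
    return f'{event["type"]} by {event["operator"]}: {res["resource_url"]}'
```

A real endpoint would sit behind a small HTTP server and route on `type` (scan completed, scan failed, delete, and so on).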
We're not at version 1.0 yet, but we're working towards that. Eventually, the goal is that you're going to get an all-in-one installation experience for Harbor, including high availability. You're going to be able to deploy, scale, upgrade, and delete Harbor. Provisioning alone is huge, let alone the ability to do full lifecycle management. So we're super excited about that, and we look forward to enhancements in the operator pattern that we've developed for Harbor. And last but not least, Harbor is up for graduation in the CNCF, and the voting already began on Tuesday, May 26th. If you are a user of Harbor, if you are a contributor to Harbor and you love our project, go and put in your vote of confidence for the project. Let's take a look at Harbor 2.0 a little more deeply here. The core functionality of the release has all been around OCI. Before, Harbor only supported container images and Helm charts, and while that was sufficient for a huge percentage of our user base, it was not enough as the cloud native ecosystem matures. More and more artifacts are being created that potentially need to be stored somewhere, with policy applied on top of them as well. And this is where OCI support comes in. It allows us to standardize on a single image format, OCI, so that all the registries that are OCI compliant can speak the same language in terms of the image format they support. By adding OCI compliance, Harbor is now able to support a wide variety of artifacts, not just container images and Helm charts. I'm going to show you that in a second. Aqua's Trivy was one of the pluggable scanners that we released in version 1.10. We worked with the Aqua team, and we're super excited to announce that Trivy is now the default scanner in Harbor.
So when you create a brand new project in Harbor and you're on Harbor 2.0, you're going to notice that Trivy is now the built-in, batteries-included default scanner for your projects. We're adding service-to-service SSL. Many of our users are concerned about security and man-in-the-middle attacks, and they wanted us to enable certificate-based SSL for all the core services of Harbor. If you think about the diagram I showed earlier with all the different components of Harbor, the services that define its core functionality, now all those services can talk to each other in an encrypted fashion. Based on customer feedback, we added the ability for robot accounts to set an expiration; I'm going to show that to you in the demo later on too. And we've extended our support for CI/CD integration and automation with the ability to customize the triggers for webhooks, as well as Slack integration. For the folks that are using Slack as the means to be notified, perform work, and communicate with their teams, now you can receive your webhooks in Slack. There are additional things like tag improvements and a UI dark mode based on customer feedback. The list is endless, but at a high level, Harbor 2.0 has been all about OCI, OCI, OCI. Let's take a look at the OCI support for a second. We went from Docker images and Helm charts to full OCI now. So now we support Docker images, we support Helm charts, we support CNAB bundles, we support Open Policy Agent bundles, we support Singularity images. In essence, anything you can bundle into the OCI format using the OCI index can be pushed into Harbor. Now, that doesn't mean that all those artifacts will be able to take full advantage of all of the features of Harbor, but you can at least utilize some of them: for example, our policies around quotas and retention, and for some of those artifacts, scanning and security policies.
Those apply as well. So this is key: we went from two supported formats, container images and Helm charts, and now we've expanded the ecosystem while still taking advantage of the Harbor policy that our users have been loving so far. Now, the important thing here is that as OCI continues to advance its spec, Harbor will be able to go along for the ride. The industry standard here defines both the image spec and the runtime spec, and it applies whether you're using Harbor as your container registry or something else: we're all speaking the same language. Taking a look here at the OCI index and Docker manifest list: I don't want to go into too much detail, but the key thing I want to show you is that Harbor can crack open the OCI index and the Docker manifest list. It can understand the configuration for your digest, the different layers you have in your image, and what the manifest looks like. That's important: we can understand that. And more importantly, very likely in one of the next two releases of Harbor, we're going to allow end users to push their own metadata for specific OCI artifacts, so we can understand even more artifacts beyond CNAB bundles, Singularity images, and the things that are commonly available. For example, if you wanted to push machine learning or AI artifacts into Harbor, you could also push your own manifest and metadata that describes them, and Harbor would understand that. We're trying to do that via a plug-in interface, and we're working with the community to identify the best way to enable it. I also want to talk a little bit about Trivy before we dive into the demo. Trivy, like I mentioned, is now the default scanner in Harbor. The reason we made Trivy the default is that it aligns very well with our vision in Harbor. It's a simple, comprehensive, very fast vulnerability scanner. It has significant OS package support.
It supports everything from Alpine to Red Hat Universal Base Image, Red Hat Enterprise Linux, CentOS, Oracle Linux, Debian, and Ubuntu for the OS; you name it, Amazon Linux, it has support for all of those, even distroless images. It has significant application dependency scanning: Bundler, Composer, npm, Yarn, Cargo, and others. And it can even do deep scanning, where it analyzes not only the top layer but also the middle layers to find libraries and their versions. We got tremendous feedback from the community that they loved Trivy after 1.10, when we had it as a pluggable scanner, so it was important for us to make it the default. Now, if you're using Clair today, that doesn't mean Clair goes away. Clair continues to ship with Harbor, so now we ship both Clair and Trivy as the two scanners in Harbor; it's just that Trivy is now the default. If you're upgrading from a previous installation of Harbor where Clair was the default, you will not be affected; we're not changing that experience for you. All right, it's demo time, but first there are a few questions here. Let me see if I can answer them before I switch to the demo. Is it possible, one of our users is asking, to integrate third-party vulnerability scanners into Harbor? The answer is yes. Today we have Anchore Enterprise and Anchore Engine, we have Aqua Trivy as well as Aqua CSP, we have DoSec, we're working with Sysdig to integrate their vulnerability scanner, and then obviously Clair, which was our first vulnerability scanner in Harbor. As we work with more of the community around security scanners for container images, we're going to introduce more capabilities in that arena. These are pluggable models, so all those scanners ship out of band of the core Harbor capabilities. Next questions: do you support SAML for auth? Can I use Oracle DB instead of Postgres?
We only support Postgres today, but if any of our users are interested in using a different Postgres-compatible database, please join our community meetings. Let's discuss your use cases, see if we can get support from the bigger community, and if you're also interested in coming and helping us deliver that, maybe you can do that. We do support SAML for auth through our identity federation. However, one important thing I want to mention is that when you're authenticating into Harbor, you cannot put claims or entitlements into that authentication. Our identity integration is just authentication, not authorization; Harbor depends on its own RBAC model for that. Is it possible to get the results of a security scan outside of Harbor? Yes. You can get scanning results through the webhook integration, as well as the API. Harbor has a full-blown API where you can get the scanning results for individual artifacts. Can I scan Windows-based images? There are pluggable scanners out there that are working on being able to scan Windows-based images. I believe Aqua is working on that in their enterprise solution; they might even have it ready. But none of the open source scanners support Windows images, as far as I know, today. Yes, we do handle multi-architecture images; this is a key capability of Harbor today. Another question: how do you handle vulnerabilities tied to a specific process or architecture? The vulnerabilities are tied to the actual image. So if the vulnerabilities are tied to a specific architecture implementation of your image that includes certain libraries, then the vulnerabilities will be shown on that image specifically. Next question: Harbor garbage collection takes a long time, especially when we have terabytes of images; are you guys doing anything in that direction? If you don't mind, I'll answer that towards the end, because we're working on something like that.
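On fetching scan results over the API: in the v2.0 API, the vulnerability report for an artifact lives under an "additions" endpoint. The sketch below builds that URL; both the path layout and the double-encoding of nested repository names are assumptions worth verifying against the Swagger spec your Harbor ships with:

```python
from urllib.parse import quote

def vulnerabilities_url(base, project, repository, reference):
    """Build the v2.0 additions/vulnerabilities URL for an artifact.
    Nested repository names ('team/app') have their slash percent-
    encoded twice (%252F) so the API router treats the name as one
    path segment -- a quirk to confirm on your Harbor version."""
    repo = quote(quote(repository, safe=""), safe="")
    return (f"{base}/api/v2.0/projects/{project}"
            f"/repositories/{repo}/artifacts/{reference}"
            f"/additions/vulnerabilities")

url = vulnerabilities_url("https://harbor.example.com",
                          "library", "team/app", "latest")
```

The `reference` can be a tag or a digest; a GET on this URL (with the auth header shown earlier in the registry docs) returns the report the UI renders.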
Another question: do you support externalizing Redis, Postgres, and S3 as part of the Harbor installation? Absolutely. If you use our Helm chart deployment, there are options to bring your own Redis, Postgres, or S3 in a highly available fashion. So you can have a fully distributed as well as segmented installation of Harbor that includes HA deployments of these tools in a separate cluster or even a separate data center. How does replication with other container registries work? I will show you that in a second. Essentially, we've created a provider model where Harbor connects to the other container registry and can either push or pull artifacts from that registry. The question around logging: do you log reports, is there usage logging? Yes, there's quite a bit of logging in Harbor in terms of usage, pushes, pulls, and deletions, and with our webhooks you're able to get a lot of insight into what's happening. But we will always continue improving our auditing capabilities here, because we feel like we can do better. Question: do you support images of more than 10 gigabytes? I don't know if anybody has tried to push a 10 gigabyte image to Harbor, but I'm fairly confident it should work, provided the client you're using to push it supports it. For example, if you're using the Docker client or ORAS to push a 10 gigabyte image, it should work in Harbor. Feel free to try it out, and if there are any issues, just let us know; as far as our testing goes, we haven't seen any problems. There are a couple more questions here that I'll try to answer later on, in the interest of time. So let's switch to the demo here really quickly. I'm going to walk you through all of Harbor, and then I'll get back to some of the questions that I can answer as I'm giving the demo. The first thing I want to show you is: check this out. Dark mode, brand new feature in Harbor 2.0.
So now, for the folks that want an interface that's a little easier on the eyes, you can enable dark mode. For the purpose of this demo, I'm going to switch to light mode, because I think it just demos a little bit better when I'm sharing my screen here. So, changing this to light mode. I'm going to try to do a very quick deep dive across every facet of Harbor, and hopefully that's going to answer some of the questions that have been coming up. The first thing I'm going to show you is the administration of Harbor. Over here we have our set of users. In this case, I'm using the default user store for Harbor, which is the database; our user store is in the Postgres database, but I could have used OIDC or LDAP or Active Directory as my integration. I can create a new user and provide a username, password, and some comments here, and then I can create that new user in Harbor. If I were using identity federation, I wouldn't have that need. I can set any of these users as an administrator, and that makes them a global administrator in Harbor. Here we have our registries; this is where some of the questions around replication come in. If I want to replicate between Harbor and another container registry, I have to first create an endpoint. I can create an endpoint and set what the provider is: AliCloud, AWS, Azure, Docker, all of those integrations I mentioned earlier. Once you define that, you give it a name; in this case, it might be my Ali Docker instance, for example. You can give it a description. I provide an endpoint URL, an access ID, a secret, and if the endpoint is using SSL, I can verify the remote cert. And that's it; I can create my integration. I won't go ahead and create one right now, but I'm going to show you a couple that have already been created. In this case, we have another Harbor instance that's acting as the additional registry endpoint.
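That same endpoint-creation form maps to a POST against the registries API. The payload below is a sketch only: the field names are assumptions drawn from the 2.x API, and the endpoint values are placeholders, so check the Swagger spec shipped with your Harbor before relying on it:

```python
def registry_endpoint(name, provider, url, access_id, secret,
                      verify_cert=True, description=""):
    """Assemble the JSON body for creating a replication endpoint
    (assumed to be POST /api/v2.0/registries). Field names are
    guesses from the 2.x API docs, not verified constants."""
    return {
        "name": name,
        "type": provider,             # e.g. "docker-hub", "harbor", "aws-ecr"
        "url": url,
        "credential": {"type": "basic",
                       "access_key": access_id,
                       "access_secret": secret},
        "insecure": not verify_cert,  # mirrors the "verify remote cert" toggle
        "description": description,
    }

body = registry_endpoint("my-ali-docker", "docker-hub",
                         "https://hub.docker.com", "user", "s3cret",
                         description="demo endpoint")
```

Posting `body` with admin credentials would create the same endpoint the UI form does, which is handy for scripting many endpoints at once.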
So you see the credentials and the secret here. And then we also have a Docker distribution here that's acting as the replication endpoint for Harbor. Looking at the replications: this is where you actually get to create the replication rule. In this case, I can create a new replication rule; I can call it rule one. You get to decide if you want your replication to be push-based or pull-based: do you want to pull images or push them? And you can define the source filter: what do you want to replicate? You get to specify the names, the tags, the labels, and then the type of resource, whether it's an image, a chart, or an artifact. So you get to tell Harbor: here's the scope of what I want you to push to or pull from this other registry. And for the destination registry, I get to pick either the Docker registry I created or the other Harbor instance, and I also get to pick the namespace we're going to push things into. Let's take a look at one of the rules I have here, the hello replication rule. It's targeting the Docker distribution, going into a library called hello-world, and getting the latest image from there. Very simple. You can make this as complicated or as simple as you need, based on the needs of your organization. So you can push and pull images into Harbor. The next thing I want to show you is project quotas. This is a new feature as of our last couple of releases, where you can define an upper boundary on the storage you want to allow on a per-project basis. In this case, I can come in and edit the library project and say I would like the library to not exceed one gigabyte of storage, for example. Once I define that, Harbor will ensure and enforce that my library stays within that one gigabyte boundary. Right now it's using about 20 megs, so I'm in no fear of exceeding that number.
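The quota check itself is simple arithmetic; this sketch shows only the policy semantics (Harbor enforces this server-side at upload time, and the numbers here are the demo's hypothetical 1 GiB quota and 20 MiB usage):

```python
GIB = 1024 ** 3

def push_allowed(current_bytes, push_bytes, quota_bytes):
    """Would this push keep the project within its storage quota?
    A sketch of the rule, not Harbor's enforcement code."""
    return current_bytes + push_bytes <= quota_bytes

# Project using ~20 MiB with a 1 GiB quota, pushing a 100 MiB image:
ok = push_allowed(20 * 1024**2, 100 * 1024**2, 1 * GIB)
```

When a push would cross the boundary, Harbor rejects the upload rather than silently overrunning the quota.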
But in an organization where you want to set boundaries, this is super, super important. Interrogation Services is where some of the earlier questions from the chat come in. Here we have all the scanners that this particular Harbor installation supports. I have Trivy and Clair, and I can expand each of them and it will tell me where Trivy is deployed, the schema that we support, the version it is, and who the vendor is that created it. I have the endpoint here, which is the trivy-adapter on port 8443, and it's healthy, it's enabled, and in this case I'm not using an authorization mechanism. Clair is also deployed; like I mentioned earlier, we're not removing Clair. But I can add a new scanner. So if I have a new scanner, for example if I'm using Anchore and I want Anchore to be my default scanner, I can deploy that; we have extended documentation on how to deploy each one of these scanners. Once I deploy it, I can provide the details here, add it to Harbor, and immediately it will be available for my projects to use. On the vulnerability side, we indicate when the vulnerability database was last updated, because that's an important aspect of the scanners. And you can define schedule-based scanning so that artifacts can be scanned daily, hourly, or weekly, depending on the needs of your organization. Garbage collection: today, garbage collection can be scheduled to run daily, weekly, or ad hoc. And we've added an extension here where garbage collection can also delete untagged artifacts; these are artifacts that don't have a tag today. Think of them almost as orphaned in Harbor. Now, the important thing here is that we understand a lot of users are saying that garbage collection today takes a long time, and your repository is also locked so that it's not available for pushing images. We are working on enabling, in the next release of Harbor, the ability to do online garbage collection without downtime.
This is an important capability that's only becoming possible because Harbor now tracks all the layers of all the artifacts in our own database. We no longer depend on an external tool like Docker distribution to track the layers; we track them internally in Harbor. That means we can let you push and pull artifacts without impacting the garbage collection that's happening behind the scenes, because Harbor has full control over that process now. We're working on that, and hopefully the 2.1 release at the end of the summer will have it. The last component here as an administrator is our configuration. We have the authentication mode, and like I mentioned, in this specific installation it's database, but it could be OIDC or LDAP/Active Directory. Then we have some of our system settings. Who can create projects in Harbor: everyone, or only administrators? Do you want to delegate to Harbor's end users the ability to create their own projects? Tokens should expire after what timeline? How about robot accounts, what's their default token expiration? I'm using SSL in this case, so can I download the root certificate so that I can actually make TLS calls? And the last part is the CVE whitelist, which I'm going to show you when we go into the project. Certain of my images might have CVEs with no fix available yet. I want to create a CVE whitelist so that those CVEs don't trip my vulnerability policy that says any image with a high-or-above vulnerability cannot be pulled. So I can add CVEs to the whitelist here and even include an expiration date for when they should be removed. After that date, the CVE whitelist no longer applies, and I'd better have those libraries fixed. Let's take a look at the projects now. We've talked a lot about management and operations so far; let's look at the projects.
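The whitelist-with-expiration semantics just described can be sketched like this. The entry shape and field names are illustrative assumptions; only the behavior (an expired entry stops exempting its CVE) is taken from the talk:

```python
from datetime import date

def effective_whitelist(entries, today):
    """Whitelist entries that are still in force on `today`."""
    return {e["cve_id"] for e in entries
            if e["expires"] is None or today <= e["expires"]}

def blocking_cves(scan_findings, entries, today):
    """CVEs that still count against the 'block vulnerable pulls' policy."""
    return sorted(set(scan_findings) - effective_whitelist(entries, today))

wl = [
    {"cve_id": "CVE-2020-0001", "expires": date(2020, 12, 31)},
    {"cve_id": "CVE-2019-9999", "expires": date(2020, 1, 1)},  # already expired
]
print(blocking_cves(["CVE-2020-0001", "CVE-2019-9999"], wl, date(2020, 6, 1)))
# ['CVE-2019-9999']
```

Once the expiration date passes, the CVE counts against the policy again, which is exactly the "I'd better have it fixed by then" behavior described above.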
This gives me a full dashboard view of all the projects in Harbor. I'm going to dive into my library project here, which has one repository, a few Helm charts, and a few other things. Coming here from the initial view, I see the registry certificate. Since I've enabled TLS on this installation, I can download the certificate and install it locally on my machine so I can make calls against this Harbor installation, whether I'm using ORAS or the Docker client. We also give you a helpful push command: if I want to see all the different commands for tagging and pushing artifacts here, I can copy them, which makes it easy to either hand them to someone else or paste them into my CLI. Let's take a look at our artifacts. We have a few artifacts here, and the important thing is I'll try to give you a view of all the different things Harbor supports today. You've got Docker images, with the nice little Moby icon; more Docker images here; and a Docker image here that's built on an OCI index and has an artifact list. Notice: a multi-architecture image, ARM and AMD64. And in this case the AMD64 variant has some vulnerabilities while the ARM one does not, which answers one of the questions someone asked earlier. Then we have a Helm chart. Helm charts can be stored in Harbor in two ways today: using ChartMuseum, which I'm going to show you in a second, or using OCI. With OCI, Helm charts are treated as artifacts, so I can click on my Helm chart and see an overview of the chart, the values defined as part of the chart itself, and the different tags that have been created for it. We also have CNAB bundles. So here, let me open up the OCI index for the CNAB bundle.
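The multi-architecture image shown above is an OCI image index: one manifest list pointing at per-platform manifests, which is why Harbor can scan the AMD64 and ARM variants separately. A pared-down sketch of reading one (digests shortened to placeholders, not real values):

```python
import json

# A minimal OCI image index (application/vnd.oci.image.index.v1+json),
# trimmed for illustration; real digests are full sha256 hashes.
index_json = """
{
  "schemaVersion": 2,
  "manifests": [
    {"mediaType": "application/vnd.oci.image.manifest.v1+json",
     "digest": "sha256:aaa", "size": 527,
     "platform": {"architecture": "amd64", "os": "linux"}},
    {"mediaType": "application/vnd.oci.image.manifest.v1+json",
     "digest": "sha256:bbb", "size": 527,
     "platform": {"architecture": "arm64", "os": "linux"}}
  ]
}
"""

def architectures(index):
    """List the platform architectures referenced by an image index."""
    return [m["platform"]["architecture"] for m in index["manifests"]]

print(architectures(json.loads(index_json)))  # ['amd64', 'arm64']
```

Each entry is its own manifest with its own layers, so a vulnerability in an amd64 layer says nothing about the arm64 build.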
So if you are a fan of Cloud Native Application Bundles, you get to push and pull those in Harbor as well. And not only do we support all these artifact types, but for the cases where it's supported, we also give you the ability to do vulnerability scanning on them and show you the results. In this case I'm seeing one high, three medium, and one low vulnerability for this CNAB bundle, identified by the SHA on the left side. A couple of other things: I can copy a pull command here, again for ease of use, so if you want to pull something from Harbor you can easily copy that. I'll also show you the tags on each of these images, the size, the vulnerabilities like I mentioned earlier, when it was pushed, and the latest pull time. Going back, we also have the ChartMuseum integration, which we want to deprecate over time. But right now you get to see all of your Helm charts here, with some of the same details we saw earlier. When I drill into a Helm chart, I see the summary as well as the values the chart defines. I'm going to go ahead and delete this Helm chart, the MariaDB chart, and I'll show you in a second why I did that. Remember it. Now we have members. This is where role-based access control for the project kicks in. Each project is fully isolated in a multi-tenant way and can have its own users and permissions. For every user you add to the project, you assign a role: project admin, master, developer, guest, limited guest. And you might ask, what do all these roles mean? Well, let me take advantage of two things we have. First, I'm going to go to goharbor.io; this is our brand spanking new website. I can click on Documentation here, and I'll go to the documentation for 2.0.
I'm going to do a search here for user permissions. So you want to find out what the user permissions look like? I do a search, and here are the user permissions per role, for every one of the roles in Harbor. Want to know who can see the list of project replication jobs? Only the project admin. This makes it easy for you to identify what the RBAC permissions look like in Harbor. Then we have the scanner. Each project can choose its own scanner based on the needs of the project. For example, in this case I'm using Trivy, which is the default scanner, but I could change that to Clair and say, I want to use Clair for scanning this project from now on. Again, user choice based on your business and organizational needs. Now let's get into the policies; this is where things start getting fun. I can create a rule that says: for all the repositories in this project, retain the artifacts that were pushed within the last 90 days. Very powerful. I'm effectively saying, for compliance reasons I cannot have images older than 90 days, so go ahead and delete them. You have a lot of flexibility in how you define the rule and its semantics, and whether retention is based on most recently pushed, most recently pulled, number of days, or always retaining. We also have tag immutability. Like I mentioned, you might want to say: don't go and update an image after it's been tagged as stable. So I can create a rule that says, for repositories matching star, tags matching stable, mark them as immutable. That means when I mark something as immutable (oops, sorry, I meant to hit edit) nobody can update it and nobody can delete it. I would have to go and forcefully change the immutability rule to be able to change those images. Very powerful for organizations where compliance is key.
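The immutability rule from the demo (repositories matching star, tags matching stable) boils down to glob matching. A minimal sketch, with the rule shape as an illustrative assumption:

```python
from fnmatch import fnmatch

def is_immutable(repo, tag, rules):
    """A tag is immutable if any rule's repo and tag patterns both match."""
    return any(fnmatch(repo, r["repo"]) and fnmatch(tag, r["tag"]) for r in rules)

rules = [{"repo": "*", "tag": "stable"}]

print(is_immutable("library/hello-world", "stable", rules))  # True
print(is_immutable("library/hello-world", "latest", rules))  # False
```

A push or delete targeting a matching tag would then be rejected until the rule itself is edited, which is the "forcefully change the rule" step mentioned above.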
We have our robot accounts, and these are very useful for CI/CD. I want to create a new robot account; in this case we'll call it demo-one. I'm going to set an expiration date of tomorrow and a description of demo robot account. And I get to define some permissions for it: do you want it to be able to push and pull artifacts, or Helm charts? In this case I'm only going to allow it to pull, so it only has permission to pull images for CI/CD. I save it, and now I get a JWT token that I can store for interacting with Harbor, in the context of this project only, as a robot account with pull-only permissions. Webhooks. This is a critical component for integrating Harbor with other components in your infrastructure. I can create a new webhook, define a name, individually pick the events that are of interest to me, and set an endpoint URL, and Harbor will make sure it pushes all the events that are relevant, based on my configuration, to the endpoint I created. In this case I have a webhook test site that I set up, and I want to show you that earlier I deleted the Helm chart, and here's the webhook I received for it, because I subscribed to the delete-chart event. Once I deleted the MariaDB chart, I got back a notification showing that I deleted it. So you can do reporting based on this, update a configuration management database, drive an action through another tool, kick off a workflow, or anything else. The last part of the project is the deployment policy around content trust and vulnerabilities. When you enable content trust, you instruct Harbor to allow only signed, verified images to be deployed. Or you instruct Harbor to prevent vulnerable images from running if they have, for example, high or critical severity. I can also have images scanned immediately on push. And on top of that, I can also define the CVEs to exempt.
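A receiver for the delete-chart notification shown above could look like the sketch below. The payload shape (type, event_data, chart name) is an illustrative assumption, not Harbor's exact webhook schema; check the webhook documentation for the real field names:

```python
import json

def handle_event(raw):
    """Route a Harbor-style webhook payload; only delete-chart is handled here."""
    event = json.loads(raw)
    kind = event.get("type", "UNKNOWN")
    if kind == "DELETE_CHART":
        # e.g. update a CMDB, kick off a workflow, or just log it
        return "chart deleted: " + event["event_data"]["chart"]
    return "ignored: " + kind

payload = json.dumps({"type": "DELETE_CHART", "event_data": {"chart": "mariadb"}})
print(handle_event(payload))  # chart deleted: mariadb
```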
Oops, I clicked on that by accident. These are the CVEs that Harbor has been instructed to ignore, the whitelist exempt from the policy above. That gives you flexibility in defining the vulnerability policies within your infrastructure. Let me go ahead and show you one cool new thing now. Now that we support OCI, I can push and pull pretty much anything into Harbor. I'm going to create a new blob of five megabytes; I'm on Windows, so I'm using fsutil here. Now I have a file called blob.img, which is five megs, with basically nothing in it. I'm going to push it into this instance of Harbor. Oops, sorry, I pushed it to the wrong instance; let me push it to this one. And it's forbidden. Did I do something wrong here? It's possible, but what did I do? Well, this is a quota-protected project. Let's go to the project quota here, and you'll notice that the quota on this project is set to one megabyte. It's understandable that the push failed. So let me update my quota to ten megabytes and push my image again. And it succeeded. Going back to my projects, into the quota-protected project, you'll notice I now have a new large artifact that I just pushed. Harbor understands it's an OCI artifact, its size is five megabytes, and the tag is v1. So look at that: if you have anything you want to push into Harbor, and you want the ability to wrap it in an OCI index and push it as an OCI artifact, you can use ORAS. That means you can add more and more kinds of artifacts into Harbor, and they get to use the policies Harbor has, like the quotas I just showed you. I can even add a tag. I'm going to create a tag called demo-tag; actually, let's call it new-tag.
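The "forbidden" error in the demo is quota admission at work: a push that would take the project past its hard limit is rejected. A minimal sketch of that check, under assumed names (Harbor's real enforcement lives server-side in the registry middleware):

```python
MB = 1024 * 1024

def admit_push(used_bytes, hard_limit_bytes, incoming_bytes):
    """Reject a push that would push project storage past its quota."""
    if used_bytes + incoming_bytes > hard_limit_bytes:
        return (False, "quota exceeded")
    return (True, "accepted")

# The demo's failure: a 5 MB blob against a 1 MB quota...
print(admit_push(0, 1 * MB, 5 * MB))   # (False, 'quota exceeded')
# ...and the retry after raising the quota to 10 MB.
print(admit_push(0, 10 * MB, 5 * MB))  # (True, 'accepted')
```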
So now I've created a new tag for my image, and I can pull the same artifact from Harbor using the new tag. I do an ORAS pull with the new tag, Harbor pulls back my digest, and if I do a dir, there's my blob image. So: expanding the horizon, expanding your ability to do cool things with Harbor. That concludes the demo, so I want to take the seven minutes that are left to answer the rest of the questions. The first question we haven't answered yet: do you support replication with Notary signing? I want to answer that by going back to my slides. Looking at the Harbor roadmap, one of the things we want to enable moving forward is improving the Kubernetes operator, something I mentioned earlier. But we also want to do signing policy replication. Today, when you sign an image in Notary, that signature does not survive when you replicate the image from one provider to another. We're working with the upstream Notary v2 effort to make sure the signature gets replicated along with your images and is not tied to a single instance of Notary. When that's available upstream, we will also support it in Harbor. Garbage collection I mentioned earlier; we're going to do better there by enabling online GC. Our next release, Harbor 2.1, is very much tied to image distribution; that's the theme of the release. We want to enable proxy caching capabilities as well as integration with peer-to-peer distribution mechanisms like Dragonfly and Kraken. And we will also continue making enhancements in identity and access management, in our interrogation services, and so on and so forth. Harbor has a huge, huge roadmap ahead of it. We have a tremendous community that supports us, and I want to show you some of the community statistics here, because they're super relevant.
And we look forward to improving Harbor for you over the next releases. I won't go through all these details, but this is a super powerful community we have in Harbor: lots of maintainers, a ton of members, GitHub stars, PRs, Twitter followers, webinars, contributing companies. It's huge and it's thriving; come and be part of it. Let me see what questions I can answer here; I have lots coming in. Is there commercial support available for Harbor? There are some companies, VMware in particular, offering commercial support for Harbor as part of the Kubernetes distributions they offer. If you're interested in that, find me on Slack in private (M2 is my alias) and we can chat about it. What are some best practices for backup and restore of the Harbor platform plus images? You can use Project Velero, an open source project, to back up and restore the entire Harbor installation. That includes your persistent volumes and the configuration of Harbor's different components, and you can do a Redis and Postgres backup as well if you want. There were two questions on this: do you support replication for both images and charts? I kind of answered that through the demo. Images, charts, or artifacts, all three are supported, as long as the other end of the replication also supports them. For example, if the replication target does not support OCI, you cannot replicate OCI artifacts to it. There's a question here: I'm a big fan of Harbor, I've got to move a lot of images from Docker registries to Harbor, and I'd like to know a bit more about how Harbor replication works under the hood. A great way to learn about that: please attend our community meetings. We have a few folks who are super passionate about that topic; join our meeting, and we'd love to talk to you more about it. We have some of it documented in our documentation, but join our meeting.
We'll give you all the down and dirty. I don't think I'm going to be able to answer all these questions in the time we have left. I would say keep it to one or two more. Yeah, let's do a couple more. Is there a timeline for the Helm chart for the Harbor 2.0 release? It should be any day now, by the end of this week; hopefully by tomorrow you'll be able to get the Helm chart for Harbor 2.0. By the way, I'm putting up the slide here on how to connect with the Harbor team, whether that's on Slack, mailing lists, Twitter, or at our community meeting. When we share the slides, you'll be able to get all of those ways to connect with our team. Can I create custom roles or change the pre-existing roles? No, today the RBAC roles and their permissions are fixed and defined in Harbor. We are looking in the future at how we could use OPA to define the role-based access control, but we're not there yet. If anybody's interested in coming and helping us deliver that, we'd love it. One more question. All right. Could you please repeat what the ORAS client is? I'll make that very easy; I'm going to go over here. If you search for ORAS, it will point you to OCI Registry As Storage, which is what ORAS stands for. There's a GitHub link for downloading ORAS for the different platforms, like Windows, Linux, and Mac. Awesome. Thanks, Michael. That was an awesome, great presentation. That's all the time we have for questions today. Thank you for joining us. The webinar recording and slides will be online later today. We're looking forward to seeing you at a future CNCF webinar. Have a great day. Thank you. And if we didn't answer your question, come on Slack and ask it again; our team will answer it there. Awesome. Thank you all. Thank you.