I'd like to thank everybody who's joining today. Welcome to today's CNCF webinar, titled Gain Confidence in Compliance: Advanced Image Scanning with Harbor, a CNCF project. I'm Alex Ellis, director of OpenFaaS, and I'm also a cloud native ambassador in the Cambridge area in the UK. I'll be moderating today's webinar, and I'd just like to welcome our presenters. A warm welcome to Michael Michael, director of product management at VMware and a core maintainer of the Harbor project. I'd also like to give a warm welcome to Liz Rice, VP of open source engineering at Aqua Security, and also TOC chair for the CNCF.

We've got a few housekeeping items before we get started. During the webinar, you're not able to talk as an attendee, so it's a little bit different from a normal Zoom call. There is a Q&A box, though, and we'd like you to use that to drop in any questions that you have, and we'll try to answer them as we see fit throughout the talk. Now, this is an official CNCF webinar, and as such we are abiding by the CNCF code of conduct. So we just ask that you please not add anything in the chat or questions that would be in violation of that code of conduct. Basically, it can be summed up like this: please be respectful of all your fellow participants and presenters. With that, I'll hand over to Michael and Liz to kick off the presentation.

Absolutely, thank you, Alex. Hello everybody, and welcome to the Harbor webinar. I know that it's pretty late for some of you folks in European and Asian time zones, so thank you for attending. Today, Liz and I are going to talk a little bit about security and compliance with Harbor as it relates to the images that you're using in production for your cloud native applications. So let's get started here.
So, you know, one of the most important things that I want everybody to be aware of is that there was a study that found the top 10 most popular Docker images each contain known vulnerabilities. Think about that for a second. That finding is astounding given the proliferation of Docker images and cloud native applications today. As enterprises and as users, you're producing more and more images every day. Moreover, some of those applications are running in production, some in public clouds, and some at the edge of your data centers. So it is super important to make sure that the applications you're running in production do not contain known vulnerabilities. And that's where Harbor, Aqua, and Anchore come to the rescue with some of the work we're going to showcase today.

Why would you want to run your own registry? There are obviously a lot of registries out there. Most of the public cloud providers offer their own, and you could also use something like the open source Docker registry, for example. But it's very important to think about what key capabilities you're looking for from your own registry and why a tool like Harbor might really fit that purpose. The first and most important reason you might want to run your own registry is security and compliance: you want a comprehensive policy that you can define, understand, and apply across the board. You may have registry and data ownership requirements, so you might want to co-locate your data near your applications or even within your own data center. And most importantly, you want to bring in the identity federation that you're already using within your organization, so that you don't have to have a separate identity and a separate user store for your registry. So what are some of the features that come under this umbrella?
You could have vulnerability scanning that assesses your images for vulnerabilities and gives you a report back saying whether you're vulnerable, the severity of those vulnerabilities, and whether there are any possible remediations. You might want CVE exceptions: there are certain situations where there's a vulnerability and no fix for it yet. So what do you do? Do you halt your production images, or do you identify a different way to keep your applications in production until a remediation is available? You might want image signing, so you can trust that an image really comes from the source it claims to come from. There are quotas and retention, which prevent end users from running away with storage and filling up terabytes in your registry. Then, going down the path of identity federation and user stores, having identity federation with protocols like OIDC and LDAP, plus role-based access control on the back end, is what enables your end users not only to bring their own identity into Harbor or any registry of your own, but also to use it with tools like the Docker client and the Helm client. And last but not least, in the isolation and multi-tenancy space, you want multiple organizations within your company to have their own access to the registry and be isolated from each other, whether for compliance reasons, data sovereignty, or simply because they're different business units.

At the infrastructure layer, you might want to be able to deploy on any infrastructure, whether that's private, public, hosted, or at the edge; the topic of data locality comes back again here. And most importantly, the registry and your images need to be Kubernetes and Docker compliant, because that's exactly where you're going to run your compute platform.
From a scale and control perspective, you want to control access to artifacts: this is where you authenticate and authorize who should have access to your cloud native artifacts, including your Docker images and your Helm charts. You might want to replicate resources based on your business needs, whether that's replication to the edge, across data centers, or even to the public cloud. Among the capabilities Harbor brings into the picture, we allow you to replicate your artifacts to a multitude of other targets: another instance of Harbor, the Docker registry, Docker Hub, Huawei Cloud, AWS, Azure, GCP, and Alibaba Cloud. So pretty much every popular public cloud out there, Harbor can integrate with, and you can replicate your resources there.

And last, it's all about automation and extensibility. As enterprises and users, you have made a ton of investments in products and services within your own data center. You want a registry that can come in and be plug and play with those existing investments so that everything just works. So what are some of the key capabilities that come out of that? You want integration with logging, so you can push your logs to an ELK stack or Splunk, for example. You want webhooks, so your CI/CD pipelines can be well integrated with the registry: as images are built, you can push them to the registry, make sure they get scanned for vulnerabilities, and then maybe get notified when they're ready to be pushed to production or to a Kubernetes cluster. You want a REST API, so everything can talk to everything else in a unified manner. And last but not least, robot accounts: specialized accounts used for automation, to integrate your registry with these other tools.

So what is Harbor? Harbor is a Cloud Native Computing Foundation incubating project. Today we have almost nine and a half thousand stars on GitHub.
And in a nutshell, we're an open source container image registry that secures images with role-based access control. We scan images for vulnerabilities, and we sign images as trusted. So we deliver security and compliance to your organization. We have very high performance and scalability, we're interoperable with other existing investments in your infrastructure, and we provide you with consistent image management for Kubernetes.

From an architecture perspective, some of the things I mentioned earlier are depicted here. On the right, we have all our replicated registry providers, where Harbor allows you to replicate both in and out to all of the public cloud providers out there. We have multiple storage options at the data access layer, and the entire architecture of Harbor is componentized so that we're able to scale and meet the demands of our end users while at the same time giving you the ability to work and interact with Harbor using the tools of your choice. You can interact using our web portal or API, the kubelet in a Kubernetes environment can pull your artifacts from Harbor, and the Helm client works, as well as the Docker and Notary clients. Today we're also going to concentrate a little on this green box here on the right, the newly created scan providers that allow you to scan the images residing in Harbor using third-party pluggable scanners.

So let's go back and review a little of the Harbor 1.9 release that came out a couple of weeks ago. Our key theme for the release was advancing multi-tenancy with key enterprise features. So what did we do there? We added image retention: now you can go into a project and specify that certain images should go away after nine months or after three months. You get to define what your compliance policy is and allow Harbor to implement that policy in the form of retention.
That way, you also don't get filled up from a storage perspective. We also added project quotas: now you can define a quota on a single project that takes into account the sharing of resources between layers and different repositories, and enforce that quota so your end-user teams have a maximum storage capacity in Harbor. We also added webhook events for CI/CD integration, which I mentioned earlier. It's very important to have a tool like Harbor that can integrate well with other existing investments, and CI/CD is one of the major investments in a cloud native world, so having webhook events that can talk to a CI/CD system is super critical. In 1.9 we also extended the replication investments we made in Harbor 1.8, adding replication integration with AWS, Azure, GCP, and Alibaba Cloud. And the last major feature we added in 1.9 is the concept of CVE exception policies that I mentioned earlier. This allows you to create an exception policy so that certain CVEs can be ignored by Harbor for a small period of time, giving you the ability to remediate them in the right way for your enterprise.

Let's take a look at Harbor 1.10, which is currently in progress. Our goal is to ship 1.10 at or near KubeCon US, our big cloud native conference that's coming up in late November. Our theme here is security and compliance, and you'll notice that most of the features we're creating are about enabling capabilities around security and compliance. The first one is called immutable images and repositories. This gives our project administrators the ability to define certain images and repositories as immutable. It allows you to tag, for example, Redis version 2.5 as immutable so that no other user can go and update that image. You can create a new version of Redis.
You can go to 3.0, for example, or you can create a new tag. But the actual combination of repository and tag is immutable, so that you as the operator can feel confident that the image your CI/CD pipeline created is the same image that's signed and made available for download by your developers as they push the application to production. We're also adding OpenID Connect group support: now you'll be able to define a group as a member of a Harbor project, and all the identities that belong to the group inherit the same permissions in Harbor. That makes it easier for you to manage your organization within Harbor. We're also adding a new role to Harbor's role-based access control called limited guest. This is a role that's super popular with end users running Harbor in a multi-tenant environment, where each tenant can belong to a different organization and you sometimes don't want the tenants to know about each other. Think of a hosting provider that has deployed Harbor and made it available to their end users: they don't want Pepsi, the company, to know that Coca-Cola, the company, is also a member of the same group. The limited guest role allows you to use Harbor as a guest but not see any other guests that are part of the same project. We're also adding some more functionality around the CLI secret and robot accounts.

But now we come to the diamond of this release, and that's the implementation of what we call the interrogation service. Think of this as a service that allows an end user to interrogate all the images living in Harbor and assess their security posture.
The first implementation of that is pluggable out-of-tree scanners, and we're super excited that both Aqua and Anchore came in and are contributing to the effort to create our first scanners: Trivy by Aqua, and Anchore Engine and Anchore Enterprise. So I'm going to turn it over to Liz to talk a little about vulnerabilities and compliance.

Great, thank you very much, Michael. So these pluggable scanners are all about helping you find vulnerabilities in the container images that you're using or planning to use. I thought it might be worth diving a little into what we're talking about with these vulnerabilities. I've got a few logos here. These are the kind of rock stars of, sorry, can we go back one, thanks. These are the kind of rock stars of the vulnerability world. Things like Heartbleed, Shellshock, Dirty Cow, Meltdown; you've probably heard of them. They're critical vulnerabilities that an attacker can take advantage of. When we're talking about vulnerabilities, these are software flaws that attackers know how to exploit, and they can use them to do something bad and unexpected inside your deployment. Now, these are the super critical ones, or a handful of super critical ones, so famous they get their own logos, but they're just the tip of the iceberg. Thousands of vulnerabilities get discovered every year. Some of them are much, much less important, but nevertheless, you want to use a scanner to find out what is inside your images.

All right, so if we go to the next slide. When we're talking about a container image, the code inside it comes in three parts. There's the application code that you're writing yourselves, the bespoke code your developers are writing. Then there are potentially some dependencies that you're pulling in, doing something like apt-get install as part of an image build. These dependencies are managed by package managers.
Which package manager depends on the Linux distribution inside the image, and these dependencies can have vulnerabilities in them. And then there's the base image itself. When you're running your Docker build, it starts from a base image, and that could already have some of these packages contained within it. So the image scanner is looking at dependencies wherever they've come from, the base image or packages that have been added through a package manager. What it won't find is vulnerabilities that you might introduce through bad coding practices in your own applications. Scanners know about publicly disclosed vulnerabilities that exist in third-party dependencies. We're mostly talking about operating system packages here; I'll mention application language packages shortly as well.

All right, so not all scanners will give you the same result. If you run two different scanners on the same container image, there's a very high chance you'll see some differences. They will almost certainly all start with what's called the NVD, the National Vulnerability Database. This is a central database showing which versions of which system packages have which vulnerabilities, so that's a baseline that I think all scanners use as one data source. But in addition, you really need to be looking at the security advisory information from the particular Linux distribution that your image uses, because that will tell you whether or not the distribution has patched that particular package for that vulnerability. A version that looks vulnerable according to the NVD may actually have been patched by the distribution, so it's not actually vulnerable. And if the scanner doesn't know to look at that advisory, it will give you a false positive. Additionally, there are other sources: vendors like Nginx and MongoDB, for example, publish their own security advisory information.
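The false-positive case described here can be sketched in a few lines. This is an illustrative sketch, not how any particular scanner is implemented, and the simplistic dotted-number comparison stands in for a real package-version comparator (which must also handle epochs and release suffixes):

```python
# Illustrative sketch: why a scanner that reads distro security advisories
# produces fewer false positives than one that only consults the NVD.

def parse_version(v):
    """Turn '1.0.2' style strings into comparable tuples (digits only)."""
    parts = []
    for p in v.split("."):
        digits = "".join(ch for ch in p if ch.isdigit())
        parts.append(int(digits) if digits else 0)
    return tuple(parts)

def is_vulnerable(installed, nvd_fixed_in, distro_patched_in=None):
    """
    The NVD says versions below `nvd_fixed_in` are vulnerable. If the distro
    backported a fix in `distro_patched_in`, anything at or above that
    patched version is safe even though the NVD range still matches it.
    """
    inst = parse_version(installed)
    if distro_patched_in is not None and inst >= parse_version(distro_patched_in):
        return False  # distro backported the fix: NVD alone would flag a false positive
    return inst < parse_version(nvd_fixed_in)
```

For example, `is_vulnerable("1.0.2", "1.1.0")` reports a finding, but once the distro advisory says the fix was backported at 1.0.2, `is_vulnerable("1.0.2", "1.1.0", distro_patched_in="1.0.2")` correctly suppresses it.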
And the more of these sources you pull in, the more accurate your data. Plus, some scanners have feeds direct from security researchers, adding additional accuracy. So basically, the more of these sources a scanner is using, and the more up to date the data it's pulling from those sources, the more likely it is that the scanner is accurate and that it will reduce the number of false positives, which is a real problem for scanners. In addition, some scanners have different functionality available, for example support for finding vulnerabilities in language packages. Things like Node, Java, Python, or Ruby packages have published security advisories, but not all scanners are able to give you results for those. Some scanners look for things like sensitive data inside your container images; they may also look for things like malware. Some scanners also support Windows containers as well as Linux containers. So the range of functionality you get from different scanners differs, and there are differences in accuracy.

You have options in terms of open source scanners, like Trivy, like Anchore. There are free scanners out there and there are commercial scanners out there, and as an enterprise you may want to look at a commercial scanner if you're interested in some of this additional functionality. Sometimes additional data sources are only available commercially, and of course you might want to be paying for support. So there are differences in the scanners that are available. You probably have choices that you're making on your projects about which scanner you want to use, and what we've been doing, working with the Harbor team, is making it so that you can plug the scanner of your choice into the Harbor registry. The way this works is to have a scanner adapter for each different type of scanner.
Harbor today supports scanning with the Clair vulnerability scanner, and we're adding support for both Anchore and Aqua's commercial and open source scanner options. And the scanner API that we've added is available for other scanners to write an adapter against, so this is a generic interface through which any scanner can plug into Harbor. As you can see from the diagram, the Harbor API can initiate a scan. It uses the configuration to establish which scanner it should be using based on the project, because you can configure a different scanner for different projects. It launches asynchronous jobs, which communicate over the scanner API with the appropriate adapter, which kicks off the scan and retrieves the scan results later. We've defined a standard Harbor format for those scan results, if you like, so that they can be compared and displayed in the Harbor UI.

There's a question about that, Liz, if you can take it. Daniel Mendes asks: will app scanning from Trivy still work in Harbor in terms of reporting? Specifically, he's asking about scanning npm modules, particularly in a distroless container that doesn't have a package manager. Yes is the answer to that. I think there's no reason why those results shouldn't also be displayed. I can't say I've absolutely tested it, but I'm pretty confident that's going to work.

There was one for Michael while we're at this pause: is Notary used for signing images? Yes, we're using Notary for signing images, and actually one of the things we're working on is figuring out how to sign these images so that they can also be replicated while maintaining their signature. We're working with the bigger community in the OCI group to make that capability a reality. And with that, is there a way to have a sliding retention window for the policies you were talking about?
For example, James Weaver would like to know: could you retain the last three images? Yes, absolutely. Great, thank you, I'll give it back over to you.

All right, so I think the last thing to add about this is that we've defined this generic scanner API. If we move on to the next slide, you'll see how very simple it is. It's an asynchronous API where Harbor can request metadata from a scanner, which allows Harbor to find out what capabilities the scanner has. That leads into some of the future roadmap that I think Michael will be talking about shortly. You make a request for a scan with a POST, saying which artifact, which container image, you want to start scanning. And then you use a GET request to retrieve the report from the scanner adapter. That scanner adapter will be making either API calls or executable calls, depending on the particular scanner, to execute the scan and return the results to Harbor in this standardized format. So I think this is now demo time, Michael.

Absolutely. Alex, are there any more questions we might be able to answer before we actually get into the demo? Yes, I do think there are a few. A couple of people have asked how frequently the vulnerability definitions are updated. So that will depend on the scanner and the way that it retrieves its information, and also to some extent on how the adapter interacts with the scanner. For example, Trivy updates all its data sources every 24 hours. And it's quite clever about it; I'm just going to go off on a little diversion here on Trivy, because I think this is interesting. Trivy stores its vulnerability data in Git repositories. The initial download of data happens through an init container in the adapter, so the data is downloaded as soon as you configure the Trivy scanner.
But after that, every time you run Trivy, it does a check with Git, just does a git pull, to update to the latest set of results. I'm sure other vulnerability scanners work on different schedules; that's one of the things to look at. Interestingly, some of the data sources themselves are updated at different frequencies. For example, Red Hat has two different sources of security advisory information, one of which includes information about unfixed vulnerabilities, in other words, vulnerabilities for which no fix is yet available. A vulnerability will get into that data source, it seems, several days before it then shows up in the kind of official data source called OVAL, which includes only information about fixed or won't-fix vulnerabilities. So I've gone on a little rant there, but my aside was really to say it's not just about how frequently the scanner updates its data, it's also about how up to date the data it's pulling from is. That's great to know.

I guess the last question we can take before we get to the demo: Piotr is interested in CVSS 3.0; is there support for that? I believe it is 3.0. Don't quote me with 100% certainty on that, but I think it is 3.0. Okay, great, thank you. And by the way, all of the scanners that we have are configurable: both the batteries-included one, which is Clair, as well as Trivy and Anchore, can be configured in terms of how often you want to refresh the database they keep internally for tracking vulnerabilities.

Excellent. So let's take a look at the demo here. We'll show you a few items, but feel free to ask questions as we go along, and Alex, let me know if there's something the folks want to see. The first thing I'm going to show you is the concept of a compliance and security policy in Harbor, and how that can be defined within Harbor and applied across the board for your container image registry.
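The three-call adapter contract Liz walked through (request metadata, POST a scan, GET the report) can be sketched as plain handler functions. The field names, MIME types, and CVE id below are illustrative placeholders rather than taken verbatim from Harbor's pluggable-scanner spec, which is the authoritative source for the real shapes:

```python
# Minimal sketch of a scanner adapter's three handlers, as Harbor would
# drive them: metadata discovery, async scan kick-off, report retrieval.
import uuid

_reports = {}  # scan_request_id -> report; stands in for the adapter's store

def handle_metadata():
    """GET metadata: Harbor discovers what the scanner can do."""
    return {
        "scanner": {"name": "example-scanner", "vendor": "example", "version": "0.1"},
        "capabilities": [{
            "consumes_mime_types": ["application/vnd.oci.image.manifest.v1+json"],
            "produces_mime_types": ["application/vnd.scanner.adapter.vuln.report.harbor+json"],
        }],
    }

def handle_scan(scan_request):
    """POST scan: kick off an asynchronous scan, return an id Harbor polls with."""
    scan_id = str(uuid.uuid4())
    # A real adapter would invoke the scanner here; we record a stub result.
    _reports[scan_id] = {
        "artifact": scan_request["artifact"],
        "severity": "High",
        "vulnerabilities": [{"id": "CVE-0000-0000", "severity": "High",
                             "package": "openssl", "version": "1.0.2"}],
    }
    return {"id": scan_id}

def handle_report(scan_id):
    """GET report: Harbor retrieves the finished report in the standard format."""
    return _reports[scan_id]
```

Driving the three handlers in order (metadata, scan, report) mirrors the asynchronous flow shown on the slide.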
So the first item I want to show you is the concept of the CVE whitelist. This is where we allow an enterprise and a user to come in and define certain CVEs that can be ignored, either for a period of time or indefinitely, because either a fix for the vulnerability doesn't exist yet, or you want to wait a small period of time to test the fix before you apply it. So you, as the project administrator, get to define the level of risk you're comfortable with for the CVEs in the images your organization is publishing for your cloud native applications, and Harbor allows you to do just that. I can come in very easily and define this globally in the Harbor configuration, or I can define it at the project level. And as I mentioned earlier, you get to set the expiration as well.

The next thing I want to show you is the vulnerability database here. It indicates when the database was last updated, and this is for Clair, the built-in vulnerability scanner that comes with Harbor. You also get to define a schedule: if you want all your images to be scanned hourly, daily, or weekly, you get to define how you want your compliance policy to be set, and Harbor will execute it.

Now, if you go to an individual project within Harbor, under the configuration of the project you get to define how you want this project to behave from a compliance standpoint. The first thing is whether you want content trust to be applied. That means only verified images can be pulled down: if an image is not signed, no user will be able to pull it and run it in production. You can also define whether you want vulnerable images to run. So you define, again, your level of risk and your level of comfort. For me, I could say that if any image has a vulnerability of medium or higher severity, never allow it to be deployed. That's in addition to the schedule that Harbor can enforce for scanning your images.
You can also scan images on demand, which we'll do in a second, and you can also set it so that images are scanned automatically every time a new image comes into Harbor from your CI/CD pipeline: it can be immediately scanned for vulnerabilities and the policies above applied immediately. And again, the last item here is the CVE whitelist that I mentioned earlier. I can either choose to accept the system whitelist, or I can create a project whitelist, which can start from the system one. I can even add my own CVE. I have a CVE here that I opened, and I can copy it and enter the details here, and now I've instructed Harbor to ignore these two CVEs from the system. Obviously, I can choose an expiration time as well. And this gives me enough time, enough comfort, to go and figure out the right way to remediate these vulnerabilities and test my images before I push them out to production.

So let's now show you some of the new stuff we've started creating. Under the configuration tab of the main Harbor settings, there's a new page called scanners, and here you can define and add your own scanner. Like Liz mentioned earlier, this is an open specification. Any company that has their own scanner and would like to extend Harbor's capabilities by adding their pluggable scanner into Harbor can implement our API interface, we'll work with them, and then you'll be able to add that scanner to your Harbor instance. Now, we have Clair, which is the batteries-included default in Harbor, but we've also introduced Trivy and Anchore here as pluggable scanners. If I expand this entry, I see that Trivy is developed by Aqua Security, I see the endpoint URL for Trivy, and I also see the MIME types that Trivy can consume and produce. This is the contract between Harbor and the pluggable scanner that Liz mentioned earlier, which identifies how we talk to that scanner.
We give it an OCI image manifest that describes the container image it needs to scan, and we get back a vulnerability report that tells you what CVEs that scanner found in your images, their severities, and other details specific to the scanning results. Similarly, the Anchore adapter has the same kind of contract and the same kind of capabilities as Trivy: you get to see the version, the distributor, the MIME types, and the URI where Anchore is deployed. And all three scanners in this case are in a healthy state.

So now I'm going to tell you about something cool that I set up here. A lot of you have images out there in public clouds. Maybe they're in any of the big public cloud providers, maybe they're in Docker Hub. And maybe you want Harbor to bring those images in and apply the strict compliance and policy we talked about, in terms of scanning, remediation, and policy, and then your developers can actually start using those images to run in production. So having that extra set of controls, having Harbor bring these images in and apply your compliance needs, is super important. So what I did is go ahead and set up some replications. I have a Docker Hub account, and I pulled down images from my Docker Hub into different projects that I've set up for Clair, Anchore, and Trivy. Now I get to pull the images on a periodic schedule or manually, and then have Harbor do its thing by scanning these images for vulnerabilities.

Let's go ahead now and see what our projects look like. I have my HR business department here, and the same images are scanned by Anchore, Clair, and Trivy, just to show you the differences. Like Liz mentioned, no two scanners are created equal; some scanners pay specific attention to different areas.
So it's up to you, your business needs, and your compliance needs to identify the right scanner based on the workloads you have, and you can apply that scanner on a per-project basis. Actually, in the future, we will enable you to run multiple scanners in the same project, so you can get consolidated results from multiple scanners at once.

So let's take a look at the HR business team for Anchore. If I take a look at that, and look at this new scanners tab that we have here, you'll notice that the default scanner is Anchore. It is in a healthy state, and we have some of the same details that I showed you earlier in the scanner API. When we go back to our repository, I can go into any of the images created in this organizational unit, and I can see here all the different vulnerabilities that have been discovered, by Anchore in this case. I can click on the actual tag and see a graphical view of my image. And when I see this, I'm saying, oh my, this image is super, super vulnerable; look at how many CVEs exist for my application. So I can scan it again on demand, instructing the Anchore engine in this case to go ahead and re-scan my image just in case something has changed. Obviously nothing has changed here, so I'm getting back the same results.

So I want to show you now some of the differences. If we look at Anchore, we have 74 highs and 28 mediums. I go to the next scanner, in this case Clair, and click on the same project; notice that the numbers are different. This is nothing other than to show you that different scanners have different priorities and they scan for different things. And similarly, if we go to Trivy, and again to the same image, you'll notice that this specific scanner also reports a different set of CVEs. Once we allow you to run a combination of all the scanners, you're going to get a comprehensive view of your scanning results.
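That consolidated multi-scanner view is future roadmap, so as a rough illustration only, here is one plausible way such a merge could work: union the findings from each scanner by CVE id and keep the highest severity any of them reported. None of this is Harbor's actual implementation.

```python
# Hypothetical merge of per-scanner vulnerability lists into one view.
SEVERITY_RANK = {"Unknown": 0, "Negligible": 1, "Low": 2,
                 "Medium": 3, "High": 4, "Critical": 5}

def merge_reports(*reports):
    """Union findings by CVE id, keeping the worst severity reported."""
    merged = {}
    for report in reports:
        for vuln in report:
            cve = vuln["id"]
            seen = merged.get(cve)
            if seen is None or SEVERITY_RANK[vuln["severity"]] > SEVERITY_RANK[seen["severity"]]:
                merged[cve] = vuln
    return merged
```

With this strategy, a CVE that Anchore rates High but Trivy rates Critical shows up once, at Critical, and a CVE only one scanner finds still appears in the combined view.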
Now, one of the things that Liz mentioned earlier was the way that you build your images with dependencies and packages. If you go to the build history here in Harbor, you'll notice that we give you a rundown of the Docker commands in the Dockerfile that built your image. So you can see that someone actually ran apt-get update or yum update to update everything in their image, and when that happened. Look at the dates here. The dates are important because they tell you both the history of the creation of these layers and how many vulnerabilities could have been disclosed since then. And similarly, when I get the results back for all of my vulnerabilities, I can expand any one of them and get additional details, including a high-level description of the vulnerability, and I can even click on the CVE and it will take me to the actual entry where the CVE is documented. Now let's talk about one final thing here in our demo. In many situations, you have Kubernetes clusters or Docker compute instances running at the edge or in various smaller parts of your data center. And in some of those situations, you may not have the capacity to implement a solution that includes full-blown scanning, either because you don't have the compute, the memory, or the storage resources. So what can Harbor do for you? You can set up a central project within Harbor that will scan all of your images for vulnerabilities, using any of the three pluggable scanners available in Harbor. And once you have assessed that all your images are free and clear of vulnerabilities, you can use Harbor again to replicate those images to the edge. Now you have a cache of your images available locally to where your applications are running, and you don't have to scan them there, because you trust the scan Harbor performed in your main data center.
So now if links go down, because links always go down, you don't always have 100% internet connectivity at the edge, your applications are not impacted, because you have a cache of your images locally in the Harbor instance. That's a good way to implement your security policy and have Harbor enforce your compliance. So, Michael, we have a question about how this all fits into the bigger picture. Somebody's asking: what if you had different environments and you wanted to promote images through from dev, where you're okay with the vulnerabilities, to staging, where it gets a bit tighter, and then in production you want zero highs, let's say. Can you do that? Yes, absolutely. And that's actually one of the major reasons why, if you look at the projects, each policy is configured on a per-project basis. So you could have a project for your dev/test, a project for your sandbox, and a project for your production, and each of them could have a different level of risk and a different threshold for vulnerabilities. You could set up your production project to block at low and above, so no vulnerability of any severity is ever deployed in production, and set up your dev/test project to block only at high, which means only high vulnerabilities are prevented. Okay, that's really helpful. We have about 15 minutes left just for the speakers. There's another question that's similar to this: how could you actually integrate Harbor into your CI/CD pipeline, and what kind of feedback could you expect in something like Jenkins, for example? Yeah, absolutely. Harbor can be integrated very well into a CI/CD pipeline as the target repository for your images. As you build your images, you start from source code and build some container images, and Harbor becomes the target repository where you store them.
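The per-project thresholds described above can be sketched as a simple gate: each project carries the lowest severity it blocks, and an image is admitted only if its worst finding falls below that bar. The project names and severity ordering here are illustrative, not Harbor's configuration schema.

```python
RANK = {"None": 0, "Low": 1, "Medium": 2, "High": 3, "Critical": 4}

# Illustrative per-project policy: block any image whose worst finding is
# at or above this severity.
PROJECT_POLICY = {
    "dev-test":   "High",    # only highs and above block a pull
    "sandbox":    "Medium",
    "production": "Low",     # any finding at all blocks a pull
}

def allowed(project, worst_severity):
    """True if an image whose worst finding is worst_severity may be pulled."""
    return RANK[worst_severity] < RANK[PROJECT_POLICY[project]]

print(allowed("dev-test", "Medium"))    # True: mediums pass in dev/test
print(allowed("production", "Medium"))  # False: production blocks everything
```

Promoting an image then simply means replicating it from the looser project to the stricter one, where the stricter gate is evaluated again.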
And then you can use some of the key capabilities of Harbor to not only tag your images, but define them as immutable, or define the policy based on your project and your organizational needs. On the backend, Harbor has this concept of webhooks, which we mentioned earlier, that allows you to get callbacks when certain actions happen in Harbor. So for example, when a new scan completes and new vulnerabilities are found, you can get a webhook. And I think there's a new webhook now for when your storage quota is exceeded: you'll get a webhook that tells you, hey, you've run out of storage, and that's why your push to the Harbor registry is failing. So you can integrate your CI/CD pipeline with some of these webhooks, which can drive additional actions. One example I can think of: let's say you started from source code, you built an image, and your CI/CD pipeline pushed that image into Harbor. Once Harbor has verified its security posture, we can send back a webhook letting you know that your image is free and clear, it's clean, go ahead and deploy it into production. And your CI/CD pipeline can execute the last step, which is pushing that image to Kubernetes. Okay, so if you had a single job that both built and deployed it, is there some kind of direct feedback you get when you try to push the image, or does the image push and then get scanned, so you have to query Harbor to know? Yeah, you would have to query Harbor, or listen for the webhook, to identify what the scanning results were in that case, if you have a single job like that. But this actually brings up a cool topic. In the future, one of the things that we wanna do is integrate with Sonobuoy so that we can actually assess whether the images that you're running in production have vulnerabilities.
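A CI/CD job consuming Harbor's scan webhook might look roughly like this. The payload shape (`type`, `event_data`, the severity summary) is an assumption sketched from the webhook feature described above, not a verbatim schema; check your Harbor version's documentation before relying on field names.

```python
import json

# A trimmed, hypothetical webhook payload for a completed scan. The real
# payload shape depends on your Harbor version; treat these fields as a sketch.
payload = json.loads("""
{
  "type": "SCANNING_COMPLETED",
  "event_data": {
    "repository": {"repo_full_name": "hr-business/payroll"},
    "resources": [{"tag": "1.2.0", "scan_overview": {"severity": "High"}}]
  }
}
""")

def should_deploy(payload, blocked=("High", "Critical")):
    """Deploy only if the scan finished and no resource reports a blocked severity."""
    if payload["type"] != "SCANNING_COMPLETED":
        return False
    return all(
        r["scan_overview"]["severity"] not in blocked
        for r in payload["event_data"]["resources"]
    )

print(should_deploy(payload))  # False: a High finding blocks the rollout
```

The pipeline's final step (pushing the image to Kubernetes) would run only when this gate returns True.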
So an image that you pushed yesterday: today a new CVE might come up, but you don't know about it because your image is already running in production. We wanna integrate with additional tools to be able to assess the security posture of your running images. And that's actually where additional tools like Aqua and Anchore come in, which can help you with that. So Liz, do you wanna talk a little bit about how Aqua can help you maintain the security posture of your images running in production on Kubernetes? Yeah, exactly. And it is a very good point that you could have many hundreds or thousands of container instances that you discover have a vulnerability, because a new vulnerability has been disclosed and the scanner tells you which images are affected. With a tool like Aqua, we can very easily identify for you the running instances of a container that are affected by a particular vulnerability, so that you can then take action. You would typically want to rebuild the image with the fixed version of whatever package has been affected and then deploy new versions. Tools like Aqua, and I'm sure other security tools as well, can help you identify whether you have running containers that were instantiated from vulnerable images. One thing you want to avoid doing, because I have seen at least one company trying this approach, is vulnerability scanning inside your running containers, which is a pretty inefficient way of going about it. You only have one image, but you may have many, many instances of that image. Absolutely. Thank you, Liz. If there are no more questions, we can go back to the deck, and we'll probably have a couple of minutes at the end for additional questions. All right, going back to the deck.
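The matching described above, finding the running instances of a vulnerable image, can be sketched as a join between the digests a scanner flagged and the digests your cluster is actually running. Names here are illustrative (in practice a tool like Aqua, or the Kubernetes API, supplies the inventory); the point is that you scan one image and match many instances.

```python
# Hypothetical inventory of running containers (e.g. gathered from the
# Kubernetes API) and the image digests a scanner has flagged as vulnerable.
running = [
    {"pod": "payroll-7d4f",  "image_digest": "sha256:aaa"},
    {"pod": "payroll-9c2b",  "image_digest": "sha256:aaa"},
    {"pod": "frontend-1f0e", "image_digest": "sha256:bbb"},
]
vulnerable_digests = {"sha256:aaa"}

def affected_pods(running, vulnerable_digests):
    """Pods instantiated from a flagged image: scan once, match every instance."""
    return [c["pod"] for c in running if c["image_digest"] in vulnerable_digests]

print(affected_pods(running, vulnerable_digests))
# ['payroll-7d4f', 'payroll-9c2b']
```

This is also why scanning inside each running container is wasteful: the image is the unit of truth, and every instance of it shares the same findings.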
So you've seen the demo now. I've told you that you're going to have a lot of images in production and pre-production, that as enterprises you're producing more and more images, and that they're going to have a lot of vulnerabilities. So come and use Harbor, because with our pluggable scanners we can help you assess your security posture. But the journey doesn't end there. We have a very long roadmap for Harbor in terms of the investments we want to make and the key capabilities we want to bring to our users, to you guys. From a management perspective, we want to build a Kubernetes operator that allows you to create an HA installation of Harbor on top of Kubernetes, including all of our dependent components: a very scalable Harbor instance. I mentioned earlier that we want to work on signed-image replication, because signed images today cannot be replicated: the signature of a signed image is tied to the instance of the registry you're pulling it from. We want to make additional investments in metadata management to make it easier for you to manage your Harbor registry. Obviously, performance and scale are something we're continuously improving, and we have Harbor customers running thousands and thousands of images in production today, consuming terabytes of storage. And observability, everybody's cool new buzzword: we want to continue improving the metrics and health APIs that Harbor provides, so that you can integrate Harbor well with other tools and products in your cloud-native environment. On the topic of image distribution, I talked earlier about how we want to make it easy for you to push images down to the edge, so that you can scan them once and then have a cache locally available for your workloads to use.
We want to add two new investments there, around proxy caching and peer-to-peer distribution, so that images can land where you need them faster and with less network bandwidth being used. On the extensibility side, this is a huge initiative for us. We're going to continue enhancing our webhook support and continue enhancing the interrogation service. You're going to see more scanners, in addition to Aqua and Anchore, come into Harbor, giving you the choice and freedom to pick the scanner that meets your needs. And lastly, we're going to be very engaged, and we already started, with the OCI, the Open Container Initiative, around artifact management and conformance, so that Harbor will be able to store more than container images and Helm charts. We're going to start thinking about how we support CNAB bundles, OPA policies, operators: everything that you need in terms of artifacts for your cloud-native needs, we're going to figure out a way to support. If you want to get in contact with our community, I want to tell you one thing: we're thriving. We have 9,500 GitHub stars today, over 200 contributors, 30,000-plus downloads of Harbor, and almost 3,000 forks. If you notice here on the bottom left, the top chart is contributing developers and the bottom one is contributing companies. The red line is when we donated Harbor to the CNCF, and ever since then we have experienced explosive growth in contributions from both companies and developers. We have 20-plus product integrations, a ton of contributing organizations, including our newly added partners Anchore and Aqua, and we have 300-plus community members engaged with the Harbor community. If you want to reach us and collaborate with us, here are the different ways you can do that: we're on the CNCF lists, we're on Slack, we have a Twitter account, and we have regularly scheduled Zoom meetings that are friendly for both Asia and the EU as well as for the United States.
And we have a demo instance of Harbor, always available and running, at demo.goharbor.io. Feel free to go check it out and take it for a spin. Thank you, Michael. Liz, did you have anything else to finish with? I think Michael has covered it all very well, and I'll just say we've been very pleased and excited to be able to contribute this whole pluggable scanners feature into Harbor. I think it's an advantage for the open source community, end users, and vendors alike to be able to collaborate in this kind of way. Yeah, a huge shout-out to the Anchore team, the HP team, and the Aqua team for helping us bring that vision to reality and doing a lot of the heavy lifting to enable that capability. You guys have been great. And we will have a huge presence at KubeCon, including a lunch-and-learn with Joe Beda. So feel free to sign up, get some lunch, and get real hands-on experience with Harbor. Great, so thank you both for a very awesome presentation. We have had questions throughout the talk, and I know not everybody has had as complete an answer as they'd like. I would encourage you, and I don't know, Michael, if you can go back to the previous slide with the Slack details, but that would be a great place to go and ask really technical questions. One of the things, though, was: can you just compare it very quickly to Quay from Red Hat? So that comparison would not be quick. Obviously Quay is a great product in this ecosystem as well; there are pluses and minuses across both products. Feel free to come and find us on the CNCF Slack channel and, based on your needs, we can give you some of the pros and cons of each of the different tools. In a lot of ways, we're very similar to Quay. So if you tell us what you're looking for in terms of the needs of your enterprise, we can give you a fairer comparison of the different tools. Yeah, Daniel had a question relating to that as well.
And he was asking: are there other signing flows that will be supported in the future, other than Notary? Because it seems like OpenShift may not support Notary currently. Yeah, so that's exactly what I was mentioning earlier: we're working with the OCI to figure out what the signing solution should be going forward. On the Notary topic, there was a PR out there that got the community a little bit buzzing about whether Notary should be archived. That's not going to happen. Notary is not going anywhere; it's heavily used and it's maintained. And we're going to work to figure out how to best utilize Notary in this new world of replicating assets to different areas. I think that's a really good answer to that question. The only other thing that I think you may have answered is: if you have one Harbor instance, is there a secure and authenticated registry for each tenant, or is there still an element of that being shared? So we can only authenticate with a single identity federation provider. For example, if you're using LDAP, we can only authenticate with LDAP. But if you want a multi-tenant environment where each tenant brings their own identity, you can basically create your own IdP and have it federate out to the different identity providers, so you're creating a double hop in your identity federation. Tools like Auth0, for example, make that very easy. Fantastic, great. Well, if anybody wants to know anything more, this is a great slide: it's got all the stuff you need to know. Thank you to the presenters and everybody that came today. Thank you, Alex, for moderating. This has been awesome. Thank you, Alex. You're welcome. Thanks everyone for taking the time with us today. Have a good day. Bye-bye.