Hello, everyone. My name is Chen Yuzheng. I'm an engineer from VMware, and I'm currently working on Project Harbor. As you know, security is gradually receiving more and more attention, and it's a very important aspect for enterprise users. So today, our session will discuss how to use Harbor and Project Narrows to reshape security in cloud native. There are two parts to this session. In the first part I will introduce Project Harbor, and in the second part my colleague Simon will present Project Narrows. Harbor is an open source trusted cloud native registry that stores, signs, and scans content. Harbor extends the open source Docker Distribution by adding the functionalities usually required by users, such as security, identity, and management. Having the Harbor registry close to the build and run environment can improve image transfer efficiency. Harbor also supports replication of images between registries, and offers advanced security features such as user management, access control, and audit logs. The mission of Harbor is to help users consistently and securely manage artifacts for Kubernetes. Let's go through the core capabilities that Harbor provides. Multi-tenancy is important for enterprise users. Harbor provides RBAC: users can be assigned different roles with different resource permissions, and different teams can manage their resources on their own thanks to project isolation. Through policy, users can manage quotas for different projects. When a quota is reached, the user can use tag retention and garbage collection to clean up useless artifacts in Harbor and release storage. If users want to protect artifacts that have already been released or published, they can apply an immutability rule that matches specified repositories and tag names. Vulnerabilities can be managed by policy as well: users can add false-positive CVEs to the system-level or project-level allow list to bypass the deployment security restrictions.
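To make the allow-list idea concrete, here is a minimal sketch of the JSON body Harbor's v2 REST API accepts for the system-level CVE allow list (PUT `/api/v2.0/system/CVEAllowlist`); the CVE IDs below are placeholders for illustration, and you should check field names against your Harbor version's API docs.

```python
import json

def cve_allowlist_body(cve_ids, expires_at=None):
    """Build the request body for Harbor's system-level CVE allow list
    (PUT /api/v2.0/system/CVEAllowlist). expires_at is a Unix timestamp;
    None means the entries do not expire."""
    return {
        "items": [{"cve_id": c} for c in cve_ids],
        "expires_at": expires_at,
    }

# Placeholder CVE IDs for illustration only.
print(json.dumps(cve_allowlist_body(["CVE-2022-0001", "CVE-2022-0002"])))
```

Entries in this list are treated as accepted risks and no longer block deployment-security checks.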
For the distribution of artifacts, Harbor also provides rich functionality. In addition to Harbor, there are many well-known registries in cloud native, such as Docker Hub, AWS ECR, Azure ACR, and so on. So Harbor provides the ability to copy artifacts between Harbor and these third-party registries. This is a very convenient and useful function for users who want to copy artifacts from Harbor to their registries as a backup, or migrate from another registry to Harbor. In addition, Harbor currently also supports acting as a proxy. The proxy caches remote artifacts on Harbor; when a remote artifact is updated, the cached image on Harbor will also be updated by the user's pull requests. In scenarios where the remote registry's network is limited or there are rate limits on its API, this lets users pull the artifacts they want directly from Harbor instead of from the remote registry. At the same time, as the scale of enterprise Kubernetes clusters grows, a single central registry is often unable to serve the pull requests of a large number of nodes in a short period of time, so some enterprise users use P2P to speed up the distribution of artifacts. Harbor therefore also integrates a P2P preheating function. At present, Harbor can preheat artifacts to Dragonfly from Alibaba and Kraken from Uber. Preheating artifacts to the P2P network in advance can speed up later pulls when they are required. Harbor also provides IAM, artifact signing and scanning, and CVE exceptions to guarantee security and compliance. In terms of extensibility, Harbor supports configuring webhook notifications per project, sending notifications to consumers when events occur in Harbor. The pluggable scanner framework gives users freedom of scanner choice, and scanners from different vendors can be connected to Harbor at the same time.
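As a sketch of the proxy-cache setup: a proxy cache project is created like a normal project, but with a `registry_id` pointing at the upstream registry (POST `/api/v2.0/projects` in the Harbor v2 API). The project name and registry id below are placeholders.

```python
import json

def proxy_cache_project_body(name, registry_id, storage_limit=-1):
    """Request body for POST /api/v2.0/projects. Setting registry_id turns
    the project into a proxy cache for that upstream registry; a storage
    limit of -1 means unlimited quota."""
    return {
        "project_name": name,
        "registry_id": registry_id,   # id of a registry added under Registries
        "storage_limit": storage_limit,
        "metadata": {"public": "false"},
    }

# "dockerhub-proxy" and registry id 1 are placeholder values for this sketch.
print(json.dumps(proxy_cache_project_body("dockerhub-proxy", 1)))
```

Clients then pull through the proxy project (e.g. `harbor.example.com/dockerhub-proxy/library/redis`) instead of hitting the upstream registry directly.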
In addition, in CI/CD scenarios, robot accounts can interact with Harbor more conveniently and effectively while still ensuring security. At the same time, all functions of Harbor are exposed externally through a REST API to facilitate user calls. This is the architecture of Harbor, from top to bottom. The top layer is the client, such as the kubelet and the Docker client. The outer layer of the Harbor service has a proxy, perhaps an Nginx ingress, which is responsible for forwarding traffic to the corresponding components. Then comes the core service of Harbor, which is responsible for processing all API requests. Below it are some other component services, such as the job service that handles asynchronous tasks. The bottom data layer depends on data services such as Redis and Postgres. On the left are some identity providers and monitoring-related integrations, and on the right is a list of integrations with third-party scanners and artifact registries. Next, we will dive into replication and scanning, as Narrows mainly relies on these two functions. The goal of replication is to pull artifacts from a remote registry into the local Harbor, or push artifacts from the local Harbor to a remote registry. From the picture, we can see that resources are defined internally and each resource has its own manager, such as the policy and registry managers. The controller is responsible for processing the whole operation. Eventually, the replication job is submitted to the job service. Finally, the job is executed by a job service worker, which syncs its status back to Harbor core via a hook. The interrogation service handles the scanning of artifacts. Harbor defines a common spec for pluggable scanners; the spec is the contract between Harbor and a scanner. So the vendor of a scanner should implement an adapter service that follows the spec to connect their scanner to Harbor. A scan request is also converted into a job for the job service.
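To make the adapter contract a bit more concrete: the pluggable scanner spec has the adapter expose endpoints such as GET `/api/v1/metadata`, POST `/api/v1/scan`, and GET `/api/v1/scan/{id}/report`. Below is a sketch of the kind of metadata document an adapter might return; the scanner name, vendor, and version are placeholders, and the exact MIME-type strings should be checked against the spec.

```python
import json

# Sketch of an adapter's /api/v1/metadata response; the shape follows the
# Harbor pluggable scanner spec, the values here are placeholders.
adapter_metadata = {
    "scanner": {
        "name": "example-scanner",   # placeholder scanner identity
        "vendor": "Example, Inc.",
        "version": "1.0.0",
    },
    "capabilities": [
        {
            # what the adapter can consume (image manifest media types)...
            "consumes_mime_types": [
                "application/vnd.docker.distribution.manifest.v2+json",
            ],
            # ...and the vulnerability report format it produces for Harbor
            "produces_mime_types": [
                "application/vnd.scanner.adapter.vuln.report.harbor+json; version=1.0",
            ],
        }
    ],
}

print(json.dumps(adapter_metadata, indent=2))
```

Harbor calls the metadata endpoint to discover what a registered scanner can do, then submits scan requests and polls for the report.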
The job sends HTTP requests to the adapter service and waits to collect and aggregate the summary report. Such an architecture makes it possible to scan one artifact with multiple scanners to enhance security. At last, let's take a look at the two features in a demo. By default, there is a library project in Harbor. A project is a concept in Harbor; repositories and artifacts are managed within a project. You can see Replications and Interrogation Services in the Administration sidebar. These two functions are system-scoped, so please make sure you're logged in with a system admin account. Firstly, let's check replication. Replication is managed by replication policies, so you need to add a new replication rule before you use it. But before setting the rule, you need to add the remote registry to Harbor, because Harbor needs some metadata about the remote registry. Click Registries to manage the remote registries. There are some configurations that need to be provided. Currently Harbor supports many third-party registries such as ECR, ACR, Docker Hub, and so on. Then name your remote registry. The description is optional. The endpoint URL is required; it can be an IP address or an FQDN that is reachable from Harbor. The access ID and access secret are used when your remote registry is private and Harbor needs credentials to pull or push images from it. The last configuration is the certificate: you can uncheck the checkbox to disable Harbor's verification of the remote registry's certificate, which is useful if your remote registry was deployed with a self-signed certificate. Finally, we can test the connection, and if there is no problem, we can save it. Now for the demo, let's use Docker Hub as the remote registry. Okay, the connection test succeeded, so let's click to save it. So far the registry has been added successfully. Let's go back to add a new replication rule. You need to name your rule; let's use "test". The description is optional. And there are two modes: push-based and pull-based.
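The registry settings just walked through map onto a single API call, POST `/api/v2.0/registries`. Here is a sketch of the body for the Docker Hub case in the demo; the endpoint and credential values are placeholders, and `insecure: true` corresponds to unchecking certificate verification in the UI.

```python
import json

def registry_body(name, reg_type, url, access_key=None, access_secret=None,
                  insecure=False, description=""):
    """Request body for POST /api/v2.0/registries in the Harbor v2 API."""
    body = {
        "name": name,
        "type": reg_type,        # e.g. "docker-hub", "aws-ecr", "azure-acr"
        "url": url,
        "insecure": insecure,    # True disables certificate verification
        "description": description,
    }
    if access_key:               # credentials only needed for private registries
        body["credential"] = {"access_key": access_key,
                              "access_secret": access_secret}
    return body

# Placeholder endpoint for the demo's Docker Hub registry, no credentials.
print(json.dumps(registry_body("dockerhub", "docker-hub",
                               "https://hub.docker.com")))
```

Harbor's "test connection" button exercises the same metadata before the registry is saved.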
Push-based means pushing images from the local Harbor to a remote registry. Pull-based means pulling images from a remote registry to the local Harbor. They are two replications in opposite directions. The next section is the source resource filter. You can define customized rules for replication, such as matching different names, tags, or labels. You can go through the tooltips for more detailed guidance. Then choose the registry that was added in the previous step. The destination namespace is the resource under which you want to put the image; for Harbor, it's the project name. If we leave it empty, it will use the same name as the source. You can also flatten the nested repository structure by configuring the pattern. There are three trigger modes in Harbor. The default is manual: manual means you need to run the replication by manually calling the Harbor API or clicking in the Harbor UI. Scheduled means the replication can be triggered periodically by providing a cron string. The last is event-based, which is especially useful when you want to back up newly pushed images to the remote registry: a replication is triggered whenever a new image-push event happens. You can also check the checkbox if you want to replicate deletion operations; in simple words, Harbor will delete the image from the remote registry if the deletion happened on the local Harbor. Set the bandwidth if you want to limit the network input or output of the replication job. The last option is override: enable this to override the remote resource if one already exists with the same name as the source. For the demo, let's replicate just one Redis image from Docker Hub to the local Harbor and leave the other options at their defaults: name the rule "test-pull-redis", filter on redis, and replicate only the 5.0 tag. Okay, then click Replicate to trigger the job. After triggering it, you can find it in the execution history, where you can watch the status and progress.
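All of the rule options above correspond to fields on the replication policy object (POST `/api/v2.0/replication/policies`). Below is a sketch of the demo's pull-based rule; the registry id and filter values are placeholders, and the field names should be checked against your Harbor version's Swagger definition.

```python
import json

def pull_replication_policy(name, src_registry_id, name_filter, tag_filter):
    """Request body for POST /api/v2.0/replication/policies: a pull-based,
    manually triggered rule filtering by repository name and tag."""
    return {
        "name": name,
        "src_registry": {"id": src_registry_id},  # pull-based: source is remote
        "dest_namespace": "library",              # destination project in Harbor
        "filters": [
            {"type": "name", "value": name_filter},
            {"type": "tag", "value": tag_filter},
        ],
        "trigger": {"type": "manual"},  # or "scheduled" / "event_based"
        "enabled": True,
        "override": True,   # overwrite the destination artifact if it exists
        "deletion": False,  # do not replicate deletion operations
    }

# Placeholder registry id and filters matching the demo's redis:5.0 pull.
print(json.dumps(pull_replication_policy(
    "test-pull-redis", 1, "library/redis", "5.0")))
```

Setting `src_registry` (rather than `dest_registry`) is what makes the rule pull-based; a push-based rule would reference the remote registry as the destination instead.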
Also, if you want to see more task details, you can click the execution ID. On the task page, you can see how many ongoing tasks are running under the job, and click the Logs button to follow the running logs of the tasks during replication. Let's wait a moment for the replication job to finish. Okay, it has already succeeded. Let's go to the library project to check whether the Redis image has been copied to the local Harbor. Yep, it is located here, and you can click the digest for more artifact details. All right, this was a simple demo of replication in Harbor. But after users have pushed or replicated images to Harbor, how can we guarantee the security of those images? Now the interrogation service comes into play. Click Interrogation Services to check the scanners in Harbor. By default, Trivy is the built-in default scanner for Harbor. You can add other scanners by providing some scanner information. After adding one, you can click it to see more metadata, such as the scanner vendor and version, and more specific configurations. If you have multiple scanner instances, you can choose one as the default, and the choice can also be customized per project. Vulnerability scans can be triggered on a schedule, like replication, which helps keep your images secure in a timely manner. Now let's go back to the artifact page to scan the Redis image. After clicking the Scan button, you can see the scan is in progress. Let's wait a moment, because the scanner needs to pull the image layers and analyze the vulnerabilities in the blobs. You can then see the vulnerabilities Harbor found in this image: icons and counts of vulnerabilities at different severity levels will show up. By clicking the artifact, you can see more detailed information such as CVE IDs, descriptions, affected packages, and fixed versions. Okay, now let's try to pull the image with the Docker client. The image was pulled successfully, but this image, which includes vulnerabilities, may threaten the security of the application.
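For reference, clicking Scan in the UI issues a POST to the artifact's scan endpoint. The helper below just builds that URL path; note that a repository name containing slashes must be double-encoded in the path (`a/b` becomes `a%252Fb`). The Harbor host name here is a placeholder.

```python
from urllib.parse import quote

def scan_url(base, project, repository, reference):
    """Path for POST /api/v2.0/projects/{p}/repositories/{r}/artifacts/{ref}/scan.
    Repository names containing '/' are double-encoded: a/b -> a%252Fb."""
    repo = quote(quote(repository, safe=""), safe="")
    return (f"{base}/api/v2.0/projects/{project}"
            f"/repositories/{repo}/artifacts/{reference}/scan")

# Placeholder Harbor host; in the demo the artifact is library/redis:5.0.
print(scan_url("https://harbor.example.com", "library", "redis", "5.0"))
```

The reference can be a tag or a digest; the scan report is then fetched from the artifact's vulnerabilities endpoint once the job completes.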
We can set a policy to disallow pulling images with vulnerabilities. Let's go to the project configuration page and check "Prevent vulnerable images from running". You can choose the severity level based on your scenario; here the Redis image contains critical vulnerabilities, so we choose Critical. Then save the configuration. Let's try to pull the image and see what happens. Now you can see that Harbor has prevented the pull of this image because it includes critical vulnerabilities. But if some vulnerabilities are false positives, or have no effect on your application after assessment, how can we bypass this? You can add them to the system or project allow list. Let's go back to the artifact page, find the two critical CVE IDs, and copy them into the allow list. The first one is this, and the other one is this. Now let's save it. Okay, let's try to pull the image again with the Docker client. Yep, this time the image can be pulled successfully: although it includes two critical vulnerabilities, they have been added to the allow list, so this is as expected. Beyond this, Harbor has more advanced security-related functions waiting for you to explore to protect your artifacts. That's all for the demo, thank you. Hi everyone, I'm Simon. I'm an architect from VMware. I'm going to give a brief introduction to a Harbor-related open source project, Project Narrows. Let's watch a short promo film first. Harbor is the number one trusted cloud native registry for on-premise container images, and it's trusted for good reason. It uses third-party static scanning tools whenever an image is created to ensure the images are free from vulnerabilities. And while static scanning is valuable, it doesn't prevent multi-step or supply-chain attacks. Some malware contains code that only activates during runtime, and by then it's too late. Today we're announcing Project Narrows, which adds dynamic scanning to Harbor.
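The project settings used in this part of the demo are plain metadata fields on the project (PUT `/api/v2.0/projects/{name_or_id}`). A sketch follows; Harbor stores these metadata values as strings, and the CVE IDs are placeholders standing in for the two criticals found in the demo.

```python
import json

def deployment_security_metadata(severity="critical", allowed_cves=()):
    """Body for PUT /api/v2.0/projects/{id}: block pulls of images whose
    vulnerabilities are at or above `severity`, with a project-level
    CVE allow list for assessed false positives."""
    return {
        "metadata": {
            "prevent_vul": "true",               # "Prevent vulnerable images from running"
            "severity": severity,                # low|medium|high|critical
            "reuse_sys_cve_allowlist": "false",  # use the project list below
        },
        "cve_allowlist": {
            "items": [{"cve_id": c} for c in allowed_cves],
        },
    }

# Placeholder CVE IDs; the demo's real IDs come from the scan report.
print(json.dumps(deployment_security_metadata(
    "critical", ["CVE-2022-0001", "CVE-2022-0002"])))
```

With `reuse_sys_cve_allowlist` set to `"false"`, the project's own allow list (rather than the system-wide one) decides which CVEs are exempted from the pull block.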
It allows you to assess the security posture of Kubernetes clusters at runtime, so vulnerabilities are identified, images are flagged, and workloads can be quarantined. Project Narrows runs on the workload cluster and looks at the full end-to-end lifecycle of an image in a container. You can easily analyze the data collected, assess the security posture of your workloads, generate reports, and enforce predefined policies. Get ready to meet your compliance needs by adding dynamic scanning to your security arsenal today. Okay, as we all know, there are three major challenges in the cloud native security area: misconfigurations, known or unknown vulnerabilities, and exposure of secrets. Currently, organizations typically implement a cloud native security strategy to ensure security, and generally the strategy considers principles such as shifting security left, continuous security controls, CI/CD pipeline integration, and traceability, accountability, and visibility. At the same time, these three major challenges are exactly the challenges that Project Narrows, along with Harbor, aims to address. Today, cloud native users leverage Harbor to provide static analysis of vulnerabilities in images using scanners such as Trivy and Clair. Static analysis scans the images after they've been pushed to a registry. Project Narrows provides a unique addition alongside Harbor, as it allows users to assess the security posture of a Kubernetes cluster at runtime. This means images will be scanned at the time they are introduced to a cluster, so vulnerabilities are caught in real time, images are flagged, and workloads can be quarantined, further completing the security story of Kubernetes infrastructure. We designed and delivered a set of capabilities that enable users to assess the runtime security posture of their Kubernetes clusters and protect them from vulnerabilities and attacks.
With Project Narrows, users can know clearly the overall security posture of their Kubernetes clusters, make sure the actual security situation matches their security compliance expectations, and be alerted on any breach. In the meanwhile, users can set up a policy to quarantine workloads sourced from vulnerable images, stopping the propagation of the risk. Furthermore, it can also scan the Kubernetes cluster for misconfigurations following the CIS benchmark. Additionally, Project Narrows has already been integrated with VMware Application Catalog (VAC) as well. Once VMware Application Catalog delivers images to a registry, it does not have any awareness of the runtime security information of the packages it provided. But with Narrows, VMware Application Catalog and Project Narrows work together to deliver a better experience for IT managers who need to govern OSS application catalogs. With Project Narrows integrated, VAC can provide meaningful security alerts and help catalog administrators detect latent threats due to outdated and end-of-life software. Project Narrows is also very easy to integrate with different platforms: it can run on any Kubernetes platform. It is also a unified security platform: all the security-related information can be gathered into a central place for analysis. And it is an open source project that is free to use and extend. Here is a typical user journey for this project. Images are first replicated to Harbor from third-party registries such as Docker Hub. Then images can be scanned in Harbor, and the security data is generated in Harbor. After that, the security data is consumed by Kubernetes clusters that have Project Narrows installed. Finally, the scanning results for the Kubernetes clusters, along with all the misconfiguration information of the clusters, are gathered. Okay, let me show you how to install Project Narrows from scratch.
So after you clone the GitHub repo of Project Narrows locally, you can just execute this command to deploy Project Narrows to your Kubernetes cluster. The installation script will check all the dependencies you need to install this project. After that, run this command. Then you can find that some new namespaces have been created; all the components of Project Narrows are deployed in this namespace. You can also find a namespace for cron jobs, which is created after the user sets up a policy. In this namespace, a cron job is triggered periodically to scan the workloads in the cluster. At the same time, you can find in this namespace that we have an OpenSearch instance installed, so after a scanning job finishes, by default all of the reports are gathered into the OpenSearch instance. Okay, so the installation is all set. Let's watch a demo of the portal. The environment we are using is set up with VMware Application Catalog and Harbor, and is ready to use Project Narrows. Under Settings, the platform admin must first create a secret to connect to Harbor and VAC, then specify the security data sources and fill in their endpoints; today we will use Harbor and VAC. Now that the configurations are complete, the security auditor can specify the scanning rules in the Policy section. To create a policy, there are a number of fields to fill out, including how often the scan should run and the scanners they would like to enable. Users can also fill out the configuration of their OpenSearch instance, so all the generated reports can be aggregated into a central place. VAC is a customizable selection of trusted, prepackaged application components that are continuously maintained and verified for use in production environments. With the involvement of VAC, several OSS inspection use cases open up for users.
After that, the user can define baselines to set up the security expectations. This is important because workloads that violate any of the defined baselines will be flagged. Here you can choose whether you want to quarantine the workloads that are flagged. To view the reports, you can find them in the Report section. The application developer and the security auditor are going to care most about these areas. We have three types of reports generated, corresponding to the three kinds of scanners we specified in the policy. The image risk report view shows the history of the numbers of containers scanned by Project Narrows. We are going to dig into one to see more detailed information. In this report, the default namespace was scanned; we will drill down into the Apache container. This container is offered and continuously maintained by VAC, so we can get that information here. The cluster vulnerability report shows the reports from kube-bench, one of the scanners specified in the policy; kube-bench scans for misconfigurations and checks whether Kubernetes is deployed securely. Then we come to the risk scanning report section. The software packages inside the workload containers are scanned by the risk scanner, and here are the scanning results. We also get a score measuring the severity of the vulnerability for each CVE. Another area for the security auditor is to view not only the security posture but also the risk trends, which we organize into three categories: cluster, namespace, and workload. Okay, so that's all for the demo. For more detailed information, please visit our GitHub repo. We look forward to engaging with the cloud native community, getting feedback, and learning how people want to adopt and use these capabilities. If you are interested in working with us more closely, please email us at narrows.vmall.com to discuss the possibilities of becoming an early customer, user, or other potential partnership opportunities. Thanks for joining us today.