Hello, everybody, and welcome to my session on simplifying application deployment at the edge with Harbor. My name is Michael Michael, and I'm one of the Harbor maintainers. I'm very involved in the CNCF community: I'm also a maintainer for Contour, and I'm one of the chairs in the Kubernetes community responsible for SIG Windows. I am a director of product management at VMware, and you can find me under the alias M2 on Slack.

So what is Harbor? Harbor is a CNCF graduated project. We graduated this past summer, and we're super excited to be part of the CNCF's highest level of projects. Our mission is to be the trusted cloud-native repository for Kubernetes. We want to be very fast, very efficient, very secure, and the de facto way you store all of the cloud-native artifacts that you use with Kubernetes. You can find us on Twitter at Project Harbor, as well as on our website, goharbor.io.

In a nutshell, Harbor is an open-source registry that secures artifacts with policies and role-based access control, so you know who accesses what and who is entitled to access what among your artifacts. We ensure images are scanned and free from vulnerabilities, so you can be confident that the images and applications you're putting in production don't have known vulnerabilities, and we sign images as trusted, so you know where these images are coming from. With that, Harbor delivers compliance, performance, and interoperability to help you, as a user and as an enterprise, consistently and securely manage your artifacts.

Our community is thriving. We have over 13,000 GitHub stars, 200 committers, 250 contributing companies, 3,000 contributors, and 4,000 forks. You can see on the right here that we have a steady stream of commits to our GitHub repos since 2016.
So for the past almost five years, we've had steady contributions across the board on Harbor, and we have 40 maintainers across five companies and pretty much every continent imaginable. Our community is thriving, and we're a welcoming and open community. If you want to come in and help us further the vision of Harbor, come and join us. We have bi-weekly community meetings, we're active on Slack, we're active on Twitter. Come and engage with us. We'd love to hear your scenarios and enable your requirements with Harbor.

Now, Kubernetes is, and has been, the standard for container orchestration in the data center for quite some time. But we're now seeing tremendous interest from users who want to deploy Kubernetes clusters at the edge, and actually even further than that: they want to put Kubernetes clusters in cars and in airplanes, and with the proliferation of 5G, we're hearing about Kubernetes clusters that need to land on many edge devices, even thin edge devices. A lot of what we hear from these users is that they want to focus on simplicity, reliability, and security. They want to make sure that when they put these clusters at the edge, they can manage them from afar. They want these clusters to run autonomously and securely, with their applications always up and running, whether you're a retail customer, a manufacturing plant, or even a car.

But you can't really operate Kubernetes without a registry. One of the fundamental needs of running your cluster is that the cluster requires images that are provisioned and available so that your Docker or containerd runtime can actually run them. So there is a need here for a registry. Now, users are asking for an easy way to describe how to deliver these images where they need to run. Translated a little differently:
These users are asking for the ability to deliver images right next to the compute that's running their applications. We've heard that loud and clear in the Harbor community, and Harbor version 2.1, which shipped not too long ago, improves image distribution with new features: proxy cache and peer-to-peer support. That's a tremendous release for us.

With the proxy capability, we've extended the concept of a project, and I'm going to show that to you in a demo in a little bit. We're reusing some of the same adapters we created for replication, and now you can proxy artifacts and create a local cache with Harbor where all those artifacts are considered local. So they're right next to your compute in the edge devices where you need them. And the management policies, and all the other policies that Harbor has, can be applied to your proxied artifacts. That includes quotas and scanning and all the rest of the policies that make Harbor the tool it is today.

The second area where we've innovated is peer-to-peer. We've integrated with Dragonfly and Kraken, which are two open-source, intelligent, peer-to-peer-based systems. Essentially they act like a content delivery network, leveraging P2P to land files, in this case your artifacts, where they need to go, saving a lot of enterprise bandwidth because they do it very efficiently. You can apply a lot of policy on top of that, like host-level speed limits and flow control, and you can do encryption and all the other things that make sure that when your images land on a host, whether at the edge or in your data center, they land there with maximum efficiency.

Now, Harbor, with these capabilities, makes it possible for you to distribute your cloud-native artifacts and images, co-locating them alongside your applications running on Kubernetes.
At the same time, we enforce the same Harbor core tenets that we have today, whether you're running at the edge or in the data center. Ownership and deployment are a huge part of what makes Harbor successful: multi-tenancy, the ability to enforce RBAC rules, and project isolation. Policy: our policy engine is one of the best in the business. We have quotas, retention, immutability, signing policies, and vulnerability and scanning policies, and all of that can be applied no matter where you run Harbor. Security and compliance: we have identity and access management, we have scanning, we have CVE exceptions. And the last one, extensibility: the ability to integrate with the tools, services, and processes that you have, whether you're running in the data center or at the edge. We have webhook integration, replication integration, our pluggable scanners, our REST API, robot accounts, and CLI secrets. All of those make it possible to extend Harbor and connect it with some of the investments you've made in your data center and for Kubernetes.

So let's switch here and do a little bit of a demo. We're going to do a little demo of proxy cache, and also touch a little bit on peer-to-peer with Dragonfly. I'm going to stop sharing here and share my browser. What you're seeing here is the Harbor installation at demo.goharbor.io. I wanted to show you that deliberately, because it's a free instance of Harbor that's always available, so you can go and try out some of our latest features, and this instance is actually running the latest release of Harbor. I already went ahead and created an endpoint for the registry connected to Docker Hub, and I'm going to show that to you really quickly. The provider here is Docker Hub; I provided a name for this connection, and it connects to Docker Hub using my own personal account.
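As a side note, the registry endpoint created in the UI here can also be set up through Harbor's v2.0 REST API. Below is a minimal sketch, not the exact call from the demo: the endpoint name, credentials, and the `HARBOR_USER`/`HARBOR_PASS` variables are all placeholders, and field names follow my reading of the Harbor v2.0 API.

```shell
# Build the payload for a Docker Hub registry endpoint (usable as a
# proxy-cache or replication source). All values here are placeholders.
payload='{
  "name": "dockerhub",
  "type": "docker-hub",
  "url": "https://hub.docker.com",
  "credential": {"type": "basic", "access_key": "my-user", "access_secret": "my-token"}
}'
echo "$payload"

# Against a live Harbor instance, you would POST it to the registries API:
# curl -u "$HARBOR_USER:$HARBOR_PASS" -H "Content-Type: application/json" \
#      -X POST -d "$payload" https://demo.goharbor.io/api/v2.0/registries
```

The "Test Connection" button in the UI corresponds to a similar `POST /api/v2.0/registries/ping` call.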
I went ahead and verified it, so we can test that connection, see that it tested successfully, and click and exit out of here. Now, with that in mind, I'm going to go ahead and create a new project, and I'm going to call my project kubecon-demo, and hopefully nobody else has taken that name. I'm going to mark it as public so I don't have to deal with a username and password to connect to this project right now. And I'm going to indicate here, and this is a new feature, that this is a proxy cache. By enabling this as a proxy cache, I get to link it to one of the existing connections I've added under registries. Remember, the connections we have under registries are also reused with some of the replication providers that we have, because we know how to connect and replicate content in and out of pretty much every popular registry out there, including Docker distributions. In this case, I'm going to connect to Docker Hub and click OK.

So now we've created our project here, and it's empty; there's nothing in it. I'm going to go ahead and pull up my command window, and you're going to see that in a second. Here's my command window, and I'm going to show that I don't have any Docker images here at all. It's empty. I named my project kubecon-demo, so the Docker command I'm going to run now is docker pull demo.goharbor.io/kubecon-demo/michmike/nvgo. michmike is one of my repositories on Docker Hub, and nvgo is one of the images I have there. I'm going to go ahead and pull this, and what's going to happen behind the scenes is that Harbor gets called first. Harbor will realize that it doesn't have this image, pull it from Docker Hub, and once it replenishes the cache, Harbor will have it and provide it to me. So if I do docker images here, I'm going to see that I was able to download this image. It's about 1.0-something megabytes, so it's a fairly small image, and I have an image ID here and some of this other info.
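The pull shown here follows a fixed naming pattern: the Harbor host, then the proxy-cache project, then the upstream namespace and image. A sketch of that convention, using the demo's host and repository names (swap in your own Harbor host, project, and upstream repository):

```shell
# Proxy-cache pulls address the image as:
#   <harbor-host>/<proxy-project>/<upstream-namespace>/<image>:<tag>
harbor_host="demo.goharbor.io"
project="kubecon-demo"         # the proxy-cache project created above
upstream_repo="michmike/nvgo"  # the upstream Docker Hub repository

image_ref="${harbor_host}/${project}/${upstream_repo}:latest"
echo "$image_ref"

# On a machine with Docker and network access to the Harbor instance:
# docker pull "$image_ref"
# The first pull fills the cache from Docker Hub; later pulls are served locally.
```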
So let's switch back to Harbor, and if I refresh this page, I'm going to see this image in Harbor. There's kubecon-demo/michmike/nvgo, and it has a SHA here. I'm going to go ahead and copy the SHA just to show you that this is the same SHA we'd see for this image if we were to look at it in Docker Hub. So, connecting to the same image in Docker Hub, there's the same digest you see here; it's a Linux AMD64 image. And now in Harbor we have that, and it's available and cached. The next time we request it, Harbor will not go and pull it down from Docker Hub; it will just do a manifest check to make sure this is the latest image, and Harbor will keep a local cache of your image. So the more images you bring in, the more of them are cached, and then you can apply all the different policies of Harbor: you can scan them, you can apply quota policies and tag immutability, you can add webhooks, or configure scanning and CVE exceptions. All of the project capabilities that Harbor provides will now be available to you with this proxy cache that you've enabled.

So if you're at the edge and you have deployed Harbor at the edge, you can use Harbor as a proxy cache to bring all your images in from the public cloud, or from whatever your other Harbor or registry instance is, and keep them locally available right next to the Kubernetes clusters running your compute. So even if your network connection is severed or unreliable, your images are always available in Harbor, right next to your compute.

The next thing I want to show you is this environment where I have Dragonfly enabled. I already went ahead and created the connection to Dragonfly, but I'm going to show it to you all.
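Before we get to peer-to-peer, one aside on the proxy cache: the digest comparison done in the UI above can also be done against Harbor's v2.0 API, which lists a project's artifacts along with their digests. A sketch of the URL construction, using the demo's names; note the `/` in a nested repository name must be URL-encoded (some Harbor versions require double encoding, `%252F`):

```shell
# Harbor v2.0 API: list artifacts (and digests) for a repository in a project.
base="https://demo.goharbor.io/api/v2.0"
project="kubecon-demo"
repo="michmike%2Fnvgo"   # "michmike/nvgo" with the slash URL-encoded

artifacts_url="${base}/projects/${project}/repositories/${repo}/artifacts"
echo "$artifacts_url"

# Against a live instance, this returns JSON including each artifact's digest,
# which should match the SHA shown for the same tag on Docker Hub:
# curl -s "$artifacts_url" | grep -o '"digest":"sha256:[0-9a-f]*"'
```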
So the provider here is Dragonfly, though I could have chosen Kraken as well, and it has an endpoint; it's API-driven. I didn't enable authentication here, but obviously you can enable authentication, basic or otherwise, depending on how you have things set up. Then we can test the connection here, we see that Dragonfly is enabled, and I'm going to exit out of this.

Next, I've created a new project here, and I'm going to call it p2p-demo, just to show you really quickly what this looks like. I can go into this project and push any number of images that I'd like, and the most important part here is the P2P preheat option. I can go into this option now and create a policy. What this policy will do is take images from this project that I've defined in Harbor, push them into the P2P provider, Dragonfly or Kraken, and preheat them so that they can be distributed based on the P2P policies you have defined in that tool, whether it's Dragonfly or Kraken. So in this case you get to pick the provider, and that's Dragonfly; we give it a name, say "preheat policy" or "p2p-demo," and you can add a description. Then you get to decide what filters you want: do you want to pick up every repository out there, or only certain repositories and tags? For example, I'm going to put ** here so it will pick up all repositories, and then the tag is latest. So now we will pick up all repositories with the tag latest, or you can add one or more tags. And then you can define how you want that preheat to be triggered. Do you want it to be manual, as in you say, "I want you to preheat now"?
It can also be schedule-based, so you can give us a cron expression, for example, if you pick custom, or choose hourly or daily. Or you can say you want it to be event-based, and the events here are when an artifact is pushed, labeled, or scanned; based on those actions happening, for example a new image being pushed, it will immediately get preheated and sent to the P2P provider.

So that's it for today. Hopefully you enjoyed this short preview of Harbor and how we can enable you to operate at the edge, right alongside your Kubernetes clusters, and meet your application and enterprise needs for edge use cases. Thank you so much.
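As a footnote to the preheat demo: the filters and trigger walked through above correspond roughly to a JSON payload in Harbor's v2.0 preheat API. This is a hedged sketch, not the exact payload from the demo: field names follow my reading of the v2.0 API, and `provider_id` is a placeholder for the Dragonfly instance registered earlier. Note that `filters` and `trigger` are themselves JSON documents embedded as strings.

```shell
# Preheat policy payload sketch (field names per the Harbor v2.0 preheat API).
# Filters: every repository ("**"), only the "latest" tag, mirroring the demo.
# Trigger: "manual"; "scheduled" (with a cron setting) and "event_based" also exist.
payload='{
  "name": "p2p-demo",
  "provider_id": 1,
  "enabled": true,
  "filters": "[{\"type\":\"repository\",\"value\":\"**\"},{\"type\":\"tag\",\"value\":\"latest\"}]",
  "trigger": "{\"type\":\"manual\"}"
}'
echo "$payload"

# Against a live Harbor instance (credentials omitted), you would POST it:
# curl -u "$HARBOR_USER:$HARBOR_PASS" -H "Content-Type: application/json" \
#      -X POST -d "$payload" \
#      https://demo.goharbor.io/api/v2.0/projects/p2p-demo/preheat/policies
```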