Hi everyone, and welcome to this webinar by Verve on CI/CD in Kubernetes. During the next half hour, I will talk about Verve, the CLI tool for implementing delivery of your application. I want to discuss some interesting Verve features and the CI/CD challenges that Verve helps solve.

My name is Alexei Igrachev. I work at Bellark, a company that provides DevOps services, and within the company I lead an R&D team that focuses on delivery to Kubernetes. We build up our expertise and best practices in close collaboration with our DevOps teams. Verve is the solution we use to implement delivery for our clients, and today I will talk about it.

What is Verve about? First, some numbers about Verve. We have been developing and using Verve in production for 7 years, and currently we have more than 5,000 active client applications that use Verve. Since December, Verve has been a CNCF Sandbox project, and the last point is our growing community.

What is Verve? First of all, Verve is a toolbox. It's a CLI tool that covers a wide range of tasks related to the container image lifecycle, application deployment, testing and distribution. One of the key highlights of Verve is its advanced features for building and deploying applications. To provide additional capabilities and functionality, Verve includes a built-in builder based on Buildah and an extended Helm. And of course, Verve is the glue that brings together the different components of the delivery process and simplifies working with them. Verve automates numerous tasks, streamlining the CI/CD workflow and facilitating efficient management of these components. That's what Verve is: a single all-in-one tool, advanced features, and glue for the delivery components.

But what for? Verve provides building blocks to use in your CI/CD pipelines for implementing consistent and efficient delivery to Kubernetes. Let's look at how it's used there. Verve can be used not only for building CI/CD, but also for local and in-cluster development. In the context of CI/CD, Verve can be used for building, tagging and publishing images; running source code and application tests in Kubernetes; deploying temporary environments such as review and dev; and deploying persistent environments such as production-like and production. If we talk about isolated environments, in such a scenario Verve can be used for application distribution in the last step of the dev pipeline, and then for application deployment in the isolated environment. In addition to Verve, other solutions can handle that last step: Helm, Argo CD and Flux. This highlights the possibility of hybrid approaches, where the user can benefit from both solutions.

A few words about project configuration with Verve. Verve uses a YAML configuration to describe how your application should be built and deployed. A typical project configuration includes several files: verve.yaml, one or several Dockerfiles, and a Helm chart (by default in the .helm directory). This configuration is enough to demonstrate Verve in action, and we will come back to it later; a rough sketch follows below.

The introduction is over, and we can move on to the central part of this webinar. I would like to talk about practical use cases and highlight some of the interesting Verve features and approaches. The first part is about the container image lifecycle. In this part, I will discuss our approach to cache management, tagging, promotion and cleanup of container images. So, let's start. And start from the beginning.
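To make this concrete, here is a minimal sketch of what such a project configuration might look like. The layout follows the description above; the exact configuration schema and field names are assumptions on my part, so check the documentation for the real ones.

    # Project layout (illustrative)
    #   verve.yaml     - build/deploy configuration
    #   Dockerfile     - how to build the application image
    #   .helm/         - Helm chart used for deployment
    #     templates/
    #     values.yaml

    # verve.yaml - a minimal configuration (field names are assumptions)
    project: myapp
    configVersion: 1
    ---
    image: backend
    dockerfile: Dockerfile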
Let's recall how caching works when we build images locally with Docker. So, we have the following Dockerfile, and we start the build. Docker builds the container image layer by layer, running the Dockerfile instructions. As a result, the build is successful and we have a built image. When we rebuild, everything is taken from the local cache. Let's modify go.mod and go.sum in the project and build the image again. This leads to a rebuild of all layers starting from the corresponding one. And when we rebuild again, everything is taken from the cache. Nothing special, because these are the basics.

Let's summarize the properties of such a build. Layers are reused between builds. Cool. Each layer is determined by the previous layer, the Dockerfile instruction, and the files used from the build context. Layers are read-only and immutable, and Docker is responsible for managing parallel builds and the cache on the host. These are nice properties, but they only work per host and don't scale to a group of runners in CI/CD.

Move on. Let's look at one more scenario with Docker: image publication. In Docker, we have three commands to do that: build, tag, and push. So — build, tag, and push. Here it is: the image is published to the container registry.

Now we have enough context to see how the same is done with Verve. Start the process. Verve builds, tags, and pushes an image for a layer into the container registry. Done. And this is done the same way for each layer. Then we run the same build, but on an arbitrary host. As a result, nothing happens, because all the layers have already been built and stored in the container registry. We don't push anything from the host and don't pull anything from the registry. Nothing at all. To sum up: layers are reused between all builds; layers are determined on the same principle as with Docker; layers are also immutable, because Verve coordinates the work of all builders.

Let's talk about image usage: what the synchronization process is and how we can use images with Verve. The synchronization is performed within all Verve commands that require images during operation. After the synchronization, the command continues with the final image, which matches the content-based tag of the last layer.

Let's look at some Verve commands in action. Suppose we want to run some source code tests, for example unit tests. We add a Dockerfile for our image, define it in verve.yaml, and then we run the Verve kube-run command. We pass the image name from verve.yaml and the command we want to run in the image. As a result, Verve runs the command in the appropriate image in Kubernetes.

Let's consider another example: we want to deploy our application to Kubernetes. In addition to the previous configuration, a Helm chart is added. In the templates, instead of an image tag, we use a special Helm value, which Verve sets at runtime. After calling the Verve converge command, the application is deployed to Kubernetes. This is how it works. In both examples, the user doesn't work with tags directly. A rough sketch of these commands follows below.

Thus, we propose to pass release artifacts through all steps and delivery environments. In such a scenario, the user does not need to think about tagging and, in general, about what is stored in the container registry. But why shouldn't users think about it? Because Verve can manage the whole container image lifecycle.
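Here is a rough sketch of the commands from the examples above. The Docker commands are standard; the Verve command names, flags and the Helm value name are assumptions modeled on what was shown in the talk, so they may differ in your version.

    # Publishing an image with plain Docker: three separate steps
    docker build -t myapp/backend .
    docker tag myapp/backend registry.example.com/myapp/backend:v1
    docker push registry.example.com/myapp/backend:v1

    # With Verve, building and publishing layers to the registry is one step
    # (command names are illustrative)
    verve build --repo registry.example.com/myapp

    # Run unit tests in Kubernetes, in the image defined in verve.yaml
    verve kube-run backend --repo registry.example.com/myapp -- go test ./...

    # Build and publish if needed, then deploy the Helm chart from .helm
    verve converge --repo registry.example.com/myapp

    # .helm/templates/deployment.yaml (fragment) - the image tag is not
    # hard-coded; Verve injects the content-based tag at deploy time
    # (the value name is an assumption)
    containers:
      - name: backend
        image: {{ .Values.image.backend }}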
Next, let's briefly look at our approach to cleaning up the container registry. Verve offers its own approach to cleanup, which takes into account the images currently in use in Kubernetes and the images related to developers' ongoing activity, based on Git. When cleaning, Verve scans all the images used in Kubernetes and ignores them. Verve also knows which images were built for which commits and keeps them according to the user's selected keep policies. And it removes the rest. Just like that.

With this, I end the part about the container image lifecycle. It was a small overview of the main Verve approaches. In the next part, we will talk about application deployment tracking. I will touch on such topics as how to wait for an application to become ready during deployment, how to stop a failed deployment in a timely manner, and how to make the deployment process transparent and verbose.

I would like to start this part by looking at what Helm can do in this context — but it is valid not only for Helm. Start with the first example. We have Helm and we deploy our application to Kubernetes. Let's run the helm upgrade command. Helm applies the manifests to Kubernetes, Kubernetes accepts them and tells Helm that the manifests have been applied successfully. After that, the Helm command completes successfully. Cool. But we need more, because we want to know when the application is ready, right?

Second example. Helm has a wait option that allows us to wait until the application becomes ready. After the manifests are applied, Helm starts checking the readiness of the application, and when the application is ready, the command completes successfully. But Helm does not guarantee that the application is really ready, because the check is based only on the status of the main resources in Kubernetes. To be honest, though, for simple scenarios Helm tracking is usually enough.

Third example. Let's talk about the timeout here. After applying the manifests, the command will exit either when the application is ready or when the timeout has expired. Let's imagine the application failed. And we wait for the timeout. Wait and wait and wait. The application failed a long time ago, but we still have to wait for the timeout. A logical question here is what timeout to set for a deployment. If you set it too low, the deployment may fail too quickly, even before it has a chance to make progress. If you set it too high, you will have to wait a long time for a failed deployment. You have to decide between two bad options.

And the last example. Again the same steps. And we wait, wait, wait, we are waiting — and we don't know what's happening with our application. We have to go to the cluster and check the status ourselves. Helm does not show any logs, and we have to track everything ourselves. The sketch below shows the Helm flags involved in these examples.

Let's look at what Verve has to offer. So, we run Helm and Verve simultaneously on the same task. As you can see, we get container logs and application logs, we get status progress for the release resources, and you can also see the desired state for each resource in the status. When the application is ready, the Verve command completes successfully. The same story with Helm, but without any observability.

Let's look at another example: a failed deployment. So, we have an error — a crash-loop back-off. Verve gives the application one more chance, but the problem persists and Verve ends the failed deployment. The command is terminated. We have all the Kubernetes events that happened during the deployment, and we have enough information to start fixing the problem. And we do that: we change the command and run the deploy again. Fixed. What about Helm? Helm just waits for the timeout, which in this demo is set to one minute.
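For reference, here is roughly what the Helm side of these examples looks like. The flags are standard Helm flags; the release name and chart path are illustrative.

    # First example: returns as soon as the manifests are applied
    helm upgrade --install myapp .helm

    # Second example: additionally wait for the main resources to become ready
    helm upgrade --install myapp .helm --wait

    # Third example: wait, but give up after a fixed timeout
    # (Helm's default is 5m; the demo used 1m)
    helm upgrade --install myapp .helm --wait --timeout 1m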
By default, Helm's timeout is five minutes. Either way, the command is terminated with an error — "timed out waiting for the condition" — which doesn't help us, and we start troubleshooting in Kubernetes ourselves. That's all. That's the big difference between Helm and Verve.

So, what have we done in the context of application deployment tracking? Smart waiting for resources to become ready: Verve has generic tracking for all resources, based on the information about each resource available in the cluster. Instant termination of a failed deployment: not only by timeout — Verve fails fast based on events in the cluster. Deployment progress, logs, system events, and application errors: Verve gives enough information for troubleshooting and simply for observability. And the last point: configurable behavior for each resource — all of the above can be configured per release resource. In this part I only talked about tracking, but Verve's deployment features are not limited to this.

At the very end, I would like to talk a little about how we propose to distribute the release artifacts of an application and deploy them in an isolated environment. We suggest using bundles for application distribution. A bundle is a way to distribute a chart and its related images as a single entity. With a bundle, you can capture the application state for future deployments. A bundle is a distribution format that follows a specific structure for organizing Helm charts. Essentially, a published bundle is just a chart stored in an OCI registry along with its images. A published bundle can be copied from one container registry to another, as well as exported from and uploaded back into a container registry; in that case, the Helm chart and the associated images are packed into a tar archive. A published bundle can be deployed using Verve, Helm, Argo CD, Flux, and other solutions that support working with Helm charts from an OCI repository. With bundles, you can reconcile the state from the container registry to Kubernetes without access to Git. A rough sketch of this flow is shown below.

The main part is over. Lastly, I would like to take a quick look at the Verve website. Open verve.io and go to the documentation section. The user immediately lands on the install-and-run page, where they will find all the necessary information to get started with Verve locally and in CI/CD. In the local development section, you will find step-by-step instructions to set up your environment and start using Verve for developing and testing applications in Kubernetes. The CI/CD section covers the instructions for configuring the CI/CD process with Verve and your CI/CD system; it includes all the necessary settings both for the CI system and for Kubernetes. The documentation also provides a complete example and a ready-to-use workflow based on best practices.

The documentation offers a comprehensive resource to explore and understand all the capabilities of Verve. Starting with overview articles and progressing through the practical sections, users can gain a deep understanding of Verve's functionality and apply it effectively. As an example, let's explore the deployment aspect of Verve. Let's start with the overview, which introduces the main features of the deployment process and gives a clear understanding of its functionality and operation. Following that, we can delve into specific areas such as tracking, which we discussed earlier.
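Coming back to bundles, here is the rough publish-copy-deploy flow mentioned above. The Verve command names and flags are assumptions modeled on the talk, and the registry addresses are illustrative; the last command uses Helm's standard OCI support.

    # Publish the chart in .helm together with the built images as a bundle
    verve bundle publish --repo registry.example.com/myapp --tag v1.0.0

    # Copy the published bundle into the registry of the isolated environment
    verve bundle copy --from registry.example.com/myapp:v1.0.0 \
                      --to isolated-registry.example.com/myapp:v1.0.0

    # Deploy the bundle in the isolated environment with Verve...
    verve bundle apply --repo isolated-registry.example.com/myapp --tag v1.0.0 \
                       --release myapp --namespace production

    # ...or with plain Helm, since a published bundle is a chart in an OCI registry
    helm install myapp oci://isolated-registry.example.com/myapp --version 1.0.0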
We also have a CI/CD guide aimed at developers, and it will be useful to DevOps engineers too. In this guide, you will find both practical step-by-step instructions and the necessary theory. It will take you from the basics to more advanced scenarios. The guides also take into account the specifics of different programming languages and include examples of application source code and the related infrastructure.

That's all I wanted to say today. If these problems seem familiar to you, try Verve: create Dockerfiles and Helm charts, and let Verve handle all the rest. That's the end of the webinar. See you again!