Hello and welcome to the Continuous Delivery Foundation presentation on Google Summer of Code 2021. My name is Martin D'Anjou, and in this presentation the students will present the work they did during Google Summer of Code 2021, combining Phase 1 and Phase 2 of their work. This presentation is actually a montage of Part 1 and Part 2, which took place on two different days; we're combining the two into a single presentation for this recording. Today we're going to be talking about the projects our students worked on. But first, we're going to have an introduction to the Continuous Delivery Foundation. Then our students are going to present their work. This will be followed by questions and answers. Links to the Phase 1 demo slides and Phase 2 demo slides are provided here. I would like to invite Tara to present the Continuous Delivery Foundation. So the Continuous Delivery Foundation was founded by Google, Netflix, and CloudBees back in, I want to say, 2018 or 2019. It's a little early in the morning for me, sorry, I don't remember the exact dates. It's a sub-organization of the Linux Foundation, with the purposes of furthering the developer tool stack, in particular as it relates to continuous delivery, and helping drive industry standards. We became a sponsor for Google Summer of Code last year for the first time. And Jenkins, as a long-standing project, has a long-standing history with Google Summer of Code, so the foundation joined in along as well, and we've had students contribute to its projects too. So it continues to be an area where we love to see investment. Next slide. Martin, next slide. So it's our second year. As I said, we had 22 project proposals, and we were able to accept six projects this year, with three to four mentors per project. It's really great; the various projects get really excited to support the students in their efforts. This year we had good project proposals for Jenkins and Spinnaker.
Additional projects that could be considered would include Jenkins X, Tekton, and Screwdriver, as well as Ortelius. And then next year we will have another new project online for students to take a look at, which is Shipwright. So we're expanding the number of projects that are available, and hopefully you can spread the word amongst your friends if they want to apply with student proposals next year; we will hopefully have a plethora of opportunities. I am pleased to report that all the students passed their midterm evaluations. I haven't seen the latest results yet for the final, but it has been a very successful Summer of Code season. Next slide. I would like to extend a thank you to our students and mentors. I also want to extend an invitation: hopefully all of you have been made aware that you have the ability to record a lightning talk. It could be the talk that you recorded for this, or another variation, to submit to DevOps World. We have reserved time there, so you can have a chance to see your project at an industry event, which we hope you will choose to join. As a reminder, we have a very firm code of conduct, and I hope that your experience as a student with one of the Continuous Delivery Foundation projects was a positive one. So thanks again. Back to you, Martin. Thank you, Tara. Before we get started with the presentations, I would like to invite mentors or org admins to say a few words. I would say a few things, if you don't mind. Of course. Yeah, first of all, thanks to all six students working on the Continuous Delivery Foundation GSoC projects, and thanks to everyone who was working on Jenkins. We had five great projects; four of them are focused on Jenkins in the cloud and cloud deployments. If you take a look at these presentations, Cloud Events, remote monitoring with OpenTelemetry, and the security validator for the Jenkins Kubernetes Operator, all of them strengthen Jenkins' position in cloud environments.
And this is exactly what we need for the project; all of these projects are an important part of the Jenkins roadmap. So thanks a lot to mentors and students, and we're looking forward to seeing these projects adopted by Jenkins users. Thank you, Oleg. Two organizations participated as part of the Continuous Delivery Foundation in the Google Summer of Code this year. The first is the Jenkins organization. You can find the Jenkins organization's Google Summer of Code channels from their main page on jenkins.io. They also use a Gitter chat. You can also find them on specific channels of the CDF Slack. They also use a Discourse site at community.jenkins.io, and there's the mailing list. The second organization that participated under the umbrella of the CDF is Spinnaker. You can find Spinnaker at spinnaker.io, and you can also find them in the Slack workspace, spinnakerteam.slack.com, under the #gsoc-2021 channel. The first three demos are going to be Git credentials binding for sh, bat, and PowerShell, followed by the conventional commits plugin for Jenkins, and lastly try.spinnaker.io. In the second part of the demos by our students, there will be three presentations: the Cloud Events plugin for Jenkins, the security validator for the Jenkins Kubernetes Operator, and Jenkins remote monitoring. Git credentials binding for sh, bat, and PowerShell. Today I'm going to talk you through my presentation on Git credentials binding, and we'll discuss the results that the mentors and I have achieved so far during the GSoC 2021 program. So let's get started. First, the project overview. The project involves extending the credentials binding plugin to create custom bindings for two types of credentials: username/password and SSH private key. These bindings are then used to automate the authentication task when performing any Git operation using command-line Git through sh, bat, or PowerShell in a pipeline job. Now, why were these bindings required, and what was the motivation behind them?
Firstly, when it comes to performing a Git operation using a pipeline script, there is not much support provided, and users had to depend on various workarounds through the credentials binding plugin or the environment directive. So the solution was to use the withCredentials wrapper, backed by the credentials binding plugin, which takes the user's credentials and supplies them automatically when a Git operation asks for authentication. Coming to the objectives of the project: the first objective was to provide authentication support for the HTTPS protocol and the SSH protocol. Then comes the targeted audience, which is pipeline job users. Another requirement was that the binding should support command-line Git version 1.8.3 and later. Also, it should be available on different operating systems. The last one was that it should support not only a Jenkins controller, but also a Jenkins agent. Now the results for Phase 1. During this phase, we were able to achieve Git authentication support for the HTTPS protocol, and it was released as the Git username password binding in Git plugin version 4.8.0. This binding supported both freestyle projects and pipeline jobs. The results for Phase 2: in this phase, we were able to achieve Git authentication support for the SSH protocol, with support for the OpenSSH, PEM, and PKCS8 private key formats. Also, four encryption algorithms were supported, namely RSA, DSA, ECDSA, and Ed25519. The support for private key formats and encryption algorithms is provided through the Bouncy Castle API plugin and the SSHD plugin. This binding is also available for both freestyle projects and pipeline jobs. Now I'm going to showcase the demo. In the demo, I'm going to showcase both bindings. First, I'm going to show the working of the Git username password binding in a freestyle project. So now we'll look at the configuration of this project.
Here, I'm performing a simple Git checkout on a remote repository hosted on GitLab, which is a private repository. So as you can see, this is the repository that I'm using to perform the Git checkout. Now I'm using the Git username and password credentials to push a tag to the remote repository. So now we will build the project. As we can see, the project was built successfully. And if we look here, there were three tags and now there are four, so the tag was successfully pushed. Now I will be showcasing the Git SSH private key binding. I'm using a pipeline job for this. If we look at the configuration, I am using an agent, which is an Ubuntu machine, to use this binding. Here I'm performing a simple Git clone operation on a repository hosted on GitHub, and this repository is also a private repository. So we will be building the job now. As you can see, the build is being performed for different private keys, which use formats such as OpenSSH and PKCS8, and all these keys are encrypted with a passphrase. So the build was successful, and this shows that we were able to successfully clone the repository using the private key binding. So that was the demo. Now moving on to the road ahead. Here we will discuss the tasks that need some work even after the GSoC program. That includes adding more automated unit tests and making minor bug fixes and code improvements. But apart from all that, the major task is releasing the Git SSH private key binding. And that's all for the presentation. Thank you. Are there any questions on this project for Harshit, who is online right now with us? We have a comment from Rishabh, just pointing out that it was not possible to do authenticated Git operations like this before, so securing pipeline efforts is super important. This is very awesome. Great, I look forward to actually using this in my own build files. That sounds really useful. All right. Thank you very much, Harshit.
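As a reference for the HTTPS binding demonstrated above, here is a minimal pipeline sketch of what the released Git username password binding looks like in use. The credentials ID, remote name, and tag are placeholders I've made up for illustration; check the Git plugin documentation for the exact binding options.

```groovy
pipeline {
    agent any
    stages {
        stage('Push tag') {
            steps {
                // Bind a username/password credential so that command-line git
                // can authenticate over HTTPS (Git plugin 4.8.0 and later).
                // 'gitlab-creds' is a placeholder credentials ID.
                withCredentials([gitUsernamePassword(credentialsId: 'gitlab-creds')]) {
                    sh 'git push origin my-tag'
                }
            }
        }
    }
}
```

Inside the withCredentials block, git picks up the bound credentials automatically, so no username or password appears in the pipeline script or the build log.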
We're going to move on to the next presenter. The next presentation is the conventional commits plugin for Jenkins, by Aditya Srivastava. Aditya, are you online here? Yes, I am. Okay. So do you want to share your screen, share your presentation, and go at your own pace? Okay. So my presentation is a mixture of slides and a video, but I would like you to keep sharing, because Zoom behaves weirdly on my system when I share the screen. Okay. So you want me to change slides when you tell me? Yes, that would be really helpful. Okay. The floor is yours then, and just let me know when to switch to the next slide. Sure. So I'll start with the conventional commits plugin for Jenkins. Before starting, I would like to thank my mentors. I was lucky to have such an awesome mentoring team: Gareth, Kristin, Olivier, and Alan. Thank you so much for mentoring me throughout the phases of GSoC. Next slide, please. This is me. I'm Aditya. I'm a GSoC student over here. I'm a Jenkins infrastructure enthusiast and I like open source in general. We can move to the next slide, please. So today I'll be talking about what conventional commits are, the conventional commits plugin for Jenkins, how to use the plugin, a demo, extending the plugin, and next steps, followed by Q&A if there is any. Okay. So what are conventional commits? Martin, there would be some kind of an animation here, so it will be better if you can just bring out the whole slide. That's it. Okay. Thank you so much. So conventional commits are a lightweight convention on top of our commit messages. They make commits human-readable and make it easy to write automation tools on top of them. Conventional commits dovetail with semantic versioning. Can we move to the next slide? Here are some examples of conventional commits. As I was saying, conventional commits dovetail with semantic versioning, so they follow this pattern of major, minor, and patch versions.
So a chore is a conventional commit which does not bump any of the three versions. fix increments the patch version. feat is the addition of a feature; it increments the minor version. And a breaking change increments the major version of the semantic version. There are multiple ways to write a breaking change, shown over here on the slide. We can move to the next slide. So now I'll be talking about what this plugin does in a Jenkins environment. Martin, you can bring in the whole slide again. Yes, thank you. So, right, this plugin determines the next semantic version. It takes in the following things: the commit log of a repository, the latest tag, and the current semantic version the project is at. Sometimes the latest tag and the current version mentioned in the configuration file will be different; the plugin handles those situations as well. Currently we support six project types, that is, Maven, Helm, Gradle, Python, Make, and NPM. We are adding more project types, and you'll see how easy it is to add a project type in the upcoming slides. We can go to the next slide. Using the plugin: the plugin is available at plugins.jenkins.io under conventional-commits. You can also download it from the Update Center. We are using JEP-229 to release the plugin on every feature. I recommend that you use it, and you'll see in the demo that it's as easy as adding a step in the Jenkins pipeline. And it works in both declarative and scripted pipelines. We can go to the next slide. The demo. Welcome to the demo of the conventional commits plugin for Jenkins. Here we'll be looking at five major use cases, starting with a minor version bump, followed by a major version bump, then using build metadata, bumping with pre-releases, and writing back the calculated version. Let's get started. Okay, so let's see what a minor version bump looks like. I have a sample Maven project; I'll show you all the source code. It's on GitHub.
And as you can see, there's just one tag, 0.1.0, and I recently pushed a commit adding a feature, an "add hello world" option. I have a sample pipeline ready; I'll show that. Configure. And here's the pipeline: we are cloning the project and calling the next version step. Save it and build. So, as the tag was 0.1.0 and we had made a feature commit, adding a feature will bump the minor version, so it should ideally show 0.2.0. Now, let's see whether it does that. I see the next version. So it correctly identified the tag and the next version. So now let's try bumping the major version. I have a sample repository here with me. It's a sample Python repository. As you can see, there are no tags present, and I have made a breaking change commit. I will show you all the current version of the project. It's usually in a setup.py or config file; I have it in the config file. Let me open it, and the version is 0.0.0. So now let's go back, create a pipeline, and bump the version. So it's a new item; I'll have to give it a name, sample Python project pipeline. Here's the script: we are just cloning the project and calling the next version step. I'll save, and I'll build it now. So what we are trying to see over here is: as the current version is 0.0.0 and the commit is a breaking change, that should bump the major version and give the next version as 1.0.0. Let's see the logs. So the next version: it says no tags found, as there were no tags, and 1.0.0. So we saw a minor version bump on the sample Maven project and a major version bump on the Python project. Now let's go back to the Maven project and see how we can use build metadata in the conventional commits plugin. So here we are back at the sample Maven project pipeline, and I have modified the pipeline a bit to add build metadata to the conventional commits step. I'll show you all the pipeline.
Here I have used environment variables to add the build number using the optional build metadata parameter. The rest of the steps remain the same. I am finally printing the next version. So let's run it. As it's getting built: if you'll remember, this is the same project that I used to demonstrate the minor version bump, so we know that the next version should be 0.2.0 along with the build number. Here is the print message, and it is 0.2.0 along with the build number 6. Let's see another interesting feature of the conventional commits plugin, that is, adding pre-release information. We'll have to modify the pipeline script a bit. I'll be adding the optional pre-release parameter and name it alpha. Apply and save. Now let's build the pipeline. Okay, it's built. So the plugin calculated the next semantic version and appended the pre-release information we gave it. What I recommend is to go to the GitHub repository of the conventional commits plugin and look at all the options that are available to manipulate the pre-release feature. So we have three: first is pre-release, naming the pre-release. Second is preserve pre-release, that is, keep the existing pre-release; the default value is false. And finally, we have increment pre-release, where we increment the pre-release; the default is false. The last two are Booleans. I won't demo these in the interest of time, but we can see how it works using this table, thanks to Philip. So if our current version is 0.1.0-alpha, we have a fix, which increments the patch version, and we have preserve pre-release and increment pre-release set, then our final version would be 0.1.1-alpha.1: 0.1.1 because of the patch bump, alpha because we have preserved the pre-release, and 1 because we have incremented the pre-release. So the final feature for today's demo is the write version feature. A user can use this feature to write back the calculated semantic version into the configuration file of the project.
Adding the write version optional parameter as true will do the job. So I'll apply and save and build now. It takes a couple of seconds to build. It's done. I'll see the logs, and it says that the next version was written to the configuration file. It would be interesting to go and check where this has been written. So I have already changed my directory to the project, and I will just print the pom.xml. So here's the version that's been written. I think we can go back to the slides; there are a couple of them left. So, extending the plugin: as you can see, we had only six project types that we supported. Suppose you want to add a new one; let's take the example of a Go project type. You just have to create a public class and implement the following three methods. The first is check. The check method just returns a Boolean, true or false; it checks whether the given repository is of that particular project type. We usually check for the configuration file; for example, if it's a Maven project, we check whether the pom.xml exists or not in the given directory. The second is get current version, which reads the current version from the configuration file of the project. And finally there is write version, writing back the calculated version to the file. Thanks to Kristin, this was super easy; she suggested the factory pattern for this. Thank you so much, and don't forget to add your class to the project type factory. Next steps. Yes, the next steps would be to write back into the various configuration files: right now Maven and NPM are done, and Python, Helm, and Make are left. So that would be my next steps, and we would love to hear your feedback and suggestions for the plugin on GitHub. Thank you so much. Are there any questions for Aditya on his work? No questions in the channel, though this is very cool. Yes. Well, thank you very much, Aditya, for your work on the conventional commits plugin, and thank you to the mentors as well.
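Putting the demo steps together, a declarative pipeline using the plugin might look like the sketch below. The step and parameter names follow what is described in the demo (build metadata, pre-release, write version); treat the exact spellings as assumptions and check the plugin's GitHub README for the authoritative option names.

```groovy
pipeline {
    agent any
    stages {
        stage('Next version') {
            steps {
                script {
                    // Calculate the next semantic version from the commit log
                    // and the latest tag. Parameter names are as described in
                    // the demo; verify them against the plugin documentation.
                    def version = nextVersion(
                        buildMetadata: env.BUILD_NUMBER, // e.g. 0.2.0+6
                        preRelease: 'alpha',             // e.g. 0.2.0-alpha
                        writeVersion: true               // write back to pom.xml, etc.
                    )
                    echo "Next version: ${version}"
                }
            }
        }
    }
}
```

Omitting the optional parameters gives the plain bump behavior from the first two demos: a feat commit bumps the minor version, a fix bumps the patch version, and a breaking change bumps the major version.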
Let's move on to the next presentation, try.spinnaker.io by Daniel Ko. This is also a recording. Hi, my name is Daniel, and today I'll be presenting my Google Summer of Code 2021 project, try.spinnaker.io: explore Spinnaker in a sandbox environment. I'd like to start off my presentation by giving a little primer on what Spinnaker is. Spinnaker describes itself on its website as an open source, multi-cloud continuous delivery platform that helps you release software changes with high velocity and confidence. As you can tell, it is quite a mouthful, and I'd like to break down these buzzwords one by one. A simplified, high-level explanation of what Spinnaker is: a tool that allows you to deploy applications in a very fast and safe way. Spinnaker supports deployments on all major cloud providers such as AWS, Azure, Google Cloud Platform, and Oracle. Spinnaker's biggest selling point is its continuous delivery features. It supports advanced deployment strategies such as red/black rollouts, which deploy a new version of your application alongside the existing version and destroy the old version once the new version is ready to go. It also supports automated canary analysis, which rolls out a change to an application for a small subset of users, and then metrics are collected to see if everything is running properly. You can also define your own deployment process to your heart's content. Spinnaker also supports rollbacks, which allow you to revert to an older version of an application if the new version has gone catastrophically wrong. There is also a manual judgment feature, which makes updates require human approval, and you can restrict updates to a certain time period. There are tons of other features available in Spinnaker, and I recommend you read about them on the website if you're interested.
Spinnaker was originally developed by Netflix to serve as their own private deployment platform, but it was released to the public in 2015, and since then many other companies such as Google and Airbnb have adopted it as their primary deployment platform. In 2019, Spinnaker was donated to the CD Foundation. The motivation for this project actually comes from my personal experience. I clearly remember the first time I tried to install Spinnaker; it was extremely difficult, to say the least. I spent countless hours searching through random GitHub issues, looking through Stack Overflow, and digging up random messages from Slack just to get the main UI of Spinnaker to appear on my computer. Probably one of the biggest reasons why it's so difficult is that there are so many dependencies required to actually get Spinnaker running. You need an external storage provider like an S3 bucket, and you need a Kubernetes cluster that has at least 16 gigs of RAM and 4 cores. You also need to set up the cloud providers that you want to deploy to, and you need to do a lot of networking to expose the UI, the API, and whatever services you're providing. Compare this to a project like Jenkins: to run Jenkins on your computer, you just have to have Java installed and double-click the jar file. Having a sandbox environment where users can go in, deploy some pipelines, and test out the Spinnaker UI is something that I really wish I had when I first tried Spinnaker. I think other open source projects have realized the importance of lowering the barrier to entry by offering some kind of hands-on experience with their project, which is why we see services like the Go Playground and Play with Docker being available to the public. Regarding the infrastructure of the project, I decided to go with a multi-tenant solution on an AWS EKS cluster. This means that all the users share a single Spinnaker instance in the cloud.
All the infrastructure is codified using Terraform, and it is as simple as running one command to get try.spinnaker.io running on AWS. Spinnaker and its associated configurations are installed using Armory's open source Spinnaker Operator. So here are some of the key features of the environment. We mainly decided to focus on supporting Kubernetes deployments, as the most common use case for Spinnaker is deploying to a Kubernetes cluster. We support the AWS Load Balancer Controller so that users have an easy way of accessing their deployments in their web browser. We also have a private image registry hosted on AWS so that we can get around any rate limiting issues, and also so that we can verify the authenticity of each image that we allow users to deploy. We have also installed a special admission controller to block any images that are not from our private registry. For user deployments, we have a couple of default pipelines that users can run. We also have an auto resource cleanup pipeline that deletes any unused resources after a certain period of time. So here is a demo of the Highlander pipeline, where we deploy version 1 and version 2 of a certain application. After you hit the manual execution button, you can see Spinnaker going through its various stages for this pipeline. After everything needed for version 1 is deployed, there is a manual judgment stage, which directs users to go to the load balancer section and take a look at the deployment using the URL. After a few moments, you can see version 1 of the application being deployed, and then the user can go back to the Spinnaker UI and continue to the next stage, where we deploy version 2 of this application. Once it's finished deploying, the user can go back to the same URL, refresh the page, and see version 2 of the application. The user can then confirm that they did indeed see version 2, and that concludes the pipeline.
So here is a quick demo of the cleanup pipeline. Here is another pipeline that has things deployed to a cluster. If we go back to the cleanup pipeline: it usually runs automatically every 30 minutes or so, but we can just run it manually for this example. After a minute or so, we can go back to the pipeline that had things deployed, and if we refresh, we can see that everything is gone. try.spinnaker.io also supports authentication and authorization. For authentication, we are using Google's OAuth 2.0. This allows anyone with a Google account to sign in and get started with Spinnaker right away. On the authorization side, I created a custom plugin for Spinnaker. This plugin extends the authorization server for Spinnaker, which is called Fiat, and gives everyone a default role called public. The reason we need to do this is that every application has a corresponding role, and we need to give each user a specific role so that they can access the public pipelines and applications that we have pre-defined for them. The detailed auth flow for Spinnaker can be seen in the diagram below. So here is the auth flow from the user's perspective. Once they go to the try.spinnaker.io website, it redirects them to Google OAuth, and they can select which account they want to log in with. Once authenticated, they can see the pipelines and applications that we have set up. If we query the API for more specific information about the account, we can see the roles listed here. This account has the role public, and it also shows the email and the name associated with this particular account. So here are the areas for future improvement. I think it would be nice if there were more default pipelines or more interesting containers to deploy, as there are only really three examples that we offer to users at this time.
I originally planned to support user-created pipelines so that it would be a little more interactive, but due to time constraints and security concerns, we were unable to add this to our final project for the summer. The groundwork is already set for limiting which specific containers users are allowed to deploy, so with a little bit of testing, I think this could be added. Additionally, we only support users deploying to a standard Kubernetes cluster, but I think it would be a lot more interesting if users could deploy cloud-specific services such as Google's App Engine or Amazon's EC2 instances. For the long-term viability of this project, I think it would be worthwhile to pursue a hybrid tenant solution. Currently, users all share the same Spinnaker instance and the same Kubernetes provider, but in a hybrid tenant model, users would still share the same Spinnaker instance while we provision a separate Kubernetes cluster for each user. This would allow users to have fewer restrictions on what they can deploy, as well as fewer security concerns, as users would only have access to their own cluster and not anyone else's. Before I wrap up my presentation, I would like to give a couple of announcements. The alpha release of a live hosted version will be out very soon. Unfortunately, there was a delay due to infrastructure setup, but the link to the live hosted version will be posted on the CD Foundation Slack and the Spinnaker Slack, so keep your eyes out for that. Additionally, we're working on transferring this repo to the Spinnaker organization's GitHub account, and once that's complete, I would love to hear your feedback through issues that you can file on GitHub or through any PRs that you might have. Finally, the single source of truth for this project can be found in the link below.
This is a link to the Spinnaker docs, and I will be updating this site frequently with the live link once it's up, and with the Slack channel that we can use to discuss any concerns or feedback you might have about this project. I would like to thank the CD Foundation for selecting me as a student this year and for scheduling all the event meetings and things of that nature throughout the summer. I would also like to thank Google for hosting Google Summer of Code this year, and lastly, I would like to give my thanks to my mentors, Dan, Fernando, and Cameron, who have been really helpful throughout this entire summer. Thank you for coming to my presentation, and feel free to email me or reach out to me on the CDF Slack or the Spinnaker Slack. Cloud Events plugin, by Shruti Chaturvedi. Shruti is unable to be with us today, I believe; however, we do have a recording, which I am going to play right now. Hello everyone, my name is Shruti Chaturvedi, and in this session we are looking at the Cloud Events plugin for Jenkins, which has been developed as a Google Summer of Code project under the CDF. The idea behind this project, or this plugin for Jenkins, has been to enhance interoperability between Jenkins and other CI/CD tools. How we are doing that is by integrating Jenkins with CloudEvents. CloudEvents is an industry-adopted standard specification describing what events should look like. Without CloudEvents, there is no common format for how events should be emitted, so each tool can have its own different and specific way of describing events, which makes it very hard to design systems around events, or an event-driven architecture, because there is no common way, and each way is going to look different for a particular tool.
By using CloudEvents, we are basically standardizing; we are saying that all of the events should have this particular structure, which makes it very easy to define and design event-driven architectures. We wanted to bring that standard specification, that common way of consuming and emitting events, inside Jenkins. And that's the CloudEvents plugin. During the Phase 1 demos, we talked quite a bit about indirect interoperability with CloudEvents and why we want to use this particular idea of interoperability; here is the link, a YouTube video, if this is something you'd want to watch. And if you want to read more about interoperability with CloudEvents, how Jenkins is interoperating, how Jenkins is using CloudEvents, and also why we wanted to implement it in the first place, here is a Medium article. So the CloudEvents plugin for Jenkins allows users to configure Jenkins as a source and/or a sink, emitting and consuming CloudEvents. As we said earlier, using CloudEvents is going to standardize the way that events are both emitted and consumed, because it gives us a simple design of what an event should look like. With Jenkins as a source, we are defining that all of the events Jenkins emits should be CloudEvents, and with Jenkins as a sink, we are defining how Jenkins will consume those CloudEvents. So why would you want to use the CloudEvents plugin for Jenkins?
Obviously the first reason is that it standardizes communication between Jenkins and other CI/CD tools, and not just other CI/CD tools but also tools outside CI/CD that use CloudEvents, and this allows indirect interoperability. We talked a bit about interoperability in the Phase 1 demos, but to give you an idea, indirect interoperability is just the idea that we do not want a direct one-to-one relationship between the systems that are interoperating. There is a common language that each of the tools understands, and all we do is make our system interoperate with other systems using CloudEvents, this common language. The second reason is that we can build complex end-to-end pipelines spanning multiple CI/CD tools, and again not just CI/CD but also other tools that use CloudEvents, without needing any extra effort; by "without any extra effort" I mean that we will not need to design specific translators to talk with other systems. All we really need is for any sink consuming CloudEvents to define how it wants to use a particular kind of event coming from a particular kind of system. That's the logic behind Jenkins as a sink: Jenkins as a sink will understand this common format it consumes from different systems, say three or four different kinds of systems, and the logic is simply how we want to use those particular events coming from all of these different systems. The third reason is integrating other systems with Jenkins in a loosely coupled, scalable, and tool-agnostic manner.
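The sink side of this idea can be sketched in a few lines: instead of per-tool adapters, a sink keeps one registry keyed on the standard `type` attribute and decides what to do per event kind. The type strings and handlers below are hypothetical, not the plugin's actual behavior:

```python
# Sketch: a CloudEvents sink dispatching purely on the standard "type"
# attribute, with no tool-specific adapters. Type strings are made up.

def on_build_queued(event):
    return f"queued: {event['data']['jobName']}"


def on_build_finished(event):
    return f"finished: {event['data']['jobName']}"


# One registry replaces N point-to-point translators.
HANDLERS = {
    "org.jenkinsci.queue.entered_waiting": on_build_queued,
    "org.jenkinsci.job.completed": on_build_finished,
}


def consume(event):
    # Every producer speaks the same envelope, so routing only inspects
    # the common metadata, never the producing tool.
    handler = HANDLERS.get(event["type"])
    return handler(event) if handler else None


if __name__ == "__main__":
    evt = {"specversion": "1.0",
           "type": "org.jenkinsci.queue.entered_waiting",
           "source": "jenkins",
           "id": "1",
           "data": {"jobName": "demo"}}
    print(consume(evt))  # queued: demo
```

Adding an eleventh system to a ten-system pipeline then means adding one handler, not ten new adapters.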
Again, this is tool agnostic: we are not designing this for any particular tool. For Jenkins as a source, this can work with anything that consumes CloudEvents. Jenkins as a source is going to emit CloudEvents, and they are just going to be out there, so any tool that consumes CloudEvents can use a particular event that has been emitted from Jenkins. We are not designing it to be specific to one tool; it is tool agnostic, and scalable in the sense that this can scale to several services, not just two or three, but any service that understands and uses CloudEvents. It is obviously loosely coupled because we are not creating that direct one-to-one coupling, or a one-to-one agent or adapter that we would need to talk to a particular service, and again it eliminates the need to maintain tool-specific adapters for communicating with systems. As you can imagine, it is a much simpler way of communicating: imagine we had ten different systems in our pipeline that all want to talk with each other; it would be quite hectic to define ways to communicate with those ten different systems, because each of them would have a different way. But using CloudEvents, we just have a common language that each of these tools understands, emits, and consumes, making our whole design very easy. That is why the CloudEvents plugin is your go-to if you are designing such an event-driven system. And some super great news is that the CloudEvents plugin is now released. During Phase 1, one of the questions that came up was when it was going to be released, and the good news is it is now released, and you can check it out right here, download it, and please do provide us with your feedback. During the Phase 1 demos we saw the Jenkins CloudEvents plugin UI for Jenkins as a source, and how we can configure the CloudEvents plugin
for using Jenkins as a source, which will be emitting CloudEvents that other systems can consume. We also saw the types of events supported by the Jenkins CloudEvents plugin: going back down here, we have Queue Events, we have Build Events, we have Job Events, and also Node Offline or Online Events. Another thing we saw was the structure, how the metadata of the event and the event data can look, and we saw this using the Sockeye service, which is a Knative Serving service, to see how each of these events is going to look. With Phase 2 starting, we had some questions for ourselves, and these questions made us think more broadly about the CloudEvents plugin for Jenkins, not just in an individual sense but also how it will look when Jenkins is interoperating with many different CI/CD tools, and when a user is building a pipeline spanning different CI/CD tools that need to interoperate. The first question was: how can we implement a transient-fault-tolerant way of sending CloudEvents? This is especially important for an event-driven architecture, because we want to make sure that no event is lost to a network failure. And how can the plugin handle asynchronous communication, which is very important for implementing an event-driven architecture through the CloudEvents plugin inside Jenkins? The second question was: if we were to implement asynchronous communication inside Jenkins, how should we do that? Should we have a message queue system or a pub/sub system? The third question was: how can this plugin work alongside other CI/CD tools that use CloudEvents? Again, it is very important to make sure the CloudEvents plugin allows Jenkins to achieve that initial goal of enhancing interoperability between different systems in a much easier way, without needing to maintain specific adapters for each different system. So we wanted to see if our final goal of
building a tool-agnostic and scalable eventing system for Jenkins has been achieved through the CloudEvents plugin for Jenkins. All of these questions led us to design a proof of concept using the Jenkins CloudEvents plugin together with tools that use CloudEvents, specifically CI/CD tools. This proof of concept has been inspired by the Events SIG at the CD Foundation: they have something very similar with Tekton and Keptn, where both of these systems act as a source and a sink, sending and consuming CloudEvents, and the way they implement that fault tolerance and the asynchronous communication is through the Knative eventing broker, which by default uses CloudEvents and transfers them between different systems. It acts as the middleware handling asynchronous communication, retries, and other network failures that might occur, so that concern is taken away from both Tekton and Keptn. In our POC, the handling of network failures is likewise taken away from the CloudEvents plugin, creating an abstraction at the Knative broker layer rather than inside our plugin. What we have inside the POC is Jenkins as a source sending CloudEvents, and Tekton as a sink consuming CloudEvents. We also tested this out with Keptn, and also did a test of how this would look when using Kafka, but in this particular POC we are only looking at Jenkins and Tekton, where Jenkins is sending CloudEvents to a Knative eventing broker. The Knative broker has the concept of a Knative trigger, and you can think of a trigger as a filter that filters on specific attributes of the CloudEvents metadata, for example the CE type. Looking here, we have the CE type, the CE specversion, ID, and source, so we can specify any attribute inside our Knative trigger, and only an event whose attribute matches will be passed on further; all of the other
events will not go beyond that layer. Any event that passes the Knative trigger will move on to a Tekton trigger, and the Tekton trigger will receive the CloudEvent; this is where we can extract event-specific information from the CloudEvent. For example, we can extract the number of executors; we can extract event data or event metadata however we like, pass that information on, and trigger a TaskRun or a PipelineRun however we have defined it inside the Tekton definition. So now we will move on to the demonstration and take a brief look at the YAML files we defined for the POC. Alright, what we are looking at here are the brokers in the knative-eventing namespace: we have the default broker and the Kafka broker. The default broker is a very simple CloudEvents broker that deals with transferring CloudEvents between the subscribers, the different sinks and sources essentially, and the default broker is the only one we will be talking about in this POC. This is the broker definition, a very simple default broker, and this is the trigger. As we saw in the diagram for our POC, the Knative trigger defines a type and filters events on that specific attribute we have defined. The event attribute we have defined here is type, or CE type, because it refers to the CloudEvents-specific type attribute, and we set the CE type to "queue entered waiting". So here is the event we are looking at, and this is the only event that will be passed through to the subscriber of this Knative trigger, which is Tekton; "queue entered waiting" is the only one that will pass through. The next thing we have is the Tekton trigger, and the Tekton trigger is what receives that entire event and then extracts information. Here is where we are extracting the job name from the event, and this is the information we can then
use in our TaskRun and PipelineRun however we want, and this is not only specific to event metadata; we can also use event data, according to the needs of the user, so you are open to using it however you want. What I'm going to do is copy this particular URL, which is the URL of the Knative CloudEvents broker, and paste it into the CloudEvents plugin configuration for Jenkins right here. When I paste this, all of the events of the types checked here will be sent to this sink, but the only thing sent on from the broker to Tekton will be that one event we are filtering on, the "queue entered waiting" event. I'm going to save this configuration, and here we are looking at TaskRuns inside the Tekton dashboard. When I run this job, I should see one single TaskRun, not two, not three. So we're looking at this test, and this is the one I will be triggering here, and it will only trigger as soon as that event is received inside the Knative broker, filtered, and sent on to Tekton; Tekton will trigger a TaskRun once a particular event of that type is received. For now we're only seeing one event, and that's good: the Jenkins-to-Tekton TaskRun, which is what we have defined. So this is the only event that was received by Tekton, because all other events were filtered out. And if, for example, my Tekton was not available, or for some reason there were some other network failures, it would be the job of the Knative eventing broker, which handles the CloudEvents, to retry through all of those network and transient failures. That was it for the demonstration of the POC, and that was the end of the presentation. Thank you so much everyone; a special thank you and shout-out to the CD Foundation, GSoC, and all the mentors on this project for such an
amazing summer. This might be the end of GSoC, but this is definitely not the end of me contributing to, and being a member of, an absolutely amazing community. If you have any questions and feedback, we would all love to hear them; you can send me an email here, connect with me on LinkedIn, or visit our GitHub repository and please file an issue with us, and we would love to take this further and develop this into a more robust system. Thank you.

Great, so I want to thank Shruti for sending in this recording. Do we have comments or questions from the attendees? One, please. So, my name is Cyril, and I was a mentor on the OpenTelemetry instrumentation of Jenkins remoting, and I would be interested in understanding what semantic conventions are used in the events we communicate. We have a shared data structure, but what are the common attributes that are shared across all the CI/CD tools so that they can interoperate together? This would be something interesting to me. Alright, Shruti is not on the call; I guess those were comments, right, Cyril? Yes, this was something I would be interested in understanding. I think I've seen that the CD Foundation has done some work to normalize the vocabulary across CI tools, so I guess it relates together. We may want to have Cyril connect with the Interoperability SIG at the CDF to get more details on the CloudEvents semantics. I think it's an excellent question; I'm just not sure that the group here has answers for it. I do have some basic answers, but we can follow up later after the call. Actually, CloudEvents is a very pluggable, structured event system, so you can add a lot of information to these messages and basically integrate as many systems as you need. And one thing that was mentioned is that CloudEvents by itself doesn't mean the events themselves are standardized, and that's why there is a project started in the Continuous Delivery Foundation about creating a standard for CI events. So there is a
separate project, started basically as a spin-off of the SIGs: the Interoperability SIG created another special interest group in the CDF called the Events SIG, and this Events SIG is actually working on an open standard that would unify these events across multiple systems. So it's a great time to contribute; it's important to be able to integrate things together, and if we imagine graphs and dashboards and things like this, if we have common semantic conventions, then as a community we will be able to create a lot of tools that integrate with all CI systems, and we will be able to integrate a lot more. Okay, thanks. I'd like to move on to the next presentation if there are no other comments or questions. Okay, I will start my share again. The next presentation is the security validator for the Jenkins Kubernetes operator; this presentation is by Pulkit Sharma, and for this one I will play the recording as well.

Hi everyone, I am Pulkit, and today I am going to demonstrate my work on adding a security validator to the Jenkins Kubernetes operator. So, the security validator: what is the problem it is solving? In the Jenkins custom resource, we are declaring the custom resources and the plugins in a declarative manner, and some of the plugins have security vulnerabilities that are not visible to the end user. To solve this problem, the security validator is added to the operator. The security validator is nothing but a validating webhook that operates before the object is persisted to etcd, so it runs before a Jenkins custom resource object is created or updated in Kubernetes. The webhook is different from the validation we were doing in the reconciliation loop, which runs after the object is persisted to the etcd cluster, so that validation is slow; in contrast, the webhook is quite fast
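The "validate before the object is persisted" behavior comes from the standard Kubernetes admission-webhook mechanism; registering such a webhook looks roughly like the sketch below. The names, service, path, and version strings here are assumptions based on the demo narration, not the operator's exact manifest:

```yaml
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: jenkins-security-validator
webhooks:
  - name: jenkins-validator.example.com
    rules:
      # Intercept create/update of the Jenkins custom resource
      # before it is written to etcd.
      - apiGroups: ["jenkins.io"]
        apiVersions: ["v1alpha2"]
        operations: ["CREATE", "UPDATE"]
        resources: ["jenkins"]
    clientConfig:
      service:
        namespace: demo
        name: jenkins-webhook-service   # hypothetical service name
        path: /validate-jenkins
    admissionReviewVersions: ["v1"]
    sideEffects: None
```

The API server calls the referenced service over TLS, which is why the demo pulls in cert-manager to issue the certificates.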
and yeah. Using the webhook is completely optional for the user, and it can be easily installed via Helm, or there are kubectl manifests that can be used to get the webhook up and running. Also, we are using cert-manager as an external dependency, because we have to manage TLS certificates, and that's why cert-manager is used. So let us move to the demonstration. I will be using an image that I have built locally to launch the operator along with the webhook; this is the image, operator-security-validator. First of all, let me create a new namespace, named demo, and then I will be using Helm charts to get the operator up and running. We provide the path to the charts, we have to specify the namespace for the operator, and then I am going to set the image that I am going to use. Also, the webhook is completely optional, and we can enable it via this webhook.enabled flag; by setting it to true, the webhook will be installed. This flag will also install all the external dependencies, like cert-manager, when we enable the webhook. Also, it is advisable not to launch the Jenkins CR along with this Helm chart, because the webhook actually takes some time to get up and running, and if I install the Jenkins custom resource along with the webhook, then I won't be able to validate the security warnings. By default this is set to true, that is, by default we are launching the Jenkins custom resource, so we should set it to false as well. Yeah, that's it; I think I have specified all the flags, so let's launch the operator. So, actually it takes some time for the operator to get up and running; it will first initialize the plugin data cache and all those things, so it generally takes around a minute or two to get the operator up and
running, and then we can launch the Jenkins CR. All the cert-manager resources take some time to come up, and they will provide the TLS certificates that are necessary for the communication. I think all the resources are up now, so finally the operator will start. As you can see, it took some time for the cert-manager resources to get ready, which is why the operator pod was crashing initially. Okay, now the operator is up and running, so let us try to create a Jenkins custom resource from here. I have defined some Jenkins custom resources, and in the first example I am creating a CR with some plugins containing security vulnerabilities; for example, for this VNC Viewer plugin we are using version 1.7, and this version has security vulnerabilities. So let us try to create a new CR from here. Yeah, it throws errors specifying the plugins containing security vulnerabilities; for example, it specifies that these user-defined plugins have security vulnerabilities, and we have these four plugins with security vulnerabilities. In the upcoming example I have kept all of the plugins but updated their versions to versions that do not have security vulnerabilities, so let us try to create a new CR from here. But first of all, let me look at the logs of the operator. As you can see, there are a lot of warnings in the logs: for each version of a plugin there can be multiple security vulnerabilities, and in the logs we can see all the security vulnerabilities detected for that particular version. We can also see the warning message and the link to the advisory, so there is a whole lot of metadata included in the logs. So let us try to create a new Jenkins CR; this time we should be able to succeed. Yeah, it works. So now let us see the logs: it wrote a response where allowed is true, that is,
we are allowing it to create the object, and it sends a 200 response. So yeah, that's pretty much it. Apart from that, we have to mention whether we want to validate the security warnings or not; for example, in this particular example we set this flag to false, and here we have plugins containing security vulnerabilities, so with the flag set to false it should still create the Jenkins CR. Let us try. Yeah, it is able to create the Jenkins CR despite the plugins having security vulnerabilities, so we can control whether we want to detect vulnerabilities or not using this toggle switch. So that's all, but there is always room for improvement, and there are some areas where we can extend the project or add features. One of them is to have a post-install hook: as you have seen while installing the webhook, it takes some time to get ready, so what can be done is to have a post-install hook that checks whether the webhook is ready, and only when it is ready will the Helm installation complete, so the user knows they can now create a Jenkins custom resource. Another area where the webhook can be extended is that we can move the validation logic that is currently in the controller into the webhook, and there are other sorts of validation logic that could be implemented in the webhook. One other feature: right now we are validating the plugins, but each plugin has dependencies, so what can be done is to traverse the whole dependency graph and validate all of its dependencies as well, because those plugins are also being installed, so validating those before installation would make sense. This is another cool feature that could be implemented. So yeah, that's all from my side, thanks.

Thank you, Pulkit, for this presentation.
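The dependency-graph idea at the end can be sketched as a simple traversal: validate not just the declared plugins, but everything they transitively pull in. The plugin names, graph, and advisory set below are made up for illustration, not the operator's real data source:

```python
# Sketch: walk the transitive dependency graph of the declared plugins
# and report every vulnerable plugin that would get installed.
from collections import deque

# Hypothetical plugin dependency graph and advisory database.
DEPENDS_ON = {
    "vncviewer": ["commons-lang-api"],
    "commons-lang-api": [],
    "git": ["git-client", "credentials"],
    "git-client": [],
    "credentials": [],
}
VULNERABLE = {"commons-lang-api"}  # plugins with open advisories


def vulnerable_closure(declared):
    """Breadth-first walk over declared plugins plus their dependencies,
    collecting every plugin that appears in the advisory set."""
    seen, found = set(), set()
    queue = deque(declared)
    while queue:
        plugin = queue.popleft()
        if plugin in seen:
            continue
        seen.add(plugin)
        if plugin in VULNERABLE:
            found.add(plugin)
        queue.extend(DEPENDS_ON.get(plugin, []))
    return sorted(found)


if __name__ == "__main__":
    # "vncviewer" itself is clean here, but it drags in a vulnerable
    # dependency that plugin-level validation alone would miss.
    print(vulnerable_closure(["vncviewer", "git"]))  # ['commons-lang-api']
```

This is exactly the gap the proposed feature closes: a CR can look clean at the top level while still installing a vulnerable transitive dependency.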
I will stop the share and open the floor for questions and comments. Do we have questions or comments regarding Pulkit's presentation? Going once, going twice. Let's move on to our last presentation today: Jenkins Remoting Monitoring, by Akihiro Kiuchi, and I believe Akihiro is here. Do you want to share your screen, Akihiro? We're not hearing any sound from you, Akihiro; I can see it looks like you're speaking, but I'm not hearing any sound. How about this? Yes, yes, thanks so much. I want to share my screen. Yes, go ahead. Thank you. Yes, we can see your screen, you can go ahead.

Thank you so much. I'll talk about my project, Remoting Monitoring. The purpose of this project is to support Jenkins admins in troubleshooting and monitoring the remoting system, and to achieve this purpose we set the goals below: one is to collect telemetry data, including metrics, traces, and logs of the remoting module, with OpenTelemetry, and the other is to send the data to an OpenTelemetry Protocol (OTLP) endpoint. OpenTelemetry is an emerging industry-standard observability framework for cloud-native software; it handles three types of telemetry data, logs, metrics, and traces, at once, and thereby enables integration between the different types of telemetry data. But in this Google Summer of Code I didn't include the trace feature; I'll explain the details later. Which OpenTelemetry endpoint to use, or how to visualize the data, is up to users, and users need to set up these services on their own. I'll show you a demo first. I will follow the getting-started section in the README of this project. I've already cloned the repository in this demo, so now I change directory to the example directory and run Docker containers using Docker Compose. This will set up the OpenTelemetry Collector, which is an OTLP endpoint, Loki for log aggregation, Prometheus as the metrics backend, and Grafana for log and metrics visualization. In this demo I will also
set up a Jenkins controller using Docker Compose. Next I will download the monitoring engine from the Jenkins Maven repository; this engine is a JAR file, and it is the main deliverable of this project. I will use the engine later as a Java agent when launching agents. When you want to collect the logs of remoting, you need to create a logging properties file and use our log handler, so I will create a logging properties file and copy this configuration file; here you need to set our logging handler. Okay, so by now all the Docker containers are running and the monitoring engine has been downloaded, and then I will run the Jenkins agent. First I will download the remoting agent from here, and set this environment variable, which specifies the location of the target OTLP endpoint. Then I will execute the remoting agent; here you need to use the monitoring engine as a Java agent, and you also need to specify the logging properties file. Then an agent is connected, so let's explore the telemetry data in Grafana. First I will show you the agent logs from the Loki data source. Here you can filter the agent logs by service instance ID; the monitoring engine generates this service instance ID for each agent automatically, but you can also set this service instance ID using an environment variable. Now you can see the logs in Grafana, and each log entry has several attributes, such as the log level and the names of the class and the method that issued the log. Regarding the error logs, you can see the stack trace for the error. Next I will show the agent metrics from this data source; you can filter the metrics by metric type and service instance ID or other attributes. So far we collect only very general metrics, like CPU load, JVM memory usage, etc., but in the future I want to add more Jenkins
specific metrics, such as the number of reconnections or the size of the working directory, etc. Now I will introduce other features and deliverables. First, you can control which metrics to collect: the monitoring engine can collect many kinds of metrics, but you may want to collect only JVM metrics because you already collect the other kinds of metrics with another tool, so we offer a feature to filter metrics by name with a regular expression, and you can collect only JVM metrics by setting an environment variable like this. I prepared two types of demos so that you can try them out very quickly: one uses Docker Compose, like the demo I showed before, and the other uses Kubernetes. The Docker Compose demo is the easiest way to try out our monitoring engine; it sets up all the services you need, and the services are preconfigured, so all you need to do is clone the repository, change directory to the example directory, run docker compose up, and then you can explore the data in Grafana. I also prepared the Kubernetes integration demo, and in this demo the service instance ID will be the agent's name, so you can find the target logs and metrics more quickly. I will show you a quick demo of this Kubernetes demo. I will set up a minikube cluster (I'm going to skip ahead), create a namespace jenkins, and set up the pods: I will create a Jenkins controller, Loki, the OpenTelemetry Collector, Prometheus, and Grafana. I will skip until all the pods are created, and then I will access the Jenkins controller and set up the Kubernetes plugin. I'll skip ahead. Okay, I will set the Kubernetes URL and test the connection, and it succeeded; we're connected to the Kubernetes cluster. Next I will create a sample pipeline, named sample-pipeline, and I will use our custom agent image in this demo; I will copy in the pipeline script. I will use our custom image, and the image and its Dockerfile are published on GitHub. And this
pipeline just prints and sleeps for 300 seconds, and I will start this pipeline twice. Here, two agents are allocated, and these agents emit the telemetry data, so let's see Grafana. Okay, I will show a log; you can filter the logs by service instance ID, and its value is equal to the agent's Jenkins node name, so you can use these strings to filter the logs, which makes it much easier to get to the logs. You can also see the metrics from the data source, like this. Okay, I will go to the next slide. In the end, I will mention my backlog. The biggest backlog item is the tracing feature. I tried two approaches to trace remoting behavior: one is to use the EngineListener interface, and the other is to create extension points to trace remoting more precisely. I tried to define valuable spans for monitoring and troubleshooting the remoting system in Phase 1, and as you can see in this figure, I actually created a system to trace remoting behavior as a proof of concept, but finally I couldn't find a good way to instrument the remoting module to produce tracing data that is valuable for monitoring and troubleshooting, so I decided not to include the tracing feature in the Google Summer of Code deliverable. As for the other backlog items: in the current implementation we collect only very general metrics, like CPU load, but we can help Jenkins admins more by adding more Jenkins-specific metrics; for example, the reconnection count or the average offline time in a day may help Jenkins admins check connectivity. I conducted a user survey in Phase 1 and found that connectivity is one of the main factors for highly available agents, so this is important. Also, users should be able to configure the OpenTelemetry service name and service namespace; I was looking at the resource attributes, so having more configuration options is also in the backlog of this project. Yes, that is all for my presentation, and
Google Summer of Code with Jenkins was a great experience for me, and thank you so much to the mentors and everyone involved in organizing this Google Summer of Code. Thank you very much. Thank you for this presentation, Akihiro; are there any questions or comments? I would like to just say thank you to Akihiro for the contributions, because it's a great project that is super valuable to the Jenkins ecosystem, and you can see there are so many projects happening around the observability space. The agent monitoring has been done for Jenkins users, so with this update we will be able to provide better diagnostics for users, and I'm looking forward to adopting it in the Jenkins CI as well, because we will definitely benefit from some advanced monitoring features. Thanks so much. Thanks a lot; I'm super happy about the project results, and hopefully we will be able to continue on the OpenTelemetry side, basically to clean up this backlog, because there are a lot of opportunities. Maybe it's a subject for another mentorship program, or maybe Google Summer of Code next year, though personally I would prefer to see work on it earlier; let's see. Thank you so much. Thank you. Alright, I think I can stop your share, so I'll stop your sharing. Okay, thanks for the presentation and those comments. Are there any other comments or questions?

Okay. Martin, actually I had power cuts, so I had to drop off the call; if there are any questions, I'll let the audience ask us. So yes, are there any questions regarding the security validator for me? Maybe one of the questions would be about the classic Jenkins Helm charts: have you discussed with the mentors whether some bits of this project could also be applied there, or is it solely specific to the Jenkins Kubernetes operator at the moment? Can you repeat your question, sorry? In addition to the Jenkins Kubernetes operator, there are the classic Jenkins Helm charts, and there are also a lot of security-related questions there in terms of management and configuration,
including YAML files, so I wonder whether the results of your project could potentially be applied there too; what do you think? Looks like we have connectivity issues, Oleg, with Pulkit; he dropped off the call. Maybe Elba will ask in the chat, and by the way, all projects have chats, so you can just ask there if needed. Oh, he's back. Do you want to try to answer that, Pulkit, if you heard the question? Yeah, the network is quite unstable, that's why. Can you hear me now? Yes. So regarding your question, what I have understood is that you are saying we can use the Helm charts to install the operator and apply those results to the Helm installation and the time it takes for the operator to get ready; so you are asking whether we can use Helm charts for the installation, right? Hello? Yes, though it's not about using the Helm chart for the operator installation, that part is clear; it's rather about using the same validators for a Jenkins installed with the Helm chart but without the operator, because there is the official Jenkins Helm chart, which is not based on the operator, it's standalone. Alright, may I suggest you continue in the chats; it will be better. Yes, okay. So thank you, Pulkit, for your time and for answering the questions; they can continue answering questions in the chat, I believe that will be easier. Okay, let's move on to the conclusion. Alright, let's move on to concluding the presentations today, so the last couple of slides. I just want to thank everyone again. We have a feedback form that I invite everyone to visit and give us feedback regarding the Summer of Code program and this presentation. And actually, this is the end, so we'll turn off the recording, and then everybody can ask more questions in a more open channel and speak freely to each other. Alright, and it's actually the last slide, so thanks everyone; let me stop the recording.