Okay, so this is the March meeting of the Cloud Native Jenkins SIG. On the agenda we have Jie talking about Azure Storage, an introduction to Jenkinsfile Runner, and Carlos on serverless Jenkins with Jenkinsfile Runner on AWS Lambda and Project Fn. Yes, we can start. I'll make you presenter.

Okay, I will share my screen. Can you see my screen now? No? Hello. No, I can't. It works now, right? Yeah. Okay.

Hello, everyone. I'm Jie, and I'm from Microsoft. Today I will share our implementation of External Artifact Storage for Jenkins, which is JEP-202. First I will talk about the design of this plugin. We chose to develop a separate plugin alongside the Artifact Manager on S3 plugin. For one thing, that plugin's name is not suitable for Microsoft Azure. We also didn't choose jclouds, to decrease the complexity of our plugin, because the Azure Jenkins team takes care of maintaining all the Azure-related Jenkins plugins, so we just don't need jclouds here. Also, we previously had a storage management plugin for Azure in Jenkins, the Windows Azure Storage plugin. It can upload and download artifacts to Azure Storage services, so we chose to reuse that plugin to do the uploading and downloading for us.

To make the Windows Azure Storage plugin compatible with JEP-202, we made some changes. First, the plugin previously could not upload artifacts from agents, so we changed the relevant class to extend MasterToSlaveFileCallable, making it able to upload artifacts from agents. Also, JEP-202 raises a security concern: we shouldn't pass all of our permissions to the agents. So we chose SAS, shared access signatures, which give limited permissions to agents, for example only the ability to upload or download artifacts. Now I will do a demo with the new plugin.
But do you have any questions about the design first? Okay, so I will go on with the demo.

Oh, sorry, I just wanted to ask: from what I see, this is a standard implementation of the Artifact Manager API, right?

Yes, we just implement some interfaces from the Jenkins core and the workflow plugins.

Yeah, thanks.

Let's do the demo. First we go to the Jenkins Configure System page, and in the "Artifact Management for Builds" section you can add Azure Artifact Storage to manage your artifacts in Azure Storage. After you add one, you will see a table like this where you can choose different storage types. For now we only support Azure blob storage; later we may implement other storage types here, such as the Azure file service. Then you choose your storage credential, and it will authenticate with Azure. This part is quite similar to the Artifact Manager on S3 plugin: you choose a container name, and you can also provide a base prefix for your storage container.

Once you have finished these settings, you can create a simple job like this artifact-manager job, and we can look at its configuration. I'll do a very simple demo: first I create a file and stash it to Azure Storage, then I do some modifications to the workspace, delete the file, and make a subdirectory in the folder. I also use the dir step to unstash the file I stashed before, and finally I archive all the artifacts. Let's run this job. The log also shows some messages from the Windows Azure Storage plugin: it prints messages about stashing files to our container, unstashing files from the stash, and uploading the final artifacts. Now let's go to Azure Storage. First I go to the blob container named "demo", which matches the Jenkins configuration, and you can see the base prefix here.
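The demo pipeline described above might look roughly like the following. This is a reconstruction for illustration only: the file names, stash name, and directory names are assumptions, but the steps (writeFile, stash, dir, unstash, archiveArtifacts) are the standard Pipeline steps the demo exercises, and with an external artifact manager configured, stash and archive data goes to the configured storage instead of the master.

```groovy
// Hypothetical sketch of the demo job; names are illustrative.
node {
    // Create a file and stash it; the configured Azure Artifact Storage
    // backend transparently stores the stash in the blob container.
    writeFile file: 'hello.txt', text: 'hello'
    stash name: 'demo-stash', includes: 'hello.txt'

    // Modify the workspace: delete the file and make a subdirectory.
    sh 'rm hello.txt && mkdir -p subdir'

    // Unstash into a subdirectory using the dir step.
    dir('unstashed') {
        unstash 'demo-stash'
    }

    // Archive everything; artifacts are uploaded to Azure, not the master.
    archiveArtifacts artifacts: '**'
}
```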
Inside the prefix you can see a folder for the project name of the Jenkins job, and inside that, the build we just ran, which is build 29. You can see that in Azure Storage we also have a 29 folder, and in that folder we can see the target artifacts: there's the subdirectory and the file I created. To test the virtual path handling, I'll delete the build in Jenkins, and you can see the 29 folder is also deleted from Azure Storage, so it stays consistent with the Jenkins master. That's all. Any questions?

Do you have a demo showing the upload from an agent? Does it work the same way?

I didn't set up an agent here, since that would take some time, but I have tested it and it works well. Any other questions?

Yes, I also wanted to know what kinds of automated tests you run for this, how you developed this in terms of testing, and what different situations you tested against.

First, I wrote unit tests similar to those of the Artifact Manager on S3 plugin, covering all the scenarios that plugin covers. I also ran integration tests against Azure Storage: I set up a job that creates some files, stashes and unstashes them, archives them, does some deletion, and also copies artifacts between different jobs. Any other questions?

Yes, about credentials: in Azure, can you also use instance profiles or something like that, so the credentials are not stored in Jenkins, or is that not possible?

For the credentials, we currently support only one credential type, which we implemented ourselves: you can only choose the Microsoft Azure Storage credential type here, and you provide your account name and account key to authenticate with Azure Storage. Only that one credential type is supported.

Okay. I have a question: can you provide a filter for the credential list? Maybe not everyone will know the exact type of the credential. Or do you have a help file to discover this?
Yes, you can see some basic information here, and maybe we can enhance it. Indeed, if you have other types of credentials they will not be listed; only the storage credentials are listed here.

Can we also filter the list of credential types that appears when we click the Add button?

It defaults to the Azure Storage account type, right?

Yes.

Can we just leave it at that?

Yes, I think so. I will make some changes to make that happen.

Okay, no more questions, thank you.

Thank you. Any other questions here?

Okay, then... sorry. Can we provide multiple storage targets for the artifacts to upload to? More than one? You may have several containers, and you may want to upload to different containers at the same time within one job; or you may have several jobs and want them to upload to different containers. Scenarios like that, right? Maybe both use cases exist. It is an open question.

Yes, I agree with you, but for now this plugin cannot support that scenario, and I think it may need some changes on the Jenkins core side, right?
To provide a variable to define the different containers, maybe.

The idea of the Artifact Manager API was to have one backend for the whole Jenkins master, so you don't have to do anything in your jobs to use it. If you want to upload to multiple containers, you would use a plugin with pipeline support, where you could say: upload this to this container, and upload this file to another container. But this is what the API was designed for: you don't have to change your pipelines at all, and all your artifact storage goes to the external storage instead of going through the master.

I wonder whether we could offer a filter to match Jenkins jobs to storage. Maybe there is a better solution; it's just a simple suggestion.

If somebody wants to, they can always define a special ArtifactManagerFactory which lets you define a folder property that sets a different artifact manager factory for all of the jobs within the folder. That wouldn't require any changes to Jenkins core or any other plugins, and it would work with the S3 or Azure storage equally. But we didn't see any particular need for that, at least while developing the S3 version, because the intention, as Carlos said, was that the storage is basically transparent for users of the Jenkins server. It's intended to be something the administrator configures, with a big enough storage area and the credentials set up, and then users don't have to think about it.

Yes, I agree that the storage should be transparent to Jenkins pipelines. But if we had another way to let different pipelines upload to different storage, that would be great, I think. Any other questions we should talk about?

How about writing some of these ideas into the document? That would make the discussion more effective, in case I'm not here, right? And I also have one question about the Cloud Native SIG: is this page kept up to date?
I'm a little confused, since the content seems a little out of date.

Yes; if you want to put references to particular implementations there, feel free to propose a pull request to this page. The advantage is that the page is managed by pull requests, and everybody can do that.

Okay. So the parts that are marked as not started, are they accurate?

What do you mean?

Yes, it's not up to date, I don't think. There are some things marked as in progress that are no longer in progress.

Okay.

But the things that are marked as not started are probably correct.

Okay.

That's right. There is some research work, and there are also some pull requests and implementations, for example for .ci, but there are no JEPs in Jenkins core and Pipeline for that right now.

Okay. That's all. Does anyone have any other questions about the artifact manager? Okay, thank you, that was great.

Thank you. I will stop sharing.

And Oleg, you'll do the...

Yeah, I'll do the quick overview. We also have Francisco on the call; he's one of the Jenkinsfile Runner maintainers. Do you see my screen? Yeah. Okay, could you please make me a presenter? Yes, presenter rights are passed to you. Okay, I should really close this later. Okay.

So I'll just try to make a really quick presentation with demos. I think we already had a brief discussion of Jenkinsfile Runner at earlier special interest group meetings. Just to quickly introduce myself: my name is Oleg Nenashev, I'm one of the Jenkinsfile Runner maintainers and a Jenkins core maintainer. We have been working in the Cloud Native SIG on this topic starting from June, I believe. We had a presentation together with Jesse and Carlos about single-shot Jenkins masters; that presentation happened in September, I believe. And this introduction is just another view of one of the components we used in the single-shot masters vision. Okay, so what does Jenkinsfile Runner do?
Quickly: it's a single-shot Jenkins master which can take your Jenkins Pipeline, execute it in a container in pretty much any environment, produce the results, and then terminate. So it's the single-shot master thing we've been talking about at previous SIG meetings. If you have seen the "Shifting Gears" post by Kohsuke from a while back, it offered another view of what cloud-native Jenkins could look like. Obviously the vision has changed significantly since then, but the common themes remain: having containerized, scalable Jenkins, being able to run Jenkins on demand, and being able to develop things quickly and apply them to customer cases. Jenkinsfile Runner was positioned as one of the ways to achieve that.

When we talk about Jenkinsfile Runner, or about Jenkins Pipelines, I would rather talk about Jenkins as a function: just a function which can be invoked. For example, you have a pipeline or a pipeline step, and you want to execute it in your environment; if you were able to just take Jenkins and execute that, it would be really helpful. This is what Jenkinsfile Runner is about: basically it's just a binary or a Docker image which takes your pipeline and executes it. Currently it has access to workspaces, and it can also dump data to stdout.

Probably I should just show a quick demo. If you want to run Jenkinsfile Runner, there is a simple command line. Currently we don't host Jenkinsfile Runner as an official Docker image, but later I will show how to build it if you are interested. So I'll just switch to my command line. Is the screen big enough?
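The command-line experience described here is roughly the following. This is an illustrative sketch: the image tag and mount path are assumptions, and, as noted in the talk, at the time there was no officially hosted image, so you build it yourself from the jenkinsci/jenkinsfile-runner repository.

```shell
# Build the Jenkinsfile Runner image locally (no official image was hosted yet).
git clone https://github.com/jenkinsci/jenkinsfile-runner
cd jenkinsfile-runner
docker build -t jenkinsfile-runner .

# Run a pipeline: mount your Jenkinsfile into the container's workspace.
# The container starts a throwaway Jenkins, executes the pipeline,
# streams the build log to stdout, and then terminates.
docker run --rm \
  -v "$(pwd)/Jenkinsfile:/workspace/Jenkinsfile" \
  jenkinsfile-runner
```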
Yes, it's okay.

So you may see that this is just one of the demos from the Jenkinsfile Runner repository. In Jenkinsfile Runner we have several demos, and there are also demos in the other repos we will be discussing, so if you want to reproduce it you can just refer to this demo. There is a make clean build command which prepares the demo; I won't touch it, I will just execute it. This is the common Jenkinsfile Runner experience: you may see that there is a docker command which takes an image, and it also takes a Jenkinsfile from the provided path. I am running it on Windows with Windows Subsystem for Linux and a few virtual machines, so it takes a while to execute, but in a typical environment it takes just a few seconds to start up. And yeah, I probably should have disabled the antivirus before running this demo. What's going on? Oh yeah, it's just slow because of the virtualization. Let's try again. What's going on? It seems that there are some issues with streaming data. Oh, now it runs.

Right, so you may see that it starts executing the pipeline inside. You may see the message about the performance-optimized durability mode; it's just a message from the Pipeline subsystem, because Jenkinsfile Runner includes lots of customizations to run fast. If you run it in a typical environment, it takes less than one second to start up and execute this demo. Here you may see that it actually executes a few commands: it runs wget against the Evergreen repository, then it uses Pipeline Utility Steps, the readYaml step, in order to print the Jenkins core version being used by Evergreen. If we take a look at this demo, you can see the Jenkinsfile we executed: it's a single-stage pipeline which just executes three commands. But obviously you can do much more with Jenkinsfile Runner; it's just a generic engine, you can do many things, and I will show a bit of an example of how it would be
used. So let's finish the overview. Jenkinsfile Runner has a pretty long history in the Jenkins community. It started almost one year ago: I believe the mailing list thread was started on March 13th, so it's probably time to celebrate one year of Jenkinsfile Runner. It was originally started by Kohsuke, who proposed it as an engine for pipeline test automation and also for single-shot executions. Then it was enhanced by Nicolas De Loof, who also contributed a lot to the stability of this component and who researched ways to manage configurations in various ways. Currently Jenkinsfile Runner is in beta; you may see that it's beta 7. There have been a number of releases since October, when we switched Jenkinsfile Runner to community maintenance: it is hosted in the jenkinsci organization and has got a number of contributors. Currently it's maintained by Evaristo, by Francisco, who is on the call, and by me. We've been working on various enhancements and on stabilization of Jenkinsfile Runner so it can be consumed in various flows.

So it's still in beta, but it has already seen a lot of adoption. The biggest adopter is of course Jenkins X: they use Jenkinsfile Runner in the serverless mode, and we had an overview of that mode a few meetings ago. Then we have a reference implementation, the ci.jenkins.io runner: it's an emulation of ci.jenkins.io, the Jenkins-on-Jenkins we use to test plugins, which we also packaged with Jenkinsfile Runner just to showcase it. And there is also a lot of different packaging; a few examples are GitHub Actions, Codeship, and also AWS Lambda and Project Fn, which will be presented by Carlos today. So Jenkinsfile Runner has some adoption, and it can be used in pretty much any environment.

And how could you use it? One of the most standard ways is to just take something from the repository. The jenkinsci/jenkinsfile-runner repository currently provides a way to build a binary or a Docker image, but as I said before, we don't really host
this Docker image right now; there are plans to do it at some point. And the problem is that it packages only particular pipeline plugins, so if you want to run your custom pipeline, obviously it's not the best way to do that. To work around that, we also offer a toolchain to build custom Jenkinsfile Runner packages. This flow is powered by Custom WAR Packager, and it produces ready-to-fly Docker images. Here is an overview of this flow: it takes Custom WAR Packager, it takes Jenkinsfile Runner, you can pass an arbitrary configuration of Jenkins core versions and plugin versions, then you can use the Jenkins Configuration as Code plugin or just standard Groovy hooks or system properties to pass configuration, and as an outcome you get a fully packaged Jenkinsfile Runner image that you can start out of the box.

For example, if you want to use the Azure artifact storage we discussed before, you can configure it with Groovy or JCasC, you can pass credentials to this image, for example using the Kubernetes Credentials Provider plugin or something similar for Azure, and you can pre-configure everything and bundle it in the same image, so you get an instance with these configurations. Together with Jesse and Carlos we presented a flow for AWS with Amazon S3 and other components; I believe it was presented in September at the SIG meetings. So you take everything and package it as a single instance.

Probably I should just show you an example of that. Who has ever seen this Custom WAR Packager flow? I will briefly show it; if you are interested in more details, there is a link to a blog post, and I will post the slides later in the Gitter channel. So there is an instance here, which is managed by the packager config YAML. This is the input YAML file which drives Custom WAR Packager, and it defines everything. You may see that the configuration is pretty short now, because we pass plugins using the Maven input; we also pass
Jenkinsfile Runner using this definition; you may see that we take a particular commit right now. Obviously I should change it to a tag, because now we have releases; I just didn't touch this component yet. And here you may see that we also take an arbitrary base image to build on top of; we use it to have specially built images for ci.jenkins.io, but if you don't like that, you can take the classic Jenkins image, for example. What else do we have here? We have some system properties, mostly for configuring the tools we use in sidecar containers now, and we optimize provisioning; and we configure the instance using the JCasC plugin and Groovy hooks. So you may see that everything is packaged here, and after that you can just take Jenkinsfile Runner and, sorry, Custom WAR Packager, and build the image. You may see the command here: it effectively just passes the configuration, and everything gets built. I won't be showing it right now, because it takes ages to build on a machine with Windows Defender, but on a typical machine you will get it in maybe 30 seconds if you have everything cached.

What else do we have in this demo? You may see that I'm using pom.xml as an input, you may see it here, so all plugins are passed through pom.xml. There are a few reasons for that. Firstly, we use the standard Jenkins plugin POM, so we get features like the enforcer checks out of the box. Last but not least, we also get Dependabot, automatic management of dependencies, which is integrated in this repo, so you may see that there is a bunch of incoming updates. And finally, if you want to try this demo, you may see that there are some examples: for instance, if you want to build a plugin, there is a demo embedded here, and we just use a common Jenkins pipeline to build a plugin. So you can run this demo in order to build your Jenkins plugin in a single-shot run if you want. I can talk about this demo more later, but I believe that Carlos will be presenting the Custom WAR Packager
flow, or maybe other flows, in his demos, so probably we could talk about it later. Okay, so before we go to the last section, are there any questions so far? No questions, I believe. Okay.

So what has changed in Jenkinsfile Runner recently? We have been talking about it since June, I believe, and in September there was a presentation, but there were a lot of changes recently, thanks a lot to Francisco and Evaristo, who implemented most of them. Firstly, we have a new framework for test automation: the Jenkinsfile Runner test framework, which has been actively used in Custom WAR Packager, and we are adopting it in Jenkinsfile Runner and other components. Then, Jenkinsfile Runner supports Java 11 now. That matters a lot for features in the cloud: for example, Java 8 doesn't really integrate with cgroups, so if you want to limit CPU or RAM usage it's better to use Java 11, and you can run Jenkinsfile Runner with Java 11 right now, even though all the common disclaimers apply, since Java 11 support has only just landed in Jenkins a few days ago. Then we did a lot of things to clean up the architecture: before, the approach was based on JUnit test rules, which had performance costs at initialization; we've already improved a lot, and there are still lots of things we could improve further.

We also implemented some Pipeline feature integrations. Now we map the workspace to a standard location, so we can map it to a volume. This is useful if you want to use Jenkinsfile Runner as a basic step, for example in your pipeline: there are now discussions about Tekton pipelines in Jenkins X, and Jenkinsfile Runner could be used as just a step with workspace mapping, for example. Then, you can disable the sandbox if you do not like Script Security. For Jenkinsfile Runner it's arguably not a concern, because there are no real security risks in a single-shot instance, so if you want, you can disable the sandbox to give more freedom to your users. And there is also pipeline parameterization, something
which was requested for a long time; it's available in beta 7, from the Jenkinsfile Runner master. What else? We have a lot of integrations, and I believe Carlos will talk today about AWS Lambda and Oracle Fn; if you have questions about other integrations, we can talk about them later if we have time at the meeting. Any questions? If not, you can find more information by following these links: there are some user guides, and there is also an architecture overview which you can use to get more information about the internals. But from the user standpoint it's really simple: you just take the image and run Jenkinsfiles, and as long as you have everything packaged properly, it will execute your pipelines. Okay, that's it from me, thank you.

Thank you. Carlos, go ahead.

Okay, let me share my screen. So yeah, what I did was a couple of proofs of concept running Jenkinsfile Runner on AWS Lambda and on Project Fn. I wrote a blog post, and there's a GitHub repo. This basically allows you to run Jenkins pipelines on AWS Lambda. It has a GitHub webhook processor: it will receive the GitHub event, clone the repository, and execute the Jenkinsfile in that repo, with all the benefits that Oleg mentioned. This could be interesting if you're building AWS-related things, like uploading and downloading things from S3. It obviously doesn't support all the features of Jenkinsfiles, but if you're doing something limited, it can work.

The Lambda limitations: it can only take 15 minutes of execution time, 3 GB of memory, and 500 MB of storage. So whatever you write into the file system, plus the git clone, can only take 500 MB. And there are a couple of things that I didn't bother to go through for this proof of concept: checkout scm doesn't really do anything, you need to clone the specific repo that you want, and that would mean it checks out the current tip of the branch, which might be different from what the web
hook was in response to.

Yeah, I couldn't get checkout scm to work; I guess there are some environment variables that need to be set for that to work.

It also requires proper initialization of the payload in Jenkinsfile Runner. Right now it starts any Jenkinsfile as a standard pipeline, so probably you won't be able to emulate checkout scm until it's fixed architecturally.

Well, this happens before Jenkinsfile Runner is called; this is just the function here, it's not Jenkins at all. This is just a function that receives the request and parses the GitHub payload; then here it gets the clone URL, and here git clone will clone the git repository, and then it will run the Jenkinsfile afterwards in that directory. So for this git clone, yeah, I guess the run-Jenkinsfile part would need to set something up for checkout scm to work, but you'd need some patches. This basically runs the Jenkinsfile in that directory after the clone, passing the right properties for Jenkinsfile Runner to execute.

I'll talk now about the plugins structure and things like that. You are pretty limited on space and so on. I have an example that I tried: this is basically a very simple Jenkinsfile executing Maven. As I said, you need to add /usr/local/bin to the PATH, you have to set JAVA_HOME, and this will clone and run Maven; and you have to use the temporary directory as the user home for Maven to work.

Lambdas work with layers, and each layer is limited to 50 MB. So in building this project there's a file here, and what it does is create three different zip files, which become the three different layers: one for Jenkinsfile Runner, one for the plugins, and one for git and Maven. If you want to use something else, you have to build it for Lambda, package it, and upload it as a layer, and everything gets expanded into /opt. If you wanted
to do anything else, like store artifacts on S3, then you would need maybe the Configuration as Code plugin, the artifact manager plugin, and the YAML for the Configuration as Code plugin, and then add that as a different layer. I took some inspiration from the LambCI project, where you can find builds of gcc, Go, Java, etc. for Lambda, and even git. Basically you have to compile things: everything is expanded into /opt and all the libraries are in /opt, so you have to recompile things to build those layers. Here I'm using Docker to build all that and create the zip files.

Once you have the layers, you have to create a function using the Java 8 runtime and upload those layers to AWS. You have to set the handler to this one, and the memory to at least one gig; if you use the default 512 MB it will crash. And the timeout: you probably want to increase it to the maximum, which is 15 minutes. You can also do this with the command line: this command creates the function, and the layers are given in a JSON file. You can load the layers from S3 or upload them through the web interface.

If you want the Lambda to receive the webhooks, you also have to have an API Gateway trigger, which is what receives the webhook from GitHub and calls your Lambda; otherwise there's no way to call a Lambda via HTTP. But if you want asynchronous execution, because you get the webhook from GitHub and want to return a success code to GitHub while continuing the execution, you also need to create an SNS queue (Simple Notification Service) in AWS, have the API Gateway publish to SNS, and then trigger Lambdas based on that. I didn't go that far, because it gets complicated pretty quickly, but that's how asynchronous event execution is supposed to work in AWS.

Building, publishing... okay. So if I go to Lambda on AWS, I have my Jenkinsfile Runner function. You can upload the function code here if you
want; this is the handler that I mentioned. You need a role that allows storing the logs of the Lambda function in AWS CloudWatch Logs; I think this one was created from the UI. You can set the memory and the timeout, and everything else is the default. The layers part is interesting: the function itself only includes the code in this GitHub repo, just the Java code for the function. Then there is one layer, built by the Makefile, that is Jenkinsfile Runner proper; another layer with the plugins; and another layer with the tools. You can upload these from here, or you can upload them to S3 and then just reference them from here.

And here is the other thing I mentioned: I have the API Gateway that is receiving the webhooks and calling the Lambda, and the logs from the Lambda go to CloudWatch Logs. There's nothing really here; this is just the CloudWatch log group I mentioned. And I get the API endpoint that my GitHub trigger can be set to. Once this runs, so basically you do a POST to this endpoint, you get pretty much nothing back, but you can see what happened in the logs: you go to the Lambda's CloudWatch logs, you have logs for each execution, and you see all the output from Jenkinsfile Runner, and from the function too. So there's some GitHub cloning, then Jenkinsfile Runner is executed, and all this Maven downloading from the internet; obviously, if you want to do this properly, you probably want to have a Maven repository in AWS so this doesn't have to go to the internet every time. And you get the build success, and then it's finished.

What happens with Lambdas is that they get reused: if you call the endpoint multiple times in a pretty short amount of time, the instance gets reused, so the code takes care of using
different directories for different executions. What you may end up having, you can see here, is that it's not downloading everything from the Maven repo again. But what you have to take care of is that if this gets called multiple times and keeps downloading files, you're going to run out of space. So it's not a perfect solution for everything. And I think that's it for Lambda.

Then I took the same ideas and built the same functions for the Fn project, which is this open source project for functions in Docker. You can run it in Docker containers, on-prem or in the cloud, and it's all open source. The difference from Lambdas is that, well, the function signature changes. Let me go here: the function is a little bit different, but not too much. If I go to the function handler, yeah, the request handling is a bit different, but the clone is the same, and running the Jenkinsfile is pretty much the same.

The big difference is the packaging. Lambda layers are limited in size and things get expanded into /opt, but Fn allows you to just provide a Dockerfile where you can install whatever you want. You can see that here: this is a multi-stage Dockerfile. I'm getting Maven, I'm getting Jenkinsfile Runner, I'm getting the plugins that I want, I'm getting the Project Fn utilities, the FDK as they call it, and I'm getting OpenJDK 8; and on top of that I copy Maven, the FDK, Jenkinsfile Runner, and the plugins, and then I'm just using the FDK entrypoint, passing the right parameters (I copied this from their Dockerfile), and the command is just my function handler. So everything is in one Dockerfile: you build a Docker image that works with Project Fn, and you don't have to do all the stuff that you need to do with AWS Lambdas. I tried this both locally and on Oracle Functions, which is probably the
same thing in the cloud, and it is in limited availability for now. There are fewer limitations than in Lambda: checkout scm does not work, same thing, and everything that writes files needs to go into /tmp. That's pretty much it. You can install Fn locally and try it: you can create the application with fn create, then deploy it, and then you can just send your GitHub payload via standard input to invoke the function, and get the logs of the last execution locally with fn get logs. The interesting thing with Fn is that they tell you that if you want to see the logs in the cloud or somewhere else, you need to install a syslog server. You can do that locally with Docker, but basically they are not providing a great way to see the logs; you have to send the logs to syslog, and I think it's Papertrail that offers you syslog in the cloud. But yeah, you have to send the logs somewhere. That's it. Do you have questions?

What's next for these prototypes?

Nothing so far. I don't know how useful they can be. Lambdas can definitely be interesting: if you have a Jenkinsfile and you want to run it on AWS, and it's simple enough and doesn't take more than 15 minutes, you could do that. The problem with AWS is that it's kind of complex to set things up, so the benefits you get must be higher than the time it takes you to configure everything.

What is the benefit compared to Jenkinsfile Runner on Kubernetes? Because if you take Kubernetes, you can achieve pretty much the same just by using Kubernetes as a system, and it should be more portable.

Well, yeah, it's obviously more portable than Lambdas. What you get with Lambdas is the serverless part: you don't have to run anything, you don't have to run a server, and you pay exactly for what you use. So for something that you don't call very often, or something where you want high scalability and it's simple, it would be interesting. The problem
is the complexity, and it all depends on how many tools you need installed and so on. But if you manage to package it, you could have your projects built on Lambdas; if it's a small project that you don't build often, it would just cost cents.

Right. So theoretically we could also pick Custom WAR Packager or whatever to produce these outputs as well?

Yeah, but it would be much more challenging; you'd have the Makefiles, so maybe they could somehow be generalized.

Any other questions, or shall we wrap up? Okay, thank you for joining. I'll publish the video and add the notes. I'll stop the broadcast now.