Okay, so today we have our first Jenkins end user panel. The idea is to have a reversed discussion: not contributors presenting to users, but end users talking to contributors, presenting their experiences and expectations from Jenkins, and then contributors who are on the call just asking questions and providing some feedback. And again, everyone is welcome to participate in this discussion, either by voice or by chat. And I suggest that we start from Andrei, because he was the first to respond. So Andrei, would you like to speak about your experiences and quickly introduce yourself to the community? Yeah, hi, my name is Andrei Babushkin, and currently I work for Intel, on the Intel OpenVINO toolkit project. And we have used Jenkins since the inception of our project; as far as I remember, that's 2018, and the oldest Jenkins version I was able to find is 2.89 or so. So we have seen many updates, we've seen how JCasC was created, we've seen UI improvements. I think we've had upgrade issues only once, and I can't remember when that last was. So this part of Jenkins is very, very great. And most of our user experience issues, I think, are connected with the fact that OpenVINO is not a Java project, right? So we don't have any Java experience. And when something goes wrong with Jenkins or in a Jenkins pipeline, there are huge stack traces mentioning some strange concepts, some deep internal pipeline CPS code, and that's a bit confusing. Another issue we have is that our pipelines are so big that we were forced to split them into a few separate jobs, because we can't just put all stages in a single pipeline, use the parallel step, run all builds on all Linux flavors, on Windows, and on Mac, and run tests. Because when you try to upload test results to a Jenkins build, there's no way to separate the different test executions in the Jenkins test reports. And the other issue is that sometimes we need more powerful build dependencies than just the upstream/downstream relationship, right?
Sometimes we want to specify that this build depends on this build and this build, and currently we cannot do this in our multi-job pipeline, right? So basically that's all that I was thinking of for a few days after the end user panel announcement. So I think that's all. Those are nice examples, and maybe a few quick questions before we move on. So for splitting executions and test reports, have you seen the new Code Coverage API plugin? I may have seen it, but I had no chance to try it. Why am I asking? Because it actually supports splitting reports by various factors and tags and programming languages if you want. So if that is the user experience you would like to see, maybe it would be a good referral for an issue. I believe that the JUnit plugin currently uses GitHub issues, and Tim, who's on the call, is currently one of the maintainers of the JUnit plugin. So yeah, but test reports are just one example of why we need to split our pipeline into a few jobs, right? The other thing is the amount of logs we need to see. We've tried to put this into one big pipeline, but just imagine you have parallel stages with Ubuntu, CentOS, Debian, Windows, and macOS, and inside each parallel stage you also need to parallelize test executions, right? And we actively use Blue Ocean for visualization, to see logs. And when you try to use parallel stages inside parallel stages, it just shows you nothing. So if you use the classic UI, the JUnit report will show you reports by stage and group things by stage. So you can actually have a slightly better overview. I still don't think it's gonna be great for what you want, but it might be much better than Blue Ocean for you. No, Blue Ocean is actually better, because in each of our build jobs and test jobs we split our pipeline into a few stages like copy artifacts, unpack artifacts, run tests, write results, something like that.
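The layout being described, parallel per-OS stages that each fan out into further parallel test runs, could look roughly like this in Declarative Pipeline. This is a minimal sketch, not the OpenVINO setup; stage names, node labels, and script names are invented for illustration. Declarative does not allow a `parallel` directive nested directly inside a parallel branch, so the inner fan-out has to drop down to the scripted `parallel` step inside a `script` block, which is exactly the shape that Blue Ocean struggles to visualize:

```groovy
// Sketch: outer parallelism per OS, inner parallelism per test suite.
// Labels ('ubuntu', 'windows') and scripts (build.sh etc.) are hypothetical.
pipeline {
    agent none
    stages {
        stage('Build and Test') {
            parallel {
                stage('Ubuntu') {
                    agent { label 'ubuntu' }
                    stages {
                        stage('Build') { steps { sh './build.sh' } }
                        stage('Tests') {
                            steps {
                                // Declarative cannot nest 'parallel' in 'parallel',
                                // so use the scripted parallel step here.
                                script {
                                    parallel(
                                        unit:        { sh './run_tests.sh unit' },
                                        integration: { sh './run_tests.sh integration' }
                                    )
                                }
                            }
                        }
                    }
                }
                stage('Windows') {
                    agent { label 'windows' }
                    stages {
                        stage('Build') { steps { bat 'build.bat' } }
                    }
                }
            }
        }
    }
}
```

The outer parallel stages render fine in Blue Ocean; it is the inner scripted `parallel` whose branches tend to show up as a single opaque step.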
And if we see this in the classic UI, we just see the latest stage. I'm not talking about the stage overview or anything like that, I'm purely talking about the test result reports. As in, if you go to the build's test report, you will get, well, I'm just trying to pull up an example on my instance, so I can tell you exactly rather than just going "I think it works like this" based on my memory. So bear with me for two minutes, or hopefully less than that. Yeah, well, we talked about job relations. What did you expect, something like a specific dependency graph, or how would you like the jobs to be executed? What's your main problem with the current way of doing it? Actually, I saw a GitHub Actions pipeline recently, and in GitHub Actions you can specify a stage dependency, a stage that depends on two or more other stages. So this stage will be executed only after all the stages it depends on have been executed. Something like that. Yeah. So basically, you define targets like in a Makefile, and you don't really care about the execution order of the targets. So it's quite a popular topic. Personally, I think that the Jenkins pipeline engine supports it in principle, but it requires a significant rework of how the internals are implemented. So right now there is no way to actually implement this in Jenkins. You can just have parallel jobs which will basically start from the very beginning, and then you could probably use the Join or Milestone plugins to actually do some dependencies, but it would be quite complicated. We actually tried to use, I don't know the name of the plugin, but it added a stage wait step or something like that. But it seems there was some bug in this plugin and we got some bad logs in our pipeline, so we stopped using it. But actually, we use Jenkins not only for continuous integration purposes. We run many, many tests in our nightly and weekly validation cycles.
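A small dependency graph of the kind described, one job that depends on two others, can be approximated by hand in scripted Pipeline, even though nothing like GitHub Actions' `needs:` exists out of the box. A sketch under assumed job names (the `openvino-*` jobs are hypothetical):

```groovy
// Sketch: 'package' depends on both upstream builds, so they run in
// parallel first and 'package' starts only after both have succeeded.
parallel(
    'build-linux':   { build job: 'openvino-build-linux' },   // assumed job name
    'build-windows': { build job: 'openvino-build-windows' }  // assumed job name
)
// The parallel step fails the whole run if any branch fails, so this
// point is only reached when both upstream builds finished successfully.
stage('package') {
    build job: 'openvino-package'  // assumed job name
}
```

This works for a graph you can express as alternating parallel/sequential layers; an arbitrary DAG (diamond dependencies across many jobs) is exactly what this encoding cannot express cleanly, which is the gap being discussed.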
And our weekly cycle is about 3000 Jenkins builds, and that's quite a lot. Yeah, so the classic test results are exactly the same as the Blue Ocean test results, in that it only shows you grouping if a test has failed. So you get the full stage name. So apologies for saying it was better than Blue Ocean; it's the same. Oh, for once. Okay, any questions to Andrei about these use cases, or should we move on? Because we still have an opportunity to discuss particular topics, for example in the Ignite talks, if someone wants to hack on DAG support for Pipeline for a few hours. Do we have any questions? Yes. So thanks, Andrei. And yeah, I think that we could try a deep dive later. So for example, your team could join so that we deep dive into your topics. Actually, I had the honor to go to Nizhny Novgorod and present there at one of the meetups, and I know that there are a lot of Jenkins users there. So we could organize something specifically to deep dive into your use case. For example, if you're interested in JCasC and pipeline, you can invite contributors and work together; it's actually a deep dive. Yeah, I try to contribute a little bit, but once I broke things with my changes, so now I'm afraid of it. Welcome to the club. Yeah. Okay, thanks a lot. And let's move on to Ioannis. Would you like to introduce yourself? Sure. So hello everybody, nice being in this group. I am a life and data scientist working for a pharmaceutical company. And for a minute I want you to sort of forget everything you know about Jenkins, and then actually remember everything about Jenkins, because what we're talking about is actually outside the standard DevOps operations. Is it possible for me to share slides, or can you? Yeah, you can just share your screen. Okay. You should be able to do that; if not I will fix it, but you should have permission. There's a green share screen button on your control panel. Is it? Yeah. Okay, can you see the slide? Yes, thank you.
Good, so I put these slides together just so that you have a frame of reference later on, if you want to go back and refresh your mind on some of these things that may be a little bit outside the standard realm of what we're doing with Jenkins. Back in 2013 I discovered Jenkins. Myself, I'm a trained PhD molecular biologist, but I went back to school and got a master's in software engineering, and I was for a long time interested in software development. And I'm at an interesting intersection of medicine and data science now, which makes a lot of these things really, really interesting. So back in 2013 I discovered Jenkins and have been using it since then, but we've been using it for a totally different application. A few years ago we published this paper in the scientific literature where we introduced Jenkins as a platform for scientific data and image processing applications. It has nothing to do with actual compilation of code, testing code and so on, but nonetheless it uses all of the capabilities of Jenkins. So I really want to start by thanking a lot of people that have been sort of fundamental in this process; interestingly enough, my boss at the time was called Jeremy Jenkins. And over the years I've met many of the Jenkins contributors, very nice people in the group like Oleg and Mark, and even Kohsuke, who visited us at Novartis a few years ago, and Jesse. Importantly, my colleague who is now in New Zealand, Bruno Kinoshita, developed some of the key plugins for this, and the participants in GSoC 2020 last year, where we developed a machine learning plugin for Jenkins. So why use Jenkins for life science applications? Really, there are a lot of standardized things that Jenkins offers that are key enablers, such as the accessibility of the jobs via a web portal, the freestyle parameterized jobs, easy deployment, the super rich plugin ecosystem.
I'm not gonna read this whole list, but these are what I call the standard enablers of Jenkins that have made this possible. And the benefit this offers is that life and data science pipelining really requires integration of a lot of utilities, applications, and custom script tools, and Jenkins is able to do all of that. Finally, we have developed this concept of one-page web apps on a shoestring: people can go to a Jenkins job interface and execute an entire data analysis, or data ingestion and processing and parsing, in a very reproducible way that leaves a really good, what we call, data provenance path, where we can always determine where the data came from. And finally, through this same web portal we're able to share this data with others and collaborate. Nonetheless, there is a kind of impedance mismatch between development and operations on one side and science on the other, and just as a kind of funny point I bring up this word "artifact" that we're using in Jenkins. Of course, artifact is used with the idea of something that Jenkins creates, but for science this is really a spurious observation and a bad thing, something that you do not want. So that's a really simple example of nomenclature where things are different. But let's look specifically at pipelines, jobs, and builds. For developers, we check out code from the SCM. The pipelines are more consistent and continuous. The jobs require very few parameters. The builds are almost always deleted and the artifacts are automatically tested. On the scientific side, though, there's no such thing as the concept of an SCM for data and instruments. The files are all over the place, whether on a particular instrument or on a local network drive. The pipelines are really discontinuous; they consist of an ad hoc mix of Jenkins jobs. Different tasks are encapsulated in separate jobs that need to provide input and output to each other.
The builds are almost never deleted, because this is really primary data that you're generating; it's not a case of superseding old data or old jobs or old builds. And the artifacts are really inspected and annotated and curated by the scientists, rather than in an automatic way. Another sort of impedance mismatch here is around job configuration. Developers now are moving more and more to pipeline as code. Andrei mentioned the Blue Ocean project, and I had some questions around its status, because it really looked interesting at the beginning, since it starts approaching some of the requirements that scientists have around visual editors for configuring jobs. But I have tried to use it, and I realized that it's actually more for the build stages and larger, not so granular things, than it is useful for configuring parameters. And we use a lot of the freestyle parameterized jobs, which is not very common for developers. So what we're still missing is this configuration exploration, dependency management, understanding where these things are. What you see on the right-hand side is my attempt to roll my own. These are actually the parameters in a particular job, and they depend on each other. They depend on Groovy scripts and scriptlets that are executed as part of the job. So this is our own version of trying to understand the configuration better, but it would be great if we had a better supported tool. Certain metadata are still issues, I think: in the standard version of Jenkins, searching for artifacts across different builds is still very difficult. Build-level metadata is not searchable and is not generated very easily, and the same thing across builds. I think Andrei may have touched on this as well, around the artifact relationships.
I call it relational builds, where a downstream build may depend on two or three upstream builds, and it's very difficult to document that, and it's even more difficult to do a cascade, which we would like to do: if you delete a primary artifact on which a bunch of downstream analyses are dependent, you would like to have the opportunity to at least identify those and delete them or move them. Here is a concept that is critical for what we're doing, and it's totally missing from Jenkins; what we call interactive pre-builds. A lot of activity goes on before you even start the build, and this has to do with the fact that starting a complicated analysis, in Python, image processing, whatever, requires the selection of a bunch of parameters that may or may not be appropriate for the analysis, and going through a full build cycle is very expensive. So what we would actually like to do is have a bunch of pre-build artifacts generated by selecting different parameters, and have the ability to generate a set of artifacts out of each of those parameter sets. These are some examples of the kind of artifacts we're talking about: images, scientific analyses that you visualize through graphs, and even data tables and so on. And all the build does at the end is archive and report these pre-build artifacts. So for example, here you can see there is a report with six different pre-build artifacts that were generated using different algorithms and different parameters. And we have managed to do this, and that is the amazing thing about Jenkins, it's still one of the greatest joys to work with it, because you can get it to do a lot of different things, even these pre-builds where I think the concept is missing from it. Now, something that may not go down well with a lot of people: please don't let security eat the function.
Groovy script execution, inline JavaScript and HTML are key for the kind of things that we're doing. And we have been struggling and struggling to maintain their functionality in the present scheme of security improvements. I know that Bruno has been very good at fixing security warnings and so on, but it's just the nature of what we're doing. Finally, I would like to say that we're talking about a lot of big companies that are using Jenkins and about a lot of big Jenkins installations, but for the life sciences and data sciences, we cannot forget how Jenkins would fit into the environment of an academic lab, where there is an academic lab doing some research, they need to deal with the data from their laboratory instruments, and they have one developer there. It would be great if that developer could apply some of these kinds of jobs that we are developing for life science integration and data science in a rather easy way. And that's it. I will leave you with a set of references. And if anyone is interested in hearing a little bit more about this, I think we have an Ignite session on applications of Jenkins in data science a little bit later, and we'll go into a little more detail there. And again, thank you for the opportunity to speak on behalf of perhaps voices that you've never heard from before. So thank you all for inviting me. Yeah, thanks a lot for the feedback. If you want to do an extended session, the Jenkins Online Meetup always welcomes you. And yeah, there are a lot of good points; they could definitely be worth discussing. I especially appreciate the points about security and things like Blue Ocean; we discussed them a lot at the previous summits. And I think these are really valid points from the standpoint of a user who actually wants to keep Jenkins as a framework for use cases like bioinformatics or whatever, where you don't really care about the ins and outs of the box, but you want to use the power of Jenkins as an automation engine.
Yeah, thank you. It's great to see this angle. So, any questions or feedback from others? Yeah, I just want to add something about security. I need to make a confession, because since the beginning of our project we have used the Permissive Script Security plugin. And we use it just because we are not creating new plugins; instead, we put all our custom functions, GitHub API integrations, GitLab API integrations, things like that, into our Jenkins Shared Library. And in the beginning we saw many messages from the Script Security plugin, and we just decided to turn it off. So I think it's not good, but it's much easier to just turn it off in order to allow our code to be executed. Oh well, I have to say, you are not the only user using this plugin in production, but yeah, I understand the point. And fortunately, we don't have Wadi Kondrainio here, so we can discuss this topic even more. One question before we continue: how much time do you need, approximately, for your discussion? Because we have some time constraints. Ivan, Victor, okay. Okay. Yeah, I mean, 10 minutes, no more. Yeah, so basically we have 10 minutes more. So would you like to continue the feedback from Ioannis now, or rather take it offline? Because yeah, there is a lot of feedback to discuss. I think it would be rather feasible to have something like a one-hour session, maybe a half-hour session, together with whoever is interested, and talk. What do you think about that? I would actually prefer more of the things we were talking about before: the pain points that we have in Jenkins, the UI, the scalability in multibranch pipelines, this kind of stuff. So we can continue discussing. Do you agree, Victor? Okay, so let's continue then. I mean... Yeah, I want to make a point about trying to avoid script security workarounds in the Jenkins Shared Library.
We have a really big Shared Library and we don't need to approve any script or work around the sandbox for anything. We manage to do everything that we want without having to work around the script security sandbox, with scripts, with binaries, with all our things. So I think it is not required to bypass it; you can always find some way to do the same thing in a better way, keeping the security sandbox in place. There are some plugins like Pipeline Utility Steps, or the Node Iterator API. For me, for example, there was a case when I needed to do custom scheduling; I was using the Node Iterator API, which allowed me to query nodes and schedule my sub-tasks for parallel pipelines using them. But still, there are many cases when direct access to the underlying API would be beneficial. And to be honest, what are the common use cases? But as long as he's doing it in a global Shared Library, I don't think you hit script security. If you're trying to use it in your own pipeline or a folder-level one you will, but not in a global one. Yeah, it's exactly this. We are loading our library at runtime, so I think it has the same restrictions as a library which is connected to a folder. And there's too much code already to rewrite, right? So that's why we keep using the Permissive Script Security plugin; it's too hard for us to rewrite our old code so that it doesn't interfere with script security. Is there a reason you're not using a global one instead of a folder one? If you moved it to global, then I think it would just work. Yeah, but we can't. We use the library step to load our library dynamically; we need this for versioning. In our case, a lot of the scripts are part of the parameters in the UI. We're using the Active Choices plugin, which creates sort of interactive cascading parameters, and also creates for us these HTML and JavaScript elements that we're interested in for more interaction, introducing graphics libraries, scientific libraries, imaging and so on.
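The dynamic loading mentioned here uses the `library` step, which resolves a version at runtime, unlike the `@Library` annotation, which is resolved before the run starts. A minimal sketch, with the library name, tag, parameter, and class all invented for illustration:

```groovy
// Sketch: load a Shared Library dynamically so the version (a Git tag
// or branch, e.g. from a build parameter) is chosen at runtime.
// 'ci-shared-lib', LIB_VERSION, and the Utils class are hypothetical.
def lib = library("ci-shared-lib@${params.LIB_VERSION ?: 'current'}")

// Classes under the library's src/ directory are reached through the
// object returned by the step. Note: a dynamically loaded folder-level
// library runs inside the script security sandbox, unlike a trusted
// global library -- which is why it triggers script approvals.
lib.com.example.ci.Utils.new(this).checkoutAndBuild()
```

This is why the "move it to global" suggestion matters: a global library runs trusted, outside the sandbox, while the folder-scoped dynamic load described here does not.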
And yeah, for all of those we need to go and approve the scripts. For versioning, we use tags in the repository, and we have a "current" tag that is used in all the pipelines; if we have some regression or whatever, we move the current tag back to the previous one and fix the issue in time. We release the library between five and ten times a week, and we have managed to keep the library versioned without any issues in the three years or so that we have been using it. I should say that I don't think I have deactivated the security plugin, or used that other plugin, the permissive one, whatever it's called. But we have tinkered with, what was it called, the OWASP markup formatter or something like that, in the startup of Jenkins, so that it will allow HTML and things like that to be rendered. Sorry, I don't know if this is the right moment for that one. No, that's fine. So this one on top is like you were saying about the usability. I want to be the voice as well of some of the users that we have at Elastic. Some of them find it really hard, when you run so many things in parallel in a pipeline, to debug what's going on, right? So that's one of the issues we hear about quite often: how to make this easier to debug from the console output and so on. Because sometimes it doesn't even reload the logs in the UI correctly. So that's probably one of the issues I would like to highlight from the point of view of usability. And also about usability, I think that's also a good point: making life easier for the end user when restarting a particular stage after the build of a particular pipeline fails. It's not working in all cases, and that's probably one of the key areas as well, how we can make people's way of working with Jenkins pipelines easier so that they don't really need to wait; as I already hear, some builds take hours and so on.
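The cascading parameters described here are typically built with Active Choices Reactive Parameters, where the Groovy script behind one parameter re-evaluates whenever a referenced parameter changes. A sketch of what such a script might look like; the parameter names and choice values are invented, not taken from the setup being described:

```groovy
// Sketch of an Active Choices Reactive Parameter script.
// Configured referenced parameter: ASSAY_TYPE (hypothetical).
// The script returns the choice list for the dependent ANALYSIS parameter.
switch (ASSAY_TYPE) {
    case 'imaging':
        return ['cell-segmentation', 'intensity-profile']
    case 'sequencing':
        return ['alignment', 'variant-calling']
    default:
        return ['<select an assay type first>']
}
```

Scripts like this run on the controller before any build starts, which is both why they enable the "interactive pre-build" experience and why each one has to pass script approval, the friction discussed above.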
So, how you can run only a particular stage of a pipeline easily, without going through the entire pipeline. That's something I haven't found an easy way to do, and that's why I hear about it from our users quite often as well. So those are probably a couple of points, just in case we don't have time for the presentation of what we do and how we do things with Jenkins at Elastic. Yeah, for that, one of our approaches could be to use the Ignite talks, because we don't have so many Ignite talks submitted at the moment. So we could just have your session there after the Ignite talks and just deep dive. Oh, here's who is coming. So he missed the most interesting part. But yeah. So yeah, thanks a lot. We have something like one minute before we start breakout rooms, and right now I'm not 100% sure how Mark configured them, so I'm trying to figure it out at the moment. I believe that we do them in this room, but I don't see breakouts configured, to be honest. So I'll stop the recording and then we can figure it out together if needed. So thanks a lot. And again, we will be doing another session with Elastic during the Ignite talks. So thanks a lot.