Okay, that's it. This is another session of the Jenkins X Open Office Hours. We're going to probably just have a bit of a chat about what's been going on the last couple of weeks around Jenkins X. As always, feel free to ask questions; anybody can be involved. It's a community thing, and we'd love more feedback and more involvement from anybody other than the people that talk too much, like most of us on the call. [unintelligible: audio garbled] I have been doing a variety of random things. The ChartMuseum stuff was interesting; it's actually a really good way into not only the project itself but also our deployment and our infrastructure, and how it maybe differs from a vanilla install. So I kind of rolled all of that stuff up. The background to this is basically... Has anybody else lost sound? Was that just me? I've lost it as well. Yeah, me too. You've cut out there slightly. Maybe we can come back to you. You still there? Maybe we'll come back to you. Gary, did you want to talk to us about the tutorial that you've been doing? Yeah, sure, I'll go.
Apologies, because I'm sat in quite a noisy place; I'm in Cardiff city centre at the moment trying to cheer on Geraint Thomas, who's about to do his victory lap of Cardiff after the Tour de France, so I'm sat in the quietest place that I can find, but it seems to be noisy everywhere. So yeah, I've been working on adding a tutorial, or a set of tutorials. Let me share my screen; let me know when you can see this. Did that come through? Yeah, I can see that. Okay, great. So at the moment we've put two tutorials up there, both based on GKE, and they are both using the Cloud Shell feature. It seems to be a fairly recent feature that they've added, the ability to add built-in tutorials into Cloud Shell, and I'm just going to show you one of these now. So click on one of these: it's going to launch your Cloud Shell, assuming that you've already logged in with an account and are pointed at the right place. If you've already got it open, it's going to ask you to update it. It actually loads the tutorial on the right-hand side of the page, and it allows you to... it thinks I've already started it, because I have; I'd just missed that. And I can start bouncing through this tutorial and it gives me prompts on what I need to do. You can make these as complex as you need to. Just as an example, one of the really nice things you can do is click on this "paste in Cloud Shell" button and it will actually copy and paste the command into your shell, and you just hit enter and it runs. And that's going to go off and install Cloud Shell... sorry, install jx at the latest version, and any other dependencies that it needs. It takes about 30 seconds, roughly, maybe a little bit less today. And once that's complete, you can say I've completed that part and go on to the next step. And then if you need to go and create something, here, I'm going to try this command: this is actually going to create a cluster with Terraform, and I can just follow that through like I normally would.
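The copy-paste install step mentioned a moment ago boils down to fetching the jx binary; a rough sketch, where the release URL pattern and version number are assumptions rather than something shown in the tutorial, so check the Jenkins X docs for the current form:

```shell
# Build the download URL for a given jx release
# (version and URL pattern are illustrative assumptions)
JX_VERSION="1.3.500"
URL="https://github.com/jenkins-x/jx/releases/download/v${JX_VERSION}/jx-linux-amd64.tar.gz"
echo "$URL"
# Then fetch and install it, roughly:
# curl -L "$URL" | tar xzv jx && sudo install jx /usr/local/bin/jx
```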
So it seems like a really, really powerful feature, and something that's quite new. There are a few things where you seem to be able to launch a local dev proxy, so if there's something you're running inside Cloud Shell, you could then get it to open up a web browser window to that local app, but I haven't got too far into that yet. But in terms of just generally building out tutorials, I think it's a great little tool. Yeah, it looks fantastic. It's certainly going to be useful for some of the workshops we've been running, and for people that just want to get going fast without having to download any jx binary, or if they have download restrictions as well. So for people who have locked-down corporate laptops, this is another good way, a great way, to kick the tyres of Jenkins X. Yeah, and all of these tutorials here seem to work so far, at least within a GCP trial account, so they don't require you to have any extra permissions or anything. That's awesome. Excellent. Nicely done. And that's on the website now as well, yeah? That just went in, yes. Nice stuff. OK, Ian, you're back. I'm back. Can you hear me? We can, yes. I don't know what happened there. It was very strange; it just decided to cut the sound. It obviously had enough of me talking too much as well. Yeah, so: highly available ChartMuseum, basically packaged as a chart itself. There's a custom chart, which I've just called jx-infra; it's at the snapshot tag right now. And basically, if you install a cluster with that, you'll get a GCS-copy cron job running to sync your index, and you'll get ChartMuseum starting up, connecting to a GCS bucket. So you obviously get ChartMuseum itself, with its REST API, and you can query that in order to push stuff up to it.
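Pushing a packaged chart up via that API looks roughly like the following; the service host, port, and chart file name here are placeholders, not taken from the setup being described:

```shell
# ChartMuseum accepts packaged charts via an HTTP POST to /api/charts
# (host, port, and chart file below are placeholders)
CHART="mychart-0.1.0.tgz"
CM_URL="http://jenkins-x-chartmuseum:8080"
echo "${CM_URL}/api/charts"
# helm package mychart/
# curl --data-binary "@${CHART}" "${CM_URL}/api/charts"
```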
But if you make the bucket public, with that little sync job running in the background, it means that you can literally just give people the public address of the bucket itself. So in terms of GET requests and all of that kind of stuff, you don't even actually need the ChartMuseum server running. So, you know, you could conceivably tear down the entire cluster and everyone would still be able to pull the charts. Yes, so that was all done. Other things I've done. Oh yeah, this was partly because I saw a couple of people hit it, and partly for myself, because I like the idea of being able to debug specific versions; it's particularly useful for people that are committing and doing PRs and stuff like that. Also, I saw a couple of issues raised about jx version on the jx repo. So I rewrote part of the Makefile yesterday and also added a bunch of tests and other stuff into jx for the versioning. So basically now, if you are a contributor, and you clone the jx repo and start building locally, then if your master branch is on a tag, you'll just get the tagged version; but if it's not, you'll get a version with a dev suffix and the commit hash. So you can always see which version you're using, and you won't get prompted to update constantly anymore if you're on a dev build. It's just a little thing, really, that I thought would be really nice to have in there. Great stuff. Awesome. So we should be... I'll probably help out. Yeah, I was just about to... I'm sorry? So I think this is the latest one that you've got on. Yeah. There's just one other little thing, which is a developer-ease type thing I've been quickly playing around with earlier, which was shell completions, because jx is quite a complex binary: we've got lots of commands now and they're really difficult to remember and type.
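Stepping back to the versioning change described a moment ago, the tagged-versus-dev-build logic could be sketched like this; the variable names and the exact suffix format are illustrative, not the actual Makefile's:

```shell
# If HEAD sits exactly on a release tag, use the tag as the version;
# otherwise append a dev suffix plus the commit hash.
# (Names and format here are illustrative assumptions.)
TAG="v1.3.500"
COMMIT="ab12cd3"
ON_TAG=false   # would come from something like `git describe --exact-match`

if [ "$ON_TAG" = "true" ]; then
  VERSION="${TAG#v}"
else
  VERSION="${TAG#v}-dev+${COMMIT}"
fi
echo "$VERSION"   # 1.3.500-dev+ab12cd3
```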
So I just added in another command that lets you generate bash and zsh completions. I thought we had that. I'm pretty sure we did. Yeah, we already have that. I can't find it anywhere. I think it's called completions. It's so hard to find; there are so many commands it's easy to miss. I wonder if we've already got it. It's called jx completion. There you go, it's in there already. Great idea, though; obviously someone beat me to it. Yeah, it's just because I was basically messing around with it. One of the other things I've picked up is basically sharing... well, adding users and sharing your access to a jx cluster. I was chatting with the other James about how to do that: potentially adding some custom resources that will enable us to share those things out really easily, with finer-grained control, rather than just exporting or packaging up basically all of our auth and then sharing it around. I'm thinking about the user story being that some sort of system administrator will probably be the person that's creating the cluster, and then it's going to be their job to give out access to various teams so that they can actually do their work. I imagine that most sysadmins are pretty scared about giving out admin-level privileges to everyone. It seems like we've had a good conversation with a bunch of people from the community about what they would like and what they want to see. It definitely seems like having even just a fairly basic set of roles in the beginning, maybe just admin, developer, and viewer, would make it a lot easier for us to share access around without compromising the security of the cluster at large. So I'm looking at that at the moment; it's quite interesting.
I think a lot of people will be looking forward to that, because that's a common thing, wanting to add extra team members to your cluster. Yeah, this is a regular request. We're trying to knock off some of these regular requests, things that will really add value to people's experience. Yeah, looking forward to that. Awesome, look forward to seeing that. Thanks, Ian. Is there anybody else that wants to share anything? I can show some stuff around Prow, but I'll give an opportunity to anybody else first. James, go on, you've been quiet. I was maybe going to mention Knative, but I wondered if you should do Prow first before we lose track and go on to Knative, but basically... No, go for Knative, go on. Let's mention Knative really briefly. So Knative was announced two weeks ago; I think it was announced after the last office hours, so in between the last office hours and this one. There's now a Knative thing, which is basically a group of bits of open source backed by the usual suspects: Red Hat, Google, Pivotal, and various others. The idea is an open source project around serverless; that's the main focus. It's trying to be a Kubernetes-native serverless platform that runs on top of Kubernetes, in the normal Kubernetes modular microservice kind of way. There are really three parts to Knative: there's the eventing piece, there's the serving piece, and there's the build piece. The serving piece is using Istio, so that's kind of cool. It's combining Istio with a serverless-style serving mechanism. So you can, for example, have no containers running, and then an HTTP request can come in; you can then scale up a container to service the request and tear it down again. You have a nice elastic, serverless-style running of containers. So that's really cool. The bit that's really interesting from a Jenkins X perspective is Knative Build. Knative Build is very similar to things like Codeship and CircleCI and Concourse and various other CI tools.
Almost everything apart from Jenkins these days tends to use Docker containers and simple steps in Docker. Argo, again, is very similar: you define a pipeline in terms of steps, each step is a Docker container and a command in the container, and you chain them together. So it follows that similar pattern to lots of CI tools out there. What's particularly interesting, though, is that it's a fairly small controller: it takes a CRD to define builds, and it's got a controller that turns those CRDs into pods. What's really interesting, which I found quite surprising when I first saw it, but the more I thought about it, the more awesome it appears: Knative Build turns a build into a single pod, and each step is an init container in that pod. Kubernetes runs the init containers in sequence: it waits for each init container to start and complete, each init container has an exit code and passes or fails, and if it's good it goes on to the next step, and the next, and the next. So it's really using Kubernetes itself to implement the pipeline engine, which is kind of interesting, because the build is just a regular pod. There's nothing weird or wonderful about it; it's a regular Kubernetes pod. The CRD takes a build resource and turns it into a pod, which then runs the build using normal Kubernetes stuff. So that's really interesting. Incidentally, there's the command jx logs, which we've had for a while, which lets you interactively tail the log of a pod; usually you give it the name of a deployment or something, and it will pick the newest pod for that deployment and tail the log. I did a patch last week to jx logs so that if you want to tail the logs of a Knative build, you just do jx logs and give it the label build-name, which is the current default label on the Knative build pod.
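A minimal Knative Build resource illustrating the pod-and-init-container model just described might look like the following; the repo URL, image, and commands are placeholders, and the exact schema tracks whatever Knative Build currently ships:

```yaml
apiVersion: build.knative.dev/v1alpha1
kind: Build
metadata:
  name: demo-build
spec:
  source:
    git:
      url: https://github.com/myorg/demo.git   # placeholder repo
      revision: master
  steps:
  # Each step below becomes an init container in a single pod;
  # Kubernetes runs them in order and stops on the first failure.
  - name: test
    image: golang:1.10
    args: ["go", "test", "./..."]
  - name: build
    image: golang:1.10
    args: ["go", "build", "./..."]
```

Each of these steps would show up as an init container on the build pod, which is exactly what the jx logs patch described above walks through when tailing a build.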
It will basically find the latest pod and then log each init container, that is, each step, in order, so you can tail the log really simply, because one of the problems right now is that finding the logs for a Knative build is not immediately obvious. So anyway, that's a bit of a ramble. Knative looks really interesting. Given the backing of Knative, and that serverless is pretty hot right now in the general software development scene, I can see lots of people investing a lot of time and effort into polishing Knative. Right now it's very simple and basic, and you just run a couple of commands and that's kind of it. Knative itself doesn't have a webhook thing yet, which maybe segues nicely into what James has been doing with Prow, but if I were a betting man, I'd say this is looking like a long-term strategic pipeline engine. What's interesting is that you can embed any build tool in a Knative build, whether it's a shell script, a Jenkinsfile, any kind of language or processing engine or whatever. It can be bash scripts, it can be Perl, Python, PHP, whatever. This is just orchestrating the high-level Docker images; the actual detail of those images can do whatever you want. The other thing that's interesting, by the way, about Knative versus things like Argo: with Argo, each step is a separate physical pod, so if you want to share state between steps, you have to sync everything to S3 or Minio or something, and pull it back again in the next step. With Knative, all of a build pod's steps are in one pod, and each step has the workspace folder mounted with its content. So you can share content between steps without doing anything or syncing anything, which is kind of nice. Knative is really interesting; it's definitely one to watch. Another thing... well, I should have probably got these links ready.
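Later in the call a one-step release pipeline for Jenkins X is demoed; reconstructed from that description as a hedged sketch, where the builder image name, secret name, and mount paths are assumptions:

```yaml
apiVersion: build.knative.dev/v1alpha1
kind: Build
metadata:
  name: demo-release
spec:
  source:
    git:
      url: https://github.com/myorg/demo.git   # placeholder repo
      revision: master
  steps:
  # One "uber step" that chains the internal Jenkins X pipeline steps:
  # clone, version, tag, push, promote (image name is an assumption)
  - name: release
    image: jenkinsxio/builder-maven
    command: ["jx", "step", "release"]
    volumeMounts:
    - name: docker-socket
      mountPath: /var/run/docker.sock
    - name: maven-settings
      mountPath: /root/.m2
  volumes:
  - name: docker-socket
    hostPath:
      path: /var/run/docker.sock       # lets the step run docker builds
  - name: maven-settings
    secret:
      secretName: jenkins-maven-settings   # assumed secret name
```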
Well, I'll tell you what: shall I switch to you, James, and you talk about Prow, and when that's finished I'll mention some of the other bits and bobs in the Knative and Jenkins X world, because I didn't think of talking about this, so I'll need to look up the URLs so I can show you. It might be worth putting the URLs on the page, because if people watch this back on YouTube they don't get the chat history, so that might be useful as well. Put them in the doc, yeah. Yeah, the doc. Okay, cool. I'll just talk a little bit about Prow. I'm going to share my screen. So, can somebody tell me when you can see this? Yes, looks good. Okay, great. Prow is really interesting. One of the challenges I think we've been trying to solve with pipelines on Kubernetes is ensuring that we don't have a single point of failure: we want to ensure that we can always get a webhook event when pull requests are raised or merged in GitHub or different Git providers, and handle those events properly, making sure that we're not, say, in the middle of upgrading Jenkins and missing those events. One of the other really cool projects that's been around for a little while now, from the Kubernetes ecosystem, is Prow. Prow is being used by the Kubernetes projects, and other related projects too, like Istio; Knative is using it as well.
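Prow's per-repository behaviour, including the plugins that come up in the demo below, is driven by a plugins configuration file; an illustrative fragment, where the org/repo and the chosen plugin set are placeholders:

```yaml
# plugins.yaml: enables Prow plugins per repository
# (the org/repo below is a placeholder)
plugins:
  myorg/prow-test:
  - size      # labels PRs by diff size
  - wip       # adds a do-not-merge label for work-in-progress titles
  - lgtm      # /lgtm comments add the lgtm label
  - approve   # OWNERS-based approval
  - trigger   # /test comments start jobs
  - hold      # /hold blocks Tide from merging
```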
What's kind of interesting is we're looking at adding this in we've got this actually running already for our own services for JX to a certain degree and I really want to there's a diagram here it's under the Kubernetes testing for is the Prow Git repository and there's a little diagram here I don't know if you can see this but it's just thought I wanted to go through this very quickly there's a lot of parts to Prow there's a lot of terms, lots of different names you get used to them but a lot of them are implementation detail Jenkins X is trying to initially hide these implementations but you can go down and have a look at what's usual if you want to so we have events coming through in GitHub say a pull request is opened and that event is sent over to this webhook handler this can be highly available you can have multiple replicas Kubernetes replicas so we can perform upgrades for example and rolling upgrades to the hook handler controller and we won't miss any events coming from GitHub now this does this event then sends and creates a custom resource definition into Kubernetes of a Prow job and we have a controller here which is called Plank and this will then actually start up the pod and actually run the actions in the Prow configuration there are Kubernetes pod specs so you just add a Docker image name and then your commands that actually runs so it's really cool nice and simple highly available so we get scalable as well and it's being used by the some of the largest projects on GitHub most active projects on GitHub so you know it's scaling that way there's also a UI and some various other things that help with periodic batch jobs and cleaning up as well but those are the main ones there's also a UI called DEC interesting then is since the Knative was announced sorry my daughter's in the background apologies for that since Knative has been announced there was a pull request on Prow repo actually adding some support for build controller which will actually 
create Prow jobs handle Knative builds which is kind of cool I've been getting a bit more familiar with the Prow repo with the codebase and probably adding in some extra bits to actually handle when you're raising pull requests having the statuses get synced back via Prow so kind of we're going to be submitting that upstream as well hopefully we'll get some reviews on that and get a better understanding if that works we'll get that one merged in but in short we should be able to have in Jenkins X support for Prow so highly available webhook handler and the venting and creating Knative jobs being able to actually use these using the CLI the JXCI we're actually familiar with now I can give a little bit of a quick demo it's not amazing but I can just kind of show something in a second so we've started using it slightly on JX we're not fully implemented yet what it does first of all when you raise a pull request actually we have Prow plugins so you can actually see it analyzes the size for example and has labels lots of these controllers actually perform actions based on the labels on the pull request for example there's something called Tide which will automatically merge a pull request if it has all the appropriate labels and doesn't have other labels such as hold or working progress for example so let's actually just start by I've got a test repo Prow test I'm on a branch I'm going to create a pull request from my branch working progress so I'm going to give it a working progress title because we're going to see one of the plugins that Prow has I'm going to create a pull request from my working progress branch into my master branch so that's going to go away and have a little think so that's sent an event into the hook handler and that's then commented back on the pull request because we had working progress in the Prow title it's actually added a label of do not merge that means that if it was approved and if it was all the correct labels were added to it Tide 
won't actually automatically merge it until that label has been removed we've also added this comments so we don't have an owner's file in here yet but an owner's file contains owners of the repository and reviewers that can actually be recommended and automatically assigned to that pull request and we can see this is a bit I haven't quite figured out quite yet but we've automatically run the test we didn't actually wanted to run that straight away so I've got to fix that or figure that out but the idea is you can now start making comments on the pull request so test this for example if we run that let's do here you can see getpods it should work oh it's already started here we've got a this is a canative build pod that's actually been started with the init container it's actually cloned the source code that pull request it's actually running the unit test for that test project and then it's successful so we should see that the status update should go to successful on that pull request it should go in a second whilst that's going it's got other things like other kind of seems to be developer workflows that people are getting familiar with looks good to me for example I'm not sure if I can actually do that one because it's my own pull request I'll see the status checks that she passed and the job succeeded so I'm just going back very quickly what we've got is CI running using prow and Knative for Jenkins for Jenkins X project which is kind of interesting still a bit more to do but it's kind of very exciting Knative as Jane said was only announced two weeks ago so it's lots more coming but it's very exciting times I was just going to show something really I guess for the next office hours I should try a real demo of Knative with Jenkins X so I don't actually have a real demo right now but I was just going to mention two little things that happened last week if I quickly share my screen which tie into what James has showed just share my screen we have a little 
controller we have a little controller command which if you type jxcontroller build we kind of need to package this up as a help chart which we haven't quite done yet but if you basically run this command line jxcontroller build it watches for all of the pods generated by Knative and any pods which start or stop or steps happen or whatever we basically mirror the Knative step statuses into the CRD we have right now for looking at pipeline if I open the shell a second we've got this bit bigger let's see one second I should have done this earlier make a big shell if you type jxgetactivity and I'll filter by demo this is going to list all of the pipelines which I'll just be working on right now because here's a pipeline here's a Jenkins pipeline here's a pipeline where we checked out some code we did a release, we released 001 and then we did a promotion and blah blah blah usual kind of stuff what's happening should have turned off my phone before this so we do the normal jxgetactivity stuff which really is just using a Kubernetes CRD under the cover if you do kubectl getact we'll see here's all the different pipeline activities which if I look at my last pipeline kubectl getact minus oh yamol piping demo this is the yamol of the CRD pipeline activity which where we define various methods that are like what's the release name, what's the git URL what's the branch, what's the tags and so forth and also what all the steps are now what this new jxbuildcontroller does is whenever you run a pipeline a KNATIVE build we generate the same CRDs for Jenkins pipelines and KNATIVE pipelines which basically means we can use one tool to visualize all the pipelines either from the command line all the VS code and IntelliJ and Eclipse plugins all use the same CRD so it doesn't really matter if you're using a Jenkins pipeline or a KNATIVE build pipeline the same set of developer tooling the CLI, the REST API, the ID plugins and the CloudBees web console all of those visualize things 
in the same way which is kind of nice and so hopefully all the same tooling should apply whether using Jenkins for doing a pipeline or whether using KNATIVE it should kind of work the same one of the little thing I thought I mentioned is here's a quick example of this is a KNATIVE build resource as you can see at the top it's part of the KNATIVE version and the kind is build so this is a CRD which defines a build instance and this is just the git repository in the branch or whatever and here I've just got a single step called release and I'm using the Jenkins X Maven builder and I added a step last week called jag step release so this is I'm slightly cheating I thought well what's the quickest way of getting Jenkins X pipelines to work inside KNATIVE let's just write a big uber step that just does everything to do a release which just chains together all of the internal pipeline steps so rather than writing a really big pipeline I just cheated and just did one command so basically end of the cover is what that does is the same as our current Jenkins file it sets a git, it does a clone it generates a version, it tags everything it pushes everything and does all the promotion so that's a one command line version of what our Jenkins files do you'll notice it also mounts dokesocket and sets up a secret for Maven settings but this entire lob of YAML works in KNATIVE for doing a KNATIVE build to do a release pipeline for Jenkins X I was quite surprised how simple this was so we should be allowed let people choose really do they want to stick with a Jenkins server and keep everything in a Jenkins file do they want to go YAML and use KNATIVE build pipelines or maybe even combine them trigger some Jenkins pipelines after KNATIVE build steps or the other way around KNATIVE build steps one other final little thing that was a very experimental thing I raised an issue last week about if people started to use KNATIVE pipeline steps how could we maybe make it simpler to use the 
two things together to use CI and CD like we do in Jenkins X with configurable steps and with things like build packs and whatever I will get into this whole issue I will read or fly in if you would like but the basic idea is prototype code that lets you take we have a Jenkins X YAML file which is optional which can just define which build pack you want to use so given the build pack we can generate the KNATIVE build YAML completely from a library of build packs so rather than having to copy and paste these big YAML files into all of your Git repos we can just reference them in your Git repo or you could specify various different pipelines in Jenkins X we tend to have different logical pipelines like the pipeline that runs on a pull request the pipeline that does a release we can separate those into different YAMLs or blocks of YAML and let people completely override them in each repo or inherit them we could maybe even look at doing things like prepost steps so you might say I want to reuse the standard release pipeline but before I push the docket image I want to do this so whether you want to copy and paste the whole thing every time or you want to reference it by name or whether you want to just add certain pieces before or after things we should be able to let people just define very dry bits of YAML which is very concise but it gives you the full power to completely do whatever you want in every project so that's just a bit of a ramble I'll put all of these links into the google doc if anybody fancies noodling all this code after this meeting so that's me awesome that's really great stuff I think we've just been blown away a little bit about the potential around Knative and it's certainly going to be quite exciting to be following that project so it's quite surprising how nicely fit into Jenkins X as well we didn't really we weren't really expecting Knative to turn up we just figured that one day it would be some kind of Kubernetes-like thing but it was quite 
surprising how quickly we've just been able to reuse our IDE plugins our CLI even actually reuse our bills it's been quite surprising how quick it just fit in it was a spooky timing because we were just looking at Proud and almost within the same time Knative arrived and Knative and Proud became a thing and Jenkins X and Proud Knative all kind of happened within a week or two which is still very early and I wouldn't recommend anybody really go into production with Knative build at any time soon but it's certainly looking like it could be a nice simple option plus another thing we've always kind of figured at some point it might be nice to have a SAS option for Jenkins X so that if you just want to use Jenkins X but you didn't want to run any infrastructure or manage anything yourself use something like Codeship or whatever for all of your release pipelines in many ways Codeship and Knative are basically the same pipeline they're just running Docker commands things like Codeship and so forth see how they're hosted Knative runs in your cluster so I think the Knative thing fits very well with the potential SAS version of Jenkins X at some point where you might want to use Codeship or Google Cloud Builder or something like that for your releases but still have the same UX of Jenkins X I think it's a really interesting time I mean everything's changing very rapidly in this space these days it's fun and it's also great that everything's using CRDs as well these are Kubernetes natives exactly and the other thing that's really interesting is so right now okay Prow isn't quite finished in Jenkins X but it's kind of close but you have the option use Jenkins for releases and promotion so you can pick and choose which tool you wish that you could use Prow for webhooks Jenkins for releases so you could basically pick and choose what you're going to use for webhooks what you're going to use for releases what you're going to use for promotions and you can plug and play exactly 
what pieces you wish when we do multi clusters I really kind of prefer the idea of Prow in staging and production for promotion workflow well pipeline build step that hardly does anything and never really changes you just lock that down and then you're done you kind of don't want a developer centric tool that lets you spin up and import projects whenever you feel like running in production you kind of want that in your dev cluster so I think all these different options is going to let people choose exactly what they want their infrastructure to be like maybe Prow is used in production and staging just for promotion only no docker bills, no releasing anything just promoting released stuff and then maybe use completely Jenkins in development or maybe use a hybrid Jenkins on some projects Knative on others or whatever and pick and choose based on the app what you want to use but have the whole thing integrated together I think that's a really compelling thing Knative is not going to be a good fit for lots of things certainly things like mobile apps stuff like that is going to be terrible for but for lots of things like writing making little containers and releasing them I think Knative could be pretty compelling One of the other things, you mentioned about multi cluster as well because currently our answer to that is to install Jenkins in each cluster to handle the promotion because we're just leveraging the web all worse let your dev cluster be system cluster admin on all of your clusters which is even more scary kind of crazy but now we could just do a lightweight part of Prow so just the hook for example that starts at the job and then runs the Prow job between the helm up there and one of the other good things is you were just asking me before about the size of it the memory footprint is like 60 meg or something and the it was more than one of a CPU so it's very very lightweight, low footprint but it's also Prow has all the secret setup as well so for the webhooks 
So that's all nice as well; it certainly feels like a much better fit than what we were using before. Plus we probably want a cron job as well: on top of the webhook thing for promotion, you probably just want to do a re-apply every hour, just in case there's been drift. On some of these crazy production systems, let's just keep re-applying in case weird stuff happens. I think the combination of Prow and cron jobs looks a good one. And one day maybe Knative will have its own webhook thing, which might use some of Prow or whatever, and we could switch to some other webhooky thing. Maybe there could be a simpler, lean-and-mean webhook thing built just for promotion, hard-coded to only ever do one job, or something like that; that's all great. We were talking to somebody this week who has their own custom-built webhook thing for production which has been through their security checks. If you want to use some custom webhook thing of your own choosing, some lambda or whatever, crack on, it's all good. So I think that modularity of Jenkins X is going to serve us well in the future and let people tweak and configure things to suit exactly what they need, rather than a one-size-fits-all model.

And that's because of the focus around GitOps, isn't it: having Git be the thing, the source of truth, and then, like you say, if people have a particular way of actually implementing that, that's cool.

I think also, and this came up in the chat room today when people were asking about things like Spinnaker, one of the really nice things about having a Kubernetes-native solution for all of this is that it's microservices all the way down, using Kubernetes to connect everything together. So if you want to change a webhook handler or a cron job handler or a load balancer or whatever, you just switch a piece in the infrastructure and it all kind of works. So I think having Kubernetes-native solutions for all these things all the way down
the stack is really going to be the best long-term solution for all of us. And I think this has all moved forward much faster, by the way. Yeah, absolutely, exciting times. Cool, right. I can show the single sign-on operator if you... oh sorry, we forgot to mention that; we should have gone to you earlier, then I wouldn't have rambled. Let's hope it works. I can share my screen... can you see it? Do you see my screen? Not yet. Oh yes, okay.

So let's have a short introduction to the single sign-on stuff. What we're trying to achieve is to make single sign-on setup quite easy for Kubernetes applications, and we try to base all the pieces on well-known open source components. One of the main components is Dex, which is a federated OpenID Connect provider that has connectors to many identity providers; you can see here GitHub, GitLab, Microsoft Azure Active Directory, Google and so on. So basically it's a proxy towards these, and it's quite nice: lightweight, written in Go, starts quite fast. It also has a nice gRPC API; if you look here, you can manage it over gRPC. And the idea of this single sign-on operator is to have an operator that takes over all these setup steps to configure single sign-on for a service. In the Jenkins X single sign-on operator repository I wrote a bit of documentation, and I can show it. Currently I manage everything with Jenkins X, so we have a Dex installation, we have a single sign-on operator installation, and we have a vanilla Golang HTTP service for which we want to set up single sign-on. If we check the staging environment, we currently have three pods running, and I have here the logs of the single sign-on operator. Maybe we should have a look at the single sign-on CRD: basically the operator watches the CRD, and when a resource is created it takes over all the steps to set up SSO. If we look inside the CRD, basically we see there is this OpenID Connect URL, which is the URL
which points to the Dex identity provider, depending on where it is installed; typically it's in the same cluster, and it could be in the same namespace or in a different namespace. We can also define the upstream service: we just need the name of the service for which we want to set up single sign-on. Then we need a domain, because the single sign-on for this service will be publicly exposed under this domain. Then there's the proxy image we want to use for the single sign-on proxy, and some cookie-specific configuration, like how long we want to keep the cookie and whether it's only sent over HTTPS, these kinds of things. So that's it.

I can now go and watch the pods to see what's going on. We have this here; now I create this SSO resource. The resource is created, and we see that first the operator has created an OpenID Connect client in Dex using the gRPC API. Now it deploys the SSO proxy and waits for the proxy to get ready; the pod is not yet running, so it waits until the proxy is up and running. When the proxy is ready, it exposes the proxy publicly using the exposecontroller, which it runs as a job. We see the service is running; now it runs the exposecontroller... the exposecontroller is running, it's ready. Now what it does is get the public endpoint of the SSO proxy, set this public endpoint as a redirect URI in Dex for this specific client, and then restart the SSO proxy container in order to pick up the new configuration. This takes a bit, and it waits again for the container to be up and running... it's relatively fast... now it's running, now it's done: the SSO CRD is marked as initialized. So now if we go here and try to get the ingresses in the environment, we have this sso-golang-http ingress here; we can just take it. I forgot to mention that the Dex identity provider is
configured with a GitHub connector; we have a GitHub application for the Dex identity provider. Now I will open this SSO public endpoint and log in using GitHub. The Dex login window shows up, so I can choose GitHub. I already logged in once, and I have Dex configured to work only with the jenkins-x organization, so I was already granted access. So now when I say authorize... I'm logged in, and you see the SSO HTTP service shows up; it returns the code. Now we can check that the cookie was created: we see here in the cookies (content settings, cookies, show cookies)... we have this cookie, created for this service, with whatever name we configured. So now if I go back, the cookie is set and I can log in again. If I delete the cookie and try to log in again, it shows me the login screen again, and I can log in. Since I already logged in, GitHub keeps track of my authorization and doesn't ask again to authorize Dex to connect to my account. So that's it basically on this side.

I wrote a bit of documentation here on how to install the SSO operator. We created our own Dex repository based on the upstream repository, not really to fork Dex itself, but because we wanted to set up Jenkins X to do continuous deployment of Dex into Kubernetes, and we have our own chart based on the open source stable charts. It's just Dex, with a few extra jobs: one of the jobs generates the gRPC certificates (it's a self-signed certificate) and stores the certificate in some secrets. The problem is that if Dex runs in a different namespace, we need to copy the certificates from the Dex namespace into the SSO operator's namespace, so in the SSO operator chart there is a job which can be enabled: if you deploy the SSO operator in a different namespace than Dex, you can enable this job.
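Enabling that cross-namespace certificate copy might look something like this values fragment for the operator's chart; the key names below are hypothetical illustrations, so check the chart's actual values.yaml:

```yaml
# Hypothetical Helm values for the SSO operator chart: enable the job that
# copies Dex's self-signed gRPC certificate secret from the Dex namespace
# into the namespace where the operator runs.
dexGrpcCerts:
  copyJob:
    enabled: true        # only needed when Dex lives in a different namespace
    dexNamespace: dex    # hypothetical key: the namespace holding Dex's cert secret
```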
It will copy the gRPC certificates from the Dex namespace into the SSO operator's namespace, and the SSO operator will be able to connect to Dex. By default, if you deploy both in the same namespace, you don't need to enable this; the SSO operator will be able to find the secrets and fetch the client certificate and the public certificate, because it's a self-signed certificate. So yeah, that's it. If you don't want to use Jenkins X to manage the operator, you can also use Skaffold and Helm to install it, and I shared a bit of detail on how you can create and run the CRD. Probably the next thing to fix is to see how cert-manager and TLS work together, so that we can expose Dex over TLS and also create an SSO proxy that is exposed over a TLS endpoint. This is an absolute must-have; without it, it's not a very secure setup.

Okay, great, thanks very much, that's awesome. Cool, we've got five minutes left; anything else anybody wants to share? Going once, twice... okay. Again in two weeks. Well, in two weeks' time we'll have to send a note out about whether we can make it, because I think a few of us are travelling, but we'll see. Okay, thanks everyone, goodbye.