Alright, we're live. Hello everyone. I just have to remind everyone that it's probably best to mute when you're not intending to talk, because background noise gets picked up and it helps everyone if only the person intending to talk is unmuted. James, take it away.

Okay, thanks Liam. Hello everyone, this is this week's Jenkins X office hours. We've got a couple of people on the call. We're going to have a little demo from Gareth Evans today, which is going to show some of the work he's been doing around integration with CodeShip and Terraform as well, so that's pretty exciting stuff. Before we do that, did anybody have any questions? Maybe it's worth having a little update on what people have been doing these last two weeks, any cool new things coming up in the Jenkins X world. Cosmin's name has popped up on my screen, so maybe you could kick that off with whatever you've been up to.

Can you hear me? Is it fine? Yeah, I can. Okay, so what I'm working on right now is an SSO operator, so that we get single sign-on running inside of Jenkins X. That's my main focus right now: building an operator, and it will work with Dex, the OpenID Connect provider, which is basically a proxy towards various identity providers such as Google, Azure Active Directory, GitHub and so on. We'll probably start with GitHub first. I'm using Jenkins X completely to manage the deployment of the operator. I've got it running; now I'm trying to get gRPC working with Dex. I still have to sort out some secrets, moving them from one namespace to another and so on, but I'm progressing on this side.

Yeah, really just to highlight that as well: I was totally impressed. You are actually properly dogfooding everything - you are creating and building it all out with Jenkins X. Well played, nicely done Cosmin. Bonus points. Awesome. Okay, that's really exciting stuff coming through there.

Ian, you're probably there as well. Can you hear us? Maybe tell us what you've been up to. I can hear you, can you hear me? Excellent. Yeah, so lots of testing. The main thing I've been focusing on in the last week or so has been getting parallel builds to complete without any kind of manual interference - without being required to go in and either re-kick certain builds because they failed due to clashes, or to go in and resolve merge conflicts. There's a PR for that now, so fingers crossed that's done and works. There might still be an edge case or two lurking in there somewhere, but it seems pretty solid; I've been testing it quite a lot and I get consistent passes now.

I also hardened up the actual test runs themselves, because occasionally they would get a network disconnection and then the test would fail just because the Jenkins client got disconnected momentarily. There's a retry on that now, so if there's any kind of flaky network it won't fail the test with a false failure. I suppose the most interesting part of it is adjusting the Jenkins configuration itself to make it do PR builds without attempting to merge on the PR side, which was the blocker really. And the other thing was adding into the default environments the option to disable concurrent builds, which just prevents clashing where two competing builds are both trying to deploy to master at the same time.
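As a rough sketch of the kind of Dex configuration such an SSO operator would be driving - a GitHub connector fronted by Dex - something like the following; the issuer URL, bucket of secrets and redirect URI here are placeholders and assumptions, not taken from the actual operator:

```
# Hypothetical sketch: a Dex config with a GitHub connector, of the kind an
# SSO operator might manage. All names, URLs and secret references are placeholders.
cat > dex-config.yaml <<'EOF'
issuer: https://dex.example.com            # placeholder issuer URL
storage:
  type: kubernetes
  config:
    inCluster: true
connectors:
- type: github                             # start with GitHub, as discussed
  id: github
  name: GitHub
  config:
    clientID: $GITHUB_CLIENT_ID            # supplied via a Kubernetes secret / env var
    clientSecret: $GITHUB_CLIENT_SECRET
    redirectURI: https://dex.example.com/callback
EOF
```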
It's possible I hit a weird edge case where one build would hang forever because it was waiting for something that had been declared in one of the other manifests but wasn't actually there; it would just sit there and wait. So it now does those sequentially. And I guess with the Prow stuff that you're working on going in, and maybe if we can get the promotion containers down to a smaller footprint so they can spin up really fast, then it will be really slick and really solid in terms of just being able to push. You can have a number of apps and be pushing to all of them at the same time and it's just going to handle it. So yeah, that's pretty much it.

And just today I've been having a look at the whole ChartMuseum thing, looking to switch that out to Google - or generally cloud-native - storage.

It's probably worth explaining a little bit why on this one, if I can just jump in. It's one of the stability things we're looking at: you're doing a great job looking at how we can improve the stability of the tests, but also of the services we're providing, and quite a few people have come to us having been hit with issues where our ChartMuseum is down. So yeah, give a bit of background.

Yeah, so basically it's a single point of failure at the moment: if we need to tear down our cluster, or there's any kind of issue with our ChartMuseum, then anyone trying to pull those charts is going to hit a problem. Especially if you're testing, you end up with failures that aren't really anything to do with what you're doing - especially if you're trying to test something you might want to raise a PR for. So we've been knocking a few ideas around internally about how to handle that, and I think we've settled on pretty much the simplest way we could do it, which I quite like: we switch the actual storage backend - in our case it'll be Google Cloud Storage, but it could really be anything, S3 or whatever - and then we expose that bucket read-only publicly and use that endpoint as the source from which we pull our charts. So even if ChartMuseum isn't actually running on the cluster, it's not going to break anything, and we've basically got as much uptime as Google's cloud storage, which hopefully should be pretty good. And it's nice and simple as well.
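A rough sketch of the approach described above, assuming ChartMuseum's Google Cloud Storage backend; the bucket, release and namespace names are illustrative, and pulling straight from the bucket only works once an index.yaml has been published there:

```
# Point ChartMuseum at a GCS bucket instead of local storage
# (bucket name, release name and namespace are hypothetical).
helm upgrade --install chartmuseum stable/chartmuseum \
  --namespace jx \
  --set env.open.STORAGE=google \
  --set env.open.STORAGE_GOOGLE_BUCKET=my-jx-charts

# Expose the bucket read-only to the public.
gsutil iam ch allUsers:objectViewer gs://my-jx-charts

# Consumers can then pull charts straight from the bucket, even if
# ChartMuseum itself is down, as long as an index.yaml is published there.
helm repo add jenkins-x-charts https://storage.googleapis.com/my-jx-charts
helm repo update
```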
I mean, there is an adjoining issue of proxy access - if you can't get to those addresses, how do you mitigate that - but I think we've broken it out into two separate problems now. We'll probably look at the proxy thing separately and try to solve that for people who have really restrictive firewalls or can't reach those addresses for whatever reason. But this should really firm things up in terms of the builds. And I think the other James mentioned as well, that's our goal for the actual JX deployments for users too: you'll have a persistent store, basically, mounted in, so if you're blowing away a cluster or anything like that, when you bring it back up your ChartMuseum should be in the same state it was in before.

Yeah, that's brilliant. And we should be able to make that nice and configurable, because we can override the Helm values for our different cluster types. So, like you said, that should then work on the other big clouds as well with their cloud storage, and other people should be able to follow the same pattern if they want to do something themselves and customize it for their own applications, for example. Exactly. It's fault tolerant all the way down. Brilliant. Okay, excellent, thanks.

James, I can see you giggling there. I was - I don't know why; I was just happy about the fault tolerance all the way down. I guess speaking of fault tolerance, let me talk about something that's not fault tolerant. Well, EKS is fault tolerant, and Amazon is, and kops on Amazon is. One thing that's always been a bit tricky - let me step right back. Using Jenkins X on Amazon has been slightly different to using Azure and Google. With both of those two, if you create an ingress controller you get a stable IP address, so we could do the nip.io kind of hack to avoid you having to set up wildcard DNS. Strictly speaking, in a real production install of Jenkins X we should all do wildcard DNS so we get multi-availability zones and all that kind of good stuff, like Google storage and S3. But it's one of those things: if you just want to kick the tires, having to own a DNS name and set up a DNS wildcard just to try out Jenkins X - we always saw that as an unnecessary barrier to entry, really. Particularly since most of us mess up DNS the first time we try to configure it, and then nothing works, and then you have nothing in Jenkins X to use, because if you can't talk to Jenkins you're kind of screwed.

So we've always had this deficiency on Amazon that we couldn't work properly without wildcard DNS. Elastic Load Balancers (ELBs) do have IP addresses, but they change randomly - they only survive for a few hours. So for a few hours you've got a Jenkins X cluster, and then it's completely useless because you can't see anything anymore, which has been a bit of a worry. It only really affects folks kicking the tires - if you're doing a real install you should always use DNS - but this is more of a tire-kicking thing. So we've had lots of emails and discussions about how to do this properly, and I've had lots of help from some Amazon folks, which is really nice.
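For context on the nip.io trick mentioned above: it's just a public wildcard DNS service that encodes the IP address in the hostname, so any name under that IP resolves back to the IP. The addresses here are made up:

```
# nip.io resolves any hostname that embeds an IP back to that IP, so an
# ingress controller at 35.190.10.20 can serve
# jenkins.jx.35.190.10.20.nip.io without you owning a domain.
dig +short jenkins.jx.35.190.10.20.nip.io
# 35.190.10.20

# A "real" install would instead create a wildcard DNS record pointing at the
# ingress controller, e.g. *.example.com -> <ingress IP or load balancer hostname>.
```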
So the official story is: if you want stable addresses, you have to use the new load balancers in Amazon, the NLBs (Network Load Balancers), rather than the older Elastic Load Balancers. I've got a pull request I'm just about to submit that switches to NLBs by default for kops and EKS. One fly in the ointment: NLBs work perfectly today with kops, which is awesome, but they don't work with EKS right now because of roles - if anything doesn't work on Amazon, it's usually because of roles. We're using eksctl under the covers, which is a really lovely tool for EKS where you can just type eksctl create cluster and it creates a cluster, which sounds so simple and trivial. With gcloud, creating a cluster just creates a cluster; on Amazon it's a totally different world, and creating a cluster involves all kinds of things. So under the covers eksctl creates the roles and the VPC and the control plane and the node pools and all the various things you need to boot up EKS. But NLB was one issue: eksctl didn't support the roles necessary for NLB, unfortunately. Ilya has been helping us really well - thanks Ilya for the work - and as I talk, Ilya is just about to release v2.3 or v2.4 of eksctl. As soon as that's available as a binary, I'm going to submit the pull request to use it. So by, say, tomorrow, if you do jx create cluster eks it will use the new eksctl binary and it will use an NLB, and then even if you don't use wildcard DNS your EKS cluster should survive for more than a few hours - the next day you should still be able to use your Jenkins X cluster. That would be awesome, because it's been a slightly embarrassing issue we've had from the start. So pretty much this week, kops and EKS should work great with JX, and it uses Elastic Container Registry under the covers by default as well, which is nice.

So actually, by the end of this week Amazon should be the best supported cloud, really, because we still don't automate Google Container Registry or Azure Container Registry out of the box, which I hope we can fix pretty sharpish because it's slightly embarrassing. All the hard work is done; we just need to get the IAM role set up so that on GCP we can push to GCR, and then on Azure we can do the same. That should be easy to fix, and then all three big clouds will use their native container registry and will use ingress properly.

This whole lesson is reminding me how awesome DNS is and how we should all use it all the time, because we often forget that. But it is that thing where, to kick the tires with JX, you don't want to have to do the whole DNS dance just to try it out for a couple of hours and then tear it down. In a real install, though, we should all be using wildcard DNS. I have added code to set up Route 53 wildcard DNS entries on Amazon if you choose to do that - you get prompted "would you like to use wildcard DNS?", and yes is the default by the way, and if you say yes it tries to do that. I'm not sure that bit actually works yet; I haven't tested it. It compiles. But I'm hoping to test it and check it works. If it does work, it would be nice to do the same thing on Google and Azure, because they all have a DNS service of some kind. So it would be nice to automate setting up DNS, because it's so easy to mess up.
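The NLB switch described above essentially boils down to an annotation on the ingress controller's Service so the AWS cloud provider provisions a Network Load Balancer instead of a classic ELB. The release name and values path below are assumptions about how the nginx-ingress chart is installed; the annotation itself is the standard in-tree one and is honoured when the Service is created:

```
# Ask the AWS cloud provider for an NLB rather than a classic ELB.
# "jxing" as the release name is an assumption; the annotation is standard.
helm upgrade --install jxing stable/nginx-ingress \
  --namespace kube-system \
  --set controller.service.annotations."service\.beta\.kubernetes\.io/aws-load-balancer-type"=nlb
```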
And you know, people often pick the wrong option and put wildcards in the domain name when they shouldn't, and all that kind of stuff, so it would be nice to automate it. I've seen a few issues fly by where, when people try wildcard DNS, they point the wildcard at the API server rather than the ingress controller. So the more we can automate that bit the better, because we basically want a wildcard DNS entry that resolves to the host name or IP address of the ingress controller; then we can get in from the outside and talk to any services in Kubernetes. Sorry for the big ramble, but basically EKS is almost there - within 24 hours of being awesome - and kops is pretty much there, and ingress is finally pretty much sorted on Amazon, which has taken way longer than I imagined. And we're really close on ACR and GCR, which will be really nice. Having really nice out-of-the-box container registry, ingress and load balancer support for the big three clouds is really, really nice. And we can then remove the internal Docker registry that we run at the moment - actually we can do that right now really easily. We should probably just do that for Amazon, because right now it defaults to ECR, so you might as well disable the internal Docker registry by default.

That might, as a trickle-down benefit, enable us to start using multi-stage builds for our images as well. Yeah, although the only problem is we can't assume everyone is always using the latest Docker - anybody who is using the internal Docker registry probably can't use multi-stage - but I'd love to use multi-stage. As soon as we go Kaniko, all bets are off. Once Kaniko is the default - and I'd love that to happen, by the way: make Kaniko the default - then we can move as quickly as we want to multi-stage and the latest Docker build features.

I saw a blog post, and I hadn't realized you can actually use Kaniko inline - there's a builder image for it. So you can do a docker pull of Kaniko and then do the build inside that. You can run it in Docker, yeah, like Docker-in-Docker. So we should be able to just switch out our... I've tested it already with the builder base image. I ran it in a Docker container, not in a Kubernetes job, because if we want to run it in the agents we cannot use jobs - we'd need to spawn jobs out of the agent, and I think that's not possible at this time. We probably need to run it inside the agent in a container; that's the easiest. But then Skaffold can use Kaniko in the build pod, so we just need to get that working. It should work; I think there might just be a secret required to push. Yes - you need to mount the Docker secret inside the Kaniko container and then it works. So hopefully we can get that sorted pretty soon. It was painful. Hopefully in the next couple of weeks we can get ACR and GCR sorted. It works natively with GCR, straight out of the box, but if you want to use another Docker registry or Docker Hub, you need to mount the Docker config inside. That's the only thing we have which requires privileged containers, isn't it? Yeah, and then we wouldn't need to run any more agents in privileged mode. All of Jenkins X should be okay, I think - I think that's the only thing we're actually running privileged.
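A rough sketch of the "run Kaniko in a container" approach described above: mount a Docker config carrying push credentials into the public Kaniko executor image. The registry and image names are placeholders, and the exact config mount path can vary between Kaniko versions:

```
# Build and push with Kaniko instead of a privileged Docker daemon.
# The docker config.json carries the registry push credentials;
# the destination image name is a placeholder.
docker run --rm \
  -v "$PWD":/workspace \
  -v "$HOME/.docker/config.json":/kaniko/.docker/config.json:ro \
  gcr.io/kaniko-project/executor \
  --dockerfile /workspace/Dockerfile \
  --context dir:///workspace \
  --destination my-registry.example.com/myorg/myapp:latest
```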
I think the build pod would still need to be privileged, though, to run Kaniko? No. It doesn't require privilege? No. I can't remember. No? No. Oh, okay. Cool. Awesome. Yeah, we should try to get that done soon. Nice.

Oh, eksctl has just been released, so I'm just going to go and do my pull request. Awesome.

Okay, so on from that as well - something from, I guess, two weeks ago, the last session we did, but quite a few people have asked about it - there's now a jx upgrade ingress command. Once you've created your cluster, you can switch out, you can change your DNS. So if you're using the nip.io wildcard initially to kick the tires with your cluster and you then want to switch to a real domain, you can do jx upgrade ingress. There's a little wizard in there that also allows you to enable cert-manager - we have integration with cert-manager to generate signed certificates from Let's Encrypt - so you can switch to HTTPS ingress rules. Jenkins X will automate everything and recreate all the ingress rules with your new domain and signed certificates, which is pretty cool as well. So that also means you can start with nip.io and then do DNS later: rather than doing the full thing at install time, you could install once, play around without doing DNS, then do DNS later with jx upgrade ingress when you're ready. Or switch to TLS whenever you want as well - that's the other thing. There are basically three steps: you start with nip.io, then you go DNS, then you go TLS, and you can do each of them whenever you feel happy to. Yeah. And there's a flag to do it cluster-wide as well, or just in the namespace you're actually in, which is quite handy. Cool.

All right. Let me make sure there wasn't anything else - I guess you're all working on non-community stuff really, but I did notice you did the tray thing, the desktop tray thing. What was that?

Yeah, I can talk about that. So that was like two, three weeks ago; it's just an initial repository, but it works. It's a tray application in the sense that when you start the app - which is Electron-based, the same as Visual Studio Code and Atom and Slack, using that to create the native app - it shows as a tray icon at the top in the menu bar (I'm on a Mac). You could basically build a Windows or Linux client, anything - there's nothing specific to Mac, everything is abstracted away. But basically it's a tray icon: you click on it and see a drop-down, basically a window. At the moment it's not at the same stage as the VS Code plugin, but the code is the same: it adds a watcher and then basically prints out the pipelines. The cool thing is that from that app you can also execute commands - we have a command to open the thing that we're working on on the commercial side. But the idea is the same as the VS Code extension, where you basically click on a pipeline and then you can execute something against it in the terminal. It's the same here: behind the scenes it opens the command in your terminal, whatever your default terminal is in the OS, so you could execute anything you want. You could also - I was talking with Rob about this - grab a folder and drag and drop it onto that icon, and it could import it into Jenkins X. I had a bunch of problems setting everything up, because everything is TypeScript, and with Electron it was kind of messy.
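The upgrade path described above is just one command once the cluster exists; the flag shown is as I understand it and worth double-checking against the command's --help output for your jx version:

```
# Start out on nip.io, then move to a real domain (and optionally TLS via
# cert-manager / Let's Encrypt) later by re-running the ingress wizard.
# --cluster applies the change to all namespaces rather than just the current
# one (verify flag names with: jx upgrade ingress --help).
jx upgrade ingress --cluster
```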
And I also had a problem with the Kubernetes client - the open source one from the Kubernetes project. It doesn't work in an Electron environment, so I had to patch it. The Kubernetes client is using ShellJS, which is a wrapper around the terminal functions, and there is a PR which fixes that - I have it locally and it works. I also created a PR for the Kubernetes client which - I remember I haven't followed up - requires me to sign some agreement or something like that. But it looks fine; it's basically just a patch. They have a synchronous API, but if you switch to the async one then it also works in an Electron environment, because with the way it's wrapped, certain APIs are not available. So there was a bunch of stuff just to get here. But the idea is to have the same functionality as the VS Code extension. You can run it locally: you just npm install and start, and it starts the app. And then you have an app you can update on the fly, because Electron supports that - it can update itself when you create a new release on GitHub, for example. And like I said, because you have access to the terminal, you can create whatever you want and you're not locked into any specific IDE. You can do a bunch of stuff.

That's awesome. I'm quite excited to see how that goes - I am keeping an eye on it. Nicely done. Thanks.

I guess before we go on to Gareth, I'll give a quick update about some of the stuff I'm doing. I'm looking at integrating Prow into Jenkins X. For anybody that doesn't know what Prow is: Prow is part of the test-infra project in the Kubernetes GitHub org. Prow is an event/webhook thing that handles webhook events from GitHub and then updates pull requests. So if you add a comment on a pull request, that sends a webhook event, Prow intercepts it, and then it can trigger jobs in Kubernetes. What we're looking at doing is using that for promotion: when folks are using GitHub or GitHub Enterprise, rather than using Jenkins to do the promotion of an application into a staging or production environment, we're going to use Prow. What's really cool about Prow is that lots of other projects are using it - Istio is one, and most of the Kubernetes projects are using Prow - so there's a bit of consistency, a common way of working for developers. If you have a pull request with WIP in the title, for example, it will automatically add a do-not-merge label to your pull request, and various other things like that. And you can control when to trigger your CI tests: if the pull request comes from someone you don't know, you might want to validate there isn't anything dangerous in it before triggering your CI tests, for example. You can also set up things like auto-merge, so that if a pull request passes all your CI checks and gets an LGTM comment it will actually auto-merge, and you can have automated releases. So it's a great way of working that developers on the open source projects around Kubernetes are already using, and it looks like a great fit for the developer workflow in Jenkins X. I'm very excited about that, and it ties into some of the work that Gareth is going to show now. It's a bit tricky, but we're getting there.
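To make the Prow behaviour described above concrete, here is a hedged sketch of the kind of plugins configuration that enables those chat-ops commands; the org/repo name is a placeholder, and the actual Jenkins X integration may wire this up differently:

```
# Hypothetical plugins.yaml fragment for Prow: enable chat-ops on one repo.
# "myorg/myrepo" is a placeholder.
cat > plugins.yaml <<'EOF'
plugins:
  myorg/myrepo:
  - trigger    # /test and /retest comments kick off CI jobs
  - lgtm       # /lgtm comment adds the lgtm label
  - approve    # /approve records approval from OWNERS
  - wip        # titles containing WIP get a do-not-merge/work-in-progress label
  - hold       # /hold blocks merging
EOF
```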
So hopefully by the next update that should all be in there. So Gareth, you're going to blow us away with an amazing live demo, right?

Oh, great, no pressure. Yeah, apologies - I'm actually still wearing my gym kit from heading to the gym this afternoon, and I got so into debugging something that was going wrong that I haven't had a shower yet. So it's lucky you can't smell me through the screen. Glad we work remotely, eh? Yes, yeah.

So just before I start, I'm going to put this little slide up - oh no, I need to share my screen first. All right, let me know when you can see that. You can see it? Okay, so I'm just going to put this little slide up here. This is basically an overview of what I'm doing. I'm going to run three commands. First we're going to create a GKE service account; then we're going to create some Terraform config, which creates a repository and populates that repository with the config we've requested; and the third step is to create a CodeShip build that will automatically be triggered and start applying a cluster that looks like what you can see on the right-hand side of the screen. Just to prove that none of it actually exists at the moment: this is my Google Cloud console - I'm going to refresh that, and there's definitely nothing there - and this is also my CodeShip account. I think it's the basic tier, just a free account that's set up. All right. So can you all see that window, and is it big enough? Yeah, it looks good. Yeah, looks great. I learned from the other week to disable transparency so it isn't too hard to see.

So I'm going to run - I've got a recent build of jx, I think it was built yesterday or today. The first command I'm going to run is to create a service account. I haven't got code completion working yet, James. That is going to ask me to log in - I'll use this account down here - and go back to this window. It's going to ask for a name for the account, and it's going to try to work out which project it can assign it to; if you've only got access to one project it will automatically select that. So let's call it... and we've chosen the Google Cloud project that we want to use. It's now going off to try and work out whether we've got enough permission to create that account - I think it's the setIamPolicy permission it looks for, that's the key one. And it goes through and assigns the particular roles that it needs to be able to administer a cluster and manage the storage in the way we need. And at the end of this, that's step one out of three. That's good. I'll clear this.

So the next step is to create some Terraform config. We have a command for this: I can give it an org name - oops - I'm going to call it office-hours as well. And I'm going to tell it two extra things: I'm going to pass in the GKE service account to make sure the permissions are correct, and I'm also going to tell it not to apply the Terraform once it's generated it. So what it's going to do is create the repo, add some config to it, and commit that config back into it. And because I didn't specify any clusters on the command line, it's going to prompt me and ask how many. I'm just going to select the defaults: I'll say one, it's going to be called dev, and I would like to install Jenkins X inside the dev cluster. I want to use me as my username. Choose the repository. And off it goes. This check of whether the services are enabled actually takes a bit of time.
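Roughly what that first step is doing under the hood, sketched with plain gcloud; the account name, project and exact role list below are illustrative assumptions, not necessarily the ones jx grants:

```
# Create a service account and grant it the roles needed to administer GKE
# clusters and storage (names and roles are illustrative).
gcloud iam service-accounts create office-hours-tf \
  --display-name "office-hours terraform" --project my-project

for role in roles/container.admin roles/compute.admin roles/storage.admin \
            roles/iam.serviceAccountUser; do
  gcloud projects add-iam-policy-binding my-project \
    --member "serviceAccount:office-hours-tf@my-project.iam.gserviceaccount.com" \
    --role "$role"
done

# Export a key for CI (e.g. CodeShip) to authenticate with.
gcloud iam service-accounts keys create office-hours-tf.json \
  --iam-account office-hours-tf@my-project.iam.gserviceaccount.com
```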
I've got it on my list of things to see if there's a way we can speed that up a little, because it seems most of the effort is spent in that one function.

So what's that doing? What's the significance of the storage - you mentioned the permissions for the storage. What's being stored in there? Yep, so Terraform generates a state file and it needs to store that state somewhere. By default it will just use the local file system, but if you're trying to trigger this from a CI server or from multiple machines, you don't want that. So the Terraform that gets generated is automatically configured to use a Google Cloud Storage bucket as the backend. Awesome, yeah.

I'm just going to put this in Belgium, the usual machine size, and the default cluster size. This is going to start creating, and... that's complete. Let me show you what the structure looks like. Inside the jx folder you have an organisations directory, and inside that you'll have one for the name of the organisation. If I go into it, you'll see we have a build.sh, which is what CodeShip is going to use to apply the changes, and a clusters directory with a sub-folder for each cluster you want to create. I suppose the most important file is this terraform.tfvars, which is where all of your actual configuration for that cluster is stored - all the options you've selected, such as the zone, the machine type and the number of nodes. That's where they get persisted.

So that's two steps working. The third and final step is to create the CodeShip build that's going to apply all of this. I'm going to clear the screen to get back to the top again. There's a command for this one as well. Again, we specify the organisation we're dealing with, and we need to specify the GKE service account again. Now it's going to prompt us to log into CodeShip - we don't persist this information beyond the initial job creation, it's not required anywhere else; I've been speaking to James about putting it in one of the secrets files or the auth files at some point. I'm going to log in with my account. It's located it again - it knows that the organisation exists. The build has been created and triggered, and that's the trigger ID. If I bring back my main window here and refresh the screen, we can see the organisation office-hours, the build has been created, and that build is now triggered. Give it a few seconds: it's cloning the repository and it will start doing basically a terraform apply against that environment. Awesome.

So we have CI for creating and managing upgrades to multiple clusters as well - this works with more than one cluster, although we still have a problem to solve there, so I wouldn't suggest using it for more than one cluster at the moment. Because it's watching the GitHub repo that contains the organisation config, if you make a pull request or a commit to that repo, it will trigger new builds and effectively an upgrade. That's awesome. We could add a CLI command to add a cluster - you can start with one cluster, because it's just a folder in the Git repo, and generate a pull request to add a cluster after you've created the first one. I think the existing CLI is very close to doing that: if you specify an organisation that already exists - I've been looking at it today - it should add another cluster config to the existing environment. That's awesome.

This solves the problem that when we first created the jx CLI, the wizards and everything were great, but really just for kicking the tires when creating a cluster.
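A hedged sketch of what the generated per-cluster Terraform might contain - a GCS backend for remote state plus a terraform.tfvars with the options chosen in the wizard. All names and values here are placeholders rather than the exact generated output:

```
# Remote state in a GCS bucket so CI and multiple machines share it
# (bucket and prefix are placeholders).
cat > backend.tf <<'EOF'
terraform {
  backend "gcs" {
    bucket = "office-hours-terraform-state"
    prefix = "clusters/dev"
  }
}
EOF

# The per-cluster options chosen in the wizard end up in terraform.tfvars
# (variable names and values are illustrative).
cat > terraform.tfvars <<'EOF'
gcp_project  = "my-project"
zone         = "europe-west1-b"
machine_type = "n1-standard-2"
node_count   = 3
EOF
```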
But in any enterprise organisation, or for anybody that's going to be using it in a team for real, creating that configuration and having it stored locally is just not ideal. Having all of this actually managed - full GitOps for the dev environment as well, and for your clusters - is practicing what we preach. This is pretty awesome stuff, to be honest. Are we going to wait and watch it go green, or are you going to call that a success? It's going to take about 20 minutes, unfortunately. There is a slight bug at the moment in the jx install afterwards - it was the one we were looking at the other day, James, to do with the GitOps server. It's a bit strange: the second time round you get a partially incomplete install. I have a pull request in that should fix it, but I just haven't had time to get it merged and tested fully.

One short question: when I go and modify the Terraform and just commit something, CodeShip will then kick off the build and upgrade my cluster immediately? That's the case. That's pretty nice. I was going to change the output of the Terraform plan to make it a bit more verbose so you can actually see everything it is applying, because it's quite quiet at the moment.

Currently, is the default admin password actually stored in the config? That's actually displayed on the first install screen when you look at the output. That's again something we could make an option: generate a password and pass it as an environment variable for CodeShip to handle. I was just wondering, because in a couple of weeks we're looking at the Vault integration, for example, for secrets management. That's cool.

The other piece is an open question, I suppose, about how we would tie this back to a version of the JX platform. I was hoping we could put the version we want to install in the Terraform, I think, although the Terraform provider doesn't quite work yet. Yeah, it's not quite ready, but I've almost got a Jenkins X Terraform provider where one of the parameters is the version; under the covers it's kind of doing a jx install with that version. Once this is working with Prow and everything, I'm hoping we could try out the Terraform provider for JX and do things like define environments in Terraform, define the version of JX, and do upgrades of Jenkins X via Terraform. Yeah, because then you could just use UpdateBot to trigger an automatic pull request as we do releases. Exactly.

So UpdateBot, from my understanding, kind of pushes out changes - so you build the JX platform and it pushes out a change too? Yeah, it makes a pull request. So how would that work for organisations that we don't own? Yeah, one thing that was always on the roadmap for UpdateBot that we never got around to building - but we're almost at the stage where we really need it, so we might end up building something - is how anybody can subscribe. Right now we push, but we want people to be able to subscribe to pull those updates. There are various things we could do: people could register "please send pull requests to me, and here's my GitHub organisation" or whatever; we just need to figure out the right way of doing the pub/sub kind of thing.
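A minimal sketch of what a CI build script like the generated build.sh might do when a commit to the organisation repo triggers it - this is just the shape of it, not the actual generated script, and the paths are illustrative:

```
#!/usr/bin/env bash
# Sketch of a CI build step: apply the Terraform for each cluster folder
# whenever the organisation repo changes.
set -euo pipefail

for cluster in clusters/*/; do
  (
    cd "$cluster"
    terraform init -input=false
    terraform plan -input=false -out=tfplan
    terraform apply -input=false tfplan
  )
done
```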
It doesn't have to be super time-critical - we could make sure that every day we run UpdateBot to push changes to everybody or something - but if there are, say, five million people who want pull requests generated on their organisations, it might be better for people to pull rather than us push, some kind of way. Right now UpdateBot can already pull: you can say things like "pull any new versions you can find for my stuff", and you could do that every day or every hour. So there's a pull option, but it's not quite awesome yet. What would be really, really nice is if we could add into this CI job an automatic pull of new versions of Jenkins X. I don't know if that would be part of the CodeShip CI or something like that, but if it can be done with UpdateBot we should be able to do something like that. Yeah, because Java is available on the CodeShip base image that you get. Yeah - we were hoping it might be Go soon. One day we might move it to Go.

So, just a quick bit of dirty laundry for a second. We're using UpdateBot inside Jenkins X to push version changes of anything - a base image, a library, the Jenkins X binary. Whenever we release the binary we do pull requests on Docker images to use the new binary version, all that kind of stuff - so we're using loads of UpdateBot. One of the slight issues, though, is that UpdateBot right now polls: whenever you're pushing out pull requests, it's polling to see whether the pull requests have merged so it can complete the build. Once we're using Prow we've got that kind of sorted, and it would be nice to stop UpdateBot polling, because right now we keep running out of GitHub API tokens - even though we're authenticated, which gets you 5,000 requests an hour, we burn through those really, really quickly thanks to UpdateBot. But I think we almost need to reimagine UpdateBot in the Prow kind of world. One of the main things UpdateBot is doing - okay, so if we're updating, say, the builder-base Docker image, once that's released we have to do pull requests on something like ten Git repos, but we want to fail the release of builder-base if any of those pull requests fail, to give feedback to the team that builder-base is now broken or that builder-base broke something. All we really want to do is two things: auto-merge all those pull requests and deal with merge conflicts, which Prow can pretty much do, and then when the merges complete, trigger something so we know whether to mark the builder-base release as completed or not. If we can get all that working, then just the webhooks coming from GitHub will be enough to trigger all of this flow, and we can stop all this polling nonsense; everything will be faster, everything can be parallel and concurrent, and then we can do better feedback - things like flagging when a builder-base pull request failed or failed to merge. Feedback is the main thing, and feedback for me is a game changer for UpdateBot: if I do a release and I break a downstream repo, we should both hear about it - the upstream team and the downstream team should get feedback. I think soon we might have to do significant engineering, possibly rewriting it in Go, so it's easier to use in Prow and inside pipelines and everything like that, because right now - here's another bit of dirty laundry - we install a JVM in all of our builder images just so we can run UpdateBot in the pipelines. We have to rewrite it in Go; we just have to.
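For reference, the 5,000 requests/hour budget mentioned above is easy to inspect; this just queries GitHub's rate_limit endpoint with an authenticated token (jq is only used for pretty-printing):

```
# Check how much of the hourly API budget is left for a token.
curl -s -H "Authorization: token $GITHUB_TOKEN" \
  https://api.github.com/rate_limit | jq '.resources.core'
```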
If we change to Prow, we basically need to redesign the entire workflow, because it's event-driven: the events come from GitHub, and then you can have custom jobs in Prow to do things like version updates - some easy way to create custom jobs in Prow that, on a given event, execute some action. What I'd really love to see - imagine the builder-base example: we're doing a release of builder-base and we're raising pull requests on, say, five Git repos - is a visual checklist of all the PRs being generated. Like a little table on a pull request, or a GitHub page, or an issue, whatever it is - something on GitHub should have a little table with all the pull requests across all the repos and whether each one is green, red or pending, and Prow, as the feedback comes in, could be updating that table. So if anybody wants to see on GitHub how we're doing on the release - which ones have passed, which ones have failed - there's a single place to look. And then, if it passes or fails, do an action: raise an issue, send a Slack message. And I guess another crazy thing: if you connect multiple repos in Prow, you can aggregate events from different repos and have an overview over all the Git repos. Say you send a PR in one repo, and it has dependencies in other repos and you break them - you could go and look at the events and see what happened. If we collect all the events and put them, I don't know, in Elasticsearch or somewhere, we'd just have a dashboard. Yeah, I think that could be amazing. Great - some pretty cool things coming through. Gareth, great work, well played.

There's a question. Hey, I have a question about Enterprise Git - I've been talking with James, I'm not sure which James, about jx install. Oh hello - this is Nikash. Hey. Yeah, so I am trying to set up Jenkins X to work with my Enterprise Git. I created a Jenkins X Git server for my Enterprise Git and I also created the token for that Enterprise, but when I run jx install specifying that Enterprise Git provider URL, it still tries to create the staging environments on a personal - I mean a public - GitHub, so github.com with my personal username. Do you know why this is? I'm not totally sure, to be honest. When you type jx get git, does it show the Git server as an option? Because usually when you do, say, jx create spring or jx import or whatever, you're given a choice of all the Git providers that you've got configured. Yeah, so when I do jx get git it shows both my public GitHub and my Enterprise GitHub, and jx install gives me the option to choose my Enterprise Git, but even when I do select it, it still automates the creation of the environments on my public GitHub. Oh okay, so you have chosen the other Git provider but it kind of ignored it. Yeah, I think I've seen this, by the way, because last week I noticed we were also getting a lot of our PRs on JX failing because they were timing out, because we were getting rate limited - because even though we're using GitHub Enterprise for our BDD tests, the environments are actually on github.com. So I'm guessing it's exactly the same issue you've got here: when we created it, it's not using the correct Git provider, it's falling back to the default. So it sounds like a bug there. Yeah, okay. That could even be related to the problem I'm having on CodeShip, because it all seems to be the same function. Is there anything I can do from my end to fix this, or is that something you guys take care of? You could try debugging it if you're feeling brave; otherwise someone just has to add some print statements into the code or figure out why it's happening. Maybe you can quickly check the pipeline activity for the event and see which Git provider is there, so we can at least figure out whether we have a pipeline activity pointing at the wrong Git provider for the environment. I think this is the import stage, though - even when you're just creating a brand new Git repo, it's using the wrong Git provider. The simplest thing might be just to debug the jx import; it kicks in right at the beginning. There's a doc on how to debug it, if you can bear setting Go up on your laptop, which is fairly easy. It's pretty easy to debug: we'll show you where the code is, you can just put a breakpoint and see what the code is doing wrong, or one of us could debug it when we get a chance.
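For anyone following along with that debugging suggestion, the usual shape of it is roughly the following; the package path and breakpoint are purely illustrative, and the project's debugging doc is the authoritative guide for the real entry points:

```
# Clone the CLI into your GOPATH, then run it under Delve and step through
# an import (breakpoint and arguments are illustrative).
go get -d github.com/jenkins-x/jx
cd "$GOPATH/src/github.com/jenkins-x/jx"
dlv debug . -- import --help
# (dlv) break main.main
# (dlv) continue
```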
I'm relatively inexperienced with this type of thing. I'll try it, but I'll be on the external support Slack channel and I'll message you if I have any issues. If you ever fancy it, if you feel brave enough to give it a try - Go is kind of easy to understand, it's a fairly straightforward language, but yeah, it is a bit different. Here's the link on how to debug it, and we could maybe try to do it together on Slack or something sometime. I'd appreciate that. It would be good to fix it, and sorry about it. Awesome, that was a good question coming in there, that was nice.

Is there anything else? We've got five minutes left - anybody want to mention anything? There was the vgo stuff, but that's pretty short and sweet; I might be able to fit that in. I was literally just going to show - I don't know if anyone's played around with it or anything, but I was amazed that it literally worked out JX's deps first time without any kind of faffing, which I thought was pretty amazing. Do you want to do a quick TL;DR of what vgo is? Yeah, so vgo is, I suppose, the official Go team experiment for versioned modules in Go itself. There have been package managers flying around, and they've all got various issues - they've been slowly improving - but I think it reached a point where the core team were basically looking at it and saying: well, if we just had versioned module imports, and we didn't need to resolve all versions of something down to one thing somewhere in your vendor directory or wherever, then this would make everyone's lives easier. And vgo is basically their release of that, prior to it actually landing in Go itself. So if you're interested in taking it for a spin, you can clone or go get vgo and then run everything as if it were using modules, just by using the vgo command instead of go. So far I've only been messing with it for Jenkins X - I haven't really spent much time with it - but the initial signs are that it's pretty awesome.
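Taking it for a spin looks roughly like this, assuming a project that already vendors its dependencies; the commands are the standard ones from the vgo prototype:

```
# Fetch the vgo prototype and use it in place of the go tool.
go get -u golang.org/x/vgo

# Inside the repo: resolve deps (seeded from Gopkg.toml/Gopkg.lock if present),
# then build and test, which creates go.mod and go.sum.
vgo build ./...
vgo test ./...
vgo list -m     # list the module and the dependency versions actually selected
```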
I've just got a quick thing I can show, if you're interested. Okay, I'll share my screen - this one, the entire screen. Can you see that okay, or is it too small? A little bit small - a little bit bigger if you can. Yeah, that's nice. Okay, so this is just the master branch of Jenkins X as it currently stands, with the vendor directory. All I'm going to do is - oops - remove vendor, and I need to quickly change this line: in the makefile I switch go out for this, and all that does is mean that inside all of the make scripts it's going to use vgo instead of go, and if vgo doesn't already exist it's going to go and get it for me. One other thing I have to do, just so that you can see this as if I'd never run any of it before: it downloads all of these deps into your GOPATH src directory, into a mod directory, so if I completely clear that out, the first time you run anything it's going to resolve all the deps from scratch. And now, hopefully, if I just run it... and we're now vgo-ing. It's pretty smart: it figures out if you're using some other vendoring tool - obviously there's a Gopkg.toml and Gopkg.lock in this project - so it's using those and choosing the existing versions that we've got pinned. But it's a slightly different approach to package version resolution, in that it's basically going to take the least up-to-date version that will result in success. So you can see now - that's it, basically, as if you'd never run it before: it's downloaded all of the things and it's now running the tests. So we're now vgo-ing and using Go modules for our dependencies. And if you look at what it's actually done, it's just created these two files in the project, go.mod and go.sum. The idea is that as more packages, libraries and projects switch over to this, you're responsible only for resolving the dependencies inside your own project; so if you're downstream of that, you're not going to get all of these clashes where you're trying to combine two sets of vendoring with different versions and it's a nightmare, because it will be able to build a really accurate graph of all of the imports all the way down. They just look like this - pretty standard stuff, just a big old list - and if you do vgo list it will tell you what you've actually got. So you can see it's now recognising that this project is jenkins-x/jx, and anyone importing us from anywhere is basically going to have an easy ride in terms of resolving dependencies against anything else they're doing. So yeah, that was it - just a really quick little thing, but it works really well and it's definitely a step in the right direction. Awesome - so we're all looking forward to Go 1.11 then. Yeah, Go modules look awesome. And no more GOPATH once you start using modules - so does that mean you can just clone a repo to any directory? Yeah, as long as there's a go.mod file: if there's a go.mod file it just goes "I don't care about GOPATH anymore". That's what I saw and hoped it was, so yes, that's awesome.

Alright, well, it's just gone five o'clock, so I think that was pretty good. Thanks to everybody - Gareth, cracking job on the demo - and good involvement from everyone. So yeah, I guess until next time. Thanks, cheers.
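For reference, the go.mod file produced by the demo above would look roughly like this; the module path matches the repo, but the require lines are only examples of the format, not the project's real dependency set:

```
# go.mod ends up looking roughly like this (require lines are illustrative):
cat go.mod
# module github.com/jenkins-x/jx
#
# require (
#     github.com/spf13/cobra v0.0.3
#     github.com/spf13/viper v1.0.2
# )

# go.sum records checksums for each module version so downstream builds can
# verify they fetched exactly the same code.
```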