Hi, everyone. Welcome. We've got a doc for today — I'm going to post it in the chat and in the Slack channels. If anybody has thoughts or questions as we go along, pop them into the Google Doc, and if we don't have immediate answers we can follow up. We haven't got much of an agenda today. I was trying to get something together to show — I may try a live demo, but I'm not brave enough. We'll see. We've gotten really bad about putting people on the spot with no heads up beforehand, haven't we, James? I've learned from the best. I thought a good topic of discussion, actually, would be for Ian to give us a summary of the latest BDD tests, because that's pretty awesome. Cool, okay, I can do that — on the spot, no warning whatsoever, no safety net. Which I guess is pretty much what those tests are for, really: a safety net. So there were a few things about the existing tests that I found hard to work with. One was the vendoring of the dependencies for the harness itself, because we're dog-fooding: we use a bunch of commands from within JX itself in the harness to pipe the logs and that kind of thing. (Someone's dog is not happy about this at all.) Those dependencies were out of lockstep with the main project itself. So one of the things I wanted to do was vendor the dependencies so that they match whatever is currently on master in the actual JX project, and then use those methods in the tests.
The other thing — a big thing, which will only get bigger as we add more and more tests — is parallel execution of the test suite, because running that many quickstarts sequentially just takes forever. I got around that before, with the old test harness, by using `parallel`, a little binary you can use to execute things concurrently. But you don't get any control over streaming of log output: all the output gets interleaved, so you see output from eight different things at the same time, which makes debugging really difficult. So I switched over to Ginkgo. Ginkgo handles the parallelism by splitting the tests up at runtime, compiling them into their own binaries, and executing them from a centralised place, with a status server spun up to record the output. That gives you lots of nice options, like only emitting the log if a test actually fails, so you're not wading through reams and reams of logs for things that pass normally — you only see the log output if that execution comes back as a failure. There's also a library called gexec, which is basically a wrapper around exec that lets you control execution of external commands and write expectations specifically against those commands. So you can say: I'm expecting this process signal, or I want to wait this long before failing this test. There's a lot more control, especially for our use case, where we're mostly executing binaries.
And you've also added something to generate that living documentation, which we can then publish on the website as well. Yes — the other thing that's quite cool is that there's an abstraction called a reporter. The test suite has a default reporter, but you can write your own and make it do whatever you want. Another really cool thing is that they have hooks that let you synchronise all of your parallel tests into one thing at the end, when you're finished. So you're executing your tests across eight different nodes, each running its tests sequentially, but it will wait for everything to finish, compile the results, and then you've got access to this reporter structure that lets you create whatever you want. Under the hood it's a little bit fiddly: what I'm actually doing is marshalling every node's test results into JSON, writing that out to the file system, then reading those all back in, compiling them together and writing it out again as HTML. But you could use the JSON output directly if you wanted database reporting or whatever format you like. And I've hooked that up with the JX docs. It actually spits out a Markdown (.md) file, which accepts valid HTML as well. It's got the time that build completed and then a programmatic listing of all the top-level description blocks — describing jx quickstart, or describing a certain feature — it outputs all of those. I had to break out my slightly rusty CSS skills to update the style sheet for the docs, so hopefully it's okay. It's pretty neat and tidy.
And I've just added a little SVG tick or cross depending on whether the suite passes or fails. I updated it this morning as well so that anything we mark as pending — although it will come back as pending in the actual test suite run — will show in the doc as a failure. I could amend it so it just says pending in the report too, but I figure this is a good way of encouraging us to complete our pending specs. While you're talking, would you mind if I shared my screen so I can show people where this project is? I put the link in the chat, but since this is recorded, anyone watching the recording won't have access to the chat, so I'll show it on screen. It's under the Jenkins X organisation — the name of the project is bdd-jx. It should be a lot easier now for anyone looking to get involved to run the tests themselves. There's really only one thing you need to have set, which is your Git organisation — basically your Git username — in an environment variable, and at the start of every test run it spits out a little message saying what you've got that set to and what you need to set. So the intention is to encourage people to try it for themselves, report problems, and generally help with getting things really solid and working really nicely. And we're going to run these tests every time we make a change to the platform.
Yeah, they run on every PR now, which is really cool. Especially running in parallel, I think we're going to hit some underlying issues that we wouldn't have hit before — which is why I was thinking about the mitigations for merge conflicts, which James merged this morning. That's Lionel Messi James, my unofficial name to differentiate the two Jameses: we have Lionel Messi James and Other James. I probably need a better name than Other James. Ronaldo James? Ronaldo James, yeah — a fight to the death for the Ballon d'Or. Cool. So, for anyone who wasn't in our conversation this morning: the issue is that if you kick off a bunch of different app creations at the same time, you can get merge conflicts in the requirements YAML file, because all of the PRs hit at the same time and it's the same three lines each one is trying to write. Git doesn't know what to do, so you get a conflict. The mitigation is to reorder the requirements by name, the intention being that the insertions land at different line numbers, so you won't get as many conflicts. It's definitely still possible to create conflicts — I don't think there's any way around that — and there's already some really nice code that does rebasing, which seems to work really well. This is just an extra little addition on top to make it smarter about how it does it. It's been great, because you've found some great bugs that we just wouldn't be exposed to otherwise. Having those run automatically on every pull request has been superb — and the parallelism as well, because that's what people are going to be doing as part of their builds, right? Yeah, that's the thing.
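The reordering mitigation boils down to a deterministic sort before writing the file back out — a sketch under assumed names (the real code lives in jx's environment-requirements handling; the `Application` type here is illustrative):

```go
package main

import (
	"fmt"
	"sort"
)

// Application is a stand-in for one entry in the environment's
// requirements file.
type Application struct {
	Name    string
	Version string
}

// sortApplications orders entries by name, so two PRs adding different
// apps tend to insert at different line positions -- which makes
// git-level merge conflicts less likely, though still possible.
func sortApplications(apps []Application) {
	sort.Slice(apps, func(i, j int) bool {
		return apps[i].Name < apps[j].Name
	})
}

func main() {
	apps := []Application{
		{Name: "zookeeper", Version: "0.1.0"},
		{Name: "cheese", Version: "0.0.3"},
		{Name: "nginx", Version: "1.2.0"},
	}
	sortApplications(apps)
	for _, a := range apps {
		fmt.Println(a.Name, a.Version)
	}
}
```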
Yeah, I think it's really cool. That's the thing — it's such a complex system, and it handles so much for you already, that it's kind of inevitable there are going to be these edge cases and these little things. The intention is just to close those gaps, and in particular make it really, really easy for people to get started. Because if you don't hit any technical issues in that very first starting-up process, the main feeling you get from the project is: wow, this is amazing, it's doing all this stuff for me. Whereas if you come to it really excited and the first thing you see is an error, or you get blocked in some way, it's a bit deflating. So it's really about exposing people to the awesomeness of Jenkins X without the teething trouble. It's awesome. I wonder if we can get the BDD tests to label the quickstarts that don't work, and then we could filter them out from the jx create quickstart command — anything we know doesn't work, we could just hide. Yeah, like automatic... Yeah, that's really cool. It's a bit of a rubbish filter, but if you put a prefix of WIP on the repository, it does that right now. We could come up with something better, though — maybe label the GitHub repository or something. I'm not sure if there's an API to do that, but something like that would be awesome: some canonical place where we could say these don't work. Yeah, nice. Then maybe also have an option on the CLI to include the betas — the other quickstarts — so you can still select them if you want, but out of the box we only show the stable ones. I think that's the key to it as well. There are obviously a bunch of domain-specific experts out there — we were talking about Elixir this morning.
I haven't had an enormous amount of exposure to Elixir myself — I think it's really cool, but I've only played around with it. There are probably people running very production-grade Elixir environments who have a bunch of best practices for how to bootstrap an Elixir app. Encouraging contributions means that in a specific domain we can use the best practices for that specific thing without having to be experts in everything ourselves. Making it really easy for people to contribute that way would be really awesome: we'd get a much broader range, and the quality would probably be higher than if each of us were trying to figure out what on earth the best practices for an Elixir project are. I wouldn't know — I'm just like, oh, Go works, that's great. Ultimately it wouldn't be as good. Longer term, it'd be nice if these quickstarts, rather than just a basic Hello World page, actually offered links to training or other information, so people can try out different languages, get that automated CI/CD pipeline set up, and then maybe follow links to wizards or whatever to start building on top. It's also a kind of training in new languages. Yeah, it'd be great. For instance, I've only played around with Rust a little bit, but we've got a great quickstart for Rust: boom, there you go, you've got a Rust app up and running. It's live, you can query it, you can play around with it, you can put it through the whole process straight away. I can't really think of any other way of getting from nothing to: this is being delivered into various environments, you've got previews, you've got everything — logs, centralised metrics, the lot.
So it's a really good forum for extending skill bases as well: if there's something else you want to play around with and we've got a quickstart for it, that's a great way to get involved too. And people can contribute — a lot of those quickstarts have been contributed. If people in an organisation or a team have put together a boilerplate quickstart using the typical way they package and build their applications, we can fold that into the repo and tie it into our automated tests, so it gets run by the BDD tests on every single PR. That would be an amazing way of involving people as well. Awesome. Thanks, Ian. I was going to say, some other folks have been doing some interesting things. I know James has been loving the time he's been spending with EKS. But we've nearly got it working on EKS, right? Yeah — Amazon is awesome. I've definitely been spoiled using GKE, which makes it so easy to get stuff done. Amazon's powerful, but different — slightly harder to use. But yeah, EKS is just about working. I haven't blogged about it yet because it's not 100%. Let me share my screen and show you the CLI. I'll turn off my video just in case that helps — the internet isn't that good. So we have a command line now: jx create cluster eks. And it actually works. Under the covers it's using eksctl from the Weaveworks folks, because it turns out that even though EKS is a thing, making an EKS cluster is still really quite hard.
So basically jx create cluster eks downloads eksctl if you haven't already got it, installs it on your path and all that kind of stuff, then uses it to spin up the EKS cluster, then installs Jenkins X on top. And it's surprisingly complicated to create an EKS cluster: there's a VPC, there's a service role, there's a node policy thing, a node group, and then you create the EKS cluster itself — all kinds of different Amazon resources, which all appear in CloudFormation, because under the covers eksctl is using CloudFormation, which is kind of cool. So it's kind of working. And by the way, it uses ECR by default — I've switched the Amazon and EKS providers to use ECR, the Elastic Container Registry, by default, which is cool. The one weird thing is that on Amazon you can't just push an arbitrary Docker image to ECR: you have to create an ECR repository for every Docker image name you use. So whenever you do jx import, jx create spring or jx create quickstart, I've hacked it so that if you're using ECR as your Docker registry, we create the ECR repository for you via AWS. So we try to hide the manual stuff you'd otherwise have to do with Amazon. Right now, if you use Amazon with kops or EKS, it uses ECR, which means we don't need to use the local Docker daemon or mess around with insecure Docker registries and that kind of rubbish. So that's nice. The only thing that's missing — which I've been fighting with for a few days now — is that by default when you install Jenkins X, we install an Ingress controller, find the host name of the Ingress, use a single load balancer for the cluster, and then generate Ingresses for all the different services in all the different namespaces using that Ingress controller.
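To illustrate the ECR quirk: the repository that has to exist before a push is derived from the image name itself. A small sketch — the registry URL, account ID and naming convention below are placeholders, not jx's actual implementation:

```go
package main

import (
	"fmt"
	"strings"
)

// ecrRepository derives the ECR repository that must be created before
// an image can be pushed -- ECR requires one repository per image name,
// unlike most registries, which create them implicitly on push.
func ecrRepository(registry, org, app string) (repo, image string) {
	// ECR repository names are lower-case
	repo = strings.ToLower(org + "/" + app)
	image = registry + "/" + repo
	return repo, image
}

func main() {
	repo, image := ecrRepository(
		"123456789012.dkr.ecr.us-east-1.amazonaws.com", // placeholder account/region
		"MyOrg", "spring-demo")
	fmt.Println(repo)  // the repository jx would have to create first
	fmt.Println(image) // the full image name the build then pushes
}
```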
The current issue — I'll stop sharing my screen now and put video on instead — is that when you're using ELB, Amazon's Elastic Load Balancer, there's no single canonical IP address. Each zone gets its own IP address internally, so there are three IP addresses, and they change all the time, every day. So once you create your cluster, the next day it disappears again, which is not brilliant. You can manually set up Route 53 yourself by hand, but that's kind of painful. So I'm hacking exposecontroller so that we can use path-based ingress. In fact, let me share my screen again and do a kind of half demo — let me put my screen back and hide Rob's face. I'm using an Amazon cluster right now — a kops cluster, this second, so this is kops. If I do jx open — where did my window go? there we go — notice the path-based ingress: we're using a single domain and then paths afterwards. That means rather than using the IP address, we could use the ELB host name. The only problem is that going path-based breaks everything. If I open Nexus, Nexus doesn't work, and you can't log into Jenkins, because Jenkins doesn't know it's served at this path. So now I need to go through all of our charts and inject the path into all of them — it's horrible. Which is really unfortunate, but I can't see another way forward. Sorry, I was going to jump in on the whole path thing. You know you're using an ingress controller already, right? Which is basically just nginx. Yes. So, not that it's going to be unicorns and rainbows, but you could potentially map paths using nginx at the ingress, like sub-domains. Is there a way of hiding the path from the app? Yeah, that's what I mean: you have a bunch of rewrite rules and you just rewrite the path. How do you do that? That's awesome.
We already do it for Monocular at the moment. Because it's just nginx, if we come up with some fancy custom nginx config on the ingress, we could potentially rewrite all of the paths — then you'd handle it there rather than getting involved in every app. Oh, that would be awesome. Yeah, let's do that. Thank you. I'll drop this into the public Slack channel in a second, but we do something similar for Monocular with rewrites: it has two back-end services in Kubernetes, we add annotations onto the Service, and then exposecontroller generates the ingress rule with those nginx rewrite annotations. That's awesome. Okay, I'm going to try that as soon as this call is finished. I was scared by having to go around and fix every application that's broken — this sounds a much better idea. Thank you, Ian. That's brilliant. I owe you lots of beers. So yes, Amazon's close — EKS and kops are close to being awesome. We can switch back to using the ELB host name for Amazon once this path stuff works. Very good. Any questions, or has anybody got anything more? I can give an update on what I've been messing around with, but maybe it's not very interesting compared to somebody else. I was just going to mention one other thing: I've finally documented how to change the configuration of Jenkins X — I put it in the chat room earlier today. The other thing is that you can now specify the Docker registry as you're installing, with --docker-registry, which is documented there. So if you don't want to do the myvalues.yaml dance, you can specify it on the command line as well. And GCR should work — there's just an IAM thing we need to figure out to let the build pods push to GCR.
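The Monocular approach being described looks roughly like this: annotate the Service, and exposecontroller copies those annotations onto the Ingress it generates, where nginx applies the rewrite so the backend never sees the path prefix. A hedged sketch — the service name, ports and exact annotation keys are illustrative (rewrite annotation prefixes have changed across nginx ingress controller versions):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: monocular-api
  annotations:
    fabric8.io/expose: "true"
    # exposecontroller copies everything under ingress.annotations
    # onto the generated Ingress rule
    fabric8.io/ingress.annotations: |-
      kubernetes.io/ingress.class: nginx
      ingress.kubernetes.io/rewrite-target: /
spec:
  ports:
  - port: 80
    targetPort: 8080
  selector:
    app: monocular-api
```

With a rewrite target in place, a request to `/api/foo` on the shared domain is forwarded to the backend as `/foo`, so the app doesn't need to know which path it is mounted at.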
I haven't figured out the magic yet to get that to work, but hopefully it should be fairly straightforward, and then we can automate it in jx install or jx create cluster gke. It would be nice to always use GCR by default on Google. I wonder if the Skaffold folks might know something about that — you would hope that Skaffold is pushing to GCR. Yeah, that's a good call. Nice. Before you say anything — today I was chasing down a very trivial bug which caused me a bit of heartache: Helm. I'd been using an old version of Helm — 2.8.x, and we're now on 2.9.1 — and jx version tells you the client and the server are both the same version, which is just clearly wrong, as I've only just discovered. I'll try and find out why that's happening. It does mean that if your client and your server are on different versions, your builds will fail. Yeah, we should make jx install, and the version check, verify the version of Helm — and maybe jx upgrade platform too, because when we update to a new version we want to tell people to update their client. We probably want that in jx status as well, to flag a problem if the versions are out of line, because it's quite an easy thing to hit. I was hoping Helm 3 would just be a thing and none of this would matter any more, but it's not really going to be a thing for a while. Yeah, we should fix it. Sorry, James — on you go. Is anybody really interested in what I've been doing? Yeah. There we are — share the pain. So, one of the things we've struggled with in the past is ingress rules. If you need to change the domain, for example: out of the box we create a cluster and default to nip.io, and if you want to change to a real domain, it's quite tricky. There are quite a number of things you have to change. You have to delete your ingress rules.
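The version check being proposed is simple in principle — compare the client and server (Tiller) versions reported by `helm version` and complain on mismatch. A minimal sketch, with the output parsing elided and an exact-match rule used for simplicity (a real check might only compare major.minor):

```go
package main

import "fmt"

// checkHelmVersions compares the Helm client and server (Tiller)
// versions and returns an error if they differ, since mismatched
// versions are known to break builds.
func checkHelmVersions(client, server string) error {
	if client != server {
		return fmt.Errorf(
			"helm client %s does not match server %s: please upgrade your client or run helm init --upgrade",
			client, server)
	}
	return nil
}

func main() {
	// versions from the discussion: old 2.8.x client against a 2.9.1 server
	if err := checkHelmVersions("v2.8.2", "v2.9.1"); err != nil {
		fmt.Println(err)
	}
}
```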
You then have to update the configuration for exposecontroller, and you have to remove annotations from the Service so that the external URL gets updated. And one of the other things we've wanted to do for ages is enable cert-manager, so that you can turn on HTTPS and TLS and get automated signed certificates generated for your ingress rules. It's pretty much done — I've been testing it, caught a couple of bugs, and I'm just adding some tests in. I've said it for the last three days, but I'm that close. I really feel it is. I'm not going to sleep tonight — or there's a little wine that will help tonight, and I'm sure that'll fix it. Once that's done, I'm going to be fully focused on trying to get Prow in, and tying it up with the work Gareth's been doing around multi-cluster Terraform. Now the World Cup is finished, I'm pretty confident we're all going to get back to sleep. What World Cup? No idea what you're talking about. Is that the football? I'm just focused on the Euros, really — they're going to be awesome, just two years to go, it's going to be brilliant. I'm going to start watching cricket. No — no, that's a bad idea. Cool. Any questions, or any other updates people want to give? Okay. Rob, you look like you were about to say something. Should we call it a day? Yeah, let's wrap it up. Thanks, everyone. See you soon.
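For context on the cert-manager piece: the usual pattern is that an Ingress carries an annotation that cert-manager's ingress-shim watches, and a signed certificate is then requested and stored in the named Secret automatically. A hedged sketch — the host, service and secret names are placeholders, and the exact annotation depends on the cert-manager version and issuer configuration:

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: jenkins
  annotations:
    kubernetes.io/ingress.class: nginx
    # cert-manager's ingress-shim watches for this and requests a
    # signed certificate for the listed hosts automatically
    kubernetes.io/tls-acme: "true"
spec:
  tls:
  - hosts:
    - jenkins.example.com
    secretName: tls-jenkins
  rules:
  - host: jenkins.example.com
    http:
      paths:
      - backend:
          serviceName: jenkins
          servicePort: 8080
```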