Alright, so hello everyone, welcome to the CNCF End User Lounge, where we explore how cloud-native technologies are adopted by end user organizations across different industries and sectors. The CNCF end user community is a vendor-neutral group of more than 155 companies that use open source software to deliver their products. I'm Ricardo Rocha, a computing engineer at CERN, and today I have Andy Burgin as a guest speaker. In these live streams we bring in end user members to showcase how their organizations navigate the cloud-native ecosystem to build and deliver their services and products. You can join us every fourth Thursday at 9am Pacific. This is an official live stream of the CNCF, so it's subject to the CNCF Code of Conduct: please do not add anything to the chat or questions that would be in violation of that Code of Conduct; basically, please be respectful of all of your fellow participants and presenters. If you have any questions for us during the stream we will be monitoring the chat, so make sure you ask them in the live stream chat.

So this week, as I mentioned, we have Andy Burgin, who will talk to us about platform evolution and five years of Kubernetes at Sky Betting and Gaming. Before we dive into the questions, Andy, do you want to briefly introduce yourself?

I think I should. Hi everyone, thanks for joining the stream or watching the recording, and hi to you, Ricardo, nice to be here today and thanks for inviting me on. I am a lead platform engineer within the infrastructure and platform engineering squad at Sky Betting and Gaming. I've been at Sky Betting and Gaming for over seven years now: I originally started as a DevOps engineer in the Bet Tribe, moved to work with Hadoop in the Data Tribe, and for the last three and a half years I've been working in the Kubernetes team, which has been great. Before that I did lots of things in digital marketing, wearing many different hats and using many different skills, from dev to ops to production management, finance and all sorts of things, but I'm really enjoying being back in tech now. Outside of my day job I run the local DevOps meetup in Leeds, in West Yorkshire in the UK, and I'm also part of the organizing team around DevOpsDays London, which is a conference that happens supposedly annually, though obviously over the last year things have been somewhat difficult around that, as we all know. We hope to be back next year, so I'm looking forward to that, and to going to conferences in general, including KubeCon on that list.

Awesome, that sounds pretty exciting, lots of things. I agree about the conferences; it has been pretty good to have the first physical one after all this time, with the North American one. But I guess we can dive into the questions. To start, maybe you can tell us a bit more about the infrastructure setup at your company, and specifically maybe you could explain what specific technical hurdles bookmakers have to face.

Okay, let me say a little bit on that. We are online bookmakers, and we've actually been around for over 20 years, initially as the betting arm of Sky Television in the UK: the idea was that you'd press the red button on the remote and be able to place a bet, but that was a very long time ago. Since then things have obviously evolved; Sky Bet started its own technology stack over a decade ago, and that's been growing steadily.
We offer a range of products and services around sports betting and also gaming, so poker and slots and all sorts of other entertainment products like that. I think the main thing with our industry, where it perhaps differs from a lot of others, is really the nature of the traffic patterns we get and the technology stack we have to have to deal with that, coupled with the regulatory requirements we have to meet as well, making sure that we are looking after our customers and are compliant with the regulations. That gives us a number of problems which we have to use technology to solve, particularly to do with load. Many people who work in retail will be familiar with busy days like Black Friday; well, we tend to have at least one of those a week in our industry, and we have to deal with unpredictable demand. On the gaming side of things (and this is a sweeping generalisation) we know the patterns and can plan around promotions and things like that, but with sports betting we really are at the whim of what happens in the sports game. Typically in the UK the soccer games kick off at 3pm on a Saturday afternoon, and there's a large spike in activity of people placing bets up to that mark. Several years ago that would have dropped off immediately, and we would have been quite quiet until the end of the day when we were settling the results, but now we've got in-play markets and so on, so we don't quite know the demand we're going to have on the services, depending on what events happen in the sports games. So we've got a very spiky traffic pattern which is also unpredictable, and we have to have systems which can deal with that sort of scale and obviously make sure they're available for our customers. Those are the kinds of challenges we face. And I suppose to answer your question, the traditional stack that we had in the pre-Kubernetes days was very much VM-based, running in our data centres, building applications with enough capacity to deal with the load. Obviously things are all different now, but that's how things were when I started all those years ago with the business.

That's super interesting. I guess one of the points I'll save for later is understanding how you manage these spikes, and maybe the over-provisioning of resources; I'm actually interested in whether you're running on premises and probably cloud too. But maybe we can start with your transition to Kubernetes. You just mentioned virtual machines; can you tell us a bit about your transition to Kubernetes and cloud native, and how you got that going?
Okay, yeah, good question; I have set that one up nicely, haven't I? So we were about to start a Kubernetes journey, and this was back in 2016, and it wasn't meant to be a Kubernetes journey: it was going to be a journey of what could provide the next generation of hosting platform for the Bet Tribe, and what platform we could put together that really made it easier for developers. As operations engineers, I think we maybe approach problems by thinking about what they solve for us, but really this whole journey started with how do we get our developers to go quick. We're in a very fast-changing market with a lot of companies in competition with us: how can we get products to customers, how can we make it easy for our developers, because unless code is in front of our users it is kind of worthless. So how can we make that easy, and how can we address some of the problems we were having with the more traditional infrastructure we had, the quality gates, the bottlenecks; how can we remove those but still do it in a safe way? The objective really was around creating a platform which had as few human interactions as possible between somebody pushing code to a repo and an automated process getting that onto the service in front of people. To do that we set about building out some proofs of concept first of all. Obviously you've got a technology choice, and back in 2016 Kubernetes was still relatively new and wasn't as mature as some of the other container stacks around then, so there was some technical evaluation done on that. Also at the time, this was one of the first pieces of work which we wanted to run in public cloud: although traditionally a lot of our stuff ran in data centres, we did have some things in cloud, and we wanted to get more in there. So we did the initial proofs of concept to check out the technologies and settled on Kubernetes; I'm very pleased to say it was before my time in the team, but I'm glad they chose it. After that it became: how do we build, with the developers, the platform that they need? So we worked with a team that has one of these spiky workloads I referred to earlier, what we call our Push team. They handle the updates on a typical sports website: there are lots of events happening, not just football games but things that happen in football games, or netball or whatever it is, and these events can change the prices of the markets that people can gamble on. So there are literally thousands of updates a minute going through that all need to be reflected on the device the user is using to interact with us. We worked with the Push team to build out an MVP, first of all building it out using Container Linux, as it was at the time, on AWS, provisioning the cloud storage and cloud load balancers as we needed for that. What that allowed us to do was that when there weren't many updates we could scale the platform right down, and when it was busy we could scale it up. That platform was very successful, and it went on to become the Kubernetes platform which was fairly widely adopted around the business, and here we are now five years later with a whole bunch of people around the business from different departments using it.
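To make the scale-down-when-quiet, scale-up-when-busy behaviour described here concrete, below is a minimal sketch of a HorizontalPodAutoscaler for a spiky workload. The deployment name, replica bounds and threshold are illustrative, not Sky Betting and Gaming's actual configuration.

```yaml
# A minimal sketch, not SB&G's actual configuration: an HPA that scales a
# spiky workload down when quiet and up when busy. On clusters older than
# Kubernetes 1.23 the API version would be autoscaling/v2beta2.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: push-updates          # hypothetical deployment name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: push-updates
  minReplicas: 2              # floor for quiet periods
  maxReplicas: 50             # ceiling for kick-off spikes
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # scale out above 70% average CPU
```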
Super interesting. I think the early adopters all went through that same process of deciding which orchestrator and container platform they should choose, so that's also very nice to hear. Maybe you can dig into the stack a bit; you just mentioned that you deploy on AWS. Do you use managed Kubernetes, or is it your own deployment?

Yeah, sure. Obviously we're talking 2016, and I think GKE was available back then, but in its very early days too, so we decided to do things the hard way, as was the way to do things back then; we did things completely the hard way. As I mentioned before, we use Container Linux, so CoreOS, as the base for our solution, and we provision that through a bunch of Terraform which provisions our EC2 instances running CoreOS. We took a slightly novel approach which has worked really well, in the sense that we wanted the whole thing to be ephemeral. Effectively, when we reboot or create nodes in our cluster, they PXE boot into CoreOS and work with the Container Linux technologies called Matchbox and Ignition to pull down pre-rendered configuration for those nodes, and then effectively boot from scratch. They go into the early user space, where they take the Matchbox and Ignition configuration and apply it to the OS before it goes into the proper user space; it's kind of a pre-boot stage inside Container Linux. We use that to set up the node with all its specific settings and configuration, mainly systemd files, and then we boot properly, and that's when the operating system spins up. That means that if we reboot a node, we start from scratch. We do have some persistent storage on there, volumes mounted off file shares to store things like Docker images, because that can act as a cache (we don't want to pull all the containers every time we start them up), but other than that a lot of stuff is held on in-memory disk. So it's a slightly different setup to some other clusters, but it means things like upgrades are a change of a version number in a repo: we republish all the Matchbox and Ignition configuration through Terraform, and when the nodes reboot they pull in a newer image of CoreOS. That works really well. We run the control plane in high availability across a couple of nodes; the etcd database is backed up very regularly, and yes, we have tested it to make sure we can restore it as well. Like I said, we use a lot of Terraform to do the provisioning, and on top of that sit the other system components you would expect, like a monitoring stack based on Prometheus. We use some of the services which were already provided for developers around the business, so we don't run our own logging stack: that goes into our Elastic stack, which is run by one of the other teams in the business. So we kept the familiar stack of tools that developers knew. And now we don't just run on AWS, we also run on-prem using the same Terraform and Ignition scripts, although they are slightly customized with different provisioners for things like storage and for the virtual machines, which we run on VMware. But it's essentially the same configuration regardless of which environment you're running in; apart from the nuances of storage and load balancers and so on, essentially we keep the same stuff. That's allowed us to keep parity between all the environments.
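As a rough illustration of the pre-rendered configuration idea described above, here is a minimal Container Linux Config, the YAML that the config transpiler turns into the Ignition JSON served by Matchbox. The unit and file contents are illustrative only; a real kubelet unit needs many more flags.

```yaml
# A minimal sketch of a Container Linux Config; the ct transpiler turns this
# YAML into Ignition JSON, which Matchbox serves to PXE-booted nodes.
systemd:
  units:
    - name: kubelet.service
      enabled: true
      contents: |
        [Unit]
        Description=Kubernetes kubelet
        [Service]
        ExecStart=/opt/bin/kubelet --config=/etc/kubernetes/kubelet.yaml
        Restart=always
        [Install]
        WantedBy=multi-user.target
storage:
  files:
    - filesystem: root
      path: /etc/kubernetes/kubelet.yaml
      mode: 0644
      contents:
        inline: |
          apiVersion: kubelet.config.k8s.io/v1beta1
          kind: KubeletConfiguration
```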
And we run about five clusters; we don't run a lot of clusters, and we run them independently as well. We don't have a cluster mesh over the top of them, although we do run a service mesh, which runs locally on each cluster.

All right, super nice. One question I have, just before we jump into the Kubernetes details: you have a bunch of small socks behind you?

Yes, on the wall behind me. We recently moved offices during the pandemic: I left the office in March 2020 and I've only been back once, to take my stuff, because we moved offices. We're now in a building that's entirely owned by the company, which is really nice, and it's a completely custom setup. But in the old office we had a few bits of customized stuff around the place just to make it feel like home. The Kubernetes sign behind me, which says Platform Engineering, used to hang above our desks, and basically when I went in to collect my stuff I stole it; I don't think work knows that, so I probably shouldn't have said that out loud. The socks behind me are conference swag; many of them will have been from a KubeCon or two. At the time we were going through our SOx audit, so there's a pun there: we used to have those in the office with a graph of the number of socks we had and the number of socks we were going to show the auditors, as kind of a joke. So yeah, that was our official socks audit for the Kubernetes platform.

Beautiful. Cool, I'll get back to the nitty-gritty technical stuff, that's pretty good. All right, so digging a bit more into the Kubernetes part: you mentioned how you started, and eventually you had to manage growth as things picked up. I guess I have two questions here. One is about the growth of usage once things got popular; the other is related to what you mentioned at the beginning about handling spikes: do you have some sort of autoscaling, and how do you manage that?

Yeah, sure. So, the growth of the cluster: as I said, we built it for one customer and one use, and it gained popularity really, really quickly, and with that come some challenges, because you've not only got to scale tech, you've got to scale people, and you've got to scale the way you work as well. I think when we moved to on-prem there were some changes we had to make around the code base, so we made some optimizations at that point to handle some of the growing pains we'd seen in the first iteration of the cluster. For example, on AWS we could use the cluster autoscaler to deal with workloads: as things got busy we could spin up more EC2 instances to run more workloads on, and obviously as it got quiet we could scale that down as well. That was great on AWS, but on-prem that's not something we can do; we have to over-provision for on-prem. The bits we had to swap out were, as I mentioned just before, things like storage provisioners: if you want a slice of storage on the AWS-based clusters you get an EBS volume; if you're on-prem you get a slice of NetApp provisioned through our in-house storage arrays. Load balancers: you get an ELB in AWS; if you're on-prem you get a slice of F5 configured. And of course, what we wanted to make sure of with all that was that we kept the same developer experience.
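A hedged sketch of that parity idea: the PersistentVolumeClaim a developer writes stays identical in every environment, and only the StorageClass provisioner underneath changes per environment. The class name and sizes are illustrative.

```yaml
# Environment parity sketch: same developer-facing PVC everywhere; only the
# StorageClass provisioner differs per environment.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: standard                      # same class name in every cluster
provisioner: kubernetes.io/aws-ebs    # on-prem this might instead be a NetApp
                                      # CSI driver such as csi.trident.netapp.io
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: standard          # developers never see the provisioner
  resources:
    requests:
      storage: 10Gi
```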
Although the provisioners for cloud were fairly well understood, we had to write some custom stuff to do that on-prem, but we didn't want developers to be slowed down by having to configure F5s or request storage. So we kept the same persistent volume storage interface and just changed the provisioner underneath, which makes it sound really simple; a lot of work went into that. The same with load balancer provisioning: we didn't want our developers logging into F5s and configuring those when they could just declare the state they wanted their network connections to be in and have the cluster do it for them, and obviously we put that together as well. So those were some of the challenges we faced in keeping that parity as we changed environments.

In terms of growing, when we were working more closely with certain teams we hadn't necessarily anticipated the challenges ahead, particularly around multi-tenancy. I think the initial year of the cluster was without RBAC, because it wasn't there yet; I know RBAC was added shortly before I arrived at the cluster. That presented some challenges: how do we manage that for both environments? We've got a solution based on Vault and LDAP groups which allows teams to authenticate and get access to the cluster, and from there they're restricted in which namespaces they can act on. We've done a lot of work putting in sane defaults and de-privileged security when those namespaces are created, so that when you get on the cluster you're locked down to start with until you unlock the bits you need: you have to set up your network policies, you need to set up quotas and so on. Through that we've managed the expectations of the customers coming on board. We've got a support channel where people can raise support requests and tickets and ask questions, and we can help them there. But I think the main thing we found in terms of that growth was that our users didn't always understand the line between what was a Kubernetes thing and what was our Kubernetes thing. There's an expectation from ourselves now that our development teams learn how to build apps for Kubernetes and also how to maintain and manage those, and off the back of that we've put in a lot of training: we've trained over 400 developers on a couple of different courses on how to build and write apps for Kubernetes, so they can get that right. But of course, I've heard it said that Kubernetes isn't a developer tool; I'm not sure whether I agree with that definition, but I think there's definitely a barrier to entry there, and whether it's massive or small largely depends on the developers we're working with. As an example, we've got developers who would gladly be given root access on everything and would love to insert records directly into the etcd database in the control plane, given the opportunity to do so, but at the other end of the spectrum we've got people who just want to put a few lines of YAML together and aren't that interested in what it does, because, quite rightly, developers have got a whole lot of other stuff to deal with: the domain knowledge of the actual problems they're trying to solve, the code they're trying to write, the business logic.
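As an illustration of the locked-down-by-default namespaces described above, a default-deny NetworkPolicy plus a ResourceQuota of this shape could be stamped into a namespace at creation, so teams start restricted until they open up what they need. The namespace name and all values are illustrative.

```yaml
# Sketch of namespace "sane defaults", assuming a hypothetical team namespace.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: team-a
spec:
  podSelector: {}                     # applies to every pod in the namespace
  policyTypes: ["Ingress", "Egress"]  # no rules listed, so all traffic is denied
---
apiVersion: v1
kind: ResourceQuota
metadata:
  name: team-a-quota
  namespace: team-a
spec:
  hard:
    requests.cpu: "20"
    requests.memory: 40Gi
    limits.cpu: "40"
    limits.memory: 80Gi
```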
So I think the expectation that developers will just go away and learn Kubernetes on their own is something that doesn't really work. I think we got caught out a little bit by that, and hence, in retrospect, we've done quite a bit of training around it, to help developers easily understand what they need to do on our clusters and how the clusters work.

All right, that's very interesting. I have maybe another question about the management of the clusters, but building on the developer experience you were talking about: is there a streamlined or recommended way for people to manage and deploy their applications? You mentioned they have access to the clusters, but is there a recommended way to manage the lifecycle of their deployments or upgrades? There's all this talk about GitOps and that kind of thing, or do they use some other tooling?

Yeah. I think, given a time machine, we would have put more developer tooling in place, or encouraged the teams we worked with initially to do that. If we were starting again from scratch now, we would certainly have some opinionated ways of building apps for Kubernetes and be clear on what we support there. But as with all ecosystems that evolve, we now have, particularly in the Bet Tribe, a standard way of building applications. After a couple of years of people going off and doing their own thing, or being influenced by what other teams have done, a pattern has now evolved of how things should be done, and we have a team that is building an application Helm chart which allows developers to build applications based off a set of base images which are regularly updated. They can take their applications, there are pipelines built to deploy those onto the clusters, they get standard dashboards, and they get a bunch of tooling and references to where they can find the logs and so on, all the kind of thing you need to run an application. That wasn't built by us, it was built by another team, and it's slowly becoming the de facto way, and we're seeing lots of our developers migrate to that way of doing things, which is nice to see. As I say, if we could start again we would perhaps have done things a little differently.

One of the things we've done over the last three years, and certainly in my day job as well, is rely heavily on this focus on developer experience to solve a lot of the growing pains we had with the cluster. We got a lot of users on there fairly quickly, and I think we suffered growing pains internally in how we were working with the clusters, so we've done a heck of a lot over the last three years to smooth that out, starting with basically talking to more and more of our users about what they want from the cluster and how they're going to use it. Understanding who was actually using our cluster was quite a big undertaking: trying to understand which workloads belong to which teams, because they can move around as well. So we now tag all the workloads on the clusters with metadata; there are labels which indicate who owns what, and that's allowed us to do a load of really cool stuff. It's allowed us to shard the logging, so rather than just having one logging pipeline we can do that per tribe now. A lot of work has gone into that.
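A minimal sketch of that ownership tagging: consistent labels on the workload identify the owning tribe and squad, so logging, dashboards and cost reporting can be sharded by owner. The label keys and names are hypothetical, not Sky Betting and Gaming's actual schema.

```yaml
# Ownership metadata sketch with invented label keys.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: odds-feed                     # hypothetical workload
  labels:
    example.com/tribe: bet
    example.com/squad: push
spec:
  replicas: 3
  selector:
    matchLabels:
      app: odds-feed
  template:
    metadata:
      labels:
        app: odds-feed
        example.com/tribe: bet        # repeated on pods so pipelines can
        example.com/squad: push       # attribute logs and metrics to an owner
    spec:
      containers:
        - name: odds-feed
          image: registry.example.com/bet/odds-feed:1.0.0
```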
As I mentioned, we go out and speak to teams, we talk about requirements, and we take that feedback back; we can do that and understand the workloads they're running. But it's enabled other things as well, around best practices and standards. We put together a whole bunch of ideas (call them best practices, call them standards) on the principles of how you build and run an application on a containerized platform, and went back and engaged teams on them. So we've got standards around build, run and deploy now, and they were built with input from everybody who was using our cluster, so we've got a collective mindset on it; it's not just our opinionated version of what good looks like. Then we built tooling around that to check on it as well, and to provide dashboards and so on that indicate where things aren't following the rules, with some possible solutions to fix that. We've done a lot of work on that, and it's evolved further into things like understanding costs and education on resources, so that we can run things efficiently as well.

Yeah, I think you've covered a lot of the challenges; it sounds very good. But one thing, and maybe you already mentioned it: what would you say is the main problem you face today while running your clusters? Would you highlight something? You mentioned a bunch of things that are tricky to handle.

Well, I think from the technical side (and this isn't a Kubernetes problem, this is just a running-computer-systems problem, running distributed systems) you're always going to face problems with misbehaving workloads and with components of the system not behaving, and there's constantly keeping things up to date, even if it's evergreen, and the management of that. And then of course probably the big one, whichever system you're running, is going to be capacity, particularly in an on-prem environment: do you have enough storage, do you have enough network bandwidth, is your monitoring able to scale with your workloads? And then it comes down to right-sizing workloads, to have the right requests and limits on them, and trying to support teams to get that right. We find that particularly challenging, because I don't think there's a great range of tooling out there to help with it. We've built some in-house tools and we're building more; we know this is a problem, and in order to get our development teams to understand and set their requests and limits correctly, we need to help them do that. We can't just produce graphs and then point out inefficiencies, or where things get OOM-killed or CPU throttled; that's not going to help. So we need to put better tooling around that. Those are some of the day-to-day challenges, but many of them are just keeping things up to date, making sure we do maintenance in good time, and keeping things reliable.
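For readers unfamiliar with the right-sizing problem mentioned above, this is the block teams have to get right: requests drive scheduling and capacity planning, while limits are enforced at runtime. The values below are illustrative starting points, not recommendations.

```yaml
# Right-sizing sketch: requests are reserved by the scheduler; limits are
# enforced at runtime (memory overruns are OOM-killed, CPU is throttled).
apiVersion: v1
kind: Pod
metadata:
  name: example-app
spec:
  containers:
    - name: app
      image: registry.example.com/app:1.0.0
      resources:
        requests:
          cpu: 250m          # reserved on the node; drives bin-packing
          memory: 256Mi
        limits:
          cpu: "1"           # exceeding this gets the container CPU-throttled
          memory: 512Mi      # exceeding this gets the container OOM-killed
```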
Sounds very good. I'm just checking if there are questions; I don't see any, so maybe we can switch the topic slightly, away from the technical and tooling part. Maybe you can tell us about your experience as an end user in this community. What's your feeling about the interaction with other end users, or with the projects? You've been to a couple of KubeCons, judging from the socks, so I guess you've been involved.

Yeah. I think with the end user community, being a member of that is a real boon when you're at the conference. Attending KubeCon is something the team have really enjoyed; I've not actually been to one yet, I've got to be honest about that, but I am hoping to get there. I do like physical conferences; I've been to the virtual ones, but I love the whole hallway track and so on, although, again, I am a conference organizer, so I'm a little bit opinionated on that. But KubeCon is certainly something the team have been to, and they've come back full of ideas, full of different approaches to doing things. I think the main takeaway from the team when they've been and come back is that they had a plan of what they were going to see (and as you know, KubeCon is a massive conference with many, many tracks of talks), and they always come back waxing lyrical about the things they didn't expect. They've said that when they went to the popular talks they often couldn't get in, and actually the ones they went to because they were nearby or looked interesting were the ones where they picked up the little tidbits, the interesting bits of knowledge which came back and got used. I think OPA was a great example of that: no one had heard of it before. I can't remember which one they went to; I think it was Copenhagen, the one before Barcelona, and they came back from that saying this is brilliant, we have to use OPA, it's obviously something we can use to help our teams on the cluster. Without ending up in that talk, we would have known about it eventually, because it's a huge topic, but I don't think we'd have had that early visibility of it. I think a lot of our early Istio adoption was likewise based around talks and examples and demos and talking to other people at KubeCon. So yeah, even beyond the conference, which of course is great and important, I think supporting the CNCF is important, because we rely heavily on the projects it looks after, so supporting that is super important to us. The end user community is really important.

That's brilliant, and yeah, we're all hoping that normality will come back; fingers crossed, it looks like it's happening. You actually mentioned a lot of the tools you rely on: Kubernetes of course, Prometheus, Helm, and OPA just now. I'm curious, because you have a pretty large deployment, and interestingly you have both on-premises and public cloud deployments, so it's multi-cluster, and you mentioned that you don't do any kind of communication between the clusters, which I think is also quite common from what I hear. You also mentioned challenges with costs and things like that: are there any tools or technologies that you're particularly interested in integrating in the near future, or that you're looking forward to looking at?

Yeah, there are a couple I can mention. For example, we're heavy on our Prometheus adoption, and we have had constant requests
for long-term storage of metrics, so VictoriaMetrics is something we're heavily looking into now. Obviously we want to manage that carefully, because we're aware that long-term storage means different things to different users, and we're particularly careful about how we manage our Prometheus instances as it is, based on things like the cardinality of the metrics and startup times. So VictoriaMetrics is something we've rolled out and are starting to offer to our customers now, so that there's long-term storage, but we want to do it in a manageable way; that's one of the things we're doing. We've also used Gatekeeper, a tool which allows you to basically report on the state of OPA policies. We've used that for our standards and best practice dashboards, and we wrote an exporter which takes the data out in the format we want, because we've also got a lot of metadata tagging in there which can identify workload ownership, so in the dashboards we can visualize things by ownership as well. That's been a very useful technology.
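A hedged sketch of that kind of Gatekeeper check, assuming the stock K8sRequiredLabels ConstraintTemplate from the Gatekeeper policy library is installed: a constraint that reports namespaces missing an ownership label in dry-run mode, feeding a dashboard rather than blocking anything. The label key is hypothetical.

```yaml
# Assumes the K8sRequiredLabels template from the Gatekeeper library.
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sRequiredLabels
metadata:
  name: require-ownership-label
spec:
  enforcementAction: dryrun           # audit and report, don't reject
  match:
    kinds:
      - apiGroups: [""]
        kinds: ["Namespace"]
  parameters:
    labels:
      - key: "example.com/tribe"      # every namespace must declare an owner
```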
There are also various updates to the networking stack going on, and updates to Istio at the minute; the 1.22 upgrade is not without its challenges, I don't think, so we are working closely with our users. I suppose the great thing about already having those communication channels with our users in place is that it has actually made that fairly straightforward: we're able to identify workloads and go and talk to the teams that are going to have problems when we do the upgrade, and we're hoping to have everything in place in the next couple of weeks so we can get to 1.22. But I suppose the overall thing with our cluster is that, although we're always looking at new bits of technology and at replacing existing functionality with newer bits, the thing we're bigger fans of is just stability and updates to things. We've got a lot of operators that we run: we want them to be stable, we want the underlying Kubernetes system to be stable, we want the monitoring stack to be stable, and we want all the things that send data from the cluster to the other services which ingest our stuff to be stable. Stability is a big thing, and it's a dull answer, not a very exciting one, because it's not the new shiny tech, but we like things just working. One of the things we're really pleased about with our cluster is its stability, and nobody likes getting paged, so we want to keep it like that as best we can.

I think that's pretty fair, and the interesting bit here is also that you have a pretty large deployment, so it's interesting to see how you scale things like Prometheus metrics and that you're looking at these newer products to handle it; for other end users this feedback is extremely useful. I don't think we have any questions, so one thing I would add here: do you have something else that you would like to tell other end users or the community that we didn't cover?

Yeah, one thing I haven't really covered, which has been really important to the team, is how we use Kubernetes not just to run workloads: we also use it to provision infrastructure. Obviously I mentioned things like load balancer provisioning and storage provisioning, but those are the basic built-in primitives of Kubernetes: you want a PV, you will get some storage; if you want a load balancer, as I mentioned, that's configured for you. But the automation really hasn't stopped there, and I'll give you some examples. The team that did the F5 automation are now trying to automate more things: if you want to configure an F5 now for virtual machine usage, you can do that through a code base where you commit YAML definitions for the load balancers you require. So even if they're outside of Kubernetes, we can use our provisioner inside the cluster to configure load balancers for things that aren't in the cluster. There's obviously pull request approval on that, but it means that rather than someone going into an F5 and configuring it for teams, it's now all done as code, which is obviously a massive benefit. The same with DNS entries: we've done a lot of automation in the cluster, and if you want a DNS entry you can create a DNS object in the cluster; there's an operator behind it which will provision a DNS entry in our DNS provider through their API, handle all of that, and tear it down when you don't want it any more. But equally, we've got another repo where people can put those DNS definitions and they'll just get created, even if the DNS record isn't used by something inside the cluster. So we're starting to automate bits of infrastructure through Kubernetes even when they aren't Kubernetes. Two more examples of that: firewall automation is something we've been working on; software-defined networking is something every organization has wanted, but obviously we have over a decade's worth of network configuration in data centres, in offices and so on, and we're now starting to build tooling which will configure some of the firewalls through things that are provisioned from Kubernetes. Again, we've got a repo where the rules can be defined, and they can be pushed out to Kubernetes and configured through the tooling that's available within it. Another example is cert-manager, which many people will be familiar with: we're using that with our certificate providers now to manage certificates, and we're hoping to offer that outside of the cluster too. So there are lots of bits of automation that we run inside Kubernetes to manage the resources that teams or developers need, and indeed people in infrastructure, but we can also offer that as the way to manage this stuff in an automated way outside of the cluster. That's something we keep building on, and I think a lot of the automation we're going to be doing over the next 18 months for infrastructure is going to be powered by Kubernetes as well, even though it may never have a workload related to it on the cluster.

I think that's a trend we've seen over maybe the last two KubeCons: a lot of projects, like Crossplane, are starting to look at managing things that aren't related to containers or containerization at all; they're just relying on Kubernetes as a platform for all this. It seems you've gone pretty far already.

Yeah, and of course we're also doing that for things in the cluster as well, with operators. For example, we've got an in-house MySQL provisioner which, with a small chunk of YAML created in your namespace, gives you a container-based replicated MySQL cluster with however many nodes you want and the amount of resource you need, with tooling to back it up and restore it built into that operator.
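What "a small chunk of YAML" might look like: a hypothetical custom resource for an in-house MySQL operator of the kind described. The API group, kind and every field here are invented for illustration; the actual operator's schema is not public.

```yaml
# Entirely hypothetical custom resource for an in-house MySQL operator.
apiVersion: databases.example.com/v1
kind: MySQLCluster
metadata:
  name: wallet-db
spec:
  replicas: 3                  # container-based replicated MySQL nodes
  storage: 50Gi
  backupSchedule: "0 2 * * *"  # nightly backups handled by the operator
```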
We've done a lot of work on that kind of thing. We also offer some operators which we didn't write: obviously the Prometheus operator is one we use a lot to manage the Prometheus instances on the cluster, but we've got stuff for Redis and a few other bits and pieces as well. That means developers don't have to go and ask for those things, provision them, or manage them themselves: you want a MySQL, it's about six lines of YAML.

Yep, that's brilliant. Actually, we still have a couple of minutes, so I just thought of something else, because you're mentioning managing things that are not in the cluster, and you also mentioned that you have multiple multi-tenant clusters. Just out of curiosity, how do you handle this provisioning of external resources when you have multiple clusters? Are users allocated to a certain cluster, or do they see these resources everywhere?

So the LDAP groups are obviously shared across the organization, and the LDAP groups you're in define which permissions you get, and those are bound to access on the various secret mount points within Vault. We use Vault on a per-environment basis; we have a Vault in every environment. I think the environment configurations are set within the same Vault instance, so they're not shared, but they may be managed on the same one.

All right, very interesting. I think this has been fascinating; thanks so much for all the information. I think we will wrap up here, and I guess if there are any follow-ups, people can reach out by finding you or meeting you somewhere?

I'm on LinkedIn, I'm findable on there, and I'm Andy Burgin on Twitter if you want to tweet me. Or hopefully we can share some drinks at a future KubeCon; I'll look forward to that, that would be great. I like to say I spent eight years organizing meetups, then a few years organizing conferences, and I haven't done any of that for getting on for two years now, and I miss it. I'm looking forward to getting back to doing that, to talking to people about stuff and finding out what they're up to; it's going to be great.

Okay, super cool. So then, thanks everyone for joining this episode of the Cloud Native End User Lounge; it was great to have Andy talking about Sky Betting and Gaming and how they use Kubernetes and cloud native. Again, a reminder that the end user streams happen every fourth Thursday of the month at 9am Pacific. Also don't forget, as we mentioned a couple of times already, to join us at KubeCon + CloudNativeCon EU; it's May 17 to 20, and we'll have a lot of the latest information from the cloud native community. And if you would like to showcase your usage of cloud native tools as an end user, you're welcome to join the end user community, with more details at cncf.io/enduser. So thanks again everyone for joining us today, see you next time, and thanks a lot Andy for the great talk.

You're welcome, thanks for having me. It's been good fun, nice to share with people what we've been up to. It's been great, thank you. All right!