I'm curious to see how sharing a screen from your browser is going to work. Do I need to give you any special permissions? I don't think so. Okay, that's perfect, it came across. You see it? Yep, both screens are up. That looks very good, so I think we can start. So welcome, everyone. Sorry for the late notice, but I see some people showed up, and thanks a lot, Jeff, for stepping in and giving us an overview on this topic. Go ahead.

Good morning, everyone, or good evening, depending on where you're at. I'm Jeff Saelens. I'm currently one of the co-chairs of the CNF working group under the CNCF umbrella, where I work with my friends Taylor and Ian Wells; we're really focused on the network-device-inside-of-containers paradigm. But we typically work with service providers, and what you'll find is that most service providers run air-gapped environments, and there are lots and lots of reasons why they have to. Typically you want some type of network segmentation when you're hosting the internet: you want to keep your private stuff private. So at best there's maybe a proxy, sometimes not even that, for your typical security compliance reasons. It's not just the walled perimeter; it's the risk of pulling in a malicious image through a public repo. You run a Helm chart, suddenly something spins up that you don't want to spin up, and it tries to phone home. If there's a default proxy set up, you can get yourself into trouble really, really fast. At my previous job I was at Charter Communications as one of the lead architects for our Container as a Service platform. We set out to build a fully air-gapped Container as a Service platform, and we had quite a few bumps and bruises along the way.
You find out that getting something truly air-gapped is hard, especially in the world of cloud, which just assumes you can egress out of your VPC or out of your data center and reach Red Hat's or VMware's repos, or whoever your favorite software vendor is. There's this notion that Docker Hub is always floating around out there and that you're always going to have access to it. Then suddenly you don't. So how do you develop and run the images you want, but build a pipeline that assumes you're going to have no access whatsoever to the internet? This is a very high-level overview of what we were building. The other big thing is that you're going to start with an internal repository. It can be any of the favorites: Artifactory, Harbor, homegrown. But one of the first things you have to stop all the people who want to move fast from doing is simply turning it into a proxy. Most private repositories can act as an intermediate stopgap where, any time you make a request, they go to the public repository and immediately pull the artifact down. So you have to put the right restrictions in place. I know the word restriction isn't really liked in the cloud native world, because everybody wants to be agile, move fast, and get things on demand, but we found it actually saved us a lot of pain to control which releases were made available to our cloud infrastructure: vendor-provided images, open-source images, and privately developed images alike. The first thing we had to do was build a dev environment that was actually, truly air-gapped. The struggle with that is that people will turn up proxies in their dev environment, do things, and you will find lots and lots of random things slipping through the cracks.
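The "don't let the internal registry become an open proxy" idea boils down to gating every pull-through request on an explicit allowlist of approved repositories. A minimal sketch of that decision logic, with purely illustrative repository names:

```python
# Sketch: gate pull-through mirroring on an explicit allowlist of repositories,
# so the internal registry never blindly proxies arbitrary public images.
# Repository names below are illustrative, not from any real deployment.

ALLOWED_REPOS = {
    "docker.io/library/nginx",
    "quay.io/prometheus/prometheus",
}

def repo_of(image_ref: str) -> str:
    """Strip digest and tag from an image reference, leaving the repository."""
    ref = image_ref.split("@", 1)[0]          # drop @sha256:... digest
    head, _, last = ref.rpartition("/")
    if ":" in last:                           # drop :tag on the final component
        last = last.split(":", 1)[0]
    return f"{head}/{last}" if head else last

def may_mirror(image_ref: str) -> bool:
    """Only honor a pull-through request if the repository is pre-approved."""
    return repo_of(image_ref) in ALLOWED_REPOS
```

In practice the same check could run as a webhook in front of the registry's proxy-cache feature, so an unapproved request fails instead of silently mirroring.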
Depending on what you're using to deploy Kubernetes and which images you're hosting, you will find that in your Ranchers, your Tanzus, your OpenShifts, even in a lot of homegrown kubeadm or Kubespray deployments, there are hidden curl commands everywhere. There are assumptions that packages are available. There's some automated lifecycle that just assumes certain containers or service platforms will go and self-update. You'll build something in an air-gapped deployment once, and maybe you did get all of the packages you need, but then it goes into production, and the first time you try to update your Fluentd pods for your logging, everything breaks, because there was an assumption, a hidden URL inside the tooling, that it was going to go back to Rancher's or Tanzu's main repositories and pull down the most up-to-date certified image they had. You find all these weird little nuances as you build it out, because there are tons and tons of expectations baked into the Ansible or the Go or whatever it's written in that there's going to be access to these endpoints, and you really don't find them until you first build things in a truly air-gapped dev environment and then start doing the day-two operations in production. I'm just kind of rambling here, so: what specific questions do people have about how you would build an air-gapped deployment, or what pain points come with it? Feel free to jump in. Alex, I see you have your mic open.

Yeah, just wondering whether the developers themselves are also air-gapped, or whether it's the infrastructure that you're building that's air-gapped.

So it was an iterative approach. We eventually got it to where the development teams were air-gapped as well, and here's what we would do.
It started off slow, which instantly annoys the developers. But then we got the private repositories tuned correctly, so that you could pull an image on demand into the internal repository, immediately execute a sanity scan, and tag it so it was made available in one of the dev sandbox environments. It took a long time to get all of the webhooks set up and all of the filters created. I mean, originally we were pulling in the whole upstream Helm repo, and suddenly you start going through everything and find, here's a chart for deploying Bitcoin miners, and we're like, we probably don't want this available in our data centers, right? So we started to build the filters. Like most automation tasks, there was a lot of pain and effort up front, and then once we got to the top of that hockey-stick curve, we fell over the other side. It started with developers doing whatever they wanted, just because they're trying to get stuff done, and then they'd try to deploy in the air-gapped environment and everything would break. We'd go collect the lessons learned: why isn't this working? Which URLs did we not catch in this chart or this manifest? Sometimes the containers themselves think they can reach out to the internet. You spin up the image, and within the automation inside the container itself, especially if you're building runners, you'll find all kinds of stuff baked in that suddenly pukes on itself when you try to run it in an air-gapped world. So that was iteration one: they build locally, they put it into dev, everything breaks, we do some analysis.
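That "which URLs did we not catch" analysis can be done mechanically before anything reaches the air-gapped environment. A rough sketch of a pre-flight lint that flags manifest lines pointing anywhere but an internal registry (the internal hostname is a made-up placeholder):

```python
import re

# Hypothetical internal registry hostname; substitute your own.
INTERNAL_REGISTRY = "registry.internal.example.com"

URL_RE = re.compile(r"https?://[^\s\"']+")
IMAGE_RE = re.compile(r"image:\s*[\"']?([^\s\"']+)")

def external_references(manifest_text: str) -> list:
    """Return every URL or image reference in a manifest/chart that does not
    point at the internal registry. Anything returned will break air-gapped."""
    hits = [u for u in URL_RE.findall(manifest_text)
            if INTERNAL_REGISTRY not in u]
    hits += [img for img in IMAGE_RE.findall(manifest_text)
             if not img.startswith(INTERNAL_REGISTRY)]
    return hits
```

Running this over rendered Helm output in CI catches the obvious cases, though it won't find URLs baked into the container binaries themselves.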
Then we got to the pull-through stage, where you would make a request to the private registry and it would automatically pull the artifact down, but there were no filters in place, no sanity checks. It would just go to a URL and pull everything at that URL; it wasn't being choosy. And once again you start getting the Bitcoin miners and things like that coming in and exposing you to risk. So finally we got the appropriate repos mapped out. The other thing, too: since this is the CNCF, I'm being very Kubernetes- and container-focused, but we had to do this for everything. Take the base OS image: you have to create your image, and all of the tools building that image were only allowed to pull from the private repositories. So all of the packages, whether YUM- or Debian-based, we were sitting there vetting, putting them into the repository, then building the images. And honestly, once we went all the way to the left in the build process and thought about what we actually needed to build, we started writing all the webhooks to pull everything from upstream, immediately scan it, tag it, and make it available for dev. Eventually the devs would just build within the dev environment or locally, but they would point their local devices to the internal repositories when they were working against specific repositories. If a package wasn't there, we'd make a note of it and go get it for them. They could now do self-service, but the framework was already in place, so they knew they weren't going to pull anything malicious and that it would be tagged appropriately.
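The pull-then-scan-then-tag flow described above can be sketched as one small promotion function. The scanner and registry are stand-in callables and the tag names are invented for illustration; any real scanner or registry client could be plugged in:

```python
def ingest(image_ref, pull, scan, tag):
    """Pull an upstream image into the private registry, scan it immediately,
    and only tag it dev-visible if the scan comes back clean.
    `pull`, `scan`, and `tag` are injected callables (hypothetical names)."""
    artifact = pull(image_ref)
    report = scan(artifact)
    if report.get("critical", 0) > 0:
        tag(artifact, "quarantine")      # kept, but visible to admins only
        return "quarantine"
    tag(artifact, "dev-approved")        # now pullable from dev sandboxes
    return "dev-approved"
```

The point of the quarantine branch is that a failed scan still leaves an auditable record instead of silently dropping the request.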
So like I said, once you get that automation in place, the devs are allowed to do what they want while still meeting what production and operations demand from a security and compliance standpoint.

That mirrors a lot of what I think we've done at G-Research. The question I have is whether the developers working on their developer box have free access to the internet, to go and look things up on Google?

Yeah, on their local devices we're not getting in their way, because this is where you get into the weird chicken-and-egg scenario: what packages do you need to pull?

And that's the thing I find crazy at G-Research, where that developer box is also air-gapped and they don't actually have access to the internet, even to understand what they need. I don't know how they know. Sometimes you want a Hugging Face model; you've got to go out to the internet to figure out which one you want, and then to get it inside, there isn't a proxy yet.

Yeah, that's definitely rough, because you get into scenarios like: how do you know what you need to add to your private repo? How do you develop? How do you pull in tools? At some point there has to be some level of freedom so you know what you want to test in the first place, right? And when you say developer, coming from the service provider world, there were somewhere upwards of seven to eight lines of business with four to five development teams each, all funneling into this big, big infrastructure. So basically, what we did is, if it was heading for production, right?
It was going to one of our public cloud environments or one of our data centers, and there was basically this inlet that starts with: are you building on one of the golden images? Are the packages you need available? So we provide that base template, and we would start giving out tips to the developers building locally: look, spin this up in VirtualBox on your machine and start with this image, so that as you're developing, with free rein on the internet, you can see whether the packages you need are available within this Ubuntu box or this Red Hat box. If the answer is no, run yum update, add the repos that you need, and then we'll check them and make sure they're safe. Once you get into the true pipeline towards production and move through dev, you'll find out quickly whether or not you built your stuff correctly, because it'll break in the dev environment with little risk to you. We treat dev truly as dev: if stuff breaks, that's okay. But the pipeline is not going to let you build anything that it did not see in that private repo. So it's on the developer to keep track of what may not be available to them, and they need to run the diffs to find out where they're going to fall on their face. Sometimes they don't know, and that's why we have the dev environment: they build it locally on the box.
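That "run the diffs" step reduces to set arithmetic between what a build declares and what the internal repo actually mirrors. A trivial sketch, with invented package names:

```python
def missing_packages(required, mirrored):
    """Packages a build needs that the internal repository does not yet carry.
    Everything returned here must be vetted and mirrored before the
    air-gapped build can succeed."""
    return sorted(set(required) - set(mirrored))
```

In a real pipeline, `required` would come from parsing the Dockerfile or package manifest, and `mirrored` from the internal repository's index.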
Okay, go deploy this in dev, see what breaks. Typically it'll break two or three times before they catch everything, just because the amount of code they're writing themselves versus stuff they've pulled from other places skews drastically towards "I wrote a little bit of glue code around lots and lots of other people's stuff." And that's when all those embedded URLs get you in trouble, with the assumption that your VPC is pointing out to whatever public repository and can pull whenever it wants to.

What are all the checks that you're running? We run a battery of tests ourselves; I'm just curious which third-party tools in particular you're employing to do all that.

So, since I just moved companies, I'm not running any tools currently. At the old one, it was a bunch of your standard security scanning tools: image scanning tools, code scanning tools, Veracode, Xray if it was in Artifactory, or Prisma Cloud, a lot of the big players. We ran different kinds of scans and different types of compliance checks, and then we'd also have a battery of functional tests. For instance, we treated the infrastructure the same as we did the application.
We would have this whole battery of tests to make sure that all of the packages we needed to build Kubernetes existed, for instance, and that the packages we were putting together would survive Sonobuoy. We would use disaster recovery methodologies where we'd back up etcd, completely nuke the cluster, and rebuild it from the backup. We treated everything from top to bottom like this whole top-left-hand corner here, the whole GitOps thing. I'm not going to say we were even 80% there, but that was what we were ultimately striving for. So whether all I'm doing is updating kubeadm, for instance, or kube-proxy, or, like I said, a Fluentd deployment, or the application itself, or making an update to the base operating system, every single one of those was declared in Ansible, and Ansible was sitting there creating a mapping. You can do this with your favorite scripting or modeling language of choice, but basically we would make these infrastructure-as-code templates to the best of our ability, then put the Kubernetes manifests on top of that with the Helm charts and so on. We'd build this layered stack, and everything, line by line, would map only to an internal repository.
And then, like I said, we would constantly think we were good. We've used Rancher, we've used Tanzu, we've used OpenShift in the past, and our manifests would be clean: the YAML driving Ansible or the YAML we were pushing into Kubernetes all pointed to an internal repository. But then there's some internal mechanism within Tanzu, for instance, that thinks it can go back to VMware, or OpenShift assumes the Satellite instance has access to the internet, and that is always where we found all of the gotchas as we were trying to do this. Or at the application layer itself, everything would break because the app assumed it had access to the internet, and we're like: you don't get that.

Yeah, and in a couple of cases, I run an open source program office, so we've had to go back to whatever project it is and try to fix their stuff so that it doesn't phone home, or so there's an option for pointing at an internal repository or some file somewhere, that kind of thing. Occasionally we've actually run into places where people didn't want to fix it. Have you ever run into that kind of situation?

Yeah, in fact, especially in the service provider world. Because we have SLAs and SLOs that we're required to keep, we walk a fine line with open source: we also want, if we can get it, some type of vendor-supported backing even for open source projects. I mean, that's how the Red Hats of the world make their money, providing service agreements on open source software. So yeah, it gets weird, and I'll be honest, being at a tier-one telecommunications company makes it a lot easier to wave a big stick and say: I need you to fix this for me, or I'll go to your competitor.
But we ran into this a lot. Or conversely, like what we did with our Container as a Service platform at the platform layer: we had to go in with a fine-tooth comb, and we changed a bunch of manifests ourselves internally and just swapped out all the URLs to point to our internal repositories. And like I said, we'd be six, seven months in, thinking we're good, and then we'd run one update, and next thing you know we'd find another hidden, embedded thing that would break everything. Some of that is the whole being-agile thing; sometimes you've just got to take your lumps and deal with the bruises that come along the way. But what we would find sometimes is that certain open source communities or vendors would say: well, you changed this, so we're not going to support you. And we're like: well, then I can't use your software. That is definitely a struggle. With the more sane ones, we even got them to start making some of these things modular, so the tooling would actually guide you into adding the repositories that you would potentially pull through. Others gave us the full Heisman trophy stiff-arm and told us: if you do this, you're out of compliance. And most were somewhere in the middle, where, grudgingly, slowly but surely, they came around. But like I said, that was with me wielding my company's name behind me and saying I'll go to this other competitor. I mean, even in the open source world, how many different log forwarders are there? And everybody's convinced theirs is the best. I'm like: hey, I'll switch from Fluentd to Logstash if you guys are going to be jerks. They still want their babies to succeed.
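The fine-tooth-comb manifest rewrite described above, swapping public registry hosts for the internal one while preserving the repository path so tags keep lining up with the source of truth, could be sketched like this (the mirror hostname and the exact host list are assumptions for illustration):

```python
import re

PUBLIC_HOSTS = ("docker.io", "quay.io", "ghcr.io", "registry.k8s.io")
INTERNAL = "registry.internal.example.com"   # hypothetical mirror hostname

def point_at_internal(manifest_text: str) -> str:
    """Rewrite `image:` lines from known public registries to the internal
    mirror, keeping the repository path and tag intact."""
    def swap(match):
        ref = match.group(1)
        host, sep, rest = ref.partition("/")
        if sep and host in PUBLIC_HOSTS:
            return f"image: {INTERNAL}/{rest}"
        return match.group(0)   # already internal or unqualified: leave as-is
    return re.sub(r"image:\s*(\S+)", swap, manifest_text)
```

As the speaker notes, the catch is support: a vendor may refuse to support manifests you have modified, so keeping the rewrite as an automated, reversible transform helps.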
So even in open source there's some level of protectiveness and a willingness to make your software project stand above the rest, and we definitely leveraged that a little bit. It's definitely harder if you're an eight-person startup and your lead dev says, "I have to do this because of this," and they say, "yeah, we don't care about you." That's going to be a much bigger challenge.

Yeah, we've been mostly successful, except where the people we're trying to convince are so big they just decided to ignore us. Google and Amazon come to mind. The AWS one that comes to mind currently is something in the client where they don't actually follow the HTTP standards for redirection. We pointed this out to them and they're just like: yeah, we're not going to fix that. So what are you going to do?

Well, even then you have to decide where your boundaries are. Take the Amazon example: if you're using EKS's CNI, a pod gets a routable IP address throughout your VPC, right? So you really have to understand your topology too. I come from a networking background, so I think of security in the legacy model of perimeters: here's this walled garden, here's this wall. Well, you have to understand where your walled garden actually is. Same thing if you're doing something fancy with Calico or Cilium, where you're advertising pod IP space to the underlay via BGP: you might provide reachability to something you never intended to.
And so this is where, like I said, you get your dev, your pre-prod, and your prod environments truly air-gapped. It's impossible to get 100% parity between your different environments, obviously, and you don't want it, because there are certain things you want to do in one and not the other. But on the networking piece, if you say that your prod is truly air-gapped... I think, Ricardo, you said at the beginning that a lot of people in this group work in the education space, at universities or in research. So, I mean, at CERN, for instance, they probably don't even have a fiber line running to some of these systems, so there's no risk of it, right? If you're going to do that, then you need to make sure that your dev environment does the same, or else you're going to break things in production.

So maybe I have a follow-up on this, and it's actually more about how much you expose the fact that these environments are air-gapped to the developers themselves. I'll give an example. CERN is probably a good one, because there are systems that are clearly air-gapped: anything that is controlling the machines or the accelerator is very much controlled. But then you want to give developers a good experience, and they have a general network where they can actually get internet connectivity and do their builds and things like this. So imagine one situation would be: you do all your work, all your builds, all your images on some sort of general network, and then, through the process you also described of approving images, you would have some sort of replication of, say, the registry into the air-gapped environment, with some sort of automated replication.
And this would be an exception to the air gap: you actually have some paths for images to be promoted and exposed in the air-gapped environment. How this works could be done in different ways, but let's say, for example, you have two Harbor instances, one on the general network, another one on the air-gapped network, and you control what gets replicated automatically. But then the question is: if you have deployments in multiple air-gapped environments, and you're running an instance of the registry in each of them, how much do you hide this from the applications and from the developers? There are things you can do, for example mutating webhooks that just rewrite the registry reference to use a local registry, things like that. How far do you go in making this invisible to the service managers, developers, et cetera?

So the answer is: it depends, which I know is a cop-out, but I'll explain. For one, there are lots of ways to solve air-gapped, so I definitely don't want to propose solutions blind, because it depends on your environment. I mean, one proposed solution is I take a thumb drive, walk to all the different environments, plug it in, and upload to the registry. But on your point about having multiple instances: what we actually did was have a private network, and since I was at a networking company we had the ability to do this, with a source registry that was the single source of truth for everybody. We then had a bunch of satellites in different environments; each had a private connection to the source registry, but other than that it was only accessible to its local environment, and we had a mirroring strategy.
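A mirroring strategy like that, a single writable source of truth with read-only satellites, reduces at its core to computing which tags each satellite still lacks. A sketch with made-up site and repository names:

```python
def mirror_plan(source, satellites):
    """source: {repo: set(tags)} at the single writable source of truth.
    satellites: {site: {repo: set(tags)}} for each read-only replica.
    Returns, per site, the (repo, tag) pairs its mirror job still has to copy."""
    plan = {}
    for site, repos in satellites.items():
        plan[site] = sorted(
            (repo, tag)
            for repo, tags in source.items()
            for tag in tags - repos.get(repo, set())
        )
    return plan
```

Registry products with replication rules do effectively this continuously; the sketch just makes the reconciliation explicit.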
And what this afforded us: since we had a single source of truth, we basically had a single instance for writes, but you couldn't read from it, because we didn't want to over-tax the thing that was our master source of truth. Everyone else was a read replica in its local environment and was only accessible there. But to your point about mutating webhooks: I could then take an image and deploy it anywhere, and there was just this assumption that there was a local registry within that local network. Since it was a single source of truth with a common tag that's mirrored to all the different satellites, it doesn't matter where your software deploys. And this took us like 18 months to get to, so I don't want to make it sound like it's trivial, but it got to the point where a developer would write one image, push it upstream into that source of truth, and that one image got to assume it was going to call a local URL. Since these were private networks, we leveraged things like anycast addresses on the networking side, and then we got a little bit loose with some of our URL naming, because these were all isolated environments depending on which data center you're in, and we controlled in our route tables which networks were and weren't exposed. So your software deploys in any of our data centers, or any of our VPCs or other public cloud instances, and it always assumes the registry is local to that network and is named the same. So then you have federated images and federated, secure packages in these air-gapped environments.
But like I said, the other way to do it is a thumb drive, or you could make it so that all of the local networks call back to a single source of truth: a single registry with two different network attachments, one of them a private network to the registry that everyone calls back through. So depending on what your environment looks like, there are a lot of different ways to skin this cat. But the biggest thing we pushed, and the most painful, was: A, the application, whether that's platform software or user applications, needs to deploy in a world where there's no internet. And additionally, everybody needs to use the same source of truth. That ties into the other part I didn't get super deep into, which is how we tag things. So: you're a developer, you pull a new image. Your local developer device is not air-gapped; you can do whatever you want. You break it, you fix it, you rebuild, you go. The shared dev environment, there's definitely still a lot you can break there, but we're hoping you did your due diligence. Then you pull those images into the source of truth, but the first step in our promotion flow is that the tag makes the image available only to the dev environment. So now you have the satellites in those air-gapped dev environments, and the tag says only these users can see it. Basically, what we tried to get to is policy as code. It's kind of a buzzword, but using different policy engines we made it so the devs could pull images in as they wanted, and the policy would say: you can't promote this image to these dev repos until the scans have been done, until the tests have been run.
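The policy-as-code gate described here, where promotion is blocked until scans and tests have passed, could be sketched as a lookup of required checks per target environment. The field and tag names are assumptions for illustration, not any real policy engine's schema:

```python
# Required checks per promotion target; prod demands strictly more than dev.
# Check names are illustrative.
REQUIRED_CHECKS = {
    "dev":  ("scanned",),
    "prod": ("scanned", "tests_passed", "signed"),
}

def can_promote(image_meta: dict, target: str) -> bool:
    """Allow promotion only if every check required for the target has passed.
    Unknown targets fall back to the strictest (prod) policy."""
    required = REQUIRED_CHECKS.get(target, REQUIRED_CHECKS["prod"])
    return all(image_meta.get(check, False) for check in required)
```

In practice the same rule would be expressed in a policy engine's own language and evaluated by the registry or an admission controller, but the shape of the decision is this simple.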
If those are successful, you're good. Once we got the policy in place and overcame that automation hurdle, we basically took the reins off and let the developers go, because the left and right limits were in place: they could move as fast as they wanted, but they still weren't going to do something that would take everybody else down.

That matches the approach we tried as well, although we probably don't do it at the low networking level. We really need tools that are able to do these mutations, and the registry and the images are the easiest example: just mutate the source, like you said. There's support in quite a lot of tooling for handling containerized deployments to do this sort of thing. It's not only Kubernetes; other tools also support this. I think one of the things that made this popular is that we can manage it ourselves, centrally, without the developers having to know anything about it; it's all handled by the deployments. That's a key thing, unless you get a lot of shouting.

Yeah, the developers had a lot more insight early on, because things were breaking and making them mad, and they were like: hey, what's going on? I'm just going to fix this. But once we hit that 18-month to almost two-year mark, we fell over the other side of the hill, like I said. The policy was in place. There were clean methods to pull new images into the private registry and validate that they were safe. And like you said, after that, some developers would just be provided baseline templates for their container images, for their OS images, et cetera, or the base images would have the URLs pre-baked in for them, at least for internal development. Once you pull in third-party software, there's always going to be some kind of deconfliction.
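The registry-rewriting mutation both speakers mention is commonly done with a Kubernetes mutating admission webhook that returns a JSONPatch. A stripped-down sketch of the patch-building logic, where the mirror hostname and public-host list are assumptions, and TLS serving, error handling, and initContainers are omitted:

```python
import base64
import json

PUBLIC_HOSTS = ("docker.io", "quay.io", "ghcr.io")
INTERNAL = "registry.internal.example.com"   # hypothetical local mirror

def mutate(review: dict) -> dict:
    """Given an AdmissionReview for a Pod, return a response whose JSONPatch
    rewrites public image references to the local mirror, so workloads never
    need to know which satellite registry is serving them."""
    pod = review["request"]["object"]
    patches = []
    for i, container in enumerate(pod["spec"]["containers"]):
        host, sep, rest = container["image"].partition("/")
        if sep and host in PUBLIC_HOSTS:
            patches.append({
                "op": "replace",
                "path": f"/spec/containers/{i}/image",
                "value": f"{INTERNAL}/{rest}",
            })
    return {
        "apiVersion": "admission.k8s.io/v1",
        "kind": "AdmissionReview",
        "response": {
            "uid": review["request"]["uid"],
            "allowed": True,
            "patchType": "JSONPatch",
            "patch": base64.b64encode(json.dumps(patches).encode()).decode(),
        },
    }
```

This is the "invisible to developers" property: the manifest can keep naming the public image, and the webhook redirects it at admission time. The same effect can also come from containerd/CRI-O mirror configuration on the nodes, which avoids touching the API objects at all.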
But yeah, we were able to make it largely abstracted from them, and once we stopped making things painful for them, they stopped asking us questions and just did their thing.

Awesome. I do have to run, Ricardo.

Yeah, thank you so much for joining. Feel free to reach out; I'm down to come talk to you again. Just let me know. Thank you so much for taking the time to join us. Great chat with you all.

Cool. So I don't know if anyone has additional questions; this was a late call for the subject. Does anyone have anything to raise, or even other topics? Alex, I saw you.

Me too. I was just trying to think about how I think about it from the standpoint of: we're an air-gapped company, and I run an open source team, and there's inherent tension there. I don't know whether it's worth discussing that tension, but it's not a well-formed thought, so I'll put myself back on mute until I've thought about it. So, I don't know, that drove a bunch of questions. Has anybody else got questions around this air-gapped world?

Not really any questions; I'm just here to see what others are up to in that space. Where I come from, we do several different kinds of national-security work, so we have several types of networks that are secluded, and within those networks they might be running Kubernetes. So we're trying to figure out different combinations of tools, and how to securely scan artifacts, or create new artifacts that can be consumed, all outside of that protected space, and then pull those in so they can be deployed in a production, or even a development environment for that matter. Yeah, we leverage GitLab quite a bit for a lot of those kinds of things, just from an organization-model perspective; we use the runners, and the way they're designed we can be a little bit flexible about that.
But it depends on what environment we're talking about, because sometimes it's hard to avoid the transfer of certain kinds of files without walking them over and physically applying them to said cluster space. So it's interesting anyhow. Yeah, I'm just here to learn and listen; no specific questions though.

I think for our environment as well: we also have GitLab and we make a lot of use of the runners. But what we've been trying as much as possible now is to have this model that Jeff was also describing, which is to have one single source of truth for all the packages and images, keep that really tight, and then define policies on how things should be mirrored or replicated to different environments. Those replicas will be air-gapped and read-only, and they have to be populated via the central one, where we enforce the policies. This is basically what we've been trying to achieve; the challenge is really making it as seamless as possible for everyone. We actually do the same for external packages. Like Jeff mentioned briefly, this idea of having pull-through caches in repositories, we do that as well. We try to enforce, even on our general networks, that nothing is pulled directly from upstream; it's all coming through our single source of truth, even if that means pulling through and then just making things available after the checks.

Yeah, we do the same thing, where we use policies to drive: okay, where is the single source of truth that we're allowed to pull from? In order to really tightly control even people within that space trying to do custom things or try things out directly, where are they allowed to pull those resources from? We control that via policy.

I think Alex asked before about the actual tools being used.
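As a concrete sketch of the "nothing is pulled directly from upstream" rule: container runtimes can be pointed at the read-only mirror at the host level. Assuming containerd as the runtime, a registry-host configuration file might look like this (the internal hostname is invented for illustration):

```toml
# /etc/containerd/certs.d/docker.io/hosts.toml
# The upstream this directory describes (never actually reached in a
# true air gap):
server = "https://registry-1.docker.io"

# All pulls for docker.io images are redirected to the internal,
# read-only mirror (hostname is a hypothetical example):
[host."https://registry.internal.example.com"]
  capabilities = ["pull", "resolve"]
```

One such directory per public registry (quay.io, ghcr.io, and so on) makes the redirect transparent to workloads, complementing any mutation done at admission time.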
Maybe for us: specifically for container images, we are relying on Harbor, and we run the CVE vulnerability scanners, plus some additional checks that we have, in addition to whatever goes on in the code repositories themselves and the GitLab reports. What are people using for this?

Well, since I was already talking, I'll share. We're using some of the Aqua Security tooling out there to scan our repository images, and we're using GitLab artifact repositories to help deploy and scan those. I think we're also using Checkmarx and AppSpider and a few other things out there to provide that reporting level and adhere to the standards that we have to meet.

Yeah, I feel like we were using Aqua at one point; I don't know if we still are. I know Black Duck is something on the inside as well. I sometimes don't see everything because I'm on the outside. Those are two that I know we're employing, and I assume X-Ray, because we use Artifactory as well. We just signed up for Tidelift, so there's some amount of that coming in, and various things built into GitHub, I suppose. Yeah, there's probably a whole world of scanning that I don't even know about. I could come back and give a full list at some point.

Yeah, we are also using Aqua for the vulnerability scans as well. Alaa, what are you employing over at IBM?

Oh, what timing, I was just about to drop off. So I have to admit that I'm in IBM Research, so I'm a little bit removed from production environments, although I have been involved in the development and continuous integration and deployment for some IBM services before, just not currently at the moment. But, so, just a few comments on some of the things that were mentioned. I was really surprised when I heard someone say that open source work is air-gapped even in their own environments, right? Actually, we don't have that, right?
For open source, people contributing to open source are typically able to download and experiment and so on in their own environments, right? These are things not going to production, but they are able to do that freely. So we deploy, of course, DevSecOps pipelines and a variety of image scanning tools. Different organizations or different business units are really using different pipelines, actually. There are some efforts, of course, to unify those things, but as you can imagine, IBM is very big and many units are doing different things, so I don't see one unified DevSecOps pipeline with a fixed set of tools forced on everyone. Certainly in Research, we have flexibility there. In terms of repos: we use IBM Cloud Registry, we use Quay, we use Artifactory; those are some of the repos we use, plus GitHub, of course. And we have our internal IBM GitHub. That's the other thing: for some of our projects we have only inner source, so we replicate the github.com model in github.ibm.com, and basically we have projects that people can contribute to. We call it inner source, following the same open source model.

Cool, sorry that I stopped you from heading out. Yes, I needed to head out, but since you pinged me, I thought I'd just give a few comments. Yeah, thank you, appreciate it. And thank you all for the discussion, really.

All right. It feels like we can still learn a lot on this kind of thing, but it's also a very broad topic, so maybe it's something we want to bring back later in the year; it seems like there's interest. Yeah, I feel like it's a pity that Jamie wasn't able to join; he could have given a lot more flavor from our side about the trials and tribulations of working on the inside of the air-gap solution. I am on the open source side, so I'm mostly on the other side of it. That's all good, we can bring it back.
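To tie the scanner discussion together: a pattern common to the GitLab setups several speakers described is gating promotion into the private registry on a scan job in the pipeline. A hypothetical sketch using Trivy (Aqua's open source scanner, chosen here only as a stand-in, since the teams above use different products); the job, stage, and variable names are illustrative:

```yaml
# Hypothetical GitLab CI job: scan a candidate image before it is
# promoted into the air-gapped registry. Names are illustrative,
# not taken from any of the setups described above.
scan-image:
  stage: validate
  image:
    name: aquasec/trivy:latest
    entrypoint: [""]
  script:
    # Fail the pipeline on HIGH/CRITICAL CVEs so the image never
    # reaches the internal read-only mirror.
    - trivy image --exit-code 1 --severity HIGH,CRITICAL "$CANDIDATE_IMAGE"
```

A promotion job that copies the image into the internal registry would then run only if this stage passes.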
But okay, I think we still had quite a nice overview of options and problems. So if no one else has things to raise on this topic: I'm just looking at the agenda, and for the next one we have Cluster API and Crossplane. I think that will be a very interesting one as well; that will be in two weeks, March 16th. I'm just wondering whether we should cover both in the same session, or whether it's even worth splitting them, because they are not really the same: Crossplane is more a generic way to integrate external resources, while Cluster API is really focused on one use case. So maybe we choose one. Which one would people prefer?

I'm more interested in Crossplane.

Crossplane? But that's only one vote.

I don't have a personal preference, so I'll jump on the bandwagon there and vote for that.

There's two now, awesome. All right, so we have Jonathan and Nathan to break the tie. Yeah, I think it's settled; I think it's all Crossplane all the time now. All right, fair enough, let's do Crossplane. That's an easy one; I know people to reach out to, so that will be easy. And then maybe we keep Cluster API in the backlog, just so we don't forget that we had it there; I'll just add it here. Nathan, you just can't take a firm stance on this one. I was going to say: abstaining is best. Jonathan also voted for Crossplane; I didn't see it in the chat.

You know, we started putting this batch working group together for the CNCF, and there haven't been many respondents for the Doodle; we're just trying to figure out when we should have a meeting. If any of you are interested in this topic, here is the Doodle. This was sort of born out of a discussion: Alaa, who was on here, has a project, MCAD, and we have a project, Armada. Klaus Ma has a project, Volcano, and there's a whole bunch of work around batch scheduling.
And in one of the TAG sessions, we thought it might be useful to have a focused working group on just that as part of the CNCF. There is one already happening as part of Kubernetes, which met last week; that was the discussion last week. This one would be for the CNCF in general. If any of you are interested in that kind of discussion, there's a Doodle here, so that we can try to figure out when would be a good time to meet. So, just making a plug for that.

Yeah, maybe I'll send an email to the list as well, and push it on the Slack channel too, because here we are only five right now.

Yeah, this is my first step towards getting some traction here, so yes, I will do that.

And yeah, I also just posted the link to the co-located event. It's, I think it's called the Batch System Initiative within the CNCF, and it will be under TAG Runtime. Nice. And then I just posted the link for this co-located event at KubeCon in Valencia in May; the CFP is still open until, I think, Monday next week, midnight BST, so basically Tuesday for all of us. I posted the link directly. So feel free to submit a proposal; things like the Batch System Initiative are probably worth talking about at this event. There will be talks about the Kubernetes batch working group and the proposal for Kueue that we saw in the last meeting. So there will be a lot of talks in this area, for HPC and batch and workflows and queues and things like that. So yeah, make sure you push all the proposals you have; we'll definitely have a good event. It will be a half day, with a networking dinner after.

Yeah, I hope to get out to that. I think Ricardo, the other Ricardo, signed me up to speak about something at some point, I don't know, so we'll see. I think that's for the CNCF TAG Runtime, though.

You mean Aravena, Ricardo Aravena?
Yeah, so that's at CNCF TAG Runtime, but this is the co-located event at KubeCon, so make sure you submit proposals.

Yeah, it was something to do with KubeCon that he signed me up for; I haven't figured out yet what exactly. Ah, okay, then I don't know. Sounds good.

Yeah, all right. Cool. Okay, I think that's it for today. Thanks a lot, everyone, and see you in two weeks for Crossplane. Perfect. Thank you. Bye-bye. Thank you.