Welcome, everybody, to OpenShift Commons. We're streaming live on both BlueJeans and Twitch, so wherever you are, welcome. We are now going to have a great talk. I always love hearing from Marc Boorshtein from Tremolo Security, one of my favorite Commons members and one of the earliest members of the Commons. And today he's going to talk a little bit about, and hopefully a lot about, securing your pipelines on OpenShift. And I really like this whole talk. We've been talking a lot lately about DevSecOps and how to keep the conversations going and keep everybody happy. So I'm going to let Marc introduce himself. We're going to do probably 20, 30 minutes, maybe, right, Marc, that much with the demo. We'll see how we go. Yeah, we got slides. We only have about 700 slides or something. We have a really fun demo that we want to get to, and make sure folks have fun seeing what we're up to and maybe get some ideas going on what they can do in their own projects. Yeah. And then we'll basically have a conversation with everybody who's online, AMA style. And I'm going to pick Marc's brain a little bit afterwards. And if you have questions, just enter them in the chat, either on Twitch or here in BlueJeans, and we'll just have a conversation for the second half hour. So, Marc, with that intro, please take it away, introduce Tremolo Security and yourself, and tell us how to secure our pipelines. Thanks, Diane. And thank you again for giving us a chance to come on to Commons and talk. So we're going to be talking, like Diane said, about securing your pipelines. So we're not going to be talking about what your pipeline should be so much as: how do you provision your pipelines? How do you create your pipelines in your enterprise? How do you figure out where it goes? How do the different things connect? So who are we? We were founded in 2010. We're focused on identity management.
And we'll talk about why identity management and pipelines kind of intersect there. 100% open source. So what we're about to show you is, you know, and there's a link in the BlueJeans chat, all open source. So all freely available. We've been working with OpenShift since 2015. I'm rocking a little bit of my retro OpenShift gear here from before it was built on Kubernetes. So we've got a lot of experience in the space. And, yeah, we were in the first class of Red Hat certified containers, and the first class of certified operators. Proud to have been included in the first round of the marketplace offering that just came out a couple of weeks ago. So we're pretty well steeped inside of the Red Hat culture and product ecosystem. I myself have been working with Kubernetes since, you know, about 2015. CKAD as of several months ago. So it's not unusual to see me either in the OpenShift Commons Slack or in the Kubernetes Slack answering whatever questions folks have around identity management, authentication, et cetera. But we're not actually going to talk a lot about authentication today. We're going to talk about pipelines. And we're going to talk about it not just from a technical perspective, but from a stakeholder perspective, a user perspective, a DevSecOps perspective, the three parts of that word. You know, the developer, right? I'm a developer, I want to write code. And really, if something gets in the way of me writing code, I'm not happy about it, right? You know, we as vendors, we as a community spend a lot of time trying to make this experience really good and really working on the user experience here. You know, maybe I know what YAML is, maybe I don't. A lot of times I don't want to know how the stuff gets deployed. I just want to be able to work on my thing. I like to write the code, push it on to the next task.
So, you know, developers, they're generally focused on making sure that that business logic, you know, the thing that runs your company, your enterprise, is being built. And then the ops side of it, right? Our administrators. Raise your hand if you've ever gotten a nervous tic from hearing Slack go off. I know I have. You know, as an administrator, you help somebody, and your reward for that is they ask you every single time they need something, no matter what it is. And so, you know, you're not just the person who fixes things. Now you're the person who has to figure out how you get people to not ask you to do things personally. You know, there's a lot of ad hoc work that has to be done. And then finally, you get into that audit perspective of, you know, you've been working on a system for a year or two, and then all of a sudden this person comes in and says, hey, is this secure? Why do these people have access to these systems? Sometimes with a little more structure, sometimes with less. I know I've had one customer where literally the auditor just sent me an email and said, hey, is this secure? And I was like, sure. And so, you know, ops kind of makes all that run, often in the background. And so you've got to respect what they're doing. And then sec: security. You know, the security person, they're looking to understand why. That's the biggest question. Why are all these projects here? Who approved the access to them? You know, we've made it so easy, through OpenShift and API-driven infrastructure, to do things, that the previous concepts of privileged access versus regular access kind of go away. So now kind of everything is privileged. If I can deploy new code just by pushing it into a repo, I should probably have multi-factor access on that repo. So there are questions there.
You know, and just because we're doing this whole cloud native thing and everything's automated, it doesn't mean that we don't still have compliance rules that we have to work around. You know, there's this argument of compliance versus security. But at the end of the day, if the law says you have to be compliant in these areas, there's not much of an argument. And so when we look at the provisioning process and integrating these three things, and what's so often called the culture of DevOps and DevSecOps, it's not that any one of these is more important than the others. They all need to work together in harmony. And so any solution really needs to focus on all three, not just one or two of those components. So why is identity management important in all this? You know, let's talk about the pipeline. What does your pipeline work with? So we're going to do a demo of a proof of concept we were asked to do for a customer where the pipeline was more than just OpenShift. OpenShift was the starting point. They had a lot of legacy Jenkins workloads. They wanted to continue to leverage Jenkins for their builds. They wanted to use GitLab. They wanted to show that we could integrate with SonarQube for scanning. Now, the interesting thing about all three of these: they each have their own authentication process. They all have their own identity management system. You know, Jenkins ships with OpenShift. It's tightly integrated, so that works really well. GitLab has its own identity management system, right? It's got its own API, its own way of integrating with groups. You know, there's SSO, sure, but you still have to provision those different things. And then SonarQube as well has its own thing. So every application that makes up your pipeline, it's going to be more than just OpenShift. It'll be, you know, at least three things here, right? A CI/CD pipeline of some kind, you know, source control, different kinds of scanners.
You might have different types of process-based applications. So your pipeline is going to be more than just OpenShift. OpenShift is your starting point. And so one of the interesting things that came out of when we first started getting into OpenShift and Kubernetes in general was we found that the identity management workflow engine was actually kind of a nice automation engine as well. As long as we had an API to talk to, it gave you a single path so you could actually trace when a request was made, who approved the request, what objects were created throughout your infrastructure. Everything was tied together. It gave you an approval process. You know, why is something being done that needs to be approved? Who has access that needs to be approved? You could do that in external systems, but a lot of CI/CD systems don't have this part of it. It's mostly focused on the process, less the approval. You've got some stuff with Git and whatnot, but I haven't seen it as integrated as this. And then there's an automation side of it, where you want to do things the same way every single time as much as possible. You'll always have exceptions. You'll always have to have flexibility, but, you know, you want to have your base be: okay, unless you have a really good reason, we want you to stay within this process. And that process is going to vary between enterprises. So we found that there was a really nice meld there between identity management, both in tying the various pieces of your pipeline together and in being able to track your pipeline and the provisioning process. Let's talk a little bit about the demo. I want to have fun with the demo. I like demos. So what we've built today is a multi-environment OCP pipeline. So we've got two OCP environments. We've got a development and we've got a production. All of our applications for managing the pipeline actually run in our development OCP. So we have GitLab running in here. We have SonarQube. We have Jenkins.
And this is where the bulk of the work goes. So the first thing that happens is, when a user logs into OpenShift for the first time, they get their own sandbox. So we just-in-time build a project the first time they log in. The second thing is they're able to request that a project get created, and when that project gets created, and we'll go through the details, there are a lot of different steps: you've got to create the project in GitLab, you've got to create the project in OpenShift, you've got to create the project in the production OpenShift. And there's a managerial process. So we don't necessarily want developers or even admins to be running, like, oc commands or doing anything like that to get things stood up. We'll automate that process. But then when the move goes into production, we want to leverage what OpenShift gives us. So you've got image streams and deployment configs that will intelligently roll out applications as updates get made. So we said, all right, when we do a rollout to production, that really means pushing a container from the development environment into the production one. OpenShift has its own built-in registry, all API-driven. So we said, all right, let's go ahead and execute an API call in production, and let it pull the container from the dev registry into production. That gives us that whole kind of circle of the development life cycle. The other thing that becomes really important here, and we have our manager over here, is somebody needs to take responsibility for when rollouts happen. You can't just say, okay, I'm just going to hit the button and be done with it. In a lot of enterprises, the people who make those decisions don't generally want to log into Git. They're used to web applications, and so you get to them where they live. And so we provide them a UI to say, hey, somebody requested that an application be pushed into production.
That workflow in an enterprise will often have multiple steps, because you'll have multiple stakeholders that have to sign off on it. Here we have just one step, but it's all customizable. You hit go, it deploys, and then there's an audit log of not only what happened, but who approved it. So we'll go through that as well. So what you end up having is this very life-cycle approach to being able to deploy an application consistently into your environment. Let's talk about the dev pipeline that we'll see. A lot of this is going to happen behind the scenes, so I like to go into the details a lot before we hit the demo. So I'm a developer, I'm doing my work, I merge into master. That's going to kick off a pipeline that will do the build, do a code analysis, create a container, push it into the test environment. At that point, it's assumed that there is something that's going to do some testing, whether it's automated or not. There could be multiple layers of analysis here. We're just doing code analysis. In your pipelines, I would highly recommend things like container scanning, scanning your containers for vulnerabilities, things of that nature. So there could be all sorts of other steps to the pipeline, depending on the type of code it is, the type of application, et cetera. But once it's in test, at that point, we're doing our testing. That is our kind of goal for now, before we push it into production. Then it's time for production. So, forgetting OpenShift and Kubernetes for a moment, your typical enterprise is going to do something to the effect of: I'm ready to move my application to production. Let's go to the change control board, or the change access board, or whoever the people are who are responsible, because somebody's got to sign off on it. And you go in and you say, here's what I'm going to do. Here's my back-out plan. Here are the mitigations. Here are the risks. Here's who we think will be impacted.
And that gives everybody who's a stakeholder a chance to say yes or no. We're okay. We're not okay. And so here we're automating that process. We're saying, all right, somebody is going to log in, give a reason why they want to do the deployment, and it's going to go through a set of approvals. Once those approvals all clear, the promotion to production is automated. We're going to push the container to production. Actually, that's not 100% right. We're going to pull the container from dev, because you don't want your dev environment to be able to push into production. So prod is going to pull the container from dev into its own image stream. At that point, you're then leveraging OpenShift's built-in capabilities to recognize that the image stream has been updated and roll out new versions, based on how you define your deployment config. So what is a project? We talked a lot about that and about having a project. But there are just a lot of different steps you've got to go through to make it work. You've got to create the project in dev, the project in production. You've got to create something in GitLab. You've got to create all your build configs. Then you have to connect everything with webhooks to make sure that when you do the commit, it goes ahead and starts the process. You've got to get your prod system to be able to pull the container, and then you need RBAC bindings for everything, right? And then authorization groups on day two to figure out who has access to all this stuff. So it's a lot of different steps. It's not rocket science. You don't need a PhD in cloud-native to be able to pull it off. But it can feel like a bit of a Rube Goldberg machine sometimes. And so what I'm hoping to show here is that there are a lot of different ways that you can tame it. And hopefully y'all will enjoy this particular approach. Demo. So let's get to the fun part. All right. So first thing I'm going to do is I'm going to log in. Now, I put the URL for this project inside of the chat window.
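As a concrete sketch of that pull-based promotion, a production image stream can reference the dev cluster's registry directly, so prod imports the image rather than dev pushing it. Every name here (registry hostname, namespaces, image name) is invented for illustration; it is not the demo's actual configuration:

```yaml
# Production ImageStream that *pulls* from the dev cluster's registry,
# rather than letting the dev environment push into prod.
apiVersion: image.openshift.io/v1
kind: ImageStream
metadata:
  name: greetings
  namespace: greetings-prod        # hypothetical prod project name
spec:
  tags:
  - name: latest
    from:
      kind: DockerImage
      # hypothetical route to the dev cluster's integrated registry
      name: registry.dev.example.com/greetings-test/greetings:latest
    importPolicy:
      scheduled: true              # re-import periodically so prod notices new images
    referencePolicy:
      type: Local                  # copy the image into prod's registry on import
```

A workflow engine can also trigger the import on demand (via the image import API) the moment an approval clears, instead of waiting for the scheduled re-import; either way, the deployment config in prod reacts to the updated image stream tag.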
First thing I'm going to do is log into my environment. This is a really basic, kind of straightforward implementation. This is a variant on the OpenShift operator-based deployment that is available on the web inside of our GitHub repo. We added some things to it. So out of the box, our OpenShift repo just does SSO for OpenShift and provisioning for OpenShift. Here, we made it a bit more opinionated. So you can see that we've got these badges. Think of this as a developer portal. This will adjust based on who has access to what. As a user, I can go, I can check out what I have access to. I've got a dynamic ability to request access to things. So here, keep an eye on the bouncing ball. We're up to greetings three from my testing from setup. This is all dynamic. So we'll see that this changes as the course of the demo goes through. And then finally, reports. We'll be able to see who did what. So let's start off with some table stakes and let's get into GitLab. So like I was saying at the beginning, this is SSO, but each of these applications has its own integrated identity. So being able to tie those together really brings a nice experience to the table. So signed in now to SonarQube. And then finally, let's go ahead and log into OpenShift. So first I'll log into our dev instance of OpenShift. And then I will log into our production instance of OpenShift. So we've got both of these instances going. I'll fire up projects and greetings in both of these just to show there is nothing up my sleeve. This is dev, so greetings. Okay. The first thing we're going to do is create a new project. So again, the self-service process: we don't want to have to go into GitLab, create a project, create a project, create a project, link it all together. That's pretty error-prone. So we're going to go in, I'm going to say, hey, let's create a new project and give it a name: greetings four. And we're going to specify a type. So this wasn't made a drop-down.
We have other customers where we built this out quite a bit more for their particular needs, where we did make these things drop-downs to make it a little bit simpler. And test. So it's going to say, I need a new application. The system approvers, however we define that, would have gotten a notification: hey, somebody wants access to something. So I'm going to go ahead and review this, give a reason, test. This is going to happen pretty quickly. So I want to show three projects, three projects, three projects. Let's go ahead and hit the button and come over here. There's four projects. Greetings four test. There we go. Greetings four prod. If I come over here, there we go. Greetings four project. So we've now provisioned projects in all three of our environments, our major environments: our OpenShift dev, our OpenShift prod, and GitLab. We've also gone ahead and integrated, I think it's settings, integrate, no, webhooks. So we've already integrated the webhook for our pipeline to kick off our build config. So when somebody does a push, which we'll do here in a few minutes, that will automatically kick off the build process. I'm going to come over to the Jenkins project that we built. Here we go. And so if I come in here, we now have build configs for greetings four. And here are the two builds that we created. One is a more generic pipeline based on Jenkins. The other is an S2I build. That'll actually generate the image. And so this particular workflow is embedded as kind of an initial workflow to just set everything up. We made a few assumptions here. We're assuming that we're working in Java, because that's what we were told to work in. And we wanted to make it as straightforward as possible. We're integrating our code analysis with SonarQube, generating the image using OpenShift's built-in capabilities, tagging it. And then if we come back to our greetings four, we have an image stream that's right now empty, waiting for an image, because there's no code.
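The webhook wiring described above can be sketched as an OpenShift BuildConfig with a GitLab trigger. All names, URLs, and the secret reference below are assumptions made for illustration, not the demo's actual objects:

```yaml
# S2I build kicked off by a GitLab push webhook. GitLab calls a webhook
# URL derived from this BuildConfig and the referenced shared secret.
apiVersion: build.openshift.io/v1
kind: BuildConfig
metadata:
  name: greetings-four
  namespace: greetings-four-test       # hypothetical dev/test project
spec:
  source:
    git:
      uri: https://gitlab.example.com/dev/greetings-four.git
  strategy:
    sourceStrategy:                    # S2I, as in the demo
      from:
        kind: ImageStreamTag
        name: java:8                   # assumed Java builder image
        namespace: openshift
  output:
    to:
      kind: ImageStreamTag
      name: greetings-four:latest      # lands in the project's image stream
  triggers:
  - type: GitLab
    gitlab:
      secretReference:
        name: gitlab-webhook-secret    # shared secret GitLab sends with the hook
```

The point of provisioning automation is that this BuildConfig and the matching hook on the GitLab side get created together in one workflow, so a push starts a build without anyone wiring URLs by hand.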
And then the same thing over here in production. If we come to our prod image, go to build, there's now an image stream that's waiting for a tag, that's waiting for an image. So now that our project has been provisioned and approved, let's go ahead and push it all out. So this is a pretty straightforward process. It should look pretty familiar. So let's go to greetings three, and we're just going to go ahead and copy everything out. And before I hit git push, nothing in here. Let's go ahead and go over to Jenkins, come over here to builds, and keep an eye out. So everything's been pushed in, and there we can see we've already kicked off our pipeline. Now, this UI should look pretty standard to pretty much anybody who's been working in OpenShift for a while. And so a container's spinning up, it's firing up a build container. So that'll take a moment or two. And while that's going, we come over here and we can see everything's in there. So we now have that push capability in place. And so we've looked at the dev. We've looked at the ops. Ops hasn't actually done anything yet, which is kind of my favorite type of ops. You know, from the security side, I come over here to my audit report, and let's look at the change log for the period. We'll zoom down here to the end. And we can see here is our greetings four application. Well, this was greetings three. Oh, wait, no, here we go, deploy application. So greetings four. And we can see every single object that we created across the multiple projects. We created objects in Kubernetes. We created objects in GitLab. You know, we might have created objects inside of a database if we wanted to do that too, to track access management. But we can now tie together all of the different things that we created for this request back to this particular workflow. So now when someone says, where did this project come from? It's a report. It's a SQL query.
It's not digging through logs trying to tie everything together, or digging through emails. And then when it comes to, well, why? Why does this project even exist? We run a different report. And we can see that it was created for a reason, test, right? And this is the person who approved it. So I've actually got customers who are doing interesting things around OPA, if that's the right way to say it, where if a namespace or a project is not in the OpenUnison database, they flag it. And they'll say, you know, somebody needs to tell us why this exists. So it gives you something to audit against, a known state or an expected state to audit against. Let's take a look at our pipeline. So we can see that we've built the WAR. We've done the code analysis. So if we refresh this, we'll now have greetings four with a found vulnerability. Now, the point of this demo wasn't really to say this is the best way to build a pipeline. So, you know, in reality, we'd have different thresholds here for when SonarQube would say, no, this didn't work. We're just using defaults right now. And so that process is building. And while that's going, I'm going to go ahead and open this up in a new tab. Oh, and we can actually see that it tagged the image. So if I go over to our greetings four test and I look at our image streams, bam, there's our image. So we have gone through the development side of things. We've written our code. We've committed it. We've pushed it. Our security people are able to audit it. Now it's time to move into production. We're going to come back here, and we're going to come over to request access, and I want to deploy a project to production. Now, you can see here we've got our greetings four application. This is a dynamic list that gets generated. We're querying the API server to see what projects are available. But what you don't see is the 30 or 40 other projects that are just part of an OpenShift deployment. And so that's because we're doing it by label.
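Backing up to that "known state to audit against" idea for a second: flagging projects the provisioning database doesn't know about boils down to a set difference. This is an illustrative sketch, not OpenUnison's or OPA's actual interface; the function name and the assumption that you've already fetched both lists are mine:

```python
def unaccounted_namespaces(cluster_ns, provisioned_ns,
                           system_prefixes=("openshift-", "kube-", "default")):
    """Return namespaces that exist on the cluster but were never
    provisioned through the portal, skipping platform namespaces."""
    return sorted(
        ns for ns in cluster_ns
        if ns not in provisioned_ns
        and not ns.startswith(system_prefixes)
    )

# A namespace someone created by hand would surface in this report:
flagged = unaccounted_namespaces(
    {"greetings-four-test", "mystery-app", "openshift-monitoring"},
    {"greetings-four-test"},
)
print(flagged)  # ['mystery-app']
```

In an OPA deployment the same comparison would typically run as an admission or audit policy; the value is having an expected state to diff against, not the particular tool.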
And we're saying we only want to see projects with a specific label. So when we provisioned these projects, we added the label to the project to make sure that when we came to this window, we only saw the projects that were in a position to be promoted. So I'm going to go ahead and add this to my cart. Check out. Now, this is all API-driven, so you could integrate this into whatever you'd like. And we'll say demo. And so now that request has gone through, the folks who are in charge, that need to decide such things, are hopefully going to be in a position to do it. And here we go. We have an open approval. Now, this is going to happen really quickly, so I'm going to bring this up. So I'm in prod right now. There's no image tag, right? This is our production, OC prod, our production environment. This is our test environment. So when I hit this button, we're going to find that this tag ends up in here automatically. So I'm going to hit Confirm Approval. And there it is. So if we take a look here: 52c27f, 52c27f. So we've now gone through that whole life cycle. Now it's up to OpenShift and the deployment config to see, oh, new image stream, let's go ahead and start rolling things out based on however we configured it: A/B, blue-green, whatever, all the different options that you have with OpenShift. And it's all been automated. I didn't use the oc command. I was demoing things inside of the UI, but I didn't actually have to go into the UI. As a developer, I never really had to go in there. I could use it as a staging ground to do some development testing. But I wasn't forced to actually do anything inside of OpenShift to get things going. And I was able to maintain my existing business processes. As the ops person, I didn't have to go in and manually create things. As the dev, I was able to do my job. And then finally, as the security person, I had access to all the different reports and automation needed to be able to assure the executives.
Yeah, the environment's secured. And we're following best practices and our compliance guidelines. So like I said, the code is on GitHub. It's not a turnkey solution, but I think it's a decent enough starting point anyway. Everything is open source. So it's right there on GitHub. Pull requests, suggestions, rants, all accepted. We love to talk and interact with folks. So whatever your thoughts are, we'd love to hear them. Yeah, so with that, I'll open it up to questions. Hi, well, thanks for that. It's been like DevSecOps month here. It seems to be top of mind to somehow conjoin all of these conversations across organizations, the dev folks, the security folks, and the ops folks. And I think this is kind of a nice segue, because whenever I see you, I always think of you as the pragmatic person. You're the person that actually goes out and implements it. I'm the person that's like, oh, bleeding edge, cutting edge, do this stuff. And you're really kind of down there in the trenches making this work for people who are deploying OpenShift and deploying apps and stuff like that. So I'm really thrilled to have you here with that kind of pragmatic approach to things. Making SonarQube and Jenkins and everything else work together very nicely. I totally appreciate this perspective. One of the conversations that I was having last week that kind of struck me, as we've been talking so much about security and compliance and audits, is how to talk to and explain all of this to your compliance people. And so it's these systems, and you made the point, it's not a Rubik's Cube, it's a Rube Goldberg machine. And when you bring this stuff to the audit team and the IT audit folks, a lot of them don't know what containers are. And a lot of them are new to Kubernetes, and some of them are new to OpenShift. So how do you work with those folks to get them up to speed and to trust? I mean, I know you are all about trust and the identity management and stuff like that.
So you've got a lot of background working with compliance people. What's your experience, or what's your coaching, on explaining this stuff? We get it, because we're developers, or some of you are ops people, or maybe you're security people. But it's the compliance officers who often look at this and go, well, this is just spaghetti, or a Rube Goldberg machine. Where's my audit report? How do I trust that all of this stuff is working the same? That's a great question. And it's really buzzwordy, but: culture. Understanding the culture, right? So there's this constant theme, DevOps, DevSecOps, it's culture. It's a two-way culture, though. It's not just that you need to understand how to do DevSecOps, but DevSecOps has to understand how everybody else does their job too. So when you go to the compliance person, for better or worse, the compliance person has a language that they speak. And so, while not directly related to security, in a past life as a consultant, I was a Project Management Professional. And I was on a project where I was told, walking into it, that the manager from the company I was working for and the project manager from the customer hated each other. Absolutely despised each other. That's always so helpful. So helpful. And every meeting devolved into a shouting match. Like, oh, this is going to be fun. And so I walked in, and after one meeting, I realized what the problem was. The customer's project manager was a PMP, and she was saying everything in PMP-speak. And the company's project manager was not a PMP. He was a good manager, but he wasn't a PMP. So he was just using different language. And so I sat in there, and I spent two hours just being like, well, no, this is an input to this, and just translating. And by the end of it, they were like best of friends, because they realized that they were talking each other's language. They were just saying it differently.
And so when you go to the compliance officer, or you go to the compliance group, it is a two-way street. A good compliance group is going to come to where you live, but you've also got to go to them where they live. And so they're looking at spreadsheets. They're looking at controls. They're looking at, here is our compliance framework: we have to use NIST 800-53, we have to use PCI. And so it's really important to be able to tie back to those things and just make it as easy as possible for people to say yes. But the worst thing you can do is walk in and be like, well, of course it's secure. Don't you see that? Yeah, so being able to speak that language on both sides is really where you're going to find your success. Yeah, yeah, I think that's key. I think the language thing is a really big part of it. And a lot of the security now is kind of baked in as well. It's like you can't deploy it unless it's secure, in some ways, like the container has been scanned and all of these things. And there's a lot more automation than, like, 10 years ago or so when I was doing IT audit stuff. But I still tend to get requests for things like, show me the log file, show me the audit report. And I think some of that you've got baked into OpenUnison and Tremolo Security. So I've been pretty impressed with being able to get good reporting and good explanations out of all of the Kubernetes platforms, but especially some of the identity management stuff. I think you hit the nail on the head when you talked about three different identity management systems going into one pipeline, and how do you merge all of these different approaches to identity management? And I think that's the sweet spot for Tremolo, being able to do that, which is wonderful, and which is why we're so happy you're here, part of our ecosystem. And excuse me while I drink a little more coffee out of my Red Hat swag cup.
We were talking about swag earlier today, and it was kind of funny, because it's like, do we miss swag? How do we distribute swag in the time of COVID? And Marc, if you're listening, said earlier, you know, I'm almost glad there is no more swag. And yeah, but this cup, and these t-shirts, are pretty rockin' awesome, but we have enough of them now. No, actually, let's go back a little bit to this, too, because one of the wonderful things about OpenUnison is that it is open source. And you mentioned your decision to turn OpenUnison into an open source project, or make all the code available open source, came back in 2015 when you were at a Red Hat Summit, and you probably drank some Kool-Aid from Red Hat at that event. What was the thing that really inspired you to move this to an open source model? Was it our awesome business model, or was it just, what was it that got to you? So it was a few things. I mean, I've been doing open source literally since I got into programming. My first job came from open source, my very first job out of college, and it's how I got into identity management. I had posted a project on, I'll date myself here a little bit, SourceForge, and I was still in college at the time, and it was just, on paper, some stuff around LDAP for another project I was doing. And a startup said, hey, can we pay you to do it? Sure. And then they hired me after school, and the rest, as they say, was history. But I mean, I've been doing open source for my entire career. So open source has always been really important to me, not just as a way of getting into the code, but kind of part of me. And so when we started Tremolo, we were not originally an open source company. And there were a lot of reasons for that. We tried some freemium stuff and didn't get a lot of traction there. And then I went to Red Hat Summit in Boston, I think it was 2013.
And I was just kind of taken aback by the community. A little of it was Kool-Aid, a little of it was getting caught up in the moment, but just the enthusiasm that people had around open source. And then I came to a bit of a business epiphany as well, where I realized there are people that, no matter how cheap the code is, I could charge a penny for this and they won't pay it. And then there are enterprises where I could charge a million dollars for this, or they could use it for free without support, and they won't use it for free without support. And so I came to the realization that while my open source customers, and I call them my open source customers because I treat them as customers, don't give me money, they give me feedback. They tell me what's going on. It's huge. I'd have to go through GitHub to quantify it, but the number of things where people have said, hey, now that this is in our environment, can you explain this better in the documentation? Documentation's huge, like feedback on documentation. If you're a user of open source, please, feedback on documentation is the single most important thing you can contribute to a project, because documentation is every bit as hard as, if not harder than, the code itself. Or, this feature has really worked for us, or, hey, I had a problem doing this. That feedback is gold, because when you go to the people who are gonna pay for it, you just want it to work right the first time. And so having somebody who isn't you beat on it for a while beforehand, that's huge, it's invaluable. And so we kind of came to that realization. So I think it was Red Hat Summit 2015 when we actually came out of stealth mode. That was our first Red Hat Summit, our first major conference. That's when we said, nope, we're open sourcing everything. And then it was strange, we never made so much money as when we gave everything away.
Yeah, yeah, Peter Larson just put up one of my favorites: given enough eyes, every bug is shallow. That is, I think, the mantra there too. And it's also like, you can't pay money to get that kind of feedback, you know. We're doing work now in the OKD working group, which everybody should watch. And it's doing two things. One, it's really working out a lot of the bugs in OpenShift and creating a wonderful, I call it a playground, people probably resent calling it play, but a wonderful space for also testing Fedora CoreOS, because OKD runs on that. So it's this cross-collaboration between two communities: the Fedora folks are putting all this time and energy into improving the bleeding edge of RHEL and Linux, and the OKD folks are really pushing the envelope in terms of making Fedora do amazing things. And what you see is all of these extra eyeballs on something that normally, maybe, we'd push out a release of OKD as just part of the pipeline process for OpenShift, you know, every release. We'd throw something into the origin repo and there you have it. But now there are extra sets of eyeballs working on it. And I'm personally not paying them. Other people might be paying them, but I'm not. And it's just huge, I think, and one of the wonderful things. And I think that's also the neat thing about OpenUnison: you do get the paying customers, because you're still dressing in swag, and I do think you're doing quite well, and we're really happy for you for that. But one of the things that's lovely about Tremolo Security is, if you're ever on the OpenShift Commons Slack channel, and if you're not there now, just join, go to commons.openshift.org and fill out the join screen and do the join process and we'll put you in there, Mark is there almost all the time. I think you have it set up to notify you.
That's why you get that little twitch when you... he's laughing right here. And people like Mark are always sharing the lessons learned, coaching people on how to do stuff, and it's really pretty amazing. We have about 545, probably about 550 if I got updated today, different member organizations in Commons. And every day I'm surprised and so grateful for the contributions. A lot of it is just feedback and peer-to-peer coaching and people showing up for sessions like this, whether they're internal Red Hatters who are on their lunch hour, because if you're on the East Coast right now, this is kind of your lunch hour; if you're on the West Coast like I am, it's your second-cup-of-coffee hour, and we need more. But it's really people's spare time. And spare time is not free time. You're giving us your time and energy to help improve the products, the code, and people's understanding of OpenUnison and identity management. I have learned so much from Mark that I didn't really think about: identity management across all these different pipelines and different arenas, how to bring all these things together and bridge these different identity management systems. And then to have you come in and talk about DevSecOps is like, yeah, okay, that just makes sense. But it's only because of our conversations in these areas of collaboration that I know to ask you to come in and do these things. So open source really kind of changes things for a lot of people and a lot of organizations, and it's wonderful, wonderful to have you here doing that. So tell me, we've blathered on about open source and how wonderful it is. Tell me, what's next for Tremolo Security? What's coming down the pipeline in your roadmap? So we're doing some interesting stuff. We're really starting to focus on a lot of what you've seen here around automating pipelines and automating your build-out.
I'm actually also in the process of writing a book, which is a lot of fun. Well, no, it's not a lot of fun, but I've got a really good co-author, so that makes it fun. So we're writing a book on Kubernetes, and my side of it might surprise you: it's focused on identity management and automation. That's been a fun process as well. And then we're kind of chugging through on trying to productize stuff like this a little bit more, to make it easier. So like our GitLab integration, making that simpler. We've got a couple of customers that have built things with us that are similar to this, not this exact implementation, but a similar process. So taking that and making it easier to integrate, and quicker to go from what my opinion is of a pipeline... Oops, we just lost your voice. Oh, can you hear me? Now I can hear you, you dropped just for a minute. Oh, okay. That was weird. You were at "your opinion of" and then it stopped. Oh, yeah, so I was saying, trying to make it easier to get from what my opinion of a system or platform might be to what your reality is. I'll never have the right opinion, right? But we can work on making that gap shorter and shorter, easier and easier to close. One of the things that never ceases to amaze me: before I started Tremolo, I spent over a decade as a consultant. And across all the different organizations I'd go to, some in the same industry, some across industries, some in different countries, everybody has the same goal in their enterprise, and they all need to get there in slightly different ways. And those slightly different ways are where you spend all your time and money. So that's really what we want to attack: how do we make those slight changes, which end up translating into huge budget costs, a lot easier? Making a project extensible for the edge cases, giving things a way to plug in, is really one of the tricks.
And in some ways Kubernetes is a good example, and I know OpenShift is Kubernetes, but the whole operator model allows people to extend the platform without having to make it a feature in Kubernetes. And so I think some of the ways we use the operator pattern have helped a lot. And one of the things that you've done is create an operator for OpenUnison. Yeah, it's really been kind of an interesting shift in the way we've done our deployments. So one of the things that makes OpenUnison a little bit unique is that it's part infrastructure, part business application. Identity management often maps to your enterprise's business processes. And on the infrastructure side, what's always made the OpenUnison deployment process a little bit harder in the past was certificates. We have to talk to directories, we've got to talk to SAML providers, all these different things we need to talk to, all those different APIs that we talked about back here: that's a certificate, that's a certificate, that's a certificate. All these different systems, right? And so managing that certificate-building process, combined with the fact that we're built on Java and nobody likes to deal with Java keystores, was really painful. So we started off with, okay, here's the documentation on how to do it. And one of our open source users wrote this immensely detailed document, like 30 pages long, on how they got it running. And I was like, that is the most shameful thing; I am ashamed that you felt you needed to write that. I'm thankful that you did it, but oh my God, I'm gonna go crawl under a rock, because I made something that was that hard to deploy. So we built a deployer: we wanted to build this thing that would deploy the artifacts for us. And so we built that, and it worked, except it, too, didn't really work.
And this was about the time of the CoreOS acquisition, when operators started to become a thing. And we said, well, we could build that into an operator. We had another customer at the time, and with the deployer it went from 30 pages down to like 10. And it's like, okay, this has gotten better, still not acceptable, but better. And then we said, okay, let's go with the operator model. And so we moved that artifact-deployment bit into an operator. And we found that our deployment process went from 10 pages to two pages. It really dropped it down, and it made it much more flexible for how we deploy. There's still, I would say, quite a bit to be settled around the operator process. We're still figuring out what the happy medium is between what goes into a CR versus what goes into the Kubernetes-native objects and OpenShift-native objects. But it's definitely changed the way we approach deployment. We totally appreciate the efforts that you've made. You've been on the bleeding edge of a lot of the different steps on the path of OpenShift from the early days till now. And it's been a wonderful thing to watch the rise of Tremolo Security and the appreciation for the work that you do in identity management and helping others understand it better. So I really encourage everybody to go out and give OpenUnison a look. Give some feedback on it. Come onto the OpenShift Commons Slack channel. Come to commons.openshift.org and join up, and we'll put you in there, and you can always find Mark, or wake him up depending on the time zone, since he keeps his Slack notifications on. And we will definitely be having him back here again. So please do reach out to him, and join us again tomorrow. We'll be back tomorrow with a talk on Tekton from Peter Klank over at IBM.
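The CR-versus-native-objects tradeoff Mark mentions can be made a little more concrete with a small sketch: the custom resource captures only the business-level decisions, and the operator reconciles everything underneath (certificates, keystores, Deployments) from it. To be clear, this is a hypothetical illustration, not the actual OpenUnison CRD; the API group, kind, and every field name below are invented for the example.

```yaml
# Hypothetical custom resource sketch -- NOT the real OpenUnison schema.
# The idea: the CR holds only what an admin actually decides, while the
# operator owns the derived, fiddly pieces (certs, keystores, workloads).
apiVersion: example.tremolo.io/v1
kind: IdentityProvider
metadata:
  name: enterprise-sso
  namespace: openunison
spec:
  # What to connect to; the operator, not the admin, generates and
  # rotates the certificates for each of these connections.
  ldap:
    host: ldaps://ldap.example.com:636
    bindCredentialsSecret: ldap-bind-credentials  # a normal K8s Secret
  saml:
    metadataUrl: https://idp.example.com/metadata
  # Everything below this API surface (Java keystores, Deployments,
  # Services, Routes) is Kubernetes/OpenShift-native and is reconciled
  # by the operator rather than written by hand.
```

The "happy medium" question in the conversation is essentially where to draw that line: put too much in the CR and it duplicates Kubernetes; put too little and admins are back to hand-editing native objects and 30-page install guides.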
He's gonna give an update on what's going on in the world of Tekton, a demo of how to deploy it on IBM Cloud, and pretty much take a look at the Tekton roadmap, and again have a bit of an AMA on Tekton with some other folks from IBM and Red Hat after that short talk by Peter Klank. So as always, Mark, we are totally grateful for all your contributions and all your efforts, and though we couldn't see you at the virtual Red Hat Summit in person, it was wonderful to be part of that with you, and someday soon we will see you again in person, hopefully, maybe at KubeCon North America in Boston. That'd be nice. Go out and get some lobster and have a nice dinner and talk with our Boston accents. I may be in Canada, but I'm originally from there, so when I want them, I can talk like that. So I've got my Canadian accent on today. So thanks again, everybody, for joining us. Thanks, Mark, for taking the time today and adjusting your calendar for us. And Chris Short, as always, thank you for producing this and making this happen. So we'll sign off now, and I will post the demo portion of this, along with some of the resource links, on our blog at openshift.com, as well as post the YouTube video up on RH OpenShift on YouTube, but the whole raw feed will always be there on Twitch as well. So there are lots of options for finding this content, but most of all, go to tremolo.io and check out the good work that Mark and his team are doing in the open. All right, take care, Mark. Have a wonderful day. Bye, Diane, thanks again. Bye-bye.