Hey, everybody. Thanks for joining another exciting episode of the OpenShift Commons Briefings. Today, we are very fortunate to have Marc Boorshtein and Brian Culhane join us here from Tremolo Security. We're gonna be talking about how to securely automate GitOps. Marc, welcome to the show. I have seen you at just about every major trade show for the last four or five years. You've been at our OpenShift Commons activities. You've been at the OpenShift Summit activities. You've been engaged with us on just about everything we ever do, webcasts, podcasts, videos. But I've never actually asked you, like, Tremolo Security, is it Tremolo Security? What's in the name? So it's pronounced tremolo, kind of an L sound there after the M. So a tremolo is a sound fluctuation. So if you have ever played guitar or even the organ, with the guitar, you hit the whammy bar and that fluctuation, that's a tremolo. And so where the name originally came from was when we were first starting the company, we're an identity management company primarily. That's where we got our start. And the very first product we were building was a web access management solution. So anybody familiar with service mesh will be familiar with this concept. There was a reverse proxy where you authenticated on the front end, we were actually gonna use Kerberos on the back end to lock down access, and we were gonna be this universal reverse proxy. We wanted to get away from agents, which is a big thing that our competitors were doing. And so I wish I could say I was a marketing maven, but I am not. The first name for the company was gonna be Auto IDM. Simple, kind of talked about what we did. My co-founder, who actually is quite good at marketing, came back and said, no, that's a terrible name. It's clunky, I'm gonna think of something else. So he came back and he said, well, it's a web access manager, whammy bar, Tremolo Security. I was like, oh, that's a terrible name. Nobody's gonna know what it means.
You know, my co-founder is a friend of mine from Boston and he's got a very thick Boston accent. He can't pronounce it correctly to save his life. I was like, ah, that's just not gonna work. Go back to the drawing board. And so that night I spent a good three hours trying to find a domain name for the company that had the word identity in it. I tried different languages. I tried different combinations, acronyms, everything. Couldn't find a thing. It's like, so who is TremoloSecurity.com? It's open, cool. Now it's the name of the company. So that's how we got the name of the company and the guitar logo and everything there. How long ago did you start the company? Must have been about four or five years ago, six? No, we actually started the company in 2010. And we kind of flew under the radar somewhere between a hobby and a science project for a few years while we were getting our feet under us, building out the technology. And then in 2013, we got our first customer, a public safety environment here in the DC area, which kind of catapulted us. And then, not that we were ever really in stealth mode, but we didn't really start getting out there until 2015, which is when we had our first booth at a Red Hat Summit. That was kind of our coming-out-of-stealth-mode party, that first booth at Red Hat Summit in '15 in Boston. And it's also when we became an open source company and we said, you know what? Open source is gonna be the way to go. It's gonna be the way to get to the most people. So we decided to go ahead and open source it, and it's been off to the races since then. And before starting the company, you were where? So I was a consultant at PricewaterhouseCoopers. So if folks are familiar at all with the audit industry, they call them the Big Four: PricewaterhouseCoopers, which is where I was, Deloitte, Ernst & Young, KPMG.
And so I was an identity management consultant there for about seven years. And if you name a vendor and an industry, there's a pretty good chance I was involved with some kind of cross-section of doing an identity management deployment there. And so what we found when we were doing these deployments was that we were spending a lot more time customizing this beautiful vendor demo to what the customer needed rather than actually implementing their business logic. So the vendor would walk in and say, here's this big gorgeous demo with all our opinions of how you run your identity management. What's interesting about identity management versus a lot of other technical disciplines is it's very closely bound to the business. It's got a tie to the way the business is set up, because often you're saying, okay, our management process is a certain way or an organizational process is a certain way. A lot of enterprises are very siloed organizationally, so your technology has to match that siloing. And so we would spend most of our time kind of pulling that demo apart and reconstructing it in a way that would work for customers. And so this was before the term microservices really existed. But we said, the better way to do this, instead of building this monolithic identity system that you then have to pull pieces out of and reconnect, is to start from basics and build what would today be described as microservices for identity management. So your web access management, your SSO, your virtual directory, your user provisioning, your APIs, your self-service, all those different things. And so we built that out. And what became really interesting was, as we were building it out, it just meshed really well with the OpenShift and Kubernetes world, because we had built these small building blocks where we said, okay, here's your Lego set. Here's a picture of what it could look like. But here are the 25 other designs that it comes with that you can do, like when you buy Lego sets, right?
So here's your Lego set. Go ahead, build it however you want. And it really turned both the implementation time on its head and the costs on its head. You know, one of our rules of thumb was that your ratio of professional services to licensing dollars is gonna be two to one. For every dollar you spend on a license for software, you're gonna spend two dollars to implement it. We wanted to turn that on its head. And we found that by going with a microservices-like approach, a building blocks approach, our implementation times just bottomed out. We had one customer, as an example, where we replaced a long-standing legacy system that took them, I think, three years to get implemented. We had the proof of concept up and ready to go in three days to replace it. And at that point it was just provisioning hardware and applications and whatnot. So how have things changed since you started the company, now with Kubernetes becoming mainstream and identity management in Kubernetes? What does that mean? So a lot has changed. We got started in Kubernetes back in the 1.3 days, maybe 1.8, whenever RBAC first came out and OpenShift first came out. In fact, we had a booth at the first KubeCon NA in Seattle four years ago, when it was still small enough to be in a hotel lobby. Not small enough to do that anymore. Right. And so we actually originally got involved with OpenShift and then later with upstream Kubernetes and with RBAC. So when you go to the Kubernetes authentication page and it talks about OpenID Connect, we rewrote all that documentation and donated it back to Kubernetes. We then went ahead and started to look at the way OpenShift and Kubernetes deal with identity. One of the things that's really changed over the last several years is this notion of what goes into your cluster. And we're gonna talk a lot about that during the demo. It's more than just Kubernetes.
It's more than just OpenShift, right? I mean, you've got monitoring systems. You've got your GitOps system. We're gonna talk about Argo CD today. You've got your build system. We're gonna talk about Tekton. You've got your code system. We're gonna be using GitLab for that today. And all of these systems have their own concept of identity. And so specifically when you're looking at the enterprise world, most of the implementations are multi-tenant. You know, most enterprises are looking for multi-tenant solutions. Most implementations I've seen where the tenancy is at the cluster level, that doesn't scale real well from a management standpoint. There are advantages to it, and obviously you're going to have multiple clusters. But when all is said and done, the management process of multi-cluster doesn't scale when you're trying to have a cluster per application. So multi-tenancy becomes really important. And so identity is, you know, it's kind of like the Force in Star Wars. It binds everything together. You know, Argo CD has its own internal RBAC system. It doesn't work with the Kubernetes RBAC system. It's got its own thing. GitLab has its own identity system. And then of course OpenShift and Kubernetes have their own identity system. So if you're going to provide a platform for your developers to be able to access these systems securely, you want to, you know, really get the IT people out of the room, out of the way, right? You know, the goal is that the people who own OpenShift are not involved day to day in applications, right? One of the things in the identity world I always say is, if I'm in the room, something's probably gone terribly, terribly wrong. You know, people can't log in, people are unhappy. It should be the same way with the people who run the Kubernetes and OpenShift deployments.
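To make the "identity binds everything together" point a bit more concrete: Argo CD, GitLab, and Kubernetes can all consume the same OpenID Connect id_token, and each maps the token's groups claim into its own authorization model. Here is a minimal Python sketch of pulling that claim out of a JWT payload; signature verification is deliberately omitted, and the claim name and group values are illustrative defaults, not anything specific to Tremolo's products.

```python
import base64
import json

# Minimal sketch: extract the groups claim from an OIDC id_token (a JWT).
# NOTE: a real consumer (kube-apiserver, Argo CD, GitLab) validates the
# signature and issuer first; this only shows where the groups live.

def groups_from_id_token(jwt: str) -> list:
    payload_b64 = jwt.split(".")[1]               # JWT is header.payload.signature
    payload_b64 += "=" * (-len(payload_b64) % 4)  # restore stripped base64 padding
    claims = json.loads(base64.urlsafe_b64decode(payload_b64))
    return claims.get("groups", [])

# Build a fake, unsigned token just to exercise the parser.
payload = base64.urlsafe_b64encode(
    json.dumps({"sub": "mlbim6", "groups": ["k8s-cluster-admins", "argocd-dev"]}).encode()
).decode().rstrip("=")
token = f"eyJhbGciOiJSUzI1NiJ9.{payload}.fake-signature"
```

Each system then maps those group names into its own rules: a Kubernetes RoleBinding subject, an Argo CD RBAC policy line, a GitLab group membership.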
If you're running OpenShift or Kubernetes and you're in the room during an issue, something's gone really, really wrong. Ideally your application owners are managing all of that process. So speaking of being in the room, we have somebody else from your company here, Brian. And he's sitting there down in the bottom left corner of my screen. Brian, who are you and how did you get involved with the company? Hey, great question, Mike. Thanks for having us on this morning. I go back 20-plus years in the identity access management space with some very large programs of record. The early days were extremely challenging to enable access to uniquely different data repositories. Suffice it to say, it was painful, time-consuming and expensive. Many development hours had to be built into these projects. Fast forward to 2011, I met an innovator named Marc Boorshtein who had an answer for this challenge. He had started his own company, Tremolo Security. Most intriguing was his subject matter expertise, successful consulting background and the fact that he had developed his own IP to secure the authentication process and speed the implementation phase. Huge game changer for organizations leveraging their existing infrastructure to deploy a solution in weeks, not months. This resonates with all organizations concerned with cost savings, dedicated IT resources and securely enabling privileged access for their internal teams, contractors and partners. Okay, fair enough. Back to you, Marc. I couldn't help but notice when you were talking just a couple of minutes ago that there was something in the background there, and I thought maybe you might wanna talk about something exciting that's gonna be coming out here. Is there a book coming out, if I may ask? There is. I'm gonna show off the book cover here. I'm gonna do a little bit of shameless self-promotion. So I co-authored a book with my partner in crime here, Scott Surovich, on enterprise Kubernetes.
So when we started talking about writing this book together, we found that there was a big gap in the knowledge out there, the written knowledge, I guess, on how you implement Kubernetes in enterprises, which are really unique from kind of your more consumer-facing companies. In an enterprise, you have to work around the organization as well. In most enterprises, you might only have a dozen, maybe even fewer, of these massive, truly enterprise-wide applications, right? Your ERP, your messaging, things like that. But then most of your applications are these siloed systems that might have a couple of hundred to a couple of thousand users. And so for the people who own it, it's the most critical application in the world. They're responsible for keeping it up and running, and their paycheck depends on it, right? Their bonus depends on it. So we wanted to write a book with that in mind. So heavy focus on identity. We have full chapters on authentication, RBAC authorization, pod security policies and OPA Gatekeeper. And then a lot of the stuff that you might think is kind of mundane but is really important to managing that kind of a diverse environment: backups, logging and log aggregation. And then what we're gonna demo here was one of the most fun things that I've done in a while. In our last chapter, we said, we're gonna build a platform. We're gonna talk about how you build pipelines and then build a platform, with the goal of not having to have a Kubernetes admin building these bespoke clusters. Everything's automated. Everything's done through GitOps. What we really wanted to handle with this book was to say, look, this book is more than just theory, right? A lot of books are theory, and they give you great information, but it's not necessarily always in a practical context. We have cookbooks, which give you really specific recipes.
They'll give you great ideas and great knowledge, but they might not relate directly to what you're doing. We wanted to kind of go in the middle, where it's a practical book with a lot of theory in it. So the thing's huge. I think it's 650 pages of Kubernetes. And there are labs in most of the chapters that you can go through, and everything's open source, it's up on GitHub. And so we had a blast. It's coming out on November 6th, and then anybody who wants to get their hands on it, we have a discount code. We'll have it on the last slide we put up. It's 25 Kubernetes. You go to Amazon and order it there. You'll get a 25% discount. I just linked it in the chat as well, but we will have that up on the last slide, as you said. Awesome. 650-some-odd pages. I can't wait to pull that down. Yeah, it can be used as a doorstop if you really don't want to read it. Right, right, right. Well, demo time? Can you show us something? And hopefully, there's going to be lots of terminal windows and manually editing config files, or prove me wrong. I promise we will not manually edit a single config file. So I'm going to go ahead and share my screen. And everybody can see what's going on here. We've got a lot going on in this demo. So when we built that final chapter of the book, building a platform, we automated the deployment of Argo CD projects, GitLab projects, Kubernetes projects and Tekton so that they're all integrated, and I made a diagram of all the different objects that we had to create and their relationships. And this is just for one cluster, because it was a book, right? It's not a production system. 20-plus Kubernetes objects. I think I had about 45 GitLab calls to create the various projects, forks, et cetera. And a handful of calls to Argo CD. Argo CD has a combination of its own API, plus it's reliant on Kubernetes. There are some CRs in there to create all these relationships.
And then you think about the automation part of it. You don't want to commit code to GitLab and then go into Argo and say, let's trigger a sync, or commit code to GitLab and then run a CLI command to fire up a pipeline. You want everything to just happen. So in the background, that's all built on webhooks. So you've got to create the webhook. You've got to create the secret. You've got to provision the secret. All these different things. It isn't rocket science, but there's just a lot of stuff to do. And so what we're going to show you is what the results of that are. Because it's one of those things that I could probably sit here for two hours and go through each little detail. Your audience will not enjoy that. Let's dive right into the demo. So we're going to show how we're going to integrate Argo CD, GitLab, OpenUnison and OpenShift so that you have one kind of seamless process. As you're deploying your applications, you don't want to get your OpenShift team involved in having to set it up. So what we're going to do is we are going to provision an application through a self-service request, create all the objects in GitLab, create the objects in Argo, create the objects in Tekton, link everything together, and then we'll show the progression of how that all comes about. So the first thing I'm going to do is I am going to log in as my user. Now, a lot of enterprises have SAML2. If you don't have a testing identity provider to work with, we actually provide one. So it might be something that you're interested in. Go to tremolo.io. So we're going to sign in as our user. And we're going to go ahead and create a new OpenShift application. Let's make sure that we get the right name. So we're going to submit the request.
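A quick sketch of the webhook plumbing just described, in Python. The automation has to mint a random secret, register it with the GitLab webhook, and store it where the pipeline's trigger listener can read it; GitLab then echoes the configured token back on every delivery in the `X-Gitlab-Token` header, and the listener compares it in constant time so the endpoint can't be hijacked. The function names below are hypothetical; only the header behavior is GitLab's.

```python
import hmac
import secrets

# Sketch of the per-project webhook secret the automation provisions twice:
# once into GitLab (the webhook's secret token) and once into the cluster
# (a Secret the trigger listener reads).

def new_webhook_secret() -> str:
    return secrets.token_hex(32)  # 64 hex chars of cryptographic randomness

def delivery_is_authentic(stored_secret: str, header_token: str) -> bool:
    # Constant-time compare so the secret can't be recovered via timing.
    return hmac.compare_digest(stored_secret, header_token)

secret = new_webhook_secret()
```

With that in place, a commit to GitLab fires the webhook, the listener validates the token, and the pipeline starts with no human in the loop.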
Now, this is going to be different from what you might be used to, where you have to request access via an email, the email shuffle of, hey, can you create this project? And you get these bespoke clusters. We're going to avoid that. So now that the request is in, an admin would have gotten an email that says, hey, there is somebody waiting on a request. We're going to log in as an admin user. And you'll see that we got this open approval. Here is the request that came in. So let's review it and approve it. Of course, this workflow can be customized however you need it. So we've submitted the request, and take a look up here at the logs streaming through OpenShift. And you're going to start to see a lot of provisioning action here. There we go. So when we wrote the book, we built this diagram to link all of the different objects we had to create. There were 20-plus Kubernetes objects that had to get created, probably about 40 GitLab API calls that were made. The Tekton objects are Kubernetes objects. And then Argo CD has kind of a mix of Kubernetes and its own API. You had to create all the webhook connections. So when you commit some code into GitLab, you want that to automatically trigger your workflow or your pipeline. You don't want to have to manually kick that off. So that's going to take a minute to provision all those different objects. We're not just provisioning objects, we're creating SSH keys on the fly. We're creating secrets so that your webhooks can't get hijacked, all sorts of good stuff. So let's take a look here, see where it is. Nope, we're still provisioning, I think. Nope, I think we have finished provisioning. And just to kind of prove my point of how many objects got created. Oops, I actually didn't want to log out. Let's come over here and look at our audit reports.
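As a rough illustration of that fan-out (the object names and the exact object mix are made up for this sketch, not OpenUnison's actual output), one approved request per application might expand into namespaces and role bindings across dev, build, and prod:

```python
# Hypothetical fan-out of one approved onboarding request into
# Kubernetes manifests. Illustrative only.

def onboarding_manifests(app: str, owner_group: str) -> list:
    manifests = []
    for env in ("dev", "build", "prod"):
        ns = f"{app}-{env}"
        # Each environment is its own namespace, i.e. its own tenant boundary.
        manifests.append({
            "apiVersion": "v1",
            "kind": "Namespace",
            "metadata": {"name": ns},
        })
        # Bind the application owners' group so they manage the namespace
        # themselves and the cluster team stays out of the room.
        manifests.append({
            "apiVersion": "rbac.authorization.k8s.io/v1",
            "kind": "RoleBinding",
            "metadata": {"name": "app-owners", "namespace": ns},
            "subjects": [{
                "kind": "Group",
                "apiGroup": "rbac.authorization.k8s.io",
                "name": owner_group,
            }],
            "roleRef": {
                "kind": "ClusterRole",
                "apiGroup": "rbac.authorization.k8s.io",
                "name": "admin",
            },
        })
    return manifests

objects = onboarding_manifests("python-test", "python-test-owners")
```

Add the GitLab projects, forks, deploy keys, webhook registrations, and Argo CD calls on top of this, and the "20-plus objects, 40-plus API calls" count adds up quickly.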
So we're not just creating these things, we're actually creating an audit trail. That way, when it comes time to do an audit, you can tie all that infrastructure we just provisioned back to a single request and who approved it. So let's come down here to our actual provisioning, and you can see this is our new application. Here are all the objects that we created. We created bindings, role bindings, groups, all sorts of stuff that connects everything together so that this all works seamlessly. So now that we are there, let's go ahead and log in to GitLab. And we're going to log in as our original user, MLBIM6. And you'll notice that we just have a handful of projects here. We're also going to log into Argo. And we see the same projects here. So let's talk about these projects real quick. So let's start up here: we have our application build project. This is where the source code goes. This is where our actual microservice or application, whatever it is, is going to go. That does not have an Argo CD project. So that's just source code, right? Argo CD is purely for the operations side of things. So we're going to have a pipeline project. The pipeline project has all of our Tekton objects in it, because there are webhooks, there's security, you want to keep that isolated. And so that project does in fact have an Argo project. And we can see here there's a couple of stub objects that we created to support the webhook. We then have our operations projects. So we have two operations projects. The first operations project is our production project. And that also is linked to an Argo CD project. There's not a lot in there right now because it's an empty project. Additionally, oops, we have a dev project. Now, what's important about the dev project is it is simply a fork of the prod project.
So when it comes time to move into production, what we're actually going to do is a merge request into the production repo, and let Argo CD then go ahead and kick in and provision everything. So the first thing we're going to do is check in our source code. We're going to go to the application and I'm going to create a fork. It's forking, okay, good. So let's go ahead and check out our fork. Let's go into our application. So nothing spectacular there. We've gone ahead and we've put some code into our fork of the application. So we're going to put that on hold and come back to that. So next we're going to come to our operations code. So let's come to dev operations. We're going to come here to our Argo project as well. Now, there's not really anything in here now. In real life you're going to want to fork this to do some work first, but for this demo we're going to keep it simple. So let's go ahead and clone that, and we're going to go ahead and copy our operations code. So what do we mean by operations code? We're talking about deployment, right? Nothing too crazy. We'll take a look at that in a second. So let's go ahead and push. Now we've pushed it into our dev repo, and in a few moments here we can see that Argo CD is already on the job of syncing it in, but we got this little broken heart. That is because our repository points to a container that doesn't exist. So we take a look here and it doesn't have a tag on it. We've got to add that tag. So let's go ahead and do that next. The next thing we're going to do is we need to check out our build. So this is actually going to do the work of building our code. So let's go ahead and copy that, and just to show what's going on, let's go into the build project. So again, we've got kind of our stubs here, but we've got to build that out. We've got to add a pipeline. We've got to add bindings. We've got to add the configuration for the container too. So let's go ahead and do that now. Let's copy.
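For reference, the Argo CD side of that wiring is an Application custom resource that tells Argo CD to keep a namespace in sync with an operations repo. Here is a sketch built as a plain Python dict, with the repo URL, project, and namespace names as placeholders:

```python
# Sketch of an Argo CD Application CR. Repo URL and names are placeholders.

def argocd_application(name: str, repo_url: str, dest_namespace: str) -> dict:
    return {
        "apiVersion": "argoproj.io/v1alpha1",
        "kind": "Application",
        "metadata": {"name": name, "namespace": "argocd"},
        "spec": {
            "project": name,
            "source": {
                "repoURL": repo_url,
                "path": ".",
                "targetRevision": "HEAD",
            },
            "destination": {
                "server": "https://kubernetes.default.svc",
                "namespace": dest_namespace,
            },
            # Automated sync is what makes "merge and walk away" work:
            # Argo CD notices the new commit and applies it on its own.
            "syncPolicy": {"automated": {"prune": True, "selfHeal": True}},
        },
    }

app = argocd_application(
    "python-test-dev",
    "https://gitlab.example.com/ops/python-test-dev.git",  # placeholder URL
    "python-test-dev",
)
```

The "broken heart" in the demo is Argo CD reporting that the synced manifests reference an image that doesn't exist yet; the sync itself is already working.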
Let's take a real quick look to make sure that the, I think it's the trigger template, is pointing to Python test 11. Okay, good. So now what's going to happen is we're going to add it. We're going to commit it. We're going to push it. So our Tekton pipeline, and we're going to talk a little bit about what that pipeline is doing, is being pushed into Git, at which point, there we go, Argo is picking it up and saying, yeah, let's go ahead and deploy all this stuff. So we want to wait until everything is green. So everything's being synced in. We now have a working webhook in a build environment. So let's go ahead and push some code, shall we? So we're going to go back to our fork and show you there's nothing up my sleeves, right? No pipeline found. So let's go ahead and create a merge request with our application code base. And part of that GitOps flow is, once that's been submitted, GitLab says, okay, somebody's got to approve it and merge it. So once it's been merged, you can see almost immediately Argo CD is off to the races and starting to do its thing with a build. So let's take a look at our pipeline. What am I doing wrong here? Python test 11. There we go. So we're off and running. So let's go ahead and, oops, pipeline run. Go ahead and watch this as it goes. So that's off to the races, and we have three tasks in our pipeline. The first one generates an image tag based on a timestamp and also saves the commit hash that triggered the build. The second one actually builds the container. So we're using Kaniko in this instance, that's a tool from Google, very similar to Podman, same type of idea, building an image without having to have a daemon. And then finally, the last thing we're going to do is update our dev repo, and in fact, if we come here, let's come over to dev. We're going to update our dev repo so that we're patching, there we go.
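That first task, generating an image tag from the build timestamp plus the triggering commit, can be sketched in a few lines; the real Tekton task's exact naming scheme may differ:

```python
import datetime

# Sketch of pipeline task one: an image tag built from the build timestamp
# plus the short hash of the commit that triggered the pipeline, so every
# image can be traced back to its source.

def image_tag(commit_sha: str, now: datetime.datetime) -> str:
    return f"{now.strftime('%Y%m%d%H%M%S')}-{commit_sha[:8]}"

tag = image_tag(
    "9f8e7d6c5b4a39281706f5e4d3c2b1a098765432",
    datetime.datetime(2020, 11, 6, 12, 0, 0, tzinfo=datetime.timezone.utc),
)
# tag is "20201106120000-9f8e7d6c"
```

The timestamp keeps tags unique across rebuilds of the same commit, and the hash fragment gives you the traceability back to Git that the demo calls out.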
So we can see here the updated commit, and what we did was we checked out the code from the dev repo and patched it with our timestamp. So now our image has a timestamp on it, and it also has the commit of the code that triggered the build, so we now have something to reference. And so if we take a look here and do a quick refresh, you can see that it's syncing, and our dev instance has picked it up and now it's running a new pod. So this little broken heart is going to go away here in a minute. So there was no API call, right? The pipeline checked out the code, made the update, pushed it back in. That was the final step. And all this requires a lot of automation, because you need to build SSH keys to do this, credentials, different things that you don't want to be doing manually. You'll also notice so far I have not actually run an OpenShift command to do anything other than perhaps look at some logs. So that's kind of firing up here, and we want to wait for the circle to finish and for us to be able to say, yep, it's running a good version of the pod and the broken heart has gone away. What's really great about this, something to point out, is that I as the application owner can see everything that's going on. I don't have to get the OpenShift team involved. I can monitor what's happening in Argo CD because I have secure access, and we can see down here we're now at a green heart, the old one's gone. So we're happy, everything's running. So let's go ahead and push this into production. So how do we push it to production? Well, it's GitOps, right? So let's go over to production, and there's an error because there's no operations code. We can fix that. So let's go ahead and push this into production. So let's create a new merge request from dev into production, and we're going to commit the merge. So now Git is our source of truth for all changes to operations. We've run no commands in OpenShift.
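The patch step the pipeline just ran, checking out the dev operations repo and pointing the Deployment at the freshly built tag, boils down to an edit like this (a pared-down, hypothetical single-container Deployment; the real task also commits and pushes the change so Argo CD picks it up):

```python
import copy

# Sketch of the final pipeline task's edit: point the Deployment at the
# newly built image tag.

def patch_image(deployment: dict, image: str) -> dict:
    patched = copy.deepcopy(deployment)  # leave the checked-out copy intact
    patched["spec"]["template"]["spec"]["containers"][0]["image"] = image
    return patched

deployment = {
    "apiVersion": "apps/v1",
    "kind": "Deployment",
    "metadata": {"name": "python-test"},
    "spec": {"template": {"spec": {"containers": [
        {"name": "python-test", "image": "registry.example.com/python-test"},
    ]}}},
}
patched = patch_image(
    deployment,
    "registry.example.com/python-test:20201106120000-9f8e7d6c",
)
```

Because the change lands as a Git commit rather than an API call, Git stays the source of truth and Argo CD does the rollout on its own.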
Everything has been external to OpenShift. And so we take a look here in prod, give it a second, and there we go. It's launching up and it's pulling down, and we can see that it's actually launching and we're all green. So we've run the whole gamut, right? We went from requesting that an application be created, provisioning all the infrastructure automated, we went ahead and committed our code, a pipeline built everything, and we didn't have to get the OpenShift team involved for any of it. Just to show you there was nothing up my sleeves, I'm going to go ahead and log out and show you here we only have the Python test application, right? So let's log out here and let's make sure we're fully logged out. Now we're going to sign back in to GitLab. This time I'm going to sign in as my superuser. So you can see all the projects I have access to that my application owner didn't. That's because we're using the power of identity to know who has access to what and to bind these different systems together. Test 7, 4, 2, let's log in via OpenUnison here. Here are all sorts of projects that the user doesn't care about. So we're not just leveraging automation, we're leveraging security to give you a better approach that is more hands-off for your team and is far more satisfying for your customers. What does it mean for customers? So what's the business impact of identity management in a multi-cloud environment? How does that affect the bottom line for customers? The biggest effect for customers is the fact that their IT departments, or the people who own their infrastructure, don't have to get involved with the day-to-day of applications. So a customer that I built a similar solution for, we went live about a year ago and I hear from him once in a blue moon. A hedge fund over in the UK, small as these things go, I guess a small hedge fund is like five billion dollars, and their developers wanted to be able to launch microservices.
But they didn't want to have to get the infrastructure side of the IT department involved. So we set up a very similar solution for them where their developer would log in and say, create me a project. Now, we based the permissions off of their Active Directory, so there was no request approval. But based on that, it provisioned out everything: it provisioned out the Git repos, it provisioned out the build systems, it provisioned out the namespaces, it tied everything together. And so the developers were then able to say, hey, I'm going to just go ahead and start pushing code, and we actually gave them a little button so that they could push from the OpenUnison portal rather than from inside of Kubernetes. And he's like, yeah, I don't get phone calls. I mean, that is one less thing that that guy has to manage, because at the end of the day we all have better things to do than building bespoke clusters for people. So it really becomes a force multiplier. People are happier because they're getting their job done, they're not complaining, oh, I don't have access to this, I don't have access to that. And really, you're spending hundreds of thousands, maybe even millions depending on the size of your organization, on this automation solution. Do you really want to manually onboard people? It doesn't work well. So it really becomes a big force multiplier there. Earlier in the discussion you interchanged the words Kubernetes and OpenShift, and I would imagine that your technology and your solutions work on any Kubernetes-based implementation, not just OpenShift. Is that correct? Sure, yep. But we have been working with you folks for years.
You guys are a member of OpenShift Commons, you're a Red Hat certified provider for OpenShift, which means that it's gone through the Red Hat internal testing and blessings, so when customers want to use it, they know it's tried, tested, trusted and they can get support from Red Hat and your company at the same time. So I did want to put that gratuitous plug in there, so thank you for doing that. It makes a big difference for customers when they want to go and run these things, absolutely. We're a security company. We're going to the CIO and saying, we're going to help make your infrastructure more secure, and being able to say, and we're certified on the platform that you're using. We're here in the DC area; back in the before times when we used to go places, you'd go on the metro and there are signs everywhere: 100% of government agencies run Red Hat Linux. That's not hyperbole, that's true. And so in the enterprise, being able to say, yep, this is certified on the platform that you're using, we had to go through their rigorous process, they had to review it. We're also in the Red Hat Marketplace, so that's an additional avenue to be able to get access to it. But we made big bets on OpenShift very early on in our Kubernetes journey, and those bets have definitely paid off. So, war stories. Everybody loves a good war story. You've been in the security business for quite a long time, both at your company and previously. Tell us one of your favorite war stories that you've addressed. So honestly, one of my favorite stories is actually from before my time at Tremolo, and it's one of the inspirations for the way we approach infrastructure. While I was still with PwC, I was on a project where we were helping a customer, a name brand that everybody's heard of and everybody's used, migrate between different identity vendors. I was brought onto the project and I was told, look, you have a limited number of hours to get this done.
There were 400 applications across six continents, and the plan was: here are three or four developers in India, just have them go and manually update the configuration. That sounded like a terrible idea. I didn't want to do it, they weren't going to want to do it, and it was going to be error-prone and cause all sorts of problems. So I said, "Look, I'm telling you we can automate this. We're going to come in under budget, the customer is going to be happy, they're going to have a much better product, and they're going to have a much better time." So I got with the team in India, and you know there are always issues with different time zones and whatnot, but we were all able to work together and build this amazing system. And this was before DevOps; DevOps wasn't a word yet. We built a system that queried the APIs of the old system, constructed a framework, tested the old system to make sure that framework worked the way we thought it did, provisioned that automatically into the new system, tested the new system, and then turned it all on automatically.

The project went really, really well, for the most part. We did have one issue: we accidentally turned off SAP in Japan, which didn't go over real well, but we fixed that pretty quickly. Then, once we got that figured out, we had to cancel the original go-live date, because our project owner from the customer had to have his appendix removed in an emergency appendectomy. (Appendectomy? There you go; I'm not a doctor, but I play one on TV.) He said, "No, we're not going live with this." Okay, well, I was promised to another project, so I had to go, and they delayed the rollout three weeks. Then I got a phone call one day: "Yep, done." I said, "Wait, really?" "Yeah, no bridge call. We threw the switch and everything just worked." We came in under budget, we automated everything, and people were happy with it. Of the 400 applications, they
were able to automate something like 395, and so that was a big inspiration for the approach we ended up taking when we started Tremolo and got into the automation space.

Hmm. You mentioned early on today, when we first started talking, that you're not the marketing guy, that you're the big brains, the techie behind it all, and that your co-founder is more the marketing genius of the company. He's not here today. What would he want you to talk about for the audience that you haven't covered? What I'm trying to prevent is that the second we end the show, you get a phone call from him asking, "Why didn't you talk about XYZ?" So here's your opportunity to prevent that phone call.

I guess the biggest thing I'd want people to know is that everything I showed is not vaporware. It's real; it's out there. We're constantly involved in the Kubernetes and OpenShift world. If you're asking questions in OpenShift Commons or in the Kubernetes Slack, we're going to be there, and we're here to help. We're experts; I'm a CKAD, I went through that certification process. The dark, ugly truth of most enterprise software is that the people who write this stuff never have to use it. We use our own software with customers. We're out there deploying it and making changes as customers need them. So we're not just building in fluff features that nobody is going to use; we're building features that people are actively using and actively need. And so as you're going through that journey of figuring out how you're going to automate your infrastructure, think about Tremolo Security; think about how you're going to automate your infrastructure with security in mind.

Okay, so your book is coming out on, would you say, November 4th? Correct? November 6th. November 6th, okay. And we have the 25% discount code that's going to be available for any of the listeners of the show. It's good in the U.S.
only until November 15th. What I will say, because I know a lot of folks, especially in the open source world, care very much about these things, is that if you go directly to Packt, who's the publisher, once it's available, it's DRM-free.

Okay, I linked it in the chat for those people who are on the bridge here. Why don't we go ahead and put that up on the screen so everybody else, everyone who's going to be watching this later, can get that URL.

And of course you can find us on Twitter, and on our GitHub repo, where we have all sorts of fun stuff even beyond identity. For example, we have an SMTP black hole that we keep updated, which is really useful.

We wish you all the best with the book; I will obviously be pulling down a copy using my 25% discount code. Where are we going to see you next? In this world where everything is virtual these days, we're not going to see you physically in person; we're not going to see you at Commons or the Red Hat Summit. Where can we see you next?

Actually, I'm going to have the great pleasure of giving a lightning talk at the KubeCon NA virtual security day. I'm going to be talking about why you should be using OpenID Connect with your clusters, and not certificates, for authentication. That'll be my next big thing.

That was November 17th, right? I think the 17th is the security day; it's a co-located event, correct, before KubeCon starts.

Good. Well, on behalf of everyone here on the OpenShift Commons briefing, I'd like to say that we really appreciated having you join us today. I think we'd love to have you come back again; when's your next book coming out? We'll bring you on. Sounds great, and thanks so much for having us, Mike.

Okay, great. Well, thanks, everybody. I hope that this was informative and useful. You can tune in every Wednesday at noontime, and we have a full
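The OIDC-versus-certificates argument mentioned here can be made concrete with a kubeconfig user entry. The issuer URL, client ID, and tokens below are placeholders, and this uses the built-in `oidc` auth-provider that kubectl supported at the time of this recording (newer kubectl versions use exec-based credential plugins instead):

```yaml
# Hypothetical kubeconfig user entry authenticating with OpenID Connect
# instead of a client certificate. All values below are placeholders.
users:
- name: alice
  user:
    auth-provider:
      name: oidc
      config:
        idp-issuer-url: https://idp.example.com/auth/idp/k8sIdp
        client-id: kubernetes
        id-token: ID_TOKEN_HERE          # short-lived JWT from the IdP
        refresh-token: REFRESH_TOKEN_HERE
```

Unlike a client certificate, which Kubernetes cannot revoke, the `id-token` here expires within minutes and access can be cut off centrally at the identity provider, which is the core of the case for OIDC over certificates.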
lineup of software partners booked out; I think we're booked all the way into March at this point. All of our software partners with Red Hat Certified Operators for OpenShift are going to be here on the show, talking about their technology, their products, their war stories, and hopefully some more books. Thanks again, Mark, and we are signing off for the day. Thanks for joining.