Thanks, Rob. All right, folks. So the next panel is going to be our Operator Excellence panel, and we're going to bring them on board. These are five ISV partners that have developed operators, and not just any operators, but operators with really advanced capability levels. And not only that, they're also Operator Certified. So what we wanted to do is call each of our panelists up: we're actually awarding, for the first time ever, our Operator Excellence awards, and the panelists are this year's award winners. So when I call your name or your company's name, please come up, receive your award, take a seat, and then we'll get started with the rest of the panel.

So, our first award recipient is Couchbase. Evan Pease, are you here? Not yet. We'll go to the next one. Our second award winner is Dynatrace. Peter Hack, are you here? Excellent. All right, come on up. The third award winner: MongoDB, Jason Mimick. Are you here? Excellent. The fourth award winner: Portworx, Piyush Nimbalkar. And the fifth and final award winner for 2019: StorageOS, Simon Croome. Are you here? So, one last call for Couchbase: is Evan here? All right, no worries. I'm going to turn it back over to Rob and we'll continue on with our panel discussion. Thank you.

Thank you, Hylia. Thank you, everybody. Congratulations. I just wanted to underscore the importance of that capability model. Building the maturity of the operator ecosystem really matters, both for these companies' customers to be successful building these operators and using them on their clusters, and for building the next wave of Kubernetes growth. We started with stateless applications, then moved to very simple stateful workloads with StatefulSets, which, based on the poll earlier today, not a ton of you are using yet anyway.
That third wave is now full distributed systems running on Kubernetes: all the cloud native databases, NoSQL databases, every message queue, all the security products. Everything is getting a bit more complicated, and it means you need operational expertise to run it, or you defer to one of these really awesome operators to do that for you. It's a game changer for consuming applications, and new products, on Kubernetes. So it's super important, and we're really happy to work with all of these folks and their teams to make that happen. So let's just go down the line and introduce yourselves, give us an idea of what you do, and then we'll continue.

Hi, my name is Peter Hack. I work for Dynatrace. I'm a technical evangelist there, and I focus mainly on working closely with Red Hat and the Kubernetes frameworks.

Am I on? Hi, my name is Jason Mimick. I'm a product manager at MongoDB, and I've been working with Red Hat and Kubernetes for a couple of years now. We didn't have an operator until about a year ago, when we released it; before that we just had an OpenShift template and things like that. So it's great working with Rob and the team and seeing it really mature. Thanks for having me.

Hey, I'm Piyush. I'm a developer at Portworx, and I lead the operator effort there.

Hi, I'm Simon Croome. I run the engineering team at StorageOS. We've had our operator for about a year now, and it's totally transformed the way we ask our customers to install our product and to manage it from there onwards.

Awesome, that's what we like to hear. All right, first question. Piyush, this is for you. I saw when I was doing some research for this that Portworx has a config generator for OpenShift 4 to help you come up with the spec you pass into the operator. Tell us about that: how did it come about, and how do your customers use it? Yeah, sure.
So when we first started deploying on Kubernetes, which was more than two years ago, Portworx used to get installed as a DaemonSet. Then slowly we started adding components: ConfigMaps, different permissions. It became a huge spec, and people don't want to look at that spec. So we came up with a spec generator where people could just choose some parameters, select their storage and network, and by clicking some checkboxes, picking from drop-downs, and filling in a few fields, they'd get a huge Kubernetes spec. They just had to apply that spec to install Portworx. With operators, it's just a 20-line spec that they can even write by hand; they don't have to use the spec generator. But that was the reason we added it.

Awesome. Hopefully we can get that wired up with the new declarative UI that I just showed; it's bound to be super popular as well. Yeah, we're actually doing that already. We're working with Ali to improve the UI so that people don't have to go to the spec generator to create their spec; they can just use the OpenShift UI. Awesome, perfect.

All right, this one is for Jason at MongoDB. Can you talk about how your operator has helped customers embrace this kind of container native, cloud native world? You mentioned that the operator has been in existence for a short while now. How are customers embracing it? How is it helping them? Well, there's certainly a ton of interest. In the past two years that we've really been active in this space, there hasn't been a week gone by that I haven't gotten calls from new accounts or new customers interested in Kubernetes and running MongoDB in these environments. And frankly, most of them are pretty confused. There's a relatively small subset that have any experience with Kubernetes; most of our customers are brand new.
And so they really look to us to show them the right way, the certified way, to run MongoDB in these environments. What's also really confusing is that there's tons of MongoDB in Docker and containers and Kubernetes out there in open source, but none of it came from MongoDB until recently, with our operator. It's typically our enterprise customers that really need that support and want a company they can rely on to back it. The operator has just made it a lot easier. If you go find some random MongoDB deployment out there on Kubernetes, it's going to be a lot of YAML, like we were talking about. Now, with the MongoDB operator, that reduces to as little as nine lines or so for a simple MongoDB cluster. We have one CRD, which makes it a lot simpler. And a lot of people want to run MongoDB as a service; this is the tool for enterprises to offer their own MongoDB database services inside their own private data centers.

That's something I've seen is really popular, especially when you get to the big banks and insurance companies: they all have a central database team running whatever database you pick, with a bunch of DBAs that run it for you. But they can offload their work to your expertise in the operator. In a way, but the other point I wanted to make is that our operator is, well, maybe not unique, but it basically bundles our enterprise management and monitoring solution, MongoDB Ops Manager. We already had a bunch of automation features in Ops Manager, and it can basically do what Kubernetes does in terms of managing pods, just with VMs and our own agents. So we took a lot of that functionality and use it with our operator.
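As a rough illustration of the pattern Jason is describing, where one short custom resource stands in for a pile of YAML, here is a toy sketch of an operator-style reconcile that fans a single spec out into the underlying objects. All of the field and object names here are hypothetical, not MongoDB's actual CRD schema:

```python
# Toy sketch of the operator pattern: the user writes one small custom
# resource, and the operator's reconcile loop expands it into the many
# underlying Kubernetes objects. Names are hypothetical, not MongoDB's
# real CRD schema.

custom_resource = {
    "apiVersion": "example.com/v1",
    "kind": "MongoDBCluster",
    "metadata": {"name": "my-replica-set"},
    "spec": {"members": 3, "version": "4.0.6"},
}

def reconcile(cr):
    """Expand the user's short spec into the objects the operator manages."""
    name = cr["metadata"]["name"]
    members = cr["spec"]["members"]
    return [
        {"kind": "StatefulSet", "name": name, "replicas": members},
        {"kind": "Service", "name": name + "-svc"},
        {"kind": "Secret", "name": name + "-keyfile"},
    ]

managed = reconcile(custom_resource)
# The user maintains one small resource; the operator owns everything else.
print([obj["kind"] for obj in managed])
```

The point of the pattern is that scaling and upgrades become one-field edits to the small resource, and the reconcile loop converges the rest.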
But the neat thing is those DBA teams actually get a completely independent control plane where they can just look at the Mongo data, drill into collections, and things like that. They don't even need to know it's running on OpenShift, so you get a nice separation of those responsibilities. Awesome, yeah. That's what we need to get everybody to. That sounds perfect.

All right, the next question is for Simon with StorageOS. I'm curious: we all know that storage is really critical for the cluster, and your operator is a big part of running a healthy storage tier. Can you talk about some of the features the operator has to keep your storage running correctly and performant on an OpenShift cluster? Yeah, sure. The Operator SDK itself, apart from being a really good starting point with the scaffolding when you create your first operator, includes a lot of tooling around testing as well. Not just unit testing: by describing all of the permutations of configurations, you're able, through the operator test framework, to test all those combinations. It also helps you run end-to-end tests, for example deploying on various versions of Kubernetes. We're able to do performance regression testing, and we're able to do chaos monkey-style testing to ensure that data is consistent. For example, John Willis earlier was talking about Merkle trees: through block checksums, we're able to verify the integrity of replicas before we fail over to them.

Awesome, yeah. I think that's exactly the type of expertise we want to depend on these operators for, instead of having to understand that much about a storage layer. That's super powerful: you get all of that baked in, release after release. All right, last but not least for our first question. Peter, how are you doing? Your customers have praised your operator for its ease of use.
Can you just talk about that experience, how you've designed it, and what you're looking to do in the future? Absolutely. Dynatrace comes out of the APM space and enterprise monitoring. We developed this operator together with Red Hat when they were initially bringing the Operator Framework to market; we created a Go-based operator. The focus for Dynatrace is making things as easy as possible for our customers. We don't want our customers to need deep expertise in their application code to instrument things; our goal was always to make it as easy as possible to deploy the agent and democratize the data that comes out of it. In the non-Kubernetes space, our agent-based solution would be an install. With the operator, instead of having to use a DaemonSet that you would have to take down to roll out any updates, with OLM and the operator you can have the entire operator deployed and running in your cluster through just two YAML files. The OneAgent technology is then automatically deployed to all the nodes in the cluster, providing visibility into all the application stacks. So what this really drives for our customers is monitoring as part of the platform: a deep, end-to-end, full-stack view of the entire cluster.

Yeah. So we can turn this into a bit of a free-for-all now. I want to touch on two topics we visited earlier in the day: GitOps and multi-cluster. Let's start with multi-cluster. I'm just curious, any of you: with your customers, are you seeing them run multiple clusters, and how is the operator helping them get consistency between the clusters? I can start. Almost all of our customers are enterprise-grade, Fortune 500 or larger, and what we find is that hybrid is really the way it's gone.
There are customers with large legacy data centers spread around their organizations that have started to build out into a cloud, whether AWS or Google or Azure. Having the ability, through the operator, to see all of the clusters and all the nodes in those clusters really gives us the ability to help monitor all of that across the board. So that was critical for our customers, as it was for having the operator in the first place. Awesome. Anyone else have multi-cluster experience?

Well, I'll just say that, honestly, I haven't heard much about multi-cluster in a few months. A while ago, there were a lot of customers interested in it, so we've been following SIG Multicluster and seeing where that's going. But I'm not really sure; I think the verdict is still out on whether a lot of people are going to use it. You think it's data gravity? Is that what it is? Yeah, I'm not really sure. I think there are just different camps: for some people it's a lot of clusters, for others it's one cluster. So we'll see. Obviously it should work seamlessly; that's the trick. It'll be interesting to see how that continues.

Like he said, that's the problem we're trying to solve with storage. We already support a lot of clouds, like Google, Amazon, IBM, Azure. With Kubernetes and the operators, it's a seamless experience for customers. They can just run Portworx without knowing the details: they tell us what capacity they need, and internally we go and provision that capacity. And they can just migrate their apps using Portworx from one cloud to another. We have a few customers running on multiple clouds, and it really benefits them to have the same experience everywhere, on all the clouds. So I think that solves the data gravity problem for our customers.
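Simon's earlier point about using block checksums to verify replica integrity before failing over matters just as much when data moves between clusters or clouds. The idea can be sketched roughly as follows; this is a simplified illustration of the concept, not StorageOS's actual implementation:

```python
# Simplified illustration of replica-integrity checking via block
# checksums: hash each fixed-size block of the primary and the replica,
# and only treat a replica as safe to fail over to if every block's
# digest matches. Not StorageOS's actual implementation.

import hashlib

BLOCK_SIZE = 4096  # illustrative block size

def block_checksums(data: bytes, block_size: int = BLOCK_SIZE):
    """Checksum each fixed-size block of a volume's data."""
    return [
        hashlib.sha256(data[i:i + block_size]).hexdigest()
        for i in range(0, len(data), block_size)
    ]

def replica_in_sync(primary: bytes, replica: bytes) -> bool:
    """A replica is a valid failover target only if all blocks match."""
    return block_checksums(primary) == block_checksums(replica)

primary = b"x" * 8192
good_replica = b"x" * 8192
corrupt_replica = b"x" * 4096 + b"y" * 4096

print(replica_in_sync(primary, good_replica))     # True
print(replica_in_sync(primary, corrupt_replica))  # False
```

Per-block digests also localize which block diverged, which is the same property that makes Merkle-tree-style comparison efficient for large volumes.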
Yeah, just to echo that, we certainly see a lot of customers with multiple smaller clusters, maybe shorter-lived than you might expect. So providing tools to migrate the data is essential for supporting them. Awesome, yeah. So it sounds like the storage layer is obviously extremely critical for multi-cluster.

I actually have one other thing on that. We also have MongoDB as a service, MongoDB Atlas, which is fully hosted by us, and we have an open service broker for MongoDB Atlas that just came out this past summer. Because Atlas supports AWS, GCP, and Azure, you could have multiple Kubernetes clusters that just consume Mongo services running in those same clouds. So functionally, there are a lot of ways to get around some of those requirements that might not actually need multi-cluster federation or something complicated like that. Good point.

Moving on to GitOps. We saw an example of using both MCM- and Argo-based workflows for this. One of the things I think is really powerful about operators is that when you have these complex distributed applications, or a storage layer, made up of a ton of different YAML objects, and you're doing a PR review on a GitOps change, if it's an operator you're really looking at one YAML file for a CRD that might represent a ton of changes under the hood. You don't need to see that complexity. So I think that's a really powerful way to do GitOps. I'm curious if you all are starting to see this with your customers. Are they doing these types of flows, or is that still more the audience at KubeCon and it hasn't made its way down into the Fortune 500, that kind of thing? I haven't seen a lot of customers talking about this; maybe it's something we'll see in the future. Gotcha.
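Rob's point about PR review is that the reviewer's diff surface shrinks to the custom resource, while the operator absorbs the blast radius underneath. A toy sketch of that asymmetry, with hypothetical field names rather than any vendor's real schema:

```python
# Toy sketch of why GitOps reviews get easier with operators: the human
# reviews a one-field change to the custom resource, while the operator
# translates it into many object-level changes. Names are hypothetical.

old_cr = {"kind": "Cluster", "spec": {"members": 3, "version": "1.2"}}
new_cr = {"kind": "Cluster", "spec": {"members": 5, "version": "1.2"}}

def cr_diff(old, new):
    """The fields the reviewer actually sees change in the PR."""
    return {k: (old["spec"][k], new["spec"][k])
            for k in old["spec"] if old["spec"][k] != new["spec"][k]}

def rendered_objects(cr):
    """Objects the operator derives from the spec (one pod per member)."""
    n = cr["spec"]["members"]
    pods = [{"kind": "Pod", "name": "member-%d" % i} for i in range(n)]
    return pods + [{"kind": "Service"}, {"kind": "PodDisruptionBudget"}]

reviewed = cr_diff(old_cr, new_cr)    # one field changed in the PR
applied = rendered_objects(new_cr)    # many objects reconciled underneath
print(len(reviewed), len(applied))
```

Scaling from three to five members is a one-line change in Git; the fan-out into pods and supporting objects never appears in the diff.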
We certainly see a lot of customers using CI/CD as a primary use case with our product, but whether that's GitOps or not, I'm not sure. Yeah, I don't think it's quite come down to the common folks yet. But internally at MongoDB, we have our own Kubernetes cluster with CI/CD, very similar; it's Drone, but it's the same kind of thing. And I can really see that model is really solid. We write code, we like source control, so why not put all that stuff in source control too? It's great.

The GitOps story is actually quite interesting. We have a lot of customers with very complex pipelines, large Jenkins or Concourse setups or whatever. And we started seeing a pattern in GitOps deployments: how are they deploying their code from stage to stage, how do they validate and quality-test, things like that. I'm wearing this Keptn shirt; Keptn is an open source project we started at Dynatrace to try to address this specific issue around the GitOps story, the Git flow and the model. And ideally, I'm working with the team here to build operators to do the deployment of Keptn as well. I think the GitOps story is bubbling up to the surface, and awareness is growing. Yeah, it's not going anywhere. It started with Terraform and the like on the infrastructure side, and it's moving its way up.

All right, I think that's all we have for this panel today. I want to give warm congratulations to all of these companies. Check out their operators: they're on OperatorHub, they're inside your OpenShift cluster. If you need any of these services (storage, databases, monitoring, security tools, queues, web servers), it's all out there. So thank you all so much, and congratulations.