And welcome back to another OpenShift Commons briefing. Today we have something kind of different, but really kind of exciting as well. We have a panel of folks from IBM Cloud that's going to be moderated by Sai Vennam. And we're gonna talk about building with OpenShift and a lot of other topics on IBM Cloud. A lot of the folks on this panel are from a lot of the different upstream projects as well, so we're gonna touch on some of that in the panel too. So I'm gonna let Sai introduce himself and then everybody else do it, and then we are just gonna ask them a lot of questions. And if you have some, ask in the chat and we will relay them to the panelists. And this is indeed live. So Sai, take it away.

All right, thank you for that introduction, Diane. Hello everyone, you guys are in for a treat today. My name is Sai Vennam and I'm a technical offering manager with IBM Cloud. I've had the pleasure of working fairly closely with all of our panelists today. And believe me, again, you're in for a treat. These are folks who are extremely motivated, engaged, and talented in their respective fields within IBM. And I wanna take a minute here to just quickly touch on each one of them and have them introduce themselves. So if you don't mind, please introduce yourself with your name, your title, and the technologies that you focus on, then anything else you'd like to add. Just maybe keep it to about 30 seconds to a minute each. Chris, we'll start with you.

Hey, sure, thanks Sai, glad to be here. My name is Chris Rosen. I'm program director of offering management, responsible for kind of all things containers and microservices related in IBM Cloud, specifically focused today on Red Hat OpenShift on IBM Cloud.

All right, thanks Chris. Next we'll go to Doug, Doug Davis.

Hey, thanks Sai. My name's Doug Davis, work for IBM obviously, technical offering manager for Knative. So obviously the technology of choice for me these days is Knative. And I'm actually involved in delivering Knative in multiple different projects within IBM Cloud, so we'll talk a little about that later, hopefully.

All right, next up, let's go to Peter.

Thanks Sai. So I'm Peter Clank. I'm the offering manager for our DevOps tooling in the IBM Cloud. So that also involves reaching out to a lot of open source communities and some of the upcoming technologies like Tekton, and how we can deliver that really effectively through OpenShift, through the public cloud as a service, as a roll-your-own. So that's my area, and it's a fun time.

Great, great. Josh, to you.

Hey there folks, my name is Josh Mintz. I'm an offering manager for a whole cadre of databases in the IBM Cloud, like Postgres, Elasticsearch, MongoDB, and Cloudant. Not many people are throwing in fun facts, so I'll add one: big soccer fan, love to watch Chelsea. So if anyone's a Manchester United fan, we can talk after the panel and have some words.

And Josh, I think you also win for the most comfortable-looking chair.

Yeah, it's a throne, if you will.

Great. And finally, we've got Ram Vennam.

Hi, Sai. I'm one of the technical offering managers on the IBM Cloud Kubernetes Service and the Red Hat OpenShift service. My focus is on service mesh, so I work closely with our Istio open source team, and also on service mesh on Red Hat OpenShift as well.

And if you guys are noticing that there's a resemblance, that's because Ram is actually my brother, so I'll be sure to throw him all of my really difficult questions today.

Thanks, Sai. I expect nothing less.
Now, a quote that I want to start with today, from IDC: by 2022, 90% of new applications will be cloud-native and developed with agile methodologies and an API-based architecture that leverages microservices, containers, and serverless functions. Now, from the introductions today, I think we saw that we have a wide range of panelists, and they cover multiple key technologies. But at the root of it all, I truly believe, is OpenShift, the container-based platform that powers cloud-native and container-based technologies. That's where I want to start. Chris, I want to start with you here. Why are microservices and container technologies seeing so much support in this current era of cloud-native application development?

Yeah, so I think there's a lot of reasons really driving developers and organizations to embrace technologies like cloud-native and containers and OpenShift. And ultimately it comes down to three main use cases that we see time and time again. The first one is really around building cloud-native applications, because we're all developing software in some form or fashion, and we need to be able to do so quickly, but then, more importantly, very securely. So I think for these net-new projects, we're seeing a lot of gravitation toward cloud-native, microservices, containers, OpenShift. The second use case is really around the complexities that we live with in a multi-cloud, hybrid cloud world. Customers that I talk to every day are running on-prem, IBM Cloud, other clouds. So they need to be able to have that portability, and that's exactly where containers and OpenShift come in, to give them that common abstraction regardless of the IaaS that they're running on. And then the third one is around app modernization, because again, the customers that we work with have a long heritage of existing applications. So how do we help them modernize that footprint into containers and cloud-native type architectures?

Great, that's perfect. So there's clearly a number of use cases for taking advantage of container and cloud-native technologies. But at the root of it all, and the reason that we're gathered here today at the OpenShift Commons, is OpenShift. So how do you see OpenShift as a solution to help address these use cases?

I mean, obviously Red Hat, long before IBM acquired Red Hat, has put in a lot of effort working upstream in the community, just like IBM has as well. But they've taken these open source technologies, which, the reality is, are hard. There's a lot of different projects. There's a lot of moving pieces. How do we make sure that a new version of Kubernetes doesn't break something else, some other plugin or some other project? And Red Hat is really focused on bringing together and packaging all that you'll need to be able to run that containerized workload. So that's really where the importance and significance of OpenShift comes into place, because Red Hat has really hardened and secured it, and has a known process to get those updates to developers and consumers. Then, obviously, fast forward: IBM acquires Red Hat, and we've launched the Red Hat OpenShift on IBM Cloud offering, where we provide that as a managed service to our customers. Again, just raising the bar of responsibility so our customers don't have to be experts in OpenShift per se. They're developing and running applications that solve their line-of-business objectives.

And while we're on you, Chris, one last thing I wanna touch on.
You mentioned that Red Hat, even much before it was acquired, really focused on open source and contributions and working in this container space. Now, post-acquisition, IBM has launched this OpenShift on IBM Cloud offering. Now, IBM is a company that's in many ways fundamentally the same as Red Hat, but it has also been historically different. How has IBM kind of shaped itself and changed with the acquisition of Red Hat? How are we working with open source, and how are we making that apparent in our cloud?

Yeah, so that's a great point. I mean, I think both companies continue to learn and grow and evolve from each other, which is gonna make the IBM-Red Hat combination honestly unstoppable going forward. That being said, we're really trying to harness the best from both of those cultures, both of those organizations. And from a broader IBM perspective, although we've always built on and contributed to open source, we're kind of learning the Red Hat way, and that's helping us accelerate those things, move faster, continue to build on those offerings. Because obviously we're all here because we love open source, and the velocity that the community can move at is much faster than any single organization, or handful of organizations, can move. So we're gonna continue down that path, building out in the open, allowing that portability. That's really why developers gravitated toward containers anyway: I could package up my app and all of its dependencies and move that thing from here to there to anywhere, consistently. So we're gonna keep doing that, and bringing some important operational characteristics to those open source projects, like the security, the hardening, the upgradeability, all the things that are really important to our customers once they deploy something: how do you actually manage it and keep it running from day two onwards?

Perfect, thank you for that. And with this focus on open source and open technologies, I wanna move on to the next panelist here. Doug, your focus day to day, as you mentioned, is capabilities like Knative and serverless. And as we know, Knative is an open source project; it really grew in the open source community. Now, Doug, the first question I'll start off with here is: where do you see serverless, or Knative, these technologies that you work with, fit in with the solutions that we just heard Chris talk about?

Yeah, so it's interesting. Chris talked a little bit about modernization of the app, right? And if you take that and distill what it actually means, well, it means containerizing your stuff, breaking up the monolith, stuff like that. Serverless is actually the next logical step in that progression, right? You take your monolith and you break it up into microservices, but then serverless goes one step further and says, okay, rather than stopping at the microservice level, what if you can actually break it down to individual functions? And that sounds cool from a technology perspective, you know, we're all geeks, we love that kind of thing, but why are we doing that? Well, you're really doing it so you get finer-grained resolution for the deployment of your application, right? So you can scale one little slice of your application instead of the entire thing, or even just a microservice, right? So you get better resource utilization, you get all the cool features of serverless like scaling down to zero, so when it's not being used, it's not even running at all, so you get cost savings, right?
So to me, serverless is the next natural extension for your containers-as-a-service type of offerings, right? Just breaking things down to even smaller little bits for better resource utilization and scaling type stuff. So to me, it's just the natural next step. And as you mentioned, Knative is obviously right in the middle of all that, because, depending on who you talk to, it's either a hosting platform, kind of similar to Cloud Foundry a little, but a lot of people describe it as a serverless platform, right? And it's specifically designed to handle the notion of taking an application, containerizing it, and running it in a serverless-type platform, but on our favorite infrastructure, meaning Kubernetes.

Definitely, thank you. Now, for our users that are watching, when they hear serverless, many times they think about things like Lambda, maybe IBM Cloud Functions, these platforms that enable you to do code first and then basically abstract away the requirements underneath. Now, those requirements underneath are kind of what we're focusing on today, that OpenShift layer, that container-based layer. Now, if we compare the serverless of four or five years ago to serverless today, how has serverless evolved to adopt these new technologies?

Yeah, that's great. I think you're right. To a lot of people, serverless means things like: give me your source code and I'll host it for you. And that's definitely true, depending on the platform. But you're right, under the covers they are all, for the most part, leveraging something like containerized technology. And so you look at something like Functions and Knative: they're using containers under the covers, they just hide it from you. Now, a lot of these platforms, like, for example, Knative, do actually allow you to see and deploy containers themselves, or container images themselves. So they do actually allow you to work at that level. And what's really cool about that is it kind of reinforces this idea that the line between serverless, or functions-as-a-service, and containers-as-a-service is actually getting very, very blurry, right? Because if you look at the functionality in terms of give me your source code and I'll run it for you, or give me your container and I'll run it for you, auto-scaling, toss in a little bit of scale to zero, there's really not much of a difference anymore between containers-as-a-service, functions-as-a-service, and serverless. And I think what you're gonna start seeing with platforms like Knative is that line becoming very, very blurry, to the point where the user doesn't even have to think about which "as-a-service" they wanna deploy this stuff onto, right? I just hand over my application, whether it's source code or an image, and the infrastructure just runs it for me. And then at runtime, I just choose the configuration knobs I want, right? So I don't have to think anymore, well, do I want PaaS versus FaaS versus CaaS? That choice is meaningless anymore, right? It's just one of the runtime characteristics you want. And I think that's the direction we're seeing with projects like Knative. And you can see IBM's really pushing that with some of the newer offerings that you can find on our platform.

Okay, great, great. So one last thing I wanna key in on here on the serverless front. Now, OpenShift has kind of touted itself as Kubernetes for the enterprise.
In fact, if you go to the Kubernetes documentation, you'll actually see verbiage around how Kubernetes was never actually meant to be a developer platform, but it actually provides the building blocks for a developer platform. Now, I'd say, and I think many people would agree, that OpenShift has solved that problem for Kubernetes by creating abstractions, a dashboard, and an experience for end users to more easily be able to deploy applications. They've in essence created a developer platform on top of Kubernetes. Now, what you're kind of saying here, I believe, is that Knative is further abstracting that. Now, can you touch a little bit on how exactly it's doing that, what the use cases are, and why we need to do that in the first place?

Yeah, so it's interesting. Even though everybody keeps talking about how, you know, Kubernetes has won the container war and stuff like that, and that's all true, unfortunately you have to kind of become an IT expert in order to use Kubernetes, right? When the whole cloud computing thing came on board, they said, hey, this is great, we're gonna abstract things away from you, you don't have to work with the VMs directly. And that's true, but now you have to understand how to do networking and all the other infrastructural pieces just to get stuff running in Kubernetes, and it's non-trivial. As great as it is, it's non-trivial. Well, Knative takes a step back and says, well, what if we can hide all that stuff from you, right? What if, instead of you telling us all the various gazillion different configurations that are out there, how you want those things set, how you wire them together, we take a different approach and say: just give us your container image and the runtime characteristics you wanna see, right? Do you wanna see scale to zero? Yes or no? Nope, I wanna stop at maybe five instances, because I need five running all the time, that kind of stuff. I don't have to worry about how the auto-scaler makes that happen. I just say, give me five, right? And that's the kind of abstraction that people, I think, are looking for. They just wanna say: here's my stuff, here are the runtime semantics I want to see from an external perspective, and Knative does all the magic under the covers, wires it all up together for you, all leveraging Kubernetes, but you don't have to worry about it anymore, and it just manages it all for you, the way it should be, right? It's that abstraction that we keep talking about. Now, it's great you mentioned that Kubernetes was never meant to be the platform for users to actually interact with. Well, Knative is almost the same thing right now. While Knative's user interface is darn nice, I love it, it's really not meant to be used by the end users themselves directly, right? It's meant to be a platform for serverless platforms, right? And you can see Red Hat and IBM doing the same thing there. We're taking it in-house, offering it up to our customers, but wrapping it with some additional Red Hat and IBM goodness around it to make it even easier to use, so that you really don't even need to understand YAML in many cases, which is great, right? So you can see it popping up even more at a higher level within our own platforms themselves. So that's kind of where we're headed with all this stuff.
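To make Doug's example concrete, here is a minimal sketch of a Knative Service: you hand over a container image plus the runtime knobs you care about, and Knative wires up routing, revisions, and autoscaling. The name, image reference, and scale bounds are illustrative, not from the panel.

```yaml
# Minimal Knative Service: a container image plus runtime characteristics.
# Knative handles routing, revisions, and autoscaling under the covers.
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: hello                                    # illustrative name
spec:
  template:
    metadata:
      annotations:
        autoscaling.knative.dev/minScale: "0"    # scale to zero when idle
        autoscaling.knative.dev/maxScale: "5"    # "stop at maybe five instances"
    spec:
      containers:
        - image: icr.io/example/hello:latest     # illustrative image reference
```

Setting minScale to "5" instead would express Doug's "I need five running all the time" case; either way, the auto-scaler mechanics stay hidden from the user.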
Perfect, thank you. Now, what we talked about a lot there was how Knative and OpenShift are making it easier for developers to work with the platform. But many times, developers need automation. They need methodologies. If we go back to our quote, we said: by 2022, 90% of new applications will be cloud-native, dot, dot, dot, developed with agile methodologies. So with that, I wanna move on to our next panelist here. Peter, can we talk a little bit to the agile methodologies that have been built to really support the technologies that Chris and Doug covered? Being able to work with OpenShift, being able to work with Knative, in a way that developers are better oriented with. Can you talk a little bit to that?

Sure, and I'm actually gonna turn it around, because I was thinking about this, and we're coming up on the 20th anniversary of the Agile Manifesto, believe it or not, in February next year. And so the industry's been talking agile for a long time, and I've been in the tool space that whole time. And I think what's been missing is this: it's great to talk about faster velocity, it's great to talk about abstraction and isolation, it's great to talk about testing pieces in smaller chunks, but we didn't have the architectures 20 or 10 years ago to really support that, not in a real way. People were working with VMs; they were working with large Java monoliths packaged as a WAR. And what we've really seen now, with the rise of containers and microservices, is the architecture, and the runtimes for that architecture, that support agile development. So, once you have microservices, your code base is smaller. It requires a smaller team. So now you can start talking about two-pizza teams that can collaborate more in real time. That bounds the size of the problems you're solving in any particular microservice. So now the idea of very short iterations or sprints becomes a lot more doable, because you're bounded. Refactoring, in a way, is a lot easier, because you've invested in setting up the boundaries between those microservices and the APIs, and you've decoupled them to an extent. So you wanna change implementation languages, you wanna change backend databases, you want to move from a very rigid definition of how it's gonna get deployed to more of a serverless approach, where you're just talking in terms of general scale, doing a lot of auto-scaling, and not being as prescriptive about how this thing is gonna appear in production. And all of that's really reinforcing the agile way of working. And you see it in the tools. Things like continuous integration have been part of agile from the beginning, and tools like Jenkins came up, and everyone knew how to do a build in Jenkins and whatever. And I'd say what we saw in the last five years was that the next generation of tools were all about deployment, and kind of geared to how you deploy a monolith: scheduled downtime windows, notifications, a lot of manual processes, and synchronizing with those manual processes that were running in other tools. And essentially: how do you adopt a DevOps approach and really embrace agile with an architecture and an implementation that aren't really geared to that and weren't really conceived with that in mind? What we're seeing now in tools is a little bit different. The actual act of deployment, moving the bits into a cluster, has gotten a lot easier. You build an image, put the image in the cluster, you declare it in a YAML file; it's pretty straightforward.

Right, and I wanna touch on that a little bit here, Peter, and I might actually pass this back to either Chris or Doug.
Do you believe, kind of to Peter's point, that the platform and the orchestration layers, things like Kubernetes and OpenShift, were built from the ground up knowing the way that users were gonna be deploying on them, the agile methodologies, from one end to the other? Were they built with users and agile methodologies in mind? I'd like to see Doug's input as well.

From my perspective, I would say that most of the users I talk to are not necessarily interacting with Kubernetes directly. Their integration point is that CI/CD tooling. They're pushing code, they're doing things, and then magic happens that they're not necessarily involved with. So I think from that perspective, yes. But the other side of it is, like Doug talked about earlier, all the complexities of Kubernetes itself, and there's obviously a steep learning curve there.

Yeah, it's interesting. I'd like to think about it that way, but if I had to guess, I'd actually say no, in the sense that most of what I see around things like Kubernetes is more around: hey, we've got this really cool technology, containers. It has a whole bunch of benefits: better scaling, portability, all the stuff that Chris talked about. And I think they were trying to build tooling around that to make it easier to manage those things. And I think once people realized that you have this really cool deployment artifact called a container image, which is portable and contains everything you need, you don't have to worry about all the install scripts that go around it, it's all just sort of bundled up together, it just naturally led into the entire DevOps story. I don't think they necessarily had DevOps in mind when they did it. I just think it's such a natural fit that it sort of happened organically. But I could be wrong. Who knows what was going through these guys' heads?

Yeah, I mean, I just want to add to that. I think you're right that everything just fits really well together. I don't know if it was an actual thought, you know, whether they were thinking about one while they were building the other. But the declarative model of Kubernetes, or OpenShift, I think is what lends itself really well to this agile and continuous-delivery-like methodology. These individual development teams are able to declare exactly what they want, how the system should run, and they're able to do that in basically a static config and just apply that config. And the declarative model of these container platforms, as well as the controller mechanism that runs in there, continually turning things to equal your desired configuration, lends really well to that, and it fits really well together. So, I mean, developers can just declare their config in some sort of source control, and then it turns into actual running artifacts.
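To make Ram's declarative model concrete, here is a minimal sketch of the kind of static config a team might keep in source control; the service name and image are illustrative. You declare the desired end state, and the platform's controllers continually reconcile the cluster toward it.

```yaml
# Declared desired state: "three replicas of this image, always."
# The Deployment controller reconciles reality toward this config.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: catalog                                 # illustrative service name
spec:
  replicas: 3                                   # desired state, not a one-time command
  selector:
    matchLabels:
      app: catalog
  template:
    metadata:
      labels:
        app: catalog
    spec:
      containers:
        - name: catalog
          image: icr.io/example/catalog:1.2.0   # illustrative image
          ports:
            - containerPort: 8080
```

Apply it with `oc apply -f catalog.yaml` (or `kubectl apply`); if a pod dies, the controller notices the drift from the declared three replicas and brings it back.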
Definitely, excellent discussion here. One thing I want to probe a little bit here is, you know, Ram, you mentioned that the model of Kubernetes lends itself well to a declarative config approach. Now, Peter, kind of back to you here. We mentioned tools that have been around to support agile and agile workflows, everything from how teams manage issues to how teams actually deploy code, maybe using something like Jenkins. But as we know, today there's always some cool new hot capability for doing DevOps, and it seems like every year a new capability is announced. Now, why do you think that is, and can you speak a little bit to the tools in the OpenShift ecosystem and how they're enabling DevOps?

Yeah, I mean, I think they keep coming up because developers love developing tools for themselves. I think there's a little bit of that, right? It's the place where you see innovation first. Before I get into the tools themselves, I want to talk a little bit about how the nature of the automation has changed. Just echoing what Ram said, I was describing the tools of five years ago as very procedural: move this bit here, check this memory size. That's taken care of for you by the platform now. So the kinds of things people are automating are different. I think what you're seeing now is that they're using the tools a lot more for automating, I always want to say, decision-making and judgment about what gets to production. So how can we mitigate risk earlier? How can we do more automation around security scanning? How can we do more automation around integration testing? And again, taking advantage of things like Kube and OpenShift, the idea of spinning up new environments on which to do different kinds of very ad hoc probing, whether it's security or performance or anything else, becomes a lot easier and cheaper. So I think people are now automating things that, ideally, if I were doing a marketing talk five years ago, I would have said, yes, you should automate all these things for your giant monolithic Java application, but in reality it wasn't happening. So that need for automation has kind of shifted higher up the value chain, as a lot of the details of moving bits have been taken over by the platform.

Tekton's the project we're really excited about at IBM and at Red Hat, and actually both IBM and Red Hat were involved in it before we came together under one umbrella, because we really saw that the CI/CD engine that we wanted didn't exist. There were some interesting commercial technologies. Jenkins has kind of been sort of open, previously under CloudBees' control, now under the Continuous Delivery Foundation, but it's also a generation back from containers, and in its own architecture it's really not a container-centric application, which makes it hard for, say, us as a public cloud vendor to scale it and manage it across multiple regions. It just wasn't built for that. It was very much a single-tenant, kind of on-prem design center. So, taking these trends: we want a cloud-native technology for our implementation, we want to leverage containers in that implementation, but we also want to leverage that Kubernetes model of declarative definitions and use it for defining CI/CD pipelines. Tekton brings those things together. And as someone was saying earlier, Kubernetes itself didn't start as a development platform, it started as a way you build a development platform; I think Tekton is kind of that same thing. It's a good set of primitives that abstract the notion of running steps in containers. Steps are organized into tasks, and you have full control over the execution graph: maybe things run in parallel, maybe they're sequential, you have joins, so you get arbitrary complexity in how you run this stuff. But then you need to have an experience around it. When do different pipelines run? What triggers them? Often it's events in a Git repo that developers are initiating. Maybe there are some long-running jobs that are gonna go do some kind of soak test for 48 hours and eventually come back and trigger the next step in your pipeline. So all these things have to get glued together, and I think that's what the experience in IBM Cloud with our Continuous Delivery service, and the experience directly in OpenShift with OpenShift Pipelines, do with Tekton: they take that core, leverage its strengths as being a cloud-native solution itself, and then put an experience around it that lets developers really just think, I'm gonna commit my code, and the right stuff is gonna happen.
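A minimal sketch of those Tekton primitives, assuming illustrative names and step images: a Task whose steps each run in their own container, referenced from a Pipeline that defines the execution graph.

```yaml
# A Task: ordered steps, each step running in its own container.
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: test-and-build              # illustrative name
spec:
  steps:
    - name: run-tests
      image: golang:1.16            # illustrative step image
      script: |
        go test ./...
    - name: build-image
      image: gcr.io/kaniko-project/executor:latest   # illustrative builder image
      args: ["--destination=icr.io/example/app:latest"]
---
# A Pipeline: tasks wired into a graph. Tasks with no runAfter dependency
# can run in parallel; runAfter makes them sequential.
apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
  name: ci-pipeline                 # illustrative name
spec:
  tasks:
    - name: test-and-build
      taskRef:
        name: test-and-build
```

In practice a trigger, for example a Git push event, would create a PipelineRun from this definition; that is the "experience" layer Peter describes OpenShift Pipelines and IBM Cloud Continuous Delivery adding on top.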
Perfect, thank you, thank you. Now, I wanna build on that a little bit here. We mentioned that Kubernetes was never meant to be a developer platform, but the wonderful thing about open source is that if there is a problem, someone out there is going to solve it. It's almost like a golden rule. Now, for Kubernetes, there are a lot of those problems. When Kubernetes was first announced, there were a lot of missing capabilities in Kubernetes. So what did the community do? They went ahead and started to solve them. Now, today, if you look at the CNCF, the Cloud Native Computing Foundation, there are over 1,400 open source projects in the CNCF landscape. Now, if I were to expand that out to talk about any open source project in the Kubernetes ecosystem, I don't really even have a number for that. Basically, there are a lot of open source projects out there to help you solve each individual problem that you might have. Now, the way I presented that makes it sound like a good thing, but for an enterprise, for a business, for a company who's trying to solve problems and just get something out the door, it's daunting, it's overwhelming. How do enterprises make the correct choice? So, Ram, I wanna start with you here. Can you speak a little bit to the ecosystem capabilities in the containers and OpenShift space, and then maybe touch a little bit on how IBM and Red Hat can really help our customers, businesses, and end users in general make the right decisions?

Sure. I think, to build on that problem you were mentioning, each one of these open source projects tries to maintain a very sharp focus on the problem that it's trying to solve. And the focus is usually pretty narrow, and that's a good thing, right? Like, if Kubernetes focuses on just container orchestration, then its first-class citizen would be containers. It's a building block. I mean, Kubernetes is one portion of the entire solution, and it's expected that you would use a different block to solve a different set of problems. So, when cloud providers are trying to provide an end-to-end solution to their customers, who are just trying to solve a business problem, which is how to help developers deploy their applications faster, that solution needs to be more end-to-end, consisting of blocks that fit closely together.
And I think that's where you would use cloud providers, because they've tested these blocks, and they work closely with the communities of each one of them, right? And they build integrations, and they build best practices that fit really well together. And then they build abstractions on top, whether the abstraction is as simple as a CLI or a dashboard, or integrations with other services, like continuous integration, for example, and they're able to provide that as a solution. So that's really where you need to be looking to consume the breadth of the ecosystem and not get lost in each individual piece of it.

Perfect, that's great. Now, Ram, I wanna drill in a little bit into what you do day in and day out at IBM. And that's helping our customers and users that need to manage all of these complex microservices, cloud-native capabilities, the services that are running in the cloud, thousands of microservices running across multiple data centers. How do these users manage all of these at scale?

Yeah, so what I do every day is help customers deal with microservices, right? They've broken apart monoliths into microservices for all the various advantages. And as we all know, the big disadvantage of microservices is the exponential growth of network traffic and the new focus on the network layer. So users are now concerned about getting control of this network layer again. And that's exactly what a service mesh aims to solve. Users want to enforce policies on which microservice is allowed to talk to which other microservice. They want to do encryption within their environment. So these are all things that the service mesh allows you to gain control over. So I work on Istio and service mesh on OpenShift. And these technologies are becoming really critical, not only for the control of the network layer, but also for the observability that is missing when you move to microservices.

Perfect. And, Ram, can you touch a little bit on what OpenShift is doing with Istio in regards to OpenShift Service Mesh?

Yeah, OpenShift Service Mesh is based off of the Istio open source project. Just like you said before that OpenShift is like enterprise Kubernetes, you can think of Red Hat OpenShift Service Mesh as enterprise Istio. So they've taken Istio and they've built a set of abstractions on top of it. They have a set of best practices and some policies that are applied as good defaults, and it integrates well with existing OpenShift resources. So it's part of just building that end-to-end solution.
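A minimal sketch of the two controls Ram calls out, service-to-service policy and in-cluster encryption, using Istio's security resources; the namespace, app labels, and service account names are illustrative.

```yaml
# Require mutual TLS (encrypted traffic) between workloads in the namespace.
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: prod                   # illustrative namespace
spec:
  mtls:
    mode: STRICT
---
# Allow only the "orders" service account to call the "payments" workload.
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: payments-allow-orders      # illustrative name
  namespace: prod
spec:
  selector:
    matchLabels:
      app: payments
  action: ALLOW
  rules:
    - from:
        - source:
            principals: ["cluster.local/ns/prod/sa/orders"]
```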
Perfect. Now, we've talked about a lot. We started with the base, the core of OpenShift and container-based platforms. We spread out a little bit and talked about agile methodologies, and then services around that layer, things like serverless. And now we're drilling into the ecosystem. Now, without a doubt, data and databases are a core part of this ecosystem. Now, Josh, this is a question kind of geared towards you. With OpenShift, and container-based applications in general, stateless applications are becoming the norm. In fact, even before container-based applications were really taking off, with the twelve-factor app becoming popularized, stateless apps were the way forward in working with cloud-native applications. And that means people needed more data, and they needed this data to be replicated, to be highly available, and generally stored outside of local execution environments.

So Josh, how do OpenShift and cloud platforms, maybe specifically IBM Cloud, help users tackle this new and revitalized need for data?

Yeah, for sure. So OpenShift, and Kubernetes generally, are really crushing it doing stateless applications. In terms of stateful workloads, I think there's room for improvement that the community is putting into the newer versions, especially for stateful workloads in the database space. Over the last few years, as databases moved to the cloud, there's been a lot more complication in developing databases for a database developer, like IBM Cloud or the Postgres community, in the fact that now everything is a distributed system. Once you put it in the cloud, it's distributed. It's away from your laptop. It's away from a server that you can access in your data center. And as part of that, especially when we talk about relational databases, you're introducing more complexity just by the nature of having a distributed system. Mixing that together with something like Kubernetes, which is really good at turning things off and turning things back on again, is a really good way to lose data when you're running a database. So I think that, for the most part, when I talk to customers, there are two camps right now for those that are running OpenShift and Kubernetes workloads. One is the net-new cloud-native application that wants to use a cloud-native database, one that has geographical replication and distribution, sort of the NewSQL group of databases; you might be familiar with CockroachDB or MemSQL. They would either run it themselves in OpenShift, and with the advent of Kubernetes operators and OpenShift operators it's become tremendously more possible to run databases at scale. In fact, our cloud database-as-a-service products are actually built and run with Kubernetes operators, and we run more than 20,000 databases worldwide. So take that as a data point that operators are really, really useful for databases. They solve a lot of the hard problems, but I think there's a few more years to get to easy adoption at scale in terms of running databases yourself on Kubernetes. Because, on the other hand, I'm also in the Apache CouchDB community, and a lot of people are having problems running databases on Docker and databases on Kubernetes: they're losing performance, they're losing data. That's why we released the Apache CouchDB operator. Tracking back to my original point, though, there are two options for users: run the database yourself in OpenShift, or adopt a managed service from a cloud provider. At least, these are the two options I've discussed the most with customers. Some customers want to be hands-on; they have the operations team and the experience to run a database, and by all means, do it. I would always recommend using a Kubernetes operator provided by the vendor or a trusted open source committer, because there are operators that don't come directly from a vendor, or a vendor doesn't exist for a project because it lives in the Apache Foundation or the Linux Foundation. So definitely do your due diligence there, especially as you move higher up the stack and the capability levels of operators. You want to make sure that the people adding automation and self-healing to the database are people you trust with that level of change to a database, because there are all sorts of ways databases will fail, and they will fail. So you definitely want to go with the subject matter expert in that space.
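To illustrate the operator pattern Josh is describing, here is a sketch of what declaring a database to an operator tends to look like. This custom resource is entirely hypothetical: the API group, kind, and fields are invented for illustration and don't belong to any specific operator.

```yaml
# Hypothetical custom resource: declare the database you want, and the
# operator's controller reconciles pods, storage, replication, and backups.
apiVersion: databases.example.com/v1alpha1   # invented API group
kind: PostgresCluster                        # invented kind
metadata:
  name: orders-db
spec:
  version: "12"
  replicas: 3              # operator wires up replication and failover
  storage:
    size: 100Gi
  backups:
    schedule: "0 2 * * *"  # nightly backups, automated by the operator
```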
And the other option is to use a managed database from any of the major cloud vendors, and OpenShift and Kubernetes make it quite easy to bind an application to an external database. So it really depends on your comfort and skill level running a database, and whether you are okay with handing off the management, automation, scaling, security, and compliance of the database to a cloud vendor so you can spend more time building OpenShift applications with your team. So I'll pause there. I know I covered a lot of ground, but in terms of data being the core, I find that when customers are successful with data, it makes it a lot easier to move faster developing the stateless apps and the OpenShift applications they're building. They just knock that one out, make sure it's stable, and they can spend more time doing the things that provide them the most business value, rather than trying to do schema design or high availability for the database.

Excellent, excellent. And I think we've seen a recurring theme today: open source capabilities that, of course, anyone can run, they're free, but the catch we've seen is that you're gonna need to manage them yourself. You're gonna need an operations team to do it. And I think that's what you kind of walked through there, with using operators to run databases yourself versus just going with a managed service. If we take this discussion back to what Doug initially talked about: yes, you can move forward with open source Knative, but you're probably better off taking advantage of a platform. The same thing goes for Istio and service mesh, and Peter, for the DevOps tools you explained as well, such as Tekton, versus going with the open source capability. So I wanna open this up right now for any of you to chime in here and talk a little bit about the space that you're focused on, whether it's DevOps, serverless, or Istio: when does it make sense, and what are the key things that a user needs to look for, to decide, should I go for the open source solution or should I go for the cloud-vendor-provided opinionated platform?

I mean, in the DevOps tool space, I think we have this conversation a lot, and I think it's historically something people are used to running in-house themselves. Often there's a central team managing it, sometimes there's not, but it consumes resources. And is there really enough differentiation in the tools from you running it, versus using a cloud service where you don't have to think about it? And I think, especially when we're looking at open source projects like Tekton and all the others, it's the same thing. If something very bad happened in the world and you needed to run it yourself, you have the open source, you can do that. But day to day, why are you choosing to put your effort there instead of in the applications you're building and the business domain that you work in? Let us run it; we run it across the cloud, across many regions and data centers, and we've gotten good at it. And I think that's kind of the generic argument you make for cloud platforms in general: it lets you specialize on what matters. Last thought on DevOps: I think that's the tool piece. Now, I think we're at a place where people still write their own CI and CD pipelines and processes to run on top of that tool. I predict we're heading in a direction where more and more of those pipelines themselves will be standardized.
So not just the tool, but the logic of what you are doing. What kinds of quality and security metrics are important to test for, that everyone should just be doing? And something we're trying to do on top of Tekton is build that reusable set of assets for: here are the kinds of things you need to think about if you want to meet SOC 2 compliance, or HIPAA, or FedRAMP, or any of these kinds of standards that we as a cloud provider need to meet, and that most of the clients we work with need to meet as well. So, sharing more knowledge a little higher up the stack.

Definitely. And with that, I want to take it a little bit closer to the stack: OpenShift itself. I think, Chris, you mentioned in the beginning that with the acquisition of Red Hat, and then kind of the onset of our focus on OpenShift, we announced OpenShift on IBM Cloud. As we know, OpenShift is based on the open source platform, the Origin Kubernetes distribution, or OKD. Chris, can you talk a little bit to when a user needs to consider open source, versus OpenShift, versus the third flavor, which is managed OpenShift? I'll let you take that.

Right, yeah, absolutely, I was going to jump in. I definitely agree with a lot of what Peter said, because we run into this where OKD is free: I can go out there and I can play with it. And so when we have these discussions, it's not to imply that you or your organization is not technically competent enough to deploy and run OKD as a platform. But similar to what Peter said, we think you should focus higher up, on the things that are solving your line-of-business objectives, not on running OKD. And as part of the managed service, specifically with Red Hat OpenShift on IBM Cloud, you'll get the 99.99% SLA, the HA masters, multi-zone clusters, bare metal worker nodes, all of the compliance that Peter mentioned. So that's the value of the managed service. Ultimately, you're still going to build containerized workloads and run cloud-native apps, but it's really about the SLA, the availability, the redundancy built into the offering, and it allows you to focus on your objectives.

Perfect, perfect. Now, one more quote that I want to use from IDC here, and while we're on you, Chris, I want to shift gears a little bit. And this quote says: by 2024, over 50% of user interface interactions will use AI-enabled computer vision, speech, natural language processing, and either AR or VR. So these are high-level services that are being offered by cloud platforms, and the key thing here is IBM Watson and the Watson services available on IBM Cloud. Chris, can you touch a little bit on how these higher-value services can be consumed from the cloud-native platforms that we've been talking about today? I truly believe that the responsibility lies on us, IBM Cloud, as a vendor, to make these easily consumable, so that end users don't have to have a PhD in data science to be able to effectively take advantage of these technologies that are available.

So I think you touch on it: there are a lot of different aspects to adding these higher-value services to your apps. And obviously we think that consuming those easily and securely is of utmost importance. So as I'm building my app, I want to be able to add that cognitive capability, whether I'm adding, like you said, voice to text, or I'm doing a chatbot, or I'm adding some other intelligence to my app. We want to be able to do that. So that's obviously very important.
You can easily deploy a Watson service, and there are a number of different cognitive capabilities that you could then leverage within your containerized workloads. But another area that I think is probably more important is really around data access, data controls, using that cognitive capability in the right way. And hopefully you've seen some of these announcements recently, with all the things that are happening in the world that we live in today: IBM has basically announced that we're not going to offer that AI technology for things like racial profiling, which is definitely not the right use of that technology. So I think there's one hand of using the technology, and then there's using it the right way, and IBM is kind of a steward of the community, really advocating that we use that technology in a way that is beneficial to the broader society as a whole. Not just selling technology and widgets; that's obviously important too, but IBM as a corporation is really focused on the societal aspects of our technology as well.

Excellent, excellent answer there. One of the things that comes to mind off the top of my head is the Call for Code initiative that IBM has launched, and the focus on using IBM Cloud technologies to potentially solve COVID-related or pandemic-related issues that we're all facing today. I'll pause here for a second and see if anyone else wants to chime in on the use of AI and higher-value services from the perspective of cloud-native platforms.

So, as you mentioned, this user interaction layer is changing, right? The traditional "here is my web server that can serve thousands of requests by itself and serve static content in a synchronous manner", that doesn't cut it anymore. So to take advantage of AI and all these other cognitive capabilities when you write your user interface, you have to write that layer a little bit differently. So we mentioned leveraging existing APIs to build this layer. So you're calling external services like the Watson APIs, but you're having to rely on all these various APIs on every request. It requires more processing power on every request. It requires more data analysis on every request. And in many cases, these user experiences are expected to be a step ahead of the user and give the user what they want before they actually have to go hunting for that necessary information. So there's a lot of predictive analysis that needs to be built into a good user experience layer. But where I'm going with this is that all of these requirements put a different set of needs on that application layer. These applications end up being very network-intensive, very resource-intensive, and they need to be able to quickly scale up and down. So containers, and the virtualization layer that the container platforms provide, I think allow you to build these layers in a much more effective way. They give you that control of being able to scale up and down as the requests from your users go up and down, and take full advantage of the underlying infrastructure that your container platform is built on and leveraging.
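One way that scale-up-and-down behavior shows up on a container platform is a Kubernetes HorizontalPodAutoscaler; here is a minimal sketch, assuming an illustrative front-end Deployment name and CPU target.

```yaml
# Scale the front end between 2 and 20 replicas as user traffic rises
# and falls, targeting ~70% average CPU utilization across pods.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-frontend               # illustrative name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web-frontend             # illustrative Deployment to scale
  minReplicas: 2
  maxReplicas: 20
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
```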
Sai, I just wanted to circle back around to something you were talking about earlier, about when to choose open source versus something offered by a platform and stuff like that, and sort of dovetail on what Chris was saying there. Because my initial answer to your question, if you focus on the basic question of open source versus an offering from a provider, is this: I would look at the open source technology first, just to see if the base functionality looks like it's going to solve your needs. And, you know, use that, play with it. Obviously it's free, it's low cost, you can install it, play with it, have your devs go have some fun with it, right? But at some point, then you get into what Chris was talking about and say, okay, great, this technology at a base level suits my needs. Do I now want to manage this myself going forward? And that's when you start looking at, okay, how much time do you want to invest in managing it, versus having your developers work on your business logic, your actual stuff that makes you money, right? And that's when you can decide: okay, I need to manage this myself, versus, no, I want to be on a platform which just allows me to install it, versus a platform that will install it for me and manage it, or go one step further and do what you're seeing on other platforms, where it's offered as a service, right? Where you just deploy your application and everything's hidden from you. So you've got a whole breadth of things to choose from.

And this goes directly to what Wolfgang asked in the chat here, right? He, for whatever reason, has a real trust problem with managed services. Okay, fine. But he could still leverage the open source technology at a layer where he can manage it himself, while someone else is perfectly okay with IBM and Red Hat managing it for them. But the point is that they can stick with the core open source technology; then they get the freedom to move around if they need to, right? They get the same core technology, different providers, different levels of managedness, if you want to call it that. But they don't have to necessarily swap out everything just because they're going to switch from one provider to another, right? The core technology should be the same wherever they go. So they get that interoperability and portability aspect, but still have the choice of how much they want to manage themselves. Anyway.

And I think that's a great point that you made, on how users are able to start with open source, investigate, and then choose to go with the managed approach. You know, going with the managed approach doesn't mean you're going with an entirely new platform. It's not like the platforms have forked and gone off in completely different directions. For a platform like IBM Cloud, we work in open source. So although we have our managed offering, whenever there are key changes or customer requirements or that kind of thing that force us to create new features, and this is the same for Red Hat, the same for any company working in open source with managed offerings, those changes are then contributed back into the community. So I'd like to quickly touch on what IBM has been doing in the community, and specifically what we've been doing around things like Helm, operators, and the OperatorHub, these key open source capabilities where we're taking our expertise and know-how and contributing back. So if we can touch on that a little bit, I'll open that up to the panel for anyone to chime in.

I mean, I can talk a little bit... oh, Chris, you go first, and then I'll talk about databases.

Sure. So there is an IBM Cloud Operator that you can get on OperatorHub.io.
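As a reference for readers, operators listed on OperatorHub are typically installed through the Operator Lifecycle Manager by creating a Subscription like the sketch below; the package name, channel, and namespaces here are illustrative, so check the operator's listing for its actual install snippet.

```yaml
# Subscribe to an operator from an OperatorHub catalog. OLM installs it
# and keeps it updated along the chosen channel.
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: ibmcloud-operator
  namespace: operators              # illustrative target namespace
spec:
  name: ibmcloud-operator           # package name as listed in the catalog
  channel: stable                   # illustrative channel
  source: operatorhubio-catalog     # catalog source for OperatorHub.io
  sourceNamespace: olm
```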
So we're obviously excited about that, to be able to run some of the fundamental commands and things within the platform. There are also a lot of different teams contributing, and Josh will dig into that from the database side and what they're doing. But we are completely aligned with Red Hat's strategy around operators, and adopting that methodology to simplify deploying our content. And one of the things that got announced at Red Hat Summit earlier this year was the Red Hat Marketplace. Basically, it's IBM and Red Hat and our ISV ecosystem, and in OpenShift 4.4 it brought the marketplace into OperatorHub. So now I deploy an OpenShift cluster and I see all of that content: IBM-provided, Red Hat-provided, the ISV ecosystem, and it allows me to quickly deploy that content. And again, it simplifies not only deployment but then ongoing lifecycle management. So again, it just moves my responsibilities up higher.

Excellent. And just a quick heads up, we've got a couple more minutes remaining. So Josh, I wanna let you answer your piece on the databases and open source front, and ideally we can keep it quick.

I'll keep it short. So I was just checking out OperatorHub, and IBM has a whole host of new content on there, even since the last time I checked. So we have the IBM Cloud Operator, there are storage operators from Spectrum, and there are product-specific operators, like Event Streams for Kafka and streaming, or for IBM Cloud Object Storage. On the database front, we've been really thrilled with the release and the reception of the Apache CouchDB operator that comes from IBM. CouchDB is really good at moving data around, wherever you need it to be. So one of the roadblocks that I see customers have with Kube is that their applications are supposed to be portable, but data is not portable. CouchDB helps folks solve that, and we've seen good uptake there. And over time there's gonna be a lot more investment from IBM in the community, especially around data and operators. So look forward to that.

Excellent, thank you. And I just wanna say thank you so much to all the panelists that have joined me here today, as well as the audience for tuning in. Diane, back to you.

All right, well, I couldn't say any of this any better than you guys. It's wonderful to have you on today. It's wonderful to see the enthusiasm for open source, for OpenShift, and all things Kubernetes at IBM, and it's been a really interesting growing experience getting to have the extended community participation of IBM and the Red Hatters doing that. So it's really been kind of an exponential growth in the number of people contributing to the open source projects that we've all been working on together. And it's been great getting to know you. So thank you for taking the time today. We definitely are gonna get each one of you back on for a full hour-long deep dive on these topics, because every one of them is something we wanna hear more about. So thank you for taking the time today to introduce yourselves and being part of the OpenShift Commons. Really appreciate your participation.

["Pomp and Circumstance"]