Good morning, everyone. So if you're interested in IBM Cloud Private and Red Hat OpenShift, come on in and we'll talk a little about what we're doing today.

So what we announced yesterday was a joint partnership to bring together IBM Cloud Private and Red Hat OpenShift under one hybrid solution. And, for some reason my layout is a little bit off here, so let's try this one more time. All right, we'll push forward.

Really, the highlights are: delivering certified Red Hat Enterprise Linux containers for IBM middleware, and for the platform itself, running on top of Red Hat OpenShift; being able to mix and mingle the certified RHEL images for IBM middleware and open source together; and then being able to actually deploy IBM middleware everywhere that OpenShift is supported today.

So I want to talk a little bit about the things that we have found with clients as we talked to them about their various application patterns and their workloads. In particular, what we see are three primary workflows for applications, and then data governance. The first is creating new applications that are based on microservice architectures. Then, extending existing architectures with new interactive APIs to create new systems of engagement. And then finally, lifting and shifting existing workloads to optimize how they're deployed, how their cost is managed, etc. So we found these three patterns, lift and shift, extend and enhance, and create or refactor new microservices, and we wanted to build a platform around Cloud Private that could address all of these different use cases.

So this is one of several deployment models that we have with IBM, the first being IBM public, which is our hosted, managed IBM Cloud solution offering managed Kubernetes along with other services like Watson, blockchain, and IoT. Then dedicated, which is an environment that is isolated to your production workloads, but ultimately is still leveraging some shared infrastructure like networking, etc.
And then finally private, which is our software form factor: you deploy it on your infrastructure, wherever that is, your data center or other cloud providers.

And so Cloud Private itself is made up of four key components, and part of this announcement is that instead of bringing our own Kubernetes, we can actually leverage the OpenShift Kubernetes and run directly on top of that. But we'll still bring the common services that run on top of that layer. This includes how we build and collect logs, how we manage the health of the application, how we manage alerts, how we actually deal with licensing consumption, and common security.

And then IBM middleware. So this is the content, and this is kind of the critical aspect here: being able to deploy IBM middleware directly on top of OpenShift in a fully supported way, with each of the pieces that gets deployed automatically tying in to this common infrastructure that's made available on the platform.

And then we still provide Cloud Foundry as well. We find some clients who still have a need for Cloud Foundry; they want a very tight, very opinionated way of building applications. But Cloud Foundry doesn't offer the same flexible models that we have with Kubernetes in terms of how I manage stateful workloads, how I manage middleware, messaging, etc. And that's why we pivot a lot of our content there on Kubernetes.

So with Cloud Private, we focus on OCI-compatible Docker images running on Kubernetes.
We use Helm as our packaging mechanism. We like Helm for various reasons, but above all it provides us with an open way to both package our IBM middleware and allow you to build and add to the catalog as needed. And then we use Terraform as our cloud provisioning layer, so anytime we're provisioning compute, network, and storage in different clouds, we can actually extend those Terraform templates and manage them directly in the catalog alongside the Helm charts.

At a high level, this represents the different runtimes that we're able to run on, and with this announcement we're now able to actually substitute in Red Hat OpenShift, basically replacing this box for Kubernetes; the other boxes remain the same: Terraform, Cloud Foundry, the common services, and the middleware.

So this was the architecture chart that we showed yesterday at the keynote. It highlights the ability to run across different infrastructure, and with OpenShift, they've already certified several different clouds.
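To give a flavor of the Terraform provisioning layer mentioned a moment ago, here is a minimal, hypothetical template fragment of the kind that could be managed in the catalog alongside a Helm chart. The resource, attribute values, and sizing here are illustrative assumptions, not taken from the actual product content:

```hcl
# Hypothetical Terraform fragment: provision a small VM that could serve as
# a worker node. All values are placeholders for illustration only.
variable "datacenter" {
  default = "dal10"
}

resource "ibm_compute_vm_instance" "worker" {
  hostname   = "icp-worker-01"
  domain     = "example.com"
  datacenter = var.datacenter
  cores      = 4
  memory     = 16384
}
```

Because templates like this live in the catalog next to the Helm charts, the same self-service flow covers both provisioning the infrastructure and deploying the workloads onto it.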
We saw the announcement yesterday with Azure, and we have IBM Cloud support coming for OpenShift as well. And so then, on top of that, we're running Red Hat Enterprise Linux, then OpenShift, and then the layers above, in blue, are being provided by IBM Cloud Private as certified RHEL containers. So now you have a fully supported stack from the bottom all the way up to the application. And then the common services and the catalog allow us to both deliver more content to you and allow you to extend that with your own content as needed. And so you can mix and match, again, both IBM middleware and open source components.

I'll leave this up just for a moment to highlight some of the content that's available today. This represents what was available as of 1Q; it continues to grow on an ongoing basis. In the open source category: database services like MongoDB and PostgreSQL, and web terminal access if you want a web-based shell. Then on the enterprise side: components for DevOps like UrbanCode Deploy; obviously Liberty, Node.js, and MQ; lots of variations of Db2 depending on your scale and your needs; and Cloud Automation Manager. This is the component that helps us build and manage Terraform templates: as we actually deploy Cloud Automation Manager, it integrates itself into the catalog and then begins to bring Terraform template content into the catalog directly.

And then we have a long history, with part of the team that's in fact building Cloud Private, in our HPC space. So Spectrum Symphony and LSF, which are high-performance computing products that have been used at very, very massive scale for quite some time, are also now available to run on top of the Kubernetes platform and take advantage of the way that it manages compute.

So here we'll highlight a couple of key pieces, and hopefully, yep, we should have plenty of time.
We'll do some live demos here, but we'll look at these four value propositions that are key to the way that we deliver Cloud Private. The first is being able to quickly deploy and get up to speed with new applications; this helps us support the use case where we're creating new microservices or refactoring existing services. Then hybrid integration: being able to connect to external services, whether that's an AI service like Watson, or messaging, or other security services as needed. Then deploying the actual IBM middleware directly in the platform. And then, of course, the management console that surrounds it. So with that, let me actually switch over here.

All right, perfect. So... nope, the TV is not on. All right, we'll get that fixed real quick, bear with us just a minute here. All right, perfect.

Okay, so here what we're seeing is the actual catalog of content, and in this case we've got pieces of IBM middleware as well as those open source components we were talking about a few moments ago. Now we'll go through and actually do a quick deployment. In this environment I'm interacting with Cloud Private, but this is actually running on top of OpenShift. So the OpenShift console and all of its interactions with the underlying Kubernetes API are going to apply one for one here, because what I'm looking at in Cloud Private is also based on the Kubernetes API. So if I look at the actual deployments and other resources, all of the namespaces that I see here are also the same namespaces that I have exposed as projects in OpenShift.
So it's the same environment. Now, if I go through the catalog, we'll pick on MQ to start with. This is a Helm chart; all of the values that you see here are part of the parameter values that I can supply to the chart. I can do this through the UI, where we provide some content assist and help to guide the user as needed, but I can also do this through the command line. And the thing that makes the command line so important is that that's how we would integrate it with a CI/CD pipeline.

We'll pick the new-world target namespace, and then I could select other options here as well. In this case, for MQ, I haven't enabled my persistence layer in this cluster, but I could bring any persistence that's supported on Cloud Private or OpenShift. So there I could have dynamic provisioners, whether it's Gluster, IBM Spectrum storage, or other storage backends. We'll set a queue manager name (a queue manager is used by the application to interact with its messages), then here we'll set a password, and then click install.

Now, at this point, all of the resources that are required to run MQ are actually being deployed. So if I look at the Helm release (the Helm release is the deployed version of the chart), here we see the StatefulSet, the service that it exposes, and the secret holding the credentials, the passwords and things that we configured a few moments ago.

And if I go into OpenShift, I'll see the same information: I see the StatefulSet, I see the pod. In fact, we'll go ahead and create a route. We'll bind it to the port for the web console, and since the web console in MQ is using TLS, we'll do passthrough TLS termination, and then click create. So now this route that I've just created has actually exposed that MQ service that we deployed a moment ago. And here it'll come up and it'll redirect us to the login console, and we'll log in. The key thing we're showing here is that this is real, right?
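The route created in that demo step corresponds to a small piece of OpenShift configuration. As a rough sketch, the equivalent Route object would look something like the following. The service name and namespace are assumptions for illustration, and 9443 is assumed here as the MQ web console port; check your chart's actual service definition:

```yaml
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: mq-web-console
  namespace: new-world
spec:
  to:
    kind: Service
    name: my-mq-ibm-mq        # hypothetical service name created by the Helm release
  port:
    targetPort: 9443          # assumed MQ web console port
  tls:
    termination: passthrough  # TLS is terminated by MQ itself, not at the router
```

The same thing can be done from the command line with `oc create route passthrough mq-web-console --service=my-mq-ibm-mq --port=9443`, which is how you would script it in a pipeline.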
It's not smoke and mirrors; there's actually a real MQ service running as a container. If I had bound storage into it and the container or pod were to fail, it would automatically do failure recovery: bring up a new pod, mount the same storage, etc.

And at this point, as a developer, I have a self-service way to get access to components that are going to be part of my production system. As an operator, I have a consistent way that I can deploy and manage from inception all the way through to production. The idea here is not that you're always going to be clicking through the catalog as part of your DevOps lifecycle, but this becomes the foundation that you can then use to automate your entire DevOps pipeline.

Helm charts, along with Kubernetes resources like StatefulSets and Deployments, automatically have built-in behavior to do continuous rolling updates. So with Helm charts, you're going to push out new updates for the release. And in fact, in this environment, if I click back over to my available releases, we also provide cues to help you understand, in this case, that I had deployed 1.2.0 a couple of weeks ago and now there's a newer version available. So we can look across all the pieces of middleware that you have running, your databases, your messaging, etc., and help you understand when there are new updates available for you. And that's true for all of our middleware, but also for your applications as well.

Okay, so the other thing that we'll highlight here: it's not just about deploying. When I actually deploy something, everything deployed directly out of the catalog automatically gets tied into the common operations plane. Kubernetes is wonderful for running applications, but it still requires additional work in order to really integrate it into the data center. So what we're doing is actually doing all of that out of the box. So here we come with a common set of dashboards.
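As one concrete illustration of how metrics collection like this is commonly wired up on Kubernetes, a pod can carry the conventional Prometheus scrape annotations so a collector discovers it automatically. This is a hypothetical sketch using the common community convention; the pod name, image, and port are placeholders, and the exact mechanism the platform uses internally may differ:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: trader-service             # hypothetical microservice pod
  annotations:
    prometheus.io/scrape: "true"   # conventional hint for Prometheus service discovery
    prometheus.io/port: "9080"     # port where the app exposes metrics (assumed)
    prometheus.io/path: "/metrics"
spec:
  containers:
    - name: trader
      image: example/trader:1.0    # placeholder image
      ports:
        - containerPort: 9080
```

The point of baking this in is that every workload deployed from the catalog shows up in the dashboards without the application team having to configure collection themselves.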
We automatically deliver Prometheus collectors for different pieces of middleware, and the idea is to optimize the experience for both our middleware and your applications, so that all the pieces that you would otherwise have to stand up, log collection, alert management, health metrics, auditing, etc., are stood up for you out of the box to save you time. And in this example, we can see the Stock Trader application: all of the different pods that are running, whether it's MQ or Db2 or the Java-based microservices on Liberty, are automatically sending in metrics.

And then if I switch over to the new-world namespace, we'll see the pods that were born here just a few minutes ago, right? The pods that we started deploying automatically began to get tracked. You can see where I cleared the environment out a few minutes before, and then the new ones actually pop right in. So there's nothing extra that has to be done; it's automatically built into the entire lifecycle.

And the other thing that we showed yesterday on stage: when I look at those different pieces for Stock Trader, I'm looking not only at the capability to run traditional microservices, but also at these other pieces of software like ODM. The example we showed integrates something like ODM for business rules along with our AI services from Watson. So here, Tone Analyzer is actually running in IBM Cloud, and the application running on top of Cloud Private is consuming that service. And so what we're doing is tying together the complete architecture, where I have a service that collects data and feedback from the user and then submits it to Tone Analyzer and asks: what's the tone? Is this happy? Is this angry? Is this sad?
And then that input becomes one of the pieces that allows us to make a decision on the loyalty program, and that's where something like ODM comes in, to actually encode the loyalty program rules. So if they're angry, maybe I give them extra credits or I give them an additional call, right? Something to help go back and make sure that the customer relationship is always kept very healthy.

And so we'll actually go through and show that here. The portfolio application itself is simply there, listing my stocks, and if I look at my user details, something happened earlier and I was angry, maybe when I actually kicked the plug and the TV went off. But in any case, I can go back in and submit feedback, and it's going to go back out and make an API call to Watson Tone Analyzer. It's going to take that text and figure out: what's the context? What's the tone that this is bringing in? And then it's going to go back to ODM and say, okay, what do I do now that I know the emotional state of this user? In this case, if they're happy, we have business logic that says one free trade, right? So you get one free trade as part of your portfolio. If instead I go back and give it angry feedback, the business logic actually comes back and says, uh-oh, we need to give them three, right?
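In the real demo this decision lives in ODM as business rules, with the tone label coming back from a Watson Tone Analyzer API call. As a minimal sketch of the same rule logic, assuming the tone label has already been extracted from the API response (the function name and the zero-trade default are illustrative assumptions):

```python
def free_trades_for_tone(tone: str) -> int:
    """Toy version of the loyalty rules described in the demo.

    In the actual application these rules are encoded in ODM and the tone
    label comes from Watson Tone Analyzer; this function and its default
    case are illustrative assumptions, not the product's implementation.
    """
    rules = {
        "happy": 1,  # happy customer: one free trade
        "angry": 3,  # angry customer: extra trades to repair the relationship
    }
    return rules.get(tone, 0)  # assumed default: no free trades otherwise


if __name__ == "__main__":
    print(free_trades_for_tone("happy"))  # 1
    print(free_trades_for_tone("angry"))  # 3
```

Keeping the rules in a decision service like ODM, rather than hard-coded as above, is what lets the business change the loyalty policy without redeploying the microservices.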
We're going to give them a little bit more, something else to demonstrate that we care about their opinion and that we want to try to reconcile whatever thing we did that damaged this relationship. And all of this is running on top of OpenShift, right? So if I go back and look at my deployments here, and again I'll just pull up the OpenShift console one more time and switch over to Stock Trader, all of the pieces, including all the middleware, are containerized.

All right. And if you want to actually try this out yourself, there are two good options. One, you can just go to GitHub, clone a repo, and try the Community Edition. And then there's an option that actually has full environments all stood up and ready to go: you click through, and in two minutes you're actually running your own Cloud Private, able to kick the tires today. These are still using our own Kubernetes environment from Cloud Private. We'll begin the tech preview process for ICP running on OpenShift through the end of this quarter, and we anticipate having it fully GA sometime in 3Q. So if you're interested in participating in that, reach out to me, @mdelder on Twitter, and I'd love to help you get involved with that and collect feedback.

And thank you so much for your time.