Today, organizations are building applications that are transforming their industries. To differentiate their offerings, companies must constantly strive to build innovative new processes faster, automating workflows and scaling solutions. Red Hat and Portworx have been collaborating for years to accelerate time to value for crucial projects with organizations around the globe. Today, we'll hear from leaders at Red Hat and Pure Storage about the challenges organizations are facing and how to accelerate your business outcomes.

Hi, I'm Sarah Kirk, Content Strategist for CIO Marketing Services. With me today are Andy Gower, Head of Partner Marketing for Portworx by Pure Storage; Simon Doddsley, Director of Open Source Integrations for Pure Storage; and Michael St. John, Technical Alliance Manager for Red Hat. Before we get started, I'd like to thank today's sponsors, Red Hat and Pure Storage. And of course, thank you to our audience for joining us. Now, let's go ahead and meet our guests. Andy, thank you for being here. Can you tell us a bit about your work?

Absolutely. Nice to meet everyone. My name is Andy Gower. I lead the partner ecosystem work here at Portworx by Pure Storage. I've actually been here two years as of this week, so it's my two-year anniversary. I look forward to chatting with you all about what Portworx is doing with our partners, specifically with Red Hat today.

Great. Thanks, Andy. Simon, what's keeping you busy?

Hi, everyone. My name is Simon Doddsley. I am Director of Open Source Integrations at Pure. I've been with Pure for nine and a half years now, and I've been focusing very much on our API integrations for anything open source related, specifically around orchestration, observability, and, pertinent to today, automation.

Great. Welcome. And Michael, tell us about your responsibilities. Yeah, sure.
So I manage the Global Technical Alliances team as part of Red Hat's partner ecosystem organization, and my focus is really on co-development opportunities with our partners that help organizations with some of today's most impactful strategies: cloud-native app development, modernization, automating the enterprise. I know that's a bunch of buzzwords, but if you think about the digital landscape and how it's changed in the past 10 or 20 years, we've pivoted from driving efficiencies through technology to truly effecting change and transformation in the types of products and services that businesses, institutions, and governments are providing. People are really transforming their industries and the world around us. Think about things like automated driving or ChatGPT, and it's not just the bleeding-edge initiatives; think about the ability to just pick up your phone and deposit a check, or order dinner and have it delivered, right? So my responsibilities are really working with partners like Pure and Portworx to enable organizations to differentiate and even transform business use cases. It's a really exciting area to work in these days.

Great. Very exciting. Thank you, Michael. Thank you to each of you for being here. We have a great discussion ahead of us, so let's go ahead and dive in. Andy, could you set the stage for us first by telling us a bit about the evolution of the cloud and how we've come to this point?

Yeah, absolutely. I think about the cloud in three distinct phases. The first one dials back maybe 10 years, to about 2012 or 2013, when cloud 1.0 came out. This was the initial phase, when this whole idea of cloud came to market. We started hearing, "I've got to move to the cloud. I'm going to put all of my applications in the cloud."
And it became a very popular idea: look, I can just take all of my applications out of my data center, stop paying for that data center, run them on the cloud, and it'll solve all my business problems. We heard it from boardrooms. They'd say, hey, I've moved to the cloud. That was the buzzword on earnings calls; that was the buzzword for talking about technical strategy to the market. What we saw, though, is that that lift-and-shift, cloud 1.0 model didn't really work. If you just took an application and moved it to the cloud, all you were doing was moving the cost center. Instead of paying for the data center, you were paying to run that same application, maybe even more so, on the cloud. You weren't getting any efficiencies. You weren't getting any value from that. You were just shifting around your costs.

Fast forward to maybe 2017 or 2018, and we started to see the notion of containers and Kubernetes come to the forefront. This idea of: what if we abstracted just a little bit, made it a little more lightweight, and put that on the cloud? We started building applications on that container and Kubernetes layer, deploying them on the cloud, and running those workloads there to gain efficiencies. And we started to see some traction in that cloud 2.0 phase: people testing stateless applications in the cloud, or starting to build net-new cloud-native applications on the cloud. But it wasn't really until the last one to three years that we've seen a big uptick in what I call cloud 3.0. And that's the adoption of stateful workloads on the cloud.
That's the adoption of built-for-the-cloud applications that have data associated with them, applications doing the things Michael talked about: allowing you to process credit card transactions in real time, take a picture of a check on your phone and cash it in real time, or rebook your flight from your phone. That requires data, and cloud 3.0 is all about how I get that data into the cloud using this container and Kubernetes apparatus and take advantage of it to grow innovation within my business.

Yeah, actually, Andy, if I could just interject a little bit: some of the things that we see, especially now with more controls around data sovereignty and data egress costs from public clouds, really come to bear. And since we still have a lot of traditional applications, processes, and workflows that depend on applications running on prem, the ability to have portability across different infrastructures is key, whether it's on-premises bare metal, virtual machines, public or private clouds, or, even more so today, edge deployments, and being able to hit all of those points. That's where containers and Kubernetes really have a distinctive play, in offering that type of portability across all of those different types of environments. And I think it's very important as we look to the future.

Great points, really well said. With that in mind, what does container-native storage mean? And why do I need a service like that provided by partners? It's a great question.
So everything that we just talked about around cloud 3.0, this idea of moving state into the cloud, into a hybrid environment, being able to manage state and storage as it moves around from on prem to public cloud or in a hybrid deployment, all of that requires something to manage that data, that state, everything about that application, so that it doesn't break. When we initially got to this phase, people started to use CSI. They started with this idea of: why don't I just match my storage to the underlying array? Why don't I just use what I've already paid for, what I've already bought and have in my data center, and use that for storage? The problem with that approach is that you're still tying the application to a physical server or a physical location, and you're using storage and data associated with that server specifically. What's great about containers, Kubernetes, and OpenShift is the portability, the ability to move those applications whenever you want, wherever you want, and scale them up and down. To do that, you need a storage layer that is an extension of Kubernetes, that speaks the same language as Kubernetes and allows you to scale and be portable right alongside the application. So what container-native storage does is sit on the Kubernetes control plane and bring that storage and data management into the application. As you're doing those various things, you're maintaining the disaster recovery and the day-one and day-two management operations you need, tied to the application, not to the underlying storage it's sitting on. Yeah, those are all great points.
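To make that concrete, an application consumes container-native storage through ordinary Kubernetes objects. Here is a minimal sketch; the provisioner name and parameters are illustrative, not an authoritative Portworx configuration:

```yaml
# A StorageClass backed by a container-native provisioner, and a PVC
# an application can claim against it. Names and parameters below are
# illustrative examples only.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: px-replicated
provisioner: pxd.portworx.com   # Portworx CSI driver name (verify for your install)
parameters:
  repl: "3"                     # e.g. keep three replicas of each volume
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: postgres-data
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: px-replicated
  resources:
    requests:
      storage: 10Gi
```

Because the claim references a storage class rather than any physical array, the application stays portable: reschedule the pod anywhere the provisioner runs and the volume follows.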
And beyond that, if you just think about running in an OpenShift or Kubernetes type of environment, it's not the same kind of deployment as you would have in, say, a traditional environment where your state is held at the server level. You need a persistent storage layer to keep the state of those applications within the context of a cluster and cluster management. But one of the things that Portworx brings to bear, beyond just the storage, is actually delivering Kubernetes data services to the applications as well. Think about the things that would normally be taken care of in a traditional environment: security, and not just encryption at the cluster layer but down at the volume layer, plus governance, resiliency, data discovery. All of these services need to be encapsulated within the Kubernetes environment, and that's what we bring to bear as well with the Portworx partnership.

Yeah, and if I can just add on to that, the second part of the question, why Portworx specifically to address this need: what makes Portworx unique, what sets it apart from other options you could pursue in the market, is that Portworx covers all of these things Michael talked about from a single platform. There are point solutions you could find to address certain pieces of this puzzle, but you're still trying to stitch four, five, six different things together to provide the set of data services you need for your application.
What Portworx brings to the table is everything you need to get started on day zero, day one, and day two, and to manage the storage and data associated with your application, whether it's security, providing the persistent storage layer itself, providing disaster recovery and backup, or helping you automatically scale up and down so that you can optimize and intelligently manage your costs under the covers. All those things come from the Portworx platform, and it's unique in being able to deliver them rather than making you stitch different things together to make it work.

Could you talk a bit about why I need to be concerned with backup and recovery? Can't I just use my data center data protection solution?

Andy, maybe I'll start off and you can talk a little more specifically about what you do with Portworx. If you think about a traditional environment, as I just mentioned, the state is held by the server, and typically you're mounting some storage to that server for the application. But in a Kubernetes and OpenShift environment, Kubernetes is ephemeral in nature, so if you lose a pod, it restarts, but it doesn't have context, and therefore the application has lost its state. The way we tackle that is by having a persistent storage layer that actually ties to the application to give it its state. Now, when you do a backup of that environment, what you need to keep in mind is: if you do a backup of that data, everything fails, and you bring it back up, that data has no context for what it was running, and therefore the application will still fail; it has no way of connecting back up. So within a Kubernetes type of environment, you need APIs that are cluster aware and application aware, and that provide the context of that application and that data together.
And in that way, we're able to do a backup and then, once something fails, restore it in context to the application and the cluster it's running in.

Yeah, I think that leads perfectly into the second part of the question, which is: why can't I use what I have today in my data center? I'll split that into two answers. The first, to everything you just said: a lot of folks have a traditional backup and recovery tool that was built for a virtual machine world. So the context it has is virtual machines; it's going to snapshot a whole volume, snapshot a single location, and restore it to a single second location. The thing about containers and Kubernetes, as we've talked about a few times today, is that they are distributed in nature. They are by nature scaling at a rate that is tens, hundreds, or thousands of times bigger than a virtual machine environment. So you have to have a backup and recovery tool that matches the scale and the distributed nature of Kubernetes and containers, and that's not something a traditional backup and recovery tool is able to do.

Now, the second part of my answer: why can't I use what's in my data center? I've got this great box in my data center, it comes with disaster recovery and data protection built in, why can't I use that? Well, the problem is, the value of OpenShift and Portworx is being open and hybrid in nature: being able to move that application, be portable with it, scale it, burst it to the cloud for short periods of time. Whenever you do all that movement, if your disaster recovery and data protection solution is tied to a single array or a single set of arrays in your data center, as soon as you move the application, you lose that data protection. You've got to figure out a different way to protect that data once it's off that array.
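The cluster-aware snapshot APIs described above are part of standard Kubernetes. As a minimal sketch (the snapshot class name and PVC name here are hypothetical):

```yaml
# A Kubernetes-native snapshot of a PVC. Unlike an array-level snapshot,
# this object lives in the cluster alongside the application, so a
# Kubernetes-aware backup tool can capture the volume together with the
# Deployments, Services, ConfigMaps, and Secrets that give it context.
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: postgres-data-snap
spec:
  volumeSnapshotClassName: px-snapclass   # hypothetical snapshot class
  source:
    persistentVolumeClaimName: postgres-data
```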
What Portworx does is provide you with a consistent set of disaster recovery, backup, and recovery features wherever that data lives, wherever that application lives. It allows you to keep that application portable without relying on the underlying storage to provide that data, security, and recovery capability. And so that's why you have to separate the notion of your data center from your disaster recovery and backup solution. You have to have something that can travel with the application, that isn't tied down to the underlying physical storage or cloud infrastructure the application is built on.

Yeah, I see. Thank you. Thanks for those explanations. So how much work is it to install and use Portworx?

Yeah, sure. As part of our partner ecosystem, we work with partners to make sure that they're certified in the OpenShift environment and that they have an OpenShift operator to work with. There are several capability levels, the top being a Level 5 operator, and Portworx has been working with us for years to have that higher-level, Level 5 OpenShift operator. Now, this goes beyond just basic installation; it includes day-two operations, seamless upgrades, full lifecycle management, insights, and an autopilot capability within the operator as well. So there's a lot of great synergy there. But beyond that, we've been looking at a lot of different co-development initiatives. One of the things that we'll be bringing to bear very soon is this concept of a hybrid cloud GitOps pattern. If you think about a distributed pattern with a lot of different applications running within it, typically you can set up a nice reference architecture of how everything should work together.
But typically what happens is, once you get into a proof of concept or you're trying to deploy that type of environment, you tend to run into some issues and you have to work a little bit under the covers to make everything fit together. With a hybrid cloud GitOps pattern, what we do is bring together operators, and Helm charts for the configuration piece, within a declarative GitOps and pipelines deployment, so that everything is codified. It makes it easy to deploy and easier to configure for all of these different types of architectures. But it also includes a CI/CD pipeline so that you're able to keep everything up to date. It makes it more repeatable, so you don't have to reinvent the wheel from a blueprint every time; it's all codified. It makes it extensible, so you can put it on prem, out to the cloud, out to the edge. And it really helps with scalability as well. These types of things we're working on with Pure and Portworx are, I think, very helpful for the next level of ease of deployment and configuration in the field.

Yeah. And if I can just jump in on top of that, from a licensing and getting-started standpoint, we also make it very easy for OpenShift users to get started. So reach out to us. We've got a free offer for you to jump in and get your feet wet. Today you can get access to the full functionality: the backup, the disaster recovery, the persistent storage layer, everything. We've got that ability today to let you get in with a very low barrier to entry, so you can start using Portworx and OpenShift together today and start seeing all that value from day one. Great.
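A hybrid cloud GitOps pattern of this kind typically boils down to declarative resources like the following Argo CD Application (Argo CD is the engine behind OpenShift GitOps); the repository URL, chart path, and values file here are hypothetical:

```yaml
# An Argo CD Application that deploys a Helm chart from Git. Everything
# is codified: the cluster continuously reconciles itself to what the
# repository declares. Repo URL, path, and values file are hypothetical.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: storage-services
  namespace: openshift-gitops
spec:
  project: default
  source:
    repoURL: https://git.example.com/platform/hybrid-cloud-pattern.git
    targetRevision: main
    path: charts/storage-services     # Helm chart carrying the configuration
    helm:
      valueFiles:
        - values-prod.yaml
  destination:
    server: https://kubernetes.default.svc
    namespace: storage-services
  syncPolicy:
    automated:
      prune: true      # remove resources deleted from Git
      selfHeal: true   # revert manual drift back to the declared state
```

The same manifest works unchanged whether the destination cluster is on prem, in a public cloud, or at the edge, which is what makes the pattern repeatable and extensible.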
One of the great benefits of Kubernetes and containers is the ability for application developers and others to self-serve, creating and managing their own projects quickly and getting a cloud-native experience regardless of the underlying infrastructure. So how does working with Portworx help?

Yeah. I'll take the first part of this one and then let Michael chime in. We've talked around this a little bit today: the value of Portworx and OpenShift is unlocking storage and data management capabilities for the enterprise. Why is that so important? How does that eventually help my developers? Well, today we know that developers are the ones driving the next set of innovative applications. They're the ones making all of the things we talked about at the beginning of the conversation: the new app for rebooking your flights, the connected car, the app on my phone so I can order lunch straight to my door. All of that has to be built by someone, and those developers want the services and tools they need to build without waiting for operations and the rest of the IT function to catch up. As that function has grown, operations teams have been overwhelmed: how do I support the platform? How do I make sure it's secure and protected, that the storage is available and highly available, and that everything I need to maintain the application is there? What we've talked about today is how Portworx, in conjunction with OpenShift, can deliver the storage and data management and the automation you need to keep the platform up and running. The flip side is that by keeping the platform up and running and automating the back end, you're able to offer your developers self-service.
So leveraging something like Portworx Data Services, you can provide a self-service, managed, database-as-a-service option for your developers, where they can come in, click on the database or data service they want to use, deploy it, and be off and coding immediately. Meanwhile, on the back end, as an operator, you know that the security policies are in place, along with the failover, the disaster recovery, the backup, everything we've talked about today for day-zero, day-one, and day-two operations, but your developers are off and running without waiting for you to go through provisioning steps or a ticket.

Right, yeah, exactly. It's really all about accelerating time to value, right? So developers, and not just developers: analysts, data analysts, data scientists who are building ML models for artificial intelligence, all of these people are under extreme pressure today from their business leaders to come up with innovative new projects and get them to fruition. This is really the benefit of Kubernetes and Red Hat OpenShift in general: putting the resources in the hands of those developers, data scientists, and analysts. Because if you think about how some of these larger, monolithic application platforms were typically developed before, you would go through a huge, long cycle and then have to look to your IT ops folks to provide the resources you needed to develop. It got to a point where a lot of developers were just saying, hey, I'm going to plunk down a credit card, go to the cloud, and start my development there, because I don't have time to wait, right? So this is really the beginning of the promise of Kubernetes and OpenShift: developers getting that cloud-like experience anywhere they're developing.
But beyond that, it's a question of: how do we get this self-service nature around stateful workloads? If we look back a few years, a lot of what was being developed was stateless applications, whereas today we see more and more stateful applications being built and deployed. These things need databases, data caches, analytics, machine learning, artificial intelligence. There's a whole set of data pipelines and event-driven architectures based on these things. And how do we give those resources to somebody in a self-service manner so that, once again, they're not waiting on IT ops to come and deploy what they need in their environment? This is exactly the promise we have with things like Portworx Data Services, and why we're working very collaboratively with Portworx on this initiative to get it into the hands of developers.

So Michael, I think you started to touch on this a little bit, but IT operations teams have their processes in place, and they use scripts and cron jobs to automate some processes already. So why do they need to consider automation?

Yeah, I will. So with scripts and cron jobs, they were great for their time, but human error tends to creep in. There are employee attrition rates; skill set replacements are very difficult to find; things of that nature. So as you're building out these deployments, they're typically prone to problems over time, and there are issues with bug fixes moving forward as well.
And so adopting a platform such as the Ansible Automation Platform from Red Hat really gives you the ability to create automation for different applications, environments, and platforms. Whether you're looking at network automation, platform automation, or application automation, being able to write these in an automation platform where everything is scripted and human readable means you can bring in new people who hit the ground running, modifying or adapting those automation scripts over time. I think that's very important. And I know we've been working very closely with our partners, especially Pure Storage. Simon, if you'd like to pitch in a little about what you've been working on with automation and Ansible, that'd be great.

I would love to reiterate what you just said there, Michael: scripts and cron jobs were great for their time. They really were. But they do go wrong, and we've all been there. Whether those scripts are Bash scripts or handwritten procedures that people follow, people go word-blind, people mistype things, and that can lead to errors. Those errors can be annoying or they can be catastrophic, depending on what the error is, and that's the last thing you want. So automation is really the best way of making sure that what you do is error-proof and repeatable. And repeatability is really important as far as automation is concerned. Over the last few years, certainly through the pandemic and post-pandemic phases, there has been a lot of pressure on companies and IT organizations to do more and more automation, to do more with less. With the people they've got, they want to be able to utilize their time much better.
And therefore, instead of people running jobs in the evening, doing overtime and all those sorts of things, they can use an automation platform such as Ansible Automation Platform to do that. It is incredibly helpful going forward. And doing more with less is always best. So yeah, I completely agree. Pure has been working a lot on this. So, go on, Michael.

To that point exactly, Simon: it's really not just about driving IT operational efficiency, but also about being able to build responsive IT services on that platform, which really accelerates innovation throughout the solution.

Yeah. And that self-service is backed by automation. You can request storage and servers at one o'clock in the morning, and you don't have to wait for the IT team to come in the next day, or wait for Monday or whatever it is. It's just there, and it happens because it's running automated scripts. These playbooks that you get with Ansible can be run using triggers, webhooks, those sorts of things. It's very, very cool.

Yeah, it's very good. Simon, what are some of the things you've been working on with us? I know there are Ansible collections out there for FlashArray and FlashBlade. Maybe you could describe a little of that work you've been doing.

Sure. Yeah. Pure's been working with Ansible for quite a few years now, and we were delighted to be one of the first partners to get that little red Red Hat certification tick for our collections and our modules. That was a good day. And we've been continuing to develop our Ansible modules and collections moving forward. Each time one of our platforms gets a new feature, or there's some change in the way the backend APIs work, those Ansible modules are there day one; we make them work with the new feature functionality of our platforms.
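To illustrate what those certified collections look like in use, here is a minimal playbook sketch using the purestorage.flasharray collection; the array address, token variable, and volume details are hypothetical:

```yaml
# Minimal sketch: ensure a volume exists on a FlashArray using the
# certified Ansible collection. fa_url and api_token values below are
# placeholders; in practice the token would come from a vault.
- name: Provision application storage on FlashArray
  hosts: localhost
  gather_facts: false
  tasks:
    - name: Ensure the application volume exists
      purestorage.flasharray.purefa_volume:
        name: app-data-01
        size: 100G
        fa_url: 10.0.0.10                 # array management address (placeholder)
        api_token: "{{ fa_api_token }}"   # e.g. pulled from Ansible Vault
```

Because the module is declarative and idempotent, rerunning the playbook from a trigger or webhook is safe: it only makes a change when the array's state differs from what is declared.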
And you mentioned FlashArray and FlashBlade; those are our two big platforms. But we also have a couple of other things we can automate with Ansible. We have our Pure1 management system, a cloud-based management solution, so you can do Ansible automation with that as well. And recently we've produced a new offering called Pure Storage Fusion. It's very much a federated storage control plane, a storage management platform. We've seen that a lot of our customers who are early adopters of it are heavily utilizing automation. It doesn't actually have a GUI today, so the only way to work with it is via the CLI, and people want to use automation tools. Ansible is absolutely the tool of choice that people are using today for Fusion. And as we move forward with our development of automation integrations, Ansible is absolutely key for Pure.

Well, Michael, Simon, Andy, thank you so much for this great conversation. We are just about out of time, so at this point I'd like to invite each of you to share a final thought to leave the audience with today. Andy, why don't you go first?

Absolutely. Thanks for having us. What I really want to focus on and leave you with is: look, OpenShift is the gateway to unlock that true hybrid environment and build the next set of applications. And to do that, you need a storage and data management platform that goes right alongside OpenShift and makes sure that data is always there, that data is protected, and that data meets the business SLAs you have to have, wherever you're running it. So one takeaway from today: as you're building that next set of applications, make sure you think about how OpenShift and Portworx can work together to provide you with the rails and the stack you need to be successful as you innovate and create new applications.

Great point. Simon, what are your thoughts?
I think my takeaway from this is that automation is incredibly important moving forward. And if you're starting to bring storage automation into the workflows of your IT organization, you couldn't do better than Pure. We believe we have the best Ansible integration of any storage platform these days. We're very happy to say that, and we're working to keep it true moving forward.

Wonderful. Michael, what would you like to leave the audience with?

Yeah, so in general, I'm really excited about Red Hat's Partner Ecosystem Success team: how far we've come in the past year under the leadership of Stefanie Chiras in our organization, where we're going, and a lot of the new co-development efforts we have coming out with partners. Pure Storage and Portworx have been a rising star here, and I'm really excited to be working with them specifically. We have a lot of events coming up, both Red Hat and Pure events and other industry events in the near future. We'd love to engage; come out and talk with us, request a demo. We're actually going to be doing a couple of workshops as well, hands-on lab type workshops, where folks can come in and get a chance to interact with some of these environments. The patterns I mentioned, the GitOps patterns, are all open source, so folks can go in, take a look at them, deploy them, fork them, and modify them for their own uses. Another great opportunity there. The Ansible collections are available. Just in general, I'd like to see more and more engagement from the field and see how we can work better together to make you successful.

That's great. Thank you. Well, with those final words, we conclude today's webcast, Accelerate Business Outcomes with Solutions from Red Hat and Portworx by Pure Storage.
Thank you once again so much to our speakers, Andy Gower, Head of Partner Marketing for Portworx by Pure Storage, Simon Doddsley, Director of Open Source Integrations for Pure Storage, and Michael St. John, Technical Alliance Manager for Red Hat. And of course, to our sponsors, thank you Red Hat and Pure Storage for making this event possible. For CIO, Red Hat and Pure Storage, I'm Sarah Kirk. Thank you so much for tuning in.