Hi, this is Kirsten Newcomer, Director of Cloud Security Strategy for Red Hat, and welcome to another OpenShift Commons. I'm joined by Tim Riley and Maxim Jankowski from ZetaSet, and they're here to talk to us about next-generation data protection. Tim and Maxim? I just enjoy technology in general, and I think we've seen the evolution now into containers, and we look at Red Hat and what they're doing with OpenShift and the various verticals as something exciting, and I think data protection is probably one of the things that needs to catch up with all this other tech. And I'm Maxim Jankowski, the CTO at ZetaSet. I've been with ZetaSet for almost nine years and in the technology field for 20-plus years, and I've done things from enterprise databases to enterprise data security. Here at ZetaSet we're working to bring, I want to say, security into the 21st century, and we're really excited about our partnership with Red Hat and what we can do with it. So let's jump in here. I love this statement from General Brown over there in the Air Force because it, I think, is essential to everything we need to get from data: trust. Trust in each other. If you're going to share data to truly extract value and benefit all of us in all our services, you're going to need to trust each other. The wealth of data sharing, the wealth of data exchange, edge computing, the use cases across everything, it's all there, but no matter what, data is what everyone covets, all the bad actors want it, and we need to make sure it's protected. So who is ZetaSet? We look at ourselves and say we need to provide data protection across all the different environments.
We're software only; there's no hardware appliance that has to come in a box. You can run it on a white box, it's great, it's transparent because we're down low in the stack, and it works for virtual, physical, hybrid, all the usual. Because encryption is a calculation, it normally causes a delay or impacts performance. We've constructed ours in a proprietary way, software-wise, so there's minimal performance impact. So in that sense, our encryption is high performance. What we've come up with in this new world is how to tie all of your encryption together, regardless of what part of the stack it's in and what's going on with it. We call it crypto usage, and that's really what the centralized management console and monitoring enforce; we'll get into that later. We really are excited about this. It's a first of its kind. And of course, if you're protecting data, you're doing data protection with crypto, whether it's at rest or in motion. One of our proudest moments was finding out that we were awarded Red Hat Solution Partner of the Year, not the only one of course, but it meant a lot to us. Thank you. We've just been upgraded to an advanced partner as well. So we are really loving our Red Hat relationship and see a bright future with all the different ways we can go with this technology. And to let you know we're smart people: we've got people like Maxim here, and we've got over 20 patents filed around cryptography and data protection. So we know what we're doing. We're based out here in the Bay Area, so we know everything that's going on out here; we've got all the well-known security companies around. Right.
And it's important to highlight the centralized management and monitoring here, because we've been doing enterprise encryption for quite some time, and we realized that encryption is how you protect the data, but as environments become more distributed and more, I want to say, convoluted, it's critical to be able to manage them from a single pane of glass, and it's also critical to be able to look at them from a single pane of glass and see what's happening with your data, because a compromise will happen. It's not a question of if; it's a matter of when. And when the compromise does happen, you want to be able to respond to it quickly and detect it in a timely fashion. So how do we work with Red Hat and specifically with OpenShift? I think this says it all, because no matter what, in any of these environments, whether virtual or physical, and across all the usual environments, edge, on-prem, good old brick and mortar, or maybe a hybrid, which I believe is really where the world is going with OpenShift, whether you have one cloud or two, you need consistency of data protection. And that's where ZetaSet comes in. How we can do it with containers, leveraging OpenShift's abilities and walking in parallel with them, we'll get into in the next couple of slides. So when we look at where these environments are, they're extremely complex. There is no basic environment where you're going to stand up a Kubernetes cluster and have it run by OpenShift; it's going to span multiple environments, no matter what. But one of the benefits of all this is sharing the data. Sharing data creates value. I need to know what Maxim's doing, you need to share data with Maxim, and together we're going to create additional value for the end user, for ourselves, for a service, but we need to make sure it's protected.
And we may be forced to make sure it's protected by regulatory compliance, and we all know that will continue to get more stringent, especially considering the current events of cybersecurity and cyber attacks hitting enterprises and governments globally. And then lastly, we love DevOps. Everyone loves DevOps; we all talk about it. But with DevOps, just like most new technologies, the early adopters are racing forward to get the benefit, the first-mover advantage, the cost savings and everything else that comes with it, but security always seems to drag behind. It's an afterthought. And we're trying to educate everyone out there to say, why don't you walk forward with it together? But we know there are plenty of deployments already. So how do we catch up and add protection without causing the inertia that comes with change? Awesome. Tim, are you going to dive into DevSecOps a little bit more and where your solution fits in? Absolutely. Absolutely. Yes. Right. And one of the things is data sharing has been around for ages, right? Since the early days of information technology, we've been sharing data with different organizations. But data sharing itself is going through sort of a DevOps transformation also, in that I want to be able to share my data with an external entity, but I want to maintain data ownership. That is to say, I don't want to have to call up the external entity and ask them for favors to delete my data when they're done using it. I want to be able to click a button from a centralized place and revoke access to my data, even though my data might reside in somebody else's environment. Those are some unique challenges that are accelerated by the DevOps and DevSecOps transformation. So briefly, we've been hitting on it a little bit, but these are the use cases we're seeing, with the end user one being what we just talked about.
You may have JADC2, Joint All-Domain Command and Control, where the different departments in government have to share data, and you need to trust that what they're doing with that data won't be breached at some level, with some bad actor getting access to it. Government definitely needs this. Health insurance: we all know that HIPAA and HITECH are very stringent on data sensitivity and what you can do with PHI (and PCI, sorry, that's financial). So we've had a couple of customers where they're sharing data, putting it into some level of analysis as a service provider, and they need to have it protected. And the last one is telco 5G, and we will give an example of that one, as well as the healthcare one, which I think most might be interested in hearing more about. Okay, we're covering this really fast, but it's important, and I think we're driving this point home right now: encryption has to follow storage. I tell everyone, if the data goes to the edge, if the data goes to the cloud, you're managing the data, and ZetaSet and our protection need to follow it. We're going to follow and protect it regardless of where it goes. And you can see here, Maxim is very keen on making sure it's independent from hosting containers. He tells me this all the time. Yeah, containers were originally designed to be stateless from the get-go. Containers can come up and go down, and they can disappear and reappear on a host. So we cannot have a data protection approach which ties a worker node to a container or to a data volume. It's critical that we avoid that. And also, it's critical that we look at DevOps environments and ask, who is responsible for what? What does it mean for data to reside in a QA environment or a development environment and then be transferred to a production environment? What does it mean for data to be exchanged between those environments?
And to a certain extent, DevSecOps, which Kirsten will get into a couple of slides from now, touches on these: DevSecOps and separation of duties, and making sure the integrity of the container remains solid is something that I know the other DevSecOps offerings cover, and you'll see how they fit together with us. But these one, two, three are not just for data; they do cross over into the other aspects of Kubernetes and DevOps. Yeah, I think, sorry, apologies, I think one of the really interesting things you have there is the separation of duties, because in fact DevSecOps sort of merges certain duties intentionally, to help people understand each other's roles and to ensure that things, security in particular, are managed throughout the lifecycle, and also that Dev and Ops kind of work together. But I love the point about not having visibility into, or knowledge of, encryption keys and processes. This is a place where, as you just mentioned, with Kubernetes initially being designed to manage stateless apps, there were really some initial security gaps, many of which have been addressed, but Kubernetes Secrets are still not particularly secret. And so, you know, this is I think a key area that your solution helps to address. Absolutely, and thank you for mentioning that, especially the Kubernetes Secrets, because yes, we've heard time and again, you know, why do we need to encrypt our data? We have Kubernetes Secrets, don't they protect us? And the key, well, pardon the pun, the key here is to understand that Kubernetes Secrets are a great way to store, you know, credentials and small security-related data elements, but they're not really a data protection methodology. And there's also the fact that they're not so secret, especially up until the later releases of containers and Kubernetes. But what we also talk about when we talk about separation of duties is managing responsibility.
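A quick aside on the point above that Kubernetes Secrets are "not so secret": by default, a Secret's value is only base64-encoded, not encrypted, so anyone with read access to the Secret object (or to etcd, if encryption at rest is not configured) can recover it trivially. A minimal sketch, using a made-up password value for illustration:

```python
import base64

# What actually lives in a Secret manifest is base64-encoded data,
# e.g. data: {password: czNjcjN0LWRiLXBhc3N3b3Jk}.
# The value below is a hypothetical credential for illustration.
encoded = base64.b64encode(b"s3cr3t-db-password").decode()

# base64 is an encoding, not a cipher: no key is needed to reverse it.
recovered = base64.b64decode(encoded).decode()
print(recovered)  # prints the original password
```

This is why Secrets are fine for distributing small credentials to workloads, but are not a substitute for encrypting the data itself at rest.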
What I mean by that is, you have a developer creating software; you really don't want that developer making decisions about whether the data he or she has been working with is sensitive or not. You just want to make sure that whatever data they're working with, it's encrypted, it's protected, wherever it resides and however it travels between different Kubernetes environments. Kirsten, you make a good point. We can go to the next slide, but the surveys I've seen ask who's in charge of security in these Kubernetes and DevOps environments, and it spans: there's the DevOps team, there's an IT team, and maybe there's also a sophisticated security team. So it does span. We're looking at the weakest link in the chain, if we want to call it that. I only mean that if we can upskill the DevOps team, maybe they're going to manage it. Or maybe, unfortunately, with budget constraints, there are no additional resources. We want to make sure they at least have some level of data protection, and our software gives them that. Real quick on encryption. You might have some viewers out there who see this and go, well, guys, I see multiple levels of encryption out there. I hear about self-encrypting drives, I hear about file and folder, I hear about full disk. The easiest way to describe this, and I use it for my own family to make sure they get it: I want you to think about the beach. You go to the beach, there's one gate to get in; that's self-encrypting drives. On the other end you have the sand, the individual grains of sand; that's file and folder, it's detailed. Right in the middle is a bucket of sand; that's us. One doesn't have any real security: you get in, you're done, you're in. The other is trying to protect every little grain of sand, and you can imagine the performance impact that has.
We're trying to avoid that, and if you talk to enough folks in the security industry, they will tell you, yeah, there's some granularity in file and folder, but protecting partitions and volumes is good enough. That's where we sit; call it the Goldilocks. I just want to emphasize our integrations with Red Hat. We love you guys, we love all the different ways and the flexibility that Red Hat has for integrations. We have our management console that integrates with OpenShift and integrates with Advanced Cluster Management (ACM). There's Satellite, which we will get to in another three or four quarters, and how we integrate with that as well. We're viewing Red Hat, and the way we can help protect the data in Red Hat environments, as spanning the portfolio: not just Kubernetes and ACM and OpenShift, but any environment with RHEL as well. We're excited about all this, and since the question is going to be asked any second now: with Advanced Cluster Security, we are complementary. We sit side by side with it. You'll hear me say this a couple more times: integrity of the container is what Advanced Cluster Security is there for, and it does a wonderful job of it. Absolutely complementary, and an area where I think you all provide significant additional value. I also really love that you're looking across; as you said, your solution is available anywhere RHEL is, which ties nicely with Red Hat's open hybrid cloud story and the reality that in some cases our edge customers will use OpenShift, and in other cases they'll use Podman with RHEL and may not use OpenShift at all. Did you get a look at our slides ahead of time? We have one that says exactly that. Something that was mentioned in the slide is that, as Kirsten said earlier, containers were originally designed stateless, and data was brought into the picture at a later point. But data systems and data storage systems, think Red Hat Ceph Storage, come in, and they need their own data protection.
They need their own integration with automated data provisioning and all the management and operational tools. One other thing we're working on is providing transparent, high-performance data encryption and automatic volume provisioning for Ceph Storage that's attached to containers, or attached to classic RHEL. We're proud to say we've been one of the partners selected to launch with RHEL 9.0 in this coming year, so we're very flattered that Red Hat sees the value we can provide to the ecosystem. Yeah. When 9.0 comes out, we'll be right there, already certified with it. So, Kirsten kind of touched on this, and the management platform we have. You can see the whole stack here and all the different levels: if you look at the pyramid in the middle, you have the hardware guys, you have RHEL running on the hardware, sure, you have the systems, you have different distributions of Kubernetes, and then you have Docker and the container itself. Well, there is an insertion point in all of them for encryption, specifically ours, of course. Now, that can stretch: you can run that same encryption in the cloud, or you can go to the edge with it. I want to make one thing clear, because we do hear this sometimes: well, we have encryption in the cloud for both. And that's fine, you can stick with it. But in my mind, and I'm sure in Red Hat's as well, the world is never going to be one or the other. You're never going to be 100% on-prem. You're never going to be 100% in the cloud. You're going to have a hybrid environment. And at that point, encryption breaks down, because you need it to be consistent across both of these environments, and the management console does that. And as we reach to the edge, which we'll talk about with the telco example, you can see how having a consistent, unified data protection strategy across your entire environment and ecosystem is huge.
And I want to say I have kind of a more, how do I say this, more aggressive stance on trust and security in those environments. Yes, I do. And some of it relates to, well, two things you should never be doing. Three things, actually. First, never store your keys under the doormat, meaning don't store the key next to the data. Second, never entrust your key or keys to the cloud provider; you're already putting your data in there. And third, make sure you can run on different infrastructures, whether cloud or hybrid; don't let your cloud provider tie you down to their own internal infrastructure so that you cannot migrate to different clouds. So, the one-oh-one here: you have server-side encryption and you have client-side encryption. Server-side means you give everything to the cloud; you let them do the encryption and hold the key. Client-side is on-prem: you control the encryption and decryption process and you hold the keys. So you can see the difference and who you're relying upon to do this. And we have seen, of course, that there's always a breach or theft from a cloud provider because something's misconfigured. We're going to try to avoid that by having you control the keys on-prem. Briefly, you can see across the product line. It's great. Start at the top: there's the management console. We have to have it, because we provide encryption across every different environment, and we work with all the major key managers. On the left, you can see the cloud providers; they all have their key manager. On the right, you see some of the legacy, the old hardware guys. By the way, in the middle, we have ours. If you don't want to use either of those, we've got a slick one. It's virtual. It grows, it shrinks, it does whatever you want, and it works with the edge. And to do that, we've got the stuff on the right: there's the DevOps, there's all the Kubernetes and containers. On the left, yeah, there's your legacy.
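The client-side versus server-side distinction above comes down to key custody: with client-side encryption, only ciphertext ever leaves your control. A minimal sketch of that idea, using a toy HMAC-based stream cipher purely for illustration (a real deployment would use an AEAD cipher such as AES-GCM, and none of the names here come from ZetaSet's actual product):

```python
import hashlib
import hmac
import secrets


def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    # Toy keystream: HMAC-SHA256 in counter mode. Illustration only,
    # not production cryptography.
    out, counter = b"", 0
    while len(out) < length:
        out += hmac.new(key, nonce + counter.to_bytes(8, "big"),
                        hashlib.sha256).digest()
        counter += 1
    return out[:length]


def xor(data: bytes, ks: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(data, ks))


# Client-side encryption: the data-encryption key stays on-prem.
dek = secrets.token_bytes(32)      # never leaves the client
nonce = secrets.token_bytes(16)
plaintext = b"customer records"
ciphertext = xor(plaintext, keystream(dek, nonce, len(plaintext)))

# The cloud provider stores (nonce, ciphertext) but holds no key,
# so a provider-side misconfiguration exposes only ciphertext.
# Only the key holder can reverse it:
assert xor(ciphertext, keystream(dek, nonce, len(ciphertext))) == plaintext
```

The server-side model is the same math with the `dek` handed to the provider, which is exactly the custody problem Maxim is warning against.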
Now, you can still run the other encryptions that are out there; you can see them in red. And you use a key manager to touch them, and we control the key manager. It's really that simple. So no matter what, even if you have the incumbent legacy installed, that's fine. Why would you have encryption at different levels across different places? I think I read an article that said some of the larger enterprises might have five different key management vendors. Five different ones. You can imagine the encryption sitting on them at different levels. Well, wait a minute: let's just unite everything and make it work together. With RHEL, we obviously have RHEL, and we just came out with root OS encryption, and that truly gives you full partition and volume encryption of a RHEL server. Up until now, we've protected the data and the partitions, but now we're able to actually protect the operating system as well, which gives an additional level of protection against bad actors. I'm going to keep going real quick. The management console, right there. I've talked about it. We're unifying everything. Doesn't matter where you are or how you are: DevOps can do this, IT can do this, or you can give it to your CISO and their team under the security umbrella. Regardless, you deploy it, and it's going to touch all these different places where a key is. It's going to watch for crypto usage. I like this example, because it's a great one: say Kirsten has her credentials compromised, a bad actor gets her credentials. Well, suddenly, if they're looking at data and decrypting data, say, at 2 o'clock in the morning, and it's six petabytes, that's going to be a flag, and we're going to see it. If it's an uncommon profile for her to look at this much data, that's what we call crypto usage, and we see it. Now, on the other side, you do get this, I'm sure you've heard it: well, if you get the key, then the data is decrypted and you're done.
Encryption loses all of its protection. It doesn't. At that point, you make crypto not just a preventative control; you can help it be a detective control, because now you're seeing what's happening. And soon to come is what happens once data gets encrypted and decrypted, what they're looking at and what they're doing. That's coming soon; we'll leave it at that. And that's what we talk about when we talk about shortening the compromise detection window and being able to quickly respond to a compromise, because you want to keep the compromise detection window at least within the span of your most recent recoverable backup. Right, and this is an area that is so key, that kind of detect-and-respond in cloud-native environments in particular, because as you talked about in your previous session, and even alluded to earlier, many security tools were designed for traditional architectures. So the kinds of detect-and-respond and intrusion detection solutions that are generally out there really aren't designed to work with Kubernetes and containers. The context, the level of information that you need to figure out what's going on, really requires a different kind of oversight and correlation of data than you needed previously. And I love that you're adding this layer of auditing, this monitoring of encryption and decryption. I think this is a big win. This is great. I think Kirsten not only saw our slide deck, I think she also saw our management console. Yeah, that's what we were talking about. What we'll be showing later is making encryption not just about encryption keys, what they are and what their lifecycle is, but also putting them in critical context in terms of what those encryption keys are protecting. So, well, how does it work with OpenShift? What does the stack look like?
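Before moving to the stack, the crypto-usage idea above (flagging six petabytes decrypted at 2 a.m. under a compromised credential) can be sketched as a simple behavioral baseline: compare a user's current decryption volume against their historical profile and alert on large deviations. A minimal sketch with made-up numbers, not ZetaSet's actual detection logic:

```python
from statistics import mean, stdev


def is_anomalous(baseline_gb: list[float], current_gb: float,
                 z_threshold: float = 3.0) -> bool:
    """Flag decryption volume far outside a user's historical profile."""
    mu, sigma = mean(baseline_gb), stdev(baseline_gb)
    # z-score of the current observation against the baseline
    z = (current_gb - mu) / sigma if sigma else float("inf")
    return z > z_threshold


# Hypothetical per-hour decryption volumes (GB) for one user
history = [1.2, 0.8, 1.5, 1.1, 0.9, 1.3, 1.0, 1.4]

print(is_anomalous(history, 1.6))          # normal usage: no alert
print(is_anomalous(history, 6_000_000.0))  # ~6 PB: raise a flag
```

Real systems would also factor in time of day, data classification, and which keys were used, but the principle is the same: encryption telemetry turns crypto into a detective control, not just a preventative one.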
This is what we've put together, and I'll let Maxim take a minute or so on it. Yes, so we always say, when you take an application and transition it into DevOps, Kubernetes, containers, a microservices architecture, that's a good opportunity, and also a necessity, to architect or re-architect the application so that it runs natively and can take advantage of everything Kubernetes and OpenShift provide. So we have done what we preach. We've taken our data protection bundle and made it into a native Kubernetes, native OpenShift certified solution. We split our services into containers and microservices. We created a special operator, which is a Kubernetes and OpenShift construct that manages deployments; we created the operator to allow for automated installation and lifecycle management. So all of the critical services we provide as part of the encryption and data protection solution run natively as Kubernetes services and containers, and we also integrate directly with the Kubernetes storage layer by providing our own Container Storage Interface, or CSI, driver. Okay. So the question: how do we fit with the rest of DevSecOps? I always tell everyone, look to the right, up at the top. Those are truly the wealth of container tools that you have for the integrity of the container world and Kubernetes. And you can see there's a wealth of them; specifically, we're very tight with Advanced Cluster Security. So that's where we stand. Now, if you go down from that, you'll notice you have persistent volumes, and below that, you have more of the legacy storage types of encryption and protection.
There is a middle ground in there with persistent volumes, because persistent volumes still have exposure, which we're trying to highlight; that's Maxim's point. In this new Kubernetes DevOps world, you will have exposure of persistent volumes, and you have to remember they're just as open to attack by a bad actor as storage was in legacy systems. That attack vector hasn't gone away; in fact, it's become more complex. Yeah. And in fact, this is an area where I periodically get customer questions about protecting persistent volumes. Questions can include things like: how do you mitigate against insider attacks? How do you manage the data lifecycle for a persistent volume, and how do you ensure that a volume can only be accessed by a specific container? So it's definitely an area I hear about. Right, and this is where it's important to understand, and Kirsten, again, thank you for highlighting some of the things we're doing, kind of unintentionally maybe, but this is where key granularity, the granularity of encryption keys assigned to persistent volumes, becomes super critical. This is a great opportunity for me to say: don't trust data protection to your infrastructure, the infrastructure being legacy servers and legacy encryption solutions that were developed with client-server architectures in mind. Because a legacy encryption vendor will have you believe that, because Kubernetes runs on servers, you can just protect the servers and be done with it. The trouble with that is that Kubernetes environments are inherently multi-tenant; persistent volumes are shared between services and containers, and you have to be able to manage compromises, not if but when a container gets compromised. And think about it: one tenant's container gets compromised; you don't want to expose other tenants and other containers belonging to other applications.
So having key granularity, meaning a unique encryption key used to encrypt each unique persistent volume, allows you not only to manage and control the exposure, but also to surgically react to an exposure by saying, I'm going to revoke access to that particular persistent volume, while all my other applications and all my other containers continue to operate and access all the other persistent volumes. You don't have to kill the entire server, you don't have to kill your entire cluster, to respond to a compromise. That's awesome. And I would add that while we would love, just like Red Hat would, for the world to use only our stuff, we understand that there are other legacy storage volumes out there. So in the event you have to protect them, if you've deployed OpenShift but you do have some of the other vendors out there, like a vSphere volume or an EBS volume, we're still able to sit above all of them and encrypt each one of those persistent volumes. Truly agnostic, multi-cloud, hybrid, if I used all the buzzwords right. Absolutely. And our customers use many different types of storage, including EBS and S3, so I'm super happy that you're there to support them wherever they are, whatever storage they're using. So, one of the items here. It's a busy slide, but actually, full credit, it's the Red Hat Partner Ecosystem Security Team that put this together, just to make the point of where we fit. And I'm sure some of your viewers may have seen this slide; there are two of them, in fact, but they point out the different places that different DevSecOps vendors fit into the DevOps lifecycle and how they can protect it. And we're, go ahead. Yeah, one of the things I like about where I see ZetaSet here ties back to something you said earlier: that developers shouldn't have to know where the sensitive data is; they shouldn't have to inform operations in order to ensure that things are protected.
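The per-volume key granularity Maxim describes above is what makes the surgical revocation possible: each persistent volume gets its own key, so revoking one volume's key leaves every other workload running. A toy model of that idea (hypothetical names, not ZetaSet's actual API):

```python
class VolumeKeyManager:
    """Toy sketch: one unique key per persistent volume,
    revocable individually. Illustration only."""

    def __init__(self) -> None:
        self._keys: dict[str, bytes] = {}   # volume id -> wrapped key
        self._revoked: set[str] = set()

    def provision(self, volume_id: str, key: bytes) -> None:
        self._keys[volume_id] = key

    def revoke(self, volume_id: str) -> None:
        # Surgical response: only this volume becomes unreadable.
        self._revoked.add(volume_id)

    def unwrap(self, volume_id: str) -> bytes:
        if volume_id in self._revoked:
            raise PermissionError(f"access to {volume_id} revoked")
        return self._keys[volume_id]


km = VolumeKeyManager()
km.provision("pv-tenant-a", b"key-a")
km.provision("pv-tenant-b", b"key-b")

km.revoke("pv-tenant-a")                    # compromised tenant
assert km.unwrap("pv-tenant-b") == b"key-b" # other tenants keep running
```

Contrast this with a server-level key: revoking it would take every volume on that node offline, which is exactly the "kill the entire server" failure mode described above.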
And I often, when I talk DevSecOps with folks, I often say that people tend to think about the DevSec side of the solution, that is, what can I add to my CI/CD pipeline, and they forget the SecOps, because they just kind of assume it's already solved. And as we've been talking about, it really isn't as well solved for cloud native; you need these new tools. And so I love that ZetaSet is clearly in that SecOps section of DevSecOps, and also can offer protection in a way that makes developers' lives easier. Absolutely. So I would like to mention a current bundle that we've worked on closely with Red Hat, the OpenShift team, and also with CyberArk. This is where we see the first true security bundle for protecting OpenShift environments. And Maxim, I know you have things to say here, because you've been helping pitch; we've gone out and pitched a few of the larger SIs on how they might use this with their end users. Yeah, and it's important. I mean, one of the driving forces for this bundle was not just to provide a complete and comprehensive data protection solution, one that spans from data protection itself to protecting the container stack, the execution runtime, and credentials, but also to realize that modern transformations go beyond DevSecOps as well. Looking at 5G networks, for example, we have to realize that the data is no longer in the data center. The data is moving as close to the edge as... actually, I never imagined data being that close to the edge. It could be as close as a RAN tower in the 5G networks. Absolutely. It's multi-tenant. It's accessed not just by the telco communications provider, but can also be accessed by outside vendors. And now the physical security of the data center is gone. And in addition, I cannot possibly imagine a world where I have to deploy a team of engineers to a micro data center somewhere out there in the desert so that they can securely decommission a set of servers.
What I want to be able to do is click a button and say, this data is no longer accessible because it's been compromised. So 5G networks are definitely a big driver for this. Are you going to transition to the next one? I think so. We've got one more, and then we get to the 5G. This one is pretty colors, and I joke about that, but as soon as your eyes adjust to it, it tells you the full story of how, regardless of the environment and the Red Hat product, whether that's OpenShift with persistent volumes or good old-fashioned RHEL, there's data present in both, and we can help. We can manage across both. This is where Satellite comes in, in addition to ACM. If Satellite is managing RHEL servers, we'll have an integration with it to watch over that. We'll be releasing an automated integration with ACM at the end of September; we've tested and worked with the team over there, and they really love our solution. And like I said, you can see right next to us is Advanced Cluster Security. In general, I just want to make clear that this is purpose-built for Red Hat. The one question we've gotten is: well, for Linux, for RHEL, we've already got encryption. We know it's free; it just comes with RHEL. And we say, we understand, but what that means, for those of you who don't know this stuff, and I'm really going out on a limb here with Maxim right next to me, is dm-crypt and LUKS. dm-crypt does the encryption and decryption, and LUKS manages the keys. Well, if someone says they've already got it, they don't really have it: that doesn't scale to large deployments, and we're able to manage and enhance LUKS. So you can see that, okay, on the server side we have this; now let's reach over to OpenShift and these new environments. And I'm going to let you take it from here. Yeah, I mean, dm-crypt and LUKS are excellent components which provide encryption and key management.
However, it's important to know that, in the spirit of true Unix/Linux components, they do what they do, they do one thing really, really well, and they're developed in the context of a given server. They provide encryption and key management services, but they provide them in the context of one single server. Now think about a server farm powering an OpenShift cluster or multiple OpenShift clusters, somewhere not easily accessible. You enable dm-crypt and LUKS, and then you need to reboot the server, and you need to go to the console and enter the password. Suddenly it is encrypted, but it's not so manageable. In DevSecOps environments, being able to centrally manage and to provide fully unattended boots, reboots, service restarts and all of these features, this becomes not just important, not just critical, it becomes a necessity. So will dm-crypt and LUKS give you encryption? It will. Will it give you management? Absolutely not. Yeah, and not to take away from what you're saying, but there is a feature in RHEL, first delivered in RHEL 7, called network-bound disk encryption, which allows you to, you know, leverage a tang server, for example, which might be an alternative to the X-Crypt management console. That isn't saying that that tang server is necessarily as full-featured as what you're doing with the X-Crypt management console, but in terms of taking away the necessity to go in and enter that key when you're rebooting, network-bound disk encryption, available to you with RHEL, really does help there. And I still think X-Crypt has these additional capabilities, being able to, you know, work with multiple key management systems, manage the granularity of encryption that you're talking about; I see that as a genuine value add.
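The manageability gap Maxim describes, a server that waits for a console passphrase at every reboot versus one that fetches its unlock secret from a central service, can be sketched in a few lines. This is a hypothetical illustration only; the `KeyEscrow` class and its methods are invented for the example and are not ZetaSet's or NBDE's actual API.

```python
import secrets

# Hypothetical sketch: a central escrow hands out per-server unlock
# secrets, so an encrypted server can reboot unattended instead of
# waiting for someone to type a passphrase at its console.
class KeyEscrow:
    def __init__(self):
        self._secrets = {}      # server_id -> volume-unlock secret
        self._revoked = set()   # servers whose access was pulled

    def enroll(self, server_id: str) -> str:
        """Register a server and mint its volume-unlock secret."""
        secret = secrets.token_hex(32)
        self._secrets[server_id] = secret
        return secret

    def fetch(self, server_id: str) -> str:
        """Called by the server at boot; fails if access was revoked."""
        if server_id in self._revoked:
            raise PermissionError(f"{server_id} is revoked")
        return self._secrets[server_id]

    def revoke(self, server_id: str) -> None:
        """Centrally cut off a compromised server -- no console visit."""
        self._revoked.add(server_id)

escrow = KeyEscrow()
enrolled = escrow.enroll("edge-node-07")
assert escrow.fetch("edge-node-07") == enrolled  # unattended reboot works
escrow.revoke("edge-node-07")                    # one click, box locked out
```

The same idea is behind both NBDE (where a tang server plays the escrow role) and a centralized management console: the decision to release a key moves off the individual box.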
You know, one thing that, and I completely agree with you, we're gonna come across technologies, whether it's RHEL, whether it's something else, that do have encryption, and we're not offended. But when that then gets combined in mixed environments, in this case RHEL and OpenShift, well, nothing's gonna span that. One's gonna be on one side, one's gonna be on the other; let's unify them. And that's where there's also a differentiator for our management console. One of the items you did mention, which is below, which was an edge environment, I believe that's the next slide; you'll see that there are nodes, you know, you have the master and the worker, and on the server side you just have the RHEL servers. So we understand in an edge deployment you're gonna have both; they can be mixed and matched. No matter what, we wanna touch both, okay? And yes, here it is. So we like to look at this. We did some research, and looking at the top left, you can see that there are plenty of deficiencies, for lack of a better word, or places to improve. Again, performance is a key one. We all know that edge computing, one of its main functions and strategies, is to be closer to the edge and have the data extract value without the bounce back and forth. So latency is a top priority. We wanna make sure that, because of that, you're going to have different offerings. You can have a vertical offering in industrial or smart city, anything along these lines. So what we do within edge computing is, yeah, you're gonna give that ability to folks. And of course, the big one: cost. How much more do you need to invest in your infrastructure? Bandwidth usage-wise, you wanna optimize that. From my telco days initially, that is a huge one. That can really rack up your bill quickly. And what this all gives you is some competitive differentiators that come out of it. And this data actually comes from an IBM study that's pretty recent.
Look at that: service providers look at it and see financial gain. 91% of them in the next five years, that's massive. And I think we all hear that touted in the markets, and I think there's a lot of truth to that. So you can see the various things that this is gonna give you. What we've seen is a progression, and maybe this is something that Kirsten has seen with the end users: the focus on the left, down at the bottom, all of these benefits that come from edge, all of the infrastructure. Once you get that in place and optimized, and have cost minimization, you go over to the right and it gives you all these different use cases and these different verticals. Yeah, one of the interesting spaces, one of the things with RAN in particular, but edge especially, is that systems are going to be disconnected from time to time. And so one of the questions that we get, and I'm curious whether you get this as well, is: let's say you've designed for automated decryption on reboot, which again you can do with RHEL network-bound disk encryption, which is also supported on OpenShift, and you can do that with ZetaSet. So what happens if network connectivity is lost and the system reboots, for ZetaSet? Right, so that's actually interesting, because that's one of the things we've built for: how do we provide encryption services in a disconnected environment? Nice, yeah. It's pretty interesting what we can accomplish with a creative topology of the key management services, because our key manager is software-only and runs natively in Kubernetes, including Kubernetes microclusters and edge nodes. It requires very few resources to run. We can actually position distributed key manager services as close to the edge as the data is.
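The disconnected-edge behavior Maxim describes, a key service positioned near the edge that keeps serving keys when the uplink to the central manager is down, can be sketched as a local replica. The class names and the sync scheme here are invented for illustration; this is not ZetaSet's actual topology or protocol.

```python
# Hypothetical sketch of an edge-local key service: it syncs keys from a
# central manager while connected, and keeps serving them from its local
# replica when the link to the centralized network is lost.
class CentralKeyManager:
    def __init__(self):
        self.keys = {}  # key_id -> key material

    def put(self, key_id: str, material: bytes) -> None:
        self.keys[key_id] = material

class EdgeKeyService:
    def __init__(self, central: CentralKeyManager):
        self.central = central
        self.replica = {}
        self.connected = True

    def sync(self) -> None:
        """Pull the latest keys while the uplink is available."""
        if self.connected:
            self.replica.update(self.central.keys)

    def get(self, key_id: str) -> bytes:
        # Served from the local replica, so a reboot of the micro data
        # center works even with no connection back to the center.
        return self.replica[key_id]

central = CentralKeyManager()
central.put("pv-claim-42", b"\x01" * 32)
edge = EdgeKeyService(central)
edge.sync()
edge.connected = False           # the 5G site loses its backhaul...
assert edge.get("pv-claim-42")   # ...but the volume can still be unlocked
```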
And so what that means is that it still provides the same level of security, but the keys are always available whenever the micro data center or the server needs to reboot in an environment where there is no connection to the centralized network. That's great. So as we decided to really enter into this area and go to the edge with telco, we noticed, look at that wealth of different types of threats. The threats have just expanded. I mean, there's a wealth of reports that have come out, whether it's from European cybersecurity agencies, or I believe there was just one that came out last week from the NSA and CISA. Yeah, they've issued hardening guidance for Kubernetes, and at initial blush, much of the hardening guidance is actually borrowed from the CIS Kubernetes benchmark. They actually attribute it to the CIS Kubernetes benchmark. But yes, there is some additional guidance in their hardening document as well. You should have been talking to my chief security architect then. He said the exact same thing. He thought there might be some plagiarism there, but at least they're... No, no, they attributed it. Yeah. They did. And the CIS license is Creative Commons, so it's all good. It's all good. Mine was more of a joke. I'm just kidding. So anyway, on the right, I just selected a couple of items that deal with integrity and protection of the data, of which you're gonna get some level of protection, whether it's with Advanced Cluster Security or our own offerings. Now, this was something that, again, just came out not too long ago: a security survey that asked the security teams of these different 5G service providers, what are your top security concerns and what do you need to focus on for your strategy? Now, you can see that those top six, between ZetaSet, RHACM, and Advanced Cluster Security, the bundle of us takes care of all of them.
And as that grows with edge computing and gets more remote, you can see where any implementation of OpenShift or RHEL at the edge can be covered and secured. I have a feeling you wanna say something. Yeah, I mean, the security challenges for 5G mostly stem from the fact, like I said previously, that the data is very, very close to the edge and the environments are massively multi-tenant. So using legacy approaches, using infrastructure-type encryption, using self-encrypting drives or file-and-folder encryption, this is just not going to fly, because it's not gonna provide the level of protection that will allow you to respond to a compromise. And that's what it's all about, essentially. Compromises will happen, and it's how quickly you detect and how quickly you react to those. That's what matters. And when we talk about security, we just wanna reiterate: we know that data protection is only one facet of a true security solution. We have plenty of discussion time, and if you wanna talk about any other piece of security, we have a wealth of knowledge over here. We understand there's authorization and access management. We understand there are other types of monitoring and logging, and we need to fit into that ecosystem to protect the data. So how does this look visually in an edge deployment? You can see, as you go out, like Kirsten has been saying, you go all the way up to that remote server. It may have OpenShift, it may just have RHEL, but I want you to imagine there being a thousand of those, and then have OpenShift running maybe at the infrastructure edge and all the way back. Managing that on a global scale is where we feel we can truly be a differentiator and add value by giving data protection. I want you to think: is there one government out there that won't have some stringent privacy regulations around data in the next three to five years? It will be everywhere.
And there will constantly be a cat-and-mouse game between the bad actors and the protectors of data. And we wanna be there as a frontline weapon in that. Specifically, if we look at a great example of RHACM and OpenShift deployments to the edge, you can see how, with our integration, wherever RHACM is touching, we're gonna be able to encrypt that data and protect it. And the other piece that we spoke to is, yes, the crypto usage: let's monitor and see what's being encrypted and decrypted. Let's see where it's going, so we get flagged. So between us and Advanced Cluster Security monitoring and scanning the integrity of the containers at the edge, I think we've got a great combo for protection. Again, nothing's ever 100%, but I think we give some great coverage with the three of us. For folks who don't know, Red Hat Advanced Cluster Management allows you to manage cluster lifecycle and configuration across multiple clusters, a fleet of clusters, no matter where they are. And so, for example, it can ensure in this case that X-Crypt is deployed to all of the clusters. Very good point. So I'll take a pause there. We have a data sharing one. If there are questions about the edge, I'm more than happy to take them at this point. You've done such a great job. Yeah. So here's the other one. If there's another regulated industry that is only going to get tighter and more stringent, it's going to be data sharing in healthcare. And the importance of it could never be more stressed than by the amount of data that needs to be shared to the benefit of a human being. It really is at this point that we need to make sure we can deliver the benefit to health and improve people's lives. And to do that, it's going to take multiple sources of data, cross-checking, analyzing, and finding new benefits: maybe you can detect cancer sooner. Maybe a blood test needs to be taken for opioid use, and it needs to go to law enforcement or it needs to go to a social agency.
Well, that exchange of data and that triangle can't happen, because without some level of data protection, you won't be able to move forward with it. So you can see how much data that is: 59 zettabytes. And for those who don't know, ZetaSet's name is about that quantity of data, zetta, and then the data sets. So you can see the combination of the two is where we came up with the name, and we're protecting in all those environments. In this case, look at how much is created just because of data sharing when it comes to anything related to opioids. And we're just one country. It's a very sad epidemic, and we're trying to address it. Australia is actually one of the unfortunate leading impacted countries in the opioid crisis. So we've got to share data. How do we share data between healthcare providers, law enforcement, and social agencies? And one of the quotes I heard from law enforcement was: if I knew, when I pick up so-and-so from the corner because they got caught buying opioids, when I take them in to book them, if I know that they're already in the system and this might've been just one slip, if I can keep them from being booked and sent into the criminal justice system, picking up a cell and dealing with additional charges... If you go up to that next level, the DA doesn't want to deal with this. It's more workload than is really necessary based on this minor infraction, because they were on their way, they were trying to get better, and they had one lapse. If they all could share that data, we could better understand a human's problems and how we get them better, okay? So how do we overcome them? Maxim has some thoughts on this, but these are the challenges you're gonna face. Just remember, you have to share private data, and this is where things fall down. You have to share private data, absolutely.
And because you're sharing private data, by the way, in terms of data privacy and just data sensitivity, I would say a data element may or may not be sensitive in and of itself. Once you put it in the context of all the other hundreds and thousands of data elements that are available out there about a person or an entity, that data element becomes sensitive in context. And therefore, the concept of maintaining data ownership when you're sharing the data is not just critical. It has to be a given. So you wanna be able to share data. At the same time, you wanna be able to control access to that shared data without having to reach out, like I said previously, without having to reach out to the third-party entity. Data sharing is critical. Secure data sharing is a must. So this is what it looks like, and you can insert whatever sensitive data on that left database that you want. When we spoke at GovSolve, or, it was back in November, GovLoop, I'm sorry, they had the question of, well, with Project Overmatch, where the Navy is trying to share data with, say, the Air Force or another branch, when they send it over, how do they have a level of trust, all the way back to what General Brown said, that their data is safe and is being properly accessed? So that was one. You can just take that same logic and apply it to healthcare. How do we know that when that data goes from the healthcare provider to law enforcement, even across state lines, which in and of itself has its own regulatory issues, how do I know it's safe? So this is how OpenShift would deal with it. And once we share the data, by the way, I wanna say, when you share the data the ZetaSet way, not only do you maintain possession and control of the data at all times, you're also able to granularly revoke access to the data, and you're able to monitor the data access even when it's performed by remote entities. So with our management console, you will actually have visibility into when your data is being accessed by third parties.
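The "share but keep control" idea just described, granting a partner access the owner can revoke without visiting the partner, with every access attempt visible to the owner, can be sketched as a grant table plus an audit log. This is purely illustrative; the class and the recipient names are invented and this is not ZetaSet's protocol.

```python
# Hypothetical sketch of revocable data sharing: the owner keeps a grant
# per recipient; revoking a recipient deletes their grant, and every
# access attempt is logged so the owner can see who touched the data.
class SharedDataset:
    def __init__(self, payload: bytes):
        self._payload = payload
        self._grants = set()   # recipients currently allowed
        self.audit_log = []    # (recipient, allowed) pairs

    def grant(self, recipient: str) -> None:
        self._grants.add(recipient)

    def revoke(self, recipient: str) -> None:
        self._grants.discard(recipient)  # granular, no third-party visit

    def access(self, recipient: str) -> bytes:
        allowed = recipient in self._grants
        self.audit_log.append((recipient, allowed))  # owner sees every try
        if not allowed:
            raise PermissionError(f"{recipient} revoked or never granted")
        return self._payload

records = SharedDataset(b"lab-results")
records.grant("law-enforcement")
assert records.access("law-enforcement") == b"lab-results"
records.revoke("law-enforcement")
try:
    records.access("law-enforcement")   # denied, but still logged
except PermissionError:
    pass
assert records.audit_log == [("law-enforcement", True),
                             ("law-enforcement", False)]
```

In a real system the grant would be a cryptographic key rather than a set entry, so revocation is crypto-shredding: delete the key and the shared copy becomes unreadable.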
And pulling it all together, this is how we view our value add to Red Hat: we're gonna just simplify data protection. We have to be able to go cross-platform across the product portfolio, where the technologies are, where the technologies are going, into more hybrid environments, following them to the edge. In all of those cases there's data involved, and we don't wanna inhibit progress. And if we're able to do that transparently and with minimal performance impact, I've done my job. I need to stop breaches from taking that next level. As some of you may know, if you get breached, you don't have to report it if the data is encrypted, because you can exhale: that data is encrypted and no bad actor is gonna get to it from there. If DevOps is still in charge of security, which unfortunately they are, and I have no idea how long that will be, it could be another five, ten years as those decisions finally get weaned over to even IT before they get to a security team, we have to make their lives better. And if we do that, when regulatory bodies come in and say, well, you're dealing with unprotected data, for the DevOps teams, that's the last of their worries. So let's take it off their plate. Let's make sure that we can do it, do it transparently. They can keep driving innovation, and we can make sure they're in compliance. Yeah, and that's a great value add. And the only thing I might take issue with a little bit is, instead of saying that developers don't have to make security decisions, let's make it easier for them and reduce the number of ones they have to make. When it comes to managing identity and access for their apps, that's still a decision they need to make. Vulnerability management, in the end, the developers are gonna have to deal with updating their apps for those. So I would just make it a little less black and white. I think, yeah, Kirsten, you're making an excellent point.
Developers should still be conscious of their application security or their infrastructure security; what we're helping them with is not having to worry about the data security. Which is awesome, yeah, that's such a big win. And that comes back to the overarching point: we understand we're one feature in the greater security solution. So if we can help in there, and we can help with any type of decisions about architecting even beyond us, we are completely happy to do that. Okay, now I'll pause and see if we have any additional questions. We'll give Maxim a couple of minutes to do a demo so you can see what this thing looks like. And I'm gonna shift out of the camera because he's driving. Okay, I'm gonna stop sharing for a second and switch the screen over to our demo environment. And you should see the ZetaSet centralized management console login screen. Kirsten, can you confirm that it's visible? I can, and just a heads-up that we've got about four minutes for the demo. I think I can do that. Awesome. And you kind of gave me a few of them, what to focus on, previously when we talked. You know, earlier in the presentation we talked about protecting the data and not just encrypting the data, but also putting, as I call it, putting encryption in the context of the data that encryption is protecting. I'm gonna focus on that, and I think it's gonna show some interesting insights into what data has been protected, and it's gonna show something that you will not see in the management console of key managers, because every key manager product will come with its own little management console. It'll allow you to manage access to keys. It'll allow you to see what keys are out there, how they're backed up, how key managers are clustered, and so on and so forth. What it's not gonna show you is the context information about what data elements or what system elements the keys are protecting.
So what you can see in the key manager is something like: here's the key with its own little cryptic ID, activation and expiration dates of the key, and the key state. It'll show you the key ownership and other things related to the key itself. What it will not show you are the things that the management console is showing you, the things on the left that I'm highlighting right now. Those are the ones that really tell you that, oh, this key is used to encrypt that partition, /dev/sdc3, on a database server, and it's installed with the X-Crypt full disk product. Or if we look at the X-Crypt Kubernetes product and X-Crypt OpenShift product, it's actually: here's the key that is encrypting a particular persistent volume, and you can see the persistent volume in the same place as you see the keys, and on that particular worker node. So let's say this persistent volume gets compromised. Let's look at what this installation comprises. So we have the installation of the X-Crypt Kubernetes product on this worker node, which is probably in a micro data center, and let's say I know that this worker node is compromised. I don't need to reach out to my security admin. I don't need to ask them to chase down the keys that are used to encrypt partitions and volumes and PVCs on that node. I can just say, I would like to revoke access to this key, and I'm gonna mark this key as compromised. As I go back to the keys, I know that some of the keys are now compromised; they have been revoked on the key manager, and data access through those keys has been lost. So what's happening here is that we're putting the data in context, and we're putting the key elements in the context of the data that's being protected. This is great to see. As I mentioned earlier, that context is so critical with Kubernetes because, again, as you all know, pods move around, IP addresses change, host names change; you're not tied to a host. So it's terrific to see this.
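The core of the demo, keys carried alongside the resources they protect so that "this node is compromised" maps directly to "revoke exactly these keys," can be sketched as a simple index. The key IDs, node names, and volume names below are invented for illustration; this is not the X-Crypt data model.

```python
# Hypothetical sketch of keys-in-context: each key records which volume
# on which worker node it protects, so revoking a compromised node means
# marking exactly the keys that touch it -- no chasing anything down.
class KeyInventory:
    def __init__(self):
        # key_id -> {"node": ..., "volume": ..., "state": ...}
        self.keys = {}

    def register(self, key_id: str, node: str, volume: str) -> None:
        self.keys[key_id] = {"node": node, "volume": volume,
                             "state": "active"}

    def revoke_node(self, node: str) -> list:
        """Mark every key protecting data on a compromised node."""
        hit = [k for k, v in self.keys.items() if v["node"] == node]
        for k in hit:
            self.keys[k]["state"] = "compromised"
        return hit

inv = KeyInventory()
inv.register("key-a1", node="worker-3", volume="pvc-db-data")
inv.register("key-b2", node="worker-3", volume="/dev/sdc3")
inv.register("key-c3", node="worker-9", volume="pvc-logs")

revoked = inv.revoke_node("worker-3")           # one action per node
assert sorted(revoked) == ["key-a1", "key-b2"]  # both of its keys
assert inv.keys["key-c3"]["state"] == "active"  # other nodes untouched
```

The point of the mapping is exactly what the demo shows: without the context, an operator would have to backtrack from cryptic key IDs to partitions and PVCs by hand.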
That chasing down and backtracking might be very, very difficult to do without the ZetaSet centralized management console. And as Tim already mentioned, we're integrating this with Red Hat Advanced Cluster Management. We thought about integrating with ACS directly, but it's really on a little bit of a different plane and provides a little bit different degree of visibility, so it makes a lot more sense to integrate with Advanced Cluster Management directly, not with ACS. This will also give you visibility into our OS encryption product: what operating systems on what servers are being encrypted. So it unifies, I wanna say, this is all about unifying data protection and providing you surgical tools to react to compromises. What's not showing here is that the partitions and keys and worker nodes will highlight with our advanced monitoring and logging feature. They will highlight to show the key usage; they will highlight in green to show the normal ones and the key application for it. Tim and Maxim, I wanna thank you both so much. We're just a minute over the hour. This has been a terrific session. Thank you both so much for your time and the great conversation. Great, thank you very much for having us.