Hi, everyone. I guess we're going to get started. Thanks for coming to the session. That's a really long title up there, dynamic. It looks even bigger up on the screen. I want to talk a little bit about what we did at Time Warner Cable Cloud, but more specifically, some problems we had that we were trying to solve from a security perspective, some of the solutions we looked at in the space, and what we settled on. So just for some introductions, my name is Jason Rualt. I was the, I say was because I recently left, the senior director of cloud engineering and operations at Time Warner Cable slash Charter. I also have here today with me Richard Eisenberg and Nathan Randall from Cloudvisory, who are going to help do a demo at the end. I'm going to move pretty fast because I want to get to the demo, because that's the cool part, and I'd like to also save time for questions. So forgive me if I fly through. I do want to give a little bit of context on the Time Warner Cable Cloud so you guys know what we're talking about. Then I'm going to spend some time talking about the security problems in a little bit of depth, and then visit some of the architectural requirements we had for our security solution, and then we'll do that demo. So for those of you who do not know Time Warner Cable, they were the second largest cable provider in the US. They provide video, broadband, phone, and business services, that type of thing. They had four national data centers. They had over 20-some market data centers spread across the US, and they were in the Los Angeles and New York markets. They were acquired by Charter Communications last year, so you may know that name better than Time Warner Cable. The cloud, just to set some context: the cloud that we built was based on OpenStack at Time Warner Cable, and it was set up to run across two of the national data centers, supporting about 15,000 VMs.
We had three petabytes of usable object and block storage, multiple tiers. We provided all the kind of core IaaS services you would expect from an OpenStack cloud, plus some enabling services like load balancing as a service, database as a service, monitoring as a service, and so forth. We were using Neutron for SDN. We were using ML2 with the VXLAN overlay. We were using OVS. We were very happy with that. I do want to point out that we weren't using anybody's distribution. We were rolling our own from the community. We had full CI/CD in place and automation. And for some of the services, we were just a few weeks behind trunk. And that will become an important point later in the presentation, so just remember that. And then, why OpenStack? It's the same reason everybody uses it. We believed it was the most flexible and adaptable infrastructure that we could put in place for private cloud. Most importantly, though, it provided self-service, kind of like AWS, to our internal users, and that's what we really wanted. So we had a lot of great things you get from OpenStack. We love the multi-tenancy. We like the abstraction layer, the self-service, the speed it gave our users to be able to reliably and consistently deploy their applications. That was awesome. At the same time, it also exposed some problems. The architecture, and it's not actually an OpenStack-only architecture, but the cloud architecture that was introduced here actually presented some problems that some of the traditional perimeter security tools had a hard time detecting and controlling. For example, we were really concerned about horizontal breakouts. So something happening within one of our customers' tenant networks on a VM and then it breaking out east-west. That was a big concern for us. And we had no tooling in place to be able to manage that concern.
The other problem that we had was a technology problem, but it was just as much a cultural problem: we now had this weird dichotomy between the security and network teams and the DevOps teams that were building applications on the cloud. It was kind of a tug-of-war thing. How much control do you give the DevOps teams to manage their security group rules and policies for their applications themselves, versus how much control and insight do you give the security teams for those applications and the deployment of them? And that was a big problem. And it was really magnified by the self-service nature of the cloud. So let me get into the actual business problems here. And I've broken them down into a couple of categories. The first one is visibility. Essentially, the cloud is a black box from a networking perspective to most of its consumers. If you look at the DevOps teams and the people that are consuming the cloud, they have very limited visibility and often a lack of understanding of how the networking works or how to do security group rules and things like that. And that makes it very hard for them to onboard applications and to troubleshoot applications. They have no idea what's going on when their application doesn't work. And a lot of times, when things don't work, they go in and they just open up all the security group rules, just to kind of eliminate that, hey, it's not something with the security. And then unfortunately, sometimes those don't get locked back down. So that's an issue. And that visibility problem actually gets magnified as those applications span from your OpenStack cloud maybe back into your traditional data center, or span across regions, or to other cloud providers. It just gets magnified. From a security team perspective, they also lack visibility. How do they monitor and validate that the security controls that were agreed to are actually put into place and being enforced? That was a problem.
And then, I don't have it up here, but from an operational standpoint, how do the people that are operating the cloud troubleshoot and fix things and support their customers? I always thought it would be a great nirvana if I could have a holistic view of my environment and all the resources, the providers, the workloads, and the data flow between them. Where's the data flowing? What ports? And where are things being denied, that type of thing. That would really have been very helpful. And that's actually what we wanted to get to. The next problem area had to do with control. So the security controls in OpenStack, using security groups and rules, are pretty easy to get wrong and error-prone. And that can be a bad thing. It's very easy to go in and break your application or, worse, leave it in a vulnerable state. And then you get attacked. And then managing these controls actually doesn't really scale with large deployments. It's fine when you have 10 VMs, 20 VMs, you get to 100. But we start getting to thousands of VMs. And then you start talking about different environments. You've got dev, staging, production, and then potentially multiple regions and providers. It doesn't scale real well. It's really hard to manage that, which gets to my next point: these controls actually really need to be dynamic. Yeah, you can script security group rules. You can use Heat templates, things like that. You can use Ansible. That's great, but not all the consumers of the cloud use those tools, know how to use those tools, or have the ability to do that. We needed something that was more inherent in the environment, and also very simple. If you're familiar with how Kubernetes works and uses labels, I really envisioned kind of a context-based approach to security. Because this application, this workload, has metadata saying it's a production workload, it's in US East, and it's PCI, the security controls should automatically go into place.
And as it scales out, the security controls should scale out with it. And as it moves between environments, the security controls should change. And then the last point here is really around the idea of who manages those security controls, or security group rules. In our environment, it was dependent on the application and the business owner of that application, and whether it was the IT side of the house or the subscriber side of the house. So sometimes the DevOps team actually managed the security controls. In other cases, the security team did. And the problem that represented is that we had a mixed-use environment. We had one cloud where people were running production, dev, test, a whole heterogeneous set of workloads. They were leveraging that same cloud, and things needed to be secured differently. You're not necessarily going to want to lock down your test environment as much as you would your production environment, which requires PCI. So we wanted to have the ability to define a trust zone. You can give it another name. But it's a way of grouping workloads, based upon VMs within a project, across projects, or groupings of projects, where you could then say, these are your administrators, these are the people who can define policy. And that just wasn't there natively in OpenStack. The next area is compliance. It's a big one. And the problem here is, how do you demonstrate compliance? How do you know what security group rules are in place right now and that they match the compliant rules that were agreed to? And even more so, have those security rules changed? Being able to track that is very difficult. And then detecting and responding to events is problematic as well. Like, how fast are you going to be able to detect that somebody changed a security group rule, potentially exposing an application, and then be able to go back in and fix that?
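The context-based, tag-driven model just described can be sketched in a few lines: workloads carry metadata tags, and a policy engine maps tags to security group rules, so every new instance with the same tags gets the same controls. This is an illustrative sketch only; the tag names, policy table, and rule format here are made up for the example and are not OpenStack's or any vendor's actual API.

```python
# Illustrative sketch of context-based (tag-driven) security policy.
# The tag names and rule dict format are hypothetical, not a real API.

# Policy definitions keyed by tag: any workload carrying the tag
# receives the associated security group rules.
POLICIES = {
    ("compliance", "pci"): [
        {"direction": "ingress", "protocol": "tcp", "port": 443},
    ],
    ("env", "production"): [
        {"direction": "ingress", "protocol": "tcp", "port": 22},
    ],
}

def rules_for_workload(tags):
    """Collect every rule whose tag matches the workload's metadata."""
    rules = []
    for (key, value), policy_rules in POLICIES.items():
        if tags.get(key) == value:
            rules.extend(policy_rules)
    return rules

# A production PCI workload in US East picks up both rule sets
# automatically; as it scales out, every new instance with the same
# tags gets the same controls with no manual provisioning.
workload_tags = {"compliance": "pci", "env": "production", "region": "us-east"}
print(rules_for_workload(workload_tags))
```

The point of the sketch is that the controls follow the metadata: move the workload to a different environment, change the tag, and a recalculation gives it a different rule set.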
Or what if there's malware on a VM and we start having this horizontal breakout? How fast will you be able to detect that and mitigate it? Those are real concerns for us. So we started looking at, OK, well, we need a solution here, and what are going to be the requirements? And most of these requirements are kind of architectural requirements, because we were building and managing our own cloud and we needed to make sure it worked within our environment. The first one is, whatever we implemented needed to use cloud-aware security. That means there shouldn't be another security controller in the environment. That would just make it even more difficult for us to operate and support the environment. It just complicates things. The next piece is, our users really came to like self-service. So we didn't want a solution that came in and locked things down so that they weren't able to move fast anymore, right? If they can spin up a VM and hook it all up really quickly and all of a sudden they need to wait to get somebody to set security group controls, that wasn't going to be good for them. They needed to continue to move fast. The third point here is the most critical. We had a lot of mission-critical workloads where performance was key. They were customer-facing. And serving up video over IP, things like that, we needed to make sure that any solution had minimal to no impact on workload performance. And our cloud was growing fast. We needed to make sure the solution scaled with our cloud as it grew. That was key. And because of the mission-critical cloud, anything that was in our cloud needed also to be highly available itself. Installation and upgrades of whatever solution we were going to implement needed to tie into our CI/CD system. We didn't want to have any snowflakes. Everything needed to be managed the same way in our environment. And more importantly, whatever we put in place couldn't hinder our ability to deploy to production.
We deployed to production multiple times a week. We didn't want anything to prevent us from updating the kernel, updating the OS, or updating any of the OpenStack services. Role-based access control, that's tied to the notion I mentioned earlier about being able to group administrators over certain workloads. And then, of course, an API so that you can integrate. So those are the requirements. And we started looking at, what are our choices here? There are some vendors out there. There are some things we could cobble together ourselves. And we were able to narrow things down pretty quickly. So sometimes you'll go out and you'll see there are some vendors that are actually doing things in this space, and what they want to do is take over your SDN, or they provide an SDN. Well, that's a no-go. We're really happy with our implementation. It's working great for us. Don't want to do that. Another approach you'll often see is install a VM on every hypervisor, and that's going to be controlling all the traffic flowing for the VMs on that hypervisor. While that is a viable approach, and we were kind of open to that, we had some concerns that it was just going to further complicate our ability to support the environment and the tools we had, as far as doing rolling upgrades and moving workloads around and things like that. So we decided, oh, we probably don't want to do that. Other solutions involve a kernel module. And we decided, no, we definitely don't want anything to deal with the kernel. Problems with the kernel are real bad for your cloud. We had learned that the hard way. So we wanted to treat that very carefully. We also wanted to be able to update the kernel whenever we needed to, because security bulletins come out and we need to roll a patch out. We can't wait multiple weeks doing testing, things like that. And then the last option here is, some vendors and solutions will say, OK, here's an agent.
You're going to put this on every single VM. And while that is also a viable solution, we didn't want that, because in our OpenStack environment, we don't have access to our customers', the consumers of the cloud, VMs. So we can't install it there, and we can't really tell our customers to install something on there. So that was out. So basically, none of these options really worked for us. The good thing is we found a solution, and we worked really closely with a company called Cloudvisory for over a year, actually. And they provide a solution that's 100% software-based. It works at what we call the security management plane, if you think of it as a model like this. And it interacts directly with the Neutron API, which continues to be the control plane. So we keep the cloud-native security controls in place. But it also extends the functionality that you would get from Neutron, because, one, it abstracts away some of the complexity and the error-proneness. It provides the ability to actually monitor and audit the whole environment in real time. And you can set controls in place to remediate changes and things like that. So I'm going to turn it over to Richard, who's going to real quickly talk about the Cloudvisory platform and then demo it. So here we go. How much total time do we have here? 20 minutes? All right. Thank you, Jason. I appreciate that introduction. Let's see if I can manage the controls here. So I just want to repeat a couple of things that I was typing out as Jason was talking, in terms of some of the things he was looking to manage from a security perspective with regards to OpenStack. One, he was concerned about that east-west threat: something gets into an instance, and it has that ability to travel east-west. What am I going to do about that? There's not a lot of visibility into the health and security of the environment.
Once I deploy virtual instances and the related security controls, it's like a black box. There's no monitoring of the security controls. How do I quickly troubleshoot and fix security problems when they pop up? And ultimately, how do I deliver not just compliance, but real-time compliance? How do I know that the environment is healthy and that the security controls remain as they're supposed to remain, in terms of how they were initially deployed? And also, I heard this requirement for RBAC control, if you will. And for us, all of this gets delivered through this separate security management plane. So think of this as a place where you can go and control all of this: the management of it, the monitoring of it, the enforcement of policy. But it's separate from the data plane or the infrastructure plane, and it's loosely coupled. And it gives you a tremendous amount of power, setting the world up this way. So first off, and I'm going to jump into demos here very quickly, what this platform does is it provides continuous discovery and visualization of the cloud infrastructure as it changes. So you're going to be spinning up the environment. We're going to discover that. You're going to spin up additional workloads. We're going to discover that. And you can see some of that idea here in the little visual image there. We're going to be monitoring any policies that have been deployed. We're going to note any changes or updates that happen. If any of those go against what the compliance state is, we're going to identify and detect those non-compliant policies or non-compliant data flows. So you deploy security groups, and now you see data flows popping up that don't match those. Well, that's an indication that we potentially have that east-west malware threat that we want to take care of. So discovery and visualization is number one.
We then have this ability to do, so while today we're talking about OpenStack, this is about hybrid multi-cloud security policy management. So we can do AWS, Azure, Google Compute, VMware, et cetera, and so forth. So it's about giving you, again, that single security management control plane that can deliver policy and control out to these hybrid environments, should you need that. It does granular policy micro-segmentation. You kind of get that idea from the picture here, where we're able to set policies up in a variety of different ways so that they are very segmented and very granular. And again, this is one of the things that protects against the east-west threat, along with the next piece, which is real-time policy and flow monitoring to make sure the environment always stays compliant. And again, I'll show you this in the demo so it'll become more clear. And then we want to be able to do enforcement. So we know what the policy should be. If somebody maliciously or even accidentally changes that policy, we want to be able to recover from it right away. We don't want to have to go through some kind of change management exercise or some long troubleshooting exercise, because all the while that these policies are changed, the environment could be down. And that'll be clear in the demonstration. At the end of the day, we're reducing the human middleware piece of things. We're lowering costs. We're speeding up operations through fast change management. We're hardening security. And we're, hopefully, very successfully thwarting nation-state hackers. So with that said, let me pop out of PowerPoint here, and I am going to pop this up. So the first thing I want to do is, am I not going to be able to project this up? OK. Apologies. All right. So you heard RBAC control was one of those requirements that Jason had.
So I want to talk about this idea of multi-tenancy. The environment we're going to show you is designed to allow you to scope what an administrator can do, what a business user, or an IT user, or a security administrator can see and do within the environment. So I'll do this very quickly here. So right out of the gate, this is the dashboard that you come in on. If you look on the left side of the screen there, you see that this is a hybrid environment. So it's got AWS, Azure. It's even got Kubernetes, along with OpenStack. And so this is just the main dashboard that you come into, so you can very quickly understand the health of your environment. We can tell you where there are potential faults and issues. And then we have this idea of visualization, the exact thing that Jason was looking for, where we can really show you exactly what's going on, again, in a hybrid environment. Ultimately, I'll focus in on OpenStack for the demos today. But you can see here, this is a very complicated environment. This is a security administrator who can see everything. And I'm just going to quickly flip screens here. This is a connection summary. So I can click on a workload and get a connection summary, so that I can see exactly what's going on for any individual virtual machine within my environment, OpenStack or otherwise. And I'm showing you that here. And what I'm going to do after this is log in as a different user. Let me just switch to this. And you'll notice the menu items there. So you've got manage, configure, administrate. Now I log in as a different user, and you'll see that that administration item is not there. And now when I go to visualize what this user can see, it's tremendously scoped down. Same environment. I can only see AWS and Azure, for example.
So we have this ability to slice and dice this down to the project level, so you can really control who can see, do, and manage exactly what you want them to. One last peek at this, through an auto finance administrator. We'll visualize that. And again, this person can only see the Kubernetes containers and OpenStack environment. So I'm going to pop out of this and we'll go on to the second case. So I want to quickly show this idea of discovery, so you can see what it is we do here. So this will be done in OpenStack. I'm going to drill into a series of projects as part of this OpenStack account. And right here, I've just opened up one of those projects, the HRM project. And you see one workload, HRM01. Just one virtual machine is currently there. I'm going to go out and run a quick orchestration script. I'm just showing you here in OpenStack, same thing. By the way, that was just to make a quick point that we're reading OpenStack live. We're reading everything from within OpenStack and displaying it in Cloudvisory. So we're going to run an orchestration script here and quickly spin up some additional workloads. I'll go back to OpenStack so you can see that those workloads are now available inside of OpenStack, just to let you know that they're there. And now I'm going to go back to Cloudvisory. And here you see, in the visualization environment, same thing: those two additional workloads within that HRM project have now appeared. Now as we get into the use cases, you'll see not just the workloads, but the data flows that are happening between those workloads as well. So let me go ahead and pop you to the next one. Now we want to talk about this idea of fast troubleshooting. How do I identify problems in my environment? Again, when there are security group problems, your applications actually cannot work.
We just had a recent situation where a prospective customer called us because they were having problems with their live production environment, and when they moved to their backup environment, the applications weren't working. And they had no idea why. They were troubleshooting code, they were looking every which way. Turned out it was problems with security groups. So it is very critical to be able to do this and troubleshoot the environment very quickly. So what I'm gonna do here is just open up the environment and show you how we can select what it is we actually visualize. So I'm just picking five objects here and I'll view the selection. You see I dramatically scoped the environment down. And I have here a very quick demo app that just has two tiers. It's got a web tier here and a database tier. You see that that's inside an OpenStack account and inside what's called the order project. And I'm gonna go out and show you what that little demo app does. It's pretty simple. I just load a web page and it opens up this button and I click this access order button. Oh, it won't load. Database, something's wrong. I can't access the database. In Cloudvisory, I show you right away what the problem is. You see that there's a green line from the internet into the web tier. I can get to the web tier, but the web tier can't talk to the database tier, and this happens all the time. It's just a misconfiguration issue. We can turn this very quickly into a solved problem. All we do is, and what I wanna show you here quickly are just the rules. So these are the OpenStack rules. Notice no outbound rules on the right side of the screen, right? There's no way for that web tier to talk to the database tier. So I'm just gonna click on that red line and I'm gonna say add policy. And we do all the calculations here. I know what needs to be added as rules for the web tier. I know what has to be added as rules.
You see there, the 3306 port is what needed to be added. I know what needs to be added to the database tier. I'm gonna go out to the app. I'm gonna refresh the page, and all of a sudden when I click the button, the database loads, right? That's how critical these security groups are and how fast you need to be able to troubleshoot them to keep your environment up and running. And here now again, you see in Cloudvisory, sure enough, everything is green. Look what we deployed. If I dig into the rules here, again, these are the real OpenStack security groups. Now you see that the 3306 ports are there on both the web tier and the database tier. Here are the rules of the database tier. And the inbound rules to allow the web tier to talk to the database are there as well. So in one fell swoop, we took care of that problem very quickly and we deployed policies at both ends of the issue. Moving on, I'm gonna jump down to compliance first and I'm gonna come back to this idea of large scale policy provisioning and change management. So this is the idea of compliance. I've deployed some virtual machines. I've deployed security policies related to those virtual machines. Somebody goes in and either maliciously or accidentally mucks with those security controls in OpenStack. How do I know that happened? I would have no way of knowing unless I have a way to monitor and manage this. So once again, I'm gonna go out to my demo app here. I just wanna show you it's working. So I can access the database, everything's good. Within Cloudvisory, you see I have green lines, everything's good. And now I wanna show you the rules. So you see here, just look on the left side of your screen. You see the Port 80s there? I just want you to be aware of those Port 80s. And we're gonna go into the OpenStack interface. An administrator is doing his daily job and he pops into OpenStack here and goes into that rule set and says manage rules. And here are those rules.
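The troubleshooting step just described boils down to a simple calculation: does any egress rule on the web tier cover TCP 3306 toward the database tier, and if not, what is the minimal rule to add? A hedged sketch of that calculation follows; the flow and rule dict shapes are simplified for illustration and are not the Neutron security group schema.

```python
# Simplified sketch of the rule calculation behind an "add policy" action:
# given an observed flow that is failing, check whether the source group's
# rules cover it, and if not, propose the missing rule. The dict shapes
# here are illustrative, not the Neutron API schema.

def flow_allowed(flow, rules):
    """True if some rule matches the flow's direction, protocol, and port."""
    return any(
        r["direction"] == flow["direction"]
        and r["protocol"] == flow["protocol"]
        and r["port_min"] <= flow["port"] <= r["port_max"]
        for r in rules
    )

def propose_rule(flow):
    """Build the minimal rule that would permit the blocked flow."""
    return {
        "direction": flow["direction"],
        "protocol": flow["protocol"],
        "port_min": flow["port"],
        "port_max": flow["port"],
    }

# The web tier has no egress rules at all, so MySQL traffic to the
# database tier (TCP 3306) is blocked.
web_egress_rules = []
blocked = {"direction": "egress", "protocol": "tcp", "port": 3306}

if not flow_allowed(blocked, web_egress_rules):
    web_egress_rules.append(propose_rule(blocked))

print(flow_allowed(blocked, web_egress_rules))  # the flow is now permitted
```

In a real environment the proposed rule would be pushed through the Neutron API at both ends of the connection, mirroring the demo's fix on the web tier and the database tier.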
And he's gonna click, click, click, and delete. Okay, I don't get any warning that I'm hurting the application. I'm just confirming that I deleted those rules. They're gone. And what's the impact of that? Well, let's go back to that little test app and I try to load this page. There's no Port 80 to get from the internet to the app, so it won't load, right? We go back to Cloudvisory. You'll notice at the top there, there's an enforcement violation right at the top. I'm gonna drill into that enforcement violation and it's gonna show that very thing that was just done. We uncovered it right away, you see it says removed, removed, removed. We detected that those Port 80s were removed, and ordinarily in production, you probably would want this to be automatically rolled back. For the purposes of the demo, you see the green button over there, right? Roll back. We're just gonna go ahead and roll this back to the compliant state, and we'll go back to OpenStack, refresh this, and you'll see that those Port 80s are right back in place, okay? What's the impact on the application? Just what you'd expect, I'll reload the page and it will load and I can access the order list, right? So again, that's how critical it is to be monitoring for real-time compliance, that your security groups remain in that golden state. All right, similarly, I wanna be monitoring the environment for potential east-west attacks, okay? I'm gonna get attacked, there's no doubt about it. A piece of malware is gonna show up. How can I find that out? By watching the data flows. So we're just gonna drill into things here and you see, just watch this HRM 01 workload there. What we're doing is the following: you've got deployed security groups, so there are certain rules that are allowed for that workload, and we're watching the data flows every minute of every day to make sure that those data flows match the deployed security controls. And as long as they match, everything's good and green, right?
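Detecting that someone deleted the port 80 rules is, at its core, a diff between the live rule set and a stored golden state, and the rollback is just applying that diff in reverse. A minimal sketch, with a made-up tuple rule representation rather than any real API:

```python
# Minimal sketch of compliance-drift detection: compare the live security
# group rules against a saved "golden" baseline and compute what to roll
# back. The (direction, protocol, port) tuple format is illustrative only.

def drift(golden, live):
    """Return (removed, added) rules relative to the golden state."""
    golden, live = set(golden), set(live)
    return golden - live, live - golden

golden_rules = {
    ("ingress", "tcp", 80),
    ("ingress", "tcp", 443),
    ("egress", "tcp", 3306),
}

# An administrator deleted the port 80 rule in the OpenStack UI.
live_rules = {
    ("ingress", "tcp", 443),
    ("egress", "tcp", 3306),
}

removed, added = drift(golden_rules, live_rules)
print("removed:", removed)  # the port 80 rule shows up as removed

# Rollback = re-create everything in `removed`, delete everything in `added`.
restored = (live_rules | removed) - added
assert restored == golden_rules
```

Running that diff continuously against the cloud's API is what turns an accidental deletion into an alert within seconds instead of an outage investigated hours later.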
We're gonna show you what it looks like when a piece of malware actually goes after that workload. Let me just open up the environment here. We're gonna go out and run a little malware script that's gonna simulate this attack, and you know what happens when malware gets in there: it starts scanning all the ports. So you'll see what happens in the visualization here. We detect and alert, bam, look at that. Those ports are being scanned. There's a whole bunch of red lines because those are all attempts at communications that go against policy, right? So once again, the policy state matters. We've blocked them, that's why they're dashed red lines, but now what we wanna do is shut down this threat, and we're gonna do that by quarantining this asset. Again, we're gonna do it manually so you can see it happen. But first I want you to see all the outbound rules. You see the seven outbound rules on the right side of your screen? The way we shut this down so that that malware can't get anywhere is we're gonna just shut down all outbound communications in one fell swoop. So again, it would normally be an automated action. You're gonna see me go out, click, change the state to quarantine. Yes, you want to quarantine? I do. And immediately what that's gonna do is shut down all those outbound communications. They're gone, and the reason why they're gone, I can show you through the rule set: look, all the outbound rules are gone. So this is about really protecting your environment. How else would you do this if you're not watching 24/7 in an automated fashion and comparing what's going on to the actual compliant data flows? So this all brings us to the last part of the demo here, which is about how we actually do all this automated provisioning and ongoing management and monitoring. And as Jason was talking about, he was mentioning there has to be a way to organize the world.
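The quarantine action just shown amounts to stripping every outbound rule from the compromised workload so the scanner has nowhere to go. Sketched below under the same simplified, hypothetical rule format used for illustration (not a real cloud API):

```python
# Sketch of the quarantine action: drop every outbound (egress) rule from
# the workload's effective rule set so malware can't spread east-west.
# The rule dict format is simplified for illustration.

def quarantine(rules):
    """Return the rule set with all egress rules removed. Ingress (e.g.
    management SSH) is left alone here; it could be narrowed separately."""
    return [r for r in rules if r["direction"] != "egress"]

workload_rules = [
    {"direction": "ingress", "protocol": "tcp", "port": 22},
    {"direction": "egress", "protocol": "tcp", "port": 80},
    {"direction": "egress", "protocol": "tcp", "port": 443},
    {"direction": "egress", "protocol": "udp", "port": 53},
]

after = quarantine(workload_rules)
print(len(after), "rule(s) remain")  # only the ingress rule survives
```

Keeping the original rule set around means un-quarantining is just restoring it, which is why the demo can flip the state back and forth with one click.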
He was talking about using tags, which is one of the many ways that we have of managing policy. So you would deploy a tag along with the orchestration of your VM, and based on what that tag is, one or more tags, we're gonna deploy certain policies to that virtual machine. As well, just based on a workload being in a certain project, it can obtain certain policies. So we have many different ways to slice and dice the environment so that it can be managed in an automated fashion. So let's look at what a policy screen looks like in Cloudvisory. I'm gonna drill into this idea of a PCI policy, because you heard Jason mention they have a PCI environment. You have a variety of different projects that need to access that PCI environment, but it's limited, not everyone is allowed to access that. So you see that there is a series of inbound and outbound policies that have been designed as part of this policy definition, and you'll also see that there's a tag related to it. It says compliance equals PCI. So we're searching for any virtual machine, any instance that has that tag, and if it has that tag, we're gonna basically deploy these inbound and outbound policies. I wanna call out that right now, here on the right side, there are no workloads, there are no virtual instances that are associated to this policy yet. So let me just show you the rest of the policy. You're seeing the 443 ports there for inbound, and here, the outbound policies. There's a series of them, and no workloads. I'm gonna click on a couple of projects here. No objects are found in the SAS project, no objects are found in the SAP project. I'm gonna go ahead and run a couple of orchestration scripts and spin up some new workloads, and if you watch the screen, in the middle of that code, I can actually stop it and point it out to you, you'll see where it says compliance equals PCI. So all we did was, when we orchestrated the virtual instance, we put that tag in.
We didn't deploy any security groups. That's all gonna happen as we discover the environment. As soon as we discover those virtual machines, we discover the tag as well, and we're automatically gonna apply the policy. So now, as I open the SAS project, all of a sudden there are three workloads there. I'm gonna click on one and you'll see a couple of things. You'll see that it has the tag here, PCI. So we discovered that. When we discovered the workload, we also discovered any metadata. So it's got that tag, and now we're gonna drill into the rules, and you'll see that it has those very rules: the inbound port 443 rules and all the outbound rules that you saw there are automatically delivered. Same thing on SAP: three workloads spun up there, I click on one, it's got the PCI tag, and the same exact set of consistent rules has been deployed to two different projects. This just as well could have been a VPC in AWS. It could have been a resource group in Azure. We can deploy across environments consistently. We interpret the policy and deploy it directly to OpenStack, AWS, Azure, et cetera, in the policy language of that provider. So this is a very powerful way to manage policy. You now see that we have a nice inventory, which is great for audit. Which workloads in my environment have access to the PCI environment? I can see that right here. So now I wanna show you why this is yet even more powerful, because what about change management? What about when I have to alter policy, update policy? We're just showing you six workloads here for demo purposes, but imagine an environment with 4,000 or 10,000 workloads that needed to get a policy update. How are you gonna do that, and how are you gonna manage it? As Jason said, are you gonna use templates and spreadsheets? You'd have to do a tremendous number of calculations across all of these workloads and all the things they have to talk to if you don't have a management platform like this.
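The "policy language of that provider" point above means one abstract rule gets rendered into each cloud's native form. A minimal sketch of that translation for two targets; the field names follow the public Neutron security-group-rule and EC2 `IpPermissions` APIs, while the abstract `(protocol, port, cidr)` tuple is an assumption of this sketch:

```python
# Translate one abstract inbound rule into provider-native representations.

def to_openstack(rule):
    """Render as an OpenStack Neutron security-group rule body."""
    proto, port, cidr = rule
    return {"direction": "ingress", "protocol": proto,
            "port_range_min": port, "port_range_max": port,
            "remote_ip_prefix": cidr}

def to_aws(rule):
    """Render as an AWS EC2 IpPermission entry."""
    proto, port, cidr = rule
    return {"IpProtocol": proto, "FromPort": port, "ToPort": port,
            "IpRanges": [{"CidrIp": cidr}]}

rule = ("tcp", 443, "10.0.0.0/16")
print(to_openstack(rule))
print(to_aws(rule))
```

The same definition can then be applied to an OpenStack project, an AWS VPC, or an Azure resource group without the operator rewriting it per provider.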
So I'm gonna go in and update this rule set. Let me just go back to the policy screen. We'll go into that PCI policy. We're gonna do an update to the policy definition. So we're gonna add a new outbound policy, and we're gonna add TCP port 636 to this. And then we're just gonna say add this rule. So we're just changing the policy definition. Then we're going out automatically, behind the scenes, doing the calculations that are required to deploy that across any workloads that already have that PCI tag. I'll go in and just drill through. One of the things we can do here is edit the scope of the visualization UI so that I can look at exactly what I wanna look at. If you're managing very large environments, as Jason was, you wanna be able to drill into the exact thing you wanna see. So now when I look at that rule set, you'll see that the 636 port has been provisioned. So there's no guesswork, there's no manual provisioning, there's no writing complex scripts. This is all being calculated and delivered. There you see it again in one of the other workloads: the 636 port that was part of that policy update. So I wanna just summarize here by jumping back to my PowerPoint, if I can. At the end of the day, what we're doing here is trying to dramatically simplify the environment for you by giving you a singular interface for cloud policy automation. So again, we did all this today on OpenStack; this works across public and private cloud environments and across providers. So you can create policy definitions that are ultimately usable across all of these different environments, and we'll translate for you. It discovers and visualizes these multi-cloud infrastructures. Again, not just OpenStack, but AWS, Azure, VMware, and so forth.
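The "calculations behind the scenes" for a policy update amount to a diff: compare the updated definition against what each tagged workload currently has, and derive exactly which rules to add and which to revoke. A hedged sketch (illustrative structures, not the product's internals):

```python
# Plan a policy update as a set difference between desired and current rules.

def plan_update(current_rules, desired_rules):
    """Return (rules_to_add, rules_to_remove) for one workload."""
    cur, des = set(current_rules), set(desired_rules)
    return sorted(des - cur), sorted(cur - des)

current = {("outbound", "tcp", 1521), ("outbound", "tcp", 53)}
desired = current | {("outbound", "tcp", 636)}  # the port-636 rule added in the demo

add, remove = plan_update(current, desired)
print(add)     # [('outbound', 'tcp', 636)]
print(remove)  # []
```

Running this plan per workload is what lets one definition change fan out to thousands of tagged instances without spreadsheets or hand-written scripts.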
And it defines that context by looking at the metadata, by looking at what part of the infrastructure it's in, by looking at what its role is, maybe the application or application tier. We can ultimately define and provision policy, and then we look at the data flows and visualize any critical security violations. We automate the provisioning of policy, and we automate rapid change management, as you just saw me demo. And what we're doing through all of this is delivering micro-segmentation to thwart the east-west threat. With all of this, we don't wanna just provision it; we wanna monitor it. We wanna make sure there's real-time compliance between the security controls and the actual data flows that are happening between the workloads, to make sure there isn't malware coming in, to make sure there isn't anybody maliciously or accidentally misconfiguring the environment, which could bring it down. And as you saw in the very first part of the demo, it is meant for cross-discipline use. So we manage this in a multi-tenant manner, and we have role-based control so that it can be used by DevOps, security, and even line of business. Okay, that's it. I thank you for your time. I guess we're out of time for questions? You guys have any questions? Any questions? And yeah. Organizationally, how did you guys overcome that at Time Warner? You talked about the dichotomy between the DevOps team and the security team. Did the solution come first, or did you get buy-in from upper management that you had to find a solution? Well, the solution really came first, right? The biggest problem, that cultural problem there, had to do with the lack of visibility. And once you're able to provide that visibility and then be able to kind of define who's gonna control what, that is what helped overcome that, yeah. Please, sir? You mentioned that you're monitoring for changes. Now, looking at your screen, your changes are going into what looks like iptables.
Are you monitoring the changes that are only coming through your interface, or are you actually monitoring the changes in iptables itself? So if somebody compromises root, for example, and directly modifies it, does your system catch it? Please step up to the microphone, go right to it. Right. It is actually monitoring at the API level in OpenStack. So OpenStack has this thing called security groups, right? The security groups control the traffic that goes in and out of every VM that shows up on a particular hypervisor, or every container that shows up on a particular hypervisor. So the security groups are what we're monitoring. We do API calls and make sure that the security groups have the same rules that the policy defines as the state for that particular set of workloads. So the management is really a policy definition that defines the state of the infrastructure security in OpenStack. So we keep discovering what's changing. If something changes in the security groups, we will detect it, because it's non-compliant with policy. In addition to that, we go and monitor the flows, because the flows have to follow these policies as well. That's why, if somebody puts malware inside a particular workload and the malware starts scanning with non-compliant connections, meaning connections that are supposed to be blocked by the security group, we will discover them and say, okay, there's something going wrong with this particular workload, because there are connections that are non-compliant with the policies going everywhere right now. That's what you saw in the demo. So if you want to ask more questions, I think we're going to go outside the door. There's another session that needs to switch over in here. Please just grab myself or Jason; we'll be right outside the door.
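The answer above describes two checks: drift detection (do the security-group rules discovered via the OpenStack API still match the policy definition?) and flow checking (is any observed connection outside what the policy allows?). A minimal sketch of both, with all rule and flow structures assumed for illustration:

```python
# Two compliance checks over a workload's security state.

def find_drift(policy_rules, actual_rules):
    """Rules present on the workload but not in policy, and vice versa."""
    extra = sorted(set(actual_rules) - set(policy_rules))
    missing = sorted(set(policy_rules) - set(actual_rules))
    return extra, missing

def non_compliant_flows(policy_rules, flows):
    """Observed flows that no policy rule allows (e.g. a port scan)."""
    allowed = set(policy_rules)
    return [f for f in flows if f not in allowed]

policy = [("egress", "tcp", 443), ("ingress", "tcp", 22)]
actual = [("egress", "tcp", 443), ("egress", "tcp", 9999)]  # someone added 9999
flows = [("egress", "tcp", 443), ("egress", "tcp", 31337)]  # scan attempt

print(find_drift(policy, actual))
print(non_compliant_flows(policy, flows))  # [('egress', 'tcp', 31337)]
```

The flow check is what still catches a root-level compromise that bypasses the security-group API: even if no rule changed, traffic that contradicts the policy shows up as non-compliant.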