Good afternoon. We're here to talk to you guys today. The title of our session is Catch Me If You Can. And what we're going to be doing — all right, for the video I guess I have to be stationary — is delving into some conceptual and some advanced features of monitoring metrics from a Cloud Foundry platform, in addition to the external environment. And the whole intent of this is to show how we can adaptively react to persistent threats, to quality-of-service degradation, to things that make our foundation, our environment, poor. So that's what our session is today. I'll go ahead and introduce myself and my two colleagues on stage with me. My name is Merlin Glenn. I'm a solution architect with Pivotal. The reason this session is important to me — and it's work that we're actively doing with a lot of our customers today — is that I'm a network architect. I design SDN networks. And this tends to be a topic where it's very difficult to meet customer requirements, adapt to threats as they change, and be proactive instead of reactive in how we resolve an issue that degrades the quality of service inside of a platform. So that's why it's important to me. And let me go ahead and introduce Sean. Good afternoon. I'm Sean Currie. I'm also a solutions architect at Pivotal. So we're the guys who get to play with all the new toys, and we get to dream up these new ideas. On this project, I was the chaotician — I like to give myself new titles, too. The reason I'm interested in this kind of stuff is because we're going to break things. And if we know how they fail, we can learn from them and improve them in the future. We got to play with the machine learning algorithms. Anybody here have any experience with machine learning? Hands? Predix guy — no machine learning. Developers, any developers in the room? Operators, business people.
Who's here? What do you guys do? All right, I'm going to hand this over to Keith. Good afternoon. I'm Keith Streny. I'm also on the solutions team with Sean. Like he says, we get to break a lot of things. But we also have direct lines into customer feedback, and we understand some of the frustrations that customers have, both with the platform and just in general with putting an environment out there. We battle every day with getting things under configuration and adjusting to drift, and things like that. But even if you had all of that perfect, the environment still needs to change, because it's always facing constant adversity. So those are some of the things that we want to discuss here today. We also want to give a special shout-out to Ray Lee, who couldn't be with us today. He's on our big data services team, and he helped on the big data and platform analytics side to get us up and running. So, moving on from there. The main goals of this talk are, first, to bring SDN into the conversation — we feel like it's that time. We do a lot of stuff in the environment: BOSH and the infrastructure, Cloud Foundry, the application layer. But it's important to bring in the entire environment, because it does affect what we do. Second, to promote the need for an active platform. Today, technically, the platform is monitored — you have dashboards, heat maps, things like that, so you can see what's going on with the environment. It's not so much on the active side, actually making decisions in real time with the indicators that we get through the different environmental probes. And finally, to highlight the power of co-deploying an analytics platform with Cloud Foundry — not just for special data science cases, but as part of the normal deployment of the platform, hooking up intelligence at every single layer of the platform. Because a lot of the stuff is there: we have abstractions in Diego now, we have BOSH.
We have a tool set that we'll talk about a little bit today, Enaml, that helps drive BOSH at the infrastructure layer. We have SDN, with its APIs and the separation of control and data planes. So we have the ability to dynamically reconfigure these things. All of these things, brought together, make the case for why it's important to bring intelligence to the platform as it is today. So those are what we hope to cover, and I'll turn it over to my colleagues to talk about quality of service. Nice handoff. So, quality of service. For those of you that are network guys, QoS might relate to a protocol stream, or putting priority of one type of traffic pattern over another. What we're really talking about here is quality of service of the environment as a whole. And we're at Cloud Foundry Summit, so we're going to be focusing on Cloud Foundry and applications published on Cloud Foundry. That's the viewpoint I want to set in your frame of mind for QoS. We're going to walk through it visually in a second, because it's hard to just read a bunch of bullets up on the screen and understand what we're trying to achieve when we say an adaptive platform. But the key concept I want to put in mind here is that when I say QoS, we're not talking about QoS from a network-pattern standpoint. We're talking about QoS in terms of availability, reliability, security, uptime of the platform itself. So quality of service of the platform is one of the things we're keying on. And what kinds of things degrade quality of service? Some of you raised your hands as Cloud Foundry operators, right? You name it: infrastructure outages, DDoS attacks, persistent threats, data loss, misconfigurations, change controls. There are lots of things that you have to be able to adapt to.
And one of the key concepts we're going to go over is this: today, most operators are limited by the sets of metrics that they collect from an environment — healthy/unhealthy-type metrics, right? Or you might have some green threshold regions, and you build a pattern of remediation actions based on those thresholds. And that's very static; it's a very hard thing to do. It puts you always behind the gun — which is the title of the session, Catch Me If You Can. It always puts you behind the gun of trying to figure out or predict what's going to cause a degradation of quality of service. And that's the whole point of what we're building here: we build analytics in so that we can predict that based on patterns, and we can even predict remediations, as opposed to having these static thresholds and things that are just hard to track and chase when threats are adaptive and coming in from all different directions. And you might want to talk about threats a little bit, because you're the chaos guy, right? I'm the network guy, you're the chaos guy. Is that it? Today. Nice guy, bad guy. Good cop, bad cop. So yeah, we have a spectrum of things, and we've already touched on a couple of them. The one we really haven't touched on yet is the advanced persistent threat. We work with big customers. A lot of you have sensitive data you need to protect, right? People are always trying to get that data, be it from external or internal, and they're trying to screw up your systems. The competitors, maybe, are running the Chaos Monkey against your system, shutting things down. Anybody have that happen to them yet? No? Good. So, we can also DDoS ourselves, right? Anybody ever have an application that just keeps making requests to a service that isn't there, and overloads the service? Self-DDoS, right? We have some patterns in application development, like the circuit breaker.
Anybody familiar with the circuit breaker pattern? Right? We're thinking about taking the circuit breaker pattern and applying it from the network all the way down through your applications. And that's the pattern we're trying to talk about here. Cool. So ultimately, it all comes down to this: what if we could actually improve the performance of the platform and respond to environmental adversity at the same time, rather than reacting to it afterwards? So what we'd like to look at here is: what would continuous improvement over environmental adversity actually look like? How would we do this in real life? Do you pack everything into BOSH, make BOSH super, super smart, and handle things down there? Do you pack everything into Cloud Foundry and do stuff with Diego's abstractions? Do you do everything in the SDN layer? Or do you have to extract all these things out so that you can do higher-level analytics and sets of patterns and recipes, in order to articulate the entire environment changing in response to the threat? Because if you think about how a threat works, it doesn't come in the same way every time. It comes in differently. And so there might be a combination of things that respond to a DDoS that's different from the set of things that would respond to an APT. Or, for a network outage, how you need to reroute that traffic may be more than just a single step. So that's what we're gonna talk about — we'll spend a little time on this slide. First I'll run the animation and you guys talk through the differences. Okay. So, something bad comes into the environment and sets it on fire. And today, what we have is BOSH agents that will report that state. But what ends up happening is, we just monitor it. We say, okay, something bad happened, we run around, we do something about it — ops people, I'm sure you're familiar with this.
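The circuit-breaker pattern mentioned here has a well-known shape: stop calling a failing dependency after a few consecutive failures, then allow a trial call after a cool-down. A minimal sketch in Python (class and parameter names are illustrative, not from the talk):

```python
import time

class CircuitBreaker:
    """Stop calling a failing dependency; allow a retry after a cool-down."""

    def __init__(self, max_failures=3, reset_after=30.0):
        self.max_failures = max_failures
        self.reset_after = reset_after
        self.failures = 0
        self.opened_at = None  # None means the circuit is closed

    def call(self, fn, *args, **kwargs):
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_after:
                # Circuit is open: fail fast instead of hammering the service.
                raise RuntimeError("circuit open: not calling dependency")
            self.opened_at = None  # half-open: allow one trial call
        try:
            result = fn(*args, **kwargs)
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = time.monotonic()  # trip the breaker
            raise
        self.failures = 0  # success resets the failure count
        return result
```

The talk's idea is to apply this same trip-and-back-off shape at the network and platform layers, not just inside an application.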
And it's like, we gotta go fix this, where do we start, that kind of thing. So yeah, a lot of firefighting. What we're hoping to do is get beyond this, so that we can actually reason about it in a consistent fashion. Through dynamic analysis, and by driving this through SLAs, we can start sending these recipes out to the environment at the different levels. Some things will go to BOSH, some things to Cloud Foundry, some things to the SDN. And then, through that remediation, you'll see the environment actually adapt to the threat. Ultimately, it's all cleaned up and we've self-healed. So this is why it's important to understand it from a holistic perspective: it's no one thing in the environment that needs to be changed. I'll turn it over to my colleagues to talk a little bit more about the details and the different color schemes. Yeah, so the environment — let me kind of walk through the story of this. The environment doesn't necessarily mean just Cloud Foundry, right? Cloud Foundry has a rich set of metrics — we can tap into the firehose and event states — but that doesn't give us the health, or the expected or current state, of the environment as a whole. So we're really talking about pulling metrics from your SDN layer and from your IaaS layer, in addition to CF, so that all of these metrics are fed into an HDFS-style system, giving us the capability to analyze: when expected metric X — for example, a go-router response-time metric from Cloud Foundry — changes, what does that actually correlate to on my physical routers, or my virtual routers in the SDN layer? Is that a pattern? Is that a fingerprint that's happened already?
So when we look at this environment box here, we're not thinking of just Cloud Foundry. We're really thinking of the entire environment, the solution as a whole, which typically involves perimeter networks — things beyond the objects that Cloud Foundry could even report on. So in our animation we had our bad cop come back from the Terminator and do the whole Terminator sequence. This could be anything; we don't know what the threat is right now. And the concept is that in traditional monitoring scopes, we may have a set of hard thresholds for when this happens. So we know something odd has happened in the environment, and a certain set of metrics has now changed, or drifted into an unexpected state. In a traditional pattern, we have to build thresholds. We have to guess what that unexpected state might be. And this is the pattern we're trying to get away from, and why we're introducing analytics — because it's going to be, or already is, a losing game for most customers. And I'm using the term customers because we're services guys; we do a lot of deployments for Cloud Foundry. So, for most Cloud Foundry users and environments: how do you respond to a threat? How do you know what that threat's gonna be? A DoS-style attack — what is that actually gonna be? Is it gonna be a hit to an application you're hosting? Is it gonna be something in your control plane? Is it gonna be another endpoint, external to Cloud Foundry, that still follows the same data path or the same data route? How do you build metrics in to detect that that's happening? So what we're trying to do here is push that whole stream of metrics into the dynamic analysis engine, which is where all of these flow in general — because I'm an SDN guy, right? I know the stuff about the networks. But the key concept here is that we're trying to get away from having to define static thresholds.
We're trying to get away from having to predict what the problem's gonna be, and instead be able to analyze and predict what the resolution's gonna be. Anybody out here work for a storage company? Storage is cheap, right? So we can keep all these analytics on our Hadoop file system — whatever file system — and mark them as events through our learning models. Then, when we see that pattern again, we can apply our previous solution using our DSL libraries. Our DSL libraries will just be all those calls out to BOSH — BOSH create stemcell, whatever it may be — CF scale application, add network for the SDN stuff, right? And then we have continuous learning. This is where obviously we need our data scientists involved, and we need to start looking at patterns. One of the things we're talking about is sharing that information across customers. This is open source — Cloud Foundry is open source — so we're gonna share this information. We're also talking about enhancement via a product called Apache Metron. Anybody familiar with the Metron project? What they do is aggregate known bad IP addresses, for example. So when our data comes in, we add that, and we learn from that; we can create these internal sources as well. So really, for me, the clustering of the data — saying this pattern looks like that pattern, we should do something about it — is key, and then our DSL library will just change based on the implementation, on what our environment is. So this is kind of the meta-pattern. We're gonna look at a specific implementation later, how we did it in this first project, and then you guys can go do it however you want to. Cool, thanks, Sean. So ultimately, what you're hearing is that we really need a way to detect and analyze running behavior. We have a lot of stuff for static behavior.
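A "DSL library" of calls out to BOSH and the CF CLI, as described here, can be as simple as named command templates that an engine renders and runs. A hedged sketch — the recipe names, fingerprint wiring, and `dry_run` switch are illustrative, though `cf scale` and `bosh recreate` are real CLI commands:

```python
import subprocess

# Each recipe is a named argv template; parameters are filled in at
# remediation time. Recipe names and the set of recipes are illustrative.
RECIPES = {
    "scale_app":   ["cf", "scale", "{app}", "-i", "{instances}"],
    "recreate_vm": ["bosh", "-d", "{deployment}", "recreate", "{instance}"],
}

def render(recipe_name, **params):
    """Fill a recipe template with parameters and return the argv list."""
    return [part.format(**params) for part in RECIPES[recipe_name]]

def apply(recipe_name, dry_run=True, **params):
    """Render a recipe; actually execute it only when dry_run is False."""
    argv = render(recipe_name, **params)
    if dry_run:
        return argv  # let callers inspect what would run
    return subprocess.run(argv, check=True)
```

The analytics side then only needs to map a recognized pattern to a recipe name plus parameters; the library stays environment-specific, as the speakers note.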
We have configuration, that type of stuff, but it's important to detect, analyze, and then actually, consistently, do something with that running behavior and those detections. And this is the important part, the reason why. I mean, we could do something up here and that would be fine, but the real thing is to have very basic building blocks: recipes — BOSH recipes, CF CLI recipes, Enaml recipes that drive the infrastructure, or SDN recipes that make REST API calls to, say, NSX or something like that. But even that's not the meat and potatoes. The meat and potatoes is being able to run that model a thousand, ten thousand, a hundred thousand times to understand which are the efficient combinations of those recipes, so that when we do get that runtime behavior, we are able to pick the most efficient way to mitigate it. What we realize is that we can get to where we have very, very few outages, a high level of reliable predictability, and stable responses. This also mitigates against — how many of you have had junior-level operators on your staff, where what they thought was the right thing actually made the problem worse? What this does is take some of that out of their hands and allow you to apply more senior-level strategic guidance, and those goals, from an SLA perspective, and then execute that across the environment. So when we look at the next piece — oh, sorry, somebody was playing with the slide, gotcha. All right, so this is what the project actually looked like. The way this is set up is: we used NSX, because we were on vSphere, so that we could work with the API, and Spring Cloud Data Flow to flow metrics and logs out of Cloud Foundry into HAWQ. NSX is your SDN; Cloud Foundry, you're all aware of — that's our application-level adaptations.
We have BOSH, which is our infrastructure adaptations, and then Spring Cloud Data Flow, which gives us real-time insight into the streams, so that you can deal with things in two different ways. You can handle an actual right-then-and-there threat: run it through the stream, split it off, and respond to it via BOSH, CF CLI, or SDN recipes based on the DSLs. But then you also have long-term patterns. When we talk about something like an advanced persistent threat, the reason it's so difficult is that it's malware: it embeds, it sits there for a month, six months, a year, and then it comes alive and starts doing stuff on your network — so it's not that easy to detect. However, if you build up a month's, three months', a year's worth of training data — historical data of yours — and you create those baselines, then as soon as that anomalous behavior pops up, the system recognizes it and says: no, repave this entire thing, shift the entire environment over to something else, get rid of the malware threat, start a fresh new install, route everything over, connect to the new data sources, et cetera. So the ability to do both real-time analytics and long-term analytics, with HAWQ and then MADlib — which is basically where we build the models; Sean's gonna talk a little bit more about MADlib and HAWQ on this particular slide — that's really the key here: understanding the two types of behaviors you need in order to deal with most adversity in the environment. Yeah, so HAWQ is a SQL interface on a Hadoop file system. We have a huge group of people who have SQL skills, who now have access to this big data — and it's open source. MADlib is the machine learning libraries. We can continue to run these queries time and time again. As Keith said, we have our training data; we can test scenarios against that training data to see if our DSLs would be triggered. But again, this is just what we did. There are other tools out there.
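In the project itself, the baselining lived in MADlib models run over HAWQ tables. As a stand-in for the idea — build a baseline from historical data, then flag anomalous samples against it — a minimal z-score check in plain Python (the threshold and function names are illustrative, not the actual MADlib models):

```python
import statistics

def build_baseline(history):
    """Summarize historical metric samples as (mean, sample stdev)."""
    return statistics.mean(history), statistics.stdev(history)

def is_anomalous(sample, baseline, threshold=3.0):
    """Flag a sample more than `threshold` stdevs away from the mean."""
    mean, stdev = baseline
    if stdev == 0:
        return sample != mean  # degenerate baseline: any change is anomalous
    return abs(sample - mean) / stdev > threshold
```

The APT case in the talk is exactly this shape stretched over months of data: the baseline is long-lived, so behavior that was dormant for a year still stands out the moment it deviates.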
You can use Kafka instead of Spring Cloud Data Flow. You can use Apache Metron. You can go out and create a route service in CF and stream all that stuff in through there, so the traffic doesn't even get to the applications ahead of time. Yeah, the only thing we added here that maybe you guys don't know anything about is called Enaml. It's just a tool that we use to automate our Cloud Foundry deployments, as well as some other deployments, and it helps us close the cycle for automation, which is really important to us so that we can do this in a repeatable fashion. So ultimately, what we're looking at is that you could deploy the analytics platform with every Cloud Foundry instance, right? And what the analytics pieces give you, over time, is repeatable solutions. That's what we're after: repeatable solutions. But also, against every customer baseline, you are able to aggregate sort of a foundation of analytics — a library — and all of these would be field-tested, right? Because they're real customer solutions that dealt with real customer traffic patterns. So the idea is — let me go back real quick — when you do this, the model sits in MADlib. It's a generic model, and you have a set of recipes that will grow over time, along with that goals column. And then what happens is that a specific customer will typically have a very unique traffic pattern. So you run the model against their traffic patterns, the model evolves, and then the particular goals and SLAs adjust the environment. But as a whole, what you're able to deploy is a library that continues to grow bigger and bigger with all the different threats that are out there — and adversity in general. Let's not even talk about cyber for a second; talk about just network intermittency. Right now it's very difficult to optimize for network intermittency because it happens so fast.
With an unaugmented human in the loop, you can't make decisions that fast — recognize that there is intermittency, deal with it, and then act on it. So think of it like a cybernetic suit. It's still you on the inside, but the idea is to have so many environmental probes out there that you're able to respond at speed, because of the computational assistance you're given by the environment itself. And this is really, really important, because now you can do prioritization algorithms. I have mission-critical data for a customer and non-mission-critical data. By prioritizing with those algorithms, having the environmental probes, and being able to compute at the speed of the computational power in the cloud, suddenly we're able to make sure that, even if throughput gets throttled way, way down, mission-critical data is still being transmitted for that particular customer. If you have multi-tenancy, perhaps you have a bronze tier and a platinum tier of customer. You wanna make sure that when things get throttled — because it's not your fault, you're a cyber target, but it happens nonetheless — that platinum-tier customer, as part of their SLA, continues to get their mission-critical data out, while your bronze tiers gotta sit around until you remediate the cyber problem, right? And in combination, hopefully you don't have any outages at all, because the platform is constantly evolving. Let me go to the next one, just... So, our use cases for the demo — we kept them basic. As you can see, the slide is more generic; it says we're responding to environmental adversity, but in our particular case we wanted to do some concrete things. So we have DDoS, right? Recognize the foreign IP, add an ACL to NSX, do that through the API. A pretty simple remediation, but one that could take a while today in a manual sort of monitoring mode.
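The tiered-prioritization idea under throttling can be sketched as a greedy allocator: serve platinum demands first, then lower tiers, until the throttled capacity runs out. A toy sketch — the tier names, tuple shape, and greedy policy are all illustrative assumptions, not the talk's actual algorithm:

```python
def allocate_bandwidth(capacity, demands):
    """Grant bandwidth to tenants in tier order (platinum first) until
    capacity is exhausted. `demands` is a list of (tenant, tier, want)
    tuples; returns a dict of tenant -> granted bandwidth."""
    tier_rank = {"platinum": 0, "gold": 1, "bronze": 2}
    grants = {}
    remaining = capacity
    # Sort by tier priority so mission-critical tenants are served first.
    for tenant, tier, want in sorted(demands, key=lambda d: tier_rank[d[1]]):
        give = min(want, remaining)
        grants[tenant] = give
        remaining -= give
    return grants
```

Under a throttle, platinum tenants keep their full allocation while bronze tenants absorb the shortfall — which is the SLA behavior the speakers describe.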
Quality of service: detect the network throughput deficiency, plus up the routes. It's not exactly adding routes, but you would do some things in terms of the NSX piece. APT — this is the one where it gets a little bit tricky. What you would do is recognize the IP loads, some unique signature within your environment itself, and then spin up an entire new Cloud Foundry foundation — including switching over the NSX routes and attaching the data sources, and including shelling out that old, compromised Cloud Foundry so that your forensics team can come in, look at it, honeypot it, and start studying where the attack is originating from. That lets you switch from being defensive to more offensive: being able to report that, and have those things remediated by whoever handles that remediation. And then finally, for DDoS again: detecting network throughput, identifying the best cells' throughput. We heard a little bit about segmentation, isolation segments. So as those come online, being able to move high-priority workloads to those isolated segments, and being able to do it with different types of workloads. By being able to dynamically control the SDN piece, you can do something like move a FinServ workload — which has very, very different requirements than, say, today's workload on that particular cell — but ultimately cells, VPNs. So now you can do dynamic service chaining and meet the actual compliance pieces that that FinServ customer would have. Now that workload can be shifted over; it's still secure, it still meets compliance, and it got away from the threat. So basically you're outrunning it. And there will be no demo. So where do we go from here? You wanna tackle a little bit of the future? Yeah, sure. So one of the other concepts, too — I wish we still had the graphics up — is that we mentioned DSL, right? And we've been touching on SDN a lot.
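The DDoS use case above — recognize the foreign IP, add an ACL through the SDN API — ultimately reduces to one REST call against the controller. A hedged sketch: the payload field names below are illustrative, not the literal NSX firewall schema, and the HTTP client is injected so no real call is made here:

```python
import json

def build_block_rule(source_ip, rule_name="auto-ddos-block"):
    """Build a firewall-rule payload denying inbound traffic from source_ip.
    Field names are illustrative, not the actual NSX API schema."""
    return {
        "name": rule_name,
        "action": "deny",
        "direction": "inbound",
        "source": {"ip": source_ip},
    }

def block_ip(api_base, source_ip, http_post):
    """Serialize the rule and POST it to the controller. `http_post` is
    injected so credentials and the HTTP library stay out of this sketch."""
    payload = build_block_rule(source_ip)
    return http_post(f"{api_base}/firewall/rules", json.dumps(payload))
```

The point the speakers make is that this whole remediation, manual today, becomes a recipe the analytics engine can fire in seconds.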
We keep saying NSX because that was just our development environment. All those tasks that are being performed — all the DSL that gets executed as a byproduct of the analytics detecting there's a problem, or a signature of a problem — those also feed back into the same engine. So it's really a loop. You're also detecting: did that corrective action we just applied actually have the desired effect? That's probably one of the missing pieces I wanna make sure we portray to you guys, because this is a constantly learning engine. It's learning: did the corrective measure, did the corrective DSL response, give me the expected result, or did it give me an adverse result somewhere else that I didn't intend? And so, to get that level of sophistication in the DSL libs, to train the MADlib libraries, to build that type of pattern for a cross-section of customers so that it would be something usable — in other words, so we could get to you guys and say, hey, run this in your environment, and you could actually see adaptive rules taking place — it's something that takes time for us to validate in actual scenarios. We've got to capture those signatures, we've got to capture those patterns, and we have to simulate them. That's something we're doing internally at Pivotal. We have a unit called Customer Zero, where we simulate the customer, and we simulate all sorts of weird, havoc-wreaking things, and also a lot of things that are pretty generic and pretty lowest-common-denominator, that fit most of our customers. So we're actually running this project through the Customer Zero filter, so that we can simulate, build that pattern, and build a library, so that we can then match them to appropriate DSLs that actually make sense for an operator. Because at this point, hopefully it's evident that we're talking about learning systems. Those learning systems have to be taught in some way.
And so it's not something that an operator could just go deploy today and have a preset base of learning, a preset set of knowledge, inside of their HDFS environment. It's something that we're training and we're building, and that's where we're going for now. With that said, I think we're running a little short on time. As I said at the outset, these are things your Cloud Foundry and your environment can do, but they're not features that an operator pulling open source, or going to Pivotal and running Cloud Foundry and Elastic Runtime from Pivotal, is gonna get functioning out of the box. So this is definitely something that we're building a solution stack around, so that we can make it something portable and deployable that you would lay on top of your Cloud Foundry implementation. There are a lot of key components — the ones we had up on the slide before — that we're building the solution on, such as HAWQ, that are sort of lowest common denominator, to use that term again. Those are things that we would expect to deploy on top of any environment. But there are also gonna be some environment-specific things, like the SDN. We wouldn't assume that everyone here in this room is a VMware customer running NSX. There are other topologies out there, other ways in which we can deploy DSL that hits things at the network layer — same thing at the IaaS layer. So some of these things are gonna be abstracted, and the definitive DSL may depend on other components inside of the environment solution. We're working to get that to be a packaged thing. Okay, and I'm just gonna touch briefly on the second point. We have DevOps — everybody knows what DevOps is, right? Next we're gonna have NetDevOps, NetSecDevOps, et cetera, et cetera: everyone working together to optimize the platform. And finally, I think it's important, if you take one analogy away from this, it's this one.
So today we've worked very hard to make Cloud Foundry bulletproof, right? Well, what's harder to destroy than an armored vehicle is an armored vehicle that moves. So first you start with that — the three R's; I don't know how many of you have read Justin Smith's piece about how we can repave the entire platform to clean-slate it — but you can't just do that every single time there's a threat, right? Because it slows things down; you DDoS yourself. Essentially, your customer uptime goes way down. So, what's harder to hit than an armored target that moves is an armored target that moves, has countermeasures, and attacks back — so that suddenly there's a risk there: if I attack this particular target, I'm going to get detected, it can come after me, it can report forensic data, and that data can be used to prosecute the actual attacker. With environmental adversity, it's the same kind of thing. Instead of reacting to it and saying, please, I hope my environment's perfect today, perfect tomorrow, perfect next week, what you're really saying is: I recognize it's going to fail, there are going to be outages, there are going to be these different parts. We deal with HA today, but what we don't necessarily do is optimize the entire environment holistically. And I think that's the most important piece: we're trying to get to that armored vehicle that moves, that has defensive countermeasures, and maybe even one day offensive countermeasures, to really make it hard for Cloud Foundry to be a target. That's it. Any questions? Is it going to be built into Cloud Foundry? Well, there's a long and arduous process to get it accepted by the community, but we will definitely put our source code out there and make it available. Yeah, we know — that's why we're doing it. Any other questions? In the back there.
Well, the idea is not to send alerts, because most alerts are ignored. The idea is to have it learn, take action, and remediate without you having to be involved. You get an alert if everything's on fire. And that's it. Yeah, so we tap into the firehose — if our demo were working, you would see it. We go right into the firehose log stream, pull our metrics from there in real time, and stream them into HAWQ. But like Merlin said, it's not all there today, right? We need additional things — like a BOSH agent that could monitor network traffic, PCAP data, things that are cyber-related or network-related, for the actual quality-of-service pieces. So there are a lot of things still to do. But the idea is, if you think about it, we have abstractions at every single layer. We can drive the infrastructure to do things, to act, but you need that intelligence backing it, right? You don't wanna put everything into the environment and have it manage itself and also carry the intelligence. There are best-of-breed technologies out there that can do these types of analytics and reasoning, and there are linkages — that's what we did: linkages between the platform and the analytics piece. Making that a sort of standard solution that gets put in front of a customer, that's what's critical. And then you grow it over time and adapt it, help drive it to their particular needs. Other questions? Very simply — like, very, very simple. Latency, right? Looking at things like latency: when something's out of the normal thresholds, things like that. What was the other thing we keyed on? It was the latency and the foreign IPs — source, destination, yeah. Those are sort of the static metrics I was talking about before. So the key piece here is: yes, we'll build a first, initial "hey, if you see this pattern, apply this DSL."
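Tapping the firehose and fanning the metric stream out to different analytics, as described here, can be pictured as a simple splitter. This pure-Python stand-in replaces the real nozzle and Spring Cloud Data Flow plumbing; the event shapes and route names are illustrative:

```python
def split_stream(events, routes):
    """Fan one event stream out into per-concern buckets.
    `routes` maps a bucket name to a predicate over events; one event
    can land in several buckets, mirroring stream splitting in the
    real pipeline."""
    buckets = {name: [] for name in routes}
    for event in events:
        for name, matches in routes.items():
            if matches(event):
                buckets[name].append(event)
    return buckets
```

Each bucket then feeds its own analytic — one watching go-router latency, another watching foreign source IPs, and so on.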
But we're also building in: this is the expected state after the DSL is applied. So I was talking about the feedback loop, in which the system begins to learn: did this remediation have the desired effect, based on this fingerprint? That was a very simplistic first set. We're just looking at things to start the learning with: what's the latency on the Gorouters? What's the latency on the physical routers ahead of them? What's the number of packets being pushed through, to detect a DDoS? But we also had to feed it an expected state: after this, the latency should go back to a certain median, and you should see no more packets from the attacker's origination source address that aren't in a drop state, which would be a perimeter thing. So you have to tell it what to look for at the end, and determine: did that remediation actually achieve the desired effect? If not, what's the second set of DSLs? What's the learning mechanism that you have to begin to build into it? And that's some of the customer-zero stuff. I was going to say, the other thing that went into it is the product technology that was chosen. Every single thing up there was open source. Everything up there is available to you today, except the NSX piece, and that was because we were working on vSphere. But the SDN APIs, OpenFlow, whatever, those other pieces also have the same ability to do that. What's also important to understand is that when we put it into HDFS, because that's really what's under the hood in that Hadoop distro, you're able to do directed acyclic graphs. So you can do instant relationship building. There are so many things in terms of how you would interpret that data. The biggest thing is getting that data in and then starting to build within the foundation. It's not a one-off analytic. It becomes a community analytic. 
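The feedback loop described here, pairing each remediation with an expected post-state and only escalating if that state isn't reached, might be sketched like this. All names here (`Remediation`, `remediate`, the toy state) are illustrative assumptions, not the talk's actual DSL.

```python
# Hedged sketch of the remediation feedback loop: each remediation pairs
# an action with a predicate describing the expected state afterward.
# The system applies it, re-measures, and moves to the next DSL only if
# the fingerprint didn't clear.

from dataclasses import dataclass
from typing import Callable

@dataclass
class Remediation:
    name: str
    apply: Callable[[], None]
    expected: Callable[[dict], bool]  # did the observed state match expectations?

def remediate(remediations, measure):
    """Apply remediations in order until the observed state matches the expected state."""
    for step in remediations:
        step.apply()
        observed = measure()
        if step.expected(observed):
            return step.name  # learning signal: this fix worked for this fingerprint
    return None  # nothing achieved the desired state; escalate to a human

# Toy environment: high latency plus traffic from an attacker.
state = {"latency_ms": 900, "attacker_pkts": 120}

def block_ip():
    state["attacker_pkts"] = 0
    state["latency_ms"] = 80

steps = [
    Remediation("block-source-ip", block_ip,
                lambda s: s["attacker_pkts"] == 0 and s["latency_ms"] < 250),
]

result = remediate(steps, lambda: dict(state))
print(result)  # block-source-ip
```

The return value is the learning hook: recording which remediation cleared which fingerprint is what lets the second set of DSLs be chosen automatically next time.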
This is how we sniff out this problem; this is how we deal with that one. The idea is to grow it as a community, so that the library becomes: yeah, we saw that problem. Nobody else has seen it yet, but we saw it at this customer. And it's not a customer-sensitive thing. It's more like: how do we deal with a DDoS, right? We're not dealing with customer data for that DDoS, but how do we respond to it, and what do we see? So it's really important to build on top of those open tools where everybody has already built visualizations and analytics. It's not a new skill set that you have to learn in order to get this up and running. And we have an internal Cloud Foundry at Pivotal that we can leverage as we move forward to test our models, right? So that will be our training and our testing ground. Yes, that's how you evolve your actual enterprise, right? Absolutely. The reason we went from our standard nozzle to something like Spring Cloud Data Flow is so that you can split and split and split, so that you're feeding different analytics looking for different things. Today it's not there, right? We don't have the analytic for, say, five threats at once that are all different in characteristics, but there's no reason you can't split that initial stream coming in, the one that's already monitoring your network, your containers, your VMs, all the different layers, and stream those off into different analytics to deal with the different problem sets, so that you would detect those types of things. All right, we're going to take one more question, because this gentleman has his hand up, and then you can grab us in the corner if you need to. Least response or kitchen sink? Depends who you ask up here, probably. 
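The split-and-split fan-out described here, one incoming stream feeding several analytics, each looking for a different threat, can be sketched minimally as follows. The routing keys and the two analytics are assumptions for illustration; the talk's actual splitting was done with Spring Cloud Data Flow, not this code.

```python
# Illustrative fan-out of one metric stream into multiple analytics.
# Each route is a predicate; an event goes to every analytic whose
# predicate matches, so different detectors can watch the same stream.

from collections import defaultdict

def split_stream(events, routes):
    """Route each event to every analytic whose predicate matches it."""
    buckets = defaultdict(list)
    for event in events:
        for name, predicate in routes.items():
            if predicate(event):
                buckets[name].append(event)
    return buckets

routes = {
    "network-analytic": lambda e: e["layer"] == "network",
    "container-analytic": lambda e: e["layer"] == "container",
}

events = [
    {"layer": "network", "metric": "latency", "value": 900},
    {"layer": "container", "metric": "cpu", "value": 95},
    {"layer": "network", "metric": "pkts", "value": 1000000},
]

buckets = split_stream(events, routes)
print({name: len(evts) for name, evts in buckets.items()})
```

Adding a new threat detector then means adding one more route, without touching the stream that is already monitoring the network, containers, and VMs.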
Yeah, the approach we were taking is MVP, the least amount, because throwing the kitchen sink means deciding on multiple mitigations or remediations for one signature, one pattern, and then how do you know which one actually resolved the issue? How does the system learn from that? So the idea is to begin with small remediations and ask: did that give the desired result, the desired state? Ultimately it's driven by your SLA, right? Everything is about what you signed up to, what kind of service tier you signed up for with that customer. So if you have to throw the kitchen sink because you're trying to meet that SLA, then that's what you do. Now, it might be sequential, one small thing at a time, but the entire remediation process was like 50 small steps, each one tested: did it fix it, did it fix it, did it fix it? At the end of the day it became the kitchen sink, because that's what was required to get you within the thresholds. So, that's it. Thank you.
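The "least response first" escalation described in this closing answer, small steps applied in sequence, each followed by a re-check against the SLA, reduces to a short loop. The step names and the SLA check below are assumptions made up for the sketch.

```python
# Sketch of SLA-driven sequential escalation: try remediations ordered by
# impact, re-measure after each, and stop as soon as the measurement is
# back inside the threshold. If every step is needed, the sequence ends
# up being the kitchen sink anyway.

def escalate(steps, measure, sla_ok):
    """Apply steps in order of least impact; return the names of the steps applied."""
    applied = []
    for name, action in steps:
        if sla_ok(measure()):
            break  # already within SLA; stop escalating
        action()
        applied.append(name)
    return applied

# Toy environment: each step improves latency a little more.
state = {"latency_ms": 900}

steps = [
    ("rate-limit", lambda: state.update(latency_ms=600)),
    ("block-ip", lambda: state.update(latency_ms=400)),
    ("repave-cell", lambda: state.update(latency_ms=120)),
]

applied = escalate(steps, lambda: state, lambda s: s["latency_ms"] < 250)
print(applied)  # ['rate-limit', 'block-ip', 'repave-cell']
```

Because the check runs between steps, each remediation gets individual credit or blame, which is exactly the learning signal that applying everything at once would destroy.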