Hi, today we're going to be talking about Cerner's journey towards a secure software-defined data center. There's a little bit of false advertising here, because it's really about NFV. Here are the legal notices we have to flash up. My name is Joe Quent. I work for the Cerner Corporation. That is not the CERN nuclear research organization; it's a healthcare IT company. Over here I also have some co-presenters, Tarun and Manish from Intel, and the guy with the nice hair. Hi, I'm Pino de Candia from Midokura. Today we're going to structure our talk around who Cerner is and what our requirements are. Then we're going to get into IPS and IDS, what that means, which is our virtual network function. Then we'll get into some use cases, which are the attack vectors. We'll roll into a demo, and then we'll get into the nitty-gritty details of the solution. So who is Cerner? We are the world's largest publicly traded healthcare IT company. We're in over 30 countries. Our software is found in over 5,000 hospitals and is used by nearly half a million physicians. Our journey is very similar to SAP's: we started out as a mainframe lab system, and then we evolved to a Win32/Oracle system that we host in our own internal environment. Then, as we've migrated to any device, anywhere, we are now running on OpenStack and in the cloud. If you've ever been to the doctor's office, which I'm sure most if not everybody has, you've seen this form called HIPAA. It sounds a bit like a disease, but I promise you that it is not. It's about reasonable safeguards for your personal health information, which we call PHI: safeguards around your past, present, and future mental and physical health conditions, as well as any payment information. So what we have to do is make sure that there's no unintentional disclosure to another doctor's office or to somebody else who's trying to get your information.
CMS is the Centers for Medicare and Medicaid Services. What these guys do is actually provide 30% of United States health care, and as such, they provide stricter guidelines than HIPAA. HIPAA said reasonable safeguards. Well, if you read the CMS Risk Management Handbook Volume 3, Standard 3.2, it's a riveting read; read it on your plane ride, it'll help you fall asleep. But if you do, you will find that they say you can't just use perimeter security such as enterprise firewalls and IPS/IDS, that is, intrusion prevention and intrusion detection systems. You actually have to do what's called defense in depth, which means you have to incorporate firewalls and IPS/IDS throughout your entire data center platform. So what is perimeter security? Well, we're not going to divulge exactly what we do for our own security, but here is a high-level graphic. You have the internet on one side, and coming in from the internet we have security tools such as IPS and IDS sitting in between our enterprise network, depicted in the top right of this graphic, and our OpenStack environment. We use these security tools to prevent unintentional or malicious traffic from going from one to the other. John Lambride, at the last North America summit, gave an excellent presentation on unobtrusive intrusion detection. He went into what intrusion detection and prevention is, such as monitoring network or system activities for malicious activity, and then into how this could be implemented inside of OpenStack. It's a fascinating topic, and basically we've taken that work as a derivative and extended it. So we've looked at what he talked about: network-based IPS and IDS, which is basically what you do at that perimeter.
It's a device that sits in between networks. Host-based IPS/IDS runs on your laptop, much like what's running here, and it prevents malicious behavior such as CryptoLocker from taking over your laptop. What I'm going to do now, as we move forward, is talk about MidoNet, how it's a critical component inside of Neutron, and how it integrates into our solution. Pino? Thanks, Joe. So I'm going to tell you a little bit about MidoNet. MidoNet is a network virtualization solution for containers and virtual machines. The key points I want you to take away are: it's built by Midokura; it's been open source since late 2014; it's multi-platform; it's overlay-based, because we wanted it to work with any standard IP fabric; and we tried to get rid of all single points of failure and make it linearly scalable. Getting rid of the single points of failure essentially means getting rid of the network nodes in Neutron. And that means that for Layer 2, Layer 3, and stateful Layer 4 functions like source NAT, firewall as a service, and Layer 4 load balancing, we don't use appliances. We do it all in the agent. So instead of moving flows around to go to appliances where the state is held, we actually push state around the network, and that achieves linear scalability. I'll show that with some visuals in a second. We believe in using Layer 3 to scale your network: as you deploy more and more racks, you don't want to have to stretch your Layer 2 for the on-ramp into the cloud, so we use a Layer 3 on-ramp into the cloud with BGP. And finally, because of how we've built the system, we're essentially pushing the Neutron models out to the hypervisor. Having the models at the hypervisor means we can have deep visibility into what's happening to each and every flow, meaning for each flow we know exactly which devices it traversed in the Neutron topology.
And we can report that so that you can do debugging and troubleshooting in a very easy way. Next slide, please. Next slide. Okay. So here I'm showing a fairly standard Neutron topology. You may not be used to seeing an edge router; at Midokura we use an edge router that is virtualized so that we can have a virtualized on-ramp. In this slide, I'm just trying to point out that the standard Neutron topology has virtual-machine port-level firewalls; those are the ones you see right above the virtual machines. Then you have virtual routers with source NAT, firewall as a service, and load balancing as a service. You have a public network, usually an external provider network; for us, that's also an overlay network. Next slide, please. So for that topology, what I want to show you is that in our implementation we can do a traversal of the entire thing, which had only Layer 2 through Layer 4 elements, no Layer 7 elements like VPN or advanced firewalls, all in a single physical hop. Here you're seeing sort of a classic Open vSwitch architecture, but without the Open vSwitch daemon. We have a flow switch, which is the kernel Open vSwitch module. You have VMs on both sides. The VMs that are circled, a green VM on the left and a darker blue VM on the right, are the VMs you see in the smaller logical topology in the center. The one on the left is sending a packet that should get to the blue VM on the right. So what happens is, just as in classic Open vSwitch, a packet that misses in the kernel flow table goes up to our agent. Our agent does a flow computation: we actually simulate the entire logical topology that you see in the center. It's as if the MidoNet agent is thinking about what a real network would do if it got this packet.
And it computes all the transformations that would happen to that packet's headers, applies them in one flow rule, and installs that flow rule in the flow switch. The packet is then tunneled over to the destination hypervisor and delivered directly, because the tunnel key tells the destination hypervisor where to deliver the packet, all without going to any appliances anywhere. If you have Layer 7 appliances, we'll have to jump around the network, but otherwise we've removed all of the network hops that cause what you might have heard called traffic trombones. So I'm going to hand it back to Joe to talk about security. Thank you, Pino. Yes, we're not really into trombones here. I myself am a trumpet player, so we're going to try to remove all trombones from our entire OpenStack environment. But first we're going to talk about three different attack vectors. The first attack vector is the outside-in attack. That means there's an evildoer outside of our OpenStack trying to compromise a host within our OpenStack environment. Based on the discussion Pino just gave, that traffic has to traverse what's called a MidoNet gateway, and before it flows through that gateway, it flows through the IPS, where the malicious activity would be flagged and cut off. The second attack vector is inside-out: in the unfortunate case that a virtual machine within our cloud is compromised and trying to do some evil work outside of our cloud, the traffic has to traverse the gateway, then the IPS/IDS, and then eventually reach the victim. The IPS/IDS would catch that traffic and flag it, so we would know that something inside our cloud was causing the problem. So therein lies the issue with the implementation, and also the goodness that derives from MidoNet.
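To make the flow-computation idea above concrete, here is a toy sketch of simulating a chain of logical devices and collapsing their header transformations into a single flow rule. This is illustrative only, not MidoNet code: the device behaviors, addresses, and rule format are all invented for the example.

```python
# Toy illustration of MidoNet-style flow computation: simulate the
# logical topology once for the first packet of a flow, then install
# one composed flow rule. Devices and addresses are invented.

def port_firewall(pkt):
    # Hypothetical rule: only TCP traffic may reach the workload.
    if pkt["proto"] != "tcp":
        return None  # dropped: install a drop rule instead
    return pkt

def router(pkt):
    # A virtual router hop decrements the TTL.
    return dict(pkt, ttl=pkt["ttl"] - 1)

def source_nat(pkt):
    # Stateful SNAT rewrites the source address once per flow.
    return dict(pkt, src_ip="203.0.113.10")

def simulate(topology, pkt):
    """Walk the logical devices for the first packet of a flow and
    return the net header transformations as one flow rule."""
    before = dict(pkt)
    for device in topology:
        pkt = device(pkt)
        if pkt is None:
            return {"match": before, "actions": ["drop"]}
    # Only the net difference matters: one rule applies all rewrites.
    actions = [f"set {k}={v}" for k, v in pkt.items() if before[k] != v]
    return {"match": before, "actions": actions + ["output:tunnel"]}

rule = simulate([port_firewall, router, source_nat],
                {"proto": "tcp", "src_ip": "10.0.0.5", "ttl": 64})
# rule applies the SNAT rewrite and TTL decrement in a single step,
# then tunnels the packet to the destination hypervisor.
```

Subsequent packets of the flow hit the installed rule in the kernel and never revisit the per-device logic, which is why the traversal costs a single physical hop.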
With an inside-inside attack vector, what happens is VM-to-VM communication. On the left-hand side of this chart, you can see that the traffic never passes through the perimeter IPS/IDS, because the host that's compromised internally is trying to get to another internal host or VM, and that traffic never crosses the perimeter. One example solution would be: well, route all your traffic through the perimeter IPS/IDS. Now you're sending data from the VM up through the MidoNet gateway into the IPS/IDS, where it would be flagged, and then back down through the MidoNet gateway to the other host. This may seem okay, but it's not really a scalable model. Notice that this is what we call a traffic trombone: all traffic has to go out to that IPS/IDS, and that's a really undesirable situation. So how do we resolve this? We resolve it by virtualizing the IPS/IDS as a network function. We take advantage of MidoNet's service-chaining capability and route the traffic from all VMs through a local IPS/IDS; as traffic tries to traverse to another VM, it is flagged and caught there. Notice this is also a linearly scalable solution, because as we add additional Nova nodes, we can add additional VNFs as well. Let's get into what this looks like. We have a demo that Manish is going to talk about. So you heard a couple of things, right? You heard the problem statement, which is the need to trombone traffic if you have an edge device doing the traffic inspection, and the need to have visibility into east-west traffic and be able to secure it.
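The linear-scaling point above can be put in back-of-the-envelope terms: instead of one perimeter box seeing all east-west traffic, the number of local VNF instances grows with the workload. The per-IPS capacity figure below is an invented assumption, purely for illustration.

```python
# Illustrative only: how many local virtual IPS instances a deployment
# needs as the workload grows. The capacity number is an assumption.

WORKLOADS_PER_IPS = 100  # hypothetical capacity of one virtual IPS

def required_service_vms(workload_vms, per_ips=WORKLOADS_PER_IPS):
    """Service VMs needed if each handles at most per_ips workloads."""
    return max(1, -(-workload_vms // per_ips))  # ceiling division

# The VNF count grows linearly with the cloud, no central chokepoint:
assert required_service_vms(10) == 1
assert required_service_vms(250) == 3
assert required_service_vms(1000) == 10
```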
So what we basically did, when the request came in from Cerner, was put together an environment in which we could start doing some of these tests and figure out how to do this in a manner that's automated and ensures that security is propagated across the data center, right? This is just a snapshot of what the physical configuration looks like. I'm not going to go into all the details, but the things I want to point out are: you see the MidoNet component over there; that's essentially your virtual switch. You see the security controller, labeled OSC over here; that's the Open Security Controller we have put together to let us orchestrate virtual security functions. You see the virtual IPS components over there; that's a McAfee IPS at this point in time, but it's a virtual IPS. And then there's the security manager that you see on top. Essentially you have the three nodes. If I go into the demo setup, you have an OSC tenant, an Open Security Controller tenant, which you can think of as a controller node. This is just for the demo, right? It really depends on how you're going to deploy it, what the topology looks like within your environment, and where you want to place that particular control. So you can see the OSC tenant has the security controller in place, and you see all the virtual IPS instances in there. Then you see the two tenants. Tenant one is essentially where you have three web servers sitting behind a load balancer; think of it as a standard web server farm. Tenant two is a compute node as well, but in it we have an attacker from which we will be launching some attacks, right? So you see the attacker node over there. What we did was go through the router onto tenant one,
via the load balancer and towards the web servers, right? It's a basic demo at this point in time, but essentially what we're showing is: we do this, and the attack goes through. After which, we use the Open Security Controller to insert the virtual IPS at that spot, so the attack no longer goes through and you get any information that relates to that particular attack. So this is the snapshot of the OpenStack environment; those are the three tenants we talked about. And this is the console of the Open Security Controller, right? We are still in the process of developing this, but what you can see are the connections into the VIM, the virtual infrastructure. Then you see the ability to define security groups; those are basically the components that you're going to be securing within your environment. Then you see the element managers; those are the firewall managers, the IPS managers, things like that. And you're able to get to a point where you can define where these controls are going to be enabled, where they're going to be deployed, and how to deploy them in a manner that's effective, right? This is an automated process: you don't have to write or run scripts. Essentially you state a deployment spec that says, for this environment, ensure these controls are deployed and positioned in a specific way, right? It depends on your topology, on how you want to control it, and on how you want to enforce those controls. What you're going to see now is the security manager. You can see there were no events over there before we started the attack. Now we're going to access the web servers, and it's just a standard script, right?
It's a hello-world script. You see that it goes through, no problem. And then when you run some of the probes, like a cmd.exe request, or you run root, everything goes through. At this point in time there are no security controls enforced, so it's all allowed through. So what we're going to do now is bind the controls onto your target environment. You go in, you enable the IPS. You have two policies, essentially: cmd cannot run; root can run, but we want to be alerted whenever root is run. Once you go ahead and bind it, you see it's passed, so the policies are now enforced. Then we go through the process of doing the exact same thing. We go to the attacker VM, and we're going to access the web servers, right? Before we do that: no alerts, nothing in there. Then we run hello world; that's allowed through, no problem. You try the cmd.exe request, and you see nothing has really happened; you can still see only the hello-world script output over there. So it's been blocked and not allowed through, and this is a control that's been inserted by the security controller, right? As soon as that happens, you see an alert come up in the security manager. And the very same thing for root: running root is not prevented, but we want to be alerted every time it's run. Exact same thing, and you see those alerts came up. The intent of the demo, really... we tried to do a live demo, but we had some technical issues yesterday, so a couple of us just put this together as a video. The editing is not really the best.
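The two demo policies can be sketched as a toy rule engine. This is an illustration of the behavior shown in the demo, not the McAfee IPS engine; the patterns and rule format are assumptions.

```python
# Toy sketch of the two demo policies: a cmd.exe request is blocked,
# root is allowed but raises an alert. Not the real IPS engine.

POLICIES = [
    {"pattern": "cmd.exe", "action": "block"},
    {"pattern": "root",    "action": "alert"},
]

def inspect(url, alerts):
    """Return True if the request may pass; record any alerts raised."""
    for policy in POLICIES:
        if policy["pattern"] in url:
            if policy["action"] == "block":
                alerts.append(("blocked", policy["pattern"]))
                return False
            alerts.append(("alerted", policy["pattern"]))
    return True

alerts = []
assert inspect("/hello-world.sh", alerts) is True        # passes silently
assert inspect("/scripts/cmd.exe?/c+dir", alerts) is False  # blocked
assert inspect("/login?user=root", alerts) is True       # passes, alerts
```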
But honestly, what we were trying to show is the ability to insert a security function, in an automated manner, into your environment based on your security needs and security policies, right? This is with an IPS, but it can be done with a firewall, with a WAF, things like that; it can be extended to other security functions as well. So that's a snapshot of the demo, and we'll go into some more details: some architecture details on MidoNet and then some architecture details of the security controller. OK. So I'm going to tell you a little bit about how MidoNet software-defined networking and service chaining enable Open Security Controller to insert devices, IPSs, between workloads and attackers. But before I do that, I just want to point out that service chaining is just an enabler. If you were to try to use service chaining yourself, you'd still have to take care of virtual machine management, placement on specific compute nodes, failover if a compute node fails, and load balancing of workloads onto service VMs. All of that is done by the Open Security Controller, which is what makes the solution really cool. So what I'm showing here is, on your left, a workload VM with the standard Neutron port-level firewall and its network. I'm not showing you the rest of the network, but it's just a regular topology. You don't have to touch your topology at all, and this is going to be very important: we don't want you to have to modify your topology in order to get Layer 7 security. Instead, Open Security Controller insists on a Layer 2 bump-in-the-wire model, which means we can transparently inject the service VM into the path of the traffic. So before anything starts, the Open Security Controller, in the way that Tarun showed during his demo, will deploy service VMs according to your specifications in the UI: for example, do you want one per rack, one per compute node, and so on.
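A deployment spec like "one service VM per rack" or "one per compute node" could be sketched as a tiny placement planner. The spec values, naming scheme, and data shapes below are invented for illustration; OSC's real deployment specs will differ.

```python
# Hypothetical sketch of a deployment spec driving service VM placement.
# Spec values and names are invented, not OSC's actual format.

def plan_placement(compute_nodes, racks, spec):
    """Return (node, service_vm_name) pairs for a placement spec.
    spec is 'per-node' or 'per-rack' (illustrative values)."""
    if spec == "per-node":
        return [(n, f"ips-{n}") for n in compute_nodes]
    if spec == "per-rack":
        # one service VM on the first node of each rack
        return [(nodes[0], f"ips-{rack}")
                for rack, nodes in sorted(racks.items())]
    raise ValueError(f"unknown spec: {spec}")

nodes = ["compute-1", "compute-2", "compute-3"]
racks = {"rack-a": ["compute-1", "compute-2"], "rack-b": ["compute-3"]}
per_node = plan_placement(nodes, racks, "per-node")   # 3 service VMs
per_rack = plan_placement(nodes, racks, "per-rack")   # 2 service VMs
```

The controller would then keep that plan satisfied over time, re-placing a service VM if its compute node fails.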
At this point, I'm going to assume that the service VM is already placed. I just want to point out that the service VM has two networks. On the top, you see a management network. That management network is used for communication from the service VM to the security controller, and a few things happen there: the service VM can report on traffic, but it will also take configuration. That's what the management network is for. On the bottom, you have a service data network and a port firewall that are set up automatically by OpenStack. In practice, neither of those is used, because we only use the service data network to get a port, which you can't get without a network in Neutron. So at some point you decide that a specific VM is part of a workload that should be bound to a specific security policy. You press the button: protect my VMs with this security policy. What the Open Security Controller does first is send a message to the service VM saying that a specific VLAN is being bound to a security policy. Now the service VM is ready to protect your workload. The next thing the Open Security Controller does is the service chaining. I have two icons there because, at the moment, it's a MidoNet-specific API. Not proprietary, it's open source, but it's a MidoNet API for doing service chaining, not the Neutron service chaining spec. We're going to implement that later, and that's why I have OpenStack in there; that's what we'll eventually do.
After that call, service-chaining logic is inserted between the VM and the rest of the topology, and between the service VM and its data network, so that when traffic arrives toward the workload VM, whether from the internet, from another tenant, or from within the tenant, the service-chaining logic intercepts the packet and adds a VLAN header. That header has a tag which signals the policy, and the PCP bits are set to let the service-chaining logic know in which direction the packet was traversing; we'll use that in a second. Now, the first packet is usually a SYN packet, not really an attack packet off the bat, so it's usually let through, and the service VM sends the packet right back out of the same port. You see this is a one-armed appliance. The service-chaining logic at the service VM recognizes, thanks to the PCP bits, not just that the traffic should be re-injected but in which direction: towards the VM. Hence the packet is put right back where it was. The packet hasn't been modified; the VLAN tag has now been stripped, so the packet looks exactly like it did before, and it goes through. That's the Layer 2 bump-in-the-wire model. Now the VM responds, for example, on that flow. The packet is again intercepted by the service-chaining logic and redirected with the same policy tag, but with the PCP bits set to say that this packet is leaving the VM, going away from it. Again it goes through the service VM, and if allowed, it's returned, intercepted again there, and continues on its path as if it had never been intercepted. Now at some point the flow is established, so the service VM is just in the path, seeing all the packets and allowing them. It's important that it continues to see all the traffic; otherwise it wouldn't be able to block. Attacks can happen at any time in a flow. A flow can look innocent.
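The tag-plus-direction scheme just described can be sketched against the standard 802.1Q tag control information (TCI) field: 3 PCP bits, 1 DEI bit, and a 12-bit VLAN ID. The field layout is standard; carrying the policy in the VLAN ID and the direction in a PCP bit is our illustrative reading of the mechanism, not a published encoding.

```python
# Sketch of packing a policy tag and a traffic direction into the
# 802.1Q TCI field. The field layout (3 PCP bits, 1 DEI bit, 12-bit
# VLAN ID) is standard; this particular policy/direction encoding is
# an illustrative assumption.

TO_VM, FROM_VM = 0, 1  # direction relative to the workload VM

def encode_tci(policy_vlan, direction):
    assert 0 < policy_vlan < 4095          # valid VLAN ID range
    return (direction << 13) | policy_vlan  # direction in a PCP bit

def decode_tci(tci):
    return tci & 0x0FFF, (tci >> 13) & 0x7  # (policy VLAN, PCP bits)

tci = encode_tci(policy_vlan=100, direction=FROM_VM)
vlan, pcp = decode_tci(tci)   # vlan == 100, pcp == 1
```

On re-injection, the service-chaining logic reads the PCP bits to decide which way to forward, then strips the tag so the workload sees the packet unmodified.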
And then later an attack can be injected. So the service VM is the bump in the wire. Then an attack occurs, like that cmd.exe in the URL, and it's recognized by the service VM. The service VM blocks the flow, and packets in both directions are stopped; the VM is protected. And then, of course, the service VM will also raise an alarm and, depending on what kind of IPS you're using, report to the SIEM. Very briefly, if the service VM fails, there are ways to simply have all traffic stop. But there's also a more lenient policy: if we can detect that the service VM has failed and the policy is more lenient, meaning fail-open, then MidoNet will just allow the traffic right through, because we don't want to block if the service VM is down. OK, I think it's over to you. So this is Manish now. I'm going to spend just a couple of minutes talking about what we are doing with our security controller. There was a tech talk about this yesterday; you can find the video of that. But I'm going to walk through it real quick. The role we're trying to play here is that of a software-defined security orchestrator, and there are four key things we're trying to do for the security function. One is automation and orchestration of the security functions. You saw an example in this Cerner use case, where they are doing this for the IPS as a bump in the wire, but we can also do the same thing for a web application firewall, for other kinds of firewalls, for application delivery controllers, and so on. The idea is that you have some kind of policy you want to deploy for the security of your workload or your infrastructure, and this is how we do the automation and orchestration. The second part is the coordination of security policy. It's key to understand that we're not going to start managing the policy; we're coordinating it with all the security function managers.
So you're still going to be interacting with, for example, the screen you saw in the demo, where the network security manager, which is the IPS manager, still manages the policy. But coordinating those policies, like which workload, how you insert all of that, is our job. The third part is interacting with the virtual infrastructure and the SDN controllers; you'll see some more details on this in the next slide. Basically, we are trying to get to a point where we can take this orchestration, as far as the security service is concerned, and interact with OpenStack and also with the SDN controllers. And finally, the scale-out of security. In the cloud you scale out your workload, but you also want to scale out your security. For example, say you have a web store with 10 VMs, and then auto-scaling needs it to go to 1,000 VMs, but your policy says it needs to be protected by a web application firewall. You need to be able to auto-scale your security as well. So this is a very high-level view; I'll just spend two minutes on it. On the bottom right you see the virtualization; this is where the OpenStack layer is, and you have the multiple SDN controller examples over there. On the top, you have the application intent: the applications and also the user intent and the policies. On the bottom left, you have the security function managers; this is where the IPS manager, next-gen firewall, application delivery controller, and so on sit. And then you have the virtualized security functions themselves, from multiple vendors, multiple security functions. What the security controller is doing is taking the policy, or the intent, from the top. You have the UI, and you saw some screenshots of the UI during the demo, but of course there is also an API, a northbound REST API.
So you're actually going to describe how you want the security for your workload, and then deploy it. You saw one tenant attacking the other tenant as an example, but you can have multi-tenant, multi-VNF types of policies, and you coordinate all these policies through the API interfaces. I show interface number two to the SFC: with the SDN controller, if they provide an SFC API, we'll use that; otherwise, we'll use a lower-level API. And then, of course, we're going to use the Neutron SFC. You also have number three, which is the interaction with the virtual infrastructure itself. Four is where we do the policy coordination. And five is where we interact with the VNF to get some of the telemetry and other information, so we can do auto-scaling and things like that. There's also a control-plane channel, or agent, that we may make optional, because if the security function manager can do all this, then we might not have to interact with the virtual security function directly. So that was a very high-level overview, and you can find more about this project at intel.com/OSC, as in Open Security Controller. So I'll let Tarun summarize. In summary, you saw a couple of things that we really touched upon. We had an enterprise user who had issues trying to effectively secure east-west traffic, and we saw the components that came into play, with MidoNet providing the SDN layer and with the Open Security Controller orchestrating the security controls in an automated manner. The challenge, like Joe talked about and what he was really stressing, is really two things. One was the tromboning of the traffic, which effectively introduces latency into any kind of environment if you have to go all the way to the edge for any kind of security inspection or security enforcement.
And then the east-west traffic: being able to have visibility into that kind of traffic and to see it in a manner you can make decisions on. We have some future work. The Open Security Controller is a project that's evolving currently. You will see additional VNFs becoming part of the ecosystem, and additional controllers as well. We are also looking at figuring out how this can work effectively with container orchestration. So that's some of the future work that's coming; we'll probably have more updates over the next few months. But if you have any questions on the implementation, on the problem statements, on what we've done, please feel free to ask, or you can always connect with us at a later point in time. We might have a question. So you tied the controller, the security controller, to Neutron, or how do you... I think somebody talked about steering traffic. So how does that tie into either the SDN solution or native Neutron? Is it a plug-in or what sort of thing? Sure, I can explain it. There are multiple ways, but let's talk about the MidoNet implementation. The Open Security Controller allows for multiple SDN solutions and for service functions as well. In the case of the SDN solution, we have an interface that's called for the bindings. Essentially, the Open Security Controller has a bit of MidoNet code that will interpret the service chaining in order to bind it. As for the Neutron part, the Open Security Controller will directly call Neutron to do things like create the management and data networks for the service VMs and launch the service VMs onto those networks, but it will call the SDN controller to do the binding. Now, there could be a generic service-chaining plug-in for the Neutron SFC API. However, there are some kinds of things that you can only do in the plug-in.
So for example, we have the ability to detect the attacker that's behind a source NAT. That's not something you can do with the Neutron API, so there are still some specific things the controller can do. We're actually doing that right now: there's a function call there that can unmask the attacker behind a source NAT, and that's something MidoNet-specific. Yeah, that's actually a good point, because with the implementation we have with Cerner, there's a six-tuple implementation; without it we wouldn't get that level of granular detail, being able to see the source IP, destination IP, source VM name, the attacker VM name, things like that. So we get more granular analytical information that can be acted upon with this implementation. Maybe I would just add that I think it's likely that, in the future, you'll see a generic Neutron SFC API controller that you can use in the Open Security Controller, and that will work with plain generic Neutron. When you set up the Open Security Controller to talk to your OpenStack, you give it the OpenStack API endpoints, and you choose from a dropdown the controller plug-in that you want to use. Did you mention, did I get this right, that MidoNet, or the security solution, needs to do some change in their code to tie back into... maybe I misunderstood... the controller? There's the controller plug-in; that's a Java package that runs on the security controller. And then, of course, MidoNet had to develop service-chaining APIs. There is also an analytics piece: we basically have to go back to the flow records to find the flow that has the source NAT. So when we're doing the unmasking of the attacker behind the source NAT, we have to go to a flow-record database.
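The flow-record lookup just described can be sketched in a few lines: the IPS sees the NAT'd address of a flow, and the flow records map it back to the original source. The record format and all addresses below are invented for illustration, not MidoNet's actual schema.

```python
# Sketch of unmasking an attacker behind source NAT via flow records.
# The record schema, names, and addresses are illustrative assumptions.

# Each record maps the NAT'd (public) flow tuple back to its origin.
FLOW_RECORDS = {
    ("203.0.113.10", 41002, "198.51.100.7", 80, "tcp"):
        {"orig_src_ip": "10.0.0.23", "src_vm": "tenant2-attacker"},
}

def unmask(nat_src_ip, nat_src_port, dst_ip, dst_port, proto="tcp"):
    """Look up the pre-NAT source VM for a flow the IPS reported."""
    rec = FLOW_RECORDS.get(
        (nat_src_ip, nat_src_port, dst_ip, dst_port, proto))
    return rec["src_vm"] if rec else None

# The IPS only saw 203.0.113.10:41002; the records reveal the VM.
attacker = unmask("203.0.113.10", 41002, "198.51.100.7", 80)
```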
Yeah, but from an integration standpoint, we're getting to a point where we have standard APIs that you can call, and integration should not require you to fundamentally change any code within the controller. Is that the question? Yeah. For the security solution or vendor? Yes, for integration with the Open Security Controller. That's correct. Thank you. Thanks for the question. Another question for Pino: can I enable traffic mirroring with MidoNet? You mentioned service chaining only. Just some strange case. We're working on it, both in the controller and in MidoNet. We do have port mirroring in MidoNet; we have to enable it in the controller plug-in. It is absolutely something we're working on. Yes. Thanks. Hi, so my question is, fundamentally, if I look back, you're trying to achieve two functions. One is the service chaining, which is inserting a security service in the data path. The second is life-cycle management of the security functions. So if I look at the first part, there are a lot of options coming up, right? For example, OVN would provide some sort of native service-chaining capabilities. If I look at different SDN controllers, they have their own capabilities for how you can insert services. So I see sort of a multi-dimensional world there, with one more option being added here from your side. And as far as life-cycle management is concerned, I would think that many security vendors are going to do it themselves. And of course, there are different environments, like public clouds, where it's going to be done in a different way. So I'm just trying to understand: I like the idea fundamentally, but I'm trying to see the ultimate success of this idea in a universe where there are so many options out there.
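For reference, the generic Neutron path mentioned earlier would roughly follow the networking-sfc resource model: port pair, port pair group, flow classifier, then port chain. The sketch below shows that sequence against a stand-in client that just records requests; a real deployment would issue these through the Neutron SFC API rather than this stub:

```python
# Rough shape of the generic Neutron SFC path: create a port pair for the
# service VM's ingress/egress ports, group it, classify which traffic to
# steer, and tie it together in a port chain. The client is a stub that
# records what would be sent to Neutron.

class StubSfcClient:
    def __init__(self):
        self.created = []

    def create(self, resource, body):
        self.created.append((resource, body))
        return {**body, "id": f"{resource}-1"}


def build_port_chain(sfc, ingress_port, egress_port, protected_subnet):
    """Build a minimal chain steering TCP from a subnet through one service."""
    pair = sfc.create("port_pair",
                      {"ingress": ingress_port, "egress": egress_port})
    group = sfc.create("port_pair_group", {"port_pairs": [pair["id"]]})
    classifier = sfc.create("flow_classifier",
                            {"source_ip_prefix": protected_subnet,
                             "protocol": "tcp"})
    return sfc.create("port_chain",
                      {"port_pair_groups": [group["id"]],
                       "flow_classifiers": [classifier["id"]]})
```

A generic controller plug-in built on this model would work with plain Neutron, at the cost of the vendor-specific extras (such as the NAT unmasking) discussed above.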
Yeah, so first, the service insertion part is a very easy answer, because as you can see here, and Pino described a bit of this, we're doing it mostly as a plug-in, right? So tomorrow, if there's a Neutron or any other API, since you're just providing the intent from the top about how you want to insert those security functions, we can use it. That part we're not really trying to redo; we're just going to use whatever's available. Now, your second question about security orchestration, that's a very interesting question. I don't have a slide here, but yesterday I gave a Tech Talk with more detail on this, where I showed what we want to do. We're already working with some of the industry-leading security VNF vendors, and one of the challenges they're seeing is this barrier to entry, right? The same thing you described: they have to do too many integrations across all the different permutations and combinations of the virtual infrastructure, where you have to integrate with OpenStack, with different flavors of SDN, and sometimes even with different flavors of service insertion options, right? You can do bump-in-the-wire, you can do simple service insertion, or you can do actual, advanced SFC with the network service header and all that, right?
So we're trying to make it easier for them to integrate with all these different environments, and we're also going to work on additional integrations with other clouds. We have some integration with vSphere NSX, and we're also going to work more on containers as a service. We have not done any work for any of the public cloud offerings, for example Azure or Google Compute, but it might be something we do in the future if we need to; at least right now it's not planned. The main function for us is to provide an integration point, and also a security control point, where you can have this across multiple data centers for multiple VNFs, and so we're kind of building this out as something broader. Eventually we want to take it forward as an open source project. Yeah, I think the key issue that you heard was the multiple integration points, right? For an SDN vendor, if you can provide a single integration point with which they can access multiple VNFs, I think we're making life easier for everyone, and vice versa: for a VNF vendor, one integration point wherein all of a sudden they integrate into multiple SDNs, multiple SFCs. That's basically the problem we're trying to solve, and it's an issue that we've seen with enterprises and things like that; they want to do this in an automated manner, and that's essentially what we're trying to get to. Maybe I'll add a point. We've spoken to other vendors of security appliances and load balancers, and generally they don't want to build a controller. It's quite an investment to build this orchestration, and then we have discussions: well, are you going to build the orchestration? Should I? I'm providing the service chaining. Who should do that? Because each of us has our own focus; I want to focus on SDN.
This orchestrator is not really my specialty, and likewise, the security provider and the load balancer provider, their specialty isn't to provide this kind of orchestration either. So we see that some vendors have developed Heat templates to deploy their appliances, but that's not as flexible and dynamic as this kind of orchestration. Thank you. You're welcome. Thank you. Do we have any more questions? You do. Can you go into a little detail on how you monitor the health of the McAfee IPS, since it's in transparent mode? Sure, actually it's very basic right now. At the moment we are just monitoring the interface being up, at the level of the hypervisor. What we have in the works is to do things like ARP or other kinds of health checks, but right now it's very basic: just the interface being up and seeing traffic through it. It's very difficult, I know. What about load balancing if there are multiple IPSs? Is that something that... It's not a current capability, but it's on our service-chaining road map to be able to choose. We do have layer-four load balancing in MidoNet, and it's stateful. This would not be layer four, in that you're not actually modifying the packets; it would be more like ECMP load balancing. But it has to be stateful so that a flow keeps going through the same instance. So we've clearly thought a lot about it, but we haven't started working on it. And this is a project in its infancy, to be honest. It's really something we started effectively moving into testing and integration about three or four months back. So we're still in our early stages with this. Thanks. Thank you. Okay, no more questions? Thank you so much for your time and attention. Appreciate it. You guys have a good day. Thanks everyone.
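As a closing aside, the stateful, ECMP-style load balancing described in that last answer as future work, where packets are not rewritten but every packet of a flow must traverse the same IPS instance, is usually done by hashing the flow's five-tuple. A minimal sketch of that idea, with purely illustrative instance names:

```python
# Minimal sketch of ECMP-style flow pinning: hash the five-tuple so a
# given flow deterministically maps to one IPS instance, giving stateful
# behavior without modifying packets. Instance names are illustrative.

import hashlib


def pick_ips_instance(instances, src_ip, dst_ip, src_port, dst_port, proto):
    """Deterministically map a five-tuple to one of the IPS instances."""
    key = f"{src_ip}|{dst_ip}|{src_port}|{dst_port}|{proto}".encode()
    digest = hashlib.sha256(key).digest()
    # Use the first 8 bytes of the digest as an index into the pool.
    return instances[int.from_bytes(digest[:8], "big") % len(instances)]
```

Because the mapping depends only on the five-tuple, all packets of one connection land on the same instance, which is the "stateful" property the answer calls out.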