So, we're going to have a presentation today and also a demo. My name is Alan Kaplan, I work for Ericsson. We also have a guest speaker with us, from a vendor that is actually using TaaS (Tap as a Service): that's Anil Rao from Gigamon. And we have one of our lead software developers here who worked on building TaaS, Vinay Yadav. Let's take a look at the agenda of what we're going to cover today. We're going to talk about TaaS and what it is, going through a basic introduction for people who haven't been following the discussions we've been having since the Kilo release. We'll talk about the design, the data model, and what's going on under the hood. We're also going to cover a use case: there is a vendor actually using this, and Anil is going to talk about Gigamon's use case and what they're using TaaS for. Then we'll have a demo, where Vinay is going to go through a complete demo of what we've built while contributing back to the community. And then we're going to talk about the next steps, because there is still work to be done on TaaS; this is the first reference implementation of what we built. So, let's look at what TaaS basically is. It's an API to enable port mirroring, the kind of thing you would have in a lot of existing ToR (top-of-rack) or edge-of-rack switches out there in the enterprise world today. The reason we're bringing this to OpenStack is that we now have virtual switches, so we need to be able to do port mirroring for specific instances, VMs, containers, or bare metal, running on, say, Open vSwitch as an example. And what is TaaS? It's a simple API written in Python. We also have an agent framework, and we have specific drivers. The first driver we've added support for is Open vSwitch. It's extensible, so you could add other drivers if you wanted to do port mirroring on bare metal, or on the top-of-rack or edge-of-rack switch: it's possible to write a driver, contribute it, and add it into the agent framework, so you can do port mirroring not just on OVS. What this really allows is for a tenant or an administrator to tap the ingress or egress traffic of specific ports in Neutron. Today, when we spin up a couple of VMs, what we're able to do with TaaS is port mirroring on the Open vSwitch port that the specific VM is attached to. And then there are a couple of other advanced services. In the telco world, we don't use port mirroring just for running tcpdump, iftop, or Wireshark; there are a lot of other services that use port mirroring, and I'm going to go through some of those now as well. We got a lot of questions in Paris, and the most common one was: why do we need TaaS? Who really wants this? What services can it support? So, the first obvious one is typical network debugging.
A lot of vendors out there, when they're instantiating VMs and hooking them up using Neutron, still have a couple of configurations they need to take care of, and a lot of people want network debugging capabilities for the specific services they're running in VMs. You can imagine someone saying: I have an Apache server, but I'm dropping packets. Is that a NIC issue? Is it an application issue? Typically, the way to correlate these events and do the network debugging is to run something like Wireshark. The next one: there was a discussion the other day that Red Hat was giving around intrusion detection. A lot of people also use port mirroring for what we call passive IDS. The difference is that the packets are duplicated, i.e., mirrored, and those mirrored packets are then sent to an intrusion detection system. And then the other one is network analytics. There are a couple of different ways to do network analytics, and a lot of people out there, for example in financial services, run analytics tools for SLA enforcement and SLA checking, and they do packet mirroring on switches to feed them. The reason they do it that way is that they can't otherwise segregate the traffic as it comes in and goes out of the switch, so they capture some of those packets going from one specific machine to another, feed them into the network analytics, and make sure that the SLA is actually being enforced. And then there are some future services. One that you could see, for example, is lawful intercept. I'm not saying that will be one of the services a lot of people use the tap for, but there are additional services like lawful intercept that could take advantage of the API service that we've built. So, it's a really simple design. A tenant or an administrator calls the API; the API calls the plug-in service that we built; and below the plug-in, over RPC, we have the agent framework. Under the agent framework we have the specific drivers. The one that we're contributing back to the community is the OVS driver. If you're not aware, OVS already supports port mirroring today, so this plug-in and this driver are really just enabling the configuration to turn on port mirroring for the specific port that a VM is attached to. And for the future, you can see we have driver ABC. Depending on where you want to mirror the traffic inside the Neutron network, you could mirror it further up, on the top-of-rack switch. You could write a driver if you wanted to mirror on an Arista switch, an Extreme switch, a Cisco switch, or another vendor's switch; you just write another driver to enable that port mirroring on the top-of-rack switch.
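To make that design a little more concrete: Open vSwitch can already mirror a port with a few ovs-vsctl commands, and that capability is essentially what the OVS driver automates through the agent framework. Below is a minimal sketch of that underlying mechanism, driven from Python. The bridge and port names are hypothetical, and the real TaaS driver configures mirroring through the Neutron agent rather than by shelling out like this.

```python
import subprocess

def mirror_port(bridge, src_port, out_port, name="taas-demo-mirror"):
    """Mirror all traffic seen on src_port to out_port on the given bridge.

    This illustrates the raw Open vSwitch mirroring capability the talk
    refers to; the actual TaaS OVS driver programs the switch through the
    Neutron agent framework instead of invoking ovs-vsctl directly.
    """
    subprocess.run(
        [
            "ovs-vsctl",
            "--", "--id=@src", "get", "Port", src_port,
            "--", "--id=@out", "get", "Port", out_port,
            "--", "--id=@m", "create", "Mirror", "name=" + name,
            "select-src-port=@src", "select-dst-port=@src",
            "output-port=@out",
            "--", "set", "Bridge", bridge, "mirrors=@m",
        ],
        check=True,
    )

# Example (hypothetical port names): mirror the VM port "tap-vm1" on br-int
# to "tap-monitor", the port where a monitoring VM is attached.
# mirror_port("br-int", "tap-vm1", "tap-monitor")
```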
There are two scenarios that we're going to cover today in the live demo, so I want to touch on them very briefly, just to give people an idea of what this looks like in a physical deployment. Typically, in the API, you specify what we call the destination or mirror port: this is the Neutron port that all the traffic you are going to mirror will be sent to. The other thing you specify is the source port to mirror from. In this example, what we're showing is what we call remote port mirroring. The tenant or the administrator has gone to the portal and said, I want to monitor VM number 102, and typed in the VM he wants to monitor. The mirrored traffic is then sent to the monitoring VM. You can see that the two VMs are instantiated on two different compute nodes, and the reason is that the tenant never really specifies where a specific VM instance should run. In the other scenario, the packets you're going to mirror, from a specific VM and a specific port, happen to be on the same blade as the destination you're sending the traffic to. In this case, and this is one of the things we're going to cover when we get to TaaS under the hood, the tap bridge is able to figure out what's local and what's remote, so it knows where to send those packets to reach the destination port. Now I'm going to introduce Anil Rao from Gigamon, and he's going to go through a use case of what Gigamon is using TaaS for. Thank you, Alan. Thanks. I'm going to talk today about why somebody would want such a service and what can be done once the service is available. At Gigamon, we are in the business of traffic monitoring. On the left-hand side, you see boxes which could be VM instances; on the right-hand side, you could have a farm of tools, which could be security or analytics tools. One of the use cases we're going to talk about here is what a company like Gigamon does in the middle. What we are doing is essentially creating a set of nodes that deliver certain specific types of traffic to the respective tools. In the process of delivering the packets, additional functionality is provided, such as replication, deduplication, masking, stripping, et cetera, so that the tools are getting exactly what they need and no more. There are some more advanced features, like load balancing, for example, that could allow you to spray your traffic onto a farm of tools. So how does TaaS actually help us achieve this? The tap service was essentially the missing link in this whole scenario. Looking at it from a tenant's point of view, prior to this service there was no supported way for a tenant to actually extract traffic that is flowing inside the switches. There was a lot of work done in Neutron to deliberately turn off port mirroring capability from a tenant's point of view, and there was a right reason for doing it: to prevent cross-tenant data leakage.
So we have reopened the doors, but this time in a controlled fashion that is supported through Neutron APIs. In this example, we are showing a scenario where somebody is interested in monitoring the traffic of tenant X. As you can see, the VMs from tenant X are residing on the host, and there are other VMs there too. A tap service instance has been created, and on the destination port we are actually using a Gigamon VM instance, which serves as an aggregation point. The traffic from the two VMs in this case is mirrored onto the Gigamon device. Some additional packet processing as well as filtering happens inside the Gigamon virtual appliance, and then the traffic is tunneled out to either physical devices or analytics tools. This whole thing is orchestrated through our fabric manager, which interacts with the OpenStack controller. From the OpenStack controller, the fabric manager derives information such as the VMs running in that tenant, and it starts and stops tap service instances. And now I'll hand it over to Vinay, who will describe the data model for Tap as a Service. Thank you. In terms of how the service is exposed to the tenants, we have a simple data model with two essential resources. One is a tap service, which represents the port to which the mirrored traffic is sent; in what Alan described, the destination port of a mirror is what we call a tap service. Then we have a tap flow, which represents the source from which you want to mirror the traffic. There is an N-to-1 relationship between tap flows and a tap service: you can associate multiple tap flows with a single tap service, essentially allowing you to mirror traffic from multiple ports in a Neutron network to a single Neutron port. Under the hood, we have certain security considerations, especially with respect to security groups, which we had to address to get the traffic in. One thing is that we need the port security extension enabled in the ML2 plugin for this to work. Then, when you create the destination port, you need to create it with the port security flag disabled. This allows the port to receive and forward traffic to the monitoring VM even though it was not intended for that VM, since the MAC and IP pair are different. Apart from this, we also had to disable MAC learning on the Linux bridge which connects the tap interface of the VM to OVS, so that the packets can get in. Also, we are currently using VLANs to isolate the traffic of one mirror tap service from another. This is an implementation detail, and it means we need to reserve a range of VLAN IDs for it; right now we specify that range in the neutron.conf file. Going a little further into the implementation details, Anil will talk through how the bridges are handled. Thank you. This picture shows the internals of an OVS switch as it would typically be seen on a compute node in an OpenStack cluster. There is an integration bridge in which the VM ports, the instance ports, are instantiated. At the bottom you see the tunnel bridge, which implements a mesh of tunnels to connect the various compute nodes together as well as with the network nodes. This part of the solution, which is implemented by the ML2 Neutron driver, remains intact. What we have done as part of this project is introduce a new bridge called br-tap.
Br-tap is connected to the previous two bridges using appropriate patch ports. These patch ports allow mirrored traffic, for example, to be redirected from br-int towards br-tap. So, if a tap flow was created on the particular host shown in this diagram, we would mirror the traffic and deliver it to br-tap. Br-tap is actually a placeholder for a lot of the future work we envision for this project. People could add extra filters here, for example, to segregate certain types of traffic that need to be delivered to the destination. We plan to implement things like rate limiting, as well as controlling the number of flows that are allowed for a particular tenant. If the destination happens to reside on the same compute node as the source, which in one of the previous slides Alan described as local mirroring, we bring the traffic back into br-int. The reason I'm still sending it through br-tap is that additional filtering and bandwidth rate-limiting actions can happen in a nicely compartmentalized fashion. If, on the other hand, the destination is not on this blade or host, the traffic is forwarded onto the tunnel bridge, and then we use the same GRE or VXLAN tunnels that are currently employed in OpenStack to forward the traffic to the host where the destination resides. As Vinay just mentioned, we are relying on VLANs in our current implementation to segregate traffic so that we can keep the streams completely independent of each other: the traffic from different tenants' tap services will never be mixed together, and within a tenant, multiple instances of tap services will be kept separate. On the receiving side, when the traffic comes in through br-tun, we redirect it into br-tap. Some additional logic could be put in place there to decide if the traffic should get delivered and at what rate, and eventually it makes its way into br-int, where the destination port sits. So with that, we would like to move into the actual demo. This is a live demo where we will show the basic flow as well as a couple of experiments. In terms of how this setup is done, we have two compute nodes, one controller node, and one network node, all running inside VMs as a standard DevStack installation. The workflow for this tap service is as follows: you create a port, which as we suggested you create with port security disabled, and then you launch a VM on that port, preferably the VM you would like to do monitoring and analytics on. Then you associate this port with the tap service: when you create a tap service, you specify the port you created with port security disabled as a parameter to it. Then you can choose any other port in the network to associate with a tap flow, and associate that tap flow with the tap service. That tells our agent what the source of the mirror stream is and what destination has to be reached. So that is the general workflow. In the interest of time today, we have already created the port with port security disabled and put a VM on top of it, but we will obviously run through creating a tap service and tap flow.
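Expressed in code, that workflow looks roughly like the sketch below, assuming Python requests, a Keystone token, and Neutron's standard port API. The TaaS resource paths and field names here are illustrative guesses based on the data model described above, not the exact API, and all IDs are placeholders.

```python
import requests

NEUTRON = "http://controller:9696/v2.0"        # assumed Neutron endpoint
HEADERS = {"X-Auth-Token": "<keystone-token>",
           "Content-Type": "application/json"}

# 1. Create the destination (monitor) port with port security disabled,
#    then boot the monitoring VM on it (boot step not shown).
port = requests.post(
    f"{NEUTRON}/ports",
    json={"port": {"network_id": "<private-net-id>",
                   "name": "monitor-port",
                   "port_security_enabled": False}},
    headers=HEADERS,
).json()["port"]

# 2. Create a tap service pointing at the monitor port.
tap_service = requests.post(
    f"{NEUTRON}/taas/tap_services",            # illustrative resource path
    json={"tap_service": {"name": "demo-tap-service",
                          "port_id": port["id"]}},
    headers=HEADERS,
).json()["tap_service"]

# 3. Create one tap flow per port you want to mirror (N-to-1 is allowed).
requests.post(
    f"{NEUTRON}/taas/tap_flows",               # illustrative resource path
    json={"tap_flow": {"tap_service_id": tap_service["id"],
                       "source_port": "<vm1-port-id>",   # port to mirror
                       "direction": "BOTH"}},            # IN, OUT, or BOTH
    headers=HEADERS,
)
```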
As we can see in this Horizon dashboard output, we have a multi-node DevStack setup running here. There are two compute nodes, and there are two VMs running, one on each of them: VM 1 and VM 2. Both these VMs have been given external IP addresses, 172.16.52.1 and 172.16.52.2, and we'll be using these external IPs to communicate with the VMs. On one of the compute nodes, which happens to be compute node number two, a monitoring VM has been instantiated. This is the VM that is sitting on the port Vinay was just mentioning, which serves as the destination end of a tap service instance. Our goal here is to tap into the traffic going in and out of VMs 1 and 2 and deliver it to the monitoring VM, and we will go through the process of how this is done. So here we have two VMs, and we will tap traffic from both of them, which covers both the local and the remote scenario: one of them is on a different host, and the other VM is on the same host as the monitoring VM. So we cover both cases. In this example, we have one demo tenant, which is part of the standard DevStack environment; anyone who is familiar with DevStack will know that you get a demo tenant instantiated. This tenant has a private network called "private". We have also created a public network through which we are going to send traffic in and out of this OpenStack cloud, and the private and public networks are hooked together using a virtual router. From this particular tenant's view, we see these ports, and Vinay can go over them. Yeah, so you see one particular port here whose name identifies it as the monitor port. This is the port we created with the port security flag disabled, and this is the port on which the monitoring VM is running. Then, at the top, you see two other ports on which the normal VMs are running, which we would like to monitor. If I can take the liberty of switching back to the Horizon screen for a moment to set some context: the two VMs, VM 1 and 2, have local IP addresses 10.0.0.4 and 10.0.0.5, and their external IP addresses were 52.1 and 52.2. So in this screen, where we're showing the Neutron list of ports, we can see the two ports with 10.0.0.4 and .5. We will use those as the source side of our tap service, and the destination will essentially be the monitor port. So let's go through the process of creating a tap service instance. We have also created a CLI for TaaS, which we would eventually like to integrate with Neutron. When you're creating a tap service, you essentially specify two things: the port on which the tap service sits, i.e., your destination port, and the network ID that the port belongs to. Name and description are optional. As you can see, I will use the monitor port for this tap service instance. So we have instantiated a tap service instance, which gets its own unique ID here. The next step is to take the ports that belong to VMs 1 and 2 and hook them into this tap service instance. To do that, we create a tap flow, giving the port ID of the port from which we want to mirror the traffic. Along with that, it has two mandatory parameters.
One is, of course, the port from which you want to mirror the traffic, and the other is the tap service ID to which you want to send the mirrored traffic. This is where you associate a tap flow with a tap service, and as I mentioned before, it is an N-to-1 association, so you can associate multiple tap flows with one tap service. We're picking the port of the first VM here, and we have used the bidirectional monitoring direction, so we're going to be looking at traffic both going in and out of that VM. I'm now creating this. One more important parameter, which I missed earlier, is the direction, which has to be specified in the API. You can monitor packets egressing from a VM, ingressing into a VM, or both; in this example, we have set the direction flag to both. So you can choose between in, out, and both for that parameter. Now we are going to log in to the console of the monitoring VM, and we're going to run a couple of tools to show how the traffic is being received on that side. We're using open-source software that is available with standard Linux distributions, called IPTraf, and we'll try to send some traffic towards that VM. We are using an external IP address here, and as you can see, inside the monitoring VM we are receiving the packets and we can actually see that flow happening there. The second thing we'll do is add a second flow, for the second VM, to show that we can have multiple tap flows associated with one tap service. And as you can see, we now see the second IP address on the receiving monitoring program. What this demonstrates is that from a remote location, not necessarily on the same host (in this example, one of the VMs happened to be on the same host as the monitoring VM, the other one was remote), we were able to pump the mirrored traffic into a known, designated Neutron port. This allows us to do some interesting things that we'll be showing in the subsequent experiments. Yeah. The second use case we show is where we run a port scan on a VM. Should we do the other one first? Yeah, sure. So what we will demonstrate next is what kind of things you can do once such a service is available. It's one thing to bring the traffic from a remote site into a VM, but there must be some interesting things we can do with it. In the interest of time, and also to keep this very neutral, we're again using open-source tools here; the actual complexity of the analysis is limited only by one's imagination. We're going to show a couple of experiments. The first one: from a monitoring VM, can we gather some insight into the bandwidth consumption, or the relative bandwidth consumption, of those two VMs? I will be running a small application here, which is essentially iftop, again a standard program available with Linux. We have done a little bit of configuration to make sure the scaling is done properly, because this program draws a small graph and we scaled it down to make our experiments fit. I will kill these two ping sessions that you can see on the graph on the right-hand side. What we'll do is hit these two VMs with faster pings at two different rates.
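For reference, the kind of traffic generation used in this bandwidth experiment can be reproduced with plain Linux ping, here wrapped in Python. The exact flags and rates used in the live demo are not shown, so treat the values below as an approximation.

```python
import subprocess

def ping_load(target, payload_bytes, interval_s, count=100):
    """Send `count` ICMP echoes of a given size at a given interval.

    -s sets the ICMP payload size and -i the inter-packet interval (Linux
    ping); intervals below 0.2 s typically require root privileges.
    """
    subprocess.run(
        ["ping", "-s", str(payload_bytes), "-i", str(interval_s),
         "-c", str(count), target],
        check=True,
    )

# Approximation of the demo: ~1 KB pings to the two VMs' external addresses
# at two different rates, so the monitoring VM sees two distinct bandwidths.
# ping_load("172.16.52.1", 1024, 0.25)
# ping_load("172.16.52.2", 1024, 0.05)
```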
So we will try to hit the first machine. We're sending 1K packets to it at an interval of about 0.25 seconds. You will notice that iftop, which is an analysis tool, has detected that, and it is computing the bandwidth it is seeing for that VM. We will now go on to the second one and deliver traffic to it at a slightly different rate, this time with a 0.05-second interval. What we are demonstrating here is that in the monitoring VM, one can perform these kinds of studies, and they could obviously be a lot more sophisticated, but even this simple experiment shows that some amount of insight into VMs can be derived from a remote location without having to instrument those VMs themselves. This tool allows us to look at consolidated bandwidth; we can look at one direction or the other, or split out both directions. Going forward, we'll take a look at another experiment, and this one has more to do with the security aspects of traffic monitoring. While the previous experiment showed how people can do analytics on traffic behavior, there are a lot of times when you might want to find out what is happening with respect to security. So in this experiment, we launch a port scan on one of the VMs; you'll see the SYN packets coming in, and we log them in the IPTraf monitor and show them. This can be used to detect intrusions in your system. So we'll just run through the demo now; what we're trying to do is show potential use cases of the service. In this particular experiment, the security groups on the source side have been opened up to allow TCP and UDP traffic through, so when somebody is trying to connect into the box, the traffic will get in. But some of the ports obviously are not running any services, so you may not actually make a connection with anything. What this experiment is trying to demonstrate is that from this monitoring station, which is our monitoring VM, a security analyst might want to find out what kind of activity is happening on different ports of those machines. So we will run through a small experiment here, where we are essentially going to scan ports 50 to 70, and we are sending TCP traffic to the first VM. You will notice that the connections were refused, because we didn't have anything running on that side, but on the monitoring VM somebody can immediately notice that this kind of activity was taking place against that VM. From a security standpoint, this is very useful, because security tools can be deployed in a centralized or consolidated fashion, with a nice big purview, and they can survey the traffic going on in a tenant, for example. In this particular example, let's say somebody is trying to narrow down on a specific port range now, going to ports 50 to 55. You'll notice that there is additional activity happening on a smaller set of ports. This might trigger certain security alarms, because if somebody doesn't expect a certain type of traffic, they can flag errors, shut VMs down, or change security group policies.
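The scan in this experiment is a plain TCP connect probe over a port range; a minimal sketch of what generates that traffic is shown below (the target address and port range follow the demo, everything else is illustrative). On the monitoring VM, every one of these connection attempts shows up in the mirrored stream whether or not a service answers.

```python
import socket

def scan_ports(target, first, last, timeout=0.5):
    """Attempt a TCP connection to every port in [first, last] and report it."""
    for port in range(first, last + 1):
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            result = s.connect_ex((target, port))  # 0 means connect succeeded
            state = "open" if result == 0 else "closed/refused/filtered"
            print(f"{target}:{port} -> {state}")

# The range used in the demo against the first VM's external address:
# scan_ports("172.16.52.1", 50, 70)
```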
We will now show a slightly different experiment, and this one has to do with somebody trying to make a remote connection to one of these VMs. In this example, we actually have the telnet daemon disabled in the VM, so a telnet connection cannot succeed. But if somebody were trying to access this VM through a telnet call, you might be interested in finding out where that request is coming from. What we show here is that an attempt to make a telnet connection shows up on the monitoring VM side as coming from a specific IP address, and that IP address in this case happens to be the IP address of the machine from which I was sending the telnet request. What we are trying to show with this particular experiment is that a lot of the security-related issues people might face when they deploy their services in a cloud, an OpenStack cloud, can actually be monitored from well-known, centralized positions, where you can deploy very sophisticated security tools to carry out this kind of analysis. Thank you. With this, let's go back to the slides, and we will talk a little bit about the remaining items and some forward-looking work for Tap as a Service. In terms of the next steps, what we have created now is a basic framework and a reference implementation. There are, of course, a lot of things we could improve the service with. One is rate limiting of the mirrored traffic: as you can imagine, it is exactly when something goes wrong in a network that you want to mirror and find out what is happening, and if there's a storm going on in your network, you don't want to mirror everything and send it to the mirror port. You want to ration the amount of mirrored traffic, so we could look at how to limit the rate of the mirrored traffic. The second one is adding pre-capture filtering. As you saw, we don't have any filtering right now, so we mirror everything which shows up on the port, ingressing or egressing if the both flag is selected, and send it to the tap service port. We would like to allow fine-grained selection of which packet flows you want to mirror and analyze. We also want to add control on the number of tap flows that tenants can create, because it should not happen that a rogue tenant goes on creating a large number of tap flows and hampers the underlying network. And of course, we need integration with the existing systems, like Horizon, and also Tempest for test cases, going further. So these are some of the items which are yet to be done; we are looking at taking this forward, and it would be good if we get a lot of community support to help us move forward with it. Cool. So let's take some questions. I hope everybody liked that; we had a really good demo from the guys, and it gave you some really good use cases that TaaS is actually being used for today. And there are probably some additional services that can be built on top using the TaaS API. So yeah, go ahead. Yeah, no. So we're going to contribute this to the Stackforge repo this week, so it will be made available. And then, depending on the discussions we need to have with some of the Neutron cores and the Neutron PTL, I believe the intention would be that we roll this up into the Liberty release. Sure. So I think one of the things that Anil and also Vinay touched on: it's basically any packet that comes in on the port, whether multicast, unicast, or broadcast. It doesn't discriminate.
I think one of the features we'd like to add next is flow-based capture, so we can specify that we only want to capture, for example, multicast packets, or that we only want to see specific broadcast packets matching, say, the DHCP or ARP requests coming from one client to a server. Those are enhancements we intend to add in the Liberty release. Oh, it's just capturing, yeah. Okay, sorry, another question? No, you want to take that? Sure. At this point, our first reference implementation is with OVS. Yeah, I mean, we can definitely talk about that after the meeting; I'd be happy to sit down with you and figure out what we could do to add support for Linux Bridge. Sure. If your tap flow's destination has a security group that would drop the packet, does your tap still see the packet before it's dropped, or does the drop happen first? We essentially mirror all the packets which show up on the OVS port, and the security groups today actually sit above that port, between OVS and the VM. So we would see the packets before they're dropped by security groups on their way into the VM, and vice versa: if the VM is sending any packets, we would receive them down here. Have you looked at performance degradation issues at all? Our experience has been maybe a 30% hit. Yeah, so I think that's a really good question. I don't think we've actually looked at performance yet, but if you remember one of the slides we talked about: if performance is a concern and you have concerns about doing the monitoring on Open vSwitch, you could write a driver and do the port mirroring further up in the aggregation network. That could be, for example, on the PCI card on the compute blade, or it could be on the top-of-rack switch. So you could alleviate some of the performance problems, at least on the compute blades. Yeah, cool, yeah, go ahead. Can you speak up? In the current implementation we have been using, which is basically the reference implementation for OpenStack Neutron, there is a Linux bridge hooked up between the OVS switch and the VM itself. That Linux bridge was essentially put in place to implement security groups, which aren't currently supported directly in OVS. In our scenario, what's happening is that we're taking traffic that belongs to somebody else and redirecting it forcibly into the monitoring VM, so the MAC and IP addresses of those packets won't necessarily match those of the destination VM. What we found was that the MAC learning happening in that Linux bridge was ending up dropping the packets, because there is no entity inside the monitoring VM that responds to those ARP requests. So we turned it off. We don't really see any significant performance hit, because there's only one port to which that bridge is typically sending the traffic, so MAC learning really didn't help or hurt in any way; turning it off was what we needed. Okay, so we've got time for just two more questions. I think this gentleman on the microphone was here first. Go ahead. [inaudible question] Okay. I think we have an N-to-1 mapping, but as you said, we need to go further with a lot of testing on this. But we do have support for it: you can mirror multiple ports into a single tap service port.
Theoretically and implementation-wise, yes, it is supported, but as the gentleman was saying regarding performance, we need to see what actually happens when you do a lot of aggregation into one port. The thing is, right now, with the packets we mirror, the VLAN tag is stripped out. We only see the MAC and the IP packet; the VLAN header is taken out, so we don't have any visibility of the VLAN inside the monitoring VM whatsoever. Yeah, my question was: is this restricted to just ports within the same tenant, or can you go between tenants? How does it protect one tenant from being able to mirror another tenant's ports? So I'll answer that one. At the moment, we are trying to enable a tenant to carry out traffic monitoring, so there are deliberate checks in place to ensure that both the source and the destination belong to the same tenant. The whole idea is that we want the OpenStack community as such, as well as the user community, to become comfortable with the fact that a tenant is enabling mirroring. Going forward, we can loosen it up further; we could have cooperating tenants that decide to work together, in which case we can expand this model to allow the traffic to flow from one tenant into another. So we're trying to take it one step at a time. But you are doing checks today to make sure it stays within a tenant? Today's implementation has those checks in place. All right, cool. Thanks, everybody. I hope you found this useful, and I'll see you again.