All right, good afternoon, folks. Having a good day so far? So good? All right. So let's talk about the Gigamon Visibility Platform: why visibility, why you need it for two kinds of architectures, the NFVI architecture as well as private clouds, and how the same platform works in an agnostic fashion across both solutions. Why traffic visibility? Has anybody heard of Gigamon, other than the people in the front? All right. So why traffic visibility? We're trying to solve the use case where you need access to and aggregation of the traffic, so you can feed either the security tools, which inspect the traffic, or your monitoring tools in an NFVI space. The common theme in all of this is that you need traffic. Logs are not good enough. You need the traffic as a source of truth, so we want to get visibility into that traffic. How we do that, and why, is the next 20 minutes of the session.

Who deploys Gigamon? The who's who of the industry, right? A couple of quick stats here: 80% of the Fortune 100, and eight of the major mobile operators. Anybody from the telecom services here? Perfect, right? So obviously you all need access to the traffic, and we are deployed in the largest of them. A lot of them are also moving to private clouds, to OpenStack private clouds. So how do you get at the traffic in the east-west blind spots as well?

So what is our platform? It's not a single end product or a single packet broker, as some people might have called it. It's a platform that encompasses the virtual, the physical, as well as the orchestration layer. The idea is that you acquire traffic from wherever it is, whether that's an on-prem data center, a private cloud, or a public cloud. We also announced our solution for Amazon last year.
You put all of that traffic through a visibility platform, apply a set of intelligent policies on top of it, and then serve it to the tools. In this case, the tools could be security tools, or application performance monitoring tools. And, more importantly, it can all be orchestrated through APIs. So it's not a legacy single-product box; it's a platform that can serve multiple needs.

Let's look at a couple of use cases. First, the security delivery platform. The same platform we just saw on the previous slide is laid out here in a security context, whether that's on-premise, in-line protection of the perimeter, for example, or NetFlow generation: rather than doing NetFlow from multiple locations in the network, maybe you want to centralize it. The idea is that you have a single security platform that can do all of that and optimize your tools. Another variant of the same platform is subscriber awareness, specifically for mobile carriers. How do you provide specific services for high-paying subscribers? The same platform can morph itself into a subscriber-aware platform as well. Again, both security and subscriber needs.

Now, that's all great; the platform is available. But why are we here at OpenStack Summit talking about virtual visibility? Let's talk about some of the challenges that exist today and how we evolved this solution. Do any of you recognize this world, where we were happy with a single node, a single application server, connected through switches? You just had to SPAN a switch port to get access to the traffic. But as you virtualize this world into a hypervisor with high-density virtual machines, you get a bunch of blind spots in your network. How do you get this traffic to the same tools? In the old world, it was pretty easy, right?
Put in a tap at point one, put in an agent, and there you have it. But once you move into a virtualized world, you have these east-west blind spots that get created. How do you get visibility into that? That's what we try to solve with the Gigamon platform. What are the forcing considerations? Security, for example; we talked about that. Application performance: how is your customer experience behaving, and what are your application KPIs? And finally, for the service providers, VNF monitoring. As physical network functions move to virtual functions, how do you get access to that traffic?

Now, you can do it a couple of ways. One option, and we have seen customers and vendors try this, is to start putting virtual tools inside the hypervisor itself. It makes sense when you're small; at low density it might work. But as you scale out, those virtual probes become pretty heavy, and they start eating into the compute capacity of the hypervisor itself. So it's not a scalable solution if you look long term. A better approach is to keep your density high, deploy a Gigamon solution to access and aggregate the traffic, and feed all the tools from there. The advantage of this model is that if you want to add more probes down the road, you don't have to disturb your hypervisors at all; you're just feeding off the Gigamon platform, which already has access to all the traffic coming in. So this is a much more scalable solution as you look ahead.

Now, how does it work in a traditional data center environment? Let's say you have east-west traffic flowing over your physical network. Drop in a Gigamon platform, access the traffic there, and feed it across your network. So you have your first level of inspection within the hypervisor, or within your private cloud, and your second level of inspection at the Gigamon platform.
With the combination of the localized inspection and the Gigamon platform in hardware, you get full, pervasive visibility into all the east-west and north-south traffic in your cloud.

Now, that's the solution, but how do we bring it back into OpenStack? Let's look at a couple of things Gigamon has done. We have co-proposed tap as a service, for about three years now, working with Ericsson and a few other vendors, to mirror traffic in Neutron. You might ask why tap as a service is so important. One of the great things about OpenStack is that it's a multi-tenant environment. You do not want to be deploying virtual mirroring on every OVS element across your cloud; you may want to give a specific tenant the flexibility to do their own mirroring. Tap as a service was created to address that specific use case. While it was working through its committee, we also built an intelligence-at-the-edge solution, which is an agent-based solution; we'll talk about that as well. And obviously we are in an elastic cloud environment, so how do you orchestrate this? The GigaVUE-FM fabric manager allows you to orchestrate this entire visibility fabric across the board. Finally, bringing it all together is the visibility platform, whether that's a virtual appliance or the physical appliances, to aggregate and provide traffic intelligence. So again, three components: acquisition, orchestration, and intelligence. Those are the three layers that can provide you pervasive visibility.

Let's take a look at the first approach, tapping at the edge. That's the intelligent edge solution, what you could call monitoring from within. You acquire the traffic at the edge and aggregate it at the intelligence layer before sending it back to the physical appliance.
The advantage of this model, as some customers have put it, is that it's a tax-free way of doing it: you're only concerned with accessing your own tenant, not the others.

Before I go to the next slide, I want to give a quick overview of tap as a service. It's tracked on GitHub, so you can download it and include it as part of your DevStack to try it out. There's a lot of work in there by Gigamon and a few others. This is how the architecture lays out: it's integrated with Neutron and programmed into OVS. The idea is that tap as a service defines a set of APIs, so as other vSwitch vendors plug in, they can write to the same APIs and make mirroring available. Gigamon has shown thought leadership here, working with Ericsson to develop this, and it's supported by a few other vendors now.

So we took that and said, let's see how it plays into the Gigamon solution. We talked about getting traffic from the agent, from within the VM. But in this case, with Open vSwitch plus tap as a service, you can have an agent-free approach to get visibility and still feed the V Series, the aggregation layer inside the cloud. That way you get acquisition in a much more seamless, non-invasive way, but you still preserve the pervasive reach of your visibility as needed. Again, this is in tech preview, so if you are interested, please talk to us; we have a booth at C14, and we'll be more than happy to discuss it further. I'll show a quick demo of how this works as well. In this case, acquisition is supported by tap as a service, and Gigamon provides the intelligence with the V Series nodes.

Now, that's all good from a private cloud perspective. If you're an enterprise moving to a private cloud, it makes sense in that world.
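To make the tap-as-a-service API mentioned above a little more concrete, here is a minimal sketch of the JSON bodies a client might POST to the Neutron TaaS extension. The endpoint paths and every field name here are assumptions based on my reading of the TaaS spec; verify them against the version you deploy before relying on them.

```python
# Hypothetical request-body builders for the Neutron tap-as-a-service (TaaS)
# extension. Endpoint paths would be roughly /v2.0/taas/tap_services and
# /v2.0/taas/tap_flows; all names here are assumptions to verify.

def tap_service_body(name, port_id):
    # A tap service is the destination: the Neutron port of the monitor VM
    # (e.g. a V Series node) that mirrored traffic is delivered to.
    return {"tap_service": {"name": name, "port_id": port_id}}

def tap_flow_body(name, tap_service_id, source_port, direction="BOTH"):
    # A tap flow mirrors one source port's traffic (IN, OUT, or BOTH)
    # into an existing tap service.
    return {"tap_flow": {"name": name,
                         "tap_service_id": tap_service_id,
                         "source_port": source_port,
                         "direction": direction}}

# Example with placeholder IDs: mirror both directions of a tenant VM port.
svc = tap_service_body("monitor-1", "MONITOR-PORT-UUID")
flow = tap_flow_body("flow-1", "TAP-SERVICE-UUID", "VM-PORT-UUID")
```

With real UUIDs, these bodies would be sent to Neutron through a normal authenticated HTTP client; the point is only that mirroring is driven per tenant through an API rather than by touching every OVS node by hand.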
But let's take the same world and put it on virtual network functions, in the service providers. They are virtualizing at a faster clip than we expected, specifically around the mobile core: the IMS, the MMEs, for example. If you virtualize all of that, you've got all these blind spots again, right? It makes sense for enterprises, we talked about that, but as a service provider you have the same challenges. How do you get visibility there?

So what we have done, again working in the OpenStack community, is use the same tap-as-a-service function to access the traffic, feed it into the V Series, and deliver it to the tools. Let me go back one slide here. The idea is: get the traffic via tap as a service from your virtual network, feed it to the V Series, which is itself just another virtual function, and that virtual function delivers the customized traffic onward. In this case, all your MME traffic can be inspected centrally and then delivered to the tools.

This is looking ahead a little bit to a world where it's all NFV, where there are no physical appliances at all, because the industry is moving to a COTS-based architecture across the board. In that case, how do you get access to traffic? You use tap as a service to acquire the traffic, do the intelligence within the cloud, and then forward it to your virtual monitoring tools. Those could be voice analysis tools, customer experience management, application performance, or security tools; it could be anything. The idea is that in an all-COTS world, you get access to this traffic as a virtualized function. Now, one thing I do want to call out in this subscriber use case is that you may not need all the packets. You may want to scope it down by sampling specific sessions, because those sessions are what's important.
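One way to sample whole sessions rather than random packets, as just described, is to hash a stable subscriber identifier such as the IMSI and keep roughly one subscriber in N. This is only an illustrative sketch under that assumption, not how the Gigamon features are implemented; the function and rate are hypothetical.

```python
import zlib

def sample_subscriber(imsi: str, rate: int = 10) -> bool:
    # Hash the IMSI so every packet of a given subscriber gets the same
    # keep/drop decision: tools then see complete sessions for the roughly
    # 1-in-`rate` subscribers selected, instead of fragments of everyone's.
    return zlib.crc32(imsi.encode()) % rate == 0

# Every packet carrying the same IMSI lands in the same bucket, so a
# selected subscriber's whole session reaches the tool.
keep = sample_subscriber("001010123456789")
```

Hashing, rather than random per-packet sampling, is what makes the result useful to a subscriber-aware tool: the decision is deterministic per subscriber, so sessions are never split.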
Those are some Gigamon value-added technologies, called FlowVUE and GTP correlation, that can narrow down specific subscriber traffic for inspection. Again, the idea is that as you look ahead to a COTS-based architecture, the solution can support it as well. The new normal is that as you grow into a visibility architecture with your virtualized functions, deploying the V Series gets you full visibility for subscriber awareness. Your blind spots are gone.

So I do want to do a quick two-minute demo, and I'll walk you through how the solution works. This is visibility using tap as a service in OpenStack. All of you probably recognize this picture: it's the OpenStack topology, where we take a monitored instance and a monitoring tool, Wireshark in this case, and get access to the traffic with a Gigamon node called the V Series. We are trying to get the production network traffic inspected through our solution. What we'll also show is that the same traffic can be replicated to multiple tools; it's not enough for just one tool to inspect it. This is where the tap-as-a-service links are constructed, using the OpenStack commands. What we're going to do here is ping a specific node; imagine you're simulating an attack against a virtual machine. What we want to see is all the traffic being inspected by a tool, in this case Wireshark. Let's look at how this is orchestrated. You drag and drop your inspection endpoints, in this case the traffic rules. You drag and drop a tool, Wireshark in this example. Link up the source and the destination and deploy it. Now you'll see Wireshark seeing all the traffic going to that VM. Pretty straightforward, right?
A couple of drag-and-drop clicks, and you've got your traffic being inspected. It's that simple to set up. Now, that's great, but your security tool might want to do something else, right? Not just packet analysis; maybe some other kind of analysis. So in this case, we'll take the same traffic source, the pass-all, and also pass it through some intelligence. Here we are sampling packets, one in ten. Maybe you don't need every packet; maybe you want to sample at a certain rate. So we'll go ahead and sample the traffic at a rate of one in ten and feed it to the tool. That way your cloud tool doesn't need to inspect all the packets; it can inspect just a portion of them. Again, we'll go ahead and deploy that, and you should see Wireshark slow down a little: now you only see one in ten packets, rather than the rate it was seeing before. What this shows is that you can do filtering, acquisition, and replication while providing intelligence along the way. That's pretty much what we wanted to cover in the demo.

Quickly, to summarize. Flow mapping, which is our filtering, acquisition, and replication engine, is a patented architecture; Gigamon has had it for ten-plus years and pioneered it. A simple drag and drop creates the orchestration. All of this is also available as REST APIs, so if you spin up more workloads, you can automate the sequence. Then there's automatic target selection, a patent-pending architecture: as new VMs get spun up, no user intervention is required. We automatically pick up the VMs, and as long as they meet some traffic criteria, they're picked up for inspection.
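What the demo configures, filter the victim VM's traffic, send every matching packet to one tool, and a one-in-ten sample to another, can be sketched in a few lines. The function and packet format below are hypothetical, not a Gigamon API:

```python
# Illustrative filter -> sample -> replicate pipeline, mirroring the demo:
# one tool gets the full matching stream, a second gets a 1-in-N sample.

def run_pipeline(packets, victim_ip, rate=10):
    tool_full, tool_sampled = [], []   # e.g. Wireshark plus a second analyzer
    matched = 0
    for pkt in packets:
        if pkt["dst"] != victim_ip:    # flow map: keep only traffic to the VM
            continue
        tool_full.append(pkt)          # replication: full copy to tool one
        if matched % rate == 0:        # sampling: every rate-th matching packet
            tool_sampled.append(pkt)
        matched += 1
    return tool_full, tool_sampled

# 100 packets, half of them addressed to the "attacked" VM.
pkts = [{"dst": "10.0.0.5" if i % 2 else "10.0.0.9", "seq": i} for i in range(100)]
full, sampled = run_pipeline(pkts, "10.0.0.5")
```

The point the demo makes is exactly this shape: one acquisition feeds several tools, each at the fidelity it actually needs.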
Again, we talked about the OpenStack APIs and the REST APIs for automation and orchestration. Then there's the GigaSMART intelligence. It's not enough just to get the packets; they also need to be processed along the way, whether that intelligence is slicing, sampling, masking, whatever you want to call it. You want to apply some intelligence to the traffic before it's delivered to the tool. The idea is to optimize the tool, not to overload it with information; this lets us do that. And finally, an agnostic platform. It's a true multi-cloud platform, whether that's OpenStack, Amazon Web Services, or even VMware: the same single pane of glass supports them all. We also provide different deployment options. You may start off all-physical, and your journey might be to move to all-virtual; this gives you those flexible options as well.

That brings us to the end. I have a couple more minutes, so if you have any questions, I'm more than glad to talk about them. Thanks for your time. Appreciate it. Yes, sir? Can you repeat the question? I can't hear. OK.

Let me go back to a series of slides here. There are two ways you can orchestrate this. Let's talk about the virtual side first. The V Series nodes actually scale out based on the amount of traffic coming in, and those services can also be chained. For example, you might want to do flow mapping, or filtering, first; then slicing; then masking after that. We can service-chain a number of GigaSMART intelligence operations one after the other, and of course you can pick and choose how you want to do it. A variant of that exists in the hardware, too: you might optimize in the cloud first, get the traffic to the hardware, and then service-chain with NetFlow, for example, or something else after that.
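The service chaining just described, filter first, then slice, then mask, can be pictured as composing small transforms over a packet stream. This is purely a conceptual sketch; real GigaSMART chains are configured through the fabric manager, not written as code, and all names below are made up:

```python
# Conceptual service chain: each stage takes a packet list and returns a new one.

def filter_tcp(pkts):
    # Flow mapping / filtering: keep only the traffic of interest.
    return [p for p in pkts if p["proto"] == "tcp"]

def slice_payload(pkts, keep=64):
    # Packet slicing: truncate payloads so tools see headers, not bulk data.
    return [{**p, "payload": p["payload"][:keep]} for p in pkts]

def mask_user(pkts):
    # Masking: blank out a sensitive field before the tool sees it.
    return [{**p, "user": "****"} for p in pkts]

def chain(pkts, *stages):
    # Apply each intelligence operation in the configured order.
    for stage in stages:
        pkts = stage(pkts)
    return pkts

pkts = [{"proto": "tcp", "payload": "x" * 200, "user": "alice"},
        {"proto": "udp", "payload": "y" * 80, "user": "bob"}]
out = chain(pkts, filter_tcp, slice_payload, mask_user)
```

Reordering the stages is just a different argument order to `chain`, which is the pick-and-choose flexibility described above.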
So again, it gives you the flexibility to do it whichever way you want. Good question, sir.

So, a good question on SR-IOV. I don't want to speak for my distinguished colleague who is helping on tap as a service; it looks like they're also targeting SR-IOV support in tap as a service, a little further out. One thing we are thinking about with SR-IOV that I want to point out is this: where we position tap as a service as the entry point, we are looking at options where, in the SR-IOV world, because of the high throughputs, the traffic might actually come through north-south; it's not really east-west. If you look at a physical-turned-virtual function today, it's pretty monolithic; it's virtual in name only, not really microservices or cloud native. So we believe the top-of-rack switch through which those functions are connected could be the one spanning out that traffic. It doesn't have to be tap as a service itself; we may consider an option of ERSPAN out of the top-of-rack switch. But you can still localize the inspection of the traffic to the cloud first, before delivering all of it to the tools. That's an option we are looking at. Some of this is vision for the future, but that's where we are thinking ahead. Can you speak up?

So, a good question on the performance impact. We are looking at the performance impact as well, but as you can tell, with any SPAN technology, whether it's a physical switch or a virtual switch, there is going to be a performance impact from the SPAN. What we are looking at is the optimal performance trade-off. In fact, one of the reasons our tapping agent has been pretty attractive to customers is that they feel they can control that performance: they can give it a bigger, fatter VM if needed, and so on.
If you are a tenant, you are restricted to what the cloud provider gives you as hardware, so you're limited to whatever SPAN capabilities it can provide. Whereas if, as a tenant, you can get a fatter VM, you can put your own tapping agent there and give it the performance it needs. So we are paying attention to both, to see where the industry leads us; obviously we cannot abandon either, so it comes down to the performance. But it's a good question: the performance impact is real. In fact, when customers ask us, the question I ask back is, how much throughput are you getting out of that VM? It's not a Boolean answer, really. It depends on the hypervisor performance, the server performance, and so on.

I've got three more seconds, and actually I'm over my time. Any other questions? I'll take one more, sir. So, tap as a service actually plugs into OVS itself. Yeah. All right, thanks, guys. I appreciate the time. Thank you very much.