Live from Boston, Massachusetts, it's theCUBE, covering OpenStack Summit 2017. Brought to you by the OpenStack Foundation, Red Hat, and additional ecosystem support.

And we're back. I'm Stu Miniman with my co-host, John Troyer. Getting to the end of day two of three days of coverage here at the OpenStack Summit in Boston. Happy to welcome to the program Sujal Das, who is the Chief Marketing and Strategy Officer at Netronome. Thanks so much for joining us.

All right, so we're getting through it. You know, really, John and I have been digging into where OpenStack is, talking to real people deploying real clouds, where it fits into the multi-cloud world. You know, networking is one of those things that took a little while to shake out; it feels like every year we talk about Neutron and all the pieces that are there. But talk to us, Netronome. I know you guys make SmartNICs. You've got, obviously, some hardware involved when you hear NIC, and you've got software. What's your involvement in OpenStack, and what sort of things are you doing here at the show?

Absolutely, thanks, Stu. So we do SmartNIC platforms. This includes both hardware and software that can be used in commercial off-the-shelf servers. With respect to OpenStack, the whole idea of SDN with OpenStack is centered around the data plane that runs on the server, things such as Open vSwitch or the virtual router, and there are new data planes evolving in the market. We offload and accelerate that data plane in our SmartNICs, and because the SmartNICs are programmable, we can evolve the feature set very quickly. In fact, we have software releases that come out every six months that keep pace with OpenStack and Open vSwitch releases. So that's what we do in terms of providing higher-performance OpenStack environments.

So I spent a good part of my career working on that part of the stack, if you will.
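The data-plane offload Sujal describes is often explained as a slow-path/fast-path split: the first packet of a flow is classified in software, and the resulting decision is cached so later packets in the same flow skip the expensive path. A purely illustrative Python sketch of that idea follows — on a real SmartNIC the cache is a flow table in NIC hardware, not a dictionary, and all names here are invented:

```python
# Illustrative sketch (not Netronome's implementation) of the
# slow-path/fast-path split behind data-plane offload.

slow_path_hits = 0
fast_path_cache = {}            # (src, dst) -> cached forwarding decision

def classify_in_software(src: str, dst: str) -> str:
    """Slow path: full classification, burning host CPU cycles."""
    global slow_path_hits
    slow_path_hits += 1
    return f"forward:{dst}"

def handle_packet(src: str, dst: str) -> str:
    key = (src, dst)
    if key not in fast_path_cache:          # flow miss -> slow path, once
        fast_path_cache[key] = classify_in_software(src, dst)
    return fast_path_cache[key]             # flow hit -> cheap cached lookup

# 1,000 packets of the same flow trigger only one slow-path classification.
for _ in range(1000):
    handle_packet("vm1", "vm2")
```

The point of the offload is exactly this ratio: the host CPU sees one classification while the hardware fast path absorbs the other 999 packets.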
And the balance was always, right, what do you build into the hardware? Do I have accelerators? Or is it the software that does it? Usually in the short term hardware can take care of it, but in the long term, if you follow the development cycle, software tends to win. So where are we with where that functionality sits? What differentiates what you offer compared to others in the market?

Absolutely. So we see a significant trend in terms of the role of a co-processor to the x86, or the evolving ARM-based servers, right? And the workloads are shifting rapidly. With the need for higher performance and more efficiency in the server, you need co-processors. So we make, essentially, co-processors that accelerate networking, and that sits next to an x86 on a SmartNIC. The important differentiation we have is that we are able to pack a lot of cores on a very small form-factor hardware device, as many as 120 cores that are optimized for networking. And by being able to do that, we're able to deliver very high performance at the lowest cost and power.

Can you speak to the use case for that? We talk about scale and performance. Who are your primary customers with this? Is this kind of broad-spectrum, or are there certain industries or use cases that pop out?

Sure, so we have three core market segments that we go after, right? One is the NFV infrastructure market, where we see a lot of OpenStack use, for example. We also have the traditional cloud data center providers, who are looking at accelerating with SmartNICs as well. And lastly, the security market. That's kind of been our legacy market that we have grown up with. With security kind of moving away from appliances to more distributed security, those are our key three market segments that we go after.

The irony is that in this world of cloud, hardware still matters, right? Not only does hardware matter, you're packing a huge number of cores into a NIC, so that hardware matters.
But one of the reasons it matters now is because of the rise of this latest generation of solid-state storage, right? People are driving more and more IO. What are the trends that you're seeing in terms of storage IO, and IO in general, in the data center?

Absolutely. So I think the large data centers of the world showed the way in terms of how to do storage, especially with SSDs, with what they call disaggregated storage: essentially being able to use the storage on each server and aggregate it together into a pool of storage resources. And it's being called hyper-converged; I think companies like Nutanix have found a lot of success in that market. What I believe is going to happen in the next phase is hyper-convergence 2.0. Hyper-convergence 1.0 was about storage, which essentially addressed TCO, being able to do more with less. The next level would be hyper-convergence around security, where you'd have distributed security in all servers, and also telemetry. So basically you had storage appliances going away with hyper-convergence 1.0, but with the next generation of hyper-convergence we'd see the security appliances and the monitoring appliances sort of going away and becoming all integrated into the server infrastructure, to allow for better service levels and scalability.

What's the relationship between distributed security and the need for more bandwidth at the backplane?

Absolutely. So when you move security into the server, the processing requirements in the server go up. And typically, with all security processing, it's a lot of what is called flow processing, or match-action processing. And that is typically not suitable for a general-purpose server CPU like the ARM or the x86. That's where you need specialized co-processors, kind of like the world of GPUs doing well in artificial intelligence applications. I think it's the same example here.
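The "match-action processing" Sujal mentions can be sketched very simply: each packet's header fields are checked against an ordered table of match patterns, and the first hit decides the action. This is a toy Python illustration, with made-up addresses and actions; real implementations do this lookup in specialized hardware at line rate:

```python
# Toy match-action ("flow") table, purely for illustration.
from typing import NamedTuple, Optional

class Flow(NamedTuple):
    src_ip: Optional[str]    # None acts as a wildcard
    dst_port: Optional[int]

def matches(flow: Flow, src_ip: str, dst_port: int) -> bool:
    return flow.src_ip in (None, src_ip) and flow.dst_port in (None, dst_port)

# Ordered rule table: (match pattern, action). First match wins.
flow_table = [
    (Flow(src_ip="10.0.0.5", dst_port=None), "drop"),     # blacklist a host
    (Flow(src_ip=None, dst_port=80), "forward:web_vm"),   # steer web traffic
]

def process_packet(src_ip: str, dst_port: int) -> str:
    for flow, action in flow_table:
        if matches(flow, src_ip, dst_port):
            return action
    return "forward:default"    # table miss
```

A general-purpose CPU walks this table sequentially per packet; the co-processors he describes evaluate many such rules in parallel, which is why the work moves off the x86.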
When you have security, telemetry, et cetera being done in each server, you need special-purpose processing to do that at the lowest cost and power.

Sujal, you mentioned that you've got solutions in the public clouds. Are those, you know, the big hyper-scale guys? Is it service providers? I'm curious if you could give a little color there.

Yeah, so these are both tier-one and tier-two service providers in the cloud market, as well as the telco service providers, more on the NFV side. But we see a common theme here in terms of wanting to do security and things like telemetry. Telemetry is becoming a hot topic. There's something called in-band telemetry that we're actually demonstrating at our booth, and also speaking about with some of our partners at the show, such as Mirantis, Red Hat, and Juniper, where doing all of these on each server is becoming a requirement.

When I hear you talk, I think about, here at OpenStack, we're talking about the hybrid or multi-cloud world, and especially something like security and telemetry. I need to handle my data center, I need to handle the public cloud, and even when I start to get into those IoT edge environments, we know that the surface area for attack just gets orders of magnitude larger; therefore we need security that can span across those. Are you touching all of those pieces? Maybe give us a little bit of a dive into it.

Absolutely. I think a great example is DDoS, distributed denial-of-service attacks. Today you have these kinds of attacks happening from computers, right? Now look at the environment where you have IoT: you have tons and tons of small devices that can be hacked and could flood attacks into the data center. You look at the autonomous car, or self-driving car, phenomenon, where each car is equivalent to about 2,500 internet users, right?
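The distributed DDoS mitigation being described boils down to rate-based flood detection at each server: count packets per source over a sliding window and drop sources that exceed a threshold. A toy Python sketch follows — the window size and threshold are invented values, and real SmartNIC implementations do this in hardware at line rate rather than in host software:

```python
# Toy per-source rate limiter illustrating server-local DDoS detection.
from collections import defaultdict

WINDOW_SECONDS = 1.0
THRESHOLD = 3           # max packets per source per window (illustrative)

class RateLimiter:
    def __init__(self):
        self.counts = defaultdict(list)    # src_ip -> recent packet timestamps

    def allow(self, src_ip: str, now: float) -> bool:
        # Keep only timestamps inside the sliding window.
        recent = [t for t in self.counts[src_ip] if now - t < WINDOW_SECONDS]
        self.counts[src_ip] = recent
        if len(recent) >= THRESHOLD:
            return False                    # drop: source looks like a flooder
        self.counts[src_ip].append(now)
        return True
```

Doing this at the perimeter means one appliance sees all the flood traffic; doing it on every server, as Sujal suggests, distributes both the detection and the drop point.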
So the number of users is going to scale so rapidly, and the amount of attacks that could be proliferated from these kinds of devices is going to be so high, that people are looking at moving DDoS mitigation from the perimeter of the network to each server. And that's a great example that we're working on with a large service provider.

I'm kind of curious how these systems take advantage of your technology. I can see some of it being transparent: if you just want to jam more bits through the system, then that should be pretty transparent to the app, and maybe even to the data plane and the virtual switches. But I'm guessing there are probably also some API or other software-driven ways of interacting, to say, hey, not only do I want you to jam more bits through there, but I want to do some packet inspection, or some massaging, or some QoS; I'm not sure what all these SmartNICs do. So is my model correct? Are those the different ways of interacting with your technology?

You're hitting a great point; great question, by the way. Thank you. So the world has evolved from very custom or proprietary ways of doing things to more standard ways of doing things. And one thing that has kind of standardized the data plane that does all of these functions you mentioned, things like security or ACL rules or virtualization, is Open vSwitch; it's a great example of a data plane that has standardized how you do things. And there are a lot of new open source projects happening in the Linux Foundation, such as VPP, for example. So each of these standardizes the way you do it, and then it becomes easier for vendors like us to implement the standard data plane, and then work with the Linux kernel community on getting all of those things upstream, which we are working on. And then having the Red Hats of the world actually incorporate those into the distribution. That way the deployment model becomes much easier, right?
And one of the topics of discussion with Red Hat that we presented today was exactly that: how do you make this kind of scalability for security and telemetry more easily accessible to users, through the Red Hat distribution, for example?

So Sujal, can you give us a little bit of an overview of the sessions that Netronome has here at the show, and what are the challenges people are coming with that they're excited to discuss with your company?

Absolutely. So we presented one session with Mirantis. Mirantis, as you know, is a huge OpenStack player. With Mirantis we presented exactly the problem statement that I was talking about. When you try to do security with OpenStack, whether it's stateless or stateful, your performance kind of tanks when you apply a lot of security policies, for example, on a per-server basis, as you can do with OpenStack. When you use a SmartNIC, you essentially return a lot of the CPU cores to the revenue-generating applications, right? So essentially operators are able to make more money per server. That's the essence of the value. That was the topic with Mirantis, who actually use the OpenContrail virtual router data plane in their solution, right?

We also presented with Juniper, which was also based... Speaking of OpenContrail. Yeah, so Juniper has the commercial version of Contrail. So we presented a very similar story, but with the commercial product from Juniper. And then we have just today presented with Red Hat, and that is based on Red Hat's OpenStack and their Open vSwitch-based products, where, of course, we are upstreaming a lot of these code bits that I talked about. But the value proposition is uniform across all of these vendors: when you do storage, sorry, security and telemetry and virtualization, et cetera, in a distributed way across all of your servers, and get rid of your appliances, you get better scale.
But to achieve the efficiencies in the server, you need a SmartNIC such as ours.

I'm curious, is the technology usually applied at the per-server level? Or is there a rack-scale component that needs to be there too?

No, it's on a per-server basis. The use case is like any other traditional NIC that you would use. So it looks and feels like any other NIC, except that there are more processing cores in the hardware and there's more software involved. But again, all of this software gets tightly integrated into the OS vendor's operating system and then the OpenStack environment.

Gotcha. Well, I guess you can never be too rich, too thin, or have too much bandwidth. That's right. Yeah.

Sujal, share with our audience any interesting conversations you had, or other takeaways you want people to have from the OpenStack Summit.

Absolutely. So without naming specific customer names, we had one large data center service provider in Europe come in, and their big pain point was latency: latency going from the VM on one server to a VM on another server. That's a huge pain point, and the request was to be able to reduce that by at least 10x, right? And we're able to do that. So that's one use case we have seen.

The other, again, relates to telemetry. This is a telco service provider, right? As they go into 5G, they have to service many different applications with what they call network slices: one slice servicing the autonomous car applications, another slice managing video distribution, let's say something like Netflix video streaming, another one servicing the cell phone, something like a phone like this, where the data requirements are not as high as from a TV sitting in your home. So they need a different kind of SLA for each of these services.
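The in-band telemetry that keeps coming up here works roughly like this: each hop along a packet's path appends its own measurement (hop id, transit delay) to metadata carried in the packet itself, so the receiver gets a per-hop latency breakdown without separate probe traffic. A hedged Python sketch, with hop names and microsecond values invented for illustration:

```python
# Toy model of in-band telemetry: hops stack measurements onto the packet.

def add_telemetry(packet: dict, hop_id: str, latency_us: int) -> dict:
    """Append this hop's measurement to the packet's telemetry stack."""
    packet.setdefault("telemetry", []).append(
        {"hop": hop_id, "latency_us": latency_us}
    )
    return packet

packet = {"payload": b"video-chunk"}
for hop, delay_us in [("nic-a", 4), ("tor-switch", 12), ("nic-b", 5)]:
    add_telemetry(packet, hop, delay_us)

# The receiver can attribute total path latency to individual hops,
# which is what lets an operator pinpoint a misbehaving VM or link.
total_latency_us = sum(h["latency_us"] for h in packet["telemetry"])
```

That per-hop attribution is what makes it possible to tie an SLA violation on one network slice back to a specific server or switch.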
How do they slice and dice the network, and how are they able to actually assess the rogue VM, so to say, that might cause performance to go down and affect SLAs? Telemetry, or what is called in-band telemetry, is a huge requirement for those applications. So I'm giving you two examples: one is a data center operator doing infrastructure as a service who just wants lower latency, and the other one is interested in telemetry.

So Sujal, final question I have for you. Look forward a little bit for us; you've got your strategy hat on. Netronome, OpenStack in general, what would you expect to see as we look throughout the year? Maybe if we're sitting down with you in Vancouver a year from now, what do you hope that we as an industry, and you as a company, have accomplished?

Absolutely. I think you'd see a lot of these products, so to say, that enable seamless integration of SmartNICs become available on a broad basis. I think that's one thing I would see happening in the next one year. The other big event is the whole notion of hyper-convergence that I talked about. I would see the notion of hyper-convergence move from 1.0 to 2.0, from just a storage focus to security and telemetry, with OpenStack kind of addressing that from a cloud orchestration perspective. And also, with each of those requirements, software-defined networking, which is being able to evolve your networking data plane rapidly, these are all going to become mainstream.

Sujal, a pleasure catching up with you. John and I will be back to do the wrap-up for day two. Thanks so much for watching theCUBE.