From Los Angeles, it's theCUBE, covering Open Source Summit North America 2017. Brought to you by the Linux Foundation and Red Hat. Okay, welcome back everyone. Live here in Los Angeles, this is theCUBE's special coverage of Open Source Summit North America. I'm John Furrier with Stu Miniman. Two days of wall-to-wall coverage. Our next guest is Ed Warnicke, who's a distinguished consulting engineer with Cisco. Welcome to theCUBE. Glad to be here. Thanks for coming on. Love to get into it. We love infrastructure as code. We love the cloud. Developers, the young generation loves it. Making things easy to use. All sounds great, but there's still work to get done. The networking. So what's going on here at the Open Source Summit? This is the big tent event where there's a lot of cross-pollination around projects. Obviously on the networking side, you guys at Cisco are doing your share. Give us the update. Networking still has a lot more work to be done. It's a very strategic part of the equation. Certainly making it easier up above makes it programmable. Yeah, I mean, you have to make the networking invisible when you get to the DevOps layer. There are certain things they need from the network. They need isolation and reachability. They need service discovery and service routing. But they don't want to have to think about it. They don't want to be burdened with understanding the nitty-gritty details. They don't want to know what subnet they're on. They don't want to have to worry about ACLs. They don't want to think about all of that. And the truth is there's a lot of work that goes into making the network invisible and ubiquitous for people. And in particular, one of the challenges that we see arising as the world moves more cloud-native, as the microservices get smaller, as the shift happens towards serverless, as Kubernetes is coming on with containers, is that the network is really becoming the runtime.
And that runtime has the need to scale and perform like it never has before. So the number of microservices you'd like to put on a server keeps going up. And that means you need to be able to actually handle that. The amount of traffic that people want to push through them continues to go up. So your performance has to keep up. And that brings a lot of distinct challenges, particularly when you're trying to achieve those ends with systems that were designed for a world where you had maybe two NICs on the box, where you weren't really thinking, when your original infrastructure was built, about the fact that you were actually going to do a hell of a lot of routing inside the server, because you now have currently hundreds, but hopefully someday thousands and tens of thousands, of microservices running there. Yeah, Ed, when we've been talking over the last 15, 20 years or so about needing to move faster with deployments, it always seemed that networking was the thing that held everything up. It's like, okay, wait, when I virtualized, everything's great and I can just spin up a VM, but I need to wait for the network to be provisioned. What are the things you've been working on? What open source projects? There are a lot of them out there that are helping with that overall agility of work today. Absolutely. So one of the things I'm deeply involved in right now is a project called fd.io, usually pronounced FIDO because it's cute and it means we can give away puppies at conferences. It's great. And what fd.io is doing is we have this core technology called VPP that gives you incredibly performant, incredibly scalable networking purely in user space, which means from a developer velocity point of view, we can have new features every three months.
From an extensibility point of view, you can bring new network features as separate plugins you drop as .so's into a plugin directory, instead of having to wait for the kernel to rev on your server. And the revving process is also substantially less invasive. So if you need to take something doing networking as a user-space thing and rev it, it's a restart of a process. You're talking microseconds, not 15-minute reboot cycles. You're talking levels of disruption where you don't lose your TCP state, where you don't lose any of those things, and that's really crucial to having the kind of agility that you want of the network. And when I talk about performance and scalability, I'm not kidding. One of the things we recently clocked with VPP was being able to route a terabit per second of traffic, with millions of routes in the forwarding tables, on commodity servers with no hardware assistance at all. And the workloads are starting to grow in that direction. It's going to take them a while to catch up, but to your point about the network being the long pole, we want to be far ahead of that curve, so it's not the long pole anymore, so you can achieve the agility that you need in DevOps and move innovative products forward. And one of the things that comes up all the time, I want to get your reaction to this because you're at an important part of it, is developers say, look, I love DevOps. And even ops guys are saying, hey, we want to promote DevOps. So there's a mind meld there, if you will. But then what they don't want is a black box. They want to see debugging. They want to have ease of manageability. I don't mind pushing dev down from the ops side, but they need a path of visibility. They need to have access to debug fast, to get access to some of those things. What do you see as the gates, if you will, that we've got to get through to make that seamless and clean right now?
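The drop-in .so model described here, loading new features into a running user-space process instead of waiting on a kernel rev, is ordinary dynamic loading. A toy Python sketch of the mechanism, using the standard C math library as a stand-in for a network plugin (the real VPP plugin API is not shown here):

```python
import ctypes
import ctypes.util

# Locate and load a shared object at runtime -- the same basic mechanism
# a user-space data plane uses to pick up feature plugins dropped as
# .so files into its plugin directory. libm stands in for a plugin here.
libm_path = ctypes.util.find_library("m")
libm = ctypes.CDLL(libm_path)

# Declare the symbol's signature, then call into the freshly loaded code.
libm.cos.restype = ctypes.c_double
libm.cos.argtypes = [ctypes.c_double]

print(libm.cos(0.0))  # -> 1.0
```

Because the load happens inside one process, swapping a plugin means restarting that process, not rebooting the machine, which is the agility point being made.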
I mean, obviously Kubernetes, a lot of stuff going on with orchestration, and containers are providing a path, but still the complaint and nervousness is, okay, you can touch and program the infrastructure, but if something happens, you've got to be reactive. That gets exactly to the point, because the more invisible the network is, the more visibility you need when things go wrong, and for general operational use. And one of the cool things that's happening in fd.io around that is, number one, it's industrial scale. So you have all the sorts of counters and telemetry information that you historically need in networks to be able to figure out what's going on. But beyond that, there's a whole lot of innovation that's been happening in the network space that has yet to trickle down all the way to the server edge. A really classic example on the visibility front has to do with in-band OAM. So we now have the technology, and this is present today in VPP, to be able to say, hey, I would like an in-band trace on this flow through the network, for this customer who's giving me a complaint, where I can see, hop by hop through the network, including at the edge where VPP is: what's the latency between hops? What path did it actually pass through? And there's even a feature where you can say, at each hop, please send the capture at that hop to a third-party point where I can collect it, so I can look at it in something like Wireshark. So you can look at Wireshark and say, okay, I see where this went into that node and came out of that node this way, node by node by node. I don't know how much more visibility than that is actually physically possible. And that's the kind of thing that the velocity of features you have in VPP makes very possible. That's the kind of thing that would take a long time to work into the traditional development line for networking. What's the Cisco internal vibe right now?
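The hop-by-hop view described above can be pictured as a list of (node, timestamp) records carried in-band with the packet. A toy Python illustration of extracting per-hop latency from such a trace (the record format and node names are invented for illustration; real in-band OAM uses a compact binary encoding):

```python
# Each in-band trace record: (node_id, timestamp_in_microseconds).
# Invented format for illustration only -- not the actual iOAM wire encoding.
trace = [
    ("ingress-vpp", 1000),
    ("spine-1", 1240),
    ("leaf-7", 1310),
    ("egress-vpp", 1395),
]

def per_hop_latency(trace):
    """Return (from_node, to_node, latency_us) for each hop in the trace."""
    return [(a[0], b[0], b[1] - a[1]) for a, b in zip(trace, trace[1:])]

for src, dst, us in per_hop_latency(trace):
    print(f"{src} -> {dst}: {us} us")
```

This is the operational payoff: you see the actual path taken and where the latency accumulated, rather than guessing from topology.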
Because we covered the DevNet Create event that Susie Wee put on, which is kind of like a cloud-native, cool event, kind of grassroots, kind of guerrilla. I love the mojo there. But then you've got the DevNet community, the Cisco developer community, which is a robust, killer developer community on the Cisco side. How are those worlds coming together? I can imagine that the appetite of the Cisco DevNet teams, or the DevNet developer community, is to look at cloud native as an opportunity. Can you share some insight into what's the sentiment, what's the community vibe, what's going on for the folks that have got to run the networks? I mean, look, this is serious stuff. In the past, it's been like, cloud native, when you're ready, we'll get there, but now there seems to be an onboarding of cloud native. Talk about the dynamic. There has to be, because cloud native won't wait, right? And there's a lot of things that the network can do to help you as the runtime. The in-band OAM example is one, but there are a ton more. Again, cloud native won't wait. They will find a way. And so you have to be able to bring those features at the pace at which cloud native proceeds. You can't do it on six-month product cycles. You can't do it on 12-month product cycles. You have to be able to respond point by point as things move forward. A good example of this is a lot of the stuff that's happening with service meshes and Istio, right? Which is coming really fast. Not quite here, but coming really fast. And for that, the real question is, what can the network do for DevOps? Right? Because there's a synergistic relationship between DevOps and NetOps. So you're saying, I mean, let me try to get at the point. Are you seeing the DevNet community saying, hey, we love this stuff? Because, I mean, they're smart. They know how to adapt. Moving from networks to DevOps. To me, it seems like they're connecting the dots. Can you share some color? Are they? Yes, no, maybe?
They're absolutely connecting the dots, but there's a whole pipeline with all of this, right? And DevNet is at the short pointy end, where it touches the DevOps people. But to get there, there are a lot of things that have to do with identifying the real needs, getting the code written to actually do it, figuring out the proper innovations, engaging with open source communities like Kubernetes so that they're utilized. And by the time you get to DevNet, now we're at the point where you can explain them to DevOps people so they can use them really cleanly. I mean, one of the other things is you want it to come through transparently. Because people want to be able to pick their Kubernetes Helm charts off the web, take the collection of containers for the parts of their application they don't want to have to think about, at least right now, and have it work. So you have to make sure you're supporting all the stuff that's there, and you have to work to be able to take advantage of those new features in the existing APIs, or better yet, just have the results of those APIs get better without anyone having to think about new features. So it's harmonious, not contentious, no friction. No, no, no, no. It's pretty much synergistic. Network guys get the DevOps equation. No, we get the DevOps equation, we get the need. There is a learning process for both sides. I mean, we deeply need each other. Applications without networking are completely uninteresting, right? And this is even more true in microservices, where the network is becoming the runtime. On the same side, networks without applications are completely uninteresting, because there's no one to talk to, right? And what's fascinating to me is how many of the same problems get described in different language, and so we'll talk past each other. So DevOps people will talk about service discovery and service routing.
And what they're really saying is, I want a thing, and I don't want to have to think about how to get to it. On the network side, for 15 years now, we've been talking about identifier/locator separation. Basically, having an IP address for the thing you want, and having the ability to transparently map that to the location where that thing is, without having to, well, it's the classic renumbering-your-network problem. At a very fundamental level, they're the same problem, but it's a different language. The game is still the same. There are some language nuances, but I think I see some synergies. And I see people getting it. It's like learning two languages, okay, the worlds come together. It's not a collision. But the interesting thing is, networking has always been an enabling opportunity. I mean, this is a fundamental nuance: if you can get this right, it's invisible, as you said. That's the end game. Absolutely. I mean, that's really what you're looking for: you want invisibility in the normal mode, and you want total transparency when something has to be debugged, right? Because the classic thing with networks is, when there's a network problem, it's almost never the network. It's almost always some little niggle of configuration that went wrong along the way. And so you need that transparency to be able to figure out, okay, what's the point where things broke? Or what's the point where things are running suboptimally, right? Or am I getting the level of service that I need? Am I getting the latency I need, and so forth? And there's been a tendency in the past to shorthand many of those things with networking concepts that are completely meaningless to the underlying problem. Like, people will look at subnets and say, well, if we're on the same subnet, we should have low latency. Bullshit, right? I mean, basically, if you're on the same subnet, the guy could be on the other end of the WAN in the modern era with L2 overlays.
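The identifier/locator split described above can be sketched in a few lines: clients hold a stable identifier, and a mapping layer resolves it to whatever locator the thing currently has. The service name and addresses below are invented for illustration:

```python
# Stable identifiers never change; locators can be remapped freely.
# This is the essence of identifier/locator separation on the network
# side, and of service discovery on the DevOps side -- the same problem
# in two vocabularies.
locator_map = {"orders-service": "10.0.4.17"}  # hypothetical service

def resolve(identifier):
    """Clients only ever use the identifier; the map supplies the locator."""
    return locator_map[identifier]

assert resolve("orders-service") == "10.0.4.17"

# The service moves (pod rescheduled, network renumbered) -- only the
# mapping changes; every client keeps using the same identifier.
locator_map["orders-service"] = "10.2.9.3"
assert resolve("orders-service") == "10.2.9.3"
```

The renumbering problem disappears for clients because nothing they hold on to encodes location.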
So if you want latency, you should be able to ask for a particular latency guarantee. Yeah, it felt to me that it took the networking community a while to fix things when it came to virtualization. But the punchline is, when it comes to containers and what's happening in Kubernetes, it feels like the networking community is rallying a lot faster and getting ahead of it. So what's different this time? You've got kind of that historical view on it. Are we doing better as an industry now? And why is it? So, a couple of things. The Kubernetes guys have done a really nice job of laying out their networking APIs. They didn't get bogged down in the internal guts of the network that no DevOps guy ever wants to have to see. They got really to the heart of the matter. So if you look at the guarantees that you have in Kubernetes, what is it? Every pod can talk to every other pod at L3, right? So L2 isn't even in the picture, which is beautiful, because in the cloud, you need to worry about L2 like you need a hole in the head, right? Then, if you want isolation, you specify a network policy. And you don't talk about IP addresses when you do that. You talk about selectors on labels for pods. It's a beautiful way to go about it, because you're talking about things you actually care about. And then with services, you're really talking about, how do I discover the service I want? So I never have to figure out a pod IP. The system does it for me. And there are gaps, in terms of there being things that people are going to need to do that are not completely specified in those APIs yet. But the things they've covered have been covered so well, and they're being defended so thoroughly, that it's actually making it easier, because we can't come in and introduce concepts that harm DevOps. We're forced to work in a paradigm that serves it. Okay, great. So this will be easy. So we'll be ready to tackle serverless.
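The label-selector policy model being praised here looks roughly like this in practice. The names and labels are hypothetical, but the shape follows the standard Kubernetes NetworkPolicy API:

```yaml
# Allow only pods labeled app=frontend to reach pods labeled app=api.
# No subnets, no IP addresses -- just selectors on the labels you
# already care about.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: api-allow-frontend   # hypothetical policy name
spec:
  podSelector:
    matchLabels:
      app: api               # the pods being protected
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend  # the only pods allowed in
```

Note that nothing in the policy would change if every pod in the cluster were rescheduled and re-addressed, which is exactly the point.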
What's that going to mean for the network? Serverless gets to be even more interesting, because the level of agility that you want in your network goes up. Because you can imagine something in serverless where you don't even want to start a pod until someone has made a request. So there's an L7 piece that has to be dealt with, but then you have to worry about the efficiency of, how do you actually move that TCP session to the actual instance that's come up for serverless for that thing, and how do you move it to the next thing? Because you're working at an L7 where, from the client's point of view, they think it's all the same server, but it's actually been balkanized across all these microservices. And so you have to find an efficient way of making that transparent, one that minimizes the degree to which you have to hairpin through things all over the cluster, because that just introduces more latency, less throughput, more load on the cluster. You've got to be able to avoid that. And so by being able to bring sophisticated features quickly to the data plane with something like fd.io and VPP, you can actually start peeling those problems off progressively as serverless matures, because the truth of the matter is, no one really knows what those things are going to look like. We all like to believe we do, but you're going to find new problems as you go. It's the unknown unknowns that require the velocity. All right, so it sounds like you're excited about serverless, though. Hugely, yes, definitely. I love serverless too, and I was just talking about it. So what's your opinion on the confusion? There are some people who are like, oh, it's bullshit. I don't think it is, personally. I think it's nirvana. I think it's what people want, right? What most developers want. I mean, there's always a server behind it. It's not serverless per se. It's just that, from a developer standpoint, you don't have to provision hardware or containers or VMs or any of that.
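The scale-from-zero behavior described above — don't even start the instance until a request arrives — can be sketched as a lazy dispatcher. This is purely illustrative (real serverless platforms do this in an L7 routing layer, not in application code), and the function names are invented:

```python
# Toy scale-from-zero: instances are created on first request, not in advance.
instances = {}    # function name -> running "instance" (here, just a closure)
cold_starts = []  # record which functions had to be cold-started

def start_instance(name):
    """Simulate spinning up an instance for a function on demand."""
    cold_starts.append(name)
    return lambda req: f"{name} handled {req}"

def dispatch(name, req):
    """Route a request, starting an instance only if none is running."""
    if name not in instances:           # cold-start path
        instances[name] = start_instance(name)
    return instances[name](req)         # warm path reuses the instance

print(dispatch("thumbnail", "img1"))    # cold start on first request
print(dispatch("thumbnail", "img2"))    # warm: no new instance created
```

The hard part the interview alludes to is invisible here: in a real system the already-established client connection has to be steered to the instance that just appeared, without hairpinning across the cluster.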
I personally think it's a good thing, and it's just a naming convention. I mean, for the people watching, what's the nuance? Why are people confused? I think it's much more fundamental than just the naming convention, right? Because historically, if you look at the virtualization of workloads, every movement we've had to date has been about some runtime technology. VMs were about virtual machines. Containers are about the container runtime technology. When you get to microservices and serverless, we've made the leap from talking about the underlying technology that most developers don't care about to talking about the philosophy that they do, right? Their runtime is their app. Exactly. Their runtime assembly is their code. Not necessarily the network. Just as in serverless, I don't think anyone doubts that the first run of serverless is going to be built on containers. But the philosophy is completely divorced from that for them. So I'll give you an example. One of the things that we have in VPP is an ultra-high-performance, ultra-high-scalability, user-space TCP stack. I mean, we're talking the kind of thing that can trivially handle 10 million simultaneous connections, with 200,000 new connections coming in every second, right? And right now, you can scope that to the isolation scope of a container. But there's no reason, with the technology we have, you can't scope it all the way down to a process, so you control the network access at the level of a process. So there's a lot of headroom to go even smaller than containers, even lighter weight than containers. And the serverless philosophy changes not a whit as that improvement comes in. That's beautiful. Ed, thanks so much for coming on theCUBE. Really appreciate your perspective. I'd like you to get one final word in to end the segment.
Describe what's happening here, because the OS, I mean, the Open Source Summit North America, it's the first of its kind. It's a big tent event. What's your take on it? What's the purpose of the event? What's your experience? Share it with the folks who aren't here. What's this event all about? So it's really exciting, because as much as we love the Linux Foundation, and as much as we've all enjoyed things like LinuxCon in the past, the truth is that for years it's been bleeding beyond just Linux, right? I don't see the Open Source Summit so much as a shift in focus as a recognition of what's developed, right? Last year we had the Open Source Summit here, we just called it LinuxCon. The year before we had the Open Source Summit here, we just called it LinuxCon. And so what's really happening is we're recognizing what is. There's actually no new creation happening here. It's the recognition of what's evolved. And that is open source as a tier-one reality. Absolutely. That goes way beyond Linux, which is, by the way, super valuable, the kernel. Oh no, we all love Linux. All the apps are essentially Linux apps. But it's a bigger thing. This is the growth and scale that's coming. Yes. It's unprecedented. I think a lot of people are still pinching themselves. Stu and I were commenting that what's coming is going to change the face of software development for generations to come. I mean, just the exponential scale of software libraries coming on board. Yeah, up to 400 million was forecasted by 2026. That sounds conservative to me. Well, I mean, just look at the scale. So there are going to be some leadership opportunities for the community, in my opinion. Absolutely. And this is where the Open Source Summit, I mean, words matter, because they shape the way we think about things. So where I think the shift to the Open Source Summit has huge value is that it starts to shift the thinking into this broader space.
It's not just a recognition of what's happened. It's a new world of software here for the community. This is not a marketing event. It's a recognition of what's actually happening. I love that quote. Open Source Summit, brilliant move by the Linux Foundation. I think it's created a big tent event for cross-pollination, sharing of ideas. This is the ethos of open source. Ed, thanks so much. Such a pleasure. Coming to you on theCUBE, live coverage from the Open Source Summit North America, formerly LinuxCon and all the other great events, here in Los Angeles. I'm John Furrier. Stu Miniman. More live coverage after this short break.