I'm John Furrier with theCUBE. We are here in Palo Alto to showcase a brand new relationship and technology partnership, a technology showcase. We're here with Niel Viljoen, who's the CEO of Netronome. Did I get that right? Almost, look at you say it. And Nick McKeown, who's the chief scientist and chairman and co-founder of Barefoot Networks. Guys, welcome to the conversation.

Obviously a lot going on in the industry. We're seeing massive change in the industry. Certainly digital transformation is the buzzword the analysts all use, but really what that means is the entire end-to-end digital space, with networks all the way through the applications, is completely transforming. Network transformation is not just moving packets around. It's wireless, it's content, it's everything in between that makes it all work. So let's talk about that, and let's talk about what your companies are. Niel, talk about your company, what you guys do at Netronome, and Nick, same for you for Barefoot. Start with you guys.

So at Netronome, our core focus lies around smart NICs, and what we mean by that is these are elements that go into the network servers, which in the sort of cloud and NFV world get used for a lot of network services. And that's our area of focus.

Barefoot is trying to take switches that were previously fixed function and turn them into something that those who own and operate networks can program for themselves, to customize them or add new features or protocols that they need to support.

And Barefoot, you're walking in the park, you don't want to step on any glass and get a cut. I like that, love the name of the company. But it brings out the real issue of getting this IO world to NICs, which goes back to the old-school mindset of just network cards in servers. But if you take that out on the internet now, that is the IO challenge, in real time. It's certainly a big part of the edge, whether that's a human or a device.
IoT to mobile, and then moving it across the network. And by the way, there's multiple networks. So is this kind of where you guys are showcasing your capabilities?

So fundamentally, you need both sides of the line, if I could put it that way. So we're on the server side, and specifically also giving visibility between virtual machines, virtual machine to virtual machine, also called VNF to VNF, in a service chaining mechanism, which is what a lot of the NFV customers are deploying today.

And really, as the entire infrastructure upon which these services are delivered moves more into software, more of it is created by those who own and operate these services for themselves. They either create it, commission it, buy it, or download it, and then modify it to best meet their needs. That's true whether it's in the network interface portion or whether it's in the switch. We've seen it happen in the control plane, and now it's moving down so that they can define all the way down to how packets are processed in the NIC and in the switches. And when they do that, they can then add in the ability to see what's going on in ways that they've never been able to do before. So we really think of those as providing that programmability and that flexibility all the way down to the way that the packets are processed.

And what's the impact? Nick, talk about the impact and take us through an example. You guys are showcasing your capabilities to the world. So what's the impact? Give us an example of what the benefit would be. What goes on there? Because instrumentation, certainly everyone wants to instrument everything. But what's the practical benefit? Who wins from this, and what's the real impact?

Well, you know, in days gone by, if you're a service provider providing services to your customers, then you would typically do this out of vertically integrated pieces of equipment that you get from equipment vendors. It's closed, it's proprietary.
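The VNF-to-VNF service chaining mentioned above can be pictured as packets passing through a pipeline of software functions on the server. A minimal sketch in plain Python, with invented VNF names (not any vendor's actual API), just to make the idea concrete:

```python
# Toy service chain: each VNF is a function that takes a packet (a dict here)
# and returns the transformed packet, or None to drop it.

def firewall(pkt):
    # Drop packets flagged as blocked, pass everything else through.
    return None if pkt.get("blocked") else pkt

def nat(pkt):
    # Rewrite the source address, as a NAT function would.
    pkt["src_ip"] = "192.0.2.1"
    return pkt

chain = [firewall, nat]

def run_chain(pkt, chain):
    """Push one packet through the whole chain of VNFs."""
    for vnf in chain:
        pkt = vnf(pkt)
        if pkt is None:  # a VNF dropped the packet
            return None
    return pkt

print(run_chain({"src_ip": "10.0.0.5"}, chain))  # {'src_ip': '192.0.2.1'}
```

The visibility problem in the conversation is exactly about what happens between these stages: once the chain is software, the operator needs a way to see packets at each hand-off.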
They have their own sort of NetFlow, sFlow, whatever the mechanism is that they have for measuring what's going on. And you had to learn and live with the constraints of what they had. As this all gets disaggregated and broken apart, and the owner of the infrastructure gets to define the behavior in software, they can now chain together the modules and the pieces that they need in order to deliver the service. That's great, but now they've lost that proprietary measurement. So now they need to introduce measurement so that they can get greater visibility.

This has actually created a tremendous opportunity, and this is what we're demonstrating: if you can come up with a uniform way of doing this, you can see, for example, the path that every packet takes, the delay that it encounters along the way, the rules that it encounters that determine the path that it takes. If it encounters congestion, who else contributed to that congestion, so we know who to go blame? By giving them that flexibility, they can go and debug systems much more quickly, and change them and modify them.

It's interesting. It's almost like the aspirin, right? The headache now is, I have good proprietary technology for point measurement and solutions, but yet I need to manage multiple components.

To add on to what Nick said, the whole key point here is the programmability, because there's data and then there's information. Gathering lots and lots of telemetry data is easy. The problem is you need to have it at all points, which is Nick's key point, but the programmability allows the DevOps person, in other words the operational people within the cloud or carrier infrastructure, to actually write code that identifies and isolates the information, rather than the data, that they need.

So who is the customer base for you guys, the carriers, the service providers? Who's your target audience?
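The "information rather than data" point above can be sketched as a small operator-written filter over per-packet telemetry records. This is a hypothetical record format (not an actual Netronome or Barefoot API), assuming each record carries per-hop queue delays and depths:

```python
# Sketch: the operator programs what "interesting" means, so only the
# records that signal trouble survive, rather than shipping all raw data.

def interesting(record, latency_budget_us=500):
    """Keep a per-packet telemetry record only if it signals trouble."""
    total_delay = sum(hop["queue_delay_us"] for hop in record["hops"])
    nearly_full = any(
        hop["queue_depth"] > 0.9 * hop["queue_capacity"] for hop in record["hops"]
    )
    return total_delay > latency_budget_us or nearly_full

records = [
    {"flow": "vnf-a->vnf-b", "hops": [
        {"switch": "tor-1", "queue_delay_us": 20, "queue_depth": 10, "queue_capacity": 1024},
        {"switch": "nic-3", "queue_delay_us": 900, "queue_depth": 1000, "queue_capacity": 1024},
    ]},
    {"flow": "vnf-c->vnf-d", "hops": [
        {"switch": "tor-1", "queue_delay_us": 15, "queue_depth": 8, "queue_capacity": 1024},
    ]},
]

flagged = [r["flow"] for r in records if interesting(r)]
print(flagged)  # only the congested flow survives the filter
```

The design point is that the filter itself is code the operator writes and can change, rather than a fixed vendor knob.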
Yeah, I think it's service providers who are applying the NFV technologies, in other words the cloud-like technologies. I always say the real big story here is the cloud technologies rather than just the cloud. And how that's done.

And same for you guys? This is a joint, same target customer?

I don't think there's any disagreement.

Well, I want to drill into the whole aspirin analogy, because there's other things that you brought up with the programmability. NFV has been that saving grace. It's been the holy grail for how many years now. And you're starting to see the tide shifting now, where NFV is not a silver bullet, so to speak, but it is actually accelerating some of the change. And I always like to ask people, hey, are you an aspirin or are you a vitamin? One guest told me, I'm a steroid, we make things grow faster. I'm like, okay. But in a way, the aspirin solves a problem, like an immediate headache. So it sounds like a lot of the things that you mentioned are an immediate benefit right there on the instrumentation, in an open way, multi-component, multi-vendor kinds of benefits, proprietary but open. But the point about programmability gives a lot of headroom around that vitamin, that steroid piece, where it's going to allow for automation. Which brings up an interesting thing: that's customizable automation, meaning you can apply software policy to it. Can you tease that out? Is that an area that you guys are talking about?

The first thing that we should mention is probably the new language called P4. I think Nick will be too modest to state it, but Nick has been a key player, along with his team and many other people, in the definition and the creation of this language, which allows the programmability of all these elements.

Yeah, drill down. I mean, toot your own horn here. Let's get into it. What is it, and what's the benefit? What is the real value? What's the upshot of P4?
Yeah, the way that hardware that processes packets, whether it's in network interface cards or in switches, has been defined in the past has been by chip designers. At the time that they define the behavior, they're writing Verilog or VHDL. And as we know, people that design chips don't operate big networks, so they don't really know what capability to put in.

They're good at logic in a vacuum, but not necessarily in the real world, right?

Yeah. Not to insult chip designers, they're great, right? So what we've all wanted to do for some time is to come up with a uniform language, a domain-specific language, that allows you to define how packets will be processed in interfaces, in switches, in hypervisor switches inside virtual machine environments, in a uniform way, so that someone who's proficient in that language can describe a behavior that can then operate in different parts of the service chain. They get the same behavior, a uniform behavior, so that they can see the network-wide, the service-wide behavior in a uniform way. The P4 language is merely a way to describe that behavior. And then both Netronome and Barefoot each have our own compilers for compiling that down to the specific processing element that operates in the interfaces and in the switches.

So you're bridging the chip layer with some sort of abstraction layer to give people the ability to do policy programming. All the heavy lifting in the old network days was configuration management. I mean, it was hard stuff, and now you've got dynamic networks, which gets even harder. Is this kind of where the problem goes away? And this is where automation is?

Exactly, and the key point is the programmability versus configurability. In a configurable environment, you're always trying to pre-guess what your customer is going to try to look at.

Guessing's not good in the networking area. It's not good for five nines.
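The core abstraction P4 programs describe is a match-action table: match on header fields, then invoke an action that rewrites the packet. Here's a toy illustration in plain Python (not real P4, and the table and action names are invented); a real P4 compiler maps tables like this onto NIC or switch hardware:

```python
# Actions: small functions that modify packet metadata.

def set_egress(pkt, port):
    # Forward the packet out of the given port.
    pkt["egress_port"] = port
    return pkt

def drop(pkt):
    # Drop: no egress port assigned.
    pkt["egress_port"] = None
    return pkt

# Table: exact match on destination IP -> (action, action arguments).
# In real hardware, the control plane populates these entries at runtime.
ipv4_forward = {
    "10.0.0.1": (set_egress, {"port": 1}),
    "10.0.0.2": (set_egress, {"port": 2}),
}

def apply_table(pkt, table, default=(drop, {})):
    """Look up the packet's key in the table and run the matching action."""
    action, args = table.get(pkt["dst_ip"], default)
    return action(pkt, **args)

pkt = {"dst_ip": "10.0.0.2", "egress_port": None}
print(apply_table(pkt, ipv4_forward)["egress_port"])  # 2
```

The point of the conversation is that this behavior, which used to be frozen into the chip, becomes something the network owner writes and recompiles.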
In the new world that we're in, the customer actually wants to define exactly what the information is that they want to extract, which is your whole question around the rules and so forth.

So let me see if I can connect the dots here, kind of connect this forward. So in the showcase, you guys are going to show this programmability, this kind of efficiency at the layer of bringing instrumentation, and then using that information, and/or data depending on how it's sliced and diced, via the policy and programmability. But this becomes cloud-like, right? So when you start thinking about cloud, where service providers are under a lot of pressure to go cloud, because over-the-top right now is booming. You're seeing a huge content and application market that's super ripe for these kinds of services. They need the ability to have the infrastructure be like software. So infrastructure as code is the term we talk about in our DevOps world. But that has been more data center kind of language, with developers. Is it going the same trajectory in the service provider world? Because you have networks, I mean, they're bigger, higher scale. What are some of those DevOps dynamics in your world? Can you talk about that and share some color on that?

I mean, the way in which large service providers are starting to deliver those services is out of something that looks very much like a cloud platform. In fact, it could be exactly the same technology: the same servers, the same switches, the same operating systems, a lot of the same techniques. The problem they're trying to solve is slightly different. They're chaining together the means to process a sequence of operations, a little bit like how the cloud operators are moving towards microservices that get chained together. So there's a lot of similarities here, and the problems they face are very similar. But think about the hell that this potentially creates for them.
It means that we're giving them so much rope to hang themselves. Because everything has now got to be put together in a way that's coming from different sources, written and authored by different people with different intent, or from different places across the internet. And so being able to see and observe exactly how this is working is even more critical.

So I love the rope-to-hang-yourself analogy, because a lot of people will end up breaking stuff. Mark Zuckerberg's favorite quote was, move fast, break stuff. And then by the way, when they hit 100 million users, the slogan went from move fast, break stuff, to move fast, be reliable. So he got on the five-nines bandwagon pretty quick. But it's more than just the instrumentation. The key that you're talking about here is that they have to run those networks in really high-reliability environments. And so that begs the challenge of, okay, it's not just easy to throw a Docker container at something. I mean, that's what people are doing now. Like, hey, I'm just going to use microservices, that's the answer. They've still got stuff under the hood, underneath microservices: orchestration challenges. And this kind of looks and feels like the old configuration management problems, but moved up the stack. So is that a concern in your market as well?

So I think that's a very, very good point that you make, because the carriers, as you say, tend to be almost more dependent on absolute reliability and, very importantly, performance. In other words, they need to know that this is going to be a hundred gigs, because that's what they've signed up in the SLA with their customer for. It's not going to be almost a hundred gigs, because then they're going to end up paying a lot of penalties.

Yeah, they can't afford breakage. They're ops-dev, not dev-ops. Which comes first in their world.
Yeah, so the critical point here is that this is where the demo that we're doing shows the ability to capture all this information at line rate, at very high speeds, in the NICs and the switches.

So let's talk about this demo, the showcase that you guys are providing and demonstrating to the marketplace. What's the pitch? What's the essence of the insight of this demo? What's it proving?

So I think it's good to think about a scenario in which you would need this, and then this leads into what the demo would be. Very common in an environment like the VNF kind of environment: something goes wrong, and they're trying to figure out very quickly who's to blame, which part of the infrastructure was the problem. Could it be congestion? Could it be a misconfiguration? Everyone points the finger at the other guy. Two days later, what happened, really? The typical way that they do this is they'll bring the people that are responsible for the compute, the networking, and the storage quickly into one room and say, go figure it out. The people that are doing the compute, they'll be modifying and changing and customizing, running experiments, isolating the problem. So are the people that are doing storage. They can program their environment. In the past, the networking people had ping and traceroute. Those are the same tools that they had 20 years ago.

What we're doing is changing that by introducing the means where they can program and configure, run different experiments, run different probes, so that they can look and see the things that they need to see. And in the demo in particular, you'll be able to see the packets coming in through a switch, through a NIC, through a couple of VMs, back out through a switch, and then you can look at that packet afterwards and you can ask questions of the packet itself. Something you've never been able to do.

It's the ultimate debugger.

Basically, it's the ultimate debugger. Go to the packet and say.

A programmable debugger.
Which path did you take? How long did you wait at each NIC, at each VM, at each switch port as you went through? What are the rules that you followed that led you to be here? And if you encountered some congestion, whose fault was it? Who did you share that queue with, so we can go back and apportion the delay?

So you get a multi-dimensional view of path information coming in, not just the standard stovepiped tools, where everyone compares logs and there's all these holes in it, and people don't know what the hell happened. And through the programmability, you can isolate the piece of information you need. So the experimentation gets agile, is that what you're getting at? You can really get down and dirty in a duplicated environment, and run these really fast experiments, versus kind of in-theory or in-lab kind of work.

Exactly, which, as Nick said, is exactly what people on the server side and on the storage side have been able to do in the past.

Okay, so for people watching who are kind of getting into this, and people who aren't, just give me an order of magnitude of the impact and the consequences of not taking this approach, vis-a-vis today's available techniques.

If you wanted to try and figure out who it was that you were sharing a queue with inside an interface or inside a switch, you have no way to do that today, no means to do that. And so if you wanted to be able to say, it's that aggressor flow over there, that malfunctioning service over there, you've got no means to do it. As a consequence, the networking people always get the blame, because they can't show that it wasn't them. But if you can say, I can see in this queue there were four flows going through, or 4,000 flows, and one of them was really badly behaved, and it was that one over there, and I can tell you exactly why its packets were ending up here, then you can immediately go in and shut that one down.
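The "ask questions of the packet itself" idea is in-band network telemetry: each element the packet traverses stamps a small metadata record into the packet, so the answers travel with it and no extra measurement packets are generated. A simplified sketch, with illustrative field names rather than the actual INT header format:

```python
# Model one hop: the device appends its telemetry record to the packet's
# INT stack before forwarding it on.

def traverse(pkt, device, queue_delay_us, rule_id):
    pkt["int_stack"].append(
        {"device": device, "queue_delay_us": queue_delay_us, "rule": rule_id}
    )
    return pkt

# A packet crossing a switch, a NIC, a VM, and the switch again,
# as in the demo's switch -> NIC -> VMs -> switch path.
pkt = {"payload": b"...", "int_stack": []}
for hop in [("switch-1", 12, "acl-7"), ("nic-2", 3, "fwd-1"),
            ("vm-a", 40, "svc-chain"), ("switch-1", 250, "acl-7")]:
    traverse(pkt, *hop)

# Afterwards, interrogate the packet: which path, how long at each hop,
# and where was the worst wait?
path = [h["device"] for h in pkt["int_stack"]]
total_wait = sum(h["queue_delay_us"] for h in pkt["int_stack"])
worst = max(pkt["int_stack"], key=lambda h: h["queue_delay_us"])
print(path)             # the path the packet took
print(total_wait)       # 305
print(worst["device"])  # where it waited longest: switch-1
```

Attributing congestion to a specific aggressor flow, as described below, would extend each record with who else shared the queue at that instant.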
Today they have no way to do that; they go randomly shutting things down.

Can I get this for my family? I need this for my household, for my kids. I mean, I know exactly the bad behavior, I just need to prove it. No, but this is the point. This is fast. I mean, you're talking speed too, that's another aspect. What's the speed difference between taking the old, current approach versus this joint approach you guys are taking? Give me an estimate, just ballpark numbers.

Well, there's two aspects to the speed. One is the speed at which it's operating. In the demo it's running at 40 gigabits per second, but this can easily run much faster; for example, in the Barefoot switch it'll run at six terabits per second. The interesting thing here is that in this entire environment, this measurement capability does not generate a single extra packet. All of it is self-contained in the packets that are already flowing. So there's no latency issue in running this in production. If you want to then change the behavior, you need to go and modify what was happening in the NIC, modify what was happening in the switch. You can do that in minutes.

Now, the time it takes for a user to do this, let's go to that timescale. What does that look like? The current method is get everyone in the room, do these things. Are we talking...?

I think that today it's simply not possible. So it's a new capability.

So this is a new capability.

This is a new capability, and exactly as Nick said, it's getting the network to the same level of ability that you've always had inside the system.

I've got to ask you guys, as founders of your companies, because this is one of those things that's a great success story as entrepreneurs. It's not just a better mousetrap; it's revolutionary in the sense that no one's ever had the capability before.
So when you go to events like Mobile World Congress and you're out in the field, are you shaking people, like, you need me? I need to cut the line and tell you what's going on. I mean, you must have a sense of urgency. Is it resonating with the folks you're talking to? What are some of the conversations you're having with folks? They must be pretty excited about it. Can you share any anecdotal stories?

I mean, we're finding across the industry, not only the service providers but the data center companies, Wall Street, the OEM box vendors, everybody is saying, and has been saying for a long time: I need the ability to probe into the behavior of individual packets, and I need whoever is owning and operating the network to be able to customize and change that. They've never been able to do that. The name of the technique that we use is called in-band network telemetry, or INT, and everybody is asking for it now, whether it's with the two of us or whether they're asking for it more generally. You'll see this everywhere.

It's a game changer, that's right. Great, all right, awesome. Well, final question: what's the business benefit for them? Because I can imagine, once you get this nailed down, with the programmability, the ability to test new apps, because obviously we're in a wild-west environment with a tsunami of apps coming. There's always going to be some tripwires in new apps, certainly with microservices and APIs.

I think the general issue that we're addressing here is absolutely crucial to the successful rollout of NFV infrastructures. In other words, the ability to rapidly change, monitor, and adapt is critical. It goes wider than just this particular demo, but I think these technologies...

It's all apps, it's all apps on the service provider side. It's effectively the ability to handle all the VNFs. Well, in the old days it was simply network spikes, tons of traffic coming in. And now you have apps that could throw off anomalies anywhere, right?
You'd have no idea what the downstream triggers could be.

That's the whole notion of the programmable network, which is critical.

Well, guys, any information on where people can get more information on this awesome opportunity? Do you want to share quick web addresses and places where people can get white papers or information?

For the general P4 movement, there's P4.org, P, the number four, dot org, nice and easy. You'll find lots of information about the programmability that's possible by programming the forwarding plane, and what both of us are doing. On in-band network telemetry, you'll find descriptions of the P4 programs and white papers describing that. And of course, then the two company websites, Netronome and Barefoot.

Right. Nick and Niel, thanks for spending some time sharing the insights, and congratulations. We'll keep an eye out for it, and we'll be talking to you soon. Thanks for coming on.

Thank you very much.

This is theCUBE here in Palo Alto. I'm John Furrier, thanks for watching.