All right, hello, hello, hello. Can you guys hear me okay in the back? Kind of feels like we're in a Zeppelin era, if you look at the roof. All right, guys, thanks so much for being here. I'm gonna try and keep this pretty light and somewhat casual. What I wanna do is talk about my experiences over the last 10 years creating an SDN company. This is a technologist's view of creating an SDN company and then bringing it to about a billion-dollar run rate. And it's mostly a story of what we got wrong, which was almost everything. I gave a similar talk as a keynote at the Open Networking Summit, but I actually changed a lot of it for this crowd because I wanted to focus on different things, in particular interactions with things like OpenStack and why that's treacherous for startups. So hopefully, if any of you are out there aspiring to go do your own company, these are things to be aware of, and maybe some tribulations that'll be helpful. For those of you that don't know, my name is Martin. I was at Stanford and did some of the original work that became SDN, along with a bunch of other people: Nick McKeown, Scott Shenker, and so forth. I graduated almost exactly 10 years ago. At the time I was gonna go be a professor at Cornell, but instead I was convinced to create a company called Nicira. And I think probably one month to the day it'll be a decade since we created it. So what I wanna do first is talk about what we were originally thinking when we created Nicira: what was the basic idea, and then how that evolved into what eventually became the final product, this large-run-rate product. And by the way, a lot of these initial slides are from our original slide decks, the ones we'd use to go pitch VCs.
So the original idea was: we thought that networking was fundamentally broken, and we identified two problems. A lot of this came from the research, which we basically repackaged and used as part of the pitch deck. If you distill it down, we really focused on two problems. The first: if you looked at networking at the time, in order to implement new functionality, it was often implemented in an ASIC, and by an ASIC I mean an actual hardware chip. Anytime you wanted a new type of functionality, for example security or mobility or whatever, they would take the Ethernet frame format and change it, and they'd say, oh, listen, you need to go buy a new box in order to do this. And because developing an ASIC takes, what, four years and $10 million, evolving the data path was really, really slow. So one problem we focused on was: it's really hard to evolve networking if every time you wanna do something innovative, you're spinning an ASIC. The second problem we looked at was: networks are very complex and getting very big, but the way we operate and manage them is box by box. That requires human beings to solve distributed state management, which, I mean, it's hard to get programs to solve this; getting humans to solve it is even more difficult. So we thought, okay, there's two problems: one, we want more rapid innovation, basically by taking functionality out of hardware; and two, we wanna provide high-level global abstractions so that you can manage the network from a high level. And again, this very high-level idea came out of the work that we did at Stanford. Okay, so how were we gonna do that? If you go back to the original slide decks, this is 2007, we had this really high-level idea and really no idea how to implement it. We're like, it's easy.
All we're gonna do is build this software layer that goes across the network. And instead of having the functions in hardware, we're gonna run them as applications on that software layer. If we do that, now the functionality is in software, and you've got global abstractions because of this software layer, so we solve the operational problem. That was the high-level view. If you distill it down, we're saying: here are the two problems I talked about, we're gonna build an SDN platform, and our first target is the virtual data center. The reason we decided to target virtual data centers is that there were new build-outs in virtual data centers at the time, and the operational problem was particularly complicated there: VMs were coming and going and moving around, and in order for the network to keep pace, people had to run around and update the state by hand. So we're like, okay, here's a problem domain that's acute, there are new build-outs, that's what we're gonna target. And like good academic computer scientists, we thought: to solve this problem, we're gonna solve it in full generality. So we created a system, starting in 2007, which was supposed to be a basic platform to solve all of the world's SDN problems. That system was called NOX. The idea was the following: you have a server, you run software on that server, that server manages all of these switches, and it exposes to programmers the API of a graph. If developers want to manipulate networking state, they operate on top of this graph. We actually built this system, and it was used in two production products. But we quickly found out there were a couple of problems with it.
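To make the controller-plus-graph idea concrete, here's a toy sketch in Python. The class and method names are hypothetical, purely for illustration; this is not the real NOX API, just the shape of the idea: a central controller holds a graph of the network, and "applications" are code that reacts to events by reading and manipulating that graph instead of configuring boxes one by one.

```python
# Toy sketch of the NOX idea (names are hypothetical, not the real NOX API):
# the controller exposes the network to applications as a graph, and apps
# manipulate networking state by operating on that graph.

class NetworkGraph:
    def __init__(self):
        self.nodes = {}     # node id -> attributes
        self.edges = set()  # links as frozenset pairs

    def add_node(self, node_id, **attrs):
        self.nodes[node_id] = attrs

    def add_link(self, a, b):
        self.edges.add(frozenset((a, b)))

    def neighbors(self, node_id):
        return [next(iter(e - {node_id})) for e in self.edges if node_id in e]

class Controller:
    """Single server that 'sucks the brains out of the switches'."""
    def __init__(self):
        self.graph = NetworkGraph()
        self.apps = []      # callbacks that react to topology events

    def register_app(self, app):
        self.apps.append(app)

    def switch_joined(self, switch_id):
        self.graph.add_node(switch_id, kind="switch")
        for app in self.apps:
            app(self.graph, switch_id)

# An "application" is just code over the graph, not per-box config:
events = []
ctl = Controller()
ctl.register_app(lambda g, sw: events.append((sw, sorted(g.nodes))))
ctl.switch_joined("s1")
ctl.switch_joined("s2")
```

The point of the sketch is the inversion: functionality lives in software on the server, and the switches are just state that the graph abstraction exposes.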
The first one: my background was actually as a computational physicist, so I was a distributed-systems guy who worked on big simulations. And it was very clear to me that we were tackling the problem as developers, when the reality is that writing software for networks is primarily about distributed state management. If I'm writing a platform like an operating system for compute, what's gonna run on top of it? Anything, right? We use compute to solve physics and entertainment and word processing and business; the platform of compute is as general as possible. But when it comes to writing networking software, it really is just about moving state around the network. And that's it, right? You always have the same list of things you wanna solve. I've been in networking for 20 years; it's always forwarding, security, performance management, visibility. There's a list of about five things you wanna do in a network, and it doesn't change. And all of them are managing state at scale. The second problem is we kind of assumed that if you sucked the brains out of all the switches and put them in a single server, you could reduce the complexity that comes with distribution. But to build scalable networks, you actually have to distribute compute. It's an n-squared problem: you've got n things talking to n other things, right? So in order to do that, you have to have some level of distribution. And we thought, well, if you're sucking the brains out of switches and pulling them into servers, maybe you can have two servers or three servers and limit the complexity. But the more we built it, the more we realized this is fundamentally a distributed problem. That is, if you're solving the problem for three servers, you might as well solve it for five or 10 or whatever.
So we decided to take another crack at this. And we said, okay, here's what we're gonna do. Having one platform that's more compute-focused doesn't solve this problem. Why don't we build a general platform that enables developers to build distributed applications for state management? The idea is we're gonna make it easier for developers to build distributed applications that manage all switch state. To do that, we went back to what worked before: the graph works. So we're gonna say, listen, if you wanna build some sort of new network, we'll give you a graph, but now this graph is distributed, and its state is stored in distributed data structures or distributed data stores. And we'll give you a bunch of tools you can use to operate on top of that state: here's distributed locking, here's leader election. Now your job is easier, because we've given you all these tools, and you can build a controller that solves all of networking's problems, whether it's data centers or WAN or whatever. The problem with that: the first thing we realized is there's no one networking problem. When it comes to distribution, you're always making trade-offs between things like state consistency and scalability and correctness, and we found that for every aspect of the network those trade-offs were very different. A data center is very different from the WAN, which is very different from mobile. So we couldn't really distill it down to: here's the networking problem, and here's a general component you can use for all of them. And what you end up with is just about as much complexity as you started off with. That is, if I create a platform for you, but I need to provide as much generality as you need to solve all of these problems, I'm not reducing complexity a lot.
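To illustrate the kind of primitives the platform handed to developers, here's a single-process toy stand-in for one of them, leader election. This is purely illustrative, with made-up names; a real implementation would coordinate over the network with a consensus protocol like Paxos or Raft, or lean on something like ZooKeeper, rather than electing the lowest id in a local set.

```python
# Toy leader election, standing in for the coordination tools the platform
# offered (distributed locking, leader election). Single-process sketch only;
# real systems do this with consensus over the network.

class Cluster:
    def __init__(self):
        self.members = set()

    def join(self, node_id):
        self.members.add(node_id)

    def leave(self, node_id):
        self.members.discard(node_id)

    def leader(self):
        # Simplest deterministic rule: lowest id wins the election.
        return min(self.members) if self.members else None

cluster = Cluster()
for n in ("ctrl-b", "ctrl-a", "ctrl-c"):
    cluster.join(n)
assert cluster.leader() == "ctrl-a"
cluster.leave("ctrl-a")          # leader fails; the election re-runs
assert cluster.leader() == "ctrl-b"
```

Even in this toy form you can see the trade-off mentioned above: the election rule is trivial, but the hard part, agreeing on `members` consistently across machines, is exactly the distributed-state problem the platform couldn't make disappear.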
I like to think about it as follows, and I think we do this as computer scientists a lot. It's like making your bed in the morning: you wake up and there's the bed and there's a bump in the covers. And instead of getting rid of the bump, you just move it around. You move it by the wall and it looks nice, you move it by the pillow and it looks nice, but you're not really getting rid of the bump. In this case, I think what we'd done is taken that complexity and just moved it to a different place, where we were still putting the onus on the developers to deal with distribution. But there was value in there, and I think the primary value was the following: networking people at the time, and less so now, were protocol people. Seriously, if you wanted to have a conversation about implementing something, they'd go down to bits and headers, whereas distributed-systems folks, and this is a decade ago, talked about abstractions and state management. So the one thing it did is, for those working on the platform, they started to view networking as a distributed-systems problem, not a low-level protocol-implementation problem. So there was definitely some value there, but I don't think we reduced complexity from a developer standpoint. All right, so we're working on this platform, trying to make the platform help everybody's problems, and then we figured, okay, we need to build a product that people use and put into production. Our thought was to build this thing we called the virtual network controller. I think the product had like seven different names over the life cycle of the company, but the idea was simple. It was: listen, if you go to a virtual data center, the VMs have a certain operational model, right?
They come up dynamically, they grow, they shrink, they move around, they disappear. They're often driven programmatically or through a UI. But networks aren't like that. Networks are manually configured, so there's a mismatch between "I wanna provision compute" and "I wanna configure the network." Again guys, this was like 2008. So we thought, here's what we're gonna do: we're gonna build a networking layer so that you can operate the network the same way you operate a VM, totally programmatic and totally flexible. We weren't exactly sure how we were gonna do that, but that's what we were gonna throw the SDN problem at. Along the way, and this was probably in 2008, we had a realization. I remember starting to look at clouds at the time, and I walked into one of the main cloud providers, and they were running 40 VMs per server. That blew my mind. Because where I did my PhD, there was an entire floor, and there were probably 40 servers on that floor. And all of those servers were now running in one server. And in order to connect 40 servers in most work environments, you need networking. You need security, you need the ability to log. You actually need networking. So it was very clear that as servers were becoming virtual, a networking layer was being sucked into the servers. And you actually had to do really interesting stuff there. And we had this posit that you could implement most of networking, security, QoS, visibility, at that software layer, and you could do it on x86 without having to implement it in an ASIC in the network. At the time, by the way, this was a super radical idea. Now we're like, oh, that's obvious. But at the time people were like, that's bullshit. You can't do it.
So we started developing Open vSwitch at the time. The idea was: we're gonna have this primitive that you can use to implement much of networking in software at the edge. And I think that turned out to be one of the fundamentally good ideas, amid all the bad ones. That became a very good idea, and I'll talk later about why. On the other hand, we never could get the hardware ecosystem to work, ever. And by hardware ecosystem I mean IHVs, independent hardware vendors. They love to talk to software startups, whether it's a switch vendor or a NIC vendor. They always go to the software startups and ask, what can we put in the NIC to help you out? And I can't tell you how many resources we spent engaging with these big, huge IHVs to figure out what you could put in hardware to speed up the software we were doing. And that never worked. In the 10 years I was deeply involved in this, it never worked. And I think there are two real critical reasons. I've got three here on the slide, but there are two. One of them is parsimony of function. The conversations were always the same: the hardware vendor would come up and say, hey, listen, we can do anything in hardware that you can do in software. That sounds amazing. Okay, great, let's do it. And then we'd get down to talking about specific headers, and I remember once I had to argue for 16 bits as opposed to 15 bits. For a software person, declaring a variable is nothing; I can make it 64 or 128 bits. So this is ridiculous, right? But actually much more significant than that: it turns out hardware refresh cycles are incredibly slow. Incredibly, incredibly slow. Now, if you're a vendor of a switch, for example, or a server, you can deal with this, because the hardware is going in the atoms you're actually shipping.
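The primitive being described here is a match/action flow table in software. Here's a toy Python sketch of that idea; it is an illustration of the concept, not Open vSwitch's actual API or flow syntax: rules carry a priority, a set of header fields to match, and an action, and the highest-priority matching rule wins.

```python
# Toy match/action flow table, the core primitive a software switch like
# Open vSwitch builds on (illustrative only, not the real OVS interface).
# Rules are (priority, match, action); highest priority wins.

def lookup(table, packet):
    for priority, match, action in sorted(table, key=lambda r: r[0], reverse=True):
        # A rule matches if every field it names equals the packet's value.
        if all(packet.get(k) == v for k, v in match.items()):
            return action
    return "drop"   # default: no rule matched

table = [
    (100, {"dst_port": 22}, "drop"),                  # block SSH
    (10,  {"eth_type": 0x0800}, "output:veth-vm1"),   # forward IPv4
]

assert lookup(table, {"eth_type": 0x0800, "dst_port": 22}) == "drop"
assert lookup(table, {"eth_type": 0x0800, "dst_port": 80}) == "output:veth-vm1"
assert lookup(table, {"eth_type": 0x0806}) == "drop"  # ARP: no rule, dropped
```

Because this is just code running on x86, the "data path" can be changed by editing the table, with no ASIC spin, which is exactly the evolvability argument above.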
But if you're a software vendor, and especially a startup, you have to wait for the hardware to show up. And I just want to give you one example. There was a company, it's actually in New York, a very large cloud, that had been looking for hardware acceleration in NICs for tunneling for as long as I can remember. I remember having conversations seven years ago with them: listen, at some point our NICs are going to have this hardware acceleration, and then you can use it. I was there about a month ago, and they still hadn't gotten it as part of their standard supply chain. Seven years later, right? It's this Waiting for Godot thing: if you're a software startup and you're hoping the hardware supply chain shifts, you need to take into account not only the development cycle of that ASIC, but then that ASIC coming onto whatever board you need, then that's built in Taiwan, wrapped in sheet metal, put in a shipping container, put on a ship, going into some warehouse somewhere, and then you still have an entire refresh cycle before it actually pops up at the customer. So I think tying software functionality to hardware tends to be a pretty fatal mistake for startups. Okay, so then there was the question of: we want to build this system that makes it easier to operate networks. And we had an idea, I mean, it's an easy thing to say: today, network operations are box by box and topology-specific. How about we provide some high-level interface that makes it really easy? And again, as good computer scientists, we saw the problem in its entire generality. We said the answer is obvious: we're going to create a domain-specific language, and that domain-specific language can use high-level names, like "Martin" and group A and group B. So it's topology-independent.
You make declarations at a business level, but it's a full language; this was a subset of Datalog, so you can express anything you want to express. And now everybody's happy, because you can express exactly what you want, and you do it in a way that's topology-independent. If you want to determine how your network runs, you just write this policy language and everything's fine. Well, there are two fundamental problems with this, and it's basically a non-starter. And if I ever, well, I'm probably not gonna do another company, but if I ever guide another company in this space, I would say: never start with a domain-specific language, for the following reason. What you want is maximum expressibility and minimal complexity; that's what languages are for. Many of us are programmers; we think that we're reducing the world. But at least in the networking space, we ran into two problems. The first: it was never really clear that the languages we came up with actually supported the entire set of semantics of networking. Because networking does a lot of funky stuff, right? We've got PVLANs, we've got static routes, we've got basic switching, we've got VLANs; there are all of these things that dictate connectivity on the data path. Of course we've got broadcast and multicast. And it was never clear to me that you could express the full range of those things using a declarative language. But here's a much, much more significant problem. If you have a set of users that are used to using something and thinking about that thing, there's a set of abstractions in their brains. In the case of networking, those abstractions are L2 and L3 and whatever. As soon as you change those abstractions, you blow up people's brains. They don't even know how to think about what you're doing, and the entire tool chain that was built on the old abstractions no longer applies to the new abstractions, right?
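To show what "declarations at a business level, topology-independent" looks like, here's a hedged sketch of a Datalog-flavored policy, evaluated by a few lines of Python. The rule syntax, group names, and addresses are all invented for illustration; this is not the language Nicira actually built, just the flavor: the policy names groups, not boxes or topology.

```python
# Illustrative Datalog-style policy: high-level names, no topology anywhere.
# (Invented syntax and data, not the actual shipped language.)

policy = [
    # allow(src_group, dst_group)
    ("allow", "web", "db"),
    ("allow", "employees", "web"),
]

# Group membership, maintained separately from the rules themselves.
membership = {
    "10.0.0.5": "employees",
    "10.1.0.2": "web",
    "10.2.0.9": "db",
}

def allowed(src_ip, dst_ip):
    """A flow is allowed iff a rule relates the endpoints' groups."""
    src, dst = membership.get(src_ip), membership.get(dst_ip)
    return ("allow", src, dst) in policy

assert allowed("10.1.0.2", "10.2.0.9")       # web -> db: declared, permitted
assert not allowed("10.0.0.5", "10.2.0.9")   # employees -> db: not declared
```

The elegance is real, but so is the problem described next: none of L2, L3, VLANs, or broadcast appears here, which is precisely why operators who think in those abstractions found it so disorienting.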
I mean, if you think about it, networking's been around since the '70s, and we've been talking about IP addresses and Ethernet and all of this other stuff. So not only is all of the literature and all the tooling written on those abstractions, but that's how people think. And all of a sudden I'm like, here, listen: Datalog. Like, what is this stuff? And I think this resulted, at least for me, in the breakthrough that made everything really fall into place. Because until that point, and this is probably, say, 2009-ish, it felt like you were preaching something that people never really understood, and every discussion was a knife fight. So at this point we're like: listen, you need topology-independent abstractions, that's clear. You need them to be programmatic. You need them to be global throughout the entire network. Why don't we just make those abstractions the same as the physical abstractions and call it a day, right? And this is entirely decoupled from the physical network. Why don't we just say: let's say you have a data center and you've got VMs; if you want to attach them to something, attach them to a virtual network. It looks just like a physical network: it can do L2, it can do L3, it'll support whatever, NetConf or whatever you want. You can put it in any topology you want and control it globally through an API, and now you can manage the network just like your VMs, except through this globally decoupled abstract layer. And there was a whole bunch of confusion around this concept to begin with. When we first started, I remember a lot of the reactions were like, well, VLANs are virtualization. And I don't know if it's obvious now, but there's a big difference between segmentation and virtualization. So what's segmentation?
In x86, say, segmentation is: if I give you a hardware resource, I give you some identifier and it slices that hardware resource, right? You segment it. So what does a VLAN do? It takes a physical topology and segments it. That's very different from saying: I'm gonna give you any physical network, say an L3 network, and on top of it you can build any other network of any other configuration. So I can give you an IPv4 L3 network, and on top of it you can create an L2 network, an IPv6 network, some complex funky topology, whatever you want. The goal was to do true address-level virtualization, where you've really decoupled the hardware from the software. And that resulted in what became NSX. Again, we went through a bunch of names, but it was basically building what we called the network hypervisor, which is kind of a stupid name. But the idea is: if you have a data center and you're running VMs, you actually need networking abstractions to pull off all the stunts that networking normally does, whether it's visibility, whether it's QoS, whether it's firewalling, and it must match the operational model of the VMs. Unfortunately, the story doesn't end there. So we actually built this; we did it using whatever mechanism, it's not really important. It was implemented using the vSwitches at the edge, so it didn't require any changes to the physical network. And we kind of thought we were done. We're like, okay, great: in the physical world, people buy switches independently of buying servers, right? They'll buy servers and they'll buy switches, so that's what they'll want in the software world. And it turned out not to be the case. I don't think today, outside of physical boxes, anybody wants to differentiate between compute, networking and storage.
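The segmentation-versus-virtualization distinction can be sketched in a few lines. What follows is an illustrative toy, with invented names and addresses, not the NSX implementation: a virtual L2 network is realized on top of a physical L3 fabric by a control-plane mapping from (virtual network, virtual MAC) to the physical host's IP, with frames tunneled between hypervisors. A VLAN, by contrast, would only tag traffic on the one physical topology you already have.

```python
# Sketch of address-level virtualization: an L2 virtual network decoupled
# from the L3 physical fabric via a locator table and tunneling.
# (Illustrative names/addresses, not the NSX implementation.)

# Control-plane state: where does each virtual NIC physically live?
locator = {
    ("vnet-blue", "aa:aa:aa:aa:aa:01"): "192.168.1.10",  # hypervisor A
    ("vnet-blue", "aa:aa:aa:aa:aa:02"): "192.168.2.20",  # hypervisor B
}

def encapsulate(vnet, dst_mac, payload):
    """Wrap a virtual-L2 frame in a tunnel to the right physical host."""
    phys_ip = locator[(vnet, dst_mac)]
    return {"outer_dst": phys_ip, "tunnel_key": vnet, "inner": payload}

pkt = encapsulate("vnet-blue", "aa:aa:aa:aa:aa:02", b"hello")
assert pkt["outer_dst"] == "192.168.2.20"

# A VM migration is just a control-plane update; the virtual network,
# its addresses, and its topology are untouched.
locator[("vnet-blue", "aa:aa:aa:aa:aa:02")] = "192.168.3.30"
assert encapsulate("vnet-blue", "aa:aa:aa:aa:aa:02", b"hi")["outer_dst"] == "192.168.3.30"
```

Because only the locator table references physical addresses, the virtual network's abstractions stay identical to the familiar physical ones, which is the whole point of the breakthrough described above.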
So you go into operations, say you're running a big cloud and you buy all of these switches from Cisco, and we say: here we've got this software layer that you can buy independently. And the reality, as I think you actually know, is that people consume compute, networking and storage as a whole. They want to make sure that whatever they buy works as an end-to-end workflow, not pieces they've got to integrate and support separately. And so we started working on OpenStack; at the time it was Quantum, and then Neutron. I gave my first talk at this conference seven years ago, so that was 2010 in San Antonio. At the time, we didn't really know how any of this stuff would play out. But the idea was: we're going to get behind a very promising project. It was very clear that OpenStack was gonna have a big impact, and I think it really has, and it continues to. Why don't we chisel out a network-shaped hole in there so any vendor can plug in their networking piece, and then it'll all work in a way that's happy? There was this base assumption that if you have an open-source ecosystem, like OpenStack or whatever, it's easy to build some sort of pluggable framework so anybody can bring their toy and plug it in and it works. And what we learned is that's really, really, really hard. It's a super seductive idea: we've got this open-source project, and we're gonna make it pluggable because we're all great architects and designers, and we're gonna have this flourishing vendor ecosystem. But in the long term, this is a path that's fraught with peril for many startups, and I wanna try and describe why, because I think it's super relevant to this audience. Probably the biggest battle for startups is that if a customer is buying something, what they buy and who they buy it from has all the account control. Does that make sense?
If what I'm interested in is buying OpenStack, that's what I'm interested in as a customer, and so if I buy OpenStack from vendor X, my relationship is with vendor X and the support contract is from vendor X. So from a startup's perspective, either you have to have a separate sales motion, and now who knows if vendor X is even gonna support your thing, or you basically hand over the keys to vendor X in some way. And in my experience, one of two things happens in every case I've seen. Let's say the plucky young startup is working with Linux vendor X, and we go to vendor X and say, okay, listen, we wanna provide you this networking system, we're gonna plug it in. One of two things happens. Number one, if what you're doing is strategic, you will kick off a build-versus-buy decision from vendor X every time. They're like, listen, are we gonna allow some pipsqueak startup to cram some bits in here and potentially ruin the account? So you kick off a build-versus-buy decision. Or, if you're not strategic, you get no resources, or the B team. I've played this game tree out so many times: every single time, build-versus-buy, or you get the B team. And in both cases, it's very difficult for you to maintain relevancy. Often it's just better for you to try to do an independent sales motion. And you'll see this actually playing out today very often: let's say you're selling networking, but you can't crack the vendor ecosystem. What you do is you say, okay, I'm an OpenStack vendor now. So now you've taken the DNA of the company you built and you've become an entirely different company, with an entirely different sales motion and sales model. It's a very, very difficult thing for startups to do. I'm not saying you can't; I'm just saying it's difficult.
Of course, open source often becomes a proxy battleground, and it's very, very difficult to project and manage around these things. Now, what's interesting, and the more I thought about it and went through my notes and the conversations we had, the clearer this became, is that this is primarily about young ecosystems. In more mature ecosystems, like Linux, clearly you can do this. And that's just because you don't need the same types of relationships with the vendors of Linux that you need when something's changing all the time, with shifting APIs and versioning. So if it's an early project and you're a startup, beware. All right, I wanted to give you a rough timeframe of the growth of the product. The product was called NVP at the time. We were acquired by VMware in 2012, about five years in, and we merged with an internal VMware product to become NSX. And then we grew it; I guess it was just a few months ago that it was announced it's now a billion-dollar product line. So it became this very large product line. And the reason I want to show you this is to show the relative timelines for actually bringing something to market. I'll get to this a little bit later, but it turns out that R&D, and even finding product-market fit, are way, way less of a relative investment than taking something to market and, even more importantly, changing customer behavior. Okay, so I popped out of Stanford as a kind of poorly dressed PhD student thinking good technology wins the day, and then I went through the meat grinder for a long time. And one of the big lessons I learned is: if you're building a company and you're sitting at the helm of that company, you're responsible for P&L, profit and loss; you're responsible for the balance sheet.
It turns out pretty much everything that drives your life after you go to market is go-to-market. That's it. And listen, I don't know if this is even of interest to you guys, but if you find yourself in this situation, it's gonna be a hell of a wake-up call. Forget the fact that you need to sell something or market something. The reality is, a business is judged on financials, and all of your financials in an enterprise company are dwarfed by sales. Full stop. That's your valuation, that's your margin, that's how people think of you, that's your reputation. It's all driven by the go-to-market costs. And the reason is as follows, and it took me a while to really understand it: R&D is kind of a fixed cost, a sublinear cost. That's the great thing about software: as you scale to tons of users, you don't need to linearly scale your engineering team, your R&D team, right? A few engineers can build amazing products for the entire globe. Of course you have to grow the team, but you can do it in a way that's sublinear. When it comes to sales, that just isn't true. And in early markets, pre-chasm markets, nobody's-ever-thought-of-it type markets, you need to do direct sales, so you need sales forces on the ground, and they drive all of your cost. So it doesn't matter how hardcore you are, how technical you are, how awesome your technology is; at some point your entire life is gonna be dictated by one motion that, by the way, for me, I had no training in.
And to put it in a little more perspective, this is roughly where we spent the relative amounts of time. This isn't exactly to scale, to be perfectly honest, but: technology development is really important, but a lot of the reason R&D takes so much time is that you don't know what you're building. You go, oh, we're gonna build a platform, we're gonna do this, and it's not the right thing. So you're wandering in the wilderness trying to find product-market fit, and at some point you find it, and that takes time. That will often take years. I'm an investor now, at Andreessen Horowitz; I'm on seven boards right now, I see lots of companies, I've made a number of investments. And there's this phase, unless you're very lucky, where you're wandering in the woods finding product-market fit, and R&D always tries to keep up with that, and often, if you do too much R&D upfront, you've got to undo it later. But all of this is dwarfed by the amount of resources needed to go to market. All of it. Okay, so I'm gonna talk quickly about lessons learned, then a little bit about what I think the next big things are, and then I'm happy to open up the Q&A. The clear lessons for me, my big takeaways if I distill a decade of my life down, are the following. If I were to do it all over again, instead of worrying about solving the problem in all generality because I'm a great computer scientist, building a general platform, et cetera, I would have just focused on the product, and I would have gotten to market much, much sooner. We probably lost two years from that. The second thing: I think you should think very, very carefully before changing abstractions. This is the PaaS versus the AWS model, right?
People know it, and it's not a technology issue; it's the way people's brains work. Putting concepts in people's brains is hard: people wake up and they think about certain things, and changing the things they think about is hard. It's like the Leonardo DiCaprio Inception thing: you've got to go in and actually change how people think, and that's a very difficult thing. So if you can keep the abstractions where they are, so much the better, for the software, for sure, and for sales and marketing. Okay, so let me talk a little bit about the future. This is fairly quick, but I think this is where the world's going, at least the part I'm interested in. I think networks are basically defined by what they connect. A network by itself doesn't mean anything, right? It's a bunch of wires, a bunch of connectivity, and what you put in a network is a function of the needs of what it connects. So look at physical servers: you give them Ethernet NICs that have L2, they want L3, and there's an operational model around servers where you have to pick them up and move them, so if you want to walk over and configure the network by hand, that's fine. If you have a physical build-out of servers, you can do a physical build-out of networks, and that's why networks look the way they do in the physical world. Virtual machines change that a little. Virtual machines have NICs, they run legacy applications, you don't change anything, so you still have to support L2 and you still have to support L3, but the operational model shifted, right? Now you have a virtual operational model where things come and go and move around, so you want network abstractions that follow that operational model, and this is where network virtualization came in.
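A minimal sketch of that network-virtualization idea: the logical network a VM sees stays fixed while a mapping tracks which physical host it currently lives on, VXLAN-overlay style. The class and method names here are invented for illustration, not any product's API:

```python
# Sketch: decouple the logical network from physical location. Traffic
# destined to a VM is encapsulated to whichever physical host the VM is
# running on right now, so a live migration only updates a mapping; the
# logical network the VM sees is unchanged.

class Overlay:
    def __init__(self) -> None:
        self.location: dict[str, str] = {}   # vm name -> physical host IP

    def vm_moved(self, vm: str, host_ip: str) -> None:
        """Called on initial placement or live migration."""
        self.location[vm] = host_ip

    def encap_target(self, dst_vm: str) -> str:
        """Tunnel endpoint (physical host) for traffic to dst_vm."""
        return self.location[dst_vm]

net = Overlay()
net.vm_moved("web-1", "10.0.0.5")
print(net.encap_target("web-1"))   # prints 10.0.0.5
net.vm_moved("web-1", "10.0.0.9")  # VM migrates; logical network untouched
print(net.encap_target("web-1"))   # prints 10.0.0.9
```

The point of the design is that the virtual operational model (things come and go and move around) is absorbed entirely by the mapping layer, so the abstraction presented to workloads never changes.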
All right, so you can make an argument, and I'm not making that argument, that containers have a bit of a different operational model. They don't really care about L2; it's more about application deployment and application portability as a use case, so maybe we need something like virtual networking for L3. But I actually think it would be a mistake to view that as the way the world is going, even though many people think it is. It's more and more clear to me that networking is jumping up a level. What I think is really happening is that all of networking is jumping up to the API level, and I think that's probably one of the most significant changes we've seen in the history of networking, and I want to talk a little bit about why. If I were to make a guess: yes, you're going to need container-level networking stuff, sure, but I think the action is really now at the HTTP/REST/JSON kind of API level. So let me lead up to that very quickly. Okay, I know this is Google Trends, and I know it's kind of funny, but even as a bad first-order approximation it's somewhat of an insight into what's on people's minds. I've actually been schlepping the commute on 101 between Silicon Valley and San Francisco for 17 years, almost two decades, and the billboards along the way have always been kind of what's on Silicon Valley's mind. And let me tell you, over the last year or so they're talking about companies whose primary interface is APIs, and this just happened kind of overnight. You've got multiple billion-dollar companies whose primary interface is an API, you've got APIs for everything, you've got tons of GMV, which is actual dollars, going through APIs today. So I think we're seeing this massive shift from content- and web-centric stuff to APIs. Does that make sense? Well, I think the
endpoint used to be a physical server or a VM or a container, and it's very, very clear to me now that applications are basically becoming distributed components connected by APIs. This is a non-trivial statement: to build an application today, instead of stringing together a bunch of machines, developers are thinking about stringing together a bunch of APIs, using whatever they use to do it. And I think that's driving some of the most exciting technologies, which I'm starting to think of as networking technologies. We've never had this level of interest in anything I've ever built, ever. I'm actually on the board of Mashape, which develops Kong, so it's a company I'm familiar with, and if you look at the growth statistics they're unbelievable. And by the way, that includes Linkerd, RapidAPI, and Envoy; all of these are technologies that are interposed, providing what you would consider network-like functionality: things like discovery, fault tolerance, failover, load balancing, security. All of that's being implemented, but it's being implemented at a much, much more interesting layer, and if you actually look at the interest in these technologies, it's gone asymptotic. So I just want to leave you with this, and I'm happy to open it up to questions after, but there's long been in networking this holy grail of semantic networking. It's something we've always wanted to do and have never been able to do, so let me talk about what that means. Here's what the networking problem has been since the 70s: I have a box, a packet comes into that box, I look at the packet headers, which are this arbitrary set of bits that doesn't mean anything, and I make a decision to drop or send that packet. That's the networking problem. That's it. So what are the decisions made on? IP addresses. They mean
nothing, right? They're a location on the internet. They're not even a box, because of rebinding. I have no idea who's doing it, I have no idea what operation is being performed; I know nothing. I'm just sending packets from point A to point B. People have always wanted semantics-based networking, and if you think about it, because now you actually have reasonable endpoints, things like APIs, and, this is the most important thing, applications are basically written in a way to take advantage of those endpoints, we're seeing the emergence of an entirely new layer of components that are taking off, which allow you to interpose and make decisions at that layer. So for example, if I'm using an HTTP mesh in a data center for, say, a Kubernetes deployment, I can dictate that the mesh make intelligent load-balancing decisions based on, say, what resource is actually being modified. Or let's say I want to make a security decision: I can differentiate between a PUT and a GET. So instead of the really crude approximations that haven't worked for a really long time, we're actually seeing networking evolve to be semantically aware, and honestly, if that gets rid of all of networking and replaces it for application development, I think that's a good thing. So with that, I really appreciate you guys taking the time to spend with me. I'm happy to open it up to questions, so thanks so much.

Hi, I'm a big fan. Thank you, I'm lucky to be here. I have three questions. The first one is related to OpenStack deployment: as of today, what is available from the open source community to use with OpenStack Neutron? Either OpenContrail, or going with ODL or ONOS, versus yet another paradigm with the Calico Layer 3-only approach. So what's your thought on this?
That's the first question. The second one is regarding the switch product itself: there's momentum around delivering transactional Linuxes running inside the switch, such as CoreOS or Ubuntu Core, with snaps in it, with SquashFS having the applications inside, like a firewall or load balancing. And the third one is...

Let me answer, okay, let me get this one in while I remember, because I'll forget. These are great questions. The first question, I honestly don't know; I've actually been out of the OpenStack world as far as the network vendors go for a while, so I'm not the right person to answer that. I've got my ideas from four years ago, but literally when we got acquired by VMware the primary focus was the vSphere ecosystem, so I just don't want to say something that's irrelevant. The second question is a great question. Now that we actually have a reasonable networking layer, the HTTP layer, I think the data center of the future is converging, and it's very clear: you've got an L3 networking fabric, and everything else is x86. Everything, everything. I don't see any reason to put any software that isn't doing L3 forwarding of packets in the network. I don't see any reason to try to put middleboxes in the network, or get them close to the switching, or anything like that. It's just like chassis design: if you get a chassis from, say, Cisco, you've got line cards that have a bunch of intelligence, and you've got a backplane they connect to. I think the physical network becomes that backplane, the line cards have all the intelligence, and all that intelligence is going to be on x86. And I hope the network functionality is going to be implemented in a semantically meaningful layer, which is the HTTP layer or something like that. Does that make sense? I think often, as computer scientists, we overgeneralize everything. We're like, listen, you've got a switch and it's got a CPU, so I'm going to containerize that CPU. There's no point; I think you're just misunderstanding the fundamental technical problem if you decide to do these types of things. The networking people are going to continue to be focused on protocols and containers and low-level stuff for a long time, and all the while the rest of the world is going to pass them up and basically re-implement all of these things as a proper distributed system.

Okay. Your answer makes sense, in that Intel is pushing Rack Scale with backplane communication, with the photonics design and so on, but the reality today is we still buy and deploy pizza boxes.

No, no, no, sorry, I misspoke, or I wasn't clear. I think the physical network will be an L3 fabric from whatever vendor you want, in whatever physical configuration, and it's going to be 10% of the total data center spend. That's it, end of story. I can't think of any innovation you actually want to do in that physical network. I don't know what else you'd put in there; I don't know what virtualizing the wimpy CPU that's driving a Broadcom chip buys anybody. So for me, when I think of a data center, I abstract the entire physical network as one big L3 switch and I call it good, and I don't do any more implementation there. Then you say, okay, so now I've taken the entire physical network and abstracted it away; what about middlebox functionality? My belief is we're actually building proper distributed systems, thank goodness, because of microservices. So the things we normally put in the network, things like discovery, load balancing, security, fault tolerance or fault isolation, are going to move into basically the HTTP layer, whether it's something like Linkerd, or Envoy, or an API gateway, or whatever. And those all run on x86, right,
because they're going to be fundamentally distributed by nature. So I think that's the evolution: you should think about physical network connectivity, and then the microservices and distribution harness to manage things above it.

The last question is around the semantics. Doing more using semantics, in the sense of service function chaining: what we saw over the last two years, with service function chaining implemented with the NSH header sort of thing, is literally facing what you described, people tied to the protocols, exactly like changing the header format.

Yeah, exactly. So NSH was kind of an attempt to do network virtualization, or overlaying, by again changing protocols and header formats and whatever. And I think this is actually a great example. Let's say you want to do something new in networking. You could create a new packet format and a new set of headers that everything, including the network, needs to understand. Or you build a new microservice, you add data to it, it's just part of an HTTP GET, and life goes on. And if you actually look at where effort is being spent and what's being adopted, it's very clear to me that all of this stuff is moving up to the application layer, and the rest of it is just going to atrophy over time.

Thanks for the questions. Good presentation, thank you. A follow-up on the API, on evolving networking towards the API: these solutions, the service mesh solutions like Linkerd et cetera, do provide that abstraction within a domain, within a closed domain. But once you leave the domain, it's IP and BGP technology.

Yeah, this is a great question, but I'll let you finish before I answer.

So my question was: do you see wide-area networking evolving away from IP? I mean, there have been attempts like ILNP, but nothing really took over; it's still IP, and
all this innovation happens within closed domains; once you leave, you go back to the old days. So what's your view on that?

I think IPv4 is it. I think that's the internet, and that's not going to change. So just like the data center is now an L3 connectivity backplane, I think the core is going to be an L3 connectivity backplane, and then all the semantics are going to move up a layer. You're going to have companies like RapidAPI that provide basically API arbitration services online, and they exist today: you can find APIs, you can look for APIs, you can use them. Just like the web was built on top of IPv4, and then we built CDNs and everything else to support it, and because there weren't great semantics that was kind of a mess, I think now we have microservices as endpoints instead of the web, and we're going to have a whole layer of infrastructure that provides support for that. And with that, I'm getting pulled off, so okay, cool. Thanks so much for your time, guys. Thank you.