All right, folks, we seem to have a full room, so we'll get started; we've probably got a couple more people trying to squeeze in over the next few minutes. It's a pleasure to introduce someone I've known for a few years. Unless you've been completely asleep for the last few years, you will have read several of Teemu's papers, starting with his work on HIP, the Host Identity Protocol, and then on data-oriented networking. We read the DONA paper in CS244 just yesterday, and it really presaged all the interest in information-centric networking, which has obviously been a big topic in the last couple of years. And then, of course, SDN, which is going to be the main topic of Teemu's talk today. In addition to all of that, he received the ACM SIGCOMM Rising Star Award at the end of last year. With so much of this work done early in his career, it's going to be interesting to see what he does next, because all of those things are a pretty tough act to follow. Okay, without further ado, I hand it over to Teemu.

Thanks for the introduction. Hello, everyone. I thought I would use this opportunity to present some observations and thoughts on network design, and especially on how SDN is about to change some classic assumptions we have all held for quite a few years now. I don't think everyone really realizes the implications of the changes we're going to talk about, unless you have been following very carefully what's going on in the industry.

But first, a few comments about my background, just to establish some common ground for the presentation. I do write code, but I stopped designing protocols a long time ago, and I have no first-hand experience designing hardware, so please keep this in mind. Having said that, I do have plenty of experience designing and implementing these systems; that's pretty much what I've been doing for the past 15 years of my life. Given this background, you might think that I'm all pro-OpenFlow, pro-SDN, and would have nothing but great things to say about SDN. But after spending probably the past five years of my life designing, implementing, and deploying networks built on OpenFlow and SDN, I have to say that we didn't get everything right in the beginning. At the same time, I believe in certain aspects of SDN even more than before.

So I'm going to begin this presentation with some observations from the past few years, things I have personally learned and stumbled into. After this very brief recap of the lessons, I thought it would make sense to focus on the aspects of SDN that I actually see as valuable, so I will be talking about SDN's structures and abstractions, again based on the implementation and deployment experiences of the past years. And finally, taking these structures and abstractions as building blocks, I'll conclude by extending the scope even further and trying to share some thoughts about their implications for networking in general, and perhaps for the networking community as a whole.

So here's where we begin, with the main lessons. Here's your typical classic SDN picture, and it looks just perfect, right? You have this centralized controller that sees and programs the network; you can do almost whatever you want to do.
You have this extreme power in your hands. You have these cheap switches, and you remain connected to the switches all the time over this protocol that every single vendor agrees on. That sounds pretty nice, but some of you may have had a nagging suspicion, or felt the pain yourselves: there's probably something wrong with this picture.

First, I think this whole idea of centralization can be very misleading, because the fact is that to reach any practical level of scale and availability you can't have just a single controller; you have to be prepared for some level of distribution within the controller system. Now, having said this, I'm obviously not implying that you must have the same level of distribution you have today in the physical network, where the control is spread through all the elements for the sake of robustness. No, we can implement the distribution among the controller elements, hopefully in a way that actually simplifies the control problem a bit. But we shouldn't assume that distribution goes away. So that was fairly obvious.

What about the protocol we use to control these switches? OpenFlow sounds practical, and it's very powerful, but at the same time it's a very fine-grained way to manage the switch state, and it's exactly this fine-grained aspect that complicates the management of switches quite a bit if you're doing it at scale. Say you're managing thousands or tens of thousands of switches: you end up with quite a bit of state within the central cluster, easily tens of gigabytes of just flow entries. That's a lot of state for any kind of system. And it's obviously even worse if you follow the classic Ethane model, where the first packet of a flow is always forwarded all the way up to the controller cluster; that just makes it very complicated to scale. So it's clear that this very low-level, TCAM-like abstraction and TCAM-like state management, together with pushing first packets into the controller cluster, doesn't mix well with the large-scale systems you see in data centers.

Clearly there are far more efficient representations of that same state. You could push a compact policy configuration down to the switches and let the switches themselves transform this policy configuration into flow entries locally. It's just like routers: the CPU doesn't push flow entries as such down to the line cards; it pushes routing entries down, and the line cards locally transform those entries into whatever representation best fits the ASICs they have.
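To make that last point a bit more concrete, here is a minimal sketch in Python; the rule format, the group names, and the function names are all hypothetical, my own illustration rather than any real controller's interface.

```python
# A minimal sketch (all names hypothetical) of pushing declarative policy
# instead of individual flow entries: the controller ships compact
# group-level rules, and an agent on each switch expands them locally,
# the way a line card transforms routing entries for its ASICs.

from dataclasses import dataclass

@dataclass
class PolicyRule:
    src_group: str   # e.g. "web-tier"
    dst_group: str   # e.g. "db-tier"
    action: str      # "allow" or "deny"

def compile_policy(rules, group_members):
    """Runs on the switch: expand group-level rules into per-address
    flow entries for the local forwarding table."""
    flow_entries = []
    for rule in rules:
        for src in group_members[rule.src_group]:
            for dst in group_members[rule.dst_group]:
                flow_entries.append({
                    "match": {"ip_src": src, "ip_dst": dst},
                    "action": rule.action,
                })
    return flow_entries

# The controller ships only the rules and the group membership (kilobytes);
# the expansion into many entries happens at the switch, not in the cluster.
rules = [PolicyRule("web-tier", "db-tier", "allow")]
members = {"web-tier": ["10.0.1.1", "10.0.1.2"], "db-tier": ["10.0.2.1"]}
print(compile_policy(rules, members))
```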
What about the vendor neutrality of the protocol? Well, anyone who has actually tried to implement something on top of a hardware switch knows the pain I'm talking about, because you basically have to tailor your application, and the pipelines you use to manage and control your packets, for every single switch and every single chipset inside the switch. And the simple, single, huge OpenFlow table you have at every switch doesn't, even after five years, in fact even today, utilize all the capabilities and resources you have in modern chips and switches. You have to be prepared to customize everything around the hardware pipeline exposed by the vendor. We obviously know the root cause for this: vendors do care about the production costs of the chips, where die size matters, and they obviously also care about having features that are unique from a competitive point of view.

And then, finally, the assumption that the controllers remain connected to the switches all the time just doesn't hold in practice, because network partitions do happen. You have to carefully engineer the whole system so that even when a switch is disconnected from the controller cluster, it may still be connected on the data plane side, packets may still be flowing, and it has to do the right thing without the connection to the controller. So again, it's not as simple as the original classic OpenFlow model made it sound.
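As a side note, here's one way that "do the right thing while disconnected" behavior could look, as a toy sketch; the agent structure, the standalone fallback, and the timeout value are all my own assumptions, not a real OpenFlow agent.

```python
# A toy sketch of partition handling at a switch: keep forwarding on the
# last state the controller installed, and only fall back to a safe
# standalone mode after a generous grace period. Names and the timeout
# are illustrative assumptions.

import time

class SwitchAgent:
    GRACE_PERIOD = 300.0  # seconds of disconnection tolerated on stale state

    def __init__(self):
        self.flow_table = []              # last state pushed by the controller
        self.last_contact = time.time()

    def on_controller_message(self, flows):
        self.flow_table = flows           # data plane keeps using this table
        self.last_contact = time.time()

    def tick(self):
        if time.time() - self.last_contact > self.GRACE_PERIOD:
            # Rather than dropping all traffic, revert to simple standalone
            # forwarding (e.g. plain L2 learning) until the controller returns.
            self.flow_table = [{"match": {}, "action": "NORMAL"}]
```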
So, all in all, you might think that I've basically lost my faith in SDN and OpenFlow, because this is a fairly skeptical viewpoint: I just trashed basically all the central components of the classic SDN/OpenFlow design. The centralization isn't really there, OpenFlow as an abstraction is way too low-level, and network partitions make things more complicated in practice. But actually, all these issues have only clarified to me what's really essential and foundational in SDN, and why it still is, and always has been, on its way to change the way we do networking.

I think the most important lesson of SDN is the separation of the control and data planes. That's easy to say, but what do I actually mean by it? We know why the Internet is so robust: it's the extreme level of distribution of functionality over all the network elements, which lets every single element operate individually without any help from the rest of the network. From a robustness point of view it's really beautiful; it's hard to do any better than that. But unfortunately we also know what it means from the control plane point of view: the control plane becomes totally distributed. There's no single central API of any kind through which you could control the whole network; you only have indirect means of control, by fine-tuning the knobs and settings of individual elements. The same problem exists on the data plane side: spreading all these ACLs, all these complicated policies, over all the network elements doesn't exactly simplify the system; exactly the opposite, actually. And this is just because we have been enslaved by an extreme model of distribution, the idea that we have to distribute all the functionality over all the network elements.

Getting rid of this thinking is, I think, the very value of SDN, because SDN was the first to articulate that we should not design network control in terms of the physical topology. Instead, we should decouple the control plane from the physical topology, not onto a single server but onto a cluster of controllers, and then design the control plane distribution in terms of the requirements, the scalability and availability, of the control problem we are solving. We can do that by following the general principles of distributed systems, instead of being confined to the small corner of the design space used by all the routing protocols. And if you do that, you can really open your mind and start rethinking the overall structure of the network, not just the control but the whole network: how you could make it simpler, more flexible, more modular, all those properties we have been lacking in networks. It's exactly this change of priorities, from the data plane towards the control plane and its requirements of simplicity, modularity, and flexibility, that I see as the most valuable aspect of SDN. Now that we have freed ourselves from the physical topology and this extreme distribution, we can start redesigning networks with this mindset.

So where are we today? You obviously know what I'm going to say: we have this huge collection of protocols, and it's a mess; it's not a well-defined science of any kind. Show it to anyone from, say, programming languages, operating systems, file systems, or databases: those communities at least have some principles and foundations. We have nothing; we just have a pile of protocols.

How did we get here? It was actually fairly easy, because we just kept implementing new data plane mechanisms, solving data plane problems one after another. That's fine as such, but whenever we did so, we kept adding new control plane mechanisms on top of the older ones, and when we were done with one problem we moved on to the next. Over time we piled up more and more control plane mechanisms, until we had a fairly interesting collection of protocols, and as you know, the resulting mix of control plane protocols is not simple to reason about. On the data plane side, at least, we have some basic layering that kind of makes sense, because the focal point was always the data plane and the physical topology; the focus was there first, and it was always the control plane that had to adapt to the problem solving on the data plane side.

So, given this mess of control plane protocols, let's think about its fundamental causes. It's easy to state that we have a problem, that it's a mess, but what are we really missing in terms of solving it? Pretty much everything relates to, or revolves around, the concept of modularity. For instance, we don't have many concrete, explicit examples of separation of concerns: the problems of one particular protocol spill over to other protocols, and nothing is contained, because the implementations and the protocols introduce dependencies between themselves. You don't have the nice property that one problem is solved at one location, within one component, with the rest of the system never having to hear about any aspect of it. So the overall system becomes more and more complicated, because we have more and more dependencies, because of this lacking separation of concerns.

Then I think the very principle of abstraction is also somewhat lacking. It's a bit different from separation of concerns. Every developer who has done any object-oriented programming knows that you should hide the internals of your object or class from its users. Why do you do that?
To prevent anyone from creating dependencies on the internals of your implementation, so that later you can replace the implementation with something improved, without anyone noticing the change. The key is that you need an abstraction, an interface for the module, so that its users don't create harmful dependencies on its implementation. This is very obvious in programming, but we don't follow this principle that well when it comes to networking.

Now, no one said this would be easy. You could make the argument that networking is a young field of computer science: first we had to make our systems functional, and we didn't have time to make them pretty. But I think it's exactly thanks to SDN that we can now sit down and actively think about these problems. The reason I say "actively" is that it's so easy to miss the sort of structures and abstractions I'm referring to; you have to be explicitly looking for these principles, ideas, and concepts. Again, the analogy to programming: anyone who has done any sort of programming knows that improving the modularity of any major application takes active effort. Even keeping the modularity at the same level requires active effort, and improving it requires far more; if you do nothing, it's guaranteed that the modularity level goes down and you will have more and more dependencies within your system.

Next I'm going to go over some structures and abstractions I've stumbled into over the past five years that can have some interesting, long-lasting implications for simplifying network design in general. They build exactly on the two principles I mentioned: separation of concerns, and using abstractions to hide details and reduce dependencies.

The first structure is called fabrics. By fabric I basically refer to something similar to the provider of connectivity you have inside routers and switch chassis: the backplane, which provides full bisection bandwidth between any endpoints attached to it. The only difference this time is that we're doing it over the network instead of within the chassis.

Let's think about an MPLS network. In an MPLS network you have the edge routers, which are the only ones forwarding based on IP addresses, and in some sense that's where all the semantically interesting stuff happens, because in the middle of the network you have only simple switches that just forward the packets based on the MPLS labels. There's a very clear divide between the core and the edge; there's a very clear separation of concerns in this design.

Now let's contrast this MPLS network with a modern computing environment that has been virtualized, meaning all the workloads actually run on VMs, which run on top of hypervisors. Obviously, in this sort of network the VMs might belong to different tenants, and every tenant probably would like to have their own network and not expose any of their traffic to any other tenant. How do people solve this problem in data centers today? They build tunnels, Layer 3 tunnels, over the network core, and the tunnels carry enough information that the receiving hypervisor knows which VM a packet belongs to.
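Here's a toy sketch of that tunneling arrangement; the field names and lookup maps are mine, not any particular encapsulation format. The edge stamps an outer header plus a tenant identifier, and the core never looks past the outer header.

```python
# Toy model of the edge/fabric split: the hypervisor edge encapsulates a
# tenant VM's packet, and the fabric forwards purely on the outer header.
# All field names and maps are illustrative assumptions.

def edge_encapsulate(pkt, vm_to_tenant, tenant_vni, vm_to_hypervisor):
    tenant = vm_to_tenant[pkt["src_vm"]]
    return {
        "outer_src": vm_to_hypervisor[pkt["src_vm"]],  # all the fabric sees
        "outer_dst": vm_to_hypervisor[pkt["dst_vm"]],
        "vni": tenant_vni[tenant],   # tells the receiving edge the tenant
        "inner": pkt,                # opaque to the fabric
    }

def fabric_forward(encap_pkt, routes):
    # Separation of concerns: route on outer_dst only, no tenant logic here.
    return routes[encap_pkt["outer_dst"]]

vm_to_tenant = {"vmA": "tenant1", "vmB": "tenant1"}
tenant_vni = {"tenant1": 5001}
vm_to_hv = {"vmA": "10.0.0.1", "vmB": "10.0.0.2"}
encap = edge_encapsulate({"src_vm": "vmA", "dst_vm": "vmB", "data": b"hi"},
                         vm_to_tenant, tenant_vni, vm_to_hv)
next_hop = fabric_forward(encap, {"10.0.0.2": "port7"})
```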
The point is that this setting becomes very similar to MPLS in some sense, because it's again the edge that does all the semantically interesting processing of the packets, while the middle of the network does nothing but forward these encapsulated packets, quickly and cheaply, across the network. It's this internal part of the network, doing nothing but forwarding tunneled packets, that we call the fabric. In some sense the fabric is like one huge switch that all these hypervisors use just to send packets quickly over the network. But the point here is that the fabric doesn't care about the packets sent over it; it doesn't care what's inside the encapsulation, and it doesn't care that much about the encapsulation format. In the same way, the hypervisors, and definitely the VMs, don't care about the internals of the fabric. This should sound fairly similar to the backplane design I mentioned; we just do it over the network instead of within the chassis.

That's the reason why, in this sort of environment, people actually build these overlay networks. Clearly you could pick a different, more classic design and implement the isolation and all the features you need to provide for the VMs throughout the network, say by using VLANs and ACLs. But I would claim that all the benefits, the reasons people do it this way, relate to the separation of concerns: in this fabric model it's exactly the network edge that implements the network policy, and the fabric just does the forwarding. It's this separation that allows both of the solutions to become simpler. The fabric can just focus on delivering packets; it doesn't need to understand anything about policies, you don't need ACLs implemented in those switches, and you don't even have to support ACLs in the chips used to build the fabric. At the edge you can focus on flexibility and on providing interesting network policies and features, and be less obsessed about speed. More importantly, you can evolve both of these independently: you can replace the fabric without changing the hypervisors at all, and you can upgrade the features you provide at the edge without changing the fabric at all, which is obviously very convenient if the vendors happen to be different for the fabric and the edge.

So we're heading towards a very explicit, clear divide between the edge and the core: the edge is all about flexibility, with raw speed less critical, while the fabric is about providing reliable, cheap, high-speed transport. And as you might predict, the hardware platforms used to provide these functions are very different: x86 is all about flexibility, fairly okay in terms of packet forwarding but not that great, whereas ASICs are very rigid and very difficult to add new features to, but extremely fast, providing extremely high aggregate bandwidth.

Fabric was the first structure, so let's look at another structure I stumbled into, one that is even more useful in terms of stability, modularity, and increasing the simplicity of the network. Think about networks and how we configure them today. It's pretty ugly, actually: the network policy configuration, as discussed earlier, tends to span all the elements you have in the network.
Just think about a simple policy: A is not allowed to talk to B. In which element would you enforce this policy, and what would you do if either one of them changes its attachment point within the network? So one has to ask: why should admins even care about this sort of low-level detail? Wouldn't it be much simpler if they were provided with a virtual switch, operated on that alone, and just declared the policy "A can't talk to B", with some magic below taking care of propagating the policies and configuring them properly within the network?

This sort of detail hiding is exactly what network virtualization is about. Instead of exposing the details of the physical network to the users as such, we let the users operate on a topology of virtual switches and routers. Obviously it's not a one-to-one mapping: you can have multiple virtual switches and multiple virtual routers implemented by a single physical switch, and conversely a single virtual switch can be implemented across multiple physical switches. What's essential is that users are provided with a topology that has nothing to do with the physical topology; the topology users are given is exactly as complicated as they need to express their policies. In some simple cases a single virtual switch, a single logical switch, could be completely enough to express all the policies a tenant has in the data center. More demanding customers, more demanding tenants, might have somewhat more complicated topologies, and then you might have some extremely demanding customers whose logical topologies are even more complicated than the physical topology below. But the point is that it's only as complicated as the users, the tenants, need, and that topology remains stable regardless of what happens below in the physical topology.
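A small sketch of why the logical topology stays stable under moves; the class and method names here are hypothetical, not any product's API. The policy attaches to logical ports, and a VM migration only rewrites the port-to-hypervisor binding.

```python
# Policy lives on the logical switch; migration only changes the binding of
# a logical port to a hypervisor. Names are illustrative assumptions.

class LogicalSwitch:
    def __init__(self):
        self.ports = {}        # logical port -> current hypervisor
        self.blocked = set()   # pairs of logical ports that may not talk

    def attach(self, port, hypervisor):
        self.ports[port] = hypervisor      # called again on migration

    def deny(self, a, b):
        self.blocked.add(frozenset((a, b)))

    def rules_for(self, hypervisor):
        """The rules the edge at one hypervisor must enforce right now."""
        local = {p for p, h in self.ports.items() if h == hypervisor}
        return [pair for pair in self.blocked if pair & local]

ls = LogicalSwitch()
ls.attach("A", "hv1"); ls.attach("B", "hv2")
ls.deny("A", "B")
ls.attach("B", "hv3")        # B migrates; the declared policy is untouched
print(ls.rules_for("hv3"))   # hv3's edge now enforces the A/B block
```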
And not only can we use network virtualization to hide details of the physical layer from the users; we can use it recursively, multiple times, and in that manner we could make network control scale to wider and wider areas, almost planet-wide in some sense, because we can hide lower-level details from the higher levels of the system to the extent needed to scale. At the highest level you have the policies you care about at some global scale, and at the level below you have more fine-grained policies, but still not at the lowest level where you care about all the little details. This is just the basic principle of networking: you have to aggregate in order to scale. And if you use virtual switches in this manner, they provide, and this is a fairly subtle point, a nice interface in terms of policies: the virtual switches below can internally implement and follow whatever traffic management policies they have, without listening to the higher levels at all. As long as they provide the virtual switch interface, they can internally use whatever traffic management policies they prefer. So it's a clean interface between two different levels of policies, which almost never have to touch each other within the network.

So how do you come up with this sort of hierarchical virtual switch structure? It turns out it just follows the very basic principles of networking. First you establish the lowest level of virtual switches around the smallest areas of connectivity: for instance, a site could be a single virtual switch, and the second level could be, say, a metropolitan area built on top of these site-specific virtual switches. Then you probably also have to consider the failure domains: the smallest failure domain you have is probably a single virtual switch, and the bigger failure domains are again something you build on top of those lower-level switches. For instance, the metropolitan area could be built on top of the site-specific or building-specific virtual switches you have. And the last point: if you have a need to separate policies at different levels, the virtual switch interface is a very good candidate for doing that.

Back to the implementation of virtualization. Nothing says we have to use any physical switch in the middle of the network to provide network virtualization for the users. Instead, we can combine the ideas of the fabric and network virtualization: make the middle, the core of the network, a fabric that just relays packets from one edge of the network to another, and let the edges provide all the semantics for the user. In this case the edges are the ones that give users these abstractions, these logical topologies to connect their VMs to. And we don't have to stop at switches and routers; we can provide virtual services as well, and again without touching the middle of the network. We can implement everything at the hypervisors, or whatever x86 servers you might be using at the edge, and the middle of the network remains simple even though the users get all the services they are used to having in physical networks.

So far, then: we have the fabric, which simplifies the hardware in the middle of the network, and the edges, which provide all the semantics for the user. Obviously, in hypervisor-based environments it's the hypervisors that provide this edge, using x86; it's all software, it's extremely flexible, and it's future-proof because of the pace of software innovation. And then we have the mechanism of network virtualization, which lets us, for the first time in networking in some sense, simplify the topologies the control planes have to manage, because we can expose to the higher levels of the control plane, or to the users, only topologies as complicated as their workloads and security policies require. We don't have to expose everyone to the same physical topology, which may be very complicated and full of details completely irrelevant to the users, and the virtualization can be implemented entirely at the edge.

Now we're getting to the interesting part: how does all this change the roles of software and hardware in the network? A long time ago, probably way before many of us actually used networks, this is how network elements were built: a CPU both implementing and providing the control plane as well as doing the packet forwarding. That's all there was.
Then, as we know, networks became more popular, traffic volumes started to increase, and we had more and more packets to forward. At that point we realized we had to do something special with the data plane, and we went totally crazy optimizing it: ASICs and all that crazy stuff. But that was history, and now I claim we are moving to a third phase: software is again taking hold of part of the data plane. Not the whole data plane, just the edge. What makes this possible is obviously that x86 is becoming faster and faster, but also the fact that in hypervisor environments you have to do the forwarding only for the VMs you host locally; you're not trying to replace the highly aggregated switches within the network with x86, not at all. But it also means that, because x86 now provides more functionality at the edge of the network, the rest of the network can become simpler.

This gets us to a very clear division of roles, becoming fairly obvious at least in data center networks: software provides all the semantics, all the policies, all the interesting stuff at the edge, and hardware provides the high-bandwidth packet forwarding. In some sense this is a perfect marriage: hardware is doing exactly what it's so great at, and software is doing what it's great at, namely adapting to user requirements that evolve quickly. I've seen software vendors release new versions or new features for a software data path every month; you can't think of a hardware vendor that would be able to do that with ASICs. In some sense this is just a modern version of the end-to-end principle: we are removing from the middle of the network functionality that doesn't have to be there, as long as having it at the edge is enough for the overall system.

Obviously, at this point quite a few people in the audience are still thinking that software forwarding never worked and never will. But just to repeat: this is different. We are not trying to replace any highly aggregated, high fan-out elements with x86; they are still there. The key thing is that they can become simpler, because they no longer have to worry about providing the semantics for the users. And as I said, software forwarding is more like a tax on the hypervisors: you use just a small fraction of your CPU resources to forward packets for your local VMs, that's all.

This has some fundamental implications; in some sense the priorities are getting turned upside down. Today, in the classic design process for network hardware and software, the data plane, the ASICs, the cost of the die, drives the overall design completely, and it's always the control plane that has to adapt to the restrictions on the hardware side. Whatever little resources your hardware is able to provide, you have to somehow work around those limitations. The practical implication is that the overall system becomes more complicated, because the control plane has to take the pain and implement complicated protocols to deal with, for instance, the tiny labels you have in the packets, because ASICs and TCAMs are expensive and can't match on arbitrarily large labels.
If, instead, you use software at the edge to provide all the network semantics, it's completely the opposite: you can start from designing the network control and the requirements you have from the users and the overall management, and then rely on the fact that the software forwarding below can provide all the features you might need to realize the network control abstractions you implement in the control plane. The nice part is that this results in a simpler system overall. Before, the ASICs were simple but the whole system was more complicated; now the whole system is simpler, and while the software forwarding at the edge might be involved, it's a local problem, not a global distributed problem.

So that was the divide between hardware and software and how it's changing, but I think this has even larger implications for networks as a whole and for how we understand and reason about them. If you look at networks today, they are basically a collection of different control planes. At the bottom level you have all the L2 protocols, the spanning tree protocols and their variants, plus some wholesale replacements for those protocols. On top of that you build the intra-domain routing protocols, and not only do you have quite a few of them, they also have to be designed and implemented so that they interact properly with the protocols below. At both of these layers you probably have your own implementation of every single protocol, just in case. Then, as the scope of the network gets wider, you have more and more protocols interacting with the lower-level protocols, and finally you have to interface with BGP. Overall, this whole stack, as discussed, becomes very complicated to reason about; it's just a random collection of protocols, in some sense, that we have gathered over time.

Why is this relevant? Well, using the structures just discussed, I can construct exactly the same sort of hierarchy, with roughly similar properties, just by applying the same principles, the same abstractions, multiple times. While doing that I would probably save quite a bit of code as well, because I wouldn't need a custom implementation for every single layer. And the point is not only that I might have removed the protocol specifics and all these different implementations; this could also make the overall system much simpler to reason about, because it's no longer a random collection of different techniques but something built on the same principles applied multiple times.

Obviously, I'm not arguing for the replacement of BGP here; BGP is just the example. My point is not so much that this would be exactly the approach to use to implement a hierarchical, global-scale control plane with SDN. My point is that there is an alternative to the current approach of using traditional distributed protocols, and this alternative doesn't require us to design new protocols: we don't have to approach the problem from the protocol point of view, but from more of a distributed systems point of view. It's all about applying general, well-established principles, instead of coming up with specifics for every instantiation of the same problem.
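To illustrate what "the same abstraction applied multiple times" might look like, here is a minimal sketch; the names and structure are mine, purely for illustration. A virtual switch is implemented either by real equipment or by lower-level virtual switches, so sites, metros, and regions are all instances of one construct.

```python
# One construct, applied recursively: a virtual switch either is a leaf
# (backed by real equipment) or aggregates lower-level virtual switches.
# Higher levels never see the details below. Names are illustrative.

class VirtualSwitch:
    def __init__(self, name, members=()):
        self.name = name
        self.members = list(members)   # lower-level switches; empty if leaf

    def leaves(self):
        if not self.members:
            return [self.name]
        return [leaf for m in self.members for leaf in m.leaves()]

site_a = VirtualSwitch("building-A")
site_b = VirtualSwitch("building-B")
metro = VirtualSwitch("metro", [site_a, site_b])   # also a failure domain
region = VirtualSwitch("region", [metro])          # same interface again
print(region.leaves())   # ['building-A', 'building-B']
```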
The networking community tends to think that networks are somehow very unique, different from the rest of systems. But as we know, if you have written any sort of code on NOX, the SDN controller platform, you actually don't need to know that much about any of the protocols, and I claim that larger networks are not an exception. They are just distributed systems, and you have to apply the standard principles of considering failure domains and locality, and, perhaps a bit uniquely for networks, you also have to consider the policies. But designing this sort of wide-area, almost global network control system doesn't have to result in the massive pile of protocols we have today.

This also implies that if we take away all these protocols, we as a community have removed quite a bit of the magic from our networks; the secret sauce disappears, and in some sense protocols are no longer the asset we carefully maintain and that provides the value for us. But if we do that, I claim we are actually making huge progress from the science point of view, because it implies that in the end networking has some well-defined principles and constructs and structures that you can apply in a principled manner, without relying on the mixture of protocols we have today, and which, by the way, we still teach all the students, I think.

So, time to summarize. As discussed, I think the real value of SDN is the opportunity for us to consider, or reconsider, the structure and design of networks. Fabrics and network virtualization I gave as examples; they are also designs that are becoming almost the default when this stuff is deployed in the industry. These structures, I think, are on their way to changing the divide between software and hardware once again in the history of networking, and it's obviously the virtualized environments that are driving this change; as virtualization spreads, the classic constraints become less and less relevant to the practical requirements. And, surprise surprise, it's not network product developers who write and develop these systems; it's standard systems developers who implement and maintain these products. In my opinion that's exactly what the "software" in SDN stands for: implementing both the forwarding and the control plane in a way that frees you from the classic way of building networks, with all the constraints of the hardware and of the distributed protocols. Networks could instead become standard software systems, where you use standard software development practices and principles to make the system simple, modular, and very flexible. That's all I have, thank you.

When you were talking about the fabric and edge separation: what, in your mind, is the boundary between fabric and edge in other kinds of networks, like residential access, wireless, or cable? There is an edge where there's a lot of flexibility, but it also tends to be an aggregation point. In your view, is that still the fabric, or is that the ideal edge?

I think you can apply the same idea in the residential case; perhaps the edge is at the first hop, not at the home but, say, at the provider.
I don't think there needs to be a strict boundary, a strict rule about where that first hop, so to speak, should be; it depends on the kind of environment you have. For instance, with the network virtualization solution we are building, we can integrate remote enterprise sites. In some sense we are just placing the edge at the remote site: they have all their existing legacy networks there at the site, but we provide an appliance that integrates their physical network into the network virtualization solution, and then the edge is actually that device. You still have the physical network beyond it.

That makes sense, but it tends to be that there's also a heavy amount of aggregation happening at that point. You were saying the edge doesn't need to be an aggregation point, so it makes sense to push the edge all the way up to that first hop, but there's also heavy aggregation there.

I would argue that you can build those aggregating networks, the ones that provide that connectivity, in a very simple manner, so that they just funnel traffic into the device that provides the boundary. Meaning, I'm not claiming that x86 has to be the element that terminates the high number of connections: you would design highly aggregated links towards an appliance that provides the processing, without the high number of connections actually terminating at that element.

But isn't that what you were saying? That edge point is still software, and the software has to run on a platform, and you said the platform is x86. Yet, as was pointed out, that point in the network is a high aggregation point, so it implies vast amounts of computation. How do you reconcile these two things?
I'm saying that you don't have to have a huge number of cables terminating in the appliance that has the processing power, so at least you can separate that problem away: you can have, say, a cluster that provides the necessary number of CPUs and processing power.

That's a heavy cost that you need to consider, though.

If you really want to do it that way, it's a function of the population you have behind that device or cluster, definitely. But you also have to remember that you are removing some functionality from the network, and the network becomes simpler. So in the total, global scheme of things you are just moving things around; it's not necessarily that you are adding more resources globally. You are adding more computational resources to that element, that cluster, but the network elsewhere gets simpler.

You've replaced inexpensive, high-performance ASICs with expensive, low-performance CPUs, for the sake of flexibility.

The modularization example you gave is the separation of the edge from the fabric, and how policies can be implemented independently of what's happening in the fabric. But what if your policy is driven by the state of the fabric? For example, say you want to do some sort of traffic management, rate control at the edge, based on how congested one particular node in the fabric is.

If you really want to optimize in those terms, then the fabric has to expose some hints, and you can't have the sort of total decoupling I described; the example I had in mind was that the fabric provides full bisection bandwidth and that's all it does. But even in that case, what you would do is that the higher layers would probably provide some QoS classification bits to the lower level, to the fabric, so the fabric is able to do the queuing properly. Again, the picture I painted was an ideal; QoS, for instance, is something you just can't avoid, so you have to expose some information to the lower level.

Can I just respond to that? The way I think about it is: the fabric can have different levels of service that it exposes to the edge, and the edge can attach policies saying, for this traffic I need this level of SLA through your fabric, and then the fabric can do automatic rerouting.

The worst case, unfortunately, is that in some very exotic use case you can't aggregate these QoS policies, and you might end up with a huge number of traffic categories in your fabric. But whether you build it as a dumb fabric or as a fully traffic-engineered fabric, it's basically full bisection bandwidth, and the QoS bits are the only extra piece you provide for the fabric to do the proper thing for QoS.

All of this maps easily to the data center context, but when you try to exercise the mapping onto all the different types of environments out there, that goes back to a bigger question.

I definitely can't claim it all carries over. Students at Berkeley have done some work on trying to map this sort of hierarchy onto larger topologies, but even they haven't considered all the details; it was more about the bandwidth savings, basically.
Just to point out the kind of pushback I get: it's not really just the data center. When the two ends are controlled by the same organization, a lot of these things fall into place. Internet to a data center won't work, for example; but data center to data center, backbone to backbone, where you control both ends, then assumptions like fabric-style connectivity or very nicely traffic-engineered paths do hold. If you can remove some of the intelligence from the network into a different mechanism that controls the flow of traffic across the path, then as long as you control both ends you can do a lot of things.

So you get high-level traffic management, almost. It's pretty interesting, though: what I see is, first the switch moved into the server, and then came the protocols like VXLAN, NVGRE, and STT, moving this processing back into the NIC. What are your thoughts about that? It looks like it goes in a cycle.

I actually have a very strong opinion about that: you shouldn't fix these protocols within the NIC; you should just provide primitives within the NIC that accelerate the encapsulation, and you can actually do that. Even the modern NICs today, the high-end NICs at least, have mechanisms that let you define offsets; they don't really understand protocols, you just have almost a library of primitives you can use. I think that's the way to do it, because then you don't get bound to the hardware and its development cycles; the development cycles stay with the software, and the control remains completely in software.
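To illustrate the kind of offset-based primitive being described, here's a toy sketch; the descriptor layout and function names are made up, not a real NIC's interface. Software programs a byte template and field offsets once per tunnel type, and the per-packet operation is a blind byte splice with no protocol knowledge in the hardware.

```python
# Toy model of protocol-agnostic encapsulation offload: the "NIC" gets a
# header template plus offsets of the fields to rewrite per packet, so a new
# tunnel format needs new software, not new silicon. Purely illustrative.

def build_encap_descriptor(header_template, field_offsets):
    # Programmed once per tunnel type by the software data path.
    return {"template": bytes(header_template),
            "offsets": field_offsets}            # e.g. {"vni": 46}

def nic_encapsulate(descriptor, field_values, payload):
    # What the hypothetical NIC primitive does per packet: splice bytes at
    # the given offsets; it never interprets the protocol.
    hdr = bytearray(descriptor["template"])
    for field, value in field_values.items():
        off = descriptor["offsets"][field]
        hdr[off:off + len(value)] = value
    return bytes(hdr) + payload

desc = build_encap_descriptor(b"\x00" * 50, {"vni": 46})
pkt = nic_encapsulate(desc, {"vni": (1234).to_bytes(3, "big")}, b"payload")
```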
You mentioned that network programmers have to become distributed systems programmers. As a programmer, I was wondering: what are the constructs or primitives you think are most important for networking people to learn in order to move into this distributed systems world? Obviously there's the database and consistency stuff and all the useful classic distributed systems literature.

I think it's more about the mindset, actually. For instance, of the people I work with, none is a classic network engineer; they all have an operating systems background. Interestingly, for those folks it's easier to move into this domain of SDN than it is for most networking folks, because networking folks always carry the baggage of the packet, whereas the operating systems folks, the classic systems folks, perhaps don't know all the details of distributed databases and that sort of technique, but at least they don't have that baggage; they have more of the correct mindset for the overall system. Unfortunately I can't give you a top-ten list, though.

Are you looking for something like what I would call SDN 2.0, where you can implement all these nice features that solve the problems you described in the beginning, like reliability? Or do you not really need SDN 2.0: can you do, say, reliability and replication of the controller using standard techniques in SDN 1.0?

The funny thing is that we never think of ourselves as building SDN networks; we are just building systems, and we are solving all those problems I mentioned. I can't go into all the details of that work, but we don't think of it under the umbrella of SDN; we are just building software systems that solve these problems, and the fact that we call it SDN is almost more for external communication. I don't know if that was a response to your question.

Let's say the problem of reliability of the controller: you solve it in the standard ways, say you have replicas and some distributed protocols, so you don't really need to redesign whatever the protocol was?

Sure, sure. In some cases we are building everything from scratch; we control the vSwitch, we control the cluster, so it's just software for us. We don't have to be bound by any of the OpenFlow or SDN assumptions; it's just software for us.

In addition to being a protocol, OpenFlow is sort of an abstraction for switch functionality, but you've argued that in the core MPLS gives us a good enough abstraction, and that at the edge we don't need a protocol, we just need an API. So my question is: what's the correct switch abstraction for both the edge and the core, and what's the API to it?

When it comes to the pure fabric, I actually think you can do pretty nicely with existing tools, the IGP protocols; you usually need a very good reason to go beyond that. All those protocols are fine: they do exactly what you expect, they are stable, and they have been debugged pretty nicely over the past 20 years. When it comes to the switch interface at the edge, it gets more interesting, and there are two extreme viewpoints here. Some argue that it should be almost like x86: you download code, you download a VM, and that VM is the interface, almost. The other extreme is that you improve OpenFlow, the protocol, enough to be flexible enough to let you handle failures properly and do all the operations you might need to support network virtualization. I won't comment on where we are within that spectrum, but I think those are the endpoints, and it's a delicate balance, in the sense that x86 is easy to say, but even if you download x86 code there, you probably still have some protocol to the appliance. That protocol, though, is very application-specific; it relates more to the problem you're solving, so it seems less like a protocol. Perhaps it's not a protocol in the OpenFlow sense; perhaps it's more of an RPC kind of thing, because it's definitely more application-specific.

Any other questions? Okay, well, let's thank our speaker.