So, I guess we can start. Thanks for being here. I'm François Le Marchand, responsible for SDN product management at Ericsson, and I will present today with Alain Cavana, who is responsible for most of the OpenStack architecture and cloud architecture at Ericsson. In this presentation, the ambition is to cover this virtual enterprise gateway architecture and what we mean by that, because there are of course many different proposals, blueprints, and product sketches of what it could be. This is really Ericsson's vision of how we are building it and how we think it should be built. We also want to open the discussion on what we have to focus on in OpenStack and what the impact is on the OpenStack architecture; Alain will go deeper into that on the technical side. First, we thought it would be good to start with some basics. Sorry, I know some of you in the room will be very bored, but it's a very mixed audience with a lot of people from enterprise and so on, so I wanted to touch base on our definitions of SDN and NFV, because we are going to use those across the rest of the slides. For SDN, I'll spare you the whole architectural slide, but basically it is splitting the control plane from the forwarding plane, or at least that was at the core of it initially. It means, for instance, that you have an SDN controller that runs the control plane for a network, and the nodes are simply forwarding elements subordinate to that SDN controller, so the whole system acts as one logical box. Of course, SDN is now much more than that: the definition has been extended, including a lot of overlap with NFV, and I'm going to touch on that. And the value of SDN is much more than simply splitting out the control plane; it's also about the API extension and the flexibility you can bring to the network.
So there is much more to SDN, but at the core it is really that. If we look at the early use cases for SDN, including the data center, it was really that: a controller using OpenFlow to drive an Ethernet switch. That has been one of the first and core usages of SDN. So what's the relationship with cloud and OpenStack? Well, again, the first application has been to let the SDN controller drive the underlay, the network infrastructure of the data center, the Ethernet switches. The role of SDN, why it's software defined, the value it brings, is that you get a network that can be reprogrammed very dynamically, using an API; a Neutron API northbound of the SDN controller could be one way to do that. This allows the cloud, when you dynamically create virtual networks, virtual machines, and so on, to drive the dynamic reconfiguration of the network underneath. So it's basically SDN driving physical boxes to implement dynamic networking on behalf of the cloud's demands. And those little floppy-disk icons, I get a lot of bad press for them, they should upgrade to USB sticks at least, are supposed to be software images, so VMs; that's the meaning of that icon. So those VMs pop up, and then we automate the connectivity. Then what about NFV? Things start to get a bit more complex when we intersect cloud and SDN, when we bring the actual network functions and the actual virtualization together. At this intersection there are basically two main use cases, and I drew this dotted line to separate them. First, you may have virtualized network functions in the sense of taking a physical switch and virtualizing it. But that is part of the infrastructure; that's your vSwitch, your vSwitch when you build an overlay.
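To make that "cloud event drives network reconfiguration" loop concrete, here is a minimal, purely illustrative sketch in Python. It is not any real controller or the Neutron API; all class, event, and field names are invented. It just shows the shape of the idea: a northbound event (network created, VM booted) is translated into flow entries pushed toward forwarding elements.

```python
# Illustrative sketch only: a toy "SDN controller" turning cloud events
# (as a Neutron-style northbound API might deliver them) into flow entries.
# Every name here is hypothetical, not a real product or library API.

class ToySdnController:
    def __init__(self):
        # One flow table per switch: switch_id -> list of match/action rules.
        self.flow_tables = {}

    def on_network_create(self, switch_id, vlan_id):
        # Isolate the new virtual network by tagging it with a VLAN.
        rule = {"match": {"vlan": vlan_id}, "action": "forward_in_vlan"}
        self.flow_tables.setdefault(switch_id, []).append(rule)

    def on_vm_boot(self, switch_id, port, mac, vlan_id):
        # Steer traffic for the new VM's MAC onto its network's VLAN and port.
        rule = {"match": {"dst_mac": mac, "vlan": vlan_id},
                "action": f"output:{port}"}
        self.flow_tables.setdefault(switch_id, []).append(rule)

ctrl = ToySdnController()
ctrl.on_network_create("tor-1", vlan_id=100)
ctrl.on_vm_boot("tor-1", port=7, mac="fa:16:3e:00:00:01", vlan_id=100)
print(len(ctrl.flow_tables["tor-1"]))  # 2 rules: one per event
```

The point of the sketch is only the direction of the flow of intent: cloud event in at the top, forwarding state out at the bottom, with no manual switch configuration in between.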
And that can be controlled the same way, by the SDN controller, similar to the underlay. So you have an either/or model, or actually many operators now go for both models at the same time; they can be quite complementary. But that's not really the definition of NFV, because NFV, being introduced by carriers, by operators, and defined at ETSI, the goal was not simply to make the data center infrastructure more flexible. It was really about looking at the existing IP domain of the operator, meaning all those routers and firewalls and BRAS and gateways, the boxes that run the physical operator network today, and looking at a way of virtualizing them. Virtualizing basically means running on x86; I think that's the main intent. Running on general-purpose hardware, on dynamically orchestrated networks, so that those resources can share a pool of hardware and compute resources, can be spun up and spun down dynamically, and run on more generic hardware to reduce cost. Actually, it's more about increased flexibility than cost reduction. So what does it mean? In that case it is also the virtualization of a network function, but it's not really part of the data center infrastructure; it's part of a hosted application. Your VNF is more like a hosted application that happens to be itself a network application. Looking at all the functions within Ericsson and the different use cases for SDN, you can look at the middle: that's basically the data center, the SDN controller for the data center that we call the Cloud Network Controller. That's our SDN controller. Then the data center infrastructure, and in between a portfolio of NFV services. I color-coded those services because there are two main domains in our view.
There are services that are more network-centric per se, focusing on getting connectivity to the end subscribers. They could be virtual routers, like an enterprise PE providing VPN services. They can be gateways, meaning they have a notion of subscriber termination, whether it's a mobile gateway, a fixed-line gateway, or a BRAS. And then we also have more IP-level services: all those typically bump-in-the-wire services that you insert into the path of the traffic in order to create new services, like security or NAT or video optimization, that whole class of services. The reason I color-coded them is that the role of SDN, on one side, could be to drive or configure those VNFs dynamically, because those VNFs are not part of the data center infrastructure but part of the operator infrastructure. So it's a layered approach in that model, and of course there is a case where the SDN controller's role is actually to talk to the NFV layer. But in this presentation we really focus on the SDN controller configuring the infrastructure in a data center in order to connect those VNFs efficiently. So it's SDN as a driver of the infrastructure, providing services to the NFV functions. Concretely, what does it mean? If you run a virtual gateway, a virtual router, and so on in your data center infrastructure, the first thing you need is, of course, to be able to send traffic to the internet, and that's fairly basic in a typical data center design. But you also need to collect traffic from your access, from your WAN, from your metro network, and so on.
And so the first need for an intelligent SDN controller in the data center is to be able to interwork in a seamless, efficient way with the existing WAN, with the existing aggregation and core network equipment of the operators, which means that the SDN controller needs to speak, for instance, IP and MPLS natively. So that's one of the use cases: not only basic connectivity inside and across the data center, but also interworking with the existing IP WAN domain. The second use case, and this is why the slide was color-coded, because the first use case pretty much pertains to getting traffic in and out and connected to those VNFs, including very simple functions that you do today on a router, like load balancing, availability, and redundancy. The second part is more: OK, what about the services? What's special about those other VNFs, the firewalls and so on, that are service-oriented? What's special is that, at the most basic level, you could build service chaining by bolting different VM instances together using VLANs or virtual networks. That's the most basic approach. But given the nature of those services, there is actually a lot more you can do, and that is being able to identify, for instance, a specific subscriber class. So it comes back to policy management. It could be enterprise policy management, defining different groups of users in the virtual enterprise network. Or it could be broadband access policy management: what is the class of the user, what are their quotas, what services are they eligible for? Based on that policy management, the SDN controller can then build a chain in a more granular, more dynamic way than you would with a simple VLAN cross-connect.
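The policy-driven chain idea can be sketched very simply. This is a hypothetical example, not any real policy schema: subscriber class names, service function names, and the lookup shape are all invented for illustration.

```python
# Hedged sketch: selecting an ordered service chain per subscriber class
# from a policy table, instead of a static VLAN cross-connect.
# Class names and service names are invented examples.

POLICY_TABLE = {
    "bronze":           {"chain": ["nat"]},
    "gold":             {"chain": ["firewall", "url_filter", "nat"]},
    "enterprise_admin": {"chain": ["firewall", "ips", "nat"]},
}

def chain_for_subscriber(subscriber_class):
    """Return the ordered list of service functions for this class."""
    policy = POLICY_TABLE.get(subscriber_class, {"chain": ["nat"]})
    return policy["chain"]

print(chain_for_subscriber("gold"))  # ['firewall', 'url_filter', 'nat']
```

The granularity argument in the talk is exactly this: the key into the table can be a subscriber class rather than a whole site, which a VLAN cross-connect cannot express.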
If you look outside the data center, because what I described was really the vision inside the data center, there are other use cases for an SDN controller. The same way an SDN controller can drive the physical data center infrastructure, like Ethernet switches, for instance using OpenFlow, that SDN controller can also be applied to change the way operators build the aggregation network: do the same thing, centralize the control plane, use simplified hardware forwarding elements in the aggregation, and drive the whole aggregation from a more central computing point. That's one model. We call it transport, but we mean transport in the sense of next-gen transport, which means IP plus optical: packet, which today is IP MPLS, plus optical. Not necessarily transport in the sense of the more legacy TDM, SDH, and so on, of course. That transport controller could be seen in that model, which is the more extreme SDN case where you really centralize all the control-plane intelligence. But there are also many transport use cases where there is not necessarily a strong drive or benefit in trying to remove the intelligence from the existing boxes; it may not be feasible in certain circumstances, like in the core network. In that case, what the SDN controller can do, being at the network level, is have a more holistic view of the network requirements. Having APIs, it can be connected more tightly to, for instance, the orchestration layer. So it knows about VMs, about applications, about requirements, about policies. It has much more intelligence about the traffic carried in the network than a standard router would, because the router lacks those types of interfaces.
In that model, the SDN controller knows, for instance, what the application needs, knows how the network is set up in terms of topology, and can then be used to instruct your existing routers how to drive the traffic in a more optimal way inside your network. The final point I wanted to make is that all of this is really IP-centric, and of course there are also many SDN use cases, and that's why you saw this rainbow coloring, that involve optical. That's another activity we're engaged in, through OpenDaylight but also through a new partnership with Ciena: enhancing that network controller, that SDN controller, to be multi-layer. So it controls the IP layer but also the optical layer, so that we end up with an optimal, coordinated setup between those two network layers. Now, coming to the core of the topic, sorry, it's a bit of a long introduction, and we will get back to that end-to-end vision at the end: what is the virtual home gateway, or what is the virtual enterprise gateway? I think it's a very good use case because it's really end-to-end from that perspective. The initial case for the operators was the maintenance and management of those home routers. Depending on the region of the world, some people just buy those routers directly at Best Buy or wherever, separately from the operator, but more and more, as operators want to drive new services like Wi-Fi and so on, they were driven to provide you with that box as part of the subscription, so that they can deliver more advanced services. Then, of course, there is a cost to that: the cost of maintaining those hardware elements. And when you talk about millions or tens of millions of pieces of equipment, it becomes very complex to manage. Managing upgrades.
Software upgrades are already a pain, and hardware upgrades become very difficult; you don't want to do truck rolls to tens of millions of locations. That's really crazy: it costs a lot of money in support and so on. So the pain points were really the monolithic software. You buy a software stack for your physical CPE, and if you want to add services on the fly it becomes more and more complex, because that whole software is entangled. Also, those boxes have to be cost-optimized, so they typically don't have infinite resources in terms of CPU and memory, and sometimes it's difficult to insert new advanced services. That has been the driver for the first virtual home gateway concepts. Initially it had nothing to do with cloud, nothing to do with SDN; it's a concept that is probably five years old now. The model was more network-based: take the intelligence out of the CPE, out of the customer premises, and move it to the network. Move it back to the network, because then it's much more flexible and you alleviate all those support problems. Another driver is that if you look at a typical gateway or router, you get one public IPv4 address and then some NAT. So for the operator, the type of policies you can apply to the end user is based on your home, on your broadband subscription, not on the individual device or the person behind the device. On mobile it's very different, because you can more or less correlate the subscriber to the device, to the session, so you can do a one-to-one mapping between who the subscriber is and what policies you want to apply. You're missing that on fixed lines. You could say, so be it, there is no use case; and for the most part, people were really happy with that. But there are more value-added services you can deliver.
But then there have been more and more use cases where you may want to differentiate the QoS across different devices at home: one doing video streaming on Netflix, one doing web browsing, one doing software updates. There have been more and more use cases around fixed-mobile convergence, say offloading your phone or your 3G tablet through Wi-Fi. Offloading through Wi-Fi is nice, but when you talk about community Wi-Fi, which is more about roaming, how do you isolate that given device at home from the rest of the devices and handle it efficiently? This is where the virtual gateway is not only moving intelligence back into the network; in the process it simplifies the aggregation method, so that you now get layer-two tunneling between the home domain and the network domain. Then you can do much more functionality per device, per subscriber. You also get access to certain functions, like UPnP, that do not leave the home boundary today. Now UPnP frames and so on can be relayed down to the operator network, and you can deliver new value-added UPnP services: a UPnP proxy, easy discovery of video resources, and so on. So there are many use cases uncovered by this one, beyond the QoS control, beyond the mobility, beyond all those functions. So what about enterprise? What's the difference between the virtual home gateway and the virtual enterprise gateway? Today, when we talk about enterprise and cloud, the obvious use case is that operators, or cloud providers in general, can host enterprise IT services. They take the enterprise databases, backend systems, servers, and move them as virtual payloads into data centers.
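As a toy illustration of the per-device QoS differentiation just described, here is a hypothetical sketch. The class names, device profiles, and priority scheme are all invented; real gateways would derive this from policy and traffic classification, not a hard-coded table.

```python
# Sketch, not a product API: once home traffic is tunneled at layer two to
# the operator, each device can get its own QoS class. Example values only.

QOS_CLASSES = {"video": 1, "browsing": 2, "software_update": 3}  # 1 = highest

def priority_for(device_profile):
    # device_profile: what the device is mainly doing right now.
    # Unknown profiles get a middle-of-the-road default.
    return QOS_CLASSES.get(device_profile, 2)

devices = {"tv": "video", "laptop": "browsing", "console": "software_update"}
ranked = sorted(devices, key=lambda d: priority_for(devices[d]))
print(ranked)  # ['tv', 'laptop', 'console']
```

The contrast with the single-public-IP, NAT-only model earlier is that here the lookup key is the device, not the household.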
And that has been the main, the core use case for cloud and enterprise, especially for operators. But that still leaves a lot of network functions in the enterprise, deployed either by the enterprise itself or by the operator, in additional elements or in the CPE. And here the problem is worse than at home, because there are more services, and enterprises are pickier about the quality of those services, the type of services, the brand of firewall they want. So the drive to simplify this, for better manageability, faster provisioning times, and cost reduction, has been even greater for enterprise than it was for home. The model is the same: not only do you virtualize and host the enterprise IT services, but all the networking functions that were operated by the enterprise, by a third party, a managed service provider, or by the operator directly, are moved in that model into the cloud. So it's a very similar model to the residential one, and the gains are fairly similar as well, even though there is even more drive for enterprise, because more complex services and more personalization are required there. The difference really comes down to two main factors. The baseline is the same, but an enterprise will typically have very different policies from one site to another, from one group of users to a subgroup of users. Some enterprises will want a very specific brand of firewall, for instance. So there will be a lot of difference between two enterprises, or even two sites of the same enterprise. And the enterprise wants very tight control: the enterprise administrators, the security administrators, want very tight control over those services, so they want to know a lot about analytics.
They want to know a lot about security, get visibility, and get fine-grained control over the type of ACLs and policies to apply. So there is a big difference, and it is really at the orchestration layer. When we talk about orchestration, it doesn't necessarily mean doing that part in OpenStack; it can be a service-level orchestration driven and operated by, for instance, the business people inside the operator, but it needs to interface with OpenStack in order to apply some of those services or drive some of the changes in the underlying network. The second difference is that now we are talking about enterprise applications. We can tunnel those enterprise applications from a CPE down to the cloud, but then there are the other sites. Typically you will do this for the smaller sites initially, because the larger sites will more probably still have a small data center, a server farm, local resources. So you don't do it for all sites, and the sites that are centralized in the data center of course need to communicate with the rest of the sites; you have any-to-any types of applications and so on. For that model, you need IP VPN connectivity, and of course you can have different forms of VPN, IPsec and so on. But there is a very strong drive to say: not only do you use those services in the data center, but they natively interwork with your IP VPN instance in the WAN. That's the second aspect. Now, there are different ways to implement it. You say, OK, I move my services to the cloud, and there are two main ways to do it. There is the simple or basic model, whatever you want to call it; it's basically the basic NFV. You have software that runs on the CPE; you take that software, you put it in a VM or in a container, and you run it in your data center. Done.
That's very straightforward, and that's the benefit of it. The second benefit is that if you're used to operating your CPE with a given management system, there is not a lot of difference in doing that, so it's quite an easy step to take. But it means two things. First, you have many instances to handle. Not saying that's a problem, there are ways to solve it technically, but if you have a million CPEs, you have a million VMs or a million containers, or 10 or 20 million. That still has an impact on your design and operations. Second, you're not getting the full benefit, because as I was saying, you have a monolithic software stack. You buy from Vendor A; Vendor A provides you URL filtering, time-based access, DPI, IPS, a given set of services. Now, if you prefer, for one given enterprise or for other reasons, to change one of those elements, you have to change the whole stack; you can't decouple those services. So our model is rather this: if you want to reap the full benefit of virtualization, you should not do the simple one-to-one mapping, one virtual CPE to one VM. You should do it with SDN and NFV, meaning that, in the end, the CPE is a set of services chained by proprietary mechanisms inside the box. If you move those services into the cloud, you can replicate the behavior of the CPE, but with a different set of software. All those pieces of software can come from different vendors, can live in different VMs, and can be daisy-chained, or chained in a different way, in order to replicate the same behavior as the CPE. But then it's much more flexible, because you no longer need a one-to-one model.
You can say, for instance, that your NAT instance is one big NAT instance handling 10,000 customers; you don't need 10,000 individual NAT instances. And for NAT it makes a lot of sense, because you may want to share the same egress IP address across multiple residential subscribers, so you have no choice but to share, by definition. For other things, you can still use containers for certain services and do one instance per subscriber, but at least you have the choice over the lifecycle of that service and what type of service you want to deploy. We see both models in production, of course. The second model is a bit more complex, but it gives you much more flexibility in the long run, so that's our target architecture. I was mentioning that you need to chain those services. In your CPE you may be doing a bit of DPI for QoS assurance, prioritizing some business-critical HTTP traffic over other HTTP traffic; or the security functions, URL blocking, policies; or NAT functions. Those are the typical functions you find. You will want to build chains, depending on the CPE and depending on the operator, that have one or more of those elements, from one or multiple software versions, or from one or more different vendors that you can offer in your catalog. So you have to build a number of chains, and just moving to the cloud, even without very advanced or complex chaining, is already much, much better for the operator than today's physical boxes.
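The sizing argument for the shared-instance model can be worked through in a few lines. This is arithmetic on the example numbers from the talk, not a capacity-planning tool; the capacity figure is the illustrative 10,000 subscribers per NAT instance mentioned above.

```python
# Illustrative arithmetic for the shared-instance model: N subscribers need
# ceil(N / capacity) instances, rather than N one-to-one VMs or containers.

import math

def instances_needed(subscribers, capacity_per_instance):
    return math.ceil(subscribers / capacity_per_instance)

# 1:1 model: one million CPEs -> one million VMs to operate.
# Shared model: the same million subscribers on 10,000-subscriber NAT
# instances -> only 100 instances to operate.
print(instances_needed(1_000_000, 10_000))  # 100
```

That four-orders-of-magnitude reduction in managed instances is the operational point behind choosing a shared NAT over the one-CPE-one-VM mapping.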
Because if you need to build, say, 50 different service chains today, it's very hard, because it means dealing with physical boxes. Sometimes you see the box marked with a red square: it means you don't want that many box instances; it becomes very hard to predict how much traffic you will send to a given box, so all the traffic planning is very hard. It means you have to configure the boxes differently to isolate one subscriber from another. There is a lot of complexity in doing that in the physical world. In the virtual world, of course, spinning up a VM has a very low cost, so it's much easier to build a service chain. Also, you don't have to hand-build and bolt together those cables, those VLANs, all that switch configuration; it's all automated through OpenStack and through the SDN controller. So there is a lot of benefit in using cloud, but we believe we need to go beyond that to get the full benefit. Because even with that model, you still have issues with dynamicity, for instance. Say a site is currently connected to the chain on top, the one that has everything, the whole enchilada of services, and then the administrator decides to disable one of the services: I disable the URL filtering on that chain. You might say that's pretty easy: you have some function that steers that given IP address, that given traffic, to a different chain. But then what happens? You have built stateful entries in your NAT, your firewall, your DPI; you had layer-four state and so on. You switch the chain, you go to a different VM instance, you lose all that state, you break the connectivity. So it's not so easy in practice to get that level of dynamicity.
Also, it's very easy to say that a site goes to a given chain. But when it comes to that CPE going to that chain, or that group of subscribers going to that chain, or that given device, when you're on your cell phone going to Netflix at this time of day and you actually have to bypass one given element of the chain, that's not something you can provision through static policies; it becomes impossible. You need much more dynamicity and much more granularity. So the way we're doing it is to rebuild that service chaining completely controlled by SDN, so that SDN controls at the flow level, and the flow can be anything you want. It can be a VLAN, so site R goes to chain A, very simple; or a given device; or a group of subscribers; or a set of destination IP addresses. So classification could be per subscriber, per device, or per group, or per destination: does the traffic stay inside your own autonomous system, or does it cross the boundary? Because then you may have different security policies or different charging policies for that domain. It could be OTT-specific, with certain limitations of course, in terms of awareness and URLs, but you can already do a lot by figuring out where the traffic is going and applying different policies. And finally, it can be per application: you can see that this traffic is a video flow under congestion, say in a congested cell, and therefore decide to redirect it to a different instance on the fly. This can be quite handy in a number of cases. There are many use cases, and of course we are not going through each and every one, but here are some examples related to things like security prevention.
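The flow-level classification list above (VLAN, device, subscriber group, destination, application) can be sketched as an ordered rule table where the first matching rule selects the chain. This is a hedged illustration; the flow attribute names and chain names are invented, not a real classifier API.

```python
# Hedged sketch of flow-level chain selection: rules can key on a site VLAN,
# an application under congestion, or the traffic's destination domain, and
# the first matching rule wins. All field and chain names are examples.

RULES = [
    # (predicate over flow attributes, chain to use)
    (lambda f: f.get("app") == "video" and f.get("congested"), "chain_video_offload"),
    (lambda f: f.get("dst_domain") == "external", "chain_full_security"),
    (lambda f: f.get("vlan") == 42, "chain_site_r"),
]

def select_chain(flow, default="chain_basic"):
    for predicate, chain in RULES:
        if predicate(flow):
            return chain
    return default

print(select_chain({"vlan": 42, "dst_domain": "internal"}))  # chain_site_r
print(select_chain({"app": "video", "congested": True}))     # chain_video_offload
```

Note that rule order encodes precedence: the application-level rule is checked before the coarse per-site VLAN rule, which is exactly the granularity argument the talk is making.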
You detect a DoS attack, you want to provide a clean-pipe service to your end users or your VMs, and you need to block that DoS attack, or some of that traffic, directly from the core. So you use the SDN controller to either detect the attack or block it in an efficient way. It could be based on network conditions, congestion and so on: you change the set of services based on network congestion. It could be about inserting a value-added service on demand, again security-related: you detect a certain type of attack and you want to log, redirect, store, or do more advanced parsing of the packets. It could be simply for offloading, because some of those services have a very high cost per bit. Already, going through x86 has a fairly high cost per bit compared with hardware-accelerated devices, but some of those services do a lot of inspection on top of that. They have bypass functions at the VM level, but it's actually much more efficient to bypass directly at the infrastructure level, where you can say: this is a Netflix stream, it's going to last for an hour. If your DPI has no problem with it, if parental control or policy agrees that you can watch a Netflix film at home or at work at that given time, and all your security and traffic checks confirm this is clean streaming, why would you want to send each and every packet through your video acceleration for the rest of the session? You can use those dynamic controls to offload the traffic and save a lot of capacity: all the long-lived sessions, all the video flows, all the software updates, and there are so many of them these days, can be bypassed from your NFV layer, and you can optimize a lot. Then there are also operational tools.
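The offload decision just described can be reduced to a small predicate. This is a sketch under stated assumptions, not a real DPI interface: the flow fields and the ten-minute threshold are made up for illustration.

```python
# Sketch under stated assumptions: once DPI has classified a long-lived flow
# (say a one-hour streaming session) and policy/security have cleared it,
# the infrastructure can bypass the NFV chain for the remaining packets.
# Field names and the duration threshold are invented examples.

def should_bypass(flow):
    # Offload only flows that are classified, policy-approved, and long-lived.
    return (flow["classified"]
            and flow["policy_ok"]
            and flow["expected_duration_s"] >= 600)

stream = {"classified": True, "policy_ok": True, "expected_duration_s": 3600}
probe  = {"classified": False, "policy_ok": True, "expected_duration_s": 5}
print(should_bypass(stream), should_bypass(probe))  # True False
```

The key design point is where the predicate runs: at the infrastructure (vSwitch/controller) level rather than inside each VNF, so cleared packets never reach the expensive x86 inspection path at all.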
You say, OK, it's great because I get all of that, but the operational tooling is great as well. You can say: I want to test a new version of my service, version 1.1. I create that version 1.1, but I don't want to redirect all the traffic to 1.1, because it's not proven yet. One simple model is to redirect only 1% of the traffic. But maybe you have to be smart about what that 1% is: maybe it's friendly subscribers who are part of the operator or get the service for free. When those people are happy, you move on to your basic bronze residential subscribers, then your gold residential subscribers, then the enterprise subscribers. So you can be smart about the way you load-balance in the network. It's not load balancing based on hashing of IP addresses; it's load balancing based on policy, per subscriber or per enterprise. And then you can easily insert and try out new services in the network. That's what we mean by intelligent service chaining, or the extensions we need in order to get this intelligent service chaining. The second type of enhancement we need in the cloud, I was mentioning this dynamic connectivity to the outside world: of course, you can choose whatever technology you want inside the data center and automate it, and through a DC gateway you stitch that to the external world. But that always comes with a set of complexities in terms of operations, maintenance, troubleshooting, redundancy, and so on. So there is actually a lot of push right now from the operators to say: if we build a cloud for operator purposes, we might as well reuse some of the network technologies. One, we are familiar with them and understand them very well; two, they are very proven and very flexible: they can do point-to-point, point-to-multipoint, layer two, layer three, IPv6, inter-AS, inter-domain, you name it.
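Going back to the tiered canary rollout described a moment ago, here is a hypothetical sketch of the selection logic. The tier names, the rollout order, and the budget calculation are all invented for illustration; a real system would steer flows, not just pick names from a dict.

```python
# Hypothetical sketch of the tiered canary idea: steer roughly a given
# fraction of subscribers to service version 1.1, friendliest tiers first.

TIER_ORDER = ["friendly", "bronze", "gold", "enterprise"]  # rollout order

def canary_set(subscribers, fraction):
    """subscribers: dict of name -> tier. Pick ~fraction of them, in tier order."""
    budget = max(1, int(len(subscribers) * fraction))
    picked = []
    for tier in TIER_ORDER:
        for name, t in sorted(subscribers.items()):
            if t == tier and len(picked) < budget:
                picked.append(name)
    return picked

subs = {"alice": "friendly", "bob": "bronze", "carol": "gold", "dan": "enterprise"}
print(canary_set(subs, 0.25))  # ['alice']
```

Contrast this with hash-based load balancing: the 1% here is chosen by policy (who can tolerate a failure), not by an arbitrary hash of IP addresses.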
And then so it's flexible, it's proven, and they are used to it, right? And by the way, you can optionally, potentially suppress also some level of stitching and so on that creates complexity and creates cost. That means you run natively some IP MPLS down to the data center. And that is another thing that is not necessarily mandatory, you know, connected to the virtual enterprise gateway, but you see it more and more as part of the requirements as soon as you run hosted enterprise services for an operator in a telco cloud environment. So in that model, what happens is that you can keep your existing overlay type of model, you keep your existing SDN controller. The only difference is that the SDN controller runs more like routing protocols. And instead of pushing, let's say, simple MAC addresses or VLAN cross-connects into your v-switch, it will push IP or MPLS FIB entries into your FIB, right? So this whole data center looks like a big virtual PE in that model. And this is what gives you, you know, I mean, first you get for free a number of things, like inter-DC connectivity or inter-operator connectivity. You also get for free interworking with your existing IP VPN environment. That is very important for those types of applications. And then finally, in terms of the value of SDN, or one value of SDN identified for that given, let's say, use case, there is the application-aware transport. I'm not sure it's the right marketing term, but what I mean by that is to say, again, what I was saying before: if you have a transport controller, so you see, like, on both sides you will have two data centers, right? Left and right. And in the middle, you have your IP network, standard routers, nothing about SDN or nothing about cloud, right? This is your standard router network in the middle. There is a way with SDN and with OpenStack and with the right level of orchestration to know more about, you know, who owns that VM, right? Or what is the policy I need to apply to that VM?
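The contrast between the overlay model and the "virtual PE" model boils down to what kind of entry the SDN controller pushes into the v-switch. Here is a minimal sketch of that difference; the field names and entry structures are assumptions for illustration, not any real controller schema.

```python
# Illustrative sketch of the "virtual PE" model: instead of pushing MAC
# entries or VLAN cross-connects into the v-switch, the SDN controller
# pushes IP/MPLS FIB entries, so the whole DC behaves like one big PE.
# All field names here are invented for illustration.

def l2_entry(mac, port):
    """Classic overlay model: forwarding keyed on a MAC address."""
    return {"match": {"dst_mac": mac}, "action": {"output": port}}

def vpn_fib_entry(prefix, vpn_label, next_hop):
    """Virtual-PE model: forwarding keyed on an IP prefix, with an MPLS
    VPN label learned via BGP, interworking natively with IP VPNs."""
    return {"match": {"dst_prefix": prefix},
            "action": {"push_label": vpn_label, "next_hop": next_hop}}

overlay = l2_entry("fa:16:3e:00:00:01", 3)
vpe = vpn_fib_entry("10.1.2.0/24", 30042, "192.0.2.1")
```

The key property is that the second entry type is directly meaningful to the existing IP VPN world outside the data center, so no stitching layer is needed at the DC boundary.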
Or who's that subscriber? What type of policies do I need to apply to that subscriber? So since you know more of it and you can control more of it, you can control the endpoint, right? Being the v-switch, in terms of classification and so on. Then you have the ability now to treat the traffic in a more intelligent way in your network. So what I mean by a more intelligent way, it could be that you have some traffic where you have very good protection, which is like this, you know, so-called, your 50 milliseconds, higher-grade protection time, whatever, like you lose almost zero packets. Or it could be that you have a lot of resiliency, so you can have a router failure, then you can have, like, a fiber failure, then you can have multiple failures and still your traffic is up. This can be a kind of SLA, that could be about bandwidth. It could be a strict bandwidth guarantee when we talk about CoS in operator networks. Most of the time we say, we give you the right marking and with that, at 99.99%, you will be safe, you know, statistically. But some use cases are not safe with the statistical approach. They want to have real guaranteed bandwidth, something like TDM, which makes it a problem for the TDM-to-IP migration for the operators, because you cannot emulate that. And it becomes quite complex. You can do it in IP today, it has been in IP MPLS routers for 10 years actually, but no one does it at a large scale because it's too complex to provision and maintain so many tunnels. Now with SDN, you can do it on the SDN controller. And then finally, there is also, in terms of use case, latency, so that you can select the path not based on the cost, not based on the resiliency, but maybe based on latency. So those are the typical, let's say, parameters you will find. Maybe there are more, of course, parameters you can pass. And the idea of the SDN controller in that model, there is a bit of a lot of detail, but maybe I will cut short.
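Those SLA parameters, protection, resiliency, guaranteed bandwidth, latency, become inputs to a path-selection function on the core controller. A minimal sketch under invented assumptions (the path data, field names, and tie-breaking rule are all illustrative):

```python
# Hedged sketch: the core controller selects a path not just on cost but
# on SLA parameters such as protection, free bandwidth, and latency.
# Path data and field names are made up for illustration.

PATHS = [
    {"id": "p1", "latency_ms": 5, "free_bw_mbps": 100, "protected": True},
    {"id": "p2", "latency_ms": 2, "free_bw_mbps": 20, "protected": False},
    {"id": "p3", "latency_ms": 9, "free_bw_mbps": 500, "protected": True},
]

def select_path(paths, need_bw=0, max_latency=None, need_protection=False):
    """Pick the lowest-latency path meeting all requested constraints,
    or None when no candidate satisfies them."""
    candidates = [p for p in paths
                  if p["free_bw_mbps"] >= need_bw
                  and (max_latency is None or p["latency_ms"] <= max_latency)
                  and (p["protected"] or not need_protection)]
    return min(candidates, key=lambda p: p["latency_ms"])["id"] if candidates else None
```

A real controller would run a constrained shortest-path computation over the full topology rather than filter a flat list, but the inputs are exactly the SLA knobs described above.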
But it's basically that when the orchestration is basically sending or creating, let's say, a new VM, a new resource, receiving a new policy from the orchestration, it will basically let the SDN controller in the data center know, right? What are the requirements for that application or that flow? Then what we can do, the SDN controller, what you can do first, is to get the VM reachable to the outside, right? So I was mentioning BGP; then you have to tell, with BGP, these are the IP addresses and the labels that you have to use to reach, or the IP address that you have to use to reach, that VM. But then on top of that, it can actually pass along to the core controller, basically, you see that other transaction, it can actually request a specific SLA for the services. And when you request that SLA for the services, then this core network controller, which has a holistic view of the network and the topology, can decide, for that group of applications, if we want guaranteed bandwidth or those SLA requirements, I am going to select that specific path in my network. And it can do it in a very efficient way, because then it will create a given path ID and of course it will multiplex hundreds of thousands of VMs or flows or whatever that share somewhat the same requirements into those aggregate paths. And this is the role where you see that kind of DC gateway element has to be controlled by this core controller, and this is where it will classify and remap the traffic into those core tunnels. But then if you look at your existing core routers, right, in the middle of your network, they don't have to be aware of the existing applications. They don't have to hold state for each and every flow in each and every application. So that becomes much more manageable and much more scalable than it used to be before. And you don't have to, you can continue to use your RSVP-TE and so on aggregate tunnels.
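The multiplexing step at the DC gateway, many flows sharing one aggregate path so core routers keep no per-flow state, can be sketched as a simple grouping by SLA class. The data structures and names are invented for illustration.

```python
# Illustrative sketch: the DC gateway classifies and remaps hundreds of
# thousands of flows into a handful of aggregate core tunnels, one per
# SLA class, so core routers never see per-flow state. Names are assumed.

from collections import defaultdict

def map_flows_to_tunnels(flows, tunnels_by_sla):
    """Group flows sharing an SLA class onto the same aggregate path ID."""
    mapping = defaultdict(list)
    for flow in flows:
        tunnel = tunnels_by_sla[flow["sla_class"]]
        mapping[tunnel].append(flow["id"])
    return dict(mapping)

tunnels = {"guaranteed-bw": "tunnel-1", "low-latency": "tunnel-2"}
flows = [{"id": "f1", "sla_class": "guaranteed-bw"},
         {"id": "f2", "sla_class": "low-latency"},
         {"id": "f3", "sla_class": "guaranteed-bw"}]
mapping = map_flows_to_tunnels(flows, tunnels)
```

Whether the aggregate paths are RSVP-TE tunnels or segment-routing policies, the scaling argument is the same: the classification state lives only at the gateway, and the core carries a small, fixed number of aggregates.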
You can also use, let's say, new methods such as segment routing to do that in a lighter way. So when we put all of these together, the virtual enterprise gateway, what would be the typical workflow? This light gray box, that's the data center environment, where you can see OpenStack. You can see the SDN controller embedded into OpenStack, the cloud SDN controller, and some infrastructure. I'm showing one v-switch; of course it's many v-switches, or it could be, like, physical switches as well. The first thing you have to do, you have to create resources, right? Create your VMs, create your resources, shared or not, you know, you have the choice. But it has to be, of course, isolated, right, in terms of tenancy. And then the second part you have to do is configure those instances. You have to create them and then you have to configure them. This is already where it becomes a bit more complex. Who's configuring the actual instance? Is it the data center, the people operating the data center, the data center orchestration, the resource orchestration, or is it the service-level orchestration? In many cases, that's another level of orchestration that is responsible for configuring the VM, and sometimes for doing the VNF lifecycle management. So knowing about when to add new VMs, the load of the VMs, and so on. The second stage, of course, is to, you know, using Neutron typically, but now with extensions, create the connectivity. But it's not only connectivity within the boundary of the data center, it's connectivity between data centers, outside of the data center. So here we need also, for instance, to look at new extensions to pass along additional information for VPN connectivity. Then I was showing, like, this line, you know, bandwidth, SLA, reachability, that's BGP, that's PCEP, that could be, like, a routing protocol in between that you use East-West between the controllers.
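To make the "Neutron with extensions" step concrete, here is a sketch of what an extended network-create request could carry beyond in-DC connectivity. Important caveat: the `vpn:route_targets` and `sla:guaranteed_bandwidth_mbps` attribute names are hypothetical, invented for this illustration, and are not actual Neutron extension fields.

```python
# Sketch of an extended Neutron network-create payload: in addition to the
# standard fields, hypothetical extension attributes carry BGP/MPLS VPN
# reachability and an SLA hint to pass East-West to the transport
# controller. Attribute names are assumptions, not real Neutron fields.

def build_network_request(tenant, name, route_targets, bandwidth_mbps):
    return {
        "network": {
            "tenant_id": tenant,
            "name": name,
            # Hypothetical BGP/MPLS VPN extension attribute:
            "vpn:route_targets": route_targets,
            # Hypothetical SLA hint forwarded to the core controller:
            "sla:guaranteed_bandwidth_mbps": bandwidth_mbps,
        }
    }

req = build_network_request("acme", "branch-net", ["64512:100"], 50)
```

The shape mirrors how real Neutron API extensions work: extra namespaced attributes on existing resources, so unextended clients keep working while the SDN controller reads the additional fields.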
And typically those are different controller instances, different domains, your IP domain, your data center domain, operated by different teams. But you can have an East-West interface, and it's very important, of course, if you want to have everything automated and orchestrated end to end. Then once you get that, the transport controller can actually do the proper mapping and reservation, stateful reservation, of the flows into your core network. And then the service controller will have to use potentially another SDN controller again, or another function inside the SDN controller, in order to actually chain the given services, right? And this is the flow you see where this service-level SDN controller has to talk to the infrastructure SDN controller. So now you see the complexity: you have a resource-level orchestration and then you have a resource-level SDN controller, but then you also have a service-level orchestration and a service-level SDN controller. And we need to define the interfaces between those two layers. And then create the service chain along the lines I was explaining before. And then the final piece is that you can have a very simplified CPE, but in many cases you still want to control some elements of the CPE. So for instance, if you want to prioritize certain applications on the upstream and you want to be application aware, you can do the DPI in the core network, but then you have to remap the CoS and so on directly on the CPE. Or if you want to allow local switchover, or not allow local switchover, at the CPE side, then you can use something like OpenFlow to drive the forwarding table of the CPE. So it means the SDN domain is not only in the data center but has to reach maybe the next-gen CPE on the access. And I just highlighted these little dots, maybe some of the points where we have to redefine, in OpenStack or in the architecture, extensions and so on that are required. And I will hand over to Alain for this part. Okay, thanks Francois.
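Driving the CPE forwarding table in an OpenFlow style for that upstream CoS remap can be sketched like this. The match and action field names are illustrative only and do not correspond to any particular OpenFlow library.

```python
# Hedged sketch of controlling the CPE with OpenFlow-style rules: after
# DPI in the core identifies an application, the controller remarks the
# CoS (DSCP) for that flow directly at the CPE on the upstream side.
# Match/action structures are invented, not a real OpenFlow library API.

def cpe_remark_rule(src_ip, app, dscp):
    """Build a flow rule remarking upstream traffic for one application."""
    return {
        "table": 0,
        "priority": 100,
        "match": {"ipv4_src": src_ip, "app_id": app},
        "actions": [{"set_field": {"ip_dscp": dscp}}, {"output": "uplink"}],
    }

# EF class (DSCP 46) for voice traffic from one subscriber device:
rule = cpe_remark_rule("192.168.1.10", "voip", 46)
```

Note that `app_id` is not a native OpenFlow match field; in practice the application classification would come from the core DPI and be mapped onto header fields the CPE can actually match on.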
It's true, French people talk a lot. It's really true. And I have exactly two minutes left, but I'm Irish, I can talk really quick. So I'm gonna give you a quick rundown, a two-minute snapshot, of what we've actually done to make the virtual enterprise gateway. This is actually a prototype we've built. It's running code. If anybody wants to come to me after the meeting, I can give them a walkthrough of what we've actually changed and what we haven't changed, what we've actually used that comes from trunk from OpenStack and what comes from trunk from ODL. So just to give you a quick snapshot, there was a couple of, I need to go to this slide. Okay. So we have built our own service orchestration manager on top, and it's already giving me a hint to move on, and what we've actually done is that's where we keep all the real business logic. And in that, what we do is we actually define, like in a service catalog, you pick the type of service that you actually want, you go select the service, you define the user groups, the IP address pools, and then you define the inline services that you want. When you've got all that done, then basically you actually constrain the logic on how you want the actual service chain to be built. When that's actually done, then what we've actually done is we've actually taken OpenStack Neutron, like the OpenStack trunk, and then we make API calls all the way down from the service orchestration down to the OpenStack controllers. What we do that for is basically for statically provisioning the services, instantiating the VMs, pulling the images on the back end from Glance, saying this one is a firewall, this one is a router, this one is a load balancer, a DPI service, a SNAT box. And then the parts that we have added, we've added a part called the Flow Network Programmer, and that's the enhancements we've actually done to the Neutron northbound interface.
And the reason why we've done that is actually to dynamically learn, when the services are created in Neutron, in ODL, what service has been instantiated on what vNIC on what OpenFlow port. So what we've actually done is we take ODL then, and ODL is actually the one for configuring and instantiating the actual service chain rules. So we haven't made any changes in OVS, the switch. We've made some small enhancements in OVSDB. These we're actually going to upstream back to the OpenDaylight controller. So it's all going to be open source, nothing we're keeping for ourselves. And what we've done in the prototype, there's a couple of different methods that you can actually use for service tagging. You can use an MPLS LSP, you could use a VXLAN ID. What we have used is the IP TOS. The reason for that is because MPLS LSP labels actually weren't fully working on OVS, so we actually had to make some shortcuts and workarounds for that. And this gentleman is telling me to get off the stage. Summary, Francois, do you want to give the summary? Okay, I can give you this, okay? So there are some extensions that we had to do. All the extensions that we've done are on Neutron. We're going to upstream all of that. And the idea behind this is that you can actually plug and play different network service types and different vendors, right? So you can imagine you could actually be using vendor A for a firewall, or vendor B for a firewall, or vendor C, right? The whole idea of the solution that we're actually building is so a service provider, an operator, can actually swap and decide which firewall, which router, which load balancer, or which DPI box he actually wants to instantiate. And the magic part that we are actually adding, that we're still upstreaming back to OpenStack and ODL, is actually to figure out how we're actually going to build a service chain. So how to actually make the flow go from VM A to VM B, or VM A to VM C to VM B.
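The three service-tagging options mentioned (MPLS LSP, VXLAN ID, or the IP TOS workaround the prototype used) can be sketched as alternative flow actions. The strings below imitate `ovs-ofctl add-flow` syntax but are simplified for illustration; the exact action names and encoding of the chain position are assumptions, not the prototype's actual rules.

```python
# Illustrative sketch of the service-tagging choices: encode the chain
# position in an MPLS label, a VXLAN tunnel ID, or (as the prototype did
# as a workaround) the IP TOS byte. Flow strings are in a simplified
# ovs-ofctl style; action names and encodings are assumptions.

def tag_flow(in_port, chain_pos, method="tos"):
    """Return an OVS-style flow entry tagging packets with a chain position."""
    if method == "tos":
        # TOS workaround: chain position carried in the DSCP bits
        # (DSCP occupies the upper 6 bits of the TOS byte, hence << 2).
        action = "mod_nw_tos:%d,normal" % (chain_pos << 2)
    elif method == "mpls":
        action = "push_mpls:0x8847,set_mpls_label:%d,normal" % chain_pos
    else:  # vxlan
        action = "set_tunnel:%d,normal" % chain_pos
    return "in_port=%d,ip,actions=%s" % (in_port, action)

flow = tag_flow(3, 2, method="tos")
```

Any of the three encodings gives each hop in the chain a way to recognize where a packet is in its service path; the TOS variant just spends header bits that would otherwise carry QoS marking.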
That's the actual part that we have actually done in our enhancements. And this we're going to upstream back, like I said, to OpenStack Neutron and also to ODL. So what else can we do? So I think one of the most important parts is also, when we talk about SDN and NFV, right, this is not just applicable, like, for inline network services. You could actually use this for redefining how typical network boxes are built today, right? So you could actually look at a CPE and you could actually break the CPE out, and you could actually say, I want to have my DHCP, I want to have my actual, like, IPoE or PPPoE termination done in separate VMs, right? So one of the things that you can actually do with the service chaining is you can blow out how you would actually have a physical box defined today, and you can use the service chains for redefining what you would actually have done, like, in a specific vendor lock-in box, swap out the different components that you actually want in that box, and then decide how you actually want the flows to actually be connected together. So I really need to get off the stage because the gentleman is actually pointing what appears to be a gun sign at my head. So I need to give out some cards to some people. My colleague here Mahoud has actually collected some papers from some people and we're going to give out some prizes.
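Building the service chain, as in "make the flow go from VM A to VM C to VM B", reduces to emitting the ordered per-hop steering pairs the controller must program. A minimal sketch with invented names:

```python
# Hypothetical sketch of what "building the service chain" boils down to:
# given an ordered list of service VMs (e.g. A -> C -> B), emit the
# per-hop steering pairs the controller programs. Names are invented.

def chain_hops(entry, services, exit_port):
    """Return the ordered (from, to) steering pairs for a service chain."""
    nodes = [entry] + services + [exit_port]
    return list(zip(nodes, nodes[1:]))

# A chain that inserts vm-c between vm-a and vm-b:
hops = chain_hops("ingress", ["vm-a", "vm-c", "vm-b"], "egress")
```

Reordering or swapping the `services` list is exactly the plug-and-play property described above: replacing vendor A's firewall VM with vendor B's changes one element of the list, and the controller reprograms only the adjacent hops.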