All right, shall we get started? Looks like we're ready; last few stragglers coming in, sit down, we'll go ahead and get started, file in as you can. Hi, welcome. My name is Anderson; I work for Alcatel-Lucent, and I run the development team of CloudBand, our NFV platform. And my name is Chris Wright; I'm the technical director for SDN and NFV at Red Hat. We are here to talk to you today about network function virtualization and the overlap with OpenStack, if I can figure out how to work this. Voila.

Just for our education: who here is in the telco world and is very familiar with NFV? Why are you here? And who here is an OpenStacker who is not from the telco world and is interested in learning about NFV? A subset? All right, well, we may have prepared the wrong talk. Yeah, exactly. We'll work through it; we can adapt live.

So today what we want to do is bring you up to speed on what NFV, network functions virtualization, is, with some examples, looking at some actual use cases or applications that might be virtualized in a service provider environment. We'll build up a definition of an NFV platform. There is a standards effort within ETSI to standardize something called NFV; that's where this term came from originally, and we'll try to look at it from the point of view of that standards body's reference architecture, map it through to what it means for an OpenStack platform, and look at what the components underneath are: Linux, KVM, et cetera. We'll talk a little bit about what happens when we combine the two, NFV telco requirements and OpenStack: what's missing? What makes NFV special? Why are we here talking about this? Why are we advocating for requirements coming from the telco segment? And then we'll talk a little bit about Red Hat and Alcatel-Lucent and what we're doing together in this space. Hopefully we'll have plenty of time at the end for people to ask questions and have a dynamic conversation.

So, as Chris mentioned, maybe we didn't prepare the right slides; we were expecting a more OpenStack-oriented crowd than an NFV one, so maybe I will go a little faster than planned. If you want to stop me while I'm speaking, please do.

We'll start, just to establish common ground for all of us, with what NFV is. NFV is the term used today, and I think most of you in this room know it, to describe the transition service providers are making from the reality of today, where they have specific, dedicated, siloed systems, usually one per service, into a new reality where they have a common, multi-tenant infrastructure serving many services, based on off-the-shelf hardware; some will call it cloud. At the end of the day, what the service providers are trying to do is build this cloud and run, instead of dedicated systems for dedicated services, just applications on top of it.

Why do they do it? What is the promise that stands behind it? The first point is the promise of agility. Today, introducing a new service into this environment is very, very difficult.
It's difficult for different reasons: budget, risk, and the actual operational aspects and processes within those companies. The basic idea is that once you switch gears from a hardware play, where you need to go and buy a dedicated system for each new service, to a software play, where you just install a new application on a common infrastructure that already exists, you should get much simpler processes and better agility.

The second aspect is operational efficiency. I will touch on it later when I cover the pillars we believe such a platform needs in order to address these challenges, but at the end of the day the promise is that by doing this and introducing all these concepts, a lot of operational efficiency will also be gained.

The next point in this promise is cost efficiency. Just as a small example: yesterday was Mother's Day, right? Most of these systems are sized for Mother's Day; you want to support a day like yesterday. But on the rest of the days, like today, the system runs at very, very low utilization. Once you build a multi-tenant infrastructure, you can also obtain cost efficiency, in part through the fact that you can create an elastic environment. And the last point is about new revenue streams, once you have all these new goodies I just mentioned.

Just to create a common language between us: when we say "application" in OpenStack, it's quite obvious to everyone what that is, right? We can all imagine what an application is. So let me share what an application is in the NFV world, via the four most popular use cases we see today in the market.

The first one is virtual CPE. Today you have CPE, customer premises equipment, either in your business or at home. The basic idea is that you want to move a significant part of that functionality into the cloud; that will ease the operational aspects and help you introduce new services in a much easier way. So this is an example of an application that would run on such a cloud.

A second example is virtual CDN, a content delivery network. Instead of having dedicated appliances for CDN, you want an application that serves as a CDN and runs on your cloud, which obviously creates very interesting challenges from both a storage perspective and a network and bandwidth perspective.

Another example is virtual IMS, the communication system. Again, today it is based on dedicated, specific systems, and the basic idea is to take those applications onto the cloud. This application is all the different components you need in order to have our phone calls, our chat messages, our video calls and all that, just working as part of the service provider network.

The last example is virtual EPC, the evolved packet core: the basic components you need if you want to run a cellular network, a wireless network.
So when we say applications in NFV, this is what we mean. It's not a web app or something like that; these are applications with some specific characteristics, but at the end of the day the idea is to treat them, and think about them, as apps.

When we speak about NFV, there are five pillars we believe are needed and relevant to keep the promise we mentioned before.

The first one is automation. We said we want operational efficiency, and to obtain it we believe we need automation everywhere, bottom up: in the way you manage and install your infrastructure, in the way you handle the different lifecycle events of that infrastructure, but also in the way you manage your applications, both as an application owner and as the team operating the cloud and the service itself. You want to introduce tools and frameworks that enable you to automate the different aspects, not only fully automated but also exposed through programmable interfaces, so they can enable integration with different systems in an easy way.

The second pillar is distribution. Because we're speaking about network functions, at the end of the day the promise is that the cloud becomes the network, if you will. So it's quite obvious we cannot build a network that is centralized. The basic assumption is that, because of the essence of these workloads, you need to build a distributed environment. On distribution, we see different approaches in the market: some speak about distribution across dozens of sites, some across hundreds, and there are examples speaking of thousands. It all depends on how far you want to take this distributed environment, closer and closer to the endpoints of the applications. But the basic assumption is that no matter what, you will need a distributed environment, just because of the needs and efficiency of these specific workloads.

The third pillar is openness. We don't want to build an environment that is closed, that is difficult to consume, and that applies across the board. You don't want to bake in any assumptions about the hardware you choose for the infrastructure; you must be totally open in that respect. You want to use open source: this is why we are here, this is why we are cooperating, because we believe this is the right way to build it. And you want to expose all the services and capabilities via open APIs, so they can be easily consumed both by the applications and by the different systems within the service provider's ecosystem.

The fourth pillar is operations. We believe that to create this environment, and again to deliver the promise I mentioned, we need to be very focused on the way we operate it, on how we deal with challenges like understanding where a problem is. In today's service provider reality, the relationship between the application and the hardware underneath it is quite clear. Now you have a common infrastructure, which in all likelihood will be delivered by one vendor, and then an application delivered by another vendor.
So you need tools and understanding to efficiently figure out where a problem exists, both to solve it and to know how to address it. You need tools and ways to model your applications. VNF, for the part of the audience not familiar with the term, is the name the industry gave to such an application: a virtual network function. You want tools to model the application itself, so you can move from today's situation, where you have operational procedures, to a situation where you have full automation of those procedures in one framework. And you want all those aspects fully integrated with the service provider's network; in this case, obviously, we're not speaking about the internet, as is the usual case in OpenStack.

The last pillar is the workloads themselves, which have some specific characteristics. First, in some cases you need more deterministic performance; you need more clarity on what the performance of the application will be, again because of the essence of the workload. Second, there are network requirements: at the end of the day we're speaking about applications that serve mainly as network functions, so obviously you want to be very efficient and to sustain the needs of those workloads in terms of bandwidth and network efficiency. That goes together with the last point, data plane optimization: how do you help those applications be efficient and reach the scale we're speaking about, an efficient environment that helps you deliver these services on this multi-tenant, generic infrastructure?

So what is an NFV platform? An NFV platform, if you will, is something that follows the pillars mentioned before, and it has, if you want, a split personality. On one hand, it needs to serve the applications themselves and all their needs. It needs to handle application lifecycle management. It needs to provide all the basic services: compute, storage, network. It needs to expose, let's call them, generic services that are used by different applications; load balancer as a service is a nice example. But because we're speaking about a distributed infrastructure, you also need to give the applications tools to consume that distributed infrastructure efficiently, and also easily. So you need to deal with placement problems, with security problems, and with assurance problems, from the point of view of the application owner.

On the other hand, the platform also needs to serve the cloud owner. We're speaking about the service provider building its own cloud, so the platform needs to serve it as well: to maintain and operate this distributed environment, to provide all the tools you need to operate it, to analyze where problems exist, to plan capacity, so that at the end of the day, from the application's perspective, it behaves like a cloud per se, meaning the application sees endless resources.

What are the basic building blocks of an NFV platform? Because we're speaking about a distributed infrastructure, the first building block is what we call the NFV data center. As I mentioned, there can be different levels of distribution, but this is the basic building block, and this is the place where we believe OpenStack is an amazing fit. This is where you expose the basic services: compute, storage, networking, monitoring. This is where the different projects in OpenStack have a very clear and significant role, right?
As part of this environment you need, just as an example, to expose compute services, so obviously Nova has a trivial role here. The same goes for all the storage aspects with Cinder or Swift, or Ceilometer for monitoring; but there is also a role for SDN, which Chris will detail in a moment.

The second piece is the part that helps you consume and manage all these distributed resources in a unified way. In this piece we see different aspects that need to be covered, which I already mentioned, so I won't repeat them. But you also need something that helps you manage your applications on top of this distributed environment, and this is where we see a significant role for Heat, to serve as a basic component for orchestrating your application lifecycle management. Chris will now spend a few words on Neutron and take over.

All right, so just to quickly recap, since I'm swapping myself in here: we are freeing critical functionality out of purpose-specific hardware and placing it in a generic computing fabric, using OpenStack to provide network services. And what does that mean? "Network services" is sort of a generic term. Actually, if you've been coming to OpenStack summits for a while, you'll have heard "network services" as language that makes sense in the IT world too, where we talk about service insertion. Usually it's the load balancer, firewall, VPN-as-a-service type of functionality, but in fact it could be anything, anything that processes network packets.

One of the key components here is getting traffic to a specific network function, and often we see something that looks like a chain of functions, a service chain: a series of pieces of network functionality connected together. Something like an SDN controller is in the prime position to make those traffic decisions and steer the network flows through each of the services or functions in your NFV environment.

So I think it's important to reflect on where we are. We're at the OpenStack Summit; we're talking about using OpenStack to meet some of the needs of the telco industry. And what are we building upon? We're starting at the very bottom. I mentioned a generic or commodity-type hardware infrastructure, building your compute fabric and potentially also your storage and networking fabrics. We're using Linux at the bottom to create the runtime environment for the services that make up OpenStack, and then KVM as the virtualization layer: KVM and libvirt together, working with Nova to provide the compute infrastructure for OpenStack, to take Nova as a concrete example.

And when you look at this picture, we have a set of these NFV data centers. These are not necessarily single-point data centers; again, I mentioned the geographic distribution of data centers around the service provider's footprint. Each of these represents a unique OpenStack deployment, and all of the SLA requirements, or assurance requirements, you've heard mentioned earlier come into play in each of these environments. For the SDN case, you could even consider that there may be use cases where you want to connect these data centers together dynamically across the WAN,
which is something we're currently not really doing a lot of in Neutron; Neutron is still fairly focused on a single-data-center deployment.

If you look at these pieces, Linux, KVM, and OpenStack, as the building blocks of an NFV deployment, each one of them actually needs some work to make it really appropriate for an NFV use case. Oops.

All right, so here's an eye chart; I won't hold you accountable for any of the details on it. What you're seeing is the NFV architecture diagram as defined by the ETSI NFV ISG. This is a standardization effort: essentially a group of operators coming together saying, we have an existential crisis; how can we remain relevant given the continuing cost of doing business and of presenting new services to our users? The way they can do that is by virtualizing their infrastructure, and this is the diagram they've come up with to explain it. If you'd never followed this and came upon this diagram one day, you'd run away screaming, because it makes no sense. So our job is to try to translate it into a language that's maybe more friendly to OpenStack.

You can see there's a lot of stuff in the diagram. The lower portion is called the NFVI; the upper pieces over here are management and orchestration, or MANO; and then you have a series of VNFs, virtual network functions. Man, I can't operate this thing at all. All right, who's out there? Okay, I don't know my left from my right.

If you look specifically at the NFVI, this is really a great fit for the Linux, KVM, OpenStack stack we've been talking about. It's broken across two pieces. One is the actual virtualization infrastructure: the compute nodes providing capacity for applications, and the storage and network fabric providing storage and network for those applications. Then there's some management infrastructure off to the right side here.

If you look up the stack a little, you have this orchestration layer, and as we mentioned earlier, this is an area where Heat comes into play. A VNF sounds like a single thing, but often VNFs are collections of virtual machines; I believe they call them VNFCs, or components. The collection needs to be launched together, and in a way that makes sense to the application. Again, the application has really strict SLAs associated with packet-processing throughput or response times, deterministic response times. Coordinating the launch of an entire VNF, which again might be multiple virtual machines, is something you see off on the right side in the VNF manager and as part of the orchestration layer, kicking off requests to each of the individual OpenStack services to place this workload somewhere in your compute fabric.

If you look at Nova, we have some specific issues. What do we do with Nova? Where do we place a virtual machine when we go to launch a VNF? You'll want to make sure the VNF runs in a very well-defined environment in terms of NUMA topologies and other things that affect the performance of the application. In the CDN space you may have specific storage requirements for streaming data to and from the storage subsystem. And again, for Neutron, here is the place where you're orchestrating each of these different functions to work together across the network.

[Audience question.] So the question is: is Nova's scheduler too dumb or too smart, and how do you map the scheduling functionality we have in Nova today onto this type of environment, based on scaling requirements?
And what kind of inputs are you taking to understand how to schedule? Do you need to continually poll resource utilization, or can you make it simpler and offload the problem somewhere else? I have a personal opinion, which is that it would be nice for the scheduler to take input from other systems. For example, it's critical for some VNFs to have access to a PCI SR-IOV virtual function, of which there are a limited number on a box; you need to know ahead of time, before you launch the VNF onto that box, whether the resource is even there. That's probably not a heavyweight discovery mechanism, but if you do it without providing a pluggable, or maybe stackable, way of feeding input into the scheduler, I don't think you solve the problem. Is it too dumb or too smart? I'm too far away from Nova development to really answer that or offer great insight, but I know it's an area of active interest, and under active development here during this particular design summit. I don't know if you have a particular view.

I think it's a mix, right? I wouldn't call it dumb or smart, but there are things missing from the existing Nova scheduler that are relevant for this specific type of use case. Just as an even simpler example: I have two servers in my rack, or whatever number of servers, and each of them has only one 10-gig interface. Now I have two workloads, each needing six or seven gigabits of bandwidth. I want the scheduler to understand that and not place both workloads on the same server. All right.
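That bandwidth example can be sketched as a toy, Nova-style scheduler filter. Everything here, the class names, the per-host bandwidth bookkeeping, and the first-fit placement loop, is illustrative and not upstream Nova code (real filters live under `nova.scheduler.filters`); the point is only the placement logic the speaker describes:

```python
class HostState:
    """Minimal stand-in for a scheduler's per-host view (illustrative)."""

    def __init__(self, name, nic_capacity_gbps):
        self.name = name
        self.nic_capacity_gbps = nic_capacity_gbps
        self.allocated_gbps = 0.0

    @property
    def free_gbps(self):
        return self.nic_capacity_gbps - self.allocated_gbps


class BandwidthFilter:
    """Reject hosts whose free NIC bandwidth can't satisfy the request."""

    def host_passes(self, host, requested_gbps):
        return host.free_gbps >= requested_gbps


def place(hosts, requested_gbps, filt=BandwidthFilter()):
    """Pick the first host that passes the filter and account for the bandwidth."""
    for host in hosts:
        if filt.host_passes(host, requested_gbps):
            host.allocated_gbps += requested_gbps
            return host.name
    return None  # no host can take this workload


# Two servers, one 10 GbE NIC each; two workloads needing 6-7 Gbps.
hosts = [HostState("server-1", 10.0), HostState("server-2", 10.0)]
print(place(hosts, 7.0))  # server-1
print(place(hosts, 6.0))  # server-2 -- server-1 has only 3 Gbps left
```

With such an input, the scheduler anti-affinitizes the two bandwidth-hungry workloads instead of stacking them on one 10-gig NIC, which is exactly the gap described above.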
Call it dumb or call it smart, this is something we want to add to the function of the Nova scheduler; you'll actually see it in one of the following slides. So I don't think it's about making the scheduler much smarter, but about accepting more inputs and taking other considerations into account, some specific things that don't exist today, to make more efficient use of the resources and address the specific needs we mentioned before.

So, is NFV that special? There is a tendency to think NFV is very, very special and that you need to build a lot of specific, dedicated things to create an NFV environment. We're not sure. We think there are some specific needs, which we'll address in a moment, and we'll give you examples of the tactical things we believe are missing. But we do believe those gaps should be closed within OpenStack, at the layer we described before, at the level of the NFV data center. We should address them as part of the community, as part of the upstream. We do not think there is anything so special here that it requires a specific OpenStack implementation or a specific OpenStack version. All of these needs are generic enough to be relevant to other types of workloads; we just believe the gaps should be addressed a bit more deliberately, but again as part of the ongoing community process and the big upstream, not as a fork, if you will.

So what are the needs you have to take into consideration at the end of the day? You need to consider the distribution we spoke about. You need to consider how bandwidth-intensive the applications themselves are. You need to consider the large scale: if the promise is kept, we're speaking about a huge deployment with a lot of users, a lot of applications, and a lot of bandwidth. And you need to consider the service provider network; you need to integrate with the network itself.

But when we summarize all this into what it means for OpenStack, and for the gaps that exist today, we believe the tactical list is fairly contained. It should be addressed not as a special project but as part of the community, and Chris will go over some of the examples.

So clearly this is not an exhaustive list, and actually that's part of the point. If you were here earlier today and listened to Alan speak about NFV, he went through a similar analysis, showing some of the differences between where OpenStack is now and where OpenStack needs to be in order to meet telco or service provider requirements. I think what's interesting is that Alan's perspective was really focused on these being fundamentally different requirements, and while I completely agree
these are new requirements, what we're trying to stress is that this is not a deviation, not a fundamental right turn from the OpenStack core mission. These are incremental changes to OpenStack as it is right now that make it readily consumable in an NFV environment.

At the top of the list you see, as an example, SR-IOV support. We already have the emerging capability to manage PCI devices and directly assign them to virtual machines. If you remove OpenStack for a moment and go back to just Linux and KVM: Linux has this functionality, and KVM has this functionality, and has had it for quite some time, the ability to carve a physical device up into multiple virtual devices, dedicate such a device directly to a virtual machine, and as a result effectively bypass the hypervisor and accelerate the I/O throughput into that virtual machine. Exposing that through
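As a rough illustration of the carve-up just described, outside of OpenStack: on Linux, an SR-IOV-capable NIC's physical function can be split into virtual functions by writing a VF count to `/sys/class/net/<pf>/device/sriov_numvfs`, and a VF can then be handed straight to a KVM guest through a libvirt hostdev-type interface. The snippet below is a sketch of such a guest definition; the PCI address is illustrative and depends on where the VF actually enumerates on your system:

```xml
<!-- libvirt guest definition: attach one SR-IOV virtual function
     directly to the VM, bypassing the hypervisor's software datapath
     (PCI address is illustrative) -->
<interface type='hostdev' managed='yes'>
  <source>
    <address type='pci' domain='0x0000' bus='0x03' slot='0x10' function='0x0'/>
  </source>
</interface>
```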