Good afternoon everyone, welcome to Best Practices in the Deployment of NFV Infrastructure. My name is Julio Villarreal Pellegrino. Right now I am a principal architect with the cloud practice at Red Hat. Some of my responsibilities are to actually go and implement OpenStack in the field. I've been working with a lot of telcos in the last couple of years, and I'm really glad to see a couple of my customers in the room today.

Thank you, Julio. Good afternoon. My name is Stefan Le Freire. I run the cloud practice at Red Hat, and I'm Julio's minion.

Okay, so we just started this session here with a poll for everyone. The question we have to start — and you can use your app to respond — is: do you have direct responsibilities in your business enterprise network? We want to assess the audience here, whether we have more network people than other types of people. Take a moment. We have a couple of responses already coming in. We have like 57% yes. Well, 50-50. 56, 60. Okay, this is looking good. I want more network people. It's moving. Okay, 50-50. We have someone there doing a lot of yes and no, yes and no, and messing with us. But we're really glad to see that we have an audience of people that actually have responsibility over the networks and network services in their companies. Oh, this is my company's VP asking me to scroll — okay. So, a fairly split audience here.

So let's go ahead and start with our session. As we start, I'd like you to let us know in one word what you think the purpose of NFV is for your activities in your business. It's hard — it's in one word. If you want to use multiple words, join them; otherwise the words are going to be split over there. And you can repeat a word, right? If what you believe is already on the screen, please feel free to type it. I love the magic one — you know, I'm all about that one. Streamlining. Career. More money. Automation — automation is taking precedence.
Yeah, automation is taking precedence. That's good. More money — oh, people love money in this room, I like that too. Virtualization, magic, and automation. Competition, right? Maybe we have some people from telco here. Pain — we have some telco people in the room. So you guys are describing the purpose of NFV right now — what NFV means to you — and in the cloud of words that we have here, magic is number one. And yes, we love magic. We love to provide you a service where you can just click one button and everything happens. We have a lot of automation. We have time to market — a really important one. We have provisioning. So I think that we need to jump ahead. Your name is showing up — oh, one of my customers is in the room. This is good.

Okay, so let's go ahead with NFV: network function virtualization. What is it, for those people that are not involved with networking? It is the art of decoupling your network functions from the underlying physical network infrastructure. It's basically moving traditional network functions that are usually deployed on proprietary hardware to something that is going to be software, or virtual, and the goal is to do that on general-purpose x86 hardware. Very important. The idea there is to get some benefits out of this.

So what is the biggest driver for NFV adoption in your enterprise? Again, we're going here and polling you guys: tell us what the drivers are for you. Yeah, we want to make this a conversation, you know, and this is a way for us to gather data at the same time and to see what is actually driving the adoption of network function virtualization in your company. While this poll happens — how many of you work in telco? Let's do an old-school poll with the hands. I know you, J, work in telco.
Yeah, I know a couple of people in the telco space here in the room. So it's a majority, I think. All right, hey, nobody's got scalability issues — that's cool. Hey, they work in telco. All right. So we see that the winner here is lack of flexibility as the biggest driver, and we got high cost, vendor lock-in — I think that's very important — and slow innovation.

Okay, so let's just go around all these topics. Anybody familiar with that picture there on the right side? You know, that's what we see in many enterprises. Obviously not in telco — telco is perfect, you know, we get all these things. But one of the main drivers for NFV is, you know, we have a lot of proprietary hardware, a lot of, I would say, legacy systems that have basically been ruling the world in the telco space, and these represent millions and millions — sometimes dozens and hundreds of millions — of dollars of investment in the telco space. But it's also true in the enterprise. This hardware does not necessarily have the flexibility needed in today's world. Cloud is about on-demand self-service, and the lack of flexibility — basically the obligation for the consumers of IT to go through the network guy to get their firewall rules, their load balancer rules, or anything — that is really the old way of doing things. So the lack of flexibility is also an impediment for businesses to respond to demand.

Scalability issues: well, to be able to scale, you know, and respond to high traffic loads, you need to be able to be elastic around your workloads, and that applies obviously to compute but also to network. Slow innovation: well, with a way to basically provide on-demand, self-service compute, network, and storage infrastructure, you're able to allow your developers to innovate faster. So that's also a big driver. Finally, vendor lock-in. Well, why vendor lock-in?
The idea is, you know, with these investments you want to use open standards so that you can drive your innovation faster and basically allow yourself to drive through standards.

What are the benefits? Obviously, reducing the capital cost. As I mentioned, there's a lot of capital spending, so there's a lot to reduce: the need to purchase purpose-built hardware, and supporting pay-as-you-go models to eliminate wasteful over-provisioning. OPEX: there's a lot of space, power, cooling — a lot of operational cost — and when you virtualize, all of this goes away. Let me ask a question here: how many of you are already dealing with the truck day? You know: I need to deploy a new service into my telco, and I need to wait for the truck to come in with my huge appliance, and then I need to get everything ready for it. I need to do my cooling math, I need to say, okay, I have enough power for this big appliance, and then get the operator or an integrator to bring the appliance into your telco data center and plug it in. And at the end of the day, that adds operational cost too.

So yeah, and along the same line: how do you reduce your time to implement things? Right? You know, you have a project here, and developers need to deliver yesterday — how long is it going to take? So this flexibility — the ability to accelerate your time to market, reducing the time to deploy — all of these are great factors. Agility and flexibility: quickly scaling services up or down to address changing demands, supporting innovation by enabling new services — those are huge benefits of NFV.

Okay, so where are we in the marketplace? 83% of telco operators prefer open systems for their networks, and among this population of telco operators, 95% of them see open source as a positive attribute for an NFV solution. So why open standards?
Well, because you don't want to get locked in with those vendors, and you want to be able to take your own destiny into your own hands. So Red Hat is very committed to open source; basically all of our products have an upstream project that is funded by us. You know, we started with Red Hat Enterprise Linux, which is the foundation for pretty much all the innovation at Red Hat — we have control of this operating system. We obviously have the RDO project, which is the upstream project for the Red Hat distribution of OpenStack. And most importantly, there's more innovation coming in the software-defined networking space with OpenDaylight. OpenDaylight is becoming, among other SDN projects, our open platform for network programmability and innovation — essentially for software-defined networking.

So what does the NFV architecture look like? Your virtualized infrastructure manager is there. The red pieces are basically the pieces that we have control of, and with our partners and the ecosystem of partners at Red Hat, we are able to deliver a complete NFV vision and architecture. The VIM is basically at the bottom right corner, and that is represented by OpenStack. But we also have other pieces, right? The SDN controller — Red Hat does not, I would say, yet fully control this space; we work with partners. You've heard about Juniper Contrail, you've probably heard about Nuage Networks, or even Midokura, and there are other solutions. The out-of-the-box SDN solution is OVS, and there are different ways to skin that cat. But essentially, in the telco space, there are also all the other pieces that are very important: the VNF manager, the analytics piece, security, service assurance, service orchestration, network orchestration, and a service catalog overseeing these pieces there on the right side. So the idea with this OpenStack platform is to be able to manage your compute resources,
networks, and storage, and have that as the base platform for the innovation on the VNF side.

All right, so now let's get into OpenStack — and I mean, I hope everyone says yes here, because we are at OpenStack Summit. 100% so far. That's one person. Okay — oh, there is one that says no. And that's okay — hey, good opportunity, you know. We hope to change that perspective with this presentation. I didn't expect that. We have 62% — actually 57 to 43 now, 60-40. Okay, cool, let's get to the next question. So NFV might not be a use case for everybody, right? So why are you using OpenStack — obviously, if NFV is not your use case, if you don't want to provide virtual network functions? I mean, they can be network functions or something else. Everything we talk about with NFV, we talk about telco, but NFV is really relevant in the enterprise and in the education sector. You know, I've done projects with universities today that are implementing NFV features for HPC computing. It's not only a telco thing. So I think that covers it. Cost. Open. Rapid innovation. Standard framework. Money — someone there again. Enterprise. Yeah, money. APIs — I like that one. Scalability, flexibility.

So why use OpenStack as your network function virtualization infrastructure? You could do it without it, but we think that you're better off with it, because you can also address most of your other needs, you know, for your enterprise. So what are the advantages of OpenStack versus traditional virtualization? Everything is modular. You have multitenancy, you have shared pooled resources.
You can plug in your storage and network from basically any vendor. It's all open, right? Vendors have contributed heavily with their drivers to be able to do that. A very rich API feature set and a very vibrant community.

Now let's talk a little bit about the VIM — the virtualized infrastructure manager — and that's one of our biggest pushes here. This is where we see that OpenStack actually fills that space, as Stefan was showing before. We can put together pretty much the complete ecosystem between Red Hat and our partners. But why OpenStack as a VIM? Individual, modular OpenStack components can provide you these capabilities, and the idea is that using OpenStack to manage compute, networking, and storage resources is what will give the telco — or any other company using OpenStack for network function virtualization — the agility to innovate, to deploy, and to control their environments in an easier way. So using OpenStack as a VIM, you will be able to map physical resources to virtual resources — in this case, provide the as-a-service component for your different requirements.

Another way that OpenStack helps is a really secure way of resource sharing across multiple tenants. When we go and talk to some of our telco customers, some of them are doing OpenStack for one telco tenant: you know, I have this virtual IMS tenant that I'm going to onboard, and this is my only use case. We have other telco customers that are doing multiple tenants with the OpenStack infrastructure. They say: no, you know, I want to have an IMS, I want to have routing infrastructure, I want to have vCPE, vEPC — I want to onboard different kinds of tenants. So OpenStack provides that capability of onboarding different kinds of tenants for NFV workloads into your cloud. And you can enforce quota management as part of it — you know, provide a quota to your tenant: you can only use this amount of resources I am allocating to you.
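The per-tenant quota enforcement described here can be sketched with the OpenStack CLI; the project name and the limits below are purely illustrative:

```shell
# Hypothetical tenant for a virtual IMS workload; names and limits are
# examples, not a recommendation.
openstack project create vims --description "Virtual IMS VNFs"

# Cap what this tenant can consume on the shared NFV cloud:
# 64 vCPUs, 256 GB RAM, 32 instances, 128 Neutron ports, 2 TB block storage.
openstack quota set --cores 64 --ram 262144 --instances 32 \
    --ports 128 --gigabytes 2048 vims

# Later, grow the allocation so the tenant can scale out.
openstack quota set --cores 128 --ram 524288 vims
```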
Or I can modify the quota in order for you to scale out or scale back. With OpenStack as a VIM, you also have a resilient, highly available control plane. In the way that we are deploying OpenStack in the field, we take the approach of deploying a highly available control plane as the base for OpenStack. What happens if you lose your API? You know, what would be the impact of losing an API endpoint? That's why a resilient control plane is really important when you are deploying a VIM — and not only for OpenStack, but for any other VIM that you want to deploy in your telco or in your enterprise.

APIs — this is super important. Right now, what we see in the field is that most of the network vendors come with a set of API calls. That's how the deployments are happening right now. When you bring your partner into your room and you say, okay, I'm going to implement this and this is my VIM, they're going to come, most likely, with a Heat template: okay, here is my deployment mechanism, this is my image, this is my Heat template. So OpenStack is going to provide you that. And that's the way to go. You should move toward automating your deployment, toward controlling everything from an infrastructure-as-code approach in an automated fashion.

It will also provide mechanisms to collect fault and performance data for physical and virtual resources. In the telco world, metrics are a really important thing. You know, how you get metrics out of everything — utilization, uptime, throughput, all those kinds of metrics — you will be able to collect them with OpenStack. Now, that's one part of the big puzzle that we showed before: the VIM. But the VIM is not the only important infrastructure part when you are going to deploy NFV applications. Another big part of NFV application deployment is the SDN.
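A vendor-supplied Heat template like the one mentioned above might look roughly like this minimal sketch; the parameter names, flavor, and network are placeholders, and a real VNF template would carry far more detail:

```yaml
heat_template_version: 2016-10-14

description: Minimal VNF deployment sketch; all names are placeholders.

parameters:
  vnf_image:
    type: string
    description: Glance image supplied by the VNF vendor
  mgmt_net:
    type: string
    description: Existing management network

resources:
  vnf_port:
    type: OS::Neutron::Port
    properties:
      network: { get_param: mgmt_net }

  vnf_server:
    type: OS::Nova::Server
    properties:
      image: { get_param: vnf_image }
      flavor: m1.large
      networks:
        - port: { get_resource: vnf_port }

outputs:
  vnf_ip:
    description: Management IP of the VNF instance
    value: { get_attr: [vnf_port, fixed_ips, 0, ip_address] }
```

It would be deployed with something like `openstack stack create -t vnf.yaml --parameter vnf_image=... my-vnf`, which is what makes the whole workflow repeatable as infrastructure as code.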
So by default, out of the box, Neutron is the SDN provider that we use in our OpenStack platform — and that OpenStack uses by default, by the way. The ability of Neutron to support VLAN, VXLAN, and GRE will make you flexible enough to deploy certain NFV workloads out of the box, without having to onboard an external third-party SDN. Now, if you want to onboard a third-party SDN, most of the SDNs out there have a point of integration with OpenStack, and that's a really important thing. But Neutron out of the box will provide you with that. Now, as I was saying, if you want to go for a more commercial SDN — and we partner with many of those — bringing one in will be vendor dependent, but you need to have a use case that you can qualify: okay, why am I bringing in this SDN? And remember, the SDN conversation is probably the most important conversation you can have when you are onboarding OpenStack as an NFV VIM in your company, because everything after that will depend on that SDN choice you made. Can I integrate my IT systems into this SDN? You will have to ask yourself many questions during that process.

Now, storage — that's the other important part. We talked about the VIM, then we talked about the SDN and the networking side, but you also need to consider the storage. For what we are doing in the field, there are two big approaches we are driving. Number one is Ceph — and that doesn't mean it needs to be our Ceph. Ceph is an open source project; nobody owns Ceph. From what we've seen, most of our OpenStack deployments out there are using Ceph. Why are they using Ceph? It's open source, highly scalable, reliable, with built-in redundancy, and you can use Ceph not only for your Glance images, but also for your Cinder volumes and for your Nova ephemeral workloads. That's one way to go when you are deploying a cluster with OpenStack for your storage backend.
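The out-of-the-box Neutron segmentation types mentioned above map directly onto the CLI; a hedged sketch, where the physical network label and VLAN ID are deployment-specific assumptions:

```shell
# Tenant overlay network on the default ML2/OVS drivers (no external SDN);
# the tenant segmentation type (e.g. VXLAN or GRE) comes from the ML2 config.
openstack network create vnf-data
openstack subnet create --network vnf-data --subnet-range 192.0.2.0/24 vnf-subnet

# Or a provider VLAN network mapped onto the physical fabric;
# "datacentre" and segment 101 are placeholders for your environment.
openstack network create --provider-network-type vlan \
    --provider-physical-network datacentre --provider-segment 101 vlan101
```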
The other way is: what about using what I already have in my data center? What about taking my old SAN or my old NAS and integrating that into my OpenStack deployment for VNF workloads? You can do that, and we see a lot of people doing it. The gotcha is that you need to make sure that what you're integrating is fully certified, that the vendor has support for OpenStack, and that it can provide you with a driver for it. But you can go both ways here.

Now, let's talk about performance — performance considerations for NFV workloads. And this is super simple. As I was saying before, this not only applies to telcos; it applies to any enterprise that wants to take advantage of these features. End-to-end service performance is achieved through the individual components and the optimization of those individual components on your platform. Now, how are we doing that? On the compute side, when you are deploying NFV workloads, you need high-performance compute; you need to take advantage of high-performance compute components. In order to do that, what we recommend are the EPA features — the enhanced platform awareness features. The EPA features are really simple, and what we are doing is carrying them up from the Linux kernel to be implemented in OpenStack: taking advantage of hugepages, to use larger page sizes for the guests running on the hypervisors; taking advantage of CPU pinning as part of an NFV-centric installation — CPU pinning will keep the workloads aligned with specific physical resources in your hypervisors; and we also point to thread affinity as part of CPU pinning, and to NUMA awareness. So, do you actually want your application to cross a QPI link? You know, grab a resource and — oh, I have to talk to the other memory bank, or I have to get something from the other CPU to orchestrate this workload? Those things are going to degrade your performance.
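The EPA features just described (hugepages, CPU pinning, NUMA awareness) surface in Nova as flavor extra specs; a minimal sketch, where the flavor name and sizes are assumptions and the compute hosts must already be configured for hugepages and pinned CPUs:

```shell
# Illustrative NFV flavor; 8 pinned vCPUs, 16 GB of RAM backed by
# 1 GB hugepages, all placed within a single NUMA node.
openstack flavor create --vcpus 8 --ram 16384 --disk 40 vnf.epa
openstack flavor set vnf.epa \
    --property hw:cpu_policy=dedicated \
    --property hw:cpu_thread_policy=prefer \
    --property hw:mem_page_size=1GB \
    --property hw:numa_nodes=1
```

Any guest booted with this flavor then gets the pinning, hugepage backing, and NUMA locality the speaker is describing, without the VNF itself needing to know about it.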
So, these basic things — some people don't think about them, but they are critical and key when you are deploying NFV workloads. It's: how am I going to use these features that have been there forever as part of the Linux kernel, and how am I going to take them and make them production-ready for my OpenStack NFV workloads?

The other part is when you go to the data plane. It's like: okay, I did my compute, my compute is fine, but now let's go into the data plane. What kind of support do I need throughput-wise? Do I need to be really close to bare-metal performance? If you need to be really close to bare-metal performance, you have to look into features like SR-IOV and PCI passthrough. You know: I'm going to grab this virtual function and I'm going to provide it directly to the instance that I'm going to run. Or: I'm going to grab the complete PCI card and say, you know, I'm going to give 40 gigs of throughput to this baby. Maybe you don't need to do that, but maybe you do — we see everything in the field. And then we have other technologies. We have DPDK-enabled OVS. It won't get you as close to line rate as PCI passthrough or SR-IOV, but it will get you really, really close. And at the end of the day, the important part is to understand your use case and your workload, and how you are going to match that with OpenStack as the VIM in this case.

The other thing the data plane gives you is some sort of network QoS, you know, from Open vSwitch all the way to your physical infrastructure. Never forget your physical infrastructure. I talk to a bunch of customers, and sometimes they're like: well, I want to virtualize and I want to have everything on OpenStack because I don't want to manage networking. It doesn't work like that. You still need to manage networking. You still need to design your network.
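The SR-IOV and PCI passthrough options above show up on the Neutron side as the port's `vnic-type`; a hedged sketch, where the network, flavor, and image names are placeholders and the compute nodes are assumed to already run the SR-IOV agent with a whitelisted NIC:

```shell
# SR-IOV: a port backed by a virtual function (vnic-type direct) on a
# provider network, then boot the VNF attached to it.
openstack port create --network sriov-net --vnic-type direct vnf-vf-port
openstack server create --flavor vnf.epa --image vendor-vnf \
    --nic port-id=vnf-vf-port vnf-dataplane-01

# Passing through the whole physical function instead ("the complete
# PCI card") uses vnic-type direct-physical.
openstack port create --network sriov-net --vnic-type direct-physical vnf-pf-port
```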
It's not because you have an SDN that you don't have to take care of your network design. That's an important part.

Another approach and best practice is to monitor things. Like, please monitor your OpenStack infrastructure, or any VIM that you are using. Some of our customers only want to monitor the application running — the VNF — but they forget to monitor the OpenStack cluster. It's super important to monitor your OpenStack cluster. Monitoring your OpenStack cluster, and then monitoring also your VNFs and whatever you're putting on top of it, will give you not only an understanding of what you are running, but a correlation across your whole stack. And that's where you want to be. You want to do log aggregation not only for your application, but for your whole stack: for your hardware, for your OpenStack components, and for your VNFs. Because if you run into an issue, your people — your operations people, your day-two people — are going to be able to troubleshoot it in an easier and faster way by doing log correlation, with a lot of operations magic behind it.

Open source options that we are proposing: we are big proponents of Elasticsearch, Fluentd as part of our stack, and Kibana. We also have some Nagios compatibility with one of our older releases. We also believe that you should have the ability to integrate and monitor with your existing tools — I'm going to do traditional SNMP logging, I'm going to do syslog or rsyslog — but it's an important conversation to have. And this is a pretty picture that I grabbed from the Red Hat website about it.

Sample NFV use cases. As I said before, everything comes down to your use case, and how you are going to answer your use case with your platform. Your platform architecture should match your use case. Now, I have customers that say: I want to have one architecture, and I'm going to ask my VNF vendors to adhere to that architecture. That's one way to do it.
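The Fluentd-to-Elasticsearch pipeline proposed above can be sketched with a minimal Fluentd configuration; the log path, tag, and Elasticsearch host are assumptions for illustration:

```
# Tail an OpenStack service log on each node (path is an example).
<source>
  @type tail
  path /var/log/nova/*.log
  pos_file /var/lib/fluentd/nova.pos
  tag openstack.nova
  format none
</source>

# Ship everything to a central Elasticsearch, where Kibana dashboards
# correlate logs across hardware, OpenStack, and the VNFs.
<match openstack.**>
  @type elasticsearch
  host logs.example.com
  port 9200
  logstash_format true
</match>
```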
And then you design a really high-performance architecture that can take all kinds of use cases. And there are others that take a use-case-centric architecture, where they say: I'm going to design for my vEPC application, and I have this and this requirement for my vEPC application, and that's part of my design. Or I'm going to design for my vIMS application, or for my routing applications, or my IPS/IDS application. So you can take either path in your design, but the important part is that your architecture needs to match that use case. Please do not adopt OpenStack because it's the cool, trendy thing. Adopt OpenStack because you actually have a use case that you can fix and match with OpenStack.

Putting everything together — I'm talking a lot, and I'm going to leave time for questions at the end, I promise. So, putting it all together with Red Hat: Red Hat OpenStack Platform. This slide is a little bit outdated as of today, because we are releasing version 11 of our OpenStack platform today. Great news — and a couple of people in this room are responsible for putting that together. Thank you. So OpenStack, as I've been saying, is the VIM that we are positioning in the telco market. Why? Because it's highly modular, because it's open source, as I was saying. So this is OpenStack Platform 10 — some of the components of OpenStack Platform 10. Just imagine how you are going to take that use case for your telco application, or for your HPC use case, or for your enterprise application with high network demands, and fit it in here, getting answers to that use case. That's what we are doing, and that's the kind of conversation we have with our customers every day: how can we help you with our tooling to resolve that use case — to provide you a platform to run it. Now, why OpenStack for... why Red Hat OpenStack for NFV features?
And please notice that I am being really specific about Red Hat OpenStack Platform. Why are we doing this? Number one, we deploy a highly available platform. Our deployment is highly available by default, and that's our supported deployment: a highly available control plane. Number two — and this is super specific, and I see this driven more by the telcos — is composable roles. Composable roles is a feature that we are introducing with our installer, where you can install different kinds of roles. And when I say a role: you could deploy a compute node with SR-IOV and have a role for that, or you could deploy a compute node with DPDK and have a specific role for that. So at the end of the day, your deployment will have multiple roles, depending on the kind of application you want to run on each hypervisor. The other part is the EPA I was talking about before — the enhanced platform awareness. We have the tooling in the code of our installer to enable all of this, so by using director, our Red Hat OpenStack installer, you can enable all of this by default. We also have SR-IOV from scratch there. We have director support for OVS-DPDK. We have OpenDaylight right now in tech preview, at least in 10. We have real-time KVM in tech preview — so if you want a really high-performance hypervisor, you can get that with real-time KVM. And we also have supported HCI, hyperconverged infrastructure. This is a use case that we see in telco for the edge offices: I want to have a small, consolidated cluster in my edge location, and you can put that there, putting storage and compute together on the same nodes. Besides that, we have other OpenStack features like Nova device role tagging, and we have VLAN-aware VMs right now in tech preview in 10.
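Composable roles are expressed in director's roles data file; the excerpt below is a heavily trimmed, hypothetical sketch of a DPDK compute role — the role name follows the usual convention, but a real role definition carries many more services:

```yaml
# Excerpt of a custom roles_data.yaml for director; abbreviated and
# illustrative only -- a production role lists many more services.
- name: ComputeOvsDpdk
  CountDefault: 0
  HostnameFormatDefault: '%stackname%-computeovsdpdk-%index%'
  ServicesDefault:
    - OS::TripleO::Services::NovaCompute
    - OS::TripleO::Services::ComputeNeutronOvsDpdk
    - OS::TripleO::Services::Ntp
```

An SR-IOV compute role would look the same, swapping in the SR-IOV Neutron agent service, which is how one overcloud ends up with differently tuned hypervisor pools.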
Now, for the storage side — I don't know if you guys remember, we talked about compute, networking, and then storage in our recommendations. Our answer to that right now is Ceph. I love Ceph; I've been a Ceph lover since before Red Hat's acquisition of Inktank. So if you want highly distributed storage, Ceph is one of the ways to go, and within the OpenStack community it is really liked and widely deployed. If you look at the latest OpenStack Foundation user surveys, you will see that Ceph is one of the big choices for deployment, and that's what we are pushing. But besides that, as I said, we certify and integrate with multiple third-party SAN and NAS products for our OpenStack platform.

CloudForms — I want to talk about this one because I'm a big fan, and I put it there. How many of you know what Red Hat CloudForms is? Okay, only a couple of you. Please take a look at it. We have an open source, community-based project called ManageIQ, and Red Hat CloudForms is the product we make out of that open source project. So, this is about operations. How many of you are operators here? Okay, a couple of operators. This is what people don't want to talk about. When people are doing cloud computing, they say: oh, I'm going to go to the cloud. Yeah, but when you are in the cloud, what are you going to do in your cloud? How are you going to manage your cloud? CloudForms is crucial in our approach to managing OpenStack. Why? Because you get a single-pane-of-glass experience: you can gather metrics, you can see performance in your own cloud — and those are Red Hat-centric terms for our distribution, but some of you know what I'm talking about. You get dashboards — custom dashboards — and you can do automation of your application stack. And just last week we announced full support of Ansible as part of Red Hat CloudForms.
That means that you can manage OpenStack end to end, and you can also orchestrate and automate with Ansible within CloudForms for OpenStack. Back to Ansible again — you could replace this with whatever you're using right now in your enterprise. We are Red Hat, so we are an Ansible company, but why Ansible? I've had this conversation with people in this room already, so some of you — we've talked about this already. Why use an orchestration and automation tool as part of your NFV deployment? How are you going to manage the life cycle of your application? How are you going to update? How are you going to scale up and down? Ansible can be the answer for that. Ansible is simple, it's powerful, and it's agentless. So that's why we think Ansible is the right fit as an orchestrator and as an automation tool with OpenStack — for any kind of workload, to be honest.

Operational tools. For operational tools, we have full support of clients for remote monitoring. Our Red Hat distribution comes with agents, so we integrate with our customers' tools, but at the end of the day, what we want to provide is data: all the metrics, all the data you need to be able to operate your platforms. So for availability, we have Sensu — a Sensu agent that you can plug into your Sensu console — and we have Fluentd for log aggregation and log correlation. We also have a reference implementation, so sometimes you say: hey, Red Hat, what do you have? We have that collection of tools available for you to build your own dashboards: that's the opstools-ansible project. All these tools are deployed through Ansible, and we're able to give you a fairly comprehensive suite. I mean, we don't support the whole stack — it's tech preview — but you can at least build your dashboards until you get to production and until you integrate all this data into your own enterprise tools.
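A day-two task like the scale-out just described can be sketched as an agentless Ansible play. The module shown matches the 2017-era `os_server` OpenStack module, and the cloud entry, image, flavor, and network names are all placeholders:

```yaml
# Hypothetical playbook: scale a VNF tier out by booting one more instance.
- hosts: localhost
  connection: local
  gather_facts: false
  tasks:
    - name: Boot an additional VNF worker
      os_server:
        cloud: mycloud          # entry in clouds.yaml
        name: vnf-worker-03
        image: vendor-vnf
        flavor: vnf.epa
        network: vnf-data
        state: present

    - name: Tear it down again when scaling back
      os_server:
        cloud: mycloud
        name: vnf-worker-03
        state: absent
      when: scale_down | default(false)
```

Because the modules talk to the OpenStack APIs directly, nothing has to be installed inside the VNF images — which is the agentless property the speakers highlight.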
And by the way, kudos to the CentOS SIG, because they've been maintaining that OpsTools repository on the server side. So it's a full OpsTools project, and you are really welcome to contribute — we're always looking for people to contribute to the community.

Now, everything together. I know you remember this image from before; it was kind of red, blue, and gray. Now, how are we filling in the puzzle? We have Red Hat Satellite, Ansible, OpenStack Platform — everything put together for the NFV infrastructure. And also, how do we plug in the partners here? We have several partners, as we've been in the telco space, and all those partners usually come into our architecture through our VIM and our environment. So that's pretty much how we fit everything together — with Ceph, with OpenStack, with Ansible, with CloudForms. You can see pretty much every product that we have on this slide. And that's a good use case — we've seen this in the field, people actually deploying this kind of stack for their NFV workloads.

How are we on time? We're fine? I think we're good. I guess we have time for questions right now, and we are using these two microphones. Anyone that wants to ask a question? Any questions about what we just talked about? That was just crystal clear, I guess. No questions? Well, if you have a question and you don't want to ask it right now, I will be around here, and I will be at the Red Hat booth — please stop by. We have a lot of Red Hat people, experts in multiple technologies, and we will be more than glad to talk to you about anything concerning Red Hat. Also about baseball, you know — we are in Boston — so about pretty much anything. Thank you very much for coming by, and have a great summit.