Hi, good morning. Anyone in the room who wasn't here for the first talk? Oh, there are. OK, so for you: my name is Tarek Khan, and I'm part of HP's Network Functions Virtualization business unit. I've been working with cloud and OpenStack for a while, but for the last year, year and a half, I've been focusing on how we can use OpenStack for running network workloads. I'm joined here by my colleague, Arun Thulasi. What we're going to talk about in this session seems to be a pretty hot topic now; going through the list of talks, there's one right after this, and another in a different room, that are essentially trying to solve the same problem. When we use OpenStack for network deployments, the network inherently has a shape, and some of these network "data centers," if you want to call them that, are, or could be, rather small. So what are the options for introducing cloud technologies to carriers, specifically wireline carriers, in small data centers, going all the way down to a CO, a central office, or a POP, where you're probably going to have fewer than 10 servers? That's what we're going to focus on. First, we'll share what a micro data center may be; I think I gave that away a little bit. Then, what are the options for deploying OpenStack at a micro data center? Then we'll get into one of the primary use cases for micro data centers. There are others as well, but we're going to focus on virtualizing the CPE, the customer premises equipment. We'll talk about what vCPE really means, and then some of the HP solutions we've been working on to address this problem. With that, I'm going to hand it over to Arun.

Thanks, Tarek.
So, fundamental question: what is a micro data center? Telcos have been pouring a lot of money into building large data centers. With the advent of network functions virtualization, you'd be able to host a single central data center, or a failover pair, with additional services pushed out to a number of micro data centers. As you can see in this topological example, you have one central data center, and then a number of micro data centers that are much smaller versions: they don't have the power, space, and cooling requirements of your major data center, but they extend some of the key services built into your primary data center, and in certain cases push them out to, for instance, the customer premises equipment. What this allows the telcos to do is centralize services in the large data center and scale out only the necessary services into the micro data centers as appropriate. If you have a vCPE use case, you push out only those services. If you have a mobile base station or a different kind of use case, your footprint is very different. So in essence, you could have a number of satellite data centers that operate around a central data center.

So what are the characteristics of a micro data center? First, it should be resilient. We cannot give up the ability to provide high availability as the cost of going small; the environment should be as resilient as any other telco data center. Second is proximity. Depending on what kind of services you extend, and who your end customer is (an enterprise, a residential customer, anyone with a mobile phone), you need your micro data centers closer to the end user. Third, it needs to be cost effective; a mantra of NFV is being able to reduce your cost. And lastly, you cannot sacrifice performance for scale.
The data center performance metrics you have should apply to both your micro data center and your centralized data center. What this effectively means is that, except for the cost factor and the number of servers you put in, you expect the same service level and availability from your micro data centers as you would from your central data center.

So here's an example, again to tie it back to the use case we've identified. Today, you have customer devices at either edge, and then you have provider edge devices running out of your micro data center that could offer a connection to all your customer endpoints. The top half is how a legacy environment would look: provider edge equipment, fairly large in size, hosted on dedicated hardware. Moving forward, we'd be able to virtualize that and run it in a much smaller environment, as a micro data center, effectively connecting the same users and providing the same performance and availability. We talked about what kind of devices we have on the data center edge; what are the possible devices on the customer edge? That depends on what kind of customer and what kind of workloads you have at the customer end. For a large vCPE, you typically have a classic server, a 1U server or a group of 1U servers, that provides the service. Going down, you scale it as low as you can go, based on the services you need, and the smallest vCPE is effectively a thin processor providing just gateway capabilities for whatever sits behind the vCPE environment.

So we now have an understanding of what a micro data center is and some of the use cases. What are the challenges we see today in having OpenStack help us deploy a micro data center? Yeah, standard disclaimer: some engineers are usually harmed any time we try something like this.
The first requirement for a micro data center is the ability to allow some kind of service chaining mechanism. Going from a traditional environment to an NFV-based environment, we should be able to stitch together services without impacting the end user's service-level availability. So what is service chaining? I'm sure the term has been bandied about quite a lot. The way we define service chaining is the ability to bring together a number of different network functions to provide a seamless service, typically software-driven. There are two halves to it. A service chain needs to be network-aware, in a sense: if there are changes happening in your network, your service chaining mechanism should be able to react based on the patterns you're observing. For instance, your users suddenly generate a high spike in traffic; in that case, you should be able to automatically scale up your load balancer, or scale it out by adding another instance, and seamlessly provide the same kind of service by being network-aware. Service chains should also be subscriber-aware, in the sense that depending on who the subscriber is, he or she should be able to authenticate and receive a certain service level. That's how we define service chaining. And one of the challenges OpenStack has is: how easy is it for us to add a VM, and once we've added a VM, how easy is it to, for instance, steer traffic to it? Those are two different challenges. OpenStack does very well on one half of this problem: you can scale a VM up or down while it's live, so you can provide some kind of network-aware facility where your environment scales. But how easy or hard is it for you to actually steer traffic to that new VM you've created?
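The two halves described above can be sketched in a few lines. This is a toy model only, not Neutron or any OpenStack code, and all names in it are hypothetical; it just shows why adding a VM (the easy half) forces the traffic-steering rules to be regenerated across the whole chain (the hard half):

```python
# Toy model of service chaining: network-aware scale-out of a VNF,
# plus regenerating the steering rules that push traffic hop-to-hop
# through the chain. Illustrative only; all names are hypothetical.

class ServiceChain:
    def __init__(self, functions):
        # functions: ordered list of (name, [instances]) pairs
        self.functions = {name: list(insts) for name, insts in functions}

    def scale_out(self, name, new_instance):
        """The 'easy' half: adding a VM to one function in the chain."""
        self.functions[name].append(new_instance)

    def steering_rules(self):
        """The 'hard' half: every instance added above changes the
        steering rules (OVS flows, upstream switch config) that must
        be re-propagated across the whole environment."""
        rules = []
        hops = list(self.functions.items())
        for (_, src_insts), (_, dst_insts) in zip(hops, hops[1:]):
            for s in src_insts:
                for d in dst_insts:
                    rules.append((s, d))
        return rules

chain = ServiceChain([("firewall", ["fw-1"]), ("load-balancer", ["lb-1"])])
assert chain.steering_rules() == [("fw-1", "lb-1")]

# Traffic spike observed: scale out the load balancer...
chain.scale_out("load-balancer", "lb-2")
# ...and note that the steering rules must be recomputed everywhere.
assert chain.steering_rules() == [("fw-1", "lb-1"), ("fw-1", "lb-2")]
```

The point of the sketch is the asymmetry: `scale_out` touches one list, while `steering_rules` has to be rebuilt end to end, which is exactly the part OpenStack is still working out.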
How easy is it for you to propagate your rules across your entire environment, from OVS to OVS on each of the hosts, up to your upstream switches? That's the challenge OpenStack is still trying to figure out. If you look at the items listed down below: dynamic modification of the VM infrastructure is a problem we're addressing very well, but dynamic modification of the networking infrastructure, to address the subscriber-aware half of the problem, is where a lot of work still needs to come in. There are efforts going on: the Neutron service insertion blueprints, and a lot of discussions on how Neutron can support the subscriber-aware side. There are newer projects; there was quite a buzz around a start-up when it was mentioned yesterday. The efforts are coming in, but we need more progress there so that service chaining can be supported end to end.

And Arun, just to put a quick plug in there: if you're interested in the solutions we have for service chaining, please stay around for the next session, where our colleague Shalomi is going to talk about some of the SDN controller solutions we have and how you can create these complex, subscriber-aware service chains.

Thanks, Tarek. So, that was service chaining. The second major challenge is scalability. Today, we need to identify what I'm going to call a golden ratio between controllers and the number of compute nodes they can actively manage. There are efforts happening on that front within various projects; for instance, Rally tries to produce scalability numbers. However, the metrics we use today for internal OpenStack scalability benchmarking are widely different from what telcos require. For instance, the Rally tests today are localized to a single environment; micro data centers break that assumption right out of the gate.
You are going to have compute nodes located in completely different sites, geographically apart, connected through various L3 mechanisms and various tunnels. The current testing and benchmarking mechanisms do not account for that. So we need to identify and publish metrics that look not just at one very specific deployment model, but at a truly carrier-representative deployment model. We need to achieve this scalability with HA, and we need to achieve it with performance. OpenStack controls the stack from end to end, so it is important for us to provide availability and performance end to end as well.

Security. Again, this is one area where in-house tests do not stand up to actual environments and actual challenges. When you have a micro data center, it's going to be located somewhere that is not necessarily kept under lock and key, and you're also going to deploy your services onto customer premises over which you have practically no control. So there are unmanned sites, third-party sites, third-party networks; it's essentially a recipe for trouble when it comes to security. So there has to be an identified mechanism not just to secure the existing data center; the mechanism needs to extend to cover these real-world cases: unmanned sites, third-party networks, third-party sites, third-party equipment. This includes securing the network, securing the host, and securing the virtualization platform. And securing the virtualization platform is key, because OpenStack supports KVM, a widely used hypervisor and an open-source technology. What happens when such a widely used open-source technology gets hit by a security bug? How soon do we react to it? So there are a number of different security challenges that need to be addressed as well.

And lastly, VNF support. An NFV environment truly succeeds through its ability to bring in VNFs that can be easily onboarded.
Today there are, again, efforts going on in this space. There is an effort to build a marketplace, such as Murano; Heat tries to do some orchestration. But we are still plagued by challenges, for instance across multiple OpenStack releases and cadences. A VNF gets certified on one platform, let's say Juno. Six months down the line, Kilo comes out. How easy or hard is it for the vendor to re-certify? Is there a way they could say: I don't use any of the new features coming in with Kilo, none of my existing functionality is impacted by the changes the community has made, so my product is automatically re-certified on Kilo? How easy can we make it for an external VNF partner to certify their applications? Because in a truly micro-data-center environment, you're going to have various versions of OpenStack running throughout the globe. So you should have a mechanism or framework that helps VNF vendors easily onboard their applications. With that, I'll pass it back to Tarek to talk about the HP strategy.

OK, thanks. So, we talked in general about what a micro data center could be, and you may remember that long diagram that tried to show how connectivity is provided and what the different connectivity options are. Let me quickly go back there so I can talk a little bit about this. There are two parts of the CPE market that we generally talk about: the residential side, the way we get wired internet at home, and the enterprise side, the way organizations get internet services. At the heart of it, they use the same thing: you start with a customer edge device, the device sitting at the premises, be it the enterprise or the home. It goes into some kind of router, and that router connects to everything else out there.
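The automatic re-certification idea above can be made concrete with a small sketch. This is entirely hypothetical (no such framework exists in the talk; the API names and versions are made up): a VNF declares the platform APIs it depends on, and if a new release still provides all of them unchanged, it could be declared re-certified without a full test cycle:

```python
# Hypothetical sketch of VNF self-re-certification across releases.
# A real release changes far more than an API list can capture; this
# only illustrates the contract a certification framework might check.

juno_apis = {"nova.boot": "v2", "neutron.port_create": "v2"}
kilo_apis = {"nova.boot": "v2", "neutron.port_create": "v2",
             "neutron.port_chain_create": "v1"}  # new in Kilo, unused by the VNF

vnf_requires = {"nova.boot": "v2", "neutron.port_create": "v2"}

def recertifies(vnf, old_release, new_release):
    """The VNF re-certifies automatically if every API it used on the
    old release still exists, at the same version, on the new one."""
    return all(new_release.get(api) == ver == old_release.get(api)
               for api, ver in vnf.items())

# The VNF ignores the new Kilo feature, so it carries over cleanly.
assert recertifies(vnf_requires, juno_apis, kilo_apis) is True
```

A framework like this would let a partner skip re-certification exactly when none of its declared dependencies changed, which is the question posed above.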
This router goes over to some kind of aggregation router, a provider edge router; on the other side is the customer edge. The main thing is that a company like HP, or your own organizations, with multiple locations in multiple regions, needs to connect those locations together with a VPN, and right now carriers provide this service. How can carriers make adding and removing networks, and adding and removing additional services, faster? The carriers want to be able to provide a firewall service; they want to be able to provide WAN optimization. How can we provide this very quickly? This is where NFV comes to the rescue. Essentially, if you have NFV, if you have virtualized your different data centers (and carriers typically have large data centers, regional data centers, and central offices), how far out can we push it? Today, central offices don't have any x86 servers; they have rows and rows of switches and perhaps routers. If you're able to put x86 servers there and manage them remotely, that opens the door for carriers to provide additional services on top. And it is a significantly big market, this vCPE. This slide is from the ETSI NFV use cases they have identified, and this is very, very conservative: 8.2 billion. It's a pretty big market where we can start introducing NFV. At the heart of it, what we're saying is that, in the past, for making any change, the carriers had to roll a truck out to the end site, and typically, for an enterprise to get connected, they have to roll a truck two or three times, which is expensive.
If you're able to roll a truck once, or maybe not at all, and we're going to talk about how we can distribute these functions out, being able to ship equipment that the customer can hook up themselves, with the rest done remotely, that brings in a lot of efficiency and opens up new markets for carriers. So instead of these physical devices, you virtualize: you can virtualize at the customer premises, at the operator premises, at the CO, at the regional data center, or at the main data centers. Now, the main data center problem is being addressed by OpenStack in general. What we wanted to talk about here is: how do we go out to places where you're not going to have many servers, where you don't want to dedicate a couple of servers just for running control plane services? You're just going to have eight to ten servers, maybe in some cases two or three. How do you bring NFV closer to the edge?

Copies of these slides are available, but what I wanted to call out with this slide is that, as we were saying, networks inherently have a shape, and that shape means it's not that amorphous cloud where you connect and it doesn't matter where things are. If you have the customer side and the carrier side, there's value in virtualizing the equipment sitting at the customer side. Even this building, if you go and look at its network closet, has racks of equipment, and quite likely not much of it is x86; it's purpose-built. If you're able to put x86 there, then you're able to apply IT-style tools to manage it.
So you can do that, or the other option is, and I apologize, this is what I was getting at: if you virtualize this, you're able to put some functions right there at the customer side. Or you can take those functions and put them in your CO or POP, your central office or point of presence. The idea, again, is that adding or removing services when they're virtualized is a lot easier than having to go in, plug in a rack of physical equipment, wire it up, and work around it. And you may have a combination of the two. So this is the use case that a micro data center is trying to address: within the service provider, closest to the edge, closest to the customer, how can we bring OpenStack in? And to do this, for folks who were here in the last session, we absolutely want to use open source and OpenStack. HP's vision for the IT of the future is that it's going to be very much based on open source; that's the reason we're here, with more than 5,000 people. And the programmability of the infrastructure is going to be more and more developer-led. Developer-led really means that you shouldn't have to be at the console of the equipment to make changes; you should be able to configure the environment through APIs, through scripts, through programs. HP's vision is that the world is going to be more and more like that. And for telcos, we want to bring in IT-style cost structures and agility, all the things I just talked about. Once you virtualize components and put them on x86, you're able to apply the IT practices that have been working in data centers for quite a while. So what would a micro data center deployment framework look like?
Typically, and this is what most carriers have, you have one or more central data centers; most have at least two to provide redundancy. Then, between the central data center and the customer premises, you're going to have another set of data centers, which we're calling micro data centers. Now, there are a number of different ways you can deploy this. You could have a separate instance of OpenStack at each site, and there was a lot of discussion at this conference around federated OpenStack. Discussions have started happening, and in a couple of release cycles federation is going to become mainstream, where from one location you're able to reach out to other OpenStack deployments and get a single, federated view. So the micro data center sits in between, and depending on the country or region you're in, you may have between 10 and 20 micro data centers per carrier; in larger geographies like the US, Russia, or Brazil, you may have more than that, you may have hundreds of micro data centers. But each of these is going to be contained; it's not going to have that many servers. The idea is that you put your shared services in the central data center, and those shared services are quite likely going to be very operator-specific, but then you're going to have some local services running at the micro data centers: at a minimum, a local copy of the Neutron and Nova services.
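The deployment-shape decision described in this section, and revisited in the Q&A at the end, can be sketched as a small sizing rule. The thresholds (two servers at the low end, eight to ten at the high end) come from the talk; the function itself and its field names are hypothetical:

```python
# Sketch of the micro data center deployment-shape decision:
# given a site's server count, run hybrid controller+compute nodes,
# headless compute nodes managed centrally, or a small dedicated
# local control plane. Thresholds are from the talk; the rest is
# an illustrative assumption.

def micro_dc_shape(servers, allow_headless=False):
    if allow_headless:
        # Control plane stays at the central site; every local
        # server is a headless compute node.
        return {"control": 0, "compute": servers, "hybrid": False}
    if servers <= 3:
        # Too small to dedicate servers: co-locate control and
        # compute on the same hosts ("hybrid" nodes).
        return {"control": 0, "compute": servers, "hybrid": True}
    # Eight to ten servers leaves room to dedicate a couple of
    # hosts to a local control plane.
    return {"control": 2, "compute": servers - 2, "hybrid": False}

assert micro_dc_shape(2) == {"control": 0, "compute": 2, "hybrid": True}
assert micro_dc_shape(10) == {"control": 2, "compute": 8, "hybrid": False}
assert micro_dc_shape(5, allow_headless=True)["compute"] == 5
```

The trade-off in the `allow_headless` branch is the one raised by the audience question later: you reclaim all local servers for workloads, but the control plane's availability now depends on the link back to the central site.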
Here's a slightly different view of the same slide from earlier. Now that you have this centralized data center, that's where the IT-style deployment lives, the larger OpenStack deployment. You'll use some kind of orchestrator, and the orchestrator sits in the central data centers. The micro data centers have a local copy: in certain cases you'll run the entire control plane, all the OpenStack services, locally, but in other cases you could potentially run them without the control plane, with what we're going to call headless compute nodes. And then it extends out to the customer premises. One of the solutions that makes this possible is an integrated, OpenStack-based deployment, because the key requirement here is that these micro data centers are, by definition, remote and unmanned, so you want to make them as low-touch or zero-touch as possible. To do that, HP has a solution called NFV System, and my colleague Arun is the lead architect for it. Arun, maybe you can say a couple of words about it.

So, Tarek talked about the challenge that every time you need to expand a service, you have to roll a new truck; that equipment needs to be configured, there have to be on-site services, and it takes a week or probably even longer to get the services up. How do we address those problems by simplifying the deployment process, essentially the expansion process? NFV System is the answer HP is looking at. You should be able to buy the specific blocks of resources you require, so it should be a solution that's easy to buy. Secondly, it should be easy to deploy.
You roll in the rack, you power it up, and it should auto-configure itself based on pre-identified parameters, reach out to the central data center, and start its services. It should be operated as one unit: you have a central data center, and that central data center effectively controls all your micro data centers, so you should have an overall view of the entire environment, central plus the micro data centers. It should be easy to operate. And finally, it should be easy to support: regardless of which component or which domain has an issue, you should be able to come back to one vendor, one solution provider, to address it. That's effectively what NFV System is trying to accomplish.

If you look at the overall process of how this is laid out: you have a central data center running what we're going to call the NFV System starter kit, which provides servers, storage, networking, Helion OpenStack Carrier Grade, and an SDN controller, all those components. If you decide you need a micro data center, it has to be an extension of your central data center. So you gather the requirements for the micro data center: is it going to be storage-heavy, networking-heavy, compute-heavy? Once you make that determination, you place a single order, and based on the configurations you have in the starter kit (because your micro data center is an extension of your starter kit), the entire configuration goes to HP's factory, where the system is pre-built to match your specifications. That is then shipped to the various micro data center sites, with the system configured to reach back to the central data center. Once you rack it in and power it up, it's pre-cabled and pre-installed.
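The zero-touch flow above (factory-set parameters, phone home, come under central management) can be simulated in a few lines. Everything here is a made-up illustration of the flow described in the talk, not the NFV System implementation: the site IDs, profiles, and service lists are invented, and a real system would use authenticated transport rather than an in-memory dict:

```python
# Toy simulation of zero-touch micro data center bring-up:
# a freshly racked site boots with parameters baked in at the
# factory, registers with the central data center, and receives
# the services it should start. All names are hypothetical.

central_inventory = {}  # site_id -> registration record at the central DC

factory_params = {      # burned in when the system is pre-built
    "site_id": "micro-dc-017",
    "central_endpoint": "https://central.example.net",  # placeholder
    "profile": "compute-heavy",
}

def phone_home(params, inventory):
    """First-boot registration: announce this site to the central
    data center and receive its service configuration."""
    inventory[params["site_id"]] = {"profile": params["profile"],
                                    "state": "managed"}
    # The central DC answers with the services for this profile.
    services = {"compute-heavy": ["nova-compute", "neutron-agent"],
                "storage-heavy": ["cinder-volume"]}
    return {"start_services": services[params["profile"]]}

answer = phone_home(factory_params, central_inventory)
assert central_inventory["micro-dc-017"]["state"] == "managed"
assert "nova-compute" in answer["start_services"]
```

The design point is that the site never needs a local operator: everything it must know to join the environment is decided at order time and shipped with the rack.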
Once you rack it in and power it up, it's able to reach out, through the existing mechanisms you have or the SDN mechanism we provide, to host any of the VNFs you've deployed. So from start to finish, the solution aims to be easy to order, easy to deploy, easy to operate, and easy to support. That's how HP is trying to address the micro data center challenge. With that, I'll pass it back to Tarek for any closing comments.

So folks, we're here for a couple more minutes, so if there are any questions, we're happy to take them. I just wanted to close out by saying that, around micro data centers, there are a lot of other efforts going on. If you're interested in this, I encourage you to also look at a talk right after this one, on something called CORD, C-O-R-D, Central Office Re-architected as a Data Center, which addresses the same use case; ON.Lab is working on it. This is a problem that all the wireline providers, and also wireless providers, are looking at: how to have data centers that are very small in size but are managed like a data center, to get as close to the edge as possible. Any questions?

Sorry. At the beginning of the talk, you mentioned that, in terms of OpenStack architecture, you don't want a server dedicated to acting as a controller in the remote data center, because that consumes resources you could use for the functions on site. But I'm wondering, this means somehow keeping the controllers in a central place, and in terms of high availability, yes, you have the high availability of the hypervisor, but then you lose that of the control plane. How are you thinking of managing these kinds of issues?

So I think what we were suggesting for micro data centers was not to dedicate servers to controllers. It doesn't mean you don't have control services there.
Because micro data centers, the way we're defining them, go from two servers up to eight or ten servers, what you want to do is this: if you just have two servers, you use, quote unquote, hybrid control-and-compute nodes, where you're not dedicating servers to control. But when you go to eight or ten servers, you have a little more room and you can dedicate some of them.

OK, so you are still thinking of a sort of multi-region scenario?

Quite likely. And that's where OpenStack as a community needs to work a little more on coming up with a common deployment model that is manageable everywhere. Right now, OpenStack regions are one of the options to look at for this; cascading is another; there are a number of different options. I think it's important for us to converge on one option and use that as our strategy for micro data centers. Right now, with OpenStack, there's a lot of effort going into future features, and there are some things available today. That is why, for this deployment, we very strongly suggest you keep an orchestrator, and perhaps, in near-term deployments, you go for separate OpenStack deployments and bring everything together with that orchestrator. As the region concept matures, as the distributed deployment concepts mature, you try to leverage them and centralize as many functions as possible.

OK, thanks.

Any other questions or comments? Well, folks, thank you very much. And like I said, for anyone who's interested, the next session is on our SDN controller and service function chaining, which is very closely linked to what we just talked about. So thank you again.