So, how are we doing so far? Good, good. It's about bedtime in California, but I promise I will not fall asleep, and hopefully I will not bore you to sleep either. There are still two minutes to go. This is my first time in Paris; so far I've spent most of my time in the hotel room, so I can't judge it yet. I want to apologize first: we prepared a lot of material for today's presentation, but it may not be perfectly organized, so please bear with us. After the presentation, if you have any questions, ideas, or comments, or if you want to start an open source project, please do let me know. Okay, let's get started. My name is Xiang Chen. I work for Huawei R&D USA, where I'm currently leading an NFV lab in Silicon Valley. My colleague Prakash has just officially become a Huawei employee as well. Today we are going to share our observations, our findings, and our vision about NFV, and how OpenStack can help us through this NFV journey. I'm not sure how many of you know what NFV is, so I put up a brief definition. Basically, NFV is network services running on cloud infrastructure. It's just like cloud computing, the Amazon business model in the server industry, now applied to networking. NFV is going to be, in our opinion, very disruptive, and the disruption will not be limited to the telecom industry; it will also impact the IT and cloud industries. Before NFV, vendors like Huawei, Cisco, and Juniper were considered box companies: all our solutions were built inside a box. In this so-called appliance model, for every appliance, every box, we can commit to a service quality. For example, for things like EPC, BNG, IMS, firewalls, and CPE, we can say: we support one million subscribers per box, with guaranteed reliability.
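To make the contrast between "guaranteed reliability per box" and the NFV approach concrete, here is a back-of-envelope sketch. The numbers are assumptions for illustration, not Huawei's or any carrier's actual figures, and it assumes independent failures and instant failover: in the cloud model, availability comes from running redundant instances on commodity hardware rather than from hardening one box, and the combined availability of N independent replicas is 1 - (1 - a)^N.

```python
def combined_availability(per_instance: float, replicas: int) -> float:
    """Availability of a service that survives as long as at least one
    of `replicas` independent instances is up (assumes independent
    failures and instant failover -- an idealization)."""
    return 1 - (1 - per_instance) ** replicas

# A single commodity VM at an assumed 99% availability is far from
# carrier grade...
single = combined_availability(0.99, 1)
# ...but three independent replicas approach "six nines" territory.
tripled = combined_availability(0.99, 3)
print(f"1 replica:  {single:.6f}")
print(f"3 replicas: {tripled:.6f}")
```

This is the arithmetic behind "availability is more important than reliability" later in the talk: individually less reliable parts, combined at the application layer, can exceed the single hardened box.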
As we evolve into the NFV model, it looks more like this. The comparison is: the box is a purpose-built system for specific applications, but in the NFV paradigm everything runs on top of commodity hardware and commodity systems. In the box, pre-NFV, everything is tightly coupled: hardware and applications are vertically integrated. In the post-NFV paradigm, everything is loosely coupled: applications and hardware come from different vendors, solutions become layered, and the stack becomes multi-vendor. The box solution is optimized for low latency and high throughput; the NFV model is optimized for flexibility and efficiency. And in the NFV model, availability is more important than reliability. Now, leading carriers are already seeing traffic volume grow 50% every year, so in three years total traffic will be three to four times today's volume. But revenue is deteriorating, so most carriers will not invest more. What they expect from NFV is that, hopefully, it gives them a way to reduce cost, to expedite innovation, and eventually to develop new income. This is a typical NFV component architecture. At the bottom is the physical infrastructure. The red box is basically where we use OpenStack: it manages all this virtual infrastructure. On top of that we create higher-level components called virtual network functions, individual functions like firewall, security, video optimization, all kinds of network functions, together with a virtual network function manager. On top of that is an orchestrator.
The NFVO, the NFV orchestrator, links all these different virtual network functions together so that they eventually become a virtual network service. That's the NFV architecture. The next slide shows, at the bottom, a typical IT cloud platform, and at the top what NFV is trying to do: build all these telco workloads, things like EPC, BNG, IMS, on top of that cloud platform. But in between we currently face a lot of challenges. For example, since this is a layered architecture, how are we going to divide the cost, and which layer is going to capture more of the money? Also the SLA: how are we going to define the SLA? How about security? How about performance? We are really facing a lot of problems, not to mention the blue part: because of this distributed-cloud kind of business model, customers, carriers, and users all have higher expectations about the whole thing, so it becomes even more challenging. Here we share some of our real observations from Huawei's point of view, in three or four different areas. From the performance point of view, with the hypervisor we are seeing significant performance loss, and there is no real-time support, things like that. From the resource management point of view, OpenStack today is designed as a stack, from a system-engineering point of view, from bottom to top; for NFV we are trying to see whether, just like the container ecosystem, we can add another layer of abstraction to manage all the resources. The other challenge is stability: we are seeing that after we scale an OpenStack cluster up to hundreds of machines and hundreds of VMs,
we hit a lot of scalability issues and a lot of bugs. That suggests that in OpenStack testing, maybe we need a better mechanism or environment for testing this kind of large-scale deployment. There are also real challenges from the education and communication point of view; these are challenges we are facing within our own company. To address them we have some ideas, but before we look into those, let's get in sync on some trends in history. The first one is computing itself. The whole computing industry evolved from the mainframe up to the personal computer. The trend is that everything starts shared and centralized, very few in quantity, very big in size, very generalized; then it becomes decentralized, more and more distributed, smaller and smaller in size, greater and greater in quantity, much closer to the users, with the users in better control. It becomes more user-driven, moving from one-to-many to one-to-one, the personal computer kind of paradigm. We are seeing that a lot of things in the ICT industry are still in this so-called mainframe model: our routers, our EPC, our BNG. Even today's cloud, in our opinion, is a mainframe model. So very soon, according to our prediction, something will happen: something called the personal cloud, a new "PC", will come out. What is a personal cloud? Maybe we can talk about that some other time, but for the personal cloud to happen, NFV will play a very big role. The second trend to share is modularity: layers and abstraction. I think all of you are very technical.
You know this, maybe better than I do. We have the hypervisor abstraction, we have the SDN controller; all these layers and abstractions are designed to make systems easy to evolve, easy to manage, and easy to understand. This is another one: cattle over pets. In the cloud paradigm we should design things and use resources like cattle. Cattle are disposable; the instances are not unique; we can replace them if necessary, and we can shoot a cow when it's sick. It's not like a pet, which has a name. Here is another thing, and I think a lot of people, especially in the telecom industry, need to be educated about this. Before, we always talked about five nines in a hardware appliance. As we move toward virtualization and the cloud paradigm, the availability responsibility shifts to the application layer, and the hardware becomes more and more commoditized, individually less available and less reliable. Before we move on to the next vision, let's look at how the network works today, starting from our mobile phone. If we access an OTT website from a mobile phone, you go through a terminal, then an access network like Wi-Fi, 3G, or 4G, then a core like the EPC if you are on wireless, then a service edge, and eventually you reach the internet and the OTT service. That's the general telecom network infrastructure. Now, across this access, core, and service layer, traditionally if we have, say, more than 10K traffic streams coming in, all of them go through a shared pipe, then a very big shared box, then another shared pipe, another box, another shared pipe, another box.
Inside each box there may be many individual components. The characteristic of this picture is that everything is shared: the resources and pipes are centralized and fixed, serving a lot of clients. The quantity is very few, the size very big, and every box can only be generalized; it cannot be customized for any specific user. Every box is very complex, and every box is very expensive. In our point of view, this is the so-called mainframe model, and before NFV this was the only model possible. That's why carriers spend big money on these expensive boxes, buying from vendors like Huawei. After NFV, we see another possibility: instead of going through one shared box, at the extreme, every user can have a dedicated virtual link, each link can have dedicated virtual appliances and virtual machines, and the whole network topology can be different, personalized, customized. The characteristic is that everything is dedicated, secured, isolated, distributed, mobile, flexible. The quantity is large, the size very small, everything can be customized, and everything becomes very simple and cheap. In the telecom industry this is a very new concept, but in the server industry, as you can see from Amazon, it is not new at all. In our point of view, this is more like the PC model, applied to networking. Now look at today's NFV market: a lot of the existing NFV implementations are still mainframe style. That's the major reason that every time people talk about the benefits of NFV, they can only talk about cost savings, nothing else. NFV should really ignite a PC-style disruption, both in business model and in technical architecture. And the next thing after the PC style will be the mobile style. What is mobile style? It means your EPC, your CPE, will become mobilized.
That will be another great thing to happen. So the market is moving from operator-centric to so-called user-defined. Before, when a company like Huawei produced an appliance, the target customer was the operator. But if everything becomes personalized, dedicated, PC-style, we should ask whether the customer is the operator anymore. Just like today: the virtual machine is not only for service providers anymore; it's for any user like you and me. The same thing is going to happen in the telecom industry, according to our prediction: the focus will shift from the operator to the operator's customers. So there are two questions we need to ask ourselves, and they may bring a lot of opportunities. First: what is the Amazon AWS cloud business model in the NFV paradigm? Amazon AWS talks about utility computing; is there some kind of utility networking coming? Second: what technical architecture evolution enabled the Amazon AWS cloud capability, and is a new technical architecture evolution going to happen in the NFV paradigm? Things to look at: maybe a new abstraction, a further decomposition, or a new composition is going to happen. In the following slides we are going to share what we did. Starting two years ago, we started from OpenStack, because OpenStack has a very flexible, component-based architecture. We tried to see if we could add one more module: we created a module we called Quantum Leap, basically a virtual mobile node, trying to address the mobility-network kind of problem. First we created an additional component just for the mobile network, and then we decomposed the vEPC at the virtual network function layer and created the corresponding VNF and orchestrator.
That's how we started two years ago. This is today's picture, because after we dug into it, more and more problems and challenges came out. Currently we are working with many different scenarios. For the vEPC we are working on bare metal, traditional virtualization, containerization, and the virtual appliance kind of scenario. From the VIM, the Virtualized Infrastructure Manager point of view, we are also looking into things like Mesos and Kubernetes, to see if they can provide better resource management to the upper layers. At the virtual network function layer, we are trying to create a kind of PaaS platform, because one of the reasons for NFV is to allow any startup, as long as you have some interest in development and programming, to write maybe a couple hundred lines of code, right away create a new VNF, bring your innovation on top of this platform, and deploy it all over the world in the telecom infrastructure. That's what we are trying to achieve through this PaaS. Internally we will create a VNF PaaS and an orchestrator in the same spirit: we make everything pluggable and highly flexible. A lot of this experience we actually learned from the OpenStack community. The previous slides were about today; now, for tomorrow, how are we going to build this personalized, customized, highly flexible virtual network link? This is what we are thinking. This slide should be very familiar to most of you; forget about the text inside. In the picture you see a server infrastructure, then a hypervisor such as KVM or VMware, and on top of it a lot of virtual machines.
Now we are trying to see whether it is possible to create a similar kind of architectural diagram, but where underneath is, for example, AT&T's physical infrastructure. On top of that we create, maybe as open source, a network hypervisor. Then China Mobile's infrastructure could be one virtual network infrastructure on top; Huawei's internal IT network could be another; my home network could be a third. Eventually, maybe we can create something like VMware, but not for servers anymore: for networks. This is another picture of the same idea: physical ingress and egress traffic goes through the physical infrastructure, and on this hypervisor, if we bring SDN technology into the guest domains and allow different users, even people like you and me, to freely create whatever topology we want or need, then eventually you and I, or your company, can very easily become a virtual operator and own a mobile network infrastructure, or any network infrastructure. I'm sorry, I think I'm moving faster than I expected; I should stop and ask whether you have questions. The topic of this session is empowering customer-centric NFV. So who are the customers? In our point of view, users, developers, carriers, and ecosystems are all customers. If we are going to build a new paradigm, a new framework, a new system, a new product, we need to consider all of them together, because, just like OpenStack, without any one of them it will not become successful. This is my last slide. As you know, on the NFV side, carriers and some vendors, including Huawei, created a new open source project called OPNFV.
The full name is Open Platform for NFV. The idea is that we are not going to reinvent the wheel; we are not going to create any new open source projects from scratch. Instead, we are creating a so-called integration project, which integrates many upstream open source projects together and creates a reference architecture just for the NFV use cases and scenarios. It just got started in September, and OpenStack plays a very big role in this OPNFV project, at least in the initial phase. So if you are interested, please feel free to come in, join the project, and contribute. Okay, that's all I have for today. I went through the slides very quickly, so if any of you wants to discuss any of them in more detail, please feel free to let me know. I'll just add a bit to it. Of course, this is on the lighter side; don't take it too seriously. The first thing we observed, a couple of summits ago, at Atlanta I believe, and even before that at Hong Kong: we used to joke about what NFV stands for. Officially it is Network Function Virtualization, but it was an initiative by carriers, and on the lighter side people would say it stands for "Not For Vendors": don't touch it, it's not for vendors, it's not for customers, it's only for the carriers. Now you see a new initiative, OPNFV, and on the lighter side we say OPNFV means it's "Open for Vendors" now. So at the next summit, you'll be hearing: we are open for customers now. Basically an evolution occurs: it started with the carriers, it has gone to the vendors, and tomorrow, next summit, we'll say it is open for customers. So that's the idea to carry home and think over: what happened? NFV started; we did a lot of theory; there is the NFV organization.
It's a standards body. They drew all the architecture, but at the end of it you need something concrete, right? People cannot just say, as with Java, "here is a reference architecture"; where is the implementation? That's where you see the need for the vendors. But tomorrow you'll see: okay, the vendor is there, the vendor can build something, but who will use it? Then come the customers, the subscribers. Ultimately it has to evolve. And OpenStack is one of the primary places where we have been able to gather that kind of practical knowledge, starting from Nova, to Neutron, to Cinder, to Swift, you name it, the core reference. And beyond that, now we are seeing that for policy implementation we need Blazar, we need Congress, and we need Ironic for bare metal. Carrier grade requires that kind of hardening, and this is where OpenStack has been key. But can OpenStack alone deliver? Had it been able to deliver, we would have been happy. But you see that we have an NFV group; I don't know how many of you are in the NFV groups, but look at what kinds of things are going on there. They are at the lower layer: I need to fix high availability, I need to fix fault tolerance, I need to increase availability. But going above that, you say: that's fine, but I need more than that. So what do I do? Okay, the platform layer: I have Solum, I have Heat for orchestration. OpenStack has done that. But essentially OpenStack has been focused at the lower layers, at what we call the VIM. These are the challenges, the key factors, for anybody trying to get an implementation done. That's why I mentioned Andy Grove, who joked about ISDN: "It Still Does Nothing." In the same way, NFV was "Not For Vendors"; from that we have moved on to "Open for Vendors", and we want to see it become open for customers.
Because without you, without the subscriber, who is going to pay for it? That's the key. And why would you pay for it unless you get the services you want? You want voice, video, data; you want all of it, combined. The OTT players have been very good at this; the Googles of the world have been able to capitalize on it. Whereas in spite of the best efforts of the OpenStack community, including the OPNFV community and the wider community, we are seeing that it is hard to meet these challenges. Service-level agreements: if you talk SLA, you get into QoS. With QoS you say, I want this much bandwidth for this video; whereas if I'm moving through a place where the bandwidth is low, I would rather still get that video, at a lower resolution. So the QoS has to adapt, and it depends on the bearer: what kind of bearer you are getting along the pipeline, end to end. End-to-end SLAs are very hard to get. Capacity planning, look at it: go to any carrier, they always design from the core toward the edge. The core is the primary place where you define your capacity; they call it dimensioning. You step the capacity down, and at the subscriber you want the minimum. So capacity planning is one of the important things. Troubleshooting: of course you have FCAPS, fault, configuration, accounting, performance, security, as they call it in carrier terminology. You have to be able to do the configuration, catch any break in the configuration, handle any dynamic operation, capture data by instrumenting the nodes, collect that data, analyze it, and figure out what is wrong. Yeah, maybe let's spend five more minutes to go through this. Pricing: you saw the pricing, right? How do you do the pricing? Do you do it by minute?
Do you do it by volume? Or both? Because I may be talking and, at the same time, viewing; viewing may be measured in gigabytes, whereas talking may be in minutes. So pricing is an issue: how do you price it, in what unit of measure, and at what rate? It's not based on time alone, so there is a complication there. And on top of that, if you have millions and millions of subscribers, collecting the CDR, the call detail record, for every subscriber and billing them correctly is a humongous challenge; it's a volume issue. Then policy management: all of you know policy. Do you apply policy to the subscriber or to the network? Who decides what the agreement in the contract is? Is it going to be a Neutron-based policy, or something else? Where is it applied, and where is it enforced? So there are challenges. Every piece of this, security, you name it, everything has a challenge, and that is why people have not been able to deliver in spite of their best efforts. We are all human beings; we want to do our best, but there are limits. And that's the message I want to give: Huawei, as a corporation, has its own responsibility. It can't always be about competing with Cisco; that's not the point. This design effort goes beyond the vendors; it's about the inherent way the whole industry changes, and that's important. We believe Huawei is going about this very sincerely, in spite of what people or governments say about each other; we want to be genuinely good in a way that helps the community. That is our message to you, and thank you very much. I'm sorry, now I really want to go back to sleep; I really overdid it this time. Every time I need more time, right? This time, it's strange, anyway.
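The mixed-unit pricing problem described above can be sketched in a few lines. This is a toy illustration only: the rates, the record fields, and the idea of rating voice by the minute and data by the gigabyte are assumptions made for the sketch, not any carrier's real tariff scheme.

```python
from dataclasses import dataclass

# Hypothetical per-unit rates; real tariffs are far more complex
# (peak/off-peak, bundles, roaming, regulatory fees, ...).
VOICE_RATE_PER_MIN = 0.05   # currency units per minute
DATA_RATE_PER_GB = 2.00     # currency units per gigabyte

@dataclass
class CDR:
    """A toy call detail record: one usage event for one subscriber."""
    subscriber: str
    kind: str        # "voice" or "data"
    amount: float    # minutes for voice, gigabytes for data

def rate(cdr: CDR) -> float:
    """Price one record in its own unit of measure."""
    if cdr.kind == "voice":
        return cdr.amount * VOICE_RATE_PER_MIN
    if cdr.kind == "data":
        return cdr.amount * DATA_RATE_PER_GB
    raise ValueError(f"unknown CDR kind: {cdr.kind}")

def bill(cdrs: list[CDR]) -> dict[str, float]:
    """Aggregate rated records into one charge per subscriber."""
    totals: dict[str, float] = {}
    for cdr in cdrs:
        totals[cdr.subscriber] = totals.get(cdr.subscriber, 0.0) + rate(cdr)
    return totals

# The speaker's example: talking and viewing at the same time.
records = [
    CDR("alice", "voice", 10.0),  # 10 minutes of talking
    CDR("alice", "data", 1.5),    # 1.5 GB of video at the same time
]
print(bill(records))
```

Even in this toy form, the two difficulties from the talk are visible: every record must be rated in its own unit before the totals can be combined, and at millions of subscribers this per-record rating becomes exactly the volume problem the speaker mentions.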
So, actually, in our view there really are a lot of opportunities, because, as I mentioned, this is a disruptive kind of model; new business models and new technical architectures will come out. For us, some of these slides are especially interesting: the network hypervisor. If any of you have comments regarding something like this, we think it may be a good opportunity to really do something from the open source community's point of view. Because once we are able to virtualize the network infrastructure, maybe there's an opportunity for a real network hypervisor, this kind of abstraction. Later on, maybe there will be something like an Amazon AWS platform where everyone can go in, and the host could be a company like AT&T: I might want a network for my company, or a network for my home, or I might even want to become a virtual operator, so that through a console I can create a virtual infrastructure right away, just in time, on demand. By combining technologies like SDN with NFV, maybe we can create a lot of possibilities; eventually we could even see every one of us becoming a mobile operator. We are also seeing more and more new kinds of traffic, things like IoT, the internet of things, D2D, M2M; that traffic will happen locally, at the edge. So if we can create this kind of platform at the edge, for example if AT&T can convert a lot of its existing central offices and POPs into cloud-capable platforms, then companies like Netflix, wherever they deploy services, don't need to deploy centrally somewhere in the country; they can deploy highly distributed, around the edge.
Then a lot of problems, whether performance, quality, or latency, become kind of trivial, and that's going to ignite a lot of innovation. So, I think we are now open for Q&A. If you have any questions, please ask, and we will answer with our best knowledge, or we can take it offline. Anybody, any questions? Looks like everybody has understood everything we have delivered. Quite a bit, I believe, or nothing at all; one of the two. Yeah, go ahead, welcome; we are here to answer as best we can. The question: can you tell me the difference between a network hypervisor and an SDN controller? I don't understand the difference you're making there. So, over here you can see there's a hypervisor, and on top of the hypervisor we can create a lot of guest domains. What I tried to demonstrate is that we can create hundreds or thousands of different virtual network infrastructures, and each network infrastructure may have one or many, at least one, network topologies. The reason we can put the SDN controller here is that some part of the control will still be confined to the operator, but most of the capability will be handed to the users within that domain, just like a superuser of a guest domain. So it is an SDN kind of controlling mechanism; it can make the management very, very easy. Yeah, I'll add to it. Your question was: what is the difference between an SDN controller and a network hypervisor? The answer is simple. SDN only deals with the network domain, specific to networking: it does flow control, it uses OpenFlow, it interacts with the networking, that's all. But when we say network hypervisor: in NFV there are domains. You have a storage domain and a processor domain; combined, that is the compute domain. And you have a network domain.
Here we are talking about the network plus the hypervisor: a combination of the compute domain and the network domain. Of course, we have our own views, and everyone has their own view; when something is not yet certain, everybody has some idea, and eventually it grows into a shared idea. So we are throwing an idea into the market: we want to do something like a network hypervisor. What do you think it is? Good question, thank you. Any other questions? There was one gentleman there; what was that? The same question, yeah. That's why I say: go back to history. When Bill Gates said ".NET", the whole of Microsoft wondered what on earth this guy was talking about. In the same way, with the network hypervisor, we ourselves don't fully know yet what we are talking about; that's what happens sometimes. But eventually you realize: okay, there is something special about it, there is some value, and we have to come up with the value and the architecture and go from there. So we look forward to having a debate on what is good and how it will evolve; we have to see. These are some of the things to keep thinking about, okay? One thing I want to add: within this guest domain, every appliance will become a lot simpler, just as with the shift from mainframe to PC, where you can still do the job. A lot of things like access control, security management, and isolation will no longer be needed inside the appliance; 70 to 80% of the code becomes unnecessary and will not be included. So whether it's the SDN controller in the middle or all the appliances within this network topology, everything becomes very small and very simple. That's when I think we can use containers, smaller lightweight containers, to handle this kind of traffic load, the personalized load. Any more questions? I think people are satisfied.
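The point above about simpler per-user appliances can be illustrated with a hypothetical sketch (the class and rule format are invented for illustration, not a real product): if the network hypervisor already guarantees that only one user's traffic enters the guest domain, a per-user firewall shrinks to little more than that user's own rule list, with no tenant lookup, no per-subscriber access control, and no shared-state locking.

```python
# Hedged sketch: a per-user firewall for a dedicated guest domain.
# Because the (hypothetical) network hypervisor already isolates the
# domain, the appliance needs no tenant table, no authentication, and
# no shared-state concurrency control -- roughly the 70-80% of code
# the talk suggests can be dropped.

from dataclasses import dataclass

@dataclass(frozen=True)
class Packet:
    dst_port: int
    protocol: str  # e.g. "tcp", "udp"

class PersonalFirewall:
    """One user's rules; default deny."""
    def __init__(self, allowed: set[tuple[str, int]]):
        # (protocol, port) pairs this one user permits.
        self.allowed = allowed

    def accept(self, pkt: Packet) -> bool:
        return (pkt.protocol, pkt.dst_port) in self.allowed

# One lightweight container per user, each just a few lines of policy.
fw = PersonalFirewall({("tcp", 443), ("tcp", 22)})
print(fw.accept(Packet(443, "tcp")))  # allowed
print(fw.accept(Packet(25, "tcp")))   # denied by default
```

The design choice mirrors the mainframe-to-PC analogy in the talk: the multi-tenant machinery moves down into the hypervisor layer, and what runs per user is small enough for a lightweight container.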
You know, this is what we could deliver in half an hour, and I hope you have learned something from it. Next time we will see you again in Vancouver, and hopefully we'll be saying: we are open for customers. Thank you. Do feel free to email us or send us any questions, and if you are interested in discussing the details further, please do. We are in Silicon Valley, in California, so if you are from there too, maybe we can have lunch or something, okay? Merci. Thank you very much. Thank you.