Good afternoon, ONS! I am so glad to see you all here. It was really great to see where we've come from. I was very lucky to be a part of some of those events in the past, and I'm happy to be here today in our evolution. As Arpit mentioned, I'm Heather Kirksey, the VP of Ecosystem and Community for the LF Networking portfolio of projects. Before this, I was the Executive Director of OPNFV, so I remember some of those scenes that we just saw, and today we are going to go through a virtual central office demo.

Just a little bit about VCO. It's a topic that people have been talking about for a while, and in fact VCO and edge are going to be the topics of a few more presentations later today. One of the important things, as we've continued our NFV journey, is the importance of getting all these services out closer and closer to our consumers. Service providers are looking at using the existing real estate they have in their central offices, modernizing it, and deploying residential, enterprise, and mobile services to improve efficiency, customer experience, and cost. When this journey started, we were looking at the data center, and the central office came to be a place that was very important. When we started this activity, the central office was in need of a lot of modernization: the hardware was very bespoke, it was monolithically integrated, and there weren't a lot of standard interfaces. Over the past several years, I think we've made a lot of progress virtualizing that, looking at what happens if you have commodity white box hardware, both in the switching and in the servers, and we started software-defining a lot of aspects of it. Looking ahead, obviously cloud native becomes important, orchestrating things in a more efficient fashion.
So, we're moving from a smaller number of data centers to huge numbers of central offices and even more edge locations. Within OPNFV, along with many of our upstream partners, we've been looking at how we can solve those problems for our end users. When we started this journey, we had a lot of our operators asking us within OPNFV to start looking at this. So, just a little over a year ago, we showed the first version of this journey in virtualizing the central office from an OPNFV perspective at our summit in Beijing. It was focused on residential and enterprise use cases, and we really focused on that first aspect, which was the onboarding of a lot of traditional residential and enterprise use cases onto the open source platform. On the residential side, we looked at bringing a virtual BNG online on top of an open source platform built out of OpenStack, OpenDaylight, and other open source pieces. And we showed something similar with the enterprise use cases: a virtual CPE, some firewalling, and other use cases of that sort.

Looking ahead this year, we wanted to look at mobile use cases, to round out the trio. But also, as we look ahead at 5G: the residential and enterprise use cases have really good business models behind them, but for 5G it really becomes a necessity to move things further out, to get closer to our consumers so we can meet the latency requirements of the use cases we're looking to do there, things like autonomous vehicles, drones, AR, and VR. So, the demo that you are going to see today focuses on that telecom network edge area and the access evolution from LTE to 5G. We're going to be setting up an end-to-end network with a next generation core; you're going to see some disaggregated access and a full vRAN implementation.
And you're going to see some live connectivity happen here on stage. I wish I could say that all of you should look under your seats and find some AR/VR goggles for that kind of use case. I, however, do not have Oprah Winfrey money. So, we will be doing a mobile use case. And before I go any further: we are doing a live demo with a live connection between here and California, so send us good energy and please be kind. With that, I would like to bring out Fu Qiao from China Mobile, who's very active in the OPNFV community; she is on the OPNFV TSC. She is going to talk to us a little bit about China Mobile's vision of the next generation mobile network and how some of this work from OPNFV and the demo fits into that. It will be great to hear the perspective from the world's largest mobile operator. Thank you, Fu Qiao.

Thank you, Heather. Good afternoon. I'm Fu Qiao from China Mobile. At China Mobile, we are trying to structure our future network into what we call TIC, the Telecom Integrated Cloud. TICs use cloud technologies, including NFV and SDN, to provide a virtualized environment for future virtualized network functions. We categorize TICs into core TICs and edge TICs. Core TICs are located in a small number of centralized locations and basically support control plane services, while edge TICs are distributed across large numbers of counties and cities and support user plane services and edge services. This kind of architecture actually splits our network into two ends: at the core, the number of TICs will be very small, but each one will be huge in scale; at the edge, the clouds will be comparatively small in scale, but their number will be huge. This architecture brings us new requirements and expectations for the future network.
For example, we would like OpenStack to be able to manage massive numbers of distributed edge clouds. We also expect a common deployment model from the core to the edge. We would like to have not only OpenStack but also Kubernetes in our cloud, so that the stack can support containerized VNF services. We also expect the stack to be agile and flexible, with telco levels of service assurance, so that we can support all the different kinds of telco services. And end-to-end orchestration is something very important to us, so that we can use one orchestrator from the core to the edge. So we are expecting that OPNFV can provide us with a reference platform that has all these capabilities to support our future network. We expect a lot from the VCO demo, especially this time for VCO 2.0, which goes beyond just the residential and enterprise use cases. This time, we have the mobile use cases: mobile core and mobile edge. We hope that this kind of demo can drive some of the work in the community and also provide us more experience and test tools for the integration and testing of future networks. So with that, let me bring Azhar onto the stage with more details on the architecture and the demo work. Please, Azhar. Thank you.

Thank you so much. So the requirements that Fu Qiao just laid out for you translated into a long list of things that we needed to do for a demo. From the different service providers, we got interest in showing vRAN, showing packet core, showing IMS. In particular, some of the things of interest were to show the split-based architecture that's actually defined in 5G, but with a 4G radio, and things like network slicing and edge compute. So the list ran very long, and we started to assemble a team and bring different vendors together to actually build this particular demo.
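Fu Qiao's "one orchestrator from the core to the edge" requirement can be pictured as a placement decision: control plane functions land in a few large core clouds, user plane functions fan out to many small edge clouds. Here is a minimal, purely illustrative Python sketch of that idea; the site names, capacities, and `place` function are invented for illustration and are not part of any real OPNFV or ONAP API.

```python
from dataclasses import dataclass

@dataclass
class Site:
    name: str
    tier: str        # "core" or "edge"
    capacity: int    # remaining VNF slots at this site

def place(vnf_plane: str, sites: list[Site]) -> Site:
    """Pick a site for a VNF: control plane goes to core, user plane to edge."""
    tier = "core" if vnf_plane == "control" else "edge"
    candidates = [s for s in sites if s.tier == tier and s.capacity > 0]
    if not candidates:
        raise RuntimeError(f"no {tier} capacity left")
    # Naive strategy: choose the site with the most free capacity.
    best = max(candidates, key=lambda s: s.capacity)
    best.capacity -= 1
    return best

# A toy inventory: one big core cloud, two small edge clouds.
sites = [Site("core-1", "core", 100),
         Site("edge-sf", "edge", 4),
         Site("edge-la", "edge", 2)]

print(place("control", sites).name)  # core-1
print(place("user", sites).name)     # edge-sf
```

A real orchestrator would of course also weigh latency, affinity, and service assurance constraints; the point of the sketch is only the core/edge split itself.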
Obviously, our goal was to try to show as much as we could, but this time around we were focusing a lot more on the vRAN demo. We used the exact same architecture that was defined as part of VCO 1.0, so it's the exact same stack. With that stack, of course, because this is a new and different use case, you have a new set of partners as part of it: people like XFO, people like Quortus, and OpenAirInterface, which is actually an open source implementation of the radio, is also part of this particular exercise. Again, we used OpenStack as the platform, with SDN control using OpenDaylight. And we built a complete architecture that shows the RAN elements, the packet core, and the edge compute elements as part of the overall architecture. With that architecture as our blueprint, to be deployed in a virtual central office, we started to look at how to go about actually building the overall demo.

So that brings me to the lab topology, in terms of how we did it. This time around, Cumulus was incredible in terms of helping us host the entire environment in the California lab. This is a full data center built with a leaf-spine fabric: six leaf switches and two spine switches running Cumulus software on white box switches. They also provided us complete hardware for this particular setup. What's new and interesting here, in addition to that data center fabric and the entire setup, is a software defined radio. And yes, nothing would be fun without it being live, so we actually bought a Faraday cage and we put some phones inside it. Now, looking at the overall topology and what it looks like, let me walk you through it very briefly. What you see in the top left corner of the screen is the Faraday cage with the two phones in there.
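The leaf-spine fabric described above has a simple structural property worth noting: every leaf connects to every spine, so the link count is leaves times spines, and any leaf-to-leaf flow can be balanced across as many equal-cost paths as there are spines. A small sketch (switch names are illustrative placeholders, not the lab's actual hostnames):

```python
from itertools import product

leaves = [f"leaf{i}" for i in range(1, 7)]    # six leaf switches
spines = [f"spine{i}" for i in range(1, 3)]   # two spine switches

# One link per leaf-spine pair: a full bipartite mesh.
links = list(product(leaves, spines))
print(len(links))              # 12 fabric links

# Any traffic between two leaves can be ECMP-balanced across the spines.
paths_between_leaves = len(spines)
print(paths_between_leaves)    # 2
```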
And in the picture right underneath that, with the two phones, you see the actual antenna, and then there's a software defined radio provided by Ettus. Then you have the two bare metal nodes: the radio unit and the distributed unit. Those two bare metal nodes were provisioned by OpenStack, with OpenStack as the controller to actually build and manage them. And then you have the rest of the OpenStack infrastructure hosting a whole host of virtual machines. Not all of the virtual machines are depicted in this picture, but the centralized unit was actually running as a virtual machine in our setup. Then you have, of course, the evolved packet core, but there's something very interesting in the middle there called the session director. The session director is redirecting traffic from the different phones to different sides of the infrastructure: one to the corporate VPN and the other to the internet. And what we have for you as part of this connectivity is actually a branch office that Heather's showing you right here on stage. That's a live setup, and it's part of the live demo that's connecting back to our central office in California. And you now have some phone clients that are registering all the way from California back to here.

Now, that will get a little bit clearer in a moment as you see this particular data flow. What we'll be showing you live is data coming from the internet, through the packet core, into one of the phones, then out through the other phone, back here all the way down to the branch office, and we will actually be able to show you something here. But before we show you anything, I would request one thing. If you can all take out your phones now (this is the first time in a session that somebody's asking you to take out your mobile phone) and tweet with the hashtag #ONS2018, that would be great.
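The session director's job, as described above, is essentially a per-subscriber steering decision: one phone's traffic goes toward the corporate VPN, the other's toward the internet. A minimal sketch of that policy lookup, assuming made-up IMSIs and egress names that are not the demo's real configuration:

```python
# Hypothetical subscriber-to-egress policy table; the IMSIs and the
# egress labels are invented for illustration only.
POLICY = {
    "00101-0000000001": "corporate-vpn",   # the phone tied to the branch office
    "00101-0000000002": "internet",        # the phone used for the Twitter demo
}

def select_egress(imsi: str) -> str:
    """Return the egress network for a subscriber, defaulting to internet."""
    return POLICY.get(imsi, "internet")

print(select_egress("00101-0000000001"))  # corporate-vpn
print(select_egress("99999-unknown"))     # internet
```

In the real setup this decision is applied to live traffic in the data plane; the sketch only shows the shape of the policy, both phones sharing one radio and one EPC while landing on different networks.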
In the meantime, I'm going to call Hanan up to actually walk us through the demo. Hanan?

So I hope you are tweeting. All right, I think it's time to switch over to our demo. What we're seeing here is that I have a remote desktop connected to the lab back in California, and what you are seeing here as well is a representation of those two phones. We are actually controlling the phones in the cage through the app. Okay? And to show you this is working live, I'm going to switch off one of the phones. I did mention that this is back in California, right? So there is some lag here. So maybe I'm going to ask my guys: Dave, can you switch off that phone, please? There we are. So we should be seeing the phone connecting. Yeah, there it is. It's attached to the network that we have live in the lab. And the next thing I'm going to ask is, let me see if I can get hold of the... oh, it's working now. So did you guys finish tweeting? Because I would like to see real-time tweets here. Actually, I'm not going to do it here, I'm going to do it here. Oh, no, not here we go. All right. I said the other one, guys. No blue whales on our stage today, please. There we go. Oh, come on. All right, so let's refresh that, and I hope we are getting much better results now. All right. So we're seeing the tweets update that y'all are sending in. Yeah, this is good. So we have that phone... yeah, it's on stage, it's great. We have that phone connected to the internet. And actually, what I didn't say is that there are two phones in the cage connected to the same network. One of them is attached to the same network, sorry, connected to the internet. The second phone is in the same cage as well, and even though we are using the same radio and the same EPC, it is attached to a corporate network. And that corporate network is where the PBX and the central office are attached. So, just to show you guys that we are live here.
So I'm going to play some music. Of course, this music is playing loud in the cage; we are not hearing anything, because it's back in California. But as we are doing a mobile demo, let's try to make a call. And I'll call back to the stage here. Looks like I got a call. So you're hearing something? Oh, I'm hearing something. Let's plug it in here so we can hear what's coming over the phone. Let's share with everybody that lovely music. I love it when my friends call me with Kenny Rogers and Dolly Parton. All right, so... Well done. I am now forever going to refer to that song as "Islands of Packets in the Mobile Data Stream." All right. So I just want to make sure I understood: on this phone, I basically just got a phone call from your phone back in the cage in California. Absolutely. Yeah. So what we were hearing is the music: the first phone is streaming music and playing it loudly in the cage, and the second phone is picking up on its mic what the first phone is playing and sending it back through the voice channel to the call here on the stage. Yeah. All right, cool. As I said, I would love for this to have been something more like augmented reality or virtual reality, but it's hard to do that in a cage on another continent. So, Azhar, we've had some pretty graphs going on back here in the background while those calls were happening. Do you just want to point out what was going on there?

Sure, Heather. While you were busy taking that call, you can see on this particular graph that the SIP call went through, and you see that it was active for a few seconds, because that's when you were listening to the music. And then when you shut it off, it went back down again. You can actually see that there was some data that went through that particular SIP session, and you see that live here: as soon as the call went up, you see the spike, and when the call goes down, the spike goes down.
The other activity that you're seeing here, for example on this particular screen, is the assurance capability. The first one was through the session director, where you're looking at what's happening on the SIP sessions. On this one, you're actually monitoring the voice traffic. Here you can see the KPIs, the latency variation over time, and here you can see the success and failure rates. You can see how many unique call attempts there were, whether they succeeded, why they failed, what the delay and average delay were, and of course the cause, the error code for a particular failure. So really, what we're trying to show is not just a call going through; remember, one of the requirements that Fu Qiao laid out was full assurance capability end to end, with all of the delay and all of the metrics that are assigned to this. So we were actually able to put that in, live monitor the call, and show you all the metrics associated with it. Great. Thanks, Hanan and Azhar. And if we can go back to the slides.

All right. So just really quickly, pulling this together: unsurprisingly, the first lesson is that open source collaboration works. We had 15 organizations and 30 volunteers pulling this together. Another point is increasing maturity. We were showing an actual vRAN and software radio with open source software, on commodity hardware, built on open source platforms, with interoperability between the hardware and the virtual EPC. Certainly, this did take work and a lot of planning around hardware and specification. One of the things that we will be looking to do across a number of projects, like Akraino, OPNFV, and LF Networking, is to help make that process a lot easier: to go from putting together a proof of concept to something that is really deployable and repeatable.
And then certainly, from a hardware optimization point of view, fairly beefy hardware was necessary, and we learned a fair amount about which things needed to go on bare metal and which things didn't really need to. A lot of the detail around that is going to be in the white paper.

And then, in terms of next steps: first of all, if you would like a deeper dive into what we did here, we have this demo running in the LF Networking booth, along with a number of other great community demos showcasing the work that a lot of our projects have been doing. Hanan and Azhar are doing a zero-touch provisioning for edge breakout session during the conference. Also, one of Fu Qiao's colleagues from China Mobile is hosting an unconference on C-RAN, and there are a number of different edge and mobile BoFs, unconferences, and sessions happening this week; check your schedule. You can join the VCO demo mailing list if you are interested in continuing to evolve this. We have a number of working groups going on in OPNFV: a C-RAN group, an edge group. The Rocket project specifically is looking at some of that beefier hardware that a lot of folks are looking at for the edge, so we can make sure we have interoperability with the open source applications there. We are also, as I said, really looking to operationalize what we have learned here and start incorporating these capabilities into our CI/CD and automated testing. One thing that we have done in the course of this project is that we now actually have a virtual EPC as one of the VNFs that we deploy in various OPNFV scenarios as part of our regular testing, so that we are able to make sure, as we keep evolving the software stacks, that these types of applications keep working.
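The idea of deploying a virtual EPC as a VNF in regular CI/CD test scenarios can be sketched as a simple post-deployment health check. This is a hypothetical illustration only: the scenario structure, field names, and `check_scenario` helper are invented, not the real OPNFV test API (the scenario name is a real OPNFV scenario label, used here just as a tag).

```python
def check_scenario(scenario: dict) -> list[str]:
    """Return a list of failed checks for a deployed test scenario."""
    failures = []
    for vnf in scenario["vnfs"]:
        if vnf["state"] != "ACTIVE":
            failures.append(f"{vnf['name']}: not active")
        if vnf.get("heartbeat_ok") is not True:
            failures.append(f"{vnf['name']}: no heartbeat")
    return failures

# Invented example: a scenario where the vEPC came up but a second VNF did not.
scenario = {
    "name": "os-odl-nofeature-ha",
    "vnfs": [
        {"name": "vEPC", "state": "ACTIVE", "heartbeat_ok": True},
        {"name": "vIMS", "state": "BUILD", "heartbeat_ok": False},
    ],
}
print(check_scenario(scenario))  # ['vIMS: not active', 'vIMS: no heartbeat']
```

A CI pipeline would fail the run whenever the returned list is non-empty, which is the essence of "making sure these types of applications keep working" as the stack evolves.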
And then, as we mentioned, there is obviously a cloud native vision that we have for all of this, as Arpit and Dan were talking about, and there is work going on around cloud native activities across multiple projects in CNCF and LF Networking. I encourage folks who are interested to get involved in those. And then finally, I want to thank the community volunteers who did this, whether it was providing the software defined radio, antennas, servers, racks, and switches, getting the stuff shipped here, doing the software, or doing the integration. A lot of people played a part. Here are the ecosystem partners involved in helping us spec this out, helping to define requirements, and providing those various things. And finally, this was a collaboration across multiple open source communities, with software from many of them. I'd just like to highlight that OpenAirInterface is actually one of the newest associate members of LF Networking. They are doing a lot of really interesting work, and they provided a lot of the great next generation mobile pieces for this. We look forward to future collaboration with them. And obviously, the hardware that is in this rack is actually Open Compute Project based, so it is open source hardware with open source software running on top of it: some of it very generic, some of it very telecom and 5G specific. This is what I get up in the morning for. This is what I get very excited about: seeing folks from different projects come together to make our ecosystem better, to make these use cases work, and to continue to march technology forward. So I would like to ask my wonderful co-conspirators and partners in crime to come out. As I said, it takes a lot of guts to do a live demo, especially one that crosses an ocean. So thank you all for your work and your guts. Take a bow. Thank you very much. Thank you. Thank you.
And so, thank you all very much for your time and attention today; I'm looking forward to a wonderful week with all of you. All right. Thank you, Heather and team.