OK, I think it's 1:32, so I will start. Hello again, everyone, and welcome to this session. This is about telecom cloud. I should say that this is my second OpenStack Summit — I also joined the Paris OpenStack Summit — and my general impression this time is that there seems to be a clearly increasing interest in this telco cloud discussion. So I'm really happy to have you here. I'm going to go through a number of topics which should be of strong interest to the community regarding how we address telco cloud performance. We talk a lot about carrier-grade and telecom-grade performance, and people have been asking: what is special about telecom workloads compared with what we have been doing previously? That is very much what I would like to focus on today. But I would like to start the session by telling you a bit about the company I work for. I work for Ericsson. As you most likely know, Ericsson is the number one mobile infrastructure and services provider, and we are also number one today for OSS/BSS, plus we are doing TV platforms as well. I work in Ericsson's so-called Group Function Technology, which is essentially Ericsson's CTO office at the headquarters in Kista, Stockholm. My responsibility sits in a group we call Industry Area Datacom, which is responsible for the overall Ericsson technology strategy for SDN, for NFV, and for cloud; my own focus is very much the cloud strategy. We work closely with our customers and our partners to drive NFV, cloud, and SDN in this community. So as I said, today the focus will be on telecom cloud performance. My name is Sheng An Yu; I'm Swedish-Chinese, and I have been working at Ericsson for 20 years.
My niche area has previously been very much network and infrastructure planning, performance optimization, and performance assurance. Today I would like to cover a number of topics. I'll start with end-to-end service performance, because regardless of whether you have a virtualized network or a native, non-virtualized carrier network, from the user's perspective the expectations on the experience are very much the same. We therefore think it is important to align on, and basically agree on, what kind of end-to-end user experience we should focus on when we write code in the OpenStack community, for example. After that I would like to talk a bit about the telecom-grade challenges, focused on the OpenStack-related areas, and here we are actually calling on the community to address these issues with us jointly. Then we are going to share a number of solutions as we see them, in terms of the joint efforts we are driving — OPNFV being one example — with the mission of achieving telecom-grade performance for the industry. In case you have any questions, we have a Q&A session at the end, and I do have my card here with me in case you feel you need to know more details about the things we are showing today, so we can keep in touch. If we start with end-to-end service performance: when we talk about user experience from an end-to-end perspective, the view is that it is not only about the network infrastructure. Today everything is interconnected; it is not only about the telecom network itself.
It is very much about the telecom network interconnected with OTT services, with IT infrastructure, and also with applications and devices. So what do we actually see today? From the telecom service provider's perspective, when we talk about telecom service performance — I'll show some examples here, like in North America, where we have carriers such as Verizon and AT&T — what carriers sell to their end users is very much the quality and performance of their network, their infrastructure. In terms of requirements, we are all subscribers to carrier services, and the general expectation is that the telecom service is available all the time. I myself joined Ericsson 20 years ago and actually did my thesis work on mobile network availability. At that time we had a mission to talk about zero downtime, but we also knew that was mission impossible. So instead the focus has been carrier-grade performance. You all know about five nines, right? The availability requirement says you have to provide at least 99.999% availability — or even six or seven nines; there are different requirements at different levels. And it is not only that the network should be available: if you truly do have an issue, you need fast fault recovery, often measured in milliseconds. For example, a standard requirement says you should be able to recover from a fault in the network within five milliseconds, so that from the end user's perspective you never have the feeling that the network is down, that you can't make a phone call, or that your data service is unavailable. And this requirement does not come only from end users; it comes very much from the regulators as well.
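To make the five-nines figures concrete, here is a minimal sketch (my own arithmetic, not from the talk) of what an availability target actually allows in downtime per year:

```python
# Downtime budget implied by an availability target.
MINUTES_PER_YEAR = 365.25 * 24 * 60

def allowed_downtime_minutes(availability: float) -> float:
    """Downtime budget per year for a given availability target."""
    return (1.0 - availability) * MINUTES_PER_YEAR

for nines, target in [(3, 0.999), (5, 0.99999), (6, 0.999999)]:
    print(f"{nines} nines -> {allowed_downtime_minutes(target):.2f} min/year")
# Five nines leave roughly 5.26 minutes of downtime per year in total,
# which is why fault recovery has to happen in milliseconds rather than
# the seconds or minutes typical of general-purpose IT infrastructure.
```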
Thanks to regulatory requirements, carriers have truly been driving their infrastructure planning against these kinds of targets. Then there are other factors: data traffic is increasing, users are always connected, and mobile network coverage becomes very important — how can I get the best access? In a minute I will show you that even Facebook today is very concerned about where Facebook users can get the best access to the Facebook application. Then there is speed. I guess you are all familiar with the speed comparisons among carriers — the typical speed tests. Many countries are driving this speed-test competition among carriers, and that is another way the performance requirements are being pushed, which in turn drives the infrastructure requirements on the carriers. And in the near future, when they virtualize their network functions and introduce cloud capabilities, those requirements will still remain. So how we can help the carriers achieve those requirements is definitely the biggest challenge for the industry. The story I want to share with you today — and the reason I want to talk about interconnected infrastructure — is this: take Facebook, which has become very much mobile oriented. Everyone says we are driving network transformation, and society's transformation, through mobility, because mobility seems to be central to everything we do now. If you look at the statistics, by the end of last year the majority — 526 million — of Facebook's monthly active users were actually coming through mobile, either through the mobile application or mobile devices.
Here I'm actually not talking about WhatsApp, by the way — and WhatsApp has a major user base in certain countries in particular. If you relate this to the revenue Facebook generates through advertising — you all know that around 92% of Facebook's revenue comes from advertising — recent figures say that almost 70% of that revenue comes, again, through mobile infrastructure. So from the end user's perspective, the infrastructure is not only Facebook's infrastructure: Facebook's data center capability is very much interconnected with the carriers' mobile infrastructure. For this reason, Ericsson has been working with Facebook for some time now. One example is a joint project to help Facebook understand the end-user experience: what users actually experience when they try to access the Facebook application and do certain things. From the infrastructure perspective, the project also tries to identify the bottlenecks — and here I mean both the mobile infrastructure and the Facebook side of the infrastructure. It is truly end to end: identify the limitations and define how we can improve them. We ran this as a typical Ericsson service, an application optimization service. The outcome of the project was that we could improve Facebook's so-called app coverage — and I should explain that a bit here. App coverage is basically application coverage, or application experience; it is a concept Ericsson introduced a few years ago.
The thinking is that when we talk about user experience, it is no longer about averages across the network or the infrastructure. We want to focus on each individual user, at any time, at a specific location. So app coverage is very much an indicator showing, from the end-to-end perspective, what your experience is for a certain application: if you use Facebook at a certain time at a certain location, do you have the best access? That is the app coverage we talk about. In this project we helped Facebook improve app coverage by 40%. That 40% improvement means that you as a user, at that location and at that time, have a 40% better chance of being able to use your Facebook application as you wish. That is a rather big improvement. The second result relates to time to content. When you try to access certain content on Facebook, it depends on server quality: if you are lucky you hit a good server, and if you are not lucky you hit a bad one. So the time from when you request certain content until you get it can vary a lot. We could shorten that time to content by 70%. And finally, upload time: when you request to upload a picture, for example, how long does it take? This is also very much part of the user's perception of the quality of the infrastructure and the service. Here we improved by 50%.
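As a rough illustration of the idea — my own sketch, not Ericsson's actual measurement method — app coverage can be thought of as the share of per-user, per-location measurement samples that meet the application's requirement:

```python
# App coverage as a simple metric: fraction of measurement samples
# (one per user/location/time) whose measured throughput meets the
# application's requirement. The threshold value here is illustrative.

def app_coverage(samples, required_kbps):
    """samples: list of (location, measured_kbps) tuples."""
    ok = sum(1 for _, kbps in samples if kbps >= required_kbps)
    return ok / len(samples)

measurements = [("cell-A", 900), ("cell-A", 450),
                ("cell-B", 1200), ("cell-B", 300)]
print(app_coverage(measurements, required_kbps=500))  # 0.5 -> 50% coverage
```

A "40% improvement in app coverage" then means 40% more of these samples clear the bar than before.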
This is just one example to show that, from the end user's perspective, the quality of a telecom cloud in the near future is not only about the enterprise side of the cloud infrastructure; it is the carrier side — the mobile infrastructure — interconnected with the cloud infrastructure. So in summary, what I have talked about so far is the big picture of what we mean by end-to-end service performance and, again, end-to-end user experience. With those kinds of requirements, we understand what is required of the infrastructure. For example, consider the telecom-grade challenges related to virtualization and cloud infrastructure. We want to share how we have been working on requirement setting — OpenStack for NFV being one example. This is one way of explaining to the community what the OpenStack community needs to achieve in order to help its users — the companies wishing to use OpenStack products — truly provide a telecom-grade service to their end users. If we start with high availability and resilience, this is very much the classic thing, right? When we design a certain building block — again, your code — we need to think about the overall big picture: how can I provide, for example, instance availability, and how can I make sure I truly have proper control over things like scalability?
Then, relating to storage, for example, we need to make sure that storage redundancy is considered. A typical requirement says there should not be any single point of failure in any storage subsystem, and if you do have an issue, it should not result in any downtime for the end-user application. That is a straightforward requirement on the system design. For networking, some examples: here we again have requirements on redundancy, and also on latency, for example. Requirements around resource isolation and topology awareness are likewise very critical to the system design. Finally, we also have a set of requirements related to telecom-grade security, to backup and restore, and to software management and upgrade support. In the end it is about performance and assurance — very much about fault management: for example, how can you truly have the capability to predetermine certain scenarios in case you do have a performance issue? To summarize, these are just some examples of the requirements — the challenges, rather. If we want to provide a telecom-grade solution to the industry, we need to address these requirements. In the coming slides I want to share how, together with our partners and our customers, we have already started the journey towards telecom-grade solutions. Some examples: for provisioning, for instance, a solution is already available.
Related to high availability, which I just talked about, we have a solution for fault monitoring: you can actively monitor potential issues in, for example, your hosts, and that is one way to address the availability challenge. On the assurance side, we have a fault and event performance monitoring solution. Other examples relate to backup and restore, as I mentioned — we have a solution for automatic backup — and for recovery actions we also have different solutions to improve that. So this is just to show that we have already started the journey towards telecom-grade solutions for the industry. Beyond this, there are a number of other things I want to share with you, and this is the third part of my presentation: telecom-grade solutions. For this part I would like to bring up OPNFV, the open-source project among vendors and carriers, as one initiative to truly speed up telecom cloud deployment, or NFV deployment. I assume all of you are fairly familiar by now with OPNFV as an open-source project. Its purpose is rather simple: we want a community, a platform, so that we can jointly work out a reference platform solution, quickly start deployments, and truly benefit from virtualization and cloud. Ericsson is very active in the OPNFV project, and I myself am very much involved in the testing and performance subgroup.
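To give a feel for what active host fault monitoring means in practice, here is a minimal, hypothetical sketch (not Ericsson's actual solution): each host sends periodic heartbeats, and the monitor flags a host as failed once it misses its deadline, so recovery actions such as evacuating instances can start within a tight budget.

```python
import time

class HeartbeatMonitor:
    """Flags hosts whose last heartbeat is older than the timeout."""

    def __init__(self, timeout_s=1.0):
        self.timeout_s = timeout_s
        self.last_seen = {}          # host -> timestamp of last heartbeat

    def heartbeat(self, host):
        self.last_seen[host] = time.monotonic()

    def failed_hosts(self):
        now = time.monotonic()
        return [h for h, t in self.last_seen.items()
                if now - t > self.timeout_s]

mon = HeartbeatMonitor(timeout_s=0.05)
mon.heartbeat("compute-1")
mon.heartbeat("compute-2")
time.sleep(0.06)
mon.heartbeat("compute-2")          # compute-2 stays alive
print(mon.failed_hosts())           # ['compute-1']
```

In a real deployment the detection interval, the transport for heartbeats, and the distinction between a dead host and a partitioned network are where most of the engineering effort goes.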
Here I want to bring up some projects that are very much related to telecom-grade solutions and how we address them. In the OPNFV project — you have probably already seen this on the wiki — there are different types of projects. On the requirements side, for example, China Mobile is leading a high availability project, with different vendors and carriers joining. That is very much about making sure the high availability requirements are properly addressed in OPNFV. From Ericsson's side, we proposed a project some months ago that we call the Transformer project — which, by the way, was the introduction to this talk if you read the abstract. The thinking behind Transformer is that, as a telecom vendor leader, we believe we have a responsibility to help the industry move quickly: once we have a virtualized solution in place, how do we make sure that the virtualized portion still works with everything else? Because carriers today have a rather large installed base of natively deployed networks. How can we make sure that the network, the infrastructure in transformation, truly works end to end? So we see this Transformer project very much as a testing and verification project — not sandbox-type testing, but rather testing in the live network environment. As we say, the OpenStack and OPNFV communities focus very much on the virtualized side of the solution.
But outside the virtualized box, Ericsson has been working together with many other telecom vendors, and we already have well-established testing environments for the native part of the network — from the radio to the core, truly end to end from the device to the application. We believe we should leverage that existing infrastructure and those existing testing methodologies and test cases, so we don't need to start from scratch when we want to deploy a certain solution in the live network. That is one project I will show a bit more of in a minute. Besides that, another effort Ericsson recently brought up, which I want to share with you here, is the Yardstick project: a common testing framework for pre-deployment NFV infrastructure. Basically, for any reference VNF you can use the Yardstick test framework to make sure your infrastructure solution will truly work before you ever go out into the live network. This map here shows all the projects in OPNFV. I would like to spend some more time on the Transformer project I just mentioned. The picture in the background is rather busy — the intention is not for you to read it exactly. The cloud in the middle is the virtualized environment; that is the focus here in the OpenStack community and in the OPNFV community. Around this cloud is what I just mentioned: the existing mobile network infrastructure, the native infrastructure.
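The idea behind pre-deployment infrastructure testing can be sketched as a benchmark gated by an SLA. This is my own hypothetical illustration, not the actual Yardstick API (Yardstick itself defines test cases declaratively): run a measurement repeatedly against the candidate infrastructure and only pass it if, say, the 99th-percentile latency meets the budget.

```python
# Hypothetical SLA gate for a pre-deployment benchmark run.

def percentile(values, p):
    """Nearest-rank percentile of a non-empty list."""
    vals = sorted(values)
    idx = min(len(vals) - 1, int(p / 100.0 * len(vals)))
    return vals[idx]

def sla_gate(latencies_ms, p99_budget_ms):
    """True if the measured 99th percentile meets the budget."""
    return percentile(latencies_ms, 99) <= p99_budget_ms

# e.g. simulated ping latencies (ms) measured on candidate NFV infrastructure
samples = [0.8, 0.9, 1.1, 0.7, 1.0, 0.9, 4.8, 0.8, 0.9, 1.0]
print(sla_gate(samples, p99_budget_ms=2.0))   # False: the 4.8 ms outlier busts the budget
```

Gating on a high percentile rather than the average is what makes such tests meaningful for carrier-grade targets, where the tail of the distribution is exactly what the end user notices.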
That part is already in place, so we don't really need to recreate that environment. We believe the best way forward is to drive, for sure, the vertical verification — the VNF against the NFVI, all the integration and verification activity in the OpenStack and OPNFV communities. But beside that, we believe we also need to focus a lot on the horizontal verification: between the virtualized and the non-virtualized environments. This is the part where we believe Ericsson can contribute a lot, given what we have been doing over the past many years, and it is also the part where we need to work jointly with the community to redefine certain processes, methodologies, and tools, so that we can truly address full-stack performance. We know the native environment very well, but with the virtualized environment coming in, we can reuse quite a lot while also having to define new things. The focus, of course, as I mentioned, is very much on the test cases. There are literally thousands of test cases for the native combinations we have been working on, and now, with virtualization — with any VNF coming in, any NFVI deployment — we need to consider how to address testing in the best, smartest way. And that is very much about consolidating and automating the test cases. One more thing I want to share about the Transformer project concerns the mobile industry and multi-vendor interoperability, which, as we all understand, has been a challenge for the mobile industry.
Over the past 20 or 30 years, the mobile industry has driven this multi-vendor interoperability through a forum called the Network Vendor Interoperability Testing (IOT) Forum, a kind of informal group alongside 3GPP. That forum has been essential in helping the mobile industry achieve basic interworking across different vendors. We believe that for SDN, NFV, and cloud we now need a similar effort for the industry, and the OPNFV Transformer project that Ericsson is driving serves that purpose in part. So we want to call for collaboration among the vendors, carriers, and all the partners here, so that through that forum, that project, we can address the telecom-grade performance requirements and truly ensure interoperability and interworking. Finally, there is one thing Ericsson announced about half a year ago, which you may recall: the Ericsson OPNFV certification program. The thinking here is that this is Ericsson's investment in helping OPNFV succeed as a community. We want to lead in driving industry certification — not a vendor-specific certification, but rather a partner program that consolidates the vendor-specific certifications. The idea is to certify vendors against the ETSI NFV standards and the OPNFV reference platform. Then, as I just mentioned, cross-vendor compliance and end-to-end full-stack performance are very much the focus of this program. And benchmarking performance is again a very important topic — very much the focus of today's talk. Once again I want to highlight: when we talk about how to benchmark workload performance, it is not only about the virtualized environment.
It is very much a combination of virtualized and non-virtualized environments that we have to consider end to end. So that brings me to my summary. To conclude: we all agree on the telecom cloud challenge now. With all this openness, flexibility, and interchangeable components as the focus, the biggest challenge for telecom cloud, as I just shared, is truly the requirements. From the regulator's point of view, and from the end user's perspective, the performance requirements on the network and the infrastructure are definitely the biggest challenge. And once again I want to highlight the multi-vendor challenge: when we work with our customers — the majority of carriers today — their biggest concern is complexity. They are very excited about the promises of virtualization and cloudification, but their biggest concern is: how can I truly put everything together? Before, I may have only needed to work with two or three vendors, and now I suddenly have perhaps 20 or 30. That is the biggest challenge for the carriers, and for the industry. So as a summary, we believe that end-to-end service performance and user experience are very much about telecom and IT infrastructure — it is interconnected infrastructure today, as we see it — and about the infrastructure, the applications, and the devices truly working well together to satisfy the end-user need. From the OpenStack perspective, I think OpenStack for NFV is truly leading the transformation, as I just shared, and we believe we are well on the way to addressing the telco cloud solution need.
And in the end, from Ericsson's perspective, we will continue to contribute and to bridge the telecom and IT worlds, to truly drive this networked society as our mission. So that concludes my part of the talk. I don't actually know how much time we have, but it is now open for questions. Yes, please. [Moderator:] Please use the mic at the back of the room for the questions — or hold them and we can relay them. Right, right, okay. [In response to an audience question:] Not yet, actually — it's a very good question, thanks for that; I forgot to mention it. The Facebook-Ericsson project is very much on the native environment, so it is not yet a virtualized solution, but it will be very soon. For any carrier that is going to have such a solution — maybe I should show, yes, exactly this picture I want to show: today there is no OpenStack, no OPNFV portion in it, but we believe that once the solution is deployed, what OpenStack provides will very much be part of the end-to-end chain. This is a very important message I want to bring to all of you here today: the work you are doing will be a very critical part of this end-to-end service performance. Okay, any other questions? You can ask anything — it doesn't have to be related to OpenStack, that's fine too. But maybe I should ask you all a question: how do you think about the challenge now? I understand telecom cloud is a difficult discussion in terms of telecom-grade performance, right? It is all open to interpretation; different communities and different groups may have different understandings. Maybe someone can share: what do you think is the biggest challenge today? Yes? ... Right. Yes, fully agree. Definitely, definitely.
I mean, the five-nines, carrier-grade requirement is not about a single component; it is rather an end-to-end view, and it also depends on the users and the use case, right? Yeah. Where's the microphone, by the way? Yeah. I think the comment just now was saying that when we talk about carrier grade, it is not about a single component in the end-to-end chain, as we discussed, but rather — as you say — what drives the cost is that requirement setting is often silo-based: we put requirements on every single component. On the hardware side, for example, I understand that HP is definitely getting requirements from customers saying you have to provide five nines at every level of the hardware you provide. So, a question here? Yes. [Audience:] You have talked quite a bit about performance and functionality. What is your opinion about the security aspect of the whole solution right now, based on OpenStack — is there any gap from the carrier requirements? Okay, so your question relates to performance and specifically to telecom security, right — what kinds of requirements do we see today? [Audience:] Yeah, I mean, you talked quite a bit about the performance aspects, but how about security? Yeah — I probably went through it too quickly just now, but security is definitely part of the telecom-grade performance requirements as we see them. So you are right: we do not separate security from the other requirements. To take a simple case, when we talk about carrier-grade security, we talk about multi-tenancy with end-to-end isolation. That is a fairly straightforward requirement, but in terms of security there are many other aspects we need to consider.
And I think I also shared a bit of this just now: from how we work with our customers and partners today, we believe this is the fundamental, basic stuff we need to start with. I don't know if that answers your question — you probably have other specific things in mind regarding security. [Audience:] Do you have any study or evaluation of how the current status of OpenStack meets carrier-grade security? And does it need to be hardened — any plan for OPNFV to work on that aspect? Definitely. I should mention, by the way, that from Ericsson's perspective the OpenStack activity — everything you are doing — is very crucial for us, and Ericsson is actually one of the top-ten companies investing the most in OpenStack. So related to your question: yes, we are looking into the ongoing activities related to security issues, and again, how we drive the different parts of the projects to truly address specific customer needs. I probably can't comment on exactly what the status is, but that is definitely the plan, even if we haven't started driving a specific project yet. Yes? Okay, any other comments or questions, please? [Audience:] This is Ashik from NTT Docomo. We also have a project in OPNFV called Doctor, which looks only at the availability of the network nodes rather than the whole infrastructure. My question to you is: through your investigation into high availability in a virtualized environment, are you looking into cost? Because one of the reasons we are trying to migrate to a virtualized environment is cost — we do not want, again, a very expensive infrastructure that is highly reliable in hardware; rather, we try to achieve high reliability through software, through the architecture. So how do you see cost in your investigation? You're definitely right — thanks for the question.
So the question is about how, when we talk about high availability, we consider cost, and how we balance the high availability requirements against that cost. It is very similar, I guess, to the earlier comment from the gentleman from HP. What we see with high availability — to emphasize — is that we do not really encourage requiring, for example, five or six nines at every level of the infrastructure. When we talk about high availability, the physical connectivity is one thing and the logical part is another. With virtualization — for example, the decoupling of hardware and software — you actually gain certain flexibility. But the challenge then is very much about monitoring: if you assume you can have lower requirements on the hardware, you need certain intelligence so that you can very actively monitor the status of, for example, a certain host. Regarding cost, going back to your comment: yes, with every solution we look into the total cost of ownership, the TCO analysis. On the product side, every component counts — for example, Ericsson is also doing hardware now, right? We recently announced our hyperscale data center hardware in Barcelona. For that hardware, we focus a lot on analyzing the TCO, the cost, and from that perspective we also look into high availability. If you have efficient components — for example, with software-defined hardware — then you can actually benefit a lot, achieving high availability at a much lower cost.
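The arithmetic behind "high reliability through software on cheaper hardware" can be sketched simply — this is my own illustration, not from the talk: independent replicas in parallel multiply their failure probabilities, so modest components can jointly exceed a carrier-grade target.

```python
# Availability of N independent replicas where the system is up
# if at least one replica is up (active/standby or active/active).

def parallel_availability(component_availability: float, replicas: int) -> float:
    """1 minus the probability that all replicas are down at once."""
    return 1.0 - (1.0 - component_availability) ** replicas

# Two 99.9% ("three nines") hosts in active/standby already exceed five nines
# on paper — ignoring failover time and correlated failures, which are exactly
# the hard parts in practice and the reason the monitoring intelligence matters.
print(parallel_availability(0.999, 2))   # ~0.999999
```

This is why the answer above stresses monitoring: the formula assumes failures are detected and failed over instantly and independently, and software fault management is what has to make that assumption approximately true.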
So I can probably only give you a general comment on that: yes, we definitely consider the cost, and there should be a balance between the five-nines, carrier-grade requirement and the potential cost of everything we come up with. And by the way, regarding the OPNFV project you mentioned — we are actually looking into it; we need to collaborate more through all the existing projects. Okay, any other questions? I think we still have some more time, right? Okay, if there are no further comments or questions: as I mentioned just now, we would truly like more community members to join forces with us, either through the OPNFV projects or through the OpenStack activities. So once again, thank you for your attention and your time.