All right, ready to start? All right, good afternoon, everyone. Today we have a pretty exciting panel with leading operators. The topic is all about OpenStack orchestration to support SDN and NFV, and specifically how programmability and automation are going to impact telco networks. So before I go to the panel, real quick, I want to set the stage. One of the things we have noticed in the last several years is the technology trends. There are many technology trends that have been shaping how we think, how we do business, how we do operations. It's almost like throwing a rock in a pond. It has created a lot of ripples. And it has given rise to theories like this, right? Efficiency equals migrating to cloud. Einstein's second theory. And social media was completely full of it. Just to recap the technology trends: we know cloud, right? I mean, especially in this community, everybody knows cloud, and cloud has come into its own. The key things it brings are rapid elasticity and resource pooling. Taking that concept and applying it to network functions, virtualizing them, gives you network function scaling. That's NFV. And the third one is SDN, Software Defined Networking, which provides programmability, separating the control and data plane elements and giving you common APIs, common programmable interfaces. So these are the technology trends. What we are going to explore today is orchestration, OK? All these technologies are really good, but ultimately it comes down to how we can orchestrate. This is just a pretty straightforward definition of orchestration, right from Wikipedia. And we will not only talk about this, but also about some of the experiences of each of these operators. So having said that, let me introduce myself. My name is Swami Vasudevan. I'm with Ericsson in SDN, NFV and Cloud Solutions. And with me, going from my right, we have Javier from Telefonica.
And then we have Greg from AT&T, and Fred from Verizon, OK? And what I'm going to do is ask each of the panelists to talk about how OpenStack is impacting their networks as well as their businesses. Can we start with Javier? Mic's not working. Can you hear me? No? No. No, it doesn't work. Yep, that works. Yeah, yeah, yeah. Now it's good. I mean, using yours will be a bit awkward, but this is what being an operator is about: improvising to get things done. So, well, some challenges about orchestration. The reason we are applying these technologies to the network is because we want to increase the freedom that we have, the choices that we have, but also make our life easier and less complicated. And you know the whole list of buzzwords that we all want to apply to these technologies. But one thing is what our expectations are, and another is what the reality is. The reality, by the way, is changing and evolving, but you need to be aware at every single moment of what the reality is, to have the right level of ambition at each of the stages. We in Telefonica started this journey very early. We actually had our first VNF, though they were not called VNFs back then, in 2008. And since then, we've been working on this stuff. And I think that we have an informed opinion on the things that can be done, and we are pretty optimistic on them, and the things that cannot be done yet. So instead of focusing on the things that we can do today, because it's pretty obvious that we can do a lot of stuff, I'll focus on the challenges, some challenges that it would be great if we could finally solve. There are two things that we are struggling with, in general, as an industry, related to orchestration and the way that we work. One is the modeling of the VNFs and the network services.
I mean, if we don't have an appropriate description in terms that can be consumed by a system, then we end up doing things manually. That's one of the things that we wanted to avoid. And the second, and this is very pertinent in this forum, is about some of the limitations that cloud management systems have, and particularly OpenStack, because we are in this forum, in how to deal with data plane workloads and interconnect them successfully. So let me elaborate a little bit on them. Sorry for being technical. This is what orchestration is, from our perspective. You have a set of replaceable components and you want to be able to assemble them in an automatic fashion. That is the theory. You have your VNFs and you have your network service, which is what you build out of those pieces, those bricks, if you want. And then you have one process that is onboarding, where you have new VNFs that you purchase or build in-house that you incorporate into that catalog. And that is great, because if you have some sort of blueprint of what you need to do, then the system will do it on your behalf. That is great, but any resemblance to reality is purely coincidental, because what we actually have is an uneven modeling of the VNFs, which is in most cases inadequate, and completely unrelated even to what the developer creating that VNF intended to achieve. Then it's not a surprise that onboarding is hard. And then you have a catalog like this, which is difficult to handle. But hold on, we have the blueprint for developing the network service, but perhaps it's a bit rough. And it's not a surprise that ad hoc integration is needed. So if you are lucky, you end up with something like this. I'm not saying that we are being over-ambitious; it's a work in progress, okay? So that is one thing.
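A minimal sketch of what "a description that can be consumed by a system" might look like: a VNF descriptor as structured data, with an onboarding check that rejects entries the orchestrator could not deploy automatically. The field names here are purely illustrative, loosely inspired by ETSI NFV descriptors, and are not any operator's actual schema.

```python
# Illustrative only: a VNF descriptor as machine-consumable data plus an
# onboarding check. Field names are invented for this sketch.

REQUIRED_FIELDS = {"name", "version", "vdus", "connection_points"}

def onboard(catalog: dict, vnfd: dict) -> None:
    """Add a VNF descriptor to the catalog, rejecting incomplete models."""
    missing = REQUIRED_FIELDS - vnfd.keys()
    if missing:
        # An uneven model forces manual, ad hoc integration later; fail early.
        raise ValueError(f"descriptor incomplete, missing: {sorted(missing)}")
    catalog[(vnfd["name"], vnfd["version"])] = vnfd

catalog = {}
onboard(catalog, {
    "name": "vFirewall",
    "version": "1.0",
    "vdus": [{"vcpus": 4, "ram_mb": 8192}],
    "connection_points": ["mgmt", "data_in", "data_out"],
})
print(list(catalog))  # → [('vFirewall', '1.0')]
```

The point of the check is exactly the panel's complaint: if the model is incomplete, the system should refuse it at onboarding time instead of deferring the gap to manual integration.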
And the second point, which is related to this one, is the connection with the underlay, with the things that go underneath, because the developer all the time had in mind the requirements of the environment where he or she expected the VNF to land. And one of the key features that many of the network elements that deal with massive amounts of traffic have in common is precisely the data plane and the data plane workload. And that means that if we want to handle them at the VNF level, we need to make sure that the VNFs are deployed in the manner they were designed for, and that they are interconnected properly, so that we don't lose packets in the middle. The funny thing, and this is the positive part, is that all the technologies are available at all the layers: in OpenStack, in the hypervisor, in libvirt, even in the SDN controllers. All of that is ready to use, but it's not yet there end to end. To make it simple, what you have is some VNFs that deal with the data plane, and you want to interconnect them. And sometimes you need to interact with external sources, or interconnect them in the manner designed by the VNF vendor. The thing is that you need to pay attention to three aspects, because there's a lot of misconception, and I think we can elaborate later in the panel session. One is the resources that are assigned to the VNF itself: that it is large enough, has the special type of memory, whatever is needed. The second is the way that the interfaces are managed, which is particularly challenging in a cloud management system, because they are more oriented to elements that in most cases have only one interface. And the third is how you handle the underlay, because many of the connections can go in overlay, but many of the connections cannot.
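The three aspects above do map to concrete OpenStack knobs. As a hedged illustration: the Nova flavor extra specs `hw:cpu_policy` and `hw:mem_page_size` cover the resource side (CPU pinning, hugepage-backed memory), and a Neutron port's `binding:vnic_type` of `direct` requests an SR-IOV function for the interface side. The extra-spec keys and port attribute are real OpenStack settings; the helper functions themselves are hypothetical.

```python
# Hedged illustration: the helpers are invented, but the extra-spec keys
# and the port attribute are real OpenStack knobs for data-plane workloads.

def dataplane_flavor_extra_specs() -> dict:
    """Nova flavor extra specs a packet-processing VNF typically needs."""
    return {
        "hw:cpu_policy": "dedicated",  # pin vCPUs to dedicated host cores
        "hw:mem_page_size": "large",   # back guest memory with hugepages
        "hw:numa_nodes": "1",          # keep the instance on one NUMA node
    }

def sriov_port_request(network_id: str) -> dict:
    """Neutron port body asking for an SR-IOV virtual function."""
    return {"port": {"network_id": network_id, "binding:vnic_type": "direct"}}

specs = dataplane_flavor_extra_specs()
port = sriov_port_request("net-1234")  # network_id is a placeholder
print(specs["hw:cpu_policy"], port["port"]["binding:vnic_type"])  # → dedicated direct
```

The underlay aspect, stitching those ports into the physical network, is exactly the part the panel says has no ready-made automation.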
And the thing is that there is a set, there could be more, of gaps related to this simple picture that are still there, and they stand in the way of being more ambitious about NFV deployments on top of OpenStack. So that is the statement. I hope that we can elaborate later in the panel session. Thank you. Thank you. Right. Greg, do you want to? I do. It's working. All right, okay. Is it working? All right. Okay, my name is Greg Stigler and I'm AVP Cloud at AT&T. For those of you who were here at the opening keynote yesterday, I know you saw my boss, Sorabh Saxena, the senior VP over software development and engineering at AT&T. If you didn't see it, I highly recommend it, not just because he's my boss and I'm sucking up, but it was really good. It was really good. He tells you a lot about our direction, a lot about where we came from, a lot about where we're going and a lot about what we want to do, okay? One of the pieces of that is something that we've released in a white paper to the public called ECOMP. ECOMP, let's just not worry about the whole acronym; the O is the orchestration, okay? So it is the orchestration engine that we're developing and looking to open source, and we had 1,700 downloads of this white paper on the first day we put it on our website. So I invite you to go out and download that as well and look to collaborate with us, which is going to be a continuous theme. Can we skip to a different slide or just go blank? Oh, I can do it, okay. I had two slides for today, but we cut them and we decided to go unplugged. Which I kind of like anyway. So what I will tell you is that ECOMP is our foundation for moving forward with orchestration. I will tell you that the initial orchestration is not the hard part, right? It's once you have something up and running and you want to take action based on the events that take place, that is the key.
So it's getting to those details, and it's about not creating a storm at that point in time, where maybe you have a DDoS attack and you start spinning up more VMs or containers, and eventually you're going to go outside your data center walls. So that's one of the biggest keys: controlling that storm. As for the AT&T story, we were a large contributor in the beginning of OpenStack. In fact, we were on Diablo. That was also mentioned in the keynotes; a couple of people were on Diablo, and the pain we went through, well, the pain we went through on Diablo. I think somebody hit a switch they shouldn't have hit. But it was nice to see you all. So we've got a lot of experience. In fact, we were great contributors back then. Tremendous contributors to OpenStack. But what we could not do is deliver something for our business. That was a problem, okay? So we stopped and we went back and said, what do we need to do to make this right? And really, we culturally changed. Instead of living by the Agile manifesto alone, we added some rigor. My favorite search is "Agile", space, "excuse". Now, I wanna tell you that I'm an Agile fan. Love to iterate. Feel like I've been iterating all my life in IT development. Okay? But when you have absolutely no structure with 13-plus scrum teams, like we have at AT&T and growing, around something as complex as OpenStack, if you don't have a little bit of structure to work this, you're not gonna make the end goal. So we put that in, we began to make progress on our strategy, we began to deliver, and then in Tokyo last year, I made a promise to the community that we were gonna show up in a big way here in Austin and we were gonna talk about how we are going to contribute. You may be surprised to see an AT&T person sitting next to my friend Fred, and he is a friend now, from Verizon. Okay? We want to contribute together. This may be a little bit of back to the future.
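The "storm control" Greg describes, making sure an event flood like a DDoS cannot trigger unbounded scale-out, is at its core a capped, rate-limited decision loop. Here is a toy sketch under that assumption; the class, cap, and cooldown are invented for illustration and ECOMP's real policy engine is of course far richer.

```python
# Toy sketch of scale-out "storm control": a hard instance cap plus a
# cooldown between actions, so an event flood cannot scale a VNF past its
# budget. Names and thresholds are invented for illustration.

class ScaleGovernor:
    def __init__(self, max_instances: int, cooldown_s: float):
        self.max_instances = max_instances
        self.cooldown_s = cooldown_s
        self.instances = 1
        self._last_scale = float("-inf")

    def request_scale_out(self, now: float) -> bool:
        """Return True only if a scale-out action is actually allowed."""
        if self.instances >= self.max_instances:
            return False  # hard cap: stay inside the data center walls
        if now - self._last_scale < self.cooldown_s:
            return False  # cooldown: don't amplify the storm
        self.instances += 1
        self._last_scale = now
        return True

gov = ScaleGovernor(max_instances=3, cooldown_s=60.0)
decisions = [gov.request_scale_out(t) for t in (0, 10, 70, 140, 200)]
print(decisions, gov.instances)  # → [True, False, True, False, False] 3
```

Even with a flood of scale requests, the governor acts at most once per cooldown window and never exceeds the cap.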
There were interconnection agreements in the past; we can help each other. That's what open source is all about. Okay? So we wanna do that across the globe, not just within the United States. In fact, I have a vision that what we will see someday is service providers and large enterprises sharing workloads on their OpenStack powered core, right? We get the same core, we can share that workload with another provider. So if we go to country XYZ and we don't have a facility, rather than build a building, put in hardware, do all the things we have to do, I would like to run my workload on their cloud, and vice versa, we wanna make our cloud open. So we are very open, very open source based, and we'll continue to follow through with that, and that's our commitment to the community going forward. Problems? Javier mentioned some; they absolutely exist. I will say the top ones for me are upgradeability, Neutron, and why I have to go pay somebody to do overlays, I don't understand. So I would really like to see something grow better in that area, and I'll differentiate something here as well: pure open source versus commercial open source. I don't know if anybody's used these terms yet, but I made them up, at least the commercial part. On the pure side, you would look at people like Mirantis, you would look at people like Ubuntu, and basically you can download their hardened version from their website. On the other side, and I'm not gonna mention them by name, you buy a commercial hardened version and you have vendor lock-in, because it is not easy to switch from that one that they call open source, okay? It is not easy to switch over from that one, because there's a lot of packaging, there's a lot of issues with doing that. So I would ask the community to put a great focus on this area as something that we could do together and make a difference in this world. So with that, I'll sit down. Thank you for the time. So again, thank you very much.
I'm Fred Oliver from Verizon, and we're good friends now with Greg. Again, we at Verizon have been working on a cloud environment for a fairly long time. We did deploy stuff internally back on Essex, way back when. In the meantime, I think it's improved quite a bit, but it still has some pieces that are lacking. Verizon sees this as a great opportunity to improve our business efficiency, leverage some of the capabilities that are coming up in the environment, improve agility, and eventually arrive at some operational improvements. There certainly are, again, I'll reiterate a lot of the things that our friends Javier and Greg have mentioned. There are certainly some pieces lacking in the current environment, including from the VNF vendor perspective: some of the VNFs we get are not really suitable or ready to be run in a cloud environment per se. Orchestration is one of the mechanisms that we use to try to compensate for some of those things, and we're looking to leverage all that capability. Again, to reiterate some of the things that have been said before, some of the things that are missing or lacking in the environment are Neutron, and high packet rate processing, which is still in its infancy for us, and we can certainly ask for some improvement there. So we'll switch to some panel questions, and I'm hoping the audience has a mic so we can actually alternate between a question I ask and one from the audience. This particular question, gentlemen: one of the things this reminds me of is a conversation I had with my son. It was a pretty candid conversation about life and death. He asked me, are we all going to die, daddy? And I said, yes. Are you going to die? Yes. Am I going to die? Yes. And I asked him, why are you asking? And the answer he gave me was pretty interesting. He said, when you die, I want your iPhone. Okay, all right.
So the reason, he said, was because he wanted to do complete video streaming and online gaming. Interestingly, that puts a lot of pressure on the network and how agile it should be, changing as fast as it can. So I want to ask you guys: did OpenStack enable you to deploy new services in a flexible manner, in a faster manner? And if so, an example. Maybe I'll start from the other side. Fred, do you want to take it? Yes, I mean, I think we're still at the beginning of the journey, but we certainly see examples of cases where a lot of the process that's been involved in actually acquiring hardware, planning for something, deploying it in the various environments and then enabling things has been considerably improved by the use of OpenStack, deploying virtual environments and deploying a customer at a time, rather than as a kind of large incremental service deployment. So certainly in the wireless packet processing environment I've seen a rapid enablement reduction, from several months to deploy a new customer service down to more in the week timeframe, just going through the whole process. Greg, do you want to add anything to it? Sure. Six months ago, I would have said no. Now I'll say yes. The "no" was based on the time it takes to deploy a cloud, an OpenStack cloud, and the energy and manual nature of that. As part of Mr. Saxena's presentation in the keynote, he discussed that in the first 10 months of last year, we deployed 20 zones, and those are sites too, same thing, okay? In the last two months, we deployed 54, and that's all about automation. So that automation, making the sites available, both test, dev and production, has changed the world for us at AT&T, in that we were able to make this available to our developers and our business users. So that was the most significant move that enhanced the services, and we have 17 VNFs coming up this month based on that.
So it's very exciting. Okay, thank you. Are we up? We'll do this. Yeah. No, sir, did this one work now? No. Yep. This one. No. Both? Yes. Both? No. Now? Yeah. Exactly. I'm okay. I can't get a microphone from there, so you don't need to. Anyway, what I was saying: okay, of course that's helped us. As I said, we are continually recalibrating the level of ambition, because it's obvious that each technology has a degree of maturity. So if the question is whether, in the environment that we have for compute nodes, we have improved the way that we do the processes, I would say, definitely, yes. And actually we have some of the largest cases, experiences in South America, where we have a strong presence, as probably many of you know, in the residential market, based on x86 processors. The point is that it's not only about the way that you change your operation in terms of what you do with the VNFs, but also the way that you change and transform your relationship with the customer, which has changed substantially because of that environment. That's been the main lesson for us: it's not about the technology, it's not about deploying VNFs fast, it's not about deploying data centers fast, which, by the way, is required, so I'm not neglecting that challenge. What is actually challenging, in the end, is to be able to change your relationship with the customer for the good, if possible, and leverage the new capabilities that those platforms have in order to be faster, be more responsive, and be able to offer things that really matter to that customer. That is where we have changed substantially the way that we operate the network, particularly in the places where we have some NFV-related deployments in place. I would like to echo that before we move on, real quick, because splitting up the black box is the best thing that ever happened, right?
So taking the proprietary hardware and making it commodity hardware, taking the software and making it software that runs on commodity hardware. Now, not everybody's got it right yet, because it's not native. I think they just stripped it out of the black box in some cases. So we need that to be cloud-native and written the right way, and we're working with our partners on that, but it's an absolutely huge key to giving control to the customer, to be able to manage and change their services on the fly. I totally agree. I agree. I agree with you. And to second that, I think a lot of the opportunity here is to be more agile, allow customers more flexibility, more agility, enable services on their own without having to wait six months. Okay, thank you. Maybe we'll see if there's one question to take from the audience. If you have any questions? If not, I'm gonna ask another panel question. All right, so one of the things I want to ask you guys is: what are the key deficiencies of OpenStack that you have encountered in achieving your desired level of automation? You wanna go first, Javi? Now that I have a microphone. Hopefully it works. So, I would say deficiencies; I know that all these technologies are in quick evolution, so it would be unfair to say that we perceive them as deficiencies, but it's obvious that there are things that are, from our perspective, missing, which are related to the new set of use cases that we as service providers are bringing to the table. And many of them are related to I/O-intensive workloads, which is not only about successfully assigning resources from a pool, which is more or less part of the theory in computation, but also about the way that you break out those workloads and the way that you automate the interconnection of those workloads with the rest of the physical network, because in the end, you end up in a fiber, in a long-distance fiber, not in a data center fiber.
And if you fail on the automation of that last step, you're automating very little, because that leads again to a lot of manual labor, which is one of the things that you want to avoid. So that is one of the challenges. And the other is one that you can leverage on top of that: the appropriate modeling, so that you can repeat that process over and over. You have to consider that in our case, the Telefonica case, we are present in over 26 countries, and obviously we don't reinvent the wheel in every single country. We sometimes try, but we try to make things homogeneous. The more we can leverage a single experience, a single evaluation, a single testing, to repeat it reliably, the more successful we are in that replication. And that requires, of course, automation. So for us, those two elements are key to going a step further in that evolution, in that deployment, because besides that, we have no doubts that the servers are perfectly capable of handling most of the workloads that we consider relevant, and capable of scaling at the locations where we consider that they might add value. We have no questions about the theoretical possibility of automating and creating appropriate templates for that; it's just a matter of the delivery of the last step. We have hope, but we need to keep working on that. Okay. I'm okay, I think. The first thing I want to say is: if you don't contribute, don't complain. Seriously. I don't mind the complaints, they're okay, but make them constructive and get involved. Okay, that's what really needs to happen. I am so grateful to this OpenStack community for launching what it launched, because we could not have done this on our own. There's no way in this timeframe we could have done this on our own. So I am so grateful to the community. Now, with that said, now that we're getting back in the contribution game and we're doing good, my curve's going up this year, I do have a couple of things to say.
So let's start with the Foundation's ratings of some of the modules. Ceilometer: two out of eight. That's no good. It's got to get better, and it's got to get better faster. It's too critical. The telemetry is too critical to what we do. The ability to have insight into those things has to happen. So that's one. Containers would be another one. We're kind of talking about Magnum, we're kind of talking about other stuff and different ways of doing it. I'm not saying there only has to be one way, but there shouldn't be 20. We should lock into some things, or at least some reusable assets that allow people maybe some freedom to do things their own way, but it would be limited. And I would just have to come back to Neutron again for the last time. Anyway. Yeah, well, I'll just bang on Neutron as well. Okay. Now, certainly I think, again, OpenStack is a very useful tool in general, and it's one tool. I think it's actually incumbent on a lot of the things that surround it to deliver a full service. So again, I'll reiterate Ceilometer. I think telemetry in general is one of the things that we see as lacking. And we don't contribute, unfortunately, but that's certainly one of the areas where we're looking to contribute, in terms of requirements and specifications as well as just code. But I think there is certainly value there, and things are moving rapidly; there's certainly more and more coming out of it. On a specific Neutron point, one of the things that we try to integrate into our network that's not well supported in the Neutron environment is quality of service: how to do bandwidth reservation and handle the various classes of traffic that we want in the control path. That's one of the key areas where we're looking to help. All right, good.
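For context on the QoS gap Fred raises: Neutron's QoS extension (available since the Liberty release) does support bandwidth limiting, though not true bandwidth reservation, which is the missing piece. The request bodies look roughly like the following; the field names follow the Neutron API reference, while the helper functions are our own illustration, not part of any SDK.

```python
# Sketch of Neutron QoS request bodies. Field names follow the Neutron API
# reference for the QoS extension; the helpers themselves are illustrative.

def qos_policy_body(name: str) -> dict:
    """Body for POST /v2.0/qos/policies."""
    return {"policy": {"name": name, "shared": False}}

def bandwidth_limit_rule_body(max_kbps: int, max_burst_kbps: int) -> dict:
    """Body for POST /v2.0/qos/policies/{id}/bandwidth_limit_rules."""
    return {"bandwidth_limit_rule": {
        "max_kbps": max_kbps,              # sustained ceiling
        "max_burst_kbps": max_burst_kbps,  # burst allowance above it
    }}

policy = qos_policy_body("gold-voice")
rule = bandwidth_limit_rule_body(max_kbps=10000, max_burst_kbps=1000)
print(policy["policy"]["name"], rule["bandwidth_limit_rule"]["max_kbps"])  # → gold-voice 10000
```

A limit caps what a port may send; a reservation would guarantee what it always gets, and that guarantee is what carrier traffic classes need.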
May I add something about the contribution topic that is perhaps often misinterpreted, and I'm certain that it's not the case here. In economics, as you probably know, the most valuable is the most scarce. And there are many types of contributions, which on this floor are happening at almost all the levels, and I can only speak on behalf of Telefonica, which has been contributing actively at all the levels, including the software, for a long time. But I would say that not all service providers can afford contributing directly to the software. And I still have a deep respect for that, and I particularly want to give a shout-out to some service providers who have provided an extensive analysis, the assessment that they made of OpenStack in some use cases that they wanted to deploy for real in the short term. And sometimes it has been misinterpreted, and since it's not our case, I can say it freely, as if they were saying that OpenStack was not valid for them, or that they gave up on OpenStack. It's exactly the opposite, in my view. Those inputs, first, we in Telefonica have perceived as very useful insight into things that we had not even considered, in use cases that we were also considering for real and hadn't even noticed. And second, for the broader community, which is reasonably IT-related, with most of the expertise based in IT, they are extremely useful feedback. I have in mind, for instance, the contribution from British Telecom, talking about the virtual CPE for the enterprise environment, where they made very valid points. And perhaps they are not contributing millions of lines of code, not one. Or they are, but I think that, putting it on a comparative weight at this moment in time, what they are contributing in terms of knowledge is huge and extremely valuable, and I understand that it should be perceived as something useful for the community, and constructive.
And I know that many other service providers are hesitant about whether that kind of message would be perceived as backing off from the OpenStack community or not. And I would like to invite you to read them in a different manner, in a constructive manner, as bringing their experience to make things better. Yes, so obviously we disagree to some degree. We have a middle ground here. What I will tell you is that we have a small service provider event this evening. I can't invite everybody to the small one. A lot of C-levels up here. And British Telecom's invited. So we would like to welcome them with open arms. We just want, I can pay a consultant for feedback, okay? So we want people with skin in the game, not just bloodsucking. So that's about as blatant as I can be. Okay, I guess we are a little bit over time. Can we take one or two questions from the audience? If anyone wants to ask a question? Microphone. Hi. Hi. Question for all three of you, really. You mentioned with VNF vendors that you saw that they weren't maybe quite ready yet, or at least a lot of them weren't ready yet. So what are the primary deficiencies that you see with the VNF vendors at this point? Well, I'll try to answer that. I think I brought that issue up. The things we're seeing, and this is improving fairly rapidly, but the initial set of VNFs that we got were pretty much straight ports of what they had in their hardware environments. And they're not really set up to scale well, to be separately deployed, to handle HA as you would be able to handle it in a virtual environment. These things are, again, improving. A lot of the VNF vendors are, let's say, cloudifying their VNFs to support this, but it's early days for them as well. Thank you. You want to add? So that's a quick question. Oh, you don't have one. Logistics is getting very complicated.
Anyway, no, I would like to add a note on the VNFs, because I think it would be fair for some and fair for others. As many of you know, because it's public information, we've founded a lab for VNF evaluation, with 57 VNFs evaluated so far and counting, for many purposes. And I would say that there are two big conclusions from my perspective. One is that there's a lot of extremely serious stuff out there, and other stuff where they haven't even given it a thought. And the second is that that degree of maturity is surprisingly unrelated to the size or the brand of the vendor behind it. It's uncorrelated, which means that there are big vendors with excellent VNFs, big vendors with awful VNFs, and the other two combinations. So we should be able to distinguish, and to send the right messages to the VNF vendor community on what are the kinds of things that we want to see. But I wouldn't be pessimistic, because we are clearly seeing an evolution over time in the maturity of the VNFs. And it's worth acknowledging that, because it's happening at the industry level. It's actually happening for all of us, right? It's happening within all of our big businesses, where we have to transform. And the same thing is true for the vendors in this case. So, that's all there. All right. Unfortunately, we've almost run out of time. The one thought I want to leave you guys with: there was a little boy who goes to a balloon vendor and asks if the red balloon is going to fly high. And the vendor says yes. And then he asks whether the green balloon is going to fly high, and so forth. The thing is, what the vendor said was: it's not the color of the balloon that makes it fly high. It's what's inside the balloon that makes it fly high, okay? And it's you guys. You are the community and the collaboration. That's what makes all of us fly high, okay? So, thank you so much. And thank you to all the panelists; it was a very good discussion.
Thank you all.