Hello, hi, how are you? The show must go on, as it were. We're waiting for some chairs to arrive; they're on their way, but because time is short and we wanted to get things moving, I thought we would at least start with the introductions and introduce this particular session. My name is Darryl Jordan Smith. I work for Red Hat, where I'm responsible for the telecommunications business on a global basis, and I'm very pleased to welcome Toby Ford from AT&T, Stuart McLean from Verizon, and Michael Bagg from TELUS. I'll let each of them spend a couple of minutes introducing what they do. We're here today to talk about deploying OpenStack in telecommunications: to give everyone an update on the experiences some of our key partners and customers have had in this space, and to make the best of this session. That rattling behind me sounds like the chairs, so it's going to be interesting. Toby, do you want to introduce yourself and what you do at AT&T?

Sure. I'm Toby Ford. I work within the architecture team at AT&T, responsible for our Domain 2.0 transformation. My focus is particularly around cloud, and what was described earlier as the AIC is my area of responsibility.

I'm Stuart McLean, with Verizon. My main responsibility is on the architecture and design team. I spend a lot of time in the lab and non-production environments doing integration and interoperability testing and proof-of-concept testing, and putting together a design that can be handed over to our operations team for production deployment, hopefully as a stable platform. So it's a lot of time in the lab, working with the continuous updates and new releases, from the infrastructure up to the network functions.
Good morning. My name is Michael Bagg, and I work for TELUS, a telecommunications company located in Canada. I'm on the data center and network infrastructure virtualization team, which is quite a mouthful; I've yet to find a clean acronym for it, though if you can't make an acronym for it, it isn't real. My role in that organization is architecture prime for the next evolution of our NFV platform.

Great, thank you very much. Toby can only join us for a few minutes today, so I'm going to start focused on him, and then he's going to slip off; he's particularly busy today, so don't be surprised when he disappears from the stage, and it won't be because we said anything to offend him. Toby, from your perspective, what are the key things you're seeing with regard to OpenStack as you build out the project within AT&T, and what are the key challenges?

Sure. One of the things we're dealing with right now, as described in the SOARPS presentation earlier, is that we're having to manage lots of locations, and OpenStack was not originally designed to deal with the problem of what happens when you have a thousand OpenStack setups. That's a really tricky problem, and so is upgrading that type of environment and making sure that all of the different projects and modules we use can be upgraded consistently. Take Keystone as an example: if I have to upgrade all the sites to Mitaka, and I have to upgrade Keystone in an integrated way that keeps them all together, that doesn't really work very well. This is probably the biggest concern we have at this time.

So Stuart, from your perspective?
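The thousand-site upgrade problem Toby describes is often attacked with staged rollout waves rather than a single integrated cutover: upgrade a canary site first, then progressively larger batches, so a bad upgrade is caught early. As a hedged illustration (the site names, wave sizes, and growth factor below are invented for the sketch, not AT&T's actual process):

```python
def upgrade_waves(sites, canary=1, growth=4, cap=250):
    """Split sites into rollout waves: a small canary wave first,
    then exponentially larger waves up to a fixed cap, so a broken
    upgrade is detected on few sites before it reaches the fleet."""
    waves, i, size = [], 0, canary
    while i < len(sites):
        waves.append(sites[i:i + size])
        i += size
        size = min(size * growth, cap)
    return waves

# Hypothetical fleet of 1000 sites; names are placeholders.
fleet = [f"site-{n:04d}" for n in range(1000)]
waves = upgrade_waves(fleet)
print([len(w) for w in waves])  # [1, 4, 16, 64, 250, 250, 250, 165]
```

Between waves, an operator would typically run health checks before proceeding; the scheduling itself is the only part sketched here.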
So I agree with Toby; similar challenges there. Operationalizing, and getting the knowledge base up within the operational teams to understand the infrastructure, because it is significantly different from your traditional bare-metal infrastructure. That's a challenge we're working through: getting the teams up to speed and more aware of the infrastructure and how it functions. Security is a concern as well, so a lot of work is being done there, not only on deploying a stable platform but also on securing that platform. That's something we're really focusing on right now.

Great. And Michael, from your perspective, what are you reviewing in terms of your overall plans, at a high level, around OpenStack?

Well, the part that's giving us the most difficulty right now is just finding the right VNF stack for the business use case we're deploying. I'm finding that each of the VNF vendors is rather slow to give out what requirements they have, what kind of connectivity requirements they have, and when they do have these requirements, they also want to come with a full set of proprietary management that somehow has to fit into our rather open environment. So how many management and orchestration pieces are going to come along with these different VNFs? That's the part that's giving me the most fits.

So Toby, back to you: what are the key use cases where you're looking to deploy OpenStack initially, building on what was presented in one of the keynotes earlier today?

Sure. Getting into more specifics, it's really across the board. Our internal IT workloads, and anything we expose externally, our API services: those were the first things we took care of.
Now we're moving into the voice applications. I don't know if you've noticed recently, but the rollout of VoLTE has happened pretty dramatically over the last few months, and the call quality has gone up; that platform is running on the AIC. After the voice layer, the next big one for us is all of the mobility services in the EPC realm; we're going through and moving them over to this platform. And then the last big one: we're going to undertake a pretty major reversioning of DirecTV and of U-verse, both the native platform and the over-the-top platform.

Great, thank you. Stuart, from your perspective?

So we're doing a lot of testing. We've done a couple of production deployments with virtual load balancing, virtual firewalls, virtual routing, and voicemail, so we're working across the board with different applications. We've tackled some of the easy ones, some of the smaller ones, up front, but the majority of our time these days is being spent on the virtual network functions that require a level of performance. When you take a purpose-built application on a purpose-built piece of hardware and virtualize it, we're still working toward getting the ultimate performance out of that virtual network function, using technologies like SR-IOV, for example, to pass those physical devices straight through to the virtual machines. We can do that, but now we need to understand availability and consistency with those deployments.

Right. And Michael, from your perspective, what really led TELUS to look at OpenStack initially?

Well, OpenStack is in play with all of the vendors we're working with, and Red Hat is a part of nearly all of the vendor solutions that are approaching us.
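The SR-IOV passthrough Stuart mentions is usually paired with dedicated (pinned) CPUs and hugepage-backed memory on the Nova side. As a hedged sketch: the extra-spec keys below are Nova's standard flavor extra specs, but the flavor name and sizes are invented for illustration, not Verizon's configuration:

```python
# Illustrative Nova flavor definition for a performance-sensitive VNF.
# The keys under "extra_specs" are standard Nova extra-spec names;
# the flavor name and sizing are hypothetical.
vnf_flavor = {
    "name": "vnf.perf.8c16g",          # hypothetical flavor name
    "vcpus": 8,
    "ram_mb": 16384,
    "extra_specs": {
        "hw:cpu_policy": "dedicated",  # pin vCPUs to host cores
        "hw:mem_page_size": "large",   # back guest RAM with hugepages
        "hw:numa_nodes": "1",          # keep the VM on one NUMA node
    },
}

def is_pinned(flavor):
    """True if the flavor requests dedicated (non-overcommitted) CPUs."""
    return flavor["extra_specs"].get("hw:cpu_policy") == "dedicated"

print(is_pinned(vnf_flavor))  # True
```

The trade-off discussed later in the session follows from exactly these settings: pinned, hugepage-backed instances get predictable performance but give up flexibility such as live migration.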
We've had a long relationship with Red Hat on our traditional virtual stack, and in this next generation it makes great sense to partner with Red Hat. The overall NFVI layer has reached a level of maturity that we're quite comfortable with, and we have availed ourselves of the various plugins for the network infrastructure silo and for storage. The next level up that we're looking to incorporate is the NFV architecture, which we're expecting and hoping for Red Hat to help us out with.

Great, thank you. So Toby, coming back to you and looking into the future a little around OpenStack: from a community perspective, what do you think we need to focus on to help the deployment of OpenStack as it gets rolled out at AT&T?

Clearly the work in the OpenStack product group to really focus on rolling upgrades, and to make that a meaningful thing that actually works, so that everyone can get the benefits of a CI/CD kind of deployment. Of all the things, I think that one is probably the most important, because it's the foundation that will allow OpenStack to evolve even further.

Great comment, Toby; I agree with that. Upgrades are something we have to be 100% solid on. We can't take down infrastructure, and we can't cause service disruption when we're trying to add a new feature or a service patch. And back to what I said originally: knowledge. We have to get people educated; they have to be more comfortable with the OpenStack platform. There is a gap there today, and it's just going to take time. The comment this morning in the keynote about taking your VMware sysadmin over to being an OpenStack admin: it's significantly different. The virtualization concepts are definitely still there, but how you manage, deploy, and support that infrastructure is significantly different.
Yeah, I think it's not just knowledge but culture. We have a culture and tradition around how we handle upgrades internally: we're used to long-standing set dates when we're going to take systems down and upgrade them, and some of those dates are simply worn in from a legacy tradition of how we do things. Changing that over to a more agile model of rolling upgrades, rolling outages, and being able to fail over at whim, just as part of our culture, is hard; as a telco we like to put something into the ground and call it done. So for us it's going to be not just a knowledge thing but a culture thing.

So before you need to run off, Toby, what about live migration, building on that? Is that a big theme you're looking at?

Yeah, I think live migration is at the core of what was presented earlier today. Another key concept Jonathan talked about is that we can't just assume everything will be cloud native. There will be legacy applications and legacy VNFs, certainly within the router space, that have been built in a vertically scaled way of thinking, and so we have to have migration to handle that. We have to have mechanisms at the VNF and app layer that can support a legacy way of thinking while at the same time getting the other benefits of the cloud, like being able to scale out. So that's a very key part of it.

Great, thank you. Thanks, Darryl, for the time.

No, thank you for coming by. Stuart: live migration. From Verizon's perspective, what are you looking at there?
So there are use cases for it. Right now we're trying to work with the network functions we're focused on so that they don't need live migration: we're looking to deploy them in a manner where a single compute host or instance can fail and the application is balanced via some mechanism so that it continues to run. Initially we're not enabling live migration, but maybe that's something we look at further down the road. As far as ephemeral storage and things like that, we're using local disks on servers, so we're not using shared storage for ephemeral disks at this time.

Great. Michael?

Yeah, for us, not just culturally but knowledge-wise, we are tending to look at the VNF, and at NFV, the way we look at a traditional virtual stack, and we haven't quite made the leap into cloud-based applications. Even the VNF groups that are coming to us are not quite cloud-ready, from what we've been looking at, so it's going to be a pretty long, evolved process to get to the point where we can do this sort of live migration of an application or of a VNF appliance. But of course that is the ultimate goal.
So Stuart, we're passing the mic back and forth here; we're doing pretty well. I have a couple more questions I want to go through, and then, because there are so many people in the room, it's a great opportunity to field a couple of questions from the audience, if you don't mind. So that's a bit of a heads-up: if you have any questions, we're more than happy to take them, and that gives you a minute or two to think them through. But from your perspective, looking within Verizon at the moment, having gotten to OpenStack, are there things you're looking at that could impede its success? Things that are top of mind that you're worried about, capabilities it doesn't have, areas where it isn't resilient enough, or cultural factors in your environment where people are saying maybe we should stick with a more traditional environment versus an open-source-based one? Maybe share some of those thoughts.
The biggest challenge is when you go to virtualize a network function. The initial request comes in and says: I need this 24-by-64-gig VM. That's the size of instance these network function suppliers are looking for, and that's not cloud-ready; that's not what we're trying to do here. But we're not looking for the other extreme either, which is: can you containerize that? That would be nice, but it's quite a ways out at this point in time. So to me the biggest challenge right now, and I know I keep saying it, is education: the knowledge of working not only with the operational teams but with the suppliers of these network functions, to make them understand how the infrastructure operates and to design their virtual network functions around that. Do we have VMs or instances within tenant spaces today that are running at 16 CPUs and 32 gig of RAM? Absolutely, we certainly do. Are we working with those suppliers to tweak and modify that, to get it down to a smaller, more manageable, cloud-aware instance? Absolutely.

Yeah, and if we're talking about instances that are unwilling to be thinly provisioned, that's also a big problem. Obviously, if I'm a VNF vendor and I have an appliance that expects to get 34 vCPUs, I'm going to want those not to be thinly provisioned, and to have them locked into pinned vCPUs. That becomes a problem as you begin to collect a number of these kinds of instances in your environment. Part of my idea in moving to a VNF version of a function is that I could begin to utilize that VNF, or that VM or appliance, in a virtualization-friendly manner, whereas some of the vendors are looking at the virtualized version with the same kind of hardware requirements as their hardware version.
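The sizing tension described above can be made concrete with a little capacity arithmetic. This is a hedged back-of-the-envelope sketch (the host shape and overcommit ratios are illustrative, not either operator's real numbers): with pinned vCPUs there is no overcommit, so a demanding appliance flavor consumes host cores one for one.

```python
def vms_per_host(host_cores, vm_vcpus, cpu_overcommit=1.0):
    """How many identical VMs fit on one host, given a CPU
    overcommit ratio (1.0 means pinned/dedicated CPUs, no sharing)."""
    return int(host_cores * cpu_overcommit) // vm_vcpus

host = 48  # hypothetical dual-socket, 24-cores-per-socket compute host
print(vms_per_host(host, 24))        # 2: pinned 24-vCPU appliances
print(vms_per_host(host, 24, 4.0))   # 8: same VM with a 4:1 overcommit
print(vms_per_host(host, 8))         # 6: pinned, cloud-sized 8-vCPU VMs
```

The point the panelists are making falls out of the numbers: an appliance-sized, pinned request strands most of a host, while smaller cloud-aware instances pack far more densely even without overcommit.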
Well, shouldn't I just get the hardware version and tie it into my environment? These kinds of things are floating around, causing some stress for us.

Yeah. At Red Hat we spend a lot of time, money, and effort certifying an ecosystem that supports a lot of our software platforms. What are you doing in that space? Are you thinking about certifying to your specific environment, or are you looking more at a partnership with companies like Red Hat and others who would do the certification process at the VNF level for you?

So, as a telco, we have very good relationships with a number of vendors who have traditionally supported legacy systems, and it may or may not be a natural fit to use that same vendor for a particular function. That's causing some stress internally: there may be some political and siloed kinds of ownership around a particular function, and what happens with that vendor or partner when we want to provide the same function as a VNF, and it may be a different vendor? So there's some strife and some difficulty there. The other part, which is a rat hole I can't speak to because it's not my area of expertise, is the funding model. It's just common sense that if the funding model is the idea that we can take a network function, virtualize it, and achieve some small modicum of cost avoidance, that's not going to fly. Doing something that achieves a 20% or 50% saving isn't going to do it; it needs to provide a 10x kind of saving, and I don't know if that's there. But the point is not to approach it from a cost-avoidance model; it's an enabling function that's going to allow us to do other things we couldn't do if that function remained in hardware. That's a little more difficult to fund: going to the people who fund me and saying, listen, if we do this we can do a bunch of other things we're not doing now, is a little difficult to grasp. So with the partner relationships with Red Hat and other vendors, I'm really expecting them to help me articulate where the value is coming from.

I think the answer is simply yes. Today we need the assistance of partners like Red Hat; we need the assistance of partners like Dell. We need those people out there who have the expertise, the resources, and the knowledge base to dig into that integration and interoperability environment and ensure that what we're deploying is stable. Can we do that internally at some point? Hopefully, but today we just aren't staffed with the knowledge base to do it all in-house, so those partnerships are valuable to us, and we're very heavily reliant on them today.

Great, thank you. So I'd like to hand it over to the audience. Does anyone have any specific questions? Put a hand up; I can't see because of the lights, but I'm going to walk around with a mic a little.

Sure, I'll kick this off. I find it interesting that you commented that when you virtualize a network function, the hardware requirements remain almost constant. That's an example; I don't think it's the rule, but it has occurred, yeah. So to me it's not terribly unnatural that that takes place, because you're virtualizing an existing function. Take EPC functionality: have you thought about changing the architecture of the EPC to actually take more advantage of the virtualized architecture?

All right, I'm going to take a stab, and by the way, I should probably state that this is my humble opinion and does not reflect my employer. It's obvious that if the owner of a hardware appliance has spent a number of years squeezing every last bit of packets per second out of that hardware-based platform, and it's only achieving some 80% efficiency, say the throughput comes in at 100 and comes out at 80, then if you put that on an x86 platform, with who knows what kind of network connectivity and who knows what kind of compute architecture beneath it, you're not going to get 80. There's just no way; maybe you get 40, maybe you get 30. But the point isn't to get a one-to-one kind of virtualization. This isn't server virtualization, where you take a hardware server, put it onto some cloud-based or virtualized base, and can get one-to-one. The idea with the network is that I take this 40% or 30% or 20% efficiency VM and place it in a distributed fashion, and then I can achieve that. Rather than having some aggregation of monolithic network hardware where the traffic is aggregated to an edge point, I'm able to distribute that network architecture around. That's what I'm looking for.

I guess maybe I can be much more specific in my question, because what you say is absolutely true: I don't expect that if you virtualize the existing EPC you will gain any kind of performance advantage; as a matter of fact, I think it would be extremely difficult. But on the other hand, as I've been saying for a number of years, we're living in a period where we actually have the capability of redesigning the EPC because of the 5G framework. So my question to you is: have you thought about how you would like that architecture redesigned? That part is specifically being discussed in 3GPP SA right now.

That's exactly right.
The answer is yes, but we'll have Michael sign an NDA, I suppose. So, our next question please, at the back there.

This is a question for Stuart specifically. You mentioned having to use SR-IOV for high-performance VNFs. Since that bypasses the hypervisor, how are you handling multi-tenancy at the first-hop switch, at the top of rack? Are you using VLANs to do that, and if so, are you concerned about scale in that environment?

It's a good question. So, VLANs: we're passing through a flat network and tagging within the instance. As far as scale, in our SR-IOV environments today there is zero oversubscription; it's one to one, so we're not stacking multiple tenants on the same host.

Do you have any plans going forward to provide some kind of tenant isolation without using VLANs, or something else you're looking at?

Yeah, there are a couple of different things we're looking at. Today it's SR-IOV; going forward we're looking at some DPDK, so we're looking at changing that technology around a little to keep the performance level up. These are exactly the things we're working out in our non-production environments today. With SR-IOV on virtual instances on our commodity compute hosts, on, say, a 10 gig connection, we're getting 9.8 line rate, 9.8 gigabits per second, coming off these virtual instances. But it's very locked down. Turning on SR-IOV and allowing it to function in your OpenStack environment is not something you should take lightly; it does require some work, and there are modifications and configuration changes to make. We started writing down all the steps, and it's a number of steps to make sure things are right and perform as they should. If you do it right, it will perform, but you give up a little flexibility with that.
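As an aside on the 9.8 Gbps figure quoted above: that is effectively line rate once standard Ethernet framing overhead is accounted for. A quick sketch of the arithmetic, using generic Ethernet framing constants rather than Verizon's measurement methodology:

```python
def ethernet_throughput_gbps(link_gbps, payload_bytes):
    """Maximum payload throughput on an Ethernet link, accounting for
    the per-frame header + FCS (18 B), preamble (8 B), and the
    inter-frame gap (12 B) that also consume wire time."""
    wire_bytes = payload_bytes + 18 + 8 + 12
    return link_gbps * payload_bytes / wire_bytes

# With standard 1500-byte payloads, a 10 GbE link tops out near 9.75 Gbps
# of payload, so 9.8 Gbps measured at the frame level is essentially line rate.
print(round(ethernet_throughput_gbps(10, 1500), 2))  # 9.75
```

Larger (jumbo) frames push the ceiling higher, which is one reason measured numbers vary slightly with packet size.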
You're not going to migrate VMs around; you've got CPU pinning enabled. It's possible, but hopefully in the future, with DPDK improvements to Open vSwitch, we can move away from that restrictive technology. I'm not saying it's a bad thing, but it is restrictive today when you utilize SR-IOV.

Right, thank you. Yes?

Hi. My question is how you deal with the notion of insertion, if you need to chain certain functions, and with the notion of dynamic networking. And my other question: a physical appliance assures quality of service, but in the case of a hypervisor you're obviously sharing resources, so how do you deal with the level of service for virtual workloads?

Michael? These questions are brutal; he's going to kill me later, I can feel it. Again, these are my opinions. The service-chaining aspect is something we are looking at critically. Multi-tenancy and the security aspects of providing customer networks are among the rationales for moving to NFV in the first place, as we begin to move toward customers who require a level of security and isolation that is unparalleled. I'm talking about health care; I'm talking about things such as providing IoT to automobile manufacturers, who are going to want their own networks and their own levels of security and isolation, around which we would previously have just built hardware service chains. Obviously that is going to be prohibitively expensive, and perhaps impossible. So how are we going to do it in NFV? I can't speak to TELUS's strategies there, but know that these questions are very top of mind, and if we can find vendors who will help us approach that, and network technologists providing some of the underlying network infrastructure who can approach those kinds of questions, please come talk to me and help me out.

Anything you want to add?

It is a tough question. Orchestration is a big part of this, so we are absolutely working with orchestration tools, with several different partners, to assist with the automation of the infrastructure. What was the other part of the question? About insertion, VNF insertion into a service chain. Right, yes: definitely using an orchestration tool to handle and manage that work for us.

I think we have time for one more question.

Okay, thank you. My question is related to the platform side. The community has recently learned that OpenStack alone is probably not enough to handle the NFV kind of application, so it's moving toward building an ecosystem with other platforms like OpenDaylight and OPNFV. When you're virtualizing these network functions, do you think OpenStack as a platform is enough, or are you also using other platforms like OPNFV or OpenDaylight alongside OpenStack, and what value do you get out of that?

I can tell you that, from a Verizon perspective, we are working with a company called Big Switch, if you haven't heard of them; we're working very closely with them, and there were some announcements coming out this week about the work we've been doing together. Actually, it was today, so I can openly talk about it. Big Switch has done a lot of work with Red Hat, and they have provided an integration module. I certainly don't have time to go into it in detail, but the integration between OpenStack, Open vSwitch, and Big Switch is something we're very excited about. It's handling a lot of the tenant network creation and getting those segment IDs published up into the Layer 2 and Layer 3 environment. It's a new way to look at VXLAN; it's a way to do some things with VXLAN that aren't being done today, giving us a lot more flexibility. I can't go into too many details about what that is, but there are some additional benefits there. I haven't done much work with OpenDaylight myself; that's not saying much, as Verizon is a big company, so while in our lab we're not doing much with OpenDaylight today, there could certainly be people looking at it within the company.

Michael, do you have anything? Yes, we're evaluating and looking at all of those; we're in that process.

Okay, well, I want to draw the session to a close, because we're just short of time. I'd like to thank everybody, especially those who stood around the room for almost the complete session, thank everyone for coming today, and thank my esteemed guests. Thank you.