Yeah, good morning everybody. Appreciate everyone showing up for this session. We're going to talk today, at least from the AT&T perspective, about workloads: where they came from, what we're seeing today in terms of change, and what we're planning for in the future. My name is Toby Ford. I'm an architect within AT&T's transformation effort called Domain 2.0, working on NFV and SDN. I'm also a board member of OpenStack and of OPNFV, and I'm on an advisory group of OpenDaylight.

Hey, everyone. Morning. Welcome. My name is Amit Tank. I lead the architecture efforts for cloud and SDN for the AT&T Entertainment Group, also focusing on open source and containerization. Thanks again for joining. Let's get started.

So, where did we come from? I'll give you a little story to start. It was about 1996 when I got a job as a consultant, as a Unix system administrator, to go work on a system hosting a little ISP in Eindhoven. They asked me to fix this server that was having problems. So I logged in and started to get to work, and I was looking at what they were doing and the kind of problem they were having. On this box, a single x86 server running FreeBSD, they were running 2,000 web servers: distinct NCSA HTTPd processes, each one representing another customer. So, 2,000 customers on an x86 box of 1996. I thought, wow, that's really pressing it. What happens if one customer gets loud or gets popular, and what's the impact on the others? And that was, in essence, what the problem was. But it gives us the beginning, at least in my view of the story, of many of the workloads that we're moving to the cloud.

So take the simplest form of that. There's a web server. It's serving up static HTML and a few GIFs and such, and it advertises a company's first presence on the web. Over time, one server wasn't enough: I needed to split the load, so load balancers appeared. Or I needed more dynamic content, so I had databases. From the internet era into the cloud era, that was kind of the standard: a LAMP stack. And as it progressed, more and more things happened around it, so the workload expanded to include other aspects. A fully loaded e-commerce website of the early 2000s had many, many web servers, an application layer, and databases. And then for it to be at all scalable across a lot of users, you ended up having to use some type of caching system, an Akamai of some kind. You had caching nodes out there, and I think of the caching nodes as an integral part of that whole story.

So when you actually look at what was happening with this footprint, in many cases the workload itself was processes, their configuration, and some kind of state. State that sometimes was persisted to disk, and sometimes was transitory in memory. But when you look at the root level of an Oracle database or a MySQL database or some kind of J2EE platform or web server, these were just processes with config and state. Now, back then these things were running on monolithic servers, just single boxes running these processes. Over time, we realized that the way this hosting company was able to do 2,000 servers is that most people are delusional about how popular their website is or how much workload it takes. So in the end, we started to move things onto virtual machines and bin-pack workloads and such.
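(As a rough sketch of that "processes, config, and state" framing, and not anything from the talk: one way to picture the 1996-style shared host is that every tenant is just another process with its own config file and its own state directory. The paths and the `httpd` command here are made-up placeholders for illustration.)

```python
from dataclasses import dataclass
import subprocess

@dataclass
class Workload:
    """A workload reduced to its essentials: processes, config, and state."""
    command: list[str]   # the process to run
    config_path: str     # its configuration
    state_dir: str       # where its state lives (persisted or scratch)

def launch(w: Workload) -> subprocess.Popen:
    # Each customer on the 1996 box was effectively one more of these.
    return subprocess.Popen(w.command + ["-f", w.config_path], cwd=w.state_dir)

customers = [
    Workload(["httpd"], f"/etc/httpd/customer{i}.conf", f"/var/www/customer{i}")
    for i in range(2000)
]
print(len(customers), "workloads defined")
# procs = [launch(w) for w in customers]  # 2,000 processes sharing one box
```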
And then this hosting model has evolved lately to things like containers and LXD. And the question is, in the future, is that going to look like unikernels as we get more into single-process containers and microservices? Is it going to be unikernels, or are we going to take another direction, a tangent into serverless land? But really, this move to containers hasn't changed the workload itself, even though there's a lot of talk about it being microservices and such. It may change the topology a bit, but in the end it's always processes, config, and some kind of state.

Very interesting. And I like how you laid it out in a very simple manner: really, what do we define as our workload? It's a combination of processes, configuration, and that state. So, moving on, if we extend that idea, a lot of models of consumption of your applications have evolved and come full circle. They started in a mode where a lot of your services were meant to run in their own environment, in kind of a singleton environment. As we went along, this is how our workload architecture looks today, and we're taking an example of something that is very, very pervasive, whether it's a media workload, what we call media function virtualization, or virtual network functions, VNFs, being deployed on a cloud-native platform. You're looking at an OpenStack control plane which has all of these services laid out, running as microservices. One interesting thing that has evolved, for people who have attended the past couple of summits, is the trend to treat your cloud infrastructure itself as CI/CD-capable infrastructure. In order to do that, you are better off deploying each of these services in some kind of container or VM, each on its own but running in conjunction with the rest of the services.

Now, once you have your OpenStack control plane up and running, the way our data planes are typically structured, they are essentially massive stream-computing farms. Going back to the processes we talked about earlier: any time that kind of processing has to be done, say for a firewall-as-a-service product or an IPS/IDS-as-a-service VNF, there is 10-gigabit-plus traffic incoming, and your function could be spread across servers which are being chained. The threat database could be fetched from the cloud, and once those threat signatures are fetched, what these servers are really doing is simply peeking into every single packet, detecting whether something has triggered those alarms or not, and then moving all the traffic to egress. So this is a workload that characterizes a video workload, or it could characterize any kind of VNF.

Let me add to this slide a bit. The concept is that when you drill down into the workloads, even if you look at an IPS/IDS system or a web server: a web server is listening, listening for HTTP GETs and POSTs and such, and then responding with some content or some response code. This sort of listening and then responding with information, or listening, going out and getting information, and then responding: this pattern is very similar whether it's a web server; a database server taking in requests for SQL and giving out data; or threat response, where I see a packet, make a search against a signature database, and then respond by changing the packet, denying the packet, rerouting the packet, or adding some type of information about where to point it.
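(A hypothetical sketch of that shared "listen, consult some state, respond" pattern, written the way an IPS/IDS-style VNF might handle a packet; the signatures and the names are made up for illustration. A web server or a database is the same loop with requests instead of packets.)

```python
# Minimal sketch of the "listen -> look up -> respond" pattern described above.
# SIGNATURES stands in for the threat database fetched from the cloud.
SIGNATURES = {b"DROP TABLE", b"/etc/passwd"}   # illustrative threat signatures

def handle(packet: bytes) -> tuple[str, bytes]:
    """Inspect one unit of work and decide what to do with it.

    The same shape fits a web server (request in, content out), a database
    (SQL in, rows out), or a firewall (packet in, forward/drop decision out).
    """
    if any(sig in packet for sig in SIGNATURES):
        return "drop", b""           # alarm triggered: deny
    return "forward", packet         # otherwise pass it on to egress

print(handle(b"GET /index.html"))    # ('forward', b'GET /index.html')
print(handle(b"GET /etc/passwd"))    # ('drop', b'')
```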
This general pattern overlays on the processes, and it will become important in our story later.

Yes. So that's one dimension of it: the topology of the workloads, how they're changing, and what they're doing. But another aspect is how we lay them out. Start with just a single web server, and this is very similar to other things we've done, like SIP gateways, voice platforms, or elements of our mobility platform. If we just put these elements into a single building, well, as a telco we're very proud of our central offices, and we push as hard as possible to make those central offices have five nines of availability, but it's very difficult to get them to six nines. It's almost impossible without a lot of extra cost: adding 16-foot concrete walls and a lot of extra overhead for heating, cooling, and powering the facility, and then adding not two links but three or four links to make it possible to get beyond that. Very expensive and not likely. So you have this building that at most can be five nines with a web server or some other function in it, best case. Sure, there are times when I can spread it out within the building and have multiple web servers, and that can get close to four or five nines of availability, but the reality is that if I have a service that needs six nines, I need two sites. The workload has to be in two sites to get to six nines of availability, and that's a typical SLA in our environment. In fact, with two setups of five nines, I could actually have had ten nines of availability. And the beauty is that if I only need six, I can reduce the reliability of the facilities down to three nines. This presents a possibility that we and others have taken advantage of, where you can make perfect services, or things that appear perfect at six-plus nines, on less reliable facilities or less reliable underlying resources. This is important for the workload because you have to consider its geographic distribution to meet the expectations.

Now, that's just with regard to reliability. Then we get into durability: keeping data secure and not losing it. For the longest time we've used tapes. We put our data onto a tape, we put it into a truck from Iron Mountain, and Iron Mountain takes it off to a building that has a 16-foot wall, so great. This tape has pretty good durability, nine or ten nines, but it's still at enormous risk from flooding or some kind of issue with the truck running into something, blah, blah, blah. So one of the realizations Amazon had long ago is that if you want really durable data, you have to have it in multiple sites, and really, if you want to get beyond tapes, you have to put it into three centers. And this brings in another dimension of the workloads: understanding the latency between aspects of that workload. That becomes important, and one of the really integral theorems around the cloud, the CAP theorem, is about exactly this: how far apart can the data be and still stay consistent when you make changes to it?
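(To put rough numbers on the nines arithmetic above: this is just the standard independence calculation, not a figure from the talk. If the two sites fail independently, the combined unavailability is the product of the individual unavailabilities.)

```latex
A_{\text{combined}} = 1 - (1 - A_1)(1 - A_2)
% two five-nines sites:  1 - (10^{-5})^2 = 1 - 10^{-10}  (ten nines)
% two three-nines sites: 1 - (10^{-3})^2 = 1 - 10^{-6}   (six nines)
```

The second line is the point about building six-nines services on top of three-nines facilities.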
That latency question is important because you have to have the actual redundancy close enough, in terms of latency, to keep the data consistent, but not so close that one nuclear attack, one flood, or sitting on the same power grid causes it all to fail. So this is an important part of the topology of the application, and it's evolved over time as the nines have increased.

Alongside that, when it comes to latency and speed-of-light limitations, we've also realized that the workload has to be closer to the user. As I was saying, a typical workload of the early 2000s had to have Akamai to make it truly available and make the user experience positive. So you started to see caching nodes extend closer and closer to the user. One of the dynamics we'd like to point out is that at some point the caching of a workload, this sort of geographic distribution, has to start to look as close to you as possible. So ideally, if I'm watching Man of Steel or some other movie, it's actually here instead of in Los Angeles. We're seeing the caching show up closer and closer to the user, even perhaps on the device itself, on your DVR, or on a residential head end of our video centers. So the workload, the way I look at it, is expanding out toward the access or edge part of our networks.

Exactly, I think the point about proximity is right on.

Yeah, and then part of the story for us is, when you look at the edge and what's happening there, there are a lot of functions for an operator like ourselves. There are the routers and the access, the vOLT taking the fiber out to the home; all of that is one part of our infrastructure. But then, as we've acquired DirecTV, we have other things: satellite, residential gateways, DVRs, and as IoT happens, these things are everywhere. And one of the issues we have in OpenStack today is the view that OpenStack can only really manage the virtual machines and processes, the configs, the state, that are near it. Nova was designed to assume that the servers it administers are all really nearby. But does it make sense to put OpenStack in every RG or every DVR? Probably not. So we have an option, and we feel it's essential, to start thinking about maybe having Nova talk to things with more latency between it and what it manages, so that we're able to take advantage of unused parts of a DVR or aspects of the access network.

Very appropriate. And I see a lot of potential for something like OpenStack to be used in what we call our RG evolution. I would totally agree with Toby's assessment that you have the ability to leverage certain edge locations, and OpenStack can do a very good job serving those edge use cases where you have devices that absolutely depend on proximity.

So in this diagram we're showing something we're working on together right now: a concept where what actually runs on the RG is a container stack. And then there's the question of how we make it work, and one of my beliefs is that the workload is very much impacted by the developer process it takes to evolve that workload. As we've seen, starting from the web server example before: I worked on my HTML, I FTP'd it up to the site, and it showed up.
Over time that evolved to things like Capistrano and other mechanisms like that. What I thought was a beautiful example of integration into a developer's workflow is Heroku, where instead of just using version control as part of my process, I inserted Heroku and used it not only as the version control but also as the way I deploy new content. So that workflow becomes part of the developer workflow, which becomes part of the application topology. And in this example we have with the RG, we want to enable the developer to work on the RG and all of its functions on a laptop, then be able to take it and test it and emulate it in the cloud, verify that it works, get all the regression testing and all the cool agile, test-driven parts added to it, and then eventually push it out to embedded hardware. So this flow is another part of the workloads and how they're changing over time. It's a cool convergence of agile methods with what the cloud is able to do, and then, hopefully, what we're suggesting is making the cloud able to extend to devices out at the edge.

Very cool. And I totally agree that this has the potential to create a lot of value for our developers as well as for our customers, because you can decouple things and move your software innovation cycle at a much faster rate without having to depend on the hardware cycle.

So, moving on. This is a very interesting layout of some of our workloads and how they're transforming. Toby, you want to take this one? No, you can. Okay. So, many of the workloads we've talked about today were very, very monolithic apps when they originated. Video apps: very monolithic. Say you have an authentication app, or a mobility infrastructure app, or some kind of video encoder. Typically, as these VMs evolved over a period of time, they had lots and lots of dependencies packed tightly into one monolithic blob, which, having so many dependencies, would create bottlenecks for a person trying to upgrade, downgrade, or troubleshoot it once it's in a live environment. And at AT&T there is a big emphasis on empowering our engineers, our QA people, our architects, our designers, to unshackle ourselves from the mindset that software has to only look like this, or that a particular function can only be done by appliances. So what we did was leverage OpenStack; we moved to common off-the-shelf hardware, which could very well be any kind of Open Compute device, and we started treating everything as a very generic set of compute capacity: VMs coming from AIC or any other cloud. On top of that, we built a tenant-level CI/CD framework that leverages Docker containers to deliver your applications. The Docker containers themselves need to be orchestrated, so whether it's a video application or a VNF, orchestrators like Kubernetes play a vital role there. Someday, when the OpenStack control plane itself is containerized, you could find yourself in a situation where Kubernetes is actually also orchestrating your OpenStack infrastructure. But overall, we see a lot of value in going with this model, because it is quite extensible and has a lot of potential to build upon.

So I'm going to give you another story, from the early '90s.
There was an episode where there was a big hacking attack up against the network, and there was this fellow at AT&T, his name was Bill Cheswick, and one of the ways he dealt with the attack was to create a honeypot. He did this using a facility in Unix at the time, chroot. And in that process, this is how he came up with, or helped participate in, the creation of some of the first firewalls. But essentially that first firewall was a chroot jail and a packet filter. This evolved over time into a lot of other things, but at the very beginning it was essentially a container.

What I think is cool is that it's kind of come full circle. There was a period of time when you would take a Unix box, put iptables, or the FreeBSD equivalent, pf, onto the system, work out what it takes to essentially make a firewall, and then strap a label on it and put some fancy GUI website on it that configures it. You put the label on it and say, this is a firewall. So it becomes a thing, an appliance. Then in the mid-2000s we took that appliance and made it virtual: great, we put it into a VM. And now we're taking it to the next step and making firewall as a service, so you can easily spin up more of these firewall VMs, really just configuring iptables to manage packets over here. Or even, in our case, taking that firewall and making it a part of the SDN.

If you look at this flow, what we're coming back to is that the topology of a firewall, of that particular VNF, is just a process with config and state that takes in information, looks at databases for what to do with it, maybe drops packets, and then stops them or sends them on. The same pattern holds whether it's firewalls, load balancers, session border gateways within a voice system in our environment, P-gateways, or any of the things we're working on; they share the same pattern.

The interesting part of this dynamic, as I was saying, and this is actually well described in the Andromeda paper from Google, is that we started to wake up to this idea: hey, I have all these firewalls and load balancers and IPS devices and session border gateways, whatever, and if you look at what they do, they open up a packet, do stuff, close the packet, and send it on. And they open up a packet, do stuff, close the packet, send it on. Open up a packet, do stuff, send it on. That's a lot of overhead in opening and closing packets, and it limits the amount of throughput you can get. What we've seen is a consolidation: maybe we can do all of that in one place, and even better, can we do it in a way that's very programmatic? That's where the concept of service function chaining comes from, and then being able to do all of those functions inside the SDN itself. And this poses a question for an architect: what is the balance? Do we put it in the SDN? Do we put aspects of a session border gateway in the SDN? Or do we split it out and make it a separate thing that gets run as if it's a web server?

Right, a very interesting perspective on which design paradigm you take. Some of these design paradigms may work for certain use cases, while you may find other companies taking a different route because their use cases demand finer control.
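(A hypothetical sketch of that service-function-chaining idea, with invented function names: instead of a firewall appliance, an IDS appliance, and a NAT appliance each reopening the packet, the functions are composed as one chain over a packet that is opened once.)

```python
from typing import Callable, Optional

PacketFn = Callable[[bytes], Optional[bytes]]   # returns None to drop the packet

def firewall(pkt: bytes) -> Optional[bytes]:
    return None if pkt.startswith(b"BAD") else pkt

def ids(pkt: bytes) -> Optional[bytes]:
    return None if b"/etc/passwd" in pkt else pkt

def nat(pkt: bytes) -> Optional[bytes]:
    return pkt.replace(b"10.0.0.1", b"203.0.113.7")   # toy header rewrite

def chain(functions: list[PacketFn], pkt: bytes) -> Optional[bytes]:
    """Open the packet once and run every function over it in order."""
    for fn in functions:
        pkt = fn(pkt)
        if pkt is None:      # some function decided to drop it
            return None
    return pkt               # forward to egress

print(chain([firewall, ids, nat], b"GET / from 10.0.0.1"))
```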
So in summary, what appears very clear is that OpenStack obviously has so much capability built up over a period of time. Take something like Nova, for example. I spent the last four days attending different sessions and realized that Nova does try to do a lot for a lot of use cases. And at times, for use cases which require extremely high performance, things like we just saw earlier, media function virtualization or a virtual network function, a VNF, you may want Nova to be leaner, and you may want it to work with some kind of pluggable SDN controller. It dawned on me that I had educated myself about Gluon in Austin, but this summit was really when it became apparent that something like Gluon, which allows Neutron to extend itself to different backends depending on what kind of workload it's supposed to drive on a target cloud, becomes absolutely vital. Gluon becomes absolutely vital because now you could make it work with OpenContrail, or with PLUMgrid, or with Midokura. And your use case may differ even within the same cloud. If your use case is video-related, your time to first packet matters a lot, whereas if your use case is serving web traffic, it's probably not going to be that vital. So I would say one of my takeaways from this summit was the importance of some kind of framework that allows Neutron to extend itself with different kinds of SDN providers.

Yeah, and in addition to that, obviously I've been working very much on the Gluon project, and I appreciate him selling it for me that way. The other part of it, about Nova, is that I think it's time to think about how I either use Nova or do something else that extends to be able to take on the edge and access use cases. And as well, not only expand out to the edge and access part of our problem, but also deal more with the problem of which services should be centralized and which should be distributed, and find a way to manage many, many locations. Today at AT&T we're pushing 100 significant OpenStack deployments, and in the next round, especially with 5G, we're talking about expanding to a far larger number. Knowing the dynamics of 5G and densification, you have to be in a lot more places.

Right, very interesting.

So I think that's the main story, but I wanted to add a postscript while we're here. One of the dynamics, and I've kind of made the argument along the way, is that the workloads are changing and the way we're managing them is changing, which justifies applying the cloud to every area that we work on, because everything looks the same and I can use the same patterns to manage things everywhere. Now, one of the benefits of doing this, beyond what I'm about to talk about, is that, as we discussed, the customers get more functions faster, and we can move functions into the RG quickly. That's a cool part of it: the benefit to the customers, who get real-time capabilities, real-time access to resources, and real-time access to change. But there's another part of it that's essential.
For us as a provider, the cool part is that we're able to really look at our assets and use them. This is more than just, hey, I have a server, I virtualize it, and now I can run many tenants on it and oversubscribe it, like many of the enterprise VMware setups where they say, wow, it's 400% oversubscription. And I told them about the time I saw that box with pretty much 2000% oversubscription. But anyway, the concept of utilization is very important in our world, and we've taken advantage of it in the past with things like the two lines that connect your house to the phone network, and with the phone network itself. We created this thing called statistical multiplexing, which allowed us to oversubscribe copper and use more of that asset for more tenants. We're doing the same thing with spectrum; we're doing a lot of creative things to take a frequency and stack lots of traffic on it. We're taking fiber and getting a tremendous amount of usage out of it. And then we're looking at our facilities. As Moore's Law continues to press on us, a common commercial data center that used to fill a room is now all contained in a one-U server. When you look at a central office today, that's what's happened: this big a space is basically now able to fit into your mobile device.

Exactly.

So Moore's Law is part of this dynamic, not only in terms of creating denser possibilities, but also in making it so there's less utilization. And this part is really important. As I was saying in my earlier example, if you look at that 1996 server they were running 2,000 web servers on and go ahead 20 years, Moore's Law is basically two, four, and so on, up to 2,000 times more capacity available in that server. So if you think that one server can only run one web server, something's wrong. Right, because it can do a lot more than that. And that delusion, or that change in dynamics, is causing a lot of underutilization. And for us, we have to address that.

Very important point. So at this juncture in the presentation, it reminds me of a very interesting perspective. Now, before I go into that, let me give you guys a little bit of an opportunity to stretch. By a show of hands, how many of you took a train, a railroad train, to get to this venue? Oh, okay, at least there are several people here. Now, if I had asked this question back in Boston, there would be literally zero people who'd have taken one. So, I learned something from a very wise professor a long time ago. Back in its heyday in the early 20th century, the railroad in America was a very big thing. It was the most major enterprise around, and it had such a bullish and bright future. However, it went extinct. And why exactly did it go extinct? There was a very interesting study done. It turns out the railroad industry always believed it was in the business of running trains, running railroad cars. What they didn't realize was that that was not really the case: they were actually in the business of moving people. Had they realized that, they probably could have been better positioned to make the transition to the airline industry, to aviation.
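(A rough back-of-the-envelope check on the Moore's Law figure mentioned a little earlier, not a number from the talk: with capacity doubling roughly every 18 to 24 months, 20 years gives on the order of ten to thirteen doublings.)

```latex
2^{20/2} = 2^{10} \approx 1\,000\times
\qquad\text{up to}\qquad
2^{20/1.5} \approx 2^{13.3} \approx 10\,000\times
```

A factor on the order of 2,000 sits comfortably inside that range, which is the size of the underutilization gap being described.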
So that brings me back to electricity. The electric grid, for example, evolved over a period of time, and now you see a very dynamic function in your electric grid: if you have excess capacity in a certain part of your geography, that excess capacity can be used up somewhere else based on the usage pattern. So you can draw some very interesting parallels between the electric grid and the computing grid. And I'll let Toby drive that.

Yeah, and this comes from a book from about ten years ago that talked about how, a hundred years ago, the electrical companies formed. At first, when electricity started, the factories were building their own generation on site. Where I lived as a kid, there was still a remnant of this: a hydroelectric dam attached to the Alcoa reduction plant, the aluminum reduction plant. Over time, more centralized power showed up, and the way it was able to disintermediate localized power was by taking assets like generators and using them all the time. The story in The Big Switch is basically that during the day the manufacturers were using the power grid, and then at night, in Chicago, they applied it to the lighting. This allowed them to take non-coincident peaks of workloads, put them together, and get higher asset utilization. And The Big Switch story is essentially that this is what Google and Amazon have been doing. This was ten years ago, talking about how they take many workloads, take the non-coincident utilization of disparate workloads, pack them together, and get higher oversubscription; even more than just sharing a resource, it's bin-packing utilization.

You see this concept today, and it's one of the reasons Amazon is so successful, and something I believe OpenStack should probably enable at some point: the ability to auction off unused resources. Spot instances are genius because they take resources in the middle of the night that aren't doing anything and allow HPC or batch jobs or big-data jobs to happen then. That allows the infrastructure to be fully utilized at all times. Nowadays, a rack gets rolled into an Amazon data center and it's fully utilized almost instantaneously. For an enterprise, it could take maybe a year before it gets ramped up to full utilization, if ever, given Moore's Law. And Google lately announced preemptible instances. So this is an important concept they're recognizing: it drives down the cost of the infrastructure for customers, and it gets us to the same kind of centralized utility dynamics that showed up in the electric grid.

Very interesting, very interesting.

Let me go back before I go to this slide. This is a part of the workloads that has to be managed better, and one of the areas where I think we could use some more evolution, with regard to not just placement scheduling but lifecycle scheduling inside of OpenStack, is thinking about: okay, I've watched this workload over time, I've seen how it works, now how do I consolidate it and pack it together? I still don't think we've really meaningfully addressed building a competitor to DRS, Dynamic Resource Scheduling in VMware, something that could do the same kind of thing.

My last slide, just to close out: my father used to work at this facility in Niagara Falls; he was in charge of this dam.
And this dam is an extension of the story of using non-coincident peak utilization. This dam was designed to take advantage of an arbitrage between the cost of power in the middle of the night versus the day. In the middle of the night, the cost of power is lower, given that people aren't using it, and the power companies figured out long ago that they want to make it cheaper so people use things at night as much as possible. So they actually take advantage of it themselves by using a reservoir: they pump the water uphill in the middle of the night, store the energy in this lake, this reservoir, and then during the day they generate power from it as well as from the water coming over Niagara Falls. So not only do they have the manufacturers' peak utilization and people's usage watching TV, they're also able to take advantage of cheap power at night and increase revenue during the day.

Very cool, very interesting.

Another cool story, just to leave you with: kind of like Homer Simpson, in the control center of this building they had a panel with a big red button, and if you pressed that button you could turn off Niagara Falls. They did that so that if they saw people coming down the upstream part of the river, if they saw them coming in barrels, they could press the button; and they ended up building a concrete walkway on one side of the falls so that a policeman could walk out and save the person from themselves. Very interesting.

All right, thank you very much. I appreciate you joining us for the session. Hopefully you found some of the content that we discussed today meaningful and something you could derive insights from. If you have any questions, you can step up to the microphone, I believe.

Yeah, thank you for the very interesting presentation. You mentioned transforming the monolithic application workload to cloud-native, container-native applications. It seems that, in terms of telco applications, there has been a long history of developing monolithic, stiff applications, so I think it would be quite difficult to move to container-based or cloud-native applications. What do you think is the biggest challenge for the application developers to make that happen? And another question I have is that the transition will take some time, I believe, and there will be situations where cloud-native applications and monolithic legacy applications coexist on bare metal and on VMs, a hybrid environment. Would there be new challenges in those circumstances in terms of management in the cloud and SDN?

Sure. Before we get into the technical part of it, I think the first two things are ossified neural networks and then initiative: really changing your way of thinking about it, and then taking the initiative to actually go out and make an open source, whatever, P-gateway or MME or HSS, making an open source version of that. We've seen a lot of initiative taken for operating systems and compilers and web servers and browsers, but not a lot in that area. So those are two people dynamics that are in the way of progress in this regard.

I'm so glad you mentioned opening up the mindset. The question that was asked here was a two-part question. The first part was: you have these legacy applications; what kind of challenges do we normally encounter in turning those into cloud native?
I think the single biggest challenge that people and organizations encounter is in attempting to decompose existing legacy applications. Many of the horrible assumptions and the technical debt that were baked into your legacy applications basically arise from the dead, and people are not necessarily ready to adopt different ways of doing certain things, say, if certain applications are expecting a persistent database. Moving those components from using a persistent database to a purely NoSQL key-value store that is accessible through a REST API, where the persistence is achieved somewhere far away: those kinds of decomposition challenges are, I think, the major ones. In terms of time, I would say it's definitely not an overnight job, not even a one-month or one-quarter job. I think the success of your OpenStack endeavor is going to depend on how smoothly you can take some of your greenfield applications and build them around OpenStack, and then gradually turn your brownfield applications, one by one, into decomposable microservices.

Yeah, and one additional point at the technical level is the VNFs: many of them were built into hardware. If you take just a firewall or a switch router, many times the functions were offloaded in the past to hardware, to ASICs, to FPGAs, or to GPUs; it was really purpose-built hardware. So part of the tricky thing we're dealing with is finding the right balance of what happens in software and what may happen in hardware. We now have options available to us, and it's getting better all the time, where we're able to have more easily programmable hardware acceleration with FPGAs and GPUs. So this part of the paradigm shift is realizing, okay, I could take something that's custom-built and find general-purpose ways of solving it, and then find the right balance of what happens in hardware and in software. It very likely won't all be solved entirely in software; we just have to find that right balance. Exactly. Thank you. Sure, next question.

Hi, what do you see as the challenges coming from 5G with respect to the cloud?

Yeah, number one is that with 5G and any increase in bandwidth, the reality is that you can't use the cell towers we have now to deliver it; you can, but only close to the antenna. The thing is, as you increase bandwidth you have to increase power, and that limits how far you can transmit data. This causes what is known as densification, where we actually have to have a far more distributed set of antennas. That's why you see other players showing up: there's an enormous amount of opportunity for disintermediation in this space because of this concept. Now Wi-Fi providers or cable companies can get into this space, because we may end up having to have an antenna inside your house, or really near your house, or inside your buildings and such. So this is, I think, probably the biggest challenge. The rest of it we have down solid, how to actually make it work; the real problem is the issue we were getting at before: how do I push out antennas in a far more dense way? Thank you. Sure. You're welcome. Thank you very much. Thank you very much.