All right, let's get started. My name is Toby Ford, and I'm from AT&T. I'm responsible for the architecture of AIC, or AT&T Integrated Cloud.

Hi, everyone. My name is Amit Tank. I'm a lead cloud architect responsible for cloud architecture, containerization, and a few other things with the AT&T Entertainment Group. Thank you very much for joining us in this session. So this has been a very interesting week. Quickly, by a show of hands, how many of you here feel really excited about OpenStack? Nice. Very nice. I feel the same way, super excited about OpenStack. But before we translate our excitement into how it correlates with NFV, I want to take you to a different era. I want to take you to the 1960s. Now, I wasn't born then, but I've relived the stories I heard from my dad about what it was like. The president gave a vision: there should be a man on the moon by the end of the decade. He did not necessarily say exactly how we would go about it. He just gave a vision. And then an interesting thing happened. By the process of self-fulfilling prophecy, everybody rallied together. They worked together. Scientific minds came together. They pushed the envelope, and things happened. Going to the moon happened, but other interesting phenomena occurred too. As part of doing so, we got some amazing things that tremendously improved the quality of humanity's life. We cannot imagine our world without fiber optics. We cannot imagine our world without satellites, or Tang, for that matter, or Velcro. So this quest that yielded so many amazing things makes me think of the amazing quest that's underway around NFV, VNFs, and SDN when I come to these summits. So let's dive a little deeper into some of the things we should be excited about.

Sure. So by a show of hands, how many people have actually drunk Tang? Nice. I don't see it very often anymore, but I had a lot in the 70s. All right, so in many of the presentations I've done recently, I've talked about the basics of NFV and SDN and VNFs, and how we're migrating from old PNFs to VNFs and such, but I wanna make sure that we don't get stuck thinking solely about the cost benefit. Oh, we're all gonna get cheaper mobile coverage, we're gonna be able to deliver more bandwidth to the last mile. It's got to be more than that. I mean, clearly, if you talk about commodity hardware, open source software, virtualization, containerization, automation, configuration management, templatization, all these things that are happening sort of imply that we're gonna save cost. But at the same time, it's causing us to be very unit focused, not only on cost and on what I believe is gonna be commoditized, but it's also focusing us very much on the infrastructure and keeping us stuck there. And what I wanted to talk about today is, beyond those things, what is the actual value that we're gonna add? What are we gonna do that's unique and new with these technologies? Once we have the extensibility of software, what's coming? How can we make this more than something that's commoditized by Moore's law plateauing at seven nanometers and by ad-driven connectivity? What's gonna get us beyond that and make something really interesting from this technology?

Thank you, Toby, that was very insightful.
So you look at the big picture and you get to see the NFV framework, the ETSI vision that allows you to build with these blocks. But before we jump into this, I want to tell you a story to illustrate the message we're trying to convey here. The story is about Peter. I met Peter, who's a very bright engineer, at this summit. He's traveling here from Europe. He's a post-doc medical student. We got talking and I learned a little bit about what he does. Turns out he would love to use OpenStack to run his simulations to identify the impact of the molecular compound he's been working on. Now that got me thinking that there are so many stories similar to this. Essentially, for an average adopter or user of OpenStack, when they look at OpenStack, when they come to the summit, do they just see this complicated block? I made a transition into the network domain a long time ago. At that time, a lot of things were new to me and I was also getting introduced to OpenStack. And I had this same question: when I get started with OpenStack, am I still gonna be really confused and entangled in thinking about things like, what's a VM? Do I have to worry about capacity? Do I have to worry about memory? Or is this platform gonna empower me to rise above it and think about applications, think about experiences that we can build?

Exactly. So I thought Boris did a great job earlier this week of framing up this problem, okay? When you're confronted with an OpenStack login: what do I do? Can I have a password or a login, please? Oh no, you can't have that unless you give us some money. Or you can't do this because you don't have CapEx. You have all of these obstacles before you get there, and at the end, you give up and you just go to the public cloud. And often I think we get stuck thinking so much about the cabling and the VNFs and all these things that we forget what cloud is. The C in AIC is a really important part of the story. So I have an alternative login that I want to present. For me, what's behind that login? When I get past this login, what do I see? I'm a big fan of Legos; I wanna see composable building blocks that I can put together, so the context is taken away from what I'm doing as a developer and I can focus on the core of what I'm doing, whatever that is: the presentation layer, the branding, the unique new service. That's what I wanna focus on. And very ironically, I feel that when I first used Heroku in 2008, they gave me that, back then. And with all of our focus on making progress, we still haven't really gotten to that point yet. And the world has evolved since then. So what I urge us to do is imagine a place where, as a product manager inside of a telco, I wanna go and create something new. I can make a login, I can go into something, and I can truly engage with a service definition tool that allows me to compose new services and roll them out in agile timeframes, like in days. That's what I'd like to see us get to. And many of these things have already existed, whether it's Amazon, and that's the typical example. But there are other compelling examples of what you can do and what has been achieved. It's just, how do we make this happen faster?
So some of those examples overlap heavily with the telco space, and one of them is about policy. In the telco world, we think everything is unique, and especially our view about policy. Well, it isn't unique. It's something that has been very well thought through and solved over time in the IT realm. And I would argue that when my friends at Apcera made their project, they actually made manifest a policy engine that is the vision we've presented in our ECOMP white paper of what policy is for a telco. And they've made it integrated in a model that is very much like a Heroku PaaS. So I want that. Plus I want the container aspect of it as well. I want simple, highly optimized use of infrastructure in a container way. And I wanna be able to do that with VNFs. And I wanna be able to do it with the fancy Mark Shuttleworth kind of demo of assembling blobs on the screen and then seeing it happen, and then expand as necessary.

That's really, really interesting, Toby. Your narrative really connects to the audience that's looking to build experiences leveraging OpenStack. I think it's amazing that some of the things being driven because of NFV's lofty goals, people don't even realize it, but they end up getting all of this. So as an engineer, as a developer, or as an application designer or experience designer, I could definitely imagine that if we achieve this vision and this cloud really becomes that Nirvana cloud, imagine how much value it could unlock. You could look at A/B testing. You could essentially transition your company into a fail-fast model, which allows you to evolve much more rapidly and catch industry-wide transitions. It could essentially allow AT&T designers and technologists to craft unique experiences through the next century. Why should we limit ourselves? We are limited only by our imagination. You could have a personal VPN of things, where your car is connected with very secure MPLS connectivity to your refrigerator or your thermostat or any of your appliances, and effectively your car also tells those things how they should interact. Or imagine an experience where you have an NFL game, a regular game, and you're enjoying it by immersing yourself in virtual reality. So I'm really excited, as an OpenStack community member, about what is possible.

Yeah, so last night, Amit and I were up late trying to solve for this presentation, and it was quite funny, because we got to a point about one o'clock where we realized, by this slide, that we had sort of guided ourselves into a spot where we actually had to solve for the strategy of AT&T and come up with something meaningfully new. So one of the things I imagine is watching an NFL game and then being able to play the same plays: take a feed, or copy and paste the play you see being run, run it in Madden, and try it out all different ways, and then get to know the players' names and the stats and all that. I mean, Madden is one of those games; my son and I buy it at the beginning of the season just so I get to know what everybody's name is.

That's very cool.
Yeah, so in terms of taking this vision and making it real, I think our CSO, John Donovan, our strategy officer, has done an amazing job at doing something very much akin to what JFK did with the moonshot, which is basically to say, and sometimes these numbers are a little sketchy, but he comes out and says, hey, we're gonna be 50% open source by 2020, or we're gonna have X percentage of VNFs running. And at the beginning you say, oh, that's impossible, how is that gonna happen? And over time that evolves into something of a plan. Last year it ended up being a plan of 69 locations. We ended up being able to do 74. And at one point, when we were thinking about 20, we were like, how is that even possible? So this kind of casting a shadow and then filling it is what we're doing now. And what I would like to compel the community to do is to cast that larger shadow with the other telcos, so that we can work together to achieve something amazing. You know, when I was in middle school I read this book, The Jesus Incident, and in it, it describes this guy going into his living room and saying, hey, play me War Pigs by Black Sabbath, and it plays. And then, oh, play me Beethoven's Fifth Symphony, and it plays it. I couldn't imagine a world where that was possible. And today I drive around in a car where my kids are saying, hey, play Hello, or play Purple Rain, I wanna hear it now, what is that song? And it happens. So it's very much akin to that. So this is a lot of high-level talk, and I wanna get down to a very specific example.

Thank you, Toby. And speaking to that point, if we drill down into this transition that's underway, how do we push the performance boundary? How can we achieve 50 million packets per second? Take a regular 64-byte packet, which is the average size of a small packet on the internet; depending on the media or the kind of traffic, you may have about a 1K packet size. But if you wanted to drive 50 million of these per second through a Linux VM, how do you go about it?

Yeah, and it's an example of, okay, that's a very specific problem for telco. We have to make voice connections and keep them running, and this is part of the requirement to do that. So how are we gonna actually make that happen? Part of it is very much using the self-fulfilling prophecy factor. We say, okay, industry, we need to do this. And then sure enough, over time, things start to appear. We get help from a lot of our partners, the Intels, the IBMs, the Broadcoms, and many of the software providers in this room, working together to actually make this work for us. And then we work together to come up with a standard benchmark for how this works. And sure enough, where I didn't think it was possible, we're getting much closer to making it happen. You know, as Marco and I talked about yesterday in our presentation, one of the metaphysical, sort of architectural things we're struggling with is: to SR-IOV or not to SR-IOV. It's a very easy path to go right now, when you have x86 servers, to just give up, use SR-IOV, and then go on and continue to manage a complex underlay with different kinds of VXLAN overlays and all that. That's all very possible; that works. But to me, that's shortchanging us on the potential of what the software can provide and what I think of as the service chaining of the future.
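To make that 50-million-packets-per-second figure concrete, a quick back-of-the-envelope calculation shows why it's such a hard target for a pure software datapath. This is a minimal sketch; the 2.4 GHz clock and single-core framing are illustrative assumptions, not AT&T's actual benchmark setup.

```python
# Back-of-the-envelope cycle budget for a software datapath.
# Assumptions (illustrative only): one 2.4 GHz core, 64-byte packets.

CLOCK_HZ = 2.4e9      # assumed core clock
TARGET_PPS = 50e6     # 50 million packets per second
PACKET_BITS = 64 * 8  # 64-byte packet

cycles_per_packet = CLOCK_HZ / TARGET_PPS
print(f"Cycle budget per packet on one core: {cycles_per_packet:.0f}")
# -> 48 cycles, less than a single trip to main memory, which is why
#    kernel networking alone can't get there and why polling, core
#    pinning, and NIC offload all enter the picture.

line_rate_gbps = TARGET_PPS * PACKET_BITS / 1e9
print(f"Equivalent line rate at 64 bytes: {line_rate_gbps:.1f} Gbit/s")
# -> 25.6 Gbit/s of frame payload, before Ethernet framing overhead.
```

At roughly 48 cycles per packet per core, a single per-packet cache miss or context switch blows the budget, which is the tension behind the SR-IOV-versus-software debate in the discussion that follows.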
On the other side, you have the software option. And even though we've made vast improvements to the pure software option, and we've done a lot of great work across a number of groups in the community to make it better, it's still not quite 50 million packets per second. So we still have a gap between these two extremes. What I'm hoping over time is that we bring these together and find the right balance, where we can offload the appropriate amount of standard networking and packet processing and make that happen within, let's say, a SmartNIC or even a typical NIC. How can we do that? And at the same time, take advantage of many of the optimizations that have been appearing. Moore's law needs more things to do. You see that with a lot of the consolidation of new functions onto the processor chips. And that's why we're very supportive of trying to use a multitude of processors in this space, because you're seeing nowadays, especially with mobile devices and the ARMs and the OpenPOWERs, a lot of ecosystem activity. I went to the ARM partner meeting last year, and it was equally exciting what was going on in that space. So if we can harness some of that and bring that goodness into this realm, then we can, I think, go well beyond providing just 50 million packets per second. We have to be able to support, as Sorb was describing, 150,000% growth. I wanna be ahead of that curve. So as I was saying, this is one of those examples where, when we showed up in OpenStack, people were often saying, as my friend Jay Pipes always does, please tell us the use case, and don't tell us how to do it. We have to work harder on explaining what we want and why we want it, and then we can all work together to make better solutions. That's something we're clearly trying to do better in OPNFV, and I think this is an example of that flow: when we were able to specify what we needed with networking, we were actually able to start to see new concepts propagate into, say, Nova, around NUMA pinning or thread pinning and these types of things. So you see a lot more options in that realm. Before, you may not have needed them, and it may look like over-tweaking, prematurely optimizing things, but in the end it has been very helpful to us, and it can be helpful to a much broader set of use cases in the future.

Very interesting. A quick point on that, because I think that's really interesting for the typical community member. Through my career, I've seen enterprises provide these features to a select set of customers for an extremely high premium, just because traditional features like CPU pinning and NUMA awareness have never been accessible to the general public. And it's really beginning to paint that picture for me now. Now that you've described this, average users of OpenStack, like Peter, get to enjoy these features right out of the box, just because of NFV's focus on them.

Exactly. Yeah, so here are some of the other examples of things that we've put into blueprints or specs, that we're prosecuting and trying to make happen faster. You will see us in the near future putting a lot more emphasis on scheduling, and on being able to schedule a lot of combinations.
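To give a sense of what those NUMA and CPU pinning knobs look like from a user's seat, here is a minimal sketch built on the upstream Nova flavor extra specs (hw:cpu_policy, hw:cpu_thread_policy, hw:numa_nodes, hw:mem_page_size). The flavor name, sizes, and the "aic" cloud entry are hypothetical, and it assumes a recent openstacksdk that exposes create_flavor_extra_specs.

```python
# Minimal sketch: an NFV-style Nova flavor with dedicated, pinned CPUs.
# Hypothetical names and sizes; assumes a recent openstacksdk and a
# cloud entry called "aic" in clouds.yaml.
import openstack

conn = openstack.connect(cloud="aic")

flavor = conn.compute.create_flavor(
    name="vnf.pinned.8", ram=16384, vcpus=8, disk=40)

# Upstream Nova extra specs that the NFV push helped drive:
conn.compute.create_flavor_extra_specs(flavor, {
    "hw:cpu_policy": "dedicated",       # pin each vCPU to a host pCPU
    "hw:cpu_thread_policy": "isolate",  # keep SMT siblings off our cores
    "hw:numa_nodes": "1",               # confine the guest to one NUMA node
    "hw:mem_page_size": "large",        # back guest RAM with hugepages
})
```

Any VM booted from a flavor like this lands on dedicated cores within a single NUMA node, which is exactly the out-of-the-box benefit for users like Peter described above.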
VNFs like an EPC and the like require a lot of anti-affinity or affinity rules, to make sure that certain pieces and parts aren't on the same boxes and such. So you're gonna see a lot more work from us on scheduling, and on making it more lifecycle-based. Today it's very much placement-based, and we really wanna see it move in a direction where, over the lifecycle, it's managing workloads and where they go, and then helping us with our overarching cloud benefit. Again, why is the C so exciting? It's about really being able to get high asset utilization. So that's just one example. Another one that I'm working on with my friends at Ericsson and Apcera is around containers: not only making VNFs run, with meaningful examples of VNFs that run in containers, that's one thing, but also, at an infrastructure level, running the control plane on a Kubernetes setup or something like that, so that we can have a much smaller footprint and be able to service more of the edge use cases that we have. So these are just a few of the examples.

Very interesting. So Toby, you know what really excites me? I'm a big fan of being able to containerize and decompose services. As an architect, nothing makes me more excited than having teams of engineers able to build this experience. But until now, there has been a big concern: what about security? What about isolation when running containers? I'm now convinced that as VNFs adopt containers and start running on them, those problems are gonna be solved in a very elegant way. I'm gonna get extremely good isolation and security solutions just by the fact that VNFs are gonna run on containers. So that's really exciting. While thinking about this, we talked to so many different teams, so many different module owners, PTLs, developers, and the general message resonated well with a broad range of people and participants in the community. What excites us about this? Some of the key projects to watch have some really cool things coming. The best practices from the telco world are gonna come to the cloud world. Senlin is a very cool project that you should check out if you get a chance; it's essentially gonna help solve clustering use cases for OpenStack users. Migration capabilities are another area of really interesting development. And VNFs have tremendous HA and resiliency requirements; if it works for a VNF, it's definitely gonna work for, say, a grocery store or a national toy company leveraging OpenStack. A few other things that are really exciting: Mistral is really interesting. Astara, an SDN-related project, is coming together very nicely. Tacker, which is around VNF management and orchestration, is also coming along very nicely, and we have some really interesting things to look forward to. So anytime you see VNF, and anytime you feel like, how can I make an impact, or how can your team make an impact, these are some of the projects we highly recommend you consider contributing code, contributing your use cases, contributing your mindshare towards.

Yeah, and I will add three more examples on this slide that are of interest to me. Certainly policy, in Congress, and making that interwork: convincing everybody to open source their policy work, on our end and at other entities, and trying to make that all come together.
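As a concrete illustration of the affinity and anti-affinity scheduling mentioned above, here's a minimal sketch that spreads the replicas of a hypothetical EPC component across separate hosts using a Nova server group. The names are made up, it assumes openstacksdk with the same hypothetical "aic" cloud entry as before, and note that older compute APIs take a list of policies while microversion 2.64+ uses a single policy field.

```python
# Minimal sketch: anti-affinity for a hypothetical EPC component, so
# that no two replicas land on the same compute host.
import openstack

conn = openstack.connect(cloud="aic")

# Pre-2.64 compute API takes a list of policies; newer clouds use policy=.
group = conn.compute.create_server_group(
    name="epc-mme", policies=["anti-affinity"])

for i in range(3):
    conn.compute.create_server(
        name=f"mme-{i}",
        flavor_id=conn.compute.find_flavor("vnf.pinned.8").id,
        image_id=conn.compute.find_image("vnf-base").id,
        networks=[{"uuid": conn.network.find_network("mgmt").id}],
        # Assumed to pass through as os:scheduler_hints to the scheduler:
        scheduler_hints={"group": group.id},
    )
```

With that hint in place, losing one host by construction only takes out one replica, which is the placement half of the story; the lifecycle-based scheduling described above is where the remaining work is headed.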
In the same way with orchestration, I would really like to see us work together more closely on this problem. In the past, our problem has been that each VNF has its own orchestration, and we wanna do it one way. So we're following very closely what's happening with Mistral and with Murano and Tacker, and we look forward to having a way to solve for that. Now, something I urge everybody to be patient about is that each telco and each telco group is gonna show up here very soon with their own version of workflow and orchestration and policy and control. And then we're gonna find a way to work together and do a little less redundant work there. So in that realm, please do follow the Tacker work that we're up to in that space. And the last piece is Neutron. Clearly it's an area of great interest to us. In the past, when you came from the cloud and data center world, there was one way of looking at it, and then we have our own way of looking at it. Hopefully over the next period of time, we can do a much better job of really solving for this in a unified fashion.

Sorry, so just the last few slides so we can get to the questions. I just wanted to conclude with something more highfalutin and talk about our AT&T strategy. One of the things that's always fascinating to me is that everybody's always talking about, like, HBR or whatever, always saying, oh, look at how fast this technology was adopted by 100% of the people. Look at how fast Facebook was adopted by this many people. Look at how fast Amazon got to $10 billion or $1 billion or whatever. And the curves are getting shorter. But they fail to mention the other side of the picture, which is: what happens on the way down? What is the fastest thing to fail, to go from 100% to zero? One of those examples we have is, you know, this is the Bell logo from when I was born, and we were a big old monopoly, and we had complete control over the services. As a result, you had to pay a lot for two pairs of wires coming from your house to a central office, managed with a very arcane mechanism of colored wires and such. We were able to milk that for long enough, up to a certain point, and then that wasn't okay anymore. Back in the 90s, you would get phone calls all the time: okay, you want to switch to MCI? Sure, it's cheaper. Oh, you want to switch back to AT&T? Sure, it's cheaper. Back and forth. And over time, the price would go down and down and down. Then one day you woke up, Skype showed up, and long distance completely disappeared. All of the money-making ventures we used to have in the 60s and 70s are gone. And then, you know, you have FaceTime and all that, and there's very much a self-fulfilling prophecy today. I'm waiting, I'm sure Apple has it cooking, the FaceTime for my watch. So, you know, Dick Tracy and the self-fulfilling prophecy can come true. Now, the thing is, for us, the path forward is very clear: we sell bits and move bits around. Today that's done for a billion people, and tomorrow, not very far in the future, we know it's going to be done for six or seven billion people. And that path from one billion to seven billion, are we going to make that a short path, or are we going to let it linger? Some entities out there want to see that happen almost instantaneously.
And when that happens, you can imagine, with that much demand, and with the mechanisms that typically take us to six billion people, like ad-driven revenues, that's going to happen very fast, and those bits are going to look like corn. So we have to move faster. And to conclude, I'll give you a little story about this statue that we had on our t-shirt earlier this week. Originally it was a commissioned statue that was on the front cover of the Bell manual in 1914, and it was called the Genius of Telegraphy. Very quickly, once they built it and installed it at the AT&T headquarters in New York, they had to change the name. So they decided, let's do something like the Genius of Electricity instead. And then, since long distance and the phone became much more important to AT&T than our electricity work, it became the Spirit of Communications, and it's been that way since. So that's one story of transformation. Another story of transformation is how it has been located in different places over time; if you've seen the Stephen Colbert thing with all the different acquisitions, we're like the sum of 74 companies. There was a building in New York which was built entirely around this statue. And of course, that business is different now, and the statue is in a building in Dallas. What I'm suggesting is that maybe it's time for us to rename that statue the Spirit of Transformation, because we have to keep changing.

Very interesting. And I think that speaks volumes, because what we saw this week, Toby, was a traditional telco provider demonstrating as much agility as a software startup would, by being recognized as a Superuser.

Thanks, Amit. All right. Thank you very much for joining us. We'd love to have questions or thoughts. Before we get into it, I really wanna thank Amit and the DirecTV guys. It is really exciting having these folks join us. We've done a lot of good things so far, and I look forward to doing even more.

Likewise, Toby. Thank you. So, how do you and the rest of your team take us into the next century?

I've got a question. So you're looking now at the future. What would you say are your two takeaways from last year? How do you review your last year, what you've done?

In terms of OpenStack, or in terms of NFV, SDN, and those pieces? So, in the last year, I feel like we've done an amazing job at deploying a lot of OpenStack in a lot of places. It was a lot of hard work. But for all of that work, there's a directly proportional amount of technical debt that we've taken on. So things like when we ask for rolling upgrades and rollback and those kinds of phenomena, which are tricky problems to solve for OpenStack in a way that's very modular, that's an example of where we need help. So. Thank you.

Yeah, on this side. Hi, so it sounds like with bare metal, you can handle lots of packets per second, and we're kind of in the same boat of transitioning to virtualization, where performance seems to be a bit of an issue. And it seems that in virtualization, rather than bringing a chainsaw to cut down the tree, you throw a lot of termites at it and solve it that way. But it sounds like with the NUMA node affinity, the SR-IOV, you're making things sort of hardware aware. It seems like that's counter-cloud, where everything's virtual. Have you guys considered metal as a service, or were there any sort of...? Absolutely.
So, much of the work we've done to date is very VM oriented, and it's related to security and separation, reservation, that part of it. But we recognize that there's a continuum, and we have to pick the right points. In some places we may end up having to use bare metal, or use something in the middle, containers or something simpler, and then really use proper scheduling to make sure that we keep asset utilization high. So yeah, that's clearly one of the things we're having to balance.

So do you find SR-IOV and that sort of thing a stopgap on the way to going completely virtual, or is that a real investment?

We can do virtual on top of it, but it only gets you part way there. So yeah, in my view, it's a stopgap. I'm of the "to not SR-IOV" crowd, because I wanna see us bin-pack more, get more out of what we're doing. But it's a necessary thing right at this period of time. And as we were saying, there are options for us to go in different directions. And you also brought up a very important point with the termite thing. I do wish we would find a better analogy than pets versus cattle, 'cause I like my cattle. Not real cattle, but you know. Anyway, the other part is making the VNFs truly scale out. Routing was the original scale-out phenomenon. The internet itself is inherently a scale-out phenomenon. And so we need to take many of these aspects of the voice systems we use, and the 3GPP systems that are evolving to 5G and such, and make those things scale-out and cloud-native as well. Thank you.

A quick thing to add to that is the point about containers and bare metal as a service. That's an area of extreme interest to us, because we don't see any reason why we should build up capacity when we don't need it, versus seeing everything as a fabric where you just tap into the resources as you need them: your Kubernetes pool is full, let it spill over and basically spin up bare metal nodes and take care of itself.

Yeah, and one thing that the DirecTV guys have done, using more hybrid cloud options and using the public cloud as a spillover, has been very inspirational to us. So I think you'll see some of what we come out with having those attributes, because in the end, we're not religious about where it goes, as long as we have the right reservation and separation. So. Thank you.

So, on this side. Yeah, thank you for a very enlightening presentation. The question I had, and it was partly asked from the other side also: you said that we want to sort of reach SmartNICs as opposed to SR-IOV. What do you see as the fundamental limitations from an SR-IOV perspective? And I have one follow-up.

Sure. So to me, the fundamental problem with SR-IOV is that it essentially passes all traffic past the software, straight into hardware. So there's no opportunity to do software functions on the server that you're on, and you have to rely more on the top-of-rack switch, or the leaf or the spine of the underlay, to do that work. And generally, in our pathway to disaggregation, we want to simplify the underlay and allow for the more extensible part to be in the software. But like I was saying, there's a balance. Some of the packet processing could be done in FPGAs or other types of offloading options to make it easier. So it's more than the software part, too.
I mean, there's also the aspect of SR-IOV that is very reliant on one processor vendor. So it's another form of lock-in for us as well.

And quickly to add to that: as a software architect, I feel like going SR-IOV is us giving up, saying that we cannot innovate any further in software, which I don't think is the right approach. There are context switches, user-space context switches, and many different areas, polling versus interrupts, where there is tremendous room for us to innovate.

Yeah, so my follow-up kind of goes to your comment, more in a sort of soft-hardware way, meaning an FPGA in conjunction with, say, SR-IOV. Because what happens is that by using maybe more hardware, we can keep the same number of CPU cores and basically use the accelerators for the functions that, you know... That they do well. We can get a lot more, because after all, with Moore's law, things are not moving that fast processing-wise. So anyway.

Exactly, so that's very much what we're looking at. Acceleration, specialized acceleration with the softness of an FPGA, may be the answer. That's definitely an aspect we're looking at. Okay, thank you. Yeah, thank you.

Yeah, a few minutes ago, you mentioned the term technical debt, paying down the complexity in the system. I'd just be curious to hear more of your thoughts about OpenStack and helping to pay down that technical debt, if you will.

Sure. So beyond the example I was giving: okay, we have so many nodes, and we have so many different projects, and then being able to upgrade them, and, you know, we're not going to upgrade them all simultaneously underneath running workloads. Some of the workloads are going to be very sensitive to that. So that immediately causes us to have a schism, or out-of-sync aspects. A federated Keystone, for example, is very hard to manage, if I want to upgrade Keystone to v3 and then try to have everything sit underneath one Keystone. That's a simple example. In general, though, there's another example of technical debt we're getting into where I don't really have the answer, but I'm hopeful we can get there. If you look at Chef or Puppet, and how these evolved into Ansible and Salt: we knew we had built up enormous technical debt with a lot of complexity in our Chef recipes or Puppet manifests. It was hard to manage, hard for people to dive in and actually fix things and add to it at any scale. Now, Ansible and Salt are a bit simpler, but we're finding that as you build them out, they also get very complicated. And in a similar way with Heat: the last Heat template I saw was ridiculously big for a VNF. So it's a very similar problem. Are we putting too many layers of abstraction between us and actually solving the problem? That's why we also have other efforts going on, like our work with the folks at ON.Lab, experimenting with things like XOS and maybe a simpler way of providing this, like using Ansible around what we're doing. So those are two examples of the technical debt that we have. Yeah, great, thanks. Thanks.

So, thank you for a very interesting presentation. I'd like to go a bit deeper into the requirement of 50 million packets per second per virtual NIC. Yep. And to understand, since there are many technologies to try to reach this number, and they differ a lot in terms of CPU utilization and the number of CPUs they consume. Yeah. So. Yeah, absolutely.
So that's also very top of mind. I didn't have this picture in here, but we've definitely been talking about it a lot. If we use a vRouter or vSwitch, we know that if we pin 12 of 16 cores to that OVS or vRouter, that gives us a lot of performance, close to what we need. But then there's no room left to actually put a VM on the box. So finding the right balance, what number of cores to pin to make that work, is tricky. And in general, I'm going to use up a significant subset of the box to do it. So obviously, one of the things about the SmartNIC idea that's very helpful is offloading a lot of the things that needed those pinned CPUs, as was described in an earlier question, onto accelerators: compression, encryption, things that are standard packet processing mechanisms can be done in something more akin to a network chip on a NIC, and then fewer of the processors are used. So yeah, that's a very good point. Thank you.

I'm also with you on that, because it's simply not logical to use so much CPU for networking. Exactly.

All right, on this side. Hi. When I read the slide of 50 million packets per second processed by one VM, I immediately thought: why not use 10 VMs processing five million packets each? But this is what...

That was right, that is definitely what we were trying to imply.

That is of course too easy. So my question is, what are the challenges you encountered in trying to have a segment-driven architecture capable of efficiently managing these smaller workloads?

I think it comes down to one really important aspect of some of our workloads: maintaining a connection, keeping it running, keeping its state up, being like an elephant flow that stays up and running. And then, if I lose the thing I'm talking to and it has to go somewhere else, moving it over. That mechanism, there have been solutions to it, so it's not like we don't know how to solve it, but getting it implemented into the VNFs we're talking about, in the EPC space or in the SIP space, has been a challenge for us, as has making it something that the vendors can commit to us on, in terms of what the performance is. So yeah. Thank you.

Thank you, everybody. I appreciate it. Thank you so much for joining.