All right, well, we want to get started. We might do a little tag team here in the middle of the discussion with participants on the panel. And I apologize for those that can't see me on this side, but we'll try to lean in and out. But thank you for sticking around. We realize it's late in the afternoon, and I heard Mark Collier a little while ago declare it's time for beer. So that means we're all really dedicated to what we do. So thank you for joining us. Hopefully we can make it entertaining, and you can help us make it entertaining by asking questions. But we wanted to take time this afternoon to talk about the NFV and OpenStack combination from a user's perspective. There's been explosive growth over the past couple of years in this area. As we've seen in discussions and keynotes this week, there are lots of telecoms doing POCs and deploying OpenStack with NFV capabilities. And you've seen the names: Orange, DoCoMo, AT&T, China Mobile are among those. They're embracing NFV via open source, combining the two open source projects together. And if you were here earlier listening to Heather and Mark, they were discussing how the OPNFV and OpenStack projects are working together, and they gave you some examples of that. So we're innovating very rapidly. We're trying to... hey, that was a quick tag team. Now you see, now you don't. Thanks, Brandon. That was really nice of you to be able to participate. So we're working hard together to try to meet the requirements and accelerate the deployment of NFV. But we want to talk with these folks today about some of the emerging use cases, talk about the POCs and testing, how they're working with both OPNFV and OpenStack, and what they find unique about those. So with that, I'd like to introduce the panel. I'll introduce myself. My name is Alan Clark. I work in both projects, actually, and have a whole bunch of fun.
So I'm on the board of directors here for the OpenStack Foundation. So I've been to lots of meetings all week. I'm also on the board for OPNFV and learning the ropes there. So like I say, I get to have a lot of fun. But let me go down the line here and I'll have our panelists introduce themselves. And as you do so, I'd like to start with a real basic question. We've seen lots of surveys this week. We've seen lots of data that shows people are adopting OpenStack and they're adopting NFV, particularly with open source. But I'd just like to start with the basic question. From your perspective, why OpenStack? There are lots of alternatives out there. Why OpenStack? So Deng, will you go ahead and... All right. OK, so my name is Deng Hui. I work for China Mobile. My role in the company is mostly developing the strategy for both SDN and NFV. So back to the question: why OpenStack and why OPNFV? We started because we saw the promise of OpenStack for the IT industry, and our company sees its trends in our future vision. So we joined with other operators to promote the first telecom open source project, which is OPNFV. We see the value of telecom open source. That's the reason we work with OPNFV, and we also work upstream in OpenStack for our private cloud solution. And that's the reason we are here. We want to contribute, and we want to get help from OpenStack and OPNFV. All right, thank you. Ashik? Hi, this is Ashik Khan from NTT DoCoMo, based in Japan, and I'm responsible for NFV standardization and NFV open source development, and these days working on 5G core network design. I have a very simple answer for why OpenStack: it was the closest possible cloud management system that fulfilled our requirements for telco cloud infrastructure. Oh, very simple, succinct. Okay. Thomas? Hello, everyone. I'm Thomas Morin, I'm working for Orange.
I'm focusing on the infrastructure aspects of network virtualization components. And to answer your question about OpenStack: of course, there are really two questions, why open source, and why OpenStack? I've got a very specific example to explain why we chose OpenStack in the early days in the labs, because it was in the labs first, and I happened to be working in the Orange labs. We had specific use cases where we needed to interconnect the Orange network with server virtualization platforms. And we needed to both innovate and try to bring new solutions incrementally toward something deployable. And the traditional vendors of server virtualization solutions were not really that close to being strong on telco needs. So we wanted to be able to explore these topics by ourselves. So open source was kind of an obvious direction to go. And why OpenStack? Well, the dynamism of the community, already at the time back in 2011 to 2012, was the key motivation to look at OpenStack. And the ease of prototyping in the OpenStack context was, well, the confirmation that it was a good bet. Very good. And Toby? Yes. Just in time. Exactly. I was stuck in traffic. But yeah, OpenStack is something that we've been working with in production since about January of 2012. At that time, early on, we did a whole proof of concept comparing it with CloudStack and Eucalyptus and Nimbus and all of the different tools that were available then. And really, in the end, at that particular time, the thing that compelled us was that we felt it was a good foundation to build from. We could see it extending and growing from where it had started. Where the others had a very limited view of what they wanted to do, OpenStack had a very open mind about how to expand from the beginning. And over time, we've had to revalidate why. Even in the last three months, we've had to, again, answer the question: why are we using OpenStack?
And I think it comes down to, in the end, that it's really, I think, the replacement for what standards did before. It's created a way for people to make an API and have a vendor ecosystem that can be proprietary and open, all working within the same construct. And we've seen a storage provisioning API where none would have existed before, and there was no motivation to make one, and now there is one. And the same things are happening at really large scope in networking. Thanks, Toby. By the way, if you have questions, we'd love to make this interactive, because we could talk all day long. So if you stand by the mics, then I can see you; it's hard to see from the stage unless I lean like this. So while they're getting to the mics, I want to ask a follow-on question. Thank you. Let me do my question, then we'll grab you. So a follow-on to my first question. You guys have been at this for quite a while. You really are kind of pioneers in this space, right, in pulling in OpenStack. You don't have to sit back down, sorry. Oh, good. OK, we'll stay on this topic. So you guys really are kind of pioneers in this space, right? And as you said, Toby, it's one of those things where you keep remaking this decision every few months. So my question is: what are some of the benefits, and what are some of the drawbacks, some of the things you've run into as you've been heading down this path over the last couple of years? Ashik, you've got the mic. OK. The benefit is obvious. I think everyone knows that it's the availability of the whole ecosystem. OpenStack plays a part, and OpenStack plays a big part. There are other parts of the ecosystem where you have the orchestrator, you have the VNF manager. And as I mentioned before, OpenStack satisfies our requirements quite well.
So the availability of different open source solutions was one of the driving factors that enabled us to make the decision about which one we take into our commercial system. The bottlenecks I see at the moment are that there are a few gaps between, let's say, what is required from us, or what the standardization organizations define, and the present OpenStack implementation. But I don't see those as big bottlenecks. We are trying to fill those in through OPNFV and by other means, going directly to OpenStack. So I think we'll fill in those gaps sooner or later. OK, yeah, we're going to ask about those gaps here in a minute. But let's go on down. So the benefits: it depends if you ask me as an individual or as Orange. But definitely, the benefit for Orange as a company of starting early, of being a pioneer, is that, well, this is a huge transformation to make, so the earlier, the better. That's the simple answer. And the drawback you can find when you are a pioneer is that you start using solutions when they are early, so, by definition, when they are less mature. And it means that you have a significant amount of work to do internally to properly set the expectations that people can have, to sell the value of OpenStack but not oversell it when it doesn't yet have the required maturity. All this balance has to be preserved so that people understand what we can do with it, what we can't do, and what we will be able to do the next year, for instance. Interesting, very interesting. Toby? Sure, so one of the benefits that has been very surprising to me over the last period of time, especially as we've focused on NFV, is just really the telcos. There are some places where we compete with each other, but for the most part, it's not really overlapping. And so that has presented an opportunity to work together in a way that was much more difficult before.
I actually feel like when any kind of standards work or interfacing work had been done before, it required a lot of negotiation and translation and all this, and that's quite a difficult hurdle to get over. But in the case of code, it's actually kind of becoming a lingua franca, something that brings everybody together in a way that's easy to act on. So it's not only easy to understand, but it's easy to take action and see results. So one benefit is definitely getting help from the other telcos to make change happen and enforce it; it's a multi-trillion-dollar business, so it can have a lot of influence. So that part's been good. The drawback, I think, has been, as I kind of alluded to, having to constantly resell the concepts over and over and over again to various different communities. And it's quite tedious over time. Deng Hui, got comments on this? Right, I think they've almost covered everything; I'm the last person talking. I see the gap. I mean, maybe I look like the more negative person. I see the gap because we are used to the five nines of carrier grade, but what I'm seeing today is that the VNF application developers are not ready to cloudify their applications. Most of them are still, I mean, just virtualized, and the solution is not really carrier grade yet. I think today OpenStack is very good because everybody supports it. When we talk, every VNF vendor can support it, but we can still see that the infrastructure cannot give us carrier-grade capability. We have to rely on the VNF application developers, but they are hesitant to change because today they are totally in charge of, or in control of, the layer three, layer four solution. So I see this as a challenge for us. We cannot change OpenStack to become carrier grade today, but we try to help; we try to figure out which parts we are going to take.
We work with VNF application developers and with orchestrators to help them understand that they cannot necessarily rely on OpenStack to provide carrier-grade capability, and that we can still take the path of NFV to real commercial deployment. So that's what I see; that's the benefit, and also the limitation we have to overcome in the future. Yeah, very good. All right, hang on, I promised him first. Oh, okay, you are so gracious. All right, go ahead. Thank you, thank you. I think you talked about orchestration at the VNF layer for NFV, right? So what are the challenges you see when you're managing the workloads in a Day 2 environment, in a hybrid cloud environment? Right, so, you know, you're running the VNF and the workloads are all sitting in the VNF, and if you are migrating the workloads between the private cloud and the public cloud, for example, in a hybrid cloud environment, what are the challenges you see with the orchestration tools you're using from OpenStack? Good question. At present, I mean, I wanted to say it a bit later, but we did the commercial rollout of a multi-vendor virtualized EPC, for the first time in the world, last month on OpenStack. Nevertheless, OpenStack is the cloud management system for us; it's not doing the orchestration. The orchestration part comes from a vendor, and so does the OpenStack part as well. I cannot give you a direct answer because we do not have a hybrid cloud implementation at the moment; it's not a public-private cloud mix scenario. But the challenges, I know; we analyzed those. Maybe at some point, not in the distant future, it will be a hybrid cloud, or, you never know, maybe it will be all public cloud.
The challenges are: there are some security problems when you handle mobile subscriber data, so you can't really put it in the public cloud unless you secure it and you are 100% sure there won't be any data leakage. And also, when you look at the telco nodes, the mobile core nodes especially, they're very resource-hungry, heavy-duty, high-throughput nodes. So at the moment, looking at the public clouds from, let's say, a resource availability point of view, we are not very sure whether we want to put our telco nodes in the public cloud. But the challenges are what I have mentioned. So, are any of the other guys doing hybrid yet? Yeah, so one of the things we have is a security offering called network-based firewalls. And then we also have this thing we do called NetBond, which helps you connect your VPN to Amazon or IBM cloud or one of the others. So we've been doing a lot of work on this subject, and also in DirecTV we have aspects like this where we use multiple clouds. So it is something we are being mindful of: being able to orchestrate across a multitude of clouds. And that's one of the great things about trying to keep things interoperable and standard, to make that available. The issue, though, in orchestration, and we're constantly pressing on using and trying to use more of what's in OpenStack, with, let's say, Murano and Mistral and Tacker and Congress: those things are evolving, but they don't quite meet our needs yet. And so we have our own work in orchestration that we've done, and I think that's probably one of the next steps for us to work on together as an industry: around the vision of MANO that's in ETSI, and trying to really solve for that.
Yeah, so I'll be honest, I'm not a specialist in orchestration, but my perception is that one of the great challenges around orchestration, and that's not entirely specific to hybrid cloud use cases, is the diversity of orchestration scenarios that you have to cope with, the diversity of VNFs that you have to cope with, and the risk of proliferation, of having multiple orchestration tools to orchestrate. And that's something that we see: for now, different orchestrators with different capabilities, lots of variety in the types of VNFs to manage, and that's currently a big challenge. Want to add to that, or should we? Yeah, I think orchestration is a very big topic here, but the question mostly goes to the hybrid case. For the hybrid case, I think we are taking a different path; we call it the telecom integrated cloud, TIC. So in China Mobile's case, we build TICs, not hybrid cloud. We have different types of TICs: for the control plane, for the data plane, for the core network side, for the edge side. So we have this kind of mixed environment, but it's not hybrid. Orchestration I can talk about later, or whatever. Thank you. All right, thank you. Now that we've given you your exercise in standing today. Finally. So, I do business in the Asian area; maybe it's a regional issue, but it's not that easy to hire good OpenStack or open source engineers in that area. So I totally buy the benefits of NFV, but it's really hard to take advantage of open source there. So why not just buy a traditional vendor's NFV version of EPC or IMS, or just buy VMware? We'll pay a bit more on the CAPEX part, but, you know, the OPEX, or the difficulty of finding good open source OpenStack engineers, could be avoided. So as part of that question, as you answer it: is this your first endeavor with open source? Yes. You were so succinct. So actually, I think you asked a few questions in one shot.
Why not? I wouldn't really mention any particular vendor here. And you don't have to hire open source engineers; that's the beauty of open source. When I started to join open source, basically OPNFV, one and a half years ago, I didn't believe that. They kind of sold it as: it's an organic community, you'll see that you'll get people. I didn't trust it, I didn't believe it, but it happened. So in the community you have open source developers, and if you can convince them about your use case, like the gap I was talking about, they will be more than willing to help, and we got that help. We were very successful with one of the OPNFV requirements projects. So to answer the why: as for hiring open source engineers, in many cases you may not have to do that. For, let's say, EPC nodes, we see that high-availability telco nodes will still be supplied by vendors. That's why we always talk about, let's say, standards, or a de facto implementation like OpenStack, to ensure interoperability. So the vendors will be supplying the telco nodes, which need to talk to the orchestrator and the virtualized infrastructure manager, like OpenStack. So we are buying vendor products, and the MANO stack in our implementation also comes from a vendor, albeit one that uses open source solutions inside it. So we are not getting into, let's say, the vendors' territory that much, but we are trying to ensure an ecosystem where vendors, open source solutions, and telco operators can coexist and all can benefit, by ensuring interoperability. If I have answered all your questions. Yeah, what I would say would echo what you said, but one nuance I would bring is the fact that we really don't have a black or white choice between buying a vendor product and having a team of open source developers.
That's really the beauty of open source, as you said: we can explore the full range of options, including buying a vendor distribution of Linux and OpenStack, deploying OPNFV, or having a DevOps team running an OpenStack deployment plus an SDN controller, for instance. So we have the choice, and we are in an ecosystem where we have more mastery of the solution. We are closer to being able to choose, and to change our decision later. So that's the key reason to not go for proprietary. Nevertheless, in an actual deployment today, we are not at a point where we have everything open source. And that's fine; the fact that we are in an ecosystem where there's a reasonable amount of interoperability allows us to have a mix of open source and proprietary products. So yeah, this is my favorite topic lately, because we spend part of our time on the whole "why OpenStack" question. Part of the thing is, one group within AT&T says, well, we do VMware. We use X number of people for Y number of dollars, and it's less than what it takes you to do this over here right now. So I mean, it's a constant discussion that we have, to try to re-rationalize the actual dollars. And in that context, if you really look at it, when we're talking about vendors, there's support, and somebody's got to do the development, pay the ninjas, or whatever, to evolve it. If it's stable code, like I was saying earlier today, you know, in 1991 I used to buy a C compiler from Sun for $20,000. C compilers have evolved, but should I be paying the 18% maintenance on a C compiler today, after 25 years? That doesn't make any sense. If you look at all the different technologies across the board, as they evolve, the licensed part of it is something that needs to go away and become open. And it's not about cost in the end.
If it was just about cost for us, then our businesses would be commoditized and the bits would be commodities. It really has to be more about the value added and the generative aspects. And I'll tell you, I ran a public cloud using VMware for five years, and I never had the types of discussions and the types of interactions doing that as I do now in the OpenStack community, getting help from such a much broader set of people. In the end, the development's going to get done by somebody, and I think, for the most part, it happens without you even knowing about it. Want to add, or should we go on? Okay, I can give a quick answer. Thank you, sir. So firstly, I think it depends on what you are doing. Are you doing carrier grade by yourself, or are you relying on other people? Secondly, I believe different use cases will have different people. For example, I'm using OpenDaylight for optical transport; I mean, dedicated people do that part. For the EPC, or whatever other part, we need other people to do something like that. Then for OpenStack: at China Mobile, we have a couple of hundred people today doing OpenStack implementation for our private cloud. It's not for NFV purposes. For NFV purposes we do need them, not necessarily to develop the OpenStack core code, but mostly for the tools, the installation, right? So these are different purposes. We do hire people, a bunch of them. But for the NFV case, I think we are still trying the hybrid case, whether or not we are on OpenStack. We see both sides have their place. OpenStack has a very strong ecosystem and we see the beauty of it. And I think it's quite an open topic today. Operators will decide, for different use cases, different things, based on different programs, and then pick the different solutions for that. Okay, thank you. All right. Yeah, so I represent a networking vendor, and I find it incredibly confusing.
We have actually delivered DPDK support, we have delivered Heat templates, things like that, to match your network needs. What I want to hear from you, because you guys have deployed at scale, is: what do you expect from networking vendors in terms of plugging into this incredibly complex ecosystem? You know, the top three asks, or the top four asks, not from an SDN controller perspective, but from a networking vendor perspective, either a load balancer or a firewall or anything like that. What are the top five things you expect a networking vendor to do in order to plug into your NFV environments? Top five. Or three, whatever. A: it works. Yeah, I like that. And scale. So, I mean, we spend a lot of time on this subject. I'll give you one example, and I'll try not to use names. All right. So there was a time when we picked one vendor, and this is in the storage realm, and I'll get to why it's related. We picked vendor X. Vendor X had claimed that they had very good Cinder integration. And so we deployed X in production, or tried to deploy it in production, and then it turned out that it would take four to five minutes every time we wanted to spin up a new volume. It added a layer of complexity and confusion in the middle, between Cinder and the actual provisioning engine, and we couldn't make it work any better than that, and that's just not acceptable. So in the end, we ended up switching to vendor Y, because their integration with Cinder was well thought through, and it actually had been tested at scale, and it worked. It just worked, and it worked in real time. So paying attention to the integration with Cinder or Neutron, and especially with Neutron, and really working with us in that area, is probably the most important thing right now. Because if, in the end, I have a set of APIs that are standard and everybody agrees on, and that's only like 5% of what I use, then I haven't gotten a good benefit out of it. I'm back in the vendor lock-in I had before.
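The vendor X story is essentially a provisioning-latency test. A minimal sketch of how one might time a volume-create path against a real-time expectation; the backend callable, the threshold, and all names here are hypothetical stand-ins, not a real Cinder driver API:

```python
import time

# Illustrative threshold: "real time" provisioning, not four to five minutes.
# The value is an assumption; tune it per deployment.
PROVISION_SLA_SECONDS = 30

def measure_provisioning(create_volume, size_gb):
    """Time one volume-create call and report whether it meets the SLA.

    `create_volume` stands in for whatever actually provisions the volume
    (e.g. a Cinder client call); it is a plain callable here so the sketch
    stays self-contained.
    """
    start = time.monotonic()
    volume = create_volume(size_gb)
    elapsed = time.monotonic() - start
    return volume, elapsed, elapsed <= PROVISION_SLA_SECONDS

# Stand-in backend that provisions instantly, for demonstration only.
def fake_backend(size_gb):
    return {"size_gb": size_gb, "status": "available"}

vol, elapsed, meets_sla = measure_provisioning(fake_backend, 10)
```

Running this kind of check against a candidate backend before production, rather than after, is the lesson of the story above.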
So this area of integrating with Neutron, not overextending, and going back, refactoring, and working with the community to put it into the core, is really important. So, I mean, that one is very top of mind lately, and people on our team have been spending a lot of time on the subject. So that's one. Well, I was going to say the exact same thing. I'm going to say it in a slightly different way, just for the sake of highlighting the importance, I mean. We need vendors to work with us upstream so that we have consistent APIs that we can use through Neutron, to use whatever implementation is behind or inside Neutron. So that's really a critical thing for us. Well, my reply is really very short. The other thing we need, and it's a bit of a telco requirement, maybe, is a very fast failover feature. If a switch goes down, then in the SDN controller or wherever, we need to instantly recover through a backup path, as an example. So that's the third one. Do you mind me adding one thing? Yeah, sure. I just want to complete one thing. So we mentioned Neutron integration, but we were mentioning orchestration, and the question of orchestration of VNFs in particular. And this is an area where we have the same type of concern, where we need early work to make sure that we can manage VNFs that do the same thing in a consistent fashion, and the same pattern applies to this context as well. Okay, fine. Yeah, so we're down to about five minutes. So let's see, you were first. Well, what did I say? Yeah, you were first. Scott Fulton from The New Stack. Gentlemen, thanks for being here. Deng and Ashik, you both mentioned that there were certain gaps that the current incarnation of OpenStack kind of leaves open. You didn't say they were insurmountable gaps, but they're there.
To the extent that you find yourselves having to fill those gaps yourselves, or to make OpenStack more interoperable with OPNFV, are you concerned that the changes you make won't be things you can contribute upstream, that can contribute to the community at large? That, in effect, perhaps you may be forking OpenStack, which would be the worst-case scenario, or, in the moderate case, just changing it to become unsuitable to anyone's needs but your own? Very good question. Yeah, very good question. We will contribute it upstream, and we are doing it. And I'll give you an example. We required some failure recovery features from OpenStack. And in order to achieve that, we came to OPNFV. We proposed a requirement proposal, and the OPNFV people, many of whom had open source experience, helped us develop the project proposal itself, which became more, let's say, comprehensible to the open source community. When telco people write specifications, they write 10 or 100 pages, because we tend to see the end-to-end view of the whole thing rather than the feature itself in a standalone way. Then in OPNFV, on that project, the name of the project is Doctor, as an example, we got people, open source developers, on board. They're very good with OpenStack. And we started to submit blueprints to OpenStack. We have already had five or six blueprints accepted and merged to meet the release, and especially the node-related failure recovery for telco nodes is almost done. So to answer your question: our approach is that whatever is custom-built now has got to be upstream as soon as possible. Yeah, upstream is the only way to go, apart from tiny bug-fixing patches that can be carried and maintained in-house for a short time when it makes sense, and if you have the right team to deploy OpenStack in a DevOps fashion. But working upstream is the only way to go.
That really is the spirit of a project like OPNFV, where all the contributions are done in upstream projects, whether in OpenStack or in other projects, for instance Open vSwitch, but there's no point in trying to fork. That's certainly not an option. Yeah, so I'll reiterate the point about OPNFV. And that's kind of the cool thing about OPNFV, and I'm not sure we even have another example like it in the open source world, where it takes a specific group's requirements and makes sure that the process is flowing. It's the process of: what does the requirement look like, how is it tested at the end, but then also what integrations are needed to make it actually work, and then all of us working together on it. And it also includes helping people test it and be aware of what the upstream processes are. So it has quite a lot of good aspects that help us prevent forking. Yeah, I thought so. I've got to give you your turn. I don't want to promise, because they were already promised, so let's continue. Go ahead, we don't have much time. All right, go ahead. Thanks for sharing your thoughts. I actually have two questions, or maybe two and a half. First one: early adopters of NFV, and I believe most of you guys fall into this category, have either deployed or are currently evaluating orchestration solutions from vendors. OpenStack recently launched Tacker, which is an official OpenStack solution for orchestration. What are your thoughts regarding using a vendor solution, which has been developed over the last few years or few months, versus waiting for Tacker to mature and using it as an alternative, or maybe combining both by pushing your vendors to work with Tacker? Second question, if we have time for it: do you think that some of the benefits or concepts of virtualization, such as maybe multi-tenancy, really apply to EPC and IMS VNFs?
Do you think that you may have, at one point in time, an MME and maybe a packet gateway and a PCRF together on the same piece of hardware? That's it, thank you. Right, so I'll try to answer one question first. The first thing is: yes, we did a commercial deployment, since last year, of our small-scale gateway and also a rich communication system, which is both virtualized and NFV based. So you're asking about Tacker; I think you are asking the right people. I'm going to present tomorrow, if you can come at nine o'clock in the morning, ballroom E, you can see it. I'm going to tell you what the difference is. I think Tacker today is mostly attacking enterprise NFV, while we sitting here are the telecom NFV. So what is the difference? You can see my presentation tomorrow. I forgot his second question, but I believe that... Okay, regarding the first one: Tacker could be a potential solution, but you've got to think about, let's say, how telco operators upgrade their networks. So you have already deployed a MANO stack, et cetera. It has an orchestrator, it has a cloud management system, and it has the VNF manager, and interfaces among all three of these. The interfaces are generally as they are being defined in ETSI NFV. So for Tacker to replace an already deployed orchestrator, I suggest you follow what has been defined in the standard. It will be much easier to replace an existing orchestrator that way, if someone wants to replace the orchestrator, but it could be a potential solution. For the other one, the answer is simple: the PCRF, all the EPC nodes, the target is to have them all on that same cloud. Yeah, so one comment I would make is that open source is not a religion at operators. So when we have to do a deployment now, the exercise consists of looking at the different options, including proprietary ones, and seeing what will work for us.
And at the same time, since we are learning to understand the benefits of open source in different areas incrementally, which can typically be done in the labs of the different companies, we work upstream to understand the maturity of the solutions and see when they will be applicable to our context. So I wouldn't be able to comment specifically on Tacker, because I'm not an orchestration specialist, but basically that would be part of the answer. The other comment I would make is that the title of the panel is Open Source and NFV; it's not OpenStack and NFV. And I'm saying that because for a given component you have to deploy in the architecture, the official component in the OpenStack world that could play that role is not necessarily the choice we will end up making. Maybe we'll use something else that comes from another ecosystem. And that's good; that's why the open source ecosystem is healthy. Again, I'm not saying anything against Tacker. The point is really that open source offers this diversity, and we want this diversity to work for us and give us all the benefits.

Yeah, so I'll reiterate what I was saying before about orchestration. One part of it is certainly that each VNF showed up with its own orchestration, and we need to consolidate that. The IT world has been working on orchestration and workflow for a long time, and there are a lot of solutions. The telco people have had a lot of solutions too. As was described before, Tacker may be a way to solve it, but I would urge everyone to work together more on this topic, because the risk is that we end up with essentially a hundred IT tools and a hundred telco tools, and we're in a real mess at the end, unable to come to an agreement over something that I would argue is not that hard. So on orchestration, I hope we can work together to solve that. And in a way, OPNFV, I think, can help drive that.
I mean, it's not really been willing to make choices about technology; it's left that alone. It's just demonstrating what the goal is, and then people can insert, as Thomas was saying, whatever open source or proprietary tool they like; as long as the tests pass, then we're good, and we can see evolution happen that way. So on orchestration, that's my point.

And on the sharing question: for us, that's one of the struggles we have. Certainly the target, as described, has been packing all workloads onto one common infrastructure, and we've done an enormous amount of work to flatten the different clouds we have into one to do this. We are going to run many of the 3GPP components on the same exact platform. But there is still a hesitancy on our security team's part about certain functions being together. A simple example is a route reflector, or anything really integral to our network routing: if that were hacked into and somebody messed with it, very bad things would happen. So there is a reluctance in certain spots to share hosts. We're working very hard to solve those concerns and add new layers of security; simple examples are TPM or TXT from Intel, added as a layer of hypervisor security. So the target is clear, and we have a few things we're working on to get there. All right, thanks, Toby.

Yeah, so the workload packing issue is a valid concern, definitely, and it's a concern that we share. But I would say that even if we don't put certain workloads on shared compute hosts, we still preserve a large majority of the benefits of having a common cloud. That's what I would say.

All right, let's go on to our last question. We'll give you the honor of being the last one. Thank you. In all your deployments so far, and the stuff you get from your vendors, are you just using HOT templates right now, or have you tried TOSCA and VNF descriptors and the other things coming out of the orchestration teams? This is my favorite topic this week.
You said that several times. Yeah, well, Heat templates. All right, so one of the things we've been pushing all the VNF vendors toward is to write a HOT template for what they want. And that worked out okay until recently; the complexity of some of these HOT templates has become daunting. So I think one of the issues you get into, and we've seen this with Chef and Puppet and other intermediate automation mechanisms, these ways of templatizing automation, is that you build up an enormous amount of complexity debt and lose the benefits of it. That's something we're keeping an eye on: are we just adding another layer of unnecessary abstraction, and then, as you were pointing out, having one end of it be TOSCA and some subset of it be Heat, multiple layers of abstraction all for basically setting up a VM? There's going to have to be a balance there.

I'll just make a short comment. TOSCA and Heat are worth mentioning, then. I think it's also worth mentioning a YANG-based approach that I'm monitoring, because it's interesting to know that it's an approach that has a lot of traction in the telco world, but it's also something that's fairly alien to the IT world. So we see a kind of tension, or question, here about what we will end up doing and which teams will learn it and use it efficiently. So there are lots of questions related to this.

Our present implementation is a bit proprietary, and we are actually looking forward to the VNFD being defined in ETSI NFV. At the moment we are implementation agnostic on that, but hopefully we'll see an open source solution for that as well.

Okay, because it's the last question, let me make the last comment. Yes, so if you look at the orchestration layers, from the top down: if you go through Heat, you have the translation to the HOT template, right?
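[Editor's note: the "HOT templates" discussed here are Heat Orchestration Templates, the native YAML format of OpenStack Heat. As an illustrative sketch only (the image and flavor names are placeholders, not anything the panelists deployed), a minimal HOT template booting a single VNF VM looks like this:

```yaml
heat_template_version: 2015-10-15

description: Minimal example that boots a single VNF VM

parameters:
  image:
    type: string
    description: Glance image name (placeholder, assumed to exist)
  flavor:
    type: string
    default: m1.small

resources:
  vnf_server:
    # A single Nova server; real VNF templates add networks,
    # ports, volumes, and scaling groups, which is where the
    # complexity debt the panelist mentions comes from.
    type: OS::Nova::Server
    properties:
      image: { get_param: image }
      flavor: { get_param: flavor }

outputs:
  server_ip:
    description: First address assigned to the server
    value: { get_attr: [vnf_server, first_address] }
```

Such a template would be deployed with something like `openstack stack create -t vnf.yaml --parameter image=<image-name> my-vnf`.]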
So from the top, you have TOSCA input for the VNFD, I mean VNF onboarding. Then you have the catalog modeling second. Then you have the core modeling; that's very important, and it depends on the implementation. After that you need to decouple the VNF from the connections: for the connections you go to YANG modeling for the network connections, and you can use TOSCA for the lifecycle management descriptions. Those are the layers that have already been implemented by people. I think OPEN-O, the Open Orchestrator project, is also doing these things through open source and a kind of standard data modeling. OPEN-O, tomorrow, yeah, nine o'clock, Ballroom E. I'm making an advertisement, okay. Yeah, you're good at this, you know. Any other announcements you want? No. Thank you.

All right, well, thank you. That was fun, especially having the audience participate so we didn't have to dream up the questions. Thank you to the panelists, I enjoyed it. Anyway, thank you. We're standing between you and beer, so have a good evening.