All right. Welcome, everyone. Today we're going to talk about one of my favorite topics, orchestration, and specifically about the role orchestration plays in my use cases. My name is Toby Ford. I come from AT&T, where I work in the NFV/SDN space. The use cases I care about are somewhat complicated: they often span many data centers and many central offices, and nowadays they are extending out to the edge, to the fog realm of uCPE, IoT, and base stations for the RAN. And I need to orchestrate. I need a way to instantiate our, quote unquote, virtual network functions and place them, dispersed, across all of these facilities. Then, based on demand or a lack of demand, I can scale the resources up or down and adjust. And beyond the simple single use case, I can eventually have multiple use cases with multiple tenants, bin-pack the resources I have, and work toward our goal, which is essentially full asset utilization. At the center of this story is orchestration. You may have heard the term MANO. MANO comes from the ETSI ISG's work on NFV. It's intended to represent the part of the puzzle that, as I described, helps us manage and orchestrate the virtual network functions. So today I have a group of folks that I think will do a good job of representing this problem space, and of the diversity and innovation that's happened around orchestration. There have been many attempts at solving this problem, not just within the NFV or SDN space but in IT generally. Many, many times we've worked on this concept of orchestrating automatic and manual tasks across different types of resources and putting them together to solve something. There's a long history of that in the IT realm.
So I've tried to put together a panel that represents that diversity, both of the IT realm and of the telco realm, specifically around the projects that are trying to solve the MANO problem. AT&T has its own way of solving for MANO. Other folks, like China Mobile and a number of entities we'll talk about, are working to create a new thing called OpenO, which is intended to solve this problem. And we also have representation from the ETSI community and a new entity created around it called Open Source MANO. So let me call up to the stage my three guests. First, attempting to represent the traditional IT space, and also solving for my problems, Chris Wright from Red Hat. The steps are on this side. Thank you. Or even better. Thank you, Chris. Second, we have Diego Lopez from Telefonica, representing the Open Source MANO project. Thank you. I applaud your agility there. And last but not least, Ling Li from China Mobile, who has joined me as well as being a Super User Award winner. Thank you, Ling Li. All right. My first question to this esteemed group is to have each of them introduce themselves and what they work on, starting with Chris. Hello. Good afternoon. I barely made it here; I got lost in the shuffle of trying to find this particular room. My name is Chris Wright. I am vice president and chief technologist at Red Hat. I focus on our technology strategy, looking at where the industry is moving and where open source innovation is solving new, interesting problems across the industry. OK, just one second while I put this on silent. That's true. You're looking that up to see who you are? No, I was trying to spare you the music I'm buying. Diego Lopez, I work for Telefonica. My position is formally called head of technology exploration.
I joined Telefonica five years ago. Since then, we have been working on aspects related to SDN and NFV, and I am part of the large team that is, among other things, working on Open Source MANO. Hello, everyone. My name is Ling Li. I work for China Mobile. I'm a core member of the NovoNet project, which drives the company's SDN and NFV strategy. I've been actively participating in some of the open source communities around this idea, including OPNFV and OpenO, and I'm very proud to be serving on the board of OPNFV and on the TSC of OpenO, representing China Mobile. Thank you, everybody. So let's start with a simple one. Let's pretend I know nothing about orchestration. Could you tell me your definition of orchestration? I think of orchestration, first of all, in the software context. You can picture the conductor at the front of an orchestra getting everybody to work together. But in a software context, I think of it as taking multiple API-driven steps together, in a concerted effort, to produce a result. Typically I look at that as, for example, creating a service that's a combination of a collection of independent pieces. How do you launch that? How do you manage that? How do you update that? So orchestration: a collection of things, done together, with APIs under the hood, which I think are really important. No, exactly. Orchestration in musical terms is precisely making a set of different individual instruments sound like a joint instrument. That's what they call orchestration: you take a piece of music and you make the whole orchestra able to play it. The point here, when it comes to the particular case of network service provisioning, is that we have to live with something that in some cases is complicated, which is the installed base. There is an old joke that creation would have taken much longer than seven days
if God had had to deal with an installed base. This is precisely one of the key things when talking about orchestration of network services: we have to deal with the deployment of virtualized network functions in an environment in which there are many other network functions that are not virtualized. And they need to work together, because in the end we need to orchestrate not only the virtualized part, not only the part that is based on a cloud infrastructure, but we have to make it work with the other parts that are not. This implies additional challenges in orchestration that in other environments are probably not so high. Makes sense. Certainly the brownfield estate that we have to live with and the greenfield have to coexist, and that's what the orchestration has to cover. Ling Li? Yes, to my understanding, in a narrow view orchestration means, just as Chris mentioned, putting pieces together. Usually we would call them resources or components, and we connect them to come up with a deliverable, like a service that can be consumed by another level of consumer. In this interpretation, we actually have orchestration at different layers. We can have orchestration at the VM layer, where VMs compose into an application. We can also move a level up and, by combining different applications or VNFs together, come up with a network service as defined by the ETSI NFV architecture. And then still a level up, we can combine different network services, which are usually consumed by the operator, put them together with some product definition and business logic, and come up with an end-to-end service that can be consumed by our end users, for example our subscribers. So at these different layers, I see different interpretations of orchestration.
And to me that happens at design time. Also, in the broad view, at each layer of orchestration we can incorporate the concept of management. What I have been talking about is that we come up with a blueprint, which can be composed of different components with connections between them, and with that blueprint we can initiate different service instances. It's also key to manage those different instances through their whole lifecycle, and to us that is an integral part of orchestration. Great. All right, so it sounds like we have a consistent definition of what orchestration is, and I think we touched on a few of the areas I wanted to get into. Ling Li did a good job of mentioning some of the key dimensions. It's more than just orchestrating or instantiating a new thing or a new service; it's also the aspect of designing that service ahead of time and orchestrating that activity, to prepare for production time, or real time, to actually do it. And it was good to hear about the management part too, because the lifecycle of maintaining something is not just placing it somewhere and making it work to start with; it's living with it over the long term, scaling it up and down, making sure it runs, re-instantiating it if it fails, and then getting rid of it when you're done with it or when it's not using the resources it should be. So I think we touched on a lot of the aspects that fall into the definition of MANO. Now I'd like Diego specifically to talk about what MANO is and where it came from. Mano means hand in Spanish, you know? Oh yeah.
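Ling Li's layered view of orchestration, together with the lifecycle-management side, can be sketched roughly as follows. This is an illustrative model only; the class and method names are hypothetical and do not come from any real MANO implementation.

```python
# Illustrative sketch of layered orchestration with lifecycle management.
# Following the discussion: VMs compose into a VNF, VNFs into a network
# service, and network services (plus business logic) into an
# end-to-end service. All names here are hypothetical.

class Orchestrated:
    """A blueprint that can be instantiated and then managed through
    its lifecycle (the management half of MANO)."""
    def __init__(self, name, components=None):
        self.name = name
        self.components = components or []
        self.state = "designed"          # design-time blueprint

    def instantiate(self):
        for c in self.components:        # cascade down the layers
            c.instantiate()
        self.state = "running"

    def terminate(self):
        for c in self.components:
            c.terminate()
        self.state = "terminated"

class VM(Orchestrated): pass              # resource layer
class VNF(Orchestrated): pass             # VMs -> application / VNF
class NetworkService(Orchestrated): pass  # VNFs -> network service
class EndToEndService(Orchestrated): pass # network services -> end-to-end

# Design time: compose the blueprint top-down.
firewall = VNF("firewall", [VM("fw-vm-1"), VM("fw-vm-2")])
nat = VNF("nat", [VM("nat-vm-1")])
vpn = NetworkService("vpn", [firewall, nat])
offer = EndToEndService("business-vpn", [vpn])

# Run time: one instantiate call cascades through every layer.
offer.instantiate()
print(offer.state, firewall.state, firewall.components[0].state)
```

The point of the sketch is that "orchestration" at each layer looks the same in shape (compose, instantiate, manage, terminate), which is why the panelists can talk about orchestration at the VM, VNF, network-service, and end-to-end layers with one vocabulary.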
So that's the first thing, and when talking about management and orchestration, by chance the two people who were leading the discussion at that moment were Spanish. So we decided to use it, and to make the community learn a little bit of the most civilized language in the world, by the way. Anyway, the name came in handy, if you'll allow me the pun, because it was precisely about what you do with remote hands in the environment of a data center or wherever: managing and orchestrating things, asking someone to change things, reconnect things, and act on your behalf. So the idea was precisely this. When we coined the term MANO, what we wanted to make very clear, and this is important to understand because it has influenced some decisions taken afterwards, at least in our case, is what I mentioned before about the installed base. What was very important was that we tried, from the very beginning, to isolate the aspects related to the management and orchestration of the virtualized part, the part based on cloud infrastructure, from the rest: from the general semantics of a network function, or how you apply SDN, or how you apply network management, or whatever. Essentially because we were aware that the reality for many years to come is going to be hybrid networks. We will have a combination of, I don't know, optical or radio equipment that is based on hardware and managed in a more or less traditional way, with something that will be much more elastic, much more adaptive, in the sense that it will change and be suitable for cloud operations, while the other parts will not.
So the idea is that you have a separate management and orchestration that is focused on the cloud aspects and not on anything else, because to a general network management system it would just look like another network component: a component that is a bit particular in that it is much more subject to change, but not necessarily different, so you can do consistent network management with the MANO interfaces provided by the separate cloud management and orchestration. Awesome, so I have two questions. First, for Chris: how is MANO, as Diego described it, different from what you were doing before in the IT space, specifically with products like ManageIQ? I think there are maybe a couple of key differences. First of all, the notion that you have a complex task composed of a set of discrete independent tasks, like launching an aggregate application, is not new, but here it's defined differently, and part of what's different is that it's the network. We're building up the network, building connectivity, whereas many traditional IT tasks are focused on things like launching applications. The application is more business logic and less about defining the actual connectivity and infrastructure; in the service provider space, the connectivity and that infrastructure is the business. From our tooling point of view, we've really focused on things like: can you launch an application, which is an aggregate of a set of different components, on differing cloud infrastructures or differing virtualization platforms? And that's actually very interesting for the service provider space as well, where we want to make sure we can provide network functions across a disparate set of VIMs, in the ETSI NFV vernacular, or a different set of infrastructures.
So on the one hand we see them as really different: applications, business logic, databases, higher-level languages, distinct from the network itself. But the concept of an application being an aggregation of services, like a microservices-based platform or application, translates really directly to individual network functions composed to provide a network service. So it's similar concepts applied in a different space. All right. So Ling Li, what are your thoughts? How does your effort around OpenO cover what we've talked about already? How does it cover the MANO space? Yes, the MANO. Or even before that, what is OpenO? Good question. Just as I stated, I think management is an integral part of orchestration, so management and orchestration together have to be part of a meaningful solution for us. As for ETSI's definition of MANO, I think it fits well with the network-service-layer orchestration and management, and it is specifically targeted at NFV orchestration within the data center and cloud. For us in OpenO, there's a unique feature: we have to deliver the solution at the end-to-end orchestration layer, which means that in addition to NFV orchestration we also have to do SDN orchestration. We have to provide connectivity services in addition to NFV services: with a network service composed of VNFs residing in the cloud, we also connect different data centers and provide the last-mile connectivity from the end user's device to our data center and cloud. So there are really two parts: SDN orchestration, which manages the connectivity services, in addition to NFV orchestration. All right, that makes sense, but back to where OpenO came from.
Is it intended just to solve that problem, or is it going to extend to more things? How does OpenO play in this space? It has a two-layer orchestration model. At the top layer there is a global service orchestrator, which deals with the end-to-end service orchestration that is consumer oriented. The end-to-end service is composed of two types of service: network services are one type and connectivity services are another, and they're orchestrated respectively by the NFVO and the SDN orchestrator. I think our NFVO part is quite aligned with the ETSI architecture, the MANO part, excuse me, and the SDN part is quite aligned with MEF's definition of LSO. We try to combine the two to provide the end-to-end orchestration capability, and we also work with multiple VIMs and multiple VNF managers to increase our capability of sourcing other open source components and making it a whole solution. All right, thank you. Diego, can you help explain where Open Source MANO came from and how it covers this space, other than the name? Open Source MANO comes from some orchestration components that we built some time ago, when we started what we called our NFV reference lab two or three years ago. The idea was precisely, on the one hand, to get some insight into what the rest of the industry was doing around NFV, with the different virtual machines, the different virtualization platforms, the different choices for orchestrating the cloud resources, et cetera; and on the other hand to bring some awareness inside the company. Telefonica has a size, you know, and it's sometimes difficult to reach all the parts, so the idea was to have a clear showcase for what we were doing. And one thing that was clear from the beginning was that we needed an orchestration platform.
We started to play with it, and at a certain moment, in conversations with some people who were working with us, we thought it was mature enough to attempt to make it open source: to make it a full-fledged open source project, not only delivering it as open source but looking for additional collaboration and cooperation, with people from other companies and other institutions bringing their input and making the whole thing evolve. We started very, let's say, modest and small, and we have grown in that direction. Right now we have released what is actually the third release, but it's called Release One, because first we started with a seed release that was just putting all the pieces together; second was Release Zero, the first one with code built on purpose for this project; and Release One is the one able to demonstrate where we want to head. It has something that is very, very close to, I'd say the closest you can get in an open source project to, production-level software. We are very, very glad about it. Mostly OSM is focusing right now on what I said before: the orchestration part of the MANO problem, the orchestration of network functions inside the network environment. It's not addressing the rest of the space. And it's pure MANO, though it doesn't follow it 100%. I don't know if many of you have seen the diagram coming from ETSI of the MANO stack with the three pieces, the NFVO, the VNFM, and the VIM; OSM doesn't follow that 100%, it doesn't provide exactly those interfaces, but the behavior is precisely that of a full MANO stack.
It's concentrated on that, and that implies that in principle I don't see why it could not work, for example, with OpenO in the future, integrate with ECOMP, or work with the... ManageIQ. Sorry. Because in the end it is a piece that is very much focused on solving the problem, not trying to go beyond that, and we are aware that there will be other solutions that will be part of the whole process. That makes sense. So, one thing Diego brought up is that AT&T has made a thing called ECOMP, Enhanced Control, Orchestration, Management and Policy. The "O" and "M" part of it is similar to MANO, but we've tried to do more than that. We're addressing a larger problem in the telco OSS space, specifically around policy enforcement across the whole system, some of the dynamics of service definition, especially VNF onboarding, and trying to work with VNFs to create a template for how they show up. So there are a few parts that are, I think, meta to the MANO part, but at its core it also has a piece that does this management and orchestration function. So what we're faced with today, at least among a subset of us, is a lot of redundant effort, and we're currently having discussions about how to bring it all together. But given that we're at the OpenStack Summit, I wanted to ask Chris: how can OpenStack help? It obviously has a few projects that could help here, whether it's Murano or Heat or Mistral or Tacker or Congress. How can OpenStack help deal with this problem? So I think the first question is: is that the right place to do it? And the second question is: what are the specific projects in OpenStack that are helpful? If you look across a lot of these different efforts, we're actually using some of the components you mentioned.
Things like Heat are pretty consistently used to do the basic definition of an application and potentially some scaling parameters around it. We've also looked at using a project called Tacker to help expose that onboarding portion and give definition to bringing something onto a platform: you need to tell the platform what its SLA is, and the end result of the SLA is going to require a certain set of resources from the underlying infrastructure. So we've got something that says, hey, how do you provide a VNF with a descriptor and then deploy that onto an OpenStack platform? Tacker is trying to help there. And especially when you are doing multi-site work, the OpenO project has described that there's potentially a difference between the NFV orchestration and the SDN orchestration: how do you reach out to an SDN controller and define a service function chain that's going to be associated with the independent network functions that build up this composite service? You've also got workflow orchestration in OpenStack. I think one of the questions for the service provider community is how much you expect to run your network functions across multiple different platforms. If you're running across a lot of different platforms, then building the entire stack completely in OpenStack is only going to solve one problem. So I think what we're seeing is interest in cross-cloud compatibility: we want to leverage the primitives of OpenStack without building the entire stack as an OpenStack project. That's how it looks to me. Makes sense. Diego or Ling Li, do you have other commentary about the OpenStack part? Yes, precisely. When you mentioned having the primitives of OpenStack without the whole OpenStack deployment: there's something that we built some time ago that we call OpenVIM, which is a sort of streamlined OpenStack.
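The onboarding idea Chris describes, a descriptor that tells the platform what resources a VNF's SLA implies, can be sketched like this. Note this is a hypothetical, heavily simplified descriptor, not real TOSCA or Tacker syntax; the field names are illustrative.

```python
# Hypothetical, simplified VNF descriptor in the spirit of Tacker-style
# onboarding: the descriptor declares what the function needs so the
# platform can plan resources. This is NOT real TOSCA/Tacker syntax.

vnfd = {
    "name": "vrouter",
    "flavor": {"vcpus": 4, "ram_mb": 8192, "disk_gb": 40},
    "scaling": {"min_instances": 1, "max_instances": 5,
                "scale_out_cpu_pct": 80},
    "sla": {"availability": "99.99"},
}

def validate_vnfd(d):
    """Check the descriptor carries everything the platform needs."""
    required = {"name", "flavor", "scaling", "sla"}
    missing = required - d.keys()
    if missing:
        raise ValueError(f"descriptor missing: {sorted(missing)}")
    s = d["scaling"]
    if s["min_instances"] > s["max_instances"]:
        raise ValueError("min_instances exceeds max_instances")
    return True

def plan_deployment(d):
    """Translate the descriptor into a resource request for the VIM."""
    f = d["flavor"]
    n = d["scaling"]["min_instances"]
    return {"vcpus": f["vcpus"] * n,
            "ram_mb": f["ram_mb"] * n,
            "instances": n}

validate_vnfd(vnfd)
print(plan_deployment(vnfd))
```

The design point is the separation of concerns: the vendor ships the descriptor once, and the orchestrator, not a human operator, turns it into an infrastructure request at deploy time.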
It is much tinier than the normal OpenStack deployment, thinking precisely of the kind of deployment you would do in a small central office, or the kind of deployment you would do for managing different radio stations. This is something it could be interesting to explore combining, because formally it's not an OpenStack project. For us, we are not that many; we are a small group of people working on development, so to some extent we are playing startup inside the company. We are very much focused on trying to make things happen, and pushing certain things in a community of this size, I'm not sure if you're aware, is complicated simply because of the size of the community and the many forces pushing and pulling in different directions. But this is precisely something we have learned: it would be very interesting, at least for our environment, and I guess for other environments, to benefit from OpenStack interfaces without the need for the whole complexity of a full-fledged OpenStack stack. And this is one of the main goals for us: OpenStack is essential precisely as a reference, the reference we are using for the platforms that we are mandating in our initial pilots and deployments. It's a way of making clear for us that we can move from one provider to another provider by moving our code, or the code of the third parties that are collaborating with us. And something that is very important as well is that the knowledge we collect is reusable.
That is extremely important when you have people trained and working with it, because this obsession with vendor lock-in very often is not only about compatibility, because then it's a matter of price: if you pay enough money, you will get something different. In the end, it's a matter of knowledge. You simply don't know what you can do with the new stuff, and this is something that brings you a certain stability in your knowledge: you know in which direction you have to look, and that's a very important one. Sure. So Ling Li, how does OpenO work with OpenStack today, and are you reusing components that way? Actually, we see OpenStack as the de facto solution for the VIM component, the Virtual Infrastructure Manager. We are also aware that there are some projects in the OpenStack community that might overlap with some of the components within the OpenO community. And actually, we are open to any of the open source efforts that could complement and become a part of the OpenO community. But we do make choices to try to keep aligned with other communities like OpenStack, and there are some principles. The first is that we try to push alignment between the different interfaces, especially data modeling. If we can keep aligned across different communities on these two aspects, even though we are not sourcing from each other, we can be interoperable and replaceable with different components, and that to me is the most appealing feature of open source communities. The second principle would be, how can I put it, ease of use. Take some of the components you mentioned, for example Mistral. Actually, we have our own version of Mistral in OpenO. The reason we are not using Mistral is ease of usability.
We see that a user-friendly GUI would be a very appealing feature, especially at the very top layer of orchestration, which the operator's staff would be using to define the service templates. So without that feature, there was some comparison, and we made our choice accordingly. But that is not to say that we have closed the doors to other components and other efforts in other communities; as long as we are interoperable, we can join together. And, it slipped my mind, I think I have a third principle. Which one is that? Fun, having fun. It's much more fun to have open source. Oh, yes. Tacker and service function chaining. I would like to mention that we actually see Tacker as a generic VNF manager, because they all operate at different layers of orchestration. We see Heat operating at the VM layer, building up the application, and Tacker or Juju operating at the application layer, able to use Heat or whatever interface OpenStack or other VIMs provide. We have been talking with the Tacker guys, and we welcome their contribution to our community, so that we can have Tacker as our generic VNF manager as well. And there is another form of collaboration, specifically around SFC. We are actually using the networking-sfc project, which resides in OpenStack; its defined API is our interface between the orchestrator and the SFC controller. So in that case, we are actually interoperable with some of these OpenStack solutions. And that's it. All right, thank you. So I think across the group we've seen a number of ideas about how to perhaps consolidate efforts, and also to interwork, and to find a way to integrate while continuing to innovate. So yeah, I think this was a good discussion. Right now I'd like to open up the microphones on the sides here. If anyone would like to ask any kind of question, feel free to stand up at the microphone.
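The service-function-chaining idea that comes up here can be modeled generically as a classifier plus an ordered list of function hops. The sketch below is an abstract model, not the actual networking-sfc API; all class and field names are illustrative.

```python
# Generic model of a service function chain: a flow classifier selects
# traffic, and matching flows are steered through an ordered list of
# network functions. This is NOT the networking-sfc API, just the idea.

from dataclasses import dataclass, field

@dataclass
class Classifier:
    protocol: str
    dst_port: int

    def matches(self, flow):
        return (flow["protocol"] == self.protocol
                and flow["dst_port"] == self.dst_port)

@dataclass
class ServiceChain:
    name: str
    classifier: Classifier
    functions: list = field(default_factory=list)   # ordered VNF hops

    def steer(self, flow):
        """Return the hop sequence for a matching flow, else None."""
        if self.classifier.matches(flow):
            return list(self.functions)
        return None

# HTTP traffic is steered through firewall -> IDS -> load balancer;
# everything else bypasses the chain.
web_chain = ServiceChain(
    name="web-inspection",
    classifier=Classifier(protocol="tcp", dst_port=80),
    functions=["firewall", "ids", "load-balancer"],
)

print(web_chain.steer({"protocol": "tcp", "dst_port": 80}))
print(web_chain.steer({"protocol": "udp", "dst_port": 53}))
```

In the real networking-sfc project the same three ingredients appear as flow classifiers, port pairs, and port chains; the orchestrator's job, as Ling Li describes, is to create the chain that matches the composite service it has deployed.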
And we'll talk about whatever. Looking forward to that gentleman's question. Ian? Hey, yes. You know me, I'm here to cause trouble. So I will ask one question. To start with the analogy you came up with originally: you talked about a conductor standing in front of an orchestra and ensuring that everybody works as a whole to accomplish a given task. But a conductor's main role in leading an orchestra is not the one in front of an audience; it's the one long before that, when he's trying to get everybody to work to one end and to fix the problems that he's seeing. So in the NFV world, when you're doing orchestration, one topic that doesn't seem to get covered well is the idea of repairing problems. You're running on a cloud; in a large cloud with lots of servers you will inevitably have problems. You will lose virtual machines. How do you see your orchestration solutions solving the repair problem, as opposed to simply the deployment problem? Yeah, so let me talk about one part of it with regard to ECOMP. This is one central thing that we're focused on within ECOMP. It may not be a difference, but it's about closed-loop control: setting this up not only around service management, problem management, and incident management, but also around the whole service definition. We create a closed loop where I'm taking in analytics, taking in feedback from the system or from customers, and then using that information, processing it maybe through an ML system of some kind, and then acting on it and using the orchestration as a way of getting back to the right state. That closed-loop concept is an integral part of our ECOMP work. At a cross-industry level, I think it's still early, so we're focused on the building blocks.
But if you look at everybody's work, there tends to be some notion of event-based processing coming out of the infrastructure or the actual network functions themselves, and then actions that you take. That could be as simple as scaling up because you've got some peak, or it could be more complex, like a fault in hardware that's percolating through the system and you need to redeploy. But in all cases, an event stream, some level of analytics against that event stream, and some remediation are core to the end result we're striving for. If you can't describe the function and you can't launch it onto the infrastructure, then you don't have that problem yet, and I think we're still mostly talking about just getting things launched. No, frankly, we have one project running and a couple of proposals launched recently precisely on this idea of repairing, and in general about resiliency, including security: the ability to identify and try to alleviate security problems, not only failures but ill-intentioned attacks on the infrastructure. As Chris said, it's still early. It's not that we can claim it is an integral part of the orchestration suite, but it is something we are aware of. At the last OPNFV meeting we were precisely showing some results, and we are talking with the OpenDaylight people as well, trying to bring the results we have and make them an integral part of their projects, so we have it as part of the normal toolbox that we'll be using, hopefully in a couple of years, to build networks. If I may add, I think high availability, or failover, is one of the key features of telecom services, and we have traditionally dealt with the problem with a combined solution from a single vendor, which provides hardware and software altogether.
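The closed loop Chris describes, an event stream, analytics over it, and remediation fed back through the orchestrator, can be sketched minimally like this. All thresholds, event shapes, and names are illustrative, not from ECOMP or any real system.

```python
# Minimal sketch of closed-loop control: consume events, run simple
# analytics (a sliding window), and trigger remediation actions that a
# real system would hand back to the orchestrator. Names are illustrative.

from collections import deque

class ClosedLoop:
    def __init__(self, window=5, cpu_threshold=80.0):
        self.window = deque(maxlen=window)   # recent CPU measurements
        self.cpu_threshold = cpu_threshold
        self.actions = []                    # remediations requested so far

    def on_event(self, event):
        """Analytics step: faults remediate immediately; sustained
        load over the window triggers a scale-out."""
        if event["type"] == "fault":
            self.remediate("redeploy", event["vnf"])
            return
        self.window.append(event["cpu_pct"])
        if (len(self.window) == self.window.maxlen
                and sum(self.window) / len(self.window) > self.cpu_threshold):
            self.remediate("scale_out", event["vnf"])
            self.window.clear()              # reset after acting

    def remediate(self, action, vnf):
        """In a real system this would call back into the orchestrator."""
        self.actions.append((action, vnf))

loop = ClosedLoop(window=3, cpu_threshold=75.0)
for cpu in (90, 85, 95):                     # sustained peak -> scale out
    loop.on_event({"type": "metric", "vnf": "vrouter", "cpu_pct": cpu})
loop.on_event({"type": "fault", "vnf": "firewall"})   # fault -> redeploy
print(loop.actions)
```

This is exactly the shape the panel converges on: the orchestrator is not only the deployment engine but also the actuator that the analytics loop uses to get the system back to the right state.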
And so in an NFV environment, that black box actually breaks down into different layers, and to deliver high availability comparable to the traditional black box, you need collaboration between different layers. In particular, how we deal with the availability issues of a specific VNF depends on what it requires at different layers. For example, some VNFs are typical IT applications; they are very comfortable relying on the infrastructure to do the failover, and they are not so sensitive to service continuity requirements. In that case, we depend on the cloud infrastructure to provide them with failover mechanisms, for example migration or VM reboot. For some critical VNFs, for example our core network VNFs, there are actually very strict requirements regarding their service continuity, and we could get sued by our government or subscribers for blocking that service for even a relatively short time frame. So in that case, we definitely need collaboration between different layers, and what our orchestration part does is make sure that for each specific VNF, its requirements and its failover policy are clearly specified in the metadata and conveyed to the specific policy enforcement entities. For example, for the IT application, that would be the infrastructure, and for some of the critical VNFs, there would be a collaboration between different layers. And I think there's also another part to the story, which is the monitoring part. In terms of policies where you might take proactive action to actually avoid failover, we have to monitor the status and resource consumption, and all the alarms from different components and VNFs.
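The idea of carrying a failover policy in VNF metadata and routing enforcement to the right layer could be sketched like this (a toy illustration, with hypothetical descriptor fields and layer names; no real VNF descriptor schema is implied):

```python
# Hypothetical sketch: each VNF descriptor carries an availability policy in its
# metadata, and the orchestrator routes enforcement to the appropriate layer.

VNF_DESCRIPTORS = {
    "vCDN-cache": {"availability": "infrastructure"},  # IT-style VNF: infra restarts/migrates it
    "vIMS-core":  {"availability": "application"},     # critical VNF: cross-layer redundancy
}

def enforcement_layer(vnf_name):
    """Pick the policy enforcement entity from the VNF's metadata."""
    policy = VNF_DESCRIPTORS[vnf_name]["availability"]
    if policy == "infrastructure":
        return "VIM"       # rely on VM reboot / live migration at the infrastructure layer
    return "VNFM+VIM"      # coordinated failover across application and infrastructure layers

print(enforcement_layer("vCDN-cache"))  # VIM
print(enforcement_layer("vIMS-core"))   # VNFM+VIM
```

The point is that the policy lives with the VNF's description, so the same orchestrator can treat an IT-style cache and a core network function differently without hardcoding either.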
So orchestration also has to play a key role in status and alarm monitoring to complement a total solution. That is my take on this problem. All right, thank you. Next question. Hi there. How would you compare the state of play in the NFV and OpenStack orchestration space with what's becoming available in the container space with Kubernetes, Swarm, and Mesosphere? I think that's a very good question. Do you want to take that one? So one level of orchestration is just where you place the workload, how you launch a workload. Some of that's almost more about scheduling. From one perspective, you've got Nova doing scheduling of VMs, and you've got something like Kubernetes doing scheduling of containers. So the question is more about what your application is running on: is it containerized, or is it in a VM? You also have some interesting definitions in the application orchestration layers, like Kubernetes or Swarm, where the focus is actually building an aggregate application out of a set of services, which in the networking world would translate to an aggregate set of functions composing into a service. And there, there's been work that we need to be interoperable with from an industry point of view, so that when you define a network service, we need to be able to deploy it, in my opinion, onto an application orchestration fabric like Kubernetes in a way that makes sense. We don't want to completely redefine the world over here in the service provider space, only to find that when you go to an application orchestration tool like Kubernetes, it doesn't fit, because we've completely rebuilt a bunch of infrastructure. And I think the real question is how much of the network is going to reside in bare metal, in VMs, and in containers. My personal opinion is that a containerized future in the network is inevitable, though that doesn't necessarily mean it will all be there. And so we are thinking a lot.
I know at Red Hat, and across the industry, we're thinking a lot about how we can bring these things together. One of the things that we were trying to demonstrate specifically with ManageIQ is that all of these problems are similar across the different industries. You do service definition, you create a service catalog, you launch a service, you monitor the service, you scale the service, and you may want to launch it across different infrastructures. Part of what we were trying to demonstrate is that you could build a comprehensive network service out of some functionality sitting in VMs on OpenStack and some functionality actually sitting in containers on AWS, which is probably the most extreme version of all the differences you might find in a service provider's infrastructure combined in a single network service. So containers are a very important part of the future, and making sure what we're doing now is compatible with the orchestration done in application orchestration tools is very important. Yeah, I think one point to add to that, because we have to move on: I think containers actually provide an opportunity to simplify, to make the orchestration easier on us, because the things that we were doing before can already be done in the template or in the Docker format that's there. So there's, I think, an enormous opportunity to help us simplify. One last question from Beth, because I'm already over time. So my question is: the proliferation of orchestrators has started, so I'd like the panel's comments on how to deal with that issue. I know I personally am already dealing with that. Orchestrators are in the cloud, they're out at the edge, they're all over the place, and they need to interact with each other. Yeah, do you want to take that one? I don't know. I'm not sure if you're referring to the fact that we're using the term orchestrator all over the place, or... Oh. Just that we have so many everywhere.
Yeah, it's like standards: the great thing is we have so many of them. I mean, probably it's because everyone at each level wants to feel that it's in charge of the orchestra, that they are the conductor and not one of the players. This is something that is, I would say, human in that sense. As long as what you're talking about is allocating resources according to a set definition, a model of what you want to achieve, you have an orchestrator, and inside the ETSI NFV group we have been discussing precisely these different orchestration layers. You can call it an orchestrator, or you can call it a manager, or you can call it a policy enforcer, whatever, but in the end you have different layers of concern that are normally associated very much with business models. I mean, if I am providing infrastructure, up to the operating system, I will have an infrastructure orchestrator, because it's my concern. If I am consuming that and I'm deploying VNFs, I will have a function orchestrator, because it's my concern. And if I am providing services, I will have my service orchestrator. And even if I am a customer, I will be thinking about my business orchestrator, or whatever. I don't see anything wrong with that as long as it is clear that there is not a hierarchy, but a set of relationships among different objects. This is one thing. The second thing is that there are many offerings right now in open source and in proprietary fields. That's natural as well, because we are starting from different goals and different analyses of the current reality.
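Those layered orchestrators, each scoped to its own concern and related by requests rather than a strict hierarchy, could be sketched like this (a toy illustration with hypothetical class and resource names, not any real MANO interface):

```python
# Hypothetical sketch of layered orchestration: infrastructure, function, and
# service layers, each with its own concern, composed by delegation.

class InfrastructureOrchestrator:
    """Concern: allocating raw resources (e.g. what a VIM does)."""
    def allocate(self, cpu, mem):
        return {"cpu": cpu, "mem": mem}

class FunctionOrchestrator:
    """Concern: deploying VNFs; asks the layer below for resources."""
    def __init__(self, infra):
        self.infra = infra
    def deploy_vnf(self, name):
        resources = self.infra.allocate(cpu=4, mem=8)  # illustrative fixed sizing
        return {"vnf": name, "resources": resources}

class ServiceOrchestrator:
    """Concern: composing VNFs into an end-to-end service."""
    def __init__(self, vnfm):
        self.vnfm = vnfm
    def deploy_service(self, vnf_names):
        return [self.vnfm.deploy_vnf(n) for n in vnf_names]

so = ServiceOrchestrator(FunctionOrchestrator(InfrastructureOrchestrator()))
service = so.deploy_service(["vFW", "vLB"])
print([v["vnf"] for v in service])  # ['vFW', 'vLB']
```

Each layer only knows the layer it delegates to, which matches the point that these are relationships between objects rather than one global conductor.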
And as I said before, most of the efforts I'm aware of should be able to work together and to interoperate, as long as, as Ling Li said before, they are using compatible models: not necessarily compatible interfaces, but based on compatible data models, because an interface is something that is easy to build in the end. And they will be covering the whole space, probably with some overlap, but this is the same thing that we have in the general computer industry: you can always do things with a couple of different tools, each one covering a little bit more of the space. So I think it's part of the natural evolution. I personally am not concerned about that. All right. Thank you very much to the panel. I appreciate everybody's help. And with all due respect to the next group, who I'm a big fan of, I'd like to see their panel too. So thank you everybody for coming.