Good afternoon, everyone. In this session we will present the Domino project, which is part of OPNFV. My name is Ulas Kozat. I am the PTL of the Domino project, and I work at Huawei. My colleague Prakash Ramchandran will introduce himself.

Good afternoon. I am Prakash Ramchandran; my amigo Ulas has already handed over to me. I am the chair of the MANO working group for OPNFV, trying to organize the MANO aspects as we move up the stack in OPNFV. The topic here is template distribution services. I work for Futurewei, which is Huawei R&D in the USA.

We divide the presentation into three parts. In the first part, Prakash will set the stage with the use cases and motivation for the Domino project. Then he will pass the ball to me, and I will do a couple of slides on what we do under the hood within the project. Then we will have a demo, which lasts about eight minutes, so that you can see in action what we deliver. Prakash, why don't you take over?

Thank you, Ulas. As usual, we want to know what the pain points are. But before I come to those, I would like to look at what is happening in this space. What you see here is the VIM, the virtualized infrastructure manager. We are at an OpenStack Summit, and OpenStack sits inside this VIM box, along with the SDN controller, which is a bit different in the sense that we always talk of Neutron in OpenStack, but underneath we have the SDN controller, which could be ONOS or ODL. So that is the VIM portion, and what is mapped here is the ETSI NFV architecture.

In the first phase of OPNFV, this was the priority: OPNFV Arno, the first release, and then Brahmaputra. In both, we focused on the virtual infrastructure layer itself, the NFVI: the infrastructure and its virtualization. Now, as MANO, management and network orchestration, evolves up the stack, we have moved to the VNF manager. The VNF manager manages the lifecycle of the VNFs; you provide services by composing VNFs. Above that is the NFV orchestrator, which decides what to orchestrate for which use case. And above that come the actual OSS and BSS. If you look at the end users: the BSS and the OSS, which are the business and operations support systems, plus the orchestration services. Everything the end user wants comes to the dashboard; they look at the service catalog, as you see, and try to orchestrate a given service. Examples of services are vCPE, as you saw in the earlier session, and vIMS. These are some of the standard use cases, for both fixed and mobile networks, that need to be orchestrated, and they always run through this orchestrator. These are all standard interfaces, as you can see. The idea is that any VNF, managed by any element manager, is driven by the OSS/BSS through the available services, orchestrated down through the NFVO to the VNFM to the VIM, and the actual infrastructure sits down here, in what are called NFVI PoPs. You can distribute across VIMs, you can distribute across VNF managers, and you can distribute across NFVOs.
And if you can distribute NFVOs, then what is the logical view? The logical view always comes from the service orchestration at the top, what we generally call the global service orchestrator. This is the evolving stack, and OPNFV has started building it bottom-up. That means it started with the VIM, our infrastructure-as-a-service OpenStack, which sits inside this box. Now we are moving up, and when we move up, there is a pain point. Even when we were at the VIM level, we had pain points. So what does template distribution need to address? What is our pain point?

I will focus on our pain points now, and you can see them in the MANO space. The MANO space is littered with projects and projects, PoCs and PoCs, and modules. The question is: how do we handle it? We have AT&T, who want their ECOMP. We have Open-O, which China Mobile wants. Then we have Telefonica, which wants Open Source MANO. Everybody wants to do MANO, management and orchestration of their network. And not only that, most of them want to do it through OpenStack, which is a positive sign: at least we have something in common. But then the issue comes down to this: how do we have a standard VNF, the virtualized network function that we provide? Is there a standard format? How do I describe it? How do I onboard it? How do I instantiate it? How does all of this happen?

There are two views. One view is the modeling view, which is graphical. Everybody likes to be cool: drag and drop, and it starts running like nobody's business. But that does not happen, unfortunately, because you need to build the software. It is evolutionary, not revolutionary, unlike drag and drop. So the first way to describe it is through some kind of domain-specific language: a DSL. There are many such languages. I will point specifically at TOSCA, the Topology and Orchestration Specification for Cloud Applications. If you describe the topology of a service for a cloud application, that is TOSCA: a domain-specific language written in some descriptive way. And such descriptive ways we already had: for the W3C we had HTML, then its successive versions, then XML. When you serialize data and describe it, you can describe it in several ways, and in this case it happens to be TOSCA for cloud applications. Then there is the question of rendering: how do you actually write it down? You can put it in JSON format, you can put it in YAML. There are many ways of rendering it; the important thing is the serialization. TOSCA is one of them.

Now you can see why I said the MANO space is crowded. There is Tacker, an OpenStack project that wants to do VNF orchestration. There is Juju, which is used by Open Source MANO. Then there is Open-O, which wants to use both Juju and Tacker as VNF managers and build the NFV orchestrator on top. And to build all this, you need some way to describe it. So our pain point is: how do I describe it? And if I have a distributed system, how do I distribute it? This is a pain point, but it is just one of them. I will show you now what is out there in the market.
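To make the DSL idea concrete, here is a minimal, hand-written sketch of a TOSCA service template serialized as YAML; the node name and property values are illustrative, not taken from any project file:

```yaml
tosca_definitions_version: tosca_simple_yaml_1_0

description: Minimal sketch of a one-node service topology in TOSCA

topology_template:
  node_templates:
    web_server:                  # illustrative node name
      type: tosca.nodes.Compute
      capabilities:
        host:
          properties:
            num_cpus: 1
            mem_size: 2 GB       # resources described declaratively
```

The same information could just as well be serialized as JSON; as Prakash says, the rendering matters less than the fact that the topology is described declaratively.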
And that's the bigger pain point. So here I go to the next slide. When I say template, there are many ways of orchestrating. There is TOSCA, which I described, an OASIS standard; and there are many standards. If I use OpenStack Heat, I describe resources through the Heat Orchestration Template, so you have HOT. If I use the IETF standards, I have YANG, which is a modeling language. Each one fulfills a specific market requirement. For example, TOSCA is for service orchestration. Heat and HOT are for OpenStack resource orchestration, where you describe resource flavors: so much memory, low, medium, high, and so on. YANG, similarly, is more suitable for network orchestration; you can describe networks with it. Then you go to UML: UML schemas are used for application modeling, from the good old Java days; that is how you described applications, and it has also evolved. Then there is the Common Information Model, used for things like describing switching: how you describe a flow. Then you have UML2, which comes from the Eclipse Modeling Framework (EMF), part of the Java world; they have a UML2 way of formatting, a UML2 schema, which is UML with some constraints added to it. Then if you go to Kubernetes and containers, they have their own templating of pods, tasks, and services, which has evolved, and they use YAML; you use those templates to drive the Kubernetes controllers, to set up replication and so on. Then you have the good old Ansible and Salt. Both of them use Jinja2 templates, again templates with some kind of embedded code, which in Ansible's case are called playbooks; Salt has masters and minions (some setups are even masterless), where minions are nothing but agents. Then you have another two very popular tools, Puppet and Chef. Chef is based on Ruby, and Puppet is based on its own scripts, which can also use Ruby and shell scripts. There you have the EPP and ERB templates, Embedded Puppet and Embedded Ruby; the E stands for embedded, because you embed them into what Chef calls cookbooks. Everybody has a different name: one has playbooks, one has cookbooks, one has minions and masters, and the others have their own. So there is a plethora of standards.

With all this confusion, even if I say, okay, I don't want drag and drop, I will just use some template: which template should I use? Why should I use it? And what are we actually trying to do? The basic problem for us in OPNFV is the service. A service is described in terms of some combination of VNFs, network functions. Network functions are nothing but middleboxes: the firewall, the NAT, the DPI, and so on. They are virtualized, and sometimes you even want to disaggregate them. So there are plenty of problems in describing all of this in terms of descriptors, and trying to solve one problem leads to many more. I will continue on that. So here we go: we have the use cases. I will throw two use cases on the floor to describe the problems.
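For comparison with the TOSCA sketch above, here is a minimal, hand-written HOT sketch of the same kind of resource description; the image and flavor names are placeholders:

```yaml
heat_template_version: 2013-05-23

description: Minimal sketch of one compute resource on one network in HOT

resources:
  app_net:                        # illustrative names throughout
    type: OS::Neutron::Net

  app_server:
    type: OS::Nova::Server
    properties:
      image: cirros-0.3.4-x86_64  # placeholder image
      flavor: m1.small            # the "flavor" (memory/CPU size) mentioned above
      networks:
        - network: { get_resource: app_net }
```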
You have seen that we can describe things, but how do we really use that for a given use case? The use case here is a special one. Take the speed of light: 3 x 10^8 meters per second. That is per second; if I take one millisecond, I divide by 1000, and expressed in kilometers that gives around 300 kilometers of coverage. At light speed, in one millisecond, you can cover around 300 kilometers. But if you go over fiber, light is slower, and you cover only around a 200 kilometer radius. And if the fiber path is two-way, a round trip, then you can only go about 100 kilometers out. So basically, if you have a cloud, as you see here, with a radius of 100 kilometers, a diameter of 200 kilometers: if you want something done in one millisecond, which is what 5G is targeting for latency, you cannot really go farther than that many kilometers. That is your limit. Even at the best of the best, you can only do that, because of the limitations of fiber.

So how do we make this happen? There are multiple things. One is the cloud: the cloud is distributed. You have a yellow cloud, a violet cloud, a white cloud. And you can think of domains in different ways; domain is an overused term. If I say domain, what do I understand by it? I will say a domain is a security domain; somebody else will say it is an infrastructure domain. Whatever you conclude, the important thing is that if I have multiple identities, I am talking about security domains. So yellow is one security domain, violet is another; each is one tenant. In OpenStack terms, you call them tenants. If you have multiple tenants, multiple domains, and I need a function, say a firewall, distributed with one part here and one part in the other cloud, how do I ensure that the latency is minimized? We already have the distance, and the physics of light tells you that you cannot do more than about 100 kilometers here. That is one aspect. Add to that the identity checks, and you are delaying things further. The same goes for heterogeneous resources: I may be talking about a resource that is only compute, or about storage, or about network. All of this means you have to describe all types of domains: who owns them, how compute and storage are described, in what units, whether it is one kilo, one mega, one giga, the units of measure, and so on. You have to describe all of that for somebody to be able to orchestrate geographically distributed network functions or virtual network functions. If it fails in one place, it fails all the way through: it has a domino effect. Essentially, what we are trying to address are the problems that cause a function not to be completed within a particular constraint, like latency. That is the key problem for geographic distribution. You may say: zones. Yes, we do have them; I am not saying OpenStack does not. You have zones, regions, and host aggregates that group servers together; Nova does support all of those. But getting the orchestration done within those limits, within the time frame, is the bigger challenge.
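Writing out that arithmetic (taking the refractive index of fiber as roughly 1.5, a standard approximation the talk does not state explicitly):

```latex
\begin{aligned}
d_{\mathrm{vacuum}} &= c\,t = (3\times10^{8}\,\mathrm{m/s})\,(10^{-3}\,\mathrm{s}) = 300\ \mathrm{km},\\
d_{\mathrm{fiber}}  &\approx \frac{c}{n}\,t \approx (2\times10^{8}\,\mathrm{m/s})\,(10^{-3}\,\mathrm{s}) = 200\ \mathrm{km},\\
d_{\mathrm{radius}} &= \frac{d_{\mathrm{fiber}}}{2} = 100\ \mathrm{km}
\quad\text{(round trip: request out, response back).}
\end{aligned}
```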
So how do we address it? That is the one thing I do not know; that is what Ulas should bring to us. Let me continue. Geographic distribution of functions across the cloud: if you have a distributed cloud and you have to distribute the function, yet still meet the limits on latency or bandwidth or whatever, how do you meet them? Still in the first use case: how do I compose it? If I have a VNF made of multiple parts, say a load balancer: load balancing of what, which protocol? If it is a web service, then here is web one, here is web two, and here is the load balancer. Where do I place each piece? Are they in the same geography or different ones? Composition of a service over multiple domains is another challenge. If I have one piece in the yellow cloud and one in the violet cloud, it is distributed, and now it spans different domains, so the composition becomes difficult. How do I describe the composition? How do I orchestrate it? One piece has to be instantiated in the yellow cloud and another in the violet cloud. How do I do it? There is a challenge there.

And not only that: how do I know that this cloud has these resources while the other cloud has different ones? What is the availability? Suppose one site is VMware-based and the other is not: an OpenStack instance may be different from a VMware instance. How do I do it? So that is one of the keys: how do I discover the capabilities, and how do I onboard? Do I have only one service catalog, or two, or five? Are they local, are they distributed, are they global? These things need to be defined; unless you define them, nothing can be done.

Then, if something has to move: say I am on a call and I am moving from one cloud to another. I want my service to keep its low latency, so my application at one edge should move to the other edge. Somebody has to move it. How do I move it? How do I scale? How do I migrate? What if the number of subscribers suddenly increases? Say there is a US debate with a huge crowd: how do I get 50 million people to see that video? So there are scaling issues: scale up, scale down, migrate. These are all pain points that need to be resolved, and this is what we expect some service, a template service, to resolve. We will see how it happens.

Now to use case two. We just described various aspects that were fairly direct. But we saw the MANO stack, right, and what you can describe with it. As a layman, I want to use some service. I am going to say: hey, give me a service that helps me do certain functions. For example, I should be able to talk to my people in India, 10,000 kilometers away, with this and this quality of service, X and Y, and no interruptions. Now the question is: how does that happen? You are only describing the service abstraction, saying I should be able to make a call with so much quality of service, so much quality of experience. But how does it translate into the physical: what kind of network do I need to provide you?
What kind of memory, what kind of compute: how do I do that? There is a gap between the orchestration you describe in abstract terms, which is called intent, and the lower layer. I can say: I want to have this. But unless the intent is translated into something realistic for the machine to implement at the lower layer, it goes nowhere. Plus, controllers are evolving all the time: the SDN controllers as well as your cloud controllers. So you have different clouds, different controllers, and between the source and the destination, say the source is Barcelona and the destination is India, you have different clouds in between. How do the APIs change? If my cloud in Barcelona speaks Spanish and my cloud in India speaks Hindi, what do you do? It is something similar with APIs: if you have different kinds of domains and different specifications for them, going from the abstract down to the physical, how is it possible for anybody to orchestrate? The issue is that everything is domain-specific, and if the domains themselves keep changing their APIs, how do you evolve dynamically and still be able to orchestrate? It cannot run completely wild.

What do I mean? When I move from here to another place and my service has to move with me, I need to know how the place I am moving to relates to where I came from: what is different about the environment there, what kind of cloud it is. If it is a different cloud with a different API, I should still be able to say: I want to scale up, not down. Or: I want to scale out, and my environment here happens to be OpenStack Mitaka, so I want OpenStack Mitaka there too, because even between versions there are differences. To instantiate a VM or a VNF that will serve me, there should be some ability to express constraints. There must be hints. These are the policies that need to describe this. Intent, even though it is abstract, cannot run wild: unless the support is there for that orchestration, you cannot expect it to work. If I ask for a video service where only a voice service is supported, how can I get that service? So you have to indicate the hints somehow. That is what use case two is. Let me move to the next slide. (One second; I think we are pressing the clicker at the same time. Go ahead, you press it.)

So now we have seen that intent is at a high level, and when we want to, for example, scale up and scale down, as I already mentioned, you have to give some hints. If you have 10,000 people watching and 50,000 more come from different places, how do you scale out? Or, once the debate is over and everybody stops watching and shuts down, we have to reduce: we do not want to keep consuming our resources.
So, to optimize resources you need scale-in and scale-out, and the controller can be at different levels. If you have templates to describe it, then unless they give the proper hints, just saying "scale" does not mean anything. Scale up or down, and in what steps? Do you want to go from small to medium to large, or do you only want bandwidth on demand increased and decreased, with nothing to worry about for compute and storage? How do you describe this, and which controller and which API will do it? It is unclear until you provide the proper hints. You cannot have just a wild intent; it has to be a supportable intent, and that support has to be defined if you are being descriptive. It cannot be so dynamic that the system does not understand it. The system should be able to understand it, and the pain point is: how do you make it understand? Is there a way to do it? That is the question. (A sketch of such a scaling hint, expressed as a template policy, appears below.)

Yeah, so this brings me to the stage where I should hand over. You have seen the pain points; now we want to see what the solution for all this is. It all looks broad, and I think these are the critical services we need.

So, Prakash talked about two use cases. One is geographic distribution: you want to orchestrate with one service descriptor that describes everything end to end, but you want to onboard different portions of that service in different domains. We want Domino to be a one-stop shop: you describe the end-to-end service, give it to Domino, and Domino distributes the individual portions to the individual domains and onboards them. The second use case was about keeping the API at a high level so that we don't keep updating drivers continuously; instead, we template our intent. Your high-level API stays high level: scale out, scale in, set up VNF forwarding graphs, change them, modify them. But you prescribe the set of actions you expect from the low-level controllers: what the service continuity requirements are, what type of low-level orchestration needs to happen. That gap has to be templated, and Domino tries to address this second issue as well.

And since templating is so critical, a couple of things have to happen. If we provide a single shop where you pass in your service descriptors and we distribute them to different domains, then first, we must be able to parse that service: we need to understand which individual components are in it. Then we must be able to map them to individual domains. If individual domains support different template languages, some translation has to happen, and hopefully nothing is lost in translation. And then we need to schedule and send those templates to the individual domains. In that space, OPNFV has two projects. Domino is mainly interested in partitioning a service descriptor into individual components and distributing them. The Parser project is mainly responsible for checking and verifying whether your template is consistent, and for translating it; that can be translation from TOSCA to YANG, YANG to TOSCA, or TOSCA to other languages, for example a Kubernetes YAML file.
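Picking up Prakash's point about scaling hints: a hedged sketch of how such a hint can be expressed as a policy in a template. The property names below follow the style of Tacker's scaling policies and are illustrative, not a fixed standard:

```yaml
policies:
  - scaling_hint:                     # illustrative rule name
      type: tosca.policies.Scaling
      targets: [ VDU1 ]               # the node this hint applies to
      properties:
        min_instances: 1              # the "grades" in which to scale
        max_instances: 10
        default_instances: 2
        increment: 1                  # step size per scale-out action
```

Without bounds and step sizes like these, "scale" is not a supportable intent: no controller can know what action to take.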
As for the Parser project, at this point it supports the Heat orchestrator, with TOSCA-to-HOT translation, as well as YANG-to-TOSCA translation. Two other projects in OPNFV are integration projects: the Orchestra and Opera projects are trying to integrate Open Baton and Open-O into the OPNFV platform. So we have both integration projects and feature projects that try to stitch the pieces together. If you have multiple domains, multiple controllers, multiple orchestrators that you want to integrate on the same platform, we are trying, in a joint effort, to solve this problem of how to integrate and combine those different orchestrators. The bottom line is that OPNFV Domino is part of a much bigger MANO puzzle.

So again, to reiterate, I have a couple of slides. The first critical thing for us in Domino is to discover capabilities. Each controller domain should be able to express what it can do for network services, and what we use for capability discovery is policy labels. The way we define a policy label is very TOSCA-specific. If you go into a TOSCA service descriptor, you will see policy types, with property definitions under those policy types. If you define a rule, you say it is of this policy type, and these are the properties, the key-value pairs, that I want to see as part of that rule. We take advantage of that policy section of the TOSCA file: if a domain wants to be able to host a resource, it should subscribe to the specific labels, which themselves are input into the TOSCA template. This way, Domino does not interpret what the labels mean. As long as the different controllers and orchestrators agree on those labels and include them in the service description, we can do the matching: which resource can be mapped to which domain.

The second thing we do is publishing: any orchestrator, any controller, should be able to describe its VNF descriptors or service descriptors and send them to Domino. Domino then creates individual resource orchestration templates specific to the individual domains and distributes them. To give a hint of how, with a little repetition: we start from TOSCA. If an NFVO or the OSS/BSS describes a service in a TOSCA template, we look into two critical sections. The first section is the topology. The topology lists the node types you have and their relations to each other: which node is connected to what. The nodes can be virtual links, connection points, or actual compute units. Once you establish those relations, who is connected to whom, you are describing a large topology. We look at the topology and extract all the nodes in it, and then we jump to the policy rules: there is a policy section in the TOSCA file, with individual rules listed. We parse each rule and look at which nodes the rule targets and which properties are listed in it. For example, you can have a location rule saying: I want this resource to be located on the East Coast.
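As a hedged sketch, such a location rule in the TOSCA policy section can look like the following; the rule name and the property key/value are illustrative (the demo later in this session uses the same tosca.policies.Placement type):

```yaml
policies:
  - east_coast_placement:            # illustrative rule name
      type: tosca.policies.Placement
      targets: [ VNF1 ]              # the node this rule applies to
      properties:
        region: east-coast           # property key/value pair the rule asserts
```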
So you will have a corresponding label for that, and if the rule targets VNF1, every property listed in that location rule has to match. For every property value under a rule, we extract a label. In this example, Node 1 is targeted by two rules, and from those two rules we extract the labels X:Y and Z:A. If a domain wants to host Node 1, it has to subscribe to all of these labels: it has to tell the Domino service, I support all these labels. Then it becomes a possible candidate to host that node type. The purpose of Domino is to collect all the labels announced by the individual domains, look at which policy rules in the service description apply to which node types, and try to match those nodes to individual domains. And if there is more than one candidate, if a resource can be scheduled in more than one location, we try to pack as many resources into the same domain as possible: we want to use the minimum number of locations, or domains, to host that network service. So we also have an implicit scheduler within Domino.

In terms of service mapping, as I said, once you describe the topology in the service descriptor, it can be a quite large topology. If you have two domains you want to send it to, you split that topology graph into two graphs, generate a service template that describes each of those graphs separately, and send them to the individual domains; those domains then onboard the descriptors and instantiate them. The whole cycle, from publishing to pushing, is shown on this slide. This is what we support in the Colorado release: a client can publish a TOSCA file, and we extract the labels, do the domain mapping, partition the template, do the per-domain translation, create a distribution workflow, and send the results to the individual clients. In doing so, we use two critical libraries: the tosca-parser library, and the heat-translator library when a translation is needed.

Now let me go to the Domino demo. In the demo, there will be three Domino clients. One of them is the publisher: it describes a service. The service is very simple; it is actually a VNF composed of two VDUs connected to the same network. The VDUs have individual policies in the service template, and those policy rules are location policies: one says I want to host this VDU in this particular geographic region rather than the other. On purpose, we put in two different policies for two different regions: one resource is targeted at one region, and the other resource is targeted at the other region. And in those two regions we are using two different orchestrators. In one region, Tacker is running: it is one OpenStack installation where Tacker is in charge. In the other, there is only the Heat engine, no Tacker; it is another OpenStack installation. So, two different orchestrators that understand two different templating languages.
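A hedged sketch of what the relevant parts of such a published service template can look like; this is not the actual demo file, connection points are omitted for brevity, and the node, policy, and region names are illustrative:

```yaml
tosca_definitions_version: tosca_simple_profile_for_nfv_1_0_0

topology_template:
  node_templates:
    VDU1:
      type: tosca.nodes.nfv.VDU
    VDU2:
      type: tosca.nodes.nfv.VDU
    VL1:
      type: tosca.nodes.nfv.VL       # the network both VDUs connect to

  policies:
    - region_one_placement:
        type: tosca.policies.Placement
        targets: [ VDU1 ]            # VDU1 pinned to the first region
        properties:
          region: regionOne          # illustrative region names
    - region_two_placement:
        type: tosca.policies.Placement
        targets: [ VDU2 ]            # VDU2 pinned to the second region
        properties:
          region: regionTwo
```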
So although we partition the graph, at the end of the day Tacker will get a native TOSCA file, because it can consume TOSCA, and the other domain will get a Heat file, a HOT file. Let me play the actual video; it won't be a live demo.

The first step is to start the Domino server. Currently we don't support high availability, so there is only one Domino server. Next we start the Domino clients. As I said, we need to start three different Domino clients: one will be the publisher, the others will be the receivers of the service templates. We start the first one; it is registered. We start the second one, and it is also registered; as you can see, it comes from a different IP, so it is a different domain. And the third client starts; that will be the publisher, so it can serve as our NFVO, for example, who wants to onboard the network service. At this step the registrations are completed, and in the next step we will do some label subscriptions. (Let's wait a little bit; I was too lazy, so I am just scrolling through the comments.) Client one subscribes to label one. That label just says: I can host the first region. As you can see, it is a placement label: tosca.policies.Placement is the policy type, and it says, I support the region nova-1. We go to the second client, and the second client subscribes to a second label, which says: I can support the second region, and, by the way, I also understand Heat, HOT templates. It specifies that as well. As you can see, the second client supports Heat and it supports the second region.

In this step, the third Domino agent publishes the end-to-end service descriptor. When we go back, we see that both domains receive their templates: domain one receives its template file, and domain two receives another template file. These are two different templates. And as you can see, this is the server view: it does a translation from TOSCA to Heat for the second domain. Let's check the files now. On client one, we check the first file: as you can see, VDU1 is mapped to this domain, and it is a TOSCA file. And let's check the other file on the other client: as you can see, it is a Heat template, with the particular HOT version that the domain supports, while the other one is the TOSCA template. And remember, these templates define and describe two different VDUs; they are not the same VDU.

Next step: we are done with the distribution, but let's see if we can instantiate those resources in those orchestrators. Let's use the Heat engine in the second domain. As you can see, we use the "openstack stack create" command for this. The stack creation is in progress; we check the status, and it is still in progress. The second time we check the status, we see that it is actually created. Let's check the Horizon GUI view. If you look at the IP address, it ends with .8. We enter the admin console; on the GUI side, we are confirming whether the resource is instantiated or not, while on the CLI side it already says that it is instantiated. As you can see, if you can read it, it says VDU2: the second part of the VNF is created, and it has been active for the last one minute.
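For reference, the HOT file received by the second domain would be along these lines; this is a hedged reconstruction of the kind of output heat-translator emits, with placeholder image and flavor names:

```yaml
heat_template_version: 2013-05-23

resources:
  VDU2:
    type: OS::Nova::Server
    properties:
      image: cirros-0.3.4-x86_64     # placeholder values
      flavor: m1.tiny
      networks:
        - network: { get_resource: VL1 }

  VL1:
    type: OS::Neutron::Net           # boundary virtual link, re-created in this domain
```

Note that VL1 appears here even though it is also in the TOSCA partition sent to the Tacker domain; this duplication of boundary nodes is exactly the assumption discussed shortly.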
Then we go to the other orchestrator, which runs Tacker. Tacker has a two-stage process: first, you onboard the descriptor. The file we received is 6.yaml; for now, we just use sequence numbers to name the files. We create the VNF descriptor. After that, we can actually launch it. First we check the ID assigned by Tacker; then we look for the Tacker command that creates the VNF, the VNF-create command, and we pass it the right ID, which we copy-paste from here. At this stage, we are waiting for Tacker to instantiate the VNF descriptor: it has one VDU, one connection point with one port attached to it, and one network. Let's check the status: it says PENDING_CREATE. The next time we check, it is actually active, and we also have a management IP address given to us by Tacker. The status is active; as you can see, VDU1 is available at this management IP address. Let's ping that IP address. And as you can see, it is instantiated; we are able to ping it. Just to verify on Horizon what is happening, we look at the second machine, which has an IP address ending in .7. It is separate from the previous one, which, if you remember, ended in .8. These are two Horizon instances of two different domains; they don't share any resources. And voila: the VNF with the name "test VNF" that we created is there. VDU1 is running on the Tacker side, and VDU2 is running on the Heat side.

All right. So at this point, in Domino, we are able to do this partitioning and splitting, matching, and distribution. But there is one assumption we currently make about boundary conditions. In this graph, for example, VL1 is a boundary node, and we copy that VL1 into both domains: the two templates that we send to the two domains both include that VL1 definition. This works fine if VL1 is a public network, or if the two domains share the same ID space, so that the network name or ID is shared between them. But if they have different namespaces, then we have a problem, because our VNF, which is supposed to be connected across those VDUs, actually loses that connectivity. So the next step for the Domino project is to create a third template: not just split the end-to-end service, but also create a third template to stitch together the resources instantiated in different parts of your overall distributed cloud system. At this stage, we will probably look first at the ONOS controller and try to use its L3 VPN implementation for that stitching operation. (And I think I pressed the clicker twice. Okay.)

We will also have some API extensions going forward, for Release D of OPNFV, the Danube release. Currently, Domino is stateless: if you publish something, we distribute it and forget about it. In the next release, we will keep state. We will assign unique IDs to templates; if you want to update your service descriptions, you can refer to this ID, and we can propagate the new individual resource descriptions to the domains.
And if we end up scheduling some of those resources to different domains because you changed the service description, maybe you updated your policies, then we also need to go ahead and delete and onboard some of the descriptors in some domains and migrate them to other domains. So our intention for Release D is to have that type of state maintenance: updating the service descriptions and propagating those changes across multiple regions. And obviously, once you have that state, you want to query the templates you published in the past. You want to be able to list what you already published, and to query who received which portion of which template, which orchestrators, so that you can then communicate directly with that particular domain. This is important for the second use case we discussed, the high-level API templating: we distribute a template, but you still want these controllers and orchestrators to communicate with each other directly to do some of the lifecycle management. Domino is out of the loop there. We want to support that type of functionality and get out of the way between the orchestrators: we want to stay in the template distribution and onboarding portion, but we don't want to manage the lifecycle, which is the function of the individual orchestrators.

And that sums it up. Do we have time for questions? We are out of time, but we are here, so you can definitely ask questions. And join the force, I will say; the last message should not be lost. Yeah, this is a young project. We started early in March, and we are definitely looking for more use cases and more contributors and committers for the project. Any questions before we wind up? Or did you understand everything? Okay. Thank you very much, thank you for your patience in hearing us out, and we look forward to working with you in the future. Thank you.