Hi everyone, good afternoon. I'm Tomer from the Nokia CloudBand business unit, and I'm here with my friend and colleague Renat. We're going to talk about orchestrating end-to-end network services, heavily using Mistral. We'll talk about the process of network service deployment, and we'll show you a bit of CBND, our network service orchestrator. Renat will start with about 15 minutes on Mistral specifically, and then I'll continue with CBND.

Thanks. Hello again, everybody. Before I get into the details: some of you have already heard about Mistral, and some of you attended my presentations at previous summits, so a lot of this may sound familiar. I've been asking myself all week what else to tell in this presentation, and I decided to use the opportunity to emphasize the most important things about Mistral and discuss them from a slightly different angle. As we keep developing Mistral, we come to understand that some things are not as important as we thought, others are more important, and our vision keeps changing, so I'll try to talk about it from that perspective.

The big picture: what is Mistral? First of all, it's essentially a language for writing workflows. Workflow is a general term, but what we specifically call a workflow is a distributed process that you need to automate. That's a purely technical definition, but I think it describes it best. Mistral is an OpenStack service with a REST API, of course, and you can scale it; it scales pretty well, almost linearly.
It makes it easy to build any kind of automation pipeline, and it allows you to inject arbitrary code snippets into the workflow, into the scenario you want to design. That's the big picture: it's an automation tool that lets you write something in a YAML-based language and upload it to the service, and the service runs it in a scalable, highly available manner.

Now, here's an example that I like very much. I've heard from many people that it's an abstract example with nothing to do with what we do in OpenStack, but I like it because I think it illustrates the problem of automation really well: it makes it easy to understand what kind of scale requires automation. I find it easy to explain to people from telcos what workflows are for, and sometimes difficult to explain to people who are not from telcos, but everybody knows what parallel computing is; everybody has probably heard of frameworks like Hadoop. Basically, if we have a large amount of data, produced somewhere at CERN for example, and we need to process that data somehow, the typical approach nowadays is to parallelize the whole huge computation. We can't use any single server, even a really powerful one, because it would take many, many years to complete the task. So people try to parallelize: they temporarily acquire computational resources just for this task and then release them. And it all comes down to using clouds. More specifically, what we would do here is allocate a number of virtual machines and configure them somehow.

Then, once they're all configured, we split the whole computation into relatively small pieces, each virtual machine does something useful, and we aggregate those small results into the final result. After that, we may want to build a report on top of what we've done, and finally notify some human that the whole process is completed.

By the way, some of these steps can be done by other tools; it doesn't have to be Mistral. As you can see on the slide, part of it can be done by Heat, which is totally true. What I'm trying to say is that even if you use Heat or something else to simplify parts of this huge process, which can take days, even weeks or months, you still need to think about what happens if a process like this has been running for a week and then something breaks. Imagine trying to do something like that just by writing Python code: if something breaks, you never know what's left. You have to look through tons of log files, and they don't always contain all the information you need. That's the challenge here, and the value of a workflow lies exactly in representing the whole process as a workflow, which is a graph of tasks where every task has a state; that's why I said a graph of states. So we would represent the whole thing as a workflow and make it, as I call it, stateful. This makes a really big difference compared to other automation tools on the market, and state is the key thing here. For example, you see these lines on the slide: my virtual machines from number 2 to number 50 are fully configured, and it's all fine.
But in one of the workflow branches, we had a failure. What the workflow allows you to do is start again from that same point once you fix the problem manually. You don't have to restart the whole process, because that's just too expensive. That's the value of the workflow.

Concretely, it's all YAML. When you start writing your workflows, it's YAML, and this slide is a quick example of a real working workflow. I'm not going to talk too much about the language itself, because while it's pretty simple, the language is a whole different story; it would take another hour just to cover it. Even though I believe it's easy to pick up, it would probably take half a day to learn all the features we have. And by the way, simplicity and conciseness of the language was one of the goals we had in mind when we started working on Mistral.

So, to summarize what I said: state is the key thing when it comes to describing what a workflow is, and it makes a huge difference. As one of our contributors said in a previous session, writing a workflow is honestly not easy. It's much easier to write a Python script, or whatever other script, because a lot of people have those skills; from a design perspective, a general-purpose language is much easier to use. But from an operational standpoint, when one of those Python or Bash scripts breaks, you're left with a mess: you have to figure out what exactly is configured and what is not. The key thing here is exactly state. State also enables asynchronous processing and, like I said, effective error handling.
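The parallelize-and-aggregate pipeline described above can be sketched in Mistral's v2 YAML workflow language. This is a sketch, not a tested workflow: the actions under `my_actions` are hypothetical custom actions (a real workflow would call the OpenStack actions shipped with Mistral, such as `nova.servers_create`), while `std.echo`, `std.email` and the `with-items`, `retry`, `publish`, and `on-success`/`on-error` constructs are real parts of the language.

```yaml
---
version: '2.0'

process_large_dataset:
  description: >
    Allocate and configure VMs in parallel, run one computation
    chunk per VM, aggregate the partial results, then notify a human.
  input:
    - vm_count: 50
    - notify_email

  tasks:
    create_and_configure_vms:
      # 'my_actions.*' are hypothetical custom actions, for illustration only.
      with-items: i in <% range(0, $.vm_count) %>
      action: my_actions.create_and_configure_vm index=<% $.i %>
      retry:
        count: 3        # if a branch fails, retry it up to 3 times...
        delay: 10       # ...with 10 seconds between attempts
      publish:
        vm_ids: <% task(create_and_configure_vms).result %>
      on-success:
        - run_chunks
      on-error:
        - report_failure

    run_chunks:
      with-items: vm_id in <% $.vm_ids %>
      action: my_actions.run_computation_chunk vm_id=<% $.vm_id %>
      publish:
        partial_results: <% task(run_chunks).result %>
      on-success:
        - aggregate

    aggregate:
      action: my_actions.aggregate results=<% $.partial_results %>
      publish:
        report: <% task(aggregate).result %>
      on-success:
        - notify

    notify:
      action: std.email   # a real standard Mistral action
      input:
        to_addrs: [ <% $.notify_email %> ]
        subject: 'Computation finished'
        body: <% $.report %>
        from_addr: 'mistral@example.com'     # placeholder values
        smtp_server: 'smtp.example.com'

    report_failure:
      action: std.echo output='A branch failed; fix it and rerun the failed task'
```

Because every task's state is persisted, a failed execution can be inspected and, after a manual fix, continued from the failed task rather than restarted from the beginning.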
Some other reasons to use workflows: because of state, again, they're scalable. Everything is stored in persistent storage, so even if one of the nodes of the Mistral cluster crashes, everything keeps working. Losing a Mistral node doesn't mean losing a workflow, which is also really important, and that's totally different from just running Python code. And getting back to my previous picture, with multiple workflow branches running in parallel: workflow technology lets you do things in parallel quite naturally, without you having to worry about the parallelism yourself, and it also lets you synchronize your multiple computational branches easily; there are dedicated constructs for that in the workflow language.

That was a quick introduction to what Mistral is; I think it's enough for now. I just want to mention a few things we achieved in the Newton cycle, which might be interesting for those who have been following Mistral closely. The major achievement in Newton is performance and stability: Mistral is now a couple of orders of magnitude faster than it was four or five months ago, and we've done a number of things to make that happen, because for our use cases, and Tomer is going to tell you more about this, it's critically important how fast and how reliable it is. Another long-requested feature is multi-VIM support, thanks to our Hungarian team. Essentially, you can now have a single Mistral installation that works with multiple clouds at the same time. What's still missing, to be honest, is workflows that involve multiple clouds within one workflow, but you can run one workflow on this cloud and another one on that cloud without reconfiguring Mistral at all. That's critically important for some of the use cases we have at Nokia. We also now have integration with the Keycloak authentication server, so instead of Keystone we can use Keycloak. And we can now use Jinja2 as an expression language, for those who don't like YAQL or can't use YAQL for some reason; Jinja is a well-known expression language, and now you can use it. Like I said, multi-VIM support is one of the critically important features we needed to add to Mistral, and this slide is a small illustration of that.

As for what we're going to do next in Mistral development: what we're really missing, I believe, is some kind of visualization. Primarily, people are interested in a visualization of how the process is actually running. If you've started something that takes a long time to complete, it would be really cool to see it visually, so you can see what's completed, what's not, what's still planned, and so on. We also want to make a lot of usability improvements, and we're designing a new version of the API and the command-line interface; don't confuse those with the language itself. The language is stable, and we're not going to make any non-backwards-compatible changes to it. We also need to improve scaling, because right now it works really fast with one node, but we have some limitations when it comes to using multiple engine nodes. That's pretty much it for this part. Here's a list of achievements in terms of who uses Mistral for something real: a list of real usages, and it's a very small part of the whole list we have in mind, but those are the use cases we're kind of proud of. So I think this part is over, and I can hand it over to Tomer. Thank you, Renat.
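As a side note on the two expression languages just mentioned, here is a minimal sketch of how they differ inside a workflow: YAQL expressions use `<% ... %>` and reference the workflow context as `$`, while Jinja2 expressions use `{{ ... }}` and reference the context as `_`. `std.echo` is a real standard Mistral action; the workflow itself is illustrative.

```yaml
---
version: '2.0'

expression_demo:
  input:
    - user_name
  tasks:
    yaql_greeting:
      # YAQL style: <% ... %>, context variable is '$'
      action: std.echo output=<% 'Hello, ' + $.user_name %>
      on-success:
        - jinja_greeting

    jinja_greeting:
      # Jinja2 style: {{ ... }}, context variable is '_'
      action: std.echo output="Hello, {{ _.user_name }}"
```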
OK, so I'm going to talk about CBND, our NFV orchestrator. What we see here is the ETSI NFV MANO architecture. At the top you can see the NFV orchestrator; this is what we're building. The NFV orchestrator talks mainly with the VNF manager, the component that manages a specific VNF. It also needs to talk to the VIM itself, mainly for networking purposes. It has a network service catalog, and of course, being an orchestrator, it can orchestrate the whole network service. In CloudBand we have three main products following this architecture. The product I work on is the NFVO, which automates all the operations related to a distributed, multi-tenant, multi-vendor network service. We have CBAM, the CloudBand Application Manager; that's the VNFM. And we have the infrastructure, CBIS; that's the VIM, based on a Red Hat distribution with additions. But the NFVO itself is not limited to CBIS: it can work over any infrastructure, any VIM, not necessarily OpenStack, and it can talk to any VNF manager in the market.

End-to-end service delivery usually starts with some business system, an OSS/BSS, which triggers the deployment of a network service. The request goes to our orchestrator, CBND, which starts to actually deploy the network service. CBND talks to the VNFMs; each VNF can be managed by a different VNFM and can reside on a different VIM, in a different geography. So the NFVO talks to the VNFMs and starts the deployment of each VNF. The NFVO also takes care of all the components of the network service that are not VNFs, such as physical network functions, usually at the edges of the network service, and everything else the network service needs; we'll see that in a minute: all the forwarding paths, virtual links, et cetera.

So the NFVO, again, has a network service catalog and automates the management of network services. Our design is that everything is pluggable. The SDN controller is one kind of plug-in; right now we're working with Nuage. OpenStack is a plug-in, and the VNFM is a plug-in. So if you want to integrate a different VNFM, or work with a different VIM, that's possible. The NFVO is responsible for the lifecycle of the network service: deploy, scale, et cetera. It also handles the forwarding paths, the paths between the VNFs themselves, again with multi-geography and multi-VIM support.

This is how a network service looks. Basically, we have the VNFs; usually at the edges you will also see PNFs, which are physical network functions. Here we have three VNFs, and on each VNF there is at least one connection point; the simplest kind of connection point is a vNIC. There are virtual links, the logical connections between the VNFs, and there is the forwarding graph, which is actually a definition of the traffic flow between the different VNFs. A forwarding graph can be based on, say, the port: if traffic arrives on port 80, you may want to route it through a load balancer or a firewall, and if it goes to a spam filter, the port will probably be different. It can also be based on a business policy, like what the customer paid for and which services were purchased.

Now the terminology. TOSCA, and I believe some of you are familiar with it, stands for Topology and Orchestration Specification for Cloud Applications, and it's written in YAML. An NSD is a network service descriptor; basically it's a TOSCA template packed in something called a CSAR, a Cloud Service Archive, so it's the template itself together with a descriptor.

So what do we do in CloudBand Network Director? What's our flow? We start with a TOSCA template. We know how to parse and process the TOSCA template; we have a quite sophisticated parser.
We understand all the relationships and the intrinsic functions, and we do a lot of validation. Once the TOSCA template is parsed, when someone starts to deploy a network service, we build something called an operation execution graph. This is a graph based on the TOSCA interfaces and the operations inside those interfaces; I'll show you a few examples in a minute. And probably the most important part is the Mistral workflow: we generate code, we generate a Mistral workflow. From the point the workflows are ready, we just let Mistral do everything, including our own housekeeping, like creating jobs in the system and updating statuses. So Mistral takes end-to-end responsibility for the full network service deployment.

OK, now a bit about TOSCA; I'll try to do it quickly. What's important in TOSCA? Probably the most important things are node types and relationship types. The instantiations of a node type and a relationship type are a node template and a relationship. For example, a node type can be tosca.nodes.Compute, and the node template is the actual compute you want to boot, with a name like compute_1.

Here's a simple visualization, which I took from the TOSCA document: each box here is a node template, so there are three node templates, and you can see that each one has properties. At the bottom you see a server; on top of it there is a database, which is hosted on a container, and the container is hosted on the compute. On the right side, these orange arrows are requirements, or relationships; that's the same thing. The green parts are what TOSCA calls interfaces, and those hold the actual operations that happen during deployment: you can run a script, or in our case a workflow. That's the actual executable. As for relationships, we'll talk about them in a minute.
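The pattern just described (node templates with properties, a requirement pointing at the hosting node, and an interface holding the executable operations) can be sketched in TOSCA Simple Profile YAML. The script paths and the `db_root_password` input are hypothetical; `tosca.nodes.Compute`, `tosca.nodes.DBMS`, and the `Standard` lifecycle interface are normative TOSCA types.

```yaml
tosca_definitions_version: tosca_simple_yaml_1_0

topology_template:
  inputs:
    db_root_password:
      type: string

  node_templates:
    server:                         # node template of type tosca.nodes.Compute
      type: tosca.nodes.Compute
      capabilities:
        host:
          properties:
            num_cpus: 2
            mem_size: 4 GB

    dbms:                           # hosted on 'server' via a HostedOn relationship
      type: tosca.nodes.DBMS
      properties:
        root_password: { get_input: db_root_password }
        port: 3306
      requirements:
        - host: server              # the orange 'requirement' arrow
      interfaces:
        Standard:                   # the green part: executable lifecycle operations
          create: scripts/install_dbms.sh        # hypothetical artifacts
          configure: scripts/configure_dbms.sh
          start: scripts/start_dbms.sh
```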
But there are relationships derived from DependsOn, and ones that are not derived from DependsOn, and then you have to compose your graph a bit differently.

OK, so the CSAR, like I said, is a ZIP archive that packages the NSD. There is a JSON manifest file that points to the artifacts; you can see the artifact name, main.yaml. That's the network service itself. We can have more than one network service in a CSAR, and CBND itself versions the CSARs. So if you go to our application catalog, I'll show you an example: we have an application catalog here, and you can see that for this specific network service there have been three versions. You can download the NSD, upload new updates, whatever you want. So that's the CSAR.

Now, TOSCA interfaces. Basically, for each node type there is an interface called Standard, and for each relationship there is an interface called Configure. When we talk about deploy, each node type has three operations: create, configure, and start. And each relationship between nodes adds another seven operations, which can be implemented or left empty. That's the Standard interface, the essential normative operations that TOSCA nodes may support. I must say that part of the operations are not always implemented. If you want to create something, say a database, the create operation may be backed by some script or workflow, while the configure may sometimes be empty. And the Configure interface, on the relationship, gives you another set of operations. Together these operations cover deploy and undeploy.

Here's a small example of the execution graph we generate. There are three nodes here, with DependsOn relationships between them. We take all the operations from the nodes themselves and build an execution graph. In this execution graph you can see there is a firewall, an anti-DDoS, and an anti-spam VNF, and what we do in each operation. We start by creating them in parallel: the anti-DDoS and the firewall have no dependency between them, so they can go separately. The anti-spam depends on the firewall, so those need to be executed one after the other. That's one example. Here's a different example where there are no dependencies, so things can be done with a higher degree of parallelism. That's our execution graph.

I want to show you something: a typical network service can generate a graph of this size. The graph has a lot of nodes. After we generate it, we run another pass to remove all the empty operations, but it's still quite a heavy graph. And consider that if you create your own node types, or implement all the operations in the interfaces, Mistral is going to work really hard.

Another thing to show you: this is how a network service looks in our system. This is the network service part of the CSAR, not the manifest but the network service itself. You can see the input parameters and the VNFs themselves; you see here there's a gate VNF. There are PNFs, physical network functions, which we usually need to configure as part of deploying the network service, and more VNFs. You can also see all the forwarding paths here; that's the order in which traffic flows between the VNFs. And there is the policy, which defines, say, port 22 or port 80, so the traffic flows based on the port. And then we deploy everything. Let me show you an example: this is a deployed network service.
This is our demo environment from the booth downstairs. For example, this is a network service: you can see we have two PNFs here, the first and the third, and two VNFs. You can see the forwarding path here, the traffic flow; everything is running, and it's all defined in the Nuage SDN topology. This is the Vitrage graph of our network service: you can see there is a network service here, and we have the VNFs. In this example we're using Murano as our VNFM, so it's Murano over Heat; we talk directly to Murano and not to CBAM. Nuage also plays a major part here, and there are the PNFs. Since it's a Vitrage graph, if something goes wrong, you will immediately see it. Besides that, we have the output of the process. There's not much output here, but we expose everything from the Mistral workflow back to CBND, and we also sometimes expose things from the VNFs themselves, like outputs of the Heat templates. We have the plugins here that manage all the resources: the VIMs, where we can see the status of each VIM, and the VNFMs and SDN controllers.

I'm almost done; I just want to show you two more things. This is the workflow we generate during the process. You can see it's readable, but it's auto-generated; basically we generate pretty heavy workflows. Some of them do, like I told you, some kind of housekeeping, updating the model in our database or servers; some of them talk to a plugin layer that talks to the SDN; some talk to the plugin layer that talks to the VNFM. So they're pretty heavy. Like Renat said, we're looking for a way to visualize the execution and somehow monitor what's going on with a workflow execution, so we're working on something called CloudFlow. This is something we plan to contribute soon.
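Going back to the firewall / anti-DDoS / anti-spam example from the execution graph, the shape of workflow that gets generated can be sketched roughly like this. The `create_vnf` sub-workflow name and its parameter are assumptions for illustration, not the actual generated code; the point is only how DependsOn edges in the graph turn into `on-success` edges in the Mistral workflow.

```yaml
---
version: '2.0'

deploy_network_service:
  tasks:
    # No dependency between the firewall and anti-DDoS creation tasks,
    # so Mistral starts both of them in parallel.
    create_firewall:
      workflow: create_vnf vnf_name='firewall'   # hypothetical sub-workflow
      on-success:
        - create_anti_spam

    create_anti_ddos:
      workflow: create_vnf vnf_name='anti_ddos'

    # The anti-spam VNF depends on the firewall, so this task is only
    # reachable via the firewall task's on-success edge.
    create_anti_spam:
      workflow: create_vnf vnf_name='anti_spam'
```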
This will improve, I'm sure, but here's a preview. It shows the different Mistral tasks and what happened: the green arrows are the on-success transitions between tasks, and the red ones are on-failure. So it shows what happened between the tasks; if one succeeded, the chain continues. You can also see some joins and forks here, I think.

One last thing: this is the Mistral workflow list. If we look at the workflow list on one of our test machines, the top 10 or 12 are what we call built-in workflows: a set of workflows we ship with the product, to create a network resource, to create a VNF, to delete a VNF, and so on. All the workflows after that are the generated ones. And there's no problem uploading your own workflows as part of the CSAR upload: say you have a new router that you want to configure, you can supply a Mistral workflow for that. A typical network service deployment generates a lot of load on Mistral, so like Renat said, we did a lot of performance improvement there. That's it; I'll finish here and leave five minutes for questions, if anyone has any.

I have a question regarding support of multiple infrastructure managers, multi-VIM, in Mistral. Deployment-wise, do you have to install Mistral on each VIM, or is it sufficient to install it on, let's say, a master region, and then it talks to the other regions?

You're right, the question is about Mistral itself, whether we need Mistral on each of the VIMs. Like I said before, there is some support right now for multi-VIM. It basically allows you to have one Mistral instance, and you can work with multiple VIMs. The only limitation is that a running workflow cannot include calls to different VIMs within the same workflow.

Okay, and where does Mistral get the information about the multiple VIMs? Is it parsing the regions?

Well, that's up to the system that calls Mistral. When you need to run something on a certain VIM, you have to provide a number of parameters so that Mistral knows where to direct its OpenStack actions.

Okay, so basically if you supply the --os-region parameter to your CLI, it will pick up the region name and work with that region? Is it that simple?

Maybe I'll take the mic. If you look at the Mistral help, you will find several parameters starting with target_, or os_target_; these are the parameters you have to fill in, and then Mistral will target the cloud you want to use. Yes.

Okay, so thank you all for coming. Thanks. We'll be here if you have any questions.