Hello everybody and welcome to this session. My name is Morgan Richomme, I'm an NFV architect at Orange, and I will do the presentation with Arthur Berezin, director of product management for Cloudify. The topic today is vIMS deployment in OPNFV, and by extension some lessons learned and best practices for VNF onboarding on OpenStack.

So, the agenda: first I will give some indication of what a vIMS is. I know that we are at an OpenStack Summit and vIMS sounds really telco, so a few words on vIMS. Then we'll detail why we need an orchestrator. We will come back to OPNFV and Functest, the functional testing project in OPNFV, focus on the vIMS test case that has been fully integrated into OPNFV, and conclude with lessons learned and what the future will be for VNF testing within the OPNFV project and, more globally, on OpenStack.

First of all, what is a vIMS? I don't know if you are all familiar with typical telco architecture. vIMS means virtual IP Multimedia Subsystem. It's a framework that was defined by 3GPP quite a long time ago now: an architectural framework to deliver IP multimedia services. Today it's mainly used for voice-over-IP services; if you have a service provider offering voice over IP, usually it's managed by an IMS solution. It's fully designed, fully defined — there is a lot of standardization behind it and all the interfaces are fully specified. So it's a very complex and rich telco architecture for large-scale telco deployments, including all the usual telco constraints such as lawful interception, emergency numbers, et cetera.

As you can see, I will not detail all the boxes, but lots of people work on that: they detail all the different interfaces, all the different boxes, the protocols, everything. It's really the usual way we do things in telco, where people spend years defining all of that, then the vendors implement solutions, and we run lots of interoperability testing in order to make all these boxes work well together.
So it's still very interesting to consider vIMS, because this core network is used for LTE, the new evolution of the network. It's very well known in the telco world, it's already deployed in lots of telcos, and you have some constraints that are specific to telco, such as lawful interception and emergency numbers. Usually people from IT say, "oh, you just have to get rid of your infrastructure and do everything from scratch," but we have really big constraints that we must take into account, and they have been precisely defined.

So why deploy a vIMS? Because it sounds really NFV. In the OPNFV project, which I will detail a little bit later, we started with very basic tests such as a ping between VMs. But for telco, a ping between VMs does not really make sense, so it was decided to find something that sounds more NFV. And there was an open-source, cloud-ready solution on the market: the Clearwater solution by Metaswitch. It's very well known — if you are used to attending Mobile World Congress or similar events, you have seen lots of demos with the Metaswitch solution. The idea was really to be able to integrate it and to totally automate the solution, from orchestration to testing. This solution is already deployed by some telcos, and we also needed an easy way to orchestrate it, to see whether it was feasible to have a vIMS in a DevOps loop and in CI. And it was possible.

And why on OpenStack? Because in OPNFV, even if it's not mandatory, today in the latest release, Colorado, the 37 scenarios that have been validated are all based on OpenStack as the cloud provider. So OpenStack is no longer an option: OpenStack has been adopted by most of the telcos. In the keynote yesterday morning, NFV was one of the key topics — my colleagues from Germany were on stage showing that they are dealing with NFV. Last year AT&T received the Superuser Award; this year it was China Mobile.
So you can see that telcos are part of the business, and vIMS means something for them. That's a very brief introduction to vIMS, and now I will give the floor to Arthur to explain that we need a vIMS, but we also need a way to orchestrate it. Thank you.

Right, so we've seen this diagram, this nice little diagram, which is very simple, right? Essentially, systems such as an IMS are very complex. They have a lot of components, which essentially come down to the basic components that we all know and are familiar with: virtual machines, virtual routers, maybe some Neutron configuration, maybe some software that we install on top of a virtual machine. Essentially these are repeatable components that we use in an environment, but there's a lot of complexity in creating the right setup and the right deployment for a working, actual environment. So orchestration is really all about taking automated processes.
For example: install a router, configure the router, install a virtual machine with a specific configuration prepared for a specific deployment — for example, add EPA configuration to that virtual machine — those kinds of automated processes. Orchestration takes these automated processes and essentially produces a deployment in a repeatable manner, in a manner that creates the right workflow to bring up an actual working environment. So essentially orchestration is all about creating dynamic workflows based on those repeatable, automated processes. Obviously, if we take a complex example such as an IMS, we break it down into several repeatable components, and essentially we create the dynamic workflows to onboard, to install, and then to manage these applications at runtime as well.

Now, one of the challenges: obviously we can script the whole thing, we can write automated scripts on top of that. But one of the very complicated things about orchestration is making sure that you have the right context for each and every component in the environment. Because if you're installing a virtual machine, the next component that needs to be installed on top of that virtual machine needs to know all the details of that virtual machine. And the next virtual machine — say, another virtual machine that has the router on top of it — needs to know the credentials and the IP address of the first virtual machine that we installed. So we need to pass the context of the whole deployment intelligently between these dynamic workflows that we produce, in order to onboard such a complex environment.

So I'll give a short introduction to Cloudify.
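The context-passing problem just described can be sketched in a few lines of Python. All names here are hypothetical — this is not Cloudify's API, just a minimal illustration of how each provisioning step publishes runtime details that later steps consume:

```python
# Minimal sketch of dependency-ordered provisioning with context passing.
# All names are hypothetical; a real orchestrator does this via its own
# runtime-properties mechanism.

def install_vm(ctx):
    # Pretend we created a VM and learned its runtime details.
    ctx["vm"] = {"ip": "10.0.0.5", "user": "admin", "password": "s3cret"}

def install_router(ctx):
    # The router step needs the VM's IP and credentials from the context.
    vm = ctx["vm"]
    ctx["router"] = {"host": vm["ip"], "configured_by": vm["user"]}

def deploy(steps):
    ctx = {}                 # shared deployment context
    for step in steps:       # steps are already in dependency order
        step(ctx)
    return ctx

ctx = deploy([install_vm, install_router])
print(ctx["router"]["host"])   # the router learned the VM's IP: 10.0.0.5
```

The point is that the orchestrator, not the scripts themselves, owns the shared context and hands each step exactly what the previous steps produced.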
I'll try not to take too much time today. So essentially Cloudify is an NFV orchestration platform. It falls under the service and resource orchestration category, but it is also a generic VNF manager, based on TOSCA. It's fully open source, and we also have a commercial edition based on the premium package. The cool thing about it is that it's highly pluggable — I'll detail that in a bit — but the main idea is that anything you can think of that has an API, a way to connect to it and control it programmatically, we can very easily plug Cloudify into the loop to orchestrate that specific device.

As I mentioned earlier, Cloudify is based on TOSCA. TOSCA has three main parts. The first is the topology: essentially a YAML description of the application topology, which looks something along these lines. Obviously I can have a small unit, or I can have larger, composed blueprints for more complex applications, to describe my whole network service. I simply use a simple YAML DSL to describe my topology. And this is a language, right — TOSCA is essentially a description language, so you can always extend it, you can always introduce new node types. The language itself comes with a set of normative types that I can consume to describe my application, but I can always extend it. What I mean by that is that I can introduce a new node type to expose and express a new component that was not included in the original, generic set of types that come with the language, and then I can compose my application based on these node types.

Now, the second part — the more complex and interesting part, in our view — is the workflows. Workflows are very important; they should be declarative, and what we mean by declarative
is that we produce these workflows and execute them based on the topology. So essentially I have my application description, as in the example here — a very simple application. Obviously each node can be a scaling unit of its own, so I can scale, for example, the compute node for the database, or for whatever else. Then, based on the lifecycle events — the lifecycle operations, sorry — of each node type, we can produce a workflow dynamically to trigger, for example, an install operation for a specific VNF, just for onboarding, right? These are the declarative workflows.

Obviously we also have imperative workflows. You may have your own internal workflow for your specific VNF that you would like to express, so you have the ability to create that imperative workflow as well, with access to the topology. This is something we call the context of the deployment: once you deploy a VNF you have a context, and you can always access its components using simple Python code, in order to, for example, pull out the IP of a database, pull out the IP of a router, or query the number of interfaces of a specific device. These are the imperative workflows.

Now, the third part, to complete the whole picture, is the policies. I can set policies to trigger specific workflows based on events happening in my environment. For example, I monitor the CPU load on my virtual router, and based on the amount of traffic, the CPU load, the memory consumption, et cetera, I can take these metrics and describe a policy saying that once I reach a certain load, I trigger a scale event for that router, right?
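A threshold policy of that kind can be sketched in a few lines. The names and the policy shape here are hypothetical — Cloudify's actual policy engine works differently — but the idea of "metric crosses threshold, trigger scale workflow" is the same:

```python
# Sketch of a metric-driven policy: when CPU load on a node crosses a
# threshold, trigger the scale workflow for it. Names are hypothetical.

SCALE_THRESHOLD = 0.8   # 80% CPU load

def scale_workflow(node):
    # In a real orchestrator this would provision a new instance
    # and steer part of the traffic to it.
    return f"scaling {node}: new instance provisioned"

def evaluate_policy(metrics):
    """metrics: mapping of node name -> latest CPU load sample (0.0-1.0)."""
    actions = []
    for node, cpu in metrics.items():
        if cpu >= SCALE_THRESHOLD:
            actions.append(scale_workflow(node))
    return actions

print(evaluate_policy({"vrouter-1": 0.92, "db-1": 0.35}))
# -> ['scaling vrouter-1: new instance provisioned']
```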
So that policy will simply trigger the scale workflow, and as part of that workflow I will simply steer part of the traffic to the new virtual router that was just provisioned. Those are the policies.

So, to put the whole picture together: I describe my whole application in a simple TOSCA blueprint, and in that blueprint I can include resources. For example, I can have scripts, I can have Ansible components to install specific parts of the application, I can have Chef modules, et cetera. If I choose, I can use an image — or a pre-built image in Glance, if I'd like — depending on how I create the descriptor for my application. Cloudify essentially takes that application blueprint and uses a set of plugins to communicate with the underlying infrastructure to onboard that application. For example, if my application blueprint contains an OpenStack node, the OpenStack plugin has the logic to access the OpenStack API, to trigger the Nova create command and pass it all the needed parameters for it to work successfully — for example creating the Neutron network, creating all the dependent components, and passing all the needed parameters along.

And then the really cool thing we can do is mix and match types, right? We can use an OpenStack type; inside that OpenStack type we can describe a Docker type, for example; and inside that Docker type we can describe deploying an application. And then we can say that the networking piece of that is implemented using an SDN controller, right?
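To make the "mix and match" idea concrete, here is a small sketch — again with hypothetical names and a made-up blueprint structure, not real TOSCA or Cloudify syntax — of dispatching each node of a composed blueprint to the plugin that owns its type:

```python
# Sketch: a composed "blueprint" mixing infrastructure types, and a
# dispatcher that routes each node to the plugin handling that type.
# Structure and names are illustrative only.

blueprint = [
    {"name": "host-1", "type": "openstack.vm"},
    {"name": "app-1", "type": "docker.container", "host": "host-1"},
    {"name": "net-1", "type": "sdn.network"},   # realized by an SDN controller
]

# Each plugin declares the type prefix it handles.
plugins = {
    "openstack": "openstack-plugin",
    "docker": "docker-plugin",
    "sdn": "sdn-controller-plugin",
}

def plugin_for(node):
    prefix = node["type"].split(".")[0]
    return plugins[prefix]

for node in blueprint:
    print(f"{node['name']} ({node['type']}) -> {plugin_for(node)}")
```

This is the sense in which the platform is "pluggable": the blueprint stays a pure description, and each node type is realized by whichever plugin knows how to talk to that infrastructure's API.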
So we can have a description of a network that is controlled by an SDN controller — OpenDaylight, for example — and then we can create the topology for this application; essentially Cloudify talks to the APIs in order to realize, onboard, install and manage that application. So that's a quick intro to Cloudify. Thanks.

So now I will speak a bit more specifically about OPNFV and Functest, and how we used Cloudify to integrate the Clearwater solution. First of all, OPNFV — I don't know if everybody is familiar with it. Basically, the initial scope was to work on integration, because it's an integration and testing project: integration of the best of breed of the open-source world in order to build a telco cloud solution, to build telco cloud scenarios with lots of integrations — different SDN controllers, different features. We put that together, we test it, and we give some confidence in the different integrations we can have. So initially VNFs were out of scope, but of course if you want to stress a system you need some VNFs to test it, and that's the direction OPNFV is going.

So we have integration, testing, and new-feature projects. One feature project was shown in a keynote: Doctor, for fault management. Every time we detect something that is not covered today, we can initiate a project that will try to answer the problem; the other main projects are testing and integration. You can see here the different components we are integrating, distributed among the different projects, tested through CI on a federation of labs distributed all around the world, in order to provide confidence in the different integrations. The project is under the Linux Foundation, and lots of companies are taking part.

So what we do is give confidence in scenarios — that's what I want to say.
That's what I want to say So a scenario two days up and stack plus network controller plus feature plus mode mode meets high availability or non I availability so for a functional testing for example, you have Different tests you run and then for example here you have two scenario You have the onos sfc scenario So you have the onos controller plus sfc atoms in order to have this feature and here you have a noise Dn so it means a pure neutrons with pure open stack with that feature and on hs So that's a dev stack or something similar with different installer and then you have the different test so that's what opnav is doing and Today we have different Combination of course was different installer different tests Different way to do into an automation for the installer. We have fuel from Maranti Sergio a red at to do canonical and compass from way away and for the test project We have functional performance qualification storage different test project that are dealing with specific aspect The third release has been announced on the 22nd of September and we are working on the fourth soon so My project I used to be PTL until two weeks on the functional testing project Was ready to to deal with functional testing in open a bit So does it work does it not work? 
We have internal test cases, such as vPing between different VMs and feature tests, and of course we also massively reuse all the upstream test suites that already exist: we run Rally, Tempest, and the ODL and ONOS test suites, in order to give the integration a global scoring and a confidence level.

For Functest we use a Docker container, so you can use it even outside OPNFV: any OpenStack-based solution can use this Docker image, and it can be used to trigger all the different tests we have today. It includes the tooling, the test scenarios, and the test cases. Functest also produces automatic reporting based on all the different results, as you can see in the picture, and then you get a global scoring.

So now I will come back specifically to vIMS, and that's where open source is really cool. Unfortunately this gentleman cannot be here today, because he's at school. Indeed, the one responsible for the vIMS in OPNFV is an intern at Orange — six months at school, six months with us — and, you know, he is smart, and he was a bit fed up with playing with Lego. So he said, OK, I will play with advanced Lego instead. He looked around: Clearwater is a cool Lego, Cloudify is a cool one. So he made a match, and he's the one who really put all this in motion. It's quite funny, because he's, I don't know, 20 or something like that, and he didn't really ask for a lot of support, because the different Legos were well documented, and he was able to build a complete end-to-end integration and automation of the test case in OPNFV.

So here is the vIMS solution — the vIMS implementation by Clearwater. All the interfaces respect the IMS 3GPP specifications; the implementation is a bit different, and we say it's cloud-ready — for example, it uses a Cassandra ring, a NoSQL database, for the HSS function and things like that. So the implementation is a bit different.
It's really cloud-ready, which is interesting.

And why did we choose Cloudify? At the beginning we tested several solutions. Cloudify was open source, of course — that was one of the conditions — and there are different orchestrators, even more now than when we started. But the integration with OpenStack was pretty clear: no need to spend hours in documentation, the documentation was very good. There was full lifecycle management already integrated into the system. It was based on TOSCA, which we wanted as well, and the plugins, the openness of the solution, were very nice. That's why we — he — decided to go with Cloudify. If you attend the SDN World Congress or similar events, there are lots of demos with the Clearwater vIMS, and you can do it with other orchestrators, but this one was really easy to implement and easy to integrate into the OPNFV framework.

So here you can see an example of the TOSCA descriptor, the blueprint — you can download it if you want; the description is relatively simple. Based on this TOSCA descriptor, we will now just show a step-by-step deployment of the system. In OPNFV we have labs — what we call Pharos labs, Pharos being the name of the project managing all the infrastructure. In OPNFV we have more than 20 labs,
I think, and they are automatically managed by Jenkins. When the installation of the system under test — the scenario — is done in one lab, then we call Functest. Functest will create a Docker container on a machine alongside the system. Then it will ask OpenStack to create the first resources: the user, the tenant, everything. Then Functest will install Cloudify on a VM, and it will download all the TOSCA material in order to be able to deploy the whole vIMS automatically, including the different nodes — instances of Sprout, Bono, Homestead, which correspond to what we used to call in IMS the CSCF, the HSS, all the basic 3GPP stuff.

Then, additionally, it runs in the Functest container a set of signaling tests that Metaswitch developed: we have more than 100 signaling tests exercising the different interfaces. It's purely SIP here — there are no performance or stress tests, but lots of signaling, with all the different scenarios: register, invite, re-invite, call redirection, et cetera. So you are able to automatically run all these tests, provision the database automatically, and run all the testing. And it's run from the jumphost, from something that is considered outside the system, so the system is really seen as a black box and the tests act as a user. When it's done, it collects the results and pushes them to the results API that is used to compute the scoring for the scenario. At the end, the results will look something like this.
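The end-to-end flow just described — create the tenant, install the orchestrator, deploy the blueprint, run the signaling tests, push the results — can be sketched as a simple pipeline. All function names and the test counts here are hypothetical placeholders, not Functest's actual code:

```python
# Sketch of the vIMS test-case pipeline as driven by Functest
# (hypothetical names; dummy numbers for illustration only).

def create_tenant():         return {"tenant": "vims", "user": "vims-user"}
def install_orchestrator(c): return {"orchestrator_vm": "cloudify-manager", **c}
def deploy_blueprint(c):     return {"vnf": ["bono", "sprout", "homestead"], **c}

def run_signaling_tests(c):
    # In reality: 100+ SIP tests (register, invite, re-invite, ...) run
    # from the jumphost, treating the deployed vIMS as a black box.
    return {"tests": {"passed": 83, "failed": 2}, **c}

def push_results(c):
    # Push to the results API used for scenario scoring;
    # here we just compute a pass ratio.
    return c["tests"]["passed"] / sum(c["tests"].values())

ctx = create_tenant()
ctx = install_orchestrator(ctx)
ctx = deploy_blueprint(ctx)
ctx = run_signaling_tests(ctx)
score = push_results(ctx)
print(round(score, 2))   # -> 0.98
```

Each stage only adds to the context built by the previous ones, which mirrors how the real run chains orchestrator deployment, VNF deployment and testing into one scored CI job.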
So here — this is from last June — you have the three steps: the orchestrator deployment time, which is the time we need for the deployment of Cloudify; then the Clearwater deployment; and then the signaling testing. You can see here that the testing was not so good, but it has improved a little bit since. We ran the complete list of tests, and some tests were useless — for example, we don't have a service application on top of this vIMS, but we were trying to test it anyway — so we improved that, and in the next release we exclude those.

That's what was done by Valentin, and it was already integrated at the end of the Brahmaputra release — the release before the September one — but it has been improved since that date. Based on this test case we learned several things, and for the next release we plan to have more VNF onboarding and functional testing of VNFs based on the same methodology that was used for the vIMS.

The main lesson learned concerns the test case itself: testing an individual component is interesting, but for telco we need systems, and we need telco-grade systems. We had a vIMS solution that was telco-grade, and we also had an orchestration tool that allowed us to deploy it very easily. In 40 minutes you have a complete deployment of a vIMS; for people working in operations in telco, that's quite quick compared to what we usually have. We learned lots of things doing this; we also saw some constraints that led to improvements in some installers. And we were able to run functional tests as well.

It's interesting to see that the work done here has been massively reused afterwards by vendors, who use all the TOSCA scripts and the methodology and the documentation that were produced for OPNFV as part of their training.
I know that Mirantis, Nokia and other companies use it as an example. So it was interesting to see that the work of a student who just played Lego with technical building blocks went back to vendors, who sell it to the telcos that created this thing in the first place — it's quite unusual.

For the future of VNF testing, what we believe is that the orchestration battle has just begun. There are lots of orchestrators on the market at the moment, lots of open-source solutions, lots of different options. What we want in OPNFV is: OK, there's a slide battle, but we want to have a reality battle. What I say to people who come saying, "I have a new orchestrator, which is the best in the world," is: just integrate a test case using the same methodology, and show us that it runs in CI. Because once it's in CI, we are able to run the vIMS on all the installers, on all the platforms, and get results — we were really able to distribute it massively across different configurations, with different SDN controllers, and see the potential problems. So what I tell them is: just do exactly as we did, create a test case — maybe not this one, you can select other ones.

For the next release in OPNFV there will be new VNFs. There is a project from the Okinawa Open Laboratory to have a vRouter, and they reuse Cloudify, because it was already there, and I think they also found it very simple. There will be a vEPC from OpenAirInterface, coming with Juju, as a VNF. So other combinations are possible. The idea is to learn by doing, because today it's still not fully clear what the interfaces between the orchestration layers are — ETSI started doing the job, but everything is not finalized. So just do it, learn by doing, show what we can do.

We can also mention that OPNFV organizes a plugfest every six months, where we welcome proprietary VNFs to be tested. We did that for Colorado last time.
There is a plugfest at the beginning of December in Boston this year. And for the future we'll have new open-source VNFs. One of the options will be to add load-testing capability: as I mentioned, for the vIMS it was only signaling, but there is a test project called Yardstick in OPNFV, and we hope that in the near future we will be able to call Yardstick to generate traffic, so that we additionally get some stress tests and some qualification with load testing. And that's it for us. Yeah, pretty much. Thank you. Thank you. Any questions? No question? OK, there you go.

[Question about containers] OK — yeah, I think we cannot make a session without saying "container." We do have a Docker logo, you're right. We didn't try it yet, because in OPNFV, at least, we are not necessarily doing everything with OpenStack; we are validating integration scenarios. But it's true that today all the integration scenarios are based on OpenStack, and there is no scenario with OpenStack plus Magnum and things like that.
It's only pure OpenStack with KVM, it's only that. It's pretty certain that we can also do it in containers, and we will maybe do it in the future, because there will be new scenarios coming that integrate containers — there is already one scenario dealing with LXC and LXD in OPNFV, so we could reuse that. But the vIMS has not been tested there, because you need some adaptation: the automation scripts were done, in a first step at least, to match the default configuration. In OPNFV we are trying to address all the scenarios, so we will have to adapt them to the LXC scenario, and maybe later, if there is Kubernetes or anything dealing with that, we will try to adapt as well, together with the people who come with the scenario. But today I have no example, in all my list of scenarios, of a vIMS deployment in containers. I don't see any major obstacle to that, but we didn't do it yet.

— So we'll be happy to see contributions to these repos, you know, supporting containers and supporting different... Exactly. Yeah, exactly. It's essentially a set of applications: you need to containerize them, and once you containerize them you can orchestrate them. That's essentially, at the end of the day, the story.

Cool. Any further questions? So that's actually a good question: why Cloudify and not Heat, for example? So, Heat is solely focused on OpenStack resources, right?
It is the OpenStack orchestration project. But usually it's very important for you as a user, first of all, to be able to support multiple VIMs, because essentially you want the ability to move your application from one VIM to another; if you have a HOT template, you're pretty much bound to OpenStack. And the second reason is that usually you do integrations with additional technologies: it's not only that you want to orchestrate OpenStack machines or OpenStack resources, you also want to integrate, for example, Puppet or Chef.

— Yeah, for the test it's clear you can use Heat, you can use Chef, you can use whatever you want. But the guy who decided to implement it did it with Cloudify, because he thought it was simple, and he was very happy with all the lifecycle management that was available. It's clear that initially, when Metaswitch chose to join OPNFV and started doing this test case, it was based simply on Heat, and it was not integrated into the CI because it was not finalized. Then we at Orange decided to bring this test case into the release, and the guy who made the implementation just selected his favorite Lego — and this time it was Cloudify. But it works perfectly well with other orchestrators; as I said, Juju also has charms to do it if we want. The battle for orchestration, as I said, has just begun, but what we can say from our experience is that it was quite straightforward to do with Cloudify. And that's why, for the new test cases, as I mentioned, people reuse exactly the same piece of code to redo the same thing: deploy the orchestrator, deploy the VNF, test your VNF. But there are also some test cases in OPNFV that just do the job with Heat, and they work fine as well. There was another one?
Yep. [Question] Could you speak a bit louder, or maybe go to the mic? — My understanding is that when you start your test case, you have a container first, correct? And from that container you first install a virtual machine on top of OpenStack, with Cloudify within it, and then from within that virtual machine, running on OpenStack, you start by configuring the same OpenStack deployment, correct?

— Yeah, we have a tenant. From the Docker container we just initiate the creation of the resources on OpenStack, creating a VM. On this VM we install Cloudify; then we ask Cloudify to consume the TOSCA file, and from this TOSCA descriptor Cloudify calls OpenStack again to create the different VMs within the tenant, organizes all the networking, and handles the deployment of Clearwater. So I think that's what you said, right?

— And would you say that this is the best approach, configuring OpenStack from within OpenStack, or would you prefer, in operation, to have it outside?

— So usually, right, you see the diagram: essentially you have the orchestrator talking to OpenStack, but the orchestrator needs to run somewhere as well. So the best practice — this is what folks are usually doing — is simply to use the same infrastructure, but in a different tenant. Usually they have the admin tenant with the management and orchestration software, and then the workload itself runs in a separate tenant, or maybe a separate region or a separate availability zone, depending on the configuration of the specific OpenStack setup and on how the organization is running their own environment.

— Thank you. Any more questions? All right guys, thank you very much.