Hello, all. My name is Alex Nostas, and with me is Remy Vishery. We're technical marketing experts at Nuage Networks. In this demo, we're going to show you how a properly designed SDN solution can help you migrate from monolithic applications to microservices.

We'll start with some information about Nuage Networks. We're a Nokia venture that was born in Silicon Valley, and we're still based there, but with a global team. We're mainly focused on data center networking, mostly in the cloud context. We offer an open, high-performance, and scalable SDN solution that supports any workload, anywhere, over any physical infrastructure. We've also been members of the OpenStack community since the Icehouse release.

Our product, the Virtualized Services Platform, consists of three main components. The first is the Virtualized Services Directory, the management engine; it has a northbound API and is used mainly for defining network policies and enforcing them across all workloads. The second component is the control plane, the VSC, the Virtualized Services Controller. This is based on Nokia SR OS, which is a highly reliable routing engine. The third component is the data plane, called Virtualized Routing and Switching. It's based on Open vSwitch, with some user-space libraries modified, and it distributes routing and switching across all connected nodes.

Now that we've introduced Nuage, let's go over monolithic versus microservices. What is a monolithic application? It's a traditional application that is deployed and maintained as a single piece. There are a few drawbacks, or challenges, with this kind of application. The first is size and complexity: when you need to add a service to such an application, the complexity increases exponentially.
And beyond this increased complexity, the impact of a change in case of upgrades is not mastered, so it's often not well understood. This leads to disruptive updates, which makes a continuous integration process difficult. There are also reliability issues: if there's a bug in one module of the application, the whole application can go down because of it.

As opposed to the monolithic architecture, microservices are modular applications that are deployed as loosely coupled components. The ability to decompose the application into multiple modules leads to easy upgrades, because every module can be upgraded individually. Developers can choose their own technologies; they don't need to stick to choices made by the enterprise at the beginning of the project. There's also independent scaling, meaning we can scale one module without touching the others, so there's no impact on the overall architecture.

So while microservices are the right way to build an application today, the migration can still be a challenging and disruptive process, mainly because of the network, which needs to adapt to this change. The question we should ask ourselves right now is: how do we migrate from monoliths to microservices without service disruption? And this is where I want to hand it over to Remy, who's going to go through that process.

Thanks, Alex. So now I'm going to describe the steps to migrate an application from a monolithic architecture to something more like what we do today with microservices. The starting point is a monolithic application deployed on a bare metal server, composed of a database, an API, and a web UI. The first step will be to decouple the application from the database, and move the application from the bare metal server to a virtual machine on OpenStack.
The next step could be to develop new modules, for example an ordering system for the web app, or a new web UI. And what you want to do is deploy them on Kubernetes, to get more agility and more flexibility in the deployment. But as Alex said, there is a networking challenge: how do you connect everything together? That's where Nuage Networks provides the solution: we can connect any kind of workload, bare metal to containers, containers to virtual machines, and everything works together as if it were on the same network. That's basically what we do.

To expose the services in this demo, we'll use two different types of load balancer. One is the OpenStack Load Balancing as a Service; in this particular case, we'll be using the Radware Alteon load balancer. For Kubernetes, we'll be using an ingress controller based on Traefik.

So I'll just jump back to my laptop to start the actual demo. [Adjusting the screen.] OK, here we are. First, I'll open a few things: our OpenStack controller, and the web application. We're at the first step of the demo, where we have an application deployed on a bare metal server. Everything is on the bare metal: the database, the API, and the web UI. So currently, if I go, for example, to the beers page, it goes to the API and retrieves all the products; if I go to the stores page, it retrieves all the stores from the API as well. As I said, the first step will be to decouple the web UI and the API from the bare metal server and move them to a virtual machine. So what we'll do is launch a Heat template that will deploy the virtual machine.
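The Heat template itself isn't shown on screen, but it might look roughly like this (a minimal sketch; the image, flavor, network, and pool names are placeholders, not the ones from the demo):

```shell
# Write a minimal Heat (HOT) template: one VM, registered as an LBaaS v2 pool member.
cat > app-vm.yaml <<'EOF'
heat_template_version: 2016-04-08

parameters:
  pool_id:
    type: string
    description: Existing LBaaS v2 pool to join

resources:
  app_server:
    type: OS::Nova::Server
    properties:
      image: beer-app-image      # placeholder image name
      flavor: m1.small
      networks:
        - network: app-net       # placeholder Neutron network

  pool_member:
    type: OS::Neutron::LBaaS::PoolMember
    properties:
      pool: { get_param: pool_id }
      address: { get_attr: [app_server, first_address] }
      subnet: app-subnet         # placeholder subnet
      protocol_port: 80
EOF

# Create the stack, passing in the existing LBaaS pool.
openstack stack create -t app-vm.yaml --parameter pool_id=<pool-uuid> app-stack
```

The pool member resource is what ties the new VM's address into the load balancer pool as soon as the server comes up.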
So this is just a script that launches the Heat template and deploys the virtual machine on the OpenStack platform. As you can see, a stack is being created: it deploys a virtual machine and associates the virtual machine's IP address with the Load Balancing as a Service pool. The VM is now running.

Now, since the database is still residing on the bare metal server, we need to connect that bare metal server to OpenStack. Nuage Networks has developed an extension for Neutron to use and configure gateways directly from Neutron. You can select any gateway in your data center and configure bridging between OpenStack subnets and infrastructure subnets, which is basically bridging overlay to underlay. So I'm creating a bridge, selecting the database underlay network, and clicking Update. At this point, Neutron makes some calls to our management engine, which starts creating the configuration for the gateway.

Now the gateway is reconfigured, and we'll check that the application is running. OK, the server is up, and if we open the application, it works. It's exactly the same application, because we didn't add any new module; we just migrated the code from the bare metal server to a virtual machine. The only difference is that the application on OpenStack is accessing the database through the Nuage gateway.

The next step, as I explained before, will be to create new modules: add an ordering system for the web app and change the UI. So we'll deploy some Kubernetes pods and reconfigure the Kubernetes load balancer to expose the new UI and the new API. All right, let's go to Kubernetes. First, I'm going to deploy the new module, the order API service.
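Deploying a module like the order API might look something like this (a hedged sketch; the deployment name, image, and ports are placeholders, not the demo's actual manifests):

```shell
# Deploy the new order API as a Kubernetes Deployment.
kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api-order
spec:
  replicas: 2
  selector:
    matchLabels:
      app: api-order
  template:
    metadata:
      labels:
        app: api-order
    spec:
      containers:
      - name: api-order
        image: example/api-order:v2   # placeholder image
        ports:
        - containerPort: 8080
EOF

# Expose it inside the cluster so the ingress controller can reach it.
kubectl expose deployment api-order --port 80 --target-port 8080
```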
The second one will be the updated web UI. Now that we've deployed the web UI and the new API, we can reconfigure the ingress controller, which acts as the Kubernetes load balancer, to dispatch the traffic inside the Kubernetes cluster. For those who don't know what a Kubernetes ingress controller is: it's just a reverse proxy that does layer 7 content switching; based on the HTTP request, it dispatches traffic to different pods. As you can see, the pods are running. They are connected to the Nuage network subnet, which is also accessible from OpenStack. We can also check that the ingress has been created: we have an ingress that points to our public FQDN.

Now let's go to OpenStack, because we need to reconfigure the LBaaS to segregate the traffic between the new UI, the new API, and the old API. We'll reconfigure the LBaaS load balancer, in this case the Alteon, to dispatch the traffic. We'll create layer 7 policies and layer 7 rules to redirect the traffic to the two different clusters. I'm going to use a script, because the commands are pretty complex and I don't want to mistype anything.

All right. The first step is to create a new policy in position 1, because we want this policy to be applied first; then we'll create a second policy, which will send traffic to the Kubernetes cluster. A policy alone can't do anything, so we need to associate layer 7 rules with the policies. The first rule will redirect /api/v1 to the legacy web app, the one running on the virtual machine. The second rule will redirect /api/v2, the new ordering mechanism we've implemented, to the Kubernetes cluster. And the last one will redirect the UI, the web frontend, to the Kubernetes cluster as well. So as of now, we have reconfigured the LBaaS.
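The script isn't shown on screen, but with the neutron LBaaS v2 CLI, the policies and rules might look roughly like this (a sketch only; the listener, pool, and policy names, and the /ui path, are assumptions):

```shell
# Policy 1 (evaluated first): keep legacy /api/v1 traffic on the VM pool.
neutron lbaas-l7policy-create --name legacy-policy --listener web-listener \
    --action REDIRECT_TO_POOL --redirect-pool legacy-vm-pool --position 1
# Policy 2: send the paths matched below to the Kubernetes ingress pool.
neutron lbaas-l7policy-create --name k8s-policy --listener web-listener \
    --action REDIRECT_TO_POOL --redirect-pool k8s-ingress-pool --position 2

# Rules: /api/v1 stays on the legacy app; /api/v2 and the UI go to Kubernetes.
neutron lbaas-l7rule-create legacy-policy --type PATH --compare-type STARTS_WITH --value /api/v1
neutron lbaas-l7rule-create k8s-policy   --type PATH --compare-type STARTS_WITH --value /api/v2
neutron lbaas-l7rule-create k8s-policy   --type PATH --compare-type STARTS_WITH --value /ui
```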
We have configured Kubernetes. And if we go back to our application... not this one, this one... sorry about that. So that's the new application. We changed the background and added a couple of things, like the ordering mechanism. So we can now order some beers for tonight: we go to the cart, do a test payment, and check out. As you can see, we get a response back from the API saying the order has been processed. That means the container running on Kubernetes has access to the database still running on the bare metal server.

The last step: we still have this VM running on OpenStack, and we can get rid of it. What we can do is decompose the v1 API, which has the product and store resources, into two different services on Kubernetes. That's pretty simple: we go back to Kubernetes and create some other pods, the product API and the store API. If we check the pods, we now have the new pods. The ingress controller reconfigures itself automatically; we don't have to update it, it's already done in the background. The only thing we have to do is go back to the OpenStack controller and reconfigure the LBaaS one more time, to remove the first rule and redirect the v1 traffic to the Kubernetes cluster.

First, because I don't want any disruption of traffic, I'm creating a new rule that says anything that is /api/v1 is redirected to Kubernetes. Then I'll remove the original rule, to make sure that when a new request comes in, it's redirected to Kubernetes. Just to summarize: we're traversing one Load Balancing as a Service in OpenStack, which sends our requests to the ingress controller and into the Kubernetes cluster. Everything is connected into the same global routing domain by Nuage. So now the LBaaS is reconfigured.
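That make-before-break cutover could be sketched as follows (same placeholder names as the neutron LBaaS v2 CLI; the rule ID is elided):

```shell
# Make-before-break: first add a rule so /api/v1 also matches the Kubernetes policy...
neutron lbaas-l7rule-create k8s-policy --type PATH --compare-type STARTS_WITH --value /api/v1
# ...then remove the legacy policy entirely. (Deleting just its rule would be risky:
# an L7 policy with no rules can end up matching all traffic.)
neutron lbaas-l7policy-delete legacy-policy
```

With the legacy policy gone, every request, including /api/v1, falls through to the Kubernetes ingress pool.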
We can go back to the application and see that it still works. But it could still be the old application, so I'll just go back to OpenStack and shut down the VM, to show you that everything is actually running on Kubernetes. Now the VM is powered off. If we go back to the application, it still works: I can still see my products, I can still see my stores, I can still order some stuff. The final step could be to migrate the database to containers as well, like stateful containers on Kubernetes, but we don't have any time; there are two minutes left. So I'll finish with that, and hand it over to Alex to close this talk.

Thanks, Remy. Indeed, it was a great demo. In conclusion, I'd like to say that the evolution from monolithic applications to microservices is a really complex task, and networking is only a small part of it. But it's crucial to have a networking solution that can adapt to all the workloads and keep connecting them at every step of the migration. As we saw in the demo, Nuage VSP can do that: it interconnects all kinds of workloads and can adapt to new use cases.

If you want to know more about this solution, come see us at our booth; it's booth A10, the Nokia booth, just behind. We encourage you to try Nuage. You can do that by going to nuagex.io; it's completely free, and it takes five minutes to register, spin up a Nuage environment of your own, and start testing right away. Another important event: the book signing for DevOps for Networking by Steven Armstrong. Steven Armstrong himself will be signing at our booth tomorrow and the day after tomorrow. We'll be giving away three books a day, first come, first served. Thanks a lot. If you have any questions, we don't have time now (we have 16 seconds left), but just find us, we'll be around.
Find us just after the demo and we'll discuss anything.