Hi everyone. I hope you've enjoyed ServiceMeshCon so far. In this talk, we are going to speak about how virtual machines are becoming first-class citizens in the service mesh. I am Denis Jannot, Director of Field Engineering in EMEA at Solo.io. So at the beginning, everything started with these modern applications, and we've seen the evolution to microservices, and obviously Kubernetes became the most popular platform to run these microservices. But we ended up with two separate worlds. On one side, you have some legacy applications running on bare-metal servers or on a virtualization platform like VMware. And on the other side, the new modern cloud-native applications running on Kubernetes. And then after that, what we've seen is that the service mesh started to become really popular, with this idea that you have on one side a control plane where you define all your policies, and on the other side the data plane where all these policies are enforced. And when we speak about these policies, we mean things like being able to encrypt communication between microservices and being able to get telemetry information. All these things become possible because of the sidecar proxy that is running in each pod, in most cases, on most of the service meshes out there, and they are based on Envoy. And basically, you don't need to do anything at the application level. Instead, you use the service mesh to provide all these capabilities like encryption, telemetry, health checks, and so on. And what's very interesting is that it's now possible to unify these two worlds. What I mean by that is that the legacy applications running on VMware or bare metal can now be part of the mesh. And that's very interesting, because it means that I can now have communication between the legacy applications and the modern applications encrypted, and I can do some kind of canary deployment, for example. We'll see that just here.
You can have the same application running on the legacy environment and on Kubernetes, and you want to seamlessly migrate this application from VMware to Kubernetes. That becomes possible, because it's one of the nice things you can do with a service mesh: you can do a kind of canary deployment, a kind of traffic shift. You say, okay, at the beginning all my requests for this application are sent to the legacy environment, but slowly I start to send, say, 25% of the requests to the same application running on Kubernetes, and I can check that everything works well. And then, when I see that everything is fine, I can migrate the application completely. So I can now do this migration from the legacy environment to Kubernetes, or to containers, without any downtime. I can also get some extra benefits for my legacy applications, even if I want to keep them running on their current platform. I can still get the encryption, like I said before, but I can also get telemetry information and the other benefits that I generally get with a service mesh. There are different service mesh technologies available, but here we'll focus on Istio. If you look at the CNCF survey that was done last year, you can see that Istio is definitely the most popular service mesh out there. I would say it will be interesting to see the results in 2021, and I'm pretty sure it's even bigger now. So we are really going to focus on Istio in this talk. One of the reasons is that it's definitely the most popular one, but the other reason is that it now has really good support for VMs and bare-metal servers. So let's jump directly into the demo and show you how all these things are done. Specifically, what we will do in the demo is use this environment where, as you can see, I'm running multiple Kubernetes clusters using kind.
And on these different Kubernetes clusters, I have deployed Istio on two of them, and I've deployed Gloo Mesh on the third one. I'll speak about Gloo Mesh very quickly later on. What's interesting here is that I will show you that you can test this VM integration using a Docker container as well, and that makes your life a lot easier if you want to put automated testing in place and so on. It's very convenient to be able to simulate this VM as a container. So I'll show you how we start from the Docker container and put in place all the prerequisites to join the mesh. I won't go through all the details, because obviously you can find them in the Istio documentation: if you go to the virtual machine installation page, you'll see that a lot of the things we will do in the demo are described there. The difference is that instead of running these commands in a VM, I will run them in a container, and we have to adjust a few things just to make it work. But the baseline for the demo is there. So let's jump into the demo environment. I have four windows here; you can see one where I will prepare everything for the VM. We will start with an environment where, as I said, we have two Kubernetes clusters, and I've deployed the Bookinfo application on both clusters. We are going to start with this situation where, when I send a request to cluster one and I try to go to the product page, the product page is going to send requests to the different services, and we will play with the details service. So at the beginning, we will have the details service running in a VM, and that's where we will start. We will deploy all the prerequisites so that the VM becomes part of the mesh, and we'll have the product page going to the details service running there. And then later we will show how we can migrate seamlessly to Kubernetes, migrating this app from the VM to Kubernetes slowly, taking a canary approach.
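For reference, a local environment like the one described here could be stood up with kind and istioctl roughly as follows; the cluster names and the choice of the default Istio profile are assumptions, not necessarily what was used in the demo:

```shell
# Three local Kubernetes clusters: two workload clusters and one
# management cluster for Gloo Mesh (names are illustrative).
kind create cluster --name cluster1
kind create cluster --name cluster2
kind create cluster --name mgmt

# Install Istio on the two workload clusters only.
istioctl install --context kind-cluster1 -y
istioctl install --context kind-cluster2 -y
```

Multi-network VM support additionally needs an east-west gateway and a mesh/network topology configuration, which the Istio virtual machine installation guide walks through.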
And then at the end, I will even show you how you can have some requests going to the VM, some requests going to the pod locally, and some other requests going to another cluster; that is made very easy with Gloo Mesh. So let's jump into the demo now. As you can see here, I have my Bookinfo application running, but the details service, which renders this part of the page here, we will make run in a VM. To do that, I'll just jump here, and you see I will call my VM VM1 and use the virtual machines namespace. I will create a service account for this VM, and I specify which networks I am on: network1 is the network corresponding to my first cluster, and the VM network is the network of my VM. That means that I will use gateways to communicate with the clusters. So I just copy and paste here. Then I create a working directory where I will create all the files that need to be transferred to the VM, which in my case will be a Docker container. I will create a namespace for this VM and a service account; the service account that will be used to generate a certificate for this VM also resides on Kubernetes. And then you have this notion of a WorkloadGroup that you can use if you want to dynamically register VMs, but here we are just going to create this YAML and use it as input for the istioctl command line that will prepare all the files we need for configuring the VM. So I'm just going to run this command here. What's interesting is that we can take a look at what has been generated for us, and you see we have a short-lived token. That's a way for the VM to identify itself, to prove that it has access to this service account, and it will be used to get the certificate from the Istio control plane.
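As a rough sketch, the WorkloadGroup being described might look like the following; the names, namespace, and network label are assumptions based on the narration:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: WorkloadGroup
metadata:
  name: details
  namespace: virtual-machines
spec:
  metadata:
    labels:
      app: details
  template:
    # Kubernetes service account used to issue the VM's certificate.
    serviceAccount: details-sa
    # Network label telling Istio the VM sits behind a gateway.
    network: vm-network
```

The bootstrap files (root certificate, short-lived token, mesh config, hosts entry) are then generated with something like `istioctl x workload entry configure -f workloadgroup.yaml -o <output-dir>`; the exact flags vary between Istio versions, so check the VM installation docs for yours.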
And then later on, the certificate will be rotated, and from then on the VM will identify itself with this certificate, not with the token anymore. We also have some information; you can take a look at what you have here, but I won't cover everything. As I said, it's mostly documented, but you can see information about the networks, the name of the VM, the trust domain, and so on. So now I'm going to start the Docker container that we will consider as my VM. You see that what I do here is use the Docker image of the details service; that will be the baseline, and it will be easier for me to run the details service because that's already what this image provides. So I'm going to just run this image. And again, you can consider it as a VM: it's not part of the Kubernetes cluster, it's running as a Docker container, just like kind does. If I do docker ps, you can see I have my three kind clusters, and I have my VM here. So now that I have my VM as a container, I'm going to deploy some prerequisites that are needed for the Istio agent, and I'm also going to prepare the hosts file that I will need so that the VM knows how to contact the control plane; that's the IP of my gateway to reach istiod. Then here I'm going to transfer the different files that I got when I ran the command line earlier, just doing a docker exec to copy some of the files. When I started my container, I mapped this directory here, so I can just copy internally. Then I'm going to deploy the Istio sidecar proxy, so it will be running directly in my container, copy a few other files that I had before, copy the hosts file that contains this entry, and change some permissions on directories. And at that point in time, there is one special tweak. The way it works in recent versions of Istio is that you have a DNS proxy running within the Istio agent.
And there is an iptables rule that redirects all the DNS requests to this proxy, so that it knows about the IPs of the different pods, or in our case the gateway, and so on. But by default, the DNS server is this one, which is a local IP as well, and that prevents this iptables rule from working properly. So I'm just going to change the resolv.conf of this container so that it uses the Google DNS, and then the iptables rules will work properly. Finally, I'm going to start the Istio agent in this container. So now I have my container, which you can consider as the VM, and if everything works well and I look at the clusters entries here, I should see information about how to reach my services running in the cluster, like, for example, if I want to reach the product page. And you see here: if I want to reach the product page, this is the IP and this is the port. As you can see, this is the same as you can see here, because I use the Istio ingress gateway, since my VM is in a different network. We can also check that we can access the product page from the details, which is not really what we want to do, right? Normally the product page tries to reach the details, but this is just to show you that the communication from the VM to the services running in Kubernetes is fine, and you see I have a 200 OK. And everything is secured, by the way; I didn't say that, but everything is secured with mutual TLS. If I go to cluster one and the istio-system namespace, I can see that there is a PeerAuthentication resource that has been created here, which enforces strict mTLS. So here we've shown that we can get strict mTLS and encryption between the VM and the pods when it's the VM that initiates the connection, and we are able to reach the services that are running in Kubernetes. Now we are going to start the details service in this VM.
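The container-as-VM preparation described above could be sketched like this; the image tag, gateway IP, and paths are all illustrative assumptions, not the exact values from the demo:

```shell
# Run the "VM" as a plain Docker container, reusing the Bookinfo
# details image so the service itself is already available,
# and mount the directory of files generated by istioctl.
docker run -d --name vm1 -v "$PWD/vm-files:/tmp/istio" \
  docker.io/istio/examples-bookinfo-details-v1:1.16.2

# Add a hosts entry so the Istio agent can reach istiod through
# the east-west gateway (the IP here is an example).
docker exec vm1 sh -c 'echo "172.18.0.210 istiod.istio-system.svc" >> /etc/hosts'

# The agent's iptables rule redirects DNS to its built-in DNS proxy;
# a loopback resolver in resolv.conf breaks that redirect, so point
# the container at a public resolver instead.
docker exec vm1 sh -c 'echo "nameserver 8.8.8.8" > /etc/resolv.conf'
```

From there, the sidecar package is installed and the Istio agent is started inside the container exactly as the Istio VM installation guide describes for a real machine.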
And we are going to create a Service and a WorkloadEntry so that the pods that are running on Kubernetes know how to reach the details service that is now running in my VM. You see here that the most important part is the VM IP, which is basically the IP of my Docker container, and I can just show you that here. So it needs to know that this is running here. I'm going to just create the Service and the WorkloadEntry. And finally, this is where we start to use Gloo Mesh: we are going to use a traffic policy. It makes our life easier, because I could do that directly with Istio by creating a VirtualService, DestinationRules, and so on, but Gloo Mesh makes that easier for me. So I create what we call a TrafficPolicy, where I say: when I have a request coming from the default namespace and going to the details service on cluster one, I want to send these requests to the VM instead. So the product page will send the requests to the details service on the VM, because the product page is residing in this namespace. I'm just going to create this policy here. And what I want to do as well is a tail -f to take a look at the access logs in the different places. So here are the details pod on cluster one, the details pod on cluster two, and the details service on the VM. Because of the rule I created here, I should now expect the access logs to change here, in this place. So first of all, I'm going to try and see if everything works well: I can still see my details, and I should see the logs here. So that's good, it's going to the VM. I can also go to Kiali, and I can see that it's going there: you see the ratings and you see the VM. So you see the product page is sending traffic to my VM.
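The Service and WorkloadEntry pair being created might look roughly like this; the IP address, port, namespace, and labels are assumptions for illustration:

```yaml
# A regular Service so in-mesh workloads can call "details" by name.
apiVersion: v1
kind: Service
metadata:
  name: details
  namespace: virtual-machines
  labels:
    app: details
spec:
  ports:
  - name: http
    port: 9080
  selector:
    app: details
---
# A WorkloadEntry that registers the VM (here, the Docker container)
# as a mesh workload behind that Service.
apiVersion: networking.istio.io/v1alpha3
kind: WorkloadEntry
metadata:
  name: details-vm1
  namespace: virtual-machines
spec:
  address: 172.17.0.2      # example IP of the container/VM
  labels:
    app: details
  serviceAccount: details-sa
  network: vm-network
```

Because the selector on the Service matches the WorkloadEntry's labels, Istio treats the VM like any other endpoint of the details service.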
And I can also go to Gloo Mesh and see that I have my current policy, which says: when I have workloads here in the default namespace that send requests to the details service, everything goes to the VM. So that's the current state. Now let's say I want to start to send some traffic to the details running on cluster one, so that I can start to seamlessly migrate from my legacy environment to my container. Then I would do something like this: you see I update my traffic policy, and the difference here is that I only send 75% of the requests to the VM, because I start to send 25% to the service running on the cluster. So I'm going to update that, and now I should see some of the requests going here and some of the requests going there. I can go back here and refresh, and I should very quickly see some access logs showing me, you see, here and here, that the requests are now spread across these two places. In Kiali I can also see that it sends some of the requests to the details on the same cluster and some of the requests to the VM. And again, if I go to Gloo Mesh, I see that as well: 75% are configured to go to the VM and 25% to the details on cluster one. And now, what's quite nice is that I can also refresh all of this and create a new policy. First of all, I can say I want to switch completely to Kubernetes, so that now all the requests go only here, to the cluster; they are not going to the legacy environment anymore. Again, I can just refresh here, and I should see my access logs moving just there, proving that everything goes to the right place, as I was expecting. The migration is now done from my legacy environment to Kubernetes. I can see here as well that all the requests now go to the details here; they are not going to the VM anymore.
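A Gloo Mesh TrafficPolicy for a weighted shift like the 75/25 split described here could look roughly like the following; the API group and field names have changed across Gloo Mesh releases, so treat this as a sketch rather than a copy-paste resource:

```yaml
apiVersion: networking.mesh.gloo.solo.io/v1
kind: TrafficPolicy
metadata:
  name: details-canary
  namespace: gloo-mesh
spec:
  # Requests coming from workloads in the default namespace...
  sourceSelector:
  - kubeWorkloadMatcher:
      namespaces:
      - default
  # ...targeting the details service on cluster one...
  destinationSelector:
  - kubeServiceRefs:
      services:
      - clusterName: cluster1
        name: details
        namespace: default
  policy:
    trafficShift:
      destinations:
      # ...are split: 75% to the VM, fronted by the Service created
      # for its WorkloadEntry (namespace is an assumption)...
      - kubeService:
          clusterName: cluster1
          name: details
          namespace: virtual-machines
        weight: 75
      # ...and 25% to the details pods on cluster one. Adding a third
      # destination with clusterName: cluster2 spreads traffic
      # cross-cluster in exactly the same way.
      - kubeService:
          clusterName: cluster1
          name: details
          namespace: default
        weight: 25
```

Gloo Mesh translates this single resource into the underlying Istio VirtualService and DestinationRule configuration on the affected clusters.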
And I can show you a small bonus. Another very nice thing you get with Gloo Mesh is that it makes your life a lot easier when you want to do cross-cluster communication. I could have done an example where I configure a failover, where if I cannot reach the service locally I go to the other cluster, but here, just to make it simpler, what I did is create a traffic policy where I say: 50% go to the VM, 25% go to cluster one, and 25% go to cluster two. So now I spread my requests not only between my VM and my pods on cluster one, but between my VM, my pods on cluster one, and my pods on cluster two, all very easily, just by creating a traffic policy like that. So I just go there, I apply this, and now I can refresh my app, click many times here, and again I can see the spread happening here, and you can see the logs moving everywhere. Kiali will also show you a nice picture where you see all the requests going across all these different environments, depending on how long we wait for the metrics. Normally you should have a cluster-two box displayed somewhere here; it's just not shown right now, so let me refresh a little more, and we should see the details here, the details running in the VM, but also the details from the other cluster, like you see here. It just takes a bit of time to refresh, but you see now I also see the requests going to the other cluster. That's possible because we consolidate all the metrics with Gloo Mesh as well, and we can present them so that today you consume them the way you want, like in Kiali, but tomorrow it will even be integrated in the Gloo Mesh UI. So, as I said, I used Gloo Mesh in the demo to make my life easier for
managing the traffic, but you can also use it for discovery: it discovers all the workloads on all the clusters, so instead of having each cluster discover the workloads on all the other clusters, this discovery is done by Gloo Mesh, which is more secure, and it just makes the other clusters aware of them. You have the failover I spoke about before, you have the ability to have consolidation of the metrics, like I said, but also of the access logs, and you can have a global RBAC where you define who can do what across all these different clusters. We have very nice support for WebAssembly there as well. So if you're interested, you can go to our website and our Slack channel, and you can ping us. Thank you very much for attending this session. I hope it was useful, and I think we now have some time for Q&A.