Welcome, everyone, to today's session on virtual machines and service mesh in the enterprise. We will give you much more than five things you need to know, but we hope you can pick out the five that matter most for your environment, your service mesh implementation, and your applications, and take them back and apply them in your architecture. My name is Peter McAllister. I work for Tetrate. If you don't know Tetrate, we are a leader in the service mesh space; we provide a platform to run your service mesh across multiple clusters and on-prem environments. In this session we will first cover how a VM joins a service mesh, what concepts we use, what architectures we see in enterprise environments, and how we address them. At the end, we also want to go through a demo to see how a VM is onboarded and how an application on a VM works after onboarding. So, let's spend a little time on basic concepts before the hands-on part. Everyone starts with the question: why? Why would we add a VM to a service mesh? Hopefully, if you're in this session, you already understand the benefits of a service mesh. I will touch on them a little later, but in general, people asking this question already know why they need a service mesh. So the next question is: why do you need a VM in the service mesh? The answer, mostly, is that you may have a traditional application in a monolithic form factor that you want to break into microservices, and that is work that cannot be done in one day. It takes months, maybe even years, to migrate all your applications. The microservices idea is to start small and extend: you start with a few services running in Kubernetes pods while the rest of your application is served by VMs. Or you may have a service that runs in Kubernetes and on a VM at the same time; for load balancing, failover, and other reasons, you may have two different form factors running the same application.
As for the service mesh advantages: of course, you can just run the two instances independently and use load balancers, but the question becomes, what if I want to benefit from service mesh concepts? First of all, security. With traffic encryption, you don't need any extra VPN solution to tunnel traffic between pods and VMs. You can also verify that requests come from a valid source and respond to them only if they're validated. You can apply different rules around your traffic patterns, which is important. And finally, for troubleshooting and performance analysis, you need a single picture of the application, no matter whether it runs as microservices or in a VM form factor. How do we look at this in terms of technology? We try to treat the VM as just another pod: the VM runs the application and it runs a proxy, all traffic goes through the proxy, and the proxies communicate with each other on behalf of the applications. The application, in theory, should not be exposed to the outside world at all. That is a security requirement, and there is actually a new paper, written by one of my Tetrate colleagues together with other members of our community, that covers these concepts of securing an application behind a proxy. If you think of a pod, you never reach the application directly; you always go through the proxy once the application is onboarded to the service mesh. At the same time, even if we treat the VM as a pod, we cannot treat it entirely as one, because technically, at the next level down, you cannot run a VM inside Kubernetes. It runs on a separate piece of infrastructure. So into play come all the things you wouldn't worry about within a single Kubernetes cluster: networking, firewalling, and routing. In addition to that, you have to think of the application as running in a single namespace. You cannot say, "my VM is completely independent; I can make it a member of multiple namespaces."
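The security properties I just mentioned, encrypted traffic and accepting only validated callers, map onto standard Istio policy objects. Here is a hedged sketch, not taken from the demo environment; the namespace, service account, and app labels are hypothetical:

```yaml
# Require mTLS for every workload in the namespace, so pod-to-VM
# traffic is encrypted with no extra VPN tunnel.
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: bookinfo        # hypothetical namespace
spec:
  mtls:
    mode: STRICT
---
# Only respond to validated callers: allow the productpage
# service account, reject everything else at the proxy.
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: ratings-allow
  namespace: bookinfo
spec:
  selector:
    matchLabels:
      app: ratings           # hypothetical workload label
  action: ALLOW
  rules:
  - from:
    - source:
        principals: ["cluster.local/ns/bookinfo/sa/bookinfo-productpage"]
```

Because the VM's proxy holds a mesh identity just like a pod's sidecar, these policies apply to VM workloads the same way they apply to pods.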
Namespaces are a very interesting concept in Kubernetes: a namespace limits and specifies the rules around your workload. If you get a conflict between workload specifications from different namespaces, it causes serious problems that you want to avoid at all costs. Additionally, to detect where application traffic is going and which application it belongs to, the workload needs a namespace attribute, or namespace label. Moving along: what we see in real life is many different combinations and architectures. Your VM, as we already discussed, runs on a different platform, right? It's not part of your Kubernetes cluster. But what if it's behind a firewall? What if it's behind a load balancer? You address the VM by IP address, but what if it's behind an AWS load balancer? AWS load balancers don't give you an endpoint IP; they only publish an FQDN. Now you have an FQDN that differs from the FQDN of the application running on the VM, and the question is how we are going to authenticate it, because the VM's certificate doesn't match the AWS FQDN. Or what if you want to run multiple applications on one VM? How will the proxy figure out whether traffic belongs to application number one or number two? How do we collect metrics around this traffic? How do we provide security around these applications that we want to keep separate? All these different scenarios come in very, very different shapes. There is no single answer for how you would add a VM to your service mesh; it depends on how your service mesh is built, how your VMs are hosted, and what the connection between them is. But whatever the scenario, to come back to my earlier comment, the VM has to belong to a namespace in Kubernetes, and it cannot be shared between different namespaces. So the question comes: what do I need to do to onboard a VM? You have to create three objects. Object number one is the WorkloadEntry, and if you're not familiar with it, a WorkloadEntry provides a description of the VM.
So: the IP address of the VM, the labels of the VM, and so on. The WorkloadEntry is the object we actually use to bootstrap the VM. Data from this object, together with certificates and the wider mesh configuration, gets transferred to the VM. The VM gets bootstrapped, connects back to istiod, and gets authorized there; a certificate exchange happens, and the VM becomes part of your service mesh. You need two more objects here. One is the Sidecar, the one in the middle. The Sidecar basically defines how your application and its sidecar proxy communicate: what ports they listen on, what ports they send traffic to, which namespaces can call this workload, and which namespaces the workload can reach out to. All of these things are specified in the Sidecar. The last thing you need to create is a ServiceEntry. The ServiceEntry tells Kubernetes pods where they can find your VM, or workload endpoint: it basically has a service name and a pointer to your workload. The Sidecar and ServiceEntry can be replaced, changed, and modified at any moment, and changes are always applied immediately, or almost instantly, across your Kubernetes infrastructure. With the WorkloadEntry it's a little different. As I mentioned at the beginning, its data is part of the dataset that gets transferred to the VM for bootstrapping. So if you change your WorkloadEntry, and the most common case is adding additional labels or changing the IP address of your VM, you need to take that file, put it on the VM, and re-bootstrap it. WorkloadEntry changes are really costly compared to the other two. The last thing I want to cover before the demo is how VMs and pods communicate with each other, and here we rely entirely on the mesh network configuration.
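To make the three objects concrete, here is a hedged sketch of what they might look like for a ratings VM. The names, namespace, address, labels, and port are illustrative assumptions, not values from the demo environment:

```yaml
# WorkloadEntry: describes the VM itself. This is also the data used
# to bootstrap the VM, so changing it means re-bootstrapping.
apiVersion: networking.istio.io/v1beta1
kind: WorkloadEntry
metadata:
  name: ratings-vm           # hypothetical name
  namespace: bookinfo        # hypothetical namespace
spec:
  address: 10.128.0.5        # the VM's IP address (hypothetical)
  labels:
    app: ratings
    version: v2
  serviceAccount: bookinfo-ratings
---
# Sidecar: defines how the application and its proxy communicate —
# what the proxy listens on, where it forwards locally, and which
# namespaces the workload may reach.
apiVersion: networking.istio.io/v1beta1
kind: Sidecar
metadata:
  name: ratings-vm
  namespace: bookinfo
spec:
  workloadSelector:
    labels:
      app: ratings
  ingress:
  - port:
      number: 9080
      protocol: HTTP
      name: http
    defaultEndpoint: 127.0.0.1:9080   # where the app listens on the VM
  egress:
  - hosts:
    - "bookinfo/*"           # namespaces this workload can call out to
---
# ServiceEntry: tells Kubernetes pods where to find the VM workload;
# the workloadSelector is the pointer to the WorkloadEntry's labels.
apiVersion: networking.istio.io/v1beta1
kind: ServiceEntry
metadata:
  name: ratings-vm
  namespace: bookinfo
spec:
  hosts:
  - ratings.bookinfo.svc.cluster.local
  location: MESH_INTERNAL
  ports:
  - number: 9080
    name: http
    protocol: HTTP
  resolution: STATIC
  workloadSelector:
    labels:
      app: ratings
```

Note how the ServiceEntry carries the service name while the WorkloadEntry carries the endpoint details; the two are tied together only through the labels.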
The mesh network configuration specifies the networks for your pods and your VMs, and you can have a gateway attached to those pods and VMs, or no gateway at all. You can see basically all four combinations here. If there is no gateway attached to either the VM network or the Kubernetes pod network, communication goes direct: your workload calls its Istio proxy, that proxy calls the destination Istio proxy, and the request reaches the destination workload. If a gateway is defined for the VM network, what happens? The Kubernetes workload reaches out to the gateway, and the gateway reaches out to the VM. On the opposite side, if a gateway is defined for the Kubernetes cluster, the VM reaches out to the gateway, and the gateway routes traffic to the destination Kubernetes pods. And of course, if we have gateways defined on both ends, traffic goes through two gateways. This is important to understand for your traffic patterns and traffic troubleshooting. I know some use cases really focus on decreasing latency, so the extra hops may be a performance concern. I just wanted to mention it so you have a better idea of how it works in a large application. So, let's move to the demo. First, a quick overview of what we will do in the demo and how it's set up. If you've worked with Istio before, you know that one of the heavily used applications is Bookinfo. The Bookinfo application is already set up, and I already have a VM that runs ratings and details on the same machine. We will look at the end at how that pre-established VM works, but the most important thing we want to show you is how to onboard a new VM. We have already pre-created the ServiceEntry and Sidecar in Kubernetes, so we will not change or touch them. What we will do here is create a WorkloadEntry for the new VM, which I also pre-provisioned, meaning the instance is created on GCP. In my case it's GCP, but it could be AWS, it could be anything. I have the IP addresses of this VM, and Docker is installed inside the VM.
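Before we continue, the four gateway combinations I described map onto Istio's mesh networks configuration. As a hedged sketch, with hypothetical network names, CIDR, and gateway service, it might look like this:

```yaml
# meshNetworks section of Istio's mesh configuration (hypothetical values).
# A network with gateways listed is reached through those gateways;
# a network with none is reached directly, proxy to proxy.
networks:
  kube-network:
    endpoints:
    - fromRegistry: Kubernetes       # pods registered by this cluster
    gateways:
    # workloads on other networks (e.g. VMs) enter through this gateway,
    # which adds one extra hop to the path
    - registryServiceName: istio-eastwestgateway.istio-system.svc.cluster.local
      port: 15443
  vm-network:
    endpoints:
    - fromCidr: 10.128.0.0/20        # the VMs' subnet
    # no gateways listed: traffic to VMs on this network goes direct
```

In this sketch, VM-to-pod traffic goes through the cluster's gateway (one extra hop), while pod-to-VM traffic goes direct; listing gateways under both networks would give you the two-gateway path.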
Today, you can run the Istio proxy on a VM as a Docker container or as a binary. For this specific demo we use the container, but it's really up to you as a customer whether you prefer the container or the binary. During the demo, we will create the WorkloadEntry. As I mentioned, we use tctl, the Tetrate Service Bridge utility, to bootstrap the VM. We will use a direct SSH connection to bootstrap it, but you also have the option to do it offline: export your configuration, transfer it to your VM, and apply it there. We will show how the proxy is installed, configured, and run. We will confirm that the ratings application responds to requests, confirm the connection back from the VM to the product page, show the VM in the TSB (Tetrate Service Bridge) UI, and have a quick peek at the existing VM that already runs two different services, ratings and details. And let's start the demo. Here you have the TSB interface. Right now we have the Bookinfo application with all services running. Here is ratings, which runs as a Kubernetes service, and it's v1. What we are going to do is add another instance, which is going to be v2. But first, let's look at what we already have in this cluster. We have a WorkloadEntry and we have two Sidecars: one is the default sidecar and one was created for VMs. I also pre-created the WorkloadEntry. You can see multiple labels here, and also the IP addresses, which we're confirming in the right-hand window: the external address of the VM and also the internal one. We need the internal address because the VM uses it; it's not aware of its internet-facing address. And we specify the external address twice because we're going to use it for bootstrapping too. And now, let's apply this WorkloadEntry. It's successfully applied. The next thing we will do is confirm it. It's applied, yes; created one second ago.
The other records, as I said, we already have; they belong to the VM that is already running details and ratings, those two services run on it. Connecting to the new VM, you can see that if we run the docker command, there are no containers currently running. Switch back to the Kubernetes cluster and run the command to bootstrap the VM: it takes the WorkloadEntry, transfers all the files to the VM, and starts the proxy. You can see we now have the proxy running here for five seconds, and it's successful. You can see in the log files that it successfully connected back to the Kubernetes cluster and is already in a ready state. Let's start ratings. Ratings started. Okay, so we now have the proxy and the ratings container running on the VM. And if we look, there is a number of requests coming from my traffic generator. Let's confirm that it's not pre-recorded. Okay, there are new requests coming from our traffic generator, and we get a 200 successful response. If we look at the logs for ratings, we can also see the ratings container receiving many, many requests and responding to them. Because mutual TLS is not enabled right now, I should be able to query the proxy directly from my machine instead of from a Kubernetes pod. Yes, and you can see I'm getting a successful response here. Okay, now let's try to call back. We will call from the VM back to the product page, and the product page is a Kubernetes pod, and we get a successful response here. The last thing we want to show is the different VM that already runs two services, details and reviews, I believe. It's called ratings, but I think it's details and reviews; we will see in a second. So we connected to this VM and, yes, it's another version of ratings I'm actually running here: I have the Istio proxy and two application containers listening on different ports.
And now when I look here, I can see our VM added to my map, and I have version two; version two is the VM I added, and it's successfully shown in the diagram. We can see the traffic flowing there and getting responses, and we can get all the additional metrics. That's all I wanted to share today. Thank you very much for your attention. I would like to thank the service mesh project for creating the platform that lets us make a difference in this space; ServiceMeshCon for accepting our proposal and letting us meet you; you, for your attention, and hopefully you will be able to bring something back home to discuss; and of course our customers, Tetrate's customers, for the great feedback that allowed us to make this product and its functionality better and to understand different use cases and scenarios, real-life scenarios that hopefully a lot of you can apply. Thanks to the Tetrate team for the tremendous effort in getting all of this implemented and available for our customers. And finally, I just want to say thank you to the whole Tetrate team for their focus and concentration. Hopefully you've got the five things you need to check in your environment before implementing VMs, and we will be more than happy to talk with you about the different use cases you have as an enterprise, maybe something we didn't cover so far that multiple customers, or you personally, would benefit from. Thank you.