Hello, everybody. My name is Junwei, Senior Principal Solution Architect at Equinix. And my name is Victor Martinez, Senior Solution Architect at Equinix. We're here to present "Leveraging Service Mesh for an Enterprise Multi-Cloud Strategy."

In this talk, we will cover multi-cloud strategy for enterprises and examine a few use cases. We will look into application deployment requirements, key considerations, and options in more detail. Then we will discuss the role service mesh can play in a multi-cloud strategy, followed by a quick demo and some future considerations.

As cloud adoption penetrates deeper into enterprise IT, more and more enterprises are realizing that multi-cloud is an inevitable trend. Multiple factors can motivate a multi-cloud strategy. Each cloud provider has unique, best-of-class offerings; being able to leverage best-of-class solutions from multiple cloud providers allows an enterprise to benefit without being locked in to a single provider. Backup and disaster recovery always require a secondary region or location, which in some cases may not be available from a single cloud provider. This is where another cloud provider can be a viable option, in either a primary-secondary or active-active arrangement, and it is more flexible and faster to set up backup or disaster recovery in another cloud than to look for other alternatives. In still other cases, multiple cloud providers can offer an optimal performance-cost benefit if planned carefully.

Let's examine the requirements associated with application deployment in a multi-cloud environment, taking the best-of-class use case as an example. When deploying services across multiple clouds, several factors need to be taken into consideration.
For example, when a database service is deployed in a different cloud from the other services, such as the web front-end, the latency introduced by the multi-cloud deployment should not adversely affect overall application performance. In addition, security over the connection between clouds needs to be in place to make sure the data transferred back and forth stays in alignment with all security and compliance mandates. The key questions we need to ask ourselves: Are latency and bandwidth adequate for my deployment across multiple clouds? Will my data be safe when traversing between clouds? Is there an SLA associated with the connection between clouds?

For the connection between clouds, there are many options to choose from. The public Internet with proper VPN tunnels can be an option for testing and validation. When ready for production deployment, private connections deserve serious consideration.

Next, we will examine those use cases in more detail. First, let's look at scenarios for the best-of-class use case. Here is an example with the Bookinfo application. If we decide to deploy some services in another cloud, we would like to make sure the latency introduced by that deployment won't affect overall performance. To ensure that, the latency and bandwidth between the two clouds need to stay within certain ranges, so we probably want the two cloud deployments adjacent to each other. On the right, we show a cloud infrastructure map for North America. At Ashburn, there are multiple cloud providers in close proximity to each other, such as AWS, Google Cloud, and Azure, to name a few. This gives us a viable option for such a deployment into multiple clouds at Ashburn.

In many cases, large amounts of data flow between the two clouds, or security and compliance mandates apply. Proper security measures need to be in place for the connection between the two clouds, and a private connection will probably be the first choice for such requirements.
Many cloud providers offer private connection options, for example AWS Direct Connect, Azure ExpressRoute, and Google Cloud Interconnect. To connect deployments in two clouds privately, you can simply connect the two clouds through a shared cloud on-ramp location. In short, multi-cloud access and private connections are the key components to the success of the best-of-class use case.

Another example is the backup and disaster recovery use case. Leveraging another cloud for backup and disaster recovery can be an excellent choice in some cases. Take the example of a deployment in the AWS Asia Pacific (Sydney) region, as shown on the right. There is only one AWS region in that area, so if you are looking for backup, most likely you would use AWS Asia Pacific (Singapore). However, if data sovereignty is mandated, you will have to look for an alternative within the country's boundaries. Luckily, you will find many other cloud providers in close proximity, and most of them share a common cloud on-ramp location at Sydney. For backup and disaster recovery in a different cloud, whenever synchronization between the two deployments is required, a private connection will be necessary. In that case, you can always use a private connection to reach both cloud providers from a shared cloud on-ramp location. Again, multi-cloud access and private connections are the key components to the success of the backup and disaster recovery use case.

Next, let's look at how service mesh can help implement those use cases. In a multi-cloud environment, multiple clusters are formed, and traffic between services is managed through the service mesh. By defining destination rules and virtual services, the ingress of each cluster will be able to route traffic accordingly. With multiple clusters in a multi-cloud environment, the ingress will be able to route traffic to services that are not local to its cluster.
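As a rough sketch of what those destination rules and virtual services can look like, here is a traffic split for the Bookinfo reviews service used later in the demo (subset names and weights are illustrative, following the Istio networking API):

```yaml
# DestinationRule: declare the versioned subsets of the reviews service
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: reviews
spec:
  host: reviews
  subsets:
  - name: v1
    labels:
      version: v1
  - name: v3
    labels:
      version: v3
---
# VirtualService: split traffic evenly between the two subsets
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
  - reviews
  http:
  - route:
    - destination:
        host: reviews
        subset: v1
      weight: 50
    - destination:
        host: reviews
        subset: v3
      weight: 50
```

In a multi-cluster mesh, the same rules apply even when a subset's pods run in a remote cluster, which is what lets the ingress route to non-local services.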
Proper network configuration needs to be in place to route traffic between clusters over the connection across multiple clouds; we will show some examples in the demo.

Can the service mesh know where to deploy services into clusters in a second cloud? It depends. With a cluster previously deployed in the second cloud, the service mesh will be able to generate detailed telemetry. But in many other cases, this is actually a day-zero planning decision. For example, we would have to look at a cloud infrastructure map like the one shown on the right to first figure out which locations offer multi-cloud access, or MCA. Major cloud providers are almost everywhere these days, but not all cloud providers are available at a given location, or a specific feature you need may not be available everywhere. Carefully choosing the right multi-cloud access location may not be a trivial task and can require external measures. Once that is determined, proper ingress policies can be injected into the service mesh for multi-cloud traffic management. Again, multi-cloud access and private connections are the keys to the success of the multi-cloud strategy here.

For backup and disaster recovery, a duplicate deployment can be within the same provider if a second region is available, or with a different provider if a second region is not available. Here, multi-cluster ingress, or MCI, will be helpful. Through MCI, a proper backup strategy can be implemented, whether primary-secondary or active-active. In the case of serving global users, copies of the same deployment can be placed in regions closer to users through the global load-balancing feature of MCI. The multi-region deployment can be with the same provider or a mix of different providers, as shown on the map to the right. A proper backup policy can be defined for both intra-region and inter-region scenarios.
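To make the MCI idea concrete, here is a minimal sketch of the two resources GKE's multi-cluster ingress uses, assuming the clusters are already registered to a fleet with a config cluster chosen (service names and ports are illustrative, modeled on the Bookinfo product page):

```yaml
# MultiClusterService: exposes matching pods in every member cluster
apiVersion: networking.gke.io/v1
kind: MultiClusterService
metadata:
  name: productpage-mcs
  namespace: default
spec:
  template:
    spec:
      selector:
        app: productpage
      ports:
      - name: http
        protocol: TCP
        port: 9080
        targetPort: 9080
---
# MultiClusterIngress: a single global load balancer in front of all clusters
apiVersion: networking.gke.io/v1
kind: MultiClusterIngress
metadata:
  name: productpage-mci
  namespace: default
spec:
  template:
    spec:
      backend:
        serviceName: productpage-mcs
        servicePort: 9080
```

The global load balancer then sends each user to the nearest healthy backend, which is what enables both the active-active backup pattern and the serve-users-from-the-closest-region pattern described above.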
Similarly, as stated before, whenever synchronization is needed, a private connection may be necessary. Once again, multi-cloud access and private connections are the keys to the success of the multi-cloud strategy.

Next, we will share a few experiments on how service mesh helps with applications deployed across multiple clouds. We will demonstrate that a best-of-class deployment across two clouds can be achieved. In this case, we have already chosen Ashburn as the multi-cloud access location and have built a private connection between the two clouds. A service mesh is used to route traffic across the two cloud deployments. So, basically, our goal in this demo is for you to see how we can connect two managed clusters from different cloud providers using a private connection instead of the Internet.

The first scenario is a deployment of an application on a single Kubernetes cluster. The application is Bookinfo, included within the Istio samples, where one microservice, reviews, has several versions. The product page calls the reviews service, balancing the requests across versions. If we evolve this to a multi-cluster solution to extend our deployment, we must connect both Kubernetes clusters in some way. As shown in this diagram, there is the option of connecting over the Internet, using Istio gateways and public addressing. Our proposal chooses a more secure solution that ensures better, more stable performance than the Internet: in our ideal scenario, we connect both private clusters through private connectivity. We achieve this by using Equinix as an interconnection provider, which lets us perform cloud-to-cloud routing without the traffic leaving its premises. Equinix offers a service to deploy a virtual router and virtual circuits to the clouds in a flexible, on-demand model.

And now, let's take a look at our GKE and AKS clusters in a scenario based on the latter model. Of course, their nodes are not exposed to the Internet, and they are configured in private mode.
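For reference, the gateway-based option mentioned above typically exposes services between clusters through a dedicated cross-network gateway. A minimal sketch, following the Istio multi-cluster samples (the `istio: eastwestgateway` selector assumes an east-west gateway has been deployed in each cluster):

```yaml
# Cross-network gateway: expose all in-mesh services on port 15443,
# passing mTLS traffic through to the destination workloads
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: cross-network-gateway
  namespace: istio-system
spec:
  selector:
    istio: eastwestgateway
  servers:
  - port:
      number: 15443
      name: tls
      protocol: TLS
    tls:
      mode: AUTO_PASSTHROUGH
    hosts:
    - "*.local"
```

With private connectivity in place, the same mechanism works over the private links instead of public addressing, which is the configuration used in this demo.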
Initially, the Bookinfo application is deployed entirely on the GKE cluster. If we analyze the graph in Kiali, we can see that traffic is balanced across the reviews microservice versions and that all of them are in Google Cloud. Let's now remove the deployment of version 3, which is on GKE. As you can see, the routes to that version start to disappear. Let's wait a few seconds and see that this version is no longer available to the reviews service in the Kiali graph. Time now to deploy the same service on our Azure managed cluster. If we wait a few seconds, we will see that version 3 is displayed again, but now served from AKS. As you can see, we have enabled the connection between both clouds using our private connectivity and extended the deployment of the application.

To demonstrate that we are really using this private connectivity, with Equinix as the link between ExpressRoute and Cloud Interconnect, we are going to disable one end of the multi-cloud connection. Let's select the VLAN attachment of our Interconnect connection. Here you can see that automatic private addressing is used. So, let's disable it. Now we have blocked the routing of data traffic between the two clouds. Let's go back to the Bookinfo application and see that version 3 reviews are no longer displayed. Now the microservices in Google Cloud are not able to reach the Azure cluster; we broke the private connectivity between AKS and GKE. If we take a look at the Kiali console, we will see how, little by little, the services degrade for that version. Having demonstrated this, let's re-enable the VLAN attachment on the Google side. After a few seconds, our application in multi-cloud mode should be stable again.
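The move of version 3 boils down to deleting its Deployment on GKE and applying the same manifest to AKS. A sketch of the relevant fragment (labels follow the Bookinfo sample; the image tag and the kubectl contexts are illustrative):

```yaml
# Applied with:  kubectl --context=aks apply -f reviews-v3.yaml
# (after:        kubectl --context=gke delete deployment reviews-v3)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: reviews-v3
  labels:
    app: reviews
    version: v3
spec:
  replicas: 1
  selector:
    matchLabels:
      app: reviews
      version: v3
  template:
    metadata:
      labels:
        app: reviews
        version: v3
    spec:
      containers:
      - name: reviews
        image: docker.io/istio/examples-bookinfo-reviews-v3:1.16.2
        ports:
        - containerPort: 9080
```

Because both clusters belong to the same mesh, the destination rule's `version: v3` subset now resolves to the pods running in AKS, and traffic flows to them over the private connection.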
For future consideration, we are exploring ways of automating the process for enterprise multi-cloud strategies, including multi-cluster ingress and global load balancing over multiple clouds, and automating multi-cloud access planning with service mesh traffic management.

To summarize, let's recap the key takeaways. Multi-cloud has become the new norm when enterprises consider their cloud adoption strategy. Multi-cloud access and private connections are crucial to the success of a multi-cloud strategy. Service mesh is one of the key enablers for a multi-cloud strategy. At the end of the day, how to automate the process of enabling a multi-cloud strategy is among the most important challenges faced by enterprises. Service mesh is definitely the right technology; let's make it work. With that, we conclude our presentation.