Welcome to the Get Started with Istio Service Mesh workshop. Let me quickly introduce myself. My name is Lin Sun. I am the Director of Open Source at Solo.io, and I have been contributing to the Istio project since the beginning. I worked at IBM for 19 years and joined Solo early this year, in 2021. I also wrote a book, Istio Explained, to teach you the best way to get started with Istio.

Before we get to Istio, you might be wondering what Solo does and why I am giving you this workshop. In case you don't know, Solo is all about application networking: solving these connectivity challenges through our Envoy- and Istio-based solutions, whether it's Gloo Mesh, Gloo Edge, or Gloo Portal, plus extensions using WebAssembly. With the Gloo Mesh management plane you can interact with Istio through a role-based API. We are also hiring, like everybody else, so please check out our career page.

Today we're going to talk about this foundation badge. This is the Get Started with Istio badge offered by Solo. It essentially teaches you how to get started with Istio and the best practices for getting started, and we actually have a test at the end. We require an 80% passing score on the test, and you can retake it if needed. We will issue the badges in the few weeks after you take the test.

Before we get to the workshop, I'd like to talk about the challenges of microservices. Fundamentally: how do you observe the interactions among your services? How do you secure the communication among your services? How do you handle timeouts and retries as you distribute your services across a distributed cloud environment? How do you precisely control traffic and new version rollouts? A service mesh is a programmable framework that allows you to observe, secure, and connect your microservices. It essentially helps you decouple your developers and operators nicely, without asking your developers to rebuild anything. You might be wondering, how did we get here?
In the beginning there was Netflix OSS, which provided Java-based libraries to perform similar functions to connect, secure, and observe services. There is also Kubernetes, which has won the container orchestration war, and the rise of Kubernetes drew many organizations into microservices so they could leverage the power of microservices to increase their delivery velocity.

There are a bunch of common functions a service mesh provides for microservices. How do you discover services? How do you load balance among your services? How do you secure the communication among your services? How do you control traffic? How do you mirror your production traffic for testing purposes? How do you apply policies to your services? How do you observe metrics, tracing, and logging for your services? And is there a programmable API to configure everything without going into detailed proxy configuration, which is extremely complicated?

In case you are wondering why a service mesh: fundamentally, a service mesh helps you solve these connectivity challenges for your services, whether it's security, routing, monitoring, or service discovery. It provides a consistent experience regardless of whether you are running your services in Kubernetes or outside of Kubernetes, on VMs, or on bare metal. It provides a nice abstraction for you.

How does a service mesh work? A service mesh has a control plane on top, which you program through the service mesh's abstracted API. From what you program, the control plane is intelligent enough to translate your intent into configuration that a sidecar proxy can understand. The proxy is injected into each of your services that participates in the mesh.
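As a concrete sketch of how that injection is usually wired up in Istio (the namespace name below is just an example), labeling a namespace tells the control plane's injection webhook to add the sidecar proxy to every pod created there:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: demo    # example namespace name
  labels:
    # Istio's mutating webhook injects the sidecar proxy into
    # every pod subsequently created in this namespace.
    istio-injection: enabled
```

We'll use exactly this mechanism later when we add services to the mesh.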
And also, because the proxy is sitting next to your application container, it captures all the incoming traffic and all the outgoing traffic. That means it's intelligent enough to upgrade the connection for you to mutual TLS, to collect telemetry data for you, and to determine which endpoint it should route the traffic to. Because the proxy is part of the request flow, part of your data flow, we call it part of the data plane.

There is a very rich, competitive ecosystem out there for service mesh: many projects such as Envoy, Istio, Gloo Mesh, Linkerd, App Mesh by Amazon, Kuma, Consul Connect, and Service Mesh Interface. Some of these projects are in the CNCF, some are not, but it's a really rich ecosystem.

Today, since you are coming to an Istio workshop, we're going to get into the three key functions of Istio, which essentially are connect, secure, and observe. Istio's architecture is very similar to the data plane and control plane architecture of a service mesh that we just described. As you can see, istiod is the control plane. Since Istio 1.5 we've merged every component into a single monolithic component called istiod, because we realized it's much easier to manage a single control plane component. The sidecar proxy can be injected into your services through automatic injection or manual injection, and the proxies are part of your request flow. This is where you want to pick the right service mesh architecture, the right project, because at the end of the day the sidecar proxy is a critical piece of your infrastructure.

Istio also supports an ingress gateway and an egress gateway. Through these gateways you can define which services you want to expose out of your cluster, and also which external services the services in your cluster should have controlled access to. Common adoption patterns, I would say, we're going to go through in the labs too.
The first one is to adopt Istio at the gateway. The second one is to adopt Istio by just injecting the sidecar, and you instantly get the observability functions out of Istio: you can observe dashboards for your services, you can view distributed tracing, and you get a ton of metrics. The third common pattern is to inject the sidecar and get mutual TLS, for secure communication among your services. And the last one is, as you introduce a newer version of your microservices, you can precisely control how the new version should be rolled out, through dark launches and header-based routing, and how you import external services into the service mesh.

Alright, at this point I'm going to send you a link to our platform. We're going to use an Instruqt-based platform, where you visit the link I just sent out and then go ahead and start provisioning a cluster. In the environment provided by Instruqt we essentially have a virtual machine, and on it you're going to run a lightweight Kubernetes distribution called K3s, then install Istio on that cluster, and then perform the four key scenarios we just talked about. Let's get started.
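As a preview of the install we'll run in lab 1: the demo-profile installation we'll do with istioctl can equivalently be written as an IstioOperator resource and applied with `istioctl install -f <file>` (the metadata name here is just an example):

```yaml
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
metadata:
  name: demo-install   # example name
spec:
  # The demo profile enables istiod plus the ingress and egress
  # gateways, sized for learning rather than production.
  profile: demo
```

Keeping the install as a file like this is handy if you want to version-control your mesh configuration later.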
This is the link you should be able to get to. If you are having trouble getting to the link, make sure you log in first with either your Google ID, your Twitter ID, or your GitHub ID. Once you're logged in, you should be able to see this particular lab, and I need you to click on "start track" here and get started with the lab.

The lab is really targeted at 2 hours and 30 minutes, and we only have an hour and 50 minutes here, so I'm going to try to run the lab with you. You can watch me do the lab, or you can do it yourself concurrently, or, if you don't like doing it concurrently, you can watch me do it first and then do it yourself. I'm going to save a little bit of time toward the end of each lab so that you can potentially do it yourself. The environment will be available for a few hours after the workshop, so feel free to leverage that if you can't finish a lab immediately. Don't worry, we will keep the lab available for you for the next few hours.

What it's doing now is setting up the challenges. While the environment is being provisioned, I'd like to walk you through a little more detail about the first lab. In the first lab we're going to pre-check our environment, making sure it's ready and it's good to install Istio, and then we're going to explore different methods to install Istio. There are different ways to install Istio: you can use Helm, you can use istioctl, and you can also use the Istio Operator on the server side. For this lab, though, we're only going to use istioctl, because that's the easiest way to get started learning Istio. Istio has different installation profiles; because this is a getting-started workshop we will teach you to use the demo profile, but we'll also walk you through viewing the other types of profiles. Istio also has add-ons, so you're going to install add-ons such as Grafana and Jaeger, different components to view your mesh dashboards and get
visibility into your mesh. Istio upgrades we're not going to cover in this workshop, because this is a getting-started workshop, but we will cover them in a future workshop.

As you can see, our lab environment is ready, so let me go ahead and get started. The first thing we're going to do is download the Istio release binary; as you can see, we're using Istio 1.10 here. The next thing we're going to do is add istioctl to the PATH so we can call it easily. Now what I'm doing is confirming my version, 1.10. Remember the precheck command we talked about? Now I'm checking to see whether my cluster is safe to install Istio on. As you can see, no issues found; we're ready to install. I'm going to query my profiles to see what profiles are available in Istio by default.

Next, let's go ahead and install Istio, because we want to make sure we have the environment to play with. As you can see, I specify `--set profile=demo`, which means I want to install the demo profile, and I love the status bar which shows where I am with the installation. Now it's installing the egress and ingress gateways. As you can see, my Istio is installed.

Let's check out all the resources we put down on this cluster. There are a bunch of resources: there's istiod, which is the single control plane component we talked about earlier; there are the Istio ingress and egress gateways, which, remember from the architecture diagram we talked about, control traffic coming into the cluster and going out of the cluster; there are deployments; there are a bunch of config maps, including an istio-leader one, which is useful if you have multiple istiod replicas; we have CA secrets; and there are a bunch of EnvoyFilters for telemetry purposes. All looks good. Let's check out the custom resources installed on the system. You know what, there are actually very few custom resources, about 12 I think. Istio also provides a convenient command called `istioctl verify-install`; as you can see, our
install went very successfully.

The next thing we're going to do is install the sample add-ons. You can install them like any Kubernetes application, so just run a `kubectl apply` command. At the end you may get an "unable to recognize" error. The reason you get this error is that sometimes a custom resource is applied before its custom resource definition is installed in your Kubernetes cluster. To fix that, you just run the command again; this ensures everything is applied. Now if we do a `kubectl get pods` on the istio-system namespace, everything is running. In just two minutes I've got everything running.

The next thing we're going to do is open up the dashboards, starting with the Prometheus dashboard. As you can see, I can run a query here and see if I can get any information. Yes, I can; the reason is that I have the gateways as my data plane, so that's nice. If we exit out of here, I can also run a similar command to open up the Grafana dashboard. Go to the Grafana UI tab, and here is your Grafana dashboard. We're going to play with it a little more in a future lab, but as you can see there are metrics already; I didn't have to do anything, they just show up for me. The next thing we're going to do is open up the Jaeger dashboard, with the `istioctl dashboard` command. Now if you go to Jaeger, we're not going to have any services, because there's no traffic yet, but it's nice to see the UI here. Next, let's check out the Kiali dashboard. Go here, and as you can see I have a few workloads, but only the Istio control-plane workloads, so there's not much to see at the moment, because we haven't installed anything yet. As you can see, if I select all namespaces, yeah, nothing.

Alright, that's it. Congratulations! You have installed the Istio control plane, istiod, the Istio ingress gateway, and the Istio egress gateway. You have also installed all the add-ons: Prometheus, Grafana, Jaeger, and Kiali. Next we will learn how to expose your services on the Istio
ingress gateway securely, so your clients outside of the Kubernetes cluster can access them. We talked about the most common adoption pattern, which is to adopt at the gateway only. That means you have services running in Kubernetes that you want to expose out of the cluster, maybe on a particular port number, on a particular path, on the Istio ingress gateway. You could also optionally configure an egress gateway to say that only these external services are allowed for my services in the mesh. This means you're not going to have sidecars on service A or service B, as this diagram indicates, so your observability is only at the gateway, and your secure connection is also only at the gateway; there's no secure communication between A and B, or between your gateway and your services.

We're going to walk you through the Istio networking resources. The Gateway resource allows you to configure the edge load balancer configuration. VirtualService allows you to configure a list of route rules for your service. DestinationRule allows you to configure subsets and policies that apply to a particular destination service. ServiceEntry allows you to easily access external services. And the Sidecar resource allows you to scope your sidecar proxy configuration, to declare your inbound and outbound configuration; that's an advanced topic which we will cover in a future workshop.

This is one of my favorite diagrams, and it shows the best way to understand these resources. The Gateway resource is really about the URL: what is my host, and what is the path of the URL I'm providing to my clients? The VirtualService then specifies, for that URL, where I am going to send the traffic: which service, which version, and what rules apply for the client-side load balancer to reach that particular destination. In this lab we're not going to go through the egress gateway, due to the limited time. We will go through our example using web-api, recommendation, and purchase-history, and we're going to expose web-api on the Istio ingress gateway. Alright, our
environment is ready for this lab. Let's make sure we navigate to the istio-basics workshop directory. The first thing I want to do is deploy our sample application, by creating the istioinaction namespace first and then deploying web-api, recommendation, purchase-history version v1, and the sleep application. Let's go ahead and check if our pods are ready; looks like they are all running.

The next thing we're going to do is configure the inbound traffic. First, check out the service object for the Istio ingress gateway; as you can see, my Istio ingress gateway has an external IP, so we're going to export GATEWAY_IP with the value of that IP, which allows us to access the gateway through this environment variable. We're also configuring the port numbers for the secure and non-secure ports.

Next we're going to configure the Istio ingress gateway, and to do that we're going to use the Gateway resource and the VirtualService resource. Let's review our resources, starting with the web-api gateway: as you can see, this is the host on which we are exposing the web-api service. Then let's review the VirtualService resource. The virtual service is bound to the gateway we just deployed (check that the name matches), and because they are deployed into the same namespace, you don't have to use a namespace/ prefix for the gateway here. As you can see, the virtual service essentially configures the route rule for this host on port 80: the HTTP route goes to web-api in the istioinaction namespace, on port 8080. You may be wondering, why port 8080? If you do a `kubectl get service` on web-api, you can see web-api is listening on 8080 on that container.

Let's go ahead and apply our configuration, the Gateway and VirtualService resources. After the configuration is applied, Istio automatically configures the Istio ingress gateway, and now if we access our web-api service, you can see the service is up and running.
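Putting those pieces together, the two resources look roughly like this sketch (the host and resource names follow the lab's naming, so treat the details as illustrative rather than the exact lab files):

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: web-api-gateway
  namespace: istioinaction
spec:
  selector:
    istio: ingressgateway   # bind to the default istio-ingressgateway pods
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "istioinaction.io"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: web-api-gw-vs
  namespace: istioinaction
spec:
  hosts:
  - "istioinaction.io"
  gateways:
  - web-api-gateway         # same namespace, so no namespace/ prefix needed
  http:
  - route:
    - destination:
        host: web-api.istioinaction.svc.cluster.local
        port:
          number: 8080      # the container port web-api listens on
```

The Gateway declares the host and port the edge load balancer accepts; the VirtualService decides where that traffic goes.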
The calls to recommendation and purchase-history look good too; everything works. You can dig a little deeper into the Istio configuration here: this is the route configuration we just declared, http.80, for this domain, and you can also drill into the details of the http.80 routes. As you can see, this particular route sends http.80 to web-api in istioinaction on port 8080, it has two retries, and it retries on failure modes including the 503 status code. This is all automatically provided by Istio for you, from your simple VirtualService and Gateway resources.

The next thing is: how do I secure the inbound traffic? It's nicely exposed, but I want that traffic to be secure. In order to do that, we're going to create a secret for istioinaction.io in the istio-system namespace, then configure the Gateway resource to use the secret and change the protocol to HTTPS for the istioinaction.io host. In this TLS configuration, as you can see, we are configuring simple TLS, with the credential name pointing to the secret we just created. Now we're going to apply this configuration in the istioinaction namespace. Guess what's going to happen if we call the web-api service on the secure port? Everything works the same, except now the connection is secure. Now my question to you is: what's going to happen if I go back and call the web-api service on port 80? It's going to be rejected. The reason is that our gateway configuration now only exposes port 443, so anything else will be automatically rejected.

Congratulations! You have exposed the web-api service on the Istio ingress gateway, and you have done it securely. The next thing we're going to do is add services to the mesh, so you can enjoy more benefits provided by the mesh. You can click on the check button to verify you've done the lab correctly, and if you have, we're going to load the next challenge for you. Before we go to the lab, I want to quickly walk through lab 3. In this lab,
we're going to teach you mesh observability. You're going to incrementally add services to the mesh, and then you automatically get visibility into the interactions among your services, and you get all that simply by adding your services into the mesh. This is the magic the sidecar proxy provides.

When you deploy pods and services to the mesh, there are a couple of things I want you to check and understand. The first is that we want you to name the service port for each service that participates in the mesh; for instance, for web-api in this example, we've named the port http. The pod must have a service associated with it; as you can see, we have a service object and a deployment object for web-api. We want you to label your deployment with app and version labels; this is for observability purposes, so we can differentiate the metrics from this particular app and this particular version of the app. And we don't want you to use UID 1337, because the Envoy sidecar proxy is also using that UID. The last thing I want to ask is: do you have the NET_ADMIN and NET_RAW privileges as someone deploying services into Kubernetes? If you don't, I highly recommend you check out the Istio CNI plugin. In this lab environment we provide you those privileges, just so you know.

So lab 3 is really about adding services to the mesh; we're going to gradually add the Istio proxy to our three services, web-api, recommendation, and purchase-history. Alright, let's go to the demo of lab 3; my environment is ready. There are two ways to do sidecar injection. The first way is automatic, which is highly recommended; it works way better with a Helm workflow. The second way is manual injection. In order to do automatic injection, the first thing we're going to do is label the namespace with istio-injection=enabled, and then we're going to query the namespace, making sure it has istio-injection enabled. Then we're going to review the service
requirements that we just talked about. If you review the web-api service, you can see the port name, and you can see the labels we added for observability purposes; you can see the container actually listens on 8081, along with a bunch of environment variables. By the way, we're using fake-service from Nic Jackson, our friend at HashiCorp, for this example. We want you to do the same review for the rest of the services, just to go through the exercise.

In order to add services to the mesh, we're going to do a rolling restart of the web-api deployment. If you check the pod status with `kubectl get pods`, you can see the new pod rolling out with 2/2 containers while the old one is being terminated. The next thing we're going to do is check the logs of web-api, just making sure it's clean and good; sometimes pods do have problems after the sidecar is injected, so you want to make sure it's still good. Now we validate that we can continue to call web-api through the istioinaction.io host we set up. Everything looks good.

Let's check out the details of the pod. As you can see, it's actually a little more complicated now: this pod has three containers, one init container and two regular containers. The init container essentially sets up the iptables rules, using proxyv2 as the image; it sets up the iptables redirection to capture all the incoming traffic and all the outgoing traffic. That magic is all done through the init container. We're going to teach you how to understand the pilot-agent iptables command, so you can find out what exactly these parameters mean by running the command and mapping them back to the init container configuration above. For example, `-p 15001` maps to the `-p` here; this is the Envoy port that all TCP traffic is redirected to. The next container we want to call attention to is the istio-proxy container, so let's scroll up a little bit, take a water break, and as you can see
here, it's the proxy container running in sidecar mode. It has a concurrency setting, it has istiod as the cert provider along with the CA address, and it has a bunch of environment variables, but what's important here is that it uses the same image as the init container, and it has a health check endpoint on port 15021. It has a bunch of configuration to increase resilience, and it also has CPU and memory limits and requests; this is important for capacity planning, so whatever you specify here, if you override the defaults, you want to plan for that capacity in your Kubernetes environment. We talked about UID 1337 being reserved. It also has a bunch of volumes mounted; some of these volumes, like the secrets, the istio-token, and the istio-envoy volumes, are extremely important. Feel free to poke around the configuration here. This is how you add web-api to the mesh.

With that, we're going to add the rest of our deployments into the mesh. Now if we do a `kubectl get pods` on everything in istioinaction, you can see everything is coming up as running, and you should continuously run the tests just to make sure everything continues to be good. That's how we want you to roll out your services to the mesh. Congrats, you've added the sidecar proxy to each of your services!

Let's talk about benefits. Remember, we talked about the resources: you have to do capacity planning for that sidecar proxy. But what are the benefits you get for spending those extra CPU and memory resources? The first thing we're going to do is generate some load: we're going to run a loop that, every 3 seconds, puts a little bit of traffic onto our web-api service through the Istio ingress gateway. Then we're going to enable access to Kiali through the `istioctl dashboard` command we learned earlier. Now if you navigate to the Kiali UI, you can select the graph on the menu, and in the namespace dropdown select istioinaction. I'm going to unselect these two, and let's see, my favorites are traffic animation and
security, so let's go ahead and enable those. Now if you do a refresh here, hopefully some traffic will come in. Yes, it did! Now you can see all this traffic; you can click on each edge, and you can see we have mutual TLS enabled, which we will talk about in the next lab. By default, Istio runs in permissive mode, which means it makes a best effort to do mutual TLS, but if mutual TLS fails, it continues to allow the traffic through. This is good for onboarding, but probably not something you want to run in your production environment if you only want to allow mutual TLS traffic; we will talk about how you enforce mutual TLS in the next lab.

The next thing we're going to talk about is distributed tracing. Go back to terminal 2, get out of the Kiali command prompt, copy over the command to bring up the Jaeger dashboard, and then go to the Jaeger UI. Hopefully we have some services here; the service I'm going to pick is web-api, because that's the service we exposed. If I click on "find traces", you can actually see a bunch of traces; this one goes through the Istio ingress gateway to web-api, to recommendation, and to purchase-history. You can click on it and see how much time each hop is spending; it has a bunch of information for you without you needing to do anything. You can see the x-request-id header; that's how the headers are propagated, and that's how we know these trace spans are connected to a single request.

Let's Ctrl-C out of Jaeger; we're going to look at the Grafana dashboards next. Bring back the Grafana UI, select "dashboards" on the left side, and then click on the manage button. We're going to see a bunch of dashboards for Istio here, and we'll view the control plane dashboard first. You can see the control plane is
pretty lightweight. It has a bunch of data on configuration pushes (those are the pushes that happen when we deploy resources like gateways and virtual services), it has all the XDS active connections and requests, and it has sidecar injection data; a bunch of information without you needing to configure anything, which is pretty nice.

We can also go back to the Istio metrics. Let me check if the load loop in this terminal is still running; still running, which is nice. If we go back to managing the Istio dashboards, we can see the mesh dashboard with all the data here; you can drill into any of these, and also see the purchase-history data. This is HTTP layer-7 data: you get the incoming request duration, the sizes, all the request-based data is available for you. Very nice. Feel free to play with the Grafana dashboards; as you can see, the observability provided by a service mesh like Istio is very powerful. The moment you add the sidecar proxy to your services, you magically get distributed tracing and all the Grafana and Kiali dashboards, and it helps tremendously when you need to debug problems. Congratulations, you have added the sample application to the mesh successfully; let's click on the check button to see if I did this lab correctly, and enjoy the benefits provided by the service mesh as well.

Now I want to walk you through the next lab, where we're going to talk about secure communication. The best practice is to incrementally add services to the mesh, and also to incrementally configure mutual TLS, starting with one service at a time calling your target service. The way you configure strict mutual TLS is a server-side configuration through the PeerAuthentication policy. You can apply it mesh-wide, you can apply it at the namespace level, and you can also apply it at a particular service level. We recommend doing this incrementally, but because this is a getting-started workshop, we are going to apply it mesh-wide.
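Applied mesh-wide, the strict policy we'll use might look like this sketch (the resource name "default" is the usual convention for a namespace-wide or mesh-wide policy):

```yaml
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  # Placing the policy in the root configuration namespace
  # (istio-system by default) makes it apply mesh-wide.
  namespace: istio-system
spec:
  mtls:
    # STRICT rejects any plaintext traffic; the default
    # PERMISSIVE mode would still allow it through.
    mode: STRICT
```

Moving the same resource into an application namespace, or adding a workload selector, is how you would scope it down for an incremental rollout.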
That way you can get a feel for how it works. If you are looking for best practices, in the future there is a Deploy Istio to Production workshop where you can learn how to incrementally enable mutual TLS in the mesh. In this lab we are going to enable mutual TLS globally, we are going to explain the workflow of how the keys and certificates are distributed in Istio, we are going to inspect the key and certificate for each of the services, and then we are going to show you how mutual TLS is enforced in the Istio proxy. You will get to see the sleep container able to access web-api, and the sleep container not able to access web-api when it doesn't have the proxy.

Let's go to lab 4; hopefully our environment is ready. The first thing we want to do is make sure you are in this folder, which I think you are. We are going to check the PeerAuthentication policies in all namespaces, just to make sure you don't have any installed already; you don't. So the next thing we are going to do is apply mutual TLS STRICT mode to my istio-system namespace, which, by the way, is our root configuration namespace by default. You could optionally configure a different namespace, but this is the one that comes with the demo profile in our environment. We can now view that default PeerAuthentication policy we just installed in our Kubernetes cluster.

The next thing we are going to do is install the sleep application in the default namespace. Why are we doing this? Because, remember, we only enabled sidecar injection on the istioinaction namespace, so if you deploy into the default namespace, you are not going to get the sidecar injected. Let's double-check that. Now, if we curl from the sleep pod in the default namespace, what's going to happen? Connection reset. Why is that? Remember, we just talked about it: there's no sidecar proxy, so nothing upgrades the connection to mutual TLS, and web-api is going to reject this request. What's going to happen if we run the same
command, but from the istioinaction namespace? It works nicely. Now let's generate some load on our environment. Since this command is holding up my terminal, let's go to terminal 2, and now if we go back to Kiali and go to the graph here, you can enable security and traffic animation (you don't have to), but one thing I want to point out is that you can actually see this badge here: mesh-wide mutual TLS is enabled. So Kiali actually knows, it's intelligent enough to know, that I enabled strict mutual TLS for my entire mesh.

Let's check out how this works. With that, I'm going to get out of here, and we're going to run an `istioctl proxy-config secret` command to check out the secrets for web-api. From the output you can see the default secret and also the Istio service mesh root CA. Now let's check the issuer of the public certificate: if we use the same proxy-config command, but this time retrieve a little more information and use base64 to decode it, you can see the issuer is cluster.local. Then let's check if the certificate is valid; as you can see, it's valid for a day, a very short time, but it is valid. You should see today's timestamp, by the way, because this session was recorded earlier.

The other thing we are going to check is verifying that the identity of the client certificate is correct. As you can see, we are using the SPIFFE ID for the web-api service account in the istioinaction namespace; the SPIFFE format follows trust domain, namespace, and service account. You might be wondering where the cluster.local and web-api values come from, so let me teach you how to figure that out. The first thing: in the istio config map there is a trust domain configuration, which is essentially set at the installation time of your Istio. The next thing we're going to do is view the web-api deployment YAML; as you can see, we actually created a service account called web-api. So the next
question I want to ask you is how does the Web API service obtain the necessary keys and certificates so remember early on we reviewed the Istio proxy container configuration and there are a few volumes mounted here like the Istio token Istio CA certs so these volumes form the config map in the Istio in action namespace so if you do a config map in Istio in action you should be able to see the root CA certs volume so during the start time Istio agent which by the way is also called pilot agent creates the private key form the Web API service and then sends the certificate signing request to Istio D which is the default Istio certificate authority in our installation to sign the private key and then using the Istio token which by the way is mounted to the pod here using the Istio token and also the Web API service account token which is also mounted to the pod so the Istio agent have all that information can send the certificate signing request and then Istio CA can send back with the signed key and certs and this is also why the certificate expires in 24 hours because we have the capability to do automatically key signing so before it's expiring we're going to send the same certificate signing request make sure that certificate is valid so the next question I want you to ask is how is my mutual tier is strictly enforced by the sci-car proxy so we're going to take a look at the proxy config all configuration it's actually a bunch of configurations here one thing I want to highlight here is you you will notice this time we don't allow raw traffic we only allow the TLS traffic so if you go through the configuration here let's see you will see it's only for the trusted TLS traffic so basically when mutual tier is strictly enabled you would only allow mutual TLS traffic and the raw traffic are rejected so you won't go through let's see if we can search for transport protocol here it's a little bit hard to find out the information here let's see if we can find it I think 
it's a wrapover these are route configs now my other question to you is you've only deployed a few services why is there so many envoy configuration for the pod, right? because we can't even scroll up to view all the configuration which is pretty sad the reason is it's your lessons to everything in your Kubernetes cluster by default you can actually config this with some of the advanced configurations such as discovery selector or the site card resource which we will cover in a future workshop for you mutual TLS for you is still service mesh follow me to talk through here so for the next challenge which is our last challenge we're going to talk about how to control traffic when you have more than one version which is very common right? as you increase your velocity of your services you're going to end up with more than one version so we're going to have routes between different versions how do we dock launch the services how do I shift some percentage of the traffic to the new services so we're going to teach you about again also about services for the route configuration but a little bit more on destination with subsets right? 
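Before we move on, here is a recap of the mesh-wide strict mutual TLS policy we applied at the start of lab 4. This is a sketch; the resource name default in the istio-system root namespace follows Istio's convention, and your lab files are authoritative:

```shell
# Enable strict mutual TLS mesh-wide by placing the policy
# in the root configuration namespace (istio-system by default).
kubectl apply -f - <<EOF
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: istio-system
spec:
  mtls:
    mode: STRICT
EOF

# Verify the policy is in place.
kubectl get peerauthentication --all-namespaces
```

With this one resource applied, every sidecar in the mesh rejects plain-text traffic, which is exactly why the sleep pod without a proxy got its connection reset.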
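The certificate checks we walked through can be reproduced roughly like this. This is a sketch assuming the workload is deploy/web-api in the istioinaction namespace and that jq and openssl are available; the exact jq path into the secret dump may vary across Istio versions:

```shell
# List the secrets Envoy holds for the web-api sidecar:
# the workload certificate ("default") and the root CA ("ROOTCA").
istioctl proxy-config secret deploy/web-api -n istioinaction

# Decode the workload certificate and inspect the issuer, the
# validity window (about 24 hours), and the SPIFFE identity in
# the subject alternative name.
istioctl proxy-config secret deploy/web-api -n istioinaction -o json \
  | jq -r '.dynamicActiveSecrets[0].secret.tlsCertificate.certificateChain.inlineBytes' \
  | base64 --decode \
  | openssl x509 -noout -issuer -dates -ext subjectAltName
# Expect a SAN of the form:
#   spiffe://cluster.local/ns/istioinaction/sa/web-api
```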
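The Sidecar resource mentioned above, which trims how much Envoy configuration each pod receives, might look like the following sketch, which restricts each sidecar's egress config to its own namespace plus istio-system:

```shell
kubectl apply -f - <<EOF
apiVersion: networking.istio.io/v1beta1
kind: Sidecar
metadata:
  name: default
  namespace: istioinaction
spec:
  egress:
  - hosts:
    - "./*"            # services in this namespace only
    - "istio-system/*" # istiod and mesh infrastructure
EOF
```

After this, the proxy-config dump for pods in istioinaction shrinks dramatically, because the sidecars no longer receive configuration for every service in the cluster.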
When you have multiple versions, that's when destination rules are needed, to configure how you split the traffic among those versions. We're also going to teach you about service entries, because the new version of our service requires access to an external service.

The first thing in lab 5 is dark launching purchase-history version 2, which is a simple version I wrote. Then we're going to do a canary test on this new version with an 80/20 split first, and eventually, when we feel confident, we're going to roll 100% of the traffic to version 2. We're also introducing version 3 by dark launching it; version 3 connects to an external service, so we're going to add resilience to version 3 and also control its outbound traffic.

Let's get into the lab 5 demo; I see all the challenges are ready, and I think we're already at this location. Let's check out purchase-history version 2. This is a fake service where I made some modifications for version 2, and the main modification is just to output "purchase history version 2" as my message. I'm also using an external service called JSONPlaceholder; it's a dummy JSON service, and I want to fetch my message from that external service.

The next thing we're going to do is deploy the purchase-history virtual service for v1; it essentially tells Istio to route all traffic to purchase-history version 1. My question to you is: should I deploy the purchase-history virtual service first, or should I deploy version 2 first? The answer is that we want you to deploy the virtual service first, because that's how you guarantee 100% of the traffic goes to version 1. If you deploy purchase-history version 2 first, the moment the deployment is running, Kubernetes is going to round-robin the traffic between version 1 and version 2, which is not really what we want here.

Let's review the destination rule, because there is a subset here: through the purchase-history destination rule, we're declaring subset v1, which is selected by the label version=v1. So next we're going to apply the virtual service and destination rule we just talked about (remember, this needs to happen before the new deployment goes out), and then we can deploy the purchase-history version 2 deployment. Let's confirm the deployment is running; it could take a second or two. As soon as you see both version 1 and version 2 running, let's look at the logs of version 2. At first glance the logs look okay, nothing odd here... actually, no, sorry: there is an "unable to connect, connection refused" error. That's not good, because it couldn't reach the JSONPlaceholder service at startup. How do we fix this?

Now let's generate some load. As you can see, my virtual service and destination rule are working, because all the traffic continues to route to version 1. So even though our version 2 is bad (well, sort of bad), it doesn't impact version 1, and I didn't have to redeploy anything for version 1. Now let's go ahead and fix this. How do we solve this problem?
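The virtual service and destination rule we just applied can be sketched like this, assuming the service host purchase-history.istioinaction.svc.cluster.local (your lab files are authoritative for the exact names):

```shell
kubectl apply -n istioinaction -f - <<EOF
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: purchase-history
spec:
  hosts:
  - purchase-history.istioinaction.svc.cluster.local
  http:
  - route:
    - destination:
        host: purchase-history.istioinaction.svc.cluster.local
        subset: v1
      weight: 100
---
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: purchase-history
spec:
  host: purchase-history.istioinaction.svc.cluster.local
  subsets:
  - name: v1
    labels:
      version: v1
  - name: v2
    labels:
      version: v2
EOF
```

Because this is applied before the v2 deployment exists, 100% of traffic is pinned to the version: v1 pods, so v2 can be dark launched safely.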
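The startup-ordering fix we apply next relies on a pod annotation. One way to express it, assuming the deployment is named purchase-history-v2, is a patch like this:

```shell
# Make the application container wait until the sidecar proxy is
# ready, so startup calls to external services don't fail.
kubectl patch deployment purchase-history-v2 -n istioinaction --type merge -p '
spec:
  template:
    metadata:
      annotations:
        proxy.istio.io/config: |
          holdApplicationUntilProxyStarts: true
'
```

In the lab we simply apply an updated deployment YAML carrying the same annotation; the patch above is just a compact way to show what changes.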
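Later in this lab we move to header-based routing and then a weighted canary. Combined, those rules look roughly like this (same assumed host names; note the exact match on Jason is case-sensitive, which is the gotcha we hit):

```shell
kubectl apply -n istioinaction -f - <<EOF
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: purchase-history
spec:
  hosts:
  - purchase-history.istioinaction.svc.cluster.local
  http:
  # Dark-launch route: only requests carrying "user: Jason" reach v2.
  - match:
    - headers:
        user:
          exact: Jason
    route:
    - destination:
        host: purchase-history.istioinaction.svc.cluster.local
        subset: v2
  # Everyone else: 80/20 canary split between v1 and v2.
  - route:
    - destination:
        host: purchase-history.istioinaction.svc.cluster.local
        subset: v1
      weight: 80
    - destination:
        host: purchase-history.istioinaction.svc.cluster.local
        subset: v2
      weight: 20
EOF
```

Shifting to 50/50 or 100% to v2 is just a matter of changing the weights and re-applying; no redeployment of either version is needed.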
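Toward the end of this lab we also lock down outbound traffic and then re-allow the external JSON service. A sketch, assuming the external host is jsonplaceholder.typicode.com:

```shell
# Block outbound traffic to unregistered hosts mesh-wide.
# This lands in a config map, so allow a minute or two to propagate.
istioctl install --set profile=demo \
  --set meshConfig.outboundTrafficPolicy.mode=REGISTRY_ONLY -y

# Re-allow the external service by registering it in the mesh.
kubectl apply -n istioinaction -f - <<EOF
apiVersion: networking.istio.io/v1beta1
kind: ServiceEntry
metadata:
  name: jsonplaceholder
spec:
  hosts:
  - jsonplaceholder.typicode.com
  location: MESH_EXTERNAL
  resolution: DNS
  ports:
  - number: 443
    name: https
    protocol: TLS
EOF
```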
I have an updated version of purchase-history version 2, and the only thing different is that I added the annotation holdApplicationUntilProxyStarts. In Istio, by default, the proxy container and the application container start in parallel, so there's no guarantee that when your application container starts, the proxy is already ready. This annotation ensures that the application container does not start until the proxy is ready, which is absolutely needed for our container here, because it reaches out to an external service at startup. Let's deploy it, wait until it's running, and check the logs. Unfortunately, I think we checked a little too early; let me go to the istioinaction namespace. The old pod is still terminating, which is probably why the logs didn't show. There: the logs look good now. It was able to start the service and connect to the external service, so that annotation came to the rescue here.

Now let's test version 2. Remember, it's dark launched, so we can't easily test it through the Istio ingress gateway, but we can exec into the pod and curl localhost. You can see the response not only says version 2, it actually includes content it fetched from the JSONPlaceholder service. This is exactly what we wanted: we reached v2, it connected to the external service, and we got a good response.

Version 2 looks good, so let's roll out some percentage of traffic, or maybe try header-based routing first. We're going to say: when the user header is Jason, go to version 2. Let's apply this virtual service configuration. Now, if we curl the web-api service and pass the user Jason, what do you think is going to happen? Hmm, we're still getting purchase history version 1. Why is that? The reason is that we sent jason with a lowercase j, but our exact match is for an uppercase J. Let's correct that; this time it works, and as you can see, we get a nice version 2 reply back.

The next thing we're going to do is canary testing: we're going to shift 20% of the traffic to our new version. Let's apply this and generate some load. As you can see, about 20% of the traffic goes to version 2: we got 4 out of 20 requests, so it's exactly 20%. Next, as we increase our confidence, let's put 50% on version 2. We apply this virtual service, run the same curl loop, and hopefully we see a nice split. As you can see, immediately after I apply my virtual service, it gets picked up: it automatically routes 50% without me needing to change anything in version 1 or version 2. Ready to shift all traffic to version 2? Looking good: the configuration is applied, and if we test now, we see only version 2, as expected.

The next thing we want to talk about is controlling outbound traffic. Remember, we have the external JSONPlaceholder service. What if I don't want any traffic going outbound except traffic that I, as the admin or operator, have explicitly allowed? If we check our installation, you can see we didn't configure anything for this, which means we have the default outbound traffic policy, ALLOW_ANY; when ALLOW_ANY is used, any external traffic is allowed. So we're going to modify our installation a little to say: I only want registry-only, nothing else. Let's confirm the new configuration is picked up. Yes, it is REGISTRY_ONLY now. Let's send some traffic; it may fail now because version 2 no longer has access to the external service. It does take a little while to be picked up: the registry-only setting lives in a config map, and config maps aren't expected to change constantly, so it takes, I think, one or two minutes to propagate. Now you can see that when you call from recommendation to purchase-history, you get a 503, because purchase-history couldn't access the external service. Remember, Istio still automatically configures retries, so you actually see three retry attempts here, and they all fail. This is all expected, right? Because we told Istio not to allow anything unless it's registered.

Let's go fix that. The fix is the ServiceEntry resource in Istio. Through a service entry, I register with Istio: this is my external service, it is external to my mesh, I'm going to access it over HTTPS using TLS on port 443, and I'm going to rely on DNS resolution to resolve the host. Let's apply this service entry in our istioinaction namespace, and now if we send some traffic, hopefully it works. Yes, it does; as you can see, Istio picked up the configuration within seconds. The other thing I want to mention is that you can also put a virtual service on the external service: in this example, I'm adding a timeout of 3 seconds when accessing JSONPlaceholder, so if a call takes over 3 seconds, I time out. You can add timeouts and retries to increase the resilience of the connection to the external service. Now my question to you is: what if you want to securely restrict which pods can access the external service? Should you send the traffic to your external service through the Istio egress gateway? We will cover that in the Istio expert workshop.

Okay, let's wrap up this lab. A service mesh like Istio has a lot of capabilities that allow you to manage traffic flow within the mesh, control what enters the mesh, and control the traffic leaving the mesh. These capabilities let you control precisely how traffic is handled for your services. You can also use Istio to easily build resilience through timeouts, retries, circuit breaking, and outlier detection.

We have a bonus section if you are ahead; I encourage you to do it, but if you don't have time, don't worry. Most people won't get to it, especially in the under two hours we expect you to finish in, and it's not required for passing the test, so if you like, you can follow up with the bonus section later. Given the time, we're going to wrap up. Like I mentioned, this is the Istio foundation badge. I'm going to send out the test and survey in the chat so you can access the test; please give us your feedback, and good luck with your test. We will issue your badge once you pass the test with 80%.

Before I let you go: at Solo, we provide enterprise Istio production support through Gloo Mesh. We provide long-term enterprise support for upstream Istio versions back to n-4, we provide critical security patches within a day of a security announcement, and we provide very aggressive SLAs for severity-1 issues. We also have a lot of Istio expertise to provide architecture and operations guidance. Thank you, and to learn more from Solo.io, we have the essentials workshop and the Istio expert workshop coming up, so check us out and sign up for an upcoming workshop to learn more about Istio and Gloo Mesh. Thank you so much; I will be around for any questions you may have.