Good afternoon, everyone. My name is Oankar Bhatt. I'm an engineering manager at Kasten by Veeam. My focus has been in the areas of authentication, role-based access control, and multi-cluster management, for the purposes of data protection of applications in Kubernetes. This is through our product, K10, that we've built at Kasten. I'm really grateful to be here in person today doing this tutorial in front of all of you, and super grateful to the reviewers and committee that spent so much time reviewing the proposals and accepting mine. So very happy to be here; I'm a first-time speaker at KubeCon. Now, before I move ahead to the other slides for the tutorial today, a few things I want to highlight. You'll see that there's a QR code at the bottom right corner of my intro slide. Do make use of that to access the GitHub repo that I've created for the purposes of today's tutorial. It lists the prerequisites that you should set up in order to follow along. There's also a URL right at the bottom which will take you to the same link. The next thing I want to highlight is that I will pause about three times, for 10 minutes each, at various stages of the tutorial to help unblock you if you have questions or are stuck. And lucky me, I have my team from Kasten who's going to help the audience here as well. So during the breaks, I'll walk down and we'll help you if you're stuck somewhere. I have Sylvan and Matt from the product team at Kasten, and Tom and Lee from the engineering team at Kasten. Thank you so much, guys, for helping out today. All right, so let's go to the agenda next. Today I'll talk about why you might want to attend this tutorial. Hopefully some of you have already seen the proposal and that's why you're here, but I'll double-click on that a little. Then I'll talk about what Active Directory and LDAP are. We'll introduce the application that we're going to be deploying for today's tutorial.
That's the application that we're going to secure. I'll cover OpenLDAP, Dex, and OAuth2 Proxy, which is another open source project. We'll then go through the prerequisites again, and we'll set up OpenLDAP, Dex, OAuth2 Proxy, and the application as part of the tutorial. I will show a demo: I will be going through the same steps just like you today, and at the end I'll show you what secure access to that application looks like. And then we'll do more Q&A. So that's the agenda. All right, so why attend this tutorial? You might be in that class of users who's thinking about migrating applications to Kubernetes and thinking, hey, I don't want to reinvent the wheel when it comes to authentication. You might have standardized on Active Directory in your organization, and you might want to use it to meet your authentication needs in a Kubernetes cluster as well. If that's the case, then you're in the right tutorial. You might be in another class of users who's not interested in migration, but you're just deploying new applications in Kubernetes and want to leverage your Active Directory server for authenticating access to those apps. If yes, then you're in the right tutorial. If not, yeah, you can get out of the room. I'm just kidding, I'm just kidding. Anyway, stick around. I also want to touch a little bit upon why I'm doing this tutorial today. I've been with Kasten for a little more than two years. In my time at Kasten, I've worked with multiple organizations to install our product, K10, with Active Directory-based authentication enabled. When the requirement initially came up while I was at Kasten, I was the engineer responsible for integrating the open source project Dex in order to meet this Active Directory authentication requirement. So I want to use this opportunity today to share what I've learned through my experiences of configuring and using Dex for this.
And so I'm really excited about being able to do that today. All right, so what is Active Directory? It's a directory service developed by Microsoft for Windows domain networks. It uses the Lightweight Directory Access Protocol and other services like Kerberos and DNS, but we're not going to dive deep into those in today's session. So the next question is, what is that Lightweight Directory Access Protocol? LDAP, for short, is an open, vendor-neutral standard for accessing and maintaining a distributed directory service. You can think of an Active Directory or LDAP server as containing the usernames and passwords for users; a client application can authenticate access by talking to this LDAP or Active Directory server, and that way it can validate users that are trying to access the application. Now, I told you in the agenda that we'd be talking about OpenLDAP. OpenLDAP is an open source implementation of LDAP. For today's tutorial, we'll be using an OpenLDAP server as a stand-in for an Active Directory server. I'm going to show you the Kubernetes YAML for an OpenLDAP deployment and an OpenLDAP service. We'll work through deploying that, and that's going to be your LDAP server for the purposes of today's tutorial and demo. All right, so now we introduce the super critical application that we want to secure, and that's Pac-Man. At the end of this tutorial, you'll have Pac-Man installed on your machines, and we'll be securing access to it by authenticating against an OpenLDAP server running in your Kubernetes cluster too. All right, so we know that's our goal: we have the Pac-Man application and we want to secure access to it. I'm going to talk about option one first, and we'll eventually lead to option three, which is our final solution.
In option one, you rewrite the Pac-Man application so that any time someone tries to access it, it sends an LDAP request to the OpenLDAP server, and Pac-Man processes the LDAP responses sent back by that server. It's possible. We could spend time looking at the code and rewriting it, but that's not the goal for today's tutorial. Today, we don't want to rewrite the application at all in order to secure access to it. So let's look at the pros and cons of option one. What's the pro? You have a nicely packaged OpenLDAP server; there's a container image; you can deploy it in a Kubernetes cluster, and now you have a server to authenticate against. But the con is that you have to rewrite the app, and we want to avoid that. That takes us to option two. In option two, I've introduced a new box in the middle called Dex. Dex is an identity service that uses OpenID Connect (OIDC for short; you'll hear that very often) to do authentication on behalf of other applications. As I told you, the disadvantage of option one was that you had to rewrite the Pac-Man app so it could speak LDAP. We've solved that problem here by offloading it to Dex: Dex is the one that sends and receives the LDAP requests and responses. So far, so good; that solves one part of the problem. But the disadvantage in option two is that you still have to rewrite the Pac-Man app so it can talk OAuth, or OIDC, with Dex. There's still some work to be done, and we're super lazy in this room today; we're not going to rewrite the app at all. And obviously, you know there's an option three coming to solve this problem as well. Now, I touched upon a few terms, OIDC and OAuth, which I haven't introduced yet, so let me do that before I move to option three. OAuth 2.0 is an industry standard for authorization.
Think of the situation where an application wants to authorize a second application to access its data or features. That's where OAuth 2.0 comes in: it acts as an authorization layer. OpenID Connect is a thin layer on top of OAuth 2.0 which allows an application to obtain login and profile information about the user. So you have the authorization layer in the form of OAuth 2.0, and you have OpenID Connect, which acts as the identity, or authentication, layer. That's the difference between the two. This brings us to option three. We now have four boxes, including the Pac-Man app. To solve the problem in option two, what if I told you we could use a reverse proxy to handle authentication and authorization for us? The answer is yes: that's another open source project called OAuth2 Proxy, which we will be deploying in our tutorial today. OAuth2 Proxy is the component we offload the OAuth work to. All we have to do is redirect HTTP requests meant for Pac-Man to OAuth2 Proxy, which will start the authentication flow. The flow goes to Dex and on to the LDAP server; if the user entered the right credentials and was authenticated successfully, then the flow comes back through Dex to OAuth2 Proxy, and we redirect the user to the application. So there you have it: we have met our goal of not rewriting the application by deploying two open source projects that we'll be double-clicking on in today's tutorial. All right, so for the prerequisites, note that today's tutorial will work only on x86-64 (AMD64) machines. Unfortunately, ARM is not supported by the Pac-Man app that we're going to install in today's lab. The packages that you need to install are all listed at the URL or QR code that I've pointed to here. So I'd suggest, if you don't have your laptops out yet, get them out and get started on installing the prerequisites for today's tutorial.
So this is a good point to take a five-to-ten-minute break to let you install all that's required, and I'm going to just go ahead and do that. Please feel free to raise your hand if you have any questions while downloading the prerequisites, and one of us will come over to you. While you're doing that, I'll talk a little bit about kind. kind stands for Kubernetes in Docker. It's a quick and easy way to run a Kubernetes cluster inside a set of Docker containers. You can use brew install kind if you are on Mac. If you are on Windows, you can use the Chocolatey package manager to install kind, or use your favorite Linux package manager if you are on Linux. In case you don't have a laptop today, it's a good opportunity to connect with people in your community; use your buddy's laptop to go through the tutorial. Okay, so I'm going to proceed to the next part of the tutorial. I will take another break once we get to a certain point, and we can unblock you if you're stuck on the prerequisites. Now, as you can see on the screen, I've highlighted just the OpenLDAP box. Our first step in this lab is going to be setting up just the OpenLDAP service. We'll deploy it in a Kubernetes cluster, we'll add users and groups to the LDAP server, and we'll use the LDAP utilities to interact with this server: you'll add a group using the ldapadd utility, and we'll also use ldapsearch to search for users and groups in this LDAP server. All right, if you haven't seen it yet, this is the page that the QR code takes you to; all the prerequisites are over here. So I'm going to go ahead and start with the OpenLDAP part of our tutorial. To begin with, we're going to use kind to create a cluster on my laptop. You can run kind create cluster --name and hand over the name of the cluster that you want to create. In this case, it's kubecon.
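As an aside, kind also accepts an optional config file if you want more control over the cluster, such as extra worker nodes. A minimal sketch is below; it is not required for this tutorial, and the default single-node cluster it describes is what kind creates anyway:

```yaml
# kind-config.yaml -- optional; pass it with:
#   kind create cluster --name kubecon --config kind-config.yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
```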
You can use kind get clusters to list the clusters that you have created. Also note that once this cluster is created, your kubeconfig will be set up appropriately, so a kubectl get namespaces will run against the new kind cluster that you just created. Once the cluster was created, I went ahead and cloned the GitHub repository that we are going to use for today's tutorial. I created a namespace named openldap, and then I created a generic secret named openldap. There are three fields present in the secret. The first one is the admin password for the LDAP admin. The second is a field called users, and I've added three users in this command: production admin, production basic, and production config. The third field of the secret is called passwords, and it contains the passwords for those three users. So the secret has been created, and the next step I executed was a cat of the OpenLDAP deployment that we're going to be working with today. Let's take a look at that before we proceed. There's a metadata section and a spec section. The key thing I want to highlight in the OpenLDAP deployment is that in the spec section, for the containers, we have set up a few environment variables. The first one is the LDAP admin username, which is admin. The other three environment variables are sourced from the secret we created in the previous step; that's why you'll see a secretKeyRef, and the name of the secret is the openldap secret. I then run kubectl create -n openldap -f with the OpenLDAP deployment file to create this deployment. Next, let's look at the service definition for the OpenLDAP service. You'll see here it's a service named openldap of type ClusterIP. We're not interested in exposing this service outside of our cluster today, so it's of type ClusterIP.
It's going to be listening on port 1389, and the target port is also 1389, which matches the deployment's container port. I then run kubectl create -f with that file to create the OpenLDAP service. We watch the pods until they come up. Everything's good so far: the pod is in the Running state and ready. Next we run the port-forward command so that the OpenLDAP service is reachable on the localhost IP address on port 1389. Okay, so the service is up. We've finished the first stage of our tutorial; we're almost there. We're going to interact with this LDAP server next. So next we'll create a new group. You noticed that we already created three users while creating the OpenLDAP deployment. Now we're going to create a group called Pacman admins. The way you do anything like this with LDAP is you create a file in the .ldif format, and I'll get into what LDIF is in just a bit. There are multiple lines here. You'll see a DN representing the distinguished name of the record that represents a group in the LDAP server. The common name, CN, is Pacman admins, and there are two members in this group: production admin and production basic. Notice I haven't added the production config user to this group. We have three users, and only two of them are members of the Pacman admins group. Now, LDIF stands for LDAP Data Interchange Format. It's a plain-text format used for representing directory content and update requests. In this case, our update request is the addition of a group. As I described earlier, we went through DN and CN. I've copied just one line from the LDIF file to describe what each field is. We've already covered distinguished name and common name. There's also a field called OU, which represents an organizational unit that the user is a member of. And then there's DC, which represents domain components.
So if, for example, your organization's website domain name is www.example.org, then your domain components would be DC=www,DC=example,DC=org. Let's carry on with the next part of the tutorial, where we interact with this LDAP server. You'll see I'm catting the Pacman admins group .ldif file, which we've already looked at. Next I run the ldapadd command. The -x flag selects simple authentication. The uppercase -H gives the LDAP server we're talking to; as you know, we've already started a port-forward, which is why it's listening at localhost:1389. The uppercase -D gives the distinguished name of the admin user, which is CN=admin,DC=example,DC=org; think of that as the username of the LDAP admin interacting with the LDAP server. The lowercase -w is that admin's password, and you'll remember this was the admin password we created when we deployed OpenLDAP. And -f accepts a file, in this case the LDIF file. You'll see a message from our LDAP server saying the entry has been added. Next we use the ldapsearch command against the same LDAP server. Most of the arguments are the same here; the only difference is the lowercase -b. What -b tells ldapsearch is the root at which to perform the search for records. In this case, -b is set to DC=example,DC=org, which means we're searching for all the records under that domain. In the output, you'll see a distinguished name for the organizational unit named users. There are three user records: one for the production admin user, one for the production basic user, and a third for the production config user. There's also a group called readers that was created by default when the OpenLDAP server was deployed; it has all three users in it.
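Putting those fields together, the group file handed to ldapadd might look roughly like this. The exact DNs, the objectClass, and the hyphenated spellings are assumptions for illustration; match them to your directory's actual layout and schema:

```ldif
# pacman-admins group record (sketch; adjust DNs and objectClass to your directory)
dn: cn=pacman-admins,ou=users,dc=example,dc=org
objectClass: groupOfNames
cn: pacman-admins
member: cn=production-admin,ou=users,dc=example,dc=org
member: cn=production-basic,ou=users,dc=example,dc=org
```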
Make note of that, because you'll see the groups show up eventually when we perform authentication successfully. Keep in mind that all three users are in the group readers, but only two are in the group named Pacman admins. This output is also confirmation that the ldapadd actually worked. All right, so we've reached the part of our tutorial where everything related to the OpenLDAP box is set up for us. We're next going to move on to deploying Dex. Notice that my red box extends a little, because while configuring Dex, we're going to be adding configuration related to OpenLDAP as well as to OAuth2 Proxy, which is a client of Dex. Before I run through the commands, I wanted to highlight the values file that I'm going to be using to install Dex. I'm going to be using Helm for Dex; Helm accepts a values file, and that's what I'm showing on the screen right now. There are lots of values here, and I've decomposed them into three pieces, which I've listed on the left. There's an LDAP connector piece, which tells Dex that it should communicate with the LDAP server. There's the Dex issuer config, which is about Dex itself. And then there's the client config, which is about the OAuth2 Proxy client that talks to Dex. Let's focus on just the LDAP connector config first. You'll see there's a host config that points to the LDAP server that we've deployed. I'm not using the localhost IP here; I'm using Kubernetes's naming convention for a service, which is servicename.namespacename, followed by the port number it's listening on. There's an SSL section, which is not relevant for today; it's all demo, so we're not going to care about SSL, but you should definitely care about it in production. We have the bind DN and bind password up top; that represents the admin credentials for the purposes of today's demo, in other words the bind distinguished name and bind password for the LDAP admin.
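Sketched as YAML, the connector section covered so far might look like this. The field names follow Dex's LDAP connector documentation; the service name, namespace, port, and DN values are taken from this walkthrough and may differ in your setup:

```yaml
connectors:
- type: ldap
  id: ldap
  name: OpenLDAP
  config:
    host: openldap.openldap:1389      # servicename.namespacename:port
    insecureNoSSL: true               # demo only; use TLS in production
    bindDN: cn=admin,dc=example,dc=org
    bindPW: <ldap-admin-password>     # fill in before helm install
```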
Next we have two sections to cover: user search and group search. Let's start with user search. The user search config tells Dex how to look for users in the LDAP server. There's a base DN and an optional filter. The base DN tells Dex where to start its search in the LDAP server; it's the root of the search, or the base distinguished name, in other words. The optional filter lets you narrow down the records returned by the server, so instead of getting overwhelmed by too many records, you can use the filter to reduce that. The username field here tells Dex that the uid field in the LDAP record should be treated as a username. In an Active Directory environment, this might typically be something like sAMAccountName, or perhaps userPrincipalName. This is something where your Active Directory team should get involved; they'll give you the best practices for your organization about which field is the right username field. But since we're using OpenLDAP as a stand-in today, we're just using uid as the username. Next, you'll see email, ID, name, and preferred username. Keep in mind, as I said, that Dex is an OpenID Connect provider: it's going to generate a JSON Web Token for you every time you successfully authenticate with it. This token is going to have claims like email, ID, name, and preferred username, and all you're telling Dex here is to use the uid of a user's record to populate those claims in the token when authentication is successful. Similarly, you have a group search section that tells Dex how to search for the groups a user is a member of. Once again, just like the user search, you have a base DN and a filter. And then you have matching criteria that tell Dex how to match a group with a user record.
In our case, if production admin is present in the Pacman admins group as a member, then we'll get a hit there, and Dex will return that group. So that was everything related to the LDAP connector. Next, let's move on to the Dex config as an issuer. The issuer URL is the URL where Dex will be available as an OIDC provider. In this case, again, I've used the Kubernetes naming convention of servicename.namespacename. There's a logging format; it's just info level in this config. For storage, Dex supports multiple storage types for persistence; there are database options, and you can look at Dex's documentation to learn more about them, but for today's tutorial, I'm just using the in-memory implementation. Then you have the web config, which indicates the clients that Dex allows to connect; in this case, we're allowing all clients, and Dex will listen on port 8080. Okay, so let's get to the last piece of the Helm values file, and then we'll go and deploy Dex. The last piece is the static client configuration, where we register OAuth2 Proxy as a client of Dex. You have an ID, a secret, and a redirect URI. The redirect URI is important here: it tells Dex that if authentication is successful, it should redirect to this specific URI to continue the flow. Okay, so over here, I'm going to continue the steps for deploying Dex. I've done a cat of the Dex values file; this is inside the GitHub repo that we cloned, under secure-pacman/dex. We've already gone through all the values, so let's proceed. We're going to create a namespace called dex. We're using Helm, as I mentioned earlier, so we do a helm repo add dex with the URL for Dex's chart, and a helm repo update dex to pull in the latest chart. There's one thing left to do before we actually run helm install.
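Put together, the user search, group search, issuer, storage, web, and static client sections described above might be sketched like this. The attribute names follow Dex's documented LDAP connector options, but the ports, DNs, and client secret are placeholders based on this walkthrough, and the Dex Helm chart typically nests all of this under a top-level config key:

```yaml
config:
  issuer: http://dex.dex:5556         # servicename.namespacename; port assumed
  storage:
    type: memory                      # demo only; use a database in production
  web:
    http: 0.0.0.0:8080
  connectors:
  - type: ldap
    id: ldap
    name: OpenLDAP
    config:
      userSearch:
        baseDN: ou=users,dc=example,dc=org
        filter: "(objectClass=inetOrgPerson)"
        username: uid                 # in AD, often sAMAccountName or userPrincipalName
        idAttr: uid
        emailAttr: uid
        nameAttr: uid
        preferredUsernameAttr: uid
      groupSearch:
        baseDN: ou=users,dc=example,dc=org
        filter: "(objectClass=groupOfNames)"
        userMatchers:
        - userAttr: DN                # match the user's DN ...
          groupAttr: member           # ... against each group's member attribute
        nameAttr: cn
  staticClients:
  - id: oauth2-proxy
    secret: <client-secret>           # must match the secret OAuth2 Proxy presents
    redirectURIs:
    - http://oauth2-proxy.pacman:4180/oauth2/callback
```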
In the Helm values file in my GitHub repo, I haven't updated the bind password, so let's set the bind password there before we proceed. (I'm having some trouble editing the file on this screen, so let me cd into secure-pacman/dex and continue from the other shell. Okay, thanks.) All right, so I edited the values file to enter the admin password for the LDAP admin, and then ran helm install using the updated values file. We ran a watch on kubectl get pods in the dex namespace, and that's what is running right now; it's in the Running state, so far so good. (The zooming in was affecting the screen a little; thanks.) So we now have Dex port-forwarded so that it's listening on localhost, port 5556. All right, that's stage two for us. Among the four boxes in our end-to-end flow, we've deployed OpenLDAP and we've deployed Dex, and they're both accepting connections. So this is a good point to pause. Does anyone want to pause at this point? Are you stuck, or can you make progress? I don't see any hands raised, so I'll just keep going. Okay, the next thing we're going to deploy is OAuth2 Proxy. As I said, it's a reverse proxy that handles OAuth for us. Dex will be the OIDC provider that OAuth2 Proxy interacts with, and once authentication is successful, OAuth2 Proxy will redirect to Pac-Man. Okay, so over here, we create the namespace pacman, where our application is going to live. But before we deploy the application, we deploy OAuth2 Proxy first. You'll see I've cd'd into the OAuth2 Proxy directory of our GitHub repo and done a cat of the OAuth2 Proxy deployment.
Then I executed kubectl create -f with the deployment file. Before we proceed, let me explain what the deployment looks like and what's important here for configuring OAuth2 Proxy appropriately. The key thing to look at is under the spec for the containers. We have the provider set to oidc, and the URL for that provider is given by the OIDC issuer URL; you'll see it's set to dex.dex, which is already running in our Kubernetes cluster. The next important piece of config is the OAuth2 client ID, the OAuth2 client secret, and the redirect URL. This represents OAuth2 Proxy as a client, and if you remember, we registered an OAuth2 client in our Dex values file. I want to highlight that right now, because it's important for the client config in Dex to match the OAuth client config you're seeing here for the authentication flow to work. If you're facing problems, this is a good place to start debugging. The last piece of config I want to point out before we move ahead is the upstream config. You'll see that it's set to pacman-actual.pacman. When authentication is successful, from OAuth2 Proxy's point of view, the upstream URL is the URL that it's going to proxy to. We're going to set up the pacman-actual service towards the end of the demo. So we're done with the deployment; let's look at the OAuth2 Proxy service YAML. It's of type ClusterIP; we're not exposing it outside the cluster. The port number is 4180. We then run kubectl create -f with the service file, watch for the pods, and wait for them to be ready. So far so good: OAuth2 Proxy is running. Next we port-forward the OAuth2 Proxy service as well; it will be listening on the localhost IP and port 4180. All right, let's finish the third box.
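The configuration just described could be sketched as OAuth2 Proxy command-line flags in the container spec. The flag names come from OAuth2 Proxy's documentation; the image tag, URLs, and secrets are placeholders from this walkthrough and must line up with the static client registered in Dex:

```yaml
containers:
- name: oauth2-proxy
  image: quay.io/oauth2-proxy/oauth2-proxy:latest   # image tag assumed
  args:
  - --provider=oidc
  - --oidc-issuer-url=http://dex.dex:5556
  - --client-id=oauth2-proxy                        # must match Dex's staticClients id
  - --client-secret=<client-secret>                 # must match Dex's staticClients secret
  - --redirect-url=http://oauth2-proxy.pacman:4180/oauth2/callback
  - --upstream=http://pacman-actual.pacman:8080     # where to proxy after auth succeeds
  - --http-address=0.0.0.0:4180
  - --email-domain=*                                # demo only; allow any email domain
  - --cookie-secret=<random 32-byte value>
```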
Now before we proceed, there's a step we have to complete for this end-to-end flow to work. We want to edit the system's hosts file so that when the user accesses the Pac-Man app through the browser, the browser can resolve the redirect targets to the localhost address where each server is listening. You know that we've deployed Dex and OAuth2 Proxy, and we've run port-forward for both of them, so they're listening on the localhost IP. But how do you tell a browser to reach those services? We're not using DNS to do that; we're using the hosts file to do that resolution for us. There are two lines: one for Dex and the other for the OAuth2 Proxy service. I'm going to go back to our GitHub repo where we have our steps. Let's look at the hosts file; you'll see I've already added lines for dex.dex and oauth2-proxy.pacman. All right, so now we're getting closer to the final workable demo. We're going to install Pac-Man now using Helm. I run helm repo add pacman with the URL for the chart where we have this app; since I've already added it previously, it says it exists. I run helm repo update for this particular Helm project, and everything looks good. We now run helm install pacman in the pacman namespace, and you'll see Helm dump output about the status of the installation. Let's watch for the pods to come up. You'll see that the first entry is the OAuth2 Proxy pod that we deployed previously in the pacman namespace. The second is the Pac-Man app that has the logic for the game. And the third is for storing persistent data related to the Pac-Man application. While we wait for that to complete, let's talk about the next few steps. Once the pods are all ready, I'm going to port-forward the Pac-Man application and then try to access it from my browser without authentication, right?
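For reference, the two hosts-file entries I mentioned a moment ago (in /etc/hosts on Mac and Linux, or C:\Windows\System32\drivers\etc\hosts on Windows) are simply aliases for the loopback address, where the port-forwards are listening:

```text
127.0.0.1   dex.dex
127.0.0.1   oauth2-proxy.pacman
```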
So we've set up the pieces, but there's some glue work still left. I'll show you that I can access it without authentication for now, and we'll fix that eventually so that it works with authentication. All right, so we have all three pods running at this point. I executed a port-forward of the pacman service, and it's listening on port 1990, so any HTTP traffic that I open in my browser will be forwarded to port 1990. You can see that the Pac-Man app is up and running. I'm able to access it without authentication, but the whole goal of today was to secure access to this highly critical Pac-Man app, and that takes us to our next step. If you remember from the diagram, any HTTP traffic meant for Pac-Man needs to be redirected to OAuth2 Proxy. To do that, we're going to update the pacman service to target port 4180 instead of the original port 8080. I have these patch commands ready for doing this. Let's stop the port-forward for now. You'll see the patch command is kubectl patch service, followed by the service name pacman in the pacman namespace. The type of patch we're going to run is a JSON patch, the operation is a replace, and what we're replacing is the path for the target port of the service, which is /spec/ports/0/targetPort; I'm setting it to the value 4180 instead of the original 8080. The other thing we have to update in the service is the selector. The selector determines which pods the traffic is routed to, and we're routing that traffic to the OAuth2 Proxy deployment instead of the actual Pac-Man app. Let's run the port-forward again and see what happens this time. I go back to my tab and reload it, and you'll see... this is OAuth2 Proxy's sign-in page. We have been redirected to the second box in our flow.
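As an aside, the two JSON patch operations from a moment ago might look like this as a patch body. The selector label app: oauth2-proxy is an assumption for illustration; use whatever labels your OAuth2 Proxy deployment's pods actually carry:

```json
[
  { "op": "replace", "path": "/spec/ports/0/targetPort", "value": 4180 },
  { "op": "replace", "path": "/spec/selector", "value": { "app": "oauth2-proxy" } }
]
```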
Since OAuth2 Proxy is configured with Dex as an OpenID Connect provider, you see a button show up here: Sign in with OpenID Connect. When I click it, the flow takes me to Dex's login page. Right, so I've come to a new window here. You'll see Dex's logo show up by default, and dex.dex in the browser's address bar at the top. Notice that the reason this works is that we updated the hosts file, so when the browser accesses dex.dex, it's directed to the localhost port 5556 where Dex is listening. I'm going to enter the credentials for our production admin user, which we added to our LDAP server; the password is test password admin. I click login. Hold off on the 403 for now; I'll explain a little bit about that later. What has happened now is that I did get authenticated successfully, and the flow came all the way back to OAuth2 Proxy; I just have to redo this flow again. All right, so this is where I wanted to reach at this stage of the tutorial. You saw the traffic redirected from Pac-Man to OAuth2 Proxy. You saw the Dex login screen. I entered the production admin's credentials, which were correct. I got redirected back through Dex to OAuth2 Proxy, and I got a 502 Bad Gateway error, because we still have one step remaining, which is the arrow over here. We still haven't completed this part of the puzzle, and we need to deploy a service that OAuth2 Proxy can proxy to. That's going to be the last step to see the demo working end to end. Let's take a look at the final service that's going to bridge everything we need. This service is called pacman-actual, and it's the actual service that's going to serve traffic. It's of type ClusterIP, and the port is 8080. I want to show you the OAuth2 Proxy config again to put things into context here.
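A sketch of that Service manifest is below. The selector label is an assumption; it must select the Pac-Man application pods themselves, not the proxy, or you'd just loop traffic back through OAuth2 Proxy:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: pacman-actual
  namespace: pacman
spec:
  type: ClusterIP
  selector:
    app: pacman           # assumed label; must match the Pac-Man app's pods
  ports:
  - port: 8080
    targetPort: 8080
```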
So if you remember from the deployment, we have an upstream field that says it's going to redirect to the pacman-actual service on port 8080. That's the service we're deploying here, which is the final piece of this puzzle, where we want to fix this part of the traffic flow. So let's create the pacman-actual service and then port forward again. I'll go back to my browser and restart the authentication flow, just repeating all the steps in the flow again. I've been redirected to OAuth2 Proxy. Let's sign in. I've been redirected to Dex. I've entered the admin's credentials. Just ignore that for now; I'll explain what's happening there again in a few slides. This time the flow went all the way from Dex to OAuth2 Proxy, and on to the upstream service called pacman-actual, and I can finally play Pac-Man happily, knowing that access to it is secured. Nobody can go and delete my scores; maybe the high scores are really important. I just want to highlight that redirection part again: we changed the target port from 8080 to 4180, and pointed the selectors in the pacman service at OAuth2 Proxy's deployment, so that the redirection to OAuth2 Proxy happens. We saw the 502 Bad Gateway, we fixed it by deploying that last service called pacman-actual, and then we went through the whole flow. Now I just want to go back to the 403 that we were seeing and explain what was happening there. You've already noticed twice that during the login process I saw a 403. This has to do with the token that's being stored in the session. Remember, initially during my slides I said we don't want to rewrite the Pac-Man app at all. Well, that was mostly true. Whether it's Pac-Man or any application that you want to deploy in your cluster, you have to have sign-out and sign-back-in flows. Your sign-out flow is going to clear sessions for you, so that future sign-ins don't hit errors such as the 403 we were seeing.
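The bridging service created in this step might look something like the manifest below. This is a sketch under assumptions: the `app: pacman` label in particular is a guess, so reuse whatever selector the original pacman service had before we patched it.

```yaml
# Hedged sketch of the pacman-actual service that OAuth2 Proxy's upstream points at.
apiVersion: v1
kind: Service
metadata:
  name: pacman-actual
  namespace: pacman
spec:
  type: ClusterIP
  selector:
    app: pacman        # assumed label; use the original pacman service's selector
  ports:
    - port: 8080       # the port named in OAuth2 Proxy's upstream setting
      targetPort: 8080 # the port the Pac-Man container listens on
```

The design here is that the original pacman service becomes the authenticated front door (pointing at the proxy), while pacman-actual quietly carries the proxied traffic to the real pods.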
Now, the way you do that with OAuth2 Proxy, for example, is through certain endpoints that it supports as a service. Maybe this is not very visible, but in my URL here I have /oauth2/sign_out. Sign-out makes sure that the session is cleared. Once I go to sign out, it takes me back to OAuth2 Proxy's login screen. This time, if everything worked as I expected, it shouldn't show me the 403. And you see that I don't get a 403 this time, because I cleared the session I had previously; I don't see that error anymore, and I'm redirected to Pac-Man on a successful login. So we're almost there. I wanted to highlight the logs that you'll see in Dex, which will help you understand what's happening in Dex's backend, and also to find out whether the configuration you made in Dex is actually doing what it's supposed to; the logs are useful for that purpose. Here I have four log messages that got generated when I logged in as the production admin user. The first log is telling you how Dex performs a user search: you have the base DN from your Dex config, the optional filter, inetOrgPerson, and the uid attribute set to the username that I entered on Dex's login screen. The second log message shows that Dex was able to map the production admin username to an actual LDAP record with a distinguished name that looks like this, and you'll remember it from the LDAP search that we ran against the OpenLDAP server. So this is an example of a log when login is successful. The third log, an info-level log again, shows how Dex performs a group search. This is again based on the config in the Dex values file: we have the base DN and the optional filter from the groupSearch section of Dex's config, and we're looking for the member that we just found in the previous step. The last log shows that login was successful: a user was found, and notice the groups.
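The searches in those logs are driven directly by the LDAP connector section of Dex's config. A hedged sketch of what that section might look like for this tutorial (the field names are Dex's real LDAP connector keys, but the host, DNs, and filters are assumptions inferred from the logs, not the tutorial's exact values file):

```yaml
# Hedged sketch of a Dex LDAP connector matching the user/group searches in the logs.
connectors:
  - type: ldap
    id: ldap
    name: OpenLDAP
    config:
      host: openldap.openldap:389            # assumed address of the OpenLDAP service
      insecureNoSSL: true                    # tutorial only; never do this in production
      bindDN: cn=admin,dc=example,dc=org     # assumed admin bind DN (see production notes)
      bindPW: <bind-password>                # placeholder
      userSearch:
        baseDN: ou=users,dc=example,dc=org   # assumed; the base DN in the first log
        filter: "(objectClass=inetOrgPerson)"  # the optional filter from the logs
        username: uid                        # attribute matched against the login name
        idAttr: uid
        emailAttr: mail
        nameAttr: cn
      groupSearch:
        baseDN: ou=groups,dc=example,dc=org  # assumed; the base DN in the group-search log
        filter: "(objectClass=groupOfNames)"
        userMatchers:
          - userAttr: DN
            groupAttr: member                # "looking for the member we just found"
        nameAttr: cn
```

Reading the four logs against this config is a good way to verify each field: the first log exercises userSearch, the third exercises groupSearch, and the last confirms which group names end up in the token.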
Both the readers and Pac-Man admins groups are present, per Dex's logs, which means Dex is going to include them in the JSON Web Token that goes back to the client. My next slide is nearly identical to the previous one. The difference is that it's a different user, the production config user, and the other big difference is, remember I told you there were two groups in the LDAP server: a group called readers that had all three users, and the group we added today, Pac-Man admins, which had only two. Since the config user is not part of that Pac-Man admins group, you see just one group show up here. All right, so having gone through today's tutorial, you've deployed OpenLDAP, Dex, and OAuth2 Proxy to get hands-on experience with how you can authenticate access to applications using an OpenLDAP server. When you're ready to move this to production, you'll want to figure out how to configure Dex so that it can talk to an Active Directory server instead. So, a few things to highlight for when you're ready to do that, and hopefully you walk away from here confident enough to do it. You'll want to talk to your Active Directory team about the host and port number the Active Directory server is available on. You definitely want to fix your SSL config so that it's secure, and most likely your Active Directory team will mandate that you do. The other thing: remember we used the LDAP admin credentials. That's not a best practice in production; your Active Directory team will create a user specifically for Dex to talk to the Active Directory server, and you should use that bind DN and password here. And I briefly touched on this earlier too: we set the username attribute to uid here, but in an Active Directory scenario this would be something like sAMAccountName or, I believe, userPrincipalName. I think that's everything Active Directory-specific that I wanted to bring up here.
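Putting those production notes together, the Active Directory variant of the connector might differ from the OpenLDAP one roughly as sketched below. Every concrete value here (host, DNs, CA path, attribute choices) is an assumption for illustration; get the real ones from your Active Directory team.

```yaml
# Hedged sketch: Dex LDAP connector pointed at Active Directory instead of OpenLDAP.
connectors:
  - type: ldap
    id: ad
    name: ActiveDirectory
    config:
      host: ad.corp.example.com:636          # host/port from your AD team
      insecureNoSSL: false                   # use LDAPS in production
      rootCA: /etc/dex/certs/ad-ca.pem       # CA certificate for the AD server (assumed path)
      # Dedicated service account created by the AD team, not the admin user:
      bindDN: cn=dex-svc,ou=service-accounts,dc=corp,dc=example,dc=com
      bindPW: <service-account-password>     # placeholder
      userSearch:
        baseDN: ou=users,dc=corp,dc=example,dc=com
        filter: "(objectClass=person)"
        username: sAMAccountName             # instead of uid (userPrincipalName also common)
        idAttr: DN
        emailAttr: mail
        nameAttr: cn
```

The structural shape is the same as the OpenLDAP config; it's the transport security, bind identity, and attribute names that change.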
All right, a few links for further reading. We have dexidp.io, the official Dex website, which has additional documentation. Dex supports lots of other connectors, not just LDAP: you can do GitHub-based authentication, Google, SAML connectors, and there's a long list you can look up depending on your users' needs or your own. There's a Slack channel where the maintainers of Dex are active and will answer questions, so I highly recommend joining it. I have links for the OAuth2 Proxy project as well, which has great documentation on some of the configuration I've shown you in the OAuth2 Proxy deployment. We have the link for OpenLDAP, and the last link is my GitHub repo. It's a public repo, and it'll always stay that way, if you want to go back and run through these steps; all the prerequisites and the steps from today's tutorial are present there. And that's the tutorial for today. I hope you all enjoyed it, and I'll keep it open for Q&A. If you want to come up to the mic and ask any questions, I'll be happy to answer them, or if you're stuck somewhere in the tutorial right now, we'll be happy to unblock you. The last thing I want to touch on is that at Kasten, we're hiring. You can look at our kasten.io/careers page if you're interested in working with us. We have roles in the backend team, front-end team, automation, and a cloud native position open as well, so if you're interested, come talk to me after the tutorial, or at Kasten's booth, and we'd love to connect with you. Thank you, everyone; pleasure being here. Can you hear me? Can you check if the mic works up front here? Yeah, I think it's working. There we go. So, if I wanted to deploy Donkey Kong, for example, do I need to deploy another Dex server and proxy?
That's a good question. You can reuse your Dex server; remember, I created it in a separate namespace. You'd have to deploy a separate OAuth2 Proxy for the Donkey Kong app, though, and you'd have to register it as a client in Dex, so there's a bit of config to update in the existing Dex server. Notice from the values file that staticClients is an array, so it can accept multiple clients. All you need to do is deploy Donkey Kong, deploy a separate OAuth2 Proxy for it, and update your Dex config, and that will do it. Thank you; yeah, thanks for the question. All right, I think that's time then. Thank you very much for attending this tutorial. I hope it was useful and that you're walking away confident about how to deploy these projects. Thank you so much for your time, and have an awesome KubeCon, everyone.
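As a footnote to that last answer, the staticClients update for a second app might look roughly like this. The ids, secrets, and redirect URLs below are assumptions for illustration only; `redirectURIs` is Dex's real field name for the allowed callback URLs.

```yaml
# Hedged sketch of Dex's staticClients array with a second client added.
staticClients:
  - id: pacman                         # assumed id of the existing client
    name: pacman
    secret: <pacman-client-secret>     # placeholder; matches the proxy's --client-secret
    redirectURIs:
      - http://localhost:1990/oauth2/callback
  - id: donkey-kong                    # new client for the second app (assumed name)
    name: donkey-kong
    secret: <donkey-kong-client-secret>
    redirectURIs:
      - http://localhost:1991/oauth2/callback  # wherever its own OAuth2 Proxy is reachable
```

Each app then gets its own OAuth2 Proxy deployment pointing at its own client id and secret, while the single Dex instance serves them all.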