All right, that looks good. Again, thank you for showing up today for our webinar, Security and Access for Kubernetes, with Jonathan Canada, our sales engineer. Jonathan has a lot of experience and a lot of certifications in the industry, and he's gonna go over a workflow for using RBAC and SSO for Kubernetes with open source Teleport and GitHub Teams. So Jonathan, take it away. Thanks, Micah. Hi, everyone. As Micah mentioned, I'm Jonathan Canada. I'm a sales engineer here at Teleport. In this webinar, I'm going to talk about best practices for controlling access to Kubernetes clusters using role-based access control, so native Kubernetes RBAC capabilities, and then tying those roles to SSO identities using open source Teleport. Everything I'm going to discuss today is all open source, all free tools. And if any questions come up, please feel free to put them in the Q&A section of the Zoom session and we'll definitely get to them at the end. Here's the agenda for today's webinar. I'll first start with a high-level overview of security, discussing attack surfaces, security architecture, and zero trust concepts, and tying those into how you might approach securing Kubernetes workloads. I'll then move into using a unified access plane, or an access gateway, for accessing infrastructure. I'll then discuss native RBAC capabilities within Kubernetes. I'll then move into an overview of my demo environment, followed by an actual demo. And finally, I'll answer any questions that have been added to the Q&A chat. In this section, I'll define a few key security concepts so that we have a common set of definitions to work from. The first is an attack surface. According to NIST, an attack surface is the set of points on the boundary of a system, a system element, or an environment where an attacker can try to enter, cause an effect on, or extract data from that system, system element, or environment.
So in other words, an attack surface describes all of the vectors for exploitation. This slide shows the different technology layers you'll find within an organization, and as you look at each one of these layers, you'll want to perform an attack surface analysis to determine how you can best mitigate potential threats and vulnerabilities within each of them. Some questions to ask within each one: For networks and infrastructure, how are users and devices accessing your networks and servers? Is SSH being used? Applications: what applications are running in your network? Who has access to those? How are those being secured, as well as the underlying operating system and host? How is data being protected? Endpoints: how do you control which users are authorized to access different parts of your network and different parts of your infrastructure? And lastly, cloud: are there open S3 buckets? Are API keys being shared or not rotated? A Kubernetes deployment can span all of these layers, so it's critical to think about how you can properly secure your Kubernetes API server, the applications running in your clusters, the infrastructure the clusters are running on, and the endpoints your users are accessing the Kubernetes clusters from. The last high-level security concept I'll mention is Zero Trust. Zero Trust is a model that was developed in response to NIST's request for feedback on a document called "Developing a Framework to Improve Critical Infrastructure Cybersecurity." Now, according to the analysts at Forrester, the three main concepts of Zero Trust are: ensure all resources are accessed securely regardless of location; adopt a least-privileged strategy and strictly enforce access control; and lastly, inspect and log all traffic. At a high level, how Teleport can fulfill each of those concepts is that Teleport uses end-to-end TLS encryption.
Open-source Teleport can map GitHub teams to groups and users that exist within Kubernetes clusters, so RBAC within Kubernetes. And then lastly, with Teleport, all kubectl requests are fully logged and even kubectl exec sessions themselves are recorded. So now we'll discuss using a unified access plane. When you talk Kubernetes, the odds are that you will end up with multiple clusters. You might have a development cluster, a staging cluster, a production cluster, et cetera. You might have clusters per region. But in order to deal with all of those different clusters, you really want to centralize the access. You wanna pipe all of your developer access to your clusters through a single choke point, like a proxy or a gateway, so that you can enforce your policies there. And that gateway is a great place to attach your SSO identities to any requests and any modifications. That way you get accountability and user attribution from everyone in your organization about who does what, and when, to your production and staging environments. So you really want to make sure that the gateway only allows SSO users through and records their identity with their requests. The other thing is, even though you're using Kubernetes and it hides a lot of the underlying hardware and infrastructure, as I mentioned in that layers slide, all of that is still there. The servers are still there. There are applications in your clusters. There are still operating systems, and most likely you still manage everything underneath Kubernetes, so the worker nodes and the control plane, accessing them via SSH or something similar. So you want to remember all those different parts that go into a Kubernetes deployment, and you should do all of the same enforcement for Kubernetes as you do for SSH or internal applications; that is, attach SSO identities to your access control.
And when using SSO identities, you should map users and groups that exist in your identity provider to users and groups that you create within your Kubernetes clusters. Groups and users within Kubernetes can be created using native Kubernetes role-based access control, which is what I will get into now. Role-based access control, or RBAC, is a method of regulating access to computer or network resources based on the roles of individual users within your organization. So this means use least privilege and grant access based on a user's role within your organization. Now, a Kubernetes RBAC Role or ClusterRole contains rules that represent a set of permissions. Those permissions are purely additive, so there are no deny rules. As for the differences between Roles and ClusterRoles: a Role in Kubernetes always sets permissions within a particular namespace, so when you create a Role, you have to specify the namespace it belongs in. A ClusterRole, on the other hand, is a non-namespaced resource. The resources have different names, Role versus ClusterRole, because a Kubernetes object always has to be either namespaced or not namespaced; it can't be both. If you want to define a role within a namespace, use a Role. If you want to define a role cluster-wide, use a ClusterRole. In my demo, I'll demonstrate using a Kubernetes Role to limit a group to a single namespace, so I won't be using a ClusterRole. As far as my demo goes, I'll first provide an overview of my demo environment. Within GitHub, I've created an organization that I've called Kubernetes org. I've created two GitHub users, an admin user and a dev user. I've then created two GitHub teams, an admins team and a devs team. My admin user is a member of both teams, whereas my dev user is only a member of the devs team. I've also configured Kubernetes so that the devs team can only operate within the dev namespace.
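As a minimal sketch of the Role-versus-ClusterRole distinction, here is a namespaced Role next to a cluster-wide ClusterRole. The names and rules here are illustrative, not the ones from my demo:

```yaml
# A Role is always namespaced: it grants permissions only inside "dev".
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: dev
  name: pod-reader
rules:
  - apiGroups: [""]          # "" means the core API group
    resources: ["pods"]
    verbs: ["get", "list", "watch"]
---
# A ClusterRole is not namespaced: it applies cluster-wide, which also
# lets it cover non-namespaced resources such as nodes.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: node-reader
rules:
  - apiGroups: [""]
    resources: ["nodes"]
    verbs: ["get", "list", "watch"]
```

Note there is no `deny` anywhere: RBAC permissions are purely additive, so anything not granted by some Role or ClusterRole is simply forbidden.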
They're not going to be able to do any actions outside of that one dev namespace. I've set up two Kubernetes clusters and two internal applications. I've enabled SSH on one of my two Kubernetes clusters, and I'm using Let's Encrypt for signed HTTPS certificates on my Teleport proxy. So again, everything I'm doing here is completely freely available and open source. These are my two users, again, the admin user and the dev user, and they want to access some resources down here, whether it's an SSH server, a Kubernetes cluster, or an internal application. From their perspective, they will never directly access any of these. That's one added layer of protection: you can now keep your Kubernetes API server hidden from the public. From these users' perspective, they go through the proxy before they access anything down here. And how they go about accessing these different items is they can use tsh, which is Teleport's CLI tool; they can use kubectl, normal kubectl with Kubernetes; or they can use a web UI that's served from the Teleport proxy. For my demo, I have the Teleport proxy and the Teleport authentication service both running on a single EC2 instance. For a production deployment, I would advise separating these two components and making each one highly available. So you might have a load balancer in front of a group of proxies, and a load balancer in front of a group of authentication servers. The role of the authentication server is to store all the logs that Teleport is generating. Those logs are a trail of who is doing what, and when, and it's all tied back to their identity. The other log type is session recordings, so SSH sessions and kubectl exec sessions. The authentication server also acts as the certificate authority. So when a user goes to authenticate to access something down here, what I have set up is this SSO integration: I created an OAuth app within my GitHub organization.
And based on each of these users' group membership within my GitHub organization, they will be able to access resources accordingly. The admin user is gonna be able to access everything. My developer user will be limited to only being able to SSH as the ubuntu user, and they will be limited to the dev namespace within my Kubernetes clusters. So when either one of these successfully authenticates, they will receive a short-lived SSH certificate that they can use for accessing SSH servers, and they will also receive a short-lived kubeconfig that they will use for interacting with Kubernetes. And again, everything they do is tied back to their identity as it is within GitHub, for this example. So I will now switch over to my demo environment. The first thing that I will show here, first of all, this is my local laptop, so I'm not remotely accessing anything right now. How I've set up the integration between my GitHub organization and my Kubernetes clusters is using a GitHub authentication connector. If I take a look at that in here, the part that I will highlight is this teams-to-logins piece. What this is saying is: here's my organization, Kubernetes org, and here's a team within that organization. Anybody who is a member of this team called admins will be allowed to SSH as root or as ubuntu, and here's where they get mapped to any Kubernetes groups that I've created. In this case, they're getting mapped to the system:masters group, which is an admin group within Kubernetes clusters, so they're gonna be able to do essentially anything. Whereas here, I've created this mapping, also within the Kubernetes org in GitHub, for this team I've created called devs. Somebody who's a member of this team within GitHub can only SSH as the ubuntu user, and I'm mapping them so they can only be part of the devs group within Kubernetes.
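A GitHub connector along these lines could express the mappings just described. This is a sketch, not the exact file from the demo: the organization slug `kubernetes-org`, the connector name, and the client ID, secret, and callback URL are all placeholders you would fill in from your own GitHub OAuth app:

```yaml
kind: github
version: v3
metadata:
  name: github
spec:
  display: GitHub
  client_id: <oauth-app-client-id>          # from your GitHub OAuth app
  client_secret: <oauth-app-client-secret>
  redirect_url: https://teleport.example.com:3080/v1/webapi/github/callback
  teams_to_logins:
    # admins team: full SSH access plus the Kubernetes admin group
    - organization: kubernetes-org
      team: admins
      logins: ["root", "ubuntu"]
      kubernetes_groups: ["system:masters"]
    # devs team: ubuntu-only SSH, and only the devs Kubernetes group
    - organization: kubernetes-org
      team: devs
      logins: ["ubuntu"]
      kubernetes_groups: ["devs"]
```

The `kubernetes_groups` values are what Teleport stamps into the short-lived certificates, which is how the GitHub team membership ends up being evaluated by Kubernetes RBAC on the other side.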
And I'll show you that the devs group can only be part of the dev namespace in my clusters. So if I show you the role that I created for my two clusters, it's pretty straightforward, this one. This is saying: this is in the dev namespace, this is the name of this role, and here are the rules. Somebody who has this role is going to be able to do all these verbs on these resources and API groups, so essentially everything within this dev namespace. And I've also created a role binding, which is going to bind a subject to that role. In my case, the subject is of kind Group, and the group name I'm creating is devs, also within the dev namespace. And this role binding references this dev-access role that I just showed you. So this is how I'm creating this devs group and then also mapping it to the role so that somebody in the devs group can only do things within the dev namespace. Now, to actually log into my Teleport cluster to show you my Kubernetes clusters and everything that Teleport can do for me here, I'm going to use tsh, that CLI tool used by Teleport. I'm going to tsh login to this proxy that I have. This is its URL, and this proxy is being served from that EC2 instance that I mentioned a moment ago. What I'm going to do here, so by default it's going to open my Chrome browser, but I have two other browsers open. I have a Safari browser; this is where I'm signed in as my admin user in this GitHub organization. Then I also have a Firefox browser open where I'm logged in as this dev user, who's also part of that Kubernetes organization. So what I'm going to do is copy and paste that link that Teleport gave me. It allowed me to successfully log in via GitHub because I was already logged into GitHub here as this user. If I return back to my terminal window, I can see I've successfully logged in as this user. So that's my admin username. This is the cluster.
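The Role and RoleBinding described above would look roughly like this. The resource names `dev-access` and `dev-access-binding` are my guesses at the demo's naming; the namespace, the `devs` group, and the wide-open rules follow the description:

```yaml
# Grants "essentially everything", but only inside the dev namespace.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: dev-access
  namespace: dev
rules:
  - apiGroups: ["*"]
    resources: ["*"]
    verbs: ["*"]
---
# Binds the devs group to that Role. This is also what effectively
# "creates" the devs group: a group exists in Kubernetes RBAC simply
# by being referenced as a subject.
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: dev-access-binding
  namespace: dev
subjects:
  - kind: Group
    name: devs
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: dev-access
  apiGroup: rbac.authorization.k8s.io
```

Because the Role is namespaced, a member of devs gets no permissions at all outside of dev, even though the verbs and resources inside it are unrestricted.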
I get these SSH users that I can log in as, and here's that system:masters group that I get to be a part of. So now if I do tsh kube ls, I can see that there are two Kubernetes clusters that are part of this Teleport deployment. If I want to switch between the two, you can see right now I'm logged into cluster one, and if I do tsh kube login against cluster two, it switches me over. And I can do any kubectl requests, like get pods, so all the usual stuff. And if I kubectl exec into one of these pods, I'll do that here. That's the pod I'm going to go into with a bash shell. So now that I'm in here, I'm in this pod. I can do any sysadmin stuff I want to do. I jump around, maybe do top, fix what I need to fix. When I'm done, I can exit out of this. What I'll show you now is the UI, so my Teleport UI, and I'm going to do it in the Safari browser window because this is where I'm logged in as my admin user. Here, if I go to the login page, so this is the same proxy I was just interacting with in my terminal window, teleport.gravitational.io, I'm going to log in using my GitHub team. So I enter here. First of all, these are two servers that I've enabled SSH access on. This one is that cluster one Kubernetes deployment I showed you when I first logged in via tsh. If I wanted to SSH into this, I could click connect and choose one of these two users to SSH as, and this other server here, this is my actual proxy that I'm on right now, where this UI is being served from. But what I want to show you here is the activity and audit logs. Built into Teleport is this robust audit log feature, so I can see even Kubernetes requests if I click the details on this. I can see lots of information, IP addresses. I can see it was some request on this cluster; I can see that it was a get on pods. And everything in here is all tied back to this user within my SSO, so within my GitHub teams.
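The terminal steps just described can be sketched as a short command sequence. The proxy address is the one from the demo; the cluster name `cluster-two` and the pod name are placeholders, since the transcript doesn't give the exact identifiers:

```shell
# Log in to the Teleport proxy; this opens a browser for GitHub SSO
# and drops short-lived SSH and kubeconfig credentials on success.
tsh login --proxy=teleport.gravitational.io

# List the Kubernetes clusters registered behind this Teleport deployment.
tsh kube ls

# Switch the kubeconfig context to the second cluster.
tsh kube login cluster-two

# Normal kubectl now flows through the Teleport proxy and is audited.
kubectl get pods
kubectl exec -it <pod-name> -- /bin/bash
```

Every one of these kubectl calls shows up in the Teleport audit log tied to the GitHub identity, and the exec session itself is recorded.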
So I no longer have to guess who might be SSHing or who might be using kubectl against the cluster. It's all tied back here. And even though there's all this robust auditing built in, you can also set up third-party integrations, so you could have your logs be sent to something like Splunk or Elastic, and I'll show you next what it might look like to use an ELK stack. But first, I'm gonna click this session player button. This is the full session from when I used kubectl exec a moment ago; this is everything that just occurred. So this feature, I mean, auditors love this. I've also known a lot of developers, myself included, who love this when they have to go back and try to configure something that maybe they have not configured for a while. Maybe they've forgotten some steps. You can come back in here, watch one of these, copy any commands that are occurring, and then use this to reconfigure something. So again, the two audit log types that Teleport produces are these who's-doing-what audit logs and also the session recordings like I just showed here. I mentioned that I'd show you an ELK stack that I've deployed here. So I have this internal ELK stack that I've deployed that I'm using to aggregate all my logs. If I come in here, the only way to access this is via Teleport; this is in a private subnet. If I go to Discover, I can come in here and use queries to search for specific items. In this case, what I'm looking for are any kube requests or any session starts. And so I can start building out queries and building out alerts on anything I want to. The other application that I'll show you while I'm in here is this Kubernetes UI. You might have applications that you've deployed in your Kubernetes cluster, and what you can do here is expose those applications via this Teleport UI. So this Kubernetes dashboard, I've deployed into one of my clusters, and it's only accessible via Teleport.
So I would have to use that same SSO login to actually access something like this. You can get creative and imagine all kinds of different applications you might deploy in Kubernetes and then expose via Teleport, maybe Jenkins, Grafana, all that stuff. And if I also go back to my audit logs, you can see that even opening these applications is generating audit logs. I can see app session started, and if I view the details, I can see it was this Elastic, this ELK stack. And again, everything is tied back to this user's identity. The last thing I'll show as part of this demo is, so again, I mentioned that I created this integration with my GitHub organization and teams within my GitHub organization. I first showed you just now that somebody who is part of the admins team is able to SSH as root and ubuntu and gets to be part of the system:masters group, and it's because of this mapping I set up here. I'll now show you somebody who's in the devs team and how they can only SSH as ubuntu and can only be part of the Kubernetes devs group. So let me log out as that admin user. I'm gonna log back in to my proxy here, and I'm going to copy and paste this into my Firefox browser because this is where I'm signed in as this dev user. If I paste that in here, successful login, come back here, you can see that I've logged in as that dev user, I can only SSH as ubuntu, and I'm part of this devs Kubernetes group. So if I try to do something in the default namespace, it's gonna fail, because this user is only allowed to do things within the dev namespace. But if I now do kubectl get pods on the dev namespace, now I can actually get stuff here. So to summarize: I created an EC2 instance, deployed a Teleport proxy onto that EC2 instance, and I used two different Kubernetes clusters that were deployed behind my Teleport deployment. I added two applications: one application was an ELK stack, and one was a Kubernetes UI that I had deployed within a Kubernetes cluster.
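The dev user's namespace restriction plays out at the command line roughly like this (the denial message is the general shape kubectl prints for an RBAC rejection; the exact user name is whatever identity Teleport stamped into the kubeconfig):

```shell
# Logged in as the dev user, anything outside the dev namespace is denied:
kubectl get pods
# kubectl reports an RBAC denial along the lines of:
#   Error from server (Forbidden): pods is forbidden: User "..." cannot
#   list resource "pods" in API group "" in the namespace "default"

# The dev namespace works, because of the devs Role and RoleBinding:
kubectl get pods --namespace=dev
```

Nothing had to be configured per-user for this: the restriction follows entirely from the GitHub team membership mapping to the devs group, and the devs group being bound only inside the dev namespace.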
And I showed you how you can map teams within your GitHub organization to any groups that you create within your Kubernetes clusters. So now let me switch back to my slide deck here. To summarize what was covered throughout this webinar: remember that Kubernetes expands the attack surface of your environment. Introducing a new layer does not make any of the other layers less relevant, and again, those layers I'm referring to are things like the network layer, the cloud layer, end users, hosts, all of those. You should also turn SSH off for the majority of your engineering team; having both present increases the probability of you getting compromised. But if you do have SSH access enabled in your Kubernetes cluster, just be sure to apply role-based access control and synchronize the two so that they have the same authentication gateway and the same access gateway. Access to all of your different environments, dev, prod, et cetera, should all be controlled through the same gateway for access and for authentication. Then finally, role-based access control tied to your SSO identities should also be used, and be sure to regularly inspect and audit all access. So thank you for attending today's webinar. For next steps, here are a few links I recommend viewing. I highly recommend watching the webinar "Best Practices for Auditing Kubernetes," which covers best practices for auditing, logging, and generating alerts in Kubernetes.