Thank you, Jeremy, for the keynote. I'd like to take a brief moment to thank our amazing sponsors. Without their support, this security con would not be possible. So thank you to Red Hat, Sysdig, Uptycs, and VMware Tanzu for being our diamond sponsors, and also thank you to Appiro for being a platinum sponsor. With that, we are ready to begin the general sessions for the day. And first up, we have Mohan Atreya from Rafay Systems. Mohan will be talking about securing Kubernetes infrastructure using zero trust principles. Mohan. Thank you. Am I audible? Thank you. Good morning, everyone. Great to see so many people here. So I'm going to talk about a pretty interesting area. This is something we see a lot of users attempting now when it comes to Kubernetes, and I hope at the end of it you'll walk out with some interesting insights. If you have questions after the session, I'll be around, so just look me up and I'm happy to chat more. So why did we want to talk about this topic? A lot of you here are very technical, but there's a vast majority of people out in the world who use Kubernetes a little differently. If you've seen some of the recent reports, like from Shadowserver, there are nearly 380,000 Kubernetes clusters open on the internet. Think about that: open on the internet. That's not good. We play in this space, so we've been asking people, why do you do this? And a lot of them say, it's too hard if I have to go another way, through a secure tunnel or something like that. So that's the backdrop: why are so many people out there with open Kubernetes clusters on the internet? I mean, think about it. Your API server is the way you interact with your clusters as a user, it's open on the internet, and anyone can touch it. Not good, right? So we asked people, what do you typically do? And then, maybe there's a better approach.
So what people have typically done, and this will be very familiar if you've been in the industry for a while, is put up a bastion. If you go to a security team, they'll probably say, maybe you should use a bastion, right? Kubernetes is new to them. So what do they do? They set up a bastion, and you end up running a kube config right from the bastion onto your clusters in your VPC or wherever. And now you have a problem. Actually, two problems. First, from the bastion you can see everything in that VPC. If I'm an attacker, that's a juicy target, right? And number two, my colleague and I are going through the bastion using the same kube config, most likely. That's not good; now you can't tell who did what. So this turns out to be a problem. The second thing people do is say, well, this bastion thing is not working for me, maybe I'll put a jump host in front of it. I put up a jump host, and now the user, the developer in this case, is really, really ticked off, because now you're asking them to go through two jumps, right? Now, some gears are probably churning in your head. You're thinking, how is this any different from VMs? We've done this for VMs for ages. Why is it so painful? The problem is, in the old monolithic world, typically the only person doing this was the ops person. In the Kubernetes world, it's every developer. When you have 100 microservices, the poor ops guy doesn't know what broke where, so many developers need access to help out when you want to resolve issues, et cetera. And when you have hundreds of users attempting to do this, you're in a really painful situation, right? This is what people encounter in real life. The last thing we see people do is plonk a VPN in front. Not bad, right?
Except, can you afford it? These VPNs cost a lot of money, and if you have three or four data centers, it's game over from a budget perspective, right? You have that many VPN concentrators. The good thing is, now every user is using their own kube config, and they're doing it right from the laptop. Life is a little better, more secure. The primary issue is cost. And of course, this does not remove the attack surface problem: if I can VPN into my VPC, I see everything inside the VPC. Not good, right? So we've talked about how I can access stuff. Now let's talk about the second dimension: authorization. If I'm accessing Kubernetes, should a developer have the same kind of access as an administrator? You don't want cluster-wide privileges for every developer. So what do people do? They end up saying, hey, all the developers of this application in the acme namespace only get access within the acme namespace. You want to make sure that a developer who doesn't need cluster-wide privileges can only access things at the namespace level. But if I'm an operations person, allow me cluster-wide privileges, because I need that. So you know this fundamental mechanism, RBAC, built into Kubernetes; you have to use it, because it's the way you control your blast radius. Hello? Better? All right. Sorry, guys. So I'll just go back to that previous slide for two seconds. The quick summary there is something called RBAC, role-based access control, in Kubernetes. It's really important to understand, because you do not want every developer to have access to the entire cluster. You want to use RBAC so they can only see what they need to see. Really, really important. So what happens at scale? With one user or maybe one cluster, it's not a big problem. You can do all of this manually.
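The namespace-scoped versus cluster-wide split described above can be expressed directly with Kubernetes RBAC. A minimal sketch, assuming an "acme" namespace and group names of my own choosing (they are not from the talk):

```shell
# Developers of the app in the "acme" namespace get a namespaced Role;
# they can work with common resources there but see nothing cluster-wide.
kubectl create role acme-developer --namespace=acme \
  --verb=get,list,watch,create,update,delete \
  --resource=pods,deployments,services,configmaps

# Bind the Role to a developer group (group name is illustrative).
kubectl create rolebinding acme-developer-binding --namespace=acme \
  --role=acme-developer --group=acme-developers

# Operations staff get cluster-wide privileges via a ClusterRoleBinding
# to the built-in cluster-admin ClusterRole.
kubectl create clusterrolebinding ops-admin \
  --clusterrole=cluster-admin --group=ops
```

The blast radius of a leaked developer credential is then limited to the acme namespace, while an ops credential still exposes the whole cluster, which is why the talk goes on to argue these bindings should be ephemeral rather than permanent.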
What we see customers and users struggle with is when you have tens or hundreds of clusters, and, like I said about microservices, when you have hundreds of microservices you have hundreds of developers who constantly move from one team to another. Now your problem is magnified. You have hundreds of users, hundreds of developers, and many, many clusters. How are you going to manage all these RBAC policies and even keep track of them? Really hard. So if you're looking at a model where you do this manually, you're now in an impractical zone. It's just not possible to do this manually. This is where the processes fall apart, and you need some form of automation to make sure that the wrong people do not have access to the wrong things, or to more things than they need. So now, how can we solve both of these problems in one fell swoop? Let's look at it from a requirements perspective. If I'm an operations person, and I have hundreds of developers that need to access stuff, and I need to support tens or hundreds of clusters running in data centers in many regions, in Amazon, et cetera, the first thing is I cannot afford to put my clusters open on the internet. Everyone agrees with that. The next thing is to make sure I don't put my developers through this complicated bastion or VPN-based user experience, which everyone hates. None of us like that experience. So what you really need is to leverage what's been out in the market for years. If you've heard of companies like Zscaler, and all these companies that have been very pervasive, they all do this, but for web applications. Why not for kubectl? Why not for the Kube API? It all runs on HTTPS, right? So the first thing you need is a way to access these clusters even though the clusters are cloaked behind the firewall. Can you do that?
Yeah, if you can do that with Zscaler, why not here? The next thing: you don't want this RBAC to be permanently injected inside the cluster. You want to do that dynamically, just in time. Why is that really important? You don't want credentials permanently sitting on clusters, right? You want to remove them after the session is complete. So automated, ephemeral RBAC, really important. Next, all user access needs to be strongly authenticated, a given, right? And then finally, this is something people struggle with from a governance and compliance perspective. If I come and ask you, hey, what are the kubectl commands that ran against my cluster? Or rather, Johnny ran some commands yesterday, what did he run? People have no idea, right? There's no way to reconstruct that. What if there was? So these are the four things we see organizations need in order to have a sensible practice around running things at scale with your clusters, where you open up access to your developers, give them a fantastic experience, and make sure you're not running insecure. So for the remaining part of the session, I'm going to do two things. I'm going to talk about an open source project that we announced some time back, called Paralus. It's actually based on something we noticed in the market about three years ago. Three years ago, we all started working from home because of the pandemic, right? And we had coincidentally introduced a zero trust access capability in our platform, and that took off like a rocket ship. I don't have the charts here to show, but it was almost like a curve like that. And it's because people were working from home: everyone hated VPNs, nobody wanted bastions, and everyone wanted instant access to their clusters. So you're going to see that in action with this open source offering.
We open sourced it because we felt it's really important for the industry to move a couple of notches up from a security perspective. Everybody should be able to do this; you should not need a commercial offering to do this. So let's talk a little bit about Paralus. It's open source, Apache licensed, so you can pull it down and use it. And if you're using something like DigitalOcean, you have a one-click experience: just go there, search for Paralus, click it, install it, use it. Please participate in the community. We would love to see how we can improve this, and we'd love to have contributors. You're all pretty aware of how important the community is to making these projects successful. So help us out, let's make the industry more secure, try and participate in this project. So how does this work? You've heard about zero trust; let me show you architecturally, then I'll show you a live demo. I'm sure it'll blow your mind, right? The first thing you have is the Paralus access manager. This is a proxy, a kubectl proxy or Kube API proxy, on the internet. Then there's an agent running on every one of your clusters, and notice the arrow points outbound. The agent dials out on port 443 from your clusters, behind your firewalls, to the Paralus access manager and maintains a long-running control plane session; it's essentially keeping the connection alive. Now, the developer sitting at home accesses the Paralus access manager, and they don't know any different. They get a kube config, they run kubectl get namespace or something like that. They don't know if they're touching Paralus or the end cluster; they don't need to know. They're hitting the proxy, and the proxy authenticates them. It's a two-way authentication with digital certificates, very strongly authenticated.
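For those who want to try a self-hosted setup instead of the one-click marketplace install, the project publishes a Helm chart. This is a sketch based on the Paralus docs at the time of writing, so treat the repo URL, chart name (ztka), and value keys as assumptions to verify against the current documentation:

```shell
# Add the Paralus Helm repository and install the chart into its own
# namespace. The fqdn.domain value controls the hostname users connect to
# (replace the example domain with your own).
helm repo add paralus https://paralus.github.io/helm-charts
helm repo update

helm install paralus paralus/ztka \
  --namespace paralus --create-namespace \
  --set fqdn.domain="chartexample.com"
```

After the install completes, the dashboard and kubectl proxy endpoints come up on subdomains of the configured domain.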
Of course, everything is encrypted; it's HTTPS, right? And not only that, anything the user types is audited. If I type some funky commands, you can reconstruct the whole set of commands, and from a compliance and governance perspective this is really important. Once that authentication is complete, it's basically just bytes on the wire; the commands are flying back and forth between the client and the server. Now, before that, the downstream API server has no idea who you are. It has no RBAC for you on that cluster. So remember, at step two, when the user authenticated with the Paralus access manager, they also indicated which cluster they want to access, and RBAC is generated on the fly and a service account is injected into that cluster. This is all happening in milliseconds. So the API server is the one actually controlling everything behind the scenes. Paralus here is nothing more than the gatekeeper. It's just an access proxy, that's pretty much it: authenticating you, generating the RBAC, injecting the service account on the cluster. Sounds fascinating, right? I'm sure you want to see a very quick demo of that, so I'll do that right now. What you're going to see here: I have the Paralus access manager set up, running in New York on DigitalOcean. I'm here in Detroit, and hopefully my internet will work and you'll see me run a kubectl command against any of three clusters. I've simulated a typical environment any small company would have: a dev cluster, a staging cluster, a production cluster. And what I'm going to show you is that if I log in as an ops person, I can see everything, because that's what my access rules say.
And I'll have cluster-wide privileges, but if I log in as a developer, I will not have access to production, because that's the rule I set up here. Now, this should be easy, right? So why not see it in action? Let me do this very quickly; it's going to be complicated because I'm holding the mic and doing this at the same time. So this is the Paralus access manager. What you're seeing here are three projects: a dev project, a prod project, and a staging project. Inside the dev project I have a dev cluster, and if I want to kubectl into it, I don't know where it's running, and I don't need to know. I can just say, hey, get namespace, and I get the result. Now, the key thing here: you don't even see the service account being dynamically injected on the cluster. So what I'm going to do is flip to another cluster, just for kicks, and show you live; let's go inspect the service account. Let me type this out; it's hard to talk and type at the same time. Thank God for autocomplete. So if you look at the service account for this user, it was generated about two hours ago, when I was practicing and rehearsing for this demo. Now I'll log in as an ops person and we'll see it being generated live. So I'm going to authenticate as John. And if you look at John, the central access manager says John has access because he belongs to the ops group; he has access to the dev, prod, and staging projects. This mapping is done automatically based on group membership. Then I go into prod and do the same thing, hitting the same cluster. And if I look for the same thing, the service account, -n paralus-system, you'll see that for ops John, this email, just 10 seconds ago, the service account was dynamically created on that cluster.
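The service-account inspection from the demo can be reproduced with plain kubectl against the managed cluster. The paralus-system namespace name here is my assumption about where the agent keeps its objects, so adjust it to your install:

```shell
# List the service accounts Paralus has injected, with creation timestamps,
# to confirm they appear only after a user authenticates through the proxy.
kubectl get serviceaccounts -n paralus-system \
  -o custom-columns=NAME:.metadata.name,CREATED:.metadata.creationTimestamp
```

Running this right after a fresh login should show a service account created seconds earlier, matching what the speaker demonstrates on stage.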
Just 10 seconds ago. And when I log out, that service account will be flushed, which means nobody holds keys to the kingdom all the time. Every time I access, I need to authenticate. Really important. So if tomorrow I change roles, say they move me from ops to dev, I will not have this level of access anymore. Everything is automatic. That level of federation is really important. Now let's log in as the dev. I'm going to log in as Sally. Sally is a dev in this company, and because she belongs to the developer group, the company has decided that Sally and the other developers don't have access to the production environment. So as you can see here, only the dev and staging clusters are visible. Sally is also scoped to only one namespace, called apple; there's a namespace called apple on the cluster, and she only has access to that. Let's see what happens if Sally gets curious and attempts to see anything more. But before that, I'm just going to list all the pods in my apple namespace. Everything looks good. Now Sally says, you know what, I really am curious, I want to see who else is running what on this cluster, and I type a get namespace, and you notice access is forbidden. Why? Because the RBAC for Sally that was injected into that cluster is constrained down to the apple namespace. She can only see the apple namespace, nothing more. Imagine doing this for hundreds of developers who are constantly moving from group to group, or hundreds of operations people. Everything is automatic. You add people to the right group, and RBAC follows them. That's the power of Paralus. So, the last thing I want to show you is the audit trail. If you remember, as a user I was typing a bunch of things.
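Sally's forbidden call can also be checked without actually issuing the requests, using kubectl's built-in authorization probe. The namespace name and expected answers assume the demo's setup:

```shell
# Namespaced access inside "apple" should be allowed for Sally,
# so this should print "yes".
kubectl auth can-i list pods --namespace apple

# Namespaces are a cluster-scoped resource, outside Sally's namespaced Role,
# so this should print "no" (and a real "kubectl get namespaces" is forbidden).
kubectl auth can-i list namespaces
```

`kubectl auth can-i` asks the API server to evaluate the injected RBAC directly, which makes it a handy way to verify that the dynamically generated bindings are as tight as you intended.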
As a company, if I'm an auditor, I can go in and look at every command that was run, whether by Sally here, or John, or the admin; whatever I did, everything can be tracked. So from a governance perspective, people can rewind and say, this person did this, and these are the sequence of commands. What our users tell us now on the forums is fascinating. We initially built this assuming they would use it for security purposes. What people are saying is that they're using it to automate runbooks now. There's a really sharp engineer who knows how to run the kubectl commands in a certain sequence; they capture that and say, I'm going to bottle this up and have other people follow the same sequence of commands. So, a very fascinating set of feedback from organizations. In summary, we encourage everyone in the industry to do at least three things. Please do not put your Kubernetes clusters on the internet; cloak them. Make sure people have zero trust access, so your developers get access to your clusters from anywhere with the right RBAC. Make sure the RBAC injection, the service account injection, is done dynamically, which means only the right level of access is given at the right time. And make sure you have an audit trail of everything. This, in summary, would get you to a point where you're running secure operations for your Kubernetes environments. I think that's all I had. If there are any questions, I'll be around at the back or outside. I'll also be at the Rafay booth later this week; just stop by, I think we'll have something about Paralus there. Thank you. Do we have time? Go ahead. Yeah, we have the mic. So what is the current state or status of Paralus? Can it be used in production? And the second question is the performance hit. What kind of performance hit are we looking at? Yeah, very good questions. So, two questions there. First, what's the stage of the project?
As you all know, at the CNCF there's a pecking order, right? You eventually want to get to the graduated stage. We have submitted our application for Sandbox, so we're at the start of that ladder, and we're really looking for the community to help us; we need the right kind of sponsorship from the community to move up, and it can take years to climb. So help us out there. That's the first question. The second question is, what's the performance hit? When you saw the whole thing in action, me typing in a command as a user, it's imperceptible, but you can measure it. It's typically in milliseconds, and that's for the first time, when the user is authenticated and the service account is injected on the downstream cluster; that's roughly what's needed just to orchestrate it. Typically the user never notices anything; by the time you think about the command you're going to type, it's all done. And you can run these proxies anywhere in the world. So what we see organizations do is say, hey, I'm going to run my Paralus access manager closer to my clusters, because it really comes down to distance; you can't beat the laws of physics. Whether you run the Paralus access manager closer to your clusters or closer to your users, that's a decision you've got to make. Yes? Good question. I'll repeat it because I'm sure others couldn't hear. It's a two-fold question: one, does the Paralus access manager need cluster-wide privileges to do its job? And two, is there a plan to do a third-party pen test? The answer is yes to both. The third-party pen test happens naturally as part of the CNCF process; every project is put under a microscope as it moves up.
I think that's why we encourage every project to go through the CNCF governance model. Yeah, perfect, thank you. Any other questions? Yeah, reach out to us on Slack; we're pretty active there. Just look for Paralus, I'm sure you'll find it. Thank you.