Hi, I'm Trey Dockendorf with the Ohio Supercomputer Center, and this is Secure Multi-User HPC Jobs in Kubernetes with Kyverno.

A little bit about our HPC workloads using Kubernetes: they're mostly lightweight interactive workloads, so they have minimal security requirements compared to traditional HPC. Right now it's mostly Jupyter and RStudio, serving numerous classrooms being used for distance learning.

A quick overview of the technologies we're using. Open OnDemand is how we submit HPC jobs to Kubernetes. It's essentially a web portal for HPC access; a key part is that the web process runs as the user logged in to the HPC system, and it supports multiple resource managers like Slurm, Torque, and Kubernetes. Then there's Kyverno, which is a Kubernetes policy engine: you deploy policies using Kubernetes-native resources, and we'll go over the policies in a little bit.

Some of the challenges with HPC jobs in Kubernetes: we wanted to treat the jobs similarly to traditional HPC jobs and allow OnDemand apps to submit to both Slurm and Kubernetes. Because Kubernetes pods can run as root, that can present challenges for shared file system access, like GPFS or NFS home directories. We had to ensure that the user's processes running in the pod were all run with that user's UID and GIDs; a big part of this was making the pod act as the user for file system access.

These are the patterns. All user pods live in a namespace with a user- prefix followed by the username. These namespaces are bootstrapped by OnDemand when the user logs in to OnDemand. Roles and access controls are also granted in each namespace, allowing users to do just enough to run OnDemand apps as well as HPC jobs (sketched below). OnDemand itself authenticates with our Keycloak IdP, and the OIDC tokens from OnDemand are accepted by the Kubernetes API thanks to the OAuth2 token audience. We also deployed a tool we wrote called k8-ldap-configmap, which maps LDAP data into ConfigMaps that can be consumed by Kyverno (also sketched below).

So, policy solutions with Kyverno. We have policies to ensure the UID, GID, and supplemental groups of a pod match the user's LDAP record. We have policies to restrict hostPath access to locations we want users to have, such as NFS home directories, GPFS, and some files for Slurm. We disallow all forms of privilege escalation. We also enforce maximum resource requests and a maximum runtime for pods.

Here's how we select resources in the policies: matching on the user- namespace prefix makes it easy to select all the user pods by namespace. One thing we learned very quickly is that you can only validate on CREATE and UPDATE when you're dealing with LDAP data, because if the LDAP data changes while the pod is running, the pod might become impossible to delete. We get the LDAP data into the policy's context, for example the UID mapping. Then there's a validate rule on runAsUser: the data looks like user-<username> as the key with that user's UID as the value, and we look up that key and ensure the UID is what's being used for runAsUser. We also validate supplementalGroups, which is very important for file system access; there the data is a JSON string of an array, and we use a deny condition. Sketches of these pieces follow.
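To make the patterns concrete, here's a minimal sketch of what a bootstrapped user namespace with just-enough RBAC might look like. The role name, resource list, and verbs here are illustrative assumptions, not our exact manifests:

```yaml
# Per-user namespace created by OnDemand at login ("alice" is a hypothetical user)
apiVersion: v1
kind: Namespace
metadata:
  name: user-alice
---
# Just enough access to run OnDemand apps and HPC jobs in that namespace
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: ondemand-user        # illustrative name
  namespace: user-alice
rules:
- apiGroups: [""]
  resources: ["pods", "pods/log", "services", "configmaps", "secrets"]
  verbs: ["get", "list", "watch", "create", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: ondemand-user
  namespace: user-alice
subjects:
- kind: User
  name: alice                # identity asserted by the Keycloak OIDC token
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: ondemand-user
  apiGroup: rbac.authorization.k8s.io
```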
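And here's roughly the shape of the data that k8-ldap-configmap publishes, based on what I described: keys are user-<username>, values come from LDAP. The ConfigMap names and namespace are assumptions for illustration:

```yaml
# UID map: one key per user, value is the LDAP UID
apiVersion: v1
kind: ConfigMap
metadata:
  name: user-uid-map           # illustrative name
  namespace: k8-ldap-configmap # illustrative namespace
data:
  user-alice: "20821"
---
# Groups map: the value is a JSON string of an array of GIDs,
# used for validating supplementalGroups
apiVersion: v1
kind: ConfigMap
metadata:
  name: user-groups-map
  namespace: k8-ldap-configmap
data:
  user-alice: "[5509, 20821, 31000]"
```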
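Putting the policy pieces together, here's a sketch of the runAsUser validation: matching by the user- namespace prefix, restricting to CREATE and UPDATE, pulling the UID map into the rule's context, and validating runAsUser against it. The names and the exact lookup expression are assumptions, not our production policy:

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: validate-run-as-user          # illustrative name
spec:
  validationFailureAction: enforce
  background: false                   # rule uses request.* data, so no background scans
  rules:
  - name: run-as-user-matches-ldap
    match:
      any:
      - resources:
          kinds:
          - Pod
          # select all user pods by namespace prefix
          namespaces:
          - "user-*"
    # only apply on CREATE/UPDATE: if the LDAP data changes while a pod
    # is running, also validating DELETE can make the pod undeletable
    preconditions:
      any:
      - key: "{{ request.operation }}"
        operator: In
        value:
        - CREATE
        - UPDATE
    context:
    # pull the UID map published by k8-ldap-configmap into the rule
    - name: uidmap
      configMap:
        name: user-uid-map            # illustrative name
        namespace: k8-ldap-configmap  # illustrative namespace
    validate:
      message: "runAsUser must match the user's LDAP UID"
      pattern:
        spec:
          securityContext:
            # the ConfigMap key equals the pod's user- namespace name
            runAsUser: '{{ uidmap.data."{{ request.namespace }}" }}'
```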
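The supplementalGroups check is similar, but since the ConfigMap value is a JSON string of an array, a deny condition can parse it and reject any group not in the user's LDAP record. Again a sketch: parse_json and AnyNotIn are standard Kyverno features, but the exact expressions here are assumptions:

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: validate-supplemental-groups  # illustrative name
spec:
  validationFailureAction: enforce
  background: false
  rules:
  - name: supplemental-groups-match-ldap
    match:
      any:
      - resources:
          kinds:
          - Pod
          namespaces:
          - "user-*"
    preconditions:
      any:
      - key: "{{ request.operation }}"
        operator: In
        value:
        - CREATE
        - UPDATE
    context:
    - name: groupsmap
      configMap:
        name: user-groups-map         # illustrative name
        namespace: k8-ldap-configmap  # illustrative namespace
    validate:
      message: "supplementalGroups must come from the user's LDAP groups"
      deny:
        conditions:
          any:
          # the ConfigMap value is a JSON string like "[5509, 20821]",
          # so parse it, then deny if any requested group is not in it
          - key: "{{ request.object.spec.securityContext.supplementalGroups || `[]` }}"
            operator: AnyNotIn
            value: '{{ parse_json(groupsmap.data."{{ request.namespace }}") }}'
```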
Here are some additional policies. This one restricts the hostPath locations they can access: we allow some configs, the MUNGE socket for Slurm, and then user home directories and GPFS (a sketch follows at the end). Most of the other policies we use come from the upstream kyverno-policies Helm chart; those are mostly around security, and the ones I showed here are the custom ones. And here are some of the resources: the upstream policies for Kyverno, the OnDemand policies that we deploy with Helm, and then k8-ldap-configmap, which is how we make the LDAP data available to Kyverno.
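To close with a sketch of that hostPath restriction: Kyverno's conditional anchors can limit any declared hostPath volume to an approved set of locations. The specific paths below are illustrative stand-ins for our home directory, GPFS, and Slurm/MUNGE locations:

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: restrict-host-path
spec:
  validationFailureAction: enforce
  rules:
  - name: allowed-host-paths
    match:
      any:
      - resources:
          kinds:
          - Pod
          namespaces:
          - "user-*"
    validate:
      message: "hostPath volumes are limited to approved locations"
      pattern:
        spec:
          # =() makes volumes/hostPath optional; when a hostPath is
          # present, its path must match one of the allowed prefixes
          =(volumes):
          - =(hostPath):
              path: "/users/* | /fs/* | /etc/slurm/* | /run/munge/*"
```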