I'm Sergey Bratus. This is Rebecca Shapiro, who did all the work. I am a research assistant professor at Dartmouth; she is a PhD candidate in our lab. As for who we are — well, we're here from the frozen north, northern Appalachia, and it's a live-free-or-die state. This talk, as Travis just brilliantly introduced it, is about the deep magic that is used everywhere, whenever a program runs, and that is severely underappreciated, as any deep magic is. What we're going to show you is that there is enough power in just the linker-loader to be borrowed to write, in the ELF metadata — just in ELF metadata, not malformed or anything — any program you might care to write: any transformation of your process memory, any transformation of your runtime that you care to do. Which is to say, it's Turing complete; it's got loops and branches built in. The interesting thing about this, besides illustrating a philosophical principle on which I will expound profusely in just a few moments, is that it happens before memory protections are set, and it doesn't care about ASLR at all — because this is the linker-loader, the thing that finds the dynamic symbols. It's the keeper of the names, and ASLR is nothing to it. So with that in mind, I'm sure you can find some interesting obfuscation, offensive, and defensive uses of this thing. We're going to talk about the general mechanism and what it teaches us about what our operating systems really are, because this is what hacking is about. There is the computer as taught and programmed, and then there is the computer as it really exists. We find that computer by exploiting it. If your system has not been exploited, you don't understand it. And exploits these days are nothing less than programs running on the target. They come in encoded as crafted data — there are other delivery methods, but this is the most common — and they reliably cause the target to run a computation that was never intended to run.
They do that by leveraging a bug or a feature; these are known as primitives. There is a brilliant introduction in Phrack 61, article 6, by jp, that talks about one particular primitive which looks like a really weird assembly instruction: aa4bmo, "almost arbitrary 4 bytes mirrored overwrite" — write 4 bytes almost anywhere. And if you're familiar with format strings, %n is that thing you can leverage to cause the internals of the printf format converter to do a computation for you that was never expected, and it can be quite a nasty surprise to the owner of the system. So in the slide deck on the disk, on the CDs that you got, I have a rather wordy set of slides explaining why this is interesting. And then, of course, just a couple of hours before the con I got this great idea: hey, there would be people who like H.P. Lovecraft here. Possibly. Okay. So, weird machines. I'm going to talk about this term, which got coined when I was giving a talk at Felix Lindner's Recurity Labs workshop. And you know, I absolutely love this picture. This is what 20th-century warfare was supposed to be, in the eyes of an artist from the 19th century. So, talking about weird machines and cyber war: probably our ideas of cyber war are going to be just as accurate. But coming back to the question: what is the real computer that is in front of you? What is the real system, as opposed to the model of it? We discover that with exploits. And then you ask yourself: well, what are our exploits? Are they data or code? Isn't this interesting? Exploits are, of course, proofs that your machine is capable of a computation that you have not foreseen. It's a proof by construction — by shell dropping. "You can't really argue with a root shell," to quote FX. But exploits show up as inputs, yet they run on the target. So what the hell is up with that? Is this a von Neumann machine, or some strange Harvard-architecture machine, or maybe some weird hybrid kind of architecture?
So an exploit is a weird stored-data program running on the target. And if you think about your code as this Lovecraftian entity that you just summoned up to work on your data, to build your mystic pyramid — this is actually Yog-Sothoth. He holds the key; he is the key. We're not summoning anything in this auditorium: I can't afford the liability insurance, and neither can Dartmouth. So this is your code. Its tentacles reach into your memory and fiddle with bits of data — pulling them up, pulling them down, rearranging them — and this is how you build the computation that you want. This is how people think of their code: oh, there is code, there is data. But if you look carefully enough, it's a bunch of little octopus-like things, each with a small number of tentacles, working on a very small piece of data that you control. And if some of them are interesting, in that they present a bug or a feature or a primitive, then you can build a machine out of them by manipulating this metadata. To this machine, your metadata is bytecode. The primitives themselves are the little implementations of that bytecode. This has been the hacker folklore of advanced exploitation since at least 2000; I take absolutely no credit except for naming it and dragging it out as a sort of coherent body of folklore. If you don't take my word for it, look at Halvar Flake's talk, which he gave at Infiltrate last year. Where have we seen that? Well, this is of course how virtual machines work. A virtual machine has this bytecode, and you implement those little bytecodes. And the only difference between a virtual machine and a weird machine — which is what runs your exploit, the collection of your primitives that pass control and data flow through some areas of memory that you control — is that for a virtual machine, your bytecode is usually contiguous. For a weird machine, it's not. So that tape is really broken up.
So think of any target that does anything interesting as this rich, dancing, singing, milk-giving, woolly pig — and it's also a code-running pig. If there is enough complexity in the target — how is this for a fourth Clarke's law — any sufficiently complex memory-reading-and-writing program is indistinguishable from a virtual machine running on its data, as long as you can craft the data. Input data for a sufficiently complex program is indistinguishable from bytecode — for the pieces of the grand monster that you manage to decompose, where you unwind the tentacles of the monster and isolate which memory locations serve as your implicit flows. And so the bit of philosophy is this: code doesn't work on data; data runs on code. Input strings run on your parser. As the tentacles reach down, there are strange bubblings in the monster, in the Old One, and the state changes that you create in it are your expected computation, are your exploit. You can make it explode. You can make it build not a mystic pyramid but something more useful, such as a root shell. Packets run on your TCP/IP stack — packets are little programs that run on your TCP/IP stack, and the changes in the state of the stack are what you are after. Heap metadata runs on the heap manager. Format strings run on printf. And ELF metadata runs on your runtime linker-loader — which Bex is now going to present. So think of this entire folklore as bringing in the idea that you're programming weird machines when you're exploiting something. And this is all over the place, really. This is the art of borrowing code and chaining up the primitives: creating, setting up, instantiating, and programming the weird machine. We're showing you a weird machine that's there and that's well-formed, but we would love to see its uses for more interesting purposes. With that, Bex will take you through the wonderful world of ELF. Thank you, sir. Okay. A magic forest.
So I'm just going to try — there's a lot of information I want to give you and not enough time, so I might end up skipping. But I really just want you to get an idea of what's going on, and you can continue if you're interested: look at the code, I'll post a link. I'm going to start off talking about ELF — what it is, the linker and loader and so forth — look at some prior work, and try to give you enough background so you can understand how we're using these primitives and getting something that really works outside of what you would normally do with ELF metadata. And hopefully you'll recover from the philosophical reverie we just plunged you into. So basically, the point of ELF is how the different components of the GCC toolchain communicate. You have the assembler, the static linker, and so forth, but in the end there's metadata structured so that each component knows how to act on its input: how to lay it out, how to compose it, and so forth. Mostly I'm looking at the runtime loader, and that's all I care about. There is a lot of information in these ELF files, and I'm not using all of it — not crafting most of it. I'm working with symbols, relocation entries, and so forth; that's pretty much all, plus the dynamic table that tells the runtime loader where everything is. That is what I work with for this tool set I'm building. And just to give you some idea of how many different sections there are in an ELF file, each one with its own function in constructing the runtime: in a modern binary, more than 30. All data and/or code, metadata and so forth, is contained in ELF sections — except the section headers themselves and the segment headers themselves. Typically there's one section header per section.
The header just helps you look up where in the file the section is, understand its permissions in memory — read, write, execute — where it should be loaded, and so forth. Most sections contain either metadata — a table of the same type of metadata in a single section, like the null-terminated strings of the string table — or mixed data, something statically allocated, or the code itself, which lives in .text. There are a couple of sections I'm focusing on for this talk; they are the most useful for crafting metadata and getting some extra computation out of the runtime loader. First, the symbol table. Symbol tables provide information on all variables, all functions, and so forth: where each is located, sometimes a value, sometimes other information like the size and how the linker should look it up. In the example symbols, there's one that's a variable, stdin, which has a preset value; the second one is a function, which also has a value in there. And then the definition — this is the structure itself. There's a name, which is actually an offset into the string table, some information about the symbol, a value, and a size. Think of this data as something that the dynamic linker-loader eats. It finds you the right library, loads it, and connects up the little stub that your call goes to in the procedure linkage table, through the global offset table. Through this little stub the dynamic linker gets called, finds the symbol, and loads the symbol into memory. And since the lookup is actually by name, there is a little hash table involved, and it jumps around that hash table until it finds the match, so that it knows which symbol is to be loaded. Now, the thing about this is that the complexity of these operations is pretty damn high, and it starts resembling your Lovecraftian Old One, or that woolly, singing, milk-giving pig.
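The name-hash lookup and the symbol structure she describes can be sketched in Python — a minimal sketch of the classic SysV ELF hash and the 24-byte Elf64_Sym layout (field names follow elf.h; the packed example values are made up):

```python
import struct

def elf_hash(name: bytes) -> int:
    """Classic SysV ELF hash used for .hash-based symbol lookup."""
    h = 0
    for c in name:
        h = (h << 4) + c
        g = h & 0xF0000000
        if g:
            h ^= g >> 24
        h &= ~g & 0xFFFFFFFF
    return h

# Elf64_Sym is 24 bytes: st_name (offset into the string table),
# st_info, st_other, st_shndx, st_value, st_size (little-endian x86-64)
Elf64_Sym = struct.Struct("<IBBHQQ")

# Pack and unpack one illustrative symbol entry
raw = Elf64_Sym.pack(0x2A, 0x12, 0, 1, 0x400526, 64)
st_name, st_info, st_other, st_shndx, st_value, st_size = Elf64_Sym.unpack(raw)
```

The hash buckets map a name's hash to a chain of symbol-table indices; the loader walks the chain comparing names until it hits the match.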
So the question is: what if we manipulate that data? What can we get? And I'm not really talking about buffer overflows — assume the metadata is perfectly legal. It's just that some of those offsets are going to be pointing at other structures; think "I'm My Own Grandpa," applied to a process. By creating this assortment of interlinked data, you can actually drive this thing into insanity — or into executing any given program. So I will continue. As we like to say, the ELF metadata, in our vision, the way we see it, is the static linker's love letter to the runtime loader. Now, relocation tables: this is information for not only the runtime loader but also the runtime linker. The first table I listed is for the runtime loader exclusively, and every entry in it is processed during load time. The second one is for the dynamic linker, the PLT, and it's processed as needed at runtime. The information contained in these entries includes an offset — where you eventually write the output of this operation; usually you calculate some sort of address and write it in. The info field includes the symbol number, if there's a symbol the entry works with, and also the type of the relocation entry. And then for AMD64 there's also an addend, which, depending on the type of relocation entry, lets you add another value into the final computation. Now, this is the thing that keeps ASLR working. You've got this code that you've compiled that's supposed to load at some address recorded in the ELF file, but that is not where you want to load it — you want to randomize that base — or you can't load the library at that address because another library has taken it up in the process's virtual address space. So what you need, and what you get out of this mechanism, is a little automaton that goes and patches your binary's code and data based on these data structures.
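The relocation-entry layout she describes can be sketched the same way — Elf64_Rela is 24 bytes, with the symbol index and relocation type packed into r_info (the values below are illustrative, not from a real binary):

```python
import struct

# Elf64_Rela: r_offset (where to patch), r_info (symbol index + type),
# r_addend (extra value folded into the computed address)
Elf64_Rela = struct.Struct("<QQq")

def r_sym(info: int) -> int:    # the ELF64_R_SYM macro
    return info >> 32

def r_type(info: int) -> int:   # the ELF64_R_TYPE macro
    return info & 0xFFFFFFFF

R_X86_64_RELATIVE = 8  # relocation type number from the psABI

# One made-up entry: patch address 0x600FF8 using symbol #3
raw = Elf64_Rela.pack(0x600FF8, (3 << 32) | R_X86_64_RELATIVE, 0x1234)
offset, info, addend = Elf64_Rela.unpack(raw)
```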
So you look at this and you see a useful auxiliary mechanism for relocating data, moving it around in memory. When we look at this, we see this strange creature that picks up those relocation entries and writes things in memory. And by the way, we're inspired in this by Locreate, by skape — look it up in Uninformed issue 6. So again, what you see is a machine that you can program by crafting those relocation entries — not making them illegal or anything, just driving that engine to do something more. Two other sections that are interesting are the global offset table and the procedure linkage table. The global offset table is merely a table of addresses. It's used by the linker to look up any linked-in functions whose locations you do not know exactly until runtime. It also contains a pointer to the linker function that's actually invoked during runtime linking, and to the link_map structure that contains everything the linker needs to perform the linking. For each library, for each ELF object that is there, there is one link_map structure. The PLT contains instructions that work in tandem with the GOT: they invoke the fixup function if needed, or invoke the function itself if its entry has been patched. — I think you might want to actually start explaining those symbols. This is the global offset table, obviously. You probably got the relocation table before. What is that one? — It's been so long. There's a procedure being linked to it... — Oh, yes. This is the procedure linkage table, exactly. Shall we go on? — Yes. And there's this dynamic table, which contains all the information needed by the runtime loader; there's basically a tag that says what each value is. These are a bunch of the entries that we use in our work. Remember, these are things that point at other things: offsets of strings, offsets into symbol tables.
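The lazy-binding dance between the PLT and GOT can be modeled as a toy in Python — purely a sketch of the mechanism; the names (`make_linker`, `resolver`, `call`) are illustrative, not glibc's:

```python
# Toy model of PLT/GOT lazy binding: each GOT slot starts empty (in a
# real binary it points back at the PLT stub); the first call goes to
# the resolver, which looks the name up and patches the slot so later
# calls dispatch directly.
def make_linker(library):
    got = {}
    resolved = []            # record which names went through the resolver

    def resolver(name):
        resolved.append(name)        # what _dl_fixup does: look it up...
        got[name] = library[name]    # ...and patch the GOT slot
        return got[name]

    def call(name, *args):           # what a PLT stub effectively does
        target = got.get(name) or resolver(name)
        return target(*args)

    return call, got, resolved

call, got, resolved = make_linker({"strlen": lambda s: len(s)})
first = call("strlen", "hello")   # first call goes through the resolver
second = call("strlen", "hi")     # second call uses the patched slot
```

The point the talk builds on is exactly that patching step: the GOT is writable data, and whatever writes into it decides where the next call goes.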
These are offsets into the file where symbols live, offsets into a section where things need to be patched on relocation. Now imagine them all as one big, strangely interlinked family. I'll continue. All this information is in the headers themselves, but there's a separate table that has a copy of this information just for the linker. The dynamic linker and runtime loader, in the end, after everything is loaded, work only with this table. So if the loader ever needs to know where the relocation entries are, it looks at the value of DT_RELA; it knows their size, the location of the symbol table, the location of the GOT, and anything else that's needed for clean execution in the runtime loading and linking stage. Just to give you an example of what happens during loading and linking, suppose we call exec on ping. In the kernel, there's initial setup of the address space: we have the executable mapped, we have the loader itself mapped, and a stack, and the entry point is set to invoke the loader. Now, in user space, the loader takes over and starts setting up more things. It looks up the needed libraries, it builds a heap — the stack was already there — and then it jumps to the entry point of the executable. That's the part of the process we are working with. You'll see that red is execute-only, yellow is read-only, green is read-write, which is important to us. Each library has its own address ranges for its read-only, read-write, and executable code. This is an actual memory layout of ping; I took out some parts to fit it on the screen. The stack is at the bottom — you can't see it, but it's on the slide, just hanging off the bottom. So, going back to this, let's zoom in on the executable.
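Reading the dynamic table is just walking (tag, value) pairs until DT_NULL — a minimal sketch, using a few real DT_* tag numbers from elf.h and made-up addresses:

```python
import struct

# A handful of real dynamic-table tags (numbers from elf.h)
DT_NULL, DT_SYMTAB, DT_RELA, DT_RELASZ = 0, 6, 7, 8

# Elf64_Dyn is a (d_tag, d_val) pair; the table ends at DT_NULL
Elf64_Dyn = struct.Struct("<qQ")

def parse_dynamic(blob: bytes) -> dict:
    dyn = {}
    for off in range(0, len(blob), Elf64_Dyn.size):
        tag, val = Elf64_Dyn.unpack_from(blob, off)
        if tag == DT_NULL:
            break
        dyn[tag] = val
    return dyn

# Build an illustrative table: symbol table, relocation entries, size
blob = (Elf64_Dyn.pack(DT_SYMTAB, 0x400280) +
        Elf64_Dyn.pack(DT_RELA, 0x400418) +
        Elf64_Dyn.pack(DT_RELASZ, 24 * 3) +   # three 24-byte entries
        Elf64_Dyn.pack(DT_NULL, 0))
dyn = parse_dynamic(blob)
```

This is the lookup the loader performs whenever it needs DT_RELA, the symbol table, the GOT, and so on — which is also why rewriting these entries later in the talk redirects the relocator.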
The nice thing is that even during normal execution, even if you have ASLR, you have to pass extra compilation options to get the executable itself mapped at a different location, and I haven't really run into that. Even with ping, which is setuid root, the executable is always mapped to the same spot, at least in the Ubuntu version I was working with. If we zoom in and see what's there: all these sections I was talking about are mapped, some in read-execute space, some in read-only, and the global offset table, data, and so forth in read-write. Something to note is the dynamic table itself. During the relocation process, I have found that I can write to it. I haven't actually looked to see at what point it becomes read-only, but after everything is set up, this dynamic table is read-only. So from our perspective, we are interested in these entries that we see — the symbol table, the relocation entries, and so forth — and in the GOT, which is actually writable. But then there's also metadata outside the ELF file proper: these link_map structures, which live in the linker-loader's own memory and heap. These link_map structures are what we end up working with to do what we need. This is not the entire structure definition, just the fields we end up working with. It contains the base address where the library is loaded. It contains pointers to the dynamic table entries that we're interested in. It contains a bit saying whether or not it's been relocated before, and some other things we end up working with just to get this engine running as something Turing complete — but I won't go into details. Also, since every ELF object that's loaded has its own structure, they're kept in a doubly linked list, so there are next and previous pointers in each link_map structure. That's how they're ordered — the order in which symbols are looked up.
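The link_map chain and its symbol-lookup order can be modeled as a toy doubly linked list — field names loosely mirror glibc's struct link_map (l_addr, l_prev, l_next), but the objects, libraries, and offsets below are invented for illustration:

```python
# Toy link_map chain: one node per loaded ELF object; symbol lookup
# walks the list in order and returns base address + symbol offset.
class LinkMap:
    def __init__(self, name, l_addr, symbols):
        self.name, self.l_addr, self.symbols = name, l_addr, symbols
        self.l_prev = self.l_next = None

def chain(*maps):
    for a, b in zip(maps, maps[1:]):
        a.l_next, b.l_prev = b, a
    return maps[0]

def lookup(head, sym):
    m = head
    while m is not None:
        if sym in m.symbols:
            return m.l_addr + m.symbols[sym]   # base + offset
        m = m.l_next
    raise KeyError(sym)

exe  = LinkMap("ping", 0x400000, {"main": 0x1234})
libc = LinkMap("libc.so.6", 0x7F0000000000, {"printf": 0x55800})
head = chain(exe, libc)
```

The talk's unconditional-branch trick later abuses exactly these l_prev/l_next pointers, and the "relocated" bit, to make the relocator revisit an object.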
So there are fun ways to craft ELF metadata. Mayhem, in Phrack 61 article 8, used it to inject object files; you can intercept library calls to run injected code, and there's been work by Silvio Cesare and mayhem on that — making code resident in an attacker-built library. You can inject code into a library and get it loaded either by playing with environment variables, LD_PRELOAD, or by playing with the ELF structure itself and putting another entry in the dynamic table. And this is the point: you do not understand those structures until you make them do something interesting. Then you actually know what the code behind them — what that automaton — is doing when it gets that data. "Cheating the ELF" by the grugq I totally recommend; it's a classic that explained this for the first time, and for the longest time it was the most detailed excursion into how these things really work. And Locreate is the inspiring work that helped inspire what we did: it actually used relocation entries to pack and unpack binaries. We're taking that a step further by using relocation entries to act as code itself — to get some sort of Turing-complete machine running in just the runtime loading. Locreate is a patcher: it can run over code and transform it, because, hey, if we're rewriting bytes, we need not rewrite just addresses — we can rewrite opcodes and immediates as well. How is that for an obfuscator — an obfuscating packer and unpacker? But let's build something bigger, with loops and branches. However, in the toolchain we're working with, the text section is actually not writable by the time relocation entries are processed; skape was working with PE, and there it worked. So here's our warning. Actually, I'm proud to take a little side note here: yesterday I got it working with ASLR, with the libraries and the stack randomized. But the details are specific to the architecture we're using, AMD64 on Ubuntu.
There's no reason it won't work on others, but the details are specific to this setup. First off, we need our symbol table and relocation entries in a place we can read and write. Normally they're in read-only memory, so we inject them right below .data, where there's space; we use the ERESI toolkit for that, which is great and takes care of updating headers for us. Of the relocation entry types that exist — there are lots of them — we only need three to do what we're doing. There's COPY: basically, it acts as a memcpy. You give it a symbol, and the symbol's value is treated as a pointer to copy from; the symbol also gives you the size of how much to copy, and you copy that to the offset specified in the relocation entry. There's another type where you give a symbol and an offset: it takes the value in the symbol and writes it into the offset, adding the addend, if there is one, and the base of the library. Since we're working in the executable, the base is always zero — well, unless it's actually randomized; the executable itself is mapped at, I believe, 0x400000. So if this doesn't look like bytecode to you, then I must have confused everyone. Yeah, these look like opcodes to me; I'm not sure about anyone else. And then the last one, RELATIVE, doesn't use a symbol: it just adds the base, if there's one, to the addend and writes it into the offset specified in the entry. So we have arithmetic — close to it. We do have arithmetic; that's very straightforward. Another thing we use is the indirect functions supported by the runtime loader. These are special types of symbols in which the value of the symbol is actually some function that runs, and whatever it returns gets written into the offset. We found that useful, because you need branching. Normally here I'd have the video, but a lot of you saw it already, many, many years ago.
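The semantics of the three relocation types she uses can be sketched as a toy relocator over a flat bytearray — a simulation of the psABI behavior as described in the talk, not real ld.so code:

```python
import struct

# Toy relocator: "mem" stands in for the process image, "base" for the
# load base the loader adds when the object isn't at its link address.
def apply_rela(mem, base, typ, offset, sym=None, addend=0):
    if typ == "R_X86_64_COPY":
        # memcpy(offset, sym.value, sym.size)
        mem[offset:offset + sym["size"]] = \
            mem[sym["value"]:sym["value"] + sym["size"]]
    elif typ == "R_X86_64_64":
        # *offset = sym.value + addend + base
        struct.pack_into("<Q", mem, offset,
                         (sym["value"] + addend + base) & (2**64 - 1))
    elif typ == "R_X86_64_RELATIVE":
        # *offset = base + addend  (no symbol involved)
        struct.pack_into("<Q", mem, offset, (base + addend) & (2**64 - 1))

mem = bytearray(64)
mem[0:4] = b"ABCD"
apply_rela(mem, 0, "R_X86_64_COPY", 8, sym={"value": 0, "size": 4})
apply_rela(mem, 0x400000, "R_X86_64_RELATIVE", 16, addend=0x10)
apply_rela(mem, 0, "R_X86_64_64", 24,
           sym={"value": 0x1000, "size": 8}, addend=8)
```

COPY gives you data movement, the symbol-plus-addend type gives you addition, and RELATIVE gives you constant writes — which is why these three suffice as "opcodes."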
You don't want to hear me sing. So what did we do? We decided to find the simplest, most annoying programming language to work with — Brainfuck — and implement it in these relocation entries. Traditionally there are eight instructions. There's the notion of a tape and a pointer into this tape, and you can increment and decrement values, move the pointer, test and branch, and output a byte (standard out, printf, or whatnot) or read in a byte (getchar). But we actually are not worrying about those last two, because no one ever talks to the linker-loader — I can imagine implementing it, but that's not in our version. As an example, if we do `+>-`, we increment the byte at the pointer, we move the pointer, and then we decrement the byte. That's what happens: the byte is incremented, the tape pointer gets moved, a byte is decremented. It's fairly straightforward — it's just really hard to write anything useful in, because that's hello world. Think of it as the Turing machine, but more fun — more tingling in your brain. So this is the process of compiling Brainfuck into ELF metadata. You have an executable that you specify, you have your Brainfuck source, and there is some configuration that you need to do, which I have automated as much as possible. You invoke this compiler and you get just another executable, and that's it — you hand it to the loader. So first of all, you write your exploit in Brainfuck. Actually, you can compile to Brainfuck from C or your favorite functional language. — I feel like we'll hit some limits, though, on size. — Maybe on symbols. We can do better; we can optimize. — Oh my God, it's full of symbols. — Yes. Well, it's full of relocation entries, too. So, this is the configuration. There's a lot of stuff we collect at compile time automatically: from the dynamic table, the address of the GOT; and in theory you can get some sort of ROP gadget that returns zero.
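A minimal interpreter for the six instructions they keep (no I/O, as in their version) is a few lines of Python; the slide's `+>-` example behaves exactly as described:

```python
# Reference interpreter for the six Brainfuck instructions the
# relocation-entry machine implements (I/O omitted, as in the talk).
def bf(src, tape_len=30):
    tape, ptr = [0] * tape_len, 0
    jumps, stack = {}, []
    for i, c in enumerate(src):          # pre-match [ ] pairs
        if c == "[":
            stack.append(i)
        elif c == "]":
            jumps[i] = stack.pop()
            jumps[jumps[i]] = i
    pc = 0
    while pc < len(src):
        c = src[pc]
        if   c == "+": tape[ptr] = (tape[ptr] + 1) % 256
        elif c == "-": tape[ptr] = (tape[ptr] - 1) % 256
        elif c == ">": ptr += 1
        elif c == "<": ptr -= 1
        elif c == "[" and tape[ptr] == 0: pc = jumps[pc]
        elif c == "]" and tape[ptr] != 0: pc = jumps[pc]
        pc += 1
    return tape

t = bf("+>-")              # the slide's example: increment, move, decrement
loop = bf("+++[->++<]")    # a loop: moves 3 from cell 0 into cell 1, doubled
```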
It's not done automatically, but you can imagine: if such a gadget exists, we can find it. We also collect the locations in memory of some other things in the dynamic table. Then there's data collected at runtime — everything else: the base address of libc, of the loader, and a stack location, which is new. We found there's the auxiliary vector, which is basically the loader's love note to the executable itself, with all its arguments, environment variables, et cetera; it's located on the stack, and there is a static variable in the loader that points to it. So if I can get to the loader, I can read this value and get some offsets. Isn't this nifty? Not only does the executable know how to call the loader from its stubs — obviously it needs to — but there is a pointer back as well. I am not giving you the full setup here; you need to read the code for that. I'm trying to simplify things. But these are the basics of what you need for Brainfuck. There's more you need if you want to execute cleanly — that is, have your Brainfuck execute in the metadata and then have the executable itself complete normally; for that you need more than just this, and I put a little note at the end. For the symbol table, we inject a couple of symbols. There's one that holds the address of the tape head — wherever the tape head is pointing at — and a copy of the tape head value, an address of this symbol, which we need so that we can do some copy operations. So observe: the tape is now spread through the ELF metadata. — But that's okay, because where you see an RTLD, I see a bytecode interpreter. — Yes. And the last symbol is just the ROP gadget that we need, something that returns zero — an address of that.
As for the relocation entries: before the instructions start, we have some setup that's needed so that we can clean up afterwards and so that we can find all the base addresses of the libraries. Then we have all the instructions. Then we have some cleanup instructions — I'm calling them instructions, but they're just relocation entries that do something useful — and then the originals, so that the executable can actually execute after we do whatever we want. This is how the tape pointer looks to us: we have our two symbols, the address the tape head is pointing at, and a copy. And just as a note, anything on the tape needs to be in read-writable memory — and not just the tape: the relocation entries and symbols as well, because we edit them on the fly. Luckily this is the case, because the bulk of the memory protections get set up later. So we need two relocation entries to give you a movement of the tape pointer. I'll show this little demo — well, not a demo, an animation. First we calculate the offset of the tape pointer value, basically the next symbol down, so we can start writing to it and copy it over. Then we have the symbol of the tape pointer itself: the tape pointer is a pointer to the symbol, and we read that out and use it. So what happens here is this: since we control the relocation sections, and we're using them as the tape, we have to embed the Brainfuck interpreter — all six of its instructions — into the pieces of code that work on the tape. Luckily, we can craft those entries to point at other entries, so that we can start building loops and branches.
So instead of giving you a step-by-step, since we're low on time, I'm going to just point out some interesting things. For addition and subtraction, we end up having to rewrite one of the relocation entries to get it done. To do unconditional branches: we realized that when the relocation entries are processed, the loader actually steps through the link_map structures, going from each one to the previous one. So what we do is set the previous pointer to our current structure, so it starts processing our own relocation entries again. So we have goto, right? This is the primitive — this is the implementation of the bytecode that we are creating. And the fact that a particular pointer is pointing somewhere, and is then followed to the next primitive, enables us to do looping. We do have to do some extra work to make sure it loops again, because after the first time, memory permissions are set and the loader won't try to relocate the same object twice — it sets a bit so the object won't be processed again. But we use a little chunk of entries to clear out that "relocated" bit, so it will process itself again later, which is very nice. And so we have loops. We also need to rewrite some of the dynamic table entries to point to where processing should go next, and the new size of the relocation entries we're processing. But again, the functionality is helpfully already there — this is what the relocator does all by itself. To do conditional branches, we use the indirect function symbol. The interesting part is the st_shndx field: if it's one, the loader treats the symbol as an indirect function; if it's zero, it treats whatever value is in the symbol as just a value. So what we end up doing is telling it to overwrite the "end" — basically, where the relocation entries stop being processed. We can set it to zero, so the next time the loop executes, it stops.
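The conditional-branch trick — an entry that overwrites the "end" of the relocation table so processing either stops or the table gets marked for another pass — can be modeled as a toy loop. This is purely a simulation of the mechanism described, not glibc code; all names are invented:

```python
# Toy relocator: walks "entries" until index "end"; an entry may clobber
# "end" itself, or mark the table to be processed again (the cleared
# "relocated" bit from the talk), giving a conditional loop.
def run_relocator(entries, state, max_passes=10):
    passes = 0
    while True:
        passes += 1
        i = 0
        while i < state["end"]:
            entries[i](state)
            i += 1
        if not state.pop("relocate_again", False) or passes >= max_passes:
            break
    return passes

def decrement(state):            # stands in for a "-" style entry
    state["cell"] -= 1

def branch_if_nonzero(state):    # the ifunc-style conditional entry
    if state["cell"] == 0:
        state["end"] = 0         # stop: nothing left to process
    else:
        state["relocate_again"] = True   # loop: process the table again

state = {"cell": 3, "end": 2}
passes = run_relocator([decrement, branch_if_nonzero], state)
```

Each pass decrements the cell and tests it, so the "program" loops exactly until the cell hits zero — the same shape as Brainfuck's `[-]`.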
Or we can set it to something way bigger than any of the relocation entries, and it will keep on going, and we can fix everything up at the end. And so we have branches: we have an instruction that behaves differently depending on whether it reads a zero or a non-zero. For the actual details of how we do it, look at the code — it's on GitHub. I'm going to skip the demo because I want to move on to something a little more practical: we inject a backdoor into ping. For this we don't use the help of Brainfuck; we actually create our own primitives to start following pointers. A known place has a pointer to the link_map structure of the executable, and from there we can start finding where the libraries are, and from there we can figure out what addresses to write into the global offset table. So when the functions are invoked, they invoke what we want instead of what the linker intends, since there's already a value there. What happens is that we rewrite the symbol over and over, updating its value through these relocation entries and how they work with the symbol. Once we find where the link_map structure is, we can get the base address, and from there we can find any function in the library we're linking against. So I do want to demo this. The version of ping that I have has this interesting argument, the -t timestamp type. The way we work this is we tell strcasecmp to exec instead — that's how we get a shell. We also need to tell it not to drop the setuid privileges: ping is setuid root and eventually drops privileges, and we can trick it into not dropping them. Your whole memory space in the process is yours, and you can rewrite it any way you like — so why not rewrite it to give yourself root when a particular option is invoked? Instead of giving a live demo — because I'm a bumbling idiot when it comes to typing — here is a pre-recorded video demo.
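The backdoor boils down to writing a different address into one function's GOT slot, so calls to it dispatch to attacker-chosen code instead. A toy model (the hijacked function name matches the talk's description; the shell path and logging are illustrative):

```python
# Toy GOT hijack: got["strcasecmp"] stands in for the GOT slot that the
# crafted relocation entries overwrite at load time.
got = {
    # a stand-in for libc's strcasecmp: returns 0 on a match
    "strcasecmp": lambda a, b: (a.lower() > b.lower()) - (a.lower() < b.lower())
}

def call(name, *args):
    """Dispatch through the GOT slot, as compiled code effectively does."""
    return got[name](*args)

normal = call("strcasecmp", "Ping", "ping")   # ordinary behavior: match

# ...then the injected metadata patches the slot, so the same call site
# now runs an exec-like payload instead of a string compare:
log = []
got["strcasecmp"] = lambda a, b: log.append(("execve", a)) or 0

call("strcasecmp", "/bin/backdoorshell", "-t")
```

In the real demo the patched slot points at exec in libc (found via the link_map walk), and a separate write keeps ping from dropping its root privileges.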
You can't read it, but that's fine. I'm just showing that there's this -t option we're playing with. First we compiled our new relocation entries into the binary, and we set the setuid bit on ping, because it needs to be setuid. When you run it normally — this is running at double speed; I don't know why, I must have changed something — so let me pause for a moment. Ping works normally with the new relocation entries. Or at least you can ping normally; there are probably some other things that might break, but you can ping normally. But when you put in -t and give it backdoor.t.shell, it just invokes bash without any arguments — otherwise you get garbage in there, because we're not setting it up properly. What happens is that once you invoke it with this option, suddenly it calls exec on the backdoor shell, and we prevented the root privileges from being dropped — and now you have a root shell. We go on to show that there is no difference in the code itself. And since it's in double time for some reason, maybe we'll have time to see this. There are differences in the files, because the sizes are different — we injected metadata, so the hashes are different — but the code itself is not changed; we only changed metadata. I'm probably not going to let this finish running; I'm going to skip ahead a little bit. — Yeah, you'll trust us: if you dump out anything that could be disassembled and compare the disassemblies, they are exactly the same. The only difference — and the whole program compiled to rewrite your memory at runtime — is in the ELF metadata. — So the left side is our injected version, the version of ping with the injected entries, and the right side is the original. The original entries are still there; ping still executes. But before all of them come our injected ones. The same goes for the symbol table.
But we inject our symbols after the original symbols, because everything is referenced from the beginning of the symbol table, and we don't want to mess with the linking that it needs. Let's get out of the full screen — ah, F11, of course. So that's sort of everything in a nutshell. Again, I suggest going to our website. I need to thank Sergey for leading this; Sean, who's my actual thesis advisor at the moment, for putting up with me not doing the work he wants me to do; the folks behind the ERESI toolkit, for the great work that allows me to inject relocation entries; mayhem; skape; and the Noun Project, where I got all these cool little icons. Thank you.