Thank you all for coming to Botany with Bits. I realize this is one of the more obscure topics on the Ruby conference schedule, so I feel like you're absolutely the most adventurous crowd at RubyConf. Right on. Welcome. This is Botany with Bits. I'm Lito Nicolai. This talk is very simulation- and graphics-focused, and it uses the graphics gem. This is a very new gem, still in beta, authored by Ryan Davis, who just walked in here and left. He wrote it to be a very simple, very powerful graphics library. It's based on SDL, but instead of a game library, where everything is drawn upside down and there are a lot of optimizations for game purposes, this is a visualization library. So even though it runs very fast, it conforms to the usual mathematical conventions, like having your up axis be up, things like that. If you're interested in how I did any of these simulations, all of the code uses this library: `gem install graphics --pre` for the beta. And all of the code is on GitHub. I'm lito_nico on Twitter, and you can find all of my code on GitHub under the same name. So imagine you are algae. Here's you, an alga. We'll call you A, for alga; that makes sense. And you are going to do what algae do, which is make more algae through a process called mitosis, a fancy way of saying cell division. Once you're done, you'll be the proud parent of a baby alga, which we'll call B, for baby. And this process will keep going: you'll keep splitting off, and your little baby alga will grow up. So you have alga, baby, alga, and so on and so on. Each alga will split; each baby will grow up. I'm just going to start representing these things with letters so I don't have to keep drawing them: alga; alga, baby; alga, baby, alga. It keeps going. Incidentally, the lengths of the generations form the Fibonacci sequence, for the same reason that breeding rabbits do: you get that same compounding, exponential effect.
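The Fibonacci claim is easy to check with a few lines of Ruby. This is just a sketch of the idea (not the talk's actual code): iterate the two algae rules, A becomes AB and B becomes A, and record the length of each generation.

```ruby
# The algae L-system: each alga splits into an alga and a baby (A -> AB),
# each baby grows up into an alga (B -> A).
rules = { "A" => "AB", "B" => "A" }
state = "A"
lengths = []
8.times do
  lengths << state.length
  state = state.chars.map { |c| rules[c] }.join  # rewrite every symbol
end
p lengths  # generation lengths: [1, 2, 3, 5, 8, 13, 21, 34]
```

Each generation's length is the sum of the two before it, exactly like the rabbit-breeding version of the sequence.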
So this process was first described by Aristid Lindenmayer, who was a human and not a plant; he was a botanist. He called these systems of generation L-systems, after himself. They're very simple. They consist of a start, like a single alga, and a set of rules, like "one alga becomes an alga and a baby" and "a baby grows up into a full-sized alga." So let's talk about computers for a minute. You might recognize this as something we come across in our day-to-day computer science lives: a rewriting system. We know these from the Game of Life, which rewrites a grid (each cell gets rewritten based on the structures around it), or from the famous Koch snowflake fractal, where each line segment gets rewritten as a line segment with a triangle in it, and so on. An L-system is a string rewriting system. So this is a pretty solid indication that this is a grammar. But what is a grammar? A grammar is: what strings, or grids, or lines, or things in any dimension you please, can you make with rules? Those rules can be of any form, whether geometric, like the grid shapes that match cells in the Game of Life, or like our L-systems, with an alga that is splitting and a baby alga that's growing up. That is the definition of a grammar. I know that seems absurdly broad, so I'm going to flip it around to something we're a little more familiar with as programmers: what strings can you match with rules? Like, here is a regular expression. What strings does it match? Bet you didn't know you were getting a pop quiz, right? Don't even worry about it. So when we talk about grammars, it'd be nice to know where L-systems fit in here, and to organize grammars in a broader context. To do that, we will use computers to organize our grammars. Now, we know regular expressions fairly well, and to implement regular expressions, the only computer you need is a deterministic finite automaton, which is a fancy name for a bunch of states and a bunch of transitions between them.
You don't need any memory. You don't need a stack or a heap or any of that; just leave it out. Your computer only has to be the states and the transitions between them. Now, with all of the computers you can build in that shape, you get every regular expression ever: all of the sets of strings that can be matched with a regular expression. We call those the regular grammars. So this little box indicates all of the strings that can be matched with regular expressions, all of the grammars you can make that way. But we know from experience that there are grammars outside of it, like if you have, like me, totally naively tried to parse HTML with a regular expression. It doesn't happen, because HTML has structures that exist outside of this small area. (There are also finite grammars, which are a subset of the regular grammars; we're going to ignore them.) For HTML, you need a stack to parse. You need to keep track of which tags you're inside, because you can have self-similar structures inside those tags that you parse the same way, but you also have to remember how far down you've gone. And this is starting to look a little bit like a hierarchy, right? We have outsides and insides. This is called the Chomsky hierarchy, described by the linguist Noam Chomsky. And as you might guess, there are things even beyond these context-free grammars that we use for HTML. You ever heard of a programming language called Ruby? No? Okay. So beyond the context-free grammars, which you only need a stack to parse, we have context-sensitive grammars. C++ is the canonical example: you can't tell, just from reading the code, whether what you have is a class instantiation or a function call, because they look the same. The only way to tell is to check your broader context to see what part of the language this is. You need random-access memory to do that.
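To make "just states and transitions, no memory" concrete, here's a hedged little sketch of a DFA in Ruby. The language it accepts is my own toy example, not one from the talk: binary strings containing an even number of 1s, which is a regular language.

```ruby
# A DFA is nothing but states and transitions: no stack, no heap.
# This one accepts binary strings with an even number of 1s.
def accepts?(input)
  transitions = {
    even: { "0" => :even, "1" => :odd },
    odd:  { "0" => :odd,  "1" => :even },
  }
  state = :even                                   # start state
  input.each_char { |c| state = transitions[state][c] }
  state == :even                                  # :even is the accepting state
end

accepts?("1010")  # => true  (two 1s)
accepts?("10")    # => false (one 1)
```

The entire "computer" is the `transitions` hash; running it is just following arrows.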
Now, there's even more beyond this, but we already have random-access memory, and that's what this computer has. How much farther can we go? The answer is that recursively enumerable grammars need a possibly infinite amount of memory to parse. You might just keep going and going and going. You need the full Turing machine, with more memory than you could ever possibly enumerate. And Ruby exists, I'm not exactly sure where, somewhere between context-sensitive and recursively enumerable. I suspect it's context-sensitive, but I don't really have a proof of that, because every time I open up parse.y, I just kind of look at it and then close it again. And then there are things like Perl, which can't be parsed. That's not a joke; you really can't parse Perl. So L-systems are a weird slice of a bunch of these layers. With the rules I've been talking about, an L-system can match some finite grammars, some regular grammars, some context-free grammars, and some context-sensitive grammars, but not all of them. I'm going to make that concrete in just a second. There are also context-sensitive L-systems, which are even stranger and beyond the scope of this talk. So L-systems are a strange slice of other grammars. Let's take the example of matching nested, balanced parentheses. We know we can do this with a context-free parser, same as HTML: you keep track of how deep you are and you just keep recursing down. But an L-system can only match this structure if there's a token in the middle. As we saw earlier, we can make a rule saying a token X maps to X with parentheses around it, iterate on that process, and get nested, balanced parentheses. But if there's no token in the middle, the L-system just can't conceive of that structure. You cannot match it with an L-system. And as we know, with an arbitrary number of balanced parentheses, you're absolutely out of luck with regular expressions.
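The token-in-the-middle point can be sketched in a couple of lines. This is just an illustration of the rule described above, X maps to X wrapped in parentheses, iterated a few times:

```ruby
# The single L-system rule X -> (X); every other symbol rewrites to itself.
rule = { "X" => "(X)" }
state = "X"
3.times do
  state = state.chars.map { |c| rule.fetch(c, c) }.join
end
state  # => "(((X)))"
```

Every generation is balanced and nested, but the X in the middle never goes away; that's exactly the structure an L-system can reach, and pure `((()))` with nothing inside is the structure it can't.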
So here's an implementation of an L-system in Ruby. You'll notice it's dead simple. In fact, most of it is bookkeeping, keeping track of what state we've passed in and what the rules are. So I'm just going to highlight the actual logic of the L-system, which is a one-liner that I've split across three lines. You have the state, like an alga and a baby. You split it up into a list: alga, baby. You apply the rules to each, the rules being a hash that you can map across the list, and then you join them back together. I have a nice little illustration of how that works. That's all. That is the entire implementation of L-systems. You can throw this into a console and it definitely works; I've tried it. You have your algae all in a line: alga, baby, alga, and so on. But what I really want, my goal for this talk, is more dimensions. So, since we're talking about plants, how about a sprout? Here we have a simple L-system describing a sprout, in only two rules: each leaf buds off into a stem, a left-facing leaf, and a right-facing leaf, and each stem simply grows longer every generation. Now, we've been drawing algae all in a line, representing them with a string, and now we need some way to draw things in two dimensions, because we have left leaves and right leaves and we're going to end up with a two-dimensional image. To do that, we'll introduce turtle graphics. You ever played around with turtle graphics, the Logo programming language? Maybe you were introduced to it as a kid. The idea is that you have a turtle and you can command this turtle. You can tell the turtle to move forward, and the turtle will move forward and draw as it goes. You can say: save our current position, push that onto a stack; turn left any number of degrees; move forward; and then you can recall the previous position and zip back to it.
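Here is a minimal re-creation of the implementation described above. The speaker's actual code is on GitHub; this sketch just shows the same split-map-join core with the bookkeeping around it:

```ruby
# A minimal L-system: the constructor is bookkeeping, and the whole
# rewrite step is split -> map the rules hash -> join.
class LSystem
  attr_reader :state

  def initialize(start, rules)
    @state = start
    @rules = rules
  end

  def step!
    @state = @state
      .chars                                 # split the state into a list
      .map { |sym| @rules.fetch(sym, sym) }  # apply the rules to each symbol
      .join                                  # join them back together
  end
end

algae = LSystem.new("A", "A" => "AB", "B" => "A")
3.times { algae.step! }
algae.state  # => "ABAAB"
```

`Hash#fetch` with a default leaves symbols without a rule unchanged, which is the usual L-system convention and what the parentheses example relies on.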
So here's a really quick example of how a turtle evaluates a string. You provide the turtle with a string (that's the "S [ S" up at the top) and a set of rules for what the turtle should do. In my implementation in Ruby, these are all symbols, or a list with :turn and a number of degrees. When the turtle sees a stem, it moves forward. When it sees an L, it moves forward. When it sees a left bracket, it turns left. So as the turtle evaluates, it simply goes: I see an S, I move forward; I see a left bracket, I turn left 60 degrees; I see another S, I move forward. Clear? Cool. So, back to the sprout: we need a way to break its structure down so that the turtle can understand it. And I want to really emphasize that the turtle is an arbitrary choice of drawing framework. If you are a graphics programmer, you would do this by multiplying a bunch of rotation matrices as you go. There's no inherent connection between our turtles and our L-systems, just that turtles are cuter than rotation matrices. That was the criterion I went by. So, at the end of a stem, imagine this leaf, this branching structure. We save the position at the end of the stem, turn left, and draw a leaf; then, to draw the right-facing leaf, we restore the state, so we go back to the tip of the stem, turn right, and draw another leaf. These always occur next to each other, in a sort of recursively descending way, so we'll always be returning to the same point to draw those two leaves. So our sprout is an L-system. We have a bunch of rules; this is just a very terse way of writing the rules I just described, with the left leaf, the right leaf, and the stem. Then we make a new turtle with the current state of the sprout, and we tell the turtle what each symbol in the L-system means. The first generation is boring. It's a leaf and, you know, whatever.
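A hedged sketch of such a turtle, in plain Ruby. The symbol conventions here (`:forward`, `[:turn, degrees]`, `:save`, `:restore`) are my own naming for the ideas above, not necessarily the gem's API; the turtle keeps a position, a heading, and a stack of saved states:

```ruby
# A turtle that walks a string, one symbol at a time, and records the
# line segments it draws.
class Turtle
  attr_reader :path

  def initialize
    @x, @y, @heading = 0.0, 0.0, 90.0  # start at origin, facing "up"
    @stack = []                        # saved [x, y, heading] states
    @path  = []                        # drawn segments
  end

  def run(string, meaning)
    string.each_char do |sym|
      case meaning[sym]
      when :forward
        rad = @heading * Math::PI / 180.0
        nx, ny = @x + Math.cos(rad), @y + Math.sin(rad)
        @path << [[@x, @y], [nx, ny]]
        @x, @y = nx, ny
      when :save    then @stack.push([@x, @y, @heading])
      when :restore then @x, @y, @heading = @stack.pop
      when Array    then @heading += meaning[sym].last  # [:turn, degrees]
      end
    end
  end
end

turtle = Turtle.new
turtle.run("S[S", "S" => :forward, "[" => [:turn, 60], "]" => [:turn, -60])
turtle.path.length  # => 2 segments: up, then up-and-left
```

Swapping this out for rotation matrices would change none of the L-system code, which is the point: the turtle is just one possible evaluator.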
The second generation is also boring; it's the same thing we started out with, right? But the next generation is interesting, and I'm going to run through it just to make sure we're solid on how these L-systems evaluate. A stem, a stem; save the position at the tip of the stem; turn left; another stem; save the position again on the stack; turn left; finally draw a leaf; then recall back to where we were before, turn right, draw a leaf. I think you get the idea, right? So, trees don't actually look like this, right? At least, that's what I used to say when I gave this presentation. It turns out there are plants that look like this. Here's Drosera binata, an ancient carnivorous plant, still around today; you can find it, I think, in the wilds of Washington State, and it really looks like that. It has that exact branching structure, each angle 60 degrees. It's crazy. But aside from carnivorous plants that exist in a tiny region of the world, trees maybe don't look like that in general. So let's model an actual tree, like a juniper branch. Now, this looks complicated, but it's the same thing we were talking about before. You start with a twig, T, and you map it to a structure. That long string really just means this image: each twig becomes this branch each time we evaluate it. And here's the animation of the turtle executing. When we're done, we have something that looks like this, which is a pretty good approximation of what a juniper branch actually looks like. Real juniper branches are a little more complicated; each twig has a different length depending on how long it's been growing. But this is a really good approximation, especially with just a turtle, right? So, we've talked about how L-systems are grammars, and how you need a computer to evaluate a grammar. As the plant itself grows, it is evaluating: the plant is running a computation as it grows. I think that is super neat-o.
You can also do weirder, non-organic stuff with L-systems. Here's a fun Sierpinski triangle, and if you like really weird stuff, there be dragon curves. So now let's take a different view of the growing stem of a plant. We're going to go top-down, slicing right across the tip of the plant's stem as the leaves bud off from the center. This area of the plant, called the meristem, is filled with plant growth hormone and nutrients and all sorts of things that leaves really like. Now, as a leaf grows, it greedily sucks up everything around it and then heads out from the center in search of more growth hormone. The next leaf budding off at the center will be budding into an area that doesn't have as much growth hormone, so it will set off in a different direction in search of more, and so will the next leaf. Now, I want to model this. I want my plant sprouting leaves, but actually implementing this would kind of suck, right? I don't even want to think about the exact time complexity, and the time complexity doesn't really matter; I'm using time complexity as a way to describe code suck here. We really don't want to implement that. We can do better. So here's the very first model of how humans understood how plants grow. This was worked out at Cambridge in, I think, the 1950s or so, and the model was charged particles. Particles, little drops of fluid filled with iron shavings or something like that, all have the same electric charge, so they repel each other because of electromagnetism. And you can actually run this experiment: as you drop charged fluid onto an electric plate, the drops spread out in the same pattern that leaves grow at the tip of a plant, those characteristic spirals. That's really neat.
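A rough sketch of the charged-particle step (my own, not the talk's code): every particle feels an inverse-square push away from every other particle, which is the naive pairwise loop that makes it quadratic. The `strength` constant is an arbitrary tuning parameter for illustration.

```ruby
# One step of the O(N^2) repulsion model: for each particle, sum an
# inverse-square push away from every other particle, then move it.
def repulsion_step(particles, strength: 0.01)
  particles.map do |px, py|
    fx, fy = 0.0, 0.0
    particles.each do |qx, qy|
      dx, dy = px - qx, py - qy
      dist2 = dx * dx + dy * dy
      next if dist2.zero?            # skip the particle itself
      f = strength / dist2           # inverse-square magnitude
      d = Math.sqrt(dist2)
      fx += f * dx / d               # push away along the separation vector
      fy += f * dy / d
    end
    [px + fx, py + fy]
  end
end

repulsion_step([[0.0, 0.0], [1.0, 0.0]])
# the two particles drift apart along the x axis
```

Run it every frame while dripping new particles in at the center and, as described above, the spirals slowly emerge. The nested loop over all pairs is the O(N²); a quadtree would fix it, and we're not going to bother.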
That's a pretty good approximation, and we can implement it with the graphics library I've been talking about: each particle moves away from the others according to the laws of electric repulsion, and you'll see that after a while it starts to vaguely form these spirals. Now, the time complexity is O(N²), and the code suck is about the same. We're not going to worry about optimizations; no quadtrees for this, don't even worry about it. By the way, that was Alan Turing's model. He was the first person to describe this in the general case. He called it the hypothesis of geometrical phyllotaxis, which means "my idea of how leaves grow." And he later refined it into an even more accurate model. It's interesting that it was Turing who made this first model, because the mathematics is very similar to the mathematics of actually parsing computer languages, right? And Turing devised a closed-form solution. He didn't run the simulation, and he didn't measure things; he figured it out drawing in his notebook. Here's a drawing, in his own notebook, of how leaves grow around the meristem. And we can notice, as Turing noticed, that each leaf sets off at a very specific angle to the previous leaf, and that holds all the way around the circle. It's about 137.5 degrees, and if you're Turing, you derive the closed-form expression for it, which is the golden angle: 360 degrees over phi, measured from the opposite side of the circle. So now that we know this closed-form solution, the exact angle each leaf sets off at from the previous one, we can build an L-system out of it, right? You have a stem; you rotate it and grow a new leaf; and the rotation is 137.5-ish degrees. We can implement that and, boom, artichokes. The time complexity of this is fine. You simulate it once, and L-systems are linear. So that's great.
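The closed-form version is tiny. This sketch places leaf n at n times the golden angle; the square-root radius rule is my own assumption (the usual sunflower-style spiral layout), not something from the talk:

```ruby
# Closed-form phyllotaxis: leaf n sits at angle n * 137.5 degrees.
# 137.5 degrees is 360 - 360/phi, the golden angle.
GOLDEN_ANGLE = 137.5 * Math::PI / 180.0

def leaf_positions(count)
  (1..count).map do |n|
    theta = n * GOLDEN_ANGLE
    r = Math.sqrt(n)            # assumed radius rule, for even packing
    [r * Math.cos(theta), r * Math.sin(theta)]
  end
end

leaf_positions(200)  # feed these points to the graphics gem and: artichokes
```

No simulation, no pairwise forces: one pass, linear time, and the same spirals the O(N²) particle model converges to.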
And the code, as you've seen, is a few lines of Ruby, so the code suck is dramatically decreased. So let's go back to algae, but this time in more dimensions. Here's you again, and this time there are many of you, all in a line. We're going to do the cell-division thing again, this time not worrying about single algae and babies, just all algae, and each alga is going to divide downward, down the plane. That, except life is really complicated, and some of them have divided more than once in that time period. And this process keeps going: some of them are a little more enthusiastic and divide multiple times, and it goes on and on. It's not actually getting smaller as it goes down; that's just me running out of space to draw. As an aside, I've been representing these cells as dots with little smiley faces inside them, but actually, as they grow, they push up against one another, and we end up with a whole bunch of polygons at the boundaries between cells. I'm going to flip that around. The red lines are the cell boundaries. We can make that much simpler by just connecting the centers of the cells: we put a point in the middle of each cell and connect across each edge with a line. This is called taking the dual of a graph, and it's the same information, just with fewer vertices and fewer edges to keep track of. Much simpler. So I can draw that: we draw each line connecting the centers of the algae, and then we take away the illustrations and we're left with just this graph, which is much easier for us to deal with as computer scientists. Now, as we simulate this, I mentioned that things are not actually getting smaller as they go down, but they are, you'll notice, getting more tightly packed.
But we know the cells are all about the same size and squished up against each other, so I want to simulate what happens when these cells all squish together that way, using the well-known Unix command jellyfish. This is what happens. You can see, from the top of the screen down, all of these vertices connected with a line wherever there's an edge between cells. This ruffles out into three dimensions, into this kind of jellyfish-tentacle-like shape. It is the same structure as jellyfish tentacles; that's why I call it that. And as it ruffles, it moves into three dimensions. We have some bending stiffness here to make sure it doesn't just go all wibbly. But notice this form; you'll see it everywhere. You can probably go outside and find plants that have this wavy structure at the edge of their leaves. That's an instant identification: yeah, I know what the cells are doing here. This happens in jellyfish. Up where I'm from, in Seattle, we have Oregon grape and holly, which do exactly the same thing with the wavy edges. If you're from the desert, you've seen succulents that do this, and you've seen euphorbia all over the world with this same structure. Now, this structure is a specific kind of surface. In the world we live in, things like this table are flat. They are not curved in any way; as we move through space, we're not warping as we go, we're just kind of existing. But we live on the Earth, which is a sphere. We're small enough that as we walk around, things curve away from us as we go. If two people at the equator, 90 degrees apart, each walk in a straight line north, they'll run into each other at the North Pole. And this wavy structure is an example of hyperbolic space, which is in many ways the opposite of a sphere. It has some strange properties, and it's the one the plants have. Like: how many unique parallel lines can you draw?
In our flat world, if you have a line and a point off of it, you can draw exactly one line through that point that never intersects the first; you have one set of parallel lines, all pointing in the same direction. On a sphere, if the lines are the great circles, like the equator or the circles through the North and South Poles, there are no parallel lines at all; you can't draw two that never touch. But in hyperbolic space, you can have multiple distinct parallels through any point. They point in different directions, but they also never touch. These surfaces have other interesting properties, and I'd love to talk about them more, but the moral of this story, what I'm getting at, is that plants don't care about any of that. We can verify this, right? We can go outside and ask an Oregon grape, like, man, what beautiful hyperbolic leaves you have; what's the relationship between the distances of any given points on them? And I guarantee you, it will not answer. Thank you, that's all I've got. We have a bunch of time for questions. Yes, in the back. The question is whether I can think, off the top of my head, of any kind of application for these surfaces, given that we use nature a lot in inspiring our designs. One of the reasons this hyperbolic structure occurs a lot in nature is that it's surface-area-maximizing, so that's a pretty solid example of something you might use it for. To my knowledge, it's never actually been used, but it occurs not randomly; it serves a specific purpose, and nature selects for things like that. This is how I started programming, by the way. I was an illustrator. I was trying to sculpt, in a 3D modeling program, a model of a jellyfish, and I couldn't for the life of me get all the tentacles right, because they're really complicated, and I thought to myself: I bet I could get a computer to do that. Turns out you can.
Right. Well, we had a talk earlier this RubyConf, by Seng Wei, maybe, about parsing Ruby. If you haven't seen it, go back and watch it on Confreaks, and you'll get a better understanding of formal parsing theory. There are a lot of things you can look up from there; this is a huge field. The question was how to get a formal background in this. I don't even know where to begin; there are so many aspects to what I've talked about. What are you interested in specifically? Anything? All right. You can read Turing's papers; they're all available. A lot of the papers Turing wrote in the last few years of his life dealt with stuff like this. There's one really interesting one about the patterns that appear on the faces of fish, from a similar biologically motivated example, and there are his papers on the growth of leaves. That kind of stuff, if you're comfortable with the very formal math side of things, is where I would start. No more questions? All right, thank you very much.