 Are we all set? Anybody feeling tired? Anybody need to do some jumping jacks before we get started with calisthenics? So in case you didn't know, this is array-oriented functional programming. And this is the only slide. Basically, what we're going to do here is we're going to sort of pick up on what you need or sort of try to attack that issue of the wall of learning APL, the initial difficulty, the hump, and give you guys some tips and tricks of how to think about your programs and introspect a little bit on your own design process and then correlate that to APL code. So everybody's going to have some assumptions about how to design or write their own software. And specifically at the low level, there are techniques you use, things like pattern matching, ifs, recursion, things like that. APL is different. And so what we're going to do here is we're going to go through those by example and look at sort of how you might do things one way and then try to break that apart and see how we can do the same thing in APL and pay attention to the differences in the way we think about these things so that you can begin to get an idea of how to begin thinking like an array programmer in your process. And hopefully that will ease the hump of using array programming in your life. And Deval has kindly insisted that this set of trinities shows up here. And I mostly agree. Remember that on the whole, from a high level, chances are when you're dealing with FP, you're going to be working with abstraction, function abstraction, and ADT abstraction in particular, right? Data abstraction, lambda abstraction in a particular way. In APL, we want you to take that and try to replace those with these principles here. Instead of abstraction, try to look for a direct solution. Instead of ADTs, use arrays. Instead of these lambda abstractions, go for the primitives in the language, right? So what is that going to look like? 
Well, I think we can have Deval set us up on our first example, right? So do you want the code or do you want? Well, it's just, it's just, which one do you want? Well, we can. Let's start with primes. Primes, yeah. So this is just a very simple frame. I think the right side is. Oh, you want the whole, well, I'm just going to make this maxed out so you can. Yeah. So this is just a regular prime, finding first, give me first 13 primes or give me 130 primes. Just written in Haskell, just regular Haskell where we loop over, I mean, loop in the sense filter and then this complexity is, you know, all n squared, right? Yeah. So this is a classic functional n squared algorithm for computing primes, right? And hopefully everybody, does anybody, is anybody lost at this point? Does anybody need help here? Is this syntax confusing to anybody? OK, so now as we go through this, I want you to also think you guys are allowed to interrupt us, create havoc, all that kind of thing. Importantly, we would love for you to interrupt us with your own pet small introductory programming, favorite little snippet, favorite small code snippet. So if you have an idea of like, oh, well, what about this? Something small, something sort of that you might do in an intro programming course to learn the basics of something. Bring it up and we'll see if maybe we'll do that instead of some of our other prepared examples. Because we have other examples, but we'd much rather play around with some of your code if we can. So this is going to give prime numbers pretty straightforward, right? So what happens if we take an APL approach to this, right? So let's do this the long-winded way first, right? So let's just say we have the natural numbers starting from zero, right? That's iota 20. OK, so we begin doing that. Let's store that into x. So now we've said we bound x to iota 20. That'll give us our results. OK, what do we do now? 
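Since the Haskell slide itself isn't reproduced here, a rough Python sketch of the same classic quadratic trial-division idea (the function name and shape are mine, not from the talk):

```python
def primes_filter(n):
    """First n primes, filter-style: keep k if no earlier prime divides it.
    Classic O(n^2)-flavored algorithm, as in the Haskell example."""
    out = []
    k = 2
    while len(out) < n:
        if all(k % p != 0 for p in out):  # the 'filter' predicate
            out.append(k)
        k += 1
    return out

primes_filter(6)  # → [2, 3, 5, 7, 11, 13]
```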
Well, what most people are thinking about now is let's use a filter abstraction, exactly as we did in the Haskell stuff. OK. Instead, what if we look at the mod of everything compared to everything? OK, OK. And now let's see if we can find the zeros. OK, so zero equals the x outer product modulus, or residue, of x. So now we have this pretty little picture of all of our ones and zeros. And what does this actually say? Can you spot the pattern? What is the name for this? Can you spot the pattern? Where are the ones? Yeah, yeah. So what is this abstractly representing to us? What is the name for this expression? Sort of like factors, right? Factors. So what we can do is we can sum the factors along the columns. And that tells us how many elements divide. Is this a large enough font for everybody to see? Or do you want it bigger? Bigger? So this was a column reduce, right? Is that a "make it bigger" or you're good? You're good. Oh, OK, OK. Cultural miscommunication. So Aaron, just to clarify, you did a column reduce, right? Yeah, so pretend that we put the plus symbol in between each row all along the columns. Right? OK. So then, of course, we can ask, well, which ones have two factors? OK. And oh, oh, oh, there. We've got this Boolean vector. And here's step number one, right? And I want you guys to think about this, right? When we do this, we're going to do that and we're going to filter x. So now, immediately, the word filter means something, right? So here is mistake numero uno, the most important failure that I see when I teach beginning functional programmers to work with this stuff. It's that they think of filter as some function, alpha, applied over all of the elements that we're working with, omega, and filtered through. This is how people think about filter normally. So would you want to clarify what alpha is? Yeah, so we're going to go through this, right?
So this is exactly how — if we wrote this in Scheme, right, what we would have is we'd have a map, or we would have some kind of filter function that we would then pass a lambda to, which would be some kind of predicate. So we would say something like maybe equals x0, and then we would filter that over some list of stuff, right? And notice the key point here, what did we use? Remember our trinity? We used a form of higher-order programming and lambdas to do this, right? This is a form of function abstraction. But in APL, this is an anti-pattern, at least in this case, because what we've done is we've introduced some barriers that are going to cost us potentially in any real production code. This is fine if you're going to play with primes for like 20 elements or something. But what we've done is we've created this each abstraction over omega, and it ruins us, right? So the moment you're working on APL code and you start thinking, oh, well, I'll just write an operator, a higher-order function that applies this function over these elements within each — if you see this, stop yourself, smack yourself upside the head a couple of times, go outside, do a marathon or something, come back and think about it differently, right? So it's a code smell in APL. It's potentially an extreme code smell. Now, we do do this sometimes, but when you're just starting out, this is a bad idea. Instead, this expression does something different. It's a filter operation, but instead of filtering by having filter be a higher-order function that's then going to take a predicate and apply it, we're just going to receive a bit vector that masks over the things we want and the things that we don't want, and use that to select out of our original elements, right? So if we look at our original elements compared with this bit vector, right? We can look at X compared to the bit vector and we can see that we're selecting out which elements we want, right?
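The two styles side by side, sketched in Python (an illustration of the contrast, not code from the talk): the predicate-lambda filter versus building a flat Boolean mask first and then compressing with it.

```python
xs = list(range(10))

# FP style: a higher-order filter taking a predicate lambda
# (the 'each' anti-pattern when transplanted into APL)
evens_fp = list(filter(lambda k: k % 2 == 0, xs))

# Array style: compute the whole bit vector at once, then select with it
mask = [k % 2 == 0 for k in xs]          # 0 = 2 | xs, in APL terms
evens_ap = [k for k, m in zip(xs, mask) if m]   # mask / xs

assert evens_fp == evens_ap == [0, 2, 4, 6, 8]
```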
Do we want to explain those symbols that we just threw in there? All right, so, breaking it down, APL evaluates from the right to the left, right? So we kind of execute in this direction. We sort of read from left to right, execute from right to left. So breaking it down in execution order, we table it. So that converts the vector to have the shape of a column vector, right? Sort of like in your traditional linear algebra. Then we have our original value X and we're going to catenate X together with that value, right? So what we did is we squashed the two together along the columns. And then, I don't think that's a very pretty display, so I'm going to transpose it and flip it on its axis so that the columns become the rows and the rows become the columns. And so, you know, this expression here is a little long-winded, frankly. What's another way to think about the primes? Another way to think about the primes is that the primes are all the things that are not members of the products of other things, right? So if we take X here — we don't want to deal with zero and one, right? So we'll call that Y. We're just ignoring the zero and one in this case. And then what we can say is, well, we just want all the Y's that are not products of themselves, right? So this is saying not Y a member of the outer product multiply of Y and Y. So what does Y outer product Y give us? So Y is a vector. It's going to go point-wise — it's going to take each element in Y, compare it with each element in Y, and multiply them together to get the table of all the products, just like we did with the residue, but this time we're using multiplication instead. And so then we can use that as a set and we can say what members of Y, or what elements in Y, are members of this set of things, and we get this vector. And this vector, this bit mask, says: those are all the things that are not primes. So, to get the things that are primes, we need to negate that.
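The "not a member of the products" idiom reads naturally in Python too — a sketch under the same assumptions (Y is 2 up to n exclusive; names are mine):

```python
def primes_without(n):
    """~∘(∘.×⍨) idea: drop every number that is a product of two
    members of y = 2..n-1; whatever survives is prime."""
    y = list(range(2, n))
    products = {a * b for a in y for b in y}   # y ∘.× y, viewed as a set
    return [k for k in y if k not in products] # y without the products

primes_without(20)  # → [2, 3, 5, 7, 11, 13, 17, 19]
```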
But this is also actually a little bit verbose. You know, just not quite satisfactory, because this Y thing in there is mucking about with our system. And this membership of the outer product. So why don't we instead combine that membership and the negation, and we can turn this into a train and say — well, actually what we could do, let's just simplify this — we could just say Y without all of those other elements. That works pretty well. Probably there's another way, which I can think of if you can spot some patterns again. Yeah, well, let's clean this one up a little bit, because we've still got Y sitting around. So instead let's just get rid of the Y. So Y is all the natural numbers, not including zero and one, up to 20 exclusive. Right? So then what is this guy here? That's just a function train. So the way you can think about this is this is just a function where we apply the outer product multiply as a selfie. What that means is whatever it receives as its right argument, it's also going to use as its left argument. So the selfie of plus on X is just gonna do X plus X, right? Is it the S combinator? Yeah, this is commute. It's commute, but the monadic version of commute, where we pass it only one argument and we just replicate the argument on the other side. So if you wanna call that the S combinator, go ahead. We'll just call it the selfie. It's the confused selfie. And so the selfie is now a function that's going to be the right argument to our without function. And as the left argument, we're just going to use this right tack. And that just says, given two arguments, I want the right thing. And in this case, whatever we pass to the right is just going to be our element. So what we can do is we can just say this whole thing is gonna get Y. So if we expand out what actually happens when we apply this, it becomes Y without the outer product commute of Y. And then from there, it becomes Y without the outer product of Y and Y, right?
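The "selfie" (monadic commute, ⍨) is easy to state in any language with higher-order functions — a minimal Python sketch of just that combinator (names are mine):

```python
from operator import add, mul

def selfie(f):
    """Monadic ⍨ ('selfie'/commute with one argument): turn a two-argument
    function into a one-argument one by duplicating the argument."""
    return lambda x: f(x, x)

double = selfie(add)   # +⍨ : x + x
square = selfie(mul)   # ×⍨ : x × x
double(5), square(5)   # → (10, 25)
```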
That's how that expands out when we apply it. So this is a function definition, but not a lambda abstraction. What's the difference here? Make a guess, anybody, make a guess. It has to do with a fundamental aspect of what a lambda abstraction does, in most functional programming languages that are lexically scoped. Keyword: lexical. Bindings, curly braces, scope. Yeah, scope. Yes, exactly. A lambda introduces a new lexical contour, a new scope. This is a function definition that does not introduce a new scope. It's just a function definition, but it uses the same scope as the enclosing scope. Okay, so there's my prime numbers idiom. What's the computational complexity? Well, it's trivially quadratic. So maybe we need to improve that. So of course, I think Deval can show us which pattern. Yeah, yeah, go ahead. So if you can get the prime numbers at the indices. So can you have the zeros and ones wherever we saw? Oh, you want the map? Yeah. Two rows as a layout, two by. Oh, the two by, yeah, yeah, yeah. So that's the transpose of X catenated with the table of: two equals the plus first-axis reduction of zero equals X outer product residue of X. Now, do you see a pattern here? Do you see a pattern here? The ones with the ones are, and the top one is what? The index, the index. Yes. If you look at the index, wherever the one is — so we just need to get the index in. Do you want to get the index? Yeah, and that would give me the prime numbers, that's it. Well, we've got this, which is the Booleans, and we could say, well, where are the ones? This pattern is easier to recognize, right? Than the other one. So really, when I went through this and had my aha moment, it was: I just have to look at the data patterns. In FP, we move element by element, whereas here the entire thing is right there in front of you.
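"Where are the ones?" is APL's ⍸ (where) — a one-line Python sketch of that primitive, for reference:

```python
def where(mask):
    """⍸ : indices of the 1s in a Boolean vector."""
    return [i for i, b in enumerate(mask) if b]

where([0, 0, 1, 1, 0, 1, 0])  # → [2, 3, 5]
```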
Yeah, actually, okay, so just a touch of usability, if all of you are thinking, oh, I'm gonna have to learn how to touch type the 20 APL characters per minute to do anything in APL. No, no, unnecessary. Dyalog does include a language bar which allows you to see what all the APL symbols are and walk through them with your mouse and see documentation for each of them and so on and so forth. I just hide that from you because I'm cruel and sadistic. I hate desktop scaling. Sorry? And we lose one. Yeah, yeah, just watch the video from the morning, or I mean, I'll do that once here if you want and you can decide whether it's better for you. Oh, I was gonna do a little bit more than that. So we can do — all right, that makes it more clear, right? Who votes for the English language version? We got one vote. Yeah, no, so this doesn't really help you that much. Question, yes? Yes, yes, it looks like the definition of what a prime is. Yeah, oh, so you're wondering about the symmetry between the definition. Well, so what is the definition of a prime number? It's the one where there are exactly two factors for the number. How many symbols are there in APL? Roughly 50, give or take. It's like learning a new alphabet. But — does anybody now, I think we've exhausted the quadratic primes at this point, unless somebody wants to dig into this more, but does anybody else, before we move on, have a question about either this or maybe another problem they'd like to see? Yes, convert a string to camel case. Ooh, I like that. So how do you know where you wanna put your camel humps? Sorry, sorry, what? Okay, and then capitalize anything that's to the right of it. Okay, yeah, yeah, that could be fun. Yeah, yeah, all right, yeah, so do we wanna do — so we've got the uppercase letters, do we have the lowercase? What do we get for the lowercase? Okay, all right, so the 819 I-beam. Oh, wait, where's I-beam on this system? Oh, yeah, okay, the 819 I-beam, yeah. Oh, 819, okay. But if you name that case, sorry.
If you name that case. Well, let's just run it. And what do we wanna do with it? Yeah, quote A. Okay, cool, so let's call that lower and upper. I'm using names just because I'm throwing you a bone. Okay, so now we have the upper and lowercase, so let's pick some strings, so examples. We have quick. I should just say these numbered functions are experimental features that haven't quite made it into the language yet. Okay, so here we have three examples. Let's take the first example, right, and so we'll get the first of examples. Okay, so let's find the spaces, or actually let's find anything that's not part of the alphabet. So the alphabet is the upper and lower cases, right? And so we wanna look at x and see what's a member of that. Okay, so now anything that's not part of that or remember our pattern before we can do the without and that gives us two elements, but we actually want the Boolean matrix in this case, right? So we want this mask, okay? So we're gonna remove those, but before we remove those I'm going to use that, so I'm gonna call that my mask and I can do the neg one rotate of that mask and shift it over by one. That points to everything that needs to be uppercase, right? So then if I use that, I can reduce over my original example to get the two letters that have to be uppercase. So from there, I can look in the lower case and find out what the indices for that are and index that from the uppercase, right? And get the uppercase values. So then I can just replace that in x one, so I can say inside, for instance, the neg one rotate of mask over x one, I want to set those to the uppercase values open, just as an example. So now we camel case that and from there, I can just say I only want the, yes, I'm sorry. Oh sure, okay, so let's do another example. Two queues in the string like the quick, quick, quick, like that. Okay, let's run that. So let's take this, do the zero on that. 
The zero on that — oh, we're gonna have to set mask, so mask is going to be the x zero member of upper and lower, something like that. Any bugs we see here yet? Or are we good to go? Ship it. Nope. So let's go through here. Oh, right, but yeah. So there we have our upper and lower cases, right? And what we probably want to do here is we can strip out the uppercase version, right? So notice that we didn't get the inside queue that you were worried about, because what we did here is x zero, if we ask which ones are members of upper and lower. Are you intentionally avoiding using the 819 I-beam to do the uppercase? Yes, okay, yeah, yeah. We demoed 819 to get lower, but I mean, we can always just apply the 819 I-beam, right? But I was hoping to not have to rely on that, just demo the indexing, so. Because yeah, if we take something like this, we can apply the 819 I-beam to that and lowercase it, but that's kind of. No, but you can uppercase it with a left argument of one to the 819 I-beam. It's like that, yeah, yeah. The 819 is because it looks like "BIG". So if we have — we want to get everything that's on the — so M, x zero member of upper and lower. So if we do this, this is everything we want to keep, right, and everything we want to get rid of. These are the places that will have to be uppercased, right? But if we apply this here on x zero, that'll show us all of the places that have to be uppercased, right? So now, how would we deal with this extra Q? Any thoughts? You could AND that vector with whether it's a member of lower. AND that vector with — yes, we could do that, right? So we could ask whether it's a member of lower. There's also another trick we could do, right? Well, let's just get our ABCs here, which is upper and lower, right? And so if we ask here, ABC index of, we'll get the index for everything. And notice that if we find it in upper, we want to keep it that way, but if we find it in lower, we don't want it that way.
So what if instead of indexing on upper and lower, we index on upper and upper? So that gives us the uppercase values. And so if we have M, is M the right thing that we want? Okay, so we can use then M to say not M, X zero gets, actually, let's do it up here. Actually, we did the negative one, rotate of M, it's X zero. And now, we've updated that. And so now we just do the not M over zero and that'll. So Aaron, I took your offer actually. Yes. And I, over the lunchtime, so just like he proposed this idea that we should do this as an audience for the problem and we do it here. Well, actually, are you coming up with another problem? No, the Reddit square problem. Oh, okay. Yeah, we can show that, but I want to make sure that we've addressed the camel case thing, right? So we've camel cased it, and I haven't made any attempts to optimize this at all, right? This is just me exploring the problem a little bit, right? And APLers will do this a lot, is we'll experiment with different approaches and we just play on the REPL to see what we get out of this, right? So does this, let's recap a little bit. Does everybody sort of understand what we did here? Because I think maybe we need to go through this again just a moment. Yes, yes, okay, okay. So we, this part where we did this, right? Do you understand what this is? Shouldn't you be assigning M in that? Yes, I should be assigning M. Otherwise it's confusing. So yeah, we can scope M here as well. So this part here, is that clear? Okay, so we're looking for the things that are in the upper and the lower case and everything that's not, and we rotate it once to identify the thing that's just to the right of whatever's not an alphabetical character, right? And so what we do then is we take this and we look this up, this ABC is our alphabet, right? And we can look up, we can see where the position of this thing appears in this vector, right? It becomes an index, so we call that index up. 
We saw the ABC index of ABCXYZ is zero, one, two, 23, 24, 25, right? And so we use that as an index into our upper-upper, right? So upper-upper has the same number of elements and the same shape as upper-lower. So all indices into upper-lower will be valid indices into upper-upper. And so from there, we can just do the index. So now we're gonna index into it, yes. So now what we did is we did a fancy vectorized assignment. So we said — let me just put that down there so I can reach it. So what we did is we have everything on this right side, which is our uppercase things. And we said: in the X zero vector, at the positions identified by this bit mask, set them to these values. So you can use any selection function on an array to the left of an assignment. Yeah, it's like an UPDATE in SQL: UPDATE ... WHERE some selection. It's equivalent to this, give or take, where we looked at the bit mask, figured out the index positions, and then did the bracket assignment to that, right? And so notice here that we're using something that imperative languages also kind of use, right? This bracket notation, these assignments. However, what's different here? This bracket appears at the top level of our conception, not at the lowest level, right? This is important. Inside of here we've got lots of meaty information that's going on. And we do that work there, and the mutation, the operation outside of that, is at the top level of this expression, not inside of some inner loop at the very, very bottom. And that's an important, critical conceptual difference in the way of thinking about this. So we're sort of inverting the things that we're doing here. And yeah, so questions. Yes, yes, somebody figured it out. Yes, so there's a bug in this program, right? Yeah, yes, yes, exactly. Yeah, both of these are edge-condition things that this won't handle, right? So you can play around with this and think: how would we tweak that to eliminate those conditions, right?
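The whole mask-and-rotate camel-case pipeline, with the update at the top level, can be sketched in Python (the loop over the mask stands in for APL's selective assignment; variable names are mine, and this sketch shares the edge-condition bugs just discussed):

```python
s = list("the quick brown fox")
alpha = set("abcdefghijklmnopqrstuvwxyz")

member = [c in alpha for c in s]            # which positions are letters
# ¯1⌽ of the negated mask: marks the character just after each non-letter
mark = [not member[-1]] + [not m for m in member[:-1]]

# top-level masked update: (mark/s) ← uppercase, all at once conceptually
for i, m in enumerate(mark):
    if m:
        s[i] = s[i].upper()

camel = "".join(c for c, keep in zip(s, member) if keep)
# camel == "theQuickBrownFox"
```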
So one way to eliminate, for instance, the X zero problem is just to pad it with a space at the beginning, right? To handle the double spaces, what would we do? Does anybody have any ideas? I don't know, is that going to fail? Aren't we checking that it's a member of upper and lower? Is it a problem if there are multiple spaces? Yeah, so — I don't think it's a problem. Let's do X zero with something like that. So we're gonna get an index error. Why? Because it's going to look up the space inside of ABC and it's not gonna find it. So what do we do? Yeah, yeah, we could potentially — we could squash them together. We can actually do something a little easier, too. Well, what does index-of return if it can't find the element inside of ABC? It returns the index one more than the length of ABC. So what can we do? We're gonna get an index, so we can just put a filler element at that index. But that's only going to work if you have two spaces. No, we have four in there. If you have a full stop and a space, it's not going to work. Full stop and a space. Yeah. Yes, no, it'll work. Well, it'll replace the full — oh, okay, we don't care about full stop, space. We're gonna delete them anyway, so we can replace the full stop with a space and it's not gonna be a problem. Does that make sense? Does that answer the question? I'm so glad somebody figured that out. I still feel it would be simpler if you, when you create M, ANDed it with whether example zero is a member of lower. You wanna do that? Do I want to do that? Not really. It is fast. I think... Do you want me to type for you? No, no. I think if we get rid of this — we're gonna have to learn to type on your keyboard. Then if we say — so we do the one rotation, and AND that with example being a member of lower. Yeah. So let's just execute that one sub-expression. And then we don't need ABC either, right? Because then we can just look it up in lower, right? Okay. Yeah, that looks about right. We don't need this. And why doesn't that...
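The not-found trick just mentioned — index-of returning one past the end, so a filler element catches it — sketched in Python (names are mine):

```python
def index_of(haystack, needle):
    """APL's ⍳ (index of): position of each element of needle in haystack;
    a not-found element yields len(haystack), one past the end."""
    pos = {}
    for i, c in enumerate(haystack):
        pos.setdefault(c, i)               # first occurrence, like ⍳
    return [pos.get(c, len(haystack)) for c in needle]

upper = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"
lower = "abcdefghijklmnopqrstuvwxyz"
# pad the target with a filler so the not-found index stays in range
target = upper + " "
idx = index_of(lower, "ab cd")
[target[i] for i in idx]  # → ['A', 'B', ' ', 'C', 'D']
```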
So let's evaluate that expression and show off some tooling. Why doesn't that work? Well, let's run it. No, no, I mean, like, let's just run this piece. How can that not work? I think it needs a negate. The one on the left of the AND identifies the elements that should be translated, right? We dug ourselves into a hole. No, so I think what we need is — if you wanna compare, this is all of the lowercase elements in the system. Yeah. And we want everything that is not a lowercase thing. To uppercase it, right? Or no, we want everything that is lowercase. So I don't understand how ANDing — having selected which ones we want to uppercase, the potential upper and lowercase — and ANDing that with whether it's lowercase can give us a problem. So the rotation here — let's see what X is again. All right, so let's do a comparison. Let's do X zero, catenate with... Right, except you've done a neg one rotation, and a not, right? So let's visualize this a little bit and see if we can figure out what's going on. Oh no, there are none to be uppercased. Right. But why does that give an error? So there was nothing to uppercase in this case. So when you did the index... I still don't understand why that gives a... Let's go through it, so... Yeah, so if you just compute that thing there to the right here. Let's compute that. Yeah. And that's nothing, right? So this is an empty vector. So if we use this to index into upper... Should be fine. That's fine. So now let's say... Oh, it's because we need to assign to M the entire, the ANDed... Yeah, yeah, yeah. Right? The problem is we need this assignment to M here. Where? There. Like that, yeah. All right. Because we're using half a partial mask on the one side. Like that, right? No, just without the not there. All right, there we go. Yep. Does that make sense? What we did? So let's take a moment, let's reflect a little bit and ask questions about what's confusing you at this moment. So we can backtrack a little bit, yeah. Is he?
Really? Well, I mean, we can do it with a fold too. Move on. Do you have a fold-based approach? All right, go ahead, yeah. Yeah, yeah. That would work, yeah. Okay, so let's explore those options. Let's do that one first, right? So we have X1, something like this. Let's actually move this back to the quick brown. Something like this. And let's... So let's take that. And you said you'd apply words to it to split it out into something else and run that. So you'd run over it, and everything that wasn't a space you'd convert to a space, right? Yeah, anything that's not alphabetical. So let's put some stuff in there for that, something like that, right? So if we have X0, we can say that's not a member of the ABCs, right? We get this stuff. So the first thing we can do, actually at this point — you wanted to do the words, right? Okay, so your plan is to convert everything to spaces and then apply words. But your words, you're just gonna — you're trying to compress the, yeah. Yeah, so you're basically partitioning it into the words, right? Yeah, so here, that's an interesting idiom that you pulled up. So here's an example of — these are all the things that we want to keep. Does the standard I-beam partition do it? I think you want just the enclose. Like, will that partition work? Will that partition work? Oh, actually no, the I-beam one should do that. Yes, not accustomed to that new symbol yet. So we applied words to it, basically. Does that — that's what you wanted to do, right? Yeah. And what that primitive does is it starts a new partition every time it gets to an element which is greater than the one before, the number before. And it considers it as starting with a zero. So the first one starts a partition like that. And then of course we can just go down each one and capitalize the first one, right? So, one of the ways that we might do something like that — how might we do that? We can do the first-each of those, right?
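That partition primitive — start a new group wherever the mask steps up from zero, and drop elements where the mask is zero — can be sketched in Python (a rough model of the Boolean case only; names are mine):

```python
def partition(mask, xs):
    """APL ⊆ (partition) for a Boolean mask: start a new group wherever
    the mask rises from 0 to 1, dropping elements where the mask is 0."""
    out, prev = [], 0
    for m, x in zip(mask, xs):
        if m:
            if prev == 0:          # mask stepped up: open a new partition
                out.append([])
            out[-1].append(x)
        prev = m
    return out

s = "the quick fox"
mask = [c != " " for c in s]
words = ["".join(w) for w in partition(mask, s)]
# words == ["the", "quick", "fox"]
```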
And then we can convert those to uppercase — which, if we're gonna cheat, since we already know how to do this, we can apply the I-beam to that, something like that, right? And we can do the substitution. Does reach indexing work? Yep. You could reach index, okay. Let me see if I can do that correctly. Actually, let's save a few things, so that, like, this applied to. You name that thing uppercase. Yeah, we'll call that Ucase. And so we have the Ucase here. And so then what we can do is X — this guy here is what I want. I want this guy. So we'll call that the words, right? So these are the words. And so the count of the words is something like this, right? And so what we actually want — get the indices of each of these, catenated each with zero. Let's do the reach index. I think you could actually just use the first-each on the left-hand side of an assignment. Let's do it, let's do it. I think. So if we do the first-each of words gets U. Yeah, and then we turn it back into regular form. Does that — is that the algorithm you were expressing? Yeah. Sorry? A one-liner with a fold? Yeah, okay, so let's do the fold version. So for the fold, we just need the 2-wise fold, right? Yeah, yeah, yeah. So let's do the 2-wise reduction over X1. You need to explain that function. I'm trying to see if anybody figures it out. So then what might we do? So if alpha is — are you waiting for the audience? I'm waiting for other people to do this. Because you wanted to, yeah. Yeah, so we can — so let's do this sequentially first, and we can see if we can — so we can ask ourselves, there are a few cases that we're dealing with, right? Well, do we wanna do it that way? I think we just do this first and we can store that into our little chunks that we've got. And then we can say the first of these chunks — which one is — this is how I'm thinking about this, right? So you get the first, and you can say which ones are a member of the ABCs, right? And so these guys are the ones that have non-matching first pieces.
Yeah, and so these are the ones that need to be uppercased, right? Okay, a better idea? Go, go, go. Well, I think so. All right, let's see it. Last time I had a better idea. Remember, you guys, this is just for you to see how we play around with problems and how we might go about things, right? We can give it a name. We can call it Camel or CML, sorry? Yeah, yeah, sure, sure. So I can transcode that if you wanna see that. He wants to see the transcoded version of his fold in there, so. So if we say upper is the one compose — no, no, do we have to do the compose? We do, don't we. Yeah, yeah, yeah, yeah. So that's that uppercase function that you had, the uppercase? Yeah, well, it would help if I made it big. Okay, so then you wanted an isAlpha, so you have an isAlpha, so isABC, I'll say that. And so that is, let's say, just a function that says member of ABC. That's a bad abstraction, don't do that, but just for this sake. And so then we are going to do a reduction over X1, right? And we'll say isABC on this alpha character. And if it is, we're just gonna give alpha. Otherwise we will — Ucase, no, you said — you want to what? Okay, so you would say the uppercase of the — yeah, let's do the uppercase of the first of omega catenated with the one-drop of omega, right? And we have to do catenate here, right? So that's the algorithm you have. Does that make sense? So that's just the same reduction that you have. That's basically a tit-for-tat conversion of it. The problem with doing something like this is it will not perform well. This is terrible performance. Yes, omega is the right argument, alpha is the left argument to the function. And the reason this is terrible is there's no way you're gonna get good vectorization on this in an interpreter, or it'll be more difficult. A fold in a supercompiler or something that has a really sophisticated type system can sometimes get that right, but you've got a conditional in there.
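The 2-wise fold idea — look at each adjacent pair, and let the left character decide whether the right one gets uppercased — can be sketched in Python (names are mine; note that with the space pad the very first letter also gets capitalized, one of the edge-condition choices discussed earlier):

```python
def camel_fold(s, alpha="abcdefghijklmnopqrstuvwxyz"):
    """Pairwise (2-wise) pass, like APL's 2 f/: uppercase a letter when
    the character before it is not alphabetic, then drop non-letters."""
    letters = set(alpha) | set(alpha.upper())
    out = []
    for a, b in zip(" " + s, s):      # (previous char, current char) pairs
        if b in letters:
            out.append(b.upper() if a not in letters else b)
    return "".join(out)

camel_fold("the quick fox")  # → "TheQuickFox"
```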
And that conditional branch is going to be very hard to vectorize, so you're really limiting your potential performance. The words-conversion approach is actually the much slicker one — partitioning it up and then splitting it. And you saw the nice idiom for that, right? It's just the membership test on the firsts of your worded version, and the words themselves are just the membership mask with the underbar enclose — partitioned enclose. No — even worse than that, if you've got a branch in there that fundamentally alters your behavior, it's very difficult to vectorize, because you can't do, say, four or six or eight characters in a row all at once. There are techniques I've seen in academia for doing that, but they don't perform nearly as well as a native array approach. So, I got this to work, and I can explain why it wasn't working. What this one does — this has no switches, right? It's calling this lambda — I guess it is correct to call it a lambda? Yes, that's an actual lambda; it introduces lexical scope there — on a window of size two, with the left argument alpha. So I'm saying: if alpha is not a member — if the left argument is a special character, not an alphabetic — then I use that as the left argument to this case function, because it takes a one to convert to uppercase, or a zero otherwise, and I apply that to the right member of the pair. And then this thing here, this is the intersection symbol. So I take the intersection of ABC and what we had there, just to eliminate the dots and the spaces. And my first attempt gave this completely bizarre result here. If you look carefully at it, you'll see that it's letters in alphabetical order — because that's the intersection. Intersection is not symmetric; this is the elements of ABC that are found in there.
So I need to throw in a commute, to say: use this as the left argument to intersection, so that I get rid of what I don't want. All right — so I think we've come up with four, five, six different implementations of camel case now. You had a question? No, only my first one did. Finding the indices: on GPUs it's a log-n critical path; on CPUs, what has Marshall got it down to with where? Finding the indices of a thing — sub-nanosecond per element on this kind of thing, basically. He has a talk at Dyalog that just came out, so you can watch his sub-nanosecond find-and-search indexing talk; he's got an algorithm for doing a binary search in the registers inside the CPU that does this. Yes, significantly faster. One of the problems in APL is that this is interpreted, right? So this function here is actually being invoked n times, and in the interpreter that has a high cost. You really want to go for the array-oriented solutions, because they run faster than hand-coded C — there's a team of insane people optimizing them using registers and SSE instructions and things you wouldn't normally do as a C programmer. And in fact this kind of use case is exactly what Marshall was optimizing with the find-the-indices work — and membership as well. Any of the membership, index-of, and where operations — smallish left arguments, small table lookups, with potentially large lookups on the other side — that was one of the use cases he was targeting. So yes, it's fast — faster than the branching — for a number of reasons. A: the primitives are very fast. But also you are, in some sense, reducing the amount of branching you introduce into the system, and the amount of indirection that happens under the hood in the interpreter. Branch-prediction failures are a very heavy cost on modern CPUs, and the more array-oriented you get, the fewer conditions you have, and the straighter it runs through.
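For reference, the windowed, branch-light variant described above can be sketched in plain Python: each (left, right) pair of the size-two window decides whether right gets uppercased, and a final membership filter plays the role of the ABC intersection (and shows why the argument order of the intersection mattered). Function names are our own.

```python
def camel_pairs(s):
    # Size-two sliding window, like APL's 2 {...}/ reduction: the left
    # element of each pair decides whether the right one is uppercased.
    if not s:
        return ""
    out = [s[0]]
    for left, right in zip(s, s[1:]):
        out.append(right.upper() if not left.isalpha() else right)
    # Keep only the letters -- the job the intersection with ABC did,
    # filtering the text by the alphabet rather than the other way round.
    return "".join(c for c in out if c.isalpha())

print(camel_pairs("hello world.foo"))  # -> helloWorldFoo
```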
Okay, so I think that — oh yeah, another question. It's not forcing you, but you'd be wise to change them, yes. Yes — the trinity: directness, primitives, arrays, versus abstraction, lambdas, ADTs? Yes, it's been proven that you can kind of do that, but don't. There's a paper — I don't have my bibliography here; it'll appear as a citation in my dissertation — where somebody developed an algorithm to convert recursive tree functions written in a standard style into an efficient implementation of the same thing on GPUs, using the data-parallel approach. The problem is that it requires compiler support to even begin to approach anything usable, and even then the performance overhead you're incurring is non-trivial. It's much easier, much faster, and much more expedient to just write the code directly as array operations instead of doing this conversion. If you look at the people who have tried to do the research into the recursion stuff — another one would be Eric Holk. Look up Eric Holk's dissertation: he worked on Kanor — no, not Kanor, his other language, Harlan. He designed a compiler that took recursive Scheme-style functions and, using region-based memory management, mapped them onto GPU expressions. So he proved that the GPU can technically implement lambdas and closures and all of that correctly. The problem was that it isn't performant — it doesn't perform well compared to the alternatives. And there's another PLDI paper, I think 2009 or 2010, somewhere around there, where somebody did branching operations on the vector units inside the CPU. The problem there, too, is that you're introducing significant overhead to deal with the recursion and the branching, so it doesn't scale very well — especially compared to starting out with vectorized algorithms from the beginning. Yes.
So the time complexity for all of these expressions is more or less easily understood. The current system cannot do it for you, but I'm hoping to add that as a feature in the compiler, maybe in a few years, so that the asymptotic complexity is just mechanically given to you for code written like this. Yes — it can be extremely precise, almost down to the byte level, on exactly how much space, memory, and time you might use. But when I do it in the compiler, it's probably going to be an upper bound — a big-O limitation. Fifteen minutes? Where does the time go? Do you have anything to say about those two expressions I put up there? I think this is one of the most fundamental patterns — one of a collection of maybe twenty array-oriented patterns that you need to learn, that we really do need to put in a book, right? Yes. So the most difficult — the trickiest — part of this manipulation was determining which characters are candidates for uppercasing. And the question that needs to be answered is: is a character alphabetic, and is the character before it not alphabetic? So using this "M and not negative-one-rotate of M" is a very, very common pattern. It's the kind of thinking you need to get into to code idiomatically in array languages. And you'll notice this takes the place of a lot of the function abstraction in functional programming: we encoded our logic into the data structure. So rather than — well, I don't know if we have time to do that, but there's an example on our GitHub if you want to see it, of the lazy sieve. Did you do the sieve with the lazy version? Yeah — a lazy sieve implementation of primes. We're going to do Vedic first. I think we should do it. But the sieve — a lazy version of that stacks procedures up to do the logic computation, and the evaluation of it is the fused or unfused evaluation of that function chain, to figure out what you're going to do for the next thing.
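A minimal Python sketch of that mask pattern — "a letter whose predecessor is not a letter", i.e. M and not negative-one-rotate of M — using a wrap-around shift to stand in for APL's rotate (the function name is ours):

```python
def camel_mask(s):
    # m: which characters are alphabetic?
    m = [c.isalpha() for c in s]
    # prev: m shifted one place right with wrap-around, like APL's rotate.
    prev = m[-1:] + m[:-1]
    # The pattern: alphabetic now, and not alphabetic just before.
    up = [a and not b for a, b in zip(m, prev)]
    out = [c.upper() if u else c for c, u in zip(s, up)]
    # Finally keep only the letters.
    return "".join(c for c, keep in zip(out, m) if keep)

print(camel_mask("hello world.foo"))  # -> helloWorldFoo
```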
In APL, rather than that, we explicitly encode that kind of logic — the question of "is it or is it not" — into a Boolean expression like this, which isn't really higher-order. It doesn't store procedures or closures around; it doesn't close over the environment. It just directly expresses the particular logic, and then we work off of that data — that Boolean mask. And Dyalog is extremely fast at that. Yeah — it's worth mentioning, just as an implementation detail, that most APL systems have bit Booleans, so in this case we'd be doing 64 bits at a time. For this string here, it would be one bang — it would just happen as one operation, and you'd be done. All right, so I think we should do that. There are more examples — we spent an hour on somebody else's example — but we should do Vedic. This was something — Deval, you should tell the story. Okay. Do you want me to pull it in? If you can type, because I'm not as fast as you. So yeah — I also took the offer, by the way, which he gave. I came up with this problem, which is on Wikipedia: the Vedic square. What you do is you take a multiplication table, literally — if you can show the PDF — oh, do you want me to type what you're saying? Okay, that's fine. So it's a multiplication table that we generate — yeah, that's the multiplication table — and then we take the digit sum of each of the cells, and that's what we get as a Vedic square. And then we can turn that into a function and derive lots of patterns, which you can see visually. Can we? Do you want to convert to Vedic? Yeah, convert to Vedic — digit sum of each of the cells. Do you want to see the first one first, or do you want to do both? Just show the output first, and then split the whole expression. So this is what a Vedic square looks like. Sorry — just a little cleanup. And now, if I compare it for equality with 1, I get a pattern.
If I compare it to — so we can wrap that in a function. Compare it to 2, you get another pattern. Compare it to 3, you get another pattern. And can we show the patterns visually? Do you have a slide? Yeah — what do you want to see? The patterns they can see visually, on the PDF. Oh, the Vedic one — the text version? No, the graphical version. Okay — if your internet is on, go to Wikipedia; that should be it. Now, what are we looking for? Vedic square. There you go. So, as you can see, this is the Vedic square, and if you compare it for equality with, say, 1, you get all the ones highlighted as a pattern — and so on and so forth, up to 9. And if you scroll a bit on the right side, you can see all the patterns that are generated here. Should we do that? Yeah, all right, let's do it. So if we have our Vedic square — let's take the Vedic square and do the 3-by-3 reshape of 1 plus iota 9, outer product equals, on the Vedic square. So those are the patterns. He'll animate it, probably. Sorry, what am I saying? Asking for it — can you have a nice-looking image? He'll do more than that. A dot in a quad? Why stop at a dot in a quad — why don't we just do a black-and-white image? You just whipped that up in five minutes. So there it is. Oh, but it should be the other way around. And can we get it in frames? Animated frames, one pattern after the other — you get a Vedic Game of Life. Oh, you want to animate it? Yeah, if you can. Okay — we get the Vedic Game of Life. So you want to see the patterns one after the other, in nine frames. So let's take this guy — oh, I guess I should actually seed it with an actual element. There you go. That's the thing that makes it interesting. Yeah — let's talk about what we just did here.
So what we did is we fed in one of the Vedic numbers that we want to compare — one, two, three — we started with zero, right? And we said: zero, compared for equality with whatever we find here. And whatever that is, it's a black-and-white image, so we display it. Then we delay for an eighth of a second, and then we increment our Vedic number for the next image replacement. But we only want to do it on a cycle, so we take the mod-9 of the increment and add one, so that we stay in the right space, between one and nine. And then we repeat, and we continue on in our event loop. So this is an event loop. I was thinking more of the expression where we generated the Vedic square. Did everybody get that? Deval can explain that one. So first, let's ask the audience: how would you generate a multiplication table in a functional programming language? Yeah, the multiplication table. Oh, you want me to show them a multiplication table? Yeah. Flat map, filter? So flat map would be there, right? Yeah — it's an outer product. Now, at this point I should make a mention. You'll recognize a lot of these things — outer product, fold, map, reduce. These are words you've heard, words you think you understand. And if you try to apply your FP thinking to the use of these things in APL, chances are you're going to go awry. Yes, we have them; we use them; they do the same things. But you should try to learn them the APL way, because they operate over n-dimensional arrays instead of the usual structures, and that can cause some confusion. So this overlap can often lead a functional programmer astray, because they'll go: oh, they have fold, so that'll let me do this; they have reduce, that'll let me do this. And yes — but then the code you write isn't actually going to be idiomatic APL. The way in which we engage with those tends to be slightly different.
So you should be careful about when you use those. We have all of them; they do all the same things; but they tend to do them in a slightly different way. So just beware when you engage with that. So the next step would be what? After we generate this, we want the sum of the digits of each cell. How would we do that in a functional-programming way? We'd have to write another pipeline where we do the digit sum of each of these — some logic. But look at that: each digit is separated out. Oh, whoops — that's a little hard to read, isn't it? We'll stick with it, sorry. It's representing each number as a polynomial, base 10. So the 10 and the 10 are the bases for the two digits; then we sum them — and do it twice. If we sum these up, we get the sum of the digits, and we have to do it one more time. This is one approach to it. Exercise for the reader: try to find different ways of doing it. "10 residue omega, plus omega greater than 10" — I can't help it, I want to do this one. Five minutes. Okay, let's see — how many minutes do we have? Five, four? What do you think, what should we do? Eight minutes? Ooh, generous. We have four more examples. Who's ready? Go. Ready? Well — okay, Game of Life is a required one, I guess, right? Okay, so Game of Life. So, iota nine — and then we can do the three-by-three reshape of iota nine, and ask membership in — what is it — two, three, four, five, seven, nine, something like that. Yeah, I'm just trying to fill it with something. So we can do the 50 50 take of r, say, and that gives a big space. Let's make it a big one, and do some rotations: a 20-rotate and a 20-rotate along both directions. That's going to put our little guy right in the middle. So we're just creating a big image to display, right?
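The Vedic-square construction described earlier — a multiplication table put through the digit sum twice — can be sketched in plain Python. (Applying the digit sum twice suffices here because a 9-by-9 table tops out at 81; the helper names are ours.)

```python
def digit_sum(n):
    # Sum of the base-10 digits of n.
    return sum(int(d) for d in str(n))

def digital_root(n):
    # Twice is enough for anything a 9x9 multiplication table produces
    # (maximum 81 -> 9), matching the "do it twice" in the session.
    return digit_sum(digit_sum(n))

def vedic_square(n=9):
    # Multiplication table, each cell replaced by its repeated digit sum.
    return [[digital_root(i * j) for j in range(1, n + 1)]
            for i in range(1, n + 1)]

print(vedic_square()[1])  # -> [2, 4, 6, 8, 1, 3, 5, 7, 9]
```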
So we have this — we've got our little guy stored there, right? And now we can look at a smaller version of this guy. Maybe a 10 10 take — no, that's too big. Five by five? Yeah, a 5 5 take. And let's center that one up: a two-rotate, two-rotate, something like that — maybe a one-rotate here. Yeah, one-rotate is better. Okay, that puts him roughly in the middle, so I'm just going to call that rr. So now we have this guy, and we can do the 1 0 ¯1 rotate-each on rr — now we've done rotations along these directions, right? And then we can do the 1 0 ¯1 outer product rotate-along-the-first-axis of the 1 0 ¯1 rotate-each on rr, and that gives us all the combinations of the rotations. Then we catenate that out into a vector and sum it up, and this gives us the neighbor count — the cell itself included — for the next generation of the Game of Life. So we can ask: where are the threes and fours? — which does not look like that; it looks like that. And for the generation of the Game of Life, we add them together: if your count is three, you're alive; if it's four, you're alive only if you were previously alive — that's what this is saying. And both of those contribute to the next generation of r. So we apply that, disclose it, and there's our next generation of r. That gives us our function, life — so let's parameterize it. All right, so now we have life, and we can animate it: we take the image of life — actually, let's just display what we have — and then (oh no, my mouse, I hate that) delay by an eighth of a second again, then run life on whatever we get, and run that for a while — on a 128-by-128 random white-noise image. What did I do? I did something — I hate when I do something. Where's the value error? Oh — that's why. Reset.
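The rotation-based step just walked through — sum the nine shifted copies of the grid (the cell itself included), then "three means alive, four means alive only if already alive" — can be sketched in plain Python, with cyclic list rotations standing in for APL's rotate primitives (function names are ours):

```python
def rotate(rows, i, j):
    # Rotate a 2-D grid i rows and j columns, cyclically, like APL's
    # rotate-first and rotate: result[r][c] = rows[(r+i)%n][(c+j)%m].
    rows = rows[i:] + rows[:i]
    return [r[j:] + r[:j] for r in rows]

def life_step(grid):
    # Sum the nine shifted copies of the grid -- this neighbor count
    # includes the cell itself, which is why the rule below uses 3 and 4.
    n, m = len(grid), len(grid[0])
    total = [[0] * m for _ in range(n)]
    for di in (-1, 0, 1):
        for dj in (-1, 0, 1):
            shifted = rotate(grid, di, dj)
            for r in range(n):
                for c in range(m):
                    total[r][c] += shifted[r][c]
    # Count of 3: alive. Count of 4: alive only if previously alive.
    return [[1 if total[r][c] == 3 or (total[r][c] == 4 and grid[r][c])
             else 0 for c in range(m)] for r in range(n)]
```

For example, a horizontal blinker flips to a vertical one after a single step, as in the standard Game of Life.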
I need a drawing window. There we go — and now we have the Game of Life. Now, we only did problems that people might claim APL is particularly well suited to. But if you come to the workshop tomorrow, or talk to me offline, I'll be happy to demo some of the other things — tree manipulation, compiler design, any of the rest. And the compiler, in fact, is written in this style, so I guess we'll show that. Do you guys want to see anything else? So here's the compiler. I've added comments to it, yeah. You'll notice we have lambda lifting; we have wrapping returns; we have lifting of guards and if-statements; we have lifting of expressions; we have lexical-scope resolution — all sorts of stuff. "Propagate the ground and free references up the lexical stack." This — this is it. Yeah, this. This is actually a lot smaller than that one. With the comments and the whitespace we're at 73 lines; it's about 40 lines of actual code. But you'll notice I haven't done anything you haven't already seen. I'm using all the same syntax, all the same things you've learned today. I haven't added any new tricks, except a couple of primitives you haven't learned yet. There's nothing extra here beyond some domain knowledge about how to write a compiler. Do you have the — let's go to Stormwind. Oh, it's here, right? Yeah. We can spend the last two minutes watching. Where's the video? The tech demo? Oh, the code. Well, we had planned five demonstrations, and I was going to do a couple where the code looked much more mundane, to give you time to breathe — the kind of code you might actually find in real systems — but unfortunately we're out of time. So here's an industrial APL application. And what's interesting about this one is that none of the waves, none of the environmental data, is stored — it's all generated.
So if you see trees and other things like that, it's all generated from open-source GIS data that's available for the region; as you go through, it just generates automatically. And all the wave calculations are dynamically computed using APL simulations. The rendering still goes through OpenGL via an underlying game engine, but all the calculations about the wave forms — all the when-to-do-this, when-to-do-that — are done in APL. Most of the APL in the world, though, is doing financial work: treasury management, portfolio balancing, that kind of stuff. Now, one of the fun things: at one of the conferences they hooked this up to an actual boating simulator, so we had this giant screen in a boat, and it rocked around on us while we tried to drive it through a course. That was fun. So all those trees and everything — that's not stored data; it's procedurally generated from the open-source GIS data. Yeah — it's used by some Scandinavian navies for training purposes. And I think this was done by one person, right? One person did it initially? No — the whole thing is still one person. He had some help from the University of Helsinki with the theory of wave generation — Baltic waves. These are not just ordinary waves; they're Baltic Sea waves. He consulted with an expert in the formulas for simulating waves, encoded that into APL, and did this. Any other questions? If you go to the Dyalog website, there are conference videos from our user meetings — you may need to dig a bit, but there are presentations by users, and occasionally you'll see some code in them, though typically not a huge amount. And there's also a list of case studies on the Dyalog website.
Yeah, but again, there's not a lot of code. It's the same kind of problem we heard about with Prolog, right? Either people are embarrassed, or it's a secret weapon — either way, they don't want to talk about it. Should I pull up the GitHub page so people can write it down? Yeah. I've been asked a question about Docker here. One of the things we're working on right now, to make it much easier to deploy APL applications, is making public Docker containers with APL pre-installed. So you'll be able to write one text file containing an APL function and launch it as a web service in a container, and that's going to make it way easier to deploy APL than it has been historically. Containers are just magic for APL. And if you want to look at some other basic problems, these are the ones we had intended to show you — prime sieves, Pythagoras, MAC addresses, trees, stuff like that — but obviously we didn't get to them, because camel case was more interesting. Tomorrow you can bring lots more meaty problems and we can tackle them all day, play around with them, and maybe even work on Docker containers if you really want to. The internet here is a little iffy, but maybe we could play with the web server — I can certainly demonstrate that. So tomorrow we could pull up a web server, write a little bit — oh, ah, I forgot: I never showed TryAPL. No — so, this was a one-day application that I wrote for a client. All they needed was some hierarchical e-commerce calculation that their e-commerce application wasn't providing. Their e-commerce person said: $5,000, three months to do. And I said: no, no — $500, one day. And that's what this does. And if we look at the code here — you should have kept the money. This is dumb. What can I say?
So if we look at the code here, this is a MiServer page — the actual web page that serves that piece. And the actual logic computing the solution is these four lines of code. That's computing a hierarchical bill-of-materials stock calculation for an arbitrary e-commerce inventory — the thing they said was going to take three months to build. Otherwise, this is more traditional APL — more explicit — to build the HTML, generate the output, trap some errors, and other things like that. You'll notice the first line of code is the most hideous one. Yes, that is the most hideous one, right on top. Oh — the :EndClass? What fails? You mean the where-fail? No, the if-else. Oh, the if-else. I did that because occasionally, at the top level, I will introduce an if-then-else. I mean, when I read something like this — it's better than a UML diagram. I think of this as two things going on: there's the structure, which is the :Class/:EndClass and the control structures, and then there are the lines in between, which are APL. I don't really think of the control structures as APL there. Structure. Well, actually — you might be curious whether there was a voluminous original requirements document. There was no requirements document for this. The person just explained what they wanted, and there was a little bit of an issue, so we did user pair programming: she came in, read the code with me, and identified what was the actual thing we needed to do. She didn't spend any time in the rest of this — she spent all of her time on those four lines of code, helping to figure out exactly what the calculation they actually needed was, while we went through it. There are APL programmers — we saw the power operator in the Vedic solution, where we applied a function twice.
So there are people who will do conditionals by using a Boolean right argument to the power operator, calling a function zero times or one time instead of using control structures. There are also APLers who will use computed goto indices to do control structures. All right, cool. Thank you very much.
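A tiny sketch of that power-operator trick in Python — a helper that applies a function n times, so a Boolean n acts as a branch-free conditional (the helper stands in for APL's power operator; its name is our own):

```python
def power(f, n):
    # Analogue of APL's power operator: compose f with itself n times.
    # With n drawn from {0, 1}, this is a conditional without if/else.
    def applied(x):
        for _ in range(n):
            x = f(x)
        return x
    return applied

flag = 1  # some Boolean condition, computed elsewhere
print(power(str.upper, flag)("apl"))  # -> APL
```

With `flag = 0` the same call leaves the argument untouched, which is exactly how the zero-or-one-applications conditional works.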