Hi Tanishk, how are you? Hello. Oh, I'm behind, but yeah, hello. You're a silhouette today, huh? Yeah. Hello. Hello, Vish, how are you? Good to meet you, how are you? Good, thank you. Where are you joining us from? From Austin, Texas. Okay. Yeah, so our company, when we initially started doing image-recognition-based deep learning, we followed your course, and that was the starting point for us on our deep learning journey. Oh, excellent. Thanks for the course. It's going okay. Yeah, it's going okay. I'm not working on deep learning now, but it's the go-to resource for new hires. Fantastic. Hello. Hello. Hi. How are you? Oh, I can hear you. Can you hear me? That's wonderful. Good. Good. Yeah, I thought I had some audio issues there, but it seems the sound is good. How are you doing, Jeremy? I'm okay. I'm making some good progress on getting the course ready. It's mainly been about getting nbprocess working. Well, wonderful. And I'm illustrating it with DALL-E 2 pictures. Wow. Wow. It's going to be awesome. Do you have access? I just got it. So, good timing. Sweet. I still haven't gotten access. I think I signed up like a week later, so that might be why I'm still waiting. I can try and do a demo if you like. That'd be awesome. And this is so wonderful for abstract concepts. I mean, for illustrating articles, you know, I think there's going to be an amazing, well, very wide group of people who might benefit from it. If I were a journalist working on an article, I mean, wow, this is so much better than going to a stock photo or whatever. They've got a non-commercial restriction at the moment, but presumably at some point they'll have some kind of paid service. Oh, look who's here for the first time. I think Molly's joined us. Hi, Molly. Have you joined us before? Possibly not able to chat. That's all right. Hello anyway. No, I haven't had a chance to join before. Well, welcome. Have you watched any of the videos?
Done any APL-ing as yet? I have started the first video. I just saw that the study group had started yesterday. No worries, I'm sure you'll be caught up in no time. This is what it looks like. So who wants to suggest a prompt? You can always put it in the chat if you like, or just tell me. What does the fox say? Wait, you're meant to be suggesting something for it to draw a picture of. Maybe a student working on an assignment using an array programming language. A student working on an assignment using an array programming language. Do you want a photorealistic photo? Do you want a 3D render? Do you want a pencil drawing? What kind? Maybe something with colors. So pencil drawing, not so much. You could do a color pencil drawing or oil pastels. Oh yeah, that sounds great. A color pencil drawing. I don't think DALL-E knows a thing about array programming, from what I've been able to see. Yeah, I don't imagine it would, but you know, maybe it will surprise us. Add "digital art" for striking high-quality images. Oh, that's good to know. It's taking a bit longer than usual for some reason. Imagine making movies with DALL-E and so on: explain everything to it and it makes it for me. Something's happening. Okay. This is a problem: I often find when you put in these extra details, it tries to write things, but it doesn't know how to write. It doesn't really know what array programming is, so it's written some words that it thinks look a bit like that. All right. So what if we did "a programming assignment"? Pretty good though. Yeah, the first and sixth ones look somewhat accurate, I would say. Yeah, this person's coding by drawing on a screen. But do you have to pay for this, for this access to DALL-E? Or is it free, can anyone just go in here?
You have to apply and then you just wait, for months, I think it was. Okay. I might try. Right. Well, while we're waiting: what's been happening? Not too much. I added a blog post on my favorite Advent of Code problem from last year. Yeah, that one was, I don't know, just really, really cool. It worked for APL very, very well. So this is a map representing the height of the ground at each grid point. These are the high bits. Okay. That's cool. In part one, you have to find the low points, add one to the value of each low point, and add them up. That's a good one from Tanishk, much more creative: a professor who is a cat teaching deep learning in a classroom. Let's see how this went. Okay. It is a pencil drawing. Yeah, and it's a little bit different. You told it "programming" and it drew hieroglyphics. The hieroglyphics are more like array programming, maybe. And interestingly, when I started watching something about APL on YouTube, it started recommending new videos on, you know, hieroglyphs and I guess Egypt and stuff. So YouTube can draw the connection. All right. Let's open up our repo. I don't think there's anything to pull, but just in case. There is. Oh, because of the GitHub Pages. So I thought we could make a custom operator today. The cat is not a professor. No, it's not. "A professor who is a cat." Hmm. I guess at least it's got glasses, on the top left. Yes, I mean, that could well be, but clearly somebody else in the background is doing the teaching. I did an interview this morning with an online magazine kind of thing. We talked about this idea that prompt engineering is kind of a skill now. That's a skill I definitely don't have yet. I feel like it doesn't generalize from one model to another. Exactly. Exactly. Has anybody got a favorite drawing program? Have you tried Procreate?
I mean, not something that fancy, just something for doing, you know, these kinds of things. And I also just found out that my right mouse button doesn't work, so I'm now not sure how to insert a page. I'm guessing everybody is probably familiar with the idea of gradients, but it's fine if you're not, because I thought we could just briefly go over it. So if we've got something like a quadratic, the gradient at some point is the slope at that point. So this would be the gradient at this point: it's that slope. And you may or may not remember that the slope is equal to the rise over the run, which is the change in y over the change in x. So if this is (x2, y2) and this is (x1, y1), then the slope, rise over run as I said, is (y2 − y1) / (x2 − x1). Okay. And so what you could do is pick some point, so this is the point we're going to pick, and then just add a little bit to x, like 0.01. And this is the point here. So in this case, you could actually write y2 in a different way. So if this is some function of x, like x squared, and this is x, then you could change y2 to instead be the function of x plus a little bit, and y1 would be the function of x at the starting point. And then you divide by the amount that you moved x by, which in this case would be 0.01. That would be another way of writing the change in y over the change in x. Does that make sense so far? And so this is an approximation of the derivative: it's the slope at a point. And I thought we could try and do this in APL. So if we pick a function, I'll show you something interesting. You can define a function in a couple of ways. I think we've already learned that you can do a function like this: if you want to do y squared, you can do omega squared. That's one way to do it. But
you could also just do this, right? So you could do something like this, if that makes sense. So actually what we might do first is a nice operator. So this just returns the power function. So this is saying that f is the power function. And that might not sound very interesting yet, but you could make it a bit more interesting by saying f is a combination of a function and an operator, and we could give that a more sensible name, for example. And that's exactly the same as doing, oops, exactly the same as doing that. Right. So here's an example of something we could do. Let's learn a new operator, which is called bind, which is on the J key. So we can get the help... oh, that didn't work. We can get the help for it by typing help. Okay. So this symbol, you'll hear it a lot: it's called jot. And it's a... oh, this is finished. There we go. So "a professor who is a cat" is too hard, but "a cat professor" is fine. Not sure what happened before. But clearly professors point at things, and this cat professor is just faking it, because there's a real effect... most of the cat professors, well, some of them are faking it. But yeah, professors point at whiteboards with chalk or sticks, apparently. It's not obvious it's deep learning, but it certainly looks very mathy. Yeah, that one has some sort of network or something, it looks like. Yeah. Interesting. Maybe it's looking at some activation function. Yeah, could well be. Okay. So jot is a dyadic operator; that means there isn't a monadic version. And it can be a couple of things: it can be "beside" or "bind", depending on whether you pass it functions or arrays. Now, for Python programmers, let's call this the Python equivalence: partial. It's the same as using partial in Python.
So that's kind of a functional programming idea. Most of the functional programming ideas in the standard library are in the functools module; that's where you can import partial from. And here's what partial does. You define something, and let's say we call this power, which does that: three squared. Okay. So what if we wanted to now define square using power? We could say square = partial(power, y=2). And what that says is... so partial is a higher-order function, or in APL terms an operator: something that returns a function. And the function it returns is something that's going to call this function, passing in this parameter; it's going to always set y in the function to two. So I could then say square(3). So that makes sense. So bind does the same thing. So we could say squared equals... we can create a function. So we're doing a power function, to the power of two. So this is a function; we don't say equals, obviously. And so you can see here, this is this idea that we don't need to use curly brackets and omega and stuff; we can just define a function directly. This is called point-free programming in other languages, and it's basically where you write functions without specifically referring to the parameters they're going to take. And in this case, because this is an operator, it therefore returns a function, so there's no need for round brackets and whatnot. Does that make sense? Nope. An operator that returns a function? All operators return a function; that's the definition of an operator in APL. Do you remember last time we did operators, we learned about backslash and slash? Yes. And those are... so it's the beside-or-bind that's an operator, right, so it returns a function. Okay. So it takes the argument on the left and on the right. That's right. So this is the function to apply, and this is the right-hand side to bind, and it returns a function. And it's an operator.
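For reference, here's a minimal Python sketch of the partial idea just described, mirroring the two forms of bind (the `power` function is the illustrative example from the discussion, not library code):

```python
from functools import partial

def power(x, y):
    """Dyadic power: x to the y."""
    return x ** y

# Bind the right argument, like APL's *∘2: "to the power of two".
square = partial(power, y=2)

# Bind the left argument, like APL's 2∘*: "two to the power of".
two_to_the = partial(power, 2)

print(square(3))      # 3 squared is 9
print(two_to_the(3))  # 2 to the power of 3 is 8
```

In APL terms, bind is an operator applied to a function and an array.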
Therefore it returns a function. Got it. And you can do it the other way as well: you could say "a power of two", and then you can do the opposite, "two to the power of". So this means two to the power of, rather than to the power of two. So two to the power of three is eight. So this is the equivalent of partial where I bind the right-hand side, and this is the equivalent of partial where I bind the left-hand side. It makes sense; it's just surprising. It's like Lego bricks. It is, yeah. So, let's move some of this stuff into the right spot. Okay, so then, yeah, let's just go like this. So this one's called bind, and "bind" is a very common computer science term for this: partial function application, or bind, as it's called in C++. Now the other thing we could do is "beside", and beside is what happens if you just pass two functions to it. So for example, we can create a function that first takes the reciprocal (remember, monadic divide is reciprocal) and then does e to the power of it (remember, monadic star is e to the power of). So if we go, for example, reciprocal of three, and then we do e to the power of that, it's that. And we should find f of three is the same thing. That makes sense. So this is called function composition; it's just one form of it. So first it does this function, and then takes the result and passes it to that function. You can also use the function that is returned dyadically, like so. That is going to be the same as, I think... well, is it? Yeah, it is. It's going to be two to the power of a third, so this will be the cube root of two. Yeah. Okay. So if you use it dyadically, then it first applies this to the right-hand side, and then it applies this to the left-hand side and the result of that. And they've got some really nice pictures of this somewhere... oh yeah, here.
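A rough Python model of what beside returns, assuming the monadic and dyadic behavior described above (`beside` here is a hand-rolled illustration of the APL semantics, not a library function):

```python
import math

def beside(f, g):
    """Model of APL's beside (f∘g): used monadically it gives f(g(y)),
    used dyadically it gives f(x, g(y))."""
    def h(*args):
        if len(args) == 1:
            return f(g(args[0]))   # monadic: f (g y)
        x, y = args
        return f(x, g(y))          # dyadic: x f (g y)
    return h

recip = lambda y: 1 / y

# Monadic: *∘÷ applied to 3 is e to the power of (1/3).
exp_recip = beside(math.exp, recip)
print(exp_recip(3))                # e ** (1/3)

# Dyadic: 2 (*∘÷) 3 is 2 to the power of (1/3), the cube root of 2.
pow_recip = beside(lambda x, y: x ** y, recip)
print(pow_recip(2, 3))             # about 1.2599
```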
Bind is also called currying. Composition: so there's lots of ways of doing function composition. And so beside, as you can see, takes Y on the right and passes it through G, which is this function, which was reciprocal; and then F, which was power, gets the left-hand side and the result of that. Which, interestingly enough, is exactly the same thing as if you just put them next to each other... I think... oh no, that's different. That's interesting. Okay, I'll take that back. There's something in that. The APL Wiki is good for this stuff. Wait, it says it's the same. That's very confusing. Oh, it's without the parentheses. So that does the same thing. So there's a few ways of doing the same thing here. So it's a bit confusing — I mean, it doesn't need to be, but it can be if you're not careful — that this is a dyadic operator, because it takes a left-hand side and a right-hand side and returns a function. The function it returns can either be used monadically or dyadically. That makes sense. And the way it behaves is based on that. If it's used monadically, it's reasonably straightforward: we just first apply this, and then we apply that. If it's used dyadically, it behaves like this. It makes sense; it's just interesting that it exists. So the purpose of it, the way I think of it, is to expand mathematics. Right? So in mathematics, we stick symbols next to each other and they have meanings. But the rules for what symbols you can stick next to each other, and when, vary a lot depending on the symbol and so forth. So APL tries to just lay out the ground rules and says: this is how you can do it. And so let me show you my favorite example so far, which is calculating the golden ratio. We're going to need some more operators for this. Okay.
The next operator we're going to learn is star dieresis, which is Shift-P. And this is also called power, but it's actually the power operator rather than the power function, which is confusing, so you do actually have to be careful: the power function is just a star. It is an operator, therefore it returns a function. It's dyadic, therefore it needs a left-hand side and a right-hand side. So let's define... I don't know if any of you have done metamathematics, but metamathematics is the philosophical foundations of mathematics. And there's a small set of axioms, which are called the Peano axioms, which people used to think you could then derive all of math from, although it turns out that's not necessarily true. And so basically the idea is you say: okay, we're going to create something called 0. And then it creates something called equals, and equals is defined by saying every number x equals x; that if x equals y, then y equals x; and that if x equals y and y equals z, then x equals z. I mean, these are all obviously true things; it's just defining some basic axioms. And then it defines this function called capital S, where we just basically say it exists and it returns another number, and it's defined in such a way that it's the successor of a number. So the successor of 0 is 1, the successor of 1 is 2, the successor of 2 is 3, and so forth. And we can actually easily define S now, because it's plus one. And so if we assume that 0 exists and we assume successor exists, then we can create the number 1, and we can create the number 2, and we can create the number 3, and so forth. And so at this point we want to invent addition, because in mathematics you're basically not allowed to assume anything exists except what you put in your premises. So addition is what happens when we apply S a bunch of times.
If we want to add 3 to 0, we would write three S's followed by 0. Does that make sense? So the power operator simply says how many times to repeat a function. So if I want to repeat the function S three times, that's the same as writing S space S space S. So if we want to create a function called add, then we're basically going to apply the S function alpha times to omega. So, for example, 2 added to 3: we'll start with 3 and then apply S twice. That would be the same as writing S space S space 3. Does that make sense? So we just invented addition. Yes, absolutely it does. So we can do the same thing for multiplication. We're going to add omega, alpha times — I'm not sure if it matters which way around this goes. Oh, I wrote the wrong thing, which obviously is going to break everything; I need add. Okay, I'm adding... oh, I'm adding to zero. There we go. So I'm adding to zero: this is what I add each time, and this is how many times I add. So we just invented multiply. So we can of course now invent "to the power of", and that would be where we start with 1 and we multiply; the number of times we multiply will be, yeah, the thing on the right. So 2 to the power of 3. Okay. Yeah, it is quite mind-bending to look at the syntax too, because, you know, it's the star thingy, the star dieresis. It looked already quite interesting when it was with the monadic S function, and now it can also... Yeah, this S thing you have in cell 26, it's just on the right-hand side of it, and that's what it does, but then you can also use it with a dyadic function. Yeah, and it's another example of just composing this very small piece of functionality.
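The construction above can be sketched in Python: a `repeat` helper plays the role of the power operator, and addition, multiplication, and power fall out as repeated application (all names here are illustrative, not the session's APL code):

```python
def repeat(f, n):
    """Like the APL power operator f⍣n: return a function applying f n times."""
    def g(x):
        for _ in range(n):
            x = f(x)
        return x
    return g

S = lambda x: x + 1                                   # Peano successor

add = lambda a, w: repeat(S, a)(w)                    # apply S, a times, to w
mul = lambda a, w: repeat(lambda x: add(a, x), w)(0)  # add a to 0, w times
pwr = lambda a, w: repeat(lambda x: mul(a, x), w)(1)  # multiply 1 by a, w times

print(add(2, 3))  # 5: start at 3, take the successor twice
print(mul(2, 3))  # 6: add 2 to 0, three times
print(pwr(2, 3))  # 8: 2 to the power of 3
```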
So it's applying its left operand function cumulatively g times, and if it's a dyadic function, it's applied to repeat... oh no, that's not what we want. It's bound: if there's a left argument, it's bound as the left argument. So we've basically seen this idea, right, of binding an argument. So that's basically what it's doing: it's saying that this is multiplied by alpha each time. Okay, now where it gets really fun is that you don't have to put a number on the right; you can put a function on the right. And this is going to come somewhere towards answering your question, Radek, about what all this is for. So we've now got a sequence of five glyphs in a row. Okay, so what does that mean? Well, this glyph here, we know it means take the reciprocal, and then add; and as this is on the left, it's going to be take the reciprocal and add one. So let's try it. Let's just grab that, copy it, and we'll call that f. And so if we go f of one... sorry, one f one, I should say... that equals two, right? Because it takes the reciprocal of this, and then it applies plus to the result and the thing on the left. So one plus one is two. That's because of this: this is what we get if we do beside. We first apply reciprocal to the right, and then we apply plus to the left and the result. So the reciprocal of one is one, and the left-hand side is one, and the result of reciprocal is one, so one plus one is two. And we could take the result of that and pass it back into exactly the same function with the same left-hand side. And we could do that again: take the result of that and pass it in on the right-hand side. And if we do this a bunch of times, we're actually doing something quite interesting: we're creating something called a continued fraction. And the continued fraction that we're creating is this one. So we started with one plus one over one, and then we made it one plus one over that, then one plus one over that, then one plus one over that. Now, if we keep doing
that enough times, eventually we're going to get a number called phi. And phi is also known as the golden ratio, and the golden ratio appears in nature and art basically everywhere. It appears in nature when we look at the proportions of things, it appears in the ratios in famous paintings, it appears on the snail's shell; it's this number that appears everywhere. And why are we talking about it? Well, we can calculate it by typing this again and again and again, but that's going to get pretty boring. We could do this, right? So that's going to do one plus one over one, and then it's going to do one plus one over that, and one plus one over that, and so on. And I think we now know that we could replace that with "just do f a bunch of times", I don't know, five times. So that's nice, because now we can go a bit further and get... that's actually a pretty good estimate of the golden ratio. There you go. Yeah, about 1.618. Does that make sense so far? Yes. All right. But how do we know how far to go? Well, basically we want to keep on applying f until the next time we apply f, the result doesn't change to within floating-point error. If you replace the 5 with equals, then the power operator, with this on the right-hand side, repeats this function again and again and again. And each time, it passes this function the previous result and the current result, and it stops if this function returns one. And in math, we call that the fixed point. The fixed point of a function (or sequence, I guess) is the point at which it stops changing. So this is exactly the same thing, but iterated until it stops changing rather than a fixed number of times. I'm not sure how to change the precision that we print things out at here, but if you printed this out in high precision and then passed it to itself again, it wouldn't change.
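A minimal Python sketch of the same idea: iterate f(x) = 1 + 1/x until the value stops changing, which converges to the golden ratio (the tolerance here stands in for APL's "stops changing within floating-point error" behavior):

```python
def fixed_point(f, x, tol=1e-15):
    """Apply f repeatedly until the result stops changing (within tol)."""
    while True:
        nxt = f(x)
        if abs(nxt - x) < tol:   # previous and current result agree: done
            return nxt
        x = nxt

f = lambda x: 1 + 1 / x          # one step of the continued fraction

phi = fixed_point(f, 1.0)
print(phi)                       # about 1.6180339887, the golden ratio
```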
And so if you replace f with its definition, which is this, then you get that. And so the answer to your question of what all this is for is: so that we can write short, concise mathematical expressions for things like "here's the fixed point of the continued fraction that calculates phi". Is that kind of mind-blowing? It is. Very much so, but it's amazing. It is amazing. And yeah, there's something delightful, at least to me, about realizing notation can take you this far. And I would much rather write this than this, you know? And you can't even input this into a computer, because what the hell does dot, dot, dot mean? It means "a human ought to be able to guess what goes here". So yeah, I think it's beautiful. It's a beautiful notation. It's a powerful notation, and it lets us express complex things fairly simply, once we get the idea. And the nice thing is that because we were able to express this quite simply, we can then use it as a thing we create another layer of abstraction on top of; we use it as an input to something else, you know? Because if we didn't have this kind of bind — sorry, this beside composition idea — then this whole thing wouldn't really have worked. We can use these ideas in Python as well. In Python, you can do function composition, and I think fastcore might have something... someone correct me, I can't quite remember what's where. You've got partials that could be involved. Yeah, compose. So this is the same as beside in APL. So you can pass it a list of functions. So for example, here's one function and here's another function, and then here we are composing the two together. It goes in the opposite order, I think: so it applies f1 first and then f2. Sorry, what were you saying? Oh, no, I was going to point you to the compose function, but you found it before I could mention it. Yeah, no worries.
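A hand-rolled version of the compose idea discussed here, showing the left-to-right application order (fastcore ships a compose along these lines; this sketch is just for illustration and is not its actual implementation):

```python
from functools import reduce

def compose(*funcs):
    """Compose functions left to right: compose(f1, f2)(x) == f2(f1(x))."""
    return lambda x: reduce(lambda acc, f: f(acc), funcs, x)

f1 = lambda x: x + 1
f2 = lambda x: x * 2

g = compose(f1, f2)
print(g(3))   # f1 first (3 + 1 = 4), then f2 (4 * 2 = 8)
```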
And there's a bunch of things like this in fastcore. So, for example, I really like partial, so I created a better version of partial which doesn't throw away documentation. This is basically the same kind of idea as broadcasting. Here we are: I created my own bind class. Now, I created all these before I did much with APL, because they're used in other functional programming languages. Possibly, or maybe even probably, APL did them first; I'm not sure. There's actually a whole academic field called combinatory logic, which is all about — there you go — eliminating the need for variables, kind of like point-free programming. And the ideas that are in APL — I wouldn't say they come from combinatory logic, because nobody knows if Ken Iverson had ever seen this stuff before; it's quite possible he reinvented all this from scratch — but it turns out they're all exactly the same. And something that Connor on the ArrayCast podcast mentioned, which just arrived, is that there's a book of puzzles which actually covers lots of the combinatory logic ideas. So I will let you know, once I start going through it, what I think. So there's a lot of layers of the onion we can peel off, and it turns out that we find ourselves in all these different areas of math and logic and philosophy and whatever. Jeremy, if you check that book out, you have to let me know how it goes. I checked it out maybe a couple of years ago... Stephen Wolfram was talking about combinators — maybe it was only a year ago — and I happened to get interested, and I checked the book out and worked through some of it, but I got lost pretty quickly. I'm sure that you will probably fare much better than me. So it'd be good to see how it goes. Well, you know, maybe at some point we can start working through it together if people get interested.
Yeah, it's interesting listening to an early ArrayCast episode where he talked about how he started with APL and went to some APL user group meeting or something, and said to people: wait, are trains combinators? And combinators are trains? And everybody at the APL user group meeting, he said, all said: no, this is an APL invention, it's not some other thing. Because, you know, intellectual worlds don't mesh very much and nobody realized. The APL intellectual world in particular, I've noticed, is pretty cloistered, to be honest. So anyway, yeah, they are the same thing. So, I'm trying to remember why the hell we were doing all this. This is definitely our most intense study session, right? And please don't be worried if this feels a bit intimidating, because it is, and that's fine. We're meant to be learning new, interesting parts of the world, and when we find them, we shouldn't expect them to make sense straight away. So we were doing... that's right, we were doing jot. Yeah, I mean, that's probably it. That's a good place to finish, isn't it? I just want to remember what we were doing... yeah, that's fine, I don't think we need any of this. Oh, that's right, we're going to do custom operators. Okay, I think we've got some good background to tackle that. Oh yeah, and we were... yeah, okay. So we got to here. I guess before we go, we should at least write down something, so that we've made some start here. So, oh yeah, okay, I remember now. I wanted to point out... yeah, okay, so I was going to do squared, that's right; I wanted my function to be squared. And so to do that, I had to show you that this means squared, okay? So then, to calculate the derivative of this function at some point, we're going to use this formula, the one we wrote down here. So it's going to be f of... let's create some variable to hold 0.01.
So the difference we're going to work with is going to be 0.01. So it's going to be the function f of... and let's calculate this at some point, say two. No, let's do three. Okay, so we're going to calculate it at x equals three. So we're going to go: function of x plus d, minus function of x — and that's going to have to go in parentheses — and then we're going to divide that by 0.01. There are better ways to write this in APL, but I want to make it somewhat familiar. Okay, and for those of you who remember calculus, the actual derivative of x squared is 2x, so the correct answer would have been six. So, you know, we're on the right track. If we set d to a smaller number, we would get a more precise answer. So where we're going to try and head is that we can actually create our own operator that will calculate the derivative of a function at a point. And so maybe that's what we'll try to do next time. Any comments or questions before we go? That's really cool. Ha ha, awesome. I'm glad to hear it. Yeah, that's great, again, thank you. All right, all right, bye all. Thank you for showing us all this, bye bye. Bye, bye, thank you.
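The derivative operator being aimed at for next time might look like this in Python terms: a higher-order function that takes a function and returns its approximate derivative (a hedged sketch of the idea, not the session's eventual APL solution):

```python
def derivative(f, d=0.01):
    """An 'operator' in the APL sense: takes a function, returns a function
    approximating its derivative via the finite difference (f(x+d) - f(x)) / d."""
    return lambda x: (f(x + d) - f(x)) / d

square = lambda x: x ** 2
d_square = derivative(square)

print(d_square(3))                   # about 6.01; the true derivative 2x gives 6
print(derivative(square, 1e-8)(3))   # a smaller d gets closer to 6
```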