 Hi, I'm Bhaskar. I've been working with Erlang since 2007, when I had the freedom to do whatever I wanted at my own startup, so I chose Erlang. Like a lot of people who get introduced to Erlang, my first brush with it was through ejabberd — people often start by using ejabberd or RabbitMQ and then have to debug something or write a plugin. Right now I work at Helpshift on the event pipeline, where billions of events go through Erlang every month. Since this talk is about recursion, and why tail recursion in particular matters in Erlang, I want to start with a historical overview. Dialing back in history, there was a conference in Germany in 1955 on automatic computing, where people decided they needed to come together and standardize on future improvements. The group at the ACM also got involved, because they wanted a globally unified, machine-independent algorithmic language — and this effort is essentially where BNF came from as a standard. The goals for this language, the IAL, were that it had to be representable mathematically; you had to be able to put it in publications — in print, at the time; and it had to be translatable into programs. A lot of progress happened during this phase in the late 50s: they agreed on BNF, and they renamed the language from IAL to ALGOL. It was a truly international effort — something like 50 centers and close to 100 people, from compiler writers to language designers. A real unification effort. And it formalized things like pass by value and pass by reference. 
 It also introduced a lot of things that made people a little uncomfortable. Some of the things we take for granted today had their foundations laid in those committees. But it wasn't all rosy; there were clashes. For example, Europe used the comma as the decimal separator while the US used the period — since this was a unification effort, there were small, silly clashes like that, ranging up to more complex ones. By that point Lisp had already been introduced, and that community had more industrial experience; John McCarthy had already introduced recursion in Lisp, and that community wanted recursion included in the spec. But at that point, at least in Europe, people were not as welcoming about formalizing it. So Dijkstra did something that plays out almost like a Bond movie. You have all these people collaborating on something, the committee report is being published, and at the last moment Dijkstra and his colleagues quietly added one sentence — the exact sentence that added recursion — without anybody else knowing. Whatever was implemented in that report was what language creators after that would have to build, so he wanted to make sure recursion was in there. That's how recursion was added, in a sneaky way. Some people resigned in protest, and people fought over things like this for the first time. But overall it was considered a success: an international effort to produce a standard for defining programming languages. 
 Later I'll show you how this whole algebraic-language effort was used in Erlang. But first, the history of Erlang itself. Erlang was created in 1986, but there's a little more history before that. You could call Joe Armstrong employee number one of Erlang. He sat in a room, and Ericsson basically said: okay, fine, go play with your toys. The challenge at Ericsson was telephony. Usually when you think of creating a language, you think about goals — how many boxes do I tick? It should be expressive, robust, object-oriented, and so on. With Erlang, none of that happened. There was no grand master plan of making the world a better place. The only goal of Erlang was concurrency, and everything that happened afterward was a consequence of choosing concurrency as the only goal. Why was concurrency goal number one? Think about a telephone exchange. If two telephone calls are happening and you put your phone down, that doesn't mean the other call has to go down too. You have multiple phone conversations happening, and they should happen independently: a failure or an event in one should not affect another. (If it does, that's pretty much what crosstalk is.) The obvious consequence is that they should not share anything — what I say on one call should not end up on the other call. The concept of message passing actually has its origins in this simple idea from telephony. And Joe Armstrong also drew inspiration from things like abstract machines. 
 It's not that concurrency was invented with Erlang, either. Although concurrency was the first goal, there were other languages that offered concurrency, and Joe had gone through the papers for all of them and was inspired by different things each of them had. For example, the fault-tolerance idea — when something bad happens, somebody else should be notified — came from a language called PLEX. By the next year, 1987, the team had grown to two people: double the team. They already had a set of goals, even before they had started writing a language. The language's properties depended on these: it should not take long to create a process; the time to perform a context switch between processes should be small; and since processes run independently and communicate by message passing — if I call you, say — message passing should obviously be fast. Then the team moved to three employees. They still weren't working on a language; they were conceptualizing, reading all these papers. Using concurrency still wasn't prevalent; this was a world where semaphores and locking were the norm. In fact, there's a funny incident where these three people responsible for Erlang went to a conference just like this one, wanting to learn how functional programming and concurrent systems were designed. They asked one of the speakers: okay, I send something like this, and one of the processes dies — what happens? The typical reply was: it won't die. 
 Or: we can't do anything about things going down. That sort of answer taunted them. And here is where the algebraic, ALGOL-style notation comes in. Before Joe Armstrong wrote a single line of code — before he even started with Smalltalk — he first wrote down his vision of this message passing in the algebraic notation that came out of the 50s. Because he wrote it in that algebra, a Prolog engineer who happened to be walking by looked at it and said: hey, I can implement this. One of the goals of BNF was exactly that you could translate it into a program. So this is where things come full circle: if they hadn't made that decision about recursion and ALGOL and BNF, maybe we wouldn't have Erlang today. Then Joe started looking at actual programming. He picked Smalltalk, and then moved to Prolog, because his colleague clearly knew a bit of Prolog. His first experiment, following what the algebra described, was this: every number you hit on the phone — this was running on a simulator, of course — should be message-passed to the exchange. When you dial a valid number, that is, when it reaches all four or all five digits, the exchange should dial the other side if the number is valid; if the other side is busy, reply back saying it's busy, and if not, ring on the other side. All of these are events sent to and fro between what you could call actors. 
 Another way of thinking about this is as state machines, because these are actors which are alive and can get an event at pretty much any time. For example, how do I know when I've dialed a complete number? Say a valid phone number is five digits. It's a state machine that first takes one digit, then two, and when it reaches five digits, makes the call. That's why they realized very early that things like call control were best modeled as state machines. It's still 1987, and they finally decided to start writing in Prolog, patching it and writing extensions on top of it. Eventually one of them named the language Erlang. And here's the irony: even though they started with concurrency as the goal, in Joe Armstrong's own words, "we didn't realize it at the time, but copying data between processes increases isolation, increases concurrency, and simplifies the construction of distributed systems." That's something he realized in retrospect — a consequence of starting off by doing one thing right. There's a lot more interesting history you can check out; I stopped at 1987. There are three different papers by the first three employees on the Erlang team — I have a link, I think, over here. There were fun times inside Ericsson, too. Initially the Ericsson team benchmarked the first version and told them it had to be 40 times faster. So they made it 200 times faster. 
 Then they were told it had to be 40,000 times faster — that's the kind of bar that kept getting pushed internally. There was also a time when Erlang was actually banned: it was doing well, but for some business reason they banned it. So Joe just changed the name of the project and continued working on the same thing. He says that if you want to continue working on something that's been banned, change the project name — it'll take at least six months for management to realize you're still doing the same thing. You can check out these papers: the true story about how Erlang came about, by Mike Williams (employee number two); Joe Armstrong's history of Erlang; and Robert Virding's history of the Erlang VM. If you want to read more about this, I recommend you check them out. So, let's now move under the hood of recursion and tail call optimization. You might have come across factorial-style functions where at the end of the function you have something like return n * fac(n - 1). That's one way of doing it. A tail-call-optimized version is basically one that never has to remember where to return to: once it makes that last call, it forgets everything that happened before — every time it enters the function, it doesn't need to know what happened earlier. So, can anyone tell me the output of R(3)? Take a moment, look at the code, and give me the output. (This is not Erlang, by the way.) Let's just see what happens. R(3): three is not one, therefore compute three times R(2); two is not one, so compute two times R(1). It goes in deep and then comes back out. 
 That's your typical recursive function, but it's not tail call optimized. Now let's see the tail-recursive version. This will also give you six, but notice that you pass the state along. Just by looking at it you can see the difference in the call stack. Let me give you a non-programming example that will help. Say we have a program that must go through three doors, open each door, and see if there's paper behind it. We'll look at the procedural approach, the recursive approach, and the tail-recursive approach. Pardon my drawing skills — I spent 10 minutes on this. In the first version you have three different functions: function one in blue, function two in yellow, function three. You call them one after another, and this is how the call stack looks — each of these steps is a call frame on the call stack. It goes into function one, checks if the paper is there, comes back; goes into function two — is the paper there? no — comes back; then function three, checks, comes back, done. Now here's another way: there are three doors, but when you open one door, you see another door inside it, so you go through that one, and there's another door inside that. You walk all the way in, see there's no paper, and then come back out, closing the doors behind you. And finally there's the version where you go in and find a back door — oh, there's a back door, I don't have to come back all the way. The first of these is basically the procedural approach. 
 The recursive approach is where you go in and then come back out, because you have to remember where you were. And in the tail-recursive approach you never have to come back: you finish the function and that's it, you're done. Many of you might already see the advantages, and maybe the disadvantages, of this. Coming back to the two snippets: both are definitely recursive. But just looking at the one on the right, it looks easier to run each step off independently, because I can compute it without looking at anything else. Whereas in the first example you have to go in the same order — you can't suddenly change the order. So both are recursive; the first one is easier for debugging and for reading your backtraces, but you can't change the order; the second one looks easier to parallelize. And speaking of parallelizing, there's an important law we need to know. A candidate for parallelization should be sufficiently independent, and these calls look sufficiently independent — which is where Amdahl's law comes into the picture. (Does anyone know which year that was? I'd have to check.) The speedup of a program using multiple cores is limited by the time needed for the sequential fraction of the program. The Hollywood version: you're only as good as your weakest link. In your program, even if everything else is concurrent and parallel, you're always bottlenecked by the parts that cannot be made parallel or concurrent. You should think about those — regardless of the language, actually. 
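The slide snippets themselves aren't reproduced in this transcript, so here is a sketch in Erlang of what the two versions compute (the module and function names are mine, not from the slides):

```erlang
%% fact.erl — body-recursive vs. tail-recursive factorial, a sketch.
-module(fact).
-export([fac/1, fac_tail/1]).

%% Body-recursive: the multiplication happens AFTER the recursive call
%% returns, so every call must keep a stack frame to remember N.
fac(1) -> 1;
fac(N) when N > 1 -> N * fac(N - 1).

%% Tail-recursive: the running product travels along as an accumulator,
%% so the recursive call is the last thing the function does and the
%% stack does not grow.
fac_tail(N) -> fac_tail(N, 1).

fac_tail(1, Acc) -> Acc;
fac_tail(N, Acc) when N > 1 -> fac_tail(N - 1, N * Acc).
```

Both `fact:fac(3)` and `fact:fac_tail(3)` return 6, but only the second runs in constant stack space.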
 Speaking of parallelizing, it's a good time to move into spawning, or forking. When you think of forking, the first thing that comes to mind is daemonizing and daemons. Is it fair to say that a daemon is a long-running process? Let's look at a rough skeleton for a daemon. Many of you might have written some boilerplate for doing something like this: for the most part you're in the parent process, and at the end of forking, execution is in the child process. That's in C. Now, you can try this out right now: brew install erlang, or sudo apt-get install erlang, create a new file fooconf.erl, and type this out. This is just about the most pointless daemon you can have, but it is a daemon nonetheless. If you look back at the examples I gave earlier, it doesn't reference anything from before — it just makes the tail-recursive call again. It is recursive; I'm not doing "one plus something", so it is tail call optimized. In fact, tail recursion is so fundamental here that it's part of what a process is: every Erlang process is a tail-recursive function, and the moment it stops being tail recursive, the process dies. That's how you stop a process. Just to check whether I was bullshitting, I decided to write a test. Can you see the examples? There's foo and there's bar. I wanted to see what happens if I run these two functions. A lot of you might be thinking about the scheduler — usually, if you write a while(1) loop in C that does something, you know what you get. 
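The slide code for this experiment isn't in the transcript; here's a minimal sketch of what it plausibly looked like (the function names foo and bar come from the talk, the output characters are my guess based on the "x's and dots" described next):

```erlang
%% fooconf.erl — two of the smallest possible daemons, plus a test.
-module(fooconf).
-export([foo/0, bar/0, test/0]).

%% Each function prints one character and then tail-calls itself:
%% recursive, tail call optimized, runs forever in constant memory.
foo() -> io:format("."), foo().
bar() -> io:format("x"), bar().

%% Spawn both as separate processes. The schedulers preempt them, so
%% the output interleaves dots and x's even though neither function
%% ever yields explicitly.
test() ->
    spawn(fun foo/0),
    spawn(fun bar/0).
```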
 In Erlang, CPU scheduling depends on the number of cores in the machine. Erlang's runtime already scales to however many cores you have: if you have, say, a quad core, you get four schedulers, and they take care of the parallelizing. So I ran this example, and you can see x's and dots. It wasn't constantly chugging on just the one foo function or the one bar function. Again, these functions are tail call optimized and recursive, but the runtime is swapping between foo and bar — and I didn't do anything to make that happen. In fact, the banner here says smp:8:8 — that's the number of cores, and therefore the number of schedulers. So yes, these are in fact the smallest daemons you can write. Now just think about how you'd do a long-running process in, I don't know, Java or any other language you like. This will run for years. That, in fact, was the goal — like Joe Armstrong said, to run concurrent processes that run forever. That was the only goal. So, the Erlang scheduler: there are eight cores here; if you run erl, this is what you get, and when you hit enter you'll see that information. Now, you'll see the function spawn there. That's how you create a process — it's a built-in function. If you say spawn of any function, that function will now run on a new process. So it's in your hands how many processes you have: you can decide whether a job should run on process A or process B, and the intent of running a job can be passed around and run on some other process. It gives you those decision-making powers, and that's why it's actually really powerful. Just a little more on spawn. 
 If I run child_demon, it will run forever; you can check the memory — it won't run out of memory at all. If I want to fork and then start a daemon, that's all I did: spawn, meaning I want to run child_demon as a new process. If I just ran child_demon directly, it would block my current process: if you open the shell, that's your current process, and running child_demon there runs it in — and blocks — your current process. So you do something like a fork, and that's how the spawn built-in works. Like I said, another thing you'll find working with Erlang is that it's actually quite expressive to mix and match like this. Here's a function called get_okay, at the end of which it returns ok, but along the way it calls foo and bar. Another word about processes: there's a built-in called self that gives you the current process identifier. Whatever you run, if it doesn't fork into a different process, runs in the same process: if I run add, subtract, divide, it's all the same process, unless I message-pass and the flow continues somewhere else. Even though they're different functions, it's still the same process. And like I said, the process ceases to be alive when it stops being tail recursive: if this function ended by calling get_okay again, it would keep running forever, but in this case it returns an expression, ok. Now, this next one is an example of guard clauses — sort of like the polymorphism you see in OOP — say, if bar gets the value zero. Yeah, I forgot to write foo here, but foo just returns ok — because if foo never returned a value, it would run forever and never exit. So foo must also return a value; I should probably update this slide. 
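Putting the spawn and self pieces just described together, a sketch (module name and printed text are mine; the slide code isn't in the transcript):

```erlang
%% spawn_eg.erl — forking a daemon off the current process.
-module(spawn_eg).
-export([start/0, child_demon/0]).

%% A daemon that sleeps between iterations so it doesn't burn CPU;
%% the tail call keeps it alive forever without growing memory.
child_demon() ->
    timer:sleep(1000),
    child_demon().

%% Calling child_demon() directly in the shell would block the shell,
%% because it would run in the shell's own process. spawn/1 runs it as
%% a new process and returns immediately with the new pid.
start() ->
    Pid = spawn(fun child_demon/0),
    io:format("shell pid ~p, daemon pid ~p~n", [self(), Pid]),
    Pid.
```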
 So bar looks like this. All I want to do in bar is do something with the numbers 10 down to zero, and these four lines do exactly that. You'll see there are two occurrences of bar — I've defined bar twice: bar of zero and bar of X. What that means is you put the special cases of the function on top: I'm calling bar, but if the value is zero, do this; for everything else, do something else. The "do something with X" is just something arbitrary. Any questions about this? Here's another, complete example. You can apt-get install erlang and put this into your editor. One thing first: the module name and the Erlang file name have to be the same, so this has to be eg1.erl. Let's see how expressive this is. I want to do something every second. do_something prints a dot and waits for a second — that's how I implemented it: timer:sleep(1000). And we know what a daemon is: a daemon is just a tail-recursive function. So what would a do-something-every-second daemon look like? Exactly the last three lines there: it calls do_something, which prints a dot; it waits a second with timer:sleep; and then it calls the same function again, tail recursively. This will again run for months. It could send some logs, check a metric, check for connectivity — this is about the smallest daemon you can think of. [Audience question] Yes — so that you have access to the shell. For that, a process has to be told that it will receive data. Which brings us to message passing — we haven't gotten to message passing yet. Say there's a process A, running tail recursively. For it to receive data, there's a particular keyword called receive. 
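Before getting to receive, here is a sketch of the two examples just described — the guarded countdown and the every-second daemon (the slide code isn't in the transcript, so names and the "do something" bodies are my guesses):

```erlang
%% eg1.erl — guard clauses and a do-something-every-second daemon.
-module(eg1).
-export([bar/1, every_second/0]).

%% Two clauses for bar/1: the special case goes on top.
bar(0) -> done;
bar(X) when X > 0 ->
    io:format("~p~n", [X]),   % "do something" with X
    bar(X - 1).               % tail call: counts 10..1, then stops

%% A daemon: print a dot, sleep a second, tail-call itself.
%% This can run for months without growing the stack.
every_second() ->
    io:format("."),
    timer:sleep(1000),
    every_second().
```

`eg1:bar(10)` walks down from 10 and returns `done`; `eg1:every_second()` never returns, so you'd normally spawn it.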
 Like I said earlier, a process stays alive as long as it's either tail recursive or waiting for an incoming message — in which case it goes to sleep. If I have a receive block — that's the next function — if I write, say, do_something and then receive, the process will actually wait there until somebody sends it a message. So, a typical client-server example: I spawn one function, I spawn another function, and as long as I know the process identifiers of each, I can send a message to a process identifier and that process will get it in its receive block. Here's another small example: increment of X increments X by one, and then there's increment_till_100 — all we want to do is go from 1 to 100, forever. If I call increment_till_100 with, say, 0, it goes from 1 up to 99, then reaches 100 — but I've put a guard clause there, so at 100 it goes back to 0, and around again. Now you can put a timer inside, so that it counts to 100 every second, say. You can really compose things very easily like this. At the bottom here there's a receive, and an increment-forever. To your earlier question about a process receiving something: if I run the receive at line 54, the process will get one message and then die, because it's not tail recursive. But the other version receives a message and then calls receive again — that's a tail-recursive function that will receive anything, forever. So tail recursion as such is great for making daemons; when you add message passing, a process can send forever and receive forever; and when you add network programming, it's even more mind-blowing — you can connect two things. 
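A sketch of the receive examples just described (module and function names are mine; the slide line numbers refer to code not in the transcript):

```erlang
%% recv_eg.erl — one-shot receive vs. receive-forever.
-module(recv_eg).
-export([increment_till_100/1, receive_once/0, receive_forever/0]).

%% Count 0..100 forever: the clause for 100 resets the counter.
increment_till_100(100) -> increment_till_100(0);
increment_till_100(X)   -> increment_till_100(X + 1).

%% Gets ONE message, prints it, then dies: nothing follows the
%% receive, so the function returns and the process ends.
receive_once() ->
    receive
        Msg -> io:format("got ~p~n", [Msg])
    end.

%% Receive, then tail-call: a process that receives anything, forever.
receive_forever() ->
    receive
        Msg -> io:format("got ~p~n", [Msg])
    end,
    receive_forever().
```

In the shell: `Pid = spawn(fun recv_eg:receive_forever/0), Pid ! hello.` — the spawned process prints each message and keeps waiting.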
 It's great for doing things like retries — basically anything. Tail recursion plus network programming is actually a great combo. Speaking of network programming — I don't know how readable this is — there's a library module called gen_tcp. In two functions you can get a server. Let's first make a TCP server in Erlang: that's at line 46. It says listen on port 8021 and then accept. If I just stop my code at line 48, then as soon as I connect to that port the server will die. Why? Because it is not tail recursive. If I don't end with a tail-recursive call, that server listens only once. Then this function at line 52, receive-and-reply, is also not tail recursive. What does it do? It receives a TCP packet, echoes it, and closes the socket — and look at line 57: it doesn't call the same function again, so it will receive one packet and die. Now let's combine everything. At line 64 you have something that receives and then always receives again — a receiving loop, like in our earlier example. So that function will receive forever; the first function listens once; the second function replies once; and we have the semantics for doing something forever. Which brings us to the last two functions: a TCP server that listens and replies, forever, on a particular port. Let me just run this example. Okay — what this does is open a socket to the server we just created, and it gets back "echo 4"; this here is the code for that. Now, how many of you think this is hard-to-read syntax? 
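The slide code isn't in the transcript, so here is a sketch of such a gen_tcp echo server, reconstructed from the description (port 8021 is from the talk; function names and options are my assumptions):

```erlang
%% echo.erl — a tail-recursive TCP echo server, a sketch.
-module(echo).
-export([start/0]).

start() ->
    {ok, Listen} = gen_tcp:listen(8021, [binary,
                                         {active, false},
                                         {reuseaddr, true}]),
    accept_forever(Listen).

%% Accept a connection, serve it once, then tail-call:
%% this is what makes the server listen forever.
accept_forever(Listen) ->
    {ok, Socket} = gen_tcp:accept(Listen),
    receive_and_reply(Socket),
    accept_forever(Listen).

%% NOT tail recursive: echoes one packet, closes the socket, returns.
receive_and_reply(Socket) ->
    {ok, Packet} = gen_tcp:recv(Socket, 0),
    gen_tcp:send(Socket, [<<"echo ">>, Packet]),
    gen_tcp:close(Socket).
```

Dropping the final `accept_forever(Listen)` call reproduces the failure described above: the server accepts exactly one connection and then dies.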
 To me it looks pretty reasonable — not too much boilerplate, if you ask me. And once you understand the semantics — how to receive one message, how to loop forever — well, when somebody says Erlang has hard-to-read syntax, that's my response. So, we've talked about recursion; tail recursion; tail recursion with a receive; tail recursion that receives forever; and then tail recursion with network programming — tail recursion with a socket — which gives you the clients and servers of a network program. It was Leslie Lamport, a Turing Award winner and considered a father of distributed programming, who said that if you're not representing your distributed system as a state machine, you're doing something wrong. Because anything can happen: the connection can drop, your internet can go down, some error can occur. Just having a send on one line and a receive on the next — maybe the API can eventually look like that, but under the hood you should be implementing a state machine. For example: I have opened, or am attempting to open, a socket to a server. Now, how many of you have heard of Apache Kafka? We have an in-house Erlang Kafka producer; it's on GitHub, under github.com/helpshift. The way it works: in Kafka you have the concept of topics, and you can publish to a topic. Everything is done very lazily. As soon as you start ekaf — that's the name of the program — you can immediately say: ekaf, publish this message to this topic. It takes the message and buffers it, because in the meanwhile the state machine is attempting to connect to the actual Kafka broker; until the connection is established and you get metadata back, it keeps buffering. The state machine is holding state, and it's all transparent. 
 Under the hood it handles disconnections: it immediately stops workers, and when the connection is reestablished, it starts the actors again. So when anyone asks me to implement, say, a DB client or any other kind of client — if you're writing a library, you should really consider FSMs, state machines. And here's where Erlang's OTP comes into the picture, because OTP gives you ready-made boilerplate: a skeleton for making state machines, or client-servers, or whatever you need, so you can work on just the logic. For example: when I get X, do X plus 1 — that's all you have to say. You can say: when I get this event, change the state to that; when I get a TCP close, do a reconnect attempt. It's a fairly simple, meta sort of thing — a boilerplate Erlang module where you clearly define how your state machine should operate. That's gen_fsm; you can search for those examples. So, to conclude — how much time do I have left? Okay, great. To conclude: we saw that with tail-recursive functions — like the door example, where you enter three doors and never have to come back — there's no growth of the call stack and no memory leak. That's why tail-call-optimized recursive functions are the foundation of processes: as long as a function is tail recursive, the process is alive, and once it returns any expression, the process dies. Processes, in turn, are exactly the foundation for state machines. And state machines are basically the foundation for building distributed systems. That's why you can say Erlang is great for distributed computing — it's actually because of each of these factors. 
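To make the OTP point concrete, here is a toy connection state machine in the gen_fsm style described above (everything here — module name, states, events — is my own illustration, not the talk's code; note that in modern Erlang/OTP, gen_fsm has been deprecated in favour of gen_statem, but the shape is the same):

```erlang
%% conn_fsm.erl — a toy "connecting/connected" state machine.
-module(conn_fsm).
-behaviour(gen_fsm).
-export([start_link/0, event/1]).
-export([init/1, disconnected/2, connected/2]).
%% (The remaining gen_fsm callbacks are omitted for brevity;
%% the compiler will warn about them.)

start_link() ->
    gen_fsm:start_link({local, ?MODULE}, ?MODULE, [], []).

event(E) ->
    gen_fsm:send_event(?MODULE, E).

init([]) ->
    {ok, disconnected, no_socket}.

%% One function clause per state; the return tuple names the next state.
disconnected(connect, _Data) ->
    {next_state, connected, fake_socket};
disconnected(_Other, Data) ->
    {next_state, disconnected, Data}.

connected(tcp_closed, _Data) ->
    %% on disconnect, fall back and (in a real client) retry
    {next_state, disconnected, no_socket};
connected(_Other, Data) ->
    {next_state, connected, Data}.
```

All the logic — what to do on connect, on tcp_closed, and so on — lives in these clauses; OTP supplies the process loop, which is itself the tail-recursive receive loop we built by hand earlier.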
 So in a way you could say Erlang is great for distribution, but under the hood you now know that's actually because of tail recursion. I have an extra part which I thought I'd go through if there's time. It's not about tail recursion but about concurrency: the producer-consumer problem. Say you have a stream of incoming numbers and you want to publish them concurrently. [Audience question] "Regarding tail recursion — I understand the point of using it to run a process and keep it alive. But in the factorial program you showed earlier, isn't some expressiveness lost? Writing N times f of N minus 1 is clearer; it looks like mathematical notation." That example wasn't Erlang — it was a typical C example. But yes: in Erlang too, you could write a factorial that says fac of N is N times fac of N minus 1. You could do that, and it is recursive, but it's not tail recursive. Like I said with the door example: with plain recursion, where you do N times fac of N minus 1, it pretty much has to go through each of the doors, and once it's done, it has to come back out through all of them. Whereas the tail-recursive version carries everything it needs to go forward and never has to come back. Okay, let me come back to this. So: if you have a stream of numbers and you want to send them somewhere in parallel, one of the drawbacks of parallelism in this case is that things can go out of order. 
 That might be an acceptable drawback. In our case, with this Kafka producer, we get the messages, batch them, and send them, so it doesn't really matter — everything still goes out within a second or two. But for something mission-critical where order is important, you might have to introduce a deliberate bottleneck: one process that gets everything in order and then sends it. You have the power to make those kinds of decisions. And these are the different socket examples we saw. Okay, I'll just go through some examples then — I think I'm pretty much done. So, thanks. We can move into...