So what you just saw was a musical Jugalbandi. We're going to do Code Jugalbandi now. This is the fourth year we've been doing Code Jugalbandi. Last year, we explored programming paradigms. This time, we are going to explore concurrency. We have lined up three melodies; however, we will only be able to do two. The remaining one, the third, which is "everything is an event", is on GitHub for you to look at if you want to.

So let's dive straight into the first melody. We have all heard the terms concurrency and parallelism, and we are now going to explore these terms and try to delineate them. They have often been used synonymously, so we just want to play this first melody to delineate the two. For concurrency, we are going to use a simple TCP server and demonstrate with that. And for parallelism, we have another problem, which determines the net worth of a portfolio.

So let me launch straight into the problem. As you see here, this code is in Java. It's a simple server. You want me to increase the font? People at the back? OK, thank you. So it creates a server socket. It sits in an infinite loop waiting for a client to connect. And as you can see, the moment a client connects, it invokes this particular method to handle the client. And as you can see here, the moment the client connects, we open up the IO streams, and then we start reading. If the server receives a quit from the client, it breaks out of this loop. So that's pretty much a standard socket server. I am going to run this piece of code. So we have our server waiting. Now let me fire up a telnet client and say hello. It responds back. Yeah? So this is one client connected.

Let us look at how we can connect multiple other clients. I have here a client class which makes a connection. Yeah? Again, it opens up the IO streams and does a send and receive on the socket. Whatever is received is printed back so that the client sees it. And finally, when it closes, it sends a quit signal to the server. Here's the main program. What I'm doing here is creating about four clients, each on a separate thread, as you can see here. Each sends a hello message and then sleeps for two seconds here, and likewise this happens four times. So let's now run this client as well.

What's happening? These clients are trying to send messages, but this guy at the check-in window doesn't seem to want to go away, and the other clients are stuck waiting for their turn to arrive. Yeah? Checking in eight bags, yeah, you're right. This bloody business customer in the airline queue checking in eight bags. So what do we do? This is a useless server. So until this guy gets out, the others won't even get a chance. So now we can see each of the clients completing. Yeah? Is this something you would use? What do you think? This is a useless server, right? So we need to make it concurrent, right?
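For reference, here is a minimal Java sketch of the kind of sequential server being demonstrated; the class name, port and handleNewClient method are hypothetical stand-ins for the code on screen. Every client is served on the main thread, so one slow client blocks everyone else.

```java
// Minimal sketch of the sequential server described above (hypothetical names/port).
// Every client is handled on the main thread, so one slow client blocks the rest.
import java.io.*;
import java.net.ServerSocket;
import java.net.Socket;

public class SequentialEchoServer {
    public static void main(String[] args) throws IOException {
        try (ServerSocket server = new ServerSocket(8080)) {
            while (true) {                       // infinite accept loop
                Socket client = server.accept(); // blocks until a client connects
                handleNewClient(client);         // served on the main thread itself
            }
        }
    }

    static void handleNewClient(Socket client) throws IOException {
        try (client;
             BufferedReader in = new BufferedReader(new InputStreamReader(client.getInputStream()));
             PrintWriter out = new PrintWriter(client.getOutputStream(), true)) {
            String line;
            // read until the client says quit, echoing everything else back
            while ((line = in.readLine()) != null && !"quit".equalsIgnoreCase(line.trim())) {
                out.println("Echo: " + line);
            }
        }
    }
}
```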
So let's look at how we can make this particular server concurrent. Let's go back here. Ideally, the moment the client connects, this is the code that gets executed. In the traditional Java world, we would spawn a raw thread at this point and then handle the client on that thread. Here, I'm just going to use CompletableFuture, which is available in Java out of the box and does the same job, essentially. And that's called runAsync. runAsync requires a Runnable, as you can see. What I'm going to do is shove this piece of code into runAsync. And obviously, I need to wrap the exception, because handleNewClient, as you can see, throws IOException. So I need to wrap this in a try-catch block here, and I am going to catch the IOException. And as with all Java stuff, we gulp the exceptions. So I'm just going to gulp it. Let's start the server again.

I'm not very confident that that's thread safe. Ah, that's a big question. We have to worry the moment we spawn threads left and right in normal object-oriented programming; we have to watch the mutable state. Well, luckily, here I do not have any mutable state, because I just have a server socket. So this code will be thread safe. Let's run the server. It's waiting again as usual. I'm going to start with telnet and say hello. And as you can see here, in the earlier code it was serving each client on the main thread; here, it's using a worker thread out of a fork-join pool. So now, if I run the clients again, all of them are able to get served. We have essentially created a few more check-in counters at the airport.

So Brahmaji, can you show me the code again? As I see this, you create one thread for every client, and every client runs on its own independent thread. It's like multiple counters at the airport check-in, and everyone gets a fair chance at being served. Is that right? Yes, this is concurrency. Do you agree? Seems we agree. That was concurrency.
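A sketch of the change just described, with hypothetical names: the per-client handling is wrapped in CompletableFuture.runAsync, so each client is served by a ForkJoinPool worker thread while the main thread goes back to accepting connections.

```java
// Sketch of the runAsync change described above (hypothetical names/port):
// each accepted client is handed to a ForkJoinPool worker via CompletableFuture.
import java.io.*;
import java.net.ServerSocket;
import java.net.Socket;
import java.util.concurrent.CompletableFuture;

public class ConcurrentEchoServer {
    public static void main(String[] args) throws IOException {
        try (ServerSocket server = new ServerSocket(8080)) {
            while (true) {
                Socket client = server.accept();
                // runAsync wants a Runnable, so the checked IOException has to be
                // wrapped; the demo simply "gulps" it.
                CompletableFuture.runAsync(() -> {
                    try {
                        handleNewClient(client);
                    } catch (IOException e) {
                        // swallowed in the demo; log or rethrow in real code
                    }
                });
            }
        }
    }

    static void handleNewClient(Socket client) throws IOException {
        try (client;
             BufferedReader in = new BufferedReader(new InputStreamReader(client.getInputStream()));
             PrintWriter out = new PrintWriter(client.getOutputStream(), true)) {
            String line; // same echo loop as the sequential sketch
            while ((line = in.readLine()) != null && !"quit".equalsIgnoreCase(line.trim())) {
                out.println("Echo: " + line);
            }
        }
    }
}
```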
Let's now try to look at an example which I think we will agree is parallel. The task we have constructed here is to compute the value of a portfolio. We're going to trace it, because that lets us look at the code. We have a portfolio which we represent by a vector of codes; those are the ticker codes of the stocks that we hold. If we type codes here in the session, we can see what that looks like. And we have a numeric vector of quantities, which is the number of shares that we have in each holding. We see Google is there twice; maybe we purchased Google shares at two different times, in different holdings. We might need that later for a tax computation, but right now we just want to compute the net value of this. And the piece of information that's missing right now, of course, is the price of the codes. Here we're using the each operator, that thing that looks like two dots at the top of the line; it's map in other languages, each in APL. It applies getPrice, which is something that goes out to an HTTP server we have running locally, to retrieve the price. So I'm going to invoke that, and that takes a little while. It did that; it returned seven prices. Names are very important, and when you have some things with an s on the end and others that don't, you confuse yourself. And then the final step in the computation: in APL, we can write this as the dot product, the vector product. We have two vectors; we need to multiply the corresponding elements, quantities and prices, and then sum them. And we can write that as plus dot times: quantities plus-dot-times prices returns the value.

Now, the problem with this is that in APL, and I guess with map in those languages, this executes sequentially. The parallel version of this code, which looks like this, you can see is almost identical. There's one difference here, and that's that I've replaced the each by a capital I and this French i with a diaeresis. The reason I have to do that is we're working on a model here at the moment; it's not fully implemented in the interpreter. So this construct here, parallel each, is covered by a model, and we've used this name because it looks very similar to the each. What happens when I execute that is that it will create seven isolates and put the getPrice function inside. They are running in a pool of processes that do some green threading. It will immediately return seven futures, and I can use that array of seven futures in the computation and do the calculation. So the difference between those two pieces of code is just inserting a parallel operator to invoke these calls in parallel.

This is indeed interesting. And I hope you agree this was parallel, versus concurrent. So I can pull off something similar in Java. Let's look at the portfolio equivalent. Again, I must admit it's quite verbose, but well, that's what it is. Let me first show the non-parallel version. Essentially, instead of your vectors, I have a map which holds the stock holdings: the ticker is the key, and the volume of shares is the value. When I run this code, it is going to calculate the value of all these stocks, and as you can see, it is running sequentially, because for everything, the main thread is being used. It first does Microsoft and gets Microsoft's price back, then it goes for Google and gets Google's price back. So everything is done sequentially, using just one single thread.

I will now turn this into parallel. The Java Streams API provides a parallel switch. It's not as notationally elegant as yours, but hey, we do have this parallel switch. And when I run this, underneath, the Streams API will start a thread pool and unleash the threads. And as you can see here, each worker thread picks up a particular piece of work of getting a price. So you can see various different workers, 7, 5, 3. Even main is now participating; main doesn't sit idle. This is work stealing happening. And then we get the net worth.
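A rough Java sketch of the sequential versus parallel portfolio valuation just shown; the holdings, tickers and getPrice are hypothetical placeholders (in the demo, getPrice calls a local HTTP price service).

```java
// Rough sketch of the sequential vs. parallel portfolio valuation (hypothetical
// holdings and prices; in the demo getPrice calls a local HTTP price service).
import java.util.Map;

public class PortfolioValue {
    // ticker -> number of shares held
    static final Map<String, Integer> holdings =
            Map.of("MSFT", 100, "GOOG", 50, "AAPL", 200, "ORCL", 80);

    static double getPrice(String ticker) {
        System.out.println(Thread.currentThread().getName() + " pricing " + ticker);
        return 100.0; // stand-in for the HTTP lookup
    }

    public static void main(String[] args) {
        // Sequential: every price lookup happens on the main thread, one after another.
        double sequential = holdings.entrySet().stream()
                .mapToDouble(e -> getPrice(e.getKey()) * e.getValue())
                .sum();

        // Parallel: the parallel() switch lets ForkJoinPool workers (and main,
        // via work stealing) fetch prices at the same time.
        double parallel = holdings.entrySet().stream()
                .parallel()
                .mapToDouble(e -> getPrice(e.getKey()) * e.getValue())
                .sum();

        System.out.println("Net worth: " + sequential + " / " + parallel);
    }
}
```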
Is this really parallel? Well, this is parallel, no? So we have threads here, and we have threads there as well. What do you mean? Let me show you. I am going to modify this small piece here, the start method. So you see what I've done here: I have created four copies of this object, and I invoke them in parallel; each one of them would call accept and handle a client. So is this parallel? Is this concurrent? What is it? I'm confused. Well, I think just using the word parallel doesn't mean that it becomes parallel. If you reflect deeply, this is still concurrent.

So if I understand this right, what you're saying is: in parallelism, we are looking to utilize our entire available infrastructure or resources to complete a given common task, which we split across those resources. And in concurrency, we are looking to optimize availability or responsiveness for the consumers that come to us. It's like having one customer hog four counters because he has eight bags to be checked in, two per counter, which is not really a real-world scenario, but we are from the other world anyway, versus having four counters serving different customers. Is that a fair analogy? Yeah, I think concurrency means that you are really able to open another desk at a moment's notice. If somebody else shows up, you can always open another desk. And maybe, even though the performance is slightly degraded, you will give them service. That summarizes it well, because the goal of parallelism is performance and the goal of concurrency is responsiveness. And I think both can coexist in the same system at the same time, because these are two different properties. So I think that sums it up well. What do you think?

Latency and throughput, is that correct? It's not about latency; it's more about the ability to serve despite the load, so that's responsiveness. So is concurrency at a higher level than parallelism, in the sense that concurrency is modeling the independence of events? Yes, indeed. If you look at concurrency, each of these clients is served independently, oblivious of the other clients existing. That is concurrency. Whereas in parallelism, what really happens is that you have an implicit synchronization point: you have to split the tasks, you have to do the work, and then gather the results back. So there is this implicit coordination point. Isn't it at a higher level, really, that we need to talk about parallelism? We could talk about it at that level as well, but we can talk about it at a code level too. So in this example, as we have specified, we have four calls that have to be made in order for the result to come back, and on my machine I have only one core. So how would it execute that? Yeah, so I'm relying on this threading mechanism which spawns the functions, correct? I mean, threads are spawned and functions are launched. That is completely separate from what parallelism and concurrency are. This is the enabler for both, yeah? So these are really good questions. Maybe we can move to the next melody, in the interest of time? Sure.

So what is next? Yes, it is functional programming to the rescue, particularly with respect to structure. So what do I mean by this? Well, we have seen, we have worried, that when we write our code and we want to parallelize it or make it concurrent, the structure of the code changes very significantly, right? The structure of sequential code versus the structure of concurrent or parallel code is significantly different from what we anticipate. So we have to think a priori about how we are going to code, right? So for this particular melody, there are two things here. There's also the question of mutability, but we're not going to go down the mutability route; that's a melody by itself, where we talk about shared mutability, isolated mutability and taming it. So this particular melody's focus is on the structure of the code, how it looks.

And we have a very simple problem that we have tasked ourselves with. We have a geography service and it provides us with two pieces of information: given a lat-long, it gives us the weather of that particular place, and given the lat-long and a radius, say around 25 kilometers, it gives us the nearby places information, yeah? So that's what we have as a problem statement. So let's look at how this particular code would be rendered sequentially, yeah? I'm using Scala at this point; it just happens to be that. This is just setting up the chores, setting up the URLs and all the stuff that is required. This is going to make the request, given a particular URL, and return the result, and I'm going to invoke these here one after the other, in sequence, and then collate the results, right? And then finally print it. So this is standard vanilla sequential code. It's taking a long time. It's supposed to. Ah, it's done.
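For reference, a minimal sequential sketch of the two geography-service calls in Java (the demo used Scala); the localhost URLs, query parameters and lat-long are hypothetical. Both requests run on the main thread, one after the other.

```java
// Minimal sequential sketch of the two geography-service calls in Java
// (the demo used Scala); URLs, lat-long and query parameters are hypothetical.
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class SequentialGeoService {
    static final HttpClient http = HttpClient.newHttpClient();

    // blocking GET, roughly what the Scala makeRequest (or Clojure's slurp) does
    static String get(String url) throws Exception {
        HttpRequest request = HttpRequest.newBuilder(URI.create(url)).GET().build();
        return http.send(request, HttpResponse.BodyHandlers.ofString()).body();
    }

    public static void main(String[] args) throws Exception {
        String latLong = "18.52,73.85";
        // both calls run on the main thread, one after the other
        String weather = get("http://localhost:8000/weather?at=" + latLong);
        String places  = get("http://localhost:8000/places?at=" + latLong + "&radius=25");
        // collate the results and print
        System.out.println("{ \"weather\": " + weather + ", \"placesNearby\": " + places + " }");
    }
}
```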
Oh, sorry. Okay. All right, so let us make this parallel, right, so that the main caller thread is not going to get affected. I'll now switch over to C#, just to see the flavor of how we were doing these things way back, a few years ago, right? So again, this is chores; I won't go into the chores. The interesting bit here is that I'm using a countdown latch as a synchronization point. What I have is, at line number 17, I'm making a request, and so it enters here, and when this request comes in, I put it on a thread-pool thread, right, and I ask it to run. So the thread pool schedules it and runs it, and when the task completes, the latch signals back saying, all right, I'm done, so let's decrement the count by one. And then the next one repeats the same, and until then the main thread is going to wait. When the count reaches zero, it becomes free and then executes these two lines. So let's see this in action, I hope. We got faster, do you agree? As opposed to before? Maybe it's not that good, it's not that good.
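The same countdown-latch idea, sketched here in Java rather than C#: each request is submitted to a pool, counts the latch down when it finishes, and the main thread awaits the latch. The URLs are hypothetical, and the blocking HTTP call is simulated with a sleep.

```java
// Countdown-latch version of the geography-service calls, sketched in Java rather
// than C#. URLs are hypothetical; the blocking HTTP call is simulated with a sleep.
import java.util.Map;
import java.util.concurrent.*;

public class LatchGeoService {
    static String get(String url) {
        try { Thread.sleep(1000); } catch (InterruptedException ignored) { } // pretend network call
        return "response from " + url;
    }

    public static void main(String[] args) throws InterruptedException {
        Map<String, String> urls = Map.of(
                "weather",      "http://localhost:8000/weather?at=18.52,73.85",
                "placesNearby", "http://localhost:8000/places?at=18.52,73.85&radius=25");

        ConcurrentMap<String, String> results = new ConcurrentHashMap<>();
        CountDownLatch latch = new CountDownLatch(urls.size());
        ExecutorService pool = Executors.newFixedThreadPool(urls.size());

        urls.forEach((name, url) -> pool.submit(() -> {
            try {
                results.put(name, get(url)); // fetch on a pool thread
            } finally {
                latch.countDown();           // signal: one task done
            }
        }));

        latch.await(); // main thread waits until the count reaches zero
        System.out.println(results);
        pool.shutdown();
    }
}
```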
All right, so I hope people have forgotten my Scala joke, and they've even forgotten the Scala example, because the C# thing is here, and that allows me to make my next point. So isn't this code too verbose? It is, right? Thank you for the yes. So this is an example of the same solution in Clojure, and all of the data set-up ceremony is 20 lines, which is probably what you want to ignore, so we just move to lines 25 through 27. So what are we doing here? We have an input function actually called slurp. Slurp takes an argument which is a string, and that's generated by the weather URL function. Thank you. Interesting. So back to this. So we simply compose things in this nice fashion: we generate the URL, we pass it on to slurp, slurp fetches it, but now we surround this inside a future, and that future is going to return, well, a future, of course. There are two of them, of course: weather and places. And line 33 now, which is pretty simple, just takes @weather and @places. The @ that you see is just the deref operator. So you see that the amount of ceremony here is pretty low, right? There are still some small constructs that we need to be aware of, for example the deref especially. But on line 33, until those two futures have finished, this program is going to block, and once they're done, it's going to continue. So you can see the lack of ceremony, if I can put it that way.

How does this look? Very beautiful, but still too verbose. But it's very nice to come after the Clojure, because it's very similar, in fact. The APL is going to be very similar. Let's trace this. So the same as the other solutions: there are five lines of code that are really very uninteresting, that create the two URLs that we're going to hit. So we're making two web requests, right? We want to do them simultaneously. And oops, this code was modified. So what we're looking at here: in the last run-through, I modified this code to make it sequential, the parallelism was taken out, and we didn't reset it. So I'm going to make it parallel again. Actually it's good, because it allows me to show you how to do that. So if this is sequential code, it's calling getRequestData, which is slurp. I love the name slurp; I'm going to adopt that. This is essentially the same as slurp. And if I just call that normally, it would just go out, get the text and put it in the places-nearby data.

To make it parallel, I don't use the capital I followed by the i with diaeresis, because it's not an array; I don't need to combine parallel with map. I'm just making one single asynchronous, or parallel, call. I mean, the name of the symbol in APL is parallel, and that's one of the things we've discussed, right? There's some confusion right there, because I'm just making one call, right? But I'm going to make another one down here. So I'm going to call getRequestData again, in parallel with my main thread. So if you look carefully now, I'm going to hit enter three times and execute the next three lines. And what you'll see is that lines 10 and 11 finish instantaneously, because I immediately get a future back. I don't need to say future, because in APL, if you use the parallel operator, by default you immediately get a future back. And then line 12, where I construct some JSON, which assembles these two and catenates some strings together to make a single JSON result, is going to block. So here we go. One, two, three. So it's blocking there for about three seconds, I think, and then it finishes. So the only difference to the Clojure, really, is that it's implicit. In APL you can implicitly have arrays of futures, and the interpreter blocks on them when it needs to, when it executes a primitive that needs the value. So you can collect these things together into an array; that doesn't require the value. But if you use the actual value, you block. So even a little bit less ceremony. I agree, that was nice.

So far, whatever we've looked at, we've looked at kind of pulling the data. Let's turn this the other way around, where, when the worker thread is done, how about it says I'm done and issues a callback, right? So we know from the JavaScript world they have dealt really well with callback hell; they have promises to deal with callback hell. So let's look at the JavaScript code, which is promisified now. So essentially I'm making a request and I wrap this request in a promise. And with a promise, only two things can happen: either the promise can succeed, in which case I resolve and return the result, or if something goes wrong, I reject; I break the promise. And that's about it. And then I have this particular function, weather and nearby places: I take two URLs, create two promises and pass them as an array to a Promise.all. So what Promise.all does is it waits for all the promises to complete, right? And when it's done successfully, it calls the then; otherwise, if anything fails, there is a catch handler there. You can do various things with this; you could recover from the failure as well, but for now, this is the very simple one. And let's run this now. Again, this is the ceremony part. And as you can see, this itself returns a promise, so I have to consume the result with a then. So I'm just going to run this code. So now we got the weather using JavaScript, right?

Now, a promise in some sense is a monad, and it allows you to chain the sequence of computations using the then-catch, then-catch, then-catch pipes. The structure of this code is pretty different from the sequential code, right? It is. We haven't yet reached that point where the structure can be similar.
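A rough Java analogue of that promisified version, using CompletableFuture: supplyAsync plays the role of wrapping the request in a promise, thenCombine plays the role of Promise.all for the two results, and exceptionally plays the role of catch. The URLs and the simulated fetch are hypothetical.

```java
// Rough Java analogue of the promisified JavaScript version: supplyAsync wraps each
// request like a promise, thenCombine combines the two results (Promise.all-style),
// and exceptionally plays the role of catch. URLs and the fetch are hypothetical.
import java.util.concurrent.CompletableFuture;

public class PromiseStyleGeoService {
    static String get(String url) {
        try { Thread.sleep(1000); } catch (InterruptedException ignored) { } // pretend network call
        return "response from " + url;
    }

    public static void main(String[] args) {
        CompletableFuture<String> weather =
                CompletableFuture.supplyAsync(() -> get("http://localhost:8000/weather?at=18.52,73.85"));
        CompletableFuture<String> places =
                CompletableFuture.supplyAsync(() -> get("http://localhost:8000/places?at=18.52,73.85&radius=25"));

        weather.thenCombine(places, (w, p) -> "{ \"weather\": " + w + ", \"placesNearby\": " + p + " }")
               .thenAccept(System.out::println)                                             // the "then"
               .exceptionally(ex -> { System.err.println("failed: " + ex); return null; })  // the "catch"
               .join(); // keep the demo alive until the pipeline completes
    }
}
```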
So if you look at the sequential code early on, which we started with, and if you now look at this, this is still different. How can we still make it the same? Languages like C#, Scala for that matter, and JavaScript have come up with what is called the async-await construct, where the compiler underneath will generate all this boilerplate code; it allows you to think sequentially while the code runs in parallel. Do Go and Clojure have that? Go has first-class language support for it. Clojure gives it to you via a library, core.async really, yes; it may not be built in, but you can use a library. Go has it built in.

So let's look at the sequential code from early on. This was the sequential code. Because it was done in Scala, we'll use Scala to convert the sequential code into async-await style code. So again, these are the chores, and I have some work that I'm doing, very important work; we'll come back to that. But if you look at the getRequestData, this is the crux here: I'm wrapping it in an async block. If you look at the earlier one, the get request was not wrapped in an async block; here, it is wrapped in an async block. What that does is return a future. It starts immediately when it is called, on a background thread; the thread-pool threads are not really visible to you, because it's under this abstraction. And so what I do is create two futures, one for getting the weather information, the other for getting the places nearby. And after I fire the futures, I want to await. So you can see here an await, which waits for the weather future. But because waiting is not good, I wrap this back again in an async block, so now this code runs async. And like we talked about callbacks, this is the callback that gets registered: when this future completes, I will get a callback, while I can go ahead and do something important in the meanwhile. So let's see this in action. Two, three, four, five, six. I can see that. So you saw, this "doing important stuff" appeared first, and then the weather information and the places nearby information arrived.

So now, if you look at this code here for a while, the old sequential code, and if you look at the new code, which is parallel, we are getting there. We simply have to write sequential code; we don't have to let go of our sequential thinking. Just wrap this in an async block and you're done. That was a big one, wasn't it? Ah.
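Java has no async-await keyword, but the shape just described, fire the futures, keep doing "important work", and block only when the results are needed, can be sketched with CompletableFuture. Names, URLs and the simulated fetch are hypothetical.

```java
// Rough Java equivalent of the async-await shape described above: start both
// requests immediately, do other work, then block only when the values are needed.
// Names, URLs and the simulated fetch are hypothetical.
import java.util.concurrent.CompletableFuture;

public class AsyncAwaitStyleGeoService {
    static String get(String url) {
        try { Thread.sleep(1000); } catch (InterruptedException ignored) { } // pretend network call
        return "response from " + url;
    }

    public static void main(String[] args) {
        // the "async" part: both requests start immediately on background threads
        CompletableFuture<String> weather =
                CompletableFuture.supplyAsync(() -> get("http://localhost:8000/weather?at=18.52,73.85"));
        CompletableFuture<String> places =
                CompletableFuture.supplyAsync(() -> get("http://localhost:8000/places?at=18.52,73.85&radius=25"));

        System.out.println("doing important stuff in the meanwhile..."); // printed before the responses arrive

        // the "await" part: join blocks only when we finally need the values
        System.out.println("weather: " + weather.join());
        System.out.println("places nearby: " + places.join());
    }
}
```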
Wow, that was quite a lot of things that we covered, I think. Right? We went through some sequential code, C#, then we converted that into Clojure. Ah, of course, my favorite. And then we moved from the pull style to the push style towards the end, right? So that's kind of an evolution. So did the audience also see that? Did the audience also see what the evolution looked like? Maybe it's a good point right now to hear what they have to say, whether they have been sleeping or whether they have been awake all this while. Let's open the floor for joint reflection. We have mics here; we're not gonna take them around, we're gonna pass them to you guys.

It makes it less scary, because it doesn't have this word monad or something like that. So for a person who's coming from an imperative style of programming, it kind of connects, really. This kind of happened when I was using async-await as a part of my JavaScript project. Yeah, it was easy to get it. So yeah, it's less scary. I'm sorry. You were awake last night, weren't you? Awake? No. Let's see. So maybe we can just have this joint reflection where we could start this thing.

So if you are old enough, like us, then you must have seen how we moved from single processes to forking processes. And forking processes were still considered okay, until after a while people thought, no, that's too heavy, right? Creating processes is heavy. So we got threads. And threads were like the next beautiful thing, of course, until they were not that beautiful; they too became heavy. So what we thought was, why can't we reuse threads? Because threads are in the same process, they can cooperate, they really share, they have trust built into this ecosystem. So let's create thread pools. So we had thread pools, but we still couldn't get rid of mutexes, semaphores, latches, what not, right? So the functional style now really starts to come in and starts to shine. And we have, of course, we've had them for a long time in the papers, persistent data structures, and now immutability and immutable structures coming in. We have some nice constructs. We have some nice compiler-level advances which allow us to rewrite code and do our processing in terms of async, but behind the scenes, with nice syntax, all of that, right? So you see that the evolution path has been pretty nice, I would think, if you really go through the entire history.

I would like to interject and say that if you look at this async-await code, there is a problem here, if you can spot it, and that has taken the evolution even further, which Eta has picked up on. Eta lang, yeah? So the problem with async-await here is this, I think: this is not composable. Though we have stepped back into the paradigm of the sequential realm, while still writing parallel code, this is not composable. So Eta has what is called a fiber built in to do exactly this. So this goes away; async-await goes away. So that's the next evolution. What I hear is that the Cats library in Scala also has a Fiber; I think Luca was here, and he said it's still not built in, but you can try this using the Cats library. So there is a fiber concept as well, which is floating around; people may be playing with that idea.

What's the deal with the fiber? I think it's a cooperative unit of execution. Yes, yes, yes, and you have to yield inside. And as it's running, the runtime becomes capable of stepping in and figuring out that this is the point where the fiber is blocked. Yes, so that kind of thing is taken care of inside; it's handled underneath, yeah. Okay.
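For illustration, the fiber idea can be sketched in Java with virtual threads, assuming a JDK 21 or later runtime: the code stays blocking and sequential-looking, and the runtime parks and resumes the lightweight threads at blocking points. Names and URLs are hypothetical.

```java
// The fiber idea sketched with Java's virtual threads (requires JDK 21+): blocking,
// sequential-looking code on lightweight threads the runtime can park and resume.
// Names and URLs are hypothetical; the blocking call is simulated with a sleep.
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class FiberStyleGeoService {
    static String get(String url) {
        try { Thread.sleep(1000); } catch (InterruptedException ignored) { } // pretend blocking I/O
        return "response from " + url;
    }

    public static void main(String[] args) throws Exception {
        try (ExecutorService fibers = Executors.newVirtualThreadPerTaskExecutor()) {
            Future<String> weather = fibers.submit(() -> get("http://localhost:8000/weather"));
            Future<String> places  = fibers.submit(() -> get("http://localhost:8000/places"));
            // plain blocking get() calls: no callbacks, no async/await ceremony
            System.out.println("weather: " + weather.get());
            System.out.println("places nearby: " + places.get());
        }
    }
}
```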
Okay. The one question that I had, just from the example that you showed here, was that functional programming would have really shone if you had shown some sort of mutating behavior, which you're avoiding, because nothing is shared and there is no mutability anymore. In this case, I thought it was just the fact that the abstractions have become more visible. Yes. Yes, so there are two paths, like I said: shared mutability, which one needs to tame, and then the actor model, which is isolated mutability. So yeah, these are all melodies, and there is lots of stuff there. In fact, we later move to what is called "everything is an event", where we've used Rx, and you can check the code on GitHub as well. And there is a comparison of Rx with core.async, Clojure core.async, yes. So we have implemented, basically, a ticking stock price, and you're supposed to do the brokerage addition and a runtime calculation of the net worth, and show it to the user as it goes up and down. So yeah, it is there; you can go take a look.

So one thing that stood out to me was the contrast between the APL example and the other examples: when we use the parallel operator, the mechanics of it are implicit; we don't need to care about the mechanics. Whereas out here, in the other examples, the programmer has to explicitly think about the mechanics and then make that choice, that design choice, in the code they're writing. So can you comment on that? Yeah, I mean, the origins of APL, of course, are, like functional programming, in mathematics, but coming out of linear algebra rather than category theory. And the really important thing for us when we designed these features, and these are very recent additions to APL, just in the last few years, was that you should be able to put that parallel operator in or take it out. If the functions that are invoked are pure, if they do not have side effects, there should be no change in your reasoning about what the program is doing. So those parallels are essentially hints by the human to say: I think this is worth parallelizing, I know something about the quantity of data, go parallel here. And I have to admit that this last step, which is an evolution because it allows you to control the chaining, I feel uncomfortable with. I haven't learned to love it yet.

So, I mean, the subtext for that is, when we are writing code, we're dealing with a complexity budget, and that always has to stay in our head. And when we are putting the knobs and all the controls ourselves in our code base, then I think that complexity budget kind of explodes at some point. So in non-trivial examples, this might become, like you said, composability problems arise. And we don't really know, in running systems, how the system will grow and evolve, and where what looks fine as we've analyzed it right now suddenly becomes a concurrency problem; the choke points kind of show themselves over a period of time. So, I mean, the underlying question is: where does the implicitness help and where does the explicitness help? Okay, this is frankly my view. I think with explicit, the cognitive overload is high; with implicit, there's no cognitive overload. So that is a clear distinction right there. But there's still the problem, right? If you had a one-million-element array and you said, go do this in parallel, and it created a million isolates and then started opening TCP sockets and things to control them and launching green threads, it wouldn't work very well. I don't think there's any way to avoid the programmer knowing something about the hardware and the amount of data. Not for the next 50 years; I mean, we've been trying for 60 years without solving it yet, right? It's a hard problem. Yes.

So, I think there's one minute left. Yeah, two minutes here. So, if you have any other reflections to make, yeah, we have two minutes. So I'm curious about the Scala thing, which was slow. Although things were happening in parallel, we had made things parallel, but still they were slow. Was the compiler the reason behind it, or some runtime issue? We weren't compiling it ahead of time, so it was just a joke; take it as a joke.
In this editor, we are doing on-the-fly compilation and then running it, so it does take a while. Thank you. That's about the way it's done. Oh, sorry. So, anybody else? The APL was doing on-the-fly interpretation. Yeah, so, should we call it a wrap? Okay. Okay, thank you everybody, hope you enjoyed it.