Is it on? Great. Hello, everyone. Hopefully all of you had a productive week. Some of you look sleepy, but thanks for coming. I'm Abhinav, and this is Douglas. We'll be splitting the talk: I'll be the one doing the talking, and he'll be the one answering your questions at the end, so wherever I mess up, just talk to him. Before we kick off, can we have a show of hands? How many of you are actually running Go in production? Quite many, actually. How many of you are just familiar with Go? OK, cool. And anybody who just doesn't know what Go is? No shame. We'll tailor the talk accordingly, I guess. So today we're here to talk about concurrency in general, and what Go provides us to deal with it. It's going to be a very high-level talk, not anything in depth; a very general view of concurrency. We'll talk about why we all love Go, or hate it, I guess; then what concurrency is; what Go provides in order to deal with concurrency; and then we'll have a short dive into code that kind of binds the whole talk together. Where did my slides go? Where is it? So, why Go? Well, because Go was inherently designed to deal with competing processes that are trying to acquire resources at the same point in time. What I'm trying to say is that Go is a language that is designed to handle concurrency; we'll see how later in the talk. The other thing that's very good about Go is that its syntax is really, really simple. It removes almost all the syntactic sugar. It has really nice cross-compilation tools, and it has linting and formatting tools and all those nice things built in.
So what it gives us, at the end of the day, as developers, is the ability to write clean code that is easy to understand and maintainable in the long term. That's one of the reasons why you should at least take a look at Go. The common use cases for Go, at least in my opinion, are services that don't require a lot of complex business logic; it's more about fast decision-making and doing a lot of things concurrently. At Gojek, we commonly use it for the edge services: authentication, web proxies, routing. We even have a nice project called Heimdall, which kind of binds together all the good things about Go that I discussed earlier: it's a wrapper around an HTTP client that provides retry capabilities, circuit breaking, and so on. Other well-known projects written in Go are Kubernetes and Docker; most of you might already be aware of them. (The slide keeps disappearing. Can we just go next? Technical problems, as always.) Okay, so: what is concurrency? Concurrency is essentially the concept of dealing with multiple things within the same period of time. What that means is being able to break your task down into smaller parts that can be executed out of order. But whenever we talk about concurrency, what usually happens is a lot of people think of parallelism, which is a quite closely related topic, but it's a little bit different. Parallelism is about doing multiple things at the same instant, simultaneously.
Concurrency is more about doing two or more things within the same period of time, but you might not necessarily be doing them simultaneously. We have a short example for that. Meet Bob. Bob is our ticket booth operator. His boss was really angry at him for acting slow, so Bob got ambitious: instead of one queue, he started working two queues at the same time, deciding who gets a ticket in what order, these ABC and UVW folks. At the end of the day, everybody gets a ticket, but his performance was not improving. Why? Because there is only one Bob, whether it's a single queue or two queues with the same total load. If anything, his performance decreased, because he has to keep deciding which queue to serve, say no to people, and so on. So then he came to parallelism: he hired Alice. Now we have Bob and Alice, two queues, and the amount of work per person is cut in half. That is essentially the difference between concurrency and parallelism. I probably got it through, I guess. Next. So, Go includes a few things to support concurrency; that's why we said Go is inherently concurrent. The first thing is the goroutine. You can imagine a goroutine as a lightweight process or thread that you can use to carry out a small task. The other thing is channels. Whenever you're talking about concurrent processes or tasks, since you're trying to do two things within the same period of time, you want a means of communication between the two. Channels are how Go encapsulates that kind of communication medium.
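As a minimal sketch of those two primitives together (the `greet` function and its names are mine, not from the talk):

```go
package main

import "fmt"

// greet runs in its own goroutine and reports back over a channel.
func greet(name string, done chan<- string) {
	done <- "hello, " + name
}

func main() {
	done := make(chan string) // an unbuffered channel of strings
	go greet("gopher", done)  // spin up a lightweight goroutine
	fmt.Println(<-done)       // block until the goroutine sends
}
```

The `go` keyword is all it takes to start a goroutine, and the channel gives the two goroutines (main and greet) a way to communicate.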
Then there is something called select. I'll talk more about select later, but it's something like a switch statement for channels; it helps us write non-blocking Go code. And the last thing is the sync package. The sync package is the main utility package that provides the concurrency primitives Go has: wait groups, mutexes, atomics, Once. But out of these four, we'll primarily be focusing on two: channels and the select statement. Probably in some future talk we'll cover the rest. So, back to channels. Channels connect concurrently executing code, which is usually a goroutine. The beauty of channels is that they are typed. If it's an integer channel, it will only ever contain integers; likewise for a string channel, or a channel of some struct, which is another Go primitive. It's always going to contain exactly the same type of information, so having a channel ensures type safety. Channels have send/receive semantics: in order to communicate between two goroutines, you'll always be sending on a channel and receiving from a channel. You can even declare that a channel is send-only or receive-only, which helps you prevent people from trying to do multiple things on the same channel. The other thing is that data is copied to and from a channel, which can be a good thing or a bad thing: since data is being copied around, depending on your amount of data, your memory footprint might be large if there are a lot of goroutines copying things off the channel. So this is what a channel looks like: a string-typed channel. As I said, Go has a very simple syntax.
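A sketch of typed, direction-restricted channels (the `produce`/`consume` names are illustrative, not from the demo):

```go
package main

import "fmt"

// produce may only send on ch; the compiler rejects a receive here.
func produce(ch chan<- int) {
	for i := 1; i <= 3; i++ {
		ch <- i // the value is copied into the channel
	}
	close(ch)
}

// consume may only receive from ch.
func consume(ch <-chan int) int {
	sum := 0
	for v := range ch { // receives until the channel is closed
		sum += v
	}
	return sum
}

func main() {
	ch := make(chan int) // a typed channel: only ints fit
	go produce(ch)
	fmt.Println(consume(ch)) // 6
}
```

The `chan<-` and `<-chan` annotations are how Go expresses the send-only and receive-only restriction mentioned above; violating them is a compile-time error, not a runtime one.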
So, an arrow going into the channel means you are sending to it, and an arrow coming out of the channel means you are receiving from it; there's no really complex logic here. A send will block until the message can be enqueued, and a receive will block until there is a message. That first part, "until the message can be enqueued", is important when we talk about channels, because channels can be of two types: buffered and unbuffered. What does a buffer mean here? Think of a channel as a passageway, and the buffer as the size of that passage. In an unbuffered channel, you cannot really hold anything in the passage. So if I want to talk to Douglas and there's nothing in between, I'll be sending some information to Douglas, and until Douglas receives it, I am blocked. If I have a buffer in between, say a table that can hold a letter, then I can drop my letter on the table and go on doing my thing; Douglas can receive the letter whenever he wants and process it. But the table has a fixed size: if it holds only one letter and I try to place two, then I'm blocked again, right? So that's the sending-side blocking. Well, blocking is bad. Can anybody guess why blocking is bad? Show of hands? Come on, guys. Yes: it definitely introduces delays, and when we're doing concurrency, we're trying to optimize for time, I guess. What else?
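A small sketch of the buffered/unbuffered difference (it peeks ahead to `select`, which is covered next; `trySend` is my own helper):

```go
package main

import "fmt"

// trySend reports whether a send on ch would proceed without blocking.
func trySend(ch chan int, v int) bool {
	select {
	case ch <- v:
		return true
	default: // the buffer is full, or nobody is ready to receive
		return false
	}
}

func main() {
	unbuffered := make(chan int)  // capacity 0: the sender waits for a receiver
	buffered := make(chan int, 1) // capacity 1: one send can complete early

	fmt.Println(trySend(unbuffered, 1)) // false: nobody is receiving
	fmt.Println(trySend(buffered, 1))   // true: the value sits in the buffer
	fmt.Println(trySend(buffered, 2))   // false: the buffer is now full
}
```

The buffer is exactly the "table between us": it decouples the sender from the receiver, but only up to its capacity.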
Sure. So, besides delays, the reason blocking is bad is that it can introduce deadlocks, and that can happen for multiple reasons: the two things doing work are hogging resources, and one is blocked and not releasing resources that others need. Or there are independent cascading blocks that turn into a circular wait, which in turn results in a deadlock, right? The other thing is that if something is blocking, we are losing cycles, and every cycle we lose costs us in the infrastructure we devote to our software. So blocking makes your software more expensive: every cycle you lose could be well spent doing something else that is non-blocking. That's why, instead of having a blocking-only mechanism, we have the select statement. Select kind of makes your channel operations non-blocking by giving you the ability to wait on multiple channels at the same time. From a bunch of channels, whichever becomes unblocked first, you can act on that first. For example, same scenario: I'm talking to Douglas and to my friend here. Both of them are supposed to give me some message, but Douglas is very busy and my friend is free. I can listen to both of them at the same time, and whoever gives me a message first, I can work on that first. That gives me a way to optimize for time and not be blocked forever on one thing. The other thing select provides is a default case, in which, if I'm blocked by everybody, I can do something by default. Sorry, I haven't put the syntax here; so that's how select looks. It's very similar to a switch statement.
In this case, you're seeing four cases. The first statement is a send, the second is a receive, the third is also a receive, but we are not using that value, and the fourth is the default. What happens is, depending on which channel is ready first: if channel one is ready, I send a value on it; if I receive a response on channel two, I say that I have received it; and if I'm totally blocked, the default case just tells you I'm blocked. This whole cycle can repeat if it's inside a loop; it doesn't repeat by default, okay? So that is the select example. Okay, let's talk code; I guess that's where I'm a little bit better. Everybody with me until now? Nothing too complicated, I guess. So today's code deep dive is based around a toy example in which we are trying to optimize for time again. We have a data producer and a DB sink. We want to write data to a database, but we don't really want to wait for the write to finish. So let's take a look. (You want to walk through the code? I'll control this.) Sure. So here we have these two constructs, the data producer and the DB sink. The sink starts listening to the producer, so whenever data is produced, it goes to the sink. And yeah, the producer is running, the sink is running, so let's run this. So everything is nice and dandy. This is all happening because, can we go inside the producer? We have a nice run loop here. (Can you zoom in? Let me know when to stop. We're too close, right? Okay, yes.)
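The four-case select described above might look roughly like this (I've pulled the select into a `step` helper of my own so the two rounds are easy to see):

```go
package main

import "fmt"

// step runs one round of the four-case select and reports what happened.
func step(ch1 chan int, ch2 chan string, ch3 chan struct{}) string {
	select {
	case ch1 <- 42: // a send case
		return "sent on ch1"
	case msg := <-ch2: // a receive whose value we use
		return "received: " + msg
	case <-ch3: // a receive whose value we discard
		return "ch3 fired"
	default: // runs only when every case above would block
		return "blocked"
	}
}

func main() {
	ch1 := make(chan int) // unbuffered: the send blocks with no receiver
	ch2 := make(chan string, 1)
	ch3 := make(chan struct{})

	ch2 <- "ping" // make exactly one case ready, so the demo is deterministic
	fmt.Println(step(ch1, ch2, ch3)) // received: ping
	fmt.Println(step(ch1, ch2, ch3)) // blocked
}
```

Note that a bare select chooses randomly among the *ready* cases, which is why the sketch arranges for only one case to be ready at a time; and as the talk says, select runs once, so you wrap it in a loop if you want it to keep serving.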
I think it's okay. I'm running out of screen space now. Great. So the main thing happening here is that we set up an interrupt handler, for reasons we'll explain later. The producer keeps producing data until the stop flag turns true, which will also happen for reasons we'll see later. We simulate doing work here with a sleep or something, and once that simulated work is finished, we write to the channel, which goes to the DB sink. I guess I've skipped some part. Cool, let's go to the sink, sink.Listen. Sorry, let me backtrack first. As I was saying earlier, the sink is listening to the data being produced by the producer. So here is the Listen call. What happens here is, where is the AddWorker? Where's the GetChannel? Okay, just take my word for it, we're going to find it. During Listen, what happens is that we share a channel between the producer and the consumer so that we have a common way of communicating with each other. Ah yes, that's the place, right? During Listen, we have a set number of workers that we want to start. That's something you should always control: how many goroutines you are going to spin up in a program. You should not just spin up a goroutine for every single task; otherwise it's going to run out of memory. So we spin up a fixed number of workers, and whenever we spin up a worker, we provide it the channel it will use to communicate. producer.GetChannel gives it a receive-only channel, so when the worker starts, it can start listening on that channel. More on that later.
So going back to the producer: we did some work, and now we try to write the data onto the channel, which is here. Then in the DB sink, our worker receives the data from the channel and writes it to the database. So everything was nice and dandy, right? Now, our database started freezing. Why? We had too much data in there, really complex queries running; something happened. (Douglas, you're not running? Cool.) So we simulate the lag with Ctrl-C, okay? Now we are seeing a jitter in the output, because our database is freezing. That is happening because our channel itself was inherently a blocking, unbuffered channel. So every time data is produced, the producer waits until it has at least been read off the channel. The actual database write still happens separately from whatever we're doing, but we still have to block the producer until the data has actually been received by the DB sink. We can improve that a little by introducing the concept we discussed earlier. You see, every number that's printed should be increasing by one, but yeah, okay. So this can be improved a little by introducing a buffered channel. What we do here is, when we create our producer, we provide it a buffer size. That decides the size of the table we're going to have in the middle, so to speak. So instead of zero, let's have a size of 10, okay? And then let's run it again. Everything nice and dandy; we encounter the same kind of lag again. What? Okay, again a fail. So what I was hoping to show you here is that since the buffer size is larger, we are able to write faster.
I mean, we don't have to be blocked until every message has been consumed, okay? Sorry, do we have the print statement for the producer? The reason you are seeing this is that the DB sink is still writing with the lag, so you see the output of the DB sink, but you're not seeing the output of the producer, which is what changes with the size of the buffer. Sorry about that. Okay, so let's reach the ideal situation, where we want it to be. (Cancel it, just kill it, yeah, sure.) So what we did to really change this blocking behaviour is to introduce select. This helps us have a kind of failover strategy in the whole system. In case we encounter severe lag or a block in the system, we have a failover strategy that doesn't fail your application altogether. So here we are doing something similar, but we introduce the select, and whenever we are really, really blocked, which shouldn't happen in the usual scenario, we start writing to a separate store that is already shared, Redis in this case. So we run it, and now we introduce the lag. Now you're seeing that in between we are writing to the database, and also writing to Redis whenever the lag is too much. It doesn't really reduce your performance, but it keeps this little bit of failover in between. Let's say your database really goes down. In that case, if we hit Ctrl-C, yeah, now you see that everything is just going directly to Redis. You still have a running service, but everything is in memory right now; probably time to call prod support and get your database up and running.
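A hedged sketch of that failover select (`route` and `writeRedis` stand in for the real calls in the demo, which we didn't see in full):

```go
package main

import "fmt"

// route tries to enqueue v for the database; if the queue is full
// (the DB workers have fallen behind), it falls back to Redis.
func route(v int, dbQueue chan<- int, writeRedis func(int)) string {
	select {
	case dbQueue <- v: // normal path: the DB workers are keeping up
		return "db"
	default: // queue full: degrade instead of blocking the producer
		writeRedis(v)
		return "redis"
	}
}

func main() {
	dbQueue := make(chan int, 2) // small buffer so overflow is easy to see
	var redis []int
	writeRedis := func(v int) { redis = append(redis, v) }

	for v := 0; v < 4; v++ {
		fmt.Println(v, "->", route(v, dbQueue, writeRedis))
	}
	// With no DB worker draining dbQueue, values 0 and 1 land in the
	// queue and 2 and 3 overflow to Redis.
}
```

This is exactly the "select with default" shape from earlier: the default case is the degraded path, and it only fires when the normal send would block.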
Now let's assume everything worked and your prod support person did the fix. You see that we started writing back to the database. The reason we were writing to Redis is that this channel became full because we were not able to write to the database. So the channel became full, and we started writing to Redis. Once the database came back up, our sink started reading from the channel again, the channel drained, we were able to enqueue more messages, and that's how we fail back, switching over to our original scenario without any additional redeployments of our main service. Of course, this introduces some kind of tech debt, or some dependency: you have to migrate your data between Redis and the database. But it's really a toy example, right? What we are trying to show here is that in a concurrent system, when there is a failure, using select and default can provide you a way to degrade into a less serviceable condition, and once you recover, you can fail back without any issue, okay? So that's all for the demo part. Let's talk about one last topic: when not to use concurrency. We all know that switching to concurrency will increase our performance and so on, but when do you think we should not be concurrent? Again, any guesses? Sequential processing, correct. Anything else? Okay, so concurrency is really useful under two circumstances: when you have a mix of IO-bound and CPU-bound tasks, or when you have enough hardware to run tasks in parallel. What I mean by that is, for example, let's take the first case, okay?
We have a task of calculating pi, say, or bubble sort; actually, forget bubble sort, take any sort. Sorting is an inherently CPU-bound task, right? And suppose we have only a single processor. Now, we can definitely break a sorting algorithm like merge sort into tasks that can run independently and in parallel. But on one processor you are always going to do one thing at a time; there is no blocking to hide, and you will lose time during context switches. So whenever you try to run multiple heavily CPU-bound tasks concurrently on a single-processor machine, it's going to cost you more time. Another example can be the sum of an array; that's the simplest I can get here. It doesn't really matter in which order you sum the numbers; the result is always the same, right? So you can write a nice set of goroutines that sum your numbers in parallel. But if you only have a single processor, spinning up a lot of goroutines will not give you any better performance than the sequential version; try it on a benchmarking system and it will lose. The same thing will work nicely if you have multiple cores: on a four-core or eight-core system, all this addition is actually happening in parallel, so it's all well and good. The other case would be a heavily IO-bound task combined with a CPU-bound task. If you have a combination of these two, you can spin off at least two goroutines, one dedicated to IO and one to CPU, so whenever the IO is blocking, your other goroutine is actually utilizing the CPU time. So, that's about it, I guess. Questions? And Douglas? Sure, thank you. "When we have..." Oh really? Okay. "And when is it possible to use the third-party tool?" Sorry, one sec. Could you repeat that, please?
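The array-sum example could be sketched like this (the chunk count is arbitrary; with GOMAXPROCS=1 the goroutines merely interleave, which is the speaker's point, while on multiple cores they genuinely run in parallel):

```go
package main

import "fmt"

// parallelSum splits nums into nWorkers chunks, sums each chunk in its
// own goroutine, and combines the partial sums over a channel.
func parallelSum(nums []int, nWorkers int) int {
	partial := make(chan int, nWorkers)
	chunk := (len(nums) + nWorkers - 1) / nWorkers
	sent := 0
	for start := 0; start < len(nums); start += chunk {
		end := start + chunk
		if end > len(nums) {
			end = len(nums)
		}
		sent++
		go func(part []int) {
			s := 0
			for _, v := range part {
				s += v
			}
			partial <- s // arrival order doesn't matter for a sum
		}(nums[start:end])
	}
	total := 0
	for i := 0; i < sent; i++ {
		total += <-partial
	}
	return total
}

func main() {
	nums := []int{1, 2, 3, 4, 5, 6, 7, 8}
	fmt.Println(parallelSum(nums, 4)) // 36
}
```

The correctness doesn't depend on order, which is what makes the task safe to parallelize; whether it's *faster* depends entirely on how many cores actually run the chunks.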
"When is it suitable to use a goroutine? Okay, let's say I just want to run something in parallel; I don't really care about concurrency. So when is it suitable to use a goroutine, and when is it suitable to use a third-party queue, a queuing engine?" Using a queue, you mean a queue external to the system? Yeah, maybe you should just do it on the mic. Okay, so the question is: let's say I want to run something in parallel, not really concurrency. When is the scenario suitable for a goroutine, and when is it suitable for a third-party queuing engine? Okay, so it's a little different. Think of concurrency as more of a design problem; it's not just because I want to do things in parallel. The reason you would probably use an external queuing system is, A, you have shared consumers on that queue, right? You have many instances, let's say, of your application listening to an external system. So I don't think the two really compare to each other. For example, let's take Kafka: I'm producing messages and I have workers consuming them, but I don't want to keep my messages in-process. "Okay, let's say I want to process a report. So it might take five minutes." Exactly, right? Even if you have a buffered channel, that buffer lives in memory; if that process goes down, or, let's say, your running container gets rescheduled, you're going to lose it, right? So essentially you want an external system that abstracts this queuing from your logic of working on the task. That is primarily when I would use an external system. But concurrency is more about design. Within my system, I can still design concurrent systems to be, let's say, object-oriented, for example.
I could use some observer pattern, or callbacks, for example, to orchestrate this behaviour. So I think of concurrency as using these primitives to design your code as sequences of independent tasks, right? Now, when these tasks need to coordinate over something that needs to be persistent, that's a different design choice; it drives more into architecture than your low-level implementation. So it would vary a lot. For a simple message, if I just want to communicate between two functions, let's say, I'm not going to use an external queue; it just doesn't make sense. I mean, sorry, was that clear? I'm not sure. "So the point is only persistence, right?" Persistence and sharing, yeah, if you want to share between processes, yeah. I mean, just to break it down, Douglas is the very technical guy here. Okay, so back to your question: you have a report that you want analyzed, right? This analysis can be done by multiple consumers, multiple workers. If the analysis of the whole report is not an atomic thing, if it needs to be analyzed in parts by multiple workers, and these workers are outside of your own main process or task, then yes, you definitely need a queuing mechanism, because you want to distribute this analysis to different workers; you don't want to do the analysis all by yourself. "But with a goroutine, can't you do that?" Yeah, but a goroutine cannot distribute it to other processes. I mean, technically you can. Oh, other machines; you want to do it across other machines, right? "What do you mean by other machines?" It's other runtimes, essentially. Goroutine scheduling is still within a single Go runtime, a single system, if you can imagine it that way. There might be implementations where, exactly, right?
There might be implementations where I have a goroutine producing to a socket, and on the other side another Go process running, maybe a clone of the same one, with a consumer reading from that socket, right? And I could simulate the same thing using goroutines instead of the queue. But why would you have an external queue? It's an expensive operation if you're just sending something simple. It's only worth it if you need to maintain that data externally to your process, so that you're not dependent on your process's current point of execution to maintain that state, right? "Thank you. Nice presentation, by the way. You chose to abstract things in a producer-consumer fashion rather than exposing channels to your user; I just wanted to find out your reasons for doing that." Oh, okay. Sure, you want to take this? Probably, yeah. It was actually more about coming up with an actual use case than just discussing channels. Channels in isolation is a very critical topic, but unless you're taking a deep dive into channels, just talking about channels at this level didn't make a lot of sense to me, rather than wrapping it in some kind of use case or small example. That's why I wrapped it in a producer-consumer. "But do you use this in your production code?" Yeah. Okay, sure. So in general, when you talk about clean code, you'd like to hide that implementation detail behind some interface, right? And not just channels: let's say if internally I use an array, a linked list, or a hash map for state storage, I wouldn't really want to expose that structure to anyone consuming it, right?
I would always do it through an interface, so that the behaviour can be verified as a black box, and then I can unit test it through that interface, or verify the state through it. So that's one of the reasons. This could be a very debatable thing, but I like hiding implementation details under a clean interface, so that even if the implementation changes, your consumers don't really have to do any rework; you just expose the functions. All right. "What was this with the interrupts, the signal handling?" Oh, that was just to simulate: we needed a way to trigger the lag, so we used an interrupt to simulate it, essentially. We could have just put a timer or something that would keep getting slower. It was just to run that simulation. There was a question at the back. If you have a loud voice, you can shout. "Okay, thanks. My question comes in two parts. First, on quite a high level, how pervasively is Go used at Gojek? Second, one of the criticisms I've heard is that because it's a relatively new language, a lot of libraries are not as battle-tested as in more popular languages like Java or C++. From experience, how true is that criticism?" Sure. Okay. So, about Go at Gojek: as you heard from the presentation, or the video at the start, we do use Go very heavily. We have one of the largest Go clusters around, to be honest. What we use it for mostly is IO-bound things like proxies or gateways that have high-throughput IO. I don't know if I should say this, but in my experience, Go is a very simple language: you don't get a lot of syntax out of the box, and this makes it a little more verbose than a higher-level language.
So when things start to get too complicated, it's nicer to use a more high-level language which gives you constructs that can abstract more nicely. But coming to your second question, about Go not being battle-tested: I think Go is pretty battle-tested. If you look at a lot of large software that runs reliably, it's written in Go: Docker or Kubernetes, for example. It's definitely one of the main things out there. The ecosystem, yeah, it has a lot of varying libraries and frameworks, so you have to pick and choose what you want. It's not like Java, where there's an obvious choice for what to go for. There are very few frameworks, really, but each library has a nuance: it's intended for a very specific purpose, and you would generally read the design principles and pick what you want. I haven't found it lacking anywhere. At least for our use cases, we haven't really found the need to reinvent something. Of course, Heimdall is something that we use heavily internally, just because by default the Go HTTP client doesn't do some things like timeouts and retries. So just to wrap it and make it reusable, that's why we wrote some wrappers around those things. But that's about it; there's nothing really lacking in Go at the moment. "Do you actually have to test the libraries yourselves?" Oh, not really, no. "I think you need to make the distinction between the standard library and third-party libraries." That's actually a very good point. The standard libraries in Go are really, really nice: you get HTTP out of the box, you even get sockets out of the box, for example; image processing, hashing, even the test libraries, all out of the box. So for things like that, I would trust them blindly.
Third-party libraries, yeah, you can never be too sure, right? But there are always open issues on GitHub that you can look at, or the number of stars, or the contributors and their reputation. So yeah, picking third-party libraries is a little tricky, but I don't think it's much of a problem. But the long and short of it is that you still have to be very careful, right? That's true with any language; I don't think that's a Go issue. If you look at Node, for example, yeah. Sure. Anyway. Hello. Hi. No? It's dead. Oh, really? Okay, maybe I'll just shout. Yeah, it's live. Sure, good. Okay. It's working now. Okay, so the first question would be, and on that note I'd just like to ask because I'm not really developing in Go, I'd just like to be introduced to it: what do you think of the error handling mechanism in Go? That's a very opinionated question. It's horrible and wonderful at the same time. True. I have a short answer for that: it's verbose and explicit. It works. Yeah, but it's ugly, right? But it works. You're going to make fewer mistakes with it than you would with a lot of other approaches, like throwing exceptions and things like that, because it's very explicit. But yeah, it's not the prettiest syntax, to be honest. And the second question: I think invariably when you're talking about concurrency, you end up with the issue of synchronization. Mm-hmm. And with synchronization, one of the methods of handling it is using a semaphore. Okay. How good are they? Okay, so this is a larger topic, with lock-free and wait-free programming and so on. But essentially, how Go solves the coordination, or the synchronization problem, or I would say the orchestration problem, to be honest, when you have concurrency, is by communicating between concurrently executing goroutines over channels.
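(To make the "verbose and explicit" error-handling point above concrete, here is a small sketch; `parsePort` is a hypothetical function, not from the talk.)

```go
package main

import (
	"fmt"
	"strconv"
)

// parsePort shows Go's explicit error style: every fallible call
// returns an error value you must check. It is verbose, but hard
// to ignore, which is why you make fewer mistakes with it.
func parsePort(s string) (int, error) {
	p, err := strconv.Atoi(s)
	if err != nil {
		return 0, fmt.Errorf("invalid port %q: %w", s, err)
	}
	if p < 1 || p > 65535 {
		return 0, fmt.Errorf("port %d out of range", p)
	}
	return p, nil
}

func main() {
	if p, err := parsePort("8080"); err == nil {
		fmt.Println("listening on", p) // prints "listening on 8080"
	}
	if _, err := parsePort("not-a-port"); err != nil {
		fmt.Println("error:", err)
	}
}
```

There is no exception to silently propagate; the `if err != nil` blocks are the "ugly but explicit" syntax being discussed.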
What this means is you never really share state, right? Because if you look at the semantics of a channel, you put data on it, the data is copied onto it, and your receiver pulls the data off. And the very primitive mechanisms you have to coordinate with are send and receive. So by using these, you can limit yourself to a design which doesn't rely on any shared locks at all. So you do not really need, you know, any semaphores, for example, for very simple designs. There are some cases where the channel mechanism is overkill, which is why Go has a sync package in the standard library. So you do get things like wait groups. If you look at the code, we'll post a link to it later, but we do use wait groups. Or you have atomic operations like compare-and-swap, if you really want to use them; they're CPU primitives, really. But in my experience, using them is generally an anti-pattern unless you're really, really optimizing. Because again, when you're locking, you have shared state, and managing that is difficult. See, concurrency or parallelism by itself is not so bad. It's when you share state, and have to manage it, that it gets tricky. When you have a shared... Exactly. Yeah, even if it's not mutable, you might just end up in a deadlock, right? Like you're blocking on something shared. So yeah, cool. Sure. How are we doing on time, actually? Okay, cool. Are there any numbers, any comparison, of channels and goroutines against multi-threaded applications in Java? Like, how much better was the throughput, or was there better CPU utilization? Have you come across anything like that, or done anything like that? Yeah. Well, I can't really point you to the specific numbers, but yes, the footprint of a goroutine is way less than the footprint of a thread.
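(A minimal sketch of the lock-free design described above: work is fanned out to worker goroutines and results are collected, coordinating purely with send, receive, and close, no mutex or semaphore. The `sum` function and worker count are illustrative choices.)

```go
package main

import "fmt"

// sum fans numbers out to worker goroutines over a channel and
// collects partial results over another. Data is copied through
// the channels, so no state is shared and no lock is needed.
func sum(nums []int, workers int) int {
	jobs := make(chan int)
	results := make(chan int)

	// Each worker accumulates a private partial sum, then sends it.
	for w := 0; w < workers; w++ {
		go func() {
			partial := 0
			for n := range jobs { // loop ends when jobs is closed
				partial += n
			}
			results <- partial
		}()
	}

	// Producer: feed the work, then close to signal "no more".
	go func() {
		for _, n := range nums {
			jobs <- n
		}
		close(jobs)
	}()

	// Collect exactly one partial result per worker.
	total := 0
	for w := 0; w < workers; w++ {
		total += <-results
	}
	return total
}

func main() {
	fmt.Println(sum([]int{1, 2, 3, 4, 5}, 3)) // prints 15
}
```

The only coordination primitives used are the channel send, receive, and close mentioned in the answer; a `sync.WaitGroup` would be the sync-package alternative for the "wait for all workers" part.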
Okay, a goroutine's footprint is basically just its stack size. It's very, very light. So yeah, Java threads come nowhere close to it. But I really can't point you to specific numbers, and we haven't really done that testing, because other people have already done it first. Right, yeah, just to add a little bit on that: goroutines are not really threads, okay? They're like processes maintained within the Go runtime. So the OS never sees a goroutine, unlike in Java, where a thread is a native resource that's scheduled by the JVM and then by your OS. So yeah, it's common to spin up thousands of goroutines; that's not uncommon. About performance, I think it really depends on what you're doing with them, right? A native thread could perform better in some cases, but in general, you would use goroutines more as an orchestration strategy, or a design strategy, rather than just for performance. I think the performance thing comes later. There was another question. Earlier, during one of the questions, you mentioned that Go can become quite verbose, and that when that happens you might use a higher-level language. Is that a matter of development effort, or is it something like a technical issue? Okay, yeah. It's more of a people problem, okay? This is the case with most programming languages. See, in the end, you want your code to be maintainable over time. And when you have an idea, you want to express it very cleanly with your syntax. If the language blocks you from doing that, that's when it's kind of time to step away. If it's hard for people to read, like, you know, a lot of if-elses, and if-err-not-nils, and these sorts of things get in your way, then I think it's time to step back and see where you can separate the two. It's definitely not a technical problem, okay?
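(To back the claim above that spinning up thousands of goroutines is routine, here is a small sketch; the `run` helper and the count of 100,000 are illustrative.)

```go
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
)

// run spins up n goroutines and waits for all of them to finish.
// Each goroutine starts with a tiny, runtime-managed stack, so even
// very large n is cheap compared to the same number of OS threads.
func run(n int) int64 {
	var count int64
	var wg sync.WaitGroup
	wg.Add(n)
	for i := 0; i < n; i++ {
		go func() {
			defer wg.Done()
			atomic.AddInt64(&count, 1) // trivial stand-in for real work
		}()
	}
	wg.Wait() // block until every goroutine has called Done
	return count
}

func main() {
	fmt.Println("goroutines completed:", run(100000)) // prints 100000
}
```

The OS never schedules these directly; the Go runtime multiplexes them onto a small pool of native threads, which is why the same program with 100,000 Java threads would be impractical.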
Like, you can do everything with Go that you can with any other language. It's just that, yeah, you might get really frustrated and cry yourself to sleep or something. Yeah. So in the case, say, you want to move away from Go: maybe the project has one part of the logic where you actually require concurrency and the performance matters, and another part that doesn't need it. So I suppose GoJek would try to separate those parts, is it? Definitely. In general, it's a good idea to separate when your scaling requirements are different. So if you have a scaling requirement for, let's say, one part of your project, and you have this other logic that is not so intense, maybe it's just doing some lightweight computations, in general it would be good to separate them, not just from the maintenance perspective but also because of your infrastructure cost. I wouldn't want to take the whole thing and replicate it everywhere if I can just scale one part and get a boost in performance. So it's a good idea to separate them in general. But yeah, like I said, most of our use cases, at least so far at GoJek, have been things like proxies, or things that do authentication, for example, where you have a gateway and it quickly needs to verify something and then route requests and such. Go works amazingly for that, and it's very easy to design that in Go as well, because you can think in terms of its primitives. Doing the same thing in a language like Java is a little more work, yeah. Cool. Sure. Oh, okay. So when you're separating like this, of course, you introduce some overhead, right? Like, you need to put them together. Say it's a new service, then it's new overhead. It's not really a good question, but what's the approach for handling that? Like, how do you do it? So, I mean, let's just say it.
You're talking about how to split an application into microservices, or something like that. That's a whole other talk, and I'm not sure if it's relevant here, but there are strategies for that. You could front it with a gateway and then route requests, and, you know... Yeah, but let's talk more offline. I think we do have some experience with that; it's just not so relevant here. Cool. Any other question? Cool, okay. Please, yeah, sure. Yeah, so I wanted to ask: how do you visualize or log the channel? Like, is it full? I mean, how do you visualize the channel for debugging? So, I didn't get the question at first. It's not that the channel doesn't have attributes. The channel does have enough information about it to tell you, like, what the buffer size is, how much of the queue is filled, how much is empty. So when you actually go into debugger mode, you can actually see all of that information. You don't really have to just imagine what the current state of your channel is, whether it's really blocking. Like, yeah, it's nice enough to tell you. Thanks. Okay? Yeah. Sure, thanks for listening to us. I hope you learned something. Thank you. Sure.
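(As a footnote to the channel-debugging question above: you don't even need a debugger for the basics, since Go's built-in `len` and `cap` report a buffered channel's state directly. A minimal sketch:)

```go
package main

import "fmt"

func main() {
	// cap(ch) is the buffer size; len(ch) is how many values
	// are currently queued and not yet received.
	ch := make(chan int, 3)
	ch <- 1
	ch <- 2

	fmt.Println("queued:", len(ch), "of", cap(ch)) // prints "queued: 2 of 3"
	fmt.Println("full?", len(ch) == cap(ch))       // prints "full? false"

	ch <- 3
	fmt.Println("full?", len(ch) == cap(ch)) // prints "full? true"
	// A further unbuffered send here would block until a receive frees space.
}
```

A debugger shows the same fields on the runtime's channel structure, which is what the answer above refers to.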