Now it's time to introduce Bob, who is going to talk about communication between coroutines. So please welcome him. Thank you. First, Mike, can you hear me? Yes? Okay. I'll try to entertain you during this lunch break. I'm going to talk about communication in coroutines. I'm a mobile lead developer at Quick, headquartered in Finland, and I'm based in Stockholm, Sweden. So, coroutines: we've had a lot of talks about them already, so I'm not going to introduce them; I'm going to talk about the communication. One of the main problems I see is the advice to think of them as lightweight threads. We're going to see why we might not want to take that too literally, and also what we actually mean by lightweight. Should we treat them as threads? Let's see if this works. Whoa, it's fast. The `fun main() = runBlocking` is going to be on every slide, but invisible, so stare at it for two more seconds until it's burned into your eyes. And now it's gone, but it's always there: we always have the main scope to run all the coroutines on. As for lightweight, we've seen this example before. Here we create 100,000 coroutines with the launch coroutine builder, and it takes around 150 milliseconds on my machine. If we switch to threads, it's up to five seconds. So yes, they're lightweight. Going back to coroutines: now we use a dispatcher, Dispatchers.Default, which on my machine is backed by eight threads; it's sized to the number of available processor cores. It's very effective, and this takes around 800 milliseconds. If we switch to Dispatchers.IO, which has a much higher thread limit (64 by default, and configurable), I cranked it up to about 84 threads on this machine, and the more threads you use, the more time it takes. And how about thread safety? You can think of dispatchers sort of like thread pools, but not really.
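A minimal sketch of that comparison (timings will vary by machine, and starting 100,000 raw threads can exhaust resources on some systems):

```kotlin
import kotlinx.coroutines.*
import kotlin.system.measureTimeMillis

fun main() = runBlocking {
    // 100,000 coroutines: finishes in a fraction of a second.
    val coroutineTime = measureTimeMillis {
        coroutineScope {                     // waits for all children
            repeat(100_000) { launch { } }
        }
    }
    // 100,000 OS threads doing the same nothing: far slower.
    val threadTime = measureTimeMillis {
        val threads = List(100_000) { Thread { }.apply { start() } }
        threads.forEach { it.join() }
    }
    println("coroutines: ${coroutineTime}ms, threads: ${threadTime}ms")
}
```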
There are rules deciding where your coroutines can run. So, thread safety; just a quick show of hands: is this thread safe? Yes? Show of hands. No, it's not, because we run on possibly eight threads and we have shared mutable state. The same goes for IO; it's just another thread-pool-ish dispatcher. How about now? Remember `fun main() = runBlocking`: we're running on the main thread, and we launch 100,000 coroutines. This actually is thread safe, because everything runs on main; coroutines inherit the dispatcher from their parent. But you might have a co-worker who does this, passing a different dispatcher, and now it's not thread safe anymore. So we have to be careful. And just to make sure: when I started learning Java and threading a bunch of years ago, I thought adding the volatile keyword (or the @Volatile annotation) would help here. It doesn't. Volatile only makes sure you don't use a cached value; you read it fresh every time. But it doesn't prevent someone else from reading the same value and writing it back, so increments can still be lost. Volatile doesn't help us. So, can we treat them as threads? Someone mentioned the Unconfined dispatcher as well, and I think it should be renamed to "whatever": I just follow along. Here's what it does. We have our `fun main`, so we're running on the main thread. When you launch a coroutine with Unconfined, it says: okay, you're on the main thread, I'll just tag along, I don't care. So the first statement, printing a1, runs on the main thread. Then we call delay, which resumes on a different dispatcher, and when it comes back, the unconfined coroutine says: okay, you're on a new thread, a new context, I'll tag along with you. It doesn't preserve your context. So a2 runs on a different thread. Unconfined is for corner cases; I haven't seen it used in production code yet. But you can get a similar result with this example.
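A sketch of that pitfall, assuming a plain shared counter: the same unsynchronized increment is safe when every child inherits the single-threaded runBlocking dispatcher, and unsafe once a co-worker adds Dispatchers.Default.

```kotlin
import kotlinx.coroutines.*

fun main() = runBlocking {
    var counter = 0

    // All children inherit runBlocking's single-threaded dispatcher,
    // so this unsynchronized increment happens to be safe.
    coroutineScope {
        repeat(100_000) { launch { counter++ } }
    }
    println(counter) // 100000

    // Moving the work to Dispatchers.Default breaks that guarantee:
    counter = 0
    coroutineScope {
        repeat(100_000) { launch(Dispatchers.Default) { counter++ } }
    }
    println(counter) // very likely less than 100000: lost updates
}
```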
We launch on Dispatchers.IO and print a1 on, say, thread worker-1. Then we call a suspending function that declares a withContext with a different dispatcher. There you will end up in a different context, which can switch threads, or you can stay on the same one. And when you come back from the suspension and print a2, you may simply continue on that other thread. So you might get this output, which is perfectly fine, but you can also get this. The thing to be aware of is that after withContext, you can continue on that thread instead of coming back. So threads and coroutines aren't a one-to-one match. And to make it even clearer why it's confusing to think of them literally as threads: consider a ThreadLocal, which stores a value per thread; if you switch threads, you get that thread's own value. Given the same example, we set the local to "IO" first, then we read it, we call withContext, read it again, set it to "default", come back, and read it a third time. We can get IO, IO, default, meaning we ran on the same thread the whole time; the thread-local stays the same across all the operations. But we can also switch threads. Thread-locals are thread safe; they stick to their thread. And this output is proof of that: we switched threads, and when we come back to print a2, we're on the new thread with that thread's value. But it reads wrong in your head when one coroutine sees different thread-local values. So I would suggest, for your own sanity, not to combine the two; it doesn't make for readable code. So we should treat them as coroutines. Another example, from Dan Lew, is what happens if you use synchronized.
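Before moving on, the ThreadLocal behaviour described above can be sketched like this (which thread names and values you see depends on which worker the coroutine resumes on):

```kotlin
import kotlinx.coroutines.*

val local = ThreadLocal<String?>()

fun main() = runBlocking {
    launch(Dispatchers.IO) {
        local.set("io")
        println("${Thread.currentThread().name}: ${local.get()}")   // "io"
        withContext(Dispatchers.Default) {
            // Possibly a different thread, so possibly a different (null) value.
            println("${Thread.currentThread().name}: ${local.get()}")
            local.set("default")
        }
        // After resuming we may stay on the Default worker, so what this
        // reads depends on which thread we landed on.
        println("${Thread.currentThread().name}: ${local.get()}")
    }.join()
}
```

If you genuinely need a thread-local value to follow a coroutine across threads, kotlinx.coroutines provides `ThreadLocal.asContextElement(value)` for exactly that.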
With threads it works like this: we create two threads and call a synchronized function, and we get starting, ending, starting, ending. The @Synchronized annotation synchronizes the code, so only one thread is allowed in the function at any given time. If we change to coroutines, launching two coroutines that call the same function, but now it's a suspending function with a delay in it, this actually prints starting, starting, ending, ending. To understand that, we have to understand how the suspension mechanism works in coroutines. At a high level: when the coroutine enters the critical section, it acquires the lock, just like a thread would, and prints starting. Then it hits the suspending function, puts its state into a continuation, suspends, and releases the lock. It effectively divides the function in two. When the suspension is done, it acquires the lock again, prints ending, and releases the lock. That's why we can get this order: during the suspension, the other coroutine comes in and takes over. So never combine suspending functions with synchronized. You can get away with it if you don't call any suspending functions inside, but it's not safe; someone will put a suspending call in there eventually. So let's do communication the way we should in coroutines. First up is Deferred; we mentioned it and talked a little bit about it. It's often used with the async builder, which launches a coroutine, and the last value of the block (or an explicit return value) becomes the deferred value. It's kind of like a future. When you need the value, you call await on it, and it suspends until the async block is finished and returns the value for you. And async starts executing immediately. Just to prove that, we have this code.
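That proof can be sketched as a small standalone version, assuming delays of two seconds and one second:

```kotlin
import kotlinx.coroutines.*
import kotlin.system.measureTimeMillis

fun main() = runBlocking {
    val elapsed = measureTimeMillis {
        val first = async { delay(2000); "first" }    // starts immediately
        val second = async { delay(1000); "second" }  // finishes earlier
        // The first await suspends for ~2s; by then second is already done,
        // so its await returns immediately, like a regular call.
        println("${first.await()} / ${second.await()}")
    }
    println("took ${elapsed}ms")   // ~2000, not 3000: they ran concurrently
}
```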
This entire block takes about two seconds. The second async block finishes before the first one, but when we await the results, the first await suspends until its two seconds are done; by then the second is already complete, so its await releases immediately, like a regular call. We could also do this more manually with a CompletableDeferred that you control yourself. Then you don't have to use the async builder; you can use an actor, or just a regular launch. What you have to do is call complete on it to say: this deferred is completed, it's done, no one has to await anymore. This example is just to show that even though Deferred and CompletableDeferred are a safe way of communicating, we still can't share mutable state: mutating the object after we pass it to complete will still be visible to whoever awaits it. So it's not safe to assume that whatever we complete stays the same forever; we should use a val, and not do it like this. One thing that is good about CompletableDeferred is that if you call complete multiple times, only the first call actually completes it. So in this code we will always see Bob sent, never Charlie. You can call complete as many times as you want: the first call returns true, because it completed the deferred; all the others return false, because they do nothing. And just to be explicit: communication like this between coroutines is perfectly safe. The second thing we're going to talk about is channels, and they provide a way to transfer a stream of values: a Deferred is for one value, channels are for multiple values. So let's get familiar with them. Here we launch on Dispatchers.Default and send two values on a channel, and the send function is a suspending function.
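A sketch of that complete-wins-once behaviour, with the names Bob and Charlie as in the talk:

```kotlin
import kotlinx.coroutines.*

fun main() = runBlocking {
    val deferred = CompletableDeferred<String>()
    launch { println("received ${deferred.await()}") }
    println(deferred.complete("Bob"))      // true: the first completion wins
    println(deferred.complete("Charlie"))  // false: already completed, no-op
    // The awaiting coroutine always sees Bob, never Charlie.
}
```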
So it's going to suspend until someone calls receive on the same channel. Here we send two times and we receive two times. Just to be clear, this code will never terminate: we call send, it suspends, and we never reach the receive line. So we have to do it on different coroutines to be able to complete, or we can alter the channel by adding a buffer. Now with a buffer of one, send buffers the value and returns without suspending. So there are a couple of different types of channels. There's the buffered one, where you decide how big the buffer gets before send suspends. Then there's unlimited, where send never suspends; you can just keep hitting it, and it's essentially a blocking queue. Conflated, which we touched on in earlier talks, stores only the most recent value, so send never suspends on this one either: if it already has a value, it replaces it, always keeping only the latest one. And once someone receives that value, it's empty again, so receive can still suspend until there's a value. And rendezvous is the default: one send and one receive have to meet to transfer the value. There are also terminal operators, or functions, on channels, like toList here, which waits for the entire channel to complete and then turns it into a list. This won't terminate either, because it doesn't know whether the channel is done sending. We have to close channels to be able to use terminal functions on them. So let's see where they excel. There's a pattern called fan-in: many producers and only one consumer. Here we launch two coroutines, and each calls a suspending function that randomly delays zero to five seconds before sending.
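The channel variants above can be sketched like this (rendezvous, buffered, and conflated; the unlimited case is the same constructor call with Channel.UNLIMITED):

```kotlin
import kotlinx.coroutines.*
import kotlinx.coroutines.channels.*

fun main() = runBlocking {
    // Rendezvous channel (the default): send suspends until someone
    // receives, so sender and receiver must be different coroutines.
    val rendezvous = Channel<Int>()
    launch {
        rendezvous.send(1)   // suspends until the receive below
        rendezvous.send(2)
    }
    println(rendezvous.receive())   // 1
    println(rendezvous.receive())   // 2

    // Buffered channel: with capacity 1 the first send returns immediately.
    val buffered = Channel<Int>(1)
    buffered.send(1)                // buffered, no receiver needed yet
    println(buffered.receive())     // 1

    // Conflated channel: keeps only the most recent value.
    val conflated = Channel<Int>(Channel.CONFLATED)
    conflated.send(1)
    conflated.send(2)               // replaces 1
    println(conflated.receive())    // 2
}
```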
And then we have one coroutine that's receiving from the channel, and we can get either Charlie, Bob or Bob, Charlie; it all depends on the random delays. The other way around is called fan-out, where we have one producer and many consumers. For example, if you want to do concurrent work on a huge list, you can push the items through one channel and have multiple coroutines work on them at the same time. Here we just loop 30 times, send 30 items, and then close the channel. We can actually iterate over a channel with a for loop, so we create three coroutines that each have a for loop, and we get a result something like this. What I've noticed is that this list isn't always ordered, and I don't know why; if you put a delay in the suspending function it's always ordered, so I have to look into that. There are also builders for these kinds of things, like the produce builder, which creates the channel for us: you call send directly instead of channel.send, and it closes the channel once the block is complete. And you can call consumeEach on a channel instead of having a for loop or multiple receives; it keeps receiving until the channel closes. Next up is Mutex, the mutual exclusion primitive for Kotlin. If you remember this code, it's not thread safe, but we can make it thread safe with a mutex. It's similar to a lock, though unlike synchronized it is not reentrant. You can lock a mutex and unlock a mutex, and the withLock function first locks it, then runs your code inside a try block, and unlocks it in the finally block. So it's a safe way of using these locks, and now this code is thread safe. It's fine-grained and it's manual, but it's safe. The same goes for the synchronized example: we can make that work as well by using a mutex instead of synchronized.
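A sketch of that mutex version (names are hypothetical; the key difference from synchronized is that withLock keeps the mutex held across the suspension point, so the whole body is a real critical section):

```kotlin
import kotlinx.coroutines.*
import kotlinx.coroutines.sync.*

val mutex = Mutex()

// withLock = lock(), try { ... } finally { unlock() } — and the lock
// is NOT released at the delay, unlike the @Synchronized version.
suspend fun critical(name: String) = mutex.withLock {
    println("$name starting")
    delay(100)
    println("$name ending")
}

fun main() = runBlocking {
    launch(Dispatchers.Default) { critical("A") }
    launch(Dispatchers.Default) { critical("B") }
    // One full body runs at a time: starting, ending, starting, ending.
}
```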
Replacing synchronized with a mutex actually works: it prints starting, ending, starting, ending every time. We still have some time to go over Flow, which is the Kotlin take on reactive streams. It's kind of similar to channels, but not really. You have an emit function instead of send, and you have collect (or another terminal operator, but collect is the most common) instead of consumeEach. They're basically doing the same thing. The big difference is that a channel is hot, meaning there's a coroutine, or several, behind it feeding data, active all the time. A flow is cold until you call a terminal function on it: it doesn't do anything until you collect, and you can collect it multiple times and get the same execution each time; whether you get the same result is up to you. So here we get the values 1 through 10. We also have operators, familiar from the stream world, like filter and map, and also extension functions: here we have a range asFlow, and you can have a list asFlow too. But the more important thing about communication with flows and threads is that collect, the terminal operator, determines which context, which dispatcher, we run on. We still have `fun main() = runBlocking`, so we're on main: both the flow and collect run on main. Usually we don't want that; we want a background job. So we can add flowOn, which says that all the preceding operators that don't have their own context will use this one instead. In this case the flow runs on a worker thread, but we still collect on the main thread, the main dispatcher. And just to show that it only affects preceding operators, we add a map with a println as well: the map runs on main, and the filter also on main. If we want to change that, we have to move the flowOn. We can move it down below the map, and now everything above it runs on Dispatchers.Default, in this case on a worker thread.
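A sketch of flowOn, assuming a small squares flow: the map above flowOn runs on a Default worker, while collect stays on the caller's context.

```kotlin
import kotlinx.coroutines.*
import kotlinx.coroutines.flow.*

fun main() = runBlocking {
    (1..3).asFlow()
        .map {
            // Runs on a DefaultDispatcher worker because of flowOn below.
            println("map $it on ${Thread.currentThread().name}")
            it * it
        }
        .flowOn(Dispatchers.Default)   // affects only the operators above it
        .collect {
            // collect stays in the caller's context: the main thread here.
            println("collect $it on ${Thread.currentThread().name}")
        }
}
```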
We can also jump around if we want to; we can have multiple flowOns. And one final thing: withContext inside a flow — don't use it. You can get away with it if you're lucky, if your collect or terminal function happens to be on the same dispatcher, but otherwise you will get a runtime error. So use flowOn; it's there to preserve the context. Thank you. Questions? Will the slides be available? What? Will the slides be available? Yeah, they're available now at FOSTA. Thank you very much.