of locks and synchronization between threads, because you will have some pieces of code which are critical sections, and you cannot afford to have multiple threads modifying those critical sections at the same time. Similarly, you need some way to communicate between threads: you may have a worker thread and a master thread, and when the worker thread is done, it should notify the master thread with the result of its calculation. You also have to deal with exceptions. In Haskell, most of the code you write is pure, and hopefully there won't be any exceptions or errors in that code, but when you are working on real-world programs you have to deal with exceptions: the network is out, the disk is full, and your code has to handle it. And finally, transactions: sometimes you have cases where you want to modify multiple variables in a single atomic transaction, so that the effect is visible all at once. Haskell provides great tools for all of these, and that is what we are going to look at. First of all, I assume people here are familiar with Haskell, because I am not going to talk about the syntax or the libraries or how to structure code; I am going to jump directly into the concurrency tools it provides. To demonstrate the tools, we are going to write a multi-user chat. A chat server is a typical example of a concurrent program, and we will see how to build one. The feature set is going to be very sparse. There will be a bunch of users; a user should be able to log into the chat server and talk to other users with private messages, which is what I call a "message" here. And then there will be channels on the server, which multiple users can join and talk in.
When you talk in a channel, everyone else who is in that channel gets a notification, which I call a "tell". So those are the only two features, and this is the plan. First we add the ability to accept user connections: the server should be able to accept a user's connection and keep it open as long as the user is connected. Then we add user-to-user chat. Then, if we have time, we will look into users quitting, and users just going away and becoming inactive, and how to deal with that. And then we will implement channels: joining and leaving channels, and chatting in them. This is going to be a bit of a whirlwind tour, because there is a lot of ground to cover; hopefully we will finish in time. So let's start. The first thing you do when you write a Haskell program is write down the types you require in your system. For the very first feature, letting a user connect to the server, we need just three simple types. There is a User, identified just by a user name, which is a string. Then there is a Client: a client is the representation of a user on the server, so it contains the user and a handle. If you are familiar with Haskell, a Handle represents the socket the user is connected over, so a client is a user plus that user's handle. And finally there is a Server. Ignore the MVar for a moment: you can see that the server is a map from users to clients. That's all. Every time a new user connects, you add that user's client to the map, and when the user quits or goes away, you remove it. What's interesting here is the MVar; we'll see it in a moment. So we have the types ready.
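Those three types might look roughly like this; this is a sketch, with field and constructor names assumed rather than taken from the actual slides:

```haskell
import           Control.Concurrent.MVar (MVar)
import qualified Data.Map as Map
import           System.IO (Handle)

-- A user is identified by a unique name.
newtype User = User String deriving (Eq, Ord, Show)

-- A client is a connected user: the user plus the handle
-- (the socket) over which the server talks to them.
data Client = Client
  { clientUser   :: User
  , clientHandle :: Handle
  }

-- The server is a map from users to their clients, wrapped in an
-- MVar so that multiple threads can modify it safely.
newtype Server = Server (MVar (Map.Map User Client))
```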
Let's see if we can write the code which accepts a connection. The runServer function takes a port and starts up a server. If you are familiar with server sockets, it basically listens: every time someone connects to the server socket, it accepts the connection on a different port and then runs connectClient. There are a few interesting things here. First, let's get `finally` out of the way. `finally` is somewhat like try/finally, if you are familiar with Java: it means run this function, and when it is done, run that other function to do the cleanup. It's pretty simple: when connectClient is over and the client is gone, close the handle and clean up the resource. The other interesting thing is the forkIO at the beginning; that's the interesting part here. You can imagine that since there is a `forever` there, the server keeps looking for new connections all the time. But you can't talk to the client in the server's main thread, because that would block the server. So whenever a client connects, the server forks a new thread; that's what forkIO does here. It is very similar to fork in other languages. So let's see what threads are in Haskell. If you are familiar with threads in Java, Haskell's threads are a little different: they are green threads, which means they are managed by the runtime, not by the OS. The runtime manages a bunch of green threads and maps them onto a pool of actual OS threads. That's one thing. The second thing is that the threads are really memory-efficient.
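A minimal sketch of such a runServer, assuming the network package; the exact socket setup and the connectClient-as-parameter shape are my assumptions, not necessarily what the talk's code does:

```haskell
import Control.Concurrent (forkIO)
import Control.Exception (finally)
import Control.Monad (forever, void)
import Network.Socket
import System.IO (IOMode (ReadWriteMode), hClose)

-- Listen on the given port; for every incoming connection, fork a
-- green thread that runs the per-client action, and close the
-- handle when that thread finishes (even via an exception).
runServer :: (Handle -> IO ()) -> PortNumber -> IO ()
runServer connectClient port = do
  sock <- socket AF_INET Stream defaultProtocol
  setSocketOption sock ReuseAddr 1
  bind sock (SockAddrInet port 0)       -- 0 = listen on all interfaces
  listen sock 10
  forever $ do
    (conn, _peer) <- accept sock        -- wait for the next client
    handle <- socketToHandle conn ReadWriteMode
    void $ forkIO $ connectClient handle `finally` hClose handle
```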
If I'm not wrong, each thread takes something like 32 KB of space, so threads are very cheap in terms of memory. The third thing is non-blocking IO: all IO in Haskell is non-blocking. As soon as you make an IO call, the Haskell runtime's IO manager suspends your green thread right there and waits until the IO call is finished. It is very much like an event loop in Node, for example, except that in Node you explicitly give a callback to run when the IO is done; in Haskell you don't have to do that. The runtime suspends and resumes your threads automatically: when an IO call finishes doing whatever it is doing, the runtime resumes your thread at that position. And when the IO manager suspends a green thread, the OS thread on which the green thread was running becomes free, ready to be used by some other green thread. So the code looks threaded, you write it as threaded code, but it is as efficient as evented code, because inside the IO manager it becomes evented automatically. These two things combined, non-blocking IO and memory-efficient threads, mean that you can launch hundreds of thousands of threads, possibly even millions, in a single Haskell process. They are all mapped onto a small bunch of OS threads, they all run fine, and the runtime automatically manages suspending and resuming them when you do IO. That means you can launch threads for even the smallest things. You want a timer? You can launch a thread for every timer. You want to do an HTTP call? You can launch a new thread for every HTTP call. You don't have to worry about thread pools and a lot of the other things you have to worry about in, say, Java.
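As a small illustration of how cheap this is, here is a sketch (not from the talk) that forks a hundred thousand green threads in one process:

```haskell
import Control.Concurrent
import Control.Monad (forM_, replicateM_)

main :: IO ()
main = do
  done <- newEmptyMVar
  -- 100,000 green threads: each costs only a few kilobytes, so this
  -- is perfectly fine in a single Haskell process.
  forM_ [1 .. 100000 :: Int] $ \i ->
    forkIO (putMVar done i)
  replicateM_ 100000 (takeMVar done)   -- wait for every thread
  putStrLn "all 100000 threads finished"
```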
So how do you enable threading? You give the `-threaded` option when you are compiling your code, and the threaded runtime gets included in your program, which gives you green threads; and there is a runtime option to specify the number of cores you want to run on when you actually run the program. So that's what threads are, and that's what forkIO does: it launches a green thread, which is very efficient. The next thing to look at is MVar: if you remember, the server's map of users in the code is inside an MVar. [In response to an audience question:] Yes, the Haskell IO manager already does the scheduling for green threads, and it's pretty good at it. It uses the OS's native support for evented IO, the native event loop, epoll and such, so you would probably get the same guarantees, but I'm not sure; can I take the question after the talk? We have a lot of things to cover. So, we saw that the map of clients kept in the server is wrapped in an MVar. MVar is short for mutable variable. You cannot change variables in Haskell; there are no variables as such in Haskell, they are called bindings. Once you assign a value to a name, that's it, forever. But you do need to change values sometimes, and that's why you have MVar. An MVar is a sort of container; it's like an atom in Clojure, if you are familiar with that. You can think of an MVar as a box: you can put a value inside it, you can take the value out, you can change the value, very much like an atom.
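The box operations just described look like this in a tiny self-contained sketch:

```haskell
import Control.Concurrent.MVar

main :: IO ()
main = do
  box <- newMVar (0 :: Int)   -- a box holding 0
  x <- takeMVar box           -- take the value out; the box is now empty
  putMVar box (x + 1)         -- put a new value in
  readMVar box >>= print      -- prints 1
```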
But it's a little different: the value in the box can be modified atomically. That means it is changed in one single step, and the other threads looking at it never get an inconsistent view of what's inside the box. So an MVar is not just a mutable variable, it's a container for shared state: if you want to share state between multiple threads, you use an MVar, because you can make sure the value is changed atomically. And it's actually more than just a box; it's also a blocking primitive. MVar gives you two basic operations, putMVar and takeMVar. A put blocks if the MVar is already full: you cannot put a new value into a full box, so your thread blocks there. Similarly, if the box is empty and you do a takeMVar, your thread blocks there. So it's like a single-cell blocking queue. That means you can already see how to use it for synchronization and locks between threads. Say you want a lock around a critical section shared by two threads: you create a new empty MVar; the first thread, before the critical section starts, puts a value into the MVar, executes the critical section, and then takes the value out of the MVar. If a second thread arrives at the same critical section, it tries to put a value into the MVar and blocks there, because the MVar is already in use; after the first thread is done and empties the MVar, the second thread can proceed, take the lock the same way, and continue. So you can see how MVars can be used as locks and synchronization primitives. Now let's see a bit of actual code: what happens in the connectClient call. There is some setup, and after that we call readName, because the first thing we need is the name of the user; without a name you can't put the user in the map.
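That locking pattern can be packaged up as a helper. This is a sketch of the exact variant described here, where the lock starts empty and is acquired with put; note that the standard-library convention is usually the reverse (a full `MVar ()` acquired with takeMVar), but both work:

```haskell
import Control.Concurrent.MVar
import Control.Exception (bracket_)

-- The lock starts empty; a thread acquires it by putting () in
-- (blocking if another thread already did), and releases it by
-- taking the () back out. bracket_ guarantees the release runs
-- even if the action throws.
newLock :: IO (MVar ())
newLock = newEmptyMVar

withLock :: MVar () -> IO a -> IO a
withLock lock action = bracket_ (putMVar lock ()) (takeMVar lock) action
```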
What readName does is call checkAddClient. checkAddClient atomically checks whether the user name is already present. If the user name is already present, you cannot use it; user names have to be unique, so it must do that check, and if the check succeeds, it adds the client to the map. And this is a single atomic call: the check and the insertion into the map must be done atomically, otherwise you would get corruption. We'll see how checkAddClient does that. After that, if the result was Nothing, it means there was already a user with this name, so we send a message saying the name is in use and try to read the name again; if it succeeded, we get back a Client object, and then we can run the client and talk to it and do whatever else we want. So let's see how checkAddClient works. At the very beginning there is this call, modifyMVar; that is what makes it atomic. modifyMVar is a higher-level function composed from takeMVar and putMVar, which makes sure that the change to the MVar is done in one atomic block. What's inside the MVar is the client map. I take the client map and check whether the requested user is already a member: if so, I can't do anything, and I return the same map; if the user is not there, it's a new user, so I create a new client for that user, insert the client into the map, and return the client. This whole block is done atomically, so you will never have two users with the same user name in the map. removeClient is pretty simple: after everything is done, it modifies the MVar again to delete the user from the map. That delete is also atomic, and since both of these calls use the same MVar, they also block on each other.
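A sketch of what checkAddClient and removeClient could look like, with simplified types; the real code's types and names may differ:

```haskell
import           Control.Concurrent.MVar
import qualified Data.Map as Map
import           System.IO (Handle)

type User = String
data Client = Client { clientUser :: User, clientHandle :: Handle }

-- Check-and-insert in one atomic step: modifyMVar takes the map out
-- (locking the MVar), runs the function, and puts the new map back.
checkAddClient :: MVar (Map.Map User Client) -> User -> Handle
               -> IO (Maybe Client)
checkAddClient clientsVar user handle =
  modifyMVar clientsVar $ \clients ->
    if Map.member user clients
      then return (clients, Nothing)          -- name in use, map unchanged
      else let client = Client user handle
           in  return (Map.insert user client clients, Just client)

-- Deleting goes through the same MVar, so it blocks against the above.
removeClient :: MVar (Map.Map User Client) -> User -> IO ()
removeClient clientsVar user =
  modifyMVar_ clientsVar (return . Map.delete user)
```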
So that's it: that's how you do atomic, lock-like, synchronized calls between multiple threads and make your code thread-safe. You've got mutable variables, and you've got a way to change them atomically. So now we have a way for the user to connect to the server, and since we spawned a new thread, the user stays connected. The next feature to implement is user-to-user chat. Let's look at the types before we go on. The Client now has a new field, a Chan of Message, and there is a new data type called Message, which is just a Msg from a particular user, with a string which is the message text. So we added this new thing called Chan. There is also some code to parse the input from the user and format the output, but those are pretty trivial pure functions. So what is a Chan, and before that, why do we even need Chans? The way we are going to implement user-to-user messaging is that every client gets a Chan of its own, as we saw in the Client type. A Chan is nothing but an unbounded blocking queue. If you are familiar with queues in Java, it's like a LinkedBlockingQueue: it is unbounded, there is no limit on it, so it does not block when you try to put into it, but it does block when you try to take from an empty Chan. So every client now has a Chan in addition to the handle, and sending a message to another user is just taking the Message object and putting it on the other user's Chan. Each client reads both from its handle, so it can get input from the actual user outside on the network, and from its own Chan, so it can get messages from the users inside the server.
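The Chan API in a nutshell; a small self-contained sketch with an assumed Message type:

```haskell
import Control.Concurrent (forkIO)
import Control.Concurrent.Chan

data Message = Msg String String   -- from-user, message text

main :: IO ()
main = do
  chan <- newChan
  -- writeChan never blocks (the queue is unbounded)...
  _ <- forkIO (writeChan chan (Msg "alice" "hello"))
  -- ...but readChan blocks until something arrives.
  Msg from text <- readChan chan
  putStrLn (from ++ ": " ++ text)
```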
So you have to read from both the channel and the handle; we'll see how that works. At this point let's do a demo of what we have. There it is: I start the server and connect to it. This demo code is a little different from the code I've been showing you. I log in by typing a name, and then again with a different name, and let's see if I can message the other guy. He got the message, and he can message back: "hello". So messaging works; let's see how to implement it. Coming back to the code: this is the runClient call, which is called after the client has connected and the user name has been validated. What does it do? First of all, that syntax over there, I'm not sure if you are familiar with it, is record wildcard syntax: it exposes all the fields inside that record as plain names in the code. So it gets the server and the client, and then it loops over this code forever. Let's go line by line. The first line is a `try` of a `race` of readCommand and readMessage. I told you we have to read both from the channel and from the handle: readCommand reads from the handle, readMessage reads from the channel, and we race them. Now we have two sources of input. One way to read from two sources is to alternate: read from one, then the other, back and forth; but in that case you cannot block, you have to try reading from the first one with a timeout, then the second one with a timeout, then the first one again, and so on. Or you can race them. Racing means you just take input from whichever source completes first: if I get a message first, I read that; if I get a command first, I read that.
So I race the readCommand and readMessage functions and use whatever I get first. There is a `try` over there, which is like try/catch from Java, except that it returns an Either: if an exception happened, I get a Left, and if there was no exception, I get a Right. So if there's an exception, I just print out that there was an exception. If there's a Right, the race itself returns me another Either, a Left or a Right. If I get a command, the command might not parse, because parsing can fail; if parsing fails, I say "could not parse command". If I get an actual command, I handle the command; if I get an actual message, I handle the message. Let's look at readCommand. It is pretty simple: it just reads from the client's handle, parses the input, and returns it. readMessage and sendMessage are the important ones here. Channels give you two primitives, readChan and writeChan. readChan is for reading from a channel, and writeChan is for writing to it; readChan is blocking, as I said, when the channel is empty, while writeChan never blocks. Using these two we write handleCommand and handleMessage. handleCommand is pretty simple. What I'm handling here is the message command: I am trying to message a different user. All I do is get the map of users from the server and look up the user name. If the user name does not exist in that map, I just say "no such user". If it does exist, I create a new message and call sendMessage to send it to that client.
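A sketch of this loop, using race from the async package; the talk also shows how to build race by hand from simpler primitives, and the handler bodies here are just placeholders:

```haskell
{-# LANGUAGE ScopedTypeVariables #-}
import Control.Concurrent (Chan, readChan)
import Control.Concurrent.Async (race)
import Control.Exception (SomeException, try)
import System.IO (Handle, hGetLine, hPutStrLn)

-- race two input sources: the handle (commands typed by the user)
-- and the Chan (messages from other users). try wraps the whole
-- thing so an IO exception (e.g. the client disconnecting) becomes
-- a Left instead of killing the loop.
clientLoop :: Handle -> Chan String -> IO ()
clientLoop handle chan = loop
  where
    loop = do
      input <- try (race (hGetLine handle) (readChan chan))
      case input of
        Left (e :: SomeException) ->
          putStrLn ("client gone: " ++ show e)        -- stop looping
        Right (Left command) ->
          putStrLn ("command: " ++ command) >> loop   -- placeholder handler
        Right (Right message) ->
          hPutStrLn handle message >> loop            -- forward to the user
```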
Basically I create a new Message object and put it on the other client's channel; that other client receives the message on its channel and calls handleMessage, which does nothing but print it to the handle, which sends it across the network to the client. So that's it, that's how the clients talk. The important piece here is race: how do you read from multiple inputs, how do you merge multiple inputs together? This code is a little difficult to read, but let's go line by line. The first thing you do is create an empty MVar, an empty box. Then, with the first forkIO, you launch a new thread to run the first IO action, and with the second forkIO you launch another thread to run the second IO action, and then you block on readMVar. So basically you are saying: go run this first IO action, and when it is done, put the result in the MVar; and you ask for the same thing for the second IO action; and readMVar returns as soon as a value is put in that box.
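Putting those pieces together, a simplified race could look like this. It is a sketch: a production version also has to deal with exceptions thrown inside the worker threads, and note the caveat that the killed loser may already have consumed its input:

```haskell
import Control.Concurrent (forkIO, killThread)
import Control.Concurrent.MVar
import Control.Exception (bracket)

-- Fork one thread per action, each putting its result into a shared
-- MVar; block on readMVar for the first result. bracket's cleanup
-- then kills both worker threads (the loser is still running, the
-- winner is already done, so killing it is a no-op).
race :: IO a -> IO b -> IO (Either a b)
race left right = do
  result <- newEmptyMVar
  bracket
    (do tidL <- forkIO (left  >>= putMVar result . Left)
        tidR <- forkIO (right >>= putMVar result . Right)
        return (tidL, tidR))
    (\(tidL, tidR) -> killThread tidL >> killThread tidR)
    (\_ -> readMVar result)
```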
So only one of these two IO actions will succeed; or rather, as soon as the first one succeeds, the function returns, and that's where the bracket comes in. bracket is like try/finally again. It has three parts: resource acquisition, resource cleanup, and the actual operation. As soon as the operation returns, bracket runs the resource cleanup, which kills the threads with killThread. It doesn't matter which IO action succeeded: as soon as the function returns, both IO threads are killed. Actually, only one of the threads really gets killed, because the other one's operation has already succeeded; the loser doesn't fail so much as get killed. So we merge the two inputs into a single box and return the value of that box, which is an Either a b, because exactly one of them succeeds. So this was an example of using an MVar for inter-thread communication. There are more examples of this in the code, but I won't be able to go through them; we don't have much time. So we are done with user-to-user chat: that's two of the three features done. Now we have to do the difficult part, chatting in channels. Chatting in a channel is more complicated than user-to-user chat. User-to-user is simple: you just send a message to the other user's channel. Chatting in a channel is different, because there are multiple users sending messages to the same channel, and the channel must send those messages back out to all the other users subscribed to that channel. That means you must somehow merge a bunch of inputs: you saw two inputs being merged in the previous example, but this time you have to merge n inputs together, and then you have to duplicate those inputs outwards again. This is way
more complicated than the previous example. You could think of it like this: I could just race all of these five or six users to get their input. What's wrong with that? What's wrong is that you would have to launch one thread per user. And, if you did not notice, there was a bug in the previous race code: the second thread, the one that gets killed, may have already done its IO, already read a message from the handle, but then it is killed, and that message is lost forever. That's not atomic, and there is no way to push messages back into the channels or into the handles. So we need a way to merge these data channels atomically, so that nothing is lost: you read from exactly one of them, and you never touch any of the others. And similarly, you should be able to send messages out atomically, spread them out atomically. The fan-in and the fan-out must both be atomic. There is also the other side of the channels, where a user is subscribed to multiple channels and also has their own channel to read from; all of these, again, must be merged atomically and then sent to the person on the other side of the network. This is difficult to do with MVars. As we saw in the bracket example, if you used a Chan and a lock for each one of them, you would have n locks, and you would have to nest them inside each other and release them in the proper order, otherwise you would easily run into deadlocks and such. That's because locks are not composable; we know this already. You cannot have an unbounded number of locks; you cannot say "I don't know how many locks I have, I am going to take all of them". It's possible, but it's difficult and error-prone. And the second thing, as we
saw in the previous example, is that merging is not efficient: you have to launch one thread for each channel being merged. So let's see how we can do this more efficiently, which brings us to software transactional memory. You may have heard of software transactional memory, or STM as it is called. A lot of languages have STM these days: Clojure has an STM, Java has STM via the Multiverse library, and so on. STM is sort of like a database transaction in your code. What do transactions give you? Atomicity, consistency, isolation, and durability, in the case of databases. STM gives you the first three; not durability, obviously, since the data goes away when your program shuts down. It gives you atomicity, which means an STM transaction is atomic: everything you change in a transaction becomes visible to the outside world in one single commit; only when it finally commits is it visible. While the transaction is running, other running transactions will not be able to see its effects; that's the isolation part. Multiple transactions running on the same variables see a snapshot of the variables as of the beginning of the transaction, not the state as it changes inside the transaction. And it's consistent: transactions always take your variables from one valid state to another valid state. You will never be left with some variables changed and some not changed, or with corrupted values. So the transactions are atomic, consistent, and isolated. Why do you need STM, why do you need transactions? You want to change multiple variables atomically; to do that with MVars, you have to take one lock inside another. You want to read from multiple channels efficiently, without launching a thread for each channel. And you want to read from
multiple channels atomically; that's the first point. And if something goes wrong in your code, you want to roll back the effects of the transaction; you don't want to leave things messed up midway. So that's why we use transactions. Let's see this in actual code. What do we do? We create a new type called Channel. A Channel has a name, it has a TVar of a Set of users, which is basically the users who have joined the channel (what a TVar is, I'll explain), and it has a TChan of messages. So we have moved from Chans to TChans, and we have moved from MVars to TVars. This is the Server now: it has been changed to add a map of channels, which is inside a TVar. And the Client has been changed: the client's own channel is now a TChan instead of a Chan, and the client also has a map of the channels it is subscribed to, a map from channel name to TChan of Message, which is inside a TVar, because a user can join multiple channels and leave them over time, so you need to change that map all the time. I'll explain all of these in a while. The Message type has been changed to add new kinds of messages: there is now a Tell, which is used to send messages to channels, there are JoinChannel and LeaveChannel, and their corresponding replies: TellReply, Joined, and Leave (it's called Leave rather than Left because there is already a Left in Haskell). So this is pretty straightforward. Let's do a demo at this point. The server is running; it disconnected me because I did not respond to the ping; you can look at the ping code in the source. Let me join a channel. You see, as soon as the second client joins the channel, the first one gets a Joined message, like a notification. Now I can talk in the channel, and the other client gets the message; everyone who has joined that particular channel will get
this message, and they can talk among themselves using this. And then you can actually leave the channel, and the others get a "left" message. That's it: we have implemented communication in channels. Let's quickly go over the STM implementation. STM, first of all, comes with a monad of its own. STM does not run in the IO monad; it comes with its own monad, the STM monad, which means that you cannot run IO operations in the STM monad, you cannot do side effects in the STM monad: the type system prevents you from doing IO. Clojure's STM is not type-safe in that way: you can do IO in Clojure's STM, and if the STM retries or aborts the transaction, the IO is still done; the documentation just says that you should not do IO in STM. But in Haskell, the type system enforces that you cannot do IO in STM. That means transactions can be run multiple times by the runtime, and you will not see any unwanted effects. Now, TVar: a TVar is like an MVar, but transactional; it's a transactional variable. It gives you a bunch of API. You can create a new TVar; notice that you cannot create a new empty TVar, a TVar always needs to have a value. All of these operations, except atomically, run in the STM monad: you can create a TVar, you can read a TVar, you can write a TVar. And atomically is what converts your STM transaction to IO. You do need to run something in IO at the end; you can't just have a transaction sitting there and never run it, because things always run in IO. So atomically takes your STM transaction and, as the name says, runs it atomically in IO. That's it: you build an STM transaction, you give it to atomically, and it's atomic. retry is the last thing. retry basically means: something went wrong in my transaction, I want to rerun it. You are running a transaction and you see that some value is not what you want it to be for the transaction to proceed, so you call retry, and that means that
I abort everything I have done in this transaction and rerun it from the beginning. So that's TVar. Quickly, TChan: TChans are transactional channels; they are built out of TVars, and they have this API: newTChan, writeTChan, readTChan, and then finally orElse. orElse is the API for efficient merging. You can see in the signature that it just takes one STM operation and a second STM operation and does one of them, whichever finishes first. It is efficient, it does not launch threads, and it is atomic: only one of them will succeed, and the other will be as good as never run. So this is how you get efficient merging with transactional channels. Whether we have two transactional channels or n transactional channels, it doesn't matter: you just try to read from all of them using orElse, all at once, and only one of the reads will succeed, while all the others are left completely untouched. This is atomic, and this is efficient. So let's look at the code quickly. runClient has changed now. The very first change is that reading from the handle now gets a thread of its own. You can look at readCommand quickly: the code here basically forever reads from the handle and tries to parse the command, and if the parse succeeds, it sends the message to the client. That means I am reading from the handle, parsing the command, and pushing it onto the client's channel; remember, each client has a channel of its own, which is now a TChan, and I just send that message to the client's channel. So now the clients don't have to handle the handles: they don't have to deal with IO and the network operations. All they have to do is read from a bunch of channels. In the picture I showed at the beginning, all of those things on the left have now become channels, or even better, they are TChans, the transactional
So all the client has to do is merge all of these together, and we already know how to merge transactional channels: we just call orElse over them. After the run loop there is a bunch of code to clean up that thread, remove the client, and send a leave message to all the channels the user had joined; there is some cleanup code there. But what's interesting is the run code. The run code has been changed to remove the race; there is no race anymore. You can see the code here. What it does, atomically, is: it reads the client's channels (each client has a map of the channels it has joined, so I get the client's channel map), and then it folds over the client's own channel and all the channels the client has, all of them together, using orElse and a read. That means: try to read from any of these channels, and if you can't read from any of them, if none of them has data, then you retry, that is, you rerun the transaction. Only one of the reads will succeed, because you're using orElse, so you read from exactly one channel, atomically, and all the other channels are untouched. You don't lose messages, you don't have to create new threads to read from multiple sources, and it all works cleanly. And all you have to do to make it atomic is put it inside this atomically block; it magically becomes atomic.

The rest of the code is the same. When you read a message, you don't have to deal with the network handle stuff anymore; there is only the message left, so you just handle the message. Now, handleMessage: if you remember, the old code had code for handling each message. Let's see how you join a channel. When you join a channel, the server gets a message called JoinChannel, and handling it involves a bunch of steps; I've added comments for them.
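The run loop described here can be sketched like this; the names are illustrative, and a plain `String` stands in for the talk's message type:

```haskell
import Control.Concurrent.STM
import Control.Monad (forever)

-- Fold orElse over the client's own channel and every joined channel:
-- read exactly one message, atomically, from whichever has data. If all
-- of them are empty, readTChan's implicit retry blocks the transaction
-- until one of them gets a message.
readAny :: TChan a -> [TChan a] -> STM a
readAny own joined = foldr (orElse . readTChan) (readTChan own) joined

-- The client's loop: take one message at a time and handle it. No handles,
-- no network IO, just channels.
runLoop :: TChan String -> [TChan String] -> (String -> IO ()) -> IO ()
runLoop own joined handleMessage = forever $ do
  msg <- atomically (readAny own joined)
  handleMessage msg
```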
First you get the user's channels, and if the user has not already joined this channel, you get the server's channel map. If the server already has that channel, you add the user to it; otherwise you create a new channel with only this user in it and add it to the server. And then comes the interesting part: you duplicate the channel's TChan and add it to the user. This is another function TChan gives you: dupTChan. You can take a transactional channel and make a duplicate of it, which means that whatever comes in on the first channel will automatically be forwarded to the duplicate channel, and you can duplicate a channel as many times as you want. It's basically fan-out: you have one channel with data coming in, you duplicate it n times, and all the duplicates get the same messages from the point of duplication onward.

That's how the channels work: each channel has its own TChan, and when a client joins it, the client duplicates the channel's TChan and keeps a reference to it, so when the channel gets a message from one of the users, it is forwarded to all the users subscribed to that channel. So that's it: you duplicate the channel's TChan into the client's channels, you keep a record of it by putting it in the map, and then you tell everyone that this user has joined the channel. Now, the interesting part is that all of this code is wrapped in atomically at the top, so all of it is done atomically. That means that if two users join the same channel at the same time, one of them will have to retry, because you can't modify the channel's map from two transactions at once. So that's it, we're done: we have implemented channels and user-to-channel messaging, and we saw how it works. The leave and tell messages have similar implementations, and they are all again atomic.
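The dupTChan fan-out can be sketched as follows (names are illustrative):

```haskell
import Control.Concurrent.STM

-- Each duplicate sees everything written to the source channel from the
-- point of duplication onward; messages written before dupTChan was called
-- are not seen by the duplicate.
main :: IO ()
main = do
  chan <- newTChanIO
  sub1 <- atomically (dupTChan chan)
  sub2 <- atomically (dupTChan chan)
  atomically (writeTChan chan "hello channel")
  a <- atomically (readTChan sub1)
  b <- atomically (readTChan sub2)
  putStrLn a                      -- both duplicates receive "hello channel"
  putStrLn b
```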
Now let's talk for a few minutes about the details of STM, how STM works, at least in Haskell. As you read and write STM variables, the runtime does not actually commit anything to memory; it keeps a journal, a log of all your STM operations, and at the end of the transaction it tries to commit. Before committing, it checks that everything is consistent: if you read a variable, and someone else wrote to that variable so its value changed in between, it retries the transaction automatically. It will retry the transaction as many times as it takes to get a consistent set of transactional variable changes, and when everything is fine, it commits the transaction.

This means your STM transaction can run again and again and again; you don't know how many times it is going to run. And that means that if you do something very computation-heavy in a transaction, say one long transaction involving a variable a, and there are short transactions on the same variable, the short transactions will keep invalidating the long transaction again and again, and the long transaction may never finish. That's why you should never write long-running transactions; keep them as short as possible. (This talk's example is actually bad in that respect: in production code you wouldn't write transactions this big, you'd break them down.) So that's one thing. The second thing is that every write has to verify every read before it: before writing a variable, the runtime verifies that all the reads used to calculate that variable's value are still the same as before. That means transactions are actually O(n²) in the number of reads.
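One way to follow the "keep transactions short" advice is to do the expensive work outside atomically and only commit the result inside. This is a sketch under that assumption; `update` and `expensive` are made-up names, and `expensive` is a trivial placeholder:

```haskell
import Control.Concurrent.STM
import Control.Exception (evaluate)

-- If the heavy computation ran inside the transaction, short competing
-- transactions touching the same TVar could keep invalidating it, forcing
-- endless reruns. Forcing the result first keeps the transaction tiny.
update :: TVar Int -> Int -> IO ()
update tv x = do
  result <- evaluate (expensive x)   -- heavy work, done outside the transaction
  atomically (writeTVar tv result)   -- the transaction is just one write
  where
    expensive = (* 2)                -- placeholder for real computation
```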
So if you do a lot of reads, the transaction becomes slower and slower, and not by a constant factor but quadratically: if you do 2n reads instead of n, your transaction will be roughly four times slower. That's one disadvantage of using transactions. But there are a lot of advantages if you use them properly: they lead to very modular and very composable code. The good thing about transactions is that, unlike locks, they are completely composable. You can take as many transactional STM functions as you like and put them together to create bigger transactions, and even bigger transactions out of those. That leads to very modular code, and it is all atomic, obviously; that's the property of transactions.

So that's it: we have implemented all three features and gone through the details of these primitives. There are higher-level primitives too, and you should go through the code and the references if you want to know about them. Let's do a quick review of what we learned in this talk. Threads are cheap: don't be afraid to launch new threads, launch as many as you want, the runtime cost is very small. IO is non-blocking: you don't need event loops, you don't need callbacks, you don't need to contort your code to do IO efficiently. Write small components which talk through channels: break your code into small, modular components which don't know about each other, which only know what to read from which channel and what to write to which channel, and connect them over those channels. That makes your code much more modular; it is basically dataflow programming. Write small functions in STM and compose them to create bigger functions; that gives you a very modular code structure. And finally, structure your programs like assembly lines.
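The composability point can be illustrated with the classic account-transfer sketch (illustrative names, not from the talk's chat server):

```haskell
import Control.Concurrent.STM

-- Two small STM functions...
withdraw, deposit :: TVar Int -> Int -> STM ()
withdraw acc n = modifyTVar' acc (subtract n)
deposit  acc n = modifyTVar' acc (+ n)

-- ...compose into one bigger transaction: no thread can ever observe the
-- withdrawal without the matching deposit. Lock-based code does not
-- compose this way.
transfer :: TVar Int -> TVar Int -> Int -> STM ()
transfer from to n = withdraw from n >> deposit to n

main :: IO ()
main = do
  a <- newTVarIO (100 :: Int)
  b <- newTVarIO (0 :: Int)
  atomically (transfer a b 30)
  balances <- atomically ((,) <$> readTVar a <*> readTVar b)
  print balances                   -- prints (70,30)
```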
Structuring your programs like assembly lines is a very interesting thing you can do in functional programming that is difficult to do otherwise. There are processing functions reading from and writing to channels, each doing just its one thing in some kind of forever loop, and the processing functions are connected with channels, like conveyor belts. These could be simple channels or bounded channels (there are bounded channels in Haskell), transactional channels, or transactional bounded channels. Lay your program out as an assembly line and it becomes very modular; you can add new processing functions at different places. And the channels can be duplicated and merged, at least the transactional ones, which gives you a very good way to construct even complicated assembly lines.

So that's all. These are the references you should go through if you want to learn more: Parallel and Concurrent Programming in Haskell by Simon Marlow, and Real World Haskell, which is somewhat outdated but still pretty good. The first link is where you will find this code, which is a little more fleshed out than this example; the second is the URL of this presentation itself; that's me on Twitter, and that's me. Okay, I think we can take questions.

Audience: These are all primitives for single-machine concurrency, right?

Speaker: Right. Haskell is actually not yet very good at distributed concurrency; they are still working on it, it should get there in a couple of years.

Audience: These channels all have transactional behavior, but what about other functional languages? Does Erlang support atomic operations at the level Haskell does?

Audience: In terms of atomic transactions, there is no intrinsic atomic transaction support in Erlang in the same way. What you tend to do is have a process representing your atomic data, and you send messages to it, and it essentially serializes all the operations.
It serializes them there, and that effectively makes them atomic, I guess. It does get a little more tricky if you're trying to do something like read a value, act on that value, and write it back; you can do that in an Mnesia transaction, but something like ETS doesn't allow it. So Erlang doesn't really provide that by default, and it doesn't for a very particular reason, the same reason Haskell is currently having difficulty doing distributed computation: doing atomic operations over a network is effectively impossible to do in a general way. You have to choose your constraints and then implement it yourself. Erlang is designed natively to exist in a networked system, so it doesn't come with those atomic guarantees by default, but you can build them.

Audience: Since you talk about transactional channels: I'm fairly familiar with the core.async way of doing things, and we don't talk about transactional channels there. Is it actually doing transactional stuff under the hood, or is this completely different?

Speaker: core.async is different. If you have seen the core.async code and the introduction video, it's macro-based, it translates your code... I'm guessing you're talking about the go loops, the blocking versus non-blocking reads, not the channels themselves? The point was that to read from multiple channels without discarding or missing some of the messages, you need transactional channels. core.async runs in two modes: it can be backed by real threads, or it can run in a CSP sort of mode, where there are multiple processes that communicate over channels but everything runs on a single thread. In the first case the channels are actually backed by Java's blocking queues and the threads would just block; I'm not sure exactly how it does it, but it probably polls or waits on multiple channels at once.
If you're running it in the CSP sort of fashion, then the macros transform the whole of your code into a state machine and step through it: the machinery knows when a message has been sent and advances the state accordingly. So it's not exactly the same as transactional channels, but you get similar behavior in Clojure. That's all the time I have; thank you all.
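The assembly-line structure the talk recommends can be sketched as a small pipeline of stages connected by TChans (all names here are illustrative, not from the talk's code):

```haskell
import Control.Concurrent (forkIO)
import Control.Concurrent.STM
import Control.Monad (forever)

-- A processing stage: forever read from an input channel, apply a
-- function, and write to an output channel, like a worker on a conveyor
-- belt. Stages know nothing about each other, only about their channels.
stage :: (a -> b) -> TChan a -> TChan b -> IO ()
stage f inp out = forever $ atomically $ do
  x <- readTChan inp
  writeTChan out (f x)

main :: IO ()
main = do
  raw     <- newTChanIO
  parsed  <- newTChanIO
  results <- newTChanIO
  _ <- forkIO (stage length raw parsed)      -- stage 1: String -> Int
  _ <- forkIO (stage (* 10) parsed results)  -- stage 2: Int -> Int
  atomically (writeTChan raw "hello")
  r <- atomically (readTChan results)        -- blocks until the line delivers
  print r                                    -- prints 50
```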