in a product called the smart questioning system. So this presentation is mostly about the learnings from my current project, which I do at Ruckus Wireless, and how they have helped us; I'll try to summarize some of them at the end. If you have any clarifications, anything, please do stop me. So the agenda for today: a brief, rough overview of the current situation of concurrency and the different models available, the ideas which have influenced the Celluloid Ruby framework, a bit of Celluloid code, how we write it and how it derives some of its ideas from Erlang, and some learnings from my project. That's the agenda. I was trying to come up with different ways to represent the current situation, and it was easiest for me to put it in a picture: when there's a shared context, there will obviously be contention between two different entities, which means you have to write a lot of careful code to make them work together. And if there's more than one shared context, it becomes an even bigger problem where you have to synchronize everything. This is where all your locking and mutexes come in, which can lead to a lot of developer effort spent understanding the code, writing it the right way; testing is difficult, debugging is difficult. So that is where the situation is, and many mainstream languages work this way. Slowly, new concepts have been coming in, for more than maybe 10 years now, into the mainstream. One such idea is the actor model, which we'll talk about today. Apart from locking, coming from a Java background, there are a lot of things I have seen through the evolution of Java.
Synchronized locks, reentrant locks, stamped locks: a lot of different things came in, basically to improve the concurrency story. One such technique I wanted to discuss is compare-and-swap (CAS), which is optimistic concurrency: it allows two different threads to run without any locking, and an update fails when it detects a change compared to the old state it read. CAS is supported at the microprocessor level; there are specific instructions for it on many processors, and Java uses these primitives in its atomic classes. This has been one of the important building blocks. The next technique is balancing read versus write parallelism, because we always try to strike a balance depending on which one we need more. One example is the copy-on-write list: reads are never locked, so reads are fast when different threads access the list, while a write takes a copy and then replaces it as an atomic reference using CAS. These kinds of concepts have slowly been evolving. One of the most important ones is software transactional memory (STM), which comes from Clojure (and also exists in Scala), a well-designed system for concurrency. As the name suggests, it is like a transactional system for memory, taking concepts from databases. It also uses ideas from CAS in how refs are implemented, along with atoms and agents; Clojure provides several good constructs here, which I would recommend looking into. Comparing this with actors, there are different perspectives: Rich Hickey himself has explained his reasons for choosing STM rather than the Erlang approach, which from his point of view is oriented more towards distributed programming. Actors we will go through in the coming slides.
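The CAS retry loop described above can be sketched in plain Ruby. This is only an illustration of the semantics: real CAS is a single atomic CPU instruction, whereas here a Mutex simulates the atomicity of `compare_and_set` so the optimistic read-compute-retry pattern can be shown; the `AtomicRef` class name is made up for this sketch.

```ruby
# Sketch of compare-and-swap semantics. A Mutex stands in for the atomic
# CPU instruction so the optimistic retry loop can be demonstrated.
class AtomicRef
  def initialize(value)
    @value = value
    @lock = Mutex.new
  end

  def get
    @lock.synchronize { @value }
  end

  # Atomically set to new_value only if the current value is still `expected`.
  def compare_and_set(expected, new_value)
    @lock.synchronize do
      return false unless @value == expected
      @value = new_value
      true
    end
  end
end

counter = AtomicRef.new(0)

# Optimistic update: read, compute, and retry if another thread won the race.
threads = 4.times.map do
  Thread.new do
    1000.times do
      loop do
        old = counter.get
        break if counter.compare_and_set(old, old + 1)
      end
    end
  end
end
threads.each(&:join)

puts counter.get  # => 4000
```

No increment is ever lost: a thread that loses the race simply re-reads and retries, which is exactly the lock-free pattern Java's atomic classes build on.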
I did want to mention CSP, Communicating Sequential Processes, which is very, very close to actors; the only difference is how they synchronize, the communication medium. If you get the idea of actors, it's very easy to get the CSP concept. So why actors? Three generic reasons. First, the usual selling point: the number of CPU cores is increasing, which means we will be writing more concurrent code going forward, and we can't handle that complexity by hand; we are not good at it, and it needs to be abstracted. One comparison I can make is garbage collection. We used to do our own manual memory management, and we know how many issues we faced with crashing systems. If there's an abstraction for concurrency as well, where the system takes care of it and we just use the right constructs, the program largely takes care of itself. There are still possibilities for your system to go into a deadlocked or wrong state, but the effort spent debugging those kinds of issues is reduced because we are not hand-rolling the basics. With that abstraction in mind, like with GC, it becomes very easy to develop and very easy to test, and for most things you don't have to worry, because the abstraction is provided by the language ecosystem itself. These three points apply to STM and the rest as well, which makes for a good comparison. The last two points are specific to actors. Actors mostly represent an object; there is no difference except that they communicate with each other only through messages, that's it.
And they are very good for distributed computing: a lot of applications, like WhatsApp, those kinds of real-time applications, use these concepts, let it crash, let things fail, and build more reliable distributed systems on top. I don't have much experience with that part myself, but conceptually actors are well suited to distributed computing. So how do I describe the actor model? Actors are computational objects. We want to avoid shared state, but we don't want to take locks either; still, the state has to live somewhere, and state usually goes with some actions around it, right? So there has to be a computational object which wraps the state and performs all the actions on it; that's why actors are defined as computational objects. Since each object is responsible for reading and updating its own state, there is no shared state between two different objects. It'll be clearer when I go through an example. They communicate purely by asynchronous message passing; everything is event-driven: I send a message, I get a reply back. One other important point: since an actor is a computational object running as a separate entity, and there will be a lot of interaction between these entities, they have to be lightweight processes in the system, so that you can launch a lot of instances and the virtual machine, or whatever system runs them, can support that scale. This is one of the important things. Some of the systems built on the actor model: WhatsApp, RabbitMQ, CouchDB. OK, so now I just wanted to compare with Erlang.
So this is how processes work in Erlang. When we launch the Erlang VM, you can have multiple actors in the system; these are again computational entities, as I said, so for now let's call them different actors. Each actor has to communicate with other actors, and since the state is tied so closely to the actor, nobody can access the state except by contacting that actor. To contact an actor, you need its pid, an identifier by which it can be reached. When we contact another actor, the message goes into a mailbox, a queue which holds all its incoming messages, so the actor can handle them and react based on the input that comes in. That's how it works: every actor can send a lot of messages, and based on the messages it sees, it can respond back. And these are lightweight processes, not actual Ruby threads or OS-level threads; they are lightweight tasks provided by Erlang, which acts like an OS: it has its own processes and its own scheduler to schedule them, running on top of a pool of threads. The underlying layer is still OS threads, but the wrapper layer around them makes context switching far cheaper than it is for raw threads. Basically, an actor can create new actors and send and receive messages; that's all it can do.
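The mailbox idea just described can be sketched in plain Ruby. This is not how Erlang implements it (BEAM processes are far lighter than the OS thread used here); it only shows the shape: private state, a queue as the mailbox, and a single loop that processes messages one at a time. The `TinyActor` name and message protocol are invented for the sketch.

```ruby
# A minimal actor: private state, a Queue as the mailbox, and one thread
# that processes messages strictly in order. No locks are needed because
# only the actor's own thread ever touches its state.
class TinyActor
  def initialize
    @mailbox = Queue.new
    @count = 0                       # private state; never shared directly
    @thread = Thread.new { run }
  end

  def send_message(msg)              # the only way to reach the state
    @mailbox.push(msg)
  end

  private

  def run
    loop do
      msg, reply_to = @mailbox.pop   # blocks until a message arrives
      case msg
      when :increment then @count += 1
      when :get       then reply_to.push(@count)
      when :stop      then break
      end
    end
  end
end

actor = TinyActor.new
100.times { actor.send_message(:increment) }
reply = Queue.new
actor.send_message([:get, reply])    # ask for the state via a reply queue
final = reply.pop
actor.send_message(:stop)
puts final  # => 100
```

Because the mailbox serializes everything, the `:get` is guaranteed to run after all 100 increments, with no mutex in sight.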
So one example I can give: assume a store with different buyers, a hundred or a thousand of them, who try to buy something by sending a message: "I want to buy this item." Because the state lives with the store, the store receives the messages into its queue and processes them one by one; the state is never manipulated concurrently. It takes the first message and replies with an item-received response; if the quantity is in stock, the buyer gets a success reply, the store updates its list, and the buyer updates its own state. A second buyer can do the same thing, and if the item is now out of stock, the store may respond that there is no stock and refuse the purchase. All of this is asynchronous from end to end. The synchronous way of doing the same thing is the CSP approach, where you can model the same interaction; but from an Erlang and actor-model perspective, this asynchronous style is the core idea. So when I think about how the actor model works, this is the example I give: no locking whatsoever. That's the appeal, right? And you can find a lot of diagrams like this. The other aspect Celluloid takes from Erlang is the let-it-crash model: instead of writing the system so that you defensively protect it with a lot of error handling, you let it crash and then recover itself. It avoids a lot of boilerplate code and makes your system more reliable as well. I can share the experience from my project near the end; we have solved real issues using the let-it-crash model. Next I wanted to show some Elixir, partly to show how message passing works and partly because it's very close to Ruby.
So this starts the Elixir application, and the first thing it does is spawn another process; these are lightweight processes, process one and process two. Once the second process reaches its receive block, it is ready to receive messages; until then it does nothing, because receive is a blocking call. The first process then starts sending messages, say a "ping". Sending is asynchronous, so the message sits in the receiver's mailbox, and then pattern matching kicks in: the receive block matches the pattern and handles it, that's it. After the first message is processed, the sender sends a second message saying shut down. What happens after the first message is that the receive block has completed, and it can only take one message, so the function calls itself again, going through tail recursion so that it can receive again. This does not cause a call-stack explosion, because Erlang performs tail call optimization; in Ruby this would be a problem. So it goes back into receive mode, picks up the second shutdown message, and the loop ends. That is basically how the whole receive style works in Erlang. Elixir can do much more than this; I just wanted to show the messaging model, because Celluloid takes a different approach to it, it does not work this way. Celluloid is a Ruby framework for actor concurrency, and it is not very similar to what we just saw in Elixir. It calls its concurrent objects cells, and every actor has a thread associated with it. The way we interact with the object is through proxy objects; there are more internals, but I think we can leave that for now.
So what Celluloid promises is deadlock-free synchronization, so we don't have to worry about the things we just discussed, and it also adopts the fault-tolerance ideas from Erlang, the let-it-crash model we were just talking about. (There are other libraries in this space too, like concurrent-ruby.) The typical Ruby way of writing this would be to take a mutex and synchronize all the code, so I wanted to show the Celluloid way with a basic example. What we do is declare `class Minion` and just include Celluloid, and it acts like a normal object, no difference. When I call `new`, even `new` is intercepted so Celluloid can create a proxy object for us; it automatically turns this class into an actor and associates a thread with it. Then I just call a method on the actor, and that is a synchronous call. This is where it differs from the Erlang style, because this object-style interface is more suited to a language like Ruby. Celluloid also provides a flavor where you can receive raw messages, but that is not the main interface. Next you can see the asynchronous variant of the same call, and there can be a bug in this code. Can you think about what the expected output might be? With the asynchronous call, the main thread can complete before the actor does its work, so there's a possibility the output never appears, which is a non-obvious thing. The next example is about state: since a cell is a normal object, you can keep state in such objects and do whatever you want with it through synchronous calls.
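The async-call bug discussed above can be reproduced without Celluloid at all, since an async call is essentially fire-and-forget work handed to another thread. The `async_greet` helper below is a made-up stand-in for something like Celluloid's `actor.async.greet`, just to show why the main thread must wait if it wants the output.

```ruby
# The race from the slide, in plain Ruby: an "async" call hands work to
# another thread, so the main thread can finish first and the greeting
# may never be printed unless we explicitly wait.
log = Queue.new

def async_greet(log)
  Thread.new { log.push("hello") }  # fire-and-forget, returns immediately
end

t = async_greet(log)
# Without this join the process could exit before "hello" is logged;
# that is exactly the non-obvious bug in the async example.
t.join
greeting = log.pop
puts greeting  # => hello
```

In real Celluloid code the equivalent fix is to keep the process alive (or use a synchronous call or future) when you depend on the async work having finished.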
So this is a different variation. The last example was not supervised, if you noticed; it was just an actor. Celluloid also introduces supervision, in the Erlang way of doing it: you can supervise a particular actor so that if it crashes, it will automatically be revived. All you do is call supervise and give it a registered name, and it starts up and keeps running. In this example, I create 10 threads and call add from each of them, just to show how it behaves. This is not a very typical example, because Ruby runs on a GIL and so on, so we might see the same output as plain Ruby; but Array is not thread-safe in Ruby, so we might get wrong results as well, and at the bottom you can see we do get wrong results. Any questions?

Audience: Is there a guarantee that you get wrong results?
Yes.
Audience: If you've got two threads calling add, add, add and there's contention, how does it get scheduled?
So it's the typical mailbox; it's just a queue, that's it. All of that abstraction is inside the Celluloid framework: each actor has a mailbox, all of your messages go into the mailbox, and it executes them one by one. That's why it has a thread associated with it. If you look at the earlier examples, we always kept the interaction sequential; the calls were synchronous communication, not asynchronous. Celluloid wanted to support this sequential, synchronous communication alongside the asynchronous way of doing everything, because pure asynchrony may not fit the bill. The way they did it is based on a paper about an actor system done for Python.
They pipeline calls so that these kinds of synchronous calls between actors can work; that's the idea. I'll show you an example of how that prevents blocking. Celluloid also supports an exclusive mode like Erlang's, where nothing can interrupt the actor: in Erlang, when an actor is waiting in a receive block, you can't do anything else with it, you can't interrupt the actor to do something else. But by default, Celluloid allows you to interrupt an actor to do something else. To give an example: we have two actors, actor one and actor two. We call a perform method on actor one; from inside it, actor one calls a perform method on actor two; and actor two, in turn, calls back into actor one again. What do you think might happen in this code? One possible outcome is that we get blocked, right, because it's a round-trip communication. But Celluloid is designed so that they don't get deadlocked, and it prints the output perfectly fine. Consider the sequence: we start actor one, it does its work and calls actor two, so actor two's thread starts working; meanwhile actor one is waiting for actor two's response. Actor two performs its action and then sends another message back to actor one. At this point Celluloid intelligently understands there is another call coming in, so it suspends the current waiting task and services the incoming call. That's how it avoids deadlocks in these cases. But how do you think we can achieve this in Ruby?
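The suspend-and-resume trick that makes this possible is Ruby's Fiber, which the next section covers; here is a minimal sketch of the mechanism itself. A fiber saves its whole call context at `Fiber.yield` and continues from exactly that point on the next `resume`, which is how a task can be parked mid-flight on a thread while the thread does something else.

```ruby
# Fiber.yield suspends a task and saves its context; Fiber#resume
# continues it exactly where it left off. This is the mechanism that
# lets an in-progress task be parked while another call is serviced.
log = []

fiber = Fiber.new do
  log << "step 1"
  Fiber.yield          # suspend here; control returns to the caller
  log << "step 2"
end

log << "created"        # nothing inside the fiber has run yet
fiber.resume            # runs the fiber body until the yield
log << "resumed once"
fiber.resume            # continues after the yield
p log  # => ["created", "step 1", "resumed once", "step 2"]
```

Unlike a thread, nothing runs until the first `resume`, and the caller fully controls when the suspended work continues.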
Any ideas how we can do this, suspend a task that's running on a thread and then use the same thread to run another task? Yes: Ruby provides fibers, which are the key here, and Celluloid uses fibers to do this. In this example, when you create a fiber it does not run anything; the first time you resume it, it runs and prints, and when it decides it can stop, it calls yield. That suspends execution, saves its context, and returns control to the caller; whenever you want, you can resume it again. This is basically how Celluloid implements it: you can see the fiber tasks created by Celluloid, and a lot of related internals. Another feature that was interesting for us was supervision groups. You can supervise one particular actor, or you can supervise a bunch of actors together; that is called a supervision group. Here I've used a different example with pools: instead of one actor, you can declare a pool of actors, say a pool of minions of size two. It allocates threads based on the size we pass, so it takes two threads here and initializes the two minions. When I run this, it creates the supervision group with the two initialized minions, and then I assign five tasks to it by calling the action five times. The same threads get reused, because each minion has one thread, which means only two threads are allocated.
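The pool behavior described above, five tasks spread over two workers with threads reused, can be sketched in plain Ruby. This is not Celluloid's pool implementation, only the underlying idea: a shared queue feeding a fixed set of worker threads, so each task is routed to whichever worker is free.

```ruby
# Sketch of a worker pool: one shared Queue feeding a fixed number of
# worker threads, the same idea as a Celluloid pool of size 2.
POOL_SIZE = 2
tasks = Queue.new
results = Queue.new

workers = POOL_SIZE.times.map do |i|
  Thread.new do
    # Each worker pulls from the shared queue until told to stop.
    while (task = tasks.pop) != :stop
      results.push("worker #{i} ran task #{task}")
    end
  end
end

5.times { |n| tasks.push(n) }         # five tasks, only two workers
POOL_SIZE.times { tasks.push(:stop) } # one stop marker per worker
workers.each(&:join)

puts results.size  # => 5
```

With five tasks and two workers, the same two worker ids necessarily repeat in the results, which is exactly the thread reuse visible in the slide's output.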
However many tasks you send to this group of minions, it knows it's a pool, so it routes each task to one of the workers; that's why the threads repeat here. This is another way of handling a pool of workers. And if you look at this particular example, every five seconds it raises an exception; this is done with a timer, so you can schedule tasks on the same actor, since tasks can be interleaved. So it raises an exception every five seconds, and the actor automatically gets revived each time, because it's running inside a supervision container that manages this for us. We don't have to worry about exceptions; we can drop most of our rescue blocks, we don't have to catch them ourselves. Other Celluloid features I can think of: you can link two actors together so that one fails when the other fails, those kinds of things. Futures are available too, for a more asynchronous way of working. There are timers, like the every-five-seconds example I showed, there are finite state machines, and you can also work with low-level signaling, though they don't recommend that in Celluloid because it can cause deadlocks easily. So, the last two slides: the project learnings. We have a lot of Ruby processes, like Resque processes, in our project, where we work with queues, Redis, and other things. Sometimes the connection to the queue gets lost, and sometimes a process dies. Using Celluloid's helpers to clean all that up let us remove all those exception handlers; I mean, literally, we removed a lot of code because of this. In our case we had a pool of analysts: it creates the worker processes, and these can be blocking or non-blocking, whatever you need.
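The automatic revival described here, an exception kills the worker and supervision brings it back, can be sketched without Celluloid. This is only the shape of the let-it-crash idea under stated assumptions: the `supervise` helper and the transient first-attempt failure are invented for the illustration, whereas Celluloid's real supervisors restart whole actors.

```ruby
# Sketch of let-it-crash: a supervisor loop restarts a crashed worker
# instead of the worker defending itself with rescue blocks everywhere.
results = Queue.new

worker = lambda do |attempt|
  raise "transient failure" if attempt == 1  # first run crashes
  results.push("work done on attempt #{attempt}")
end

def supervise(max_restarts)
  attempt = 0
  begin
    attempt += 1
    yield attempt
  rescue
    retry if attempt < max_restarts          # revive the crashed worker
    raise                                    # give up after too many crashes
  end
end

supervise(3) { |n| worker.call(n) }
outcome = results.pop
puts outcome  # => work done on attempt 2
```

The worker stays free of defensive rescue code; recovery policy lives in one place, which is what made it possible to delete so much exception handling in the project described above.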
Since the queue blocks and waits for messages, when a message arrives it calls these processors to do their work and come back; and if any of them dies, we can let the whole chain crash and come back again. All of those things were really helpful for us. Based on this we have a lot of exception-free code, and some of the cases where we had temporary process crashes became self-healing. Especially when we have to restart some of these queues or Redis, we don't have to worry anymore, because if a worker crashes it will revive itself. That gives us a lot of reliability, especially on production systems. And it was easy to write; most of the examples, as you've seen, are very easy to write with Celluloid, though there is some learning curve. Caveats: Ruby has the GIL, so we can't get true parallelism even though there are multiple threads. And an actor-per-thread model can't scale the way Erlang does, because you can't have huge numbers of lightweight actors; that's just not possible here. We tried JRuby and Rubinius; Rubinius was a horrible experience, JRuby was better, but still there was not a big difference for us. We did not gain much, and part of the problem is that JRuby does not have a native fiber concept; there have been attempts to implement it in libraries, but probably because of that it did not work out well. There's also some learning curve, and the library was not that mature in some aspects, particularly the crash handling. So we did a lot of testing, our own spikes, to make sure it does not crash or come down; we added some delays and other workarounds, once we understood the library better, to make it work for us. Further topics, just to list: Reel is a web server based on the Celluloid framework; Spray is another actor-based server; and Elixir has CSP-like ideas as well.
And Akka is also getting popular for actor concurrency, and Quasar is the one we were talking about on the JVM side, lightweight processes for the JVM. And also worth looking up: the difference between CSP and actors. Some of the references: this is the Rich Hickey piece I was talking about, his reasoning on how he chose his concurrency approach, which is a good read. Most of the examples I have here are from Celluloid's examples project, and we did a lot of crash experiments in this project with Celluloid. Thank you, thanks for your time.

Audience: In Celluloid, your supervisor runs in a thread as well; it runs in a single thread. So is there anything that supervises the supervisor? What if the supervisor crashes?
The supervision group is another supervisor on top of your supervisor; you can also have a supervision group within a supervision group.
Audience: But out of the box, when you spin up a supervisor, does it automatically have a supervisor that supervises it?
No, the supervisor itself has a thread.
Audience: Right, but if that particular thread dies, doesn't that bring down the whole thing?
Then all its actors die as well. So that's why...
Audience: Can you have two supervisors watching each other, so if one dies, the other spins it back up? Two managers: if one manager dies, can the other manager spin the first manager back up?
You can; it's technically possible, because you can link actors, and because there's a registry. You saw how we access actors from a registry: the registry knows all the actors in the system, which is how Erlang also works. That means you can reach any actor in the system even though it's registered as part of a particular group.
Which means you can link to a particular actor in another supervision group and relaunch it. That's how it works. Yes?
Audience: (inaudible question about priority)
Correct. Yes? No, there's no priority; if a thread is free, it just takes the task. If you look at the output, some threads repeat more than others: whenever the pool finds a thread free, it just schedules the task on it. It queues, that's it.
Audience: Can you look at what's queued?
Everything is a mailbox, really; even a pool you can think of as a mailbox, with some differences. It's just an abstraction where all the messages are queued, and the free threads pick up and work on those actions, that's it.
Audience: Does it know whether a thread is free or not?
It handles that automatically. In this case, we did not do anything; we just called the action, that's it. Celluloid schedules that particular task; we don't have to worry about which thread does it.
Audience: How many...?
You can scale it based on the number of workers. In some project cases, where you want to handle more load, you can increase the pool size. Maybe I'll put an example up somewhere, but we can do that.
Audience: Sorry, just testing my understanding: this sort of seems like a queue, right? A message ends up in a queue, and then you have your pool of workers who pull from the queue and process it. And you mentioned green threads; is that how it's implemented? It's not even the old Ruby green threading, right?
So when you say old Ruby green threads, like the green threads of the old implementation: see, it's not equivalent to Erlang either, because Ruby has no concept of VM-scheduled lightweight processes, which is what green threads would give you. That's why they use fibers and other things to simulate the behavior.
I don't know, maybe we could try implementing fibers as the lightweight processes in Ruby; I don't know how that would work out. But currently they don't do that, because a thread is always tied to one particular actor. Which is not the case with Erlang: Erlang has a pool of threads, all of the processes look the same to Erlang, and it uses those threads to schedule a task, stop it, bring it back, something like that.
Audience: So why would we do this? Why would we use this pattern?
Which pattern? OK, two aspects, right? One is the failure aspect, which you'll have understood: you don't have to worry about all the handling of exceptions and other things. The second part is when you have to deal with state where you would otherwise lock: you don't have to do that, because the actor acts like a normal object and takes care of the concurrency part for you. You don't have to worry about how to lock, when to lock, whether there are two different mutexes, how to handle those sorts of scenarios. This might not be the case for a typical project. That's why, if you look at the examples, most of them are around chat servers, WhatsApp, those kinds of systems; this might not be the case for all the different things we do, and that's where STM and those kinds of approaches might fit perfectly. CSP-style concepts, as in Elixir, open these ideas up again, but not in Ruby as of now.
Audience: Yeah, I guess a lot of my parallelism problems are multi-machine: multiple machines trying to synchronize, making sure I don't do conflicting things to the same record from different machines. Because we have the DB as our concurrency control.
All right, do you have any other questions? You can also catch me afterwards. Thank you.