So it is time to get started. This is Inside Active Job, in the Beyond the Magic track. My name is Jerry D'Antonio. Let's get started. First, a tiny bit about me. I live and work in Akron, Ohio. If you're an NBA fan, you've probably heard of Akron: there's a local kid, a basketball player, who's done pretty well for himself in the NBA. I went to school about 10 minutes from where I live. I work at Test Double. You may have heard of Test Double: Justin Searls, one of our founders, was on the program committee for RailsConf, and he's speaking tomorrow. Test Double's mission is to improve the way the world builds software. I know that sounds very audacious, but we truly believe that every programmer has it in themselves to do that, and I believe every person here has it in themselves to do that; that's why you're here. It's been a great company to work for, and I'm very proud to represent Test Double here. Personally, my biggest claim to fame lately is that I created a Ruby gem called Concurrent Ruby. You may have heard of Concurrent Ruby, because it has started to be used in some very well-known projects, for example Rails. Concurrent Ruby is a dependency of Action Cable, and in Rails 4 and Rails 5 it's used by Sprockets. It's also used by gems like Sidekiq and Sucker Punch, by Elasticsearch and the Logstash utilities, and by the Microsoft Azure Ruby tools. Much of what I'm going to talk about today draws on my experience from that, but this is not going to be a sales pitch for Concurrent Ruby. This is going to be about Active Job and Rails itself. Because this is the Beyond the Magic track, this is not an introductory topic; it's a deep dive into the internals of Active Job. So I've had to make a couple of assumptions: basically, that if you're here, you've used Active Job, probably in production.
You've used one of the supported job processors, and you have some understanding of concurrency and parallelism. If you need a better introduction to Active Job itself, I highly recommend the Rails guides; they are excellent and provide a lot of great information. If you need an introduction to concurrency within Ruby itself, shameless plug: I gave a presentation last fall at RubyConf called "Everything You Know About the GIL Is Wrong." That video is available on YouTube and can serve as an introduction. So with that, let's jump in: what is Active Job? In order to get into the internals, I need to briefly remind us what it is and where it came from. According to the Rails guides, the definition is this: Active Job is a framework for declaring jobs and making them run on a variety of queuing backends. Jobs can be everything from regularly scheduled cleanups to billing charges to mailings: anything that can be chopped up into small units of work and run in parallel. A couple of key terms there. It's a framework. We're going to talk more about this, but asynchronous job processing pre-existed Active Job. There were things like Backburner, Delayed Job, Que, Resque, Sidekiq, Sneakers, and Sucker Punch. Many of these existed before Active Job was created, and Active Job came along as a way of unifying them. Active Job helps us schedule tasks to be run later. That was mentioned briefly this morning in the keynote: when you don't want to block the currently running web request and you want something to happen later, you use Active Job to make that happen. That can happen through what we call ASAP processing, meaning "get to this as soon as you can," or by scheduling the job for a later date and time. This also allows us to support full parallelism. Some of the job processors are multi-threaded. Many of them, however, are actually forked, which I'll talk about more.
They can run multiple processes on the machine and scale across multiple processors, and in some cases across multiple machines. Okay, so the impetus for Active Job is that background job processors exist to solve a problem. We have these long-running tasks that we don't want to block the web request, because we want to send a response back to the user and get the page rendered for them; some of these tasks can then occur after that. For example, if I'm sending an email, that email takes time and is asynchronous to begin with. Why should I block the web request to make sure that email sends, when I can send the response back and have the email go out shortly thereafter? Active Job supports that, and the processors behind it support that. Like I said, and this will be important when we get into the internals, Active Job came later. There were all of these job processors, each one unique, and they all did virtually the same thing. They had slightly different capabilities and went about it differently, but they all solved the same problem. So Active Job was created to provide a common abstraction layer over those processors that allows the Rails developer to not worry about the specific implementation. If this sounds familiar, it's not dissimilar to what Active Record does. Relational databases existed; Active Record created an abstraction layer over them that allows us to use different databases, switch between databases if necessary, and, most importantly, run different databases in test, production, and development. Active Job does the same thing. It provides an abstraction layer that lets us choose different processors, change processors as our needs change, and run different processors in test, development, and production. And Active Job had to do that while supporting the existing tools people were already using.
So, according to the Rails guides, picking your queuing backend then becomes more of an operational concern. You as a developer don't care which backend is being used; you simply write the jobs and then use whichever backend makes the most sense in whatever environment. Because we're going to be looking at some code, I want to briefly remind us what the code looks like for Active Job before we jump into the internals. This is a simple job class, and it should look familiar to everybody. The important things are that this class extends ActiveJob::Base and that it has a method called perform. Most of what Active Job does is encapsulated in the ActiveJob::Base class, which will eventually, as we look through the details, call this perform method on an object of this class when the job actually runs. We'll look at those details. As a reminder, the way we configure our backend is with the config.active_job.queue_adapter option in our application.rb. Now, the adapter we're going to build here, which I'm calling Inside Job, is a real adapter that is functional. All of the adapters supported by Rails have a symbol that follows normal Rails inflections and maps the adapter name to the value you set in the config. So if Inside Job existed as a supported adapter in Rails, this is how you would set it. That's how you configure which backend you want to use. Then, later, when you want to actually do something asynchronously, you call the perform_later method on your class, passing it one or more parameters. That should look familiar to everybody. And if you want to schedule the job for a certain time, you can use the set method to specify when; there are a number of different ways you can do that. So that's just a reminder of what we see on the front of Active Job. All of that should look familiar to everybody.
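Since the slides aren't reproduced here, this is a runnable sketch of the surface being described. Because this file runs outside Rails, a minimal stand-in for ActiveJob::Base is stubbed in (the stub just performs the job inline instead of deferring it); the class name and arguments are invented for illustration.

```ruby
# Minimal stand-in so the sketch runs without Rails. Real Active Job hands
# the job to the configured queue adapter; this stub performs it inline.
module ActiveJobStub
  class Base
    def self.perform_later(*args)
      new.perform(*args)
    end
  end
end

# The kind of job class the talk describes: extend the base class and
# define a perform method that Active Job will eventually call.
class GuestCleanupJob < ActiveJobStub::Base
  def perform(guest_id)
    "cleaned up guest #{guest_id}"
  end
end

# In a real app you would pick the backend in config/application.rb:
#   config.active_job.queue_adapter = :inside_job
#
# Enqueue for ASAP processing:
GuestCleanupJob.perform_later(42)
#
# Or schedule it for later with set (not stubbed here):
#   GuestCleanupJob.set(wait: 1.week).perform_later(42)
```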
What we're going to talk about is what goes on behind that when you make this perform_later call. Like I said, we're going to build an asynchronous backend right here, during this presentation. One that actually works and is functional; it's minimal, but it will meet the requirements of Active Job and show us how this works. A couple of things, just to give a sense of where we're coming from. Like I mentioned, there are multi-threaded adapters and there are forked adapters. Multi-threaded adapters run your job in the same process as the Rails app itself, and there are a couple that do that. The advantage is that those can be very fast, and you don't have to spawn and manage separate processes. We all know that MRI Ruby does have some constraints with respect to concurrency, but it's not as bad as most people think; that's what I talked about at RubyConf last fall. MRI Ruby is very good at multi-threaded operations when you're doing blocking I/O, and most of the tasks we create background jobs for are doing blocking I/O: they're sending emails, they're posting things to other APIs. Since they tend to do blocking I/O, they tend to work very well with Ruby's concurrency model. So a threaded backend is simpler, because we don't have to manage separate processes. Many processors, however, do fork separate processes, where you have to run separate worker processes. Those give you full parallelism, but they require active management of those processes. For what we're going to build here, we're just going to do a multi-threaded one, because I can do that very easily and it will demonstrate all the things we're going to do, and we're going to use thread pools for that. Most job processors will also persist the job data into some sort of data store. Redis is very popular for this.
The reason for doing that is that if your Rails process exits, either on purpose or by crashing, and all of your job data is in memory, you're going to lose it and those jobs will never run. So generally speaking, for production you want a job processor that stores the job data in some sort of external data store, so it persists across restarts. We're not going to do that here, mainly because, for simplicity, I want to demonstrate what goes on in Active Job, and we don't have to go to that level of effort. Our job processor will not persist to a data store, which makes it good for testing and development, but we wouldn't necessarily use what I'm going to build here today for production. In order to do this, we're going to need three pieces. The first one is ActiveJob::Core. This is provided by Active Job itself, and it is the job metadata. I'll talk about this more, but it is the thing that defines the job you need to perform later on. It is probably the most important piece of all of this, because it's the glue that binds everything else together. The two pieces we're going to build today are the queue adapter and the job runner. Remember, Active Job came about after the job runners. The job runner is independent, and it provides the asynchronous behavior; the job runner actually exists as a separate thing. Sidekiq is a separate thing; Sucker Punch is a separate thing; you install those separately. The queue adapter's only responsibility is to marshal the job data into the asynchronous job processor. So the job processor provides the asynchronous behavior, and the queue adapter marshals between your Rails app and that job processor. Those are the two pieces we're going to build here today: the queue adapter and the job runner. For all of the job runners supported by Rails, the queue adapter is actually in the Rails codebase.
If you go to GitHub, go into the Rails codebase and look in Active Job, you'll see a folder of queue adapters, with one queue adapter for each of the processors Rails supports. There's also a set of unit tests as part of the Rails codebase that runs against every one of these job processors on every commit, ensuring that all of the supported processors meet the minimum requirements of Active Job. The one we're going to build today will actually pass that test suite once it's done. So strictly speaking, the Rails core team has responsibility for the queue adapters and for that test suite, but I know from experience that the people who create the job runners themselves work very closely with Rails to make sure those adapters are up to date and work well with the processors. So let's stop and talk about the ActiveJob::Core class. Like I said, this is the glue that ties it all together, and it's not obvious. This is the job metadata: an object that represents all of the information about the job you've posted. It carries with it the work that needs to be run; it carries things like the queue and the priority, which we'll talk about in a minute; and it carries all of that metadata. It provides two very important methods, which I'll talk about more in a minute: the serialize and deserialize methods. These are very, very critical. There are several attributes on this metadata object which we will look at and which are used internally within Active Job. These are not things that you as a Rails developer have to know about, but internally within Active Job they are very important. One of them is the queue name, which most of us should be familiar with. You can specify, when you create and post your jobs, which queue they should run against; if you don't specify, it's the default queue.
Priority: some job processors support prioritization, where higher-priority jobs run first. We're not going to support prioritization in ours, since that's optional, but the priority would be attached here as well. If you schedule a job to run at a specific time, you get an attribute called scheduled_at which tells you when; we'll look at that, because we are going to do scheduled jobs. The job ID is internal to Rails: a unique ID within the Rails instance itself that identifies each job, which Rails uses within Active Job to track each one. The provider job ID is one that you can provide from within your job processor. So if, within our job processor, we wanted our own kind of ID system that made sense for us, we could attach it to the job metadata under the provider job ID. Rails does not create that; we would create it ourselves. We're not going to use the provider job ID today because it's not essential, but it is available, and it's something we could add. All right, so let's actually build a queue adapter. We're going to go outside in. Like I said, the queue adapter is responsible for marshaling data into the job processor. The job processor is the more interesting piece, so we'll look at that in a minute, but we're going to start with the queue adapter, and we're sort of going to pseudo-TDD this. Most of the queue adapters were written when Active Job was created, because the job processors already existed and the adapters had to handle that marshaling. In our case, because we don't have a processor yet, we can decide what the API is going to look like. Within our queue adapter, we only need two methods; it's very simple. One is enqueue and the other is enqueue_at. The enqueue method takes that job object we looked at a minute ago and marshals it into our processor. The enqueue_at method takes the job and a timestamp and marshals those into our job processor.
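The two-method adapter just described can be sketched as follows. The InsideJob processor is the one the talk builds later, so a recording stub stands in for it here to make the sketch run standalone; the exact argument order is my assumption about what the slides showed.

```ruby
# Stub of the processor so the adapter sketch runs on its own. It just
# records the call; the real processor runs the job asynchronously.
module InsideJob
  def self.enqueue(job_data, queue_name)
    [:enqueue, job_data, queue_name]
  end

  def self.enqueue_at(job_data, queue_name, timestamp)
    [:enqueue_at, job_data, queue_name, timestamp]
  end
end

# The queue adapter: class-level methods (stateless, fire and forget) that
# marshal the serialized job into the processor.
class InsideJobAdapter
  class << self
    # `job` is the ActiveJob::Core object; serialize turns it into plain data.
    def enqueue(job)
      InsideJob.enqueue(job.serialize, job.queue_name)
    end

    def enqueue_at(job, timestamp)
      InsideJob.enqueue_at(job.serialize, job.queue_name, timestamp)
    end
  end
end
```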
So notice, in this case, I've decided to make the API very simple. We're going to create a thing called InsideJob with class methods enqueue and enqueue_at. We're going to pass the serialized job, we're going to pass the queue name, and in the case of enqueue_at, we're going to pass the timestamp. A couple of things to note. One, this is not very OO: these are class-level methods that we're calling on this class. I did that because I want to emphasize the stateless nature of this; this is very critical to understand. Active Job is by its nature stateless. The state for your job is encapsulated in that job object: all of the metadata about the job, everything related to that job, all of your state is in the thing we're passing through. The queue adapter itself is inherently stateless. Notice we even call a class-level method when we post the job, because we're sending this thing off to happen later on. It's fire and forget. We're not creating anything that's going to be persisted, and in fact, any kind of stateful behavior here would be potentially thread-unsafe. So we're just going to call these class methods and throw this data at them, and we'll build those class methods in a minute. That's all it really takes to build a queue adapter. Now, one thing that's very important here is the serialize method, and I have to go into this in a little detail. The reason we call the serialize method is twofold. The first reason, and the less important one, is thread safety. Remember, Ruby is a shared-memory language with object references. So if we maintained a reference to anything that was passed in, and held onto that reference, then when this thing finally gets processed later on, if it's processed in the same process on another thread, we could run into potentially thread-unsafe behavior.
Now, the normal usage pattern makes that not really a big deal, but if we serialize the job into a plain representation, we let go of those references and make it thread-safe. There's a more important reason, though, and that's consistency. Remember, we want to be able to work with multiple job processors in production and development and even test. When those job processors persist into a data store such as Redis, they must serialize somehow; you can't take a Ruby object and throw it into Redis, or into a traditional database, without serializing it somehow. If every job processor created its own serialization method, we could potentially run into problems when we switch between them. We don't want hidden errors where we run this in test and in dev and all the serialization works, and then we run it in production with a different processor and the serialization fails or does something different. So Active Job provides one common serialize method and one deserialize method, so all of the job processors can serialize the same way. In so doing, that removes one potential set of errors when we move between job processors. So we are going to serialize here: even though this is the simplified version and we're not storing anything in a data store, we want that serialization to make sure we get consistency across processors. So internally, we need to do two things. I've moved on to the job processor now: we have the queue adapter; now we need the job processor. The job processor's responsibility is to provide the asynchronous behavior, and that asynchronous behavior is queue-dependent. We want to have multiple queues and have each queue process a different set of jobs. So for this, we need to be able to post jobs into different queues and have them behave asynchronously.
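To make the serialization point concrete before moving on: the serialize method produces a plain hash of the metadata discussed earlier, which any processor can safely hand to Redis or a database. The exact key set varies by Rails version, and the values below are invented; treat this as illustrative only.

```ruby
# Roughly the shape of a serialized Active Job (illustrative values).
JOB_DATA = {
  "job_class"  => "GuestCleanupJob",  # which class to instantiate later
  "job_id"     => "a1b2c3",           # stand-in for the Rails-assigned unique ID
  "queue_name" => "default",          # which queue it was posted to
  "arguments"  => [42]                # the arguments passed to perform_later
}
```

No object references survive this: deserialize on the other side rebuilds the job from the class name and arguments alone.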
We're going to use a simple thread pool for this, because within the context of this simplified application a thread pool works fine. A thread pool has its own queue, and therefore, by creating a separate thread pool for each queue, we do get a separate queue for these different jobs. We just have to map the thread pool to the queue name, which we'll see in a minute. And obviously a thread pool has one or more threads, and therefore provides asynchronous behavior. So we can deal with both of these needs, queuing and asynchrony, just by creating a thread pool per queue. Now, because this is all very multi-threaded and therefore needs to be thread-safe (not only are we creating these threads within our job processor itself, but Rails can run under multi-threaded web servers), we need to jump through a couple of hoops to get thread safety here. We're going to use the Concurrent::Map class. This is similar to a Ruby Hash and supports a similar API, but it is thread-safe and has some additional behaviors to make that work. Hopefully most of you know that with Ruby's Hash, when you create a new hash you can pass a block to the initializer, and that block will be called when a key does not exist, to initialize that key. So whenever we try to retrieve a thread pool from our map of queues and it doesn't exist, we're going to create a new thread pool at that time. We'll lazily create our thread pools as new queues are needed, as jobs are posted. The compute_if_absent method is just what's necessary to provide the atomicity we need in order to create that new thread pool in a thread-safe manner.
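The stdlib analogue being described looks like this: a Hash whose default block lazily creates a value the first time a key is seen. Concurrent::Map's compute_if_absent does the same job, but atomically and thread-safely (the commented line is a sketch of how the talk's code presumably uses it).

```ruby
# A Hash with a default block: the block runs only when a key is missing,
# creating and storing the value on first access.
pools = Hash.new { |hash, queue_name| hash[queue_name] = "pool for #{queue_name}" }

pools["default"]   # created lazily on first access
pools.size         # only queues actually used get a pool

# The thread-safe equivalent with concurrent-ruby (sketch):
#   QUEUES.compute_if_absent(queue_name) { create_thread_pool }
```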
So there are some concurrency needs there, but the end result is basically like creating a hash and passing a block to the constructor. Then we're going to have a create_thread_pool method. In this case, we're just going to create a cached thread pool. A cached thread pool is the simplest kind of thread pool we can create, rather than getting into the details of all the different configurations we could do. Basically, a cached thread pool has an unlimited queue size; it will grow and add more threads as needed, and when threads become idle, it will shut them down and remove them. So over time we get an optimal number of threads, which for our simplified processor is fine. Right, so now, I mentioned that we need an enqueue method inside our job processor, and it's going to look like this. Basically, when our job is enqueued, we simply post the job to the thread pool, and when the thread pool pulls it off, we call ActiveJob::Base.execute. That's the important part: ActiveJob::Base.execute. The first line, the one that posts to the queue, is just getting the thread pool (creating a new one if necessary) and posting the job to be run whenever the thread pool has an available thread. ActiveJob::Base.execute is responsible for actually interrogating the job data, looking up your specific job class, creating an instance of it, and calling the perform method on that instance, passing in the arguments. So remember, early on we saw that in your class you create that perform method, which takes a set of arguments; ActiveJob::Base handles the interrogation of the job, the creation of an instance, and the calling of that method. All we need to do is call execute when our thread pool picks the job up and runs it later on, and that's all it takes.
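The enqueue path just described can be sketched with only the Ruby standard library. The talk uses Concurrent::Map plus a cached thread pool from Concurrent Ruby; here a Mutex-guarded Hash and a single worker thread per queue stand in, and the real call to ActiveJob::Base.execute is represented by invoking the block we post. All names below are invented for the sketch.

```ruby
# A one-thread stand-in for a cached thread pool: an internal queue plus a
# worker thread that pops and runs jobs forever.
class OneThreadPool
  def initialize
    @work = Queue.new
    @worker = Thread.new { loop { @work.pop.call } }
  end

  def post(&block)
    @work << block   # enqueue; the worker thread runs it asynchronously
  end
end

module InsideJobSketch
  QUEUES = {}
  LOCK   = Mutex.new

  # Lazily create one pool per queue name; the Mutex does the job that
  # compute_if_absent does in the real code.
  def self.pool_for(queue_name)
    LOCK.synchronize { QUEUES[queue_name] ||= OneThreadPool.new }
  end

  def self.enqueue(job_data, queue_name)
    pool_for(queue_name).post do
      # Real adapter calls ActiveJob::Base.execute(job_data), which looks up
      # the job class, instantiates it, and calls perform with the arguments.
      job_data[:run].call
    end
  end
end
```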
Active Job handles that, like I said, the internals of that, and that right there is enough for us to actually post asynchronous jobs that perform in an ASAP way in a real environment. Now, enqueueing for later is a little more complicated, because we have to deal with the timing component. Fortunately, we have at our disposal a high-level abstraction that handles these kinds of scheduled tasks, and, coincidentally, it's called ScheduledTask. The internals of ScheduledTask are sort of beyond the scope of this talk, but the idea is that a ScheduledTask takes a number of seconds in the future at which something is supposed to occur, queues it up, and at roughly that time passes it off to a thread pool to make it run. Now, notice that when we schedule a job with that set method, Rails provides a lot of conveniences for specifying when the job happens in the future. Rails gives us all those great time helpers that we like: one day from now, one week from now, at a certain time, and so forth. Active Job is responsible for taking all of those conveniences that we use as Rails developers and converting them into a number of seconds in the future when this runs. By the time we get the job, we already have the number of seconds in the future. So in our job processor, we don't have to worry about all of those wonderful date utilities that Rails has; Rails does that for us. In this case, that's really convenient, because ScheduledTask, not coincidentally, takes just a number of seconds in the future when the thing should run. Normally within Concurrent Ruby, all of the high-level abstractions run on global thread pools, so you don't have to worry about managing your own thread pools. In fact, most developers should never use a thread pool directly.
Most libraries that provide thread pools provide them internally, along with high-level abstractions that use those pools. So under normal circumstances, a ScheduledTask or a future or a promise or any of these things would use the global thread pool. But in this case, we need a specific thread pool, because that thread pool represents our queue. All of the high-level abstractions in Concurrent Ruby support dependency injection of a thread pool. The executor option you see here, which is very common, is a way of saying: when you run this thing, run it on this specific thread pool. So what we're doing here is saying: look, we know how many seconds in the future this thing needs to run, and we know which thread pool we want it to run on; just go handle that. ScheduledTask handles it, and at the time the job needs to run, it grabs that job and runs this block, and we do the same thing we did before: just call ActiveJob::Base.execute. That execute method doesn't know anything about the asynchronous behavior; it just knows that now is the time to execute. It's the same thing we saw a minute ago. And just in case we somehow get a time value that's not in the future, we check the delay and post the job directly if it's not in the future. Again, that's all it takes to post a task for later on. Rails handles all the time-sensitive stuff; we just need to make sure we run the job at that time in the future. And believe it or not, that, in its entirety, is a functional asynchronous job processor. The next slide doesn't have a bunch more code on it, as you might expect. I'm putting this on one slide because I want you to see just how simple this can actually be: a real, functioning asynchronous job processor can in fact fit on one slide, and this is basically it. We have a class called InsideJob.
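The scheduled path just described can also be sketched with the standard library. Active Job has already converted the Rails time helpers into an absolute timestamp by the time the adapter sees the job, so all we do is compute the delay in seconds and either run immediately or wait on a background thread. A crude sleeping thread stands in for Concurrent::ScheduledTask with its executor: option; the module name is invented.

```ruby
module InsideJobScheduler
  # timestamp is a Unix epoch float, the form Active Job hands the adapter.
  def self.enqueue_at(timestamp, &job)
    delay = timestamp - Time.now.to_f
    if delay <= 0
      Thread.new(&job)                       # not in the future: run ASAP
    else
      Thread.new { sleep(delay); job.call }  # stand-in for ScheduledTask
    end
  end
end
```

The real version passes the per-queue pool via the executor option instead of spawning a raw thread, so the scheduled job still lands on the right queue.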
We have our QUEUES constant, the thread-safe map where we keep track of all of our thread pools. We have that create_thread_pool method, which just returns the pool we want. Then we have our enqueue behavior, which just throws the job onto the thread pool, and we've got enqueue_at, which looks at the delay in that timestamp and gives it to a ScheduledTask. And that right there is a fully functioning asynchronous job processor that plugs in and can work with Active Job. And like I said, the other part was the queue adapter, and remember, the queue adapter just looks like this: when Active Job calls enqueue or enqueue_at, it simply posts the job off into my job processor. So that's it. Believe it or not, that is, like I said, a fully functional asynchronous job processor that will work with Active Job, and it could be used in test or development to get real asynchronous behavior without having to install Redis or other dependencies. The next question you're probably going to ask is: all right, Jerry, are you going to put this code up online so we can look at it later? And the answer is yes. If you want to see this code, you can find it in a very convenient place, and that's Rails itself. The genesis of this presentation was that last fall I went to the Rails team and said, you know what would be really useful? A simple asynchronous job processor in Rails 5. As you all know, we can specify the inline adapter in our config. The inline adapter will run the job synchronously, so we don't have to deal with those other dependencies, but the problem with that is it's not real asynchronous behavior, and if we're using the inline adapter in test or development, we can sometimes mask problems by not having real asynchronous behavior. I said, well, let me just build a simple one, and we'll call it Async Job.
Instead of :inside_job, we'll make the symbol just :async, and we'll let people in test and dev run these jobs really asynchronously in order to potentially find bugs in them. And the Rails team said that's a really good idea; they worked with me, and we got this merged into Rails 5 last fall. So if you use Rails 5 and you use the async processor, this is basically what you're going to have. This code was lifted almost line by line from the original implementation. Now, since then, the Rails team has done some refactoring, so if you go and look at the implementation now, it will look a little different. Just to give you some context for what you'll see that's different: they decided to collapse things into one file. When I originally wrote this, I had two files, one for the queue adapter and one for the job processor, to mirror the normal arrangement where the queue adapter and the processor are separate. They collapsed them into one file because they're very short and didn't need to be two. They renamed some things to follow Rails conventions better. They are assigning that provider job ID; again, in this case we don't really need it, but having it does provide greater consistency with the production adapters. And they decided to use one thread pool for everything and dispense with having multiple queues, because in test, all we care about is that these things happen asynchronously; we don't particularly care about configuring the queues for different behaviors. So if you go and look at it now, you'll still see Async Job, and it will do exactly what we showed before, and it's available right now in Rails 5. So if I've piqued your interest and you want to learn more and see other things you can do with this, the two things I would suggest you look at more deeply are Sucker Punch and Sidekiq. Sucker Punch is a threaded, in-memory asynchronous job processor.
It does a lot of what this does, but does it way better and more fully. The creator tells me that the main use case is when you want to send emails from a one-click hosting provider like Heroku: you can fire off those emails, because there's not a high cost of failure if the thing goes down. So those jobs are just retained in memory and not persisted to a data store. But Sucker Punch does use thread pools just like this does. It maps those queue names to thread pools and provides some configuration of them. It also does some really cool things, like decorating every job with a job runner class that tracks the number of successful jobs, tracks the number of failed jobs, handles errors, and does things of that nature. So it's a really good example of how you can decorate a job when you push it into a thread pool and do some really cool things. Like I said, most of us shouldn't use thread pools directly; the high-level abstractions in the concurrency libraries provide those capabilities. But this is a really good example of how you can do that. Sucker Punch also has some really cool shutdown behavior: if the Rails app is shut down for some reason, it will look at the number of jobs that are still running and try to let those jobs execute completely before shutting down, among other things. So there's some cool stuff in there. And again, Sucker Punch uses a lot of the tools we saw here: it uses Concurrent Ruby thread pools, and it uses Concurrent Ruby's ScheduledTask. Another great one, of course, is Sidekiq. Sidekiq is also a multi-threaded job processor, but it does persist all of your job data to a data store, so your job data will survive a restart of the application. It does not use thread pools the way we saw here.
So Sidekiq actually spawns its own threads and manages its own threads, but it still deals with all those same things we saw in the internals of Active Job. And of course Sidekiq has a whole bunch of additional features. Like I said, Sidekiq doesn't use Concurrent Ruby thread pools, but it does use Concurrent Ruby for some of the low-level synchronization and atomicity stuff, the thread safety stuff you saw here. So those are two great examples. If you want to look at this further, go look at those code bases and see beyond what we've done here. So with that, I just wanna say again, I work for Test Double. We are hiring, and we are also for hire. We love talking to people about software development and about how we can all improve software. So if you'd love to chat with us, by all means, reach out to us. You can find us on email and social media. Myself and Justin will be here for the rest of the conference. In fact, Justin will be speaking Thursday afternoon at 3:30; he's gonna be talking about RSpec and Rails 5. I've got stickers up here, I've got stickers in my bag, and I hope to get a chance to talk with you sometime before the conference is over. And with that, again, my name is Jerry D'Antonio. Thank you for having me. So I do have five minutes if anybody has questions, or I can let you run out; that's cool, I'm hungry too. So the question was about resource contention within the job itself, if you have multiple threads running simultaneously and trying to do things. All of the asynchronous behavior is provided by the job processor itself; all Active Job does is provide the compatibility layer. It's important that the job processors themselves handle all of the concurrency, any kind of locking or synchronization that is necessary. But generally speaking, if you follow the best practices, a lot of that contention goes away.
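One such best practice is to enqueue a record's ID rather than the record object itself. Here's a self-contained sketch of the idea; UserRecord and WelcomeEmailJob are invented stand-ins for a real Active Record model and job class.

```ruby
# Stand-in "model" so the sketch runs on its own; in a real app this
# would be an ActiveRecord class and User.find(user_id).
class UserRecord
  STORE = {}

  def self.find(id)
    STORE.fetch(id)
  end
end

class WelcomeEmailJob
  # The job takes only an ID. The worker re-fetches the record when the
  # job actually runs, so it sees the record's current state, and
  # nothing unserializable (connections, associations, in-memory
  # mutations) crosses the queue boundary.
  def perform(user_id)
    user = UserRecord.find(user_id)
    "Welcome, #{user[:name]}!"
  end
end
```

Because only a plain integer is serialized into the queue, two workers never share a reference to the same in-memory object, which is where a lot of contention would otherwise come from.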
So for example, you're not passing an Active Record object, you're passing an ID, which you can then use to pull that record up later on. We're serializing the jobs so that we're not storing references, but ultimately it is up to the job processor itself to be thread safe. So the question was, would you be able to use multiple job processors simultaneously? And the answer is yes, but not through Active Job. Active Job only allows you to specify one handler. However, as far as I know, all (I'll say most, but as far as I know it's all) of the job processors can be used outside the context of Active Job. So for example, you might specify Sidekiq as your main job processor, but for certain things you wanna use Sucker Punch; you would then just instantiate Sucker Punch directly. And so you can do it that way. And I can't imagine why Rails would change that, but again, it's very possible, yeah. So the question was, could we subclass Active Job and have two different runners? I guess, again, it's Ruby, we could probably do anything we want, but there's that one configuration value within the application config. We could create our own configuration values, create some new ones, grab those, and do something of that nature. I'm sure it would be possible, but it's not something that would be directly or easily supported by Rails and Active Job itself. So multi-threading is just when you have multiple threads, right? The question was the difference between multi-threading in general and a thread pool. A thread pool is a managed thing, where the queue and all of the threads are managed by the object itself. So, like I said, I could spawn my own threads just by calling Thread.new, right? What happens if those crash? What happens if I want new threads? What happens if I have idle threads? How do I enqueue things onto them? There's a lot of plumbing involved in that, right?
We can always spawn multiple threads, but in order to manage that, there's a lot of extra stuff. How do we handle exceptions, right? If you throw an exception on a thread, it will crash the thread. How do you handle that? So a thread pool takes all of that, puts it in one object with some very well-known, very common cross-language algorithms, and manages those things. So you create a thread pool and you give it a set of configuration parameters: things like how many threads to run at a minimum, how many at a maximum, how many things you can enqueue, what to do if the queue gets full, what to do if the operating system won't give you more threads. And it handles all of that for you. So all you do is create this one abstraction, the thread pool, and you throw stuff at it, and it manages all of that, the enqueuing and dequeuing, and if threads crash, it'll handle that and so forth. There's some overhead in the thread pool itself because of all of that, but just like anything else, that overhead comes at the value of making you not worry about those things, right? So generally speaking, you start with the high-level abstractions that use the thread pools, so you don't have to worry about that. Then maybe later on you specify your own thread pools and inject them in for better control, and then maybe, if you're this guy over there, you just write your own threading yourself. But that's sort of the progression. And that is a fantastic question. The question is, how does it handle exceptions? What we did here does not handle exceptions very well at all, right? The thread pool itself will protect itself from any kind of exception on the thread, right? So the thread pool will not allow its threads to die because of exceptions. But the thread pool doesn't do much with them, all right?
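That protection is easy to see in miniature: an unrescued exception kills a bare thread, while a pool-style worker loop rescues around each job and keeps its thread alive for the next one. A small sketch using only the standard library:

```ruby
# A bare thread dies on an unrescued exception.
bare = Thread.new do
  Thread.current.report_on_exception = false # silence stderr noise
  raise "boom"
end
begin
  bare.join
rescue RuntimeError
  # join re-raises the thread's exception here; bare is now dead
end

# A pool-style worker rescues around each job, so it survives a bad job.
jobs    = Queue.new
results = []
worker = Thread.new do
  while (job = jobs.pop)
    begin
      job.call
    rescue StandardError
      # a real pool would report or count this; the key point is that
      # the worker thread stays alive for the next job
    end
  end
end

jobs << -> { raise "boom" }    # would have killed a bare thread
jobs << -> { results << :ran } # still executes afterward
jobs << nil                    # shutdown signal
worker.join
```

This is exactly the trade-off described above: the pool keeps its threads alive, but the exception itself is swallowed unless something higher up does more with it.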
Again, this is one of the reasons to use a high-level abstraction: if you use something like a future or a promise or an actor or an agent, those things have consistent, idiomatic ways of handling things like return values and errors and so forth. So if you look at Sucker Punch, the job decorator class in there actually handles exceptions by capturing the exception on the thread itself, before it bubbles up, and then doing things with it. So again, the high-level abstractions are going to provide you with better error handling and consistent, idiomatic ways of dealing with return values, waiting on things, and so forth. And again, that's why you should always start with the high-level abstractions and only inject the thread pool later; the thread pool is meant to be the very lowest level, in that it just provides the engine. So like I said, the actual job processor itself will handle things like errors, right? Whichever one of those supported job processors you use, they are all doing the error handling in this case. Because this is the bare-bones minimum, I'm not handling errors at all, right? Your job's just gonna die and you'll not know about it. But again, this is meant to be minimal and trivial. If you use any one of the full-blown, production-ready job processors, they will handle the decoration of that job and they will handle the errors, and they will have their own way of doing that. That's one thing that Active Job does not include: a consistent way of handling errors. So of course, you could always put your own error handling in your perform method and handle it there, but Active Job doesn't really do that for you. Oh, I wouldn't write one. I mean, for production, the ones that are out there are fantastic. They're very mature, they've been used a lot.
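To see why the abstraction helps, here's a toy future built on the standard library's Thread#value, which blocks until the thread finishes and re-raises the thread's exception in the caller. TinyFuture is an invented name; real libraries like Concurrent Ruby's futures and promises do far more, but the idiomatic shape is the same: one place to get either the result or the error.

```ruby
class TinyFuture
  def initialize(&block)
    @thread = Thread.new do
      Thread.current.report_on_exception = false # the caller sees it instead
      block.call
    end
  end

  # Blocks until the work is done. The value comes back here, and so
  # does any exception, re-raised in the calling thread rather than
  # silently killing a background worker.
  def value
    @thread.value
  end
end
```

Compare that with posting a bare lambda to a thread pool, where a raised exception simply vanishes unless you wired up your own handling.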
The reason for this one was to put it in Rails itself so that for development and testing, you can run your tasks asynchronously and get a better understanding of how they're gonna work in production. In fact, if you go back and look at that commit and the actual discussion around that PR, it was DHH himself who said, hey, I really like this idea; I generally install Sucker Punch for dev and test, and it'd be nice if I could just do that within Rails and not have that extra dependency. Sucker Punch is great. I've worked with Brandon, the creator of Sucker Punch, and he's fantastic, but it's an extra dependency for just dev and test. Rails had done a great job of writing the inline adapter for dev and test, so it makes sense for Rails to provide that simple async one. And so we just minimized what we actually need and put it in Rails itself. So now, for dev and test, you can do that and get a better sense of what the real behavior in production might be. I'm sorry? It's in Rails 5 now, yeah. So all you need to do in Rails 5 is just say async in your config and it's there. Thank you very much.
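For reference, that one-line configuration looks like this in an environment file:

```ruby
# config/environments/development.rb (and similarly for test.rb)
Rails.application.configure do
  config.active_job.queue_adapter = :async
end
```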