Hey everybody, my name is Jared Dillon, and today we're talking a little bit about messaging patterns in Ruby. As I said, my name is Jared. I'm a software engineer at Quick Left, and you can find me on Twitter at @justicefries.

So we'll start off by welcoming you to the enterprise world, because that's where the money is. When we look at this enterprise, we're talking about a whole bunch of things we typically don't necessarily want to interact with, and what we interact with is actually a mass of many, many, many parts. Who here has dealt with something like this, this huge, huge mass? Well, great. I'm sorry; it's not the best thing in the world. But for the purposes of this talk, we're going to treat it as a single mass. We're going to have a lot of things up here, so I decided to label them. I stole that from Jeff, so thanks, Jeff.

We want to integrate with this system, and for many reasons, political, code, whatever, we can't integrate directly with it; we're actually operating outside of its sphere. And that little sphere is our application. Let's just say it's a Rails application, and we're going to run with that.

What we want to do is actually start integrating with this system, so we start looking at the best way to integrate multiple systems here. Well, we can start off and say, okay, let's make an API between the two. We need some information, we need to consume something, so let's make a request out. So we make a request out. And we make another request out. Not so bad; we're okay. We make some more requests out, and more. But you know, that's not the end of it. Everything inside of there will also need to consume information from us, so it's no longer just about our requests: we now start to make endpoints.
We have an endpoint. We have another one; things are getting a little ugly. And we have some more endpoints that we want to work with here. It's kind of an ugly mess. Who wants to deal with this? I sure as hell don't. And remember, we're dealing with a whole mass of different things in here. It's not just one big homogeneous mass; we're just abstracting it away. So we're actually dealing with this. It doesn't sound like fun at all. And we may have a public API we also want to deal with, so we're also facing the world, and now we have seventeen things up here that we need to manage and scale.

Not only do we have to do that; our APIs are by definition very unreliable, and we have to get into the habit of error checking all of these things and making sure that nothing fails. It's also synchronous: all of these calls block, either within the bigger system or in our Ruby app. So it's pretty obvious we need another way.

You might ask yourself, okay, what's that way? What can we do in order to fix this? And there are a couple of things we can do. Looking at our problems: we know we're synchronous, we're blocking our main application, and we have a lot to maintain, scale, and play around with. So what if we introduced an intermediary broker here, something that would deal with all these interactions for us and let us consume at the rate we want, effectively making us asynchronous in nature? We can do that. These things communicate with each other, and we do that with something called a message queue.

For the purposes of this talk, we're going to talk about RabbitMQ. RabbitMQ is a message queue built on the AMQP standard, developed at JPMorgan Chase in 2005 to solve this very need: the need to integrate lots and lots of systems, with lots of different code bases, in an asynchronous way. It's battle tested and battle worn, and we know it works pretty well now. And it's all built in Erlang, and it's extensible through Erlang.
So, assuming we've done this, we now only care about our Rails app and our messaging server. We don't care about the wild world beyond; we can interact with whatever we want past that. And yes, that was a weird magic move. These things only talk to each other now, the Rails server and the messaging server, and we do this through the notion of queuing information up. We can start to let things work together: queue all this up and actually start to somehow process it. But we need to process it on our Ruby server, and so we go from this convoluted, nasty mess to something more like this.

Much like the Enterprise, we haven't told the whole story yet. The bigger story we're telling here is that not only do we now have our Ruby server, we have something else: our web server. And now we're letting our web server do what it's good at: serving requests, handling our business logic, and generally being a Rails app. But we've added something else into the mix: we've added workers. Workers are little processes that can spin up and really do whatever we want. What we're doing here is actually yanking things out of this queue and doing something with them. It doesn't matter what.

And here's the most trivial example of that. This is some code running in an EventMachine loop with the Ruby amqp gem; I have all the resources available at the end of this talk if you want to go check it out. The Ruby amqp gem runs on top of EventMachine. Who here is familiar with EventMachine? For those of you who aren't, EventMachine is a fairly sophisticated gem that embodies the reactor pattern and enables some concurrency through event registration. So that's what we're doing with the Ruby amqp gem here, and all we need to do is subscribe to the channel we want, which in this case will be some transactions. We have a direct exchange to that transactions queue from our application, which the worker subscribes to. We receive a message, and we print it out.
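The real version of that worker uses the amqp gem's subscribe block inside a running EventMachine loop against a live RabbitMQ broker, so it won't run standalone. Here is an in-process analogue using only Ruby's standard library; the queue name and payloads are made up, but the shape of the loop is the same: a worker blocks waiting for a message, handles it, and goes back for the next one.

```ruby
require "thread"

# Stand-in for the broker: in the real setup this is a RabbitMQ queue,
# reached through a direct exchange via the amqp gem.
transactions = Queue.new

results = []

# The "worker": pops messages off the queue and handles each one,
# blocking whenever the queue is empty -- the same role the
# subscribe { |payload| ... } callback plays under EventMachine.
worker = Thread.new do
  while (message = transactions.pop)
    break if message == :shutdown
    results << "processed #{message}"
  end
end

# The "web app" side: enqueue work and carry on without waiting.
transactions << "order-1"
transactions << "order-2"
transactions << :shutdown

worker.join
puts results
```

The web-app side returns immediately after each push; only the worker thread ever waits on the queue, which is the asynchrony the talk is after.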
We're good to go. It's about as simple as a worker can get, but of course you can integrate this more deeply into your application.

And now we're looking at actually scaling horizontally. Okay, what do I mean by scaling horizontally? Well, we have three workers; they work three messages. Okay, what happens when these three workers start to get overwhelmed? We're processing too many messages from this side to really handle, and we start to see bottlenecks. We can add five, six, ten workers. We can keep spinning up processes in order to take the load off of our application. So we're looking at scaling in a very, very interesting way, in that everything happens asynchronously, and you have to think in a bit of an asynchronous mindset, because nothing is happening in a way that has a direct, immediate effect on the application.

So we've solved a problem. We've added this message queue between that large system we don't care about and our Ruby, or Rails, application. But remember our application: since we now have multiple processes running in there, we start to see the same problem. Within our application, this thing keeps growing, and we start to see bottlenecks that are not only at the boundary of our app: we start to see intra-app bottlenecks.

Typically the way you would solve that, of course, is to go back, look at your bottlenecks, and start to solve them. That's a great thing to do. But this asynchronous pattern we've just talked about is a fairly common pattern, and we can actually apply it inside our application. So I'm going to pose the question: what if we made our controller actions asynchronous? What if we're generating data?
We don't actually provide a full response at that point; instead we pass the work off to something else to handle, to actually add the data into our database and really do the work for us. Once again, we're letting the web application do what it's good at, and we're creating other things to do what they're good at. So we don't care about the enterprise side anymore for the rest of this talk; we just care about the Rails application.

Once again, we're not telling the whole story here. We have our Rails application, we have some workers in there doing their thing, everything's happy, everything's working. But we also have a database server. This database server is handling our requests, doing the things it needs to do, and generally interacting with our application. And of course, you can see a performance bottleneck here; we'll talk about that in a moment, but it scales dependent on our application.

So here's what a controller might look like. I've abstracted a lot; it's basically pseudocode. But you know, we create a new object in memory, and we check if it saves. If it saves, we do something, probably redirect to an index page if you're going off scaffolding, or whatever; if not, we render back out. The key thing to think about here is that this is a blocking operation: it will wait until the request to the database server is made, finished, and complete before we continue with the life cycle of the application. And if you're in a position where you're seeing these bottlenecks, that can be a pretty huge bottleneck, where milliseconds can matter.

So we're starting to talk about some sort of worker again. And we have multiple options. We're not going to use AMQP for intra-app communication, because we have a bunch of other options in the Ruby world, and some of you may have heard of them. Who's heard of Delayed Job? Who likes Delayed Job? Great. So the first thing we could talk about is Delayed Job. And on the plus side:
It's very simple: you call object.delay.method, and it enqueues the job into a database table. It adds a row, and Delayed Job has some locking mechanisms built in. But we have a problem, right? Because you're still dependent on your database's performance. You're not adding a new piece to your system here. Well, you are adding workers, new processes, but you're not adding any new servers to run. That could be great. For the purposes of this talk, though, we're saying that our database is already constrained; we're working under real-life constraints. So adding Delayed Job, and processing tons and tons and tons of jobs once we make everything asynchronous, might not be an acceptable option.

Okay, so who's used or heard of queue_classic? Very few, yeah, a wave, sort of. Okay, great. So queue_classic is this awesome thing made by Ryan Smith of Heroku. What it does is use features built into PostgreSQL, the publish/subscribe model that's in there, to act as a queue, and to act as a queue without being fully burdened by everything a relational database requires. This is pretty awesome. It's new, and it works pretty well. But we're still talking about database constraints, and it'll still add some load to our database server. So we'll leave that out as an option for now and go back to the drawing board.

The last option we can talk about is actually one of my favorites. Who here has used Resque?
Cool. Okay. So Resque is a queue system built on top of the key-value store Redis. I hope everyone here has used Redis; is there anyone here who truly does not know what Redis is? Okay, so Redis is a very fast, lightweight key-value store that has some set-algebra capabilities, so you can do some pretty powerful things with it, including building an entire worker queue system on top of it. You can also use it as a memcached replacement, and it's good at that. Yes, and it will persist to disk; that is the big difference between Redis and memcached.

Okay. So we're talking about adding another piece to our system. As I just mentioned, this is a very easy extra piece to set up. Of course, we have to think about these trade-offs, and that's really what we do, right? We're making a whole bunch of conscious decisions driven by the trade-offs we have to make, and in this case, maybe adding a Redis server is an acceptable trade-off: it's fairly low cost as far as complexity goes, and it's pretty quick to get running.

So we're looking at an example of a model that has an asynchronous create, something we can start to enqueue things into and run jobs on top of. Up in our Transaction class, we specify the queue we want, and we have a self.perform method that the worker will actually run to do what we want done. And in our model, I've added an async_create method. I'm missing the self; it should be def self.async_create. But other than that, we're in pretty good shape here.

So all of a sudden, our controller looks like this. We're creating our object in memory, but we're not actually saving it. We make sure it's a valid object, so we know that it passes all of our validations and it's something safe we can work with, and then we pass it off to Redis.
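That model-plus-controller pair can be sketched end to end like this. The real thing needs Resque and a Redis server, so everything external here is an in-memory stand-in: QUEUE plays the Resque queue, a drain loop plays a worker process, and SAVED plays the database table. The @queue = :transactions, self.perform, and self.async_create pieces mirror the slide, with the missing self restored.

```ruby
QUEUE = []  # stand-in for the Redis-backed Resque queue
SAVED = []  # stand-in for the database table

class Transaction
  attr_reader :attrs

  def initialize(attrs)
    @attrs = attrs
  end

  # In-memory check only -- no database round trip during the request.
  def valid?
    !attrs[:amount].nil?
  end

  @queue = :transactions            # which Resque queue the job lands on

  # The talk's method, with the missing `self` restored.
  def self.async_create(attrs)
    QUEUE << attrs                  # real code: Resque.enqueue(Transaction, attrs)
  end

  # What a worker runs later; real code would save to the database here.
  def self.perform(attrs)
    SAVED << attrs
  end
end

# The asynchronous controller action: build, validate, enqueue, respond.
def create(params)
  transaction = Transaction.new(params)
  return :render_errors unless transaction.valid?
  Transaction.async_create(transaction.attrs)
  :render_success                   # respond immediately; nothing was saved yet
end

puts create(amount: 100)            # the request finishes right away
Transaction.perform(QUEUE.shift) until QUEUE.empty?  # later, a worker drains the queue
puts SAVED.inspect
```

The important property is that the request cycle touches only memory; the database write happens whenever a worker gets around to the job.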
We call async_create, and the job will now be enqueued and worked at a later time.

This requires a pretty major shift in the way you think about applications. The reason I say that is because you're not doing anything here except rendering a successful response; I left that piece out of the code because it doesn't matter. You do not do any data manipulation within your app server itself; you're passing it off to something else. Now, obviously, if you're requesting data, this may not be the model to choose. But for creation of data, when users don't need it immediately, you can start to look at something like this, and you can also start to look at tools such as real-time services, WebSockets, interval polling, and other methods to start getting this data out. You can have the data pushed to the page when it's ready.

This is a pretty fundamental shift in the way we think of web applications, and it offers a new way to scale. The reason I say that is because now, of course, as our application starts to get loaded, we can fire up more and more workers to work on the Resque queues, and we've shifted the problem. We've shifted the problem in the user's favor, in that when the user is operating with the application, they're not bound by everything we're doing in the background. They are now operating on what they can see and what they can work with, and when you're ready for them to work with new pieces of information, you'll give it to them.

So we've done something major here. We've taken a very synchronous system.
We've taken a system that was a problem we were looking at, just the blue and the red: our massive-scale enterprise area and our Rails app. And we've done two things. We've abstracted the problem of synchrony and blocking away from our inter-app communications, away from the APIs that communicate between the two systems; public APIs are a whole different matter, but the APIs between those two systems we've now alleviated. And we've enabled a new way of scaling.

We've also done the same thing within our application. We've considered Delayed Job, we've considered things that may harm our database, but we've instead gone with a worker system that adds a little bit of complexity but overall helps our application in a lot of ways. And moreover, we've made a fundamental shift in the way we think about web applications. This model, and I'll hammer it again, is not a synchronous model. You cannot post with this model, create something, and get it back immediately. It'll be available when the workers have processed it, but you're trading that availability for a lot more speed and a lot more of a distributed system.

So, I've left some time for questions. My name is Jared Dillon; I'm @justicefries, like freedom fries, but better. And there are some resources here. These are all resources we've discussed in this talk, if only briefly, and they're all great gems, but above all, focus on the architecture decisions we've made here and how they benefit your app. Thanks so much.

Mike, sure. You still have that constraint, yes, and I deliberately avoided that, because database concerns are... oh, I'm sorry. So Mike's question was: we've abstracted everything out and made everything asynchronous, but how do you deal with database issues? You're still blocked by the database; how do you deal with that?
Right: you have more people trying to put information into the database all at once. My response to that, of course, is that I deliberately left it out of this talk; database scaling is a huge, huge topic that a lot of people have thought about much more deeply than I have. Of course, once you have that constraint, you're starting to talk about finding ways to scale that database, and that really becomes the bottleneck at that point.

Yes, that's a very good point. The follow-up was: I create an object in memory and check that it's valid now, but if I then deploy something and change validation rules while the job is sitting in the queue, couldn't that cause problems later down the road, when the worker finally tries to create the record and it fails validation? So, of course, when you're building the worker, you would have to build error checks against that. You may want to check that it's valid again, or be able to kick it into a failure queue to be handled, and send off the proper notification, sending an email, "this order failed, here's why," to the user. At that point, you have to really decide how you want to interact with the people using the system. That's a great question.
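A worker built along the lines of that answer might look like this. Everything here is an illustrative stand-in (the FAILED array plays the role of a real failure queue plus a user notification), but it shows the key move: re-checking validity at work time instead of trusting the check made at enqueue time.

```ruby
FAILED = []  # stand-in for a failure queue plus user notification

class OrderJob
  # Imagine this rule changed between enqueue time and work time.
  def self.valid?(attrs)
    !attrs[:amount].nil? && attrs[:amount] >= 10
  end

  # Worker entry point: re-validate before creating, and kick anything
  # that no longer passes into the failure queue for handling.
  def self.perform(attrs)
    if valid?(attrs)
      :created                      # real code: Order.create!(attrs)
    else
      FAILED << [attrs, "failed validation after rules changed"]
      :failed                       # real code: notify the user here
    end
  end
end

puts OrderJob.perform(amount: 100)  # still valid at work time
puts OrderJob.perform(amount: 5)    # passed the old rules, fails the new ones
```

How you drain FAILED, retry, emails, manual review, is exactly the product decision the answer above says you have to make.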
So the question was: since I'm just checking that something's valid and then queuing it, do I run the risk of losing it if the queuing server, or Redis, were to crash? In the case of Redis, you can make it persistent, and it writes to disk, so you're protected; you still have those objects in the queue, ready to go. In the case of RabbitMQ, you can also make your queues persistent, and they'll be written to disk and reconstructed when the server is restarted. The downside in RabbitMQ is that it's much slower than when it's just operating in memory. So it really comes down to what you're doing. If you're doing something critical, like the transaction I just showed, absolutely you want to use a persistent queue. If you're processing analytics data, okay, you miss a data point here and there; you probably don't care so much. You probably care more about the bulk data that you can aggregate over.

So the question was: what if your queuing server is down when the job is pushed in the first place? In that case, the flat answer is: nothing happens. You can't make the connection; you can't put the job in there. That gets a lot more into developer operations, as far as what you do on the server side. If you can't do it, at that point you do have to have some exception handling around it and say, take it back, we can't do it. Another option is, of course, to have synchronous fallbacks. I wouldn't necessarily recommend that, but you can.

That's what the new version does? Okay, Josh was saying the new version of RabbitMQ has active high-availability queues. Okay, yeah. So with RabbitMQ you used to have a master/slave setup, and now you can have two masters as well, to help insulate you against that. You know, that's one of the trade-offs we do have to think about: we're now trading for a single point of failure, and how do you insulate against that?
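On the Redis persistence point in that answer, the write-to-disk behavior is a configuration switch. A minimal redis.conf fragment enabling append-only-file persistence might look like this; appendfsync everysec is one common durability-versus-speed trade-off, fsyncing once per second rather than after every write.

```text
# redis.conf -- enable append-only persistence so queued jobs
# survive a crash or restart
appendonly yes
appendfsync everysec
```

With that on, a restart replays the append-only file and the enqueued jobs come back, which is what makes Resque viable for critical work like the transaction example.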
So it's about insulating against failure at that point.

So, I prefer the Ruby amqp gem. Yeah, the question was: do I prefer Bunny, or which AMQP gems do I prefer and why? I prefer the Ruby amqp gem, first because I like the DSL around it, and because it runs asynchronously inside EventMachine. Bunny and Carrot are great; they are the synchronous options. So if you need to work with the queues in your controllers, in Rails itself, you can use those. Of the two, I prefer Carrot; I like the DSL better. But Bunny and Carrot are both good gems.

Any other questions? The question was: how do you handle having a whole bunch of moving parts in your system, now that you've started to make it more complicated, and how do you make it easy to develop in that environment? Okay, so the way I've done it is this. When setting up these systems, of course, you have to make the hard decision: do I want to add this moving part to the system? One way I might do it, and I've done it once with mixed success, is to set up a chef-solo script to get everybody set up and running with the same environment, and then use gems like Foreman to do process management and run all my processes at once. That has really helped. It can be a nightmare when you have to say, okay, you have to boot up ten processes to really develop on this thing; that can get ugly. But if you want a great gem to start digging into that problem, Foreman is an excellent process manager: by declaring a Procfile and saying foreman start, it'll start everything in your application all at once, keep it running, and maintain a single log so you can see what's going on.

Mike was saying there's a great gem built on chef-solo called Soloist, from Pivotal; they use it to set up their workstations. I'll check that out; be sure to check that out as well. Thank you so much. Thank you.