So, I feel like this is going to be slightly anticlimactic after that lightning talk that got so many cheers, so it's a little strange being up here now. I'm going to talk about services, but first I just wanted to mention that I work for LivingSocial. We are hiring senior engineers, and if you are interested, please come see me or anyone else who works for us. We have a booth in the sponsor lounge.

So, service-oriented architecture, as a lay definition, is building your overall application out of multiple services talking to each other to do what needs to be done. If you don't know what a service is: a service is an unassociated, loosely coupled, self-contained unit of functionality. That's what I got from Wikipedia. What that means to me is that it is a singularly focused application in itself, something you could expose to the outside world and there would be some value in it on its own.

The benefits of a service-oriented architecture are manifold. The biggest one, in my opinion, is that it can be asynchronous, and by that I mean you can often just fire and forget. If you are doing reporting or the like, you can send off a request saying, I want this report, go about doing whatever you want, and later it can ping you back and say, hey, this report is done. It is parallelizable (that's a fun word) in the sense that if the client you are using to communicate with your services is capable of parallel requests, you can make those requests in parallel to save time when you are processing the page you are about to load. It has loose coupling, as stated before, meaning the individual applications are much easier to change, because they are so far removed from the overall architecture of the other services and the other clients that rely on them.
It can give you faster tests: if you move from a monolithic application to a service-oriented architecture, you have numerous small test suites rather than one giant test suite, so when you make a change in a service you are only running, say, 10 or 20 percent of your tests rather than the full suite, and that is enough to cover the whole service and know you are living up to the contract you've established. As I said before, it is significantly easier to extend and change because of the loose coupling, and the thing a lot of managers will like best is that it actually increases your velocity when you are extending, because of that reduction in coupling and the easy extensibility.

I've gone to a lot of service-oriented architecture talks, and I always had a problem with them, because they talked about service-oriented architecture at a very high level. They basically said: identify your problem areas, understand how you are going to change the database, and then make the service. Any time I tried to do that, I kind of felt like this. It was always a question of how. How do I actually pull out the service? How do I break up the coupling I currently have and make the service a distinct, separated part of the code base?

Like I said, I work for LivingSocial, and if I could, I would show you what our back-end organization chart looks like, but suffice it to say it is so many different services, all talking to each other. There are services for authentication and logging in, services for financial work, services for handling merchant communications. It is a many-layered set of responsibilities distributed across multiple code bases.
When I was first starting there, we were looking to replace one of our APIs, one that was quick but slightly unreliable as far as the data, with one that was more correct: more precise in its calculations, and up to date. With the new API came new possibilities. One thing we were looking to explore was to dogfood our API by making a client. Dogfooding, if you haven't heard the term, is taking an API that you're making for other people and using it yourself directly. That way you are at the mercy of what you have actually exposed, rather than saying, oh, the customers can totally do that, they have everything they need, while you're doing things that are only possible over an internal connection.

So let's get started on how to actually build a service. The first step is determining what your service will do. This takes a little time, because people have all of these ideas about, oh, this whole piece feels like it should be linked together. That being said, a service should do one thing, and do it well. One. I personally have never gone to a mom-and-pop car repair and washing machine dispensary, and that's something to bear in mind. A service shouldn't do account creation and financial management; it should do one thing, and it should do that one thing well.

After you determine what your service is actually going to do, you need to create endpoints for it. Ask yourself: what are the endpoints of this service? What actions are available to somebody using it? Is it just authentication, meaning you do a POST that returns a response?
Are you allowing for lists of, in our case, deals? The API we were working on was for getting information about deals: we could fetch individual deals, get an index of deals, create deals or payments, and modify deals or payments. After you determine your service's endpoints, you go build the controllers. That's a fairly easy step; we've all made controllers in Rails, and there's nothing really special here.

Then you need to determine what your request options are. Basically, is there an ability to filter a response? If you're returning some 30 attributes for an object, are you going to allow people to say, I only want these five attributes? Or, I don't need these three? In Rails, the to_json method allows for :only and :except options; are you building those into your API so people can customize what they get back? You also need to ask yourself: do we allow multiple objects per request? Like I said, ours lets us get payment information for deals, and we can request multiple deals per request, say 10 or 20 deals in a single request rather than fetching them one at a time.

One problem we ran into when making our API was that we had a lot of attributes we didn't actually want to send to users. The problem is, when you have a default that says, okay, we only want to return these 15 attributes out of the 50 we have on our deal or on our deal's payments, and somebody then comes in and says, I don't want these particular ones, that request might override those defaults and expose something you weren't expecting. To get around that, we ended up using ActiveModel::Serializers. The link is there; you can go check them out.
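That override danger is worth seeing in code. Below is a framework-free sketch; the Deal class, its attributes, and the serializer are all invented for illustration, with as_json imitating the Rails-style :only/:except behavior described above:

```ruby
require "json"

class Deal
  ATTRS = { "id" => 1, "title" => "Spa Day", "internal_margin" => 0.42 }

  # Rails-style filtering: a caller-supplied :only or :except wins.
  def as_json(options = {})
    attrs = ATTRS.dup
    if options[:only]
      keep = Array(options[:only]).map(&:to_s)
      attrs.select! { |k, _| keep.include?(k) }
    elsif options[:except]
      drop = Array(options[:except]).map(&:to_s)
      attrs.reject! { |k, _| drop.include?(k) }
    end
    attrs
  end

  def to_json(options = {})
    # Our default hides the private attribute, but only when the
    # caller passes no options of their own. That's the footgun.
    options = { except: ["internal_margin"] } if options.empty?
    JSON.generate(as_json(options))
  end
end

# A serializer in the whitelist spirit: the attribute list is fixed,
# and defining a method with an attribute's name overrides its value.
class DealSerializer
  ATTRIBUTES = %w[id title state].freeze

  def initialize(deal)
    @deal = deal
  end

  def to_json(*)
    JSON.generate(
      ATTRIBUTES.each_with_object({}) do |name, out|
        out[name] = respond_to?(name) ? public_send(name) : @deal.as_json[name]
      end
    )
  end

  def state
    "published"
  end
end
```

The default to_json hides internal_margin, but a caller-supplied :only happily overrides that default and leaks it; the serializer's fixed whitelist, by contrast, cannot be talked out of its defaults from the outside. The real ActiveModel::Serializers API differs in its details, but the whitelist idea is the same.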
What ActiveModel::Serializers gives you is a way to override the to_json behavior in your controllers so that the standard serialization only gives back what you specify. What this looks like is something very simple: just a list of attributes, plus a list of the associations you also want to serialize and the serializers for those associations. You can also go further: for state, if we wanted to, we could define a method on the serializer and it would override the default behavior. So you have a lot of customizability in what you're actually serializing for the consumers of your API.

One really, really important step here in particular: write tests. Seriously, write tests. When you're building a service, the tests are a contract about what your service fulfills. They say, I am providing this, and if your tests ever fail, you have broken that contract. So hopefully you're using some kind of CI for continuous integration and deployment, so that you never break that contract with your service's consumers.

The next step is to create client models. It's sometimes tempting to just use the raw response that comes out of a service. Don't do that. Seriously, do not do that. It is a terrible, terrible idea. I've worked somewhere where we were working directly with hashes in a client application, and it was a nightmare. If anything changed, say an attribute that was previously nested under the address suddenly moves and is now under the city, you then have to go fishing through all of your code to figure out where that change needs to be made. As with many things, if you're using a third-party library or gem or whatever, wrapping the response is good.
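A thin client model is usually all the wrapping a response needs. A minimal sketch, with the class and attribute names invented for illustration; the point is that the response's shape is known in exactly one place:

```ruby
require "json"

# One class that owns the mapping from the service's response shape
# to the methods the rest of the client application calls.
class ClientDeal
  attr_reader :id, :title, :city

  def initialize(attrs)
    @id    = attrs["id"]
    @title = attrs["title"]
    # If the service ever moves "city" out from under "address",
    # this is the only line in the client that has to change.
    @city  = attrs.fetch("address", {})["city"]
  end

  def self.from_json(body)
    new(JSON.parse(body))
  end
end

deal = ClientDeal.from_json(
  '{"id": 1, "title": "Spa Day", "address": {"city": "Austin"}}'
)
```

If the nesting changes, only the one line in the initializer changes, instead of every hash lookup scattered through the client.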
That being said, when you're first developing your API, it's sometimes really difficult to anticipate what you're going to need. So when I was doing this API, I created something I termed really dark magic. It's terrible. This is what it looks like. It goes through every hash inside the initial hash that's passed in, checks whether there is a class defined in the client models' namespace, and tries to create an instance of that class; if there is an array, it tries to create many of that object; and if that fails, it rescues the exception silently and just sets the value as an instance variable instead of as a class. This is horribly non-performant, and it's so mutable that there's no structure to it at all. So really, please don't do that. The client models often just ended up looking like this, because all I wanted was the creation of the model. It did allow me to override certain things in my client to make quick adjustments, and over time this would shape itself into the actual structure of what we were building. Again, this is a caution: it's great for rapid prototyping and horrible for everything else. You can do it, just don't release with it.

Again, write tests, in this case for your client. Writing tests for the client on the other side of the service is basically a guarantee that the service hasn't changed, so that when you do eventually update the service, those tests will break and say, hey, you actually need to update this before you push it out to production.

There is a gem that encapsulates both steps two and three, creating the controllers and their responses and creating the client models that consume them, and that is a gem called Sanjay.
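Backing up for a moment to that really dark magic: the slide with the code isn't in this transcript, but the behavior described above (walk the response hash, look for a matching class in the client models' namespace, fall back silently to the raw value) can be reconstructed roughly like this. This is a hedged sketch of the pattern, not LivingSocial's actual code, and all the names are invented:

```ruby
module ClientModels
  # "Really dark magic": build whatever the response hash happens to
  # contain. Slow, shapeless, and mutable; prototyping only.
  class Base
    def initialize(attrs = {})
      attrs.each do |key, value|
        instance_variable_set("@#{key}", wrap(key.to_s, value))
        define_singleton_method(key) { instance_variable_get("@#{key}") }
      end
    end

    private

    def wrap(key, value)
      case value
      when Hash  then build(key, value)
      when Array then value.map { |v| v.is_a?(Hash) ? build(key.sub(/s\z/, ""), v) : v }
      else value
      end
    end

    # Try to find a client model class for this key; on failure,
    # swallow the error and keep the raw hash.
    def build(name, hash)
      klass = ClientModels.const_get(name.split("_").map(&:capitalize).join, false)
      klass.new(hash)
    rescue NameError
      hash
    end
  end

  # A client model often ended up as nothing more than this.
  class Address < Base; end
end

deal = ClientModels::Base.new(
  "title"    => "Spa Day",
  "address"  => { "city" => "Austin" },
  "payments" => [{ "amount" => 5 }]
)
```

Here the address hash becomes a ClientModels::Address only because that class happens to exist, while payments stays an array of raw hashes. Exactly as warned above: it works, it has no structure, and it should never ship.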
Thank you to Steve Jorgensen for showing me this gem. It is very similar to ActiveModel::Serializers in that, when you're creating an object model, you just give a list of properties you want to encode, and that creates a parser that can take JSON and turn it into one of these objects. So when you get JSON back from your service, you run the model's parser and it gives you an instance of that object.

The next step in creating a service is to create the communication layer: how do your clients communicate with this service? Find a gem. Any gem, really. You can use the standard HTTP library, or any of the various REST gems that exist. My personal recommendation is a gem called Typhoeus, and the reason I recommend it is that it allows parallel requests. You can queue up a number of requests to one API or to many APIs, run them all at the same time, and each will run a callback when its response comes back. That saves you quite a bit of time through the reduction in round trips to the service you're dealing with. So it allows concurrent requests, and it condenses the time it takes to get all of those individual responses.

Again, when dealing with any type of library or gem: wrap it. Wrap it. Seriously. It's a terrible thing to use a gem and not take the time to wrap it so that it's confined to one particular part of your application. Many of us remember going from ActiveRecord 2.3 to 3.x. It was a headache. It was a nightmare, because everything changed, and you had to find every single place you were using ActiveRecord and potentially change it. If you want to hear nightmares about it right now, you can talk to Carrie.
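To make the wrapping advice concrete, here is a minimal sketch of a wrapped communication layer. Everything in it is invented for illustration (DealService::HTTP, the injectable transport lambda); the real Typhoeus interface is a hydra that you queue requests on and then run, but even this toy version shows the two payoffs: swapping the HTTP gem later touches exactly one file, and concurrent requests cost roughly one round trip instead of many.

```ruby
# A thin wrapper: the rest of the app only ever talks to
# DealService::HTTP, so the underlying HTTP gem is confined here.
module DealService
  module HTTP
    class << self
      # Injectable transport; a real app might plug an HTTP gem in here.
      attr_writer :transport

      def transport
        @transport ||= ->(_path) { raise "no transport configured" }
      end

      def get(path)
        transport.call(path)
      end

      # One thread per request, so the wall time is roughly the
      # slowest round trip rather than the sum of all of them.
      def get_concurrently(paths)
        paths.map { |path| Thread.new { [path, get(path)] } }
             .map(&:value)
             .to_h
      end
    end
  end
end

# Stubbed transport for demonstration, standing in for a real HTTP call.
DealService::HTTP.transport = ->(path) { "body of #{path}" }
```

With that in place, a caller asks for several resources at once and gets a path-to-body hash back, never knowing which gem (if any) did the fetching.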
Carrie's dealing with that right now, by the way. After you create the communication layer, there is a bonus step you can do for the people who will be consuming your service: the client models plus the communication layer can be wrapped up into a gem.

The next step is to sever dependencies. Go through and replace any direct database calls between your existing client and service; you're segregating the two and creating a gap. You're also going to have to mind the gap. The important thing to note is that after you create this gap between the service you're building and the client consuming it, you're still in the same code base. So if you want to start testing this, you're going to need a server that can handle multiple requests at once. When I was doing this, there was a period of two to three hours or more that I spent getting timeout after timeout, trying desperately to figure it out with a coworker, and we couldn't for the life of us, until we suddenly realized we had run into a WEBrick wall. WEBrick, being a server that only handles one request at a time, would be working on the initial request and unable to process the call to the same server for the API data, so the main request would be waiting on the service request while the service request waited behind it in the queue, and eventually both would time out.

The next thing you need to do is improve the service's performance. A round trip over a network is nowhere near as fast as a database call. The way I go about this is with a series of tools. I was actually unaware of StackProf, which is basically the new version of the tool I use, built specifically for Ruby 2.1, so thank you, Aaron Patterson, for showing us that yesterday.
Like I said, I personally use PerfTools, because we don't run shiny new 2.1, and I assume most of you don't either. PerfTools will give you a display like this. It's a lot of information to take in. It's terrifying; when I first started using it, I was really confused by it. There's a whole bunch of numbers. The ones on the far left are effectively useless to me: they are how many times the named method itself shows up in the profile samples. The two columns directly to the left of the method names, however, are the ones I use most often. Those are the number of profile samples in which the listed method, or any of its descendant calls, shows up; basically, any time it appears anywhere in the call stack. That is very useful because it gives you an idea of where you should be drilling down into your application. The one thing I will note here is the 64.3 percent in the garbage collector, which is terrifying, but it's important for the next step of what I'm showing.

The next step is finding where you're entering your application. In this particular case, that was ActiveModel::Serializers' to_json, which is line 267, and if you look there, it's 35.7 percent, so that is everything that is not the garbage collector: 35.7 percent of the time is being spent in that method or any of its descendants. Drilling down a little further, this particular profile gets to the deal serializer, and as I drilled into that, building the list of attributes for the serializer took 8 percent of the time, with 5.9 percent of the time spent in the gross_sales method.
Following that down a little further, I eventually got to the main bottleneck, and you can find this pretty quickly as you go through your list of methods: oh, gross_sales calls this other thing, which then calls something else, and that part is usually a glaring indication of where you're spending most of your time. In this case, the calculator's coupon method, at 21.5 percent of my time, was where I focused all of my optimization effort, because it was where the time was actually going. I forget who it was who said they thought, oh, I usually have a good idea about where my problem is, and then we'd try it and get very little back. That used to be me, until I really learned how to use this tool. Now, any time I'm dealing with any type of performance improvement, I reach for this tool immediately, because I am almost universally wrong about where the problem is. There was a time when we made an adjustment to a time method (literally, we weren't memoizing a start time and an end time), and making that adjustment dropped a call from six seconds to three. It was something incredibly minute; I wouldn't have guessed it. So get to know PerfTools, or the other one, StackProf. Tools like those, for profiling your application and really understanding where you're spending your time, are invaluable for saving man-hours.

The last step is to transfer the client and/or the service. This has all been done inside one application, and now it is time, after creating that gap, to pull it apart into two separate code bases. So you extract from the original code base and create two different ones. Oftentimes this involves extracting tables or databases from the existing setup and moving them to another server.
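As a footnote to that profiling story: the six-seconds-to-three fix was plain Ruby memoization, parse once and cache in an instance variable. A minimal sketch; the Deal class, its attrs hash, and the parses counter are invented here purely to show the effect:

```ruby
require "time"

class Deal
  attr_reader :parses

  def initialize(attrs)
    @attrs  = attrs
    @parses = 0   # counts how often the expensive parse actually runs
  end

  # Without ||= this would re-parse the timestamp on every call;
  # inside a per-payment calculation loop, that adds up fast.
  def start_time
    @start_time ||= begin
      @parses += 1
      Time.parse(@attrs["start_time"])
    end
  end
end

deal = Deal.new("start_time" => "2014-04-25T10:00:00Z")
1_000.times { deal.start_time }
```

After a thousand calls, the string has been parsed exactly once.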
Then, if you're dealing with the extracted databases, you have to figure out how to keep multiple databases in sync. Please figure that out; I really have no clue how to do that part. And if you do know, please give a talk on that, maybe. Thanks. And that's all I have.