My name is Mike Grossclose, and today I'm going to talk to you about client-server architectures. Before we get started, a little bit about me: you can find me on most social media as Microfusion. I'm one of the web architects at WeedMaps.com, and there are a couple of other WeedMaps guys in the audience. Our main Rails application is a large, high-traffic Ruby on Rails site. We're currently in the process of migrating it to a microservices architecture, which will allow us to scale with what the industry demands. And we're hiring, so if you're interested, come talk to me after.

Okay, let's get started. So the other day on Twitter, I saw Aaron Patterson say he was worried about the audience getting tired of puns, and this concerned me, because I think Aaron Patterson's puns are a huge part of the Rails culture. So I decided to do a pun raiser on Twitter to raise community awareness around the importance of puns, and also to raise support for Girls Who Code, because I have a five-year-old daughter, and right now she thinks everything I do is boring, but you never know. The pun raiser was successful. I received a lot of puns. In fact, days later, people are still sending me puns, but I am now completely punned out, so there will be no puns in this talk other than the ones you see on the screen.

So a few days ago, I found this app that changed my life. It's an app that allows you to swap faces with other people. I tested it out on Aaron Patterson and DHH. And I thought, since I'm one of the last speakers, why not document my trip here through the medium of face swapping? For example, meet Chris, one of my good friends and office mates. He's sitting right there. This is us on the airplane traveling to RailsConf, and here's us losing money in Vegas during our layover.

So back to client-server stuff. We're at RailsConf, so of course I can't talk about the client-server without talking about the majestic monolith.
And when I talk about the monolith, I'm referring to an application in which the presentation layer, business logic, and database integration are all tied together to address a single problem. There are lots of pros and cons to consider when developing on the monolith. Many of you might have different experiences, so don't treat these as rules so much as things to consider.

So first, let's talk about team structure when working in the monolith. And we can't talk about team structure without bringing up Conway's law, which basically states that teams will produce code that reflects the structure of the organization. Along those lines, I've personally seen the monolith work well for small, tightly integrated teams, say under 10 developers, usually full stack, where we all work in the same code base, we all have the same stand-ups, the same backlog, et cetera, because again, the structure of the team is somewhat monolithic.

Continuing on, that leads to the pros of developing in the monolith. First, we have a single code base to work in, which gives us speed while jumping around the application and makes it easier to refactor functionality between components. A side note here: if you've ever worked in a modular application with multiple packages and repos, you'll quickly find that refactoring code between different services can be a pain, not only to test, but also to deploy, et cetera. In the monolith, checking out the code gives you the entire system, which makes things easier to reason about for smaller applications and implies a simpler architecture. Additionally, you get consistency across the code base. For example, it becomes easier to enforce coding standards, linting, and code complexity analysis, if you use Cane or something like that. Again, all great for small teams.
But as your team grows and everyone is contributing to the same code base, you eventually start getting more merge conflicts, and the code starts becoming more and more complex, especially when people put in code to meet deadlines, which possibly causes them to skip specs, or that much-needed refactor, et cetera. And that's all going to make the code harder to reason about and maintain.

Side note: I was wondering what it would be like to swap faces with a statue. That's what it looks like. Or with me with long hair.

So now let's look at system scalability under a monolithic architecture. First, we can vertically scale the system by increasing the size of our application server and database, by throwing hardware at it. But a point will come where you reach the limitations of vertical scaling, and we end up horizontally scaling out our app servers. We basically do this by creating clones of our monolith, or as Martin Fowler says in his talks, cookie-cuttering our application onto new machines. But at some point, we're going to reach the limitations of our database. I've personally seen the largest RDS instance you can get on AWS at 100% CPU, and when that happens, it is not a good thing. You can't scale out the database, at least in the relational world, the same way you would your app servers, because you need a single source of truth. But what you can do is use read replicas or sharding; there are different techniques. With read replicas, you do all your writes to a primary database, which then replicates itself onto a secondary database. Then you do all your reads from the secondary, taking a portion of the load off the write server and distributing it across the read server.

So let's look at the client flow in this monolith. Under this architecture, viewing a page basically consists of a flow something like this. I'm going to assume you're not using Ajax or Turbolinks or anything.
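Before we walk through that flow, a quick sketch of the read/write split I just described. The class and connection names here are hypothetical, just to show where reads and writes get routed:

```ruby
# Hypothetical sketch: send writes to the primary, spread reads across replicas.
class ConnectionRouter
  def initialize(primary:, replicas:)
    @primary  = primary
    @replicas = replicas
  end

  def connection_for(operation)
    # Writes must hit the single source of truth.
    return @primary if operation == :write

    # Reads can go to any replica; random or round-robin both work.
    @replicas.sample
  end
end

router = ConnectionRouter.new(primary: "primary-db",
                              replicas: ["replica-1", "replica-2"])
router.connection_for(:write) # always "primary-db"
router.connection_for(:read)  # one of the replicas
```

In a real Rails app you'd do this at the connection-handling layer rather than by hand (gems existed for this at the time, and newer Rails versions ship multi-database support), but the routing decision itself is exactly this simple.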
You point your browser to a URL. That turns into a request from the client to the server for that page. The server returns HTML, JavaScript, and CSS with the content of that URL, and the browser renders the page. At this point, any new action causes a new URL request from the client, which starts this flow all over again. Note the tight coupling between the client and the server in this flow, because I'm going to come back to it.

Let's talk about mobile. Native apps have helped move web apps in a new direction. Since native apps are installed rather than served at page load time, they have forced us to think about the client and the server as two very separate entities. We now have to assume that the client can be anything: Android, iOS, a desktop app, et cetera. This creates a new requirement that we have multiple presentation tiers, meaning that we now have multiple ways to communicate with, view, and mutate the state of our monolith. And up until now, integration with external resources has been an afterthought. So what do we do? We bolt an API onto the side. Now we've tacked an API onto our system, maybe presented through something like Grape, and our app has basically started to grow some pretty nasty appendages. And as our product requirements grow, we end up supporting both our APIs and our legacy views.

Additionally, in order for the web to keep up with current expectations, web apps have also evolved, and we see the birth of single-page apps. Single-page apps work to emulate the expectations of native apps in that they attempt to create an experience where you don't leave the page but rather manipulate it. This is all starting to feel very messy. But fortunately, there is a solution, and that is to separate our client concerns from our back-end server concerns. Doing this helps draw a clear separation of responsibilities, where on one side, we have a static asset server serving up HTML, CSS, and JavaScript.
On the other side, we have an API server serving up raw JSON data to the client for it to do with as it pleases. But there's something important here not to be overlooked. In addition to separating our technical concerns and cleaning up the interface between our client and our server, we've also just created a separation of concerns along more human boundaries: basically between the data and business logic side of things and the user experience. This is extremely important. Front end and back end require different head spaces. Even for those of us who go back and forth between the two, there is definitely a context switch involved. This now enables us to have two separate teams, each with different expertise, working on what they do best: the front-end team developing amazing user experiences, and the back-end team focusing on reliability, scalability, data, API best practices, et cetera. At this point, you're probably thinking, man, I'm missing the talk from the Chef guys because it's at the same time.

So the client has left the comfort of the monolith. What does this new world look like, now and into the future? Of course, I can't talk about the client without talking about JavaScript. I love JavaScript, but I'm also not in denial about the language's flaws. It has its strengths, first-class functions, async by nature, but it also has plenty of weaknesses, which I probably don't need to go into detail on, because everybody here is a Rubyist and knows.

So how do we solve this? Let's talk about transpiling. We start with CoffeeScript. Most of us are familiar with it and its similarities to Ruby. It has a clean syntax, it simplifies the language, and it helps avoid a lot of the pitfalls you get into with JavaScript. But the JavaScript language itself has evolved with the latest standards.
ES6 gives us arrow functions, default assignments, template literals, destructuring assignment, and tail call optimization, which allows you to do recursion without blowing the stack. And ES7 has async/await. With Babel, we can transpile our code to ES5, so we can use the latest features of the language in the browser now, without having to wait for browser support to catch up. There are a lot of other transpiled languages: TypeScript, ClojureScript, Dart, to name a few. All this allows us to write more elegant code that runs in today's browsers. So the key takeaway is that transpiling is good. I believe it's here to stay, mostly because the browsers will never keep up with the demand for new and better language features. So I believe we should get used to it and mostly embrace it.

Now let's talk about JS frameworks, because we also have a lot of choices to make. First, we've got Angular. It's an opinionated client-side monolith. Its strength is in its ability to do dependency injection. Although you can use it without, Angular 2 recommends you use TypeScript. Angular 2 has also moved away from scopes and gone entirely component-based, and if you've used Angular 1, then you're very happy about this. Then we have Ember, which is another large, opinionated framework. It has ES6 built in via the Babel transpiler, and it has an extremely powerful router, one so good that the React community ended up adopting it. React by itself is an arrow without a bow. It's not very opinionated. It focuses on being simple and declarative. It's built using reusable components and has a unidirectional data flow. But it's only concerned with the view layer, so you need to pair it with something like Redux to drive the state of your app. There are a lot of other libraries out there: Lodash, which is like the backpack of tools everyone needs, and jQuery, which is like the knife everyone has in their pocket.
Our functional reactive friends: Cycle.js, Bacon.js, ReactiveX. And lastly, a shameless plug for Lore, which is something we just open sourced this week: a convention-over-configuration framework for React/Redux applications. But if you ask this guy, he's going to suggest you use Ember. That's you, Yehuda.

Let's talk about how we deploy our client code. First, in this new client-side world, we no longer want to have to worry about servers. Servers should no longer be a client-side concern. One way to guarantee your front end will basically never go down is to use an AWS S3 bucket to serve it. You can pair it with something like Gulp and script up the deployment. The only downside is that there are some hacks required to get it to work without using hashbang URLs. You can also use something like GitHub Pages, which some of you may have done. Just push your repo to a gh-pages branch, and it will automatically be hosted for you. It's easier to set up than S3, but it also has a few issues: the same hashbang URL problem, plus they don't support SSL for custom domains. Personally, I recommend you use something like surge.sh. It allows you to push your client-side code via the command line, and it also addresses the SSL issue and the hashbang issue.

So now let's talk about the convergence of web and native development. This is kind of a touchy subject for some, but I feel obligated to talk about it. Cordova, Ionic 2, and React Native are a few of the tools available that allow you to build apps for native devices using traditional web tools like HTML, CSS, and JavaScript. So it begs the question, why are people still writing native apps? Well, historically, there's been a hesitation to use these tools because of the performance of JavaScript. It's hard to emulate the performance of a native experience, but this is changing. For example, we used React Native to write PocketConf, and it works great.
So why should we converge native app development and the web? Well, first, it removes the specialization required to build native apps. Read: no one wants to write Objective-C. It also gives consistency to the experience across platforms, and it allows you to share components and code across the code base. And like I said a second ago, performance is no longer an issue.

Before moving away from the client, I have to quickly talk about something we often hear in the JavaScript community, and that is JavaScript fatigue. In my opinion, a lot of the fatigue we're seeing is due to the modularity in the ecosystem, combined with the churn in the interfaces. I love JavaScript, and I love the modularity, so the modularity itself is not really a bad thing. But the way I see it, the more modularity we have and the more churn we have in our interfaces, the more frustration the community has. So how do we avoid this? Probably the best way is to know that when you're using the cutting edge, there is a high probability that your interfaces will change. We see this happening a lot right now in the React ecosystem. Also, lock your dependency versions. I've seen many times where a patch update that's supposed to be backwards compatible ends up breaking things.

So to summarize this section: the client is about creating human experiences. We need tech to get out of the way of creating these experiences, which is part of the growing pains of this fast-growing ecosystem. Choices are great, but without a lot of opinions, things still feel very modular, and that has a cost.

It's probably time for some more face-swap pictures. There's Justin. There's Alex. There's JCK. And last night we went to Joe's Kansas City Bar-B-Que before going to karaoke. It was amazing barbecue, so we had to get some pictures there.

Okay, let's talk about the server, which means we should start by talking about monolithic APIs.
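One quick concrete aside on that version-locking advice before we dig into APIs. On the Ruby side, the stdlib's Gem::Requirement lets you check exactly what a Gemfile constraint allows; the version numbers here are made up for illustration:

```ruby
# "~> 4.11.0" (the pessimistic operator) allows patch updates
# but blocks minor and major bumps.
req = Gem::Requirement.new("~> 4.11.0")
req.satisfied_by?(Gem::Version.new("4.11.2")) # => true
req.satisfied_by?(Gem::Version.new("4.12.0")) # => false

# An exact pin blocks everything, including the "backwards compatible"
# patch release that breaks you anyway.
exact = Gem::Requirement.new("1.7.2")
exact.satisfied_by?(Gem::Version.new("1.7.3")) # => false
```

The same principle applies on the JavaScript side: pin exact versions in package.json, or use a lockfile, so a surprise patch release can't sneak into a deploy.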
Monolithic APIs have many of the same pros as the monolith we discussed at the beginning, but the key is that we've removed some of the cons by removing the client, and with it all the client-side concerns. We can easily build a monolithic API by using the new Rails 5 API mode. With the Rails API, you get many of the same benefits we've come to know and love, like convention over configuration, while still being lighter weight than a full Rails app. Additionally, scaling a monolithic API is very similar to scaling our normal monolith, with many of the same pros and cons.

But at some point, your team will start to grow, and that leads us to microservices. Microservices have some of the same problems that modularity in the JavaScript community can have: you need to have stable, clearly defined interfaces before you start. For that reason, I personally believe that using microservices is about scale. They're an optimization technique used to scale your code and your organization, and we all know you should always be cautious about prematurely optimizing.

So why would we want to avoid using microservices right off the bat? Well, let's talk about the potential pitfalls. First, the interfaces are harder to refactor once they're in place. And whereas in the monolith you may have shared services, queries, tools, and task runners, with microservices additional effort is going to be required to share code between individual services. Latency between systems is now a much bigger issue, because rather than calling other methods in your own code, you're going to have to go across the wire to get the data you need. Then there's DevOps and systems. Historically, I've seen companies underestimate the importance of DevOps.
So if you feel like you have a few systems not getting enough DevOps love, then imagine scaling that to many systems that heavily depend on each other. Additionally, with the DevOps side of things comes the complexity of monitoring the health of your system: being able to predict when something is going to go wrong, and knowing how to recover when it does. And lastly, developers have a tendency to build something and move on to the next project. With microservices, you want to keep all your systems to the same standards. Basically, the philosophy is that you're only as strong as your weakest link, and this takes a lot of effort, because there's often this build-and-forget mentality when using microservices.

But given all that, there will come a time when your application reaches a point where you need microservices for scale. So let's talk at a high level, real quick, about how one might do that. Within your application, you have a number of domain-driven models that can be grouped by bounded contexts. By breaking up your app along these boundaries, you create a number of smaller apps, or services, with explicit interfaces between them. They should communicate with each other and the outside world using lightweight protocols like REST, because you want to be of the web, not behind the web. Each microservice should have its own database, or databases, to ensure that the microservices are autonomous units.

So now we have a list of atomic services, each with its own normalized database, but there is a problem: we don't want the client to have to do a bunch of queries just to get the data it needs. For example, let's say we want to get all the circles that belong to a triangle of a specific color. As it currently stands, we first have to query the triangle service to get the triangle ID, and then query the circle service with that ID. Instead, we can add a denormalized data tier, such as Elasticsearch, Neo4j, things like that.
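Here's roughly what that two-hop triangle/circle lookup looks like from the client's point of view. The services are stubbed out as hashes here to keep it self-contained; in a real system each fetch would be an HTTP round trip:

```ruby
# Stand-ins for the two microservices (each would be an HTTP call in reality).
TRIANGLE_SERVICE = {
  "red" => [{ id: 1, color: "red" }]
}
CIRCLE_SERVICE = {
  1 => [{ id: 10, triangle_id: 1 }, { id: 11, triangle_id: 1 }]
}

# Without a composition layer, the client has to chain both hops itself.
def circles_for_triangle_color(color)
  triangles = TRIANGLE_SERVICE.fetch(color, [])               # hop 1: triangle service
  triangles.flat_map { |t| CIRCLE_SERVICE.fetch(t[:id], []) } # hop 2: circle service
end

circles_for_triangle_color("red").length # => 2
```

A composition service moves that chaining to the server side, so the client makes a single request, and a denormalized store like Elasticsearch takes it further by answering the whole question in one lookup.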
And then we add a layer of composition services above that. The composition layer allows us to prepare data in a way that might be more suitable for the consumer. And this all goes back to Conway's law, because the beauty of this is that each service can have its own team, which then has its own data storage and potentially different technologies. The only thing that really matters is the reliability of each system and the interfaces between them.

This one is the good one. Did you ever wonder what it's like to swap faces with twins?

At this point, I want to talk to you about the future. I originally had a section here about serverless architecture and backend as a service, like the one I've been building, Storcery.com. But then yesterday, I went to an incredible talk which got me thinking, and as a result, a few hours ago, I threw away the end of my slide deck and replaced it with this. If you're wondering what that talk was, it was so good that it made me modify my talk. It was this one, guys. If you didn't see it, you should. I need to recap it for some context, so, obligatory spoiler alert. The TLDR of the talk was basically that Ruby is no longer the new hotness, and the cool-kid language is now Node.js. We're seeing fewer people talking about Ruby, fewer people hiring for Ruby, and as a result, fewer people building things with Ruby. Basically, fewer people carrying the Ruby torch.

Now, I don't have a consulting company, and I actually do a lot of stuff in Node. So then why would I change my talk at the last minute to tell you about this? Well, it's because I love the Ruby community, and I truly feel we have a story to tell. I think we have the ability to change the conversation, and it starts with changing the way we talk about and use Rails. For example, we do need to start splitting apart our client and server concerns. You want to use that awesome new JavaScript framework? That's great. With a Rails API, we can spin up a backend for you in minutes.
You're worried about scaling because Node microservices can scale? Cool, we can set up Rails API microservices too. But you're worried about performance, because other languages are faster? That is also true, but in most situations, the response time delay is going to come from the latency of going across the wire, either between your microservices or between your server and your database. The speed of your language starts to become negligible unless you're doing crazy algorithms. So let's start using Rails for more than the monolith, and let's start changing the conversation about how we use Rails to scale architectures.

So in summary, we're on the edge of creating some really great stuff. Every day I look forward to the challenges that await us, and I hope you do too. So, in the words of DHH, let's go out and put a dent in the universe. Thank you.