All right, hey, everyone. I'm Derek. You can catch me on Twitter at Derekco. Before we start, let me tell you a bit about myself. I've been building web products for over 10 years. I did my first startup in PHP. After that tanked, I joined perhaps the only startup that uses .NET. That didn't work out so well either. Then I saw the light with Ruby. Shortly after that, I joined Pivotal Labs, a consultancy you're probably familiar with. After spending some time there, I left at the end of last year and joined Kicksend as their first employee. At Kicksend, I work on product, and I do full-stack web development. That's the front of our office in Mountain View. We'll talk more about Kicksend in a bit, but first let's look at some trends happening on the web right now. User experience is a differentiating factor nowadays, as evidenced by the heavy emphasis on design and user interaction. This has spawned a whole pattern of single-page apps with snappy performance; users just expect that now. Apps are also increasingly incorporating real-time elements, such as the updates and notifications you see on Facebook, Quora, and Twitter. These kinds of real-time features increase user engagement as well as the overall experience. Products also have to think about supporting multiple clients. If you're building a consumer app, you almost certainly have to consider an offering on iPhone, Android, and even tablets nowadays. The market has responded to these trends with a slew of new frameworks and platforms. Derby, Meteor, and Firebase are real-time backends, and some even provide front-ends. Meteor and Firebase combined have raised over $10 million in cash, which goes to show how hot this market is right now. Ember and Spine are relatively new, popular JavaScript frameworks that help build these kinds of rich front-end client apps, and there are many more great ones out there. So, back to Kicksend.
At Kicksend, we make it easy to send photos and videos privately from any device to anyone. We have apps across every platform, and we deliver files in real time. So all the trends I just mentioned apply to us too. Here's a breakdown of what we use. Rails: whatever you say about Rails, it still makes a solid API server for a rich client app. Our web front-end uses the very same API that our Android app, our desktop app, and our iPhone app use. There's no difference, and that keeps things consistent. Backbone is the most established front-end web framework out there. It adds structure to your JavaScript code while being lightweight, easily customizable, and flexible. Here's a rough comparison between the components of Rails and Backbone, for people who aren't too familiar: templates map to views, routers to controllers, models to models, roughly like that. Backbone isn't really MVC; it's just the Backbone way. The closest thing to a Backbone view is probably the iOS view controller. We use Handlebars on top of Backbone, which gives us logicless templates. They're compiled, which makes them very performant on the client side, and since they're logicless, they force us to decouple our view logic from the actual presentation layer. We use XMPP, which is a popular short form for Extensible Messaging and Presence Protocol. Why do we use it? Performance. The server we use is called ejabberd. It's I/O-bound and not CPU-bound, which is what we want from a scalability perspective. It also uses presence-based delivery, which handles load really well, and it's one of the most mature real-time technologies out there. But you don't have to use XMPP; there are other services. The quickest one to hit the ground running with is Pusher, which runs on WebSockets. A slightly lesser-known one is HTML5 server-sent events, which has support in Rails 4 and most proper browsers. You should check them out.
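To make "compiled, logicless templates" concrete, here's a minimal sketch of the idea in plain JavaScript. This is not Handlebars' actual implementation; the `compile` helper and its `{{name}}` syntax are just illustrative of how a template compiles once into a function, so each render is a cheap function call with only property lookups and no logic.

```javascript
// Minimal sketch of a compiled, logicless template: the template string
// is parsed once into a render plan, so rendering is just a function
// call with property lookups -- no parsing, no embedded logic.
function compile(template) {
  const parts = template.split(/(\{\{\w+\}\})/);
  // Precompute the plan: literal strings and property lookups.
  const plan = parts.map(function (part) {
    const match = part.match(/^\{\{(\w+)\}\}$/);
    return match
      ? function (ctx) { return String(ctx[match[1]]); } // lookup only
      : function () { return part; };                    // literal text
  });
  return function (ctx) {
    return plan.map(function (fn) { return fn(ctx); }).join("");
  };
}

// Usage: compile once, render many times with different data.
const listItem = compile("<li>{{name}} ({{count}} members)</li>");
const html = listItem({ name: "Family", count: 4 });
// html === "<li>Family (4 members)</li>"
```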
CoffeeScript. I know there's a lot of debate about this, but hear me out first. CoffeeScript is a syntax layer on top of JavaScript. It hides the quirks and nuances of JS, and it prevents common mistakes. Much like Ruby, it's fun to write in, and it focuses on developer productivity. I personally like the idea of an intermediary layer that compiles down to optimized JavaScript, which is especially useful as JavaScript itself evolves. So let's dive into some concepts and best practices that we use. We'll be looking at a lot of Rails and Backbone code, but many of the principles I'll cover today hold regardless of the framework you're using. A list. A list in Kicksend helps you quickly send photos to the groups of people you care about the most. Very simple concept. A list is backed by an API call with a simple response: it just contains a name and a collection of members. When creating API endpoints like this, all your best practices still apply. They don't change. We're talking about thin controllers, fat models, RESTful routes. And this even applies to your views: your API responses are your views, and you should treat them as such. You should have a presentation layer. In other words, don't use to_json to generate your responses; that's like putting HTML in your models. Just don't consider it. That means using things like Jbuilder, which is included in Rails, or RABL, which is what we use. Having this layer helps you eventually. Let's say you decide to change your response format down the road from JSON to XML, or even to MessagePack; this layer abstracts away the final response format. It's also really simple. This is an example RABL response. It's pretty straightforward: essentially, I'm declaring the model attributes I want included in the JSON response that's passed on to the client. All right, back to our lists. In the browser, we hit this page with a URL, much like the one you see.
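RABL and Jbuilder are Ruby, but the presenter idea translates to any language. Here's a minimal JavaScript sketch of the same principle; `makePresenter` is a hypothetical helper, not a real library API. The point is that the model never serializes itself, and the final output format lives in exactly one place.

```javascript
// Sketch of a presenter layer (the idea behind RABL/Jbuilder): the
// presenter declares which attributes are exposed, so serialization
// logic never leaks into the model.
function makePresenter(attributes) {
  return {
    // Build a plain object containing only the declared attributes.
    present: function (model) {
      const out = {};
      attributes.forEach(function (attr) { out[attr] = model[attr]; });
      return out;
    },
    // Swap this one method to emit XML, MessagePack, etc. down the road.
    render: function (model) {
      return JSON.stringify(this.present(model));
    }
  };
}

// Usage: a hypothetical list presenter exposing only id and name.
const listPresenter = makePresenter(["id", "name"]);
const body = listPresenter.render({ id: 1, name: "Family", secretToken: "xyz" });
// body === '{"id":1,"name":"Family"}' -- secretToken never reaches the client
```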
Note the URL fragment, list/1/edit. This fragment is processed as a route through the Backbone router. A Backbone router is very similar to Rails' routes.rb in that routes map to methods. In our edit-list route handler, we instantiate a list model with the passed-in ID. This is then used to form a REST URL that the model points to. A Backbone model basically encapsulates and abstracts all the data and interactions of your server-side API. Here's how Backbone model operations map to the corresponding REST calls. It's very straightforward; it's what you'd expect. As we've seen, a list can have many members, so we set up an instance variable on the model that contains this collection of members. Backbone has no concept of relations, built-in ones at least, so this is a technique we use. When members get loaded into the list model, a change event is fired. We bind the loadMembers event handler to it, which loads the members into the members collection instance variable. Let me stop for a bit and explain what a collection is. This is a collection. It's very simple: a collection basically contains many models of a type you specify. You just point it to the API endpoint, and it does the appropriate fetching and initialization of the internal models. A good practice is to have a base collection and a base model, much as you have an ApplicationController that all your other controllers inherit from. That's also where you include any helper methods. As you can see, our collection inherits from KSCollection, which in turn inherits from Backbone.Collection. Here's how the methods map. Again, it's really straightforward. After instantiating a model, we pass it into the view to get it rendered. Note how we decouple our view from the model itself. Views control the interaction and state of what users see based on the model data. They have a direct correspondence to a DOM element on the page.
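The model-to-REST mapping can be sketched like this. In real Backbone the mapping lives inside Backbone.sync; this hypothetical `restCall` helper just computes the verb and URL each operation would use, which is the whole mental model.

```javascript
// Sketch of how a Backbone-style model maps CRUD operations to REST
// calls (fetch -> GET, save -> POST/PUT, destroy -> DELETE). In real
// Backbone this logic lives in Backbone.sync.
function restCall(model, operation) {
  const base = model.urlRoot;                          // e.g. "/api/lists"
  const isNew = model.id === undefined;                // no id => not yet persisted
  switch (operation) {
    case "fetch":   return { method: "GET",    url: base + "/" + model.id };
    case "save":    return isNew
      ? { method: "POST", url: base }                  // create
      : { method: "PUT",  url: base + "/" + model.id }; // update
    case "destroy": return { method: "DELETE", url: base + "/" + model.id };
  }
}

// Usage with a list model pointing at a hypothetical /api/lists endpoint.
const list = { urlRoot: "/api/lists", id: 1 };
const call = restCall(list, "fetch");
// call = { method: "GET", url: "/api/lists/1" }
```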
We like to keep our views self-contained, so they know the ID of the DOM element and the template that's used to render them. Keeping your views self-contained makes them easy to reuse, especially if you're nesting views. OK, so we're ready to render out our list view, and it likely depends on a couple of server-side calls to populate the entire page you see here. So we want to load the whole interface progressively. On first hitting the page, we render out the static, non-data-bound sections of the view, in other words just the chrome of it. We use the view's render method as such, basically just dumping HTML into the page. Once the list request completes, we fill in the list name on the page. Basically, we bind this renderListInfo method to the model's change event, which is triggered the moment the list has been fetched from the server. Just as before, we fill in the DOM element for the list name with the actual list name. Let's look at populating the view with list members on the left and possible list members, in other words your friends, on the right, once all the requests are complete. As seen earlier, members are collections. Collections trigger a reset event when they're loaded, so we bind them to the relevant render methods as well. As you can see here, for example, when possibleMembers has loaded, renderListPossibleMembers is triggered. Our render methods are very straightforward and atomic as well: we just iterate through the collections and append to the page. So, a quick recap of what we've covered. On the server side, Rails best practices still apply, and we use a presenter layer like RABL or Jbuilder. On the front end, we talked about subclassing Backbone base objects, keeping your views atomic and decoupled, and rendering your views progressively as data comes in. So we've rendered a view. But how do we load it cleanly into our whole page?
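The progressive-rendering flow above can be sketched with a tiny event emitter standing in for Backbone's model and collection events; the `page`, `list`, and `members` objects here are illustrative stand-ins, not Kicksend's actual code. Chrome renders immediately, and each data-bound section fills in whenever its request happens to complete.

```javascript
// Sketch of progressive rendering: static chrome first, then sections
// fill in as each request completes, via "change" and "reset" events.
function makeEmitter() {
  const handlers = {};
  return {
    on: function (event, fn) { (handlers[event] = handlers[event] || []).push(fn); },
    trigger: function (event, data) {
      (handlers[event] || []).forEach(function (fn) { fn(data); });
    }
  };
}

const page = { chrome: "", listName: "", members: [] }; // stands in for the DOM
const list = makeEmitter();     // stands in for the list model
const members = makeEmitter();  // stands in for the members collection

// 1. Static chrome goes in right away, before any data arrives.
page.chrome = "<div id='list'><h1 id='name'></h1><ul id='members'></ul></div>";

// 2. When the list model fires "change" (fetch completed), fill in the name.
list.on("change", function (attrs) { page.listName = attrs.name; });

// 3. When the members collection fires "reset", append each member.
members.on("reset", function (models) {
  models.forEach(function (m) { page.members.push(m.name); });
});

// Simulate the server responses arriving, in any order.
members.trigger("reset", [{ name: "Mom" }, { name: "Dad" }]);
list.trigger("change", { name: "Family" });
// page.listName === "Family"; page.members = ["Mom", "Dad"]
```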
So, for Kicksend at least, we have a main container which changes during the lifetime of the app depending on where you click. If you click on the inbox, we just fill in this main container; it's a single-page app. As seen in our router, we load the view by passing it into a view manager object. We use a view manager to manage the application's view state. For starters, the view manager adds a close method to all our Backbone views. What it does is remove the content from the page and unbind all events. It also triggers a callback on the view if one is specified; this is where we put any view-specific cleanup logic. Every time a new view is loaded, the view manager destroys the previous view and replaces it with the one you're trying to load. This prevents any stray bindings and memory leaks, which are often caused by circular references in JavaScript callbacks. All right, now that we have the basics of a rich app in place, let's talk about improving performance. Let's take a break from our list and look at one of Kicksend's main views, the inbox grid. You log into Kicksend, and it displays all the photos you've received. This view is backed by the /api/deliveries endpoint. Each delivery pulls from multiple tables and gets rendered to JSON. Multiply this out, and it becomes a really expensive call just to render this page. And this is only the top half; we pull 20 of these at a time. That said, our average response time is about 100 milliseconds. How do we do that? Simple: we cache aggressively. We achieve that with two levels of caching. We use memcached hooked up to Rails, a very typical setup. The higher level of caching we use is action caching. Here, we're caching the index method, the one you saw earlier, which is based on the current user's deliveries. We have a helper that generates a custom cache key.
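The view manager pattern can be sketched as follows; `makeViewManager` and `makeView` are hypothetical names, and the boolean flags stand in for real DOM emptying and event unbinding. The invariant is the important part: showing a new view always closes the previous one first, so stray bindings can't accumulate.

```javascript
// Sketch of a view manager that swaps the main container's view. Every
// view gets a close() that empties its element and unbinds handlers,
// so loading a new view never leaves stray bindings behind.
function makeViewManager() {
  let current = null;
  return {
    show: function (view) {
      if (current) { current.close(); }   // destroy the previous view first
      current = view;
      view.render();
    }
  };
}

// A hypothetical view shape with render/close and an optional cleanup hook.
function makeView(onClose) {
  return {
    rendered: false,
    closed: false,
    render: function () { this.rendered = true; },
    close: function () {
      this.closed = true;                 // stands in for emptying the DOM
      this.rendered = false;              // and unbinding all events
      if (onClose) { onClose(); }         // view-specific cleanup callback
    }
  };
}

// Usage: navigating from inbox to lists closes the inbox view first.
const manager = makeViewManager();
const inbox = makeView();
const lists = makeView();
manager.show(inbox);
manager.show(lists);
// inbox.closed === true, lists.rendered === true
```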
This custom cache key is based on ActiveRecord's cache_key attribute. It's available on all models, and it's basically derived from the updated_at timestamp. This lets us do something called key-based cache expiration, which essentially means objects are cached under their cache key, and we then rely on memcached to sweep out unused keys. We generate our custom cache key from the cache_key of the most recently updated record, plus the total count of the collection in that query. So it's two index queries on a cache hit, which is a lot faster than loading the entire payload from the database. The next type of caching we use is view caching. In our case, all views are RABL templates. This is the template of a single delivery, and here is its representation in RABL, which gets generated as JSON. We specify the attributes we want from the delivery model, so you see receiver type, item type, and so on. The sender, the receiver, and the item are nested ActiveRecord objects, and RABL allows us to render them with nested templates, which is very good for code reuse. We have a deliveries template, which renders a collection of deliveries based on the individual delivery template we specified above. If you were looking carefully, you'll see that we declared the delivery as cacheable. RABL will cache the generated response based on the delivery object's cache key. We applied the same caching treatment to the user and photo templates. So every time there's a cache hit, we don't hit the database, and RABL doesn't waste time generating a response from scratch. In some cases, we have uncacheable data, so we just wrap it in another RABL object. The static part comes from cache, as you can see with delivery_base, and the dynamic part gets generated as usual, so there's still a net gain. If this looks familiar, it's also called a Russian doll setup, to quote DHH.
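The collection cache key idea, shown as a JavaScript sketch (the Rails version uses SQL `MAX(updated_at)` and `COUNT(*)`; the key format here is illustrative): the key combines the newest timestamp with the record count, so any update, insert, or delete produces a new key and the stale cache entry is simply never read again.

```javascript
// Sketch of key-based cache expiration for a collection: the key is
// derived from max(updated_at) plus count, so the cache never needs
// explicit invalidation -- changed data just produces a new key.
function collectionCacheKey(name, records) {
  if (records.length === 0) { return name + "/empty"; }
  // Two cheap index-style lookups: the newest timestamp and the count.
  const newest = records.reduce(function (max, r) {
    return r.updatedAt > max ? r.updatedAt : max; // ISO strings sort lexically
  }, records[0].updatedAt);
  return name + "/" + newest + "-" + records.length;
}

// Usage: the key changes whenever the collection changes.
const deliveries = [
  { id: 1, updatedAt: "2013-01-10T09:00:00Z" },
  { id: 2, updatedAt: "2013-01-12T17:30:00Z" }
];
const key = collectionCacheKey("deliveries/user-42", deliveries);
// key === "deliveries/user-42/2013-01-12T17:30:00Z-2"
```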
So we'll see it in action right now. This is a fully abstracted cache state of our deliveries API endpoint. Each component of the response is cached, starting from the photo and the sender, which combine to form a delivery, and finally many of these cached deliveries form the action cache. If a sender gets updated, its cache key gets updated, and there's a cache miss. Since the sender object touches the objects it's related to, it expires all the related objects up the chain; everything else is left intact. In this case, it expires the delivery that references it and the action cache that references the delivery. So the expiration basically ripples up the chain. And if you introduce a new delivery, only the action cache expires; RABL can then build up the action cache response from the deliveries, the blue ones, that are already in cache, saving time overall. Even with the server side optimized, there's still lag due to round trips from the client to the server, so let's look at improving perceived performance, which goes a long way in terms of user experience. We always assume success. I'll show you a quick demo. This is how you add friends to a list. As you can see, it adds really quickly, even on this conference Wi-Fi, and we can remove really quickly as well. That's a very short demo, and I'm lucky nothing went wrong, so we'll head back to the slideshow. How do we do that? First, we bind the events to the right elements, in this case the add and remove buttons you saw earlier. Once you click add, we immediately add that member to the members collection, and we bind the add event to a render, which renders it immediately, before the server call has completed. In other words, like I said just now, we always assume it's going to succeed, so to the user, it's immediate.
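The "assume success" pattern can be sketched like this; `addMemberOptimistically` and `saveOnServer` are hypothetical names, and `saveOnServer` is a synchronous stand-in for the real asynchronous API call. The member appears instantly, and the rare error case rolls the change back.

```javascript
// Sketch of optimistic updates: add and render immediately, confirm in
// the background, and roll back only if the server call fails.
function addMemberOptimistically(collection, member, saveOnServer) {
  collection.push(member);       // the add event triggers a render,
  member.pending = true;         // so the user sees the member instantly
  saveOnServer(
    member,
    function onSuccess() { member.pending = false; },  // enable remove, etc.
    function onError() {                               // roll back the change
      const i = collection.indexOf(member);
      if (i !== -1) { collection.splice(i, 1); }
    }
  );
}

// Usage: one request succeeds, one fails and is rolled back.
const members = [];
const serverOk   = function (m, ok, err) { ok(); };
const serverFail = function (m, ok, err) { err(); };

addMemberOptimistically(members, { name: "Mom" }, serverOk);
addMemberOptimistically(members, { name: "Bad" }, serverFail);
// members holds only Mom, fully enabled (pending === false)
```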
But we only enable the member for further interaction, like removing it from the list, once the request has completed. In Backbone, this is determined by the sync and destroy events. And in the rare event of an error, we have a handler to roll back the changes. So in the end, to the user, it's just a really seamless, fast experience. Moving on: you should also be greedy about loading data, especially data that users will potentially use. Let's look back at our list example. When we first load the list page, we make a call to retrieve that individual list. We then load all the lists into an app-wide collection, since we expect users to perhaps browse back and look at the other lists. And lastly, we fetch all the collections you'll use, such as your friends list and your inbox. That way, when you navigate to other parts of Kicksend, you get a super-snappy experience. For this to happen, you need to keep all your data in sync. Between step one and step two earlier, you have two copies of list two in your runtime: one as a standalone model, and one in the app-wide collection of lists. Since this global app-wide collection is used by other views, it's the master copy that we need to sync to. We do so with a little helper we wrote called syncToCollection. We bind change events on the local model to the associated model in a specified collection, in this case the app-wide collection. In our case, as you see here, it's just declaring the model and syncing it to the app-wide collection. All right, recap: we improved performance on the server side through aggressive caching, and through tricks on the client side to make things smoother for the user. At this point, we have a really snappy front end and back end for the app. Now let's talk about layering on real time. We start with the XMPP server of choice. In our case, it's ejabberd, which is written in Erlang. On the front end, we have Strophe.
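A sketch of what a syncToCollection helper might look like; the model shape here (`set`, `onChange`, an `attributes` bag) is a simplified stand-in for Backbone's, not Kicksend's actual implementation. Changes on the standalone model propagate to the matching model in the app-wide master collection, so the two in-memory copies never drift apart.

```javascript
// Sketch of syncToCollection: bind the local model's change events to
// the matching model in the app-wide "master" collection.
function makeModel(id, attributes) {
  const changeHandlers = [];
  return {
    id: id,
    attributes: attributes,
    set: function (attrs) {
      Object.assign(this.attributes, attrs);
      const self = this;
      changeHandlers.forEach(function (fn) { fn(self); }); // fire "change"
    },
    onChange: function (fn) { changeHandlers.push(fn); }
  };
}

function syncToCollection(model, collection) {
  model.onChange(function (changed) {
    const master = collection.find(function (m) { return m.id === changed.id; });
    if (master && master !== changed) {
      Object.assign(master.attributes, changed.attributes); // keep master in sync
    }
  });
}

// Usage: a standalone copy of list 2, plus the app-wide collection's copy.
const appWideLists = [makeModel(1, { name: "Family" }), makeModel(2, { name: "Work" })];
const standalone = makeModel(2, { name: "Work" });
syncToCollection(standalone, appWideLists);
standalone.set({ name: "Work friends" });
// appWideLists[1].attributes.name === "Work friends"
```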
Strophe is an XMPP library written in JavaScript, and it's the real-time entry point to our Backbone client. It communicates via BOSH, or Bidirectional-streams Over Synchronous HTTP; it's a long name. BOSH is a transport layer commonly used for XMPP. The XMPP connection is kept open via HTTP long polling, and it essentially pipes messages, or in XMPP language, stanzas, from one client to another. By the way, I just read yesterday that long polling is broken on iOS 6 Safari, so if anyone out there is using it, find a backup. BOSH streams, like all XMPP streams, have to be authenticated. This authentication process takes about five round trips to the server, which is extremely heavy for any client, especially mobile clients. So a trick we use is to pre-authenticate them. Using a library called RubyBOSH, we create a pre-authenticated stream between our servers, basically between our Rails server and the XMPP server. Since it's on the server side, probably over a very fat pipe, there's very low round-trip latency. The client then receives these authenticated credentials from the Rails server, hands them over to Strophe, and Strophe uses the credentials to open a BOSH connection with ejabberd. So it's only one round trip for the client, and it also saves us from exposing the XMPP credentials on the client side; if anyone tries to snoop around with Firebug, they can't find anything. RubyBOSH is really straightforward to use. We expose it as an API call that the client calls asynchronously. It's actually written by the co-founder of Kicksend, and you can check it out at this GitHub URL. Once our stream is established, we must handle all these real-time events in our client app. We do so with a real-time handler object. To get one going, we really just need two libraries: Strophe, as I mentioned just now, and xml2json.
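The pre-auth flow can be sketched as two steps; every name here is illustrative (this is not RubyBOSH's or Strophe's actual API), and the jid/sid/rid fields mirror the session handles a BOSH attach generally needs. The server does the expensive multi-round-trip authentication over its fast pipe, and the client attaches to the already-authenticated stream in one round trip without ever seeing the XMPP password.

```javascript
// Sketch of the pre-auth handshake: the Rails server authenticates with
// the XMPP server (about five round trips, all server-side), then hands
// the client session handles only -- never the password.
function preAuthOnServer(xmppPassword) {
  // Stands in for RubyBOSH negotiating a BOSH session with ejabberd
  // using the secret password; only session handles are returned.
  return {
    jid: "user42@xmpp.example.com/web", // full JID for this session
    sid: "session-abc123",              // BOSH session id
    rid: 1749821                        // next request id in the stream
  };
}

function attachOnClient(credentials) {
  // Stands in for Strophe resuming the pre-authenticated stream with
  // jid/sid/rid -- one round trip, no password on the client.
  return {
    connected: true,
    jid: credentials.jid,
    passwordExposed: false
  };
}

// Usage: the secret stays server-side; the client gets handles only.
const creds = preAuthOnServer("xmpp-secret");
const connection = attachOnClient(creds);
// connection.connected === true, and creds carries no password field
```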
Unfortunately, XMPP handles everything in XML, so we need this library to convert it to JSON so we don't go insane. Our real-time handler class is functionally similar to a Backbone router. As the main real-time entry point to the app, it establishes the XMPP connection. As you can see here, the session ID and session JID are part of the auth credentials I spoke of earlier, which we got from our pre-auth dance. Just as we would with Backbone route handlers, we attach event handlers to the appropriate types of XMPP messages. In this case, we're handling a message-type stanza called deliveryReceived. These deliveries get rendered immediately into our views, because we'll be listening on them for add events. So the effect is that the moment the client receives a deliveryReceived stanza, it renders it straight in. That's how you get that real-time effect. And this concept of a real-time handler applies whether you're using XMPP or any other real-time service. So: we use pre-authenticated BOSH streams, and we have a dedicated, router-like interface as a real-time handler. Moving forward, there are various ways to improve this entire stack. As a pure API server, Rails can get bloated, so there are ways of simplifying your backend stack, for example by building on Sinatra; I think Sprite is taking that approach. And the entire Backbone front-end is essentially static code with static assets. We could use a static site generator like Middleman, compile it down, and serve our front-end assets over a content delivery network like CloudFront, which really speeds things up for users around the world. Another front-end optimization is to split your JavaScript into modules and load them on demand using a library like RequireJS. But unless you have an extremely large front-end, this would likely be overkill, which brings me to my final point: use the solutions that work best for you.
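The router-like real-time handler can be sketched like this; `makeRealtimeHandler` is a hypothetical name, and the stanza objects here are already-converted JSON, not actual XMPP wire format. Incoming stanzas are dispatched to handlers by type, just as a Backbone router maps URL fragments to methods.

```javascript
// Sketch of a real-time handler: a routing table from stanza type to
// handler function, so each stanza renders the moment it arrives.
function makeRealtimeHandler() {
  const routes = {};
  return {
    // Register a handler for a stanza type, e.g. "deliveryReceived".
    on: function (type, handler) { routes[type] = handler; },
    // Dispatch an incoming stanza to its handler; ignore unknown types.
    dispatch: function (stanza) {
      const handler = routes[stanza.type];
      if (handler) { handler(stanza.payload); }
    }
  };
}

// Usage: render a delivery into the inbox the moment its stanza arrives.
const inbox = [];
const realtime = makeRealtimeHandler();
realtime.on("deliveryReceived", function (delivery) { inbox.push(delivery); });

realtime.dispatch({ type: "deliveryReceived", payload: { id: 7, photo: "cat.jpg" } });
realtime.dispatch({ type: "unknownStanza", payload: {} });
// inbox.length === 1 and inbox[0].id === 7
```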
So at Kicksend, we use technologies that allow us to move fast while meeting our product requirements. When you're building a rich real-time app, don't over-engineer; pick and choose wisely. Thanks a lot, and I'm open for questions right now.