It's the 4:30 mark, a couple of people coming in. Hi guys, awesome. Welcome. My name's Josh, or if you want to be really formal, Josh Kalderimis. You can find me on GitHub or the Twitters, or you can just call me Mr. Kalderimis and find me either way. This is me with Anthony. He's a sexy guy. I'm from the Travis team. But before we get started, it's the afternoon, you've just had some donuts and coffee, so would you all do me the favor of standing for a second? Everyone, everyone in the back with the laptops. Yep. Yep, everyone in that section. Yep, perfect. OK. So I want you to stay standing if you are logging anything from one of your apps. It could be Rails, it could be an agent. You're using logging. I'd expect this. There's one person that sat down. That's a shame. I want you to stay standing if you're using New Relic in any shape or form. OK? I want you to stay standing if you're using custom metrics — for example, Librato or Graphite, StatsD. Can we stand back up? No? Sorry. OK. Now, I want you to stay standing if you're using custom logging, custom New Relic, custom metrics, and a dashboard tying them all together. Boom goes the dynamite. Thank you, guys. OK. And the people that were standing up, you can just feel good. And maybe I'll give you a sticker later. Just feel good. OK, now you can sit. OK, so two small things before I get my proper talk started. I want you to meet the Travis team. This is technically our official outfit. We wear a onesie. You join the Travis company and we give you a onesie. I have a feeling my microphone's going to fall down. There we go. This is perfect for coding in. And you can buy them online for the good price of $120. We'll even put a Travis logo on the back. We used to wear shite shirts and awful moustaches — don't tell Sven I said that, beautiful moustaches — and we used to present together in them. But now, as a side note: is there anyone from Engine Yard here?
OK, well, thanks to the beauty of Engine Yard, they're actually sponsoring Piotr Sarnacki — drogus on Twitter. He did a lot of the Rails engines work and was working on SproutCore. So he is now working full-time on Travis open source. So I just want to give a huge thanks to them, because they're helping push Travis and the open source testing part of it forward. So thank you, Engine Yard, for being awesome. Oh, and the onesie does up all the way, and there's a summer version. So if you don't know anything about Travis: Travis is a crowd-funded company. We started roughly in January putting up a crowdfunding page. We raised about $125,000 US, and it came from about 780 individual donations, from people in the audience here and from people at home. 29 companies also joined in to help us get going — everyone from Engine Yard to Heroku to Bendyworks to SoundCloud, and this was only a fraction of them. So these are the people that have helped make Travis possible, and we want to give a lot of love and thanks to anyone who has donated, in this room or outside of it as well. But a big thing for us — and we haven't actually released it officially — is that the beautiful people at Sticker Mule have in the past created a whole lot of stickers for us. So if you donated, you would have got these beautiful stickers in the post. But thanks to David, who works at Sticker Mule, we now have an official mascot. We have a Travis. And we have a whole lot of Travis stickers. So if you want a Travis sticker later, come and get me and I've got one for you. But back to the talk. As I said, my name is Mr. Kalderimis. I come from this very far out place called New Zealand. We're known for hobbits and orcs, cable cars, beautiful views. I'm from Wellington, to be exact. Beautiful nightlife, amazing coffee, and the most fantastic amusement rides ever. And I've also been on one of those.
It's one of those small planes where you can see out the front of the cockpit. And as you're coming in, you're like, oh great, I can see the runway. I can see a hill. I can see a runway. And then even while it's landing, all you can see is a hill, up to the last second. For the last four years, I've been living in this beautiful place called Amsterdam, which is known for its huge amount of bikes, its amazing architecture, its nightlife, its recreational activities. But now I'm technically a traveling nomad. I've got no official home. I'm in between San Francisco, New Zealand, Berlin. I travel with a ukulele. I've got a nice little coffee kit. And today I'm here to talk to you about fun things like logging and metrics and monitoring. It's part theory, it's part code, it's part internal tools, and it's part moustache. But you'll notice that the talk title in the conference planner was about the black box. I had a huge issue trying to find a good talk title — a good talk title is what people look for. And I was at the American Book Center in the Netherlands. It was actually just before I was going to São Paulo, and I was like, you know, Josh, you don't know anything about São Paulo and you're going to São Paulo. And this was the day before going. So I had to run there to get myself a Brazil book so I actually knew what was going on. And I came across this book: It's Not How Good You Are, It's How Good You Want to Be. And it had these really nice little lessons in there — for example, fail, fail again, fail better — all of these things that we're taught as programmers, and even when starting a business. Like, the person who doesn't make mistakes is unlikely to make anything. Or, if you can't solve a problem, it's because you're playing by the rules. They're nice little lessons.
So I thought to myself, since I'm doing a talk about metrics and what's going on inside your app: really, it's not how good your app is, it's how good you want it to be. What I want you to take away is that your app, when you deploy it, wherever that is, is not just a black box. You need to know what's going on inside of it. Knowing what goes on inside of it gives you the power to optimize and to make things better. It's not about the, "woo, we're doing a million requests a second." That is a very, very circle-jerk kind of metric. We need to know what's really going on inside. And we've got a whole bunch of tools and libraries available to us — everything from Exception Notifier to Librato. We should be using these tools, and even better, we should be building internal tools. Internal tools allow you to mold the data and really abstract what you need. And on top of that, services are your friends. Don't build things when things are available for you to use. It doesn't matter if it costs 100 bucks a month; these services are worth their weight in gold. But anyhow, I come from this nice little crowd-funded company called Travis and we run your tests for you. You might have seen a little status image across GitHub — we run those tests. It's free, it's open source, it's full of fairies and unicorns. We've become another piece of infrastructure: we are for builds what RubyGems is for libraries. We have over 25,000 open source projects on us. We do 10,000 test runs daily. Over the last year and a half, we've done 707,000 build requests from GitHub, totaling 1.9 million test runs. And we support 11 different languages, everything from Ruby to PHP to Python to Go. I think there's eight different versions of Ruby, three different versions of PHP and Python. We've got it. If we don't have it, tell us and we'll add it. And if you see this, it's the sign that it's on Travis. It was started with fun in mind.
The whole idea was for it to be instant and live and modern and hackable. And like any Rails app, it started kinda like this: just a simple Rails app that would run on Heroku. It would use Resque, and it would talk to a dedicated box. We'd use Redis and Pusher and a nice, pretty database. And it's about getting things out there. This was simple and this did the job. It did the job when you had five projects, but not when you've got 25,000. So it grew to this. This is a simplistic view of our architecture. We have Travis Listener, which listens to GitHub and queues stuff to RabbitMQ. That then goes to Travis Hub, which processes the requests and sends them to the database and out via Pusher. The user, of course, talks to Travis via Travis CI, which is the web part. And then all of the jobs go to Travis Worker, which uses VirtualBox. And I will just state that I want to stab VirtualBox in the eye, but we've got 10 of these boxes that we manage. And this works. This is how we have scaled. As a side note, we're splitting Travis Hub into three bits, so when you look at this, it's not four different bits, it's actually seven different bits. And on top of that, we've got other bits behind the scenes. We use everything from Redis to RabbitMQ to VirtualBox, JRuby and MRI. We're using an assortment of everything to make this job possible. And it's kind of built around a service-oriented architecture, but we call it the mini-SOA version because we are sharing services — and when I say services, I mean we share Redises and databases between these little apps. You generally shouldn't be doing this, but anyhow, we'll get to that. Right now, we have a total of 10 deployable apps. And on top of that, we have four different environments: we have org, staging, and production, and we've got a whole separate setup for Pro, just to keep everything a little bit more secure, and also because org is sponsored and we don't want that to leech over to Pro.
So we've got four different environments, totaling 40 different deployments. I mean, that's a lot. So really, measuring and monitoring is a challenge — but even more so, it's important to get it right. We need to know what's going on inside of Travis so we can optimize and make it better for you, the user. Also, I used to play canoe polo. I'd like us to get to know each other. So let's talk about logging. Logging is important. Everyone here is doing it, and they might not know it, but you're doing logging. It's useful, but this is the standard Rails logging. This is what, seven, eight different lines, just for your production log. This is your default in production. What we really want is something simple. We want something concise, and it should be easily readable. It should be one line. If you look at the standard Rails log, we can actually break it down: we've got the request type, the URL, the format, the action, status, total duration, view, and database. Those are all there, but they're spread over eight different lines. Why not something like this? This is all we need, and even better, it's on one line. So this is what we use across our web applications in production. It keeps it nice and simple. It's key-value, we can search for it, and it means that we don't have super verbose logging going on. And you can find this available — it's from Mathias, one of the Travis guys — it's lograge, under roidrage on GitHub, and you can just add it to your Rails app and you should have better logging in production. But what about non-web apps? That's not gonna work for a non-web app; it's very Rails specific. So when you're creating a Ruby agent or a Ruby tool, you want to think about how you use logging. Logging has everything from debug to info to error. Simply put: you want to debug lots, you want to info a bit, and you want to error those exceptions. It's not a huge thing to grasp. Really, you want to be logging all the things.
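That one-line format can be sketched in a few lines of Ruby — a hypothetical helper, not lograge's actual internals, with the field names taken from the breakdown above:

```ruby
# Collapse a request's details into one grep-able key=value line,
# lograge-style. (A sketch, not lograge's actual code.)
def log_line(fields)
  fields.map { |key, value| "#{key}=#{value}" }.join(" ")
end

puts log_line(
  method:   "GET",
  path:     "/jobs",
  format:   "json",
  action:   "jobs#index",
  status:   200,
  duration: "58.3ms",
  view:     "0.2ms",
  db:       "57.2ms"
)
# method=GET path=/jobs format=json action=jobs#index status=200 duration=58.3ms view=0.2ms db=57.2ms
```

One line per request means you can grep for `status=500` or `action=jobs#index` instead of stitching eight lines back together.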
Now, if we think about Travis Hub for a second — this is one of our components. We're doing about 1,100 requests per minute using RabbitMQ, doing processing via threads and subscriptions. It has things like build finished, notifications, and it's doing background processing in threads. And this is what we have. You'll see down here that these are all related log messages — you can see that the build ID is the same. So you need to think about context when you're logging, and an easy way to group messages. Be it tagging, be it putting in an identifier or a UUID: what you want is for any log message that happens in your processing to be easily grouped and attributed to a previous log message, where applicable. And what you can see is a very simple kind of context. We've got the type, debug or info; the class; the method and any extra information; and the model and its ID. And for an error, it's even simpler: error, class, and error message. It allows us to just tail and grep, and we'll get a live stream of anything to do with that build ID. But on Travis Worker, it's a little bit different. That's the part that's actually processing the jobs, managing the virtual machines. It's got five VMs per worker, which is effectively five jobs per worker at any one time. Every time you look at logging, you should have a specific goal in mind, and our goal for this logging was to be able to easily see what was going on — and to do it live, because we don't want to disturb the application, we just want to tail. So here, the context is the worker. You'll see on the left-hand side it says ruby2, and then each different class, and then what's going on. So we can see, in a nice, clear, human-readable way: about to run, about to perform, change directory, export, et cetera.
We know what commands it's running as well. We can see it's doing a bundle install at the bottom. Being able to see that makes it much easier to just tap in, watch the build, and make sure nothing's exploded. It's a much more verbose form of info compared to the Rails app, which is usually logging a singular request — but it's a different kind of app, and you don't want to treat every app the same. And as I said, everything's grouped appropriately, making it easier to read. Then all we need to do is a nice little tail, grep for ruby2, and we can see everything going on with that worker. And even better, we can improve it further. We can consolidate all our logs — for example, you could syslog everything to a single server. You could use something like Graylog2, which is built on Ruby and Java, uses Elasticsearch, and is self-hosted, so you need to look after it. Or you can use something like Papertrail: hosted, and just pure awesome. And for us, now we've got all of our web stuff and our Hub stuff going into Papertrail. Nice and easy to see. We've also got search at the bottom, and you can go back in the time frame nice and easily. It's got excellent stuff like archiving, and notifications and webhooks when it detects a certain string. So now you don't have to build alerting yourself — you can have it detect certain errors and then maybe notify you on Campfire. And of course, we love Papertrail. Oh, I also used to play Nick Bottom in A Midsummer Night's Dream in high school. Thank you. So, exceptions. Boom. With exceptions, you never want to be checking logs for errors. That is no fun, and we're beyond that. When we're doing exception logging, we want to use an app, a service, or a library for it. If you're not doing this already, there's a million available. Just to clarify really quickly: there's something as basic as Exception Notifier, a very old Rails plugin that would send you an email each time. There's something like Sentry, which is self-installed or hosted.
Sentry is built on Python, if I remember right, and it's a very nice interface, and it's just simple. You've also got Errbit, which is self-installed and written in Ruby; you can host that on Heroku as well. For us, we use Haystack. Haystack is built by the guys at GitHub. Sadly, it's not open source, but this is what you get. It's an example of an internal tool that they saw the need for. And the difference you'll see here is that there's a nice pretty graph, and visualizations, and context, to be able to see what's going on with your errors. You've got the most common errors and the latest exceptions, along with a nice little metric of how many exceptions there are. And of course, you get the normal breakdown of what's going wrong, you can easily add extra bits here, and you can have the backtrace. A lot of people see errors as a to-do list. Me personally, I don't. I see errors as a natural part of your app. Errors are going to happen. I'm not saying ignore them, but don't fret over them. They're not the end of the world. The best exception notifier is your customer, your user — that's how I find it personally. An error happens in Travis, and we know something's bad when someone tweets saying, hey, sync isn't working. So respond: monitor Twitter and email, like the support emails. Oh, this is my little sister. That was very unexpected, even for me. I should actually start just littering these slides everywhere, and I won't even know where — I'll just get someone else to do it next time. So exception monitoring is really important, but don't see it as the first thing that you need to do every morning. It is a good thing to get rid of them, though, so do watch the big numbers. And my little sister's okay. So let's talk about metrics. Who here uses New Relic? Just a show of hands. Awesome. Who here is using New Relic and not for a web application? One person, maybe two.
Over there, was that two? Sold, okay. So this is what you'd normally see in New Relic: pretty graphs, a nice little Apdex score, throughput, awesome. It's really super powerful, but there's something that you might not notice here. This is not for a web application. New Relic is bendable around your agent apps — anything that's doing background processing. It might be background jobs, might be a whole separate app just doing AMQP subscriptions. This here is all of our AMQP subscriptions, and we can see which ones in particular are very slow, or in this case, the most time consuming. And for us to do that, we just built a little wrapper. It's a little bit of handy metaprogramming: we tell New Relic to attach its proxy and controller instrumentation so we can get a trace for that method. And what we get out of that is the beauty of New Relic. You get all of the database breakdowns, and all of the external web call breakdowns as well. And then you also get to dive into each of those requests and see what's going on. It's really powerful. I will say, though, it's not incredibly intuitive to add, because they call it controller instrumentation, and people here will probably think of a web controller — that's what I associate it with. But for us, controller instrumentation was used for an AMQP subscription, which was a different way of hooking into it. Nonetheless, it's really powerful, so you can get working with it straight away. And if you're using New Relic on Heroku, you get the standard plan for free — you can just shove your app on Heroku and you get all of this pretty stuff. So as I said, it's not just for web apps, it's for all apps. I love popcorn. We're really getting to know each other here. This is great. So let's talk about metrics. Metrics are awesome, okay? Hands down, awesome. I would like to introduce you to Mathias, our Chief Metrics Officer, and his pineapple.
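A rough sketch of that wrapping idea, in plain Ruby: the real thing includes New Relic's ControllerInstrumentation module and traces the subscription's handler method; this stand-in just proxies a method and records how long each call takes, so you can see the shape of it without the gem. All names here are illustrative.

```ruby
# Plain-Ruby sketch of tracing a non-web method by proxying it.
# (New Relic's agent does the real version of this for you.)
module Traceable
  def trace_method(method_name)
    original = instance_method(method_name)
    recorded = (@timings ||= Hash.new { |h, k| h[k] = [] })
    define_method(method_name) do |*args, &block|
      started = Process.clock_gettime(Process::CLOCK_MONOTONIC)
      result  = original.bind(self).call(*args, &block)
      recorded[method_name] << Process.clock_gettime(Process::CLOCK_MONOTONIC) - started
      result
    end
  end

  def timings
    @timings || {}
  end
end

# Stand-in for an AMQP subscription handler:
class Subscription
  extend Traceable

  def process(payload)
    payload.upcase
  end
  trace_method :process
end

Subscription.new.process("build:finished")
puts Subscription.timings[:process].length  # one timing recorded
```

The payoff is the same as with New Relic: the handler's callers don't change at all, but every call now leaves a duration behind that you can aggregate.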
And when I was preparing this talk — I've given it a few times now — I was like, Mathias, how do I explain to the audience how awesome metrics are? And he said, very abstractly: metrics are to a running app as unit tests are to your code. And that's beautiful, but even I have trouble understanding it. So tell me again: well, a metric represents a certain part of your app by measuring how long it takes and how often it occurs at a given point in time. It's as simple as that. I've even attributed it, 2012, just in case you ever reuse this. Effectively, a metric is attached to a method call, and it's about how long it took. Well, it doesn't have to be attached to a method call, but it's attached to something that's happening in your code — an event. For us, we use it on method calls a lot. There are four parts to metrics: collecting, sending, aggregating, and visualizing. Those are the four main parts. And if we talk about collecting for a second, I want to introduce you to ActiveSupport::Notifications. If you don't already know it, you're using it every single day in your Rails app. It's littered everywhere. It's even how logging works — there's no logger.print or logger.info; that happens via Active Support notifications. The greatest thing is it's a very simple publish-and-subscribe, or observer, kind of pattern. You publish events — you instrument some code — and then you subscribe to those events or instrumentations. You can subscribe to one event type, or to multiple using a regex. It's really simple. You publish one-time events, or you can yield a block and time it. Or you can just publish an event using instrument and send some extra info. I won't go fully into ActiveSupport::Notifications, but I would highly recommend that you buy this book from José Valim. It is absolutely, hands down, amazing.
It will give you a very thorough insight into what's happening inside Rails, and also into that particular topic of notifications. So I'm just gonna refer to ActiveSupport::Notifications as ASN for now, but let's look at how we could use it. That's all we need to do. We can just start littering our code with publish calls — here, a Campfire notification with its URLs — and we can start passing information that this was called. Or we can go instrument — they're pretty much the same, except instrument will do a timing as well. Nice and simple. We use it to publish three events in Travis, all over our code: we publish received, completed, and failed. You'll see a pattern similar to this. You send a message and we say, oh, someone fired off a Campfire message or notification. We do some processing, and then we publish that it completed successfully. Or, if there's an error, we publish that it failed and we re-raise. It's nothing particularly special, but because this is a whole lot of cruft — completed, sorry Josh, there we go — because of the cruft, we use a little bit of metaprogramming, and now you just go instrument :send_message. And all we're doing is emitting a whole lot of events. These events are completely metrics agnostic. So step one is that you can add this to your library or gem — you can add this all over your code — knowing that it's not gonna take up extra CPU, it's not gonna make your app slower. It's just gonna start emitting events, and then you can start thinking about how you want to use those events. It's very general purpose, and it's all about the subscriber. This is the beauty. Oh, I love German food. So let's think about sending and aggregating for a second. StatsD — that's Node.js, using UDP — is one of the methods we can use. We can start aggregating and sending, or sending and having it aggregate.
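Before getting into StatsD: that received / completed / failed pattern, plus the `instrument :send_message` macro, can be sketched with a stdlib-only stand-in for the notifications bus. ActiveSupport::Notifications is the real thing; everything here (Bus, Instrumentation, Campfire) is an illustrative name, not Travis's actual code.

```ruby
# Tiny publish/subscribe bus, standing in for ActiveSupport::Notifications.
module Bus
  @subscribers = []
  def self.subscribe(pattern, &block)
    @subscribers << [pattern, block]
  end
  def self.publish(name, payload = {})
    @subscribers.each { |pattern, block| block.call(name, payload) if pattern === name }
  end
end

# A class macro that wraps a method and publishes the three events around it.
module Instrumentation
  def instrument(method_name)
    original = instance_method(method_name)
    define_method(method_name) do |*args, &block|
      event = "travis.#{self.class.name.downcase}.#{method_name}"
      Bus.publish("#{event}.received")
      begin
        result = original.bind(self).call(*args, &block)
        Bus.publish("#{event}.completed")
        result
      rescue => e
        Bus.publish("#{event}.failed", error: e.class.name)
        raise  # re-raise, just like the pattern described above
      end
    end
  end
end

class Campfire
  extend Instrumentation
  def send_message(text)
    "posted: #{text}"  # pretend we hit the Campfire API here
  end
  instrument :send_message
end

events = []
Bus.subscribe(/^travis\./) { |name, _payload| events << name }  # regex subscription
Campfire.new.send_message("build passed")
puts events  # travis.campfire.send_message.received, then .completed
```

The emitting side stays metrics-agnostic: the class just fires events, and whether a subscriber logs them, counts them, or ignores them is decided entirely elsewhere.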
StatsD uses simple UDP messages like this, where you'd say: this is one counter, increment by one please; or we can do a timing, like 320 milliseconds. And you can even do this from your console: set up StatsD and then just do this using netcat, nice and simple. There's a nice little Ruby gem for it. You just set up a host and the port, increment, do some timings, boom, done. And it's only 155 lines of code, which is a very small gem, and it's got some tests in there too. But this is what we use: a combination of MetriksD — which is by Eric from Papertrail, and is based on Ruby and EventMachine — coupled together with the metriks gem. Now the metriks gem is very similar. Again, you're not doing anything mind-blowing here. We set up counters, we get to increment, and we can do some timings. And then we flush that every X seconds — we flush every 60 seconds. What the metriks gem does is partly aggregate: it aggregates within that process, and then it sends it to a reporter. This is quite cool, because now you can send it to different types of reporters. In this case, we can send it to our logs, and we can actually see what's happening in our logs for each of these metrics. So let's subscribe to something here. What we're gonna do is add a started_at so we know how long things took. We'll continue to use this instrument. We're also gonna put the started_at and finished_at on the completed event. And then we subscribe — we just subscribe to anything using that travis namespace at the top. If there is a finished_at, then we send the time it took; if not, we just mark that this method was called. So it's nice and simple. This is all you need to subscribe your entire code base: you litter it with events, you do a nice little subscriber, and now we start the logging. And this is what we get out.
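As an aside, the StatsD wire format is simple enough to sketch by hand — a hypothetical minimal client, nothing like a full replacement for the Ruby gem mentioned above:

```ruby
require "socket"

# The StatsD wire format: "name:value|type" over UDP.
# A counter increments by one; a timer reports milliseconds.
# (Hypothetical minimal client, for illustration only.)
class TinyStatsd
  def initialize(host = "127.0.0.1", port = 8125)
    @host, @port, @socket = host, port, UDPSocket.new
  end

  def increment(name)
    send_message("#{name}:1|c")
  end

  def timing(name, ms)
    send_message("#{name}:#{ms}|ms")
  end

  private

  # UDP is fire-and-forget: if nothing is listening, the app doesn't care.
  def send_message(message)
    @socket.send(message, 0, @host, @port)
    message
  end
end

statsd = TinyStatsd.new
statsd.increment("travis.builds")    # sends "travis.builds:1|c"
statsd.timing("travis.builds", 320)  # sends "travis.builds:320|ms"
```

That fire-and-forget property is the design choice: losing the occasional metric packet is fine; blocking the app on a metrics server is not. Now, back to the metriks output: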
We get everything: you'll see that at the top it's Active Record, and we've got a timer — how many times it happened, the one-minute rate, the 15-minute rate, mean, max, standard deviation, everything. And this is just for one process. So if we think about it, this is what we've just set up: Travis uses Active Support notifications, we use metriks, and then it goes to Papertrail. So all of our metrics are going to Papertrail. It's a very quick way to get started, and it's very visual as well, so it gives you a little bit of debug time. But it's the thing that comes after it that's really important. Because what we're doing now is emitting events, aggregating, sending — awesome. Now we need to start visualizing. We need to start making use of these events. So there's Graphite. Graphite is a self-hosted product, so you're gonna need to run it yourself. It gives you very thorough graphs. It's not the prettiest interface, but sometimes it isn't about the interface — even though it actually is; that's why I have a Mac. And then there's Librato. Librato we are in love with. We get pretty graphs like this, and it allows us to do absolutely everything, creating our own dashboards. This here is actually a set of our errors. We have everything from HTTP errors on the top left to AMQP messages; we've got log entries and also notifications. And if we look here, this gives us a breakdown of every type of notification we're sending. The top green bar, the big peak — that's actually Pusher notifications, so you can see that we're sending a crack-load. And the cool thing is that you can embed these graphs into your Campfire, including, now, live graphs. So you can actually share a graph with your team and say, let's monitor it live together, and start commenting. This is actually one of the graphs that we blogged about a while ago.
What you'll see here is that we deployed some code, and then, boom, we had a lot of errors. We fixed the bug quickly and everything dropped off. We still had some high peaks, and then we fixed another error and they all went away. So as you can see, you could use something like Errbit or Sentry or Haystack, but you can also use Librato and your own metrics and monitoring as well. Even better is Tasseo. It gives you a nice handy dashboard that links into your Librato or Graphite metrics. Now, the original version of Tasseo — which is under obfuscurity on GitHub, at the bottom there — only works with Graphite at the moment. But if you use Mathias's (roidrage's) fork, it can also work with Librato. So you get a nice little dashboard, an easy dashboard where you can keep an eye on your graphs. You'll see the red one, AMQP ready: we've got 17,000, 18,000 messages. That's another error, and we can see, nice and quickly and simply, that it's a big warning sign — we need to do something about it. So if we put everything together in our holy circle of metrics: it goes into Papertrail, we need a webhook to pull it out of Papertrail, and then it goes to Librato. But we can do a lot better here. We've got a lot of aggregating and sending; it's a little bit silly. So we can use the MetriksD reporter along with MetriksD, and now we get something like this — okay, we already cut one piece out of it. Or we can go one more step and use the Librato metriks reporter, and then this is all you need: Librato will do the aggregating, the sending, and the visualizing for you. I used to scuba dive, previously. Fun fact. So the next part: you're using all these tools, you're doing logging properly, you're doing exception tracking, you've got metrics. But the third part to all of this is internal tools. So I want to show you Travis Admin, a little app that we've built.
It's open source as well — all of Travis is open source — and it's an app built on Sinatra, along with Bootstrap, by the beautiful Konstantin Haase. I call him Constantine House. I would urge you all to do that, because I don't think he likes it that much. I love you, Konstantin, in case you're watching. So this is part of the app. It's composed of three or four different bits. This one here allows us to view the service hooks set up for each of the repos. You put in a user, or a user and a repo, on the right-hand side, and it checks the database for their OAuth token, and then we can check the hooks set up for that user. We can confirm that the hook is set up correctly. We can also disable pull requests if we need to, or maybe enable some extra things to look out for — you'll see pull requests just down here. We've also got the GH web console. GH is a gem created by Konstantin which is a nice way of interacting with the GitHub API. It's different from Octokit: this one uses the headers a lot to traverse, and it's a bit more low level. It's very low level. But what we can do is put in the user and the command that we're going for, like user repos, and we get a nicely formatted JSON list, along with the headers on the right-hand side, so we can see what our rate limit is and what the scope of that request was. So it's a nice way of seeing what's going on with our GitHub requests, because we do a lot of them. But the crown of all of this is the event monitor. As I showed before, this was our infrastructure — and that's not even all of it, it's just half of our infrastructure. Then our very own Nicolas Cage joined the team, and we tasked him with creating a way to unify all of our notifications. So if we have all of this information going on, how can we see the path that a request takes? It comes in via the Listener. It goes to Hub. It goes to the Worker, and then it comes back to Hub. How do we visualize that?
So he built an app for us, built on Ember.js and using Redis Pub/Sub, and we've got something like this. You might not see it very easily, but I'll turn on the live version in a second. What we have on the left-hand side is an event list: all of the events that happened, under the master event. In this case, you'll see that a job finished, and then we see all of the events that happened after that. That might be: a push notification was sent, an IRC notification was sent, the database was updated. So we can see all of the sub-events, and then we can see the payload attached to each of them. So now you can unify and actually see, live, what's going on. This just goes to Redis, it expires quickly, and we can watch it live and pause it if need be. It uses Travis Instrumentation, a custom thing that we built for doing all of these hooks, along with Active Support notifications. So this should be Travis happening right now. And wow, it's very quiet. I don't know — there we go. And you can see that it's updating accounts. We can see the payloads that are going on. You can see the Mongo Ruby driver just started, and all the stuff that started with that. So it allows us to see not just what's going on in a logging kind of way, but what's going on in a rich, JSON-logging kind of way. We can see all of the events happening, one after another. Now, if you wanted to go use this, it's completely open source, but I will admit that there's a fair bit of code that is kind of bound to how we saw the problem. I'd love to have other people try this out and get involved and see how we can take this to the next step. A lot of the code is located in travis-support, which is one of our repos on GitHub. It's completely open source.
I'd like to finish off very quickly by saying that the whole point of this talk is: don't just deploy your app to Heroku, or with Capistrano to a dedicated server; you need to know what's going on inside your app. Knowing what's going on inside your app will let you find those errors, make things faster, and find out which features your users are using. In fact, one of the best ways you can use metrics is to start attaching them to features. Start seeing the paths that people take, what they're doing, and how often something is used. If you just spent two weeks working on a feature that no one uses, is that a good return on investment? Don't treat your app like a black box. Don't just deploy it, forget about it, and look at those nice little circle-jerk stats. Actually look into what's going on; the knowledge will help you improve. Don't just stare at the big raw numbers. Use services. All of these services are available and they're amazing. Build internal tools. Don't build them straight away, but over time figure out what it is that you or your team need to make it easier to administer the product. You don't need to build it into your normal application; build it as a side app. That's why we have Travis Admin. That stuff does not belong in Travis itself; it's just another tool that reads the database and talks to the external services. Services are your friends. I think I've said "services are your friends" enough. You can find us on IRC, on GitHub, and on the Twitters. If you'd like a pretty sticker, come talk to me. You are awesome. Thank you. Any questions? There's two. I don't think we have microphones, so just yell it out. [Question from the audience.] Email support@travis-ci.com. We've got a blog post with our pricing and all of the plans. We are freely giving out the invites; we're just controlling that slowly while we get those pages up and running.
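Attaching a metric to a feature can be as small as bumping a counter on every use. Here's a minimal in-process sketch; the class and metric name are hypothetical, and a real app would forward the increment to StatsD, Librato, or a similar service instead of keeping it in memory.

```ruby
# Minimal in-process feature counter; a hypothetical stand-in for
# incrementing a StatsD/Librato counter on each feature use.
class FeatureMetrics
  def initialize
    @counts = Hash.new(0)
  end

  def track(feature)
    @counts[feature] += 1
  end

  def count(feature)
    @counts[feature]
  end
end

METRICS = FeatureMetrics.new

# Call track wherever the feature's code path runs, e.g. in a controller.
def schedule_build
  METRICS.track("feature.schedule")
  # ... actual scheduling work would go here ...
end

3.times { schedule_build }
puts METRICS.count("feature.schedule")  # => 3
```

Once every feature bumps a counter like this, "did anyone use the thing we spent two weeks on?" becomes a question you can actually answer.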
Yeah, email us and we'll get you on that. That's why we built the little class-level declaration that says "instrument this method". It's our way of pulling the instrumentation out of the code; we don't have to have it sitting with the method underneath. You can actually eval it into the classes in a different area later on, and you can group them together. We find it's the least intrusive way. We're not actually worried about instrumenting too much; in fact, the more you instrument the better. It's about how you use those statistics. The main things you should probably look at to start easy: how long do some of my notifiers take? If I'm posting to Twitter or Facebook, how long do they take? How many of them error out? Maybe you can look at features as well. It's a great way to find out what your users are doing within your system. If you have an amazing time-tracking app and no one ever uses the schedule function, then maybe you need to think about how you rework that. It can help you with your A/B testing. Metrics are very abstract; they're not just about code, but about everything that goes on within your app. We generally feel that every time we add a feature now, we make sure we add a metric so we can track what's going on with it as well. Because you only have HTTP on Heroku, we can't run a StatsD daemon there. At the moment we use the webhook with Papertrail, but we'll soon be changing over to using the Librato reporter, so the app will use the metrics gem and then go straight to Librato. Librato is also building better alerting. That's the one thing I didn't cover in this talk, because it's a bit hard to throw in everything including the kitchen sink: the thing that mixes everything together and connects it all up is alerting.
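That class-level "instrument this method" declaration can be sketched roughly like this. This is a toy version, not the actual Travis instrumentation code: a macro wraps the named method so each call records an event name and timing, and the declaration sits next to the method rather than inside it.

```ruby
# Toy sketch of a class-level "instrument this method" macro,
# loosely in the spirit of Travis's instrumentation (not its real code).
EVENTS = []

module Instrumentation
  def instrument(method_name)
    original = instance_method(method_name)
    define_method(method_name) do |*args, &block|
      started = Time.now
      result  = original.bind(self).call(*args, &block)
      EVENTS << { event: "#{self.class.name.downcase}.#{method_name}",
                  duration: Time.now - started }
      result
    end
  end
end

class Notifier
  extend Instrumentation

  def deliver(message)
    "delivered: #{message}"
  end
  instrument :deliver   # declared alongside, not woven into, the method
end

puts Notifier.new.deliver("build passed")  # => delivered: build passed
puts EVENTS.first[:event]                  # => notifier.deliver
```

Because the wrapping happens in one declaration, you can keep all the instrumentation grouped in one place, or apply it to classes later from a separate file, which is the "least intrusive" property described above.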
So the final stage in all of this is that when you've got all these metrics, all the measurements and the monitoring, you need to be alerted when something hits a peak. Now Tiseo takes it to a certain degree, and you're getting nice colourings out of it, but what you really want is Campfire or PagerDuty, or someone to come and get you from the restaurant; you need a way to find out when something's going wrong. Metrics are the first step: they take you to knowing what's going on inside the app. Then you need to add alerting on top to notify you when something's going a bit astray. No other questions? Oh, well, thank you very much. If you've got anything else, just come talk to me after. Thank you. Thank you.
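In its simplest form, alerting is just a threshold check layered over a metric, with a notifier hook on the other side. Here's a tiny sketch; the `Alerter` class, metric names, and thresholds are all hypothetical, with a lambda standing in for a real Campfire post or PagerDuty trigger.

```ruby
# Tiny threshold alerter: checks a metric reading against a limit and
# fires a notifier (a stand-in for posting to Campfire or PagerDuty).
class Alerter
  def initialize(threshold:, notifier:)
    @threshold = threshold
    @notifier  = notifier
  end

  # Returns true (and notifies) only when the reading crosses the limit.
  def check(metric, value)
    return false unless value > @threshold
    @notifier.call("ALERT: #{metric} is at #{value} (limit #{@threshold})")
    true
  end
end

alerts = []
alerter = Alerter.new(threshold: 500, notifier: ->(msg) { alerts << msg })

alerter.check("notifier.twitter.duration_ms", 120)  # under the limit, silent
alerter.check("notifier.twitter.duration_ms", 900)  # over the limit, fires
puts alerts
# => ALERT: notifier.twitter.duration_ms is at 900 (limit 500)
```

Real services handle the hard parts on top of this (flap suppression, escalation, on-call schedules), which is why the advice in the talk is to lean on them rather than build your own.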