Awesome. So yeah, thanks for coming down. Today I'm going to talk a little bit about ways to rethink production monitoring. That's quite a broad topic, and I'll dive into it in a lot of depth. But who the heck am I? You can probably tell by my accent I'm not from around here. I'm originally from the UK, and I've been out working in the Bay Area for the past seven years. I've also been building production monitoring systems for over ten years: originally in finance when I was working at Bloomberg, then at a startup, and now at Bugsnag, helping people do better production monitoring in their own teams.

A quick overview of Bugsnag. I think a lot of people may know about Bugsnag. We help you understand what's going wrong in your software in production. Rather than getting a slew of emails coming in from the exception_notification gem, or digging into your log files, or using an old tool like Airbrake, we give you the tools and workflow you need to figure out what the most important problems are in your software. So highlighting answers rather than data (I think Skylight uses that term as well), figuring out which are the most harmful bugs, the most harmful errors, in your applications. I'll be talking about a lot of the philosophies and techniques that we use when building Bugsnag, but this is a more general talk about setting up good quality production monitoring and making sure that you're not dropping anything.

So this is the scary reality. This is the truth in most companies, and if they're honest with themselves, most people are doing this. It's like: let's build some code. Let's write tests, and most people are doing a good job of writing tests these days. Let's ship it out to production, and then... I guess it's okay now? It's probably fine? But what you really want is confidence. You want to know that when your code is live and your customers are using your product for real, it's working. So I'm going to be talking a little bit about how you can get from the left to the right here and get some confidence in your software.

So, production monitoring. What the hell is production monitoring? It breaks down into three core areas. There are sub-areas, but these are the three main ones that I think most people think about and care about. Stability monitoring: this is the kind of thing that Bugsnag does. It detects if your software is broken, if there are crashes happening, and lets you know. Performance monitoring: tools like New Relic will help you out with this. It'll tell you if your app is slow. And availability monitoring, which is basically uptime monitoring, something like Pingdom: is my site even responding to requests right now? But really, all of these come down to one thing: delivering an awesome experience to your customers. That's the point of doing production monitoring.

But why do we care about that? Obviously it's easy to say let's deliver an awesome experience to our customers, but what's the actual reasoning behind it? Well, this statement has never been more true: it's never been easier to make software. You can see that especially in the Rails community, in the Ruby community. It's such a beginner-friendly space. There are tutorials everywhere. You can build apps in record time. It's never been easier to build software. So your app will live or die based on its quality.
A classic example of this: let's say you've got two e-commerce applications. Maybe you've got Amazon's app on your phone, and maybe you've got Best Buy's app on your phone, and you're trying to buy a TV. If the app crashes when you're trying to buy the TV, you're probably going to open up Amazon's app and buy it on Amazon. In a lot of situations it doesn't matter where you get that TV from; it matters that you get it and solve the problem you set out to solve. If your app is broken, slow, or unavailable, your customers are going to be pissed. They're going to leave, and they're not going to come back.

And if you have unhappy customers, this is what happens. This is a real retention study that was done about a year ago. This is in the mobile space, but we looked at this for web apps as well, and it's almost identical: 84 percent of customers, if they have a choice of different software, will abandon your software after seeing just two crashes. So you're literally churning customers. You've spent all this time building a valuable piece of software, and your customers are going to be like, well, that sucks. I'm not using it anymore. And the worst thing is that not only do customers have a choice, but customers have a voice. People will complain. People will go on Twitter and moan about your software. If you're a mobile app, people will go on the App Store and leave a one-star review. And that is permanent damage to your brand, permanent damage to what you're trying to deliver and build for your customers.

On the flip side, this is a study that we've reproduced a couple of times, but it depends on location, I think, and it depends on honesty of responses. In the Bay Area, people tend to report 40 percent, but I also think that people tend to think they're better than they actually are. Try this as an experiment on your own teams: when you're building something, try to measure how much time you're spending finding and fixing bugs. I'm talking about everything from the point you receive either a customer report or a notification from something like Bugsnag, to actually getting something fixed and a patch delivered. Figure out how much of your time you're spending on this, because it is higher than you think. The study that we found said that 49 percent of engineering time is spent finding and fixing bugs. And for most of the people I talk to, it's pretty close to that. But that is awful. Nobody wants to be finding and fixing bugs. You want to be building value. You want to be building features, building things that your customers care about, not doing schlep work, not diving into log files and digging into things.

So on one hand, you've got a customer base you spent time building up abandoning your software if you have stability or performance or quality problems. And on the other hand, you're wasting your time as a software engineer or an engineering manager.

So I've built my deadly sins of production monitoring. Most people are doing at least one of these. I tried one time, when I gave this talk, to get a show of hands, and nobody wanted to raise a hand. So I'm just going to look around for guilty faces instead, because I've definitely seen those before. Let's take a look at these sins. These are my sins. Sin number one: pretending nothing is wrong.
Now, this is what I was talking about earlier with the YOLO slide at the beginning. Many teams think shipping to production is the final step of the process. This comes, I think, from an old-school mentality of a different release cycle: you build it, you ship it, and you're done. You wipe your hands and you walk away. Software is changing, and everyone in the Rails community especially knows this already, but that's not how software is built anymore. You build it, you send it to your customers as soon as you can, and you see if it works, you see if they like it. You test it by shipping it. You see if this is something that is going to stick around.

If you believe that nothing's wrong, then none of this matters. And if you think that everything's fine, then you don't even know what production monitoring is. But the thing is, I see this time and time and time again. I talked to a big customer the other day. I asked, how do you do production monitoring right now? And they said, oh, we wait until our customers let us know. That's another slide, and I'm going to dive into it, but it was just horrendous to me. It was a terrifying thing.

And these are some of the symptoms here. Presumably people have said these, or heard them in the office or on projects they've worked on. "But I've written tests." Of course, that means you've written every test that could cover every particular piece of data in every scenario. No way, no chance. I don't believe it. "The QA team will check that." I heard this said in a company that didn't have a QA team. I'm not kidding you. Someone said the QA team will look after that, and we didn't have a QA team in the company. "Works great for me." That's a classic. "Works great in development." That's the one I hear all the time. But testing is only part of the process. You can only test for things that you can predict, things that you can think about. And in reality, most of the production problems that come up are things that you couldn't think of or didn't expect.

So, I've mentioned this one already, and these two are quite related: waiting for customers to complain. This one is the most unforgivable. If you're delivering something to your customers, you want to have some pride in it. You want to have some faith in it. And the reality is that most customers will not take the time to complain when there's a problem in your software. So if you wait for that first customer to complain, you've probably churned 20, 30, 40, 50 customers already, because they were mad and they were pissed. This one I personally think is unforgivable because I class myself as a product guy. I like building products, and the point of building products is to give value to people. To give value to customers, or to the open source community, or whatever. So if you wait until a customer is complaining, you've failed them already. I put this into a quote because I've heard something very similar to this in real life.

So, lack of visibility is a huge one as well. Let's say you build your app and you say, right, we need to monitor this. Maybe you put in log statements and you have log files out on production. This is what log files look like in real life. Nobody goes into log files and just says, let's check everything's fine. Nobody does that at all. It's a black hole. You shove stuff into these log files and nobody ever looks at them.
So there's no point in having production data unless you're actually going to look at it, and surface it in a way that makes sense and is actionable. "We'll just check the logs." I remember a second one here: I was on a team about four or five years ago where someone said, oh, it will be in the logs. In that particular case, there was no log statement for it. People were so confident that they would have been able to pull this out of the log files, but when we looked at the code, there was nothing putting anything in the logs, which is also horrifying.

So this one is a really, really difficult one and a subtle one: lack of ownership. You've got logging or error monitoring or something like Bugsnag in place. You've sent your code out to production. You're being proactive. But whose job is it to actually look at these problems? Whose job is it to spearhead the fixes for these problems? Now, this is a very difficult one to solve in large companies, but I've seen it work, and I'll go into some recommendations on that later on. This one sometimes manifests in these kinds of ways, and this is the one where I normally look around the room and see a couple of guilty faces. "Once you build something, you need to move on to the next thing." Then who owns the problem? Who owns that part of the system once it's gone out to production? "I've got a feature to ship." You're always going to have time pressures; I've seen that a hundred times. "Not my problem." Hopefully you don't work with people like that, but I have in the past.

So we've gone through the sins. Every company, every team, every person has at least one of these, I guarantee you. But how can we do better? How can we actually get to a better place? So now I've got a set of rules that I think is a framework for building production monitoring, choosing a tool, or just getting better in your own company, in your own teams. These are core principles that can be applied across those three areas of monitoring: not just error monitoring, not just performance monitoring, not just availability monitoring, but any area of monitoring.

The first one is: accept that your software will break after shipping. Once you do this, it's very freeing. It makes you feel a lot better about things because it will help you ship faster, but it will also make you less arrogant about your abilities as a programmer. You'll be like, look, it's gonna break, I'm not perfect, but that's okay. And once you accept that some bugs are gonna slip through to production, you're in a great position to continuously improve your app based on what's happening in the real world, based on what your customers are saying, or based on whether it's breaking in production.

So you've accepted things, you know that this is a problem, but how do you actually find out about issues? If you're wasting 49% of your time finding and fixing bugs, a lot of that time is spent digging into things, looking into log files, realizing you didn't have diagnostic data, trying to reproduce issues yourself. So, if you use a tool like Bugsnag, or build a tool yourself that automatically detects problem situations for you, you're gonna be in a much better position. Pretty much every programming language and framework will provide some kind of exception hook, or error hook, or something-is-really-bad hook.
For example, in Rails applications you can write Rack middleware. You can say, wrap my web request in Rack middleware, and if an exception bubbles up, capture it, tell me about it, and then let it bubble up again further through the Rack stack. Or in Java, you can hook into Thread.setDefaultUncaughtExceptionHandler, and the JVM will literally tell you every time an uncaught exception happens. And even in performance land, you can set up triggers, so you can say, hey, have a background thread that monitors each web request, and if any request takes longer than some fixed amount of time, alert me, tell me about it. This is a problem situation that we care about. So generally, in most platforms and systems, there's either a built-in way of doing it or you can come up with an automated hook yourself, and it'll be relatively easy to implement.

If you wanna have a look at some examples of this, all of Bugsnag's crash detection SDKs and notifiers are available open source for free on GitHub. So if you wanna build something like this yourself, go and steal Bugsnag's code. Obviously you can use Bugsnag, it's a lot easier, but if you wanna check out how to do this yourself, we've taken the time to find these hooks and we've put them in our notifiers. That's github.com/bugsnag if you wanna dig into the kind of things I'm talking about here.
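To make that concrete, here's a minimal sketch of the Rack middleware idea, covering both the exception hook and the slow-request trigger. This isn't Bugsnag's actual notifier code (that lives at github.com/bugsnag); the `Notifier` module is a hypothetical stand-in for whatever backend you send reports to, and the one-second threshold is just an illustrative choice.

```ruby
# Stand-in reporting backend; swap in Bugsnag or your own service.
module Notifier
  def self.report(kind, details)
    warn "[#{kind}] #{details.inspect}"
  end
end

# A minimal Rack middleware that captures unhandled exceptions and
# flags slow requests.
class ProductionMonitor
  SLOW_REQUEST_THRESHOLD = 1.0 # seconds; whatever "too slow" means to you

  def initialize(app)
    @app = app
  end

  def call(env)
    started_at = Time.now
    response = @app.call(env)

    # Performance hook: alert when a request takes longer than the threshold.
    elapsed = Time.now - started_at
    if elapsed > SLOW_REQUEST_THRESHOLD
      Notifier.report(:slow_request, path: env["PATH_INFO"], seconds: elapsed)
    end

    response
  rescue Exception => e
    # Stability hook: capture the exception, then re-raise so it keeps
    # bubbling up through the rest of the Rack stack as usual.
    Notifier.report(:exception, error: e, path: env["PATH_INFO"])
    raise
  end
end

# In a Rails app you'd register it near the top of the middleware stack:
#   config.middleware.insert(0, ProductionMonitor)
```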
So, you know that things are gonna go wrong, and you've put something in place to detect these problems. What tends to happen once you've done that is you end up with noise. You end up with a lot of data coming in that doesn't necessarily mean anything, or you don't know how bad things are. And one of the most important things in production monitoring is not drowning in a noise of data. You wanna figure out what the highest priority thing to fix is. What is the most harmful bug in your software? Log files and data streams suck; you're never gonna look at those properly. It's just streams and streams of data, like my Matrix slide earlier. You need to aggregate these together in some meaningful way.

So, here's an example from error monitoring, exception monitoring, in a Ruby application. You can say, hang on a second, all of these exceptions are coming from the same line of code. All of these exceptions have the same exception class. You can take heuristics like this, combine them together, and use them to group like events together into issues. And that then allows you to prioritize. You can say, hey, there was one of this error, one of this error, one of this error, and 10,000 of this error. And once you aggregate things together, this applies anywhere as well. You can do it in performance monitoring: grouping by page, you can say, this page takes this long to load. And you can do it in uptime and availability monitoring by grouping by URL: this URL is not responding. So this helps you avoid data blindness, effectively. This is what I said earlier: when you're looking at log files, your eyes just zone out, you lose focus. You wanna see what the most important stuff is.

So, you've done all that. One of the things that I said earlier was a sin was lack of ownership, and sometimes lack of ownership comes from a lack of visibility. So how can you get that visibility into these problems? Most dev teams these days are not just using email. I talk to people at this RailsConf, the last RailsConf, and the last RubyConf, and there's still a ton of people using the exception_notification gem, which sends crashes into your inbox, into your Gmail inbox. Last time I was at RailsConf, I asked someone, how do you deal with the volume of crashes on production? And they said, when we get a notification from Gmail that our inbox is over its limit, then we have a problem, let's go and fix it. So people are still doing that. I mean, that's better than nothing; you're notified, you have something coming in. But a lot of teams these days use team chat. Can I get a show of hands in the room: how many people right now are using Slack or HipChat? Everyone, pretty much, right? So why not have those detected problems, that have been aggregated together, come into the channels you're already working in, already discussing things with your team in? For example, at Bugsnag we have a front-end team and a back-end team, and we have exceptions coming into the appropriate room.

Now, in the settings for Bugsnag, you can say, tell me when every individual exception happens. I call that the fire hose. That works when you're first launching an application, or maybe when you're on staging or beta, but when you're in production, that's gonna be noisy; you're gonna have a bad time. That's what I was saying earlier about lack of visibility. But what you could do is say things like: let me know when more than a thousand people have seen a crash, let me know when this spikes, let me know when a new error that we've never seen before comes in. So you can take this stuff and put it into the channels you're already communicating in.

So, prioritization. Once you've done the aggregation we talked about, you can actually start prioritizing things. How would you prioritize? One way is saying, this error has happened 10,000 times. That's a pretty good signal, but what if that error happened 10,000 times because one person had some bad data in their database and it got stuck in a loop, or one cron job exploded and just kept on spinning and spinning and spinning? So another way to prioritize is by how many users were impacted. If it's a user-facing project, you can tag every exception that comes in with a user ID, a UUID, and then you can say, wow, this error has happened 100,000 times to one UUID. You can learn a lot just from that relationship, the ratio between those two numbers, and it also helps you prioritize. Another way you can prioritize is by looking at attributes of the problem; again, an example from error monitoring: was this a handled exception or an unhandled exception? Arguably, you want to focus on and prioritize things that are customer-impacting, and an unhandled exception in a Rails app, for example, will cause a 500 error to be displayed. Oops, something went wrong. That's going to be embarrassing. So taking these heuristics, number of events, number of users, severity, will help you prioritize what to work on.
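Sketching that out, assuming each raw event carries an error class, a top stack frame, and a user ID: group events into issues by fingerprint, rank by distinct users affected, and only ping team chat once a threshold is crossed. This is my illustration of the idea, not Bugsnag's internal implementation, and the Slack webhook URL is a placeholder you'd create via Slack's incoming-webhooks setup.

```ruby
require "json"
require "net/http"
require "uri"

# Group raw events into issues by fingerprint: the same error class
# raised from the same line of code is probably the same underlying bug.
def issues(events)
  events.group_by { |e| [e[:error_class], e[:top_frame]] }
        .map do |fingerprint, group|
    {
      fingerprint: fingerprint,
      events:      group.size,
      users:       group.map { |e| e[:user_id] }.uniq.size,
    }
  end
end

# Rank by distinct users affected: 10,000 events from one stuck cron job
# matters less than 500 events spread across 500 customers.
def prioritized(events)
  issues(events).sort_by { |issue| -issue[:users] }
end

# Post into team chat only when an issue crosses a threshold, rather
# than fire-hosing every single event into the channel.
def notify_chat(issue, webhook_url, user_threshold: 1000)
  return if issue[:users] < user_threshold

  text = "#{issue[:fingerprint].join(' in ')}: " \
         "#{issue[:events]} events across #{issue[:users]} users"
  Net::HTTP.post(URI(webhook_url), { text: text }.to_json,
                 "Content-Type" => "application/json")
end
```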
Now, we've got all the way down to number six on the list and we haven't even talked about fixing bugs yet. We've just been talking about being aware of them, being aware of production issues, having the visibility, and surfacing the worst problems. But it's a complete waste of time doing that unless you have the tools to diagnose these problems. So most production monitoring tools do this, and if you're building one yourself, you'll need to do it too: at the point in time a production problem happens, capture relevant diagnostic data to help you actually solve the problem as well.

So, an example from error monitoring, exception monitoring. In Bugsnag, again, you can look in our GitHub at all the diagnostics we automatically capture. In a Rails application, if an exception happens, we're going to capture the stack trace, the line of code that the crash happened on. But what else is useful? You want to know things like: what was the URL this happened on? What were the GET parameters and POST parameters available in that request? Those are the obvious things. Then you can get a little bit down and dirty. You can start sending in stuff from your application that maybe will help you, like: how many times has this person logged in recently? Did this crash happen on a particular server? Was this on a particular version of Rails? Maybe we forgot to upgrade one of our servers to Rails 5, and we're still on Rails 4 on one server. That would be awful. I don't know how that would happen. That would be just embarrassing. But I've seen crazier things. I remember at my previous company we had one rogue server that was running Ruby 1.8 when all the other ones were updated to Ruby 2, and it was just like, what's going on? Why is this one acting differently? If you have that diagnostic data, if you've captured it at the point in time when a problem happens, you can solve these problems. You can surface what the actual cause of the error is.
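Here's a rough sketch of the kind of diagnostic payload this describes, captured at the moment an exception escapes a Rack request. The field names are my own illustration, not a real Bugsnag payload, and the recent-logins counter is the sort of app-specific hook you'd wire up yourself.

```ruby
require "rack"
require "socket"

# Build a diagnostic report at the moment an exception happens.
# Field names are illustrative, not an actual Bugsnag payload.
def build_report(exception, env)
  request = Rack::Request.new(env)
  {
    error_class: exception.class.name,
    message:     exception.message,
    stacktrace:  exception.backtrace,   # file names and line numbers
    url:         request.url,           # which endpoint blew up
    params:      request.params,        # GET and POST parameters
                                        # (filter passwords in real life!)
    # App-specific context that helps you reproduce, e.g. a hypothetical
    # recent-logins counter pulled from your own models:
    #   recent_logins: current_user&.recent_login_count,
    # Environment details catch the "one rogue server" class of bug:
    host:          Socket.gethostname,
    ruby_version:  RUBY_VERSION,
    rails_version: defined?(Rails) ? Rails.version : nil,
  }
end
```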
Now, if there's one thing, the most important thing, that I want anyone to take away and implement in their teams, it's this one: Tend. All the other stuff I'm talking about is tooling. It's all technology choice or technology implementation. This is a fundamental organizational change. You've got production errors, you've got production issues. What's the point in having all of that in place if no one is responsible for actually going ahead and fixing it? Now, again, this is a really tough one to solve. I'll go through a couple of ideas and techniques I've seen in companies that we work with. But unless people care about these problems, there's no point in detecting them.

So how do you actually do these things in real life? Tooling: use failure hooks, assess impact, assess severity, capture diagnostic data. I talked about all of these already. Workflow is where things get interesting. Use team chat; I mentioned this already, everyone's using Slack or HipChat. Embrace collaboration: rather than having a culture of blame, if you pick or build a tool where you can all look at a comment history, you can say, hey, you know what? I think I know why this happened. You can collaborate around a particular issue, or you can assign it. Here's an example of this. We have a concept of commenting in Bugsnag. In Bugsnag, you can comment on an error, so anyone on your team can be like, oh, I think I know what this is, or, is this bad? And one of our Bugsnag customers had a thread with over 200 comments on a particular error. It started off as, oh, this looks bad. And then someone else came in and was like, does this relate to the deploy that we just pushed out? And it evolved into this huge conversation around the potential causes. And then there's a permanent history. Once you fix that problem, you can go back in and see all the steps that people took and all the discussion that people had around it. Even if you don't have a tool that does this, if you put your exceptions into team chat, into Slack, for example, you can interleave problem information with a human discussion of why you think it happened.

Tracking progress is really important workflow-wise. Once you've detected a problem, how can we prove that we fixed it? How can we track which ones have been fixed and not fixed? This is, again, a tool choice thing. In Bugsnag, you can tag your errors as fixed; as ignored, if you think it's something like the Google crawler bot; or as snoozed, which says, hey, this isn't bad right now, but if it gets a lot worse, I really need to know about it. By tracking progress, you can see why things have been bucketed into the decisions that they have been.
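As a tiny sketch of what that snoozed state can mean, assuming you track a per-issue event count; the reopen rule here is my own illustration of the idea, not Bugsnag's exact semantics.

```ruby
# An issue moves through workflow states; "snoozed" means stay quiet
# unless this gets meaningfully worse. The reopen rule is illustrative.
class Issue
  STATES = %i[open fixed ignored snoozed].freeze

  attr_reader :state, :event_count

  def initialize
    @state       = :open
    @event_count = 0
  end

  def snooze(for_additional_events:)
    @state     = :snoozed
    @reopen_at = @event_count + for_additional_events
  end

  def record_event
    @event_count += 1
    # A snoozed issue that keeps getting worse reopens itself, so it
    # lands back in front of the team instead of silently piling up.
    @state = :open if @state == :snoozed && @event_count >= @reopen_at
  end
end

issue = Issue.new
issue.snooze(for_additional_events: 1000) # quiet unless 1,000 more events come in
```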
Now, I've mentioned this a couple of times: there's no point in doing any of this unless you've got a team in place, and the organizational setup in place, to actually do something about it. If you are accepting that bugs are gonna happen in production, then you can embrace rapid iteration. But how do you actually deal with the issues? People struggle with this, and there are three ways that I've seen teams do it.

Number one is you can create a bug team: one person or one team that's responsible for checking all your production monitoring tools to say, is something wrong, and what's the most harmful problem right now? There are pros to this. Obviously, the bug team can amass knowledge over time. There's also clear responsibility for who handles this kind of stuff. There are also a lot of cons. It's harder for individual contributors, individual engineers, to learn from their mistakes. If you've got someone cleaning up after your messes all the time, then you're not gonna learn; you're not gonna improve as an engineer. And the bug team must communicate these common issues back to the rest of the engineering team. This is the most common setup in older, bigger organizations. I used to work in finance, and in finance they would have a bug team, and the bug team would be like, hey, everyone, this is broken. Should we fix it? And they'd put out a hotfix or a patch or whatever. I'm not a big fan of this one, but it's the tried and tested historical way of doing things.

Here's a more modern way of doing things, and this scales pretty well too. Rather than having an individual person or an individual team, you can set up a rota system. At my previous company, we had a role called bug warrior. You'd be bug warrior on a weekly rotation, and you'd come in and learn about the entire system very quickly, because you'd be looking at bugs everywhere. Your role as bug warrior would not necessarily be to fix things; your job would be to understand how bad something is and then be the champion for getting it fixed. In some situations, you could come in and say, well, I just need to put a nil check in there. I can fix that myself, do a PR, get it in there. But in a lot of situations, it would be nuanced or difficult or complicated. As long as you have someone in that role who is the stakeholder for the customer, change can happen. So the bug warrior system works really well; I've used it in various companies before. The pro of this is that the entire team gets to see and feel the pain of the customer on a rotation, which is fantastic, because if you don't have visibility on this, you're gonna think everything's just fine, which is one of the sins that I talked about earlier. It also avoids the not-my-problem mentality I was talking about earlier. The main con is that it can take a little bit longer to fix individual issues. But personally, I think the pros outweigh the cons on this system.

This next one is controversial, and it only works in certain organizations, because it could potentially create a blame game. But in theory, the person who last touched the code is best placed to understand what caused a problem. Now, I'm looking around again; this is one of the situations where I get dubious faces sometimes, because I'm waiting for someone to say, but what about that person who just changed all the tabs to spaces? Yeah, okay, you've got other problems if you've got people doing that in your code base. What about the person who refactored everything and shifted it down by a couple of lines, or the person who moved something into its own class to be more testable? Well, you know what? They were still in that code most recently. If they're in that code for some valid reason, and maybe tabs to spaces isn't a valid reason, or vice versa, they should still have context on what's going on in there. Now, I'm not saying anyone who touches the code gets an email blaming them for this; I'm saying they have the knowledge and therefore might be the best first contact. So the pros are: they have knowledge of the affected code, and if this person actually caused the bug, they're probably the best-placed person to fix it, and they can learn from their mistake. The big con, and it really depends on the organizational structure that you have, is that it can create a finger-pointing culture, a blame culture. So you can tell which one I like the best. This is my favorite. It works, it scales pretty well. But if you have an environment where you have a positive attitude and a positive philosophy and there's no blame involved, this last one is a pretty smart way of doing things as well.
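If you wanted to wire up that "last person in the code" lookup yourself, git already knows the answer. A rough sketch, assuming you can parse the file and line out of the crashing backtrace frame; the frame format shown is standard Ruby, everything else is illustrative.

```ruby
# Given a backtrace frame like "app/models/order.rb:42:in `total'",
# ask git who most recently touched that line. Not to assign blame,
# just to find the best-placed first contact.
def last_toucher(frame)
  file, line = frame.split(":")[0, 2]
  porcelain  = `git blame -L #{line},#{line} --porcelain #{file}`
  author = porcelain[/^author (.+)$/, 1]
  mail   = porcelain[/^author-mail (.+)$/, 1]
  "#{author} #{mail}"
end

# last_toucher("app/models/order.rb:42:in `total'")
#  => "Jane Doe <jane@example.com>"
```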
So, what do I want you to take home from this? Avoid the sins. Don't pretend nothing's wrong. Don't wait for customers to complain. Don't have a lack of visibility, and don't have a lack of ownership. Embrace the core principles, the fundamental principles of production monitoring. Accept that your software will break after you ship it; that's fine, it's good, it happens. Automate: add error hooks to detect crashes, errors, and issues in production. Don't just have a stream of events; group like events together and aggregate. Notify your dev team where they already communicate: use team chat, use email if you want, but team chat is much, much better. Prioritize: you can't fix every bug. The dirty reality of high-volume applications is that you cannot fix every bug, so prioritize which ones you're gonna fix first. Diagnose: make sure you have the diagnostic data available to actually fix these things. And then my most important one, Tend: make sure someone cares about this.

And then take action. After you've avoided the sins and you've set up the core principles, how do you actually take action here? Well, select or build your own production monitoring tools based on these principles. Obviously I'm biased, and I'd say use Bugsnag for error monitoring and crash monitoring, because these are the core principles we built our products around. But whatever you choose, make sure it fits these criteria. Get smart about your workflow, and make an organizational change. If there's no one responsible for fixing bugs and caring about your customer as an advocate, that needs to happen. It doesn't matter how it happens, but figure out a way to make it happen. So that's it. We've got about seven minutes for questions. Any questions?

That's a great question. So the question is, when you've got a lot of bugs that are at the same-ish level of priority, or the same number of occurrences, how do you get those fixed? Is that fair? Yeah, sort of. Okay. Oh, awesome. I was saying zero. Yeah. Okay. That's a really good question. So, how do you get to effectively inbox zero in your production monitoring tool? There are two ways I've seen this done. One is brutal, one is less brutal. The brutal way is declaring bankruptcy: coming in, selecting everything, deleting everything, and then starting from scratch. And it works really well, but it's scary as hell. That one is super effective. The other technique that works pretty well is to have a hackathon bug week, or a couple of weeks, where you say, right, instead of just deleting things and pretending we don't have these problems, let's actually go through and fix a bunch of them. And some of the stuff we've been building in Bugsnag helps here. We built this feature called Snooze, so you can come in and snooze things and say, I don't care about this right now, but let me know if it gets worse. We're trying to build an environment where you can hide these from your main inbox so you can get to inbox zero. But one of the ways to do it is to just literally declare bankruptcy. One of our biggest customers did this, and they were so pleased after they did it and started using the workflow. But it's tough. Those are the two approaches I've seen.

Yeah, it's a good question. So the question is, for Bugsnag customers, did they use something for production monitoring before, or are they coming into this fresh? It's actually a pretty even split. One interesting thing I wanna point out: at RailsConf this is not the case, but at a lot of conferences that we go to, here's an example, Laracon, the conference for the Laravel PHP framework. It's a great community, but it's the PHP community, and 80% of the people that we talk to are just looking at Apache logs. And that is terrifying. It really depends on the platform. But for Bugsnag itself, we have about a 50-50 split.
I think that probably most of our initial customers were using a tool called Airbrake before, and we got a lot of refugees from Airbrake. So there was an understanding that there was a problem, and this was a better way of solving it. But yeah, we're still seeing a ton of people. I mentioned the exception_notification gem earlier. So many people still use the exception_notification gem. Like, yeah, it's great, use it when you're in development. But as soon as you ship to anyone, I just don't think that's a reasonable way to solve this problem, especially when there are tools out there, like Bugsnag, that are free for open source projects and things like that. You might as well; it's an easy solution. But it's about a 50-50 split, and it does depend on the community. The good news is that the Rails community is pretty savvy about this kind of stuff. Good question, though. Any other questions? Cool, all right, thanks a lot, everyone. Thanks for coming down.