this year. I'm Eileen Uchitelle and I'm a senior systems engineer on GitHub's platform systems team. That's quite a mouthful, but it basically means that my team is responsible for working on internal and external tools for the GitHub application. We work on improving Rails, Ruby, and other open source libraries, and how those tools interact with GitHub. I'm also the newest member of the Rails core team, which means that I've finally gotten access to the Rails Twitter account, because we all know that Twitter is the only important thing. You can find me on GitHub, Speaker Deck, Twitter, anywhere at the handle eileencodes, including my website. It's all the same. So today we're going to talk about the new system testing framework in Rails that I spent the last six months building. We'll take an in-depth look at my process for building system tests, roadblocks that I hit, and what's unique about building a feature for open source software. But first, let's travel back in time a few years. At RailsConf 2014, DHH declared that test-driven development was dead. He felt that while TDD had good intentions, it was ultimately used to make people feel bad about how they wrote their code. He insisted that we needed to replace test-driven development with something better that motivates programmers to test how applications function as a whole. In a follow-up blog post titled "TDD is dead. Long live testing.", David said: today Rails does nothing to encourage full system tests. There is no default answer in the stack. That's a mistake that we're going to fix. It is now three years after DHH declared that system testing should be included in Rails, and I'm happy to announce that Rails 5.1 will finally make good on that promise, because system tests are now included in the default stack. The newest version of Rails includes Capybara integration to make it possible to run system tests with zero application configuration required. 
Generating a scaffold in a Rails 5.1 application will include the requirements for system testing without you having to change or install anything. It just works. This is probably a good time to address exactly what a system test is. Most are familiar with Capybara being referred to as an acceptance testing framework, but the ideology of system testing in Rails is much more than acceptance testing. The intention of system tests is to test the entire application as a whole entity. This means that instead of testing individual pieces or units of your application, you test how those pieces are integrated together. With unit testing, you'll test that your model has a required name, and then in a separate test that the controller detected an error. With unit testing, you assume that the view must be displaying the error, but you can't actually test that. With system testing, all of that becomes possible. You can test that when a user leaves out their name, the appropriate error is displayed in the view and the user actually sees it. System tests also allow you to test how your JavaScript interacts with your models, views, and controllers. That's not something you can do with any other testing framework inside Rails right now. But before we get into what it took to build system tests, I want to show you what they look like in a Rails application. When you generate a new Rails 5.1 app, the Gemfile will include the capybara and selenium-webdriver gems. Capybara is pinned to 2.13.0 and above so that your app can use some of the features pushed upstream to Capybara, like minitest assertions. In your test directory, a system test helper file called application_system_test_case.rb will also be generated. This file includes the public API for your Capybara setup for system tests. By default, applications will use the Selenium driver with the Chrome browser at a custom screen size of 1400 by 1400. 
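The generated helper file looks like this (the shape of the file Rails 5.1 generates; your exact version may differ slightly):

```ruby
# test/application_system_test_case.rb
require "test_helper"

class ApplicationSystemTestCase < ActionDispatch::SystemTestCase
  # Run tests in a real Chrome window via Selenium, at a 1400x1400 screen size.
  driven_by :selenium, using: :chrome, screen_size: [1400, 1400]
end
```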
If your application requires additional setup for Capybara, you can include all of that in this file. Any system test that you write will inherit from ApplicationSystemTestCase. Writing a system test is no different from writing Capybara tests, except that now Rails includes all of the URL helpers, so you can use posts_url and posts_path instead of "/posts" without doing any additional configuration. This is a simple test that navigates to the post index and asserts that the h1 selector is present with the text "Posts". Then, in your terminal, you can run system tests with bin/rails test:system. We don't run system tests with the whole suite because they're slower than unit tests, and everyone we talked to runs them in a separate CI build anyway. If you want system tests to run with your whole suite, you can create a custom rake task that does that. Let's take a look at system tests in action. I had to record the demo because they run too fast and you wouldn't be able to see them unless I slowed them down. First, we're going to write a test for creating a post. The test visits the posts URL, then we tell the test to click on the "New Post" link in the view, and then we fill in the attributes for the post: the title will get "System Test Demo", and for the content we'll just put in some lorem ipsum. Then, just as a user would, the test clicks on the "Create Post" button, and after the redirect, we assert that the text on the page matches the title of the blog post. Then we can run the test with bin/rails test:system, and you can see that the Puma server is booted and Chrome is started. Now the test fills in the details that we specified, and the assertion passes. So as you can see, it's super simple, and they run really fast. So you may be wondering why it took three years to build system tests. 
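Putting that together, the tests from the demo look roughly like this (the model and attribute names here follow the scaffold used in the demo):

```ruby
# test/system/posts_test.rb
require "application_system_test_case"

class PostsTest < ApplicationSystemTestCase
  test "viewing the index" do
    visit posts_url
    assert_selector "h1", text: "Posts"
  end

  test "creating a post" do
    visit posts_url
    click_on "New Post"

    fill_in "Title", with: "System Test Demo"
    fill_in "Content", with: "Lorem ipsum dolor sit amet"
    click_on "Create Post"

    # After the redirect, the page should display the new post's title.
    assert_text "System Test Demo"
  end
end
```

Running `bin/rails test:system` boots the Puma server, starts Chrome, and runs every test under test/system.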
That didn't seem that complicated, and if you're familiar with the pull request, you know it didn't actually take me three years; it took me six months. There were a few reasons that system tests took three years to become a reality. The first is that system tests needed to inherit from integration tests so they could access the URL helpers that already exist. But integration tests were really slow; their performance was abysmal. There was no way the Rails team could push system testing through integration tests without major backlash from the community. Nobody wants their test suite to suddenly go from five minutes to ten minutes. That kind of performance impact simply isn't acceptable. Speeding up integration tests had to happen before implementing system tests. So in 2014 and 2015, I worked with Aaron Patterson on speeding up integration test performance. Once we got integration tests to be only marginally slower than controller tests, system tests could inherit from integration tests. Another reason they took three years is that, contrary to what many may think, the Rails core team does not have a secret Rails feature roadmap. Rails is a volunteer effort, so if there isn't someone who's interested in implementing a feature, it's not going to get implemented. Of course, individually we may have an idea of what we'd like to see in Rails 6 or 7 or 8, but I'd hardly call it a roadmap. We each work on what we're passionate about, and often features grow out of real problems that we're facing with our applications at work. System tests are a really good example of this. Prior to working at GitHub, I was a programmer at Basecamp. When we were building Basecamp 3, we decided to add system testing through Capybara. I saw firsthand the amount of work it took to get system testing running in our application. This was a major catalyst for getting system tests into Rails 5.1. 
The work required to get system testing into Basecamp 3 reinspired the motivation to work on this feature, so that others could do less work in their applications and focus on what's really important: writing software. So this past August, David asked me if I would be interested in getting Capybara integration into Rails 5.1. Those are the exact words he said to me. Most of my work on Rails had been in the form of performance improvements, refactorings, or bug fixes, so I was really excited to work on a brand new feature for the Rails framework. There was just one caveat: I had never used Capybara before. I know that sounds ridiculous, but beyond writing three or four system tests in Basecamp, which I admittedly struggled with, I had never set up an application for Capybara nor written an entire test suite. This did have some pros, though. I got to experience firsthand what was hard about setting up Capybara, especially from a beginner's standpoint. I had no assumptions about what was easy or hard when I began development on system tests. Having no experience with Capybara allowed me to see the feature I was building solely from the perspective of what works for Rails and Rails applications. This is not to say that Capybara does anything wrong, but Rails is extremely opinionated about what code should look and feel like. For Rails, it's important that implementing system tests is easy and requires little setup, so the programmer can focus on their code rather than test configuration. When you're implementing something that you're unfamiliar with, it's best to have a set of guiding principles in order to make decisions about design and implementation. Without these goals, it's easy to get sucked into scope creep or bikeshed arguments about the details. Having guiding principles means that for any decision, you can ask yourself: does my code meet these guidelines? For guidance in building system tests, I, of course, used the Rails doctrine. 
This, as mentioned earlier today, is a set of nine pillars that drive decision making and the code that goes into the Rails ecosystem. While I was building system tests, I would regularly base decisions on the Rails doctrine. System tests meet all of these requirements in some way, but I want to take a look at a couple of the principles and how the system test infrastructure meets those specific requirements. The first is "optimize for programmer happiness." This pillar is the overarching theme in all of Rails. Rails' entire goal is to make programmers happier, and frankly, I'm spoiled because of this. Rails makes me happy, and I'm sure it makes all of you happy too, because you wouldn't be here at RailsConf otherwise. But you know what didn't make me happy? All of the implementation required to get Capybara running in our Rails applications. The code here is the bare minimum that was required for your application to use Puma, Selenium, and Chrome for system testing. Many applications had to do this multiple times to be able to use multiple drivers with their test suite, or had much more setup because they wanted to support different browsers with custom settings. Rails 5.1 system tests mean that you can use Capybara without having to configure anything in your application. Generate a new Rails 5.1 app, and all of the setup to run system tests is done. Programmer happiness was the driving force behind getting system testing out of your application and into Rails. You don't need to figure out how to initialize Capybara for a Rails application. You don't need to set a driver, you don't need to pass settings to the browser, and you don't need to know how to change your web server. System tests in Rails abstract away all of this work so that you can focus on writing code that makes you smile. All you need is one simple little method: driven_by. When you generate a new application, a test helper file is generated along with it. 
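For comparison, the bare-minimum pre-5.1 setup mentioned above looked something like this (a sketch of typical boilerplate, not the exact code from the slide; details varied from app to app):

```ruby
# test_helper.rb, before Rails 5.1 -- every app had to wire this up itself
require "capybara/rails"
require "selenium-webdriver"

# Tell Capybara to serve the app with Puma instead of its default server.
Capybara.register_server :puma do |app, port, host|
  require "rack/handler/puma"
  Rack::Handler::Puma.run(app, Host: host, Port: port)
end
Capybara.server = :puma

# Register and select a Selenium driver that uses Chrome.
Capybara.register_driver :selenium_chrome do |app|
  Capybara::Selenium::Driver.new(app, browser: :chrome)
end
Capybara.default_driver = :selenium_chrome
Capybara.javascript_driver = :selenium_chrome

# ...plus window resizing, database cleanup, URL helpers, and so on.
```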
If you've upgraded your application, this file will be generated when you generate your first system test or scaffold. All of the code we looked at previously is contained in this one method, driven_by. It initializes Capybara for your Rails app, sets the driver to Selenium, the browser to Chrome, and customizes the screen size. Rails values being an integrated system, a monolithic framework, if you will, that addresses the entire problem of building a web application, from databases to views to WebSockets to testing. By being an integrated system, Rails reduces duplication and outside dependencies so you can focus on building your application instead of installing and configuring outside tools. Prior to 5.1, Rails as a whole didn't address the need for system tests. By adding this feature, we've made Rails a more complete, robust, and integrated system. As DHH said in 2014, Rails was incomplete when it came to system tests. Rails 5.1 closes that gap by adding Capybara integration. You no longer need to look outside of Rails to add system testing to your applications. Rails also values progress over stability. Yes, this means that often betas, release candidates, and even final releases have a few bugs in them, but it also means that Rails hasn't stagnated over time. Rails has been around for many years, and the progress that we've made in that time is astounding. We care about our users, but we also care about the framework meeting the demands of the present and future, which sometimes means adding improvements that won't be stable. You also don't know whether a feature is just right until someone else actually uses it. I could have spent years testing and improving system tests, but ultimately I merged them knowing there were a few bugs left. I did that because I knew that the community would find the answers to the problems I didn't know how to solve, and find new issues in the implementation that I just hadn't thought of. 
By valuing progress over stability and merging system tests when they were 95% done instead of 100% done, many community members tested the beta release and provided bug fixes for issues present in system tests. A few features were even added, and some functionality was moved upstream to Capybara instead. System tests progressed more by merging when there were a few bugs left than they would have by waiting until they were perfectly stable. Now that we've looked at the driving principles behind system testing, let's look at the decisions around implementation and architecture in the Rails framework. We're going to look at why I chose specific configuration defaults, and the overall plumbing of system tests in the Rails framework. The first configuration default I want to talk about is why I chose Selenium for the driver. The barrier to entry for using system tests should be zero setup, and they should be easy for beginners to use. In Capybara, the default driver is RackTest. I didn't think this was a good default for system testing, because RackTest can't test JavaScript and can't take screenshots. It's not a good default for someone who's learning how to actually test their system. I also had quite a few folks tell me on the pull request that they thought Poltergeist was a better choice because it was faster and that's what they used in their apps. While it is true that Poltergeist is popular and faster, ultimately I chose Selenium for a few reasons. Selenium doesn't require the same kind of system installs: Poltergeist requires PhantomJS, and capybara-webkit has a dependency on Qt. Both of these system installs aren't something that Rails could take on. Since Selenium doesn't have those requirements, it made sense for Selenium to be the default over Poltergeist or capybara-webkit. One of the coolest things about Selenium is that you can actually watch it run in your browser. 
Poltergeist and capybara-webkit are headless drivers, which means they don't have a graphical interface. While they will produce screenshots and they can test JavaScript, you can't actually see them run. Watching a Selenium test run in a real browser like Chrome or Firefox is almost magical, which also makes it better for beginners. New programmers, especially those learning Capybara and Rails, can physically see the tests running, and it's easier to discern what's happening or what they might be doing wrong. The best part about system tests is that if you don't like Selenium, the driver options are extremely easy to change. To change the driver that system tests use, open your system test helper and change the driven_by argument from :selenium to :poltergeist. Of course, you're going to need to install PhantomJS and add the gem to your Gemfile, but changing the driver setting itself is super simple. Rails won't stop you from passing whatever you want here, but Capybara will only accept Selenium, Poltergeist, capybara-webkit, and RackTest. Another decision that differs from Capybara's defaults is that Rails uses the Chrome browser with Selenium instead of Firefox. Chrome is widely used and has a greater market share than Firefox. In general, I think most development is done in Chrome, so it seemed like a sensible default from that standpoint. Another reason I chose Chrome was that for a while, Firefox was broken and didn't work at all with Selenium 2.53. This has since been fixed, but when I started working on the feature, it was one of my motivations for making Chrome the default. There was literally no way that I could merge system tests and have the default configuration be broken. Firefox now works with Selenium, and if you upgrade both Firefox and your selenium-webdriver gem, you can use Firefox. If you want to use Firefox instead of Chrome, you can simply change the using keyword argument from :chrome to :firefox. 
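Changing the driver is a one-line edit to the generated helper (Poltergeist shown here; remember you'd need PhantomJS installed and the poltergeist gem in your Gemfile):

```ruby
class ApplicationSystemTestCase < ActionDispatch::SystemTestCase
  # Headless Poltergeist instead of Selenium:
  driven_by :poltergeist

  # Or keep Selenium but drive Firefox instead of Chrome:
  # driven_by :selenium, using: :firefox
end
```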
The using argument is only used by the Selenium driver, since the other drivers are headless and don't have a GUI. I'd love to support more browsers in the future, like Safari or Internet Explorer or whatever it is that you're using. driven_by has a few optional arguments that are supported by Selenium. The screen_size argument sets the browser's max height and width, which is good for testing your website at different browser sizes or setting the size for screenshots. driven_by also takes an options hash, which is passed to the browser initialization. This can be useful for passing options that aren't explicitly defined in Rails but are accepted by Capybara, like the url option. One of the coolest features of system tests is that they automatically take a screenshot when a test fails. This is good for freeze-framing failures so you can see what went wrong. It works with all drivers supported by Capybara except for RackTest. Included in the system test code on the Rails framework side is an after_teardown method that takes the screenshot if the test fails and screenshots are supported. Let's take a look at those screenshots in action. First, I'm going to change the assertion in the test to say "failure screenshots" instead of "demo". Then we can run the test just like we did before, and it boots the Puma server. You can see that the test failed, and in the output of the test there is a link to an image. If we open it, we can see that the screenshot says "system test demo" while the test was looking for "system test failure screenshots", so we can actually see why it failed. You can also take a screenshot at any point while your test is running by calling take_screenshot. This can be useful for tools like Percy for comparing front-end changes, or for saving a screenshot of what your website looks like at any point in your test run. 
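Both screenshot behaviors look like this in a test (a small illustrative example; the test and page names are made up):

```ruby
class PostsTest < ApplicationSystemTestCase
  test "taking screenshots" do
    visit posts_url
    # Save a screenshot at this exact point in the run; useful for
    # visual-diffing tools or documenting what the page looked like.
    take_screenshot

    # No code is needed for failure screenshots: if the assertion below
    # fails, after_teardown saves one automatically and prints a link to it.
    assert_selector "h1", text: "Posts"
  end
end
```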
One of the less obvious features of system testing is that Database Cleaner is no longer required. Those of you who have worked with Capybara before will know that transactions during Rails test runs with Capybara wouldn't get properly rolled back. It had been the status quo for a long time that ActiveRecord was just broken and unable to roll back transactions when tests were threaded. The basic gist of the problem was this: when a test started, the Rails testing code would start a transaction on a database connection, and then the web server would open a second connection to the database on a separate thread. When the test ran, database inserts or updates would happen on the second thread instead of the thread with the transaction. When the test finished, the inserted or updated records wouldn't be rolled back, because the fixture thread couldn't see the web server thread. If the inserted or changed records aren't rolled back at the end of the test, subsequent runs will fail due to uniqueness constraints or other issues with leftover data. I spent an embarrassing amount of the six months building system tests trying to solve this problem. It took me a while to understand the real issue with ActiveRecord, and I was surprised how many users had just accepted for years that this was an issue with Rails. I wanted to solve it so that we didn't have to force users to use yet another dependency. The problem was, I didn't know how to solve it, and I'll be honest that concurrency isn't one of my strengths. So I had to ask for help from the two people who know more about ActiveRecord and concurrency than I do: Aaron Patterson and Matthew Draper. This is definitely a picture of me. The problem was that Aaron and Matthew had differing opinions on how to fix it. I first tried to fix it Aaron's way, which was to tell ActiveRecord to just check the connections back in when the threads were done with them, but this broke about 75% of the ActiveRecord tests, so that wasn't going to be acceptable. 
That approach meant trying to check the connection back in while the transaction was still open and still needed it, so you can't do that. Matthew came up with a different solution, which was to force all of the threads to use the same connection. When the test starts, the transaction is started and a database connection is opened, and then when the Puma server starts, it's forced to connect to the database using the already existing connection instead of creating a new one. That way, the threads can see each other's work. All of the database inserts and updates happen on the same connection as the original test transaction, and then they can all be rolled back when the test transaction is closed. Without Matthew and Aaron's help, I probably would not have figured out how to fix this problem, and you all would have had to use Database Cleaner forever. We've spent a lot of time looking at individual settings in the public API for system tests, so it's time to take a look at the plumbing that makes all of this work. None of the code we look at from this point on is anything you should ever have to touch, unless you find a bug. This is just everything that Rails has abstracted away so you don't have to worry about configuration and can focus on writing your tests. System tests live in Action Pack under the ActionDispatch namespace and inherit from integration tests, so they can use all of the URL helpers that are already implemented on the integration side. The entire class can't fit on this slide, so I'm going to go through the methods as they're called when the test is started. When you run a system test, start_application is called first. This boots the Rack app and starts the Puma server. Your system test helper file then calls the driven_by method. This is where the default configuration settings are implemented. When driven_by is called, a new system testing driver class is initialized with the arguments that you passed in. 
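Before going deeper into the plumbing, Matthew's connection-sharing idea can be modeled as a tiny plain-Ruby sketch. This is an illustration of the concept only, not ActiveRecord's actual implementation: the test thread and the "server" thread share one connection object, so the server's inserts are visible to the test transaction and disappear when it rolls back.

```ruby
# Toy model of a single database connection shared between threads.
class SharedConnection
  def initialize
    @records = []
    @lock = Mutex.new # a lock guards the connection, since two threads now share it
  end

  def insert(record)
    @lock.synchronize { @records << record }
  end

  # Rolling back the test transaction discards every record written
  # through this connection, no matter which thread wrote it.
  def rollback!
    @lock.synchronize { @records.clear }
  end

  def count
    @lock.synchronize { @records.size }
  end
end

connection = SharedConnection.new                  # opened when the test transaction starts
server = Thread.new { connection.insert("post") }  # the "Puma" thread reuses the same connection
server.join

connection.count    # 1 -- the test thread sees the server thread's insert
connection.rollback!
connection.count    # 0 -- no leftover data, no Database Cleaner needed
```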
The Driver object is initialized with a name, browser, screen size, and options. The browser, screen size, and options are only used when the driver is Selenium. The running test is then initialized. It's important to call use on the driver here so the driver is set when the test is initialized, not when the class is loaded. The use method calls register if the driver is Selenium. The register method is how Capybara sets the browser to Chrome with the options and screen size passed to driven_by. Finally, the use method calls setup, which simply sets Capybara's current driver to the driver you passed into driven_by. This is the basic plumbing required in Rails to get system tests running. It's relatively simple, but it's great that none of you have to put that in your applications anymore. One of the things that struck me about working on this feature for Rails was how different it was from building features for a product or client. I think this is in part because the work is so public, whereas client or product work is usually a secret until the big reveal. You're probably thinking, duh, it's open source, so it's by virtue public. But almost all of the work that I'd done previously was related to performance improvements, refactorings, and bug fixes. Those are things that folks don't come out of the woodwork to comment on. That isn't the type of work that people care a lot about when it comes to style or implementation; it doesn't change their application unless you're touching the public API. But adding a brand new testing framework? That's something everyone has an opinion about. In the three months that my system test pull request was open, I got 11 reviews and 161 comments, and that's not even including all of the conversations we had in the Basecamp chat about it. This highlighted one of the big challenges of open source for me: it's really difficult to not constantly feel like you're being judged. 
Doing open source work makes you extremely vulnerable. It can feel like every commit, every comment, and every code change is open to public scrutiny. This is one of the hardest things about working in open source, and one of the things that I think keeps new contributors from working on open source. I'm on the Rails core team, and I still get an adrenaline rush when I push to master. I still start sweating if the build fails after I merge a pull request. I've been doing open source for many years and I still feel vulnerable. It's difficult to remain confident and keep your cool when doing publicly visible work. I often had to fight the urge to rage-quit all of it because I was tired of debating choices that I'd made. Even if the other person is right, it's still exhausting to feel like you're having the same conversation over and over and over again about implementation. But public debate is an inherent part of open source, and you're always going to have to argue for your position. When your confidence is shaken, it can be tempting to look for ways to find consensus among everyone who's reviewing your work. The desire to find consensus isn't unique to open source. But what is unique to open source is that the stakeholders you're trying to find consensus with have varying levels of investment in the end result. When you're building a feature for the company you work for, or for a client, you usually know who your stakeholders are. Those are the people who care most about the feature you're working on. But with open source, you don't really know who's going to care until you open that pull request. Of course, I knew the Rails team was a stakeholder and cared a lot about how system tests were implemented, and I knew the Capybara team cared about the feature as well. But I wasn't prepared for all of the other people who would care. And of course, caring is good. 
I got a lot of productive and honest feedback from community members, but it was still really overwhelming to feel like I needed to debate everyone. Rails' ideology of simplicity differs a lot from Capybara's ideology of lots of features, and all of the individuals who were interested in the feature had differing opinions as well. Which driver was the best default? Was it okay to change Capybara's longtime default from RackTest to Selenium? Was it even desirable to include screenshots by default? Was it fine to change the default browser from Firefox to Chrome? I struggled with how to respect everyone's opinions while building system tests but also maintain my sense of ownership. I knew that if I tried to please all three groups and build system tests by consensus, I would end up pleasing no one. Everyone would end up unhappy, because consensus is the enemy of vision. Sure, you end up adding everything everyone wants, but the feature will lose focus, the code will lose style, and I would lose everything I felt was important. I needed to figure out a way to respect everyone's opinions without making system tests a hodgepodge of ideologies or feeling like I threw out everything I cared about. I had to remind myself that we all had one goal: to integrate system testing into Rails. Even if we disagreed about the implementation, this was our common ground. With this in mind, there are a few ways that you can keep your sanity when dealing with multiple ideologies in the open source world. One of the biggest things is to manage expectations. In open source, there are no contracts. You can't hold anyone else accountable except yourself, and no one else is going to hold you accountable either. You're your own boss and your own employee. You're the person who has to own the scope, and you're the person who has to say no. 
There were a ton of extra features suggested for system tests that I would love to see, but if I had implemented all of them, system tests still wouldn't be in Rails today. I had to manage the scope and the expectations of everyone involved to keep the project on budget. While I really respected everyone's opinions on system tests, ultimately I was building the feature for Rails. System tests needed to fit into Rails' look and feel. To do that, I had to work with Capybara's ideology that system testing should be robust and have many options. There's nothing wrong with that approach, but it doesn't follow the Rails doctrine. Because Capybara didn't provide a clear enough path for getting system tests into your Rails application, Rails had to take that on. In the end, it was the Rails team who was going to decide when system tests were mergeable. Since I was building the feature for Rails, I honored Rails' principles first and everyone else's second. When you're building open source features, you're building something for others. If you're open to suggestions, the feature might change for the better. Even if you don't agree, you have to be open to listening to the other side of things. It's really easy to get cagey about the code that you worked so hard to write. I still have to fight the urge to be really protective of the system test code when someone wants to add some code or change how it works. I wrote it and I put a lot of time and effort into it, so I'm protective of how it looks and feels. But I also have to remember that it's no longer mine, and never was mine. It now belongs to everyone who uses Rails, so I need to be open to suggestions, tweaks, and changes. Which brings me to my last point: open source doesn't work without contributors. A perfect example of this is how I merged system tests when I knew they weren't 100% stable. 
I did this because I didn't know how to fix a few bugs, like displaying the correct number of assertions, or how to run system tests separately from bin/rails test. The best part was that the system worked. You can't push an unfinished project live and ask your client to contribute to it, improve upon your work, and fix bugs. But with open source, you can build a foundation for others to work off of, even if it's not perfect. I didn't merge system tests with known issues because I was lazy. I merged them because I knew that contributors could help me fix the problems when they tested them in a real application. And because of this, when the Rails team released Release Candidate 1 less than a month after the first beta, all of the known issues that I knew about before merging had been fixed by other people. twalpole, Capybara's maintainer, added minitest assertions to Capybara. Previously, users running system tests with minitest would see an incorrect number of assertions and failures, because Capybara handled those the RSpec way. This was one of the things I wasn't sure how to fix. I knew fixing it in the Rails code was wrong, but I needed Capybara's buy-in to fix it upstream, so it was great that the maintainer was willing to add this feature for Rails. It's a huge win for system tests. Robin850 sent a pull request that changed system tests so they don't run with the whole suite. We didn't want system tests to run when you run bin/rails test in your application, because they can be slow since they're using an actual browser, so by default we run them with a separate test command. Two other contributors helped change driven_by so it could be used on a per-subclass basis rather than globally. When I originally designed system tests, driven_by was intended to be a global setting, because I assumed that if you were a Capybara power user, you just wouldn't be using driven_by anyway. 
Once system tests were merged, it became clear that others were really excited about using the driven_by method per subclass instead, and about setting up multiple drivers. So these two contributors helped make system tests a lot more robust.

Wrenchab helped improve screenshots, making them configurable and displaying them differently based on environment settings, so your CI versus your terminal. I knew this was a problem when I merged, but I really wasn't sure how to fix it. He came up with an elegant and easy-to-use solution. This was also his first contribution to the Rails framework.

All of these improvements to system tests highlight the real beauty of open source: it doesn't work without contributors, and it doesn't work without you. Open source is for everyone, by anyone. Contributing to open source isn't easy, especially to a mature framework like Rails, but Rails has an astounding past and a bright future because of all of the contributors who care. Don't believe anyone who says Rails is dying, because frankly the future has never been brighter. Rails can and will benefit from your contributions in the future. Just like the contributors who pushed changes to system tests, you too can make system tests and Rails better. By contributing, you can help define Rails' future. I hope that next year, at RailsConf 2018, I see some of you talking about your contributions to Rails and open source.

I hope that you enjoyed learning about system tests, how they work, and my process for writing the feature. I also hope that I've inspired you to contribute to open source, especially Rails. And if you're looking to learn more about contributing to Rails, stay for Alex Kitchens' talk, which is up next after the break. Thanks for listening.

The question was: is the Rails team going to take a position on testing and push system tests more?
I mean, so the first part of that is that system tests first need to exist, and now they do. There are different kinds of testing, and because system testing can be a lot slower, since you're using a browser and JavaScript and all this other stuff, it doesn't necessarily make sense for us to push system tests as the only option, as in "you have to system test and you can only system test." We're not going to delete unit tests from Rails tomorrow, as far as I'm aware; I think that would make a lot of people really mad, so it's unlikely that we would do that. You should use what you want to use, but now you don't have to look outside of Rails to use system tests, and that's the best part of it.

The question was: how do I find time to contribute to Rails and open source with my full-time job? That's really hard, and I think that's one of the reasons why sometimes I would feel really burned out on system tests: I was just tired of spending my weekends working on it. My balance is sort of weird. At GitHub I do have some time to work on open source, but that's not my main job, so I find the balance by only working on things that I'm really excited about, whether for GitHub, or for Basecamp when I was there, or usually the stuff that Aaron is excited about, because he likes to pawn stuff off on me. So that's hard. I think one thing that would help is convincing your company to give you some open source time, even if you have just one day a month where you give back, and you can do it in a group so it's easier to get ramped up on a project. Also, if you're using a Rails app at work, staying up to date will actually make you more likely to find bugs in Rails, because not everyone is using up-to-date applications. By doing that you might find a bug, and if you can fix it, then
you get that as a contribution as well. Features obviously take a lot more time and effort than fixing a bug, usually, though it depends, because a bug is usually a little bit more contained than a feature.

The question was: does Rails have an opinion on how you structure your system tests? The driven_by method structures your configuration and your setup, but beyond that we don't have an opinion on everything else. We didn't add different ways of writing Capybara tests; for example, we didn't overwrite visit and change it to get or post, we left that the way it was. So generally, write them the Capybara way unless there's a compelling reason to add a more simplistic method, and if we wanted to do that, we haven't done it yet. So write them the Capybara way until we change that, if we ever do.

The question was: what was the most exciting or fun part about building this feature? The day I merged it. It went on a long time (the pull request was open for three months), so merging it was the most fun part, because you finally get that "yay, it's done, everyone gets to use it" moment. This was also one of the most well-tested betas, which was really exciting. I don't know if it was just because everyone was really excited about system tests and encrypted secrets and a few other things, but we had more bugs found and fixed in our first beta than I think we've ever had, so that was also really exciting.

The question was: how did I settle on a name for this? I actually left that out of the talk because I thought it wasn't interesting. Originally someone had picked a name a long time ago, in the Basecamp project that we have for Rails, and I said sure, I'll make it that: Rails::SystemTestCase. But then it didn't fit anywhere, because you don't want to put it under the Railties namespace. Railties is the only place that there's a Rails namespace under the Rails umbrella, and there's no other testing framework inside of Railties, so that didn't make sense. It also needed to inherit from integration tests, and before I fixed the Active Record thing, we thought there was going to be an Active Record dependency, or even possibly a Database Cleaner dependency, so I took everything and moved it to its own gem. Then, right before merge, somebody said, hey, what about the name? There was a whole debate, it got moved back in beside integration tests, and that's how it ended up as ActionDispatch::SystemTestCase. That also highlighted that integration tests are really weird: they should be called IntegrationTestCase, not IntegrationTest, because everything else is TestCase, so maybe that'll get fixed one day.

I am out of time, so come find me after. I'm sorry for those of you who didn't get your questions answered.
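As a footnote to the "write them the Capybara way" answer above, here is a minimal sketch of such a test. It assumes a Rails 5.1 app with the generated application_system_test_case helper; the User model, routes, field labels, and validation message are hypothetical.

```ruby
# test/system/users_test.rb -- a system test written "the Capybara way",
# exercising model, controller, and view together through a real browser.
require "application_system_test_case"

class UsersTest < ApplicationSystemTestCase
  test "leaving out the name shows a validation error" do
    visit new_user_url                  # Capybara's visit, not get/post
    fill_in "Email", with: "a@example.com"
    click_on "Create User"
    assert_text "Name can't be blank"   # the user actually sees the error
  end
end
```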