My name is David Tolley, and I'm with salesforce.com. If you were in the previous talk, Jim Evans is from salesforce.com as well; it's almost like we sponsored the conference or something. I'm on something called the productivity cloud at Salesforce, which is a pretty interesting group to work with. We don't create features that you all, as potential customers, would interact with. We build features and tools that help our engineers be more productive: faster results for your commits, more reliable results, so that instead of having to dig through your test results and figure out whether a failure was an infrastructure failure or a real test failure, our team helps with that. We try to make our developers' lives easier.

Beyond that, we experiment with new technologies. I can't really get into specifics, but any time a new technology comes out, Greg Wester over here, who loves new and shiny things, jumps on the wagon. We experiment with it, do a proof of concept, give it to people, and if they like it, awesome. One example is something called strict unit tests: if anything in the JVM tries to access something like I/O outside the test's own space, the test simply fails. We also created a new JUnit 4 framework at Salesforce, migrating from an older JUnit 3 framework; that was all handled by our testing cloud.
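To make the strict unit test idea concrete, here's a minimal sketch of how a JUnit 4 rule could fail any test that touches file or network I/O. This is an illustration of the concept, not Salesforce's actual implementation; a production version would have to whitelist benign reads such as class loading.

```java
import java.io.FilePermission;
import java.net.SocketPermission;
import java.security.Permission;
import org.junit.rules.ExternalResource;

/**
 * Sketch of a "strict unit test" rule: any file or socket access during
 * the test throws, which fails the test. Illustrative only; a real
 * version would whitelist class-path reads, temp dirs, and so on.
 */
public class StrictUnitTestRule extends ExternalResource {
    private SecurityManager original;

    @Override
    protected void before() {
        original = System.getSecurityManager();
        System.setSecurityManager(new SecurityManager() {
            @Override
            public void checkPermission(Permission perm) {
                // Allow everything except the I/O a unit test shouldn't touch.
                if (perm instanceof FilePermission || perm instanceof SocketPermission) {
                    throw new SecurityException("Strict unit test: I/O not allowed: " + perm);
                }
            }

            @Override
            public void checkPermission(Permission perm, Object context) {
                checkPermission(perm);
            }
        });
    }

    @Override
    protected void after() {
        // Restore the previous manager once the test finishes.
        System.setSecurityManager(original);
    }
}
```

A test class would opt in with `@Rule public StrictUnitTestRule strict = new StrictUnitTestRule();`.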
One disclaimer before I get going: Salesforce is a really huge company, and all the data and metrics I'm going to talk about concern just our platform. Salesforce has been around for over 10 years. We own a bunch of different companies, like Heroku and ExactTarget, and these aren't small companies; they have hundreds or thousands of employees. Salesforce.com itself has, I don't know, 13 or 14 thousand employees. So again, just a little disclaimer: I'm only talking about the platform here.

Before I get into the history of Selenium at Salesforce, I want to give a little recognition to Simon and Jim and the rest of the committers here. In his keynote, he mentioned something along the lines of there being two or three times more jobs for Selenium and WebDriver than for some other testing frameworks. A lot of people give them grief; people either love WebDriver or they hate it, and some call it a necessary evil. But I'm able to provide for my family because of them, and I'm sure a lot of you here can say the same. So let's give them a round of applause; they help our lives.

Selenium has a really interesting history at Salesforce; we were one of the early adopters, and I'll be talking about how extensive our test suite is. Around 2008, Doug Chasman, now one of our distinguished engineers, said it would be great if we could test our UI. So they looked at Selenium RC, did a proof of concept, and it went really viral. The great thing about Selenium is that it's so easy to write a test. You don't need to worry about dependency injection. You don't need to worry about your code being testable, or about writing your code so that other people can test it. You're dealing with a UI: if you can open a browser, you can write a test for it. So it saw really quick developer adoption, which is good in a certain respect; we were getting more tests, and we were testing our application.

But Selenium RC is very, very flaky. We were spending almost as much time maintaining our existing test suites as writing new code. By around 2012 we had about 15,000 Selenium tests, and for most people here 15,000 is kind of a ridiculous number, but it's okay, because today we have around 50,000 Selenium tests. Not only that, we have two committers on the Salesforce payroll: Jim Evans and Luke Inman-Semerau, whom Jim talked about. And unlike some companies, every single one of our Selenium tests runs through our CI process. Every time someone checks in code, the commit is tested against 50,000 Selenium tests, plus all of our other tests as well. It's a pretty cool concept.

Salesforce engineering, just the platform, is pretty huge: over 1,000 engineers. We have architects who drive our scale; they make sure that whatever we're building now will scale in the future, and they drive our overall design. Then we have the feature developers, who create the new features and enhancements; they're chiefly responsible for the unit tests and some integration tests. But the really cool thing about Salesforce is that our quality engineers are just as important as, and in some respects more important than, our feature developers. We ensure product quality through test automation; we don't rely on a lot of manual testing, because manual testing just doesn't scale. Of course, whenever a new feature, enhancement, or release goes out you need some manual testing, but for the most part every one of our quality engineers is a developer in their own right. Instead of feature code, they write test automation code. They're chiefly responsible for the integration tests and almost solely responsible for the UI functional tests: Selenium, WebDriver, those testing frameworks.

We also have hundreds of projects, with product teams ranging from one person to ten or more. And it's deliberate that I say one person. A really cool thing about being an engineer at Salesforce is something called PTOn. If you have a fantastic idea, maybe something outside the box, or something like "this isn't working, I want to fix it," management will give you the time, whether that's a day, a week, or a month, depending on the scale. They say, okay, you have the time, go make a proof of concept, let's see how it goes. So there are many small one- or two-person teams at Salesforce prototyping these concepts and new technologies, working on cool new features.

On top of that, our developers and our quality engineers are really tightly integrated. The QEs are involved from the get-go of a new product, enhancement, or feature. They work with the product owner and the project managers, and whenever a new feature is being planned, they give their feedback. Not only do the developers estimate how long it will take to build something; the QEs also say, it's going to take around this much time to test it and make sure it's viable. And each one of our engineers, whether developer or QE, averages about two to three commits per day.
That's pretty significant when you're talking about over 1,000 people: somewhere around 1,500 to 3,000 commits per day at peak times. And again, every single one of those commits goes through our entire CI process, which is over 450,000 tests. If each of those commits were run by itself, it would be somewhere in the neighborhood of one billion test executions per day. That's a big number. A lot of people at smaller companies and startups shake their heads at maybe 500 Selenium tests per day; when you're talking about hundreds of thousands of Selenium test executions per day, it's a whole new ballgame, a whole new level of complexity.

The CI process at Salesforce is basically the same as at any other company. We're really big on agile development, but we're not so rigid that we say every team has to run two-week sprints or three-week sprints; we give teams the authority to pick their own time frames. But we do have three major releases per year: every four months, really big enhancements and really big new features come out to salesforce.com. Along the way there are a lot of frequent, smaller releases as well, almost on a daily basis, whether it's fixing a bug or shipping a product enhancement; whatever needs to happen, we have the ability to release every single day.

Along with being able to test our releases, commits, and code changes, being a public company we have to keep an audit trail to be SOX compliant and all that great stuff. So we have a really cool in-house tool called GUS, an internal agile project management system that we developed. It has a really nice sprint wall where you can put your stories, and it provides a great audit trail. Essentially, when you commit at Salesforce, your commit description says which story it's linked to in GUS, what your change does, and which release it's going to, and then you submit it. The cool automation there is that it looks at the commit message, goes to GUS through the API, compares the commit message against GUS, and does some light verification to make sure they match up.
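To make that verification step concrete, here's a toy sketch of what such automation could look like. GUS's real API is internal to Salesforce, so the `GusClient` interface and the `W-1234567` story-id format are assumptions for illustration only.

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

/**
 * Toy commit-message check against a work-item tracker. The W-1234567
 * id format and the GusClient interface are invented; GUS's real API
 * is internal to Salesforce.
 */
public class CommitVerifier {
    private static final Pattern STORY_ID = Pattern.compile("W-\\d+");

    interface GusClient {
        boolean storyExists(String storyId);
        boolean storyTargetsRelease(String storyId, String release);
    }

    private final GusClient gus;

    public CommitVerifier(GusClient gus) {
        this.gus = gus;
    }

    /** True if the commit may proceed to the precheckin suite. */
    public boolean verify(String commitMessage, String targetRelease) {
        Matcher m = STORY_ID.matcher(commitMessage);
        if (!m.find()) {
            return false;   // no linked story -> reject before precheckin
        }
        String storyId = m.group();
        return gus.storyExists(storyId)
            && gus.storyTargetsRelease(storyId, targetRelease);
    }
}
```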
If they do match up, the commit is sent to something we call precheckin. Precheckin is just a really simple, fast test suite: it runs our unit tests and our JS unit tests. One of our developers wrote a JavaScript unit testing framework, xUnit.js, that's used really heavily at Salesforce; I believe he's open sourced it. It's a fantastic product, and along with our Selenium tests it really helps us verify that our UI is up to snuff. We get those results back in about 30 to 45 minutes, so fairly quickly you have some kind of result. If the commit passes that, it goes into our big integration and functional testing suite. There we have around 200,000 integration tests, which is a lot, and then there's the huge, time-consuming chunk: our 50,000 Selenium tests. Because there are 50,000 of them, they take a long time to run.

That's one of the main reasons we can't run all of our tests against every single change list. Essentially, we have to take a group of change lists, batch them together, and run our tests against that. When a batch does get run, we do some pretty cool things. We try to parallelize as much as possible: we have separate runs for different flavors of IE and different versions of Firefox, Chrome, and Safari. Last year Salesforce released Salesforce1, our mobile app, which is a really cool tool, and we have a ton of Appium tests for it. Luke, one of our employees and a Selenium committer, is one of the people in charge of Selendroid, and he does a lot of the Android testing for it. So there's a lot of cool work to parallelize things, and we run a lot of tests.

But we hit a tipping point: a critical mass of tests. 50,000 tests that take 10 hours to run, batched together across change lists, don't give you results that are easy to act on. If you're in a batch of 10 different change lists and you get back 100 test failures, it takes a long time to figure out why the tests failed. Those tests weren't run against your change list alone, so it's not guaranteed that your change list caused the failure. There's a lot of logic and automation we apply to figure out what's going on there. Not only that: every time one of those 50,000 tests fails, we rerun the failures. It's something we call flappiness at Salesforce; some people call it flaky tests. Essentially a test runs and fails, we rerun it, and then it passes. I'm sure a lot of you have experienced that. The longest run takes over 10 hours, and again, we batch all of these commits together, so when failures do happen, it takes a lot of effort to debug and figure out what happened.
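Here's a sketch of that rerun-on-failure policy: classify each result as a pass, a hard failure, or a "flap." The `TestRunner` hook is a hypothetical stand-in for the CI harness.

```java
/**
 * Rerun-on-failure sketch: a test that fails and then passes on the
 * rerun is a "flapper" and gets flagged for triage rather than
 * immediately blaming the batched change lists.
 */
public class FlapperDetector {
    public enum Outcome { PASSED, FAILED, FLAPPED }

    interface TestRunner {
        boolean run(String testName);   // true = the test passed
    }

    private final TestRunner runner;

    public FlapperDetector(TestRunner runner) {
        this.runner = runner;
    }

    public Outcome runWithRetry(String testName) {
        if (runner.run(testName)) {
            return Outcome.PASSED;
        }
        // First attempt failed: rerun once. A pass now means the test
        // "flapped" -- flag it for the flakiness backlog.
        return runner.run(testName) ? Outcome.FLAPPED : Outcome.FAILED;
    }
}
```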
Salesforce is always trying to hire new people; we're trying to scale and push out new products and services as fast as possible. So we needed to scale not only our engineering team but also our test automation suites. If results continually take 10 hours with 50,000 tests, you can imagine 60, 70, or 100,000 tests as new features arrive; it just doesn't scale. With three major releases per year, there's a lot of testing that has to happen, and so we needed to refactor our longest-running test suite, which happened to be Selenium.

I was hired at Salesforce last September, about a year ago now. Greg actually recruited me, and the main reason I came on was to refactor our Selenium test suites: to figure out why the suite took so long, what we could do to improve it, and what we could do to make it better for the future. When I got there, everyone had their own ideas about why our test suite took so long, but no one really knew. I'm sure a lot of people here have heard the term DevOps. Essentially, that's developers being able to act on data: they have access to real-time, pertinent information about what their code is doing in production, so they can figure out how often their code throws errors and what kind of CPU and I/O problems they're having.

You need that same data-driven mentality when it comes to refactoring a test suite. You can't just say, "the Selenium tests take a long time, it must be Selenium RC, let's delete the whole suite and move on." You can do that, but it's not a very good way to do it. I needed to get some data. One critical fact we didn't know was how many Selenium RC tests we had versus how many WebDriver tests, so we needed to create some methods to generate that data; that was one of the first things I had to do (there's a sketch of the idea below).

Then we had to figure out which metrics we cared about. Which tests take the longest? The main reason we were refactoring this suite was that it took so long. Which tests fail most often? Not only does a slow suite eat up developer time while people wait for results; when a test fails often, someone has to keep going in and making changes, which is a big time sink. Which tests are the flappiest? Whenever a test fails and then passes on rerun, there's obviously some kind of issue, whether a test failure or an infrastructure failure, and the tests that flap more often than others are the ones we should refactor. And with 50,000 tests, there are obviously duplicates in there: one team writes a test for a feature, and two or three years later another team says, "okay, I'm going to test this as well." So there are a lot of duplicates.

Essentially, we developed tools, methods, and code to get those metrics for us. Developing and cataloging those metrics was the key critical part of this whole process. Once we had run everything and gathered the metrics we needed, we divided the tests up by team and sorted them in our backlog based on those metrics: which ones failed most often, which ones took the longest to run. Then we started acting on them. You can't just stop production at a company like Salesforce; you can't say, "we're not going to develop any new features now, we're just going to refactor tests for the next sprint or the next month or two." You have to sneak the work into the backlog. Our salespeople don't understand spending time fixing test code; some people just don't, and even though there's a big benefit in the long run, it's a hard sell. So we keep this backlog of tests, sorted by priority, and each sprint we ask each team to take two or three and refactor them. And any time we find duplicate tests via code coverage, we can say, "hey, these tests are duplicates," and just delete them. That's a great win for us, really easy low-hanging fruit.

So what are the current results? It's still a work in progress; it's not done yet. We're trying to migrate 35,000 Selenium tests, and even with 100 teams doing two or three per sprint, today we have about 5,000 tests migrated. That doesn't sound like a lot, but 5,000 tests is larger than most people's complete Selenium suite, never mind the part of it they've refactored. Not only that, but since we're continuing to develop new features and new enhancements, thousands of new WebDriver tests have been added along the way.
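One simple-minded way to build that RC-versus-WebDriver inventory, sketched here as an illustration rather than the method Salesforce actually used, is to inspect each test class for which API it holds. This same data enables the suite split described next.

```java
import java.lang.reflect.Field;
import org.openqa.selenium.WebDriver;
import com.thoughtworks.selenium.Selenium;

/**
 * Naive classifier: does a test class hold the old Selenium RC API
 * (com.thoughtworks.selenium.Selenium) or the WebDriver API?
 * A real version would also walk superclasses and shared base fixtures.
 */
public class SuiteClassifier {
    public enum Kind { RC, WEBDRIVER, UNKNOWN }

    static Kind classify(Class<?> testClass) {
        for (Field f : testClass.getDeclaredFields()) {
            if (Selenium.class.isAssignableFrom(f.getType())) {
                return Kind.RC;          // uses the legacy RC interface
            }
            if (WebDriver.class.isAssignableFrom(f.getType())) {
                return Kind.WEBDRIVER;   // uses the WebDriver interface
            }
        }
        return Kind.UNKNOWN;
    }
}
```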
The great thing about migrating to WebDriver, though, is that every one of our tests can run on all the different browsers. At Salesforce, a lot of our customers are enterprise customers, so there's huge use of Internet Explorer. A lot of companies don't have that problem, maybe they only run Firefox, but we have to ensure that our number-one browser is definitely being tested. With Selenium RC we couldn't do that very well; with WebDriver we can. And not only that: now that we have the metrics and the data on which tests are Selenium RC tests versus WebDriver tests, we can actually split up the suite. Instead of waiting 10 hours for the full Selenium RC suite to run, we can run the WebDriver tests separately and get results back faster. Again, our main job is to make people more productive: if they get results back faster, they can act on them faster and do a lot more with them. So there's a lot of great stuff going on.

Beyond refactoring tests, Salesforce is investing heavily in new technologies. We're going really big into dynamic testing infrastructure: for each run, our testing infrastructure is thousands of dynamically generated test instances, and we're moving from an older technology to a newer one. We're also embracing Selenium Grid. Dima is giving a talk about Selenium Grid after lunch, and I encourage you to go listen to it. Essentially, we want a huge farm of dynamically generated Windows machines and browsers of different flavors, connected to Selenium Grid, for our tests to use (see the sketch below). Right now we use a lot of static instances for our tests, and we want to get away from that. Part of making developers more productive is eliminating infrastructure problems. When you run tests against static instances, your test might fail for benign reasons: a previous test didn't clean up, or maybe the browser before didn't close. A lot of problems can happen.
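For anyone who hasn't used Selenium Grid: the appeal of a setup like that is that a test asks one hub for a browser, and the grid routes the session to whichever node has it, whether the node is static or freshly spun up. A minimal Selenium 2-era sketch, with an invented hub URL:

```java
import java.net.MalformedURLException;
import java.net.URL;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.remote.DesiredCapabilities;
import org.openqa.selenium.remote.RemoteWebDriver;

/**
 * Ask a Selenium Grid hub for a browser; the grid hands the session
 * to a node that has that browser available. Hub URL is made up.
 */
public class GridBrowsers {
    static WebDriver browserFor(String flavor) throws MalformedURLException {
        DesiredCapabilities caps;
        switch (flavor) {
            case "ie":      caps = DesiredCapabilities.internetExplorer(); break;
            case "chrome":  caps = DesiredCapabilities.chrome();           break;
            case "firefox": caps = DesiredCapabilities.firefox();          break;
            default: throw new IllegalArgumentException("unknown: " + flavor);
        }
        return new RemoteWebDriver(new URL("http://grid-hub.internal:4444/wd/hub"), caps);
    }
}
```

The same test code then runs unchanged against any browser flavor the farm offers.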
Not only that, we want to move our existing Selenium test framework from an older, Salesforce-hacked JUnit 3 framework to something faster and more lightweight. Right now our framework is really tightly coupled to our app code: if you look at some of our tests, they call Salesforce Java classes directly instead of using APIs. We want to move away from that and abstract the framework completely away from our app code. If we can do that, using only our publicly available API jars, we'll be able to open source it. That's cool because we have thousands upon thousands of customers using Salesforce, and a lot of them have custom workflows and custom data, and maybe they've created their own apps and put them on the AppExchange. If we can open source this testing framework and let our customers test their workflows through WebDriver and this framework, it's a win for us: not only are we testing our code, they're testing theirs. And if they find a problem before they release their app on the AppExchange, they can send us an email or give us a call, and we can fix it for them. It's a pretty cool process.

It's kind of interesting how Selenium started and how the WebDriver transition happened at Salesforce. Around 2008, as I said, we started with Selenium RC. Then around 2010 or 2011 someone said, "hey, WebDriver is going to be a really cool thing, let's build a prototype." They weren't really thinking about future scale; they mostly wanted to get a proof of concept out as quickly as possible. So instead of architecting it to be completely separate, they put the WebDriver code into the same Selenium base test classes as the Selenium RC tests. We need to refactor that as well, pull it out into its own separate classes, and do a lot of work around that.

Going forward, there are a lot of different things we can do, and one thing people might want to hear about is this: you might be in the same situation we were. You have a big backlog of Selenium RC tests. How do you get people to understand that it's worth the time and effort to refactor them? A lot of times we tell executives that our RC suites are a problem: they take a long time to run, they take a long time to debug or refactor, maybe we can't release as often as we want because the test suites take so long. But when we ask for a month or so to refactor the Selenium RC suites, they look at that number and say, "no, we're not going to take the time, it's not worth the money." So a lot of times people just delete their Selenium RC suites and start over. I'm not a big fan of that idea; those old test suites hold a lot of useful knowledge. I'm a big fan of systematically refactoring tests and moving them over to a faster testing framework.

The first thing you have to understand is that you have to be data-driven. You can't just have hunches. You have to be able to go to an executive and say, "our test suite takes X amount of time, and it's because 10% of our tests take over two minutes each," or something like that. You have to understand why your tests are slow. Is it the Selenium RC suite itself? Is it the way you're writing your tests? Are you running really long tests? Or are you using an API to generate your test data and then just using WebDriver to verify that the data shows up correctly?
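That last pattern, setting up data through an API and using the browser only for verification, is worth a sketch. The `AccountApi` facade and the URL scheme here are invented for illustration:

```java
import org.openqa.selenium.WebDriver;

/**
 * "Set up through the API, verify through the UI" sketch.
 * AccountApi is a hypothetical client; only the final check uses the browser.
 */
public class ApiSetupExample {
    interface AccountApi {
        String createAccount(String name);   // returns the new record's id
    }

    static void verifyAccountVisible(AccountApi api, WebDriver driver, String baseUrl) {
        String id = api.createAccount("Acme");        // fast: no UI clicks to create data
        driver.get(baseUrl + "/" + id);               // go straight to the record page
        if (!driver.getPageSource().contains("Acme")) {
            throw new AssertionError("Account page did not show the new record");
        }
    }
}
```

The browser time shrinks to the one thing only a browser can check: that the page renders the data.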
You have to be able to figure out what your problem areas are. What I mean by that is, if you can look historically at your CI builds and test failures and see a trend over time of which tests fail most often, that's a huge data point to have. And you have to figure out why those tests fail: not just which tests fail most often, but why. If it's an infrastructure problem, that's completely outside your test suite, and you can go to your ops team or your networking team to figure it out. Then, once you know what kind of metrics you want to talk about, you have to collect them and put them together in a meaningful way.

Luckily for me, Salesforce gave me the leeway to take a few months to put in the test code, the classes, and the methods I needed to generate those metrics. Then you have to put them somewhere people can easily see and act on them, like the tool I mentioned, GUS. One thing you have to explain is that when a test fails, one of the biggest time sinks is figuring out why it failed. A lot of times a test fails in our CI process, then someone runs it locally and it passes. Why does that happen? If you can wrap your head around how many hours per release, per day, or per sprint are spent troubleshooting your Selenium RC tests, that's probably the most important data point you can generate. If you can say you're spending 10 man-days per release fixing test code, people can act on that. Someone who doesn't actually write tests can still grasp that if we're wasting 10 days per release, something has to change.

The benefits of the refactoring are pretty obvious. If you can release more often, you can get more benefits and features out to customers. If you can release more often, then whenever an issue or a bug is found, you can get your change list through the CI process faster. And when you push out that hotfix, you're a little more confident: it passed all of our Selenium tests, it passed all of our integration tests, you got that result back in an hour or so, and you can push the code with some confidence that it's not going to completely break production. And again, find out how much money can be saved by not debugging existing tests. Executives care a lot about one thing: money. They don't want to waste it, and they don't want to sit there listening to people say our Selenium tests aren't great without knowing why. They want hard data and hard numbers, and it's up to us to explain why this needs to happen.

So with that, do any of y'all have questions? I'd love to hear your questions, concerns, or thoughts, especially if you're trying to do this migration yourself.

Question: with that code coverage, how do you find which Selenium test is covering what code? How do you get that coverage?

Right, so for code coverage there are multiple tools you can use: EMMA, Cobertura, JaCoCo, things like that. The way we generate that data isn't through our regular CI process; we have a completely separate run set up for it. One of the colleagues on Greg's and my team, Petal, developed a system called Argus that runs each test serially. It's not the most efficient process, and maybe one day we'll discover a better way to do it. But essentially we take all our Java code, compile it with, say, EMMA instrumentation in it, run each one of our tests serially, and collect the data. So we know which test covers which piece of the app code, and we can put that into a database and correlate it all together.
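Once each test's coverage lives in a database, flagging candidate duplicates can be as simple as comparing coverage sets. In this sketch, the Jaccard measure and the 0.9 cutoff are illustrative choices, not Salesforce's actual rule:

```java
import java.util.HashSet;
import java.util.Set;

/**
 * Flag likely-duplicate tests by comparing their per-test coverage sets
 * (e.g. "com/foo/Bar.java:42" entries from the instrumented serial runs).
 */
public class DuplicateFinder {
    /** Jaccard similarity: |A intersect B| / |A union B|. */
    static double similarity(Set<String> a, Set<String> b) {
        Set<String> intersection = new HashSet<>(a);
        intersection.retainAll(b);
        Set<String> union = new HashSet<>(a);
        union.addAll(b);
        return union.isEmpty() ? 0.0 : (double) intersection.size() / union.size();
    }

    /** "Similar enough" -- the 0.9 threshold is an invented example. */
    static boolean likelyDuplicates(Set<String> coverageA, Set<String> coverageB) {
        return similarity(coverageA, coverageB) > 0.9;
    }
}
```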
A lot of the time you won't find tests that are exactly the same; they're not going to test everything identically. But they'll be similar enough that we can say, hey, we don't need two or three of these. So we delete two of them, take the remaining one, put it into our backlog, and refactor it later.

[Question about where our test infrastructure is headed.] Yeah, that's an interesting question. I wish I could tell you exactly what we're doing in the future. For right now, I can say we have a very large server farm of static instances: pretty much bare-metal machines in a data center, thousands and thousands of them, hooked up to what we call Autobuild, our internal CI system. Beyond that, can I talk about VMware? Well, I said VMware, so why not. Our older dynamic infrastructure is based on VMware. Whenever a change list needs to be tested, the batched change list says, okay, I have these change lists, I need this type of infrastructure brought up. It brings up an app server and a DB server, and then, say, six instances of some flavor of Windows VM per Selenium run. Again, we break up our Selenium runs by browser type: six Selenium slaves for IE8, six for IE9, and so on. Then we run all of our tests against those different browsers. We're moving to something newer and more dynamic. We've invested a lot of money in a new technology that's coming out, not for use in production, just for our testing infrastructure. It's really cool and exciting, but PR said I couldn't talk about it right now for some reason; I'm sure you can guess it's one of the really cool new open-source dynamic-infrastructure frameworks out there. We've invested millions of dollars in it, and we're going to invest millions more. Every single run gets an entire dynamic stack of test resources brought up for it, across the several data centers we have.

[Question about how our WebDriver tests are structured.] Right. Our WebDriver work is solely page-object based. We did a really great job with our WebDriver implementation, maybe not in the classes we put it into, but in the page object way we wrote it: all of our WebDriver tests are done through page objects. I'm sure you all know about page objects, but it's basically abstracting your tests away from the underlying WebDriver code itself. So when tests fail, say 10 tests fail because of one particular bug, instead of having to edit 10 different tests, you just edit the page object, and all of a sudden your 10 tests are working again.
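For anyone unfamiliar with the pattern, a minimal page object looks something like this; the `LoginPage` class and its locators are invented examples, not Salesforce's real classes:

```java
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;

/**
 * Minimal page object: tests call logIn() and never touch locators.
 * If the login form changes, only this class needs to be edited.
 */
public class LoginPage {
    private final WebDriver driver;

    public LoginPage(WebDriver driver) {
        this.driver = driver;
    }

    public void logIn(String user, String password) {
        driver.findElement(By.id("username")).sendKeys(user);
        driver.findElement(By.id("password")).sendKeys(password);
        driver.findElement(By.id("Login")).click();
    }
}
```

A test then reads as intent, `new LoginPage(driver).logIn("user", "pass")`, and locator churn stays inside the page object.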
There are a couple of things we do whenever we refactor a test. Not only do we convert it to WebDriver in the page object framework; we also see if it can live at a different level of testing. I mentioned the xUnit.js framework for JS unit testing. We look at a test and ask, what is this really doing? Is it really testing widget logic? If so, can it be an integration test? Is it just testing JavaScript? What I've seen is that a lot of people test pure JavaScript with WebDriver or Selenium. If a test is solely exercising JavaScript, clicks and what happens after the click, animations, things like that, we can migrate it down to an xUnit.js test. The great thing about that framework is that one test takes microseconds to run; I think we have around 10,000 of those tests and they run in under a few seconds. It's really lightweight, really quick. So we don't just say, hey, we're going to migrate this over to WebDriver; we see if there's another tier of the test suite it can run in. No matter how much faster WebDriver is than Selenium RC, there are test tiers that are even faster than WebDriver. We try to make that determination as well.

How many times do we run our failed tests? Right, we rerun each failure at least once. When a test fails, we rerun it and flag it in something we call Yoda, which is basically our test failure and rerun logic. It collects that data over time, and then we can determine what to do with the test. If it's a flapper, we can go in and refactor it. If the test just needs to be deleted, we delete it. If there's something about the code or the infrastructure behind that test, we can address it there as well. The reason we had to develop Yoda is that, again, we batch all these change lists together and run them as a big group; maybe there are one or two, maybe 20 or 30 change lists in each run. So we built this Yoda technology that looks at the tests, sees what files they're hitting, sees which change lists have historically been associated with those tests, and it's a big, complicated process. We have an entire team, three or four people, whose whole job is figuring out why the heck our tests fail and trying to work that out programmatically. It's a big deal for us, and a pretty cool tool.

I'm sorry? [Question about whether we use other languages or a BDD layer like Cucumber.] No, Salesforce is a big Java company. Obviously we have other products investigating newer technologies; we've bought a lot of companies along the way. People don't always know that we own Heroku, which is a pretty cool company with newer technology and newer stacks. But for the most part, our platform is written in Java. And again, each of our QE engineers is pretty much a developer at heart; a lot of them used to be feature developers but have a real passion for testing. They're all pretty knowledgeable, so we don't try to abstract things away with Cucumber or anything like that. They write real Java test classes. Again, it's the page object framework, so it's a little bit abstracted: you create your page objects for a given page, you put all your WebDriver code in the page object, and the actual tests just interact with the page object and do some kind of verification. But no, we don't use Cucumber or anything like that.

[Question about the codebase and build.] Yeah, so again, I'm just talking about the platform here. Historically, Salesforce has been one huge application, one huge code base. There's a really significant effort going on right now to mavenize our entire build and break certain projects and features out into their own code bases, with their own CI processes, in separate repositories; then when the main app builds, it pulls down those compiled jars. But yeah, our testing infrastructure: thankfully I work for a company like Salesforce that can invest millions of dollars in testing infrastructure. We do have a pretty significant footprint, though I don't think I can get into specifics.
We have thousands of instances. And again, we've invested in newer technology that's going to significantly increase our existing testing capacity; there are a lot of really cool things we're going to do with it.

Yeah, as I mentioned a minute ago, we did move to page objects. One thing is that WebDriver tests in general are much faster than Selenium RC tests; the older Selenium RC works by JavaScript injection into the browser.

Okay, so essentially what we did is use the data we collected on each test to put it into a backlog, a queue-type system, to see what we wanted to refactor first. So we have a nice backlog, each test broken down by team and sorted by priority against how long it takes to run, how many times it's failed, and other things like that. That's where we collected the data, and then we put it into the system I mentioned before, GUS. So each scrum team has a big, nice backlog along with all that data available, and again, each team pulls a couple of tests from that backlog each sprint, refactors them, and goes from there.
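Here's a sketch of what that kind of prioritization could look like; the `TestRecord` shape and the weights are invented, since the talk doesn't spell out the scoring:

```java
import java.util.Comparator;
import java.util.List;

/**
 * Sort the refactoring backlog from collected metrics:
 * slow, failure-prone, flappy tests float to the top.
 */
public class BacklogPrioritizer {
    static class TestRecord {
        String name;
        long avgRuntimeMillis;   // average wall-clock time per run
        double failureRate;      // fraction of runs that failed
        double flapRate;         // fraction of failures that passed on rerun
    }

    /** Higher score = refactor sooner. Weights are arbitrary examples. */
    static double score(TestRecord t) {
        return t.avgRuntimeMillis / 60_000.0   // minutes of runtime
             + 10 * t.failureRate              // frequent failures burn dev time
             + 20 * t.flapRate;                // flappers are the worst offenders
    }

    static void sortBacklog(List<TestRecord> backlog) {
        backlog.sort(Comparator.comparingDouble(BacklogPrioritizer::score).reversed());
    }
}
```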
To make sure that what we did actually works, we collect data on the newer tests as well. As I mentioned, we were able to separate our Selenium RC tests from our WebDriver tests, so we now collect the same data on the WebDriver suite. We have a database of information on our current WebDriver tests too.

Assertions: for the WebDriver tests themselves, or for the data? Yeah, we abstract that away; we don't want our test suite to do anything more than test. Essentially, as it runs, we send all our results to something we internally call Cloudy Boy, or Paper Boy; it's basically an API on top of a database. Each of our test results goes into the database, and then, well, we're Salesforce, so we can do a lot of cool reporting with our data: a lot of really cool metrics, spreadsheets, and reports based on that information.

[Question about how tests find their target environment.] Historically, we have a static server farm that's always up and running; that's our older implementation. If a test runs on that, those VMs are just up, and all we do is inject server information into the environment. Our WebDriver test suite reads it and asks, what server am I going to hit? Then it runs against that. With our newer implementations, the VMware one and the one that's coming, we dynamically spin up everything per run; there are no stagnant instances in the new systems. Each commit, per WebDriver build, gets six dynamic instances, so for IE8, IE9, IE10, each of those automatically spins up six new Windows images. Not only that, we spin up brand-new app and DB instances as well; the app and DB run the code being tested and also act as our test runner. With the newer implementation, everything runs on pristine, dynamically generated instances.

Exactly, we're huge on mobile. Again, one of our employees is a committer for Selendroid, the Android side of WebDriver. I gave the caveat that I'm just talking about the platform, but we have a huge infrastructure for mobile testing as well. I'm sorry? It's kind of a mixture of both. One thing I'm actually exploring right now: have you seen Jason Huggins' Tapster bot? It's basically WebDriver on top of an iPhone; it has this cool Raspberry Pi-based servo system that physically interacts with the screen. Like I said, our team likes to explore these cool new technologies, and that's one thing I'm exploring right now. So we have physical devices, we have emulators, and sometimes we use Sauce Labs for a lot of our mobile testing. There are a lot of cool things we do.

Automated layout testing? Each sprint we collect something called a gold standard, where we take screenshots of our releases, but we don't really have an automated way right now to correlate that information. Obviously, one of the cool things about WebDriver is that you can crawl through pages and take screenshots, and there's a lot you can do with that. If you have a gold standard of what a page should look like, it's sort of trivial to compare and contrast what WebDriver says is current versus what it should be. Currently we don't have anything like that, but it's something we could explore in the future.

Salesforce has a lot of custom objects that our customers have created, so many that we can't test every single custom object on every CI run. Not only that, there are customer specifics on separate pods that customers don't necessarily want us to touch. But we use something called DOTs, data-on-template or something like that: we can take an organization, a bunch of custom objects, and a bunch of test data and zip it up into one file, and then, literally for each test, we create a new organization with new custom objects and everything that test needs. In that respect, we can test against custom objects, custom workflows, and custom organizations without having to set anything up through the UI; everything is set up through that DOT. And we're continuously improving the DOTs: we take customer data and new scenarios and fold them in, so we're continuously testing new workflows and new data.

We don't use it for load testing and things like that; there are better tools out there for that, like JMeter tests and some custom tools we've built in-house. WebDriver and Selenium tests are great for verifying that something shows up when it's supposed to. I'm not a big fan of using them for load testing; there are a lot of intricacies in WebDriver going through the JSON wire protocol, talking to the browser, talking to the different instances, a lot of stuff on top that would get in the way of a true load test. But we do have load testing at Salesforce, just not through WebDriver.

Cool, I think we're done. Thank you all so much.