Without any further ado, let me introduce you to Mike Sealander, who's going to talk to us about a fantasy world that we all long for, which is a world without bugs. Thank you. All right. Thanks. Thank you, Rachel. This is on. You guys can hear me? Yeah? Okay. Cool. Perfect. So yeah, I'm a little far away from home, so it's awesome to be out here in London. I'm really enjoying our stay so far, and hopefully you guys won't mind my 'Merican accent too much. If I get a little too fast or crazy, just somebody shout out and let me know and I'll try and slow it down a little bit. So we're here today to talk about bugs and defects, how you can fix them, and the processes and tools that you can use. For a little background on this, I work at an agency called Old Town Media. We're just a little ten-person shop, a marketing and web design firm in Fort Collins. Pardon? Slow down? Yes. All right, already being told to slow down. I'll try and keep it a little slower. So yeah, we're a ten-person agency, but we put out a pretty decent number of sites, about 80 sites last year. I don't say that to gloat; I say it because when you start putting out that many sites and launching that many every single month, you start to see trends, things that emerge from site to site and carry throughout the whole process. And what I noticed about a year or two ago is that bugs cost us a lot of money, a lot more than I was expecting. Those bugs have that initial cost: you've got to send in a developer, they've got to fix it, it takes a lot of time, it's a little frustrating, and it reduces the morale of your developers, because no one likes fixing bugs. I think most of us like creating new things and working forward instead of working backwards.
But then there's also the cost of the client going in and seeing that, right, which is somewhat akin to the world ending as soon as they see that first bug on the site that you've just launched, that they paid a lot of money for. It reduces the trust, the relationship, and the partnership that we can have with our clients if we build really good, really quality products. So I took a step back from our normal work and spent a lot of time focusing on new workflows and on fixing all these bugs before they got to our clients, and this talk is the culmination of all that. Hopefully you guys will walk away with some new tools and some new methods that you can use to get rid of all the bugs before your client has a chance to see the site. So the first thing, before we really get into it, is I want to define what a bug is, what a defect is. Because to your client, it's probably going to mean something a lot different than it will to your developers, than it will to your project manager and to the business owner. There's a lot of ambiguity in the term defect, or bug. So, just so we're all on the same page, what we're talking about today is scoped functionality issues or visual bugs that can be fixed before clients see them. And a quick sidebar: do you guys get Breaking Bad over here, the TV show? Yeah? Have any of you seen the Fly episode? Yeah, it's rough, isn't it? But it's very representative of bugs, so I wanted to include it. So, walking through this real quick. Scoped: unless any of you guys in here have ESP and can read your clients' minds, and if you can, please come and talk to me, scope issues are not bugs, and they're a very different discussion. Your clients or your users will probably come to you and say, hey, I didn't tell you about this feature, this site is broken.
That's a very different discussion we need to have, so it's not included here. Then functionality issues. I think this is what most people think of when we think of bugs: you have a 500 error, or a contact form isn't submitting, or some action that's supposed to happen is not happening. Pretty simple there. And then visual bugs. These are a little more ambiguous. This can be a grid that's not laying out, or a responsive issue, or one of a thousand things that's visually not happening even though the site effectively works. And these are still bugs, because your client is gonna think that they're bugs. If they see a visual issue, they think the site is broken. So it's important to include those. And then, issues that can be fixed before a client sees them. This rolls back to that imaginary world where there are no bugs, because there are bugs in PHP, there are bugs in Apache, there are bugs in WordPress core. But the bugs that we're talking about today are the ones that affect your client sites, and specifically how you fix them before the client sees them. So I'm not talking about zero bugs overall, I'm talking about zero bugs in your client's eyes, fixing them before they see them, because that's the most important thing. Okay, so the first piece of this, and just so you guys can follow along, this talk follows the path that we take in building websites, right? It starts with an attitude, then we move into the initial groundwork, then the code testing, and then the visual testing. It follows along that pattern so you guys can keep up. But the first thing is: caring is not in scope, but we do it anyway. This quote really hit me about two months ago. David Droga said it, as you can see right there, and he runs a little agency called Droga5. An amazing, amazing agency. They've been around for ten years, and in nine of those years they have won an agency of the year award. And we're not talking a local agency award.
We're talking they're beating out BBDO and all of those guys to get agency of the year. So they're really doing something right. And this quote came from a recent interview where the interviewer was asking him, hey, what are you doing to put out this consistently creative, consistently quality work? What is giving you this drive to really do things well? And he said, caring is not in scope, but we do it anyway. It's been sitting on the screen this whole time. And what he's talking about is that to be the best, and to catch all the bugs, and to catch them before your client does and before anyone else does, you have to care so much about the process that you care about it more than your clients, more than their users, more than your competitors, more than anyone else. Because QA is not for the complacent. You can't kinda sorta test a site and expect to catch all the bugs; you're only gonna kinda sorta catch the bugs, right? It's not gonna go very well if you don't care so much that you care more than your clients. So you have to care a lot, just to repeat that again. And not caring, and having the wrong people doing your testing and running this whole process, can have a big financial impact. About six months ago I had one week where I spent 30 hours fixing bugs that one of my developers had left in about four or five sites. Thirty hours in a week. And it wasn't for lack of skill; he's a really skilled developer, and I had told him about all these bugs. He just didn't care enough to fix them and really do a good job before he pushed. So having the right people in the right seats, and the right people doing your testing, will affect the rest of the process and the entire site build. A really important piece right there.
All right, and the next thing that I wanna talk about is the foundation, the base architecture of the code, how you write things and how you do things. Because the first thing that I noticed when I took that step back and looked at all of our testing and quality assurance is that any bug that showed up on one site would then show up on all of the subsequent sites. Which means that our base, the frameworks, the libraries that we were using, the way that we were writing code, all of that was recreating these bugs across all these sites. So I went back to that base architecture and that base foundation to find better ways of writing quality code. Quality assurance, quality code, right? If you don't have that quality architecture, you end up like this building right here. And the thing that I love about this photo is that both of these buildings were still under construction, so they really messed that foundation up. It's representative: this one is obviously sinking because they did not spend time on the foundation. If they had just spent a little more time at that base level, by the time they got all the way up there, they wouldn't have had to fix anything. Because by the time you're at that point, it's gonna take an exponential amount of time to fix any bugs and fix the process, things that would have taken just ten minutes if you had done them at the foundation, at that base level. So the first thing that we started doing was using engineering standards. We had a great lightning talk about this earlier, but if you guys weren't in it: engineering standards are really simple. They're how you write code. They're basically guiding documents for your developers on how to style code and how to do performance tuning.
And how you wanna handle security issues, and how you wanna handle all your testing. It's a guiding document that keeps everyone on the same page, keeps everyone writing that same code, writing quality code. Because if you're using a set of engineering standards, they're probably a little higher than what you're writing right now, right? No one's gonna write engineering standards that are at the exact level they're already at, because you wanna push yourself and you wanna go higher. So keeping that base code and keeping everything consistent really helps a lot. And then we started using libraries for themes and plugins. I realize this slide is kind of ambiguous. I'm not gonna tell you what frameworks to use, because I don't wanna get in a fight with anybody in this room; everyone has an opinion on the right frameworks. But the point here, and the goal, is to use some kind of consistent framework when you're writing custom code. It could be a skeleton theme, it could be a plugin boilerplate, or it could be that you write your own frameworks, which is the approach that we took. The goal is that you're using something that you can apply patches and fixes to when you notice a bug on site one, and then that fix applies to all the sites from then on, right? You're not just fixing site number one, you're fixing everything down the road. And you're keeping that consistent base, and you know what's in your sites. On top of that, it makes it easier to go back and debug and fix things in the future. I've built something like 250 sites, and on any of the ones from the past five years, if you give me a URL and a problem on a site, I can tell you in almost 30 seconds what's wrong with it. Because we use consistent libraries and consistent frameworks, I know what the issues are on all of them. So it makes debugging in the future easier and quicker.
A couple of tools, and by the way, if you go to my Twitter, there should be a link to these slides, so you can just go through them and get the tools; you don't have to write everything down, because these are all linked in here. So, the WordPress coding standards: obviously we should probably be writing code the WordPress way when we're building WordPress sites. And 10up wrote spectacular engineering standards, definitely worth checking out, examining, and perhaps adopting. And then, is anyone here from this company in the room? Does anyone know how to say their name? It's very embarrassing, okay. Well, they have fantastic standards. Inpsyde? Okay, thank you. They have a really fantastic set of standards as well, so in addition to 10up and WordPress, worth checking out. So the next piece that we started fixing is content. Content is super important because the sites that we build are most likely part of a marketing plan, right? They're not the whole thing, they're not everything, but they're an important piece of it. And with marketing, the content is what matters: how you space it out, how you do the design, what the calls to action are and how they're handled. I used to build sites without content; to be honest, I was building skeleton sites, right? And I know, I see some heads shaking in the room, it's kind of a strange way to do it. But what I was doing, very incorrectly, was basically building a skeleton, sending it off to a client, and saying, hey, we tested this, it's all good, add your content. And what would they do? They would just send me a list of bugs right back, because they did my testing for me, right? That's what happens when you build a skeleton. So building with some content as you go through the process will help you catch bugs a lot sooner and a lot quicker, and help you catch more of those bugs.
And more than that, building with a lot of content is even more important, and building with a lot of edge cases and errant, funky content even more so, if you can. So instead of throwing three posts in with slightly different things and saying, hey, the blog works, we're all good, throw 30, 40, 60 posts in at a time. With vertical images, horizontal images, really tiny ones, Latin content, tables, and all kinds of weird things that your clients will probably throw at you in the future. Basically, you break your site before they have a chance to break it and come back to you. And then there's the flip side of content, because on one side we're creating our own edge cases and throwing a lot of weird content in, but on the other side you have your client's content. If you can, and I know this is really hard, it's been a struggle for us: don't start building a site until you get content from your client, because then you know exactly what the site needs to look like, and you already have their edge cases. Otherwise you're making up edge cases and pretending you know what the content is going to be; with their real content, you get their real edge cases, and you can fix everything before they have a chance to get into the site and really see it. Tools for this one: this is a content library that I built up. All it is is a plugin you install; there's an admin page, you click a button, and you get 60 posts, or you get 60 categories, and you can delete them just as easily. So it saves a lot of time when you're spinning that content up. And then the theme unit test. This is what you would use if you're doing a theme review or building something for .org. It has really funky stuff, like a ten-level-deep menu, captions within captions, and really, really weird stuff. So it's good for testing those edge cases first.
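The content-library plugin described here isn't shown in the transcript, but the idea can be sketched in a few lines of WordPress PHP. This is a hypothetical, minimal version (the `otm_` prefix, titles, and content are my own illustrative choices, not the speaker's actual plugin):

```php
<?php
/**
 * Hypothetical sketch of a "content library" seeder: fill a dev site
 * with edge-case posts so layouts break in testing, not in front of
 * the client. Assumes it runs inside a loaded WordPress install.
 */
function otm_seed_edge_case_posts() {
	$titles = array(
		'Short',
		str_repeat( 'A Very Long Title That Should Wrap ', 5 ),
		'Spëcial çharacters & <em>markup</em> in the title',
	);

	foreach ( $titles as $title ) {
		wp_insert_post( array(
			'post_title'   => $title,
			// Tables and odd markup are the kind of "funky" content
			// clients throw at a theme later.
			'post_content' => '<table><tr><td>table cell</td></tr></table>',
			'post_status'  => 'publish',
		) );
	}
}
```

A deleter would just be the mirror image: query the seeded posts (for example by a meta flag you set at insert time) and `wp_delete_post()` each one, which is what makes the one-click cleanup the speaker mentions possible.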
The next piece of this, getting out of that foundation and the beginning part of the whole site, is to get into your code and really test it in depth as you're pushing it and as you're building things. The first piece of this is unit testing. For those of you who don't know, unit testing is individually and independently testing the smallest parts of an application. That sounds really intimidating and really big, like there are a lot of $5,000 words in there, right? But it's actually really simple. All you're doing is taking one function, in PHP or JavaScript, and testing one use case of it at a time, without the rest of your code base. And the benefit is that you're writing these tests in code; you're writing code to test your code. So once you've written a test, it's always there. You can run it in perpetuity and just keep on testing. So when you go in three months after you've built a site, and you have this function and you decide to change it up a little bit, you can run that test and know, before you push to your client's site, whether it still works or something is broken. It creates this kind of awesome paradigm where you put a level of insulation between you making code changes and your client's site. You have to prove with every push that your code still works; instead of cowboy coding and just shoving it up to the server, you run your tests and then you push it. Because this is one of the more complex pieces of the whole thing, not a difficult concept, but kind of a pain in the nuances to write, I want to give you guys an example. So up here you have an example function, which is what we're actually testing, and right here you have the test for that function. Pretty short, pretty small, right? A very simplified example. So, a my_permalink function. What we're doing here is passing a post ID in.
We're using get_permalink from WordPress core to get the link of that post, and then if it contains the string "foo", we're gonna replace that with "bar". A very simple example. Now, this piece right here is WP_Mock. Remember that independent part I was talking about: you're separating this function from the rest of your code base, including core. So you have to mock up and replace some of those core functions, and how they're supposed to behave, so that you know you're testing your function accurately and individually. That's all this does. This is mocking get_permalink, because that's in core, which we don't have loaded, passing some args and setting a return value on it. Then we actually run the function; we have to actually use it to test it. And then this is where the magic happens, this assertEquals piece right here. There are about a hundred different assertion functions that you can use, but all you're saying is that if everything goes correctly, this string right here, ending in /bar, should equal the result from our function. Pretty simple, right? You can see we're passing in /foo and it comes out /bar. So if you go in and change something on that function in three months, and you're replacing "foot" instead of "foo", the test fails and you know right away, before anything is pushed. It gives you this level of insulation and makes it easy to be sure your code is working. The tools on this piece: you have WP_Mock, which is what I was using here to replace the WordPress core functionality. And then if you wanna write anything for core itself, there's a set of tests that you would add to, and ways you can handle it, called WP Factory, bundled with core development.
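The slide code itself isn't in the transcript, but from the description the example would have looked roughly like this sketch, assuming PHPUnit with the WP_Mock library; the post ID (42), the example.com URL, and the class name are my assumptions, not the actual slide:

```php
<?php
// The function under test: fetch a post's permalink and
// swap the string "foo" for "bar".
function my_permalink( $post_id ) {
	$link = get_permalink( $post_id );
	return str_replace( 'foo', 'bar', $link );
}

// The unit test. WP_Mock stands in for WordPress core so the
// function can be tested without a WordPress install loaded.
class My_Permalink_Test extends \WP_Mock\Tools\TestCase {

	public function test_my_permalink_replaces_foo() {
		// Mock core's get_permalink(): given these args,
		// return this value.
		\WP_Mock::userFunction( 'get_permalink', array(
			'args'   => array( 42 ),
			'return' => 'https://example.com/foo',
		) );

		// If everything goes correctly, "/foo" comes out as "/bar".
		$this->assertEquals(
			'https://example.com/bar',
			my_permalink( 42 )
		);
	}
}
```

If someone later changes the `str_replace()` to look for "foot" instead of "foo", the mocked permalink still contains "foo", the replacement no longer happens, and the assertion fails before the change ever reaches a client site.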
And so it's really worthwhile to check those out, see how they're testing those functions, and use them as a good example. The next piece is code audits. I loved the lightning talks this morning because they all fed into this: we had code audits, we had engineering standards, and so on. So you guys probably already know why these are really important, but I wanna go over two different kinds of code audits, how you can use them, and the pluses and minuses of each. The first one is manual code audits, and then we have automated code reviews. Manual code audits are someone like this guy going in and reviewing your code line by line. My favorite thing about this GIF is that this is a news agency that wanted this guy to program on camera while they were filming, and he just couldn't resist. And you know that some guy is sitting behind a camera looking through all this footage like, oh my gosh, he's coding furiously, this is perfect. Anyway, I just like GIFs, so I hope you guys get a kick out of them. So, manual code audits. Like I said, it's a developer going through and looking at your code line by line for a series of factors, making sure that everything is up to snuff and looks correct. They're really good for evaluating a code base in context, with the whole project, the marketing goals, and the data that's being passed through in mind, because there's a lot more nuance to some of these issues than a robot or a program could look for. Security issues: this is a really good one, because a developer can tell what data is being passed through, how you're handling it, whether you should be escaping something. Comprehensibility: is your code documented? Can this developer read it as they're going through, and if you were to hop back in in three months, could you read it and know what's going on? And then conciseness.
So, the single responsibility principle and DRY programming principles, all those basics that another developer could evaluate it for. Some hard-earned tips. The first one is that you can't review very much at once with a developer audit. They've done studies on this, and it's not just me or you: the best programmers in the world can only look through 100 to 300 lines of code an hour and actually do a good job and know what's going on. After that 300-lines-an-hour point, effectiveness just drops off a cliff. So it's very slow, but really in-depth. You have to keep in mind that maybe you don't want to run this on every project all the time, but you want to run it sometimes, and know when to use it. Then, keep it positive and make everyone accountable. This is just a developer's-ego thing. I have done this in the past: I sent another developer a list of like 50 things that were wrong and said, fix all this and get back to me. And they didn't like me very much after that. They didn't really like the process; there was nothing enjoyable about it. So when you're sending off these lists and having this discussion and this communication, make sure you're praising something as well. Not just saying, you're a horrible developer, all these things are wrong. Say, hey, I noticed these few things, this is what we can learn from them, let's fix them in the future and move on. And then make everyone accountable. If you only have two or three people sending these out and never receiving them, they're not accountable to the process, and some people start to feel like it's not really fair. Making everyone accountable keeps the quality up for all the code that you're pushing, and makes sure that everyone is really involved and happy with the process.
So the next type is automated code reviews. The way I'm talking about it, at least, this is a program called Scrutinizer; it's scrutinizer.io, and I'll link to it in here. What it does is hook up to a repository, whether that's Git, Subversion, or Mercurial, and every single time you push or check in code, it pulls it out and evaluates the entire code base. It can run 20,000 lines in, say, five minutes, and check your entire code base for all of this stuff a lot faster than a developer could. And you don't have to do anything: you check in the code, and it sends you an email if there are any issues. Very simple. Automated reviews are really good for the grunt work, the stuff that developers don't really like checking for as part of the process, like stylistic adherence. You can apply patches automatically instead of writing that code by hand, so that stays a little cleaner. Code repetition, unused code, the grunt-level stuff where you'd have to remember at line 10,000 that something was exactly the same as line 500. So it's really good for that, and really good for running continually throughout the process of building a site. And the tools: Scrutinizer, obviously, is what you'll use to hook into your repositories and make sure everything is good. And then there are a couple of guidelines and checklists. VIP, for example, checks every line of code that's committed onto their servers, because it's a big multisite install and they don't want anything going wrong, so they have really good guidelines and a checklist. And then Fog Creek, who make bug-tracking software, have a really great code review checklist as well.
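Scrutinizer is driven by a config file checked into the repository. The talk doesn't show one, but a minimal `.scrutinizer.yml` for a PHP project looks something like this sketch; the excluded path and the particular checks are illustrative choices, not the speaker's setup:

```yaml
# Minimal .scrutinizer.yml sketch for a PHP code base.
# Runs on every push/check-in once the repository is connected.
filter:
    excluded_paths:
        - 'vendor/*'          # don't audit third-party dependencies

checks:
    php:
        code_rating: true     # overall quality rating per class/function
        duplication: true     # flags repeated code across the whole base
```

The point of keeping this in the repo is the same as the engineering standards earlier: the rules travel with the code, so every developer's pushes get held to the same bar automatically.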
So, moving on to the next part, which is the visual review. We're getting towards the end of building a website, and this is the process of checking for those visual bugs: making sure that everything is working, the browser testing, all that kind of stuff. A couple of quick tips on these. The first one is checklists. I use checklists for everything and I really, really love them. I'm crazy about them, because they're free, they're really easy to use, and more than that, they help you remember things that you have blind spots for. I think all of us have this tendency to miss one or two things on certain sites. I always miss testing the pagination; for some reason, I don't know why. But I put it on a checklist, I check it on every single site, and none of the sites that we have going out have issues with that particular thing. So as you're going through, keep track of things that keep recurring, bugs that keep happening, your blind spots, and put them on the checklist. It makes the onboarding process easy, and it keeps everybody on the same page as far as testing goes and what's expected of everyone. And then browser testing. I think all of us know we need to browser test, in every browser and every version of every browser. But it's really hard to test in every single one, and it's kind of a pain. So instead of testing just Chrome, Firefox, and Safari on a Mac, to catch all the bugs you really need to do Chrome on a Mac, Chrome on a PC, on iOS, on Android, all of those. Because even in Chrome, for example, which is a really stable, solid browser and the easiest one to test and write for, JavaScript is handled differently between Windows and Mac.
And between a Mac and an iOS device, the touch events are completely different. So you're gonna have bugs pop up if you're not testing on every version, on every device, natively. Which leads into the next piece: device labs. A device lab is something like this. This is a crazy example of a device lab; I think this is in Google's office. But it's just having a series of devices in the office that match what your clients need on their websites, or what their users use, so you can actually look at the website in person and hold it in your hand. Having the ability to touch it and feel it on a native device, that tactile experience of picking up your phone and seeing, oh, this doesn't scroll very smoothly, or this touch event doesn't really work, or being able to prove that something is funky between a couple of different versions of a browser, makes a really big difference in catching those bugs. And it doesn't have to be crazy like this. You don't have to spend $20,000 or whatever it is they spent on this. You can build one for like $600 or $700. I have probably four phones, a couple of tablets, and a couple of PC laptops sitting in my office, and it was all built for less than $600, just picking up used devices as you go and increasing that stock as you see more people using different devices on your sites. And then the next piece, the last piece of visual review, is internal testing. Obviously we all test internally at our agencies. But I'm talking about having multiple people in your agency, especially people with different experiences and backgrounds, look at the site and make sure that everything is working. Because after a while of looking at a site for a long time, we start to get tunnel vision.
You get these blinders, because you've looked at the site for so long that you know what the bugs are, you know what the issues are. And there are studies that have shown that your brain actually reacts differently, chemically, to something that you have seen or experienced before than to something that you're seeing fresh. The trick is that if you've seen a bug ten times, even if it's a nuanced or small thing, as you're building the site your brain isn't gonna set off an alarm as much as it would if you were looking at it for the first time. You're gonna catch more of those things when you're looking with fresh eyes. So if you have a lot of people test the site at every single stage, it makes a big difference, because they have that fresh viewpoint, that fresh perspective. And just a quick example. This doesn't have to be your workflow; everything is different for different people. I just wanna show the different perspectives that you can have on something. So obviously I test anything before it goes out; pretty basic, your developers should be responsible for the code they write. After that, I send the full site off to another developer. He doesn't have a particularly different perspective, but he's gonna look at it with a fresh set of eyes, looking for slightly different things than I will. Then, after he's reviewed it, it goes to our project manager, and she looks at it very differently. She doesn't write code for a living, so she's not gonna see the same things that we do, but she knows the contract inside and out, she knows the client's personality, and she knows that the client said in a meeting three months ago that they want something handled a little differently.
So she'll send a very different list back to me, but again, it's another stage in there. And after that, it goes to a designer, who has, again, a very different perspective. They're gonna see the shapes, the fonts, the kerning, the line height, all of that, and catch it. If I've used the wrong color code on one box, they're gonna catch it in like five seconds, where I would have looked at it ten times. So again, this isn't necessarily an ideal workflow. It works for us, but you can see how there are a lot of different perspectives and a lot of fresh eyes at every single stage of the process, making sure that everything is kosher before we send it off. So this is the last piece, and then I'll let you guys go: tracking your progress and keeping up with things as you build more sites is really important. It helps you identify the trend. Are you going down towards fewer bugs? Are you going up for some reason? Is there some site that's really outside the average, outside the normal, that has a lot of bugs? The first piece of this is to set a metric. Here's mine: zero client bugs. I don't know if I'm ever gonna get there, but we're pretty close. I want perfection, a big, big, audacious goal. And it might not even be that same metric for you. It might not be bug counts; it might be hours spent on bug fixes, or the number of times you have to go back into a site and fix something. There are a lot of different ways to approach it, but setting a metric that you can watch is really useful. And then, as you go, keep track. Don't just say, oh, I feel like we're doing good this week. Actually keep track of when a client finds bugs, or when your internal testing finds bugs, or when you find bugs.
When you keep that track in a spreadsheet, you have actual data and facts to back up how you feel the whole process is going. And then most importantly, when you get it right, when you get it perfect, do a crazy dance, have some fun. Enjoy the process, enjoy the fact that you've gotten to a perfect site, and just have fun with it. And then finally, hopefully at this point you guys have taken something away from this. What we do is magic, and if we do it right, our clients will think that we're magicians instead of, you know, ending the world with our sites. So that's all I have. Thank you for listening.

Hi. With the advent of all the different devices and having to do cross-browser checking, obviously by using consistent libraries and templates and that sort of thing, you can reduce the amount of testing that needs to be done each time. But how do you factor all of that into the cost of a project for a client?

You know, that's a really hard question. It's really gonna depend on how you bill. We will sit down and bid a project, estimating how much it's going to take, say it takes 20 hours to do this project, and then we'll add a certain amount on top of that just for that kind of thing, for that overhead. It's also built into our profitability ratio, so our hourly rate is drastically above the actual cost, and those are all rolled into that hourly rate. We don't add a line item for testing, because I think that you should just test as part of the process; that's a deliverable.

When you mentioned that you were spending a lot of actual money, let's say through time spent debugging code, how did you measure the difference that made when you changed your policies, and what magnitude of an impact did that bring?
I don't have that spreadsheet in front of me, but I measured it with that metric I was talking about at the end: the number of bugs that a client has found on a particular site. I would estimate that it went from somewhere around 10 down to about an average of three after we had done everything, and we've had several sites come out at zero. But I'm measuring it through that bug count rather than a cost analysis per se. So from that angle.

You didn't mention, I don't think, static analysis, or any kind of analysis that can happen before you make a commit into something like Scrutinizer. Do you see any value in using an IDE that has some static analysis built in, or do you trust Scrutinizer to do all that stuff for you?

I kind of trust Scrutinizer to do most of that for us. It's been really easy to track the number of issues and who's blamed for those issues, because when it pops up with an issue it tells you who's done it and what commit it came in with, and that makes it easier to track throughout the whole process. So I leave that all to Scrutinizer myself.

Yeah, what software would you use if you don't have a drawer full of devices to test these things, and how do you go about testing not just screen size but throttling speed and things like that as well?

BrowserStack is a really good tool for that. It's not gonna replace actually having the device in your hand, but they actually have the devices on hand that you connect to; you essentially buy some time with them so you can choose any of 100 different devices. And the nice thing about that is it's not like Chrome DevTools, for example, which just emulates that size of screen; you're actually pulling it up on the device. That makes it hard to test a really big screen when you're on a laptop, for example, but it's pretty solid software if you can't buy those devices and take them with you.
And do you find it quite accurate compared to a real device? Because I've used things before that say it's an iPhone 6, and then you compare it to an actual iPhone 6 and it's just two different things.

I think it's pretty comparable. I mean, it's never going to replace having the actual device in your hand, because you can't touch it and feel it and actually see something going correctly or incorrectly. But I think it's pretty accurate, especially for the different versions of Internet Explorer; it gets those really correct. So, within a margin of error, yeah.

Okay, thanks very much, Mike. I've been looking at Mike's Twitter feed while he's been speaking, and I know he has posted loads of links to the tools and the tips that he's mentioned, and blog posts and so forth. So if you want to follow Mike, he's Mike underscore Sealander on Twitter. So could we just have one last round of applause for Mike? Thanks, guys.
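As a companion to the bug-tracking metric discussed in the talk (counting client-found bugs per launched site and watching the trend), here is a minimal sketch of what that spreadsheet logic might look like in code. The site names, bug counts, and log structure are hypothetical, invented purely for illustration; this is one possible way to keep "actual data and facts" rather than the method used at Old Town Media.

```python
from collections import defaultdict

# Hypothetical bug log: (site, found_by) pairs, the kind of rows you
# might keep in a spreadsheet. All values here are made up.
bug_log = [
    ("site-a", "client"), ("site-a", "internal"), ("site-a", "internal"),
    ("site-b", "internal"),
    ("site-c", "client"), ("site-c", "client"),
]

sites = ["site-a", "site-b", "site-c"]  # every launched site, bugs or not

def client_bugs_per_site(log, launched_sites):
    """Average number of client-found bugs per launched site."""
    counts = defaultdict(int)
    for site, found_by in log:
        if found_by == "client":
            counts[site] += 1
    # Divide by ALL launched sites, so zero-bug launches pull the average
    # down toward the "zero client bugs" goal instead of being ignored.
    return sum(counts.values()) / len(launched_sites)

print(client_bugs_per_site(bug_log, sites))  # 3 client bugs / 3 sites = 1.0
```

Tracked over time, this single number makes the talk's before-and-after comparison (roughly 10 client-found bugs per site dropping to about three) concrete and checkable.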