My name is Jake Scruggs, and I'm going to talk a little bit about hotspots in metric_fu. So, without further ado, I should probably explain what metric_fu is. I don't know if you've used it or not, but metric_fu is a combination of a lot of metrics in one gem. You can get code metrics about the health of your code: whether you have a lot of complexity, what your code coverage is, whether you have code smells. There are a bunch of different gems that will do that for you, and I kept having to rewrite rake tasks on every project I went to that would collect all these metrics and put them into one nice report. And I thought, well, why not just turn that into a Rails plugin? Later it became a Ruby gem. Some people think metric_fu can only be used on Rails projects; it can actually be used on plain Ruby projects, too. And it gets you a lot of stuff. I'm going to go pretty fast over what it gets you, and then more in depth later, so if you feel like you're falling behind, there will be explanations. You can get things like Flog results. Flog is code complexity analysis; basically, it tells you where you have high complexity. We graph these things over time, so you can see, hey, my average complexity is going up. But we also graph the top 5% of your most complex methods over time, which to me is much more indicative of how your project is doing, because average complexity is not going to change much day to day, but your worst methods tend to get a lot worse pretty quickly if you don't keep an eye on them. You can run Rcov. Code coverage is probably the most well-known metric, and one of the hardest to get running, but more on that later. Rcov is code coverage: if you run your tests, which lines in your code have been executed? So you can get a percentage of how much code your tests cover. (More on this graph later.)
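If you've never set metric_fu up, usage at the time looked roughly like this. Treat the option names as illustrative: the configuration API varied between versions of the gem, and this block is reconstructed from memory rather than quoted from the talk.

```ruby
# Rakefile -- requiring metric_fu adds rake tasks such as `rake metrics:all`.
require 'metric_fu'

# Optional configuration block (API details varied between versions,
# so these option names are illustrative, not definitive):
MetricFu::Configuration.run do |config|
  config.metrics = [:flog, :flay, :reek, :churn, :rcov]
end
```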
Reek is one of many code smell detectors, and it discovers things like, hey, you have an uncommunicative name, or you have a long method, or you have some other problem that tends to lead to badness, and we track those. rails_best_practices is a code smell detector specifically focused on Rails. We don't run it if you're not in a Rails project, but if you are, we run it, and it'll tell you certain things about your Rails project that you probably shouldn't be doing. Flay is structural duplication detection. At its simplest it's copy-paste detection, but it's so much more than that, because it can find code that is very similar but written in a different way. It sees through things like define_method versus def, or curly braces versus do/end, and even if you have a big chunk of code that's doing all the same things but the names are all different, it will detect that as similar code. So it's a very cool and sophisticated tool. Saikuro is cyclomatic complexity. It's not pronounced "sai-CURE-o," it's pronounced "Saikuro." It's a joke, like cyclomatic complexity. Get it? And that is the number of paths through a given method. So if you have a cyclomatic complexity of 10, there are 10 different ways to get through that method, with all the ifs and the elses and the unlesses that determine your branching. We also do source control churn. Things that change a lot can indicate a problem. They can also not matter at all, but it's a nice thing to know about. And Roodi is yet one more code smell detector. Roodi tries to be a little more definitive: Reek will present things that may be a problem, while Roodi tries to present only things that definitely are a problem. So you won't see as many Roodi hits as you will see Reek hits, and that's by design. The two people who wrote them are coming from different points of view.
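To make "number of paths through a method" concrete, here's a toy example of my own (not Saikuro's output): each `if`/`unless` adds a decision point, and cyclomatic complexity is roughly the decision points plus one.

```ruby
# A toy method with cyclomatic complexity 4: three decision points
# (two ifs and an unless) plus the default fall-through path.
def shipping_cost(order)
  return 0 if order[:total] > 100    # path 1: big orders ship free
  return 9 if order[:express]        # path 2: express surcharge
  return 2 unless order[:in_stock]   # path 3: backorder fee
  5                                  # path 4: the fall-through
end

puts shipping_cost({ total: 150, express: false, in_stock: true })  # => 0
```

Saikuro counts these branches for you across the whole code base, so a score of 10 means ten such paths in one method.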
And so over the years, metric_fu went from being like three reports, maybe four, to being eight, nine, ten reports. And that's a lot of data. If you want to find bad places in your code, you have to flip between all these reports and go, ooh, that one looked bad, let's see how it scores on this other one. And that's just terrible and hard to use. So from the beginning, what I thought would be a really cool thing in metric_fu is if you could combine all of these reports into one report, so it could tell you, hey, this thing is bad for five different reasons. And then you could say, oh wow, that's a really bad method. And then, oh, it turns out it's totally uncovered by tests. That's one we should totally take a look at and destroy. So: our hotspots. Now a little bit of a history lesson for you. A while back, the Devver guys tried to... is anybody from Devver hanging out in the audience? Hey, is that Dan? Oh, hey, Dan. The Devver guys tried to commercialize metric_fu, and they created a product called Caliper, which lived in the cloud. You could point it at a GitHub repository, and it would run a bunch of static analysis on your code base and tell you your problems. And they wrote this thing called hotspots, which could combine all of the metrics into one report, which was really cool. So that was nice, and I was totally jealous. Unfortunately for them, but luckily for metric_fu, Caliper didn't succeed, and I convinced them to donate the code to the community. So thank you. That was totally awesome. So we had to pull all this stuff in. And this sounds trivial, right? But here's something you don't really realize: there are sort of a lot of standards. The great thing about standards is there are so many to choose from. There's more than one way you could represent any one of these methods: you could basically say Foo::Bar#baz or Foo::Bar.baz.
Some metrics report a class method with a dot, but some just report everything with a hash sign. Some metrics will keep the module that wraps around a class in the name, but some will not report the modules that wrap classes. So these could all be the same method. What you have to do is find some way of stripping out all the differences. The other problem you have is file paths. It's not a particularly terrible problem, but different metrics report file paths in different ways. So the solution is the Location class. Now, the Location class is not namespaced at all. I just got a bug report about an hour ago that said, hey, dummy, I defined Location in my project and you're overwriting it. So I should really scope that to metric_fu. That's my fault; I'll fix it soon. But yeah, there could be problems. More on that later. Location basically takes in file paths, class names, and method names, and strips out all the differences, so that you can say: given this thing, give me back something I can compare against everything else. It defines equality, so now I can say, here I've got this class or method name, I'm going to give it to you, and you tell me if it's the same as something else, which is pretty cool. We also defined the spaceship operator, which is nice, and we have some stuff in there to strip out certain module names just so we can be consistent. So there's a fair amount of code now in metric_fu that basically just rips stuff out so that we can standardize file, class, and method names. So that's great. That's awesome. And it doesn't work at all for Rcov, because Rcov has no concept of classes or methods. Rcov just says this line was covered or this line wasn't covered, which is actually all you really need most of the time when you're running Rcov, right?
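Before moving on: the Location idea above can be sketched in a few lines. This is my simplified illustration of the concept, not metric_fu's actual implementation; the normalization rules and method names here are assumptions made for the example.

```ruby
# A simplified sketch of the idea behind metric_fu's Location class:
# strip the differences out of file paths, class names, and method names
# so results from different metrics can be compared and ranked together.
class Location
  include Comparable
  attr_reader :file_path, :class_name, :method_name

  def initialize(file_path, class_name, method_name)
    @file_path   = file_path.to_s.sub(%r{\A\./}, '')    # "./lib/a.rb" -> "lib/a.rb"
    @class_name  = class_name.to_s.split('::').last     # "Mod::Klass" -> "Klass"
    # some metrics write "Klass.meth" for class methods, others "Klass#meth"
    @method_name = method_name.to_s.sub(/\A.*[#.]/, '') # "Klass#meth" -> "meth"
  end

  # Comparable gives us == from this spaceship operator.
  def <=>(other)
    [file_path, class_name, method_name] <=>
      [other.file_path, other.class_name, other.method_name]
  end

  def eql?(other)
    (self <=> other).zero?
  end

  def hash
    [file_path, class_name, method_name].hash
  end
end

a = Location.new("./lib/foo.rb", "Mod::Foo", "Foo#bar")
b = Location.new("lib/foo.rb",   "Foo",      "Foo.bar")
puts a == b  # => true
```

Defining `eql?` and `hash` together means two Locations that normalize to the same thing also collide as hash keys, which is what you want when merging results from several reports.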
You just look through the report and say, oh, these lines are covered and these lines aren't. But for my purposes, I really wanted percent coverage for a method. I wanted to be able to say, hey, this method is 73% covered, or this method is not covered at all, which is kind of a tricky thing. And I was stuck on this one for a really long time, and it turns out I should have just started writing some code. We've probably all done this before: it seemed like an insurmountable problem. I opened up Rcov and looked inside it, like, maybe there's something hidden in there I can use... no, there's nothing in there I can use. And I don't want to parse Ruby files just to get their line numbers. I mean, that sounds like a lot of trouble. Except that... parse Ruby... oh wait, there's this thing called ruby_parser, right? And ruby_parser is actually a really cool thing: you give it a bunch of Ruby and it can do some very cool stuff. It will do a whole bunch of stuff that I don't actually need, but I'll tell you about it anyway. It gives you the abstract syntax tree of some given Ruby as nested s-expressions. Now, I don't know if you know my history, but I used to be a high school physics teacher, and I don't have a computer science background. So this was a little scary to me. Like, ooh, s-expressions, that sounds fancy. It turns out it's not so bad. When I first got into it, I went, oh, this is kind of like Lisp, which I don't know much about, but it was kind of interesting. And the cool thing is, if you dig around inside the guts of ruby_parser, it will tell you line numbers, which is awesome. Now, before I move on, I should also point out that ruby_parser is behind the scenes of almost all, or at least a huge percentage, of the metrics provided by metric_fu.
metric_fu relies on things that try to find code complexity and code smells, and that's very hard to do if you have to parse a bunch of Ruby yourself. But if you can use ruby_parser to look at the abstract syntax tree, it becomes much easier. So if you dig into the internals of a lot of those code smell detectors, you'll find ruby_parser in there. So, behold, the LineNumbers class. You're going to laugh, because why did I spend so much time fearing this when this class is not very long? Literally all I do is pass in the contents of a Ruby file. And this is super simplistic and maybe totally wrong, but it does work. If it's a class, I do one thing; if it's a module, I do something else. Sometimes there's a block when you're defining a couple of things in a file: if you're defining two classes, ruby_parser will surround those two classes in what's called a block. We process that stuff and expose only two methods. One just says, hey, for a given line number, am I in a method? So given line 23, is that inside a method or outside a method? And the other: what is the method at that line number? Behind the scenes, it's building up a hash that has method names as its keys and Range objects as its values. And it turns out it's pretty simple: you get the s-expression you want and you say s.line, and then if you say s.last.line, that's the last line number of the method, which is awesome. A defn is an instance method and a defs is a class method. Now, sometimes you have to get a little tricky, because you might be inside a class << self block. It looks like a regular def, like a defn, but you're really inside a class << self block, so it's actually a class method. So you have to look for those things and search inside them.
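The real LineNumbers class walks ruby_parser's s-expressions. Since ruby_parser is a third-party gem, here's the same idea sketched with RubyVM::AbstractSyntaxTree from MRI's standard library (Ruby 2.6+) instead, so it runs with no gems. The class shape and method names are mine, not metric_fu's actual API; `:DEFN`/`:DEFS` are this parser's instance-method and class-method node types, analogous to ruby_parser's defn and defs.

```ruby
# Sketch: map each method name to the Range of lines it occupies,
# then answer "is this line inside a method, and which one?"
class LineNumbers
  def initialize(ruby_source)
    @methods = {}  # method name => Range of line numbers
    collect(RubyVM::AbstractSyntaxTree.parse(ruby_source))
  end

  def in_method?(line)
    @methods.values.any? { |range| range.cover?(line) }
  end

  def method_at_line(line)
    found = @methods.find { |_name, range| range.cover?(line) }
    found && found.first
  end

  private

  def collect(node)
    return unless node.is_a?(RubyVM::AbstractSyntaxTree::Node)
    if node.type == :DEFN || node.type == :DEFS  # instance or class method
      name = node.children[node.type == :DEFN ? 0 : 1]
      @methods[name.to_s] = (node.first_lineno..node.last_lineno)
    end
    node.children.each { |child| collect(child) }
  end
end

src = <<~RUBY
  class Foo
    def bar
      42
    end
  end
RUBY
numbers = LineNumbers.new(src)
puts numbers.method_at_line(3)  # => bar
```

With a table like this, you can take Rcov's per-line coverage data and roll it up into percent-covered-per-method, which is exactly the trick described above.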
And then the problem I was having is that once I found those, when I went and looked for all the instance methods, I would find them again. So this is my super simple way to stop finding things twice. After I've found something, I call hide_methods_from_next_round, trying to be as intentional as possible with the name, and I replace all of these guys that I've already found and marked off as maybe a class method, or inside a module, or whatever, with :ignore_me, which means nothing in terms of an abstract syntax tree. It's just a way for me to ignore them the next time I move through the tree. There may well be a better way to do this, but that's the way I figured out. So now we're in business, right? Because we've got all the pieces we need. Rcov has a way of outputting very detailed stuff, which we were already collecting in metric_fu. In case you don't know, all the output of metric_fu is serialized to YAML. The cool thing about that is, if you want to use the output of metric_fu, you can: open up the file, load up the YAML, and it's just a bunch of hashes and arrays. This is inside the Rcov section of the output, and you can literally read the class here, and right next to it a was_run: true. This is not the raw output of Rcov; this is me taking the output of Rcov and turning it into some YAML. So I already had this data available. Now I can just loop over it, passing this data into the line number generator, and start gathering information about what was covered and what wasn't at the method level. So all of a sudden, I can pass this into the hotspots' magic rankifier. And the magic rankifier basically does something like this. It says, okay, I've got a maximum Flog score across the whole app of 300, and I've got a minimum Flog score of five.
So we just say five is the lowest and 300 is the highest, and we define your rank based on where you fall between those two endpoints. Then it does that for Rcov, and it does that for Reek and Roodi. Now, Reek and Roodi, since those are code smell detectors, we just do a bit of a cheat and count the number of hits: we say, oh, you have seven Reek problems, or you have eight Roodi problems. So now we can rank all these things, and based on where your percentiles are across all of these various metrics, we can say, oh, you're 90th-percentile bad, you're 85th-percentile bad, you're 100th-percentile bad in these three, four, five, ten metrics. And now we can rank all these guys against each other, which is pretty cool. So now it's time for the live coding demo. We're going to run metric_fu on itself and see what happens. So here I am inside metric_fu. I'm going to run rake metrics:all, the same task you would run inside whatever your Ruby project is if you wanted to get the metrics. You can configure it, but I'm using a pretty standard configuration here. So, waiting for things to load up, spiking one of my cores... All right, so we're doing some parsing here; that's probably Saikuro. And now we're running some tests, which are actually specs, to get the Rcov output. Generating some graphs: I keep copies of all the previous runs for every day, and that's how I graph that stuff over time. I've serialized all the output and saved it in various files, so I can go back and mine all that stuff for things that you want. So here are the standard reports you usually get with metric_fu, and here are the brand new hotspots. Can you guys see this? Yeah, it looks pretty good. Yes, we can see it. So here are three categories: we look at files, classes, and methods.
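As an aside, the scaling the rankifier does can be sketched in plain Ruby. This is my illustration of the min/max idea described above, with made-up data and names, not metric_fu's actual Ranking code.

```ruby
# Place each score between the observed min and max for that metric,
# giving a 0.0..1.0 "badness" that can be compared across metrics.
def scaled_rank(score, min, max)
  return 0.0 if max == min   # avoid dividing by zero when all scores match
  (score - min).to_f / (max - min)
end

flog_scores = { "Foo#bar" => 300, "Foo#baz" => 5, "Foo#qux" => 152.5 }
min, max = flog_scores.values.minmax
flog_scores.each do |method, score|
  puts "#{method}: #{scaled_rank(score, min, max).round(2)}"
end
# Foo#bar: 1.0
# Foo#baz: 0.0
# Foo#qux: 0.5
```

Do that for every metric, and a method's combined rank is just how close it sits to the bad end across all of them.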
The first thing I did wrong, especially because the CSS sort of implies it, is that I started reading left to right when I first looked at this. That is not the way to do this at all: left to right has no meaning. It's up-down. My visualization skills... actually, visualizations would be a really cool thing to do with all this data, and I just haven't done anything with that yet. So if you're interested, please submit a patch. So you have methods here, and these are ranked from top to bottom, so the worst things are at the top. You can see this guy is pretty brutal, right? A Flog score of 58.8... 58.4. Not terrible, but getting close to terrible. Saikuro, 10: that's about five more than I usually like to have for cyclomatic complexity in any method. And uncovered code, 97.4%. That basically means it's completely uncovered, right? Because Ruby always evaluates the first line, the def, whatever, so you always get a little bit of coverage, because it feels bad for you. So this guy is completely uncovered, and has seven code smells from Reek. Now, this is the delicious, delicious irony of metric_fu at this point: the hotspots inside metric_fu are the hotspot code itself, which is pretty cool, or horrible, I don't know. Anyway, if you look through here: MetricAnalyzer, that's hotspots. Let's see, MetricAnalyzer, yep. Location, oh yeah, I just showed you that guy. He's not really covered; he's got some problems. This is kind of weird: what the hell does it mean to say you have 3.7 paths through something? That's an average, right? For a file, we take whatever methods are in there and do some averaging, so you can get some nonsensical-looking numbers sometimes. Here's some more MetricAnalyzer stuff. AwesomeTemplate apparently has some problems: not so awesome. And so on and so forth. So let's take a little tour through the rest of this stuff. We've got all these things we can show. Churn is pretty cool.
You can see which things have changed all the time. What you're looking for in churn is things that change way out of proportion to everything else. If you have one file that has changed 100 times over the last three months and every other file has only changed like 10 times, that's a file that needs to change no matter what you do, and that is some sort of object that has its fingers in way too many things, and you need to keep an eye on it. Flay, structural duplication, has gone up lately. So we're back to this graph which I said I would discuss later, and now is the time. As an open source maintainer, you often have this dilemma where somebody presents you with a pile of code that does something really cool, and yet it's not covered at all. And normally the right path is just to throw it back and say no. But guess what? Sometimes it's so cool you just take it in anyway. And that's what I've done at least twice now in metric_fu's history. The first time was when we serialized the YAML output. There was some wonderful work done, but it wasn't tested much at all. So I brought it in, and then slowly, over time, got it back up to like 91% coverage. And then I took in the hotspots stuff. And I feel like I'm bashing on the hotspots guys who gave me this wonderful gift, so don't read it that way. But I just have to go back now and do some testing of that stuff, which is fine. What? You have tests? You'll give me the tests? Why didn't you tell me this? That's awesome. All right, that's great. Okay, cool. Because I totally pulled that in going, okay, I'm just committing to it, I'll write some tests for that, it'll happen. So, cool. That's coming soon. I didn't plan that at all. Okay, so, yeah. Reek and Roodi and rails_best_practices are all code smell things; I'm going to talk about them more in a little while. All right, back to the presentation.
So let me make an important point here. A lot of times when you come to these conferences, people stand up on stage and tell you how to do things, and you can get the impression that we're all sort of perfect, right? But I'm just some dude. I write some code, and oftentimes I make a mess. And the important thing about making a mess is that you're probably going to do it no matter what. There will always be time constraints, there will always be new people on your team, there will always be consultants, there will always be things you can blame for your mess, but it's probably your fault. Don't ignore your messes or hide them; just clean them up. And that's sort of the big point of metric_fu. On every project, everybody always feels like, oh man, the code's kind of a mess, and then when you ask everybody what you should fix first, nobody agrees. That was the point of metric_fu: to help solve that problem. What's the worst thing? Let's shoot the first charging buffalo first, right? We may get trampled, but we're going to take a few out, we're going to live a little longer, and it's going to be all right. Anyway, moving on. So, problems with metric_fu, other than the ones I just showed you. metric_fu has eight, nine, ten reports, so it relies on a bunch of gems, which rely on a bunch of gems. So I did a fresh install of metric_fu using that excellent RVM tool and gem... what is the thing called? What? Gemsets.
Oh god, I love gemsets. So yeah, I created a new gemset, did a fresh install of metric_fu, and I got all these gems. Notice we've got a hole in here: somebody doesn't understand development dependencies. But that's okay, because I only learned about them recently too. Any change to any one of these gems, especially in how they output, can break metric_fu, because metric_fu literally shells out to the command line and calls things like Reek. We've recently tried to move away from that: there was a big refactor, so we don't call Flog from the command line anymore, we call Flog programmatically. And I'd like to do that more often in metric_fu, to avoid these problems of, oh, they added an extra space to the output of Flog and now everything's busted. Damn it. So yeah, there's a lot of regex parsing in metric_fu which I would like to go away some time. And then there are the classic things: different Rails versions, different Ruby versions. But the biggest problem, and something to try if you're having trouble getting metric_fu running: I suggest turning off Rcov. metric_fu just shells out to the command line and runs Rcov on all your tests or specs, and most people never run their tests that way, right? You probably have some rake task that you call, and it may or may not set things up for your test suite. It may do various things; you may have some tricky things that happen in your test suite; or you might have some order-dependent problems you don't know about, so when the tests run in a different order, they blow up. So a lot of problems running metric_fu can be traced back to Rcov, because Rcov is the only real dynamic processing we do in metric_fu. Everything else is static analysis, meaning we just look at the code, whereas Rcov actually tries to execute not only your tests but all of the code that your tests cover. So that
can be difficult. So now, because Prezi doesn't really do copy-paste super well, I'm going to have to switch over to another presentation. Okay, so now what? We've gotten to the point where we've found out there are problems. We knew there were problems. What should we do with all this information? If you're like most developers, probably not a lot, right? You find out there's a problem, you feel kind of bad about it, you don't like feeling bad about things, so you stop thinking about it. That's very common, and I do it all the time. But the point of all this is to keep reminding you: hey, these bad things are out there, so try not to be a bad developer. Oh, I should explain these photos. Can you read that? I went to Japan recently for RubyKaigi, which was awesome (name drop), and they have all these wonderful English translations that don't quite work, and I just took a lot of photos of them. So yes, it is a lot to process. Let's start easy. Flay is one of the easier things to understand: it's copy-paste detection, but also structural duplication. So we're violating DRY, don't repeat yourself, and the solution is not real hard: it's mostly playing around with extract method. There can be other things, but these really small steps can really help later on when you need to add functionality. If everything's defined in chunks that have a single responsibility, then when you're adding new features, you don't even realize how much easier it is until you go, hey, that feature wasn't hard at all. The single responsibility principle is this concept that, back when I was a physics teacher who was just an apprentice at Object Mentor and they were telling me about it, just didn't make a lot of sense to me. I was like, okay, it should only do one thing, but there are five lines in there and they're all doing different things. But as I matured as a programmer, it became more and more interesting to me: the idea that a method
should just try to do one thing, a class should just try to do one thing, an entire project should just try to do one thing. It can be applied at many different levels. That's sort of the idea behind having many different services: if you have a big team that has a lot of problems getting along, you can break your project up into different services that each try to do just one thing. The "one thing" definition can be tricky, but that's where the hard, architectural part of software comes in: you have to decide what you want things to do. So let me give you an example. I used to work on a project that had a lot to do with phone numbers, and at some point we realized we were doing all this phone number stuff in various different places. We would get a phone number and start doing regex stuff to extract things like area code, prefix, line number, extension, and all that. Obviously this is duplication, but instead of pulling stuff out or creating private methods, what we actually had was a missing object: there really needed to be a PhoneNumber object that could do stuff with phone numbers. So that was a nice little refactoring that saved us a lot of duplication. Let's move on to churn. Churn is this thing where you figure out what changes the most. And what bothers people about churn a lot is that sometimes it means nothing: oh, this file has changed a lot, it's a CSS file, maybe it's supposed to change a lot. I still think it's kind of a code smell; if you have like 10 CSS files and only one ever changes, I don't know if you're really dividing up your CSS the way it should be divided. But like I said, it can indicate God objects: an object that has its fingers in the internals of other objects, because if the internals of another object have to change, then it has to change too. So if you see something that has high churn, it had better be a good method, and it
had better be well tested. Why? Because everybody's in there all the time, including your worst developers. But of course you don't have bad developers on your team. But seriously, people are inside that thing all the time, and if it's complex and it's changing a lot, you're just waiting for problems to happen, because complexity can hide bugs. Okay, moving on: code coverage. A lot of times I hear developers say something like, hey, we should just take a week to write a bunch of tests, and I kind of cringe when I hear that. I like the idea of people writing tests, but I get a little worried when people are going to write tests for a week, because I'm concerned they're going to be writing tests for things they don't really understand. If you're writing a bunch of unit tests for something and you don't really get it, you might just be locking in a bug. You look at the thing and go, okay, when I put in the numbers 2 and 2 it outputs 5, cool, I'll just lock that in with a test. But maybe it was supposed to add those two numbers together, and the output was wrong. So if you're doing code coverage, make sure you actually understand the code. When you're moving around the code base and you come across something that isn't covered, now is the time to cover it: oh, I have to change this thing, now would be a good time to really figure it out. And the best way to really figure something out is to write a bunch of tests for it. The worst way is to write one kind of wrapper test for one method, where you just look at the output and go, okay, let's just freeze that into a test. Tests like that are actually harmful; they're bad and should be removed. Okay, now: Flog and Saikuro. They're both different ways of measuring code complexity. Flog is sort of a superset of Saikuro, because Flog includes branching. Saikuro is just
cyclomatic complexity, the paths through a method. Flog uses the ABC metric, which is assignments, branches, and calls: when you look at the output of Flog, you get some points every time you make an assignment, every time you do some branching, and every time you make a method call. And here's my super simple, completely unauthorized guide to Flog scores. Basically, my feeling is: below 20 and you're probably okay; from there you're getting into a gray area where this could be trouble; and above 60, it's just bad, right? And you might say, I don't know, why 60? It's not uncommon to see scores in the hundreds, two hundreds, three hundreds if you run Flog on a code base. But anything above 60, I'm going to claim you can probably fix. Hey, it's Ryan Davis! Ryan's in the audience. Hey, Ryan. I should mention that metric_fu really wouldn't exist without Ryan Davis: he wrote Flog and Flay, but also ruby_parser, which is behind the scenes of a lot of the things metric_fu depends upon. And this is a wonderful photo from Aaron, who's right over there, right? Okay. There's a great story behind this; you should track him down to get it. I'm going to move on. So how do you fix high complexity? Mostly extract method, or a missing object, and if that doesn't work, it's time to roll up your sleeves and re-architect. And, you know, I'm sorry about that, I'm sorry to be the bearer of bad news, but you really shouldn't have to have hugely complex methods in your app. If you do, there's probably something you're doing wrong, and it's time to take a look at things and ask: is this architecture something that worked in the past, but now the metaphor of the application has changed and it no longer applies? All right, so here's a more interesting Flog, from the code I work on: the worst method we had. Let's take a look at this. I work at Backstop Solutions. We write a framework for creating
websites for hedge funds and for the people who use them. And the thing about hedge funds is they all want to look like their own thing, so we basically serve up a bunch of different domains. Everybody, like blopartners.com, wants to have two sites: a private site where they can edit things, and a public site where their customers can see reports on how their hedge fund is doing. So I looked into this thing and I was like, what is this? This is horrible. It's doing so much, and all sorts of low-level stuff. And look at all this YAML stuff. After digging through it a little, I realized: we have all this configuration, some of which is global, which has to do with the suffix they want to apply. Do you want to call it public.blopartners.com and private.blopartners.com? No, of course not, that's too simple; they want something like superawesomeinvestments.blopartners.com. So we roll through this and we do a lot of merging. By the way, the method goes on: this has a Flog score of 169. That's brutal. Also, look here. Seriously, when the person was writing this, did they not notice this? I don't know. Anyway, there's a refactoring opportunity right there. So, like I said, the first thing I did was pull this merging of the lifecycle YAML out into a sites-configuration method, because basically all it was doing was taking some YAML, looking at the keys, and doing a merge. That broke out a lot of stuff right there. And what I'm trying to do is this thing Glenn Vanderburg talked to me about at Lone Star Ruby Conf a few years ago. He was talking about uniformity of methods: the idea that you want some methods that are sort of directors, which make calls off to other methods, and then some methods that do the low-level work. And so this was my first pass at doing
that: I wanted this guy to be the high-level method that says "you go do this," so you can read through it and have a high-level idea of what it does. And so this was the most obvious refactoring: setting up private URLs and setting up public URLs are basically the same thing; they just differ in visibility, one is private and one is public. So I could pull that out into one method, pass in the visibility, and do some switching on that, and that was pretty straightforward. And then just a couple of other things I noticed, like, oh, this is one section of the code, why don't I pull this out and make it its own method?

Now, the interesting thing here is that oftentimes when you do this refactoring, if you add up all the Flog scores, the total will actually be more than the original, and that doesn't mean you've done a bad thing, because overall complexity for an entire application is not a bad thing; as an app does more, it's going to be more complex. What I'm really worried about is complexity per method. So I've gotten this down. Now, this is by no means done; this is at the refactor stage. We were actually just having a discussion about this earlier in the week, because if you look at this right now, you still have no idea what it does; to get it down to not horrible, we need to start doing some actual re-architecture, and that's a little more difficult, so it's out of the scope of this presentation.

Okay, so Reek, Roodi, and Rails Best Practices: all these guys are supposed to look for design problems, which is really cool, because if it says you have feature envy and you don't know what feature envy is, you can go look it up, and then you say, oh wow, I'm calling this other thing more than I'm calling myself; maybe the method should live in the other thing instead of where it is. And that's nice, but the problem is, keep in mind, this is a machine trying to tell you about a human problem. Right? Like, all
of these problems we're talking about are problems for humans; the machine can execute the code just fine. So when it says, hey, you're in a controller and you're calling params an awful lot in this method, maybe you just go, well, yeah, but I'm in a controller, and calling params is probably okay to do in a controller. So you have to take these things with a grain of salt. This is advice; it's not the be-all and end-all. Like I said on this slide, it's up to you and your team to establish conventions and stick to them. I don't really care what the conventions are, just establish them and stick to them: have rules about the maximum number of lines you'll tolerate in a method, how much complexity is too much complexity, and what sort of code coverage you're aiming for.

So, resources for metric_fu: metric_fu lives at metric-fu on RubyForge, which has links to all the things you could possibly need if you want to contribute to, or just use, metric_fu. And that's about it. I would like to thank Backstop Solutions for donating a couple of days of my time to metric_fu; they were very nice and gave me a couple of days last week to get metric_fu all shaped up and ready for its 2.0 release. The 2.0 release went out yesterday, and like I said, we already have some bug reports on it, so feel free to download it and try it. It is a .0 release in the truest sense of the word; I pulled in a bunch of stuff that was new. It does some really cool things, and it works on all of my projects, so I'm sure it'll work on yours. Questions?
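A minimal setup of the sort the talk describes, sketched against the 2.x-era Rakefile API (the configuration keys shown here are illustrative and varied between versions, so check the metric_fu site for your release):

```ruby
# Rakefile -- pulls metric_fu's rake tasks into your project.
require 'metric_fu'

MetricFu::Configuration.run do |config|
  # A place to encode your team's conventions: which directories to
  # analyze, which metrics to run, and so on. Keys are illustrative.
  config.flog = { :dirs_to_flog => ['app', 'lib'] }
  config.flay = { :dirs_to_flay => ['app', 'lib'] }
end

# Then run: rake metrics:all
```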
Yes? So, just yesterday... no, two days ago, I'm losing track of time. Two days ago I did a fresh Rails 3 install, created a delete-me project, wrote a couple of tests inside there, and ran metric_fu on it without too much difficulty. There are some notes on the metric_fu main page about what you want to do if you're inside Rails 3, but I think most of those problems were the differences between ActiveSupport past and ActiveSupport present. With ActiveSupport right now, you're supposed to require just the things you need instead of requiring all of ActiveSupport, which is very cool. It worked for me for a pretty simple Rails app using both RSpec and Test::Unit, so I think it's working. Obviously, have a good time with that, and let me know if it isn't.

Yes? [inaudible question] Yeah, keep in mind, ruby_parser is pretty complicated, right? It's trying to do a pretty ambitious thing, so sometimes it can get confused, especially when you're moving between different Rubies.

Other questions? Yes? So the question is, what metrics would cover performance bottlenecks? None. That's not the point of metric_fu. I guess it could be at some point, and I guess you could make sort of grandiose claims, like, oh, more complex methods or longer methods... but really, nowhere in here are we looking at anything specifically in terms of performance. If that's what you're looking for, I don't have anything for you.

All right, well, thanks a lot. Thanks for attending, and enjoy the rest of RubyConf!
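To make the earlier Flog discussion concrete, here is a small hypothetical Ruby method annotated with where the ABC metric picks up points (assignments, branches, calls), using the talk's rough thresholds of under 20 okay, 20 to 60 gray area, above 60 trouble:

```ruby
# Hypothetical example: where Flog's ABC metric earns its points.
# Every assignment, branch, and call adds to the method's score.
def classify_flog_score(score)
  label = nil                  # assignment
  if score < 20                # branch
    label = "probably okay"    # assignment
  elsif score <= 60            # branch
    label = "gray area"        # assignment
  else
    label = "trouble"          # assignment
  end
  label.capitalize             # call
end
```

Flog itself weights these events differently (calls generally cost more than plain assignments), but the shape is the same: pile enough assignments, branches, and calls into one method and its score climbs fast.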