I just did a run-through and I hit 45 minutes exactly, so I'm just going to get started so that you can all have a nice coffee break after this. So chances are that you've heard of refinements, but never used them. The refinements feature has existed as part of Ruby for around five years, first as a patch, and then subsequently as an official part of Ruby since Ruby 2.0. And yet to most of us, it exists only in the background, surrounded by a haze of opinions about how they work, how to use them, and indeed whether or not using them at all is a good idea. I'd like to spend a little time looking at what refinements are, how to use them, and what they can do. But don't get me wrong, this is not a sales pitch for refinements. I'm not going to try and convince you that you should be using refinements and that they're going to solve all your problems. The title of this presentation is "Why Is Nobody Using Refinements?", and that's a genuine question. I don't have all the answers. My only goal is that by the end of this session, both you and I will have a better understanding of what they actually are, what they can actually do, when they might be useful, and why they've lingered in the background for so long. So let's go. Simply put, refinements are a mechanism to change the behavior of an object in a limited and controlled way. And by change, I mean add new methods or redefine existing methods. And by limited and controlled, I mean that adding or changing those methods does not have an impact on other parts of our software which might interact with the same object. So let's look at a very simple example. Refinements are defined inside a module using the refine method. This method accepts a class, String in this case, and a block which contains all the methods to add to that class when the refinement is used. You can refine as many classes as you want within a module, and you can define as many methods as you want within each block.
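As a concrete sketch of what was just described, here's a refinement defined with refine inside a module. The module name Shouting and the shout method are illustrative examples, not from any library; note that merely defining the refinement doesn't change String anywhere.

```ruby
# A module holding a refinement of String. Nothing changes globally:
# until some scope activates Shouting, String has no #shout method.
module Shouting
  refine String do
    def shout
      upcase + "!"    # a new method on String, visible only where activated
    end
  end
end

# Without activation, the refined method is simply not there:
"hello".respond_to?(:shout)   # => false in any scope without `using`
```

Calling `"hello".shout` at this point raises NoMethodError, exactly as if the module didn't exist.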
To use a refinement, we call the using method with the name of the enclosing module. And when we do that, all instances of that class, which is String in this case, within the same scope as our using call, will have the refined methods available. Another way of saying this is that the refinement has been activated within that scope. However, any strings outside the scope are left unaffected. Refinements can also change methods that already exist. When the refinement is active, it is used instead of the existing method, although the original is still available via the super keyword, which can be very useful. And anywhere the refinement isn't active, the original method gets called exactly as before. And that's really all there is to refinements. Two new methods, refine and using. However, there are some quirks, and if we want to properly understand refinements, we need to explore them a little bit. And the best way of approaching this is by considering a few more examples. So now we know that we can call the refine method within a module to create refinements, and that is actually all relatively straightforward. But it turns out that when and where you call the using method can have a profound effect on how the refinement behaves with our code. We've seen that invoking using inside a class definition works. We activate the refinement, and we can call refined methods on String instances, in this case. We can also move the call to using somewhere outside the class and still use the refined method as before. In the examples so far, we've been calling the refined method directly, but we can also use it within methods defined in the class. And again, this also works even if the call to using is outside of the class. But this doesn't work. We cannot call our shout method on the string returned by our method, even though that string object was created within a class where the refinement was activated. And here's another broken example.
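Here's a minimal sketch of both points together, activation via using and reaching the original method with super. The module name Doubling is my own, not from the talk's slides:

```ruby
module Doubling
  refine String do
    def upcase
      # Redefines an EXISTING method; `super` still reaches the
      # original String#upcase while this refinement is active.
      super + super
    end
  end
end

using Doubling         # activates the refinement for this lexical scope

"abc".upcase           # => "ABCABC" here; plain "ABC" everywhere else
```

Anywhere the refinement isn't active, `"abc".upcase` returns `"ABC"` exactly as before.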
We've activated the refinement inside our class, but when we reopen the class and try to use the refinement, we get a NoMethodError. If we nest a class within another where the refinement is active, it seems to work, but it doesn't work in subclasses unless they're also nested classes. And even though nested classes seem to work, if you try and define them using the double colon, or the compact form, the refinements will have disappeared again. And even blocks seem to act a little bit strangely. Our class uses the refinement, but when we pass a block to the method in that class, suddenly it breaks, as if the refinement has disappeared. So what's going on here? For many of us, especially those relatively new to Ruby, this is going to be quite counterintuitive. After all, we're used to being able to reopen classes or share behavior between superclasses and subclasses, but it seems like that only works intermittently with refinements. It turns out that the key to understanding how and when refinements are available relies on another aspect of how Ruby works, which you may have already heard of and possibly even encountered directly. The key to understanding refinements is understanding lexical scope. To understand lexical scope, we need to learn about some of the things that happen when Ruby parses our program. So let's look at that first example again. As Ruby parses the program, it is constantly tracking a handful of things to understand what the meaning of the program is. And exploring all of these in detail would take far more time than I actually have, but for the moment, the one that we're interested in is called the current lexical scope. So let's pretend to be Ruby as we walk through the code and see what happens. When Ruby starts parsing the file, it creates a new structure in memory, a new lexical scope, which holds various bits of information that Ruby uses to track what's happening at that point.
When we start processing, we create this initial one, and we call that the top-level lexical scope. And when we encounter a class definition or a module definition, as well as creating the class and everything that that involves, Ruby also creates a new lexical scope nested inside the current one. And we can call this lexical scope A, just to give it an easy label; it doesn't actually have a name. Visually, it makes sense to show them as nested, but behind the scenes, the relationship is modeled by each scope linking to its parent. So A's parent is the top-level scope, and the top-level scope has no parent. As Ruby processes all the code within this class definition, the current scope is now lexical scope A. When we call using, Ruby stores a reference to the refinement within the current lexical scope. We can also say that within lexical scope A, the refinement has been activated. You can now see that there are no activated refinements in the top-level scope, but our shouting refinement is activated for lexical scope A. So next, we can see the call to the method shout on a string instance. Jay McAvron, who's sat there, is gonna talk a lot more about what method dispatch does. But one of the things that happens at this point is that Ruby checks to see if there are any activated refinements in the current lexical scope that might affect this method. And in this case, there is an activated refinement for the shout method on strings, which is exactly what we're calling. So Ruby then looks up the correct method body within the refinement rather than the class, and invokes that instead of any existing method. And there, we can see that our refinement is working as we hope. So what about when we try and call the method later? Well, once we leave the class definition, the current lexical scope becomes the top-level scope again. And then we find our second string instance with a method being called on it.
And once again, when Ruby dispatches the shout method, it checks the current lexical scope for the presence of any refinements. And in this case, there are none. So Ruby behaves as normal, which is to invoke method_missing, which raises an exception, and that's why we get our NoMethodError. Now if we call using Shouting outside the class, at the top of the file or something like that, we can see that our refined method works both inside and outside the class perfectly. And this is because we're activating the refinement for the top-level lexical scope. And once a refinement is activated, it's activated for the current and all nested lexical scopes. So calling using at the top of the file means that it will work everywhere in that file. And so our call to the refined method in the class works, as well as the one at the top of the file. So this is our first principle about how refinements work. When we activate a refinement with the using method, that refinement is active in the current and any nested lexical scopes. However, once we leave that scope, the refinement is no longer activated, and Ruby behaves just like it did before. So let's look at another example from earlier. Here we define a class and activate the refinement, and then later, either somewhere in the same file or in a different file, we reopen the class and try to use it. Now we've already seen that this doesn't work, but the question is why? Watching Ruby build its lexical scopes again will reveal why this is the case. So once again, we have our top-level lexical scope, and when we encounter the first class definition, Ruby gives us a new nested lexical scope that I'll call A again. And it's within this scope that we activate the refinements. Once we reach the end of the class definition, we return to the top-level lexical scope. But when we reopen the class, Ruby creates a nested lexical scope just as it did before, but it's distinct from the previous one. We'll call it B just to make that clear.
While the refinement is activated in the first lexical scope, when we reopen the class, we're in a new lexical scope. It's different. It's distinct. And one where the refinement is no longer active. So the second principle is this: just because the class is the same doesn't mean you're back in the same lexical scope. And this is also the reason why our example for subclasses didn't behave as we might have expected. So we don't have to pretend to be Ruby anymore, and we can just draw these scopes. And it should be clear now that the fact that we're in a subclass actually has no bearing on whether or not the refinement is active. It's entirely determined by lexical scope. Anytime Ruby encounters a class or module definition via the class or module keywords, it creates a fresh new lexical scope, even if that class or module has already been defined somewhere else. And this is also the reason why, even when activated at the top level of a file, refinements only stay activated until the end of that file, because each file is also processed using a new top-level lexical scope. So now we have another two principles about how lexical scope and refinements work. Just as reopened classes have a different scope, so do subclasses. In fact, the class hierarchy has nothing to do with the lexical scope hierarchy. And we also now know that every file is processed with a new top-level scope, so refinements activated in one file are not activated in any other files, unless those other files also explicitly activate our refinement. Let's look at one more example. Here, we're activating a refinement within a class, and then defining a method in that class which uses the refinement, and then later we create an instance of the class and call that method.
So we can see that even though the method gets invoked from our top-level lexical scope, which is where that call to greet is, a scope where the refinement is not activated, the refinement still somehow works, and the behavior is what we hoped. So what's going on here? Well, when Ruby processes a method definition, it stores with that method a reference to the current lexical scope at the point the method was defined. So when Ruby processes the greet method definition, it stores a reference to lexical scope A along with it. And then when we call the greet method from anywhere, even in a different file, Ruby evaluates it using the lexical scope that it has associated with it. So when Ruby tries to evaluate hello.shout inside our greet method and tries to dispatch the shout method, it's checking for activated refinements in lexical scope A, even though the method was called from an entirely different lexical scope. We already know that our refinement is active in scope A, and so Ruby can use the method body for shout from the refinement, and it works exactly like we'd hoped. So our fourth principle is this: methods are evaluated using the lexical scope at their definition, no matter where those methods are actually called from. Okay, one more example, I promise, just one. A very similar process explains why blocks didn't work. So here's that example again: a method defined in a class where the refinement is activated yields to a block, but when we call that method with a block that uses the refinement, we get our error. So we can quickly see which lexical scopes Ruby has created as it's processed this code. And as before, we have a nested lexical scope that we'll call A, and the method defined in our class is associated with it, and A has the refinement active. However, just as methods are associated with the current lexical scope, so are blocks and procs and lambdas and everything like that. When we define the block, the current lexical scope is the top-level one.
So when the run method yields to the block, Ruby evaluates that block using the top-level lexical scope, and so Ruby's method dispatch algorithm finds no active refinements and therefore no shout method. A final principle: blocks and procs and lambdas and so on are evaluated using the lexical scope at their definition too. And with a bit of experimentation, we can also demonstrate to ourselves that even blocks evaluated using tricks like instance_eval or class_eval or anything like that retain this link to their original lexical scope, even if the value of self might change depending on how you're passing them around. And this link from methods and blocks to a specific lexical scope might seem strange or even confusing right now, but we'll soon see that it's precisely because of this that refinements are so safe to use. But I'll get to that in a minute. For now, let's just recap what we know. Refinements are controlled entirely using lexical scope structures already present in Ruby. You get a new lexical scope anytime you do any of the following: entering a different file, opening a class or module definition, or running code from a string using eval, although I haven't shown any examples of that. And as I said earlier, you might find the principle of lexical scope surprising if you've never really thought about it before, but it's actually a very useful property for a language to have, because without it, lots of the things that we take for granted about Ruby would be much harder, if not impossible. Lexical scope is actually part of how Ruby determines which constant you mean, and it's fundamental to using blocks and procs as closures, for example. We also know how our five basic principles enable us to explain how and why refinements behave the way they do. Once you call using, refinements are activated for the current and any nested lexical scopes. The nested scope hierarchy is entirely distinct from any class hierarchy in your code.
Superclasses and subclasses have no impact on refinements at all. Only nested lexical scopes do. Different files get different top-level scopes. So even if we call using at the very top of a file and activate the refinement for all code within that file, the meaning of code in all other files is unchanged. Methods are evaluated using the current lexical scope at the point of definition, so we can call methods that make use of refinements internally from anywhere in the rest of our code base. And finally, blocks are also evaluated using the lexical scope at their definition, and so it's impossible for refinements activated elsewhere in our code to change the behavior of blocks, or indeed other methods or any other code written where that refinement wasn't activated. Right, so now you basically know everything there is to know about refinements, but what are they good for? Anything? Maybe nothing. Let's try and find out. Now again, another disclaimer: these are just some ideas, some more controversial than others, but hopefully they will help frame what refinements might be good for, what they might make more elegant or more robust. Well, the first one is probably not gonna be a surprise, but I think it's worth discussing anyway. Monkey patching is the act of modifying a class or object that we don't own, that we didn't write, basically. And because Ruby has open classes, it's trivial to redefine any method on any class with new or different behavior. The danger that monkey patching brings is that those changes are global. They affect every part of the system as it runs, and as a result it can be very hard to tell which parts of our software will be affected.
If we change the behavior of an existing method to suit one use, there's a good chance that some distant part of the code base, hidden somewhere in Rails or something like that, is going to call that method expecting the original behavior, or even worse, its own monkey-patched behavior, and things are gonna get messy. So say I'm writing some code in a gem, and as part of that I want to be able to turn an underscored string into a camelized version. I might decide, oh, the easiest thing to do would be to reopen the String class and just add this method. It's innocent-looking, and it looks like it works. That's a very simple and quite understandable thing to want to do. But unfortunately, as soon as anyone tries to use my gem, even myself in a Rails application, the test suite is gonna go from passing not to failing, but to exploding entirely. You can see the error at the top there. It's something to do with constant names or something like that. Looking at the backtrace, I don't see anything about camelize, so it doesn't seem very obvious why what I did seems to have broken this. I really doubt, if I was using code from someone else, that I would have any idea, and it would probably take me a long time to trace through figuring out what had gone wrong. And this is exactly the problem that Yehuda Katz identified with monkey patching in his blog post about refinements almost exactly five years ago. So monkey patching has two fundamental issues. The first is breaking API expectations. We can see that Rails has some expectation, for example, about the behavior of the camelize method on String, which we obviously broke when we added our own monkey patch. The second is that monkey patching can make it far harder to understand what might be causing unexpected or strange behavior in our software. Refinements in Ruby address both of these issues.
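The monkey patch in question looks something like this sketch, a naive camelize that assumes simple underscore-separated words. The danger is that the definition is global:

```ruby
# Reopening a core class: this naive camelize is now defined for EVERY
# string in the process. If anything else (Rails' Active Support, say)
# defines or expects a different camelize, the two silently collide.
class String
  def camelize
    split("_").map(&:capitalize).join
  end
end

"hello_world".camelize   # => "HelloWorld"
```

In isolation it works; the breakage only appears when some distant library also has opinions about what String#camelize should do.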
If we change the behavior of a class using a refinement, we know that it cannot affect parts of the software that we don't control, because refinements are restricted by lexical scope. We've already seen that refinements activated in one file are not activated in any other file, even when reopening the same classes. If I wanted to use a version of camelize in my gem, I could define and use it via a refinement, but anywhere that refinement wasn't specifically activated, which won't be anywhere in Rails, for example, the original behavior remains. It's actually impossible to break existing software like Rails using refinements. There's no way to influence the lexical scope associated with code without editing that code itself. And so the only way that we can poke refinement behavior into a gem is by literally finding the source code of that gem and typing into it. This is exactly what I meant by limited and controlled at the start of this presentation. Refinements also make it easier to understand where unexpected behavior may be coming from, because they require an explicit call to using somewhere in the same file as the code that uses that behavior. If there's no call to using in a file, we can be confident, assuming that no one else has monkey-patched anything, that there are no refinements active and that Ruby should behave the way that we would expect. Now, this is not to say that it's impossible to create convoluted code which is tricky to trace or debug. In Ruby, that will always be possible, but if we use refinements, there will always at least be a visual clue that a refinement is activated, so it might be involved. Onto my second example. Sometimes, software we depend on changes its behavior over time, as new versions are released. APIs can change in newer versions of libraries, and even in some cases, the language itself can change.
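The same camelize as a refinement might look like this sketch (StringCamelize is an illustrative name); only files that opt in with using see the method:

```ruby
module StringCamelize
  refine String do
    def camelize
      split("_").map(&:capitalize).join
    end
  end
end

using StringCamelize          # opt in, for this file only

"hello_world".camelize        # => "HelloWorld" here; String is untouched
                              # in every other file, including Rails' own
```

Any file that doesn't call using StringCamelize keeps whatever camelize it already had, or none at all.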
For example, in Ruby 2, the behavior of the chars method on String changed from returning an enumerator to returning an array of single-character strings. Imagine we're migrating an application from Ruby 1.9 to Ruby 2 or later, and we discover that some part of our application is relying on this earlier behavior of chars. If some parts of our software rely on it, we can use refinements to preserve the original API without impacting any other code that might have already been adapted to the new API. Here's a simple refinement which we could activate only for the code that depends on that Ruby 1.9 behavior, while the rest of the system remains unaffected, and even any dependencies that we bring in, now or in the future, will be able to use the Ruby 2 behavior as they might expect. My third example will hopefully be familiar to most people. One of the major strengths of Ruby is that its flexibility can be used to help us write very expressive code. That's the main reason why I was drawn to Ruby in the first place. In particular, it supports the creation of DSLs, or domain-specific languages, and these are just collections of objects and methods that have been designed to express concepts as closely as possible to the terminology that non-developers might use, and they're often designed to read more like human language than code. Adding methods to core classes can often help make DSLs more readable and expressive, so refinements are a natural candidate for doing this in a way that doesn't leak those methods into other parts of an application. RSpec is a great example of a DSL, in this case, for testing. Until recently, this would have been a typical example of RSpec usage. One hallmark is the emphasis on writing code that reads fluidly, and we can see that demonstrated in the line "developers should be happy", which is valid Ruby, but reads more like English than code. And to enable this, RSpec used monkey patching to add a should method to all objects.
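Such a compatibility refinement could be sketched like this, restoring the Ruby 1.9-style enumerator for chars in just the scopes that opt in (the module name LegacyChars is mine):

```ruby
module LegacyChars
  refine String do
    def chars
      each_char               # 1.9 behavior: an Enumerator yielding each
    end                       # character, instead of an Array of strings
  end
end

using LegacyChars

"abc".chars                   # => an Enumerator here
"abc".chars.to_a              # => ["a", "b", "c"]
```

Everywhere else in the application, including inside dependencies, `"abc".chars` still returns the modern Array.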
Now recently, RSpec moved away from this DSL, and while I cannot speak for the developers who maintain RSpec, I'm quite confident that part of the reason was that they wanted to avoid monkey patching the Object class. However, refinements offer a compromise that balances the readability of the original API with the integrity of our objects. It's easy to add a should method to all objects in your spec files using a refinement, but this method doesn't leak out into the rest of the code base. Now the compromise is that you must write using RSpec at the top of, or somewhere in, every file, which I don't think is a large price to pay, but you might disagree, and we'll get to that shortly. RSpec isn't the only DSL that's commonly used, and you might not even have thought of it as a DSL. After all, it is just Ruby. You can also view the routes file of a Rails application as a DSL of sorts, or even the query methods of Active Record, and in fact the Sequel gem actually does optionally provide a mechanism to let you write queries more fluently by adding methods to String and Symbol and a few other classes using refinements, so that you don't affect the rest of your code base. DSLs are everywhere, and refinements can help make them more expressive without resorting to monkey patching or other brittle techniques. And so onto my last example. Refinements might not just be useful for monkey patching or implementing DSLs. We might also be able to harness refinements as a kind of design pattern, and use them to ensure that certain methods are only callable from specific, potentially restricted parts of our code base. For example, consider a Rails application with a model that has some sort of dangerous or expensive method on it. By using a refinement, the only places we can call this method are where we've explicitly activated that refinement.
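To illustrate the idea, here is a toy sketch, emphatically not RSpec's real implementation: a should method refined onto Object, taking a matcher as a simple callable. All names here are invented for illustration:

```ruby
module ShouldSyntax
  refine Object do
    # A toy `should`: the matcher is anything callable that returns
    # true or false for the receiver.
    def should(matcher)
      raise "expectation failed: #{inspect}" unless matcher.call(self)
      true
    end
  end
end

using ShouldSyntax            # only spec files would opt in like this

be_positive = ->(n) { n > 0 }
42.should(be_positive)        # passes; Object elsewhere is untouched
```

Because the refinement is on Object, it's visible on every receiver within the activated scope, but strings, numbers, and everything else outside your spec files never grow a should method.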
From everywhere else, all other normal controllers, views, or other classes, even though they might be handling the same object, even the same instance of that object, the dangerous or expensive method is guaranteed not to be available there. I think this is a really interesting use for refinements, as a sort of design pattern rather than monkey patching, and while I know there could be some obvious objections to that suggestion, and I even have some objections myself, I'm certainly curious to explore it a bit more and decide whether or not it's worthwhile. So those are some examples of things that we might be able to do with refinements. I think they're all potentially very interesting and all potentially very useful. And so finally, to the question that I'm curious about: if refinements can do all of these things in such an elegant, safe way, why aren't we seeing more use of them? It's been five years since they appeared, and almost three years since they were an official part of Ruby, and yet when I searched GitHub, almost none of the results are actual uses of refinements. In fact, some of the top hits are actually gems that try to remove refinements from Ruby. You can see in the description: no one knows what problem they solve or how they work. Well, hopefully over the last 25 minutes, we've tried to address some of that. Now, I asked another one of the speakers from this conference, who will remain nameless, what they thought the answer to my question might be. And they said, "because they're just bad", as if it was a fact. And my initial reaction to this kind of answer is somewhat emotionally charged, but my actual answer is more like, are they? Why do you think that? So I don't find this answer very satisfying. Why are they bad? I asked them, you know, why do you think that? And they replied, "because they're just some other form of monkey patching, right?" Well, yes, sort of, but also not really.
And just because they might be related in some way to monkey patching, does that automatically make them bad or not worth understanding? I can't shake the feeling that this is the same mode of thinking that leads us to ideas like "metaprogramming is too much magic", or "using single- or double-quoted strings consistently is a very important thing". Or that anything you type into a text editor can be described as "awesome", when that word should be reserved exclusively for moments in your life like seeing the Grand Canyon for the first time, and not when you install the latest gem or anything like that. I am deeply suspicious of "awesome". And so I'm also suspicious of "bad". I asked another friend if they had any ideas about why people weren't using refinements. And they said, "because they're slow". Again, as if it was a fact. And if that were true, that would be totally legitimate. But it's not. If you look at this blog post, which is actually from, I think, a few weeks ago, someone's done some nice benchmarking, and refinements make almost no difference to the amount of time it takes to dispatch Ruby method calls. So why aren't people using refinements? Why do people have these ideas that they're just slow or plain bad? Is there any actual solid basis for those opinions? As I told you right at the start, I don't have any neatly packaged answer, and maybe nobody does. When I proposed this talk, it really was a genuine question. I didn't know that much about refinements, and I wanted to know if there was an answer. So here are my best guesses, based on tangible evidence and the understanding that we now have about how refinements actually work. While refinements have been around for five years, the refinements you see now are not the same as those that were introduced half a decade ago.
Originally, they weren't strictly lexically scoped, and while this provided some opportunity for more elegant code, think of not having to write using RSpec at the top of every file, it also broke the guarantee that refinements cannot affect distant parts of a code base. It's also probably true that lexical scope is not a familiar concept for many Ruby developers. I'm not ashamed to say I've been using Ruby for over 13 years now, and it's only recently that I've really understood what lexical scope actually means. I think you can probably make quite a lot of money writing Rails applications without really caring about lexical scope at all, and yet without understanding it, refinements will always seem like confusing and uncontrollable magic. The evolution of refinements hasn't been smooth, and I think that's maybe why some people feel like nobody knows how they work or what problem they solve. It doesn't help, for example, that a lot of the blog posts you'll find if you search for refinements in Ruby now are no longer accurate, and even the official Ruby documentation is actually wrong. This hasn't been true since Ruby 2.1, but this is what the documentation says right now. A nudge to any Ruby core team members: issue 11681 might fix that, if you have a look at it. I think that some of this information rot can explain a little about why refinements have stayed in the background. There were genuine and valid questions about early implementation and design choices, and I think it's fair to say that some of those questions maybe took a little bit of the steam out of the new feature as it was being unveiled to the world. But even with all the outdated blog posts, I don't think this entirely explains why no one seems to be using them. So perhaps it's the current implementation that people don't like. Maybe the idea of having to write using everywhere goes against the mantra of DRY, don't repeat yourself, that we've generally adopted as a community.
After all, who wants to have to remember to write using RSpec or using Sequel or using ActiveSupport at the top of literally every file? Doesn't sound fun. And this points to another potential reason. A huge number of Ruby developers spend most, if not all, of their time using Rails. And so Rails has a huge amount of influence over which language features are promoted and adopted by the community. Rails contains perhaps the biggest collection of monkey patches ever in the form of Active Support, but because it doesn't use refinements, no signal is sent to us as developers that we should, or even could, be using them. Now you might be starting to form the impression that I don't like Rails, but I'm actually very hesitant to single it out. To be clear, I love Rails. Rails feeds and clothes me and enables me to fly to Texas and meet all y'all wonderful people. The developers who contribute to Rails are also wonderful human beings who deserve a lot of thanks. I also think it's easily possible, and perhaps even likely, that there's just no way for Rails to use refinements for something at the scale of Active Support. It's possible. But even more than this, nothing in the Ruby standard library even uses refinements. There's no call to refine anywhere in the Ruby standard library. Many new language features, like keyword arguments and refinements, won't see widespread adoption until Rails and the Ruby standard library start to promote them. Now Rails 5 has adopted keyword arguments, and so I think we can expect to see them spread to other libraries as a result. But without compelling examples of refinements from the libraries and frameworks that we use every day, there's nothing nudging us towards really understanding when they're appropriate or not. I said there were a number of quirks with refinements, or unexpected gotchas, and it could be that that is the reason why no one is using them.
For example, even when a refinement is activated and you can call the refined method directly, you cannot invoke it with methods like send, respond_to? won't report it, and you can't use it in convenient forms like Symbol#to_proc. You can also get into some really weird situations if you try to include a module into a refinement, where methods from that module cannot call other methods defined in the same module. But these don't necessarily mean that refinements are broken. All of these are either by design or direct consequences of lexical scoping. Even so, they're unintuitive, and it could be that aspects like these are a factor in limiting the ability to use refinements at the scale of something like Active Support. But as easy as it is for me to stand up here and make logical and rational arguments about why monkey patching is bad and wrong and breaks things, it's impossible to deny that even since the start of this presentation, software written using libraries that rely heavily on monkey patching has made literally millions of dollars. So maybe refinements solve a problem that nobody actually has. Maybe for all the potential problems that monkey patching might bring, the solutions we have for managing those are good enough. Things like test suites. And even if you disagree with that, which I wouldn't blame you for doing, perhaps it points to another reason that's more compelling. Maybe refinements aren't the right solution for the problem of monkey patching. Maybe the right solution is something like object-oriented design. I think it's fair to say that over the last two or three years, the Ruby community has become much more interested in object-oriented design. And you can trace that in the presentations that Sandi Metz, for example, has given, or in her book, or in discussion of patterns like hexagonal architecture or interactors or presenters, and all the gems that have recently appeared that help us use those patterns.
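Those dispatch gotchas can be seen in a short sketch, again using a hypothetical Shouting refinement. One caveat I should flag: this behavior is version-dependent. At the time of this talk, dynamic lookups ignored refinements entirely; Ruby 2.6 later taught respond_to?, public_send and Symbol#to_proc to honor them, so on a modern Ruby some of these lines behave differently:

```ruby
# A hypothetical refinement adding String#shout.
module Shouting
  refine String do
    def shout
      upcase + "!"
    end
  end
end

using Shouting # active for the rest of this file

puts "hello".shout                # direct calls always work: "HELLO!"
puts "hello".respond_to?(:shout)  # false before Ruby 2.6, true on newer Rubies
begin
  puts ["a", "b"].map(&:shout).inspect # Symbol#to_proc: NoMethodError before 2.6
rescue NoMethodError
  puts "Symbol#to_proc doesn't see the refinement on this Ruby"
end
```

So the method is really there when you call it by name, but some of Ruby's reflective machinery pretends it isn't, which is surprising until you remember that those lookups happen outside the lexical scope where the refinement was activated.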
The benefits that object-oriented design tries to bring to software are important and valuable: smaller objects with cleaner responsibilities that are easier and faster to test and change. All of this helps us do our jobs more effectively, and anything that does that must be good. And from our perspective today, there's nothing you can do with refinements that you cannot do by introducing a new object or a new method that encapsulates the new or changed behavior. For example, rather than adding a shout method to all strings, we could introduce a new class that only knows about shouting, and wrap any strings that we want shouted in instances of this new class. Now, I don't want to discuss whether or not this is actually better than the refinement version, partly because it's obviously trivial, and so it wouldn't be a realistic discussion, but mostly because I think there's a more interesting point. While good object-oriented design brings a lot of tangible benefits to software development, the cost of proper design is verbosity. And just as a DSL tries to hide the act of programming behind language that appears natural, the introduction of many objects can sometimes make it harder to quickly grasp what the overall intention of code is. And the right balance of explicitness and expressiveness will be different for different teams and for different projects. And not everyone who interacts with software is even a developer, let alone somebody trained in software design, and so not everyone can be expected to adopt sophisticated principles with ease. Software is for its users, and sometimes the cost of making them deal with extra objects or methods might not be worth the benefit in terms of design purity. It is, like so many things, subjective. Now, to be clear, just like with Rails, I'm not arguing in any way that object-oriented design is not good.
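For concreteness, the wrapper-object alternative mentioned above might look something like this; Shout is a hypothetical class name, not anything the talk's code actually defines:

```ruby
# Instead of refining String, introduce a small object that only
# knows about shouting and wraps any string we want shouted.
class Shout
  def initialize(string)
    @string = string
  end

  def to_s
    @string.upcase + "!"
  end
end

puts Shout.new("hello").to_s # prints "HELLO!"
```

No refinements, no monkey patching: the new behavior lives in its own class, at the cost of callers having to know about, and wrap things in, one more object.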
I'm simply wondering whether it being good necessarily means that other approaches shouldn't be considered in some situations. And so these are the six plausible reasons that I could come up with for why nobody seems to be using refinements. Which is the right answer? I don't know. There's probably no way to know. I think all of these are potentially defensible reasons why we might have decided collectively to ignore refinements, or why we might make a case to remove refinements from Ruby entirely. However, I'm not really sure that any of them is the answer that most accurately reflects reality. Unfortunately, I think the answer is probably closer to the one that we encountered at the very start of our journey: because other people have told us that they are bad. So let me make a confession. When I said this is not a sales pitch for refinements, I really meant it. I'm fully open to the possibility that it might never be a good idea to use them. I think it's unlikely, but it is possible. And to be honest, I don't even really care. I just want to make nice software. What I do care about, though, is that we might start to accept and adopt opinions like "that feature is bad" or "this sucks" without ever pausing to question them or explore the feature for ourselves. Now, nobody has the time to research everything. That would not only be unrealistic, but one of the benefits of being in a community is that we can benefit from each other's experiences. We can use our collective experience to learn and improve, and that's definitely a good thing. But if we just accept opinions as facts without ever even asking why, I think that is a bit more dangerous. If nobody ever questioned opinion as fact, then we'd still think that the world was flat. It's only by questioning opinions that we make new discoveries, that we learn for ourselves, and that together we make progress as a community.
The sucks/awesome binary can be easy and tempting and even fun to use, but it's an illusion. Nothing is ever that clear cut. There's a great quote by a British journalist and doctor called Ben Goldacre that he uses any time somebody tries to present something as being starkly good or bad. He says: "I think you'll find it's a bit more complicated than that." And this is how I feel any time anyone tells me that something sucks or is awesome. It might suck for you, but unless you can explain to me why it sucks, then how can I decide how your experience might apply to mine? One person's suck can easily be another person's awesome, and they're not mutually exclusive. It's up to us to listen and read critically, and then explore for ourselves what we think. I think this is particularly true when it comes to software development. If we hand most, if not all, responsibility for that exploration to the relatively small number of people who talk at conferences, or who have popular blogs, or who tweet a lot, or who maintain these very popular projects and frameworks, that's only a very limited perspective compared to the enormous size of the actual Ruby community. I think we have a responsibility, not only to ourselves but also to each other, to our community, not to use Ruby only in the ways that are either implicitly or explicitly promoted to us, but to explore the fringes, to wrestle with new and experimental ideas and features and techniques, so that as many different perspectives as possible inform the question of whether or not this is good. Now, if you'll forgive the pun, there are no constants in programming. The opinions that Rails enshrines, even for great benefit, will change, and even the principles of design are only principles. They're not laws that we have to follow blindly for the rest of time. There will be other ways of doing things. Change is inevitable. So we're at the end now.
I might not have been able to tell you precisely why so few people seem to be using refinements, but I do have one small request. Please, make a little time to explore Ruby. Maybe you'll discover something simple. Maybe you'll discover something wonderful. And if you do, please share it with everybody. Thank you very much. Does anybody have any questions? Yes. So I'm not really sure what the question was, but I'll take the question as: what was the history of refinements? So they were inspired by a concept called classboxes from a different language, which effectively does a similar thing, and they were originally proposed as a patch at RubyConf 2010. So literally five years ago, if RubyConf happened at the same time of year. The inspiration was to solve monkey patching, and I imagine that was because Rails was really becoming popular at that point, and people were encountering some issues with monkey patching. The camelize example actually does come from a real experience that Yehuda Katz blogged about. So that is the inspiration. The history is quite interesting, but it will take you a long time if you want to read it. There are 278 comments on the Ruby tracker issue that introduced it, over a period of two years, so. The next question was about refinements in the context of refactoring. This might be a bit confusing, but I think they're quite separate, because you're not really extracting a method from anywhere. It's not like an object is losing a responsibility or something like that. Okay, I think that's the time up now. Thanks very much for your time.