So just to make sure, you're all in the right room, right? You're here for the recovering-from-enterprise talk. It just seems like a lot of people for this presentation. The truth, maybe, right? OK, so let's get this show on the road. Recently, I took my son to Target. My son is six years old, and he is convinced that Target is a toy store that also just happens to sell clothes and towels and cleaning supplies. And of course, anytime you go to a toy store, it'd be criminal not to browse the toy section. So of course, whenever we go to Target, we have to go walk through the handful of toy aisles and say no a million times. No, you can't have that. No, we're not going to buy that. But it's part of the ritual, so we do it. And of course, the first aisle we have to visit is the Lego aisle, which is really only half of an aisle at the Target where we are. It's amazing, walking down the Lego aisle, how many different sets there are. I mean, you want a battle-axe-wielding dwarf? You can have it. You want a Lego shark? One piece, it's a shark. You can get one. Of course, there's the Indiana Jones Lego skull-of-death piece. And what self-respecting geek could possibly pass up a model of a star destroyer without even a hint of lust, right? What strikes me whenever I walk through the Lego aisle is how specific these sets are. Like, there are pieces that you would never use anywhere else. You wouldn't use that Indiana Jones Lego skull of death in the front yard of a Lego domestic house model, for instance. And you could take all these sets and put them together and create a super secret spy headquarters with a boulder trap that rolls out and crushes a passing band of battle-axe-wielding dwarves as they drive by in their Martian buggy, which just happens to be decorated with flowers and creeping vines. And you could do that in 10 pieces, right? Because there's stuff for everything in Legos. Currently, the Lego brand comprises some 900 distinct pieces. 
And over the life of the brand, they have produced over 13,000 distinct pieces. Now, that counts color and material as distinct, right? So if you were to exclude those permutations, you're still left with over 2,800 distinct Lego pieces over the life of the Lego brand. That's a lot of pieces. And if you were to sit with a bucket of Legos that had every Lego ever made and want to build something, it would be very overwhelming to look in that bucket and think, OK, where do I start? What piece should I use first? To be a Lego master really does require a very deep and intuitive grasp of all the different pieces you have at your disposal. Because there are so many different ones that you just have to know: OK, I'm going to need to do this kind of a thing next, and these pieces will fit that very nicely. Myself, when I sit down and do Lego, I stick with the rectangular bricks. I'm not a Lego expert. But even in spite of that, I still love to build with Legos. Who here enjoys Legos? Of course, Legos are awesome. We love them. Even though there are so many. I mean, part of the draw is because there are so many. There are so many specific pieces, and it's just kind of fun to get in there and play with the exotic ones and make them work. Even though, when you're building a big Lego model and you want to add on to it, you might have to tear apart part of the model and move stuff around to make it work, it's still way fun. But my son has a very short attention span, and so I'm given only a minute to look at all the Lego models before he wants to move on to the next aisle. The next aisle has a few more pastels, a few more colorful boxes, and amid the loud colors I'm drawn to the two square feet of shelf that comprise the Play-Doh section. Now, Play-Doh has a really bad rap as a preschooler toy. But it's fun for adults, too. I mean, yeah, you can sit there and roll a ball, make a cone, squeeze out a little square, whatever. 
But the real fun is when you start taking those basic shapes and building something. I was sitting down with my kids the other day, and they were playing with their Play-Doh. And I just started making little cubes out of Play-Doh, made a whole bunch of them, and then stacked them into an arch that was held together only by friction. Now, of course, you're not impressed, but my six- and four-year-olds were. And I was impressed, because it was so simple to do. I hadn't played with Play-Doh in years, but I was able to sit down and just start putting pieces together. And it was very fun. And you don't have to memorize a bunch of pieces. You don't have to have a lot of experience with Play-Doh before you can get in and just start making things. It's very fun. And it's super easy to extend your Play-Doh models, too. If you have a big thing you've made out of Play-Doh, say a Play-Doh house, and you decide you actually want to add a garage onto the side or something, you just graft that garage on and it just works. You can pinch stuff off, you can merge stuff in, and it's awesome. Now, interestingly, these two very different kinds of toys have similar uses, right? You use them to model, to put things together. But you wouldn't, for instance, use Play-Doh construction techniques with Legos. You can't roll a Lego brick into a ball, for instance. You can't take a Lego brick and break it in half. It's just not possible to use Play-Doh construction techniques with Legos. The opposite, though, is kind of true, right? Given Play-Doh, you could theoretically model a Lego brick, and then you could model another Lego brick. And then you could put the two Lego bricks on top of each other, and extend that to, in general, using Play-Doh to build with Lego techniques. But why would you do that, right? It's so unruly, so awkward to do, that most of us would maybe try it once just for the thrill of it, right? 
Some of you are probably going to go home now, get some Play-Doh, and try to build Legos out of it. But in general, it's not a very effective technique for building with Play-Doh. What you really want to do is learn the strengths of Play-Doh and the strengths of Lego and use them appropriately in those different environments. Now interestingly, this lesson took me a long time to learn, and not with Lego and Play-Doh, because that's pretty evident. It took me a long time to learn that it applies to programming languages as well. Let's consider Java. How many of you have in the past, or are currently, living in Java as your bread and butter? So that's a lot of hands. How many of you have bad feelings about Java? Let's see, there's a few hands there. Let's ask the other one: how many of you like Java? Enjoy it? See, good. There's some hands there, too. Me, I loved Java, and then I hated Java. And now, I won't say I love Java, but I recognize that it's a viable alternative solution. It's Legos. The reason I hated Java is because I found Ruby and was wanting to take those Ruby techniques back to Java, and it just wasn't working. Java is like Lego, right? It's rigid. You can't easily tear things apart in Java. It's very rigid that way. Java, in JDK 1.6, has 11,000 classes. Yes, 11,000. And that's just the public ones. That's not counting the internal classes that support those classes and so forth. So you've got this huge pool of pieces that you can build from. Let's just take the Java Collections API as an example. This is just the Collections API. There are 46 entries in that list. You've got your interfaces, the different implementations. I mean, a priority blocking queue. How many of you that have done Java for a living have ever used a PriorityBlockingQueue? I saw three hands, four hands. I didn't even know there was a PriorityBlockingQueue until I put this list together. There's a lot of choices there. Now, that's not a bad thing. 
That's just different from Ruby. If Java is the Lego of programming languages, it's pretty evident that Ruby is the Play-Doh of programming languages. I mean, Ruby has the same bad rap as Play-Doh does. How many of us have heard Ruby called a toy? I think all of us have heard it called a toy. Whether in our professional work or wherever, we've heard people say it's not fit for the real world. We know it is, and that's why we're all here. But it has that reputation. It's also very malleable and flexible, like Play-Doh. It's easy to take things apart, put them together, extend things at runtime. It's very easy to work that way. And compared to Java's 11,000 classes, Ruby has 1,400 classes. And that includes the internal and anonymous ones. Now, I'm just talking standard library here, both with Ruby and with Java. Here's Ruby's collections API. Two modules, and one of them, Comparable, is arguably not even related to collections; you use it with collections when you're wanting to sort the elements. And then the classes: there's Hash, Array, Set, and SortedSet. Those are the ones that came to mind. There's also Matrix and Vector, but those are more math than collection, so I didn't include them. So there's six in Ruby to Java's 46. And that really underscores the difference in philosophy between the two environments. I don't think you'll ever see the day when Ruby ships with 46 different options for collections. At least, I hope not. I hope that day never comes. Not because I think it's bad to have so many choices, but because I think that would destroy the difference between the two environments. Ruby's philosophy is really: give you the basic tools, and then let you build up from them. Java's philosophy is: here's every possible tool you could need; now build something with them. Both allow you to get to the same place. They just make you take different routes. And of course, with Ruby, you could technically use Java style to build applications in Ruby. 
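As a concrete aside on why those six pieces go so far: mixing Comparable into a class is enough to make it sort inside a plain Array, and Set from the standard library covers most uniqueness needs. The Version class here is my own illustration, not something from the talk.

```ruby
require "set"

# Version is an illustrative class; Comparable plus one <=> method
# makes it work with Array#sort, min, max, between?, and friends.
class Version
  include Comparable
  attr_reader :number

  def initialize(number)
    @number = number
  end

  def <=>(other)
    number <=> other.number
  end
end

versions = [Version.new(3), Version.new(1), Version.new(2)]
sorted = versions.sort.map(&:number)   # => [1, 2, 3]

# Set, from the standard library, covers "unique collection" needs.
unique = Set.new([1, 2, 2, 3])
unique.size                            # => 3
```

The generic Array plus one protocol method replaces what would be a dedicated sorted-collection class elsewhere.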
You could do all the different factories. You could avoid the dynamic runtime stuff that Ruby gives you. You could do it, but why would you? And that's the lesson that took me a long time to learn. So this is Exhibit A. This is an outline of a class from my Copeland library, which I wrote back in 2004. Copeland was a dependency injection container, and it was basically a feature-for-feature port of the HiveMind project in Java. How many of you have used HiveMind before, or know about it? A few. How many of you know dependency injection? A few more. For those of you that don't know, you're probably better off, but in a nutshell, dependency injection is just a way to decouple components in your application so that they don't know about each other directly. Copeland, like I said, was my first real attempt in Ruby. I was coming from Java, and I'd been digging through HiveMind's internals, so I was very familiar with how HiveMind worked underneath, and I was very indoctrinated in Java development style and technique. So, coming into Ruby, I started working on Copeland. And one of the things that I didn't like about HiveMind, and Java in general, is its dependence on XML for configuration. So one of the things I determined was that I would use YAML, which I felt was superior to XML. And I still do feel that way when it comes to configuration. And what Copeland would do when it started up is scan all the paths in the load path recursively, looking for these little YAML configuration files, and load those up, and those would then describe what the components were and how they related to each other. This, though, is more meta than that. This is the loader API. Because I thought, well, you know, it's just not right to exclude all those people who love XML. I mean, I'm just gonna support YAML out of the box, but what if someone really wants their configuration files to be XML? I mean, what if? We gotta support these people, right? 
I said, well, okay, I'm not gonna write one, but if someone wanted to, they could just register it. This loader API would allow them to register their own XML parser, and then Copeland would use that, okay? Now, that is just wrong in so many ways. Wrong, wrong, wrong. Never, ever do that, okay? Not because XML is bad, but because you should never, ever try to guess what people are gonna want. Don't ever build libraries or applications that way. Never play pretend or what-if. Work with what you need, what you know you need, when you need it. I mean, Ruby is the Play-Doh of programming languages. There is nothing to stop you from going back later and adding support for things. Copeland would have been much simpler if I had just implemented the YAML part. I mean, it would have been even simpler if I hadn't even done that, but I did. So it would have been better if it was just one opinionated choice. And then, if someone later came to me and said, you know, YAML's cool, but I would prefer XML, I could tell them, PDI, right? Go investigate that and come back with a patch, and I'll be happy to consider it. And that would have kept the code leaner: less to document, less to maintain, less to test. It's the whole you-ain't-gonna-need-it thing, YAGNI, right? I mean, it's a universal principle of programming, so it applies in Java too. I think Java just has a bad rap because of some prominent libraries that don't follow that, or didn't follow that, and so in general Java has a bad name when it comes to it. So let's move on to the next exhibit. This one's even worse, I think. It's another common Java pattern that does not translate well to Ruby, and that is the pattern of a class factory. Now, it's probably too small for you guys to see, and that's okay, because it would probably hurt your eyes to see it. But basically, HiveMind had the concept of a class factory, and of course I was doing a feature-for-feature port, so I moved that over too. 
The idea being that you register your classes with this class factory in various namespaces, and then when you need a class, you ask the class factory for it and you get it back. Now, that's just wrong in Ruby. It's pointless, okay? If you know the class you need, first of all, you can just instantiate it. But if you don't know it, if maybe you just have a string that has the name of a class in it, well, everything in Ruby is implicitly a class loader. You can do a class lookup using const_get, okay? If I have my module A, I just say A.const_get and then the name that I want, and I get it back, and I instantiate it, and away I go. Now, maybe you've got a case where you need to map arbitrary strings to class names. For instance, Net::SSH needs to map the SSH cipher names to the classes that implement them, and they aren't the same strings. Use a hash. It's as easy as that. You still don't need a whole framework for loading, finding, and registering classes. So, last example from Copeland. This one I know you can't see, and you're glad you can't. This is an example of a YAML configuration file for one of the examples that ships with Copeland. I stripped it down, too; this is actually shorter than it was. It was 106 lines of YAML for a 250-line Ruby program. I'm telling you, I had the Java indoctrination down. First of all, that level of indirection, of putting all the instantiation logic somewhere other than the class, makes it really hard to trace. You know there's this class, but you have no idea how it gets instantiated, no idea who's using it, where it's going. It's really hard. But even worse, the reason this was bad is because Copeland was using a static configuration file to configure something that was dynamic, okay? The loading and instantiation and initialization of classes and objects was trying to be done by a static file. Fortunately, those wiser than me showed me the way. Enter RubyConf 2004, four years ago exactly, more or less. 
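Before the story moves on, the two idioms just described, a const_get lookup and a hash of names to classes, might look like this in practice. The Ciphers module and class names here are illustrative, not Net::SSH's actual internals.

```ruby
# Illustrative module and class names -- not Net::SSH's real code.
module Ciphers
  class Aes128; end
  class TripleDes; end
end

# Everything in Ruby is implicitly a class loader: if the string you
# have is the constant's name, const_get is all you need.
klass = Ciphers.const_get("Aes128")
cipher = klass.new                    # instantiate and away you go

# When the external names don't match the constant names (like SSH
# cipher identifiers), a plain Hash replaces a registry framework.
CIPHER_CLASSES = {
  "aes128-cbc" => Ciphers::Aes128,
  "3des-cbc"   => Ciphers::TripleDes
}
CIPHER_CLASSES.fetch("3des-cbc").new
```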
This was my first Ruby conference, and I was excited because I was going to present HiveMind. I had caught the bug on dependency injection, and I was sure it was going to revolutionize the Ruby community, and so I was there to preach it. And I got up and I gave my hour presentation on HiveMind and dependency injection. Who here was at that Ruby conference? I know Jim was, James, okay, yeah. So you all remember being bored to tears, right? During that hour. I can remember Rich Kilmer sitting in the front row, seriously bored to death. And that's fine, because everyone else was too. But I can remember, when it was all done and I'd done my best to convert the masses, Rich raised his hand and said, why didn't you just use Ruby? And I was like, I did use Ruby. I wrote the whole library in Ruby, right? That's why I'm here. He's like, no, no, no, no. Why did you use YAML instead of Ruby? And I had no answer. I think I mumbled something about that's a neat idea, you know. It had never occurred to me. Well, a few days after RubyConf, I got an email from Jim Weirich. Apparently I'd got him thinking about dependency injection. He'd blogged about it before, and he'd used it before. He took another approach: he was going to try to explain it to Rubyists, because I had obviously failed. So he sends me a draft of his blog article and says, hey, could you look over this and let me know what you think? And it was an epiphany for me. He had taken this concept and written a very simple implementation of a dependency injection container. And he did it using a Ruby DSL and Ruby idioms. It was really elegant and very, very nice, and the article was awesome. And it's still online. If you go to his blog and search for dependency injection and Ruby, you'll find it. It's worth a read, even if you never use dependency injection. So I asked Jim if I could have his permission to take that mini framework that he'd done and run with it. And the result was Needle. 
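A container in the spirit of what Jim's article describes might be sketched like this. This is my own reconstruction of the idea, not his actual code: services are registered as blocks, and each block receives the container so it can look up its own dependencies.

```ruby
# A minimal DI container sketch: registration is just a Ruby block,
# so the "configuration file" is plain Ruby rather than YAML or XML.
class Container
  def initialize(&block)
    @factories = {}
    @cache = {}
    instance_eval(&block) if block
  end

  # Register a named service; the block builds it on demand.
  def register(name, &factory)
    @factories[name] = factory
  end

  # Look a service up, building it once and caching the result.
  def [](name)
    @cache[name] ||= @factories.fetch(name).call(self)
  end
end

container = Container.new do
  register(:logger) { |_c| "console logger" }
  register(:mailer) { |c| "mailer using #{c[:logger]}" }
end

container[:mailer]   # => "mailer using console logger"
```

The whole thing fits in a screenful because Ruby's blocks and instance_eval do the wiring that a framework would otherwise need a schema for.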
Needle was, as a successor to Copeland, much better. I'm still very proud of many things in Needle. It was very fun to develop, fun to think through, but fun in an intellectually masturbatory way. Ultimately, ultimately, the thing is useless. Super fun, neat ideas, never use it. As an example of why it was fun and at the same time useless, let's look at Exhibit D. Needle has this idea of pipelines. When you request a service from the container, it runs it through this series of pipelines, each of which does something different to instantiate and prepare that object for you. For instance, the proxy that you get back when you request an object might be something that defers the instantiation, so the first time you invoke a method, it then goes to the next step in the pipeline. The singleton stage, in this case, which says, all right, has this ever been instantiated before? And if it has, it just returns the one that was instantiated. If it hasn't, we go to the next step, which in this case is an interceptor, which lets you do kind of aspect-oriented programming, wrap things around method calls and things like that. When that's done, it jumps to the initializer, which actually initializes the object and does any setup that's needed, and then you get the object back. Now, as an implementation detail, it was really cool. It gave me a lot of power in Needle. It let me do a lot of different combinations of these pipelines to get different kinds of service models. You could do deferred instantiation. You could do singletons. You could do multitons. You could do all kinds of neat things. The error I ran into is when I exposed this as a feature of Needle. When I said, Needle is a dependency injection container, and you can write your own service models. Who in their right mind is going to want to write their own service models when they don't even know what a service model is? What I'm trying to say is: expose what you need, when you need it. 
There are lots of benefits to small APIs. Needle's, for what Needle is, is huge. I exposed everything as the API. What you want is something small, because it's easier to describe to people. It's easier to say, here's how you use my library. It's easier to document, and it's easier to support. If you have 2,000 method calls in your API, that's going to take a long time to document, and a lot of support, because people are going to be confused about which method to use when. It's easier for people to learn when it's small. It's easier for you to test, because you have fewer points to test, and it's less painful to expand on later. When it's small, you can open more up later. But if it's large and you decide you want to take something away, suddenly you're breaking backwards compatibility, and that can be painful in lots of ways. So that was Needle. And like I said, it was a lot better than Copeland, but it's still wrong in so many ways. And unfortunately, I was looking for a way to convince people that it was worthwhile, that it could simplify their code, that it could improve testability and maintainability. And at the same time as I was working on Needle, I was also working on a prototype of an SSH client library in Ruby, which became Net::SSH. So I thought, you know, it's pretty complicated. There are a lot of pieces involved in an SSH library. I'll bet I could use Needle to demonstrate why this is valuable. But it totally backfired, because it complicated it even more. Part of the reason is I got sucked into the what-if trap again. I mean, Net::SSH version 1 lets you plug in your own cryptography library. You'd have to write all the wrappers yourself, of course. And because there was no other cryptography library available for Ruby, you would actually have to write your own cryptography library too. But never mind that, you could do it, okay? Because you have to give people choices. 
What if you invented your own authentication method, or you wanted to support some obscure authentication method that OpenSSH supported? Well, you could write that and plug it in. What if you wanted to change the default SSH port? Not just say, I want to use this port to connect, but, I want the library to treat port 300 as the default SSH port instead of 22. Well, you could do that. What if you wanted to add a new key exchange algorithm to SSH? Well, you could do that too. You could customize every little thing, because I was on the dependency injection horse and I was riding for all I was worth, okay? I was convinced that this was going to be the future. And I wound up with something like this. This is just a very, very tiny bit of the Needle configuration that went into Net::SSH. This particular bit instantiates the transport session, the SSH transport session. And as you can see, every possible property of the session is itself another service in the container. So if I was trying to figure out how the flow of the program worked, first, following the code, I would see: oh, I'm accessing the transport session from the registry, from the container. Well, okay, let's figure out where that's defined. And so you'd comb through the code until you stumbled on this file. And then you're looking at this and you're saying, oh, okay, we're going through the packet sender. I wanna see how outgoing packets are being bundled up and sent. Well, okay, I know the name of it, so I'd have to go and find where that is defined, and then step through yet more to figure out the pieces involved with that before I even got to the code. And once I got to the code, then I'd try to figure out where all the different properties are coming from, how they all interrelate. It exploded in my face. Part of the problem is, I mean, separation of concerns and modularity are good things. 
Okay, you don't want one big file that has all your code with no classes, no modules; we all agree that's evil. But the other extreme is evil too. If you make every 10 lines of code into its own component, you're gonna wind up with component soup, okay? And I hope you're hungry, because you're gonna have a lot of it. Component soup is super hard to test, because you have millions of components that each need to be tested, and tested in combination. So you're looking at, you know, factorial numbers of tests that you have to come up with. It's harder to document, because every single one of those needs to be documented. It's harder to maintain, because it's really difficult to keep track of the dependencies and relationships between them. And when you add dependency injection to the mix, it becomes downright impenetrable, because the dependencies between the classes aren't even evident in the classes themselves anymore unless you've explicitly documented them. It just gets really, really bad. And the other thing is, when you have component soup like that, the line between your public and your private API gets fuzzy. When you have two classes, it's really easy to say that this one is private and this one is public. But when you have 200 classes, the line that divides them becomes a snake that meanders all over, because you're like, well, this one is kind of public, let's just grab that one and include it and document it, and this one too, and it becomes really hard to tell where that line is. The problem I ran into is that I was using dependency injection at too granular a level. Dependency injection works great for very complicated applications, when you're using it to combine the high-level components of the application. But in my case, first of all, I was using it in a library, not even in an application. And even though it was a complicated library, it wasn't nearly complicated enough to need dependency injection. I lost my train of thought there, sorry. 
At any rate, make sure you can justify the additional complexity that dependency injection will add, especially if you're gonna go for a dependency injection framework, which I feel is completely unjustified in Ruby. Because Ruby lets you do dependency injection very naturally, without a framework at all. This is something I use quite a bit. I have some class A and class B. In class B, I'm saying, okay, I want to instantiate a new client. Now, normally I would just put A down in there, A.new, and go with it. That just gets a little hard to test sometimes. It makes it harder to inject something, like a mock that you want it to return. You wind up having to either override the constant or redefine the method on the fly or something. With this, you can just call new_client and pass in the class you want to be the mock, and you're good to go. A little more magical is this version where you have a method factory, I'm sorry, factory method down here, where client just returns the class that you want to use, and then new_client just calls client to get the class. This is really handy in tests, where you can just subclass B, override the factory method, and away you go. And neither of these requires a framework. They're natural, pure Ruby. You didn't have to require anything. They didn't complicate your code. They didn't add more code that you have to comb through. It all just comes for free, basically. So let's look at the lessons I've learned over the last four years. First thing I've learned is that direct translations are rarely accurate. If you're coming from Java, take some time to read Ruby code written by someone very familiar with Ruby, because you'll repay that investment many times over. Otherwise you're going to spend time writing Java in Ruby. Which is possible, to some extent, but it's just awkward and in general not very efficient. That comes down to: use your environment efficiently. Make sure you know how to use Ruby. 
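The two injection styles just described, passing the class in as a parameter with a default, and the overridable factory method, might be sketched like this. The class names A, B, and FakeA are illustrative placeholders, as on the slide being described.

```ruby
# A stands in for the collaborator being instantiated.
class A
  def ping; "real"; end
end

# Variant 1: the collaborator class arrives as a parameter with a
# sensible default, so production code never has to mention it.
class B
  def new_client(klass = A)
    klass.new
  end
end

# Variant 2: a factory method returns the class; tests subclass and
# override just that one method.
class B2
  def client
    A
  end

  def new_client
    client.new
  end
end

# A hand-rolled stand-in for tests.
class FakeA
  def ping; "fake"; end
end

class TestB2 < B2
  def client
    FakeA
  end
end

B.new.new_client.ping          # => "real"
B.new.new_client(FakeA).ping   # => "fake"
TestB2.new.new_client.ping     # => "fake"
```

Neither variant requires a framework; both are plain Ruby method dispatch.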
If you were gonna be a professional Java programmer, you'd want to be very sure that you knew how to use the Java standard library. All 11,000 classes of it. In Ruby, avoid static configuration. There are times where it's appropriate. For instance, if you're trying to interoperate with a non-Ruby library, you might need to read static configuration. But DSLs really are where it's at. It's like Matz said in his keynote: Ruby is a meta-DSL. It excels at DSLs. And when you start embracing DSLs, you'll find that your programs read better. They're easier to document. People can learn them faster. DI frameworks are unnecessary in Ruby, I will say that. In environments like Java, where it's much harder to do things at runtime, to change and extend objects, closures, things like that, frameworks may be more appropriate. But in Ruby, Ruby is itself a framework that makes it easy to do things like dependency injection. So if I were you, I would just say, cool, Jamis wrote Copeland and Needle, and never, ever look at them, okay? Because they're abominations. That's all there is to it. Lastly, code just in time, not just in case. That's the key. If you take nothing else away from this, walk away with that. Ruby is the Play-Doh of programming languages. It's so easy to extend. It's so easy to graft things together. You can take an object and extend it at runtime with a module, with two modules, with a whole bunch. You can monkey-patch, you can do all kinds of stuff. It's super, super powerful. There's no excuse to sit and play what-if games in front of your keyboard. Just sit down with what you know, code it, and when things change, adapt. So there's the link to Jim's article. And it links to the Copeland and Needle documentation, if anyone's really feeling like they want some pain. But other than that, I'm done. Any questions about Copeland or Needle or dependency injection? Other things I've beaten myself with? Yes. 
So, I agree that a dependency injection framework is not necessary, but what about the registry pattern, where you declare all your classes and then kick off your application? Okay, the comment is: he agrees that a dependency injection framework is unnecessary, but what about the registry pattern, where you register all your classes up front and then kick your program off? He says that he believes that would reduce the need for things like monkey patching. I agree, there may be needs for that. Like I said, very complex applications require more complex designs. But I think 99% of the time, the registry pattern in Ruby is unnecessary too. And I disagree that it would reduce the need for monkey patching, because that assumes you're going to know ahead of time everywhere that someone's gonna want to monkey-patch. Even if you do a registry pattern and you declare some class, someone might just want to change the implementation of one method in that class. Which, I guess you're saying, they could just put their own implementation of the class in, but that still feels like monkey patching to me, because you're still going in and surgically changing one method. So, like I said, there are places for a lot of this. I just think, like you said, the framework is unnecessary, but the patterns have value. Yes? It seems like a lot of what you're talking about is gratuitous indirection. So if you were writing a refactoring pattern, the smell would be gratuitous indirection: indirection to indirection to indirection. What would the refactoring be? How do you fix that? That's a good question. If this were an anti-pattern and it were called gratuitous indirection, where this redirects to this, to this, to this, what would the solution to that anti-pattern be? In general, I would say try to find a way to get rid of the indirection. I mean, sometimes the indirection is there for a reason. 
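One legitimate reason is deferred instantiation, and even that doesn't need a container: a memoizing method is enough. This is a minimal sketch with made-up class names, where the class-level counter only exists to prove the constructor runs once.

```ruby
# ExpensiveResource is illustrative; the counter stands in for
# observing slow, costly setup work.
class ExpensiveResource
  @@constructions = 0

  def self.constructions
    @@constructions
  end

  def initialize
    @@constructions += 1   # pretend this is expensive
  end
end

class Session
  # The ||= idiom defers construction until first use and caches the
  # result -- the effect a lazy-proxy pipeline stage would provide.
  def resource
    @resource ||= ExpensiveResource.new
  end
end

session = Session.new
ExpensiveResource.constructions   # => 0, nothing built yet
session.resource
session.resource
ExpensiveResource.constructions   # => 1, built exactly once
```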
Like when you're dealing with deferred instantiation. I mean, sometimes, if something is really expensive to instantiate and you might not need it to be instantiated, then you might use closures or something to do that. But in general, I simplified. Net::SSH version 2, for instance, totally ripped out Needle, and I rewrote a lot of it to not use all of that. And that took out a lot of the indirection, because the indirection was forced on me by Needle in version 1. You would say, this is referencing some service, so you would have to go find the definition of the service, which showed you the class to use and the services being assigned to its properties, and then you'd go through the chain. Now, if you look at the code, I'm instantiating the class with a list of things passed as parameters. So I ripped out like four or five levels of indirection just by that. I don't know if that answers your question, but that's the approach I'd take. Yes. You said earlier that dependency injection and inversion of control containers were useful in complex applications, where you have high-level services working with each other. And then later on you said that you never need them in Ruby. Does that imply that Ruby applications are never that complex? Okay, he's saying: I said DI and IoC can be useful, but then I said you don't need them in Ruby. Does that imply that Ruby programs can't be complex, or aren't complex enough for that? Actually, what I said was that dependency injection, the pattern, is valuable. You can use DI or inversion of control in your programs without needing a framework for it, like Needle. You can use dependency injection in your programs without needing to use Needle or Copeland. That said, the frameworks themselves have value in other environments like Java, where it's much harder to extend things at runtime, to dynamically assign properties and so forth. 
You might need a framework that does a lot of the dirty work for you in that case. But I'd say Ruby programs can definitely be complex enough for it. I count myself fortunate in that I've never had to write a program that complex in Ruby. I never had to do it in Java either, but I worked for a shop where they thought it was complex enough, and we had to learn HiveMind. But I think it takes extraordinary complexity to deserve dependency injection frameworks. Does that answer your question? I think so. Also, sorry for taking all the questions. In situations where you're working on large distributed enterprise teams, where multiple teams are working in the same context, dependency injection has been used as a way to basically slot components in and out. How do you work in a situation in Ruby where you're building pieces against an interface, and they have to work against another team's component? The question is: if you're on an enterprise team, I assume where the different people are distributed, or working on very different parts of an application... He says in the past, dependency injection has been used as a way to have an interface where you can just plug in components and test them in the absence of the other components. First of all, I think that problem is orthogonal to dependency injection, because you can easily set up a mock without needing dependency injection. If this component just plugs into something, and I just want to test how it integrates with that something, I don't need dependency injection, I just need a mock that I can plug it into. Dependency injection... let me take that back. It is a form of dependency injection, because you are injecting that mock in, but it's a very limited form, because it's an interface that you're just hooking it up to blindly and testing how everything hooks up.
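The "mock you can plug it into" point amounts to plain constructor injection, which takes no framework at all. A minimal sketch (all names hypothetical): the component receives its collaborator as an argument, so a test can hand it a hand-rolled fake.

```ruby
# The component under test takes its collaborator as a plain
# constructor argument: dependency injection without a framework.
class Notifier
  def initialize(mailer)
    @mailer = mailer
  end

  def alert(message)
    @mailer.deliver("ALERT: #{message}")
  end
end

# A trivial hand-rolled mock that just records what it was told to do.
class FakeMailer
  attr_reader :sent

  def initialize
    @sent = []
  end

  def deliver(body)
    @sent << body
  end
end

mailer = FakeMailer.new
Notifier.new(mailer).alert("disk full")
mailer.sent  # => ["ALERT: disk full"]
```

The real mailer and the fake only need to agree on the `deliver` interface, which is exactly the "conforms to some basic specification" idea that follows.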
So I don't think that dependency injection... I don't think distributed teams or enterprise teams are by themselves a justification for dependency injection. You can do your testing, your integration testing, all of that; you can do your development just with a mock that conforms to some basic specification, and you can hook it up to that. Yes? I think one of the things that DI and registries are driving at is loose coupling and high cohesion, and for developers who don't know those terms, that's why they're good. Okay, the statement is that dependency injection and registries in general encourage... what was it you said? Loose coupling and high cohesion, which are definitely good things. You don't want to tightly couple high-level components, especially. Like I said, the problem I ran into with Net::SSH was that I was decoupling everything. At some point it's just easier to say this low-level component knows about this other low-level component, while the larger components they're a part of interact independently, without explicit knowledge of who else they're interacting with. I'm totally in favor of that. I disagree that you need dependency injection frameworks or registries to accomplish that in Ruby, because Ruby's dynamic nature lets you do things like const_get, where you can just look up a constant. You can pass things in as parameters. I mean, there's still the dependency injection aspect, but it's not a framework, first of all. I don't think you need a framework to do that in Ruby. And second of all, what I'm trying to say is: it obviously drives the design of your application, and it can lead to great benefits, but it's not as critical as it is in environments like Java, I guess is what I'm saying, where you need to think carefully ahead of time and make sure that you've decoupled everything.
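The `const_get` technique mentioned above looks something like this (class names are hypothetical): the implementation is chosen by name at runtime, which keeps the caller decoupled from any particular class without a registry.

```ruby
# Two interchangeable implementations of the same small interface.
class JsonFormatter
  def format(data)
    data.inspect
  end
end

class PlainFormatter
  def format(data)
    data.to_s
  end
end

# The name could come from a config file or a method parameter;
# Object.const_get resolves it to the actual class at runtime.
formatter_name = "PlainFormatter"
formatter = Object.const_get(formatter_name).new

formatter.format(:hello)  # => "hello"
```

Swapping implementations is then a one-string change, with no wiring layer in between.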
In Ruby, it really is just so easy to pull things apart and rearrange them that if you discover tight coupling, you can pull it apart then. But in general, I think tight coupling is not as bad as it's made out to be. I think you shouldn't run away from code where you see an explicit reference to another class, for instance. That's not necessarily a bad thing. It can be, and you should watch for it, but be pragmatic. Yes? I think one way to look at it is that dependency injection is a way to make Java and other fairly rigid languages slightly more flexible, right? So to use your original analogy, it's sort of like if you had some technique that would make Lego blocks a little more flexible: it would be very applicable to Lego construction, but you'd never need it or use it in Play-Doh construction. That's a very good point. His point was that dependency injection is a way to make environments like Java more flexible. So to use the Lego and Play-Doh analogy, it's like a way to make your Lego blocks a little more flexible. But in an environment like Play-Doh, where everything is already very flexible, you don't need that so much, because you already have all that flexibility. That's a good point. Time's just about up. One more question, anyone? Okay, thank you very much.