And this is Scaling Rails. Real quick, though, I wanted to say we're hoping to make this pretty interactive. We're hoping to get a lot of questions from the audience, so if you want to participate, it'd be easier for us if you move down a little bit, because then we can hear you when you're screaming at us. And also, we discovered we have a very large laser pointer, so if we don't like your questions, we'll be burning out your retinas.

Yes. So. Oh, history lesson. That makes it sound so professorial. All right, so I was with the group at The Point back when it was The Point, so I watched this whole thing explode. I've got, like, a bajillion stories. I can never remember them on tap, but whatever, just ask questions. So the first Rails stack project for The Point that we deployed was with Engine Yard, back in July 2007. You basically know their whole stack, right? It's like LAMP, sort of, but with Ruby, so it's LAMR, "Lamer," or whatever. And it's not Apache either, right? It's nginx or whatever. But there were a few key things they did for us that were awesome and that paid off later. I'm sure we'll talk about it more, but the whole split memcached setup they give you, where you cache sessions and fragments separately: that stuff was all in place in the beginning, and it paid off, like, two years later, which is kind of awesome.

So anyway, it's worth describing what The Point was, because it's exactly what turned into Groupon. We had this fundamental model of people getting together to accomplish goals, subject to some constraint in the middle, like X number of people to accomplish something. And that took on several different models right away. We brainstormed all this stuff over three or four weeks, and the five we arrived at were: Ultimatums, where you get together to topple some huge company, like AT&T or whatever, for screwing over its user base. Social Actions, which I guess are like protests, protests with teeth. Fundraisers, which were huge and still are huge; that's even part of the Groupon model to this day. And then the two interesting ones, which we brainstormed back in '07 as well. One was the market concept, which is Groupon as you know it, where a merchant wants to offer something to a group of people, but will only do it for a certain guaranteed amount of purchases. The one I really loved, though, was the inverse market, which was people getting together and demanding something of a merchant. So think Dreamcast: you really love the Sega Dreamcast, and you want it to come back, and Sega won't do it, so you promise them you have 100,000 people who'll buy it, and they'll sell it to you. We never pushed on that one. I really wish we would have. So if somebody wants to take that and be, like, Groupon number two, or the inverse Groupon, go ahead and run with it; I'll help. These guys are taking notes over here. We're watching you. No, we're not. Oh man. They have laser pointers too. Oh. Is yours this large? Damn. Yeah, Andrew, if you're seeing this, don't kill me. Sorry. My CEO's just gonna, like, stab me when I get home.
Okay, so anyway, all these different models, the ultimatum stuff, the social stuff, the stuff about philanthropy and altruistic things: it just didn't work. People don't care. We were all really saddened by the fact that people just generally don't care about helping each other. But they do care about money, so it's kind of... I don't know, I'm still... They care about helping themselves. They do. That's cool, yeah. Anyway, after coming up with all these different schemes, and thinking about the market concept and trying it out with a few vendors and a few merchants, without the name Groupon at the time, we decided to rewrite the entire app and rework the whole model around what we knew was working. I'll talk about that later too, but that was a really interesting point in our history.

Yeah, the lesson here, from an entrepreneurial standpoint, is that Rails gives you a lot of flexibility to change direction if you feel that's something you need to do for your business. That can be hard in other stacks, where it's not so easy to just rework the model, have a really great test suite infrastructure and all that stuff built in, and say, well, we need to go this direction: go. I wasn't around for the early days of that stuff, but I can still see the repercussions of it in the system, and that would have been really hard to do in a lot of other stacks. Not all of them; there are other great dynamic stacks out there you can use. But Rails certainly helped Groupon a lot in becoming Groupon from something else, right? In throwing something out there and saying, well, we think this is a good idea, and then being able to refine it, change direction, and turn it into what became Groupon.

Okay, so talking about what really kicked off Groupon: I feel like I sort of touched on it, but the economy totally tanked, right? Every startup out there had to really decide what it was doing and focus on it. Same for us. We had all this investment, and we really had to figure out what was working in what we were doing, and focus only on that. So that's when we came up with the name Groupon.com. Our editorial chief, Aaron With, came up with it, which was pretty awesome. I've heard so many people claim they came up with the name later on, but whatever; the dude was high on pad thai at the moment and just blurted it out.

Wasn't the domain originally GetYourGroupon, or something like that? Yeah, because Groupon.com was taken, and we had no idea if it was gonna work, so we were gonna chase that guy down and try to get it. We ended up settling with him for some crazy money; it was sad. I would have taken the kneecap-breaking approach. I'm kidding, I wouldn't have done it. I wouldn't have done it. So anyway... you don't lie very well, Mike. I know, it's terrible.

So the first version of it: this was actually one of the problems with Rails for us at the time. It is flexible, it lets you change direction really quickly, but this was a total prototype of an idea, so we really wanted to do it outside the Rails stack, just to try it. So what we ended up doing, and this was our CEO's idea, right?
He still loves PHP to this day, so he set up WordPress on a Media Temple box and threw up a blog, and we created this Flex widget and a corresponding API on our app that would handle purchases and showing deals and all this other stuff. That actually was awesome, because we quickly iterated on what Groupon would be with a few test merchants and a few test customers. Once it was working, it was this great loop we had: the site, the Flex widget, and the Rails stack. But when it started to take off, the Media Temple server just totally fell over. For every one web request Media Temple served, we'd be serving three API requests, and the Rails stack was totally fine. So I remember we ran our first Cubs deal and everything keeled over, and Andrew and Ken were just like, all right, we have to get this thing onto the Rails stack completely, right now. So we had a weekend to port all this rapidly growing domain logic in the WordPress stack into the Rails stack. It sucked doing it over a weekend, but it was super easy, because Rails is just awesome for that kind of stuff, including writing the ActiveRecord pieces to pull the WordPress data into the domain in a clean way. It worked out really well.

So anyway, that's right at the moment when things started really picking up. That's what the traffic looks like. It only goes back to '09, but if you zoom in, it still looks like that even way at the bottom. It's been nuts basically ever since that Cubs deal, it feels like. Another thing, too: a lot of people talk about how we had so many new features right when we were growing really fast, and it's really because we had the impetus and the momentum from two years previous. It didn't just start in late '08; we were already full steam ahead with Rails. And I don't know what those dips are about. Actually, I do know. You probably can't see it reflected, but every Christmas, right around Christmas, everything dips down massively for about a month. Three years running; it's crazy. And I think somewhere in there we offended somebody; that's why it dips down.

But a really cool thing to point out, if you guys are building Rails applications, is that if you make a good choice of hosting provider and you're using pretty much the standard Rails stack (memcached, Rails, MySQL or the database of your choice), you can go a really long way. Even today, the meat of Groupon is still running on that stack. So it's impossible to complain that Rails doesn't scale. The things that scale, or don't, are architectures. When I was preparing this talk with Mike, I was like, cool, scaling Rails. And as I went through the different things we've had to do, the real places where we've had to get outside of that model, none of it was about scaling Rails. It's just about scaling a web application, right? If you took Rails out and plugged anything else into that app tier, if you were a Groupon clone, it would scale pretty much exactly the same way. Rails just makes a lot of things easier and complicates some other things. It really is truly that glue layer where you're just shuffling data around between different places. So for you guys with startups or with little Rails apps, that's a really reassuring thing: if you have a good business model, you can get a really long way on the core stack.
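As an aside, the ActiveRecord-over-WordPress porting mentioned above tends to look something like the following. This is a minimal sketch assuming the Rails 2-era API; the Deal model and the column mapping are hypothetical, not Groupon's actual port code.

```ruby
# Point a second ActiveRecord connection at the legacy WordPress
# database (a separate "wordpress" entry in database.yml).
class WordpressPost < ActiveRecord::Base
  establish_connection :wordpress
  set_table_name "wp_posts"   # WordPress's real table name
  set_primary_key "ID"        # and its capitalized primary key
end

# Porting then becomes ordinary Ruby over two connections.
WordpressPost.find(:all, :conditions => { :post_status => "publish" }).each do |post|
  Deal.create!(:title => post.post_title, :body => post.post_content)
end
```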
So the rest of this is gonna be about specific stories in scaling our architecture that we felt were interesting and wanted to share with you. So I wanna remind you again: if you guys have questions as they pop into your mind, please feel free to raise your hand. We'll try to make this a little more interactive.

Just curiosity: that graph, is that 16 million requests per month, approximately? Unique visitors. Unique visitors, okay, so your requests are significantly higher than that. Yeah. It's really just the basic shape of the graph, though; if you graph our database size, pretty much every metric you can think of, they all look similar. So it's kind of tough to figure out which one to show that would be meaningful, but number of eyeballs seems to be the thing that matters in the end anyway.

And, like everyone else who scales a web application like this, the place we end up falling over is always the database first, right? So really, every strategy we have is about reducing load on the master database in various ways. A lot of our talk is about that, some of the things we're doing there, in ways that aren't necessarily intuitive. And then we're gonna tell some stories about some interesting bits of the application we had to put together.

So I'm up first. This first story is about a bit of the application that Mike and I both worked on, and it really stemmed from a business problem. It has to do with scaling the business, not really scaling the app stack. What a lot of people don't realize initially is that Groupon has two very large customer bases, because Groupon is essentially a marketplace. Groupon is selling products from merchants; that's one set of customers. Then there are users; that's another. And the problem we had early on, and continue to have, isn't servicing the purchasers. We have that pretty much under control. It's being able to service all the merchants that want to run on the site. And that was a problem with the model we started with. Initially it was a deal-a-day site: one primary deal featured on the page (I'm sure if you've seen Groupon, you know what that looks like), and maybe one other deal they'd call a side deal. So that real estate was really, really expensive. What we needed to do was run more than two deals a day, run more merchants, so we could service more of them and hopefully stop them from going to our competition. Which we haven't done an exceptionally good job at. But the solution... you think, wow, I have to take this huge pool of deals and this huge pool of users and all these different metrics, and then calculate who gets the best one. That's, like, a nerd-gasm problem, right? Awesome, I get to use everything. So we had the engineering team together and we were throwing stuff out there: Hadoop, MapReduce, put this in the cloud, have a gajillion servers, crunch all this data. But where we ended up starting was: wait a minute, let's think about this. Let's think about what we can really do out of the gate. Because it turned out the hard problem for us wasn't the engineering problem of building the engine. It was all the different integration points in the application. So we have to calculate all this data, right?
We need all the user information, we've gotta get it all out of the database, ship the data over here, crunch it, send it out in millions of emails, and have all that data available on the site, so that when you come to the site you can get the stuff that's for you. That was really a much harder problem than the engine itself. So we kind of made an executive decision to defer the really complex, fun things, unfortunately, and we just wrote it in Ruby. And the incredible thing was, our assumptions were totally wrong about what kind of model we needed to get away with initially. Until about six months ago or more, we had an engine that supported the entire Groupon site and was completely written in Ruby, and we were processing selections for 20 million users a day. It was the first part of the stack where we pulled out a whole separate vertical, for this engine. We initially started with just the core model, using ActiveRecord, pulling the data and processing through it, and as we were optimizing we really quickly got to the point where we just had to throw ActiveRecord away, because it was too slow. So we were just pulling data right out of the database into hash maps, crunching on that, doing a big transform, and spitting out something at the end. And that worked incredibly well. We didn't need to commit to Hadoop and all that big infrastructure, and learning new languages and methods of computation that a lot of the team really wasn't familiar with. It just wasn't necessary.

So I think a really important lesson I took away from that is: never be afraid to start small, and never be afraid to take that small thing all the way into production if it's working for you. And it did work for us. The engine has changed quite a bit since then, and we are definitely going to get to the point where having a gazillion servers in the cloud doing big MapReduce operations happens, as the engine gets more sophisticated and we start contemplating larger and larger data sets. But it's definitely not where you need to start, and don't feel compelled to start there. Because the interesting part of the algorithm has nothing to do with whether you're constructing it in MapReduce or anything like that. That can all be done with bread-and-butter object-oriented language stuff. That's all you really need most of the time. One thing to mention, too, is that the first steps you take toward an algorithm like that are such a huge lift right away that you quickly get into micro-optimization. I mean, if you're looking at gender, if you're looking at location, that's a way bigger win right away than the machine learning system. I can't even describe the machine learning algorithm, you know? But actually, that part's not even live yet, so... Shh! No, no, no, no! Anyway, yeah, it's awesome stuff. Relevance has been the most fun project I think we've been a part of. And it's the only thing we've worked on together, because we usually have to parallelize as much as possible on the people side of things. Yeah.
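To make that "throw ActiveRecord away" step concrete, here's a minimal sketch of the hash-map style, assuming a booted Rails 2-era app; the table, columns, and best_deal_for helper are hypothetical stand-ins, not Groupon's actual engine.

```ruby
conn = ActiveRecord::Base.connection

# select_all returns plain hashes with string keys, skipping the
# AR object instantiation that gets expensive at millions of rows.
subscribers = conn.select_all("SELECT id, city_id FROM subscriptions")

# Index once into in-memory hash maps, then crunch without the database.
by_city = Hash.new { |h, k| h[k] = [] }
subscribers.each { |row| by_city[row["city_id"]] << row }

# The big transform at the end: pick a deal per subscriber and emit
# (subscriber_id, deal_id) pairs for the mailers and the site to consume.
selections = []
by_city.each do |city_id, rows|
  deal_id = best_deal_for(city_id)   # hypothetical ranking helper
  rows.each { |row| selections << [row["id"], deal_id] }
end
```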
One other story I have to tell here: we did avoid the giant architecture mistake of jumping right into Hadoop and doing something huge in the cloud, but we didn't avoid a smaller architectural mistake, where we thought we were going to want to use Redis to store the data at intermediary parts of the transformation. So we'd do a bunch of crunching and then push it out to Redis, then pull it down in different processes, do more crunching, and push it back out to Redis. And as we were building the engine to work that way, we decided that instead of using a bunch of mocks to fake out our interactions with Redis (it was interactive enough, and for the way we wanted to express our tests), we'd just write an implementation of the Redis API in Ruby that we could use in our tests. That way, when you're describing tests, you can say: I put some stuff in Redis, I make some queries, and I get this stuff back. And that all ran against our in-memory Redis. We called it pseudo-Redis. About halfway through building out the first prototype, we realized the architecture didn't call for this Redis store at all. It was just total YAGNI. And what really saved us is that we had this pseudo-Redis, itself tested, that acted just like Redis for all the behavior we needed. We were able to just flip a switch and start using pseudo-Redis instead of the real Redis, with it all in a single process, in memory. And that was an instant, huge performance gain for us. That was weird, man, because I wrote that first pseudo-Redis, and when I saw how you guys were using it, I was like, no way, this is nuts, this is like evolution. It just happens; it's so cool. I know, it's kind of a funny story. It worked out for us, right? And we still use pseudo-Redis to this day, because we've really expanded our use of Redis in other places. It's still by far mostly in tests, because that's what its initial purpose was. But in this case it saved us a ton of time early on in the engine, just by letting us switch to this other model and see how it worked, see how it performed. It worked great.

Any questions about this? You, sir. Because, I think, we went too far in engineering the prototype before we actually started to work on something. We had this model in our heads where we were like, okay, we'll crunch some stuff, we'll put it in Redis, and then somebody else will pull it down and crunch it. And I think we made a mistake in thinking that would save us memory, save us heap space: we could stream all this stuff off disk, straight through us into Redis, and then stream it back through into Redis again. And it just didn't work out that way. The cost of shipping all that across the wire, even to something crazy fast like Redis, for millions and millions of records... it's just way faster to keep it all in the same process. So when we made that realization, we were like, oh crap, do I have to re-architect the whole thing? No: we'll just use pseudo-Redis. Hey, and it worked. The other issue with Redis is that it's still not truly distributed, so you're kind of limited to the memory on the one box that's running it. So if our needs, after a few months of growing, got to be beyond the size of what Engine Yard was willing to give us on one slice, we'd have had to shard somehow, or partition ourselves with Redis. And the Redis 2.0 distributed stuff is on the way, I guess; I'm not sure where it's at. I heard it's very close. I'm excited about that, the Redis clustering. I am too; we use Redis a lot, so it would really help us. So we'll take one more question on this one if anyone has one, and then we'll move on. Nothing? Okay.
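For flavor, here's a from-scratch sketch of the pseudo-Redis idea: an in-memory object that answers a small slice of the Redis API, so code written against that slice runs the same whether it's handed a real client or the fake. The method coverage here is illustrative, not Groupon's actual class.

```ruby
class PseudoRedis
  def initialize
    @data = {}
  end

  # Strings
  def set(key, value)
    @data[key] = value.to_s
    "OK"
  end

  def get(key)
    @data[key]
  end

  # Counters (a single process, so plain Ruby is atomic enough here)
  def incr(key)
    @data[key] = (@data[key].to_i + 1).to_s
    @data[key].to_i
  end

  # Lists
  def rpush(key, value)
    (@data[key] ||= []) << value.to_s
    @data[key].length
  end

  def lpop(key)
    (@data[key] || []).shift
  end

  def llen(key)
    (@data[key] || []).length
  end
end

redis = PseudoRedis.new          # or Redis.new in production
redis.rpush("codes", "SAVE20")   # => 1
redis.lpop("codes")              # => "SAVE20"
```

The "flip a switch" swap is then just a question of which object you hand the engine, which is what made it so cheap to abandon the intermediate Redis store.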
DevOps. We're really gonna talk about DevOps? Oh yeah, yeah. You were gonna talk about DevOps. It's funny; my take on it's not the normal one, I think. Anyway, our take on it initially was: developers not thinking about systems at all, right? Totally focused on The Point and the concept and the model, the domain model. So having Engine Yard initially was totally awesome, because they took care of all that stuff. Like I was mentioning, they set up all of our systems. We got to be as ideological as we wanted for a year, which was good. Normally that kind of thing spirals out of control, but it was definitely good. So we didn't have to think about MySQL configs or any of that, or deployment strategies. The fact that they got us hooked up with one-click deployment early on is probably the best thing they've done for us, and it really hasn't fallen apart. I mean, you hear about Facebook, how they have to do distributed deployments because they have so many servers, with one master server deploying out to all the others, or Twitter with their Murder setup, which is kind of cool. But we've avoided having to do anything like that. Actually, the one pile-up that you do find, and it might not just be Engine Yard specific, is when you're trying to do a deployment with, like, 50 servers or something. That kind of thing can cripple the initial server. And for us it did, a few times. The file system serving that one checkout to all these boxes: I/O would shoot up on it, and it would kill everything during deployments. The whole site would go down. It took a while to get out of there. It ended up being that we just had to give every bit of I/O resource we could to that one master server. But when you do that and you spend the money on it, it gives you more headroom than anything. So that's one of the good things about having a lot of funding: you can avoid some of the interesting fun projects, like distributed deployments, for a super long time, given that you have the cash. That sucks for developers who want to do really fun stuff, but it's kind of awesome for future development; you don't have to think about anything else. So even though we've hit a lot of their scaling limits, Engine Yard is still helping us. We're building an operations team now, but that's still effectively keeping developers out of the system administration soup. So, just generally about DevOps: I'm not really big on that concept, and as an organization we aren't. It might work for others, though.

That's going to be changing, I think, because we're getting to a point where, with the sophistication of the deployment and the scale the company's operating at, we just have to have our own operations staff. That's started now, we're starting to hire DevOps people, and that's going to start to change.

One of the things I wanted to say about this: when I was thinking about what to talk about in this talk, I was like, Rails scales just fine, you know? So then I got thinking: well, if something went away in the Rails space, what would be the thing that would just kill scalability for Rails? And I thought of a couple of things. One would be virtualized systems, right? Being able to just spin up a bunch of slices whenever you want, someplace, and have them start serving traffic. Rails, or really Ruby, suffers from being pretty computationally intensive.
So, you know, the bottom line is, if you have a big Rails app, you're probably going to have a lot more hardware in the app tier than someone running a language that's a lot more computationally performant. All of a sudden, that means all this tooling infrastructure that's just conveniently there (Capistrano or whatever deployment strategy you're using, being able to spin up a bunch of boxes in the cloud whenever you want and have them start serving traffic) really matters. If that didn't exist, if we were still in the world where, when you need a new server, you've got to call somebody up and they've got to pull something out of a rack and walk over and plug it in... Rails wouldn't scale, right? That's what makes the magic happen: having this toolchain there where you can pretty quickly, pretty easily have a one-click deploy to a thousand servers. And eventually you start to run into the interesting issues, like the I/O problem, but those are all surmountable. The point is, you can get really far scaling that way before you start having that trouble, and if that toolchain wasn't there, Rails wouldn't scale, I don't think. Questions?

So: SOA or SOL? Well, as you can imagine, Groupon has two big problems. One is that they have tons of money, right? Well, that's kind of a double-edged sword, right? They've got just tons of money to spend on whatever they want to spend it on, and one of the things they like to spend it on is building the best engineering team in the world. Now, if you're a developer working on their applications, that's kind of terrifying. You think, oh my god, they're gonna hire a gajillion people. They might all be awesome people, but developers are like herding cats: you get 15 guys together, and that already seems like a gigantic team, and there are all kinds of problems. So how do you solve that? That's problem one. Two: you've got this web application that's just going crazy. How do you get load off the master database? I mean, that's the bottom line. So, first thought: you spin up a bunch of replicas, you defer as much read traffic as you can to the replicas, and you have master handle writes. Well, now we're at the point where we can't have a single master database, because the write load's too high. We run a big deal, and we can't have all those transactions coming in to one box. How do you solve that problem? Well, one of the ways is by re-architecting the system so that you pull load out of the master. So one of the big initiatives is going to be moving towards a service-oriented architecture, for a couple of big reasons. One is that it lets us organize small, really efficient teams around one part of the vertical stack, like purchasing. So now we can have an editorial stack, where people are putting content up on the site and running deals, and a separate purchasing stack that handles taking people's orders, charging their credit cards, and all that kind of stuff, and they can be on totally separate verticals. So now we have another master database we can scale write load on, instead of just the one. And we can do that in a bunch of different places. Like I said, that's really, really nice in a couple of different ways: it lets us scale the organization in a way that makes sense, with these small, focused teams, and it doubly lets us remove load on master.
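As a sketch of that first read-replica step (not the later multi-master split), here's roughly what routing reads to a replica can look like with Rails 2-era ActiveRecord; the class names, the permalink column, and the replica entry in database.yml are all hypothetical.

```ruby
# An abstract base class bound to a read-only copy of the database.
class ReplicaBase < ActiveRecord::Base
  self.abstract_class = true
  establish_connection :replica
end

class DealReadOnly < ReplicaBase
  set_table_name "deals"   # same table, replica connection
end

class Deal < ActiveRecord::Base   # normal model, bound to master
end

# The hot read path hits the replica...
deal = DealReadOnly.find_by_permalink("cubs-rooftop")
# ...while the purchase path still writes to master.
Deal.update_counters(deal.id, :sold_count => 1)
```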
The split does introduce other scaling issues, though, because now we have to have SLAs between these different APIs. If you've got a purchase coming in, master has to call an API to send it over there, and it goes someplace else; how do we make sure that whole chain is gonna be performant enough? Those are still things we're figuring out, but that's definitely where we're going. Let me check my notes here. Another thing that's really cool about this is that it lets us choose the right tools for the job, just like a lot of other big organizations that have scaled out. I mean, we're not gonna make the rallying cry of, crap, Rails doesn't scale, I have to rewrite this in Scala or whatever, because that's just not true. It's the architecture that doesn't scale. The architecture breaks down at some level, and at some level you're gonna want to choose more appropriate tools. Like, I'm pretty sure the end game for the relevance system is that a lot of it gets written in Java and Hadoop, because that's the native language for it. We could choose something else, but having small teams that work fairly independently frees them to make the right technical decisions from an infrastructure standpoint, so we don't always have to be a Rails shop. There are definitely other languages that are more appropriate for different problem spaces, so it prepares us for that. And right now I think we have a few verticals: the order processing system I mentioned, the main editorial app stack that serves the public site, and the relevance engine. And we're looking for more verticals we can pull out. There are different criteria we use there, too. Oh, go ahead.

Quick question: the teams that work on those verticals, how big do they tend to be? I would prefer them never to be larger than seven. That's not true right now, but I feel very strongly that with teams larger than seven, I start to get diminishing returns; the communication overhead of syncing all those people starts to become a problem. Yeah. We had more questions.

How easy do you find it to extract the verticals while you're dealing with the living application? It's really freaking hard. On the upside, that's one place where Ruby really shines, because it's really easy to introduce a new abstraction and hide some service behind it. There's still so much going on that it's tricky, but I can imagine it'd be a lot harder in other places than it is for us in Ruby. Yeah, though depending on where you do it, sometimes it's easier than others. There's sort of a difficulty ordering, a priority, for how to pull these things out. For me, the best way things carve up is based on the domain: if there's a clear perforation in the domain model, and it just naturally segments that way, it's naturally easier to split out as well. Sometimes the reasons are performance, or the consumers are different, like the editorial people versus customers, and that gets a lot harder. And anytime there's a severely different data profile, like the relevance engine: it could have totally been all up in our domain's business, but it needed a massively different data stack, so there was a perfect reason to separate it out that way. Now, carving that one out wasn't really hard per se; the relevance engine was easy because we didn't have one yet, right?
We didn't have an existing one that we had to re-architect, so we got to choose, and it was obviously so different from the normal processing chain the web app goes through that it was obvious: oh, we need to spin up a vertical for this. The other thing, though, is that there are still all these different services, and some teams have to build features that cut across all of them. There's probably no way to design your services so that you can avoid that problem entirely, I think. Unless... who knows, maybe it is possible. We haven't.

We're still writing to the main master right now; that's the last step in our re-architecture plan. Well, we're not distributing a single database, so we can still handle consistency at the app layer. Of the two masters we're planning to have, one is for order processing, and you'll basically get a successful-persist ack back from it on the app side. So right now we're planning to handle all that consistency stuff in the app layer. On the other side of it, for reporting and such, there's kind of another separate vertical, where we have a big distributed database that we just load all our stuff into, and they can use it for BI metrics and things. There are no write concerns there, though.

Are you using synchronous, asynchronous, or both in the communication between these service layers? Both, whatever is most appropriate. Whatever we can defer is preferable, obviously, because it doesn't lock us into having big stuff in the request chain, which is a huge problem. Can you hint at what you're using underneath the asynchronous? I mean, synchronous is HTTP; for asynchronous, are you using some sort of message layer or message queue? Yeah, our go-to so far has been Resque. And that's for consuming our own services, too. Yeah, everything should be services. Amazon effectively does this too: their home page gets hundreds of requests to composite that page in under a second. They have a whole bunch of teams building a whole bunch of little service stacks that all have SLAs, and if a service doesn't meet its SLA, they'll actually go find something else, all in real time. It's pretty impressive stuff. Yeah, and hopefully we'll get there someday. I'll definitely check that out. We're not anywhere near Amazon yet, I don't think. That's the pretty awesome part, though: the ratio of how big we seem to how intense the application and data requirements actually are is not your normal ratio. We have a lot of headroom to grow. It's pretty awesome. That's it.
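A minimal sketch of the deferred, Resque-flavored side of that, using the standard Resque job shape; the queue name, job, and mailer are hypothetical.

```ruby
require 'resque'

# Work we can push out of the purchase request: enqueue now, and a
# worker on another box performs it later.
class ReceiptJob
  @queue = :notifications

  def self.perform(order_id)
    order = Order.find(order_id)         # hypothetical model
    OrderMailer.deliver_receipt(order)   # Rails 2-era mailer call
  end
end

# The request path only touches Redis to enqueue, so nothing slow
# sits in the request chain.
Resque.enqueue(ReceiptJob, order.id)
```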
Moving on... oh wait, thank you. Well, it depends on how they're architected, right? The relevance engine, since it doesn't really depend on anything in the Rails stack, is pretty different. Now, do you mean common dependencies from an app standpoint? Like, oh, hey, I've got this shared module that I include in a bunch of different places? Yeah, I was on a project once where they did that. They had this crazy architecture where they basically put the model in a library and then included the library in a bunch of different pieces that all shared the same model, and that made me want to kill myself. So I'm super against sharing a model between stacks. I believe each stack should have its own model, and if that means having a same-but-slightly-different representation of an entity, so be it. That's way preferable to me. Oh, I have one user, I want one user model, right? DRY. At some level, when you're really building different applications (an application that does this one thing, and an application that does that one thing), you might have representations of an entity where they're the same entity, but they mean different things in different applications, and so that logic should not be shared. You're not DRYing things up that way; you're just making things worse. I think for everything else it does make sense: totally horizontal library stuff, like logging. Yeah, we have tons of shared libraries, and that's all totally natural; you plug them in with gems or what have you. Not yet, and it's totally possible; we just don't do it that often. I mean, we are pretty DRY, we just don't go overboard with it.

There is an interesting story there, though, not really about sharing. As Mike mentioned early on, this started out as a completely different application. It was The Point, right? And a lot of what Groupon came to be was engineered back into the original Point code. So we definitely had, and still have, a problem with legacy domain terminology that hung around the application, and it's been a long road to clean that stuff up, because it's hard. You have a class that's referenced in a thousand places and you want to change its name; that doesn't seem that hard, but it's vexing sometimes. And you don't want to change the table name; that can be kind of hard. So that stuff does come up, and it's a hard thing to do in any language. For Ruby specifically, though, I feel like that's one of the places where I miss my Java toolchain and the static analysis stuff you can do in Java. Sorry, I was a Java guy, but, you know, I'm like, oh, change the name of a class. You can do that stuff in Ruby, but it's still a little more manual than you can get away with in other stacks. So I cry a little about that, but I think all the other benefits of the language make it worth it. I wouldn't go back. That's kind of a business problem, too. One thing I'll find is that somebody will introduce a new concept, a few months will go by, and then all these different departments, hundreds of people, are referring to things slightly differently. It's almost outside a developer's job, but you end up having to chase everyone down and get everyone on the same page about certain concepts: form some sort of dictionary, and then propagate that naming, those concepts, all the way through the system, whether it's wikis, their systems, Salesforce, our application, our database. So really being hardcore about domain-driven design goes beyond the Rails stack. The Rails stack is the easy part, changing names, as far as I'm concerned.

So, we have eight minutes left, and I'm gonna vote for skipping this one, because we have cooler stuff to talk about. That's the awesomest one, though. Is it? No, it's not. Is it really? People?
Yeah, dude. I'm a programmer, I don't wanna talk about people. All right, it's cool. We'll come back to it. You're a programmer. Yes. We will come back to it.

Okay: the big little rewrite. So, there's this developer from Motiva, Chris Chandler. I love talking to the guy, he's so animated. And one thing he loves talking about, and he does this in his talks too, is this event horizon for startups, where everything's all malleable: you get to decide what you're doing, you get to change direction a few times. And then at some point success hits, things accelerate really fast, and everything you're doing just congeals, and you really lose all that flexibility you had. So all the decisions you've made up to that point, you're kind of stuck with, for the most part, and changing something that seemed trivial before that point becomes a re-architecture of sorts. For Groupon specifically, I feel like we knew exactly what we were doing, and the model we had in our minds back at The Point is still pretty applicable to what we're doing now. We just didn't have a name for it. But we decided to do a huge rewrite at that time. And it worked. You normally hear that the big rewrite is the wrong thing to do. I heard Ken Peck talking about it, and everything he said was totally true; it felt like good advice. Except we didn't follow it, because we had our own needs. Every company's different, and we knew we had to do something like that. So we did this rewrite, and Rails definitely made it super easy. I think the most important part for us was that we had some heavy integration tests. We had a good Selenium suite, and we had the RSpec integration tests, which were pretty fleshed out for just about everything high-level. That made a perfect pivot point for the rewrite: we got to blow away most of our implementation and keep all the high-level stuff. And surprisingly, we missed barely anything. We also did an entirely new database, supported by ActiveRecord migrations that we patched up to port all that stuff over. So anyway, it was easy to do a full rewrite. Within three weeks, two people just hammered away at it, and it was not that bad. I can't say it would be easy for every project, but whatever. So it's doable. I wouldn't suggest doing it for, like, everything, but you know. Do you think you could have gotten away with it without the test suite? Absolutely not. It was totally crucial. I mean, you're flying blind without a test suite. Your system is what your tests tell you it's doing, right? Without that, it would have been impossible.

Yeah, me again: bending Rails. Okay, so the focus of this talk was supposed to be, like, how Rails is awesome, how it scales, how you barely have to do anything and it just works, and how you find out the beauty of the Rails stack as you get to these upper echelons of performance. My take on it is not the same, though, because I like fixing the problems with Rails, the stuff that becomes an issue for us, and I'm pretty pessimistic about it for whatever reason. So anyway, more on the DDD stuff, like the expressiveness of ActiveRecord. DDD is the domain-driven design stuff, not Dunkin' Donuts. And nothing from the audience for that one. Fail. Yep. I do want a donut, though.

So anyway, in support of really going hardcore with your domain-driven design: stock ActiveRecord doesn't quite give you every tool you need. So we patched it up to let you do stuff like take scopes and compose them, so you can refer to them in condition hashes, and at any level of nesting you can recurse through associations. You can do all this amazing stuff; even just bringing it up in the console would be kind of nuts to see. But basically we've made ActiveRecord, like, super-powered, put rockets on it. The thing's just amazing. And the other cool thing is, you can have this huge nested description of your model, all these levels of nesting, and you can apply it to the database if you want, which is the customary ActiveRecord thing, or to an object graph in memory, which is super powerful. We're still making use of that to make the site more awesome, but I'll be doing a talk at RailsConf about it, so you can wait till then.
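The actual patches weren't shown in the talk, but stock Rails 2.x named_scope gives a taste of the scope composition they pushed further; the model and columns here are hypothetical.

```ruby
class Deal < ActiveRecord::Base
  named_scope :live,    :conditions => { :state => "live" }
  named_scope :recent,  :order => "created_at DESC", :limit => 10
  named_scope :in_city, lambda { |city_id| { :conditions => { :city_id => city_id } } }
end

# Scopes chain and merge into a single SQL query.
Deal.live.in_city(42).recent
```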
So anyway, we also patched up ActionController, because the REST support in Rails wasn't quite what we wanted it to be. There were a lot of actions it didn't support, stuff with collections, and figuring out how to interleave security and authorization and different concepts in there. Rails doesn't really do anything for you there, period; not that that's bad or anything. So we had to add extensions to ActionController to support all that. And we did them not in the verbose Java way, where everything's right there, declarative. It's all just kind of magic; it happens for you when you follow certain conventions. You know, people argue a lot about declarative versus imperative programming, and I definitely prefer declarative over imperative, but I prefer nothing at all over declarative, which is kind of the Rails way. So that's sort of what our extensions look like. And yeah, that's all I had, really. There are lots of really fun examples which we don't have time to go through. Yeah, I mean, we'll be here for the rest of the conference, obviously, so if you wanna talk about any of this stuff in excruciating detail, feel free to track us down. And it is excruciating. Yeah, it will be excruciating, so you've been warned.

We have a minute and thirty left, and I really wanna talk about this, because it's one of my babies. One of the problems we had with write load is that 20,000 people wanna buy the same thing at the same exact time, right? And at first, we had no locks in the database. So you'd have this table of codes we're giving away to customers, and with a few hundred people buying things, not a big deal; a few thousand people buying things, still not such a big deal; but 20,000 people trying to buy everything all at once, and all of a sudden two people are getting pointed at the same record, out of a pure race condition. So the knee-jerk reaction was: throw a lock in. So we threw a lock in on the set of rows we were selecting from, and that solved the problem from the user's perspective, in that we no longer gave the same code to two people. But it introduced another huge scaling problem, because now we had row-lock contention going on in the busiest table in the database. So our solution to that, which has been fantastic, is to use Redis, in a few different forms. We use Redis for atomic counters, so we can keep track of exactly how many we've sold and then shut the deal down immediately. That atomic counter lets us spread the count across a thousand different app servers, and they can all just ping away atomically, saying, I sold one, I sold one, I sold one, and then we shut it down immediately. It also lets us do cool things like load a whole bunch of promotion codes up into Redis and then just atomically distribute them, boom, boom, boom, boom; and when we run out, we go back to the database, pull more into Redis, load it up again, and run through another 10,000. And that's been great for us. I mean, Resque is really the gateway drug, but now we're using Redis for all kinds of stuff, and it's been really fantastic and really fast, and a great tool for us to fall back on.
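A minimal sketch of that atomic-counter pattern, using the redis-rb client; the key names, the cap check, and the refill flow are hypothetical stand-ins for what the talk describes.

```ruby
require 'redis'

redis = Redis.new

# INCR is atomic on the Redis server, so a thousand app servers can
# all "sell one" concurrently with no database row locks.
def sell_one(redis, deal_id, cap)
  sold = redis.incr("deal:#{deal_id}:sold")
  if sold > cap
    redis.decr("deal:#{deal_id}:sold")   # oversold: give the slot back
    :sold_out
  else
    :sold
  end
end

sell_one(redis, 123, 500)

# Promotion codes work the same way: preload a batch into a list, then
# LPOP hands each buyer a unique code atomically; refill from the
# database when the list runs dry.
redis.rpush("deal:123:codes", "CODE-0001")
code = redis.lpop("deal:123:codes")
```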
I think we're out of time. I'm gonna go ahead and take a few questions until they kick us out, whenever that is. Or we can just leave.

You mentioned you were using Redis at a high-risk point; are you using hypermedia in your applications, or is it basically just XML over HTTP? If I could, I would burn XML off the face of the planet. I would prefer to use JSON services for anything. For our public APIs, we still support XML, just because, you know, that's how it is. It is real REST, though. We don't adhere to the definition perfectly, but it's definitely not just XML over the wire. We do have some external services that we integrate with that use just raw XML; it's ridiculous that they even call that REST. But now we have a REST service, a REST API, and there are lots of external affiliate consumers, plus our own iPhone app consuming it. It's pretty rich and robust, and it's getting more robust every day. Any more questions? All right, we'll hang out up here at the front for a while if you wanna come talk to us. But thank you a lot, everybody.