Yeah, so, like Stewart said, I'm at Utah Street Labs right now — which is kind of funny, because I don't think we ever call it that. It's the name of the company, but we're building a site called copious.com, and at this point we just kind of call the company that too.

My story today has four acts. My initial idea was to make it kind of a This American Life thing, but that seemed like trying too hard, so I dialed it back. I kept the acts, though, and I hope you'll forgive me — the names of the acts have something to do with what's in them, so hopefully it'll be interesting.

Act one: web scale. I see people clapping — you get what I'm talking about. People who aren't: don't worry about it, it's really not worth it.

So like I said, I work for a company called Copious. We're building a social marketplace. We have t-shirts that I'm actually pretty proud of — it's the first company I've worked at where I'm really excited about the t-shirt, because it's giraffes, and that's kind of cool.

This is the site as of yesterday. As you can see, we have a lot going on. The core mechanic of the site is people buying and selling things to each other — it's a marketplace. So one of our users, Pamela Joyce, is selling this $700 glitter-art Jimi Hendrix poster, and she has seven thousand followers.

The core domain model is something like Twitter, but with stuff in the mix. Rather than just following people, you can also follow things, and you can follow collections of things. There's lots of stuff you can do: you can like things, you can comment on things, you can buy things, you can follow and unfollow people. Down at the bottom — below where the page got cut off here — we've got this share feature, so you can share things too. It's a pretty large collection of activities you can take on the site.

Those activities are aggregated, turned into stories about activities, and then aggregated into feeds. The feed is, at this point, the core experience of Copious, in the same way that the news feed is the core experience of Facebook. It's where we drop people at the end of the onboarding process, it's where people land by default when they're logged in, and it's another place where people can take lots of actions on the site.

Based on my experience of using the internet, I think this is pretty common: a lot of sites, no matter what the domain is, are incorporating these social mechanics, and when people talk about "social", I think this is what they mean. People take lots of actions on the site, they interact with other people, and a core part of the experience is those interactions. It's no longer just about your content, and it's no longer just about what your site does — it's about how your site connects you to other people.

Okay, cool. So I'm a user on our site. I can follow people — and I'm sorry for the transition, it's just so awesome, the flame thing; I was looking through transitions and I had to use it. You can follow things, and this is a t-shirt on the site. You can go buy it today.
Just saying. And when you're interested in people, we assume you're interested in the things they're selling. So this guy has a cool leather jacket and a cool pair of sunglasses — you can buy those too. Although I think the sunglasses sold, so you can't buy those, but you can still see stories about them, which is another interesting thing: actions are turned into stories, which are aggregated into feeds. Like I said, that's kind of the key thing. This particular feed actually doesn't look too dissimilar from what might be in my feed — I see stuff about Rob, I see stuff about Brad — so that's good.

So the main thrust of this talk is going to be how this works: how we take these actions — this huge stream, this fire hose of activity on the site — and turn them into feeds. It's a fairly tricky problem. It can seem easy at the outset, but think about the numbers involved: when a single action comes in, we have to look at all of our users, figure out who's interested in it, and stick it into their feeds. There are a few other wrinkles in our particular system that make it even more complicated, and you're occasionally in a position where you need to look through a large collection of old actions — things that happened on the site before — and figure out how to build a new feed for somebody from scratch.

When we started building the system, we had a few MongoDB instances in production. So we said, well, let's use this tool we're already somewhat familiar with — both at a programming level and at an operational level — and see if we can make the problem fit in that domain. And initially it seemed like, yeah, we could actually model it pretty cleanly.

The first thing we did was stick actions into MongoDB as basically JSON documents. That's not technically true, but it's good enough for what we're doing. Embedded in those JSON documents, we kept a list of interested users. So when a new action flowed into the site, we would look up who might be interested in it at that time and stick their user IDs into an array inside that action's document. When we wanted to build and serve a feed — when somebody logged into the site — we would do a query for that user ID and get the stories back. Cool. It worked at first — in development, and in test, and even in production for a little while — but it ended up falling over pretty quickly.

I'm not going to go too deeply into MongoDB, because we're not at a Mongo conference, but to make a long story short: Mongo stores basically JSON documents in a structured way, and lets you do things like query over the internal structure of those documents, which is cool.
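Concretely, the model looked something like this — a sketch with illustrative field names, not our actual schema:

```clojure
;; A hypothetical "like" action, stored as a document with an embedded
;; array of every user whose feed should eventually show it:
{:type                "like"
 :actor-id            17
 :listing-id          32
 :created-at          #inst "2012-11-01T17:30:00Z"
 :interested-user-ids [101 102 103]}   ; ...potentially tens of thousands of IDs

;; Serving a feed was then a single query: find the documents whose
;; interested-user-ids array contains the viewer's ID, sorted by created-at.
```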
It's interesting, except that it falls over in unexpected ways. One thing that just blew up immediately is that growing those embedded interested-user docs can actually lock the database — and I don't mean just for writes, I mean for reads, and I mean on the slaves, the whole thing — because they grow pretty big.

And when I say big: we deployed this, and we have this community manager, Caitlin, and basically from the day she started she had something like 30,000 to 70,000 followers, because we stuck her in the onboarding process and she was like Tom from Myspace — everybody follows her. And I am a hundred percent certain, after going through this, that Tom from Myspace alerted those Myspace guys to a lot of scaling problems way before they would have hit them otherwise, because that's what Caitlin did for us. So when Caitlin did something, we would suddenly have a document with like 40,000 IDs in it. And what's worse, when somebody followed Caitlin, we would look up all the recent stories or something like that — we've tried to block some of this out of our cultural memory — and add stuff, and when those embedded docs got too big, they would just explode.

Some queries also required scanning through the embedded doc, which — anybody who's ever watched a table scan happen in a MySQL database will understand — is just not going to be good. The query optimizer also made some pretty bad decisions trying to handle this. The net result was that when this was operational, we were just straight-up failing to serve 10 to 20 percent of requests, because they were timing out, or because there were errors, or whatever. So that was not really ideal. We needed a new approach.

Act two: set theory. We stepped back from the problem a little bit and thought about the domain. What are we doing? We have sets of stories about people and listings, and feeds are unions of those sets. We union along interest lines: when we want to generate a feed, we figure out which sets somebody cares about by looking at their interests, and we union those together to make a feed.

When I was putting this slide together, I thought: so, what technology is good at working with sets? And I was like — relational databases do this whole thing! But that was not the decision we made. I feel like that's the punchline of about 90% of operational talks these days.

We actually ended up going with a server called Redis. If anybody's not familiar with Redis, it's kind of like — well, you have a problem where you'd like to use C data structures because they're really fast, but you don't want to write your whole system in C. So what you do is use a server that basically provides C data structures over a network interface. It's pretty cool. Among the data structures it has are sets, sorted sets, and lists, and we took advantage of the sets and sorted sets pretty heavily.

This is a kind of hard-to-see illustration of the model we had in Redis. The top thing there — the interests thing — is a set: a set of interests. The actual things we stored in the set were just strings — t42 meant tag 42, l32 meant listing 32.
And we also had actors, like actor17. Those strings essentially served as pointers — not literally, in Redis; we had to resolve them ourselves — to story sets, which were keyed by the same strings with "stories" at the beginning. Then we'd issue a command to take a union. So, like I said: the client got the interests for a particular user, went and took the union of those story sets, and that produced a feed.

The initial idea was that we could do this at runtime — when somebody wanted a feed, we'd go do a build and return it to them. We anticipated that these union-store operations might sometimes be slow, in edge cases — like if we wanted to build a feed for somebody who had followed 7,000 things. Some of our users, I swear to God, logged onto the site and just followed everybody they could; I don't understand what they were doing, but they'd be following 7,000 people with 15,000 likes, and it made for some slow unioning. We anticipated that, and we said: okay, cool — if the union takes too long, store the result in Redis and we'll maintain it as new stories come in. Not really super scalable, but we were hoping it was scalable enough to get us through the next six months. It was not. At all.

So this was the plan: when a new story comes in, we add it to the managed sets. That operation is pretty slow, because we actually have to go to our MySQL database and say, hey, who might be interested in this story? — then go find the feeds that need updating and stick it into them. What was cool is that initially this took like 20 megs of RAM. Redis stores everything in RAM and eventually writes it out to a dump file, and it was just genius — until it wasn't. The Ruby client did all the heavy lifting, we pushed some work into the background with Resque, and then we broke Redis.

Like I said, the whole thing was predicated on those unions always being fast, and it just exploded almost immediately. Too many interests meant we were pointing at too many story sets; too many story sets meant slow union ops; unions ended up being almost always slow. That's our fault — we were blatantly misusing the technology. There were too many managed feeds, managing them was super expensive, and the system just failed to serve feeds. So: we needed a new approach.
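Just to make Act two concrete: the whole model boils down to a handful of keys and one union. Here's a sketch in carmine-flavored Clojure, purely for illustration — at this stage Redis was actually driven from the Ruby app, and these key names are made up:

```clojure
(require '[taoensso.carmine :as car])

(def conn {:pool {} :spec {:host "127.0.0.1" :port 6379}})

;; Each user has a set of interest strings (tags, listings, actors):
(car/wcar conn
  (car/sadd "interests:42" "t:7" "l:32" "actor:17"))

;; Each interest maps (by naming convention) to a sorted set of story IDs,
;; scored by timestamp:
(car/wcar conn
  (car/zadd "stories:l:32"     1351790000 "story:9876")
  (car/zadd "stories:actor:17" 1351790450 "story:9880"))

;; Building a feed: the client reads the user's interests, turns them into
;; story-set keys itself, and asks Redis for a union of all of them.
(defn build-feed! [user-id]
  (let [interests  (car/wcar conn (car/smembers (str "interests:" user-id)))
        story-keys (map #(str "stories:" %) interests)
        feed-key   (str "feed:" user-id)]
    (car/wcar conn
      (apply car/zunionstore feed-key (count story-keys) story-keys))
    ;; newest 30 stories for the page:
    (car/wcar conn (car/zrevrange feed-key 0 29))))
```

It's fine at small numbers; it's that union over thousands of story sets, on every request, that got slow.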
Act three: the many-armed daemon. I don't know if you can see it, but that's Rich's GitHub avatar up there, and I grabbed that daemon illustration from some kid on the internet. If you're watching this video, kid, I'll give you 200 bucks for it or whatever — I felt a little bad about the copyright thing, but the slide went in yesterday, so, okay.

Feeds can't be built fast enough at runtime; we just decided that's done. The new solution: maintain all the feeds in Redis, and just own that — we need to be really good at it. It's going to mean a ton of RAM usage, but that's storage; we can pay for storage and deal with that for a little while.

First off we thought, hey, can we write this in Ruby? And we decided no, that's kind of crazy: it needed to be really performant, there's lots of potential parallelism in the system, and doing that in Ruby was going to be problematic. The reason we considered Ruby at all is that we use Rails, so everybody on the team knows it, and it's nice to keep things in the same stable. But I like Clojure a lot and was excited about a project to use it on, and we were already using it for some of our data warehousing with Cascalog, so it was a natural fit. So we wrote a Clojure daemon that reads from Resque; it updates the interest sets, it updates story sets, and it updates feeds. Essentially everything is a managed feed now, and we decide what goes into which feeds, and all of that, at write time.

We also had a new requirement. We were doing a big press push, and the feeling from the people running it was that it looked ridiculous to have multiple copies of items in the feed. So we needed to get this out on a particular deadline, and it needed to roll stories up into other stories. If somebody likes 130 things, you don't want 130 entries showing up in someone's feed, clogging it up — we have a couple of big sellers who list in bulk, and they would flood the feed with all of their stuff, and it was kind of annoying. We would also digest along other lines, not just a single person doing a bunch of things: "Jim and John loved this one thing", or "Jim loved and shared" — multiple actions against one thing — or "Jim and John loved and shared cool sunglasses" — multiple people doing multiple things.

This digesting actually turned out to be one of the harder problems to solve, and one of the bigger operational headaches. The problem is that digesting is somewhat complex, and there's no server, no database, that just does it for you. It's possible we could have come up with some clever scheme on top of some database, but it didn't come to us in the two minutes — okay, it wasn't two minutes, it was like a day — that we spent thinking about it. So what we decided to do was just maintain the head of each feed — the last six hours of every single feed — in memory. I told someone at the bar last night that we were using 50 gigs of RAM on a machine in EC2, and this is why. That's the punchline; spoiler alert.

So we took the head of each feed, stuck it in an atom, and kept it in memory. When new stories come into the system, we add them to each feed, then we write the feed out to Redis, and clients read from Redis.

If you haven't used atoms before, they're real simple. They're one of the state mechanisms Clojure provides, and the semantics are dead simple: you have an atom; you deref it to get the actual data structure it contains; and you update it with swap!, passing a function. The underlying implementation is such that you don't control how that function is applied — I'm 99% sure it can actually be applied more than once — so you have to be a little careful: the function can't be stateful, and it shouldn't do things to other systems.
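If you haven't seen them, here's the whole idea in miniature — generic Clojure, not our feed code:

```clojure
;; An atom holds an immutable value; here, a map of feed-id -> vector of stories.
(def feeds (atom {}))

;; deref (or @) gives you the current value, a plain immutable map:
@feeds                                  ;=> {}

;; swap! applies a pure function to the current value and installs the result.
;; If another thread raced us, swap! just retries with the new current value,
;; which is why the function must be side-effect free: it may run more than once.
(defn add-story [feeds-map feed-id story]
  (update-in feeds-map [feed-id] (fnil conj []) story))

(swap! feeds add-story :user-42 {:action :like :listing 32})
@feeds                                  ;=> {:user-42 [{:action :like :listing 32}]}
```

Because swap! just retries a pure function on contention, you get safe concurrent updates without locks, which is what let us keep turning the thread count up.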
But it was ideal for what we wanted, and essentially it let us turn the parallelism of the system way up.

The way this worked out, it was slightly dangerous to process multiple actions at once off the Resque queue this daemon was reading from, because we didn't want to get into a situation where we didn't really know what was currently going through the system — it made it harder to shut down — and it was just a single daemon anyway, so we didn't want things interrupting each other. Long story short: tons of bottlenecks. Initial feed builds, story adds, and interest adds were all in the same process. There were ways we could have worked around that — they didn't all need to be in the same process — but once you start thinking about that, the problem starts getting bigger, and we didn't want to build all of that ourselves. So we needed to make the single daemon as fast as possible; that was the fastest path to victory: figure out how to make this one process chew through the queue as fast as possible.

Fortunately our machine had tons of cores, and Clojure has these atoms, which let us maintain state in a way that means we can throw more threads at it with a minimum of effort. So we broke the daemon up into a few little pieces, all in different threads. We have one thread that just sticks stuff into feeds. We have another thread that goes through and expires feeds, so we don't have to have that logic inline in the story processing. And we have another thread that just looks at the data structure inside the atom: it essentially takes a snapshot of the atom, writes it all out to Redis, and then does it again — it just keeps going and writing. The reason this worked — the reason breaking it up into several pieces was essentially trivial — is that Clojure has a very considered approach to state, and it really paid off. This was where all those years of reading about Clojure and the state stuff finally cashed out, and I was very excited.

We made one optimization: instead of having one giant atom, we broke it up into one atom that contained, essentially, pointers to other atoms, so each of those threads could work on a single feed at a time and wouldn't constantly be trying to update one massive data structure and conflicting with somebody else updating it at the same time.

So, yeah: a daemon working on multiple things. Clojure was awesome for this — the considered approach to state really paid off. More than once I went into a function, added a "p" to the beginning of a map, and it just went ten times faster. Okay, that's a good day. I didn't hate that.
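The shape of that atom-of-atoms trick, by the way, is roughly this — again a sketch, not our production code:

```clojure
;; One top-level atom mapping feed-id -> its own atom.  Adding a story only
;; swaps the small per-feed atom, so two threads updating different feeds never
;; retry against each other; they only touch the outer map when a feed is new.
(def feed-atoms (atom {}))        ; {feed-id (atom [story ...])}

(defn feed-atom [feed-id]
  (or (get @feed-atoms feed-id)
      (let [fresh (atom [])]
        (get (swap! feed-atoms
                    (fn [m] (if (contains? m feed-id) m (assoc m feed-id fresh))))
             feed-id))))

(defn add-story! [feed-id story]
  (swap! (feed-atom feed-id) conj story))

;; The writer thread snapshots everything and pushes it to Redis on a loop:
(defn snapshot []
  (into {} (for [[id a] @feed-atoms] [id @a])))
```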
So — problems. Are we good? Can we go home? Yeah, there were problems.

Storing all the feeds in Redis memory was super expensive. We have these enormous Redis machines sitting in AWS, and the kind of machines that have 50 gigs of RAM in AWS are not particularly cheap. Feed builds are only mostly fast, so you can't really rely on them for onboarding; they get stuck behind story creation, and sometimes story creation can be really slow, because when a new story comes into the system we have to load every feed that might be interested in it from Redis — and it turns out that when you try to load a hundred thousand things from Redis at once, and they're not really small things, it's just not super fast. So it was hard. We did rely on feed builds for onboarding, and still do, but it's a dicey proposition, and it was only going to get worse.

It's also not at all robust. If the daemon breaks, or needs to take a 60-second GC nap, there's nothing you can do. It got close a couple of times. There was one hilarious instance where we transitioned from lein 1 to lein 2, and the profiles changed, and the JVM arguments we'd been using were in the old-style profile and never made it into the new-style profile. So we went from using like 50 gigs of RAM to like 20 or something, it ran for a month, and then it just started garbage-collecting like a maniac. We had no idea what was going on for a day. Not awesome.

We also had this weird denormalization of data that didn't seem totally necessary. All of the interests already exist in the main MySQL databases that serve the site, and we were denormalizing them into these Redis systems. That's a way to make things fast, and it wasn't bad, but it wasn't super ideal either.

It's also not very extensible. It kind of worked for this, but how do we add new things to somebody's feed even though they aren't following them? We look at your Facebook profile and say, oh, I see you're interested in horses; you haven't specifically followed this person listing a horse for sale, but I think you might be interested in it. There's just no mechanism to do that, and thinking about sticking some sort of online machine-learning thing into the middle of this pipeline wasn't super appealing. And it points to a larger problem: how do we make this processing smarter and smarter? It's all in the hot path, and it just doesn't work out.

There's tons more parallelism possible in the system. Interest adds, story adds, feed builds — they're all basically independent processes. So how do you get more parallelism? You break things up into smaller chunks, you take advantage of more machines, you build a system of queues and workers. Do we build our own — something on top of Redis, with workers reading off Resque queues and then sticking things onto other Resque queues? I mean, sure, that sounds like fun, but no. You'd need to be half insane and half genius to build a robust, high-quality system like that in the middle of a startup environment where you just need to be shipping something — we just need to get something out so we can move on to the next thing that's completely broken on our site. Fortunately, someone already walked that route and did exactly that. So: Nathan. I don't know if he's here —
— he was talking at a thing in San Francisco last night, but we'll see. Nathan Marz has built a couple of really impressive pieces of Clojure software: Cascalog, which is a Hadoop abstraction that lets you use a Datalog-like language on top of Hadoop — it's pretty cool — and a little piece of software called Storm.

Storm is cool. I'm still in the in-love phase with it, and we haven't logged a ton of production experience, so talk to me in six months and maybe I'll be cursing it the same way I cursed Mongo at the beginning of this. But it's a pretty interesting piece of software, and it introduces an interesting abstraction on top of parallel processing.

Here's the basic idea. You have spouts, bolts, streams, tuples, and serialization and deserialization. These sort of beige things on the slide are computational units; the greenish-black things are network connections between those units, and serialization and deserialization happen on the blobs of data you're throwing around.

Tuples are just arbitrary bags of data. You can access them positionally or by name, and you can store anything in them — modulo whatever serialization you're willing to write. If you can write a Kryo serializer for your data type — Kryo is a Java serialization library — you can include it in a tuple and send it between nodes. So all sorts of things can be serialized and deserialized; it's essentially arbitrary. The tuple is the core unit of data in this thing, and that's it.

Then we have spouts. Spouts poll external sources, they emit tuples, and they acknowledge successful processing. A spout is essentially the hook that ties into whatever queuing system you're using, and it handles failure; it enables reliability. It has an API specifically built to let you build a reliable system that guarantees message processing — which is something we hadn't even come close to thinking about with the original system; it was priority 500. Not that we didn't want people to see things, but if somebody doesn't see one story in their feed, they probably won't notice. If somebody doesn't get a feed at all, though — if a build-feed message fails — that's less good. So there are parts of the system it's good to have reliability for.

This enormous, slightly daunting block of code is a spout definition in Clojure. Storm is a Java system, but there's a DSL built on top of it, and this actually isn't as terrifying as it looks. The stuff down at the bottom all looks like the same kind of thing you do when you implement a protocol with a type, and it essentially is — you're implementing Java methods; I believe it all de-macro-izes into a reify call. Spouts can have some state — we grab a Redis connection up here — and then this nextTuple is the core, the only thing you absolutely have to implement. This one just reads from Resque and emits tuples, with a random ID that we generate. That ID is part of the reliability mechanism: if you provide an ID when you're emitting from a spout, Storm will track the tuples that ID creates across the computational topology and then call either ack or fail on the spout, depending on what happens.
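I can't show our actual spout here, but a minimal spout in the same Clojure DSL — modeled on the storm-starter examples — looks roughly like this; pop-action! is a stand-in for our Resque plumbing:

```clojure
(ns feeds.spout
  (:use [backtype.storm clojure]))

(defn pop-action!
  "Stand-in: really this popped the next action off a Resque list in Redis."
  []
  nil)

(defspout action-spout ["action"]
  [conf context collector]
  (let [pending (atom {})]                 ; msg-id -> action, kept for replays
    (spout
     (nextTuple []
       (when-let [action (pop-action!)]
         (let [id (str (java.util.UUID/randomUUID))]
           (swap! pending assoc id action)
           ;; Emitting with an :id opts this tuple into Storm's reliability
           ;; tracking; ack/fail below get called with that same id.
           (emit-spout! collector [action] :id id))))
     (ack [id]
       (swap! pending dissoc id))          ; fully processed downstream
     (fail [id]
       ;; Processing failed or timed out somewhere in the topology; a real
       ;; spout would re-emit or re-enqueue (@pending id) here.
       ))))
```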
Bolts provide the raw processing power. They receive tuples from streams, they can have arbitrary logic, they can query databases, they can download the internet — whatever they want. They can carry state, so you can do things like build an aggregator that waits for a bunch of different tuples, and you can count on it being in memory, persisted, all of that. And then they emit tuples back to streams. Adding parallelism — adding more than one of these computational units — is literally adding a :p in the topology definition and providing a number: I want 15 of these running across the cluster. So it's pretty easy to adjust.

These are two bolt definitions, and they're a little cleaner than that spout, mostly because they don't have the reliability stuff. The really simple one is how we turn a feed into JSON: it just gets a tuple, calls a serialize function that grabs the feed out of the tuple and turns it into JSON, and emits it — actually, I think I left the emit off the slide. You'll notice that both of these ack the tuple they got; the reason is to tie into the mechanism that lets the spout call ack or fail. The second one is an example of a bolt with some state: it maintains a connection to a Redis database so it doesn't have to re-establish it every time it processes something.

I'm going to talk real quickly — I don't even have transitions for this — about serialization, mostly because serialization is something you almost don't need to worry about while you're building your system. When you're tuning your system, you do. But it works out of the box for the most part; it falls back to Java serialization, which is dog slow, but it's there. So when you're developing, it's not something you need to solve up front — you can solve your problem first and worry about the details later. And there's a ton of power under the covers; Kryo is its own ecosystem of stuff, and it's pretty cool.

So this is the topology we built. We have a Redis instance with the spout reading off it. The spout emits to a users bolt, and the users bolt basically goes and finds all the users in the system who might care; for each of those users it emits a tuple containing the user ID and the story. We have two other bolts — the likes and follows scorers — and they both subscribe to that stream, so they receive every single tuple that comes out of it; they do a database lookup and give the story a score for that particular user. Those scores are then aggregated in a reducer: the reducer maintains a map keyed on the user ID and the story, waits until it has a score from each of the scorers in the system, and then emits the final tuple to the feed builder.
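Sketched out, a bolt like that plus the wiring for a topology shaped like ours looks roughly like this in the same DSL — the component names are placeholders rather than our real definitions, and action-spout is the spout from the earlier sketch:

```clojure
(ns feeds.topology
  (:use [backtype.storm clojure]))

;; Placeholders for the real spout and bolt definitions described above.
(declare action-spout users-bolt likes-scorer follows-scorer
         score-reducer add-to-feed)

;; One concrete bolt, to show the shape: grab the feed out of the tuple,
;; serialize it, emit it on, and ack so the spout hears about success.
(defn feed->json [feed] (pr-str feed))              ; stand-in serializer

(defbolt jsonify-feed ["user-id" "json"] [tuple collector]
  (emit-bolt! collector
              [(.getValue tuple 0) (feed->json (.getValue tuple 1))]
              :anchor tuple)
  (ack! collector tuple))

;; The topology: who subscribes to whom, with what grouping, and how many
;; copies of each bolt to run (:p).
(defn feed-topology []
  (topology
   {"actions" (spout-spec action-spout)}
   {"users"   (bolt-spec {"actions" :shuffle} users-bolt     :p 4)
    "likes"   (bolt-spec {"users"   :shuffle} likes-scorer   :p 8)
    "follows" (bolt-spec {"users"   :shuffle} follows-scorer :p 8)
    ;; fields grouping: every tuple for the same [user story] pair lands on
    ;; the same reducer task, so it can aggregate scores safely in memory.
    "reduce"  (bolt-spec {"likes"   ["user-id" "story-id"]
                          "follows" ["user-id" "story-id"]}
                         score-reducer :p 8)
    "feeds"   (bolt-spec {"reduce"  :shuffle} add-to-feed    :p 4)}))
```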
It's kind of funny — I'm not sure anybody remembers back to the earlier slide, but the feed-building, atoms-inside-atoms thing we built for the original system basically just moved over to this directly. It wasn't a bad system; it was just that when we tried to put a huge number of feeds into it, it used tons of memory, and we needed to break it up across as many machines as we wanted if we were going to make it genuinely scalable. Basically everything about that atom-in-atom thing — its expirer, its writer — stayed the same, although I think we write inline now, because we can. It writes out to Redis, we add parallelism as needed — kind of sprinkle it on top — and we go home.

One thing I didn't really talk about that enables this is one of Storm's really cool features: it lets you define stream groupings. In this particular case, we have the scorers kicking out tuples with a user ID, a story, and a score, and we put a stream grouping of [user ID, story] on that stream. That means all tuples with the same user ID and story will go to the same memory space — the same processing bolt — which means we can safely maintain state in one place, and that bolt will get all the tuples it cares about, not just some of them; they won't be distributed randomly. Which is cool, and it's what enables this design.

I just showed you the diagram, but this is that diagram translated into Clojure code. We have the spout at the top, we have the users bolt, we have the likes and follows bolts — there's actually also a seller-follows bolt I didn't include — we have the interest reducer, and then we have the thing that adds stuff to feeds. Like I said, adding parallelism is literally just specifying :p, and you can see the various stream groupings defined within the bolt specs. And that's it. This is the whole thing.

One of the really nice things about this: a difficulty with Clojure, coming from an object-oriented programming world, is that you're not really sure how to organize your code. It's super flexible, you have a lot of options, and the plethora of ways you could potentially organize things leads to different solutions — doing things different ways in different services. This gives you a really nice way to think about the organization of the code. When you come into this code base, you look at this, you look up the various bolt implementations, and you can see directly how they're implemented and how the system actually works. Super declarative. Totally rocks.

Storm also comes with a cool UI. It shows you a summary of the topologies that are active, and then for each topology it shows all sorts of stats about the spouts, the bolts, and what the topology has been doing. One neat thing is that once you get a Storm cluster up, you can deploy as many topologies on it as you want, so deploying 14 totally separate topologies is as simple as deploying one. That's not at all true, but let's just go with it — it's totally simple. But it really is pretty neat, and not having to set up yet another ZooKeeper to do this is going to be nice.

So: awesome, success, let's move this along. But wait — we're still storing all the feeds in Redis. How do new feed builds happen? Well, Storm has this really cool piece of functionality called a DRPC server — distributed remote procedure calls. It's essentially a really simple thing that gets tacked onto Storm: a little Thrift server that receives requests from anybody, pushes tuples out through a computational topology, and keeps track of everything generated by the topology — there's a bunch of coordination magic — and then the responses come back to the DRPC server, which returns a response to the original client.
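From the calling side it's about this simple — the hostname, function name, and argument here are made up for illustration:

```clojure
(import '[backtype.storm.utils DRPCClient])

;; The web app (or anything that can speak Thrift) asks the DRPC server to run
;; a named function; Storm pushes the argument through the topology and hands
;; back whatever the final bolt produced.
(let [client (DRPCClient. "drpc.internal.example.com" 3772)]
  (.execute client "build-feed" "user:42"))   ;=> the freshly built feed, as a string
```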
So we took DRPC and used it for an additional topology. We built another Storm topology — because the "look up recent actions and build a feed" thing is just another Storm topology — and we actually tacked it onto the original one; it's all just topologies. When somebody comes into the system and needs a new feed built — say they're coming in for the first time — we go look up recent actions, we build their feed, and we return a response to them. But we also, on the side, shunt it over to the original topology and stick it into their feed, so the next time they come into the system they'll have a feed in Redis and won't have to go through the DRPC path at all. Which is great.

Okay, so this is really cool. DRPC is just Storm primitives — it's built on top of Storm, which is pretty insane — and if you go look at the implementation, it's pretty cool. There's a really good blog post about it from Ben Howard, I think his name is; I hope he's here, because I want to shake his hand. It plays nice with the regular Storm topology: you can connect the two together and they just work. I mean, seriously, this is cool. When we got it working I was just like — ermahgerd. Because it really works.

So what does this mean? It means we can build feeds reliably, and we can build them quickly, so we don't have to store all the feeds in Redis anymore. We can just store feeds for active users, which cuts our storage needs down by 95%, because 95% of our users haven't been on recently, right? So, cool: we can shut down these massive Redis machines, we can build a more reliable, more scalable system, and we can easily extend it in the future — we can add machine-learning stuff and additional scorers, and it's pretty clear how we'd make it better.

And yes, it works. We have this in production now; it's doing the right thing. It's remarkably cleaner than the old system, and it's one of those moments where you step back from the system you just deployed and you're really happy with it, which is just pleasant.

Clojure rocks. It was really good for this. It allowed us to add easy parallelism even to the original system, and through Storm the code ended up really pretty clean — it's easy to come into the system and understand what's going on, and there's a clear path to understanding the whole thing. There are libraries available: thanks to Java, when you need a library, somebody's implemented it. It might not be super idiomatic, but it's there. And it's fast and capable — we were reasonably happy with the performance even where we knew we were doing horrible things, so, can't complain.

And people build stuff like Storm. The stuff being built in Clojure — the old abstractions being dredged up from the bowels of computer science, and the new abstractions being built on top of them — is pretty cool. The power to build domain-specific languages, to build stuff like this — I think a lot of people are going to be excited about that in the talks here over the next couple of days, but I'll just put my plug in: it's really neat.

Mostly, I mean. Everything has some failings. Mature, idiomatic library support is a little wanting, though there's a lot of really interesting work going on there. The ClojureWerkz guys —
— I don't know what they're doing, but they have like 40 different libraries, they're all pretty idiomatic, and they're getting to be mature, so we use some of them and we're pretty happy. But you do still sometimes have to fix bugs in the libraries you try to use, which, you know, isn't so bad.

The JVM: totally the worst. It's horrible, it's a pain in the ass — but everything else is even worse, so what are you going to do?

Laziness will totally bite you at some point. I love laziness, huge fan of it, but when you're working with a lot of stateful databases and making a fair number of calls that don't necessarily return a value — they're not pure functions — it's going to bite you. It'll lead to some frustration, but whatever; it's not that big a deal.

Best practices for code organization and polymorphism are in their infancy. We're getting there; it's just a place we need to go.

But it rocks. It's still my favorite language after a significant project's worth of blood, sweat, and tears. The number of times something just worked was refreshing, and Storm is a great example of how Clojure can bring a language to a domain. Seriously awesome. Good times.

Thank you. Thanks to Rich Hickey, Nathan Marz, the Conj organizers, and Rob Zuber — one of our engineers, who also happens to be the CTO of Copious — for letting this happen. And we're hiring, so come talk to me. I'm tvachon on Twitter, travis at copious.com. And that's it — thank you.