Hi, everyone. I'm Paul Dix. I live in New York, and I work at a company called NoMore, which is a search startup. And occasionally I get to hack some sweet, sweet code. So anyway, let's get on to the talk. What do synchronous reads and asynchronous writes actually mean? Well, it means you do your data reads through services. Generally, when you're reading data, users are waiting on it. They've requested it, and you have to get it back synchronously, so that has to happen inside the request-response life cycle. Meanwhile, your data writes you route through a messaging system or a queuing system. You can do this asynchronously, because a lot of the time you just want to store the data; you don't need to get a specific response back to the user right away. It's also about creating loosely coupled systems that are a bit more flexible, because they don't have to contact each other directly when they do things. So one of the things you may be asking yourself is why the hell you'd want to do this. You like Active Record, you like Rails, you like Ruby, and you just want to keep a really simple system. Well, as we all know, Rails doesn't scale. Well, actually, maybe it's the database that doesn't scale. And really, monolithic applications don't scale. Having your entire application inside one code base means that over time, as it matures, it gets really big. It can take 30 minutes or more to run your test suite. And if you have just a simple update, you have to redeploy the entire thing. So if you have lots of traffic (and this could be user traffic, or it could be traffic on your back end, like data processing), or if you have multiple applications that have to share the same business logic or read from the same data stores, or if you have multiple background processes that have to share data and perform things based on when events occur with that data,
or if you have complex business logic that you want to spread out over multiple systems so you don't have everything contained in one place: if you have one of those situations, then you might want to take a services-based approach, which is pretty much what this talk is about. Java developers who work in the enterprise sometimes refer to this as service-oriented architecture. Now, the problem is that service-oriented architecture usually brings along acronyms like SOAP and WSDL. And this is kind of the blood-filled hellscape that I imagine you enter when you do these things. So that's kind of scary. The tools we use here are a bit different. For synchronous reads, it's all about creating RESTful services. With RESTful services, you get all the things you normally expect, like descriptive URLs and taking advantage of the HTTP verbs: GET, PUT, POST, DELETE. For that, I'd recommend Sinatra. I'll get to the specifics of it in a bit, but it's a lightweight web framework built on top of Rack, and really, I'd call it a services framework. Once you've done that, you have to pick a serialization format: how are you going to represent the messages? For that, I'd say JSON. It's really simple, it's easy to use, and there are libraries for it in every single language. Now, I know you can use XML, but XML is bloated and complex. And besides, it makes children cry. So, asynchronous writes. Now, actually, I want to talk about this picture for a second. I tried to come up with a picture to go with the word asynchronous, and this is the best I could do. It's something called an asynchronous electric motor. I don't know what that is. So for asynchronous writes, you actually need a messaging system to write all of your data through. For that, I'd suggest RabbitMQ, which is a powerful messaging system that gives you more than just regular queuing. And finally, you'll need a data store.
One of the advantages of this approach is that you can choose any kind of data store you want. In fact, you can have multiple data stores deployed: where it's appropriate, you can have a SQL database, and in other places you can have one of the NoSQL options that were covered yesterday. So let's get into some more specifics. But first, I want to give you a word of warning. This talk isn't about new applications. It's not about green-field projects. It's about solving existing problems that you have. See, I, like many Ruby programmers, tend to jump on top of new things because, look, shiny. But don't go overboard and don't overthink it. Joel Spolsky calls people who exhibit this kind of behavior architecture astronauts, and I'm going to quote something he wrote in that article. He said: "Sometimes smart thinkers just don't know when to stop, and they create these absurd, all-encompassing, high-level pictures of the universe that are all good and fine, but don't actually mean anything at all. These are the people I call architecture astronauts. It's very hard to get them to write code or design programs because they won't stop thinking about architecture." So don't be paralyzed by your architecture. Remember, your first goal is to build something. Don't be a spaceman. You're not an astronaut. Now, with that out of the way, let's see what it looks like. This is kind of your standard Rails application: you have Rails, and you have your trusty database. And then you add in some background processing. So you have something inside of lib, maybe, and you're using a database-backed queue, so you've got DJ or BJ or one of those J's. And then, of course, you add memcache. And of course, you have a server. And then, to scale up, you add a few more servers. So now you've got three different servers, you've got your database, and you've got a load balancer in front of them. And the question is, where do you go from there?
Like, once you've gotten there, what do you do? Well, maybe you decide, because you heard Ezra or Chris or somebody say that Redis is awesome and it scales forever, to add that in. And then maybe you add a read database just to get a little more performance. And the whole time you're doing this, your Rails application is growing in size and complexity, in just the amount of code it has. And maybe you're adding more background processes, and it makes you want to cry after a while. Again, monolithic applications don't scale. When you want to make a change to something, you have to redeploy this entire stack. So what else can you do? Well, instead, you can break everything into multiple applications, applications we call services. A real-world example will help illustrate the idea. This is something taken from my work. We have to parse and process millions of RSS feeds every single day, and we also have to process data from external sources like PostRank and some other places. And with that comes a bunch of complex business logic. Now, obviously we don't have user traffic, because we're pre-launch. So the traffic comes from these things, and the complexity comes from the different things we have to do with this data. Some of these things are interdependent. Here's an example. If we pull a blog post from online, we have to store the raw content, maybe scrape a summary, check for duplicates, do language identification, do named-entity extraction, classify the content as spam or adult, index it for search, and run some crazy machine-learning voodoo magic, stored in Hadoop for analysis later. Some of these things can run in parallel, while others have to run serially. And others are actually dependent and make decision points based on previous processes. And for a lot of these things we're using different libraries and different languages to accomplish the goal.
So originally we came up with something that looked kind of like this. There's this service-oriented kind of design where we have Rails and we have the database, but then we also have different things for indexing, machine learning, crawling, de-duping, and this isn't even all the different systems that were in place. But you notice the interdependencies between all of them; they're all kind of interconnected. And it becomes kind of a nightmare to manage that. And we're using HTTP and JSON for all of them to communicate with each other. So basically we found two problems with this kind of system. The first is that post traffic is really bursty, and this is true for a lot of things we do online, obviously. We found that there were two spikes in the day, one in the morning and one in the afternoon. And everybody who was creating a service ended up implementing an HTTP interface and then just putting a queue right behind it, so they could accept messages, queue them, and deal with them later. The other problem was that data owners had to notify everybody else. So if I added, say, named-entity extraction, I would have to ask the person who owned the storage of the different entries to notify me when something new came in. So that's two different updates that had to occur. The systems were really tightly coupled, and if one thing failed, it might cause cascading failures across the entire architecture. And tightly coupled things make otters cry. So the idea was to keep the HTTP services for the data reads, which can be optimized and cached and everything like that, while we push the writes through a messaging system, and a messaging system can handle thousands of messages a second without even worrying about it. So for the synchronous reads portion of the architecture, when we're using Ruby, we're using Sinatra by Blake, probably gonna mangle his name,
Mizerany, I think, which is also supported by Heroku. But Sinatra is really awesome because it allows you to do things like this: you can create an entire service in just a couple of lines of code. In this example, we're getting a specific entry, entries with an ID, and we're returning it as JSON. Obviously we don't have error handling and stuff, but you can see that it's really, really short and elegant. So, to call services inside the request-response life cycle: say you have a Rails app and you need to call five different services to construct all the data. You really need to do that in parallel. An example is Amazon: when you hit their web page, you hit about a hundred different services just to render that single page. Google says that when you do a search, you hit about a thousand different servers. To do things in parallel, you have two options: either you can go multi-threaded with native threads, or you can do asynchronous requests. So I created a library called Typhoeus. If you're curious about how to pronounce it, that's it. The only reason I know is because I asked Merriam-Webster online. It has native bindings to libcurl and libcurl-multi. Now, libcurl-multi uses an evented, asynchronous style for parallelism. Here's what it looks like. The main interface is three different classes: the request object, which is what you'd expect it to be, the response object, and then Hydra. Hydra is the thing that manages the connections for you and makes sure they run in parallel. In this example, we have two different requests that we're constructing. By default a request is a GET request, unless you specify otherwise. And then we're queuing them up. They don't actually run when I call queue; it just tells Hydra we're going to run this request at some point soon. And then when I call run, that's actually a blocking call that doesn't return until all of the requests that have been queued have finished. And it runs those in parallel.
The response object is pretty simple. It has four main accessors, and it gets associated with the request. It has code, which is the HTTP response code; body, which is the body; time, which is the response time in seconds; and then the HTTP headers. Now, one of the things you can do is assign an on-complete block to your requests. So say I issue three or four different requests, and in one of them I want to issue another request based on the response I got. Here I'm just taking the body, parsing it, and handing it to this post object. And then I'm saying, okay, whatever the links are on the post, I want to get the first one, and I queue that up. Mind you, while I'm doing this, the operating system is actually managing all the requests I had going. They're still going, even while Ruby is doing this. And when I queue this third request up, it starts right away. So visually, that would look something like this. If you have four different requests that you start at the exact same time, and say this 25-millisecond one on the bottom is the one that kicked off the third request, and that third one takes 30 milliseconds to complete, all of them complete in only 55 milliseconds. So previously, at the bottom, I had post, so I'm returning that post object. The reason I did that is because whatever's handed back from the on-complete block gets put into the handled_response accessor on the response object. One of the other features is memoization. Within a call to run, if you request the same resource multiple times and it's a GET request, it will actually only perform that request once. So if you ran this code where you're making 20 requests to the same thing and you looked at the server, what you'd actually see is that the server got hit once. But the on-complete blocks, if you had them for the requests, would be called for every single one.
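Those timing numbers can be sanity-checked with nothing but plain Ruby threads, the talk's other parallelism option, using sleeps in place of network latency (scaled up 10x here so scheduler noise doesn't swamp the measurement): requests start together, one of them chains a follow-up on completion, and the whole batch finishes in roughly the time of the longest chain rather than the sum of all the requests.

```ruby
start = Time.now

threads = []
threads << Thread.new { sleep 0.40 }  # request A: "400 ms" of latency
threads << Thread.new { sleep 0.15 }  # request B: "150 ms"
threads << Thread.new do
  sleep 0.25                          # request C: "250 ms"...
  sleep 0.30                          # ...whose on-complete chains request D: "300 ms"
end
threads.each(&:join)

elapsed = Time.now - start
# elapsed is about 0.55s (the chained C+D pair dominates); running all
# four requests serially would take about 1.10s.
```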
It also has support for built-in caching. You can set a cache setter and a cache getter on the Hydra object, and it only tries to cache GET requests. So when you make a request, you can set it using the cache key, which is just a SHA hash of the URL, and you can cache the response and set the timeout. The last line, the if cache_timeout bit, just means don't cache it if they didn't actually set a cache timeout on this request object. That's the other thing: on a per-request basis, you can say how long you want to cache it. It also supports stubbing. Because Typhoeus uses libcurl, it doesn't support FakeWeb. So if you actually want to stub out a request, here's how you do it. You create a response object, and then you call stub on Hydra. Here we're saying we're going to stub out a GET request to this URI, and then we're going to return the response object. So if we ran code like this, where we do a request to the URI we just stubbed out, and we have this on-complete block, and we queue it up, and then we run it, what Hydra will do is find the stub, call the on-complete block, and hand it the stubbed response. It also supports pattern matching. So if I wanted to stub out, say, all calls for a specific user on this service, users/ followed by an ID, you could actually stub out all of those. So the final piece of creating a synchronous read system is that you want to create libraries for each of your services, and you want to package them as gems so you can reuse them elsewhere. In our system, each service communicates with the other services via the reads, so we can reuse the code across them. And that brings up the topic of versioning. You have to version your gems along with your services, which can be kind of tricky, but ideally you can run multiple versions of your service in parallel.
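That cache key, a SHA hash of the URL, is easy to sketch in plain Ruby. The exact digest the library uses is an implementation detail; this just shows the idea of deriving a stable key from a GET's URL.

```ruby
require 'digest/sha1'

# Derive a stable cache key from a request URL: identical GETs map to the
# same key, so a cached response can be served instead of re-fetching.
def cache_key(url)
  Digest::SHA1.hexdigest(url)
end
```

Calling cache_key('http://example.com/api/v1/entries/1') always returns the same 40-character hex string, while any change to the URL yields a different key.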
So if you have version one and you decide you want to add some features, you can create version two and run it, but you don't have to upgrade the clients immediately. You don't have to upgrade them all in lockstep. So, asynchronous writes use RabbitMQ. Now, you may be wondering: what about Beanstalk or Resque or Kestrel or one of these other options? And those are fine, but Rabbit supports pub/sub semantics. I can say, I'm interested in these kinds of events, tell me when they happen. And it also has flexible message routing. Those two features allow you to build what I call an event-based system: when certain updates happen, you can kick off calculations elsewhere in your system. Before I get into that example, I want to briefly cover RabbitMQ. It's an implementation of AMQP, the Advanced Message Queuing Protocol, but it's not just a queue. It has a bunch of features, but for the purposes of asynchronous writes, the two things we care about most are exchanges and routing keys. Rabbit has three different exchange types: direct, fanout, and topic. You can view an exchange as really just a message router. It routes messages to certain queues based on what the queues have said they want. An example for us is processing new feed entries. Our general structure is that every time we need to do a data write, we have a fanout exchange called whatever-the-data-is.write. In this case we have three different queues attached to it: one for indexing, one for just storing the value, so that's the canonical data store, and then one for the research index. So we have Solr, then MemcacheDB, and Hadoop for the research index. And because it's a fanout, when the feed-fetching system goes and grabs something, it writes to that exchange, and then it automatically gets written to all three of these queues.
Now, we actually haven't implemented the research index yet, but we can do that without the feed-processing system ever knowing about it. So the second piece is the notification exchange. For that, we use a topic exchange. What that allows us to do is have the key-value store write notifications to the notification exchange when fields get updated or when inserts happen, so that other systems can listen in on these events and do things. In this case we have three different queues: one bound to the exchange with "insert", which just means if a message has this exact routing key, send it to me; a queue bound with "#", which means send me everything; and then a queue bound with "update.#.rank.#", which is kind of confusing, but in RabbitMQ, when you specify what you want to bind with, # means zero or more words. A routing key is basically a set of terms separated by dots. So that binding says: any update that includes the field rank, send to me. Our general structure is update.field1.field2.field3, so that you know, just based on the routing key, which fields got updated. Let's take insert as an example. If we publish a message to the exchange with "insert", it gets routed to both the queue bound with "insert" and the queue bound with "#" for everything. If we send a message with "update.clicks.rank", so we know some value called clicks got updated and some rank value got updated, then it gets routed to these two queues: the one that's interested in rank, and the one bound to everything. One other thing I thought of that you could do with this is route your error logging through the system. If you have multiple boxes and you want a centralized error-logging system, you could create an exception exchange for error logging, and the routing key would be some hostname, dot, and then the source, like the process.
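Those binding rules ("#" matches zero or more dot-separated words, "*" exactly one) can be mimicked in a few lines of Ruby. This is a sketch of Rabbit's matching semantics for illustration, not how Rabbit actually implements routing:

```ruby
# Does an AMQP-style binding key match a routing key?
# '#' matches zero or more dot-separated words; '*' matches exactly one.
def topic_match?(binding_key, routing_key)
  match_words(binding_key.split('.'), routing_key.split('.'))
end

def match_words(pattern, words)
  return words.empty? if pattern.empty?
  head = pattern.first
  rest = pattern[1..]
  case head
  when '#'
    # '#' may consume zero or more words: try every split point.
    (0..words.length).any? { |i| match_words(rest, words[i..]) }
  when '*'
    !words.empty? && match_words(rest, words[1..])
  else
    words.first == head && match_words(rest, words[1..])
  end
end
```

With the bindings from the talk: "update.#.rank.#" matches both "update.clicks.rank" and "update.rank", but not "update.clicks"; "#" matches everything, including a bare "insert".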
So this is like an Amazon EC2 internal hostname, and the process is, say, the feed fetcher. If I wanted to log all the exceptions, and maybe send out email alerts or text messages for anything that the feed fetcher threw, I'd just bind a queue with "*.feed_fetcher". In Rabbit, * is not quite like a regex star: it means exactly one word has to be in this position. And then if you wanted to log everything, you'd just bind with "#". So, there are some different libraries for Rabbit in Ruby, and a lot of good options. The two I'd recommend are AMQP by Aman Gupta, which is an EventMachine-based library, and Bunny by Chris Duncan. He actually took a decent amount of the protocol code from AMQP and turned it into a synchronous library. Here's how you use Bunny. First you create a connection to the server, and then you start it. If I want to publish messages to an exchange, I create the exchange, I name it exceptions, I specify the type as topic, and I specify that it's durable, which means it should survive server restarts. And then I can just call publish with whatever the message is, along with the routing key. If I want to log things off of a queue, say create a queue that gets everything: here I create a queue, I call it exceptions.logger, and then I bind it to the exceptions exchange with the key "#". And the last bit, queue.subscribe, is a blocking call that just waits for messages on the queue. When one comes in, it hands it to the block, and there I'm just logging the payload. You can also access the routing key and some other things there. So, some considerations for when you're creating an async-write system. One of the things it doesn't give you is name uniqueness. If you have something like GitHub user account names, where it's github.com/pauldix, obviously you need to enforce uniqueness for that value.
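Put together, the Bunny usage just described looks roughly like this. This is a sketch against the current Bunny API, which has changed since this talk was given (channels are explicit now), and it assumes a RabbitMQ broker on localhost, so the connecting code is wrapped in methods rather than executed here.

```ruby
# Publish an error message to a durable topic exchange named 'exceptions',
# keyed hostname.process as described above. Assumes the bunny gem and a
# RabbitMQ server on localhost with default credentials.
def publish_exception(message, routing_key)
  require 'bunny'
  conn = Bunny.new
  conn.start
  channel  = conn.create_channel
  exchange = channel.topic('exceptions', durable: true)
  exchange.publish(message, routing_key: routing_key)
  conn.close
end

# Bind a queue to everything on the exchange and block, logging payloads.
def log_all_exceptions
  require 'bunny'
  conn = Bunny.new
  conn.start
  channel = conn.create_channel
  queue   = channel.queue('exceptions.logger')
  queue.bind('exceptions', routing_key: '#') # '#' means send me everything
  queue.subscribe(block: true) do |delivery_info, _properties, payload|
    puts "#{delivery_info.routing_key}: #{payload}"
  end
end
```

With the logger running, publish_exception('timeout fetching feed', 'ip-10-0-0-1.feed_fetcher') would show up in the log along with anything else published to the exchange (the message and hostname here are made up for illustration).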
And since you're just writing to a queue, you don't know whether or not something's unique. You can kind of hack around that by creating, say, a locks service. In this case, you do a GET to locks/names/pauldix, and that would grant me a temporary lock on that name. Then I could write it to the queue and it would go through. It also doesn't support transactions. If you need transactional semantics, you pretty much need to have a database, but you can have that alongside this kind of system as well. Async writes introduce the idea of eventual consistency, and that started with Eric Brewer's CAP theorem. In 2000, he gave a talk where he discussed the relationship between three different requirements when you're building distributed systems: consistency, availability, and partition tolerance. Consistency means that an operation either works completely or it fails; in the database world, we refer to that as atomic. Availability is pretty self-explanatory: it's just whether or not a service is available to serve requests. Partition tolerance comes into play when you have multiple systems and you're replicating data. A partition occurs when two systems lose their connection with each other, which could cause one of the systems to fail. Formally defined, partition tolerance means that no set of failures less than total network failure is allowed to cause the system to respond incorrectly. The CAP theorem states: of these three things, you can pick two. So Werner Vogels, the CTO of Amazon, brought up the idea of eventual consistency, which is that you can have partition tolerance and availability if you accept eventual consistency. It's kind of a weak form of consistency, which states that if no new updates are made to an object, eventually all accesses will return the last updated value.
So basically, say you have two storage systems, storage A and storage B, and they're replicating data between the two. In a strongly consistent environment, if you write data to storage A, that data write doesn't succeed until the replication to storage B has actually occurred. So if you create a partition and break the connection between the two, data writes to storage A will actually completely fail. Now, in an eventually consistent environment, the data writes to A can succeed, and later on, when the connection is restored, the data is replicated over to storage B. But in the meantime, if you make a read request to B, you might get stale data. So really, synchronous reads and asynchronous writes is about trade-offs. You lose strong consistency. You lose iteration speed: it's not going to be as fast to create applications using services as it is to whip up some tests and do some Active Record design real quick. You have to do a little more up-front work. If you're doing services, you have to create the service itself, you have to create the worker that's going to read off the queue and save the data, and then you have to create the client library that can call out to the service. What you gain is scalability, both in terms of how large your application code base is and how easy it is to manage, and in terms of actual traffic, plus loose coupling, which I think is one of the real benefits. For instance, next week when I go back to work, I have to implement the system for trending stories, and I can implement it by just binding a queue to the notification exchange, without telling anybody else about it. I can create it completely without anybody else having to worry about it or change their code or run any tests. So it's also about single-purpose services:
creating small, bite-sized things that can be contained, so that you can test them in isolation and deploy them in isolation. I think if you realize it's about trade-offs, then services and Ruby and Rails apps can actually be friends. So finally, a little advertising. My website's pauldix.net. Right after the talk, I'm posting links to additional reading on all this stuff, if you're interested. My GitHub's pauldix. My Twitter's pauldix. And also, I'm writing a book for Addison-Wesley that's kind of about this stuff, called Service-Oriented Design with Ruby and Rails. And it looks like I have time for questions. [Audience] You're talking about having services that do specific things, small applications that are your services. What if you have a wide breadth of data that's all tightly coupled, but you want to split the functionality into smaller chunks? Is there a good way to do that? Right, so there are a few different methods you can use. The question was: is there a good way to break out pieces of data that may be tightly coupled, or have strong relations between each other? Is that correct? So, a few of the strategies. Eventually you're going to have to perform joins across services, and that's one of the reasons Typhoeus supports memoization. In our example, we have blog posts. We get a list of IDs for those blog posts, and they have source IDs, but we don't have the sources actually in there, so we have to make individual requests for those sources. And the memoization makes sure we don't duplicate those requests. But as for the different methods: one is you can partition on logical function. You can say, okay, this is an indexing and storage system, so it's going to go over here. You can also partition based on read or write frequency. Something that's updated really, really frequently might need a different type of data store than something that's read very frequently.
And I guess one of the other things you can do is try to minimize joins. Take Facebook, for example: say you design it so that you have a user system that stores the users, but then you also have a stream service, so that when I write some update saying I'm giving a talk, it goes out to all my friends. If you actually have a system where you're replicating the data, you're denormalizing it, you're sending it out to everybody. So if I write that to the stream service, it would make sense to keep the friendship graph inside that service, because it's going to need to access it a lot. You want to minimize the cross-service joins. You will have to do them, but if there are certain joins you know you're going to be doing very, very frequently, you can keep them all within the same umbrella. So the question was whether doing the writes through the messaging system aligns with the concept of making denormalization easier. Basically, yeah: is it so that you can do some work to make your reads have more data or perform better? And yeah, it would be. If there are aggregate statistics that you need to calculate, that you know you want to store, you can calculate those as the data writes come in. Obviously a lot of this you can do with Active Record callbacks or code within your Rails app, but the problem is you're talking about more database access, writing to your database more, and just making your code base bigger and more complex. So one of the other things you could do is, say you wanted to do write-through caching for something you know they're going to read all the time: you can pull in the write, write it into memcache, and also write it elsewhere into your data store. So right now we're actually not that worried about it, because we keep state elsewhere.
So if we actually have a failure, we can just restart it. Sorry, the question was: how do you handle queue failure, since RabbitMQ does not support replicated queues? Queues basically live on a single box. I guess there are two options. We take the we-don't-care option, which is that our state is elsewhere: if the queue server fails, we can just kick everything back off, and we won't lose anything. The other is you can use what they suggest, something like Linux-HA, high-availability Linux, to create hot-failover servers. So it really depends on how much investment you feel is necessary. I mean, they do have persistence, so if the server actually fails, the messages will be there on restart. So the observation from the audience was that, depending on how responsive you need your reads to be, HTTP and JSON can be kind of a heavyweight, slow option. I guess there are two responses to that. One: yeah, you don't even have to use HTTP and JSON if you don't want to. You know you're writing into your data store, so if you're writing into MemcacheDB, you can just say, if you want to read this data, you have to read it directly from the data store. The other thing is, generally when you do services, the goal is to create systems that scale linearly. So as long as the request completes within, you know, 10 milliseconds, then you don't care. Okay, it'll take a little more processing power to parse the JSON, but you can just add more servers. So I guess it depends on what your tolerance is. And then obviously, Tom Davis talked on the first day about BERT and Ernie, and there are Protocol Buffers and Thrift. But I prefer the simplicity of just using HTTP and JSON.
And the other thing is, we have a polyglot environment, so it's easier for everybody to just make requests to an HTTP service and parse the JSON than to make sure everybody's individual libraries are operating the way we expect them to, like a database library or whatever. And doing it through HTTP, with just GETs, kind of enforces the fact that you're doing only reads, whereas if you're connecting to the database, people could do writes and circumvent the actual writing system that you created. Way back. Yeah? [Audience] We've found, when we've tried to push Rabbit to its boundaries doing topic exchange stuff, that it was exploding and using 100 percent of the CPU for what I thought was pretty simple routing. So yeah, one of the problems is that as you add more routing keys to a single exchange, it makes it harder for it to do the lookup. That's why we're actually creating multiple exchanges. Sorry, the question was: what have you found about the scaling properties of RabbitMQ? He had problems where topic exchanges were topping out the CPU when he was routing a lot of messages. So yeah, we have separate exchanges for each individual thing to keep the key space small, but right now we're routing, I don't know, probably about 10 million messages a day on a single server, and it isn't even breaking a sweat. [Audience] Yeah, we moved to the direct exchange, which avoids that problem, and we expect to have something like 10 to 1,000 routing keys, and so far it's been working that way. Right. [Audience] So given those limitations, it just seems like clustering might be hard to work with. Right, well, the clustering is actually good for exchanges, because if you create a cluster, the exchanges get replicated across everybody in the cluster. The queue lives on one server, but the routing functionality lives on everybody in the cluster, so you can spread that out. Yeah.
[Audience] So do you use this to replace the data layer on your Ruby on Rails application? The question was whether we use this to replace the data layer on our Ruby on Rails app, and we do in certain spots. We're still iterating quite a bit, so in some areas it's easier to just have the database and Active Record, particularly with the user-facing stuff right now, because we don't have to deal with scaling that. But with everything else, where it comes to pulling stuff in from online, there's no database interface; we've actually created libraries, packaged as gems, where you call out to those different services. Yeah? [Audience] I think you also said that a single queue lives on a single server, with its writes behind it. If that server goes down, won't you lose writes? So the question is: if the Rabbit server goes down, since a queue lives on a single server, won't you have lost writes? Now, messages in Rabbit have a few different modes of operation. You can specify non-persistence, which is what we do, because we know that if the server goes down, we're not going to lose anything; we have the data elsewhere. The other option is you can actually mark messages as persistent. It'll reduce your performance a little bit, but again, I've found Rabbit so far to be insanely performant. [Audience] Does it persist on all the queues in the cluster? No, it persists on the single one, yeah. Right now, queues live on a single box even if you have a cluster. So if you persist and the system goes down, you need to bring it back up to get those messages back. They won't be lost, but yeah. [Audience] There are ways around that as well. You can actually have three different queues sitting on three different nodes, and then you have to deduplicate when you consume.
Right. Could you summarize that? So what Trotter said was: at his work, what they do is they have three different queues. If they have a cluster of three different Rabbit servers, they have a queue on each server that gets the exact same message, and then they worry about deduplicating the messages later on, so that if one server goes down, the other two servers have the exact same queue with the exact same messages. [Audience] Yeah, and if you use something like HA Linux, you can fail over in only a couple of seconds. Oh, that was your comment, okay. Looks like I'm out of time, but I'll be around. All right, thank you. Thank you.