Thank you, everyone, for showing up today and for giving us your time. We're here because we are all struggling with scale. Businesses are increasingly challenged to scale to additional geographies, to more users, and to greater uptime, particularly in the electronic trading industry. You're being asked to scale out your trading operations, your listing operations, and your market data products. What you may not realize is that in order to take advantage of the scale that cloud service providers can offer, you have to be able to move your data from a classic finance data model into a more cloud-based, web-style data model. Cloud service providers, as you already know, are going to be critical enablers of that scale; the challenge is how you get your data from where it is now into a cloud model. You may be facing a scenario today where your data model is so fragile that even a small change, like an additional data point, turns into a time-consuming development effort. What we want to talk about today is how you can run a more robust data operation, one where a small change like the one I just described, or even a big one, is handled as a business-as-usual event. Changing your data model should be business as usual, because in practice it is already a requirement you encounter every day.

We're going to talk about three things today. First, two different data modes, or two different data worlds. Next, an approach for bridging those worlds, for bringing data from one of them to the other. And finally, how that approach has served us in practice. Sal is going to start by talking us through the two market data paradigms.

Thanks, Matt. Broadly, we're going to talk about a few cultural stereotypes: Wall Street versus Silicon Valley, East Coast versus West Coast beef. One way to describe the Wall Street ethos is: if the system goes haywire, we'll be insolvent in 15 minutes. Things like this have happened; in 2012 a market-making firm lost $440 million in 45 minutes because of a DevOps issue. So this is real. Move slow, don't break things, the inverse of the traditional Silicon Valley ethos, and it takes ten years to get an industry-wide audit trail rolled out. Stability, reliability, determinism: these are the cultural pillars of Wall Street, stereotypically. Silicon Valley is a little different: if this thing can scale, we'll be cash-flow positive in six years; we just closed our $500 million Series D, or whatever. Move fast, break things. Growth and accessibility are the priorities behind Silicon Valley.

That also leads into the tech stereotypes we observe in both cultures. Wall Street favors low-context encodings, meaning you can't really determine what's in a payload from the payload itself; without a schema definition or a mapping you just have a bag of bytes, because the messages that go through are about 40, 50, 100 bytes long. There's a preference for physical co-location; racking and stacking servers still happens, and in many cases you have to co-locate with the exchange. Everything is optimized for transaction throughput: how much can we squeeze out of our existing expenditure? And there tends to be an overarching focus on, and demand for, precision.
On the other hand, Silicon Valley favors high-context encodings. JSON is the perfect example: just fling curly braces down the wire as much as you want, even if it doesn't mean anything. Metered computing, cloud computing, these let application developers innovate much more quickly; developer throughput is really the priority in Silicon Valley. And there's a lot of tolerance for ambiguity, because in a startup culture there's tons of ambiguity, so it gets rolled into the culture, as it were.

Bringing it back to exchange data: if you want high-performance market data directly from the exchange right now, there are really two ways to provision it. The first requires standing up a lot of servers, getting a cabinet, and programming to a PDF spec against multicast protocols where packets are just blasted at you indiscriminately. That's the essence of what people call high-frequency trading, direct exchange connectivity, direct market access, and all of that data comes in real time from the matching engines. Your other option is that at the end of the session the exchange gives you a replay of all the packets sent during that session, a packet capture file, delivered over secure file transfer. Both distribution types carry exactly the same message content; it's only delivered in a different way. What Matt's going to tell us about now is how we approached the problem of getting that data into a cloud-native format.

Thanks a lot, Sal. Sal just walked us through how, in practice, you actually get your hands on really high-resolution market data: usually via an end-of-day binary log or via that real-time colo model. But increasingly in the electronic trading industry, and in financial services generally, you're hitting requirements where you need to bring data from that model into a new one. For example, maybe you need to get price messages to a web front end. How do you actually do that? Or say you want to get a full-depth-of-book feed for a one-week period into a data warehouse so you can do analytics on it. How do you do that? When you're asked to do that, you're really being asked to move a message from one of these little Rubik's cubes to another. Say you have to bring a price message from a feed like Sal just described to a mobile device. You're being asked to take a BBO message that's coming over UDP in SBE encoding and keep the message the same, still a BBO message, but deliver it over WebSockets, in JSON. You're being asked to move it from the little black call-out to the little yellow call-out.

An interesting thing here is that we're not talking about transformation. We're not talking about ETL; if this were ETL, nobody would be in this room right now. We're talking about something different, really non-transformation: we keep the message the same, the contents and the meaning of the message the same, but we lift it from one encoding on one transport and move it to another encoding on another transport.
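To make that concrete, here is a minimal sketch, with a made-up field layout rather than a real SBE schema, of the same best-price information expressed once as a low-context binary record and once as high-context JSON:

```python
import json
import struct

# Hypothetical, simplified BBO layout for illustration only -- not a real SBE schema:
# uint16 msg_type | char[8] symbol | int64 bid (price*1e4) | int64 ask | uint32 bid_qty | uint32 ask_qty
BBO_LAYOUT = struct.Struct("<H8sqqII")

bbo = {"symbol": "XYZ", "bid": 101.25, "ask": 101.27, "bid_qty": 400, "ask_qty": 700}

# Low-context encoding: a 34-byte record that means nothing without the layout above.
wire_bytes = BBO_LAYOUT.pack(
    1,
    bbo["symbol"].encode().ljust(8),
    int(round(bbo["bid"] * 1e4)),
    int(round(bbo["ask"] * 1e4)),
    bbo["bid_qty"],
    bbo["ask_qty"],
)

# High-context encoding: the same information, self-describing and several times larger.
wire_json = json.dumps({"type": "bbo", **bbo})

print(len(wire_bytes), len(wire_json))  # roughly 34 bytes vs ~90 characters
```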
So we're carrying the same information, just across these two data worlds. We've thrown some terms around: SBE, BBO messages, messages coming over WebSockets or UDP. Let's organize our thinking a little more. As we worked on these kinds of problems, we ended up creating abstractions that we found very useful in solving the problem of not transformation but transcoding. These are all terms that get thrown around, but the fundamental concept, the fundamental abstraction, is the message. A message is basically any payload: it could be an API response body, it could be a row coming back from SQL. It identifies an instance of a data model, an instance of a schema. The schema defines the data model; it defines the semantics that a particular message conforms to. Take the BBO, best-price, messages Matt was referring to: a best-price message is going to have at least a symbol and a price, and potentially a quantity. That gets defined in a schema, preferably machine-readably, so that you can do downstream automation. The encoding is essentially the presentation layer: how the message is interpreted by the ultimate consumer or producer. Those are things like FIX, JSON, SBE, or YAML. And the transport is the input/output medium the messages flow through. A single message has these three characteristics, and any of the three can be hot-swapped without changing the semantics of the underlying message.

Thanks a lot. So really what we're saying is that in this industry you're increasingly going to need transcoder applications, because you're going to hit use cases where you have to carry the same information from one transport and encoding over to another. You may or may not change the schema while you do that, depending on the use case. But the point is that you need a transcoder to move information from one of these Rubik's cubes to another, to take a BBO message and move it from a certain transport like UDP and a certain encoding like SBE over to another like WebSockets and JSON. You need some application to do that.

Why do you need an application to do that? To be a little more concrete: cloud services, we believe, are going to be critical enablers of scale. You're going to need to scale out geographically and reach more users, and cloud is probably part of your strategy for doing that. But cloud APIs expect a different transport and a different encoding from the colo environment Sal was describing, and we find that in our industry, the cloud industry, maybe too many people wave their hands around that part. It's "take your full-depth-of-book feed and just get it into an analytics warehouse." Okay, well, how? These messages don't turn themselves into JSON. They don't turn themselves into Avro. They don't magically jump from UDP to HTTP. And that's really the problem.
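As a rough sketch of those abstractions (the names below are illustrative, not the actual classes in our open-source tool), a transcoder is just a loop that composes the three:

```python
from dataclasses import dataclass
from typing import Iterable, Protocol

# Illustrative names only: a sketch of the message / schema / encoding / transport abstractions.

@dataclass
class Message:
    msg_type: str   # which schema definition this payload conforms to
    fields: dict    # decoded field names -> values (the semantics)

class Encoding(Protocol):
    def decode(self, payload: bytes, schema: dict) -> Message: ...
    def encode(self, message: Message, schema: dict) -> bytes: ...

class Transport(Protocol):
    def read(self) -> Iterable[bytes]: ...        # e.g. multicast socket, PCAP file
    def write(self, payload: bytes) -> None: ...  # e.g. WebSocket, Pub/Sub topic

def transcode(schema: dict, src: Transport, src_enc: Encoding,
              dst: Transport, dst_enc: Encoding) -> None:
    """Carry each message across transports and encodings without changing its meaning."""
    for raw in src.read():
        msg = src_enc.decode(raw, schema)        # bytes -> semantics
        dst.write(dst_enc.encode(msg, schema))   # same semantics -> new bytes
```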
So we're proposing that as you develop transcoder applications, or as you use the one we've open sourced, you keep these three separate concepts of transport, schema, and encoding in mind, because those concepts let you isolate different parts of the code and develop them independently. As a result, you can end up with a ticker plant in the cloud whose logical components are neatly separated.

Next, let's look at the life of a transcoded message. This is really the essence of how you do this algorithmically. In the case of electronic trading on a multicast network, a packet comes in and you have to strip any extraneous bytes from the payload. If you're reading a UDP packet, or parsing one out of a PCAP file, you might have 12 bytes of framing that carry no meaning for you and need to come off first. So the first step is to isolate the message payload from the inbound delivery transport. Once you have that byte-array payload, you have to identify it: what is this? What makes it more than an array of bytes? This is where machine-readable schemas come in extraordinarily handy. By convention, a particular byte offset tells you the type of the message; the message type is another abstraction that's core to this approach. Once you've identified the type, the machine-readable schema tells you the byte offsets of the fields. Applying the encoding to those bytes is what gives them meaning, and that generally happens in some host programming language like Python or C; in C the trick is that you map a struct onto the byte array and you pretty much have something you can use in the host language. Then it's a matter of knowing what your output encoding and output transport are going to be: inside the host language you re-manufacture, losslessly, a new serialized representation of the object in the destination encoding. Finally, you send the transcoded message out the other end over the outbound transport, which could mean writing to a file or sending it over another network. These are abstract algorithms and metaphors that we found extraordinarily useful while we were building something concrete.
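Here is a minimal sketch of that loop, assuming an invented 12-byte framing header and a made-up single-message schema rather than any real exchange protocol:

```python
import json
import struct

# A minimal sketch of the "life of a transcoded message", using a made-up layout:
# assume 12 bytes of transport framing, then a 1-byte message type, then the body.
FRAMING_BYTES = 12
SCHEMA = {  # hypothetical machine-readable schema, keyed by message type
    0x42: {"name": "bbo",
           "layout": struct.Struct("<8sqq"),
           "fields": ["symbol", "bid_price", "ask_price"]},
}

def transcode_packet(packet: bytes) -> str:
    payload = packet[FRAMING_BYTES:]                 # 1. isolate payload from the inbound transport
    msg_type = payload[0]                            # 2. identify the message type by byte offset
    defn = SCHEMA[msg_type]                          # 3. look up its definition in the schema
    values = defn["layout"].unpack_from(payload, 1)  # 4. apply the encoding to the raw bytes
    record = dict(zip(defn["fields"], values))
    record["symbol"] = record["symbol"].rstrip(b" ").decode()
    return json.dumps({"type": defn["name"], **record})  # 5. re-manufacture in the output encoding

# 6. The outbound transport is whatever you need: a file, a socket, a Pub/Sub topic.
```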
So Matt's going to show us a little about what you get when you approach things this way. Thanks a lot for that. This slide shows that after a message is transcoded, you need to put it somewhere, and that in basically any cloud service there are infrastructure abstractions that map perfectly onto the messages coming out of the transcoding process. Imagine you have a schema and a transcoder that lets you bring your own schema, a transcoder that accepts a message schema as a parameter. The good news is that your transcoder is now smart enough to decode any feed or any file it encounters, because you can bring your own schema. You pass it a schema, it responds correctly, it pulls out messages. But now that it has pulled out those messages, where does it put them?

If these decoded messages are bound for a table, the table also needs a schema, and that schema needs to be completely aligned with the message itself. So what we're proposing is that if you're really going to work with market data and take advantage of these cloud-native sinks, a data warehouse, a topic on a messaging bus, you want to embrace the idea of infrastructure as schema. We've all heard buzzwords like infrastructure as code, and infrastructure as data, where you start thinking of your infrastructure almost as a Kubernetes-style list of objects. What we're saying here is that if your schema is enough to inform your transcoder about what messages it might encounter, it should also be enough to inform your cloud service about what infrastructure to stand up. If you have something like an order-cancellation message, then in one atomic operation you should be able to tell your transcoder that that message exists and here's what it looks like, and also tell your cloud data warehouse that that message exists, here's what it looks like, and please spin up a corresponding table.
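As a hedged sketch of what that atomic operation could look like against BigQuery (the field names, table path, and schema structure here are hypothetical; only the google-cloud-bigquery calls are real), the same definition that drives the transcoder can also drive table creation:

```python
from google.cloud import bigquery

# "Infrastructure as schema" sketch: the machine-readable definition that tells the
# transcoder what an order-cancellation message looks like also tells the warehouse
# what table to stand up. Field names and the table path are hypothetical.
MESSAGE_SCHEMA = {
    "order_cancel": [
        ("timestamp", "TIMESTAMP"),
        ("symbol", "STRING"),
        ("order_id", "INTEGER"),
        ("canceled_qty", "INTEGER"),
    ]
}

def create_table_for(msg_type: str, table_path: str) -> None:
    client = bigquery.Client()
    fields = [bigquery.SchemaField(name, kind) for name, kind in MESSAGE_SCHEMA[msg_type]]
    client.create_table(bigquery.Table(table_path, schema=fields), exists_ok=True)

# e.g. create_table_for("order_cancel", "my-project.market_data.order_cancel")
```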
Oh, sorry, this slide is mine, my bad. Okay, this slide is about the point of getting your data into a cloud environment in the first place. So far we've been talking as if it's a given that you want your financial data in the cloud, but it may not be a given for you. So why do it? From my perspective there are two big reasons, not the only two, but my two favorites. The first is that there's a really rich and performant set of analytical tools available in cloud environments now, and they don't require you to take on a lot of additional operational overhead; it can be difficult to make sense of this really dense tick data in any other kind of environment. The second is that distribution in a cloud environment, particularly one with a global backbone, may really surpass the global distribution capabilities this industry has today. Cloud, I think, is increasingly competing with some of the traditional global networks. So if you get your data into a cloud environment via the transcoding routine we've discussed, you can do more analytics with fewer people, faster, and an exchange in Chicago can reach participants in Singapore, Hong Kong, and Australia almost trivially, without any additional infrastructure. Those are the compelling reasons for doing it.

Thanks, Matt. Now we're going to talk in more concrete terms about how we saw this problem as a meta-obstacle to wider cloud adoption of electronic trading data. As we worked through these problems and watched them come up again and again, we started thinking: what would make this easy? What we've noticed about similar tools is that a firm will go to, say, an ITCH protocol spec and implement it only to the point where they've accomplished what they need, and sometimes that even gets open sourced. There's a lot of excellent software out there that does that, but nothing that really expands it to a wider set of abstractions.

So what we did, a few weeks ago I believe, was release what we're calling the Market Data Transcoder on GitHub, and we're going to show you a little of what it looks like at the user surface and how these abstractions bleed into it. Essentially there's a set of parameters; our nickname for this tool is "FFmpeg for market data," FFmpeg being a video transcoding tool whose list of options is like the menu at The Cheesecake Factory: endless. Because you have a lot of input options, output options, and encoding options, that bled into what you see here. The options you set are the source file (where is this data coming from?), the source format type (is it line-delimited or length-prefixed?), the destination it's going to, and the schema file to use; if nothing else, you need to supply the schema file, and if you're not reading from an input stream, you specify some input identifier. Then you define the output type; in this case we're showing an output type of BigQuery. Through this process the transcoder reads the schema file and actually manufactures the equivalent schema in BigQuery. We learned a lot of practical lessons here. One of them is that when your data contains only market data incremental refreshes, you don't want to create the entire FIX 5.0 schema graph inside BigQuery. That led to an option called message type inclusions, for when you have only one message type in the data you're looking at but your schema contains a much broader library of messages. That's one of the ways the real world bled into the options we offer in this tool.
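To illustrate why that option matters, here's a small sketch (the schema layout and function are illustrative, not the tool's actual internals) of filtering a broad schema down to only the message types you will actually see:

```python
# Illustrative only: a toy stand-in for filtering a schema by message type inclusions.

def included_definitions(schema: dict, inclusions: set | None) -> dict:
    """Keep only the message definitions named in `inclusions` (or all of them, if None)."""
    if inclusions is None:
        return schema
    return {name: defn for name, defn in schema.items() if name in inclusions}

# A full FIX 5.0 schema graph defines a very large library of message types...
full_fix_schema = {
    "NewOrderSingle": {},
    "ExecutionReport": {},
    "MarketDataIncrementalRefresh": {},
    # ...many more in practice
}

# ...but a feed of incremental refreshes only needs one table in the warehouse.
to_create = included_definitions(full_fix_schema, {"MarketDataIncrementalRefresh"})
```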
And I think Matt's going to tell us a little more about one of the obstacles we encountered. Thanks a lot, Sal. So Sal brought us through the user surface, and behind that, of course, is the source code. Originally we had this dream that the application could be insulated from any protocol-specific logic if we just used those three abstractions of schema, transport, and encoding. We really thought that if we got this right, you should be able to bring any protocol and the user should never have to write code. Of course, at a certain point that dream met reality. We encountered at least one edge case that required protocol-specific logic, and it was in the ITCH protocol. For certain flavors of ITCH, I can't remember exactly which version, but we were working with a particular exchange on a particular ITCH version, there is no complete timestamp on each message. A given message only carries a fragment, the nanosecond portion of a timestamp, and separately you have to read "current second" messages and keep track of the prevailing second. We were really hesitant to add any protocol-specific logic, right up until we got the transcoder dumping into BigQuery and realized we had a bunch of messages carrying nanosecond values only. We couldn't actually arrange them sequentially, and therefore we couldn't really tell anyone the result was useful. What we realized was that we still wanted users to be able to stay out of the source code, so we introduced the idea of a message handler class: message-specific logic that, for ITCH, tracks the prevailing second, combines it with the nanosecond fragment, and puts a genuinely meaningful timestamp into BigQuery. The good news is that we wrote that message handler, so if a user clones the repo today and wants to decode an ITCH file, they can do that just by passing that handler as an option. And if other hiccups like this come up in the future, contributors can write their own message handlers and users can pass them in.

One thing I would add: it was very important to us to transcode losslessly by default, to not lose any data coming in through a depth-of-book feed and not to conflate data. We really wanted to stay one-to-one, but in this case, as Matt said, the data is absolutely useless from a SQL perspective without the full timestamp. So we gave the handler the ability to add an extra column, an extra field, representing the complete timestamp. I'd say we kept the spirit of losslessness even though we had to add a little extra data so it could be useful in other contexts.

And I'd add one thing to that: the lossless idea is really important. We've been describing this as non-ETL, non-transformation; the transcoder tries to change nothing and to get every record as literally as possible into a sink. That sink might be an Avro file, which we support; it might be a BigQuery table; it might be a Pub/Sub topic; and the community can add additional ones. The point is that we're trying to get a direct, message-by-message copy in, because we think it's in subsequent operations that you go after insights, for example using dbt to execute SQL, and so on.
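To give a flavor of what that handler does (a simplified sketch of the idea, not the code in the repo), it keeps the prevailing second from the "seconds" messages and stamps every other message with a full timestamp built from that second plus the message's nanosecond fragment:

```python
# Simplified sketch of the timestamp message handler idea, not the repo's implementation.

class TimestampHandler:
    """Track the prevailing second and add a complete timestamp field to each message."""

    def __init__(self) -> None:
        self.current_second = 0

    def handle(self, message: dict) -> dict:
        if message["type"] == "seconds":
            # The feed periodically announces the prevailing second since midnight.
            self.current_second = message["second"]
            return message
        # Every other message only carries a nanosecond fragment within that second.
        nanos_since_midnight = self.current_second * 1_000_000_000 + message["nanoseconds"]
        # Add (not replace) a field, keeping the spirit of lossless transcoding.
        return {**message, "timestamp_nanos": nanos_since_midnight}

handler = TimestampHandler()
handler.handle({"type": "seconds", "second": 34_200})  # 09:30:00
row = handler.handle({"type": "add_order", "nanoseconds": 125_000, "symbol": "XYZ"})
```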
Okay, I just want to recap what we've gone over today. First, we talked about how there are two data modes, two data worlds, which we've been somewhat clumsily calling Wall Street and Silicon Valley. Wall Street tends to favor lower-context encodings and stability; Silicon Valley is more about high-context encodings and the ability to scale. Increasingly, we think, you're going to be asked to move data from the classic finance world into a more web-based or cloud-based one. An approach that will help you do that is to keep these three ideas of schema, transport, and encoding at the center of your development. And finally, in practice we found that those ideas really were useful; there were moments where we had to depart from them a little, but they genuinely helped us build our application. Of course, we'd love it if you would check out that application; it's on GitHub, and it's called the Market Data Transcoder.

There's one thing I want to leave you with. You may find this all a bit daunting, or maybe not, I don't know. You may be thinking that transcoding, keeping these ideas in mind, and generally moving your industry into the cloud is all somewhat disorienting. You may be encouraged to know that you wouldn't be the first to do this. About 150 years ago, Julius Reuter noticed that there was a gap between two telegraph networks. The network edge ended at Aachen in Germany, there was a 75-mile gap, and then the network picked you back up in Brussels. As a temporary solution, while the Reuters organization waited for that gap to be closed, they employed carrier pigeons. I'm not sure how much they were paid, but they were employed. They used carrier pigeons to take the prices at one edge of the network, put the information in an envelope, fly it to the other network edge, and put it back onto the telegraph. If you think about it, 150 years ago those concepts were completely intact. The transport was going from telegraph to carrier pigeon and back to telegraph. The encoding was going from Morse code to French cursive and back to Morse code. But the messages remained the same. Those ideas were helpful then; they're absolutely critical now. We hope you found this helpful, and if we have time, we're happy to take questions.

Two things; I'm interested that you don't approach the idea of conflation. Typically I would expect you to conflate, say, take one update per second, when an exchange is sending thousands of messages a second. You don't address that conflation; you assume that before it goes into your transcoder, the whole thing is intact? Well, the way we actually approach it is conflation be damned: this tool is prepared to take unconflated data. Actually, the motivation for the tool was that we needed unconflated data; we didn't want conflated data, we wanted lossless source data. Now, if you look at the high-performance iron and networks in co-location, you're not going to compete with that on a consumer broadband connection, so there are downstream tactics you may need, like sharding across more symbols, or figuring out what your output channels are and balancing them. But in the use case we're imagining, conflation would be downstream from the output of the transcoder. There's one option we're considering that would let you shard the output based on dimensions in the source data, say security ID. A channel coming in over multicast might have any number of security IDs in it, so you'd send those out to different transports based on the individual security IDs. We haven't done that yet, but it's an idea you could use to shard based on namespaces or on output channels.
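A rough sketch of that idea (purely illustrative; it isn't an option in the tool today) might route each decoded message to an output channel keyed by a dimension like security ID:

```python
import zlib

# Purely illustrative sketch: shard transcoder output across channels by a source-data dimension.
NUM_CHANNELS = 8

def channel_for(message: dict) -> int:
    """Pick an output channel (topic, file, socket, ...) deterministically from the security ID."""
    return zlib.crc32(str(message["security_id"]).encode()) % NUM_CHANNELS

def route(messages, writers) -> None:
    """Send each decoded message to the writer that owns its shard."""
    for msg in messages:
        writers[channel_for(msg)].write(msg)
```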
[Audience question, partially inaudible.] Market data is a classic case of an image plus incremental updates, and once you bring it into what you're showing here, you might want to look at, say, the best bid and offer across several exchanges, or the classic time-series use cases where you're just crunching numbers and pulling out all the trades within a five-second window or something.

Yeah, so that's a great point. The SQL example we gave is very relevant, because if we had stayed strictly one-to-one you would have useless data inside the SQL database; that was something we felt forced to do by what you could call a protocol artifact. In terms of TCA and use cases like that, it gets interesting. If we were writing out to, say, a file system or cloud storage, you'd have to make some determination about how the data is going to be consumed. If you're on a Pub/Sub topic, say, and you're publishing one message for each multicast packet that came in over UDP, you obviously need a sequence number so the stream can be rationalized, but you'd expect the consumers of those topics to be able to piece together what they need. Whereas in SQL, I know that if I take away that timestamp, there's no way for you to get it back; it's just lost in the ether. You're never getting it back.

Sorry to interrupt, we have another question over here; I want to make sure we get to it. Yeah, thank you. I work a lot with open-source technology, specifically with Kafka. Oh, cool. What are you guys bringing? What are you doing that Kafka doesn't do? What made you want to invent this? So it's an interesting question. The way we think about it is that this cooperates with things like Kafka, and I think it enables some analytics use cases that actually have nothing to do with streaming. Here's the way I think about it. First of all, things like Kafka are still really important in a production pipeline that uses the transcoder. To give an example, we're working with a handful of exchanges right now. All of them need to give us ordered data, and we need a guarantee that the data is ordered by the time we get it and that there are no gaps, that it's gapless. At some point on-prem they need to order that data, and we kind of don't care how; we just want the guarantee. One particular exchange is doing that on-prem right now with Kafka. So then the question is, why do they bother handing it to us? I think there are two reasons. First, we're putting it into a database that allows for really massive analytics, not real-time analytics. Second, the global distribution of that data, for people who are not latency-sensitive, is really ops-light on a platform like ours.

[Follow-up from the same questioner, partially inaudible.] It's always interesting to me: with Kafka Connect there's an open-source connector framework that's agnostic and supports transformations. So I just wanted to dig in and see, as Google, was there something missing in that framework?
Or is it just a gap? I think it's really that, right? Look at Kafka: what we've done is absolutely compatible with Kafka. One thing I like about Kafka is the concept of the schema registry; I think that's really, really important. That's actually how you get to a low-code, hypermedia-style application paradigm, because the application only needs to know about its runtime context; it doesn't need to know a lot about this stuff at build time. In this case we obviously have our own preferences, Pub/Sub, and there's overlap in what they do, and as Matt said, Pub/Sub is very ops-light. But this is really meant to be totally vendor-agnostic, almost technology-agnostic; carrier pigeons support this paradigm and these abstractions perfectly. So it wasn't so much a response to a single tool that wasn't there; it's a response to the fact that there's nothing to get this data from point A to point B, that the data doesn't transform itself. It's very hard at the hyperscaler level to commit to a particular niche vertical; you're never going to have an option in BigQuery that says "upload FIX messages." And I think that's okay, but when people want their FIX messages in BigQuery, somebody's got to get them there, and that's the problem we were solving. Yeah, and it's maybe a bit like a switch, where we have clearly defined the list of inputs and clearly defined the list of outputs: you can say that you want some combination of schema and transport to end up as, say, an Avro file in a different encoding. We can imagine a point where the community adds Kafka as an input transport type or as an output. Yeah, well, both, really. The abstractions we had were things like an input file source, a network source, an output manager, and then a Pub/Sub output manager or a Kafka output manager; you can imagine it. And you could imagine specifying your schema as "connect to this namespace in the schema registry and pull that." So there's a lot one could do, and we're excited to bring this to the open-source community so other people can help as well. We're happy to keep going afterwards; I want to take two more questions if we have time.

[Audience question, partially inaudible, about applying this beyond trading data.] Absolutely. This was born out of the electronic trading use case because that's what we focus on, capital markets, but it's not just trading data; that's simply where our experience is. Take healthcare data: imagine the value you could have in healthcare if you had a shared schema. Thank you, thank you.

[Audience comment, partially inaudible.] Yeah, I like that, the signal-to-byte rate. That's great. And my favorite thing is putting gzip compression on JSON going over the wire: now I'm wasting bytes and compute. But it works, it has worked; if you're basically serving CDNs it totally works. But as things get metered and people want to do things efficiently, I hate to say it, but it is literally more sustainable to use low-context encodings.
But it takes a while to get there. All right, we've got to get out of here. Thank you, everybody, for showing up.