 And I've got a lot of slides. All right, good afternoon, everyone. We've got Allen here from Google, right? He's going to talk to us about all the blockchain information and how to query it in BigQuery. If you've got any friends who sold their Bitcoin and you want to know how much money they made, ask him how to do it. Yeah, we have all the blockchain data. All right, here we go. So yeah, my name's Allen Day. You can follow me on Twitter; that's my username up there. I'm going to be talking about a data set that we released last month, a co-author and I, bringing the Bitcoin data into BigQuery: what does that mean, and what are the implications? All right, make sure I can drive my deck. Where's my mouse? OK, yeah, the agenda. Let's just go. OK, so first of all, let's start here. Take photos if you want to. These are just URL-shortened links. There's a blog post that describes what I'm going to be talking about in the first half of my talk, and there's a data set as well, so that you can get to the BigQuery public data set of the Bitcoin data. Just to give you some sense of what's in here, here's a query looking at the number of addresses that are receiving money per day. So it's a measure of the number of transactions. As you can see, it's quite simple to do the query. And here on the right side, I'm showing a visualization using Data Studio. This is a connector that can read from MySQL or Postgres or BigQuery or CSV files, Google Sheets, et cetera. We used this to do the visualization in the blog post. You can do iframe embeds. You can also embed it inside of slides. So, to give you some sense of why it's important to get the data from the Bitcoin database into something like BigQuery, first we need to step back and talk about different types of... can I turn this down at all? I've got some really bad feedback. Can the AV room turn me down? Thank you. You need to understand that there are basically two different types of workloads for databases. 
One is for online transaction processing. So OLTP-type systems are basically very good at handling high transaction volumes. They can ingest data very quickly. The data is highly normalized, and the system is designed for specific types of operations, because it wants to ingest data very quickly. So they do one thing, and they do it really well. OLAP-type systems, online analytical processing, are denormalized so that they can accommodate many different types of queries without trying to anticipate what the workload looks like. Typically, the data are denormalized or duplicated in various ways to support slicing the data across multiple dimensions, because usually, in a row, you can consider the fields as being dimensions on the ID of the row, and there are various ways you'd want to aggregate, summarize, join, or subselect according to those fields. That requires having some indexes or caches of the data to help you do that efficiently. So these kinds of systems are designed for flexible operations. If we add another constraint onto an OLTP system, that it's append-only, that basically describes Bitcoin or blockchain-type systems, with the exception that they're not particularly fast in the case of Bitcoin. And we can consider BigQuery to be Google's product for an OLAP-type system. So if you want both of these capabilities, good append-only transaction handling as well as good analytics processing, you need two different systems to support the two types of workloads. And that's the motivation for preparing the data set I'm talking about today. Just to give you some sense of what BigQuery is: it's basically a no-ops data warehousing solution from Google Cloud. We can host terabytes; the largest data sets are in the petabyte range. And it's quite easy to use because you have a standard SQL interface. Whereas with Bitcoin, there's really no way to query the data in that manner. 
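The per-day receiving-address count mentioned a moment ago is a classic OLAP-style aggregation. Here's a minimal local sketch of the same idea in Python, using made-up transaction rows rather than the real BigQuery table schema:

```python
from collections import defaultdict
from datetime import date

# Hypothetical rows: (block date, list of output addresses). The real
# BigQuery table and column names may differ from this sketch.
transactions = [
    (date(2018, 3, 1), ["addr_a", "addr_b"]),
    (date(2018, 3, 1), ["addr_b"]),
    (date(2018, 3, 2), ["addr_c"]),
]

def receiving_addresses_per_day(txs):
    """Count distinct output addresses per day, as the slide's query does."""
    per_day = defaultdict(set)
    for day, outputs in txs:
        per_day[day].update(outputs)
    return {day: len(addrs) for day, addrs in sorted(per_day.items())}

print(receiving_addresses_per_day(transactions))
```

In SQL this is just a `COUNT(DISTINCT ...)` grouped by day; the point is that the warehouse does the distinct-count across the whole history in one scan.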
The data are stored in an optimized, normalized binary format. BigQuery runs in Google's data centers; we consider the data center as being the computer. A bit of background on how the BigQuery system works: we actually don't do any indexing. The way we speed things up is by replicating the data across many different I/O devices, the spindles in the data center. So when you're doing a query, we partition up the scan, scan across all these partitions of the data, and then aggregate in some compute nodes. We have a system called Borg. This is the basis for another project you may be aware of called Kubernetes, which is an open source project for orchestrating containers. And this is the type of compute container that's reading the data from the spindles I was just showing you. Of course, the data have to move across a network, and we have a proprietary network at Google called Jupiter. We build all of our own network switches and so on, not off-the-shelf commodity stuff. And we bundle these three components into the system called BigQuery that I was talking about. Data gets ingested, dropped into storage somewhere, and then moved over to compute, possibly writing intermediate results back and forth to storage, ultimately giving the result to the SQL consumer depicted on the far right side of the diagram. So that's basically how the system works. Now, as part of BigQuery, we have something called the public data sets program, and I'll tell you a little bit about this. The Bitcoin data set is part of this public data sets program. It's effectively open data. As of October, we had 71 different public data sets in this program. The tables are world-readable, and we have a free access tier for BigQuery, so if you want to look at these, you can query, I believe it's up to 500 gigabytes of data per month, and not have to pay anything. 
You just have to have a Google Cloud account. In aggregate, there are something like more than 700 tables, many of which are live-updated on a regular basis. The Bitcoin table is updated every 10 minutes, because that's how often a block is produced, so it's as real-time as it gets. There are something like 4 billion records in this aggregate of data sets, and the total amount of queryable data is in excess of 30 petabytes. And these are all exponentially growing numbers from the inception of the program. OK, so let's talk about how one might go about doing this kind of OLAP-type analysis on the Bitcoin data set. Here's the system architecture. We can consider Bitcoin as being this peer-to-peer network that's maintaining the distributed ledger. What we did is build a custom Bitcoin client using bitcoinj, and if you want to grab the source code for how we did this, you can get it here. Nothing super complex, but if you wanted to know how a client works, you could use that. The data are coming in in real time from the peers. We're not mining; we're just syncing the data. And if somebody requests data from our node, we'll push it out. So we are operating a full node, but we're not trying to mine Bitcoins. Then, every 10 minutes, when a new block is added onto the end of the chain, we take that block and sync it over to BigQuery. And that's made visible via the Data Studio tool I was showing you earlier, for doing basic time-series or other analyses. BigQuery also has a connector to the Kaggle community. How many of you are data scientists in the room? OK, we've got a few. This is the largest online data science community, acquired by Google late last year. There's a connector to bring BigQuery data sets into Kaggle so that you can query them from R or from a Jupyter notebook, bringing the data into a scientific-notebook-type environment to do further analyses. 
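The sync pipeline just described, a node watching for new blocks and appending them to the warehouse, can be sketched roughly like this. `get_latest_block` and `append_rows` are hypothetical stand-ins for the node RPC and the warehouse streaming insert, not real bitcoinj or BigQuery client calls:

```python
# Toy shape of the block-sync loop: poll the node, append anything new.
def sync_loop(get_latest_block, append_rows, max_iters=3):
    last_height = -1
    for _ in range(max_iters):          # in production this would run forever
        block = get_latest_block()
        if block["height"] > last_height:
            append_rows([block])        # stream the new block into the table
            last_height = block["height"]
        # time.sleep(600)               # ~10 minutes per Bitcoin block

# Fake node and sink so the sketch is runnable end to end.
chain = [{"height": h, "hash": f"h{h}"} for h in range(3)]
synced = []
heights = iter([0, 1, 2])
sync_loop(lambda: chain[next(heights)], synced.extend, max_iters=3)
print([b["height"] for b in synced])
```

The real pipeline only has to keep up with one block every 10 minutes, which is why the speaker notes it's "as real time as it gets" for Bitcoin.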
And there's a bunch of users now who are collaborating on querying this data, analyzing it, exploring it together. You can go check that out on Kaggle. Here's another example of some data we can join against the Bitcoin data set. This is the mining difficulty shown in orange; it's sort of an indirect measure of the price of a Bitcoin in dollars. And in blue, you can see a popularity metric for the keyword Bitcoin. Note this figure is a bit outdated, but you might draw some conclusions as to how popularity may be an indicator of difficulty or other attributes of the network. Some other interesting analysis we can do, moving beyond time-based aggregations or scanning-type operations, is graph analysis, because the fundamental type of data is a transactional ledger: it's stating that some amount of Bitcoin is moving from account A to account B. Here I'm depicting the first documented transaction exchanging Bitcoins for a physical good. In this case, the red node indicated here is the purchaser of two Domino's pizzas for the price of 10,000 Bitcoins. That was about $40 at the time. The purchaser was actually advised not to make the purchase because he was overpaying, but he wanted to prove the point that Bitcoins could be used for real transactions. Now, upstream, all these blue nodes in the graph represent other wallets in the Bitcoin transaction space, and the directed edges indicate the flow of Bitcoins from them to this red node. So I did a recursive query up, I believe, three levels from that node. You can see there's some interesting structure in here. Here on the right is a different subnetwork. In mid-2017 there was a ransomware attack called WannaCry. It was basically malware that encrypted many computers on the network and demanded payment of a ransom in Bitcoins to one of three addresses. 
They could only have three addresses because the payload in the malware had to be small. And then if you made the payment, they would send you the encryption key to recover the data off of your hard drive. You can see here that I'm looking downstream of each of these three malware payment addresses into the network, and I was curious to see whether the money is flowing toward one place or flowing across the network. I'm not really sure exactly what the structure is here; I didn't go further into this analysis, but one might imagine there are some patterns that could be discovered. Oh, the edge weights, by the way, are proportional to the logarithm of the volume of the transaction. And my computer just turned off. I'll be right back. Okay, cool. Yeah, so that's some of the analysis you can do on this data set. Now, these were done in BigQuery, which meant the analysis was actually a bit difficult, because BigQuery is optimized for linear scans, not for graph traversals or recursive types of queries. So to get deeper into this kind of analysis, it's actually appropriate to move the data to a graph database, and I'll talk more about that later. Yes, another visualization of the same thing; this is downstream of the pizza transaction. You can see that there's a bunch of Kaggle notebooks, like I was mentioning earlier, so people are writing these things. It's quite active, and yeah, feel free to join in, clone somebody's notebook, comment on it, request analysis. The Kagglers are a pretty vibrant, active community, and the Bitcoin data set has been in the top five for the month and a half it's been available. It was briefly at number one. Okay, so that's the Bitcoin data set overview. Now I'm beginning to work on the Ethereum data set. 
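The recursive "three levels upstream" query behind the pizza-transaction graph amounts to a breadth-first traversal over reversed transfer edges. A small sketch with a made-up edge list (the wallet names are invented for illustration):

```python
from collections import deque

# Hypothetical edge list: (sender, receiver) pairs representing transfers.
edges = [
    ("w1", "pizza_buyer"),
    ("w2", "pizza_buyer"),
    ("w3", "w1"),
    ("w4", "w3"),
    ("w5", "w9"),  # unrelated transfer, should not appear in the result
]

def upstream(target, edges, max_depth=3):
    """BFS over reversed edges: who sent funds toward `target`, up to max_depth hops."""
    incoming = {}
    for src, dst in edges:
        incoming.setdefault(dst, []).append(src)
    seen, frontier = set(), deque([(target, 0)])
    while frontier:
        node, depth = frontier.popleft()
        if depth == max_depth:
            continue
        for src in incoming.get(node, []):
            if src not in seen:
                seen.add(src)
                frontier.append((src, depth + 1))
    return seen

print(upstream("pizza_buyer", edges))
```

Each BFS level corresponds to one self-join of the transaction table in SQL, which is exactly why this gets awkward in a scan-oriented system and pleasant in a graph database.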
So the Ethereum data set. Ethereum is the number two most popular, largest-by-market-cap cryptocurrency today. It has some similar attributes to Bitcoin, but it has a bunch of additional features, in particular something called the smart contract, which is sort of like a program. It basically acts as a global distributed computer, not just allowing transfers of value via the distributed ledger, but also keeping a record of computing operations that take place on the blockchain. So it's not only value; it's a computer. Okay, yes, it's different in another way as well. I just told you it has a Turing-complete smart contract language, so you can write arbitrarily complex programs to execute against the Ethereum virtual machine. The block time is also quite a bit shorter: rather than every 10 minutes as on Bitcoin, a new block is produced every 15 seconds. This makes it a bit more of a challenge to keep up with the data and keep it streaming into BigQuery, but we're able to do this; it's not a problem. And a lot of these ICOs and other things you hear about related to cryptocurrencies, most of that activity is actually happening on the Ethereum blockchain today. A technical update on where I am with this project: I have the peer-to-peer node running. I have a full sync of the Ethereum blockchain in Google Cloud. I've loaded all of it into BigQuery, and it's updating every few seconds. It's also available via Kaggle. If you wanted to look at it today, you can contact me afterward; I have my contact details on my final slide, and I can add your username to the project. It's currently invite-only right now. I'm looking, when I release a blog post, to have some interesting analyses ready to go, just to make a lot more impact for the launch and look at some interesting things we can do with the data. And there's also some auxiliary pre-processing I've done that makes this data set a lot more interesting. So let's get into that. 
I created a view of the data into something that's a token transfer. This is something like an 80-line query that extracts out something called an ERC-20 token. The majority of cryptocurrencies are actually based on this template called ERC-20; you can consider it like a software class or an object that's inherited from. And because of that, if you want to look for activities related to ERC-20 tokens, you can see all of them, and they all follow the same kind of pattern. That's how I was able to produce this view. Here I'm looking at one particular token called 0x. This is a decentralized exchange protocol; you can go check it out, I'm not gonna get further into that. By doing this kind of query, exporting the source node, the target node, and some measure of the value moving between source and target, we're looking at the total token transfers moving within the network. We can export that data into a nice open source data visualization piece of software called Gephi and start to build visualizations of the transaction network data that look something like this. Now, I'm not a data visualization expert; maybe this is not the best way to represent it, but I thought it looked pretty interesting. This is all of the 0x transactions of more than 1,000 tokens moving within the network, so a few thousand US dollars or greater. What I did is color the nodes according to a partitioning of the graph. So I can basically say that red nodes generally interact with one another more than they do with nodes of another color, and likewise for blue, and likewise for green, and so on. There are different partitions within the graph that tend to interact intra-partition and not inter-partition. The nodes represent addresses, and the lines between them represent a transfer of value between them. Now, this is across all time. 
So I've not done any kind of time windowing, but you could also do analysis of this in a longitudinal manner, if you wanted to do some kind of time-windowed analysis, to look for various types of aberrations or anomalies where you see unusual spikes of activity, for example. Now, it's interesting to note here that to produce that table, I actually needed to identify this thing called the transfer function from ERC-20. All the ERC-20 tokens implement this function. It has a four-byte method signature, 0xa9059cbb. There are four billion possible signatures in the Ethereum method signature space. And by doing this, I was able to find all of the transactions. Oh, here's a zoomed-in picture of the red node. This is actually one of the cryptocurrency exchanges; this is Binance. You can see there are some users who are doing high-volume transactions with Binance, and then lower volume. Some of them are actually connected to other exchanges as well. So you can start to type these nodes and infer attributes about them based on their position in the network. But yeah, back to this method signature thing. The ERC-20 contract is open source, so we can compile it, and we can identify what method signature it would have, given the source code. So we know this method happens to be called transfer; this is its byte signature when it's compiled, and it takes two arguments: the address to transfer the value to, and an unsigned 256-bit integer of how many units of the token to transfer. And there's a balance maintained in the smart contract that's updated as a result of this method call. Within the open source space there's a database called 4byte.directory, 4byte like the number 4, that indexes these things. And 4byte.directory has 6,942 methods for which there are sources available that can be compiled to determine the method signature. 
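Given that ABI layout, a four-byte selector followed by 32-byte argument slots, decoding a transfer(address,uint256) call is mechanical. A sketch with synthetic call data; the selector is the real one from the talk, but the recipient address and amount below are made up:

```python
TRANSFER_SELECTOR = bytes.fromhex("a9059cbb")  # transfer(address,uint256)

def decode_transfer(calldata: bytes):
    """Decode an ERC-20 transfer(address,uint256) call.

    ABI layout: 4-byte selector, then each argument padded to 32 bytes.
    Addresses are 20 bytes, right-aligned in their 32-byte slot.
    """
    if calldata[:4] != TRANSFER_SELECTOR:
        return None  # some other method
    to_addr = "0x" + calldata[4 + 12 : 4 + 32].hex()   # last 20 bytes of slot 1
    amount = int.from_bytes(calldata[36:68], "big")    # uint256 in slot 2
    return to_addr, amount

# Build a synthetic call transferring 1000 token units to a made-up address.
recipient = bytes.fromhex("11" * 20)
calldata = TRANSFER_SELECTOR + b"\x00" * 12 + recipient + (1000).to_bytes(32, "big")
print(decode_transfer(calldata))
```

The 80-line view mentioned above is essentially this same slicing, expressed over the raw transaction input bytes in SQL.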
So we can start to type the transactions by looking at the source code for the smart contracts. Here's sort of how that works. 4byte.directory, and this is an open data set, by the way, if you wanted to download these method signatures, is run by Piper Merriam. He's taken the Solidity source code, that's the language used to write smart contracts, and compiled it, and then I pulled the data into BigQuery by mirroring it in as part of the public data set. So this is one of the auxiliary data sets that's available with the Ethereum data. And then what I did is look at the GitHub BigQuery public data set. We have all of the GitHub history in BigQuery; it's like 60 terabytes of data. The next talk after mine, from Felipe Hoffa, will cover some interesting analysis purely on the GitHub data set in isolation, but here I'm talking about it in the context of Ethereum. What I can do is query this data set, find all the Solidity source files, and then Piper gave me a webhook where I can create a new GitHub repo, copy all the existing Solidity source files into my new Git repo, and when I commit them, his database is aware that I made a commit. He can pull in the new files, compile them, update the database, and then the signatures get updated in BigQuery automatically. So this is effectively a serverless update of the database, just by crawling the synchronized GitHub data set. Given that, and given this OLTP-to-OLAP bridge coming from the Ethereum network into the Ethereum BigQuery data set, there's a possibility here to combine these things. 
And one of the great things about open data and linked data is that we can take these two existing data sets, and it may not be obvious that there's a benefit to doing so, but we're actually able to give birth to a third data set: the Ethereum virtual machine stack trace analytics data set, which comes out of both GitHub and bringing Ethereum into BigQuery. So this is the real value of having the public data sets: we can accelerate the ability to innovate by joining, by linking, the data. All right, so what do those stack trace analytics look like? I just started doing this. What I'm showing here is not only transfer events; transfer is the really big one, which is what I was talking about earlier, the token transfers. I think I'm showing the top 50 here, by largest volume by day. This is Ethereum for the last two years, and this is the number of each particular type of method call, aggregated per unit time. You can see it's really dominated by transfers of tokens. Anybody wanna speculate as to what this pink method is from? I'm sure one of you can guess. Sorry? Yes, CryptoKitties, exactly. This is actually a breeding event to produce a new kitten from two parent CryptoKitties. Yeah, it was really a flash in the pan, though. It only lasted about a week. I don't have the zoomed-in version of this graphic, but yeah, it's pretty much dead now. There was something interesting happening in yellow about a month ago when I made this chart. I didn't get deeper into it, but if any of you wanna look at this, I can give you early access to the data set. Do a study and tell me, okay, what are the changes in application activity and what's the latest trend on the Ethereum network? All the data are available. Okay, so back to this method analysis. I told you there's a method signature space of four billion possible methods. 
We know about 6,792 of them, what the names and the types are, but for the rest of them, we don't know, and there are a lot. Here's what a typical method call looks like. It's just some byte array, okay? We can break it up because we know, for example, that the method signature is four bytes, and we know that each additional argument is always 256 bits, or 32 bytes. So we can start to get some information about this. Like, I can tell you argument two looks like it might be an address. You could possibly measure the amount of entropy in the bits here, how unpredictable or how compressible it is, which gives you some sense of how address-like it is. Addresses should be unpredictable; private keys should certainly be very unpredictable. This third value is all zeros. What does that mean? It could be a boolean; probably it's a boolean. Might be a null. It's highly unlikely to be an address, right? That's too easy to guess. And then this fourth one, who knows what that is? It has four zeros at the end. Is that significant for any reason? Yeah, so anybody want to speculate as to how you might take these data and get some more information out of them? You could do that. That'll cost you some money to do it, though. Yeah, not a bad idea. Yeah, sure, probe it. Probe it. Anybody else? I mean, I guess if you forked the main network onto your test environment, you could effectively give yourself free money and do as much of that as you want. Yeah, that's not a bad idea. That's moving more into a penetration-testing kind of angle, to find vulnerabilities and weaknesses on the network. It's very interesting; I want to think more about that. Well, you could actually treat this as a deep learning problem, right? We could start to classify these data by looking for things like: typically, transfers of tokens will have some kind of round number. 
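The slot-splitting and entropy idea just described can be sketched like this. The entropy score is a rough heuristic for "how address-like is this slot," not a definitive classifier:

```python
import math
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    """Bits of entropy per byte: high for random-looking data, zero for constants."""
    if not data:
        return 0.0
    counts = Counter(data)
    n = len(data)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def split_call(calldata: bytes):
    """Split raw call data into the 4-byte selector and 32-byte argument slots."""
    selector, body = calldata[:4], calldata[4:]
    slots = [body[i : i + 32] for i in range(0, len(body), 32)]
    return selector, slots

# A zeroed slot (boolean/null-like) vs. a varied slot (more address-like).
zeros = b"\x00" * 32
varied = bytes(range(32))
print(shannon_entropy(zeros), shannon_entropy(varied))
```

An all-zeros slot scores 0 bits; 32 distinct byte values score exactly 5 bits (log2 of 32). Real addresses and keys sit near the high end, flags and small integers near the low end.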
So if we observe some byte string that represents a round number, ah, maybe it's a transfer event. And you can think of other such analyses to further characterize the methods. Doing this sort of thing lets you get deeper into understanding what types of activity people are doing. Then, looking at top-level methods and the stack traces underneath them starts to give you some sense of what types of activities are being done on chain. And given that every method call actually costs a little bit of gas, a little bit of ether, to run, you can then begin to attribute to any method call the real, fundamental economic value of exchanging ether or dollars for computing on the virtual machine. So there's some interesting fundamental analysis that can be done on this type of data. But we need more elucidation of what these methods actually are. I was showing you some method call analysis. Here's a website that's starting to index these things called decentralized applications, or dapps, doing something on the transpose dimension: we have methods on smart contracts, and this is an index of smart contract interaction across the network, measuring interesting things like daily active users or how much ether actually flowed through the contract. And if we start to look at interactions with contracts, we can begin to cluster them; games, for example, should have similar behavior in general. We could start to find new games just through the pattern of interaction. So this is a way of beginning to build something that looks quite a bit like a web portal, like Yahoo way back in the day; to start to build an index, or a search index, on top of the Ethereum chain. All right. So, where I am today: that graphic I was showing you before, of the transaction network with the colored partitions, is actually a lot easier to build on a graph database. 
And so there's a nice combination of something called Bigtable, which is another Google Cloud offering, plus JanusGraph, which is an open source project, that allows one to store graph data in something where it's easier to do this kind of recursive traversal query than it is in a scanning type of system like BigQuery. I've loaded some of the data into this database, and the next step is to start doing these types of analyses: to precalculate things like the PageRank of a node, how central and important a particular node is within the network, possibly within the scope of a transaction or within a time-bounded scope, to further characterize these nodes and edges and begin to make statements about their characteristics. And then once you calculate these things, you can put the data back into BigQuery for filtering in an OLAP kind of scenario. So we've got the OLTP side, we've got an OLAP linear-scan type database, and now we have a graph database for doing complex analysis, bringing things back into the OLAP space for building reports. This is happening now. If you're a Kaggler, in particular if you're a data scientist with network science experience, or you're a topologist, please contact me. I would love to talk to you about other interesting things we can do with these data to bring more clarity and transparency to what's happening on the network. I'm also looking for more data sources. So this DappRadar database is curating a list of dapps; that's really great. There are some other data sources available, in the various document sets that have been leaked to the media, where you can get information about what addresses might be associated with what named players. That could be an exchange, it could be an individual, it could be an organization, any number of things. I need help on this too. 
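The PageRank precomputation mentioned above can be illustrated with a tiny pure-Python power iteration over a made-up hub-and-spoke network; a real pipeline would of course run this at scale in the graph database:

```python
def pagerank(edges, damping=0.85, iters=50):
    """Plain power-iteration PageRank over a directed edge list."""
    nodes = {n for e in edges for n in e}
    out = {n: [] for n in nodes}
    for src, dst in edges:
        out[src].append(dst)
    rank = {n: 1.0 / len(nodes) for n in nodes}
    for _ in range(iters):
        nxt = {n: (1 - damping) / len(nodes) for n in nodes}
        for src, dsts in out.items():
            if dsts:
                share = damping * rank[src] / len(dsts)
                for dst in dsts:
                    nxt[dst] += share
            else:  # dangling node: spread its rank uniformly
                for n in nodes:
                    nxt[n] += damping * rank[src] / len(nodes)
        rank = nxt
    return rank

# A tiny hub-and-spoke network: several wallets send to one "exchange" node.
edges = [("a", "x"), ("b", "x"), ("c", "x"), ("x", "a")]
rank = pagerank(edges)
print(max(rank, key=rank.get))
```

The exchange-like hub ends up with the highest rank, which is exactly the kind of "how central is this node" signal you'd write back into BigQuery for filtering.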
So if you have this type of data, or you like to do scraping and crawling and want to work on Ethereum or blockchain kind of stuff, those kinds of data can be joined in the same way I was just showing you the method signature data can be joined. So that would also be really helpful. Yeah, and I was planning on releasing this about a month ago, but some other deadlines were hitting, so it's now scheduled for Q2. I will be blogging this and letting the data out very, very shortly. That's all I've got. I do have quite a bit of time for questions, almost 10 minutes. If you want to interact with me online, please, I'm pretty predictable and easy to find; usually it's my name. Find me, let's talk about this. And we've got 10 minutes to talk about it now, freeform style. So I'll take questions, thanks. Okay, we have a mic. The question's up there, up in the back. Okay, shout it out. Yeah, sure. So what could happen is two blocks could be mined at the same time on two different nodes? Yeah. Now you have the longest chain; you have to distribute the longest chain, right? Yeah. So in this case, because there's a collision that happens, I don't know if a collision has ever happened, how will your system handle that? Because now you have two of the longest chains that have to be distributed out, and then whoever figures out the next block, that becomes the longest. Oh, okay, yeah, yeah. So how do we handle that right now? Each block that's loaded to the database, it's a linked list effectively: a block references its previous block. And so when you have two blocks produced at the same time, and the network's trying to work out which is the longest chain, we don't take a position on that. We just store both of them in the database, and eventually one of them turns out to be a dead branch, with only a few blocks hanging off what becomes the main chain. 
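That "store both branches and let the longest one win" answer can be sketched with a toy block table keyed by parent reference; the block names below are invented:

```python
# Every block records its parent; the main chain is recovered by walking
# back from the tip of the longest branch. Orphaned branches simply stay
# in the table, which is exactly how the data set keeps uncle blocks.
blocks = {
    "g":   None,   # genesis
    "b1":  "g",
    "b2a": "b1",   # two blocks mined at the same height...
    "b2b": "b1",   # ...both are stored
    "b3":  "b2a",  # the network extends b2a, so b2b becomes a dead branch
}

def chain_to_genesis(tip, blocks):
    chain = []
    while tip is not None:
        chain.append(tip)
        tip = blocks[tip]
    return list(reversed(chain))

def main_chain(blocks):
    """Pick the tip whose ancestry is longest; that branch is the main chain."""
    tips = set(blocks) - {p for p in blocks.values() if p}
    return max((chain_to_genesis(t, blocks) for t in tips), key=len)

print(main_chain(blocks))
```

Here `b2b` remains queryable as a dead branch while `main_chain` returns the `g → b1 → b2a → b3` lineage, mirroring the "we don't take a position, we store both" design.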
Yeah, so we capture all of that. We capture all of that. Those branches in Ethereum are called uncles. Yeah, uncle blocks. Questions? So those uncle blocks, are those queryable as well? Yeah, yeah, you can see them. So is there, like, a Boolean flag to say ignore all the uncle blocks, or? Sure, well, I mean, it's just a SQL query, right? The data are all structured and ready to go for you, so you can do whatever you want with that. Don't be shy, ask me stuff. Here we go. So, any plans to start capturing Stellar or some of the other alternatives? Yeah, so I mentioned a lot of these ICOs and cryptocurrency projects are on Ethereum; they don't really have blockchains of their own yet. But there are some others that do have their own chains: Stellar, Ripple, Dash, a few others. There's interesting analysis that can be done on all of those. I would say I'm a bit biased toward looking at things that have something more than just a distributed ledger, something more like a virtual machine. There are, I guess, more interesting types of insights to be had by looking at those data sets. But I've not yet been preparing any of them. If you have specific suggestions about what might be interesting and why, I would love to have that conversation, either afterward or on Twitter; we can start a thread. Yeah, it's a good idea. Here we go. I have a question about real-time transactions. Yeah. Can it support real-time transactions, I mean, in the memory pool? Ah, no, I'm not supporting the mempool. So the question relates to when a peer wants to submit a transaction to the network: it's not immediately added to a block. There's actually this queue of transactions that could be added to the next block, and usually what the miner will do is grab the most valuable ones, where somebody has bid to have their transaction added next because it's a high-priority transaction. That's what you're getting at. 
We are not currently monitoring the mempool, so we are something like, I guess it depends on how long a transaction can sit in the mempool, but it's usually not more than a few minutes, like five blocks ahead, so something like 75 seconds. So we're 75 seconds behind real time. But realize those transactions proposed in the mempool can actually be canceled before they're executed, or they could even fail. So indexing the things in the mempool, I'm not really sure how much value there is there, although if you're doing trading it could be an interesting leading indicator of what's about to happen. But yeah, I'm not currently tracking that. That's a great question. No, maybe. I don't know. They kind of trickle in. Where are you going? One more. Hi, this all looks really beautiful, but is there a particular reason why you are doing this? Why I'm doing this? I thought it was just really interesting. Okay, so take a step back. I'm a developer advocate; I'm in developer relations. I basically get marked on my performance in bringing attention to Google Cloud. There's a lot of hype around blockchain, and a lot of the narrative in the media has no substance behind it. It's somebody on some news network spouting nonsense coming from who knows where, not actually referencing any data, right? And if you look at a lot of websites that do have a chart, it's some low-res GIF that probably came out of a consultancy and was resized repeatedly. Nobody has access to the real data. So giving people more access to data produces a more sophisticated, concrete conversation in the public discourse about what's really going on. Now we don't have to depend on these consultants anymore, who are operating these databases and selling access to build reports, where you can't really influence what report they're gonna build for you unless you pay them. 
We just give the data away for free. Please go build the report you want and talk about that. Yeah, it's democratizing the data. Google's mission, right? Making the data publicly available, accessible, and useful. So yeah, it's completely in line with the corporate mission. Woo, Google, Google's awesome. Yeah, right? We love open data at Google. So how big is the data set on BigQuery? Because you said you have 500 gigs free to use; I'm wondering how many queries I can have. Yeah, so if you did a SELECT * on all the columns, you would burn it up. So BigQuery, sorry, Bitcoin: the primary data is 200 gigs, and we do a little bit of denormalization to make it easy to query in BigQuery, so it's 500 gigs. If you did a SELECT * on all the fields, you'd burn it up, but in reality you're not doing that. Ethereum is a lot smaller. Ethereum is actually 70 gigabytes to operate the full node, and it's 200 gigabytes in BigQuery. Yeah, these are not huge data sets. They're just very difficult to access. Even though it's public open data, uncontrolled by anyone, it's just really hard to get to. Yeah. Thank you. Sure. All right, one minute 36 seconds, I'm calling it. Thank you very much. Woo! Woo! So thank you.