Thank you, everyone, for joining this talk. I'm Doug Petkanics, founder of a project called Livepeer, a live video streaming infrastructure project built on Ethereum. This is my colleague Yondon, who works with us on protocol design, research, and implementation. We're giving a talk today about scaling payments on Ethereum using a protocol called probabilistic micropayments (PMs). The reason we chose to lean into this protocol, implement it, and test it is that it's a scaling solution for micropayments that works today on Ethereum, without requiring any layer two solution or any reliance on a trusted third party like a hub. For many use cases it's really compelling and really powerful: it allows you to send micropayments to many receivers without a lot of overhead and without exposure to gas price fluctuations on Ethereum. So what are PMs? I like to think of them as paying for services or content with lottery tickets. In a traditional micropayment solution, the mental model is that you lock some value on chain, and so does another party. Then you send signed payments back and forth off chain between the two parties, and they keep track of the balance. When someone is ready to leave and cash out their payment, they submit the proof on chain and close the channel. In probabilistic micropayments, where you're paying with lottery tickets, it's a totally different mental model: if you're going to be paying many people, you run a continuous lottery, and everyone you're paying is issued lottery tickets. They're playing the lottery against you, eventually some of those tickets become winners, and the recipients cash in the winning tickets on chain to receive their payments.
So in this talk, we're going to give an overview of PMs and the requirements for scalable payments, and talk about how PMs meet them. We'll compare them to payment channels, talk about the security model and how you secure these systems against double spends, and finally give a quick look into our implementation, some benchmarks we've seen running it, and a pointer to the open source code. Okay, so as I mentioned, the model here is that a sender needs to send some value to a recipient in exchange for some service or some content. If you're streaming music, you might want to send a payment every couple of seconds to the person providing the music. In our case, in Livepeer, you're encoding video, and you're paying the nodes on the network that are doing the video encoding on the fly. As I mentioned, instead of sending actual value with an ether transfer, you send an off-chain lottery ticket to the recipient. The recipient checks whether the ticket is a winner. If it's not a winner, they just discard it; it's worth zero, and they don't have to cash it in on chain. This is the default case, and it happens most of the time. But a ticket they receive may be a winner, and if it's a winner, it's worth ETH. In that case, they cash the ticket in on chain to redeem their winnings. The interesting thing about the way this works is that a lottery ticket has an expected value. You know that if you win the jackpot in a real-world lottery, it's going to be, say, $100 million, but the odds of winning the jackpot are 1 in 100 million. Therefore you can calculate the expected value of the ticket as $1. Probabilistic micropayments work the same way. Here, the ticket has a face value of 1 ETH, and the win probability could be set to 1 in 1,000.
And so the recipient knows this ticket is worth a 1,000th of an ETH, and they're willing to do as much work, or provide as much content, as a 1,000th of an ETH is worth to them. In the second example, the ticket has a different face value of 10 ETH and a win probability of 1 in 10,000, but the expected value of the ticket is the same: a 1,000th of an ETH. What's powerful about this mechanic is that because a recipient can tell a sender what they want these parameters to be, they can actually control the overhead they have to pay in order to cash these tickets in and redeem them on the blockchain. So if gas prices are really low, you can say: I'm willing to accept tickets with a 1 ETH face value, or even less, because cashing them in doesn't cost me a lot. But when gas prices rise, you can say: I don't want to be doing a lot of transactions to cash in these tickets, so I'm going to change the parameters to cash them in less frequently, while still earning the same amount of value that I expected to earn. So what's cool is it lets you control how much overhead you have in cashing in tickets. You can say: I'm never going to pay more than 1% of my earnings to use the blockchain to cash in tickets. And you get to control that, regardless of where gas prices go. That's pretty powerful. The 1% is just a number I made up, but if you think of accepting a credit card as a merchant, you're usually paying 3% plus 30 cents per transaction. So we can do much better on the blockchain with micropayments. In summary, as the recipient, you can charge what you want, you can adjust the face values to remove any reliance on gas prices and congestion on the Ethereum network, and over time you earn, with high probability, almost exactly what you intended to earn.
One of the interesting mental models is that you're not guaranteed to get the exact amount of payment for the service you provided, but over a long period of time, due to the law of large numbers, the amount you earn and the amount you pay will be very close to what was intended. So that's the model. A brief acknowledgement: this is not something we invented at Livepeer. Ron Rivest of RSA fame actually proposed this back in 1996. A number of people have proposed it for Bitcoin, and the Orchid protocol proposed it a couple of years ago on Ethereum. I think we've done the first open source implementation that people can use and that's running today, but this is an idea that's been around for a while, and it's pretty interesting. So it's worth asking the question: are PMs a good solution for micropayments? For that, we want to look at the requirements for a good payment solution. First, low latency: you can send these payments around very quickly. It's all off chain; you never need to submit a transaction to the blockchain in order to pay someone. Second, it allows you to switch recipients seamlessly, so you can pay multiple providers one at a time or all simultaneously, and you don't need to go on chain to add a new recipient. For example, in Livepeer, where you're encoding video on the network, there may be hundreds or thousands of people who can encode video for you. You don't need to go to the blockchain to open a channel with each of them; you can just start sending payments to them as they become available. And finally, this one's also really important: there's no expectation that you have to have a long-lived relationship with the recipient, or that you have to transact a lot of value for it to be worth it.
In payment channels, for example, you would never close a channel with a user if you were only going to be cashing out five cents worth of value, because the transaction cost would be too high. Whereas with probabilistic micropayments, you don't have to assume how much you're going to be earning, whether it's five cents, ten cents, a dollar, a hundred dollars, or a thousand dollars. You know the expected value of a ticket, and if it wins, it's always worth whatever face value you've negotiated it to be, which makes it worth your while. So that's a really nice property. Probabilistic micropayments support all of these properties, which makes them really nice for streaming data use cases. Next, I want to hand it over to Yondon to compare probabilistic micropayments to the traditional payment channels we've heard a lot about at this conference this week. All right, so before we talk a little bit more about the security model for probabilistic micropayments, it's worth doing a quick comparison with some common constructions used in the payment channel space. In order to do this comparison, you need to highlight the specific characteristics of each construction. There are different types of payment channel constructions, ranging from individual vanilla channels to more complex channel networks that solve some of the problems present in individual channels. Within that category, you have different types of routing: HTLC-based routing and virtual channel-based routing. On top of that, you can also distinguish between different types of network topology: you might have a multi-hop network, or you might have a hub-and-spoke network. The full details of these constructions are out of the scope of this presentation, but there are a lot of good presentations at this conference that go into more detail.
So for the comparison: for individual payment channels, we can see that the two main cons of using just vanilla payment channels are the additional on-chain setup required with each counterparty you want to work with. In a network where you might have open entry into and exit from the recipient set, you would prefer not to incur an on-chain setup cost that rises linearly with the number of recipients that enter the set. And as I mentioned previously, there is the possibility of dust being stuck in your channel balance: if you have an amount in your channel small enough to be considered dust, and the on-chain transaction cost exceeds the value of actually withdrawing that dust from the channel, then it's not going to be worth it. So in that case you have a minimum-value-transacted requirement, below which the channel isn't worth using. Next, HTLC multi-hop channels: this is the paradigm you might find in a network like Bitcoin's Lightning. Here I have a question mark next to low latency, mainly because in the happy case it definitely is low latency, if your payment can be relayed across multiple intermediaries without any delays. But if there's any delay in the revelation of the hash preimage used in the HTLC, there's a potential for the payment to be slowed down; the longer the path, the higher the risk of slowing down the payment flow. In addition, we still have a minimum-value-transacted requirement: if you have dust in your channel, you can't actually use the HTLC mechanism to trustlessly relay your payment, because the on-chain transaction cost of closing the channel might exceed the dust in the channel. And lastly, there's a bonus column here: minimal third party infrastructure dependence.
This isn't a strict requirement, but it would be nice if you could accomplish direct peer-to-peer payments without relying on additional third party infrastructure. In a network like Lightning, you need to rely on the relayer nodes, which might be okay, but it would be nice to have if you didn't need to do that. The second construction worth talking about is virtual channel hub-and-spoke networks. The general idea here is that rather than using HTLCs to connect different payment channels across a network, you have a hub sitting in the middle that all parties connect to, and parties lock up funds to be used in virtual channels connecting them to counterparties. As long as you have funds with the hub, and the hub has funds with another counterparty, you can create a direct channel between those two parties; the hub just needs to have an adequate amount of funds to lock up for use in that channel. But similarly, there is still the possibility of dust accumulating in the channels. You can imagine a scenario where you have multiple channels with the same counterparty, but if your counterparty goes offline or is unresponsive, then in order to gather the dust in those channels you're going to need an on-chain transaction for each of them. A solution to this would just be to have a minimum-value-transacted requirement, which is something we're curious whether we can go without. And similarly, you're relying on the hub in this case, so there is third party infrastructure dependence. In summary, I think some of the advantages of probabilistic micropayments over payment channels are no minimum-value-transacted requirement and no third party infrastructure dependence. But there are disadvantages as well: the UX of lottery tickets is something you need to deal with.
You might not have an application that can deal with that UX in an adequate way. And it only works in high-volume use cases: if you're not sending a lot of tickets for a lot of small chunks of service, it might not work out for you, because you can't rely on the law of large numbers for the payments to be fair in the long term. And something I'll talk about in a bit: you might have higher personal collateral requirements as well, because you can't rely on a third party to provide liquidity for you. Moving on to the security model, there are different aspects to this, but I'll focus on two particular components. The first is the winning ticket selection protocol. This is really important, because if you don't have a fair winning ticket selection protocol, and someone can influence which tickets win, then this isn't going to be a fair payment scheme. What follows is just one way to do it; there are multiple ways to achieve a fair selection protocol. The way we do it is via a commit-reveal scheme. In the commit-reveal scheme, the recipient provides the hash of a random number that it keeps secret, and the sender includes a monotonically increasing sender nonce, maintaining a separate counter for each of these hashes. The ticket wins based on hashing two random numbers together: the recipient's random number, and the sender's signature over the ticket, which serves as the sender's random number. The idea behind the sender nonce is that we need something that makes the hash of each ticket unique, such that the signature produced over that hash is unpredictable to the recipient. If the resulting hash, interpreted as a number, is below the winning probability threshold, we consider the ticket a winner.
So the basic idea here is that the recipient should not be able to manipulate selection as long as it cannot predict the signature the sender is going to produce over the ticket hash, and the sender should not be able to manipulate selection as long as it does not know the secret number the recipient generated. The only thing the recipient should be revealing up front is the hash of that number. It's important to note that the recipient also needs to defend itself against replay attacks. It should never accept a ticket using a hash whose preimage has already been revealed, because the sender would then know the secret number. And it should not accept tickets with an already-used nonce, because for an already-used nonce the sender already knows the ticket is not going to be a winner. So you want to make sure you defend yourself against those types of attacks as well. The on-chain contract also records redeemed tickets to prevent replays, making sure you can't redeem a winning ticket more than once. The second problem worth highlighting is double spends. In past literature, the most common naive protocol uses a single on-chain deposit. The problem is that a malicious sender can send tickets to multiple providers, and if they all win, there might not be enough funds in the deposit to cover all of the winning tickets: the classic double spend problem. Here's an illustration with three recipients. They all receive winning tickets with a face value of 5, but the on-chain deposit only has 5 ether in it. The first redemption is fine, but the second and third redemptions are not going to clear because there aren't enough funds. The idea to solve this is to set a collateral amount that is greater than or equal to the maximum utility gained from double spending.
So rather than slashing the collateral, as some other protocols do, we can have the recipients claim from a reserve that guarantees recipients payment up to a predefined amount. I'm going to skim over this because we're running a little low on time, but the general idea is that the reserve represents a maximum allocation committed to each recipient. As a result, we can keep track of a float for the sender, where the float is money that a payee has received but which, due to processing delays, is still accounted for in both recipient and sender balances. As a recipient receives winning tickets, it increases its float, and as it observes successful on-chain redemptions, it decreases that float. The max float is essentially the allocation committed to a recipient from the reserve, and it bounds what that recipient can claim. I'm going to skim through this slide, but it demonstrates the simplest flow, with a single reserve committed to a single recipient. The last question we need to solve is how to actually bound the utility from double spending. The way we do this is to commit each reserve to a well-defined recipient set. In the Livepeer protocol, we can take advantage of an existing provider selection process: each round, or time epoch, we have a specific set of providers, and we just commit the reserve to that particular set of providers. We then split the reserve into equal-size allocations committed to each member of the set. We can see here that in round one, we split the reserve three ways because there are three recipients. In round two, we have new recipients, so we decrease the allocation for each because there are more recipients. And similarly, when recipients exit, we have to split it two ways, because now there are two recipients instead of four, and that's reflected in the reserve allocation committed to each recipient.
So in summary, this helps us avoid additional on-chain transactions to update the recipient set commitment. Some of the downsides are that you can't update the allocation for each particular recipient, so if you want to allocate more to a particular recipient, you can't. Also, this will likely increase the overall reserve requirement as the number of recipients increases, because some recipients might define a minimum allocation. But some of this may be solvable with alternate constructions that I can talk about after the presentation. And with that, I'm going to pass it back over to Doug to talk about the network stats and the open source implementation. Thanks, Yondon. So yeah, I'll just wrap up in the last minute. As I mentioned, Livepeer is P2P live streaming infrastructure, and we use these micropayments to pay the infrastructure operators on the network who are encoding video. We've been live on mainnet with an alpha that doesn't use this mechanic; we've been using this mechanic on testnets and are about to go live with it shortly. Here are some graphs from our testnet. It's a little small, but basically everything's going up and to the right, which is usually good: the number of tickets sent and value sent, increasing; tickets redeemed and value received, increasing; and in yellow, winning tickets and value redeemed on chain. The key takeaway, the key benchmark, is that we've fluctuated gas prices in simulation to match how they fluctuated on mainnet, and we've observed that the recipients' software automatically updates their ticket parameters to ensure that wherever gas prices go, they never exceed the 1% overhead for cashing in on chain. So if they're earning $100, they're never paying more than a dollar to cash their tickets in on chain.
And that's just a target we've picked, but we think it's acceptable relative to traditional payment mechanics off the blockchain. So it's working well, and we have an open source implementation. We'll share these links on Twitter after the talk and at livepeer.org, but we have the specification, a smart contract implementation in Solidity, and a client implementation in Golang. This is able to be used generally across projects in the ecosystem; it needs a little bit of work to separate out some of the Livepeer-specific implementation details around the global recipient set assumptions, but it's not far off. So we're happy to collaborate with people who want to use this and make it available for the whole ecosystem. Thank you very much for attending the talk. You can find us at livepeer.org, and we're around the conference all week to catch up.