Welcome to today's Postgres Conference webinar, Beyond Off-the-Shelf Consensus. We're joined by Dr. Rebecca Bilbro, who will discuss several case studies of global apps, both successful and otherwise, talk about the limitations of off-the-shelf consensus, and consider a future where everyday developers can use open source tools to build distributed data apps that are easier to reason about, maintain, and tune. My name is Lindsay Hooper. I'm one of the Postgres Conference organizers, and I'll be your moderator for this webinar.

A little bit about your speaker. Dr. Bilbro is a data scientist, Python and Go programmer, teacher, speaker, and O'Reilly author. As co-founder and CTO of Rotational Labs, an intelligent distributed systems company, she specializes in machine learning optimization and API development in distributed data systems. She's also an active contributor to open source software, and is the creator and maintainer of the popular Yellowbrick library, an open source Python package that hooks into the popular scikit-learn API (you're going to have to tell me if I got that right or wrong) to support visual feature analysis, model selection, and hyperparameter tuning for data scientists and machine learning practitioners. Welcome. So with that, I'm going to hand it off. Take it away.

Welcome, everyone. Can you hear me okay? (Sure can.) So welcome to Beyond Off-the-Shelf Consensus. As Lindsay said, I want to start with a little bit of a poll to get a feel for what kind of context each of you is coming from. Here's our first question. Put into mind the app you're currently working on, whatever that might be, and think about how much consistency it requires. That might be a different answer from how much consistency it gets in the current implementation, but if you think about how much it needs, you have three options: eventual consistency is good enough for this app, we require strong consistency, or I have no idea. Any of those is a fine answer. I'll give you a chance to think about that, and I believe the way we're doing this is that you'll raise your hand. Can I see a show of hands for the first answer, eventual consistency? Okay, I see one vote for that. How about option two, strong consistency? All right. How about option three, I don't know? A perfectly fine answer. Okay, two or three votes there, so that seems like the winner of the options.

Let's look at our second question. Thinking about that same application you're working on: how concerned are you about compliance with things like GDPR, LGPD, and CCPA? Three options. One: you did the cookie banner. Two: it's on the roadmap, but you aren't there yet. Three: you have total control and visibility over where user data is stored and replicated for your app, so you're not worried about compliance. Votes for the first option, we have the cookie banner, isn't that enough? Oh, I got one vote for the cookie banner. How about option two, data compliance is on the roadmap, but we're not quite there yet? All right, one vote there. How about the third option, total visibility and control over where data is stored and replicated? That looks like two votes there. That's great; that's definitely where you want to be. Okay, last question.
Thinking about this app you're working on: how well does your app support internationalization and localization? Three options here. You can vote that you track geographic deployments and data replication, so you can guarantee a consistent user experience for users all over the world. Second option: you haven't really started thinking about global markets yet in your implementation. Or third option: you are doing internationalization and localization, but it's on the front end, using gettext-style tooling. Votes for option one, we're tracking geographic deployments? One vote there. Option two, we're not quite there yet in terms of thinking about global markets for our app? Two votes there. And then three votes — sorry, counted wrong. And the last option, we do do internationalization and localization, but it's manifest on the front end, on the web application side, not the back end. Okay, just taking a few notes. All of these poll questions are going to come up in our discussion today.

Here are the main things I'd like to cover in today's talk. First, I want to go over what consensus is. For some of you, if you studied distributed systems in school, this might be a review, but we'll talk about Paxos, Raft, and some of the optimizations: how consensus actually works and why it's important to the apps you're building. Then we'll talk about commercial manifestations of consensus; these are the things you're either using, or hearing that your colleagues are using, that allow your applications to have consensus. Then we'll look at some case studies of real companies you'll recognize who started with some kind of off-the-shelf consensus solution and decided to change it because of experiencing growth, which is the good kind of problem, a downside of success. And last, we'll talk about an idea I'd like to suggest for an open source consensus API that I think will help address a lot of the problems that come up in these areas.

First, a little bit about me. My name is Rebecca Bilbro. As Lindsay said, I'm the founder and CTO of Rotational Labs. I'm also a teacher: I teach data science and machine learning at Georgetown University on the continuing education campus. I've written an O'Reilly book specifically about doing natural language processing in Python for big applications. And I'm the co-creator and maintainer of a Python library called Yellowbrick, which wraps scikit-learn, as Lindsay mentioned in her introduction. In fact, scikit-learn and Yellowbrick will both make an appearance towards the end of this talk, so I promise all of these things are connected. Speaking of connection, I'm fairly discoverable on social media. My handle, rebeccabilbro, is the same on all of the platforms, so if you're looking to get in touch with me, it should be pretty easy to find me regardless of the platform. In terms of what I do at my day job, my job is to think about one big question.
My role at Rotational Labs is to think about the nexus between distributed systems and machine learning, and about how we can produce data systems that are a little bit smarter. When I say smarter, that can mean a lot of different things, but from a machine learning perspective it generally means systems that are a little more flexible: systems that can attune themselves to the context and the data they're given, that can learn, respond, and change, and become tailored to a context over time, given enough information. That's really the big question I'm working to answer in my role, and I think you'll see that part of this talk is related to this question, and to the role open source software has to play in answering it in a way that will be sustainable for the technology community.

So let's talk a little bit about the basics. For some of you this might be a review, but the review is important, because these algorithms are very complicated, and I'll show you just how complicated in a moment. They're complex, they're difficult to prove, and for most of their lifetime they've really been the domain of academia rather than industry, meaning that most of the vocabulary and most of the understanding about consensus lives in universities and not in everyday development contexts. That's part of what I think we want to try to change together.

Let's imagine a very simple use case. We have a single database, say a single Postgres database, and a single server handling requests from clients and updating the values in the database. Clients can do three things: they can give us a key and we'll pass back the value stored at that key; they can put new values to keys, either creating a totally new key with a new value or writing a new value to an existing key; and they can delete keys. In this simple situation, the order of operations is pretty straightforward: it's probably first come, first served. In the case of a single server, it's pretty straightforward to keep the operations consistent. So what does consistent mean? Consistency is one of the big, important things in this question we're going to be talking about, so let's all get on the same page: consistency means that the system, whether it's a single server or a big distributed system, responds to requests in a predictable way.
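To make that concrete, here's a tiny Python sketch of that single-server key/value interface. This is my own illustration, not code from any real database; the class and key names are made up for the example.

```python
# A minimal single-server key/value store: one copy of the data,
# one server handling get, put, and delete. Trivially consistent,
# because every request sees the same (only) copy.
class SimpleKVStore:
    def __init__(self):
        self._data = {}  # the single copy of the data

    def get(self, key):
        # Returns the value at key, or raises KeyError ("not found").
        return self._data[key]

    def put(self, key, value):
        # Creates the key, or overwrites the existing value.
        self._data[key] = value

    def delete(self, key):
        self._data.pop(key, None)

store = SimpleKVStore()
store.put("laptops_in_stock", 12)
print(store.get("laptops_in_stock"))  # 12: one server, one answer
```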
So let's talk about what happens when there's a failure in this simple single-server system. Failure is actually the norm; I think we've probably all been in the shoes of the person who brought down the system at least once. Failure is routine. It can happen because of human error, because the power goes out, because the network goes out, because of a crash. All kinds of things can go wrong. The problem is that with a single-server system, when failure happens the entire system is unavailable: nobody can look up their values, nobody can write new values, and even worse, any information that was in transit when the crash happened is probably going to get lost. We lose all of the data that was in movement when the crash happened, which is not great.

This is actually why we have distributed systems: the whole point of a distributed system is to let us face the reality of failure, which is very routine and normal. The purpose of a distributed system is to have more servers, so that if one server fails, another server is available and can answer requests. And then what we have to do is replicate data between the copies of the database to keep them synchronized. It's actually very hard to keep them synchronized, though, and it gets worse the more servers we have in the system: the more servers, the more time it takes to synchronize. The way that manifests for you as a developer or a user is that somebody might put a value and have that put request handled by one part of the system, then immediately fire off a get request to make sure the put worked, except the get request goes to a different part of the system that doesn't know about the value yet. So we get a not-found error. This is a common thing that happens, because that latency means the system is still waiting to synchronize.

There's another kind of problem, which is concurrency, and this is one of the main issues consensus is trying to solve: what if we both try to put a value to the same key, and from the perspective of the system, the requests happened at the same time? Who wins? That is actually a very hard problem to solve. You can't just use system clock time: if the system is really spread out, those times aren't reliable, so you end up having to use fairly complex vector clocks to make sure we're ordering things in a way that is consistent.

So there are some options. When we say a system is consistent, that doesn't just mean one thing. There are different levels of consistency, going all the way from strong consistency down to, I suppose, no consistency, which isn't on this list. But in terms of what you're used to seeing, it's probably one of these four. We talked about eventual consistency, or I mentioned it in our poll. Most data systems that exist today are eventually consistent. What that means is that if the requests stop, then eventually the system will become consistent: it's always getting more consistent, and it's just a matter of catching up with the put requests. Probably almost any application you can think of is eventually consistent, even ones where it might surprise you. The truth is that a lot of those applications, like social media apps and stores where you're buying something, are privileging availability: they're deciding to privilege availability over consistency, because they want you to visit the store. So you visit the store and you check whether there are any laptops left, and they say sure, there are laptops left, even if they're not totally sure, because maybe the system hasn't fully synchronized yet.
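Here's a toy sketch of that stale-read problem. Again, this is my own illustration, not a real replication protocol: two Python dictionaries stand in for two replicas, and a delayed thread stands in for asynchronous replication.

```python
import threading
import time

# Two "replicas" of the same key/value store.
replica_a, replica_b = {}, {}

def replicate(src, dst, delay=0.5):
    # Simulate replication lag: copy src into dst after a delay.
    time.sleep(delay)
    dst.update(src)

# A put handled by replica A, with replication to B happening in the background.
replica_a["order:42"] = "confirmed"
threading.Thread(target=replicate, args=(replica_a, replica_b)).start()

# An immediate get that lands on replica B races the replication.
print(replica_b.get("order:42"))  # None: B doesn't know about the value yet

time.sleep(1)                     # let replication catch up
print(replica_b.get("order:42"))  # "confirmed": eventually consistent
```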
The problem is that there are a lot of cases where eventual consistency is not really tolerable. These are applications that require strong consistency, which means that any get request is guaranteed to return the most recent put. There are actually not very many strongly consistent applications, even ones where you rather wish they were strongly consistent. The one I always think of is buying airplane tickets. We've all had the experience where you buy an airplane ticket and think you have a seat, and you show up at the flight, and actually two people have been sold the same seat. That is what happens when you have eventual consistency but you want strong consistency. Increasingly, I think there's going to be demand for strongly consistent applications, because users are not going to be willing to accept eventual consistency. Developers have been used to eventual consistency as the model because we believed, I think, that strong consistency wasn't going to be responsive enough, that we needed to privilege availability. But I think that's changing with some of the new tools and new databases that are coming out, which we'll talk about.

So essentially the question is: as our systems become more distributed, how are we going to deal with staying consistent across many replicas? The classic solution is Paxos. Paxos is an algorithm that's been out for decades, published by Leslie Lamport in this kind of Indiana Jones-style article that has a sort of tongue-in-cheek feel, but it's a very serious solution to a serious problem: solving consensus. The way Paxos works is that you imagine each server is a state machine, and it can apply commands in a single order. That order is the log: the log of operations. The question is how we're going to enter things in the log so that the log remains consistent across all of our copies. A client makes a request, and the server that gets the request requests a slot in the log using the prepare phase. That prepare phase goes out to the rest of the replicas, and if they have that slot free, they reserve it and respond. If enough servers respond yes to the prepare phase, the originating server sends an accept phase, a second round of communication, that says okay, we're going to write this value to this slot in the log. And if enough of the other servers respond in the affirmative, then that entry in the log is committed, and it's committed throughout the entire system. This is true, and it can be proven mathematically true, even if servers fail. As long as a majority of the servers are still running, Lamport proved that the system can progress and make decisions, and make them consistently, so that every log ends up the same. And as soon as any server that died comes back up, it can be brought back up to date by synchronizing with the logs from the rest of the system.
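To make the prepare/accept dance a little more concrete, here's a drastically simplified, single-slot Paxos round in Python. This is a sketch for intuition only, with names and structure I made up: one proposer, in-process "acceptors" instead of a network, no retries or crashes.

```python
from dataclasses import dataclass

@dataclass
class Acceptor:
    promised: int = -1          # highest ballot this acceptor has promised
    accepted_ballot: int = -1   # ballot of any value it has already accepted
    accepted_value: object = None

    def prepare(self, ballot):
        # Phase 1: promise to ignore lower ballots; report any prior value.
        if ballot > self.promised:
            self.promised = ballot
            return True, self.accepted_ballot, self.accepted_value
        return False, None, None

    def accept(self, ballot, value):
        # Phase 2: accept the value unless a higher ballot was promised.
        if ballot >= self.promised:
            self.promised = ballot
            self.accepted_ballot, self.accepted_value = ballot, value
            return True
        return False

def propose(acceptors, ballot, value):
    majority = len(acceptors) // 2 + 1
    # Phase 1: try to reserve the slot.
    replies = [a.prepare(ballot) for a in acceptors]
    granted = [(b, v) for ok, b, v in replies if ok]
    if len(granted) < majority:
        return None  # couldn't reserve it; retry with a higher ballot
    # If any acceptor already accepted a value, we must propose that one.
    prior_ballot, prior_value = max(granted, key=lambda bv: bv[0])
    if prior_ballot >= 0:
        value = prior_value
    # Phase 2: the entry commits once a majority accepts it.
    votes = sum(a.accept(ballot, value) for a in acceptors)
    return value if votes >= majority else None

cluster = [Acceptor() for _ in range(5)]
print(propose(cluster, ballot=1, value="put x=3"))  # "put x=3" is committed
```

Notice that committing one value took two full rounds of messages to every acceptor, which is exactly the cost we're about to talk about.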
The problem with Paxos is that that's a lot of back-and-forth communication: prepare then accept is a lot of rounds, so as you might imagine, it can get slow. There are some optimizations designed to remove some of the communication and speed things up. Raft and Multi-Paxos are two algorithms developed in response to Paxos that are designed as optimizations in this way. Essentially, you can skip the prepare phase by electing leaders: in the fashion of follow-the-leader, if we know we already have a leader, we can skip the prepare phase and go straight to the accept phase. The problem there is that if the leader dies, and remember, failure is very common, we need to be able to detect that and elect a new leader, so there usually ends up being a lag during the phase where we have to hold an election to choose a new leader. Other optimizations, like Mencius, pre-allocate slots in the log to different machines in a round-robin fashion, which lets everybody work more quickly because there's less checking in with each other. The problem there is that you can end up with empty slots in the log, so you have to apply compaction to manage those empty slots. Still other optimizations take advantage of the fact that a lot of the time, the updates we're making to data have no collision: we're making updates to completely different keys and values that don't interact at all. In those cases, we can use something like EPaxos or Fast Paxos to essentially fast-forward and apply commits, assuming there's no conflict. The downside is that in the cases where there is a conflict, everything moves slower, even slower than it would with plain Paxos. So none of these optimizations is a perfect solution; none of these algorithms is perfect. I hope what's starting to crystallize in our minds is that there is no perfect consensus algorithm that works in all situations. It's very contextual, and it's important that you know what your application needs, so you can make an informed decision about what it needs.

So let's think about what's actually available to us on the market: the commercial consensus solutions. In 2001, Lamport published a new paper called Paxos Made Simple, which was designed to simplify Paxos enough that industry could maybe try to implement it. And you can tell that he succeeded, right, because it only took five years of Google using Google-scale resources to successfully implement Paxos in Chubby. Chubby is Google's distributed locking service, and it was designed to rescue the Google File System, which at the time was really struggling with consistency issues. About four years later, Apache offered an open source alternative to Chubby that doesn't exactly use Paxos; it uses something similar called ZooKeeper Atomic Broadcast, which is an optimization of Paxos. Many of you have probably encountered ZooKeeper in your work. And if you haven't encountered ZooKeeper, you probably know about etcd. etcd was originally released by CoreOS in 2013. It's not based on Paxos; it's based on the Raft algorithm that I just mentioned, and it was designed to manage clusters of Linux containers. What's interesting is that even though that was the original intended use, a year later Google launched Kubernetes, and Kubernetes uses etcd as its configuration store. So now, if you're using consensus, you've probably encountered it through one of these.
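Here's a toy sketch of that leader-based shortcut, in the same spirit as the Paxos sketch above. It's my own illustration, not Raft's or etcd's actual implementation, and it deliberately skips the hard parts (log consistency checks, terms changing, elections): with an established leader, a commit takes one round instead of two.

```python
class Follower:
    def __init__(self):
        self.log = []

    def append(self, term, entry):
        # A real follower also verifies the leader's term and that its log
        # matches the leader's; omitted here for brevity.
        self.log.append((term, entry))
        return True

class Leader:
    def __init__(self, followers, term=1):
        self.followers, self.term, self.log = followers, term, []

    def replicate(self, entry):
        # No prepare phase: the election already established this leader's
        # right to propose. One append round, then commit on majority acks.
        self.log.append((self.term, entry))
        acks = 1 + sum(f.append(self.term, entry) for f in self.followers)
        majority = (len(self.followers) + 1) // 2 + 1
        return acks >= majority

leader = Leader([Follower(), Follower()])
print(leader.replicate("put x=3"))  # True: committed in one round trip
```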
The issue is that even if you have found these solutions, it's not guaranteed that they've worked very well for you. There's a very good paper from 2016, by Ailijiang, Charapko, and Demirbas, where they walk through what the options on the market are, and what some of the problems with those options are. It isn't that the options are bad; it's that they're not very well understood. ZooKeeper is a great example: it has the paradoxical property that if you use it improperly, which is possible and maybe even easy to do, you actually make your application less scalable rather than more scalable, which is the whole goal of ZooKeeper. And really what these researchers found is that even as the choices for consensus solutions have increased, the confusion about what to use and what fits best for your use case has also increased rather than decreased. I think that's the problem we want to try to address if we can.

There are some other ways you might have encountered commercial consensus solutions, and the two big ones are probably Spanner and Aurora. Spanner is Google's globally distributed database; Aurora is Amazon's solution. They are both very performant, but they're performant when you consider the use cases of Google and of Amazon. If you are not building things like Google products or Amazon products, the probability that they are well tuned to your specific use case is relatively low. So either you're getting optimizations that are not really appropriate for your application, or you're spending way more than you need to spend. There are some newer solutions too. CockroachDB and YugabyteDB are probably the ones that people here, I'm guessing, as Postgres people, have encountered, since they really are trying to be the distributed solution for people who are using relational databases. Fauna and TiDB are two other options that offer something for the NoSQL, non-relational kind of model.

Hopefully I'm not dissuading you from going global. I feel like I've been suggesting that more global equals more problems, but the reality is that more global equals more users. The best and fastest way to increase your user base is to become a global app. But the scaling is hard, even with these solutions. The larger the quorum, the slower your system will be to respond, and it gets worse as your data centers get further apart: the latency increases, and you have a greater probability of network partitions. Worst of all, servers are going to respond fastest to co-located clients, which in some cases, like game economies, has the effect of privileging players who have the random chance or luck of being born near a big Amazon region or Google region, at the expense of people who live a little bit further away. That can erode people's trust in these kinds of economies, and the same goes for trust in voting systems and health systems. So it's actually quite serious in some of these cases.
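To put rough numbers on that intuition, here's a back-of-the-envelope sketch. The latencies are purely illustrative numbers I made up, and I'm ignoring details like the proposer's own vote: the point is that a majority-quorum commit is only as fast as the majority-th closest replica, so spreading replicas across the globe drags every commit down.

```python
def quorum_latency(rtts_ms):
    # A commit needs acks from a majority, so its latency is roughly the
    # round-trip time of the majority-th fastest replica.
    majority = len(rtts_ms) // 2 + 1
    return sorted(rtts_ms)[majority - 1]

# Hypothetical round-trip times (ms) from a proposer to each replica.
same_region   = [2, 3, 3]                          # 3 replicas, one region
two_regions   = [3, 5, 80, 85, 90]                 # 5 replicas, two continents
global_spread = [3, 80, 120, 150, 190, 240, 300]   # 7 replicas, worldwide

for rtts in (same_region, two_regions, global_spread):
    print(len(rtts), "replicas ->", quorum_latency(rtts), "ms per commit")
# 3 replicas -> 3 ms, 5 replicas -> 80 ms, 7 replicas -> 150 ms
```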
Speaking of regions, commercial cloud is really designed to work best in a very few places. The providers are banking on the fact that there are a lot of people in those places and that the people in those places have a lot of money to spend. If you have users who don't live in one of those places, your user is likely to have a worse experience of using your app, potentially such a bad experience that they cease to be users, or never become users of your app at all. So being global is a lot harder than it should be, I think. And worst of all, it's getting more complicated all the time. When we did our poll, many people said, well, we're ahead of this problem of GDPR. The challenge is that new regulations are coming out all the time. If I were giving this presentation two months from now, I'm pretty sure this slide would have even more things on it, because these data residency and privacy laws just keep coming out; there are new ones all the time. It's becoming much, much harder to make sure that you know and can control where your user data resides and where it's getting replicated to by the internals of the system, potentially without your awareness. And that's potentially a problem.

So I want to talk a little bit about a few companies who have tackled this problem, and I don't want it to sound like I'm picking on these companies, because I actually see them as trailblazers: people who really leaned into these problems, tried to scale and grow to global usage, and came head-on against some of the problems, hopefully problems we can all learn from. The examples I want to talk about are Niantic's Pokémon Go, Signal, and Dropbox.

Potentially some of the people in the audience are Pokémon Go players, or maybe your children played this game. If that's the case, you probably remember what happened when the game was released: essentially, it was a disaster. The servers were overloaded because the game became popular much faster than they were ready for. They did their best, right? They hosted it on the best of the best, on Google Cloud, and they had this assumption that everything would scale naturally. But it did not scale. A lot of people couldn't even download the game, and then people who were able to download the game couldn't log in, couldn't find the artifacts. It caused a lot of UX complaints for the poor engineers who were trying to maintain the system. So that's an example of a company that really tried, that did everything it could to be ready to scale for a lot of users, and they had the good kind of problem where a lot of people signed up, and they were not able to serve all of those people.

Another example, one of my favorites, is Signal. I probably don't have to preach to the choir here; the Postgres audience, I'm sure, is sympathetic to Signal's mission. It's a 501(c)(3) organization that's trying its best to bring privacy back to chatting and messaging. And essentially what happened is that in one week, everything exploded.
It was the beginning of January 2021. WhatsApp updated their privacy policy to say that all users were opting into sharing data with Facebook, and a few days later Elon Musk tweeted that you should switch to Signal. And everybody did. In less than 24 hours, they went from 10 million users to 50 million users, and it brought the entire service down. I've been a Signal user for a while, and there were three days where I couldn't use Signal at all. I hadn't been as aware until that moment of how important the application had become to my everyday life. So this is another example: Signal is built on AWS, and the scaling was not automatic. Everything went down.

My last example here is Dropbox. Potentially there are some Dropbox fans in the audience too. If you'll remember, around 2015 or 2016, Dropbox decided to, quote unquote, break up with AWS. They decided to move all of their user data onto a new in-house network of data centers called Magic Pocket; that's their distributed system, Magic Pocket. This transfer of data involved actual physical transfer: there were trucks with data moving across the country, physically moving all of this information. It was not easy, it was not straightforward, and it was not cheap to do. But as you can see, since they made that change, Dropbox's annual revenue has steadily increased. There's maybe a question about what it will look like in 2021, because 2020 and 2021 were, as you'll remember, strange years in terms of people's data usage patterns; a lot of things changed. But I think it tracks that there's probably some relationship here: hosting all of their data on AWS was cutting into their profit margins, and they bit the bullet, decided to break away, built their own solution, and have been able to see a lot of success. So I think that's a very interesting story.

I hope the takeaway isn't that you should adopt one of the solutions that these companies have used. I hope the takeaway from this talk is going to be that different systems need different solutions. Only you really know what's most appropriate: how users are using your application, what the throughput is, what your company's scaling goals are, who's likely to be the next million users, and where in the world they're coming from. You know that better than probably anyone else. So we need a better way, I think, to build our own solutions, and to me, this feels like an open source problem. I think the solution is that we should all work together and build an open source API for consensus. That's my pitch to you. My hope is that you will be interested in collaborating on this new open source project.

Now I'm going to take a bit of a left turn, just to convince you of why I think this is possible. At the beginning of the talk, Lindsay mentioned scikit-learn. This is a library I'm very familiar with; I'm not really a databases person, I'm a machine learning engineer.
Most of my open source experience comes from the machine learning world, and I think the scikit-learn story has an important lesson for us if we're going to work together on this project. Essentially what happened is that scikit-learn started as a couple of graduate students working together on a Google Summer of Code project, converting code they had written as part of their PhD programs into a common API. I think it's important to really emphasize here that before this happened, in order to do machine learning, you had to go to graduate school for five or ten years, and to do machine learning you had to write machine learning code from scratch in Fortran or in C. There wasn't open source machine learning before this. You would write this code by yourself, you would publish your dissertation, and then who knows what would happen to the code. So this group of graduate students got together and had to think of a way to make all of the different algorithms they had worked on independently cohere to a common API. In 2010, Inria, a French research institute that employed many of these graduate students after they finished, released the first open source version of scikit-learn. The authors of the package published a paper, "API design for machine learning software," which I definitely recommend you read; it's a short read. Really, this was the first time that many of us even started to think about how it might be possible to take all of these complex, different machine learning algorithms and house them under a common API.

Essentially, the intuition of scikit-learn is that every machine learning model is either an estimator or a transformer. If it's an estimator, it can fit, which means it learns from the data you give it, and it can predict, which means it can make a prediction for a new value. If it's a transformer, you fit it to learn the data space, and then you transform the data space, producing a new data space that can be used in downstream machine learning. This idea, that whether it was logistic regression, a support vector machine, maybe even a neural network model, any of these types of complex algorithms that function in completely different ways could essentially be boiled down to either an estimator with fit and predict or a transformer with fit and transform, was revolutionary. There are over 2,000 contributors to this project, it has 47,000 stars, and GitHub says it's being used by 250,000 projects, which includes Yellowbrick, though I'm confident it's being used by far more people than that. This is the main open source machine learning library in the world. I could qualify that and say it's the main Python machine learning library, or say that it's even more popular than TensorFlow or PyTorch, but it is just the most important machine learning library out there right now, because of this common API.
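Here's what that common API looks like in practice. These are real scikit-learn calls (the dataset is a synthetic one generated for the example): wildly different algorithms become interchangeable behind the same two method pairs.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# A synthetic dataset, just so the example is self-contained.
X, y = make_classification(n_samples=200, random_state=42)

# A transformer: fit learns the data space, transform produces a new one.
scaler = StandardScaler()
X_scaled = scaler.fit_transform(X)

# Estimators: fit learns from the data, predict answers for new values.
# Logistic regression and a support vector machine work completely
# differently inside, but present the identical interface.
for model in (LogisticRegression(), SVC()):
    model.fit(X_scaled, y)
    print(type(model).__name__, model.predict(X_scaled[:3]))
```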
Another project, the one my collaborator Benjamin Bengfort and I created about six years ago, Yellowbrick, borrows from this intuition. The intuition is that there is no such thing as a best machine learning model in the abstract. Support vector machines, logistic regression, naive Bayes, ensembles, decision trees, random forests: none of those is the best machine learning model, because it all depends on your dataset and your use case. The only thing you can do is have a series of best practices for identifying the best model for your use case, and that's what Yellowbrick is for.

And that's what I think we should do for distributed systems. I think we need to do this for consensus and really just say: there is no such thing as the best consensus model, the best consensus algorithm. It does not exist. What exists is a lot of different use cases, and from those use cases I think we can engineer an API that allows us to experiment. This is kind of what I have so far; it's a little bit of scratch-work code, an idea of what the components of the API might be. We have to figure out networking: what does it mean to send messages between replicas in the system? Membership: how do we decide who's part of the replica network, how do we identify when somebody has joined, how do we reconfigure if somebody leaves the network? Decision making: do we have a leader-oriented system or a leaderless system, how do we detect conflict, how do we do elections? And finally, how do we decide when a decision is final and should get committed to the log? I think if we consider this carefully, across enough use cases, we can build an API around it and make it open source, so that anybody can contribute, anybody can look at the code, anybody can use it to experiment and identify what makes the most sense for them and for their application or organization, and so that it's free for everyone to interact with.

So, with that in mind, I'll make a call to the audience. If you're interested in contributing to this project, I would be delighted to have you reach out. You can visit this tiny URL to let me know how I can get in touch with you. And if you just want to share some ideas about what you think this API should do, what types of behavior it should support, or any problems you foresee, that would be incredibly helpful and very valuable; I'd be so grateful for any kind of contribution, large or small. And if you are not really in a place right now where you're interested in contributing to a project, I completely understand; it's been a tough year or two. If you just want to vent and talk about some of the problems you're observing that you think might be informative to me or to the project, feel free to shoot me an email at rebecca@rotational.io. I'd be happy to listen and learn from you and from what you're observing. And that's all I've got for you. So thanks very much for coming to the talk and for your attention. I'm excited to hear from you.

Very thorough, really exciting stuff. I want to thank, first and foremost, Rebecca: thank you so much for joining us and for giving us this awesome presentation and something to chew on. And I want to also thank our attendees for spending a little bit of their day with us. I hope to see you on future Postgres Conference webinars, and have a wonderful rest of your day.