So, we sit right in the payments flow, which means we generate a lot of really interesting data every time anything hits our gateways. Today I want to share how we've gone about building out our infrastructure and data pipelines to support us when we dig into that data and try to gain insight from it. Hopefully, if you're thinking about getting more insight into your own data, or maybe you're already starting a project like this, you can take some tips from how we've gone about solving it ourselves. So that's where I'll start.

One of the side effects of processing payments is that we take a document and store it in our Riak database. You heard my co-worker talk a little bit about how Riak is optimized for uptime. In payments engineering, downtime is a little more stressful than in other applications, because it often means a direct loss of dollars, not just productivity. So that's great: we like that uptime, we want it for our application. What Riak really does not give us is a way to query that data. So how do we get all this great transaction data our application has been generating into a place where we can actually gain insight from it?

It's a bit of a journey, so I've split it into two legs; hopefully that makes it a little easier to follow. The first thing we've done is add an event handler, which is pretty simple: every single time there's a database event in Riak, it takes that event, along with the data involved, and fires it off to another application running on the exact same node as Riak. That application exists for one reason, and that's to whitelist data. Being a PCI-compliant company, we can't just let our data wander around willy-nilly. We need to be very rigorous about documenting where our data goes and showing that sensitive data does in fact stay where it is safely stored. Once this application has whitelisted the data that is safe to leave, it sends it out to our Kafka cluster. You can think of Kafka as a distributed event stream that downstream consumers can read from in real time. Kafka is becoming the core of our system in many ways: it's slowly turned into the place all of our data flows through, and we've been able to use it to back the products we've been building. It's been pretty exciting to watch it evolve as the rest of the system grows. (The first sketch below shows roughly what that whitelisting forwarder might look like.)

So that's the first leg of the journey. Our data is in Kafka now, which is great: it's more accessible, and we can use it in different ways. What do we do next? There's a whole constellation of applications sitting around our Kafka cluster, all consuming data and using it in ways that are useful for their particular context. In this case, we have one specific consumer that has just one job: to build batches of all of these database events. These batches are somewhere around 50,000 to 250,000 events at a time. Once it builds a batch, it converts it to a CSV and uploads it to AWS S3. (The second sketch below shows a rough version of that consumer.) I like to think of S3 as the file system for all AWS services; it's the beachhead if you want to do any sort of big data tooling or gain insight with any of their stuff. So if you want to use AWS's big data tools, you've got to start with S3.
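To make that whitelisting step a little more concrete, here's a minimal sketch of what the forwarding application could look like, written in Python purely for illustration since the talk shows no code. The field names, the `riak-events` topic, the broker address, and the shape of the incoming Riak event are all assumptions, and a real PCI-scoped service would involve far more controls than this.

```python
import json
from kafka import KafkaProducer  # kafka-python client (assumed choice of library)

# Hypothetical whitelist: only these fields are treated as safe to leave
# the PCI environment. Real field names would differ.
WHITELISTED_FIELDS = {"transaction_id", "merchant_id", "amount_cents",
                      "currency", "status", "created_at"}

producer = KafkaProducer(
    bootstrap_servers=["kafka:9092"],  # placeholder broker address
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

def handle_riak_event(event):
    """Receive one database event from Riak and forward only whitelisted data.

    `event` is assumed to be a dict like {"bucket": ..., "key": ..., "data": {...}}.
    """
    safe_data = {k: v for k, v in event["data"].items() if k in WHITELISTED_FIELDS}
    producer.send("riak-events", {   # hypothetical topic name
        "bucket": event["bucket"],
        "key": event["key"],
        "data": safe_data,
    })
```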
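On the other side of Kafka, the batching consumer described above might look roughly like this. Again, this is a hedged sketch rather than the actual implementation: the topic, bucket, and column names are placeholders, and the batch threshold is loosely based on the 50,000 to 250,000 event batches mentioned in the talk.

```python
import csv
import io
import json
import uuid

import boto3
from kafka import KafkaConsumer

BATCH_SIZE = 50_000   # the talk mentions batches of roughly 50k-250k events
COLUMNS = ["bucket", "key", "transaction_id", "amount_cents", "status"]  # hypothetical schema

s3 = boto3.client("s3")
consumer = KafkaConsumer(
    "riak-events",                       # hypothetical topic name
    bootstrap_servers=["kafka:9092"],    # placeholder broker address
    value_deserializer=lambda b: json.loads(b.decode("utf-8")),
)

def flush(batch):
    """Write one batch of events to a CSV file and upload it to S3."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=COLUMNS, extrasaction="ignore")
    writer.writeheader()
    writer.writerows(batch)
    s3.put_object(
        Bucket="example-transaction-events",   # placeholder bucket
        Key=f"events/{uuid.uuid4()}.csv",
        Body=buf.getvalue().encode("utf-8"),
    )

batch = []
for message in consumer:
    event = message.value
    batch.append({**event.get("data", {}), "bucket": event["bucket"], "key": event["key"]})
    if len(batch) >= BATCH_SIZE:
        flush(batch)
        batch = []
```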
The tool we've been using of late is Amazon Athena, and Athena is really cool. It's a serverless interactive query tool, which means you can just point it at specific S3 buckets of structured data and query them using regular SQL statements. Most of us, being engineers, can probably write pretty standard SQL easily, right? So this is really exciting for me: for the first time in our history, we're able to query our NoSQL data store using plain SQL statements. Woo! Woo! (Those are my co-workers in the crowd trying to make me feel better, but I do think this is cool.) We're able to gain insight into more of our data. We can give our data scientists a really wonderful tool, something specifically suited to them, and at the same time we don't lose the uptime story: Riak keeps doing its job of handling uptime, and everyone's happy. (There's a sketch of what one of these queries looks like at the end of this transcript.)

This is just the beginning, and a really high-level overview, but what we're looking to do in the future is pull in data sources from all over the place. You can imagine subscription data, website data, everything flowing through Kafka; we batch it all up, push it to S3, and then we can query data across the board, across our entire system, using one tool. So that's a really, really high-level overview. If you want to talk details, I'm easy to find at the conference because I'm the one in the big red shirt, and I'm usually happy to demo the Athena setup for the group, too. So, thanks.
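For readers who want to see roughly what the Athena side looks like, here is a hedged sketch of running a query over that S3 data with boto3. The database, table, bucket, and column names are invented for the example, and in practice you would first define the table over the CSV files (for instance with a CREATE EXTERNAL TABLE statement or the Glue catalog) so Athena knows how to read them.

```python
import time

import boto3

athena = boto3.client("athena")

# Plain SQL over the CSV files sitting in S3; the table and columns are hypothetical.
QUERY = """
SELECT status, COUNT(*) AS transactions, SUM(amount_cents) / 100.0 AS total_dollars
FROM transaction_events
WHERE created_at >= date '2017-01-01'
GROUP BY status
"""

execution = athena.start_query_execution(
    QueryString=QUERY,
    QueryExecutionContext={"Database": "payments"},                          # placeholder database
    ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},  # placeholder bucket
)
query_id = execution["QueryExecutionId"]

# Poll until the query finishes, then print the result rows (first row is the header).
while True:
    state = athena.get_query_execution(QueryExecutionId=query_id)["QueryExecution"]["Status"]["State"]
    if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(1)

if state == "SUCCEEDED":
    for row in athena.get_query_results(QueryExecutionId=query_id)["ResultSet"]["Rows"]:
        print([col.get("VarCharValue") for col in row["Data"]])
```

The same approach extends to the cross-source querying mentioned in the talk: once subscription data, website data, and so on land in S3 through the same pipeline, they simply become additional tables you can join in a single SQL statement.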