So, hi everyone. I'm Chris. In case you don't know me, I'm the face of Atelier Solutions. I've been working with Knative for quite a while. Sorry, looks like we actually have timing mode as well. I did some work with GitLab, introducing their serverless offering, and I wrote a Knative runtime using some of the early tooling that TriggerMesh had provided. Very recently I started a new meetup in my region of Northern California, CNCF Placer, hoping to bring cloud-native and some of the Knative stuff into a smaller community and help grow tomorrow's youth.

I'm going to tell this more as a story. We start with an idea I bantered back and forth a couple of years ago: finding a way to modernize systems, especially for larger companies. They have their databases, and they're more than likely not going to get rid of them, but they still want to move toward the cloud. They want to experiment; they want to do something useful with that data. So: we have an Oracle database, and we want to capture the changes that come in from all the normal CRUD operations. How do you do that?

At the time we looked at Debezium. It was fairly early in the project. It supported Postgres and MySQL, and they had started playing around with Oracle, but it was also tied very closely to Kafka, and Kafka Connect in particular. One of the awesome things about Knative, and Kubernetes in general, is that it breaks people free from that vendor lock-in. So we decided, as Seb mentioned in his previous talk, to go the Frankenstein route, and I wrote a Knative source. It worked, but it was very ugly, and setting up the installation, the configuration, and the tracking of all the database changes was a real pain.
So, fast forward to about a year ago. I reexamined the question, looked at Debezium again, and realized: oh hey, they've actually broken out part of the Kafka Connect aspect and started supporting other providers, such as Kinesis and Google Pub/Sub, but nothing on the Knative side. So why don't we just go ahead and Knative-fy it?

On the plus side, as part of breaking out the Debezium Server component to support these other cloud-based eventing systems, they added support for CloudEvents, which actually helps a lot. We also need to containerize it, and their default install already ships in Docker. Okay, so that's two out of the three things. The only thing missing was a way to take those database changes and stream them out into the Knative pipeline. That's where I made some slight modifications: I submitted a pull request a few months ago, it got accepted, and it's part of the 1.9 release of Debezium. It exposes an HTTP client that will stream your database changes into a listening webhook. And this is where the awesomeness of Knative comes in, because by consuming something like K_SINK, I can now use things like SinkBinding to pipe into a broker or a trigger, wherever. So that was pretty much it, and then profit.

The sample integration, which I have in a GitHub repo, is pretty much an on-prem database going into a Debezium service, which spits things out to a broker, which leverages a Knative trigger and a Knative service to massage that data and dump it into Redis, where I have another Kubernetes service listening on the back end to report the results. In this case, it's a project voting system.
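To make the Debezium Server side concrete, here is a sketch of what the configuration might look like with the HTTP sink. The property names follow Debezium's `debezium.sink.*` and `debezium.source.*` conventions, but the specific hostnames, credentials, and connector choice here are placeholders; check the Debezium Server docs for your release before relying on any of them.

```properties
# Hypothetical Debezium Server application.properties using the HTTP sink.
debezium.sink.type=http
# If no URL is set, the HTTP sink can fall back to the K_SINK environment
# variable that a Knative SinkBinding injects into the container.
# debezium.sink.http.url=http://broker-ingress.knative-eventing.svc.cluster.local/default/default

# Placeholder source connector settings for an on-prem database.
debezium.source.connector.class=io.debezium.connector.postgresql.PostgresConnector
debezium.source.database.hostname=onprem-db.example.com
debezium.source.database.port=5432
debezium.source.database.user=debezium
debezium.source.database.password=${DB_PASSWORD}
```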
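The SinkBinding piece mentioned above can be sketched roughly like this: a binding that targets the Debezium Server Deployment as its subject and the default broker as its sink, so Knative injects K_SINK into the pod. The names `debezium-server` and `default` are assumptions for illustration.

```yaml
# Hypothetical SinkBinding that injects K_SINK into a Debezium Server
# Deployment so its HTTP sink streams change events to the broker.
apiVersion: sources.knative.dev/v1
kind: SinkBinding
metadata:
  name: debezium-to-broker
spec:
  subject:
    apiVersion: apps/v1
    kind: Deployment
    name: debezium-server
  sink:
    ref:
      apiVersion: eventing.knative.dev/v1
      kind: Broker
      name: default
```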
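The "massage that data" step in the trigger-fed service could look something like the sketch below: unwrap a Debezium change event delivered as a CloudEvent payload and reduce it to the operation and row that a voting counter would care about. The envelope field names (`payload`, `op`, `after`, `before`) follow Debezium's default JSON format, but the `summarize_change` helper and the sample row are mine, not from the talk.

```python
import json

def summarize_change(event_data: str) -> dict:
    """Reduce a raw Debezium change event to the operation and the row."""
    envelope = json.loads(event_data)
    # With schemas disabled, the event may arrive already unwrapped.
    payload = envelope.get("payload", envelope)
    op = payload.get("op")  # c=create, u=update, d=delete, r=snapshot read
    row = payload.get("after") or payload.get("before") or {}
    return {"op": op, "row": row}

# Hypothetical create event for a project-voting row.
sample = json.dumps({
    "payload": {"op": "c", "before": None,
                "after": {"id": 1, "project": "knative"}},
})
print(summarize_change(sample))
```

From here, the service would write the summarized row into Redis for the back-end reporter to pick up.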
But through the process of going through all of this, I mean, the nice thing about Java and Quarkus, and all the work that has gone into that ecosystem over the last twenty-some-odd years, is that you have a properties file: you can expose settings as environment variables, set them in a ConfigMap, or put them pretty much directly into your deployments. That works out great. Unfortunately, getting the two to play together is still a little bit up in the air, so protecting secrets is a bit of a balancing act.

But it also goes to show that Knative itself acts like a set of Lego bricks, with everything listening on a common port and speaking CloudEvents, which makes it easier to plug data in and out. It still requires a bit of work, though. Some of the functions work that was discussed earlier today will help. Some of the transformation work, especially with regard to exchanging payloads going from a source to a target, still requires some heavy customization. I know TriggerMesh exposes some transformers, like JQ, to translate JSON objects and spit them out in a common format. But it's still making progress.

The one drawback of this approach is that with Debezium you still need a primary instance that's watching for all the changes; otherwise, additional instances' changes clobber each other. And Debezium itself still requires access to the on-prem database. But once you get the data from the database into your cluster, you can stream it whichever way you want.

So that's pretty much it for the spiel. If you want to learn more, I've got contact info, as well as the place where you can pull the code. And that's pretty much all I've got. So thank you. Thank you, Chris.