I'm Daniel Whitenack, and I work at a company called Pachyderm. You'll hear a little more about Pachyderm in a second, so I'll leave that for now. Also, since all of you are machine learning people here at KubeCon, I imagine practicality is something you value: I'm launching a Practical AI podcast with Chris Benson, who's a chief scientist at Honeywell. It's being produced by The Changelog, and we're going to have an episode all about Kubeflow soon, so keep an eye out for that.

The ML use case I really work on at Pachyderm is creating platforms, for large companies or small ones, that let them do scalable, language-agnostic, versioned data pipelining and data management. Let's unpack each of those. Scalable, I think, makes sense to you. Language-agnostic makes sense too: we're at KubeCon, everything's containers, that's good. Versioned is something I'm going to talk a lot about in my talk on Thursday, but basically what I mean is creating data pipelines that are sustainable over time, such that the data, the code, and the processing you do are all versioned and tracked, so you can tie any particular result back to the exact processing and data that actually led to it. And by data pipelining, I mean workflows that are inherently multi-stage, as Clive was talking about. We also treat the data management piece: there are a lot of frameworks out there for processing and running machine learning algorithms, but the one we work on at Pachyderm, which is also called Pachyderm, is a unified view of both data processing and data management. As I mentioned, we have a bunch of production deploys of Pachyderm. Pachyderm itself is an open source project, and there's a company around it.
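The "tie any result back to its inputs" idea can be sketched generically with content addressing. This is a minimal illustration of the concept, not Pachyderm's actual implementation; the helper names here are hypothetical:

```python
import hashlib
import json

def content_hash(data: bytes) -> str:
    """Content-address a blob, the way versioned data systems identify inputs."""
    return hashlib.sha256(data).hexdigest()[:12]

def record_provenance(result: bytes, code: bytes, inputs: list[bytes]) -> dict:
    """Record which exact code and data produced a given result."""
    return {
        "result": content_hash(result),
        "code": content_hash(code),
        "inputs": [content_hash(d) for d in inputs],
    }

# A trained model tied back to the training script and dataset version that produced it.
record = record_provenance(
    result=b"model-weights",
    code=b"contents of train.py",
    inputs=[b"dataset snapshot v1"],
)
print(json.dumps(record, indent=2))
```

With records like this stored alongside every pipeline run, reproducing or auditing a result reduces to looking up its hashes.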
The core is open source, and we're working with a bunch of different companies. We have pipelines in production running TensorFlow, PyTorch, and a bunch of other things, including bioinformatics workloads. We work with clusters of up to 1,500 nodes doing image processing and other work like that.

Okay, a quick talk advertisement: I'm going to be talking about compliant data management and machine learning on Kubernetes on Thursday, so make a note of that. I know most of that title is really exciting for everybody, and then when I add the word "compliant", everybody stops attending, or gets sad, or gets scared. But I think we're going to have a lot of fun with it. There will be a live demo, and again, we'll be talking about actually putting pipelines into production that can be sustained over time in the face of increasing regulation, especially in the EU.

To give you a little taste of that, which Clive set up great for me: we'll have full data pipelines driven by Pachyderm, covering pre-processing of data, training, and model export. I'm going to show and motivate how Kubeflow and Pachyderm can work together: Kubeflow provides a lot of the distributed elements needed in machine learning, Pachyderm does the orchestration and data management piece, and then we can hand off the trained model artifact at the end to something like Seldon for serving, all while keeping everything rigorously tracked and versioned along the way, from code to data to Docker images to the artifacts actually deployed for serving. So that's me.
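A Pachyderm pipeline stage is declared as a JSON spec naming a versioned input repo, a glob pattern, and a containerized transform, which is what makes it language agnostic. Here is a minimal sketch of a pre-processing stage, built as a Python dict; the field names follow Pachyderm's documented pipeline spec, but the repo name, image, and script are hypothetical:

```python
import json

# Hypothetical pre-processing stage: reads from a versioned "raw-images" repo,
# runs a containerized script, and writes output to a repo named after the pipeline.
preprocess = {
    "pipeline": {"name": "preprocess"},
    "input": {
        "pfs": {
            "repo": "raw-images",  # versioned input data repository
            "glob": "/*",          # each top-level path is processed as a separate datum
        }
    },
    "transform": {
        "image": "example/preprocess:latest",  # any Docker image works here
        "cmd": ["python3", "/code/preprocess.py"],
    },
}

spec_json = json.dumps(preprocess, indent=2)
print(spec_json)
```

Training and model-export stages would be declared the same way, each taking the previous stage's output repo as input, so every intermediate result stays versioned end to end.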