I work at Seldon; we're based in Barclays' tech hub in London, an accelerator with 20 to 30 companies in it. We run a TensorFlow London workshop every month, so if you're in London it would be great to have you there to join in with your talks about TensorFlow. As a company, we work on machine learning deployment on Kubernetes, and we also do consulting in the FinTech area, doing machine learning in various aspects, equity prediction and various other things. So where do we stand as a company, and what exactly do we do in terms of our product? If you view the machine learning pipeline as a series of steps, from data ingestion, analysis, and training through to validation of your model, then Seldon Core, our open source project, which is what I'm going to talk about today quickly, focuses purely on machine learning deployment. So after you've done the training, you want to deploy your predictor, scale it, monitor it, do analysis, and do rolling updates to your machine learning in production. We're also part of the Kubeflow ecosystem, so you can choose Seldon Core to deploy your models on Kubeflow as one of the options: you can choose TensorFlow Serving, or you can choose Seldon Core. So how does it all fit together? Once you've got your Kubernetes cluster, you can install Seldon Core via Helm or ksonnet. We've got our own ksonnet registry, and there's one as part of Kubeflow. The next step is to package your machine learning runtime. For that we use S2I, and that's what I'm going to explain today: you take the source code of your machine learning prediction endpoint and package it as a Docker image, so we can then manage that container, which is going to give predictions in your graph.
So the final part is to actually create your runtime graph. That's just saying how your components are going to fit together, so your models, A/B tests, and other things you might do as part of the machine learning pipeline fit together and run together, and we define that as a resource and deploy it. We have our own operator that will see it has been deployed and manage that graph, basically. What we're trying to do is allow machine learning data scientists to use any toolkit, so Spark, TensorFlow, scikit-learn: whatever toolkits they are using now, we just manage the runtime prediction for their models. And for that, they just need to do two things. They need to dockerize their runtime component and expose it using our REST and gRPC APIs. They could do that themselves, but we want to make it really easy for them, so we're using Red Hat's source-to-image tool. Just for those who haven't used source-to-image, there are two parts to this. You have the code that you want to package up; here we've got a prediction component in Python. And then you have a builder image that we provide. We provide Python, R, and Java builder images that allow you to package up your source code into an image. So we provide all the dependencies, and we provide the scripts that S2I requires: in this case, an assemble script that says how your source code is going to be packaged up with our dependencies, and a run script that says how it's going to be run. Once you've got those there, you use the S2I tool and it packages it up and does all the work. So this is just a quick example using S2I. It's going to do a build on the current directory, though it could also build from GitHub; it's going to use our Python 2 builder image, and it's going to output this Python classifier image.
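The runtime graph mentioned above is defined as a Kubernetes custom resource that Seldon's operator watches. The following is a minimal sketch of what such a resource might look like; the API version, field names, and image name here are illustrative assumptions from my recollection of Seldon Core's CRD, so check them against the current documentation before use.

```yaml
# Hypothetical minimal Seldon deployment: a single-model graph
# exposing a REST endpoint, served by one container replica.
apiVersion: machinelearning.seldon.io/v1alpha2
kind: SeldonDeployment
metadata:
  name: iris
spec:
  predictors:
  - name: default
    replicas: 1
    componentSpecs:
    - spec:
        containers:
        - name: classifier
          image: iris-classifier:0.1   # the image built by S2I (name assumed)
    graph:
      name: classifier
      type: MODEL        # could also be an A/B test router or ensembler node
      endpoint:
        type: REST
```

Applying this with `kubectl apply -f` hands the graph to the operator, which then creates and manages the underlying pods and services.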
So the first thing they need to do is write their runtime component. Here's one for the standard Iris classifier in Python. They can then supply a set of requirements for the packages they need, scikit-learn, SciPy, etc., and those will be included in the image. Then they just need to provide a few settings for how we're going to package that image. One is what the class is going to be called, in this case IrisClassifier, so we can find it when we package it; another is how you want to expose the API, REST or gRPC being the two APIs we handle right now; and another is what this component is. Is it a model? We also handle other types of components that allow you to do A/B tests or ensembles and different things like that. You can provide those settings as environment variables on the command line, or as part of the source code. So once you've got that, you just run the single line of S2I, and that will build your runtime image and package it, and then we can deploy it onto your cluster. So really what we're trying to do is make it really easy for people to take their runtime components, package them up, and describe the graph of what they want to deploy out there on Kubernetes. Then we deploy it, it's managed by our operator, and you can go into the virtuous loop of updating your components: changing them, doing A/B tests, canary rollouts, all sorts of things you need to do to keep that machine learning component in production updated and running. So just the final slide, a few call-outs. There are two source-to-image deep dives and intros on Thursday and Friday, and I'm going into more depth on Seldon Core, which is the stuff that I work on, on Friday if you want to know more. So thank you. All right. Thank you very much.
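The runtime component described above can be sketched as a plain Python class whose name matches the class-name setting given to the builder image, with a `predict` method that takes a batch of feature rows and returns a row of scores per input. The class name, method signature, and the stand-in rule below are illustrative assumptions, not Seldon's exact wrapper contract; a real component would load a trained model (for example, a pickled scikit-learn estimator) instead of the hard-coded thresholds used here so the sketch runs on its own.

```python
class IrisClassifier:
    """Minimal sketch of a prediction component (illustrative only)."""

    def __init__(self):
        # A real component would load a trained model here.
        self.classes = ["setosa", "versicolor", "virginica"]

    def predict(self, X, feature_names=None):
        # X is a batch of feature rows: [sepal len, sepal wid,
        # petal len, petal wid]. Return one score row per input.
        # Stand-in rule: petal length roughly separates the classes.
        results = []
        for row in X:
            petal_length = row[2]
            if petal_length < 2.5:
                results.append([1.0, 0.0, 0.0])
            elif petal_length < 4.9:
                results.append([0.0, 1.0, 0.0])
            else:
                results.append([0.0, 0.0, 1.0])
        return results
```

Packaging settings such as the class name and API type would then go in the image's environment (for example via a file the builder image reads, or flags on the command line), and a single `s2i build` invocation against the directory produces the deployable image.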