Welcome to the best seven minutes of your morning, I hope. I'm going to talk about FastTrackML, an open-source experiment tracker that we've built. My name is Steve Gattenbein. I'm a director of open source development for a company called G-Research, which is a quantitative finance company out of the UK and is not at all associated with that other big tech company that starts with the letter G. I just wanted to be transparent about that in case you want to make different choices with your time.

With that said, let's dive right in. What problem are we trying to solve here? (Which, by the way, is a great question for getting any meeting you're ever in back on track.) We wanted to provide our researchers at G-Research with a highly performant experiment tracker. We run extremely large financial prediction models that use a lot of data, and we really needed to iterate on them quickly to get our models up to snuff and out the door faster. One of our goals was to look at what was out there in the open source world and find something with a great UI and a good, rich feature set, because we didn't want to get into the business of being UX people or front-end developers; none of that is core to our mission. And to be honest, we found that there are great options out there. Now that we're done, we can use this with tens of thousands of jobs in parallel, and it's up to 100 times faster than some of the solutions that are out there. We'll look at some pretty pictures that demonstrate that in just a little bit.

So how did we do it? You know how you often hear that an API is a great way of having an abstraction that hides the implementation details? Well, that turned out to be true. We looked around at the landscape and basically settled on two tools that we wanted to work with. One is MLflow, which many of you may have heard of, and the other one is Aim.
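To make that API-as-abstraction point concrete: because FastTrackML implements the MLflow tracking API, an existing MLflow user's only change is where the client points. A minimal sketch, assuming a server on localhost and the conventional port 5000 (both illustrative, not guaranteed defaults):

```python
import os

# Hypothetical address for a FastTrackML server; host and port here are
# assumptions for illustration, not documented defaults.
FASTTRACKML_URI = "http://localhost:5000"

# MLFLOW_TRACKING_URI is the standard MLflow client environment variable;
# setting it (or calling mlflow.set_tracking_uri) redirects all logging to
# the new server, with no other code changes.
os.environ["MLFLOW_TRACKING_URI"] = FASTTRACKML_URI

# From here, the usual MLflow client code runs unchanged, e.g.:
#   import mlflow
#   with mlflow.start_run():
#       mlflow.log_metric("loss", 0.42)
```

Because the server speaks the same REST API as MLflow's tracking server, this swap is the whole migration from the client's point of view.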
MLflow is great because it's already integrated with so many frameworks that you know and love. Aim is a little newer and doesn't have quite the same community support yet, but it has an amazing UI, and we wanted to be able to provide that to our users. We also wanted to make sure that for existing MLflow users, migration would be painless, requiring no code changes, and we built tooling to let people who are using MLflow bring their data into our product and start using it with these excellent UIs. We were also really interested in being able to see information about our experiments in real time as they execute, and we do a ton of work with time-series data, so all of that was really key for us.

So, pretty picture time: is it really fast? I wanted to take a second here to give a shout-out to Major League Hacking. Hopefully they watch this someday and see me doing this, but they run an excellent program that finds junior engineers and essentially brings them in as interns. We had a fellow working with us this fall who built tooling to measure the performance of our endpoints, across the different backends that we support, against what's already out there with MLflow. This will eventually become part of our build process so that we are aware if we ever introduce a regression. Our users are demanding, and maybe yours are too, so we want to be ahead of that if we at all possibly can. I apologize that this is really hard to read, but you can download it later and check it out if you like.

I have another page of pretty graphs. Oh my god, we did have a regression! You can see it there in chart number three. We've got a ticket open for it; I've got it here on the slide, so you can follow along. It's in the current sprint. Just the sort of thing you want to watch out for when you're trying to release open source code.

What does our future hold? You're probably wondering.
We're doing more future development. We've got a namespacing feature in beta. Namespacing lets you segregate your experiments into their own little worlds, so data scientists or researchers can use this tool without their work being intermingled with anyone else's. It basically unlocks the use case where an ops team, for example, stands this up as a single instance, and multiple researchers use it without having to know the nitty-gritty of running Postgres or things like that.

We have also implemented a lot of the REST API features of both MLflow and Aim, but there is more work to be done. Right now, we're working on metric contexts, which is a very cool feature for slicing and dicing your experiment data. OIDC support is pretty interesting to us: the namespace feature right now isn't super locked down, so it's not really something you could use for fully multi-tenant, RBAC-style setups, but we aim to get there in the future. We need to do some work to optimize the Aim UI, because we're basically feeding it data faster than the Aim backend can handle, and that's causing some issues in the browser. We would also like to eventually migrate from SQLite to DuckDB. We use SQLite now, a choice we carried over from MLflow, but we think we might get better performance there. And then better docs: around productionization (I know that isn't a word, but I think you all know what I mean) and around migration, although the tooling that we have is pretty straightforward.

And then finally, we have an aspiration: we'd like to be the default backend for these other systems. We don't really think this is a space that benefits from a ton of competition. At the end of the day, you'd like your experiment tracker to be basically a commodity, something you don't spend a lot of time thinking about.

And so that is basically it. We're on the internet. We have a website. We are in Slack.
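Going back to namespaces for a moment, here is a sketch of the single-shared-instance use case described above. The `"/ns/<namespace>"` URL path prefix, the host name, and the helper function are all illustrative assumptions about how a namespaced server might be addressed, not documented behavior:

```python
# Sketch: two teams sharing one tracker instance, each isolated in its own
# namespace. The "/ns/<namespace>" path prefix is an assumption made for
# illustration; check the project docs for the actual routing scheme.
def namespaced_uri(base: str, namespace: str) -> str:
    """Compose a tracking URI scoped to a single namespace."""
    return f"{base.rstrip('/')}/ns/{namespace}"

# Each team points its client at its own namespace, so their experiments
# never intermingle even though the server (and its Postgres) is shared
# and run by one ops team:
team_a = namespaced_uri("http://tracker.internal:5000", "team-a")
team_b = namespaced_uri("http://tracker.internal:5000", "team-b")
```

The point of the design is that isolation lives in the URI, so researchers never touch the database underneath.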
We have a GitHub. The team is on the Slack all the time and monitors the issues all the time. If you want to use this, or if you encounter any issues while you're trying to spin it up, just drop in and give us a shout. We're here for you.

I don't know how much time I have left, probably about 30 seconds. If anybody has any questions, I'd be happy to try to answer them. Yeah, yeah, still reeling from my sound-check joke, I'm sure. All right, thanks everyone. I'll see you around.