When I first started my career, I was great at data science and machine learning, but I had no idea how to get my models into production. I could write code, but I wasn't a developer, and I had to ask developers for help any time I wanted my work to have an impact on our actual customers. That was incredibly frustrating; it made me feel incapable, and it slowed me down.

It turns out I wasn't alone. Lots of people across vastly different professions want to process data as part of their work, and they don't want to wait for software engineers to help them. More than a billion people use Excel or other spreadsheets every month, which shows there are more than a billion people interested in processing data to gain insights or make decisions. And Excel is great, but there's a reason software developers are so highly paid and can produce so much value: writing code takes it to another level. It's no coincidence that graphical interfaces for programming have not succeeded, and that some of the most successful AI applications today, like GitHub Copilot, focus on helping developers write code faster, not on replacing the need to write code. Business logic is complex, and code, a formal language, is the right way to express it.

The thing is, writing code isn't hard, but deploying code is. That's the reason more people don't do it. You can think of it like driving a car: most people can learn how to drive, but if you also had to know how to build the engine before you could go anywhere, a lot fewer people would be on the road.

Now let me tell you a story about what happens when those barriers to deploying code are taken away. It's from my time at iZettle, a Swedish fintech. Yes, this is an AI-generated image, as you can tell from the extra pair of legs and the weird camel in one of the dashboards. But for the purpose of this talk, let's pretend that this is the risk team at iZettle.
This team's job was to identify fraud by filtering out suspicious accounts and sending them off for manual review. At most companies, when they identified an interesting new pattern they wanted to filter for, they would have had to go to an engineering team, ask for help, and then wait weeks or months until it was implemented and they could see the results. But the people on this team, even though they had never really written code before, were happy to pick up some basic Python. And thanks to great internal tooling that we had built at iZettle over many years, they were able to go from idea to results in just hours, completely without help from anyone else.

That didn't just let them make changes to these algorithms faster; it also meant they could make many more changes. Instead of testing one or two modifications every month, they could change the algorithms 30 times or more. This led to iZettle having exceptionally low fraud rates, despite having a pretty small team.

All of this was possible thanks to great tooling. These risk analysts could focus on writing their business-specific logic without having to worry about things like spinning up servers, connecting to databases, or setting up scheduling. They could reuse code from previous work, and they could build multiple things on top of the same datasets without having to figure out how to get any of it running in production.

The best tech companies in the world have spent thousands of hours building tooling like this internally. And that's smart, because it means their data scientists can focus on data science, their machine learning engineers can focus on machine learning, their financial analysts can focus on finance, and so on. But what about everyone else? There are thousands of companies that want to process data and use it to gain insights or automate decisions and processes.
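To make the risk-team story concrete: a rule of the kind those analysts could express in basic Python might look something like this. This is a purely hypothetical sketch; every name in it (`Account`, `is_suspicious`, `accounts_for_review`) is invented for illustration and is not iZettle's actual internal tooling.

```python
# Hypothetical sketch of a risk-filtering rule; names and thresholds are
# invented for illustration, not iZettle's actual internal tooling.
from dataclasses import dataclass

@dataclass
class Account:
    id: str
    daily_volume: float      # card volume processed per day
    chargeback_rate: float   # fraction of transactions charged back

def is_suspicious(account: Account) -> bool:
    # Example pattern: unusually high volume combined with many chargebacks.
    return account.daily_volume > 10_000 and account.chargeback_rate > 0.02

def accounts_for_review(accounts: list[Account]) -> list[str]:
    # Return the ids of accounts that should be sent to manual review.
    return [a.id for a in accounts if is_suspicious(a)]
```

The point is that the analyst only writes the pattern itself; everything around it (where the account data comes from, how often the check runs, where the flagged ids go) was handled by the internal platform.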
These companies typically face a choice: hire a team of data engineers to build an internal platform, spending significant time and resources to do so, or use overly simplistic low-code or no-code tools that limit what can be done. That's why we built Twirl: to give anyone access to a data platform like those at the best companies in the world. Twirl lets you write simple scripts that focus on the business logic you care about, whether that's filtering out fraudulent accounts, identifying promising sales leads, or anything else you might want to do. We even have customers that have built their entire products on top of Twirl. Twirl takes care of packaging your code, running it in production, and dealing with all the complexity, so that you can focus on the things that matter to you. It's like having an experienced data engineer by your side, guiding you toward best practices so you can focus on the task at hand. In fact, more than 85% of our customers have no data engineers at all; they manage just fine with help from Twirl.

And this is important because it unlocks massive productivity gains. When you can write code yourself instead of having to ask someone else to do it, you're so much faster. Imagine having to explain to someone else how to tie your shoes, and then suddenly being able to do it yourself. And it's not just about speed: when people become autonomous at solving tasks on their own, they can do it much more often and in smaller steps. A good example comes from software engineering. Not so long ago, developers had to go to IT and ask them to provision servers whenever they wanted to set up a new application. Today, they can do it themselves in the cloud, and that means they're much more likely to build lots of small applications all the time.

This is Stephanie Cabrera. She joined Bokadirekt, Sweden's biggest marketplace for hair and beauty appointments, as their first data hire.
Her background is in BI and analytics, and she had previously worked at bigger companies that already had tooling in place for working with data. So she knew what she needed, but she quickly realized she would need help building it. Her plan was to hire one or two data engineers and spend the next two years building out a data platform internally. With Twirl, she was instead able to get up and running with a complete platform in less than two months, and they now use Twirl to power dashboards, generate insights, interact with their customers based on data triggers, share data with their partners, and even develop entirely new data product lines. Stephanie has been able to have the experience I wish I had had when I started out as a data scientist: she can focus on her area of expertise and rely on tooling to handle everything else.

It's been fantastic to see our customers get so much value from their data very early on, but what's even more exciting is seeing individuals become ten times more productive than they've ever felt before. There's something magical about watching people who might have been under a little too much pressure to deliver value quickly end up being regarded as superheroes by their organizations. We're continuously onboarding new customers to our closed beta, so if this sounds interesting to you, or if any of this resonated, we'd love to talk. Thank you.