All right. Hello, everyone. This is my first Slush. Before I get started, a little disclaimer: when I left San Francisco a few days ago, I thought I was at the tail end of a little head cold. I guess flying 18 hours made everything worse. I've been holed up in my hotel for the last two days, but I took a ton of decongestants and painkillers, which is making this whole space way more surreal. I feel like I'm tripping. This is crazy, the production value of this place. If I say anything during this talk that doesn't make sense, it's because there's an extreme amount of pressure being placed on my brain. Anyway, I think it's gonna be fine.

So today we're gonna talk about a framework for exponential growth. Before we get started, a little bit about W&B. Let's do a little audience participation. Who in the audience is working at a startup, or has worked at a startup, that has a developer tools product offering? Something to help developers. All right, I kind of figured we'd get a couple. What about an enterprise software startup? A startup that's selling to big enterprises, whether it's developers or not. All right, more. Cool. Who's heard of Weights and Biases in the audience? All right, good to know. We got like four people that have heard of us. Cool.

So Weights and Biases is an end-to-end MLOps platform. When machine learning engineers are building machine learning models, they need a central place to keep track of all the work they're doing. They need to be able to visualize whether Model A is better than Model B, and what data Model A was trained on. So it becomes the central hub for organizing all of a company's modeling efforts. I'll stop there. We're gonna be talking about how we at Weights and Biases have grown our user base, and we'll talk about some of the personas at Weights and Biases as well. A little bit about the company: I'm one of the co-founders.
We started about six years ago. Prior to starting Weights and Biases, I co-founded a company called Figure Eight, which was originally called CrowdFlower, also in the machine learning data labeling space, with Lucas. We've raised a couple hundred million dollars from venture funds like Coatue, Insight, and Bond. We have a number of high-profile angels and have built a really, really strong team to continue to scale this product. Companies that use us: has anyone heard of OpenAI? Probably a couple. They've been in the news recently. They're big users of Weights and Biases; they're using Weights and Biases to build the next generation of ChatGPT. My favorite thing about the company is that so many different industries are able to use us. We'll sell into medicine, into agriculture, into e-commerce. All industries that are working on machine learning, which is more and more companies now, can benefit from Weights and Biases.

All right, so today we're gonna talk about FUEL, a framework for exponential growth. I've got this really cool animation here. Ultimately, FUEL is an easy way to remember the four key components to create a growth engine that will give you an exponentially increasing number of users. Number one is flywheels, and we'll get into the details of this in a second. Number two is users: we want to understand the user decisions behind every metric we're trying to move, and how to inspire those users to make the next decision. The E is for experiments: we want to influence these decisions by running a number of experiments and seeing which ones work and which ones don't. We should set clear start and end dates for these experiments, with clear attribution and clear hypotheses. And then learnings: once we've done all of this, the most important thing is to capture what we're learning.
Because the things that worked last year won't necessarily work this year; it's a constantly evolving system. And we can take these learnings and go back and improve our flywheels. I'm gonna be sharing examples from Weights and Biases today, but this framework can be used by any company. And I wanna give credit for the framework itself to our head of growth, Lavanya Shukla. I'm just the messenger here; she came up with the genius acronym of FUEL.

All right, so first let's start with flywheels. We wanna build flywheels, not funnels. We need to set a North Star metric, and we should start measuring everything now, or yesterday ideally. So: flywheels, not funnels. This isn't to say funnels are bad. Funnels are good, we need funnels, but funnels are not nearly as exciting as a flywheel. A funnel is generally gonna lead to linear growth. You generate some awareness and then you take a user through a series of steps until they're activated and are likely to become a buyer, or an active user, or whatever behavior you're trying to drive. Flywheels lead to exponential growth, which all of us want in a startup, right? They do this by adding an extra step: turning engaged users into champions who build publicly on top of your product and draw in the next generation of users through the stuff that they're building. This is really our North Star. The bet is that we can efficiently generate that next generation of users, and flywheels are an amazingly efficient way to do it. So we wanna find something in our product that users can share publicly, so they discover us, fall in love, and tell all of their friends, the next set of users.

At Weights and Biases, we have a couple flywheels that have worked really well for us. One is our content flywheel. Within the space of machine learning, things are changing so quickly. We started Weights and Biases six years ago.
The PyTorch framework didn't exist. LLMs certainly weren't a thing. The transformer architecture wasn't a thing. There are new and exciting innovations in this space every other week. So we have a team of content writers looking for the next most exciting thing that everyone's talking about, maybe it's trending on GitHub, and we generate content around it. We spend a lot of time ensuring this content is optimized for SEO, so it gets indexed by the search engines, and as new users try to understand more about this exciting new topic, they end up on the Weights and Biases platform.

Now, the secret here is that we could have just had a blog. We could have WordPress and just be doing this. But what we're actually doing is using a feature of our product called Reports. A report in Weights and Biases is a way for machine learning engineers, or members of a team, to share their analysis. Think of it as like a report you would do in school. If you're an academic researcher trying to figure out the next generation model, you could use a Weights and Biases report to share a bunch of graphs, a bunch of insights and knowledge. We're having our own team use Reports to generate content. This enables new users who have maybe never heard of Weights and Biases before to come read content that they're really interested in and say, hey, how was this report written in the first place? What is this? Now suddenly they learn about Weights and Biases, they sign up for their free account, and they become that next generation of users who might create their own report to share some interesting research or experiments that they're doing using our platform.
And this is really that second loop that's worked so well for us, because the actual content generation is a feature of our product. The users themselves, especially academics, who we give our product away to for free, will come use our product to do experiments and then share their analysis, their research, directly in our product, which then generates that next generation of users. So this has served us really well.

All right, setting a North Star metric. We're very metric-driven at Weights and Biases. Our product itself is basically all about metrics; it's a way for researchers to log metrics. So we've always tried to have a really good data warehouse and business intelligence solution in-house. I cannot overstate how important it is to get a team focusing on this as soon as possible if you're really serious about driving growth. One of the most difficult things to do is just define the metric that matters for your business. One of the most important metrics at Weights and Biases is what we call a weekly active engaged user. What this means is someone who comes to the Weights and Biases platform at least once in the trailing three days within a given week, so effectively they need to come to the platform twice within a week. Now they're weekly engaged, and then hopefully we continue to increase that number, which, at least in 2021, we did in an exponential way.

To make this number important, it's really about creating a culture of reporting on it. Before the pandemic, when we had an office, we had big television screens and we would put this chart up and watch it every day, and when there was a bad week, everyone in the office would get a little nervous and say, what's going on?
Now, in the post-COVID world, we have a bi-weekly meeting and we're always reporting on this; it's one of the first slides in that meeting, saying this is where we're at on our high-level growth metrics. And of course we're also reporting on ARR and all of these downstream business metrics, but growth is the oxygen of the company. This is what's gonna get us to scale and ultimately drive more revenue. So it's really important to be honest about reporting these numbers. It's easy when things are working to say, cool, we're all doing a great job. But when things start working, it's really important to dive in and ask: what changed? Why was it not working before, and why is it working now? And then of course, when things stop working, like this big dip we see in the graph, that looks scary. I believe, yeah, that's actually Christmas; it's fine. Everyone's hanging out with their families in America.

Another core thing to think about are the three components that make this metric really useful. One, users: who are the users you're targeting with this metric? You're probably gonna have multiple personas that your product is useful for. Which of them is this specific metric you're trying to optimize targeted at? What's their title? How do they use your software? Two, the activity: what does this user need to do to actually be considered engaged? Think, for example, creating a post versus just liking a post. And three, the cadence: how often do they need to do this behavior to get counted in this metric? If you're a company like Slack, you probably want this to happen daily. At Weights and Biases, we measure weekly engagement, with that three-day trailing period. It's always gonna be specific to the behaviors you expect of your users.

Start measuring everything now, or, as I said earlier, yesterday ideally.
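As a rough illustration of how a metric like this might be computed from raw visit events, here's a minimal sketch. The function name, the sample data, and the "two distinct days per week" interpretation of the rule above are my own assumptions, not W&B's actual pipeline:

```python
from collections import defaultdict
from datetime import date, timedelta

def weekly_engaged_users(events, week_start):
    """Users who visited on at least two distinct days within the
    7-day window starting at week_start (one reading of the
    'come to the platform twice within a week' rule).

    events: iterable of (user_id, visit_date) pairs.
    """
    days_seen = defaultdict(set)
    for user_id, day in events:
        # Only count visits that fall inside this week's window.
        if week_start <= day < week_start + timedelta(days=7):
            days_seen[user_id].add(day)
    # Engaged = visited on two or more distinct days this week.
    return {u for u, days in days_seen.items() if len(days) >= 2}
```

In practice a query like this would run against the data warehouse mentioned above, and the chart of weekly counts is what goes up on the big screen.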
This is something we wish we had done more of early on. You really want to instrument as much as you possibly can, because if you don't have the data, you can't answer questions like: why is it going up? Why is it going down? You need to instrument your entire funnel and really understand it. You want to have a good sense for all the different channels your users are coming from, and each step they're gonna take from discovery to onboarding. If you're not measuring, you're not learning. These metrics can be an amazing early warning system for fundamental things that are broken in your go-to-market model.

Try to get attribution right. Attribution is so hard, but the more attribution you're able to have, the better you're gonna understand what's working and what's not. You know, I'm here at Slush. It might be hard for me to attribute someone who's listening to this talk today and later signs up for Weights and Biases, so that we could say, gosh, it was definitely worth that flight all the way out here to drive this new user. With someone watching a YouTube video, you've got the referrer: you know they came from the video if they clicked a link, but often they don't. So you can get creative about attribution, like just asking users how they heard about us or why they're coming. Ultimately, the other important thing to care about is ROI. As you're driving dollars into any of these channels, whether it's YouTube ads or Google ads, attribution becomes really important, because you need to be able to show your ROI for go-to-market efficiency purposes. Your future investors are gonna really wanna know how efficient your growth engine is.

All right, FUEL. Now we're on to the U. With users, we wanna find our PIC, which we'll talk about in a second.
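To make the "get creative about attribution" point concrete, here's a minimal sketch of a channel-attribution fallback chain. The function name, the priority order, and the example URLs are illustrative assumptions, not our actual implementation:

```python
from urllib.parse import urlparse, parse_qs

def attribute_signup(landing_url, referrer=None, self_reported=None):
    """Best-guess acquisition channel, in descending trust order:
    explicit UTM campaign tags, then the HTTP referrer's domain,
    then the 'how did you hear about us?' answer the user gave."""
    params = parse_qs(urlparse(landing_url).query)
    if "utm_source" in params:
        # A tagged link (e.g. from a YouTube ad) is the strongest signal.
        return params["utm_source"][0]
    if referrer:
        # No tag, but the browser still told us where the click came from.
        return urlparse(referrer).netloc
    # Last resort: whatever the user typed into the signup survey.
    return self_reported or "unknown"
```

A signup from a tagged ad resolves to the UTM source, an untagged click still yields a referrer domain, and when both are missing, the self-reported answer is better than nothing.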
And then, behind every metric you wanna move is a decision you want your users to make. So, PIC: persona, intent, and channel. Persona: who is the user? It's really important to deeply understand this user, and ideally love this user. Intent: what problem are they trying to solve? Can you feel the pain they feel because they aren't able to solve it today? And distribution channel: where are they hanging out? Where are they looking for answers? Whenever you do an experiment, you should think about these three components and say, this experiment is really targeting this persona, intent, and channel. Then you can prioritize and design the ideal experience for the most important persona, intent, and channel combinations.

All right, so here are some example PICs at Weights & Biases. We have the machine learning team lead. They're more of a leader, a manager. They're not gonna be using the product for their day-to-day research, but maybe they wanna see dashboards and overviews of how the rest of the team is using it. We've got the ML practitioner. This is really our core persona; this is where we really excel. The product is made for these users. We have the CTO, and MLOps. One interesting thing I'd point out here is that the ML practitioner and the MLOps persona, in our case, are at odds. Often the MLOps persona wants to keep everything in house, whereas the ML practitioner just wants to use the best tool. So really understanding that MLOps persona, and how we can get them to be an advocate for us, has been a real challenge that is now starting to bear fruit.

So, behind every metric you want to move is the decision you want the user to make. It's really important to understand these decisions at each level, so go through these steps yourself and really try to understand what the user experience is for that PIC. Ask yourself: would I take the next action if I wasn't invested in this company? Would you click on that social media post?
Would you sign up? Would you come back another day to use the product? If not, how can you motivate the user to take that next step? Reduce the cognitive load and give the user more motivation to take the next action.

All right, experiments: finding good experiments to run, and then ultimately how to run them. There are really two modes of experimentation. One is continuing to optimize an existing experiment that you've defined. I would put this in the quantitative bucket: we're collecting metrics and we're just gonna try to make it better. And two, discovering new experiments to run. This is more qualitative: we're gonna be talking to customers, prospects, et cetera.

So how do you find good experiments to run? Well, go where your users are. Spend time in their communities, the places where they're spending time. In the machine learning world, there's a really cool community called Kaggle, which is a machine learning competition website. We have members of our team that are Kaggle Grandmasters who really understand that world. Towards Data Science, Hacker News, Twitter. GitHub has been an incredible place for us to generate community and drive awareness. We'll integrate our software into popular GitHub repositories, which users then discover and say, cool, I wanna use this. And it's become a great flywheel where suddenly we're not even needing to integrate our library anymore; users will just integrate it directly because they see how useful it is. Really take advantage of what your audience is most excited about. Again, the latest tools, the latest papers: you can feed this into your own product and brand by throwing events explaining it, or throwing a hackathon to build with that new tool. And then community-led growth. This is the best, right? Just genuinely do things that are useful for the community.
Teach the community something new, make it easy for them to keep up with what's going on in the industry, connect them to mentors and heroes they wouldn't otherwise meet, and teach them best practices. Early on at Weights and Biases, we were teaching classes about deep learning just for free. It was a great opportunity to meet people and understand where the blockers were, and I think it ultimately really improved our product, because those community events were kind of like a live product workshop. The more you can engage with users in a way that's helpful to the community in general, the better; I can't recommend it enough.

All right, so how to run good experiments. You've got limited time and an infinite space of things to try to generate awareness and new users. So you should have a portfolio of experiments. Some set of things you know work, where now it's just about improving them, scaling them, making them better. And then some portion should be new, unproven ideas that you just wanna try out. You wanna balance the impact, the effort, and the confidence of the experiments you're running. To do this: one, have a clear start and end date. It can't be open-ended. Two, you're gonna want to be able to clearly attribute, and have a metric you're trying to move associated with each experiment. And three, you want short iteration cycles. It's not gonna work if you're iterating longer than one to two weeks; it should be one to two weeks for these experiments. Now, that doesn't mean the experiment itself won't take longer than a couple weeks. It just means every one to two weeks you need to sit back, evaluate, and say, all right, is this working? Am I putting my resources into the things that are giving me the biggest return? And then, big one: replace adjectives with metrics. Instead of "grow the user base" or "improve the signup flow", it's "improve this metric by X percent".
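One lightweight way to balance impact, effort, and confidence is a simple priority score over the experiment backlog, in the spirit of ICE scoring. The formula, the scales, and the backlog entries below are hypothetical, just to show the idea:

```python
def score(impact, confidence, effort):
    """Higher impact and confidence push an experiment up the list;
    higher effort pushes it down. Each input is on a 1-10 scale."""
    return impact * confidence / effort

# Hypothetical experiment backlog: (name, impact, confidence, effort).
backlog = [
    ("SEO report on a trending paper", 8, 7, 4),
    ("Sponsor a Kaggle competition", 6, 4, 8),
    ("Hackathon around a new tool", 5, 5, 3),
]

# Rank the portfolio: proven high-leverage work first, long shots last.
ranked = sorted(backlog, key=lambda e: score(*e[1:]), reverse=True)
for name, i, c, e in ranked:
    print(f"{score(i, c, e):5.1f}  {name}")
```

Re-running this ranking every one to two weeks is one concrete way to do the "sit back and evaluate" step described above.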
And kill fast. Don't be scared to try lots of ideas; many of them you're gonna have to throw away. Failure is guaranteed here. A lot of this stuff isn't gonna work, but the only way we learn, and the only way we get better, is by failing. So keep iterating.

All right. Learnings, the L in FUEL. This is the real job of the growth team, and ultimately what feeds back into the product. The learnings are what it's all about. Like I said, the things that worked last year aren't necessarily gonna work this year. Your space is changing, the company's changing, the product is changing. Being really organized about all the experiments you've run, what you've learned from them, what's worked and what hasn't, is really important to keep this engine going. You wanna capture this in some regimented way, whether it's Notion or a tool like Monday. Ultimately you can build playbooks from all of these learnings as well: all right, I tried these three experiments, they all worked well; how can we take these learnings and help future employees onboard quicker? Let's make a playbook for how we run the most efficient event, ensure we have attribution, and actually drive awareness with it.

Cool, so that is the FUEL framework, a framework for exponential growth: flywheels, users, experiments, learnings. The strategies, personas, and experiments are gonna be different for every one of your companies, but this framework, I believe, can work across the board, and it's really served us well at Weights and Biases. So with that, good luck. Get that exponential growth. Thank you very much. You can hit me up on Twitter, @vanpelt, and shoot me an email if you'd like as well, vanpelt at wandb.com. Really appreciate your attention, everyone. Thank you.