Hi, everyone. We are so thrilled to be with you here today at ProductCon. I'm Nicole Breskin. I oversee news and entertainment vertical products here at Disney, brands including ABC News, Marvel, Nat Geo, and ABC Entertainment. And data is just such a massive part of everything that we do here to drive our decision making. And I couldn't be more thrilled to be here with this panel today to moderate this discussion on using data to make effective decisions in product management. So without further ado, I'm going to hand it over to our panelists to introduce themselves. And Abby, I'll kick it over to you. Awesome. Great to be here. I'm Abby. I'm a director of product at Amplitude. I focus on essentially helping product teams answer questions with their product data and collaborate around it. So I'm definitely excited about this panel, given what we do is help teams make better decisions with data, and I'm also managing a team of PMs who are trying to do the same thing. It definitely gets meta, but it makes me very excited for the topic today. And Ryan, over to you. Yeah, excited to be here as well. I'm the founder and CEO of Sprig, and we're a product insights platform that complements both analytics tools such as Amplitude and A/B testing tools such as Kameleoon quite well. What we do is in-product surveys as well as session replay and concept testing to help product teams and product managers understand the why. We're backed by Andreessen Horowitz, Accel, and First Round Capital, and we work with many of the fastest growing tech companies, including Loom, Square, Robinhood, Coinbase, Dropbox, and Notion. Amazing. And Colin, great to see you as always. Colin, tell us about you. Hi everyone. Great to be here. I'm Colin. I'm leading Kameleoon here in North America. Kameleoon is known for a couple of things. One of which is we're known for all-team experimentation. That's where you have multiple teams coming together on a single platform to do experimentation on the web.
But you can also do experimentation in the back end with your products. We're also known for our AI, which gives a score to every single visitor that comes to your site based on their propensity to convert on a single KPI. So we are huge lovers of the product world and all of the data that powers it. So happy to be with you all today. Great. Well, we're going to just jump right in. Again, so excited to be here today. So just getting us started. Team, I'd love to hear from you. What are some of the common challenges that product managers face when using data analytics to inform their decision-making processes? I'm happy to kick off. My background has always been in product management, right out of college. The reason why I started Sprig was that I was previously the first product manager at Weebly, and we quickly grew to $100 million ARR and 400 people. And we were getting all of our data in place. We were actually probably relatively late compared to most companies today in getting that analytics instrumentation in place, that A/B testing in place, and getting those really crisp revenue dashboards in place as well. And we started to get everything instrumented and started to realize that we didn't really understand the why behind the revenue trends that we were seeing, or perhaps the analytics trends that we were seeing, or maybe even some of the A/B testing trends that we were seeing. And so the reason why I moved on from product management to starting a company in the product insights category is to help companies really understand the why. I think a big challenge a lot of companies are facing today is that they might have partial data sets. Maybe they have the analytics data and the revenue data. Maybe they have the A/B testing analytics data.
But we do see other companies missing the experience insights, and really putting the customer first means understanding their sentiment around whether a feature or product is meeting customer needs. And so I just really encourage a lot of teams and product managers out there to look at the different types of data and ask: do you really have that complete holistic picture into not only how much revenue you're making off of a given customer or account, but do you really understand their behavior? Do you understand the why behind their decisions? Look at all those data sets together to be data driven. I see a lot of product teams today being data driven around very specific types of data that could be misleading. And so taking that complete picture is really one of the challenges I see today, and an area that we've been excited to work on with teams. Yeah, definitely agree with all that. And I think, going off of that idea of the complete picture, another challenge that I see people facing is either stopping work when the metric doesn't look good, and maybe incorrectly abandoning an initiative too early because you chose the wrong metric or maybe the wrong maturity of metric, or, on the other hand, cherry-picking the data that really tells the story that you want it to tell and continuing with an initiative when it no longer makes sense. Something interesting, going back to that complete picture: especially early on in a product's life, I think a mistake people can make is to set metrics that are just at the wrong maturity level.
So for example, a usage metric when you're actually trying to take a big swing with a new persona, or maybe investigating something like AI. Maybe metrics aren't going to be the way that you can successfully measure that, in which case people sometimes forget to look at some of the other signals that Ryan mentioned, and they either abandon initiatives or continue with something that they shouldn't. I think that's all well and true, and I just think we also have to be really mindful that one of the biggest challenges that PMs or marketers, anybody that's operating today, face is the volume of data that we're getting, and the velocity it comes to us with. It's just overwhelming. So I think one of the biggest challenges you have to be ready to face is to know your enemy. What kinds of data should you be looking at, precisely? And then being really forward thinking and exploring: okay, well, this data is important to me and my team, but how does it affect other parts of the business? That really separates, I think, the good from the great: they're not only aware of the data that's important to them, but they're also going to be very informed about how their performance affects another part of the machine. Absolutely, and I think it's so key, when you set your metrics, to ask: what are your guardrail metrics that you don't want to mess with, and also what are the metrics that maybe are not really success factors in what you're looking at? Which dovetails really nicely into: team, how should PMs be thinking about setting success metrics for the products they're looking after? And secondly, an area that I know, Abby, you're really excited about: how should PMs be integrating data into their product life cycle? What are some best practices? Yeah, happy to kick off on that one.
I think one thing that we always recommend at Amplitude, and that I use on my team too, is that before you start a project or before you work on a product, you really ask yourself: what are the key questions that I want to learn or answer with this initiative? So rather than starting with, hey, let me go ahead and, to your point, Colin, tag every single thing and care about all the pieces of data, what are actually the questions that the team wants to answer with an initiative? And then go to: okay, what data do I already have that I could use for that? Or what data am I going to miss and not track if I don't instrument this now? So that's one key thing that I would do: make sure you're taking a question-first approach to data and really going about it in that way. And the other thing that I'd say here, in terms of integrating it into the team, is to make analytics more of a team sport. I think analytics is best when you're working and riffing with someone. So something that we tend to do at Amplitude is bring in the team at all stages of the development cycle around data, which actually makes it a really fun exercise as well. So when you're kicking something off, actually get the team, the engineers, designers, and PM together and have them brainstorm around what that set of questions is that you want to answer. Typically, you'll find that they have other questions you haven't thought about, and it maybe even changes the product that you end up building. So definitely do that, and bring data into the design process: actually ask your designer, what questions could I answer for you that would make the experience better if you knew the answers? Which I think can also bring more data-driven development.
And then finally, when it comes to the results cycle, one of my favorite meetings that we hold at Amplitude is actually pulling the results, bringing the team together, looking live at all of the results and dashboards, and asking them what other questions they have. And if you have a tool like Amplitude, you can even answer those questions live on the spot together, which just gets everybody thinking. And typically, we've found a lot of our new hypotheses come out of those sessions. So, as I was saying, make sure you take a question-first approach, and also try to make it a more collaborative experience around data in general. One area where I have the biggest tip for any product manager out there is making sure you're segmenting the right user base. Your company strategy for the year, your OKRs, your quarterly goals, whatever your North Star is, there's always going to be a specific user persona that you're focused on as a product manager. And a lot of companies will look at their data as if every user is the same, as if every user is a focus for the company. I've seen A/B tests where the test version outperforms the control, but perhaps not with the user segments the company should be focused on. I've seen analytics data where trends move in the right direction, but not for the specific user segments that your company is focused on. And experience data as well: we've seen in-product surveys where customers move metrics, but again not for the right user segment. One of our customers, Coinbase, just published a case study with us. What they wanted to do is improve their tax center. Crypto taxes are, as you can imagine, very scary and very complex. And they wanted to focus on how they could empower more of their advanced users, arming them with the tax information that they needed to actually successfully file their taxes with crypto.
And they were able to segment specifically to the power traders and people that had made trades on other platforms. And when they looked at and segmented all their data to that specific group, it gave them really key insights into exactly what they needed to do. Now, if they had looked at the broader user base, the majority, which is actually the less advanced users, would have driven their roadmap in a different direction. And so when we look at segmentation, making sure that across all your data sets you're segmenting the specific personas that are strategically important for the business is going to give you the best signal on what to do next with your product. Yeah, I mean, I think it always comes back down to what you said earlier, Ryan, which is that you're trying to answer the why. And if you can't easily, sometimes I just recommend to any PM or marketer out there: just stop with the data for a second, just stop and see if you can tell your friend a story about how your customer or user behaved. And if you can't easily articulate the story about why they're behaving a certain way or why that A/B test resulted in a certain performance outcome, then you may not have a complete understanding of what you're trying to solve for. And so, yeah, sometimes avoiding the data overload can be helpful in getting back to the story. One of the features that we've got in our product, which is one of those buried features that's easy to overlook but to me is really cool, is a spider chart. In experimentation, it's very easy to juice one metric. You could say, okay, well, I'm just going to shove a ton of traffic over to this particular feature, or I'm going to offer 50% off, and that's why I'm going to see a huge increase in add-to-cart. But, you know, you want to have a holistic picture of how it's affecting your overall business.
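The segmentation point Ryan makes can be shown with a minimal sketch. This is not any vendor's API; it assumes a hypothetical list of user records with a `persona` label and a `converted` flag, and illustrates how an overall metric can mask what the strategically important segment is actually doing:

```python
def conversion_rate(users, segment=None):
    """Conversion rate over a list of user dicts, optionally filtered
    to a single persona segment."""
    if segment is not None:
        users = [u for u in users if u["persona"] == segment]
    if not users:
        return 0.0
    return sum(u["converted"] for u in users) / len(users)

# Made-up data: the strategic persona converts well, but casual users
# (the majority) drag the blended number down to a middling 0.5.
users = [
    {"persona": "power_trader", "converted": 1},
    {"persona": "power_trader", "converted": 1},
    {"persona": "casual", "converted": 0},
    {"persona": "casual", "converted": 0},
    {"persona": "casual", "converted": 1},
    {"persona": "casual", "converted": 0},
]

overall = conversion_rate(users)                  # 0.5
power = conversion_rate(users, "power_trader")    # 1.0
casual = conversion_rate(users, "casual")         # 0.25
```

Reading only `overall` here would point the roadmap at the casual majority, which is exactly the trap the Coinbase example describes.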
And that spider chart is super helpful for that bigger picture. You can see that, okay, I've increased add-to-cart, but I've totally decreased transactions, or I've answered that tax question, but I've completely spun up a long call list for my call center, or something like that. So I just feel like, yeah, you want to make sure you're understanding the story at all times, because that's what helps you stay oriented about what you're trying to solve for. One thing I'd add on to that, something we do that is leveraging data but gets to the why: when we release a new feature, we basically use data to take a look at who used the feature, who didn't, what other trends we're seeing, and then target those users with a research study or a survey to follow up and understand what's going on. So rather than just setting metrics, look at who used it, who dropped off, and then retarget essentially those users with a study to understand more: okay, we are seeing things go up, but we can't explain it. Why might that be? You can leverage data to do better user research targeting to get at that why. I love it. And I think not only is defining the why so important, but telling the why, and telling it broadly. Those of us in product management know there's no I in product, and your success only depends on how well you can get everyone else on the team on board. For me, at least, we work with sales folks, we work with marketers at Disney, we work with a ton of creatives. So, for my next question for this group: how do you make sure teams are working in lockstep, and how do you tell the story, tell the why, to broad-based teams? I want to take a stab at that one, because this is a question that just fascinates me.
I think it speaks to two cool things that are happening in product today. One is that the world has changed. It used to be that you could have your marketers over here optimizing their acquisition funnels, doing really amazing marketing and creating tons of leads that would then be handed off to the product team. And then product would take them and go optimize for retention and adoption. And never the two sides would meet. That old-fashioned way is, thankfully, being obliterated. Companies that are really doing well today either have a single unified growth team, or at least have their product and their marketing teams collaborating. And that way they're able to, to your point, Nicole, really understand how to stay in lockstep together. If you don't do that collaboration, then you really are setting yourself up to not be one of the leading companies. There's so much great research about this today, and if anyone's interested, ping me and I'll share it with you. But I think that's just so important: if you can't create a feedback loop between product and marketing, if you're not powering those two levers of marketing-led growth and product-led growth, then you're really missing out on an opportunity to be a stellar business. So I would say look at how you could create that feedback loop, and then you're going to be in a much better position. Yeah, I worked at Dropbox prior to Amplitude, and we had various configurations of our growth and marketing teams over the time that I was there.
But one tradition that really helped with that, and that was tactical: we did have separate marketing, growth, and even product teams, but we had regular weekly self-serve business reviews where we looked at the end-to-end funnel. What are the marketing metrics for the traffic coming in, how are we seeing that convert to signups, and then even looking at trial conversion quality and even refunds from that business. So just really looking at the end to end, so that each team felt responsible for the entire end-to-end funnel, even though they were directly accountable for maybe one or two of the metrics within it. It's a ritual that gets everybody thinking: hey, we're all contributing to the same thing. So setting something like that up, even if you're not a single team, can be a simple first ritual to get going in that direction. It's amazing that we even say that out loud. It sounds so brilliant and so straightforward, and yet how many companies don't do that? It's hard for us, even. I'm not saying it's an easy thing to do, but my God, is it valuable. Yeah, even just aligning. A challenge that I've seen in the past, too, is having the same flavors of metrics but slightly different definitions for what they mean, which then, when it comes time to quarterly reviews, means you realize this person said we beat it by 2% and this one said we missed. So I think metric definition alignment is actually one of the most challenging parts of this, and so is aligning on what that set of metrics is that you're going to target and track across the funnel. So that's definitely the first exercise: is everybody even on the same page about what the metrics are, how they're actually defined, reporting on them in the same standardized ways, and then bringing the teams together to consistently look at that? Yeah, I agree with you.
It sounds straightforward. It's a lot of behind-the-scenes work, for sure. Well, it goes back to what I was saying earlier. In my world of experimentation, I don't know about you all, but you'd have web experimentation over here, and marketers would have these KPIs, and then you'd have back-end experimentation or product experiments with an entirely separate goal library, entirely different, to your point, around different segments and everything. And for everybody that's listening, that universe is gone. You don't have to be in that pain anymore. You can have a single goal library. You can have a single library of segments. You don't need to, to your point, Abby, put yourself in a position where not only are you dragging this chain around with you, but you're also empowering the naysayers. They're looking for, oh, well, you defined it as blue, but it's actually blueish green, and then they use that as a way to undermine your work. Again, you don't have to live in that world anymore, and I think that's what's so exciting about being a product manager today. Yeah. And for the new product managers out there, what I love about product is there are so many flavors of product. Today you can be a data PM, focused precisely on this, on these definitions. At bigger companies like Disney, we have teams focused on data governance and working on these best-in-class tools. So I think it's really an exciting time to work in product, and to work in data product as well. Switching gears: A/B testing. A/B testing is just such a fundamental tool, running A/B tests and multivariate tests to make decisions.
What are some best practices you all think product managers should follow in using A/B testing? I feel like I have to take that one. But first, hearing what Ryan and Abby said, it's just great that experimentation has reached this maturity now, where it's something that we all understand we have to do. So I just want to recognize that it's great to hear other awesome companies implicitly talking about experimentation. We can pat ourselves on the back that we've reached the stage where it's something as normal as SEO, right, that you should be thinking about experimentation. To answer your question, Nicole, I would say that the coolest thing that has happened in experimentation in the last few years is the intersection of feature management, feature flagging, and experimentation itself. Again, just an awesome time to be in product. You can work more easily and effectively with engineers today than ever before. They'll be creating variables and controlling how products are released, while at the same time you can be creating variations of those products more easily than ever. And you can even customize and control who gets to see what; Ryan makes that really good point about segmentation. And then Abby, you were talking about choosing these goals; this is so much easier today. And I just feel like if you're not already working closely with your engineers on how to build a better product, then look closer at what's happening in feature experimentation, because you'll just be blown away at what's possible today versus just a few years ago. So, yeah, I would say have a chat. First of all, are you doing feature flagging?
If so, you're already in great shape to start nurturing the power of experimentation, which is ultimately all about creating data that helps you feel like, you know what, this is right, we need to do more of this. That's the business we're in: making data-backed decisions, and experimentation is a fantastic, if not the best, way of coming to that conclusion. I do think it's awesome that experimentation is becoming more and more normalized. Maybe five or ten years ago, it was only Google, only Facebook, only very large, mature organizations where a very small change would have a noticeable lift and impact. And now, here at Sprig, every change that we run on our marketing site or onboarding flow, everything is A/B tested, and we're relatively early stage. So it is moving earlier and earlier, and I do encourage everybody out there, regardless of company size, to develop the habit, even if it takes longer than you'd like to get those results. It's still important to run the test, even if you're super, super confident, so you can measure the lift and at least know the positive impact. If you're 100% confident, let's at least quantify what that lift is. And so I think it's great to see the maturity of A/B testing. One actually surprising use case for us has been that a lot of our customers are integrating Sprig into their internal homegrown A/B testing infrastructure, or integrating it into tools like Kameleoon or Amplitude Experiment, and looking at the behavioral data to say, hey, we're driving the behavior that we're looking for, but they're also able, with our in-product survey tool, to actually measure that qualitative impact.
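The feature-flag-meets-experimentation idea Colin raises usually rests on one small mechanism: deterministic bucketing. This is a generic sketch, not any vendor's implementation; the function and experiment names are illustrative. Hashing the user id with the experiment name gives every user a stable variant across sessions, which is what lets a flag double as an experiment arm:

```python
import hashlib

def assign_variant(user_id: str, experiment: str,
                   variants=("control", "treatment")) -> str:
    """Deterministically bucket a user: the same user and experiment
    always hash to the same variant, so exposure is stable across
    sessions and devices without storing any assignment state."""
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % len(variants)
    return variants[bucket]

# Repeated calls agree, so the flag can both gate the rollout and
# define the experiment arms.
v1 = assign_variant("user-42", "onboarding-v2")
v2 = assign_variant("user-42", "onboarding-v2")
assert v1 == v2
```

Salting the hash with the experiment name means the same user can land in different buckets across different experiments, which avoids correlated assignments between tests.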
And so, you know, if you hide the cancel button for your subscription, of course the cancellations will go down, but you're going to have a lot of very frustrated, angry users, potentially. And so we see the gold standard as having the behavioral lift, seeing the analytical impact in a positive direction, but also the qualitative impact in a positive direction. If you can get the business metrics up and to the right, as well as the user sentiment up and to the right, we really consider that the gold standard for a successful experiment, and you can feel very confident that not only the short-term but also the long-term impacts on the business are positive. Yeah, that makes sense. Oh, sorry, Nicole. No, I was going to jump to the next question, Abby, but please chime in. Sure, sure. Something you mentioned, Ryan, was measuring the impact. I also think that A/B testing can be a good tool for disagree-and-commit, when teams aren't able to make decisions or have really different ideas on what they think the right path forward is. We've done exercises in the past around how to rethink pricing and packaging, and there are a hundred million opinions that people have around those types of decisions. A/B testing can help give an answer to that that's a little bit less biased than a debate back and forth between people. So that's definitely one thing I think about as well. And Colin, you mentioned feature flagging and experimentation, and I think more and more PMs are leveraging experimentation and taking on the mindset of the growth PM. But one pitfall I do see is people trying to experiment with every single change, even when there isn't the right level of data, or it's not the right kind of change to run an experiment on.
In that case, maybe feature flagging and a slow rollout to mitigate risk is actually the better path to take. So I'd also recommend, when you're thinking about running an experiment, asking: is a feature flag really what you're looking for? Whether you're trying to measure success metrics in a statistically sound way, versus manage risk and just have a slower rollout of something, those are questions you should ask yourself. Make sure you're not waiting eight weeks for statistically significant data on a change you were pretty confident in making anyway. That's just one thing I tactically think about. I mean, there's a whole practice here. There's the experimentation, and then there's the art of creating the hypothesis, right? So we all have to be respectful of the amount of hard work that goes into coming up with a great experiment. But yeah, absolutely agree. It's just a form of optimization, releasing a feature progressively, right? So I think, yeah, that's an excellent point. One thing that caught my attention this morning, it's so funny, I was like, okay, this is just great timing. Benjamin Franklin said, I didn't fail the test, I just found a hundred ways to do it wrong. And I'm raising this point here because even the best of the best, you know, you hire the greatest agencies in the whole world, and guess what? They're going to have a win rate of maybe 20%; 80% of the time they're going to get it wrong. And I just want everybody to be aware that there's this instinct to go fast and follow your gut, and that isn't going to pay off ultimately. You're going to get lucky a few times, but more often than not, you're going to get it wrong.
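The "statistically significant data" Abby mentions is usually checked with something like a two-proportion z-test. This is a rough sketch using the standard textbook formula, with made-up conversion numbers, just to show what quantifying a lift looks like:

```python
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """z statistic for the difference between two conversion rates,
    using the pooled standard error."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Hypothetical test: control converts 200/2000 (10%), treatment 260/2000 (13%).
z = two_proportion_z(conv_a=200, n_a=2000, conv_b=260, n_b=2000)

# |z| > 1.96 corresponds to p < 0.05 for a two-sided test.
significant = abs(z) > 1.96
```

In practice you'd reach for a library such as `statsmodels` rather than hand-rolling this, but the calculation itself is this small, which is part of why there's little excuse for shipping on gut feel when the traffic exists to run the test.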
And so it's really, to your point, Abby: test where you can test when you've got a great experiment, and then control the release very, very carefully, because you don't have to live the old ways anymore. You have these tools now. And with session recording, you can see how users performed inside of that experience. And, I love this part, you can bring the CEO down and say, look at your customer, look at what they did here. And guess what? That'll just blow the doors open for you to get the resources you need to go do the things you want to do. So yeah, I'm just a huge optimist. I think it's an exciting time to be in the space, so I encourage everybody to get started. Yeah. And to Ryan's point before, as the wise person once said, if you can't measure it, you can't manage it. So the key is having these tools in your tool belt, and then the world is your oyster. But I do want to acknowledge that hard quantitative data is really just one side of the equation, one of those different tools in the tool belt. Qualitative data, what consumers say and what they feel, is also so key to having a holistic product point of view. I'd love to hear from folks on the panel: how do you use quantitative data in tandem with qualitative data to form a cohesive, holistic point of view about your customers? Yeah, happy to kick off here. Going back to my experience at Weebly, running the A/B tests, looking at the analytical data, the behavioral data, the revenue data, we always looked at those data sets to start to help us understand where to dig in. And that's where the qualitative data fit in: when we started to notice a trend that was perhaps surprising or unexpected or concerning, or maybe something that really warranted additional investigation.
And so, you know, that's why I started the company, Sprig. Earlier this year, we launched an Amplitude integration. How it works is that you can connect Sprig and get your in-product survey and session replay data right in Amplitude. And what's interesting is that you can follow that same job to be done that I just ran through in your Amplitude dashboard. You can see perhaps an increase in signups or some interesting trends in your behavioral data, and now you can have the Sprig survey data being collected in that exact moment, attached to that exact same event, or session replay recordings, and actually answer the why. And so we've been making an incredible effort this year to integrate with other leading product insights platforms to, like I mentioned earlier in the conversation, tie all the data together very easily, but also natively, so product teams can really understand not only what the users are doing but, on the same screen, also dig in and understand that why question. Yeah, something we've been talking about there is that it's kind of like going from the macro to the micro and back. So whether it's watching a session replay and then understanding whether that's representative of a larger trend and how significant it should be, or looking at a larger trend and trying to dig in to understand it. Colin, you mentioned showing the CEO a recording of something that somebody is doing to actually contextualize what you're seeing. So really using both qual and quant together to either validate trends, explain the why, or, when you're seeing something in the qualitative data, understand whether it actually is as big of an issue or more of a one-off. So using them back and forth, going from the micro to the macro, is something that we talk about internally.
Yeah, and just to add on to that: again, there's so much great research out today, so if anyone's interested, let me know and I'll share it with you. But as much as we're excited about the adoption of A/B testing, it's still hard. And if you look at the programs that get it right, or the companies that get it right, they do one thing: they tell stories. They don't just tell stats. And so I think that's where the qualitative data really comes in. It's so helpful. The executives are not going to be interested in your 20 slides about, you know, the wonkiness of your product and the quantitative features that are driving X, Y, and Z rates. They want to hear a story. And I think the qualitative data is just an excellent way to help you contextualize it and tell that story. So go for stories sometimes over stats. Couldn't agree more. I feel like empathy is such a big part of being a product owner, empathizing with our customer. And these are all tools to help us, whether it's the data to get the scale or the individual stories to understand at a human level how this is impacting people; it all comes together. Moving to a different agenda topic: the future. It is here now with AI. And I can't believe our time is running short, but last question for this panel: how is AI transforming how PMs work with data today, and where do you see it going in the future? I'll just quickly take that one. I just want to say to anybody listening: AI is a tool. So just be mindful of how to use the tool. Don't think it does anything other than what you want it to do as a tool. So I think taking a step back and appreciating it for its features, for its functionality, is a great perspective. At Kameleoon, we have an AI, like I said earlier, which is designed to give a score to every single visitor based on their propensity to convert on a KPI you've set.
So if you, as a product manager, are looking to use AI to understand the likelihood that a user is going to engage in a particular behavior, then great, our AI will give you that score. And then, when it comes to what you decide to do with that score, we're big believers that the creativity of what you should do with the data is still best left with humans. But AI is great at bringing to your attention opportunities hidden in those large, robust sets of data. I would say at Amplitude, the way we're thinking about it in the short term is really about how AI can accelerate time to value around product analytics. So how can we specifically help you cut out the manual task of digging through the data, pull out what you should care about when you look at a chart, or even proactively share with you the trends and patterns emerging in your data? We already have features around root cause analysis and anomaly detection, but I think the recent advancements are going to take that to the next level, where we can really proactively share with you what it is you should be caring about. I think another interesting application is using data to tell people what questions they should even be asking about their data, and helping them train themselves to be better analysts. So those are a couple of the things we're exploring in the short term. But in the long term, every company, to your point, Colin, is sitting on this behavioral data, and how can we help other companies become the Netflixes of the world and use that to really optimize their product experiences, whether that's onboarding or eventually even the core product itself? That's definitely where we see things going in the long run.
But I think how that interfaces with the decisions humans are making versus the technology remains to be seen; it's definitely the future we're exploring and figuring out how we're a part of now. Yeah. The first person to join me at Sprig was actually a head of AI. So before we even hired an engineer, it was a very experienced senior data scientist. And so we've been working on AI since day one of the company; it's always been core to the premise, knowing that we were going to enable our customers to collect large volumes of data, millions of survey responses per month, actually. At the scale the company was going to operate, we knew it would quickly become a large data challenge to understand those volumes of data. And, you know, we built all our in-house models with open source tooling. We had a human in the loop, with expert researchers reviewing all the output and tuning the models. It was very, very complex and actually a large endeavor, particularly for a company of our stage. But with the advancements, with OpenAI leading the way, Bard quickly emerging as well, and a lot of other players now, the difficulty of implementing AI in a production-grade system, even a system at scale, has gone from probably a 10 out of 10 to around a 2 out of 10. And I think the only remaining last mile is how you can actually QA, test, and ensure reproducibility of the AI output. And once that's solved, I think it's going to drop down to a 1 out of 10 in terms of difficulty. And so I think every company and product is expected to have AI integrated at some point. I think GitHub coining the term Copilot is probably the best descriptor I can think of: as humans, we're looking for a copilot that gives us control but also augments our abilities and the work that we're doing, to ensure we're able to find time to value faster.
And so at Sprig, we actually had to make the decision to switch over to GPT-4, and now we're rolling it out at a very large scale. And I think for anyone who is working with and trying to understand large data sets, having AI integrated natively into the tool, whether it's Chameleon or Amplitude or Sprig, is going to be really critical, because as a single person, or even a team of people, you will be unable to analyze all the data that you're collecting and spot those patterns and anomalies. It really is a needle in a haystack, and AI can actually find that needle in a very, very large data set. And so later this month we're making a big announcement around our vision for AI. And I do expect that for any company in the product insights category, but also for all the product managers working on a product, thinking about how AI can fit into your own product and be a copilot for your end users is going to be paramount for success in the coming years. Awesome. Well, Ryan, Abby, Colin, thanks so much for being with us here, and audience, thank you for being with us here at ProductCon today. Hope you enjoy the rest of your conference, and we'll see you soon. Thank you very much. Thanks, everyone.