I've found certain aspects of Lean Startup to be hard in practice. I'm hoping to keep this a fairly interactive discussion. Don't hesitate to just raise your hand at any point; stop me if you have any questions. First, really quickly about me. My nickname is V; my full name is Arvind Krishna Swamy. I'm an entrepreneur and a tech executive. At this point, I'm a senior product manager with Intuit, part of a discovery team. Discovery teams at Intuit are somewhat similar to Google X: a strategic innovation unit where we use Lean Startup and design thinking to go out and uncover new business opportunities that Intuit might choose to pursue further. In the past, I've also been an entrepreneur, part of multiple startups. So I've seen Lean both in the startup world and in the big-company world. Oops. Can you hear now? Is that better? I have a deeper voice now, I think. So yeah, I'll just do a quick recap. I'm an entrepreneur and a tech executive. My current role is with Intuit, as part of a discovery team. The way discovery teams at Intuit are set up, it's a strategic innovation team, like Google X, and our charter is to go out and discover unmet consumer needs, do this through rapid experimentation, and then get it to a stage where it might grow into a viable new business opportunity for Intuit. In the past, I've also been part of multiple startups, both here in India and in the Bay Area: one that IPO'd on the NASDAQ and another where we had a great exit. So I've seen experimentation and different ways of building products in both the startup world and the big company, and I'm hoping to share some perspectives that bring both together. So this is the broad charter: why is Lean Startup hard in practice? I started practicing Lean Startup soon after I read the book.
It's been a few years since earlier in my career, when we went through a more traditional way of building products. And I used to always wonder: why did we build these things and spend so much time and money building things that no one cared about? Lean Startup has drastically changed how I think about building products. I've come a long way with it, but some things are still hard. The objective of this talk is that I'm still learning too, and my hope is that this opens up conversations we can all have about challenges we face as these systems continue to evolve. When I was part of Levitam, the previous startup I was a part of, one of the blog posts I wrote about how we applied Lean Startup ended up on page one of Hacker News. I had 20 to 30 thousand people visiting in one day to look at the blog post and contacting me. And it was one of those frustrating moments: we had applied it, we got to a point where we were invalidated, and we were frustrated, but we could still be happy that people were learning from things we'd gone through. Here I'm hoping to bring some of those perspectives in. Just from my personal journey, I have some perspectives on where Lean Startup is at. Here's a slide that many of you will have seen: the classic hype cycle of any new technology. Technology triggers go through a period of inflated expectations, with significant hype around them, then a trough of disillusionment where certain expectations weren't met, and then arguably a slope of enlightenment and a plateau of productivity over time. And I personally believe that we've been going through a certain hype cycle with Lean Startup. There's a certain aspect of it which I think is very scientific.
But along with that, in my personal opinion, there's a lot of pseudoscience that has also cropped up around Lean Startup. Nothing against consultants and coaches, but in the ecosystem overall, we've seen a lot of consultants and coaches around Lean emerge. A number of them are great, but a number of them who apply it, in my opinion, as a process versus a set of principles take away from what the core of Lean Startup is, and I think it affects our ability to apply some things. I do want to quickly ask the audience: where do you think Lean Startup is at in this hype cycle? I've marked out four areas: A, from the trigger to the peak; B, the trough of disillusionment; C, the slope of enlightenment; and D, the plateau of productivity. So I'm going to ask for a quick show of hands from those of you who feel we're in phase A. Phase A. Okay, I see about six hands, great. How many feel we're in phase B? Okay, I see about nine. How many think we're in phase C? Okay, about seven hands. And at phase D? Okay, no hands. Great, excellent. I just wanted to get your thoughts; obviously we all have different views about where we are. My personal opinion is that we're somewhere in this territory as well. I feel we've gone through a cycle where, a few years back, Lean was seen as a silver bullet for how we would go about doing things. A number of VCs took an interest, startups have been trying to apply it, big companies have been trying to apply it, and I think we're now at a point where we're all talking about the challenges we have with it. Part of this talk is to open up some of those conversations here.
Now, I'm not here to be too critical about Lean, because ultimately every entrepreneur, or intrapreneur in a big company, ends up trying to juggle a lot of things. We're expected to think big picture, but we've got to obsess over all the little details of the product. Experiments need to align with a big top-down strategy, which makes sense for a VC or for an executive sponsor in a big company, while still thinking bottom-up with respect to experiments. We want to be bold and fearless, but still have the humility to know when to step aside, whether alone or in a big company. And most product managers and entrepreneurs you talk with would tell you there's a situational aspect, where you likely move between each of these: having customer insights versus having to be data-backed; having a bias towards action versus a bias towards reflection, because action in itself is blind but reflection alone doesn't move you forward, so you need a mix of both; focusing on learning, which is arguably more valuable, versus focusing on validation and on shipping, because both are important, since endless experimentation, or experimenting for the sake of experimenting, doesn't really take you anywhere. We need to have deep empathy for the customers we're building products for and understand what their lives are like, but we also have to be stubborn, since you just can't make everyone happy. We need to know what not to do as much as what to do.
Often, being clearer about what not to do is almost more helpful, since it makes some decisions simpler. And we need dogged persistence, as an entrepreneur or an intrapreneur, to work through the various challenges we face; but on the other hand, we also talk about knowing when to pivot, and about having a healthy detachment from, for instance, a solution while keeping a healthy attachment to the problem we're going after. The reason I'm laying this out is that I think it's still hard, and this is why I struggle, as do many people I work with; I'm part of a Lean Startup circle and a few groups that apply experimentation. It's so situational that I think it's important to lay this out and say there's no easy answer necessarily. This next slide is more for people who are getting into Lean Startup, at an early stage of figuring out experimentation. At an early stage, experiment design is still hard: outlining what your vision is, developing a business model canvas, having a rigorous approach to identifying your high-risk assumptions, identifying your leap-of-faith assumption, and then crafting an experiment with a falsifiable hypothesis around it takes a fair amount of discipline and some experience. Personally, when I look back at experiments I ran maybe three years back, I feel embarrassed, because I could have done things so differently, and I'll share some examples as we go along. Many of you might have seen this, as it was shared on Twitter. I don't know if you can read it; I realize the font is a little small, but this is an experiment someone ran for the minimum viable pizza. So: hypothesis, can we sell pizza for profit? Time to produce a pizza: four minutes. Cost: as cheap as possible, right? And the learning at the end of this, after having delivered a fairly burnt pizza, is that there's no demand for pizza.
This has been shared in enough examples of the challenges with taking a scrappy approach, or an approach where you go too minimal with an experiment. Now, part of my question for you is: if you were to think about this, how would you rework this experiment, an experiment for minimum viable pizza? Get feedback from customers, okay? Okay, okay, sure. So part of the hypothesis they have here is low-cost pizza, really fast, in four minutes. Yeah, yeah, so what is it? Yeah, it's a burnt pizza, yeah. Right, so in this example, what could it be? Minimum lovable pizza? Yes, yeah. Any other thoughts? But you see the point I'm going for here, right? Part of it is that sometimes we run experiments and we say that we're invalidated. Minimum marketable, okay, which can be bought but doesn't have to be eaten? Okay, great, good points. One thing I want to point out, and part of the reason I wanted to share this example, is that I've seen people, myself included, in situations where we have run an experiment, we have the result, but we're left wondering: was I invalidated, was I validated? One thing with Lean Startup is we talk about how you need to ensure that your experiment is falsifiable. What we mean by this is: let's say you run a landing page, like the example he spoke about, and we say the criterion is that we're running Google ads, people look at them, click through, and come in. We're targeting keywords like Domino's and Pizza Hut, and our success criterion is that 10 out of 100 people who visit will sign up for pizza. This typically sounds good, right? But often, one of the challenges is that it's not always easily falsifiable. What I mean is: if you had said 10 out of 100 is good, what if nine came through? What would you do? That's one way. So one part that I think is useful to be clear about is what that minimum number is.
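To make that point concrete, here is a minimal sketch, in Python, of pre-registering a falsifiable success criterion before the experiment runs, so that a result like nine signups out of 100 has an unambiguous interpretation. All the names and numbers here are hypothetical, chosen just to mirror the pizza example, not anything from the talk's slides.

```python
# A minimal sketch (hypothetical numbers) of pre-registering a falsifiable
# success criterion before running a landing-page experiment.

from dataclasses import dataclass

@dataclass
class Experiment:
    hypothesis: str          # what we are trying to learn
    min_conversion: float    # pre-registered pass threshold
    min_visitors: int        # don't decide on too small a sample

    def verdict(self, visitors: int, signups: int) -> str:
        if visitors < self.min_visitors:
            return "inconclusive: sample too small, keep running"
        rate = signups / visitors
        return "validated" if rate >= self.min_conversion else "invalidated"

pizza = Experiment(
    hypothesis="There is demand for cheap, quick pizza",
    min_conversion=0.10,   # 10 of 100 must sign up -- decided *before* the test
    min_visitors=100,
)
print(pizza.verdict(visitors=100, signups=9))   # prints "invalidated"
```

The design choice is that the threshold lives in the experiment definition, written down before any traffic arrives, so "what if nine came through?" is answered mechanically rather than renegotiated after the fact.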
And tied to that: what are you really looking to learn? Ultimately, every experiment is there to help you learn, to help you answer a certain question. So if you're... sorry, go ahead. Correct. So, to go back to the experiment, it's important to first focus on what we are trying to learn. If all we're trying to see is whether there's demand for cheap, quick pizza, then all we're trying to test is demand. Ignore the fact that the pizza is burnt; ignore the fact that no one's going to eat it. Set that aside for now and focus on what your learning really is. Part of the reason I'm trying to address this is a bit of a misconception. It's easy to look at an example like this and think, oh, experimentation doesn't really work; the only way is to build the whole pizza. It doesn't necessarily mean that. It's okay to do something that's very scrappy and minimal and burnt, if all you were looking to see is: are people interested in cheap, quick pizza? Because your next experiment may be to say, okay, now how do we deliver something better that still fits here? And you run another experiment. An example with the website is that your initial website might be really scrappy and not that great, but you may have a subsequent experiment which says: by improving the design of the site, my conversion rate would improve by 14%. And you can do that downstream. Take mobile apps, for instance. One of the common challenges is that the fidelity bar Apple has set is so high that people expect a fantastic experience, which makes it hard if you go with a hybrid app built with PhoneGap or one of those HTML5-based approaches, since people aren't happy with the experience. So often the case a lot of people make is that you have no option but to go big bang: you need a big App Store launch, you make sure you end up in one of the top lists, and that's the best way to get discoverability and distribution.
But on the contrary, if you come back to what you're really trying to learn, you don't need to go out and do that. It's important to say: look, this is just an experiment, this is what I'm looking to learn, and then start with that. Yeah, great question. So definitely, ensuring that your falsifiable criterion is written down is really important. Success could mean 15%, 20%, 30%, right? You can have a disagreement on that. But really, the business decision, the entrepreneurial decision, you're trying to make is: should I continue down this path at all? And for that you want to say: look, if I don't have one paid customer by the end of this month, I'm not going to move forward. You've got to be very, very clear about that. That is the falsifiable side of it: ensuring that if this doesn't happen, then my fundamental hypothesis about this business or this opportunity is false. So part of this very fundamental starting-point challenge with Lean Startup is that some of it is not exactly straightforward, is what I've seen. It took me a while to tune how I think about it, and I've found the best way is to talk to more people and exchange notes to understand it. I have another example here. The font may not be great, but this is an experiment one of the teams I was working with earlier tried to run. The opportunity they were looking at is digital marketers who are trying to maintain a presence on social media. Many of them go out and try to find a lot of digital content and then set it up with tools like Buffer and others so that they keep publishing it; they keep looking for content they can publish. So the idea here is: if we built something that would auto-tweet on their behalf and find interesting content for them, they would be interested in it.
This is a concierge experiment, which, for those of you who are familiar, is one where there's no technology behind it; someone does it manually, pretending there is technology. So this is a concierge experiment someone attempted to run: they'll get a bunch of people to sign up, say they will auto-tweet on their behalf, and claim they've got algorithms that will curate content for you. But in reality there's no algorithm, just a few people doing it manually behind the scenes. They go through questions with people to get them to sign up, run the concierge service for two weeks, and at the end of two weeks run a disappointment survey: look, if I stopped the service now, how disappointed would you be? It's a fairly typical approach that many people would take to running an experiment like this. Now if you look at this experiment, it has one of the same challenges I mentioned. Overall it looks like a fair approach: it's concierge, you're not putting in much effort, someone does it manually, you're doing things that don't scale to start with and worrying about scalability later. But it's not clear what success is. It's not clear from this experiment how you would decide what counts as validation and what doesn't. The other thing to think about is: is this really the highest-risk aspect? One of the important things before you start experimenting is to think about what your leap-of-faith assumption is, the highest-risk assumption you're making in whatever you're going after. It's not always easy to identify what that is, and at times the other challenge I've had with Lean Startup is that to test your high-risk assumption, you sometimes need to either build out a little more or validate other assumptions first.
The most common one is often a channel hypothesis, which you need to validate before you can go on to test a problem hypothesis or look at other aspects. The second challenge I've had is with building a network-effects platform. You can find ways of working around the chicken-and-egg problem, often by either subsidizing one side or thinking of one side as being the user, if you will. But what I've found is that with those platforms, you have to seed them, or get to a certain stage of community involvement. Without that, I've found it a little hard to run experiments. I would love to hear if some of you have run experiments with platform products: how you've gone about it, how you've chosen which categories to go after, and how you've thought about it. I've found this part a little hard. The next one is around managing control variables across multiple experiment batches. Often you would run multiple experiments one after another, aiming them at different groups to test different things. Now, I'd like to think that Lean Startup makes us more scientific and more intelligent in how we do things. But experiments in science are run in controlled environments, where things are carefully controlled and managed. How do we manage that in the real world? Any of you here who are running experiments have thoughts on this? As an entrepreneur running a landing-page experiment, there's seasonality, there are so many other things that get involved. Just curious: how do you usually approach this? Any thoughts? Cool. It depends, though, right? Yeah. Correct, around what? Okay. Shorten the duration, sure. Sorry, yeah. Got it. So you're saying? Correct, yeah. Right, so let's say you go to three different campuses.
Now, one of them is a ladies' college, another is a PU college, and another is an IIT. And you end up having very different results from the three. How do you interpret them? Yeah. Correct, and get back, right? So at least for me, here's a struggle I personally had. This is something I've done at Intuit; I've blanked out some parts. Within a small-business unit, this is an experiment we were running to look at people's interest in QuickBooks reports. One thing we tried was to get a little better at identifying what we thought the control variables were. One example was the personas, the users who were involved. Another was the incentive we would offer them, which is another thing that's going to vary. I can't say this helped a whole lot, but we tried to identify what some of these variables are across experiments and tried to manage them like you would in a scientific experiment, as control variables. Because otherwise, without that, we end up at the same point: are we doing an apples-to-apples comparison? Are we really validated or invalidated here? So identifying control variables across experiment batches is something I've found helpful. Here's another one which I think both someone inside a startup and someone at a large company faces: if you're running an experiment, you want your sample size to be big enough to hold some statistical significance. Different people have opinions on this; I'll walk through a couple of slides and talk about this point a little more and why it's important. For those who have seen the old dinosaur slides, this might be familiar: you can go out, run a landing-page experiment, and eight out of 100 visitors sign up. Now, is eight out of 100 good enough? Do you have benchmarks for the sort of ads you're running, for the things you're doing?
Do you have organizational benchmarks on email open rates that are dependable for your industry? If you're mailing a bunch of accountants, do you know what a typical open rate for those emails would be? Can you benchmark the answer? If not, how do you decide? And next: is 100 visitors a large enough sample that you would claim validation and go on to a broader sample from there? One thing I want to touch on is that, ultimately, there are many different approaches to how you design your MVP. These are some of them; people use other approaches too. All of them have different levels of fidelity, and obviously, depending on what you want to learn, you may choose something with lower fidelity and then move from there to something with higher fidelity. For instance, one well-known one is, oops, okay, Impostor Judo: if you're trying to build a mobile app, see if there's a competing app out there, show it to some potential target users who don't know the app, see what they think about it, and try to learn. So learn from what your competitors or other people in the market already have. That is literally zero effort, in certain ways, but it's at a lower level of fidelity. Now, as you move into an experiment at a higher level of fidelity, for instance a concierge where you're manually offering a certain service, you're still at the point where you need to decide: okay, you've run a concierge experiment where 1,000 people have come through, and here is what you're seeing. With entrepreneurs, marketing budgets may be limited; you can only test with so many users. In a big company, companies can be a little protective about their existing user base and existing product lines, and sensitive about it.
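Coming back to the earlier question of whether eight signups out of 100 visitors is good enough: one way to reason about small samples is to put a confidence interval around the conversion rate before comparing it to a benchmark. A minimal sketch follows; the Wilson score interval is a standard formula, but the 5% industry benchmark is a hypothetical number chosen purely for illustration.

```python
# Hypothetical sketch: a 95% Wilson confidence interval for a landing-page
# conversion rate, to judge whether 8 signups out of 100 visitors is
# distinguishable from, say, a 5% industry benchmark.

import math

def wilson_interval(successes: int, n: int, z: float = 1.96):
    """Wilson score interval for a binomial proportion (z=1.96 ~ 95%)."""
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    margin = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return center - margin, center + margin

lo, hi = wilson_interval(8, 100)
print(f"{lo:.3f} .. {hi:.3f}")  # prints "0.041 .. 0.150"
```

With only 100 visitors the interval runs from roughly 4% to 15%, so a 5% benchmark sits inside it: eight out of 100 alone neither beats nor misses the benchmark convincingly, which is exactly the "is this enough to seek broader validation?" question from the slide.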
So even there, your ability to get to a large user base at the start may be limited, and you may still need to work through legal and privacy to ensure you can test what you want to. There's a bit of a balance; I'm sure entrepreneurs and intrapreneurs can relate to this. So part of it is this tough question of what constitutes validation and when you decide to double down. Either as a startup, or if you're running an incubator inside a big company that's practicing Lean Startup, how do you decide when to double down on an idea that one of your teams has made enough progress on? I've still struggled a little bit with this; I don't think the answers are completely clear yet, and I'd love to get inputs. The next thing, to me, is misunderstanding complex versus complicated systems. Any of you here who attended last year's Agile India? Anyone? Okay, a handful. So there was this great session on the Cynefin framework: Dave Snowden gave a keynote last year, and he spoke about thinking of systems and how they can be classified, depending on the state they're in, as simple, complicated, complex, or chaotic. I won't go into a whole lot of detail. The simplest way to think about it is that in a simple system, the relationship between cause and effect is obvious to all. If, for instance, all you're doing is running the reception desk right here at Agile India, you could work off best practices, because everyone knows what has to be done; it's a relatively simple system, and what has to be done is fairly obvious to all. From here, you go on to more complicated systems, where cause and effect requires analysis or investigation. And then there's the space he calls complex, where cause and effect can only be perceived in retrospect.
And finally, chaotic systems, where there's no obvious relationship, and the only way you can go about things is acting, sensing, and responding. I'll focus here just a little. To be clear, there's no direct relationship between Cynefin and Lean Startup; the objective of Cynefin is very different. But I've found it useful to think about it in the context of Lean Startup, especially when crafting experiments, to ask what we're really trying to learn. In certain cases, what we're looking to learn may be in a space that is not complex; it may be complicated but not complex. And if it is complicated but not complex, it may not need experimentation. The other aspect I've found is for people who come from more of a business-school background, or more of a cause-and-effect analysis approach to asking why we're doing what we're doing, which is an important question: there are a lot of situations, when you're going into a new space, introducing a new product, or trying to drastically change things, where it's just hard to predict how users will respond. You may have some theories, but sometimes the only way is to run lightweight experiments, a number of rapid experiments, and test it out. Where this gets a little tricky: when something is complicated but not complex, I feel you can take the approach of crafting experiments with a clear hypothesis about how a user will behave and why they will behave that way; you can go out and test whether the user behaves that way, then later follow up with customer development interviews to understand why. But with complex systems, I think it's harder. It may not be obvious why someone will behave that way.
We may just not know at all; we just have to try a number of experiments, and maybe later we will understand. I thought this way of thinking was useful for me. I do want to say, though, that one approach some entrepreneurs take is to try a number of different things and see what sticks, and it works for a number of entrepreneurs. I'm not necessarily saying that this is the approach I'm endorsing. Maybe there's still a certain deliberate approach to planning experiments, but with greater flexibility to try a lot of things in parallel and then see what sticks; with a certain method to it, but not necessarily a mode where you experiment and only after validation go on to try something else. I've found it useful to try more things in parallel. This is just my learning and perspective on it. Now, one of the other common things people talk about is the risk of getting stuck at a local maximum and missing a bigger opportunity when you take a Lean Startup approach. Here's at least one thing we do at Intuit; these tools are available on the Intuit Labs website as a free download, in case you want to look at them. It's an approach of going broad before you go narrow. Part of the risk is this: in Lean Startup, in my opinion, you start off with a vision, you've identified a problem, you have an idea for a solution, and you start to iterate towards it. Often as entrepreneurs, and I think it's just human bias, once you go down a certain path with a solution, the willingness to step back and try other things is a little lower. And once you start experimenting, you're so down in the weeds that it's a little hard to step away.
So one little exercise we go through, from Intuit Labs, is to go broad and try to develop at least seven solutions, no matter how wacky they are, for the problem we're trying to go after. And after we've identified those seven, you put them down on a two-by-two. The axes of the two-by-two depend on what you're looking at, but you use it to decide which of these solutions you would then pursue further. The next one is about effectuation. Anybody here familiar with effectuation? Okay, I'll quickly talk about this. There's a school of thought led by Professor Saras Sarasvathy. She has written about it after having studied a number of entrepreneurs and how they think about building startups, and she talks about the difference between what she calls causal reasoning and effectual reasoning. Here's a slide that compares the two. This is important, and I think it's also the source of one line of criticism of Lean Startup, where a lot of people will tell you that many of the success stories of Lean Startup came after those companies had succeeded. It's easy to look at some of these things after the fact and say, oh yeah, this is a success because of that. But who are the people succeeding while applying it? You can make the argument on the other side, too, about how many companies have failed faster by applying Lean Startup and therefore saved millions and millions of dollars by not going down a path they would otherwise have gone down. But the important point here is causal reasoning, which is managerial thinking: here's our vision, here's where we want to get to, how do we get there, and here are all the different steps towards getting there; versus entrepreneurial thinking, which is: here are all the things that I have today, here are the resources at my disposal today.
Using these things at my disposal, what can I do? Part of the effectuation thesis is that most entrepreneurs think by starting with what they have: what are the resources at hand? Who are the people I can leverage? What connections do I have? How can I build something from here to go somewhere? They may have an idea of where they want to go, but it's not necessarily completely fixed. Causal reasoning, in large part, attempts to predict the future; it attempts to predict what will happen. Effectual reasoning pretty much says: you're not going to be able to predict the future. What do you know? What do you have? How can you start with that as an entrepreneur, and then go from there, with your given means, towards many possible imagined ends? Some may work out, some may not, and you figure out how you move towards that. Where I think I have struggled a little with this, and where I see some people struggle, is that inherently, certain aspects of what we do are causal, but there's an effectual side to how we all think as well. Here in India, you'll find a lot of small-town entrepreneurs who, I would argue, think about what they have and then try to figure out what they can do; they may not necessarily start with a big, broad vision. The interesting thing, though, is that it doesn't just apply to smaller entrepreneurs and mom-and-pop shops. Her studies included Steve Jobs; they included very successful Valley entrepreneurs that she surveyed, and she found that this is the approach they take. Now, the hard part with effectuation, or any of these frameworks, is that they're all tools. We can make cases that one may benefit you more than the other; I think of these as tools, and it's good to think about their relevance to what we're trying to apply them to.
But just in our minds as entrepreneurs, I feel this is one area where I've found a bit of a disconnect, and I'm at a point now where I see that Lean Startup is being applied more at big companies than at startups. I've tried to ask myself why. Is it because entrepreneurs are more effectual in how they think? Is it because business leaders at big companies are more causal in how they think? Is that a fundamental difference? MBA graduates who are tuned towards managing risk, managing a business in peacetime, versus entrepreneurs who are natural wartime leaders, out to build something from scratch and find their way through different things. So I think that's the broad list. Again, to be clear, Lean Startup has made a huge difference to how I think about product and how I think about building it. But I've also struggled with some of this. I'm still learning; I think we all are. I think we're more scientific about how we build products today than we ever were before. These are areas where I'm hoping to learn more from all of you here, and hopefully we can trade notes, because part of the objective of coming to events like this is to open up these conversations on some of these topics. So I think that's all I had. We can open it up to questions. And here are my coordinates: if any of you need help with a new idea, or are looking for speaker sessions or talks at your company, I'm happy to come by, help exchange ideas, and see what we can do to have more conversations about Lean Startup locally here in our ecosystems. Yeah. Yeah, yeah, absolutely. So Intuit is an 8,000-person company that thinks of itself as an 8,000-person startup. If you've read the Lean Startup book, Intuit is featured in one of the chapters with the SnapTax example. Intuit uses Lean Startup actively within the company to run experiments of various sizes in different areas of the company.
We have definitely seen people come forward. Typically, a discovery team that gets created is a combination of a product manager, an engineer, and an interaction designer. And great ideas can come from anywhere. Part of these systems of enabling people to experiment in their own time is that it also lets you get away from managers in the company playing Caesar, right? Because they may not be the best placed to understand what is involved. Instead, you empower the employees, give them the space and a framework to think about this and run experiments, and help them decide on what criteria they would say they've succeeded at one step and can go from that step to the next. So for instance, the way Intuit thinks about it is: your first step is to find one customer who wants what you're trying to build. Find one, and make that one customer happy. Then track what we call love metrics, toward a minimal lovable product, for that one customer. Once you've done it for one, you find a cohort of users that you want to go to from there and take it to a cohort. And from there, problem-solution fit, and then finally product-market fit. A number of ideas and innovations have definitely come through at Intuit very much this way. Yeah, yeah. Yeah, right, right, right, yeah. So the top-down strategy approach is definitely something most companies still take, and I think Intuit approaches some things this way as well. And ultimately, different things succeed in different organizations, right? Now, given the question, what approach would you maybe take? To me, it's thinking about the plan and trying to understand what they are most worried about: what is the most high-risk hypothesis here? Are they worried about monetization? Are they worried about whether the problem really exists?
Are they worried about having a unique proposition, about having a durable advantage and building a moat? Understand what they think is the most high-risk assumption. If it's an organization that's not yet familiar with lean and experimentation, potentially try to see if you can run a skunkworks team that would take the most high-risk hypothesis, run some experiments to test it, and then come back and share the learnings. Because ultimately, the biggest part here is to see it as a tool that enables learning, rapid learning. So more than validation, which is just showing that, oh, it's a good idea or a bad idea, I think the learning is the bigger, more important objective. And showing that this is a framework that can enable that learning quickly could, I think, be persuasive. Yeah, yeah. Yeah. Yeah. I think maybe we can discuss that; let's fix a time offline. It has succeeded at various companies, too. But if we do believe, yeah, absolutely, we'll talk offline and maybe we can discuss some ideas. Any other questions? Sure. So as discovery teams at Intuit, we initially go broad, and we start off by trying to come up with the biggest, broadest ideas that we can. Then we go narrow. When we go narrow, we do look at it through the lens of what Intuit's mission is overall and how the idea aligns with that mission. Those are some of the criteria on which we decide which one we will go after. So, yeah. Once they get to a certain stage, they will need to, that's right. On technology: typically with experimentation, most often the objective is to test customer behavior. If the customer does this, then we will do this. So typically with tech exploration, you would still want to start with a customer benefit that you want to test, identify that there is a customer benefit, and validate that need.
And then from there, come back and say, okay, here is a technology exploration that we want to do in order to deliver this benefit in this way. Does it make sense? Yeah, yeah, yeah. Yeah. Let's say your idea goes to the market and then you have competition. So, yeah, yeah. So I think the question was: look, you spend all this time experimenting, and then by the time you take your idea to the market, someone else comes out with it first. Yeah. Yeah. So I think there are a few things. Remember, one of the most important parts is identifying what your most high-risk assumption or hypothesis is. In this case, most likely your highest risk was that someone would copycat you really easily, right? For instance, take the example of Groupon, which had a great launch and really got a lot of people excited, but then people realized that, one, they weren't necessarily making money, the economics didn't make sense, and two, enough copycats came along. So then the question really is: sure, you have a problem that you've found, you have a solution that you've been able to deliver, but do you have an enduring advantage? Do you have a durable advantage? What's your secret? I think Peter Thiel, in his book Zero to One, asks what people's secret is, and identifying that secret becomes really important. So it's important to ensure that you're going after a large problem, one that you know you can solve, but also one where you would have an advantage of some sort over someone else who comes in, either now or in the future. That's when you want to think about what those defensibilities are: are there patents, are there network effects that you could create? Those start to become important. In general, I would say that the era of stealth-mode startups is gone, right?
Just being able to talk about your idea, and being comfortable talking about it, will save you a ton of time, get you much more input, and get you to a stage where you learn a lot sooner. So I would definitely not worry too much about competitors at the early stage, and instead try to focus on the problem you're trying to solve. Okay, thank you everyone. I'm around at lunch; if you have any other questions, let's talk. Thank you.