 All right. I think we're live. Hey, Ash, thanks for taking the time today and joining this Hangout. It's a pleasure to, again, chat with you. We met a couple of years back when you came for the Agile India conference. And it's always a pleasure to catch up with you again. Congrats on your new book, first of all. Yeah, thank you. And thanks for having me on. It's always a pleasure, likewise. Awesome. I know we only have 30 minutes. I'm going to jump in very quickly. The kind of big plan I had was we have about five questions that I have come up with. And we spend about five minutes each question and kind of go from there. Sounds good. Cool. So I know I'm a big fan of your lean canvas. And I believe it was August 2009 when you first wrote a post saying how I can document my business model hypothesis, which then kind of you turned it into the lean canvas and published that. And it's gone places for sure. What's been your biggest learning after having created this model and helped many companies use it? Sure. Yeah, I mean, first of all, it's amazing how time flies. So 2009 does seem far away, but it also seems just like yesterday. But I still remember I created the canvas initially for myself. I'm a maker. I like to build stuff, have a technical background. And so I felt that you needed to be talking about problems and solutions in any kind of a business model. And that was kind of my bias coming in. So that's how it started. And I think it struck a nerve. And now we are seeing it, as you said, gone places and lots of people using it. But the biggest learning that I've had, and it was not something I expected, was how far it's gone. But when I dig deeper, the reason I find is that it really strikes a nerve. And the way I'll describe it is I've seen this bias, what I call the innovator's bias or the entrepreneur's bias kind of same thing, is rather universal. And what I mean by that bias is this bias for the solution. 
So given any idea, lots of people immediately gravitate towards a particular solution. And they spend all their waking hours trying to push that solution out. And what we realize after the fact is that that solution may not actually be what will get you to product market fit. So that's what I call the bias. And so everyone starts with that misstep. And one of the things the canvas brings to the forefront is that you can start with a solution, but you have to back away and fundamentally ask: who is the solution for, and what problems are you addressing? And that bias, whether I go into startups or into big corporates, I find rampant. So that to me is the biggest learning, but also the biggest reason why I think this has been adopted: those that recognize the bias see that the foundation needs to be solid. I'll add one more learning, and that is something I call the curse of specialization. So when I talk to people about their ideas, what's funny is that they may agree on the problems, but they all have different solutions. And those solutions are also biased based on the specializations people come from. So to put that in perspective, if I've got, say, a conversion rate issue on a landing page, my marketers will tell me it's a copy problem or it's a marketing problem; I should maybe do better ads or targeted ads. My developers will want to build more features and highlight more powerful things in the app. My designers will want to improve the design. And that's another bias that I see: people may still agree on the problem, but they'd rush towards their myopic view of the solution. And a lot of this process is about saying that's OK, but let's find a more rigorous way of testing all that. Cool, cool. I think that strikes a chord with me for sure. And I've seen similar things in my own experience, having been prey to some of these myself. A lot of times you have an idea and you're like, this is it.
This needs to be built. And it's just hard sometimes to stop, take a step back, and try to validate it before you jump off. This is why I was really impressed by your first book. I think you first self-published your book, Running Lean, in February 2010. And that also seems like a long time ago. And I think the second edition of the same book was republished by O'Reilly two years later in 2012. And then in 2014, we had you for the conference. And I know a lot of people really appreciate the book and the thought process it brings in. If I were to try and summarize your previous book, the key takeaway, at least for me, was the focus on finding the right product market fit using a simple three-step approach that you highlighted. The first one was around documenting the business model, your plan A, using the Lean Canvas. So building on your Lean Canvas, you put that out. And then the most important part was identifying the riskiest part of that plan: out of the plan, what's the riskiest part? And then finding a systematic way of testing that plan. And I think you highlighted four stages in your book: understanding the problem first, defining the solution, validating it qualitatively, and then verifying it quantitatively. And so that three-step approach, to me, brought a lot of practicality to lean startup. I mean, there were a lot of talks about lean startup, but this gave a more practical approach to it. So I'm very impressed with the books, and thanks for that. Now, you've come up with a new book, Scaling Lean. So what's exciting for you? What are the new, interesting ideas which build on the previous one? Can you quickly touch upon some ideas that you think are exciting in the new book?
Yeah, so my process is one of immersing myself, and this is a very meta thing, but immersing myself in my reader's world and trying to understand what they are struggling with. So when Running Lean came out, I continued to speak at conferences like the one you talked about, I ran workshops, and I did that for many years. And people said Running Lean is great, but we're still struggling with these types of issues. And that's what I look to address in Scaling Lean. So Scaling Lean is really a play on two meanings of scaling. One is that it really is a more appropriate book when you're beyond the starting stages, and that's the scaling reference there, as your product starts to scale. But the other is just the maturity of the process itself. As we all know, processes start out being very high-level ideological best practices, but to make them really scale with people, you need to bring in some more systematic thinking. And so the book has both of those goals. And so some of the new things: it has three parts like the other one. And funny enough, as you described, I can actually see each of the parts going deeper into the three stages from the previous book. So if I take the first one, document your plan A with the Lean Canvas: what I found with the Lean Canvas is that while it's a very effective tool for getting the business model story out, if we look at the alternative, which is the business plan, the business plan has two things. It has the story part, which is all the paragraphs and the words, but it also has the financial forecasts at the end. And the Lean Canvas didn't do as good a job on the numbers side. And so in the next book, I wanted to tackle that. So how can we test an idea without doing all this Excel magic wizardry where we pound away on the keyboard at thousands of numbers? How can we do a very simple bottom-up estimation exercise?
And those in the audience that know Fermi estimation: I apply some thinking from there to essentially take an idea and estimate whether it's going to be worth doing or not. So in some ways, the first part of the book talks a lot about how you take an idea that may look good on paper and really test whether it's a problem worth solving before even getting outside the building. So some back-of-the-envelope types of calculation. If we take the second stage there, or the second principle of identifying what's riskiest: I found that some of the starting risks are rather obvious. So the big emphasis in Running Lean was on customer and problem. So until you get that foundation, there's no point building, there's no point doing anything else. So Running Lean was all about slowing entrepreneurs down and saying, get that foundation of your house built first and then go forward. So Scaling Lean now talks about, when you do go forward, where do the risks really come from and how do you identify them? For that, I dive into some other concepts like the theory of constraints and bring that in. So there's this metaphor in there of the business model as a system, and how you can really identify what's holding the business model back and prioritize focus that way. So that's where the risks are, that's where the bottlenecks are. And then in the last piece, systematically testing the plan in stages, there's a lot in the book, as I mentioned, about the processes that have to go into place: things like running good experiments, and being able to collaborate with the team to avoid some of those curses of specialization where everyone thinks their idea is the good one. How do we as a team come together and agree to test, and use more empirical evidence than passion and dogma to move things forward? So that's what the last part of the book is about.
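The bottom-up estimation exercise described here can be sketched in a few lines of code. This is a minimal illustration of the idea, not the worksheet from the book; every number and conversion rate below is an invented assumption.

```python
# A rough sketch of a bottom-up (Fermi) estimation exercise: work
# backwards from a revenue goal to see what the business model must
# produce. All figures here are made-up assumptions for illustration.

def customers_needed(monthly_revenue_goal, price_per_month):
    """Active paying customers required to hit the revenue goal."""
    return monthly_revenue_goal / price_per_month

def signups_needed(active_customers, monthly_churn_rate):
    """In steady state, new signups must at least replace churned customers."""
    return active_customers * monthly_churn_rate

def visitors_needed(monthly_signups, trial_to_paid, visitor_to_trial):
    """Work the funnel backwards from paying signups to raw visitors."""
    trials = monthly_signups / trial_to_paid
    return trials / visitor_to_trial

# Example: a $10k/month revenue goal at $50/month, 5% monthly churn,
# 10% trial->paid and 2% visitor->trial conversion (made-up numbers).
active = customers_needed(10_000, 50)
signups = signups_needed(active, 0.05)
visitors = visitors_needed(signups, 0.10, 0.02)
print(round(active), round(signups), round(visitors))  # 200 10 5000
```

Even this crude arithmetic can show, before any building happens, whether an idea is worth pursuing: if 5,000 visitors a month is far beyond your reachable channels, the model fails on paper.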
So in a way, it really takes the same framework as what Running Lean had, but it takes it to a deeper level of practice. Cool, that sounds exciting. And I know people would be looking forward to it, because if this builds on top of what they've already experienced, then it gives them the depth to tackle some of the more real-world challenges. And you just touched upon one: in several startups that I've been involved with, and my own startup, one of the hardest parts is honestly defining your hypotheses, both value and growth hypotheses, and then designing experiments around those to validate them. This is easier said than done, and I know you would agree with that. So I saw that in your book, you have a whole chapter dedicated to experimentation. Can you highlight the seven habits of highly effective experiments that you've got in your book? And maybe if you have an example to run through, that would be really helpful for us to understand. So the first thing I'll throw out there is that we all experiment, whether we know it or not. So anytime you have an idea in the back of your mind, you know that something good is supposed to happen, like you're gonna get lots of customers. So you run some kind of a campaign, and then we post-rationalize whatever happens. And that's how we typically take a good experiment and turn it bad: we use a lot of post-rationalization to convince ourselves that we are still okay, we're on the right track. So in the book I talk about those seven habits of running really good experiments to hold ourselves accountable. And I'll just warn people that it does take discipline. In the beginning, you are going to see lots more invalidations than validations, but over time your judgment improves, and that's what this process is all about. But one of the first ground rules is declaring expected outcomes upfront.
So it sounds very simple, but people think they do this, but they unconsciously do not. So in their mind, they think I'm gonna put this out and I'll get customers. When I'm talking about declaring outcomes upfront, you wanna get specific. So you wanna talk about what you're gonna go do and what's actually going to happen so you can then measure what actually happens against it. So again, for those that come from any kind of a science background, that's the basis of science. We have to make declarations and then we test our theory or hypotheses against them. And if they don't match, as Richard Feynman said, it's not the experiment that's wrong, it's the theory or the model that's wrong. And so we go back and change it. So this is a very empirical process kind of in there. So that's the first ground rule. So just going out and declaring outcomes upfront. If I use an example, let's say I'm going to run some kind of a campaign and on my lean canvas, I think that I have pretty good channels and I'm an authority, I'm an expert in a particular domain. So I can just say I'm just gonna launch this campaign and because of those two things, good things will happen. So that's a very poor experiment because I'm not declaring the outcome upfront. So let's talk about what are some of the ways that you might make that more specific. But first I'll talk about some of the pitfalls. So this again, as you said, is more easily said than done. Two of the reasons I find why people are fearful of declaring outcomes upfront is one, there's too much pressure on themselves, especially if they have any kind of a C level, if they have any kind of a CEO or founder title, they feel that they have to have all the answers. And if they make a declaration that's wrong, their teams, their employees will think they don't know what they're doing. So the way that I try to counteract that is by making the declaration of the outcomes a team sport. 
So in part of this ritual that we come into when we're designing experiments, we get everyone to make a declaration. So someone presents an idea and everyone makes a declaration, and it's almost like a game; you can turn it into a betting game if you want, but it's just more of a fun exercise. If you decide to run the experiment, when the results come in, you test everyone's declarations against the actual results. You may even award a small prize just to make it a fun thing. But what you quickly find is that people's judgments improve, and that's the key learning there. In the beginning, people will make all kinds of wild declarations, but pretty soon, after a few experiments, when you're routinely off by an order of magnitude, you automatically adjust your declarations and they become much more in line. So that's a way to counteract the fear of making declarations. The other one is that a lot of people feel they can't make precise declarations because they don't know the market yet. And so they just say, I'd rather not do anything. And in the book, I talk about how you have to emphasize estimation versus precision. So it's okay to have a range, it's okay to have a ballpark. So if I'm launching a new iPhone app, it's okay to say I might expect a 20 to 40% conversion rate, rather than saying I don't know and I'll see what happens. So that's a key message there. Now after that experiment goes through, maybe we get a 35% conversion rate, and it's within the range, and that's great. Next time around, you might narrow your range, and that's how your judgment improves. Some of the other ground rules in there are a big emphasis on the quantitative versus the qualitative. So measuring actions rather than words. So when we have customers, especially in interviews, it's very easy to hear a lot of things we wanna hear. And there are these other cognitive biases that have us see what we want to see.
But the way that you wanna really test that is to really put an action in front of them. Like take out your credit card or write me a check or even just sign up for my early access beta or something like that. I don't like using the word beta but maybe early access version of the app. So those are much more testable things than just the verbal commitments. Some of the other kind of pieces go more into things to make the experiments more testable. So a big body of work from the scientific method is one of falsifiability. So if we declare outcomes that are hard to test, so in my last example, if I say I'm an expert and I'm going to get customers, that's a very hard thing to test because I may go and do something. I may go and do five things and I may get 10 customers. Is that validated? What if I did one thing and got two customers or one thing and got a hundred customers? So it's hard to know when I'm done. And so the way that we make a falsifiable hypothesis is by tying it to a specific action. So I might say something like I will write a blog post and announce my product and then having a specific declared outcome attached to it. So as a result of that blog post, I will drive a hundred early signups. And that makes it much more concrete because when I run this experiment, at some point I'm going to get data that will come in and I'm either going to get a hundred signups or not, so it becomes a very binary thing. And so we want to make all of our declarations so that they're tied to an action, they're tied to a clear result and they're a very binary outcome. Now, of course, there's a gray area. So instead of getting a hundred signups, if I got 95 signups or even 80 signups, that's where we can still decide it was not a failure and we might still decide to move forward. Were you going to say something? 
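The structure Ash describes, a specific action tied to a declared, measurable outcome so the result is binary rather than post-rationalized, can be sketched as a tiny data structure. All the names and numbers here are illustrative, not from the book.

```python
# A sketch of a falsifiable hypothesis: declare the action, the metric,
# and the expected range BEFORE running the experiment, then evaluate
# the observed result against it as a binary pass/fail.

from dataclasses import dataclass

@dataclass
class Hypothesis:
    action: str          # what we will do
    metric: str          # what we will measure
    expected_low: float  # declared range, written down upfront
    expected_high: float

    def evaluate(self, observed: float) -> bool:
        """Binary outcome: did the observed result land in the range?"""
        return self.expected_low <= observed <= self.expected_high

h = Hypothesis(
    action="publish a launch blog post",
    metric="early-access signups in 2 weeks",
    expected_low=80,    # allows some gray area below the 100 target
    expected_high=200,
)
print(h.evaluate(95))   # True  (within the declared range)
print(h.evaluate(12))   # False (invalidated; revise the model)
```

The point of encoding the range upfront is exactly the discipline described above: once the data arrives, there is no room to retroactively decide what "success" meant.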
I was just about to ask: when you're designing these, one important aspect is also to decide how long you're going to run the experiment for, because that can make or break it. Like, if I ran it for a year, sure, I'll get a hundred. So do you determine the time before you start, or do you get some data coming in and then determine the time? Yeah, so that exactly is the sixth best practice, which is time boxing all the experiments. And again, we probably have a lot of agile folks in the audience; I know there are the agile and Scrum approaches. So I'm a fan of time boxing because one of the things I found, at least in entrepreneurship, is that time is probably the most valuable asset that we have. And it's the thing that we can never get back. You can get more people, you can get more money, but you can never get time. So I'm a big fan of time boxing the experiments. But what I also do a bit differently is, rather than doing T-shirt sizes for experiments where we go small, medium, large, I try to figure out a way to run all experiments within the same cadence. So what I mean by that is that every experiment is two weeks long or three weeks long, at most four weeks long. And the burden on the entrepreneur is to fit the experiment into the time box rather than extend the time box. Now it sounds very hard, because people will say, well, I've got a very big experiment; it will take several months. But there is an art to taking a big campaign, like building an app or building a large feature, and turning it into small, fast, additive experiments that we can test in two-week increments. And when we do that, we get feedback a lot sooner. So if I was building an iPhone app, the first thing I might wanna do is just test: is anyone interested? So I might send an email to my list and say, we're building this app, here is what the screenshot might look like. If you want this, vote for it by going to this website.
So remember, actions are more important than just words. So they go to the website and they say, yes, I want this. That could be a two-week experiment. Now, if nobody said they want this, why would we even pursue it? So that's an example of how you may take a very big feature and still break it into many smaller experiments. So time boxing, absolutely, because as you mentioned, one of the things we easily find is that time escapes us: as we run experiments, the data isn't all good yet, so we just convince ourselves that if we wait a bit longer, the data will get better. What you want to instead do is say, in my declaration, I expect in two weeks to get this data, or in three weeks to get this data, and at that point we will reassess. The nice thing I like about time is that, as long as the world doesn't come to an end, that time always comes, and it's the same time for everyone. So it's a synchronization point where the team comes together, looks at whatever the data says, and then tries to take the next right action. So yeah, a big fan of time boxing. And since we are running short on time, I'll just hit the last one: the seventh is really one of setting up a control group. And again, this comes from the scientific method: we can't really measure whether something is good without it being relative to something else. And so when you're first starting out, one of your initial control groups is what happened last week. So in the book, there's a big emphasis on measuring things in batches, thinking of every week as a batch of learning. And we always want to be measuring against the previous batch to see that progress is being made. Now, once we get more data and more customers and more ways to test, we can even do parallel split testing and A/B testing. So that's a way where we can go even faster with the control groups.
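The "weekly batch as control group" idea above can be sketched very simply: treat each week's cohort as a batch and compare it against the previous batch. The metric names and numbers below are invented for illustration.

```python
# A minimal sketch of comparing weekly batches: last week's batch is
# the control group, and progress means this week's batch beat it.

def conversion(batch):
    """Fraction of a batch's visitors who signed up."""
    return batch["signups"] / batch["visitors"]

def improved(current, previous):
    """Did this week's batch beat last week's (the control)?"""
    return conversion(current) > conversion(previous)

week1 = {"visitors": 500, "signups": 25}   # 5.0% baseline batch
week2 = {"visitors": 480, "signups": 36}   # 7.5% after a copy change
print(improved(week2, week1))  # True: the change moved the metric
```

Once traffic grows, the same comparison can be run in parallel as an A/B split rather than sequentially week over week, which is the faster control-group setup Ash mentions.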
So those, I would say, are the seven habits that are described in the book. And also, I think the important part over there is finding a statistically significant data set, because with a lot of these experiments, you can't conclusively arrive at a conclusion without having statistically significant data. And so that's also something that you would recommend is done as part of defining the experiment itself. Yeah, and that's where, if I go back to the four steps that you mentioned early on, there's a qualitative validation phase and a quantitative verification phase. Now, with some of the experiments, especially in the earlier times when you have very few customers, it's very hard to get to true statistical significance because you're dealing with such small numbers of data points. You may be talking to 10 or 20 people and it seems like the data is all over the place. At that point, I encourage people to look at the qualitative validation. So if you go, for instance, and try to sell your product, and all 10 people, or nine out of 10 people, say, no, I don't want this, that's pretty significant, because you don't have enough yeses to warrant going forward. So what we would do there is really try to figure out how we can turn those nine out of 10 no's into maybe five out of 10 and then seven out of 10 yeses. And if you can get to that stage, it doesn't guarantee that we have statistical significance, but at least it gives us permission to move to the next stage, where we now try to scale up the experiment. And that's where we're looking for the quantitative verification. So we have had many instances where we get 10 people saying, we really love your product, but in the next two or three weeks' worth of iteration, we have found that that was a fluke. Those were just our best customers, or they were kind of just weird because they wanted something that the majority didn't want.
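The small-sample point here, that nine no's out of 10 is already informative, can be illustrated with a quick exact binomial calculation. This is my own hypothetical illustration, not from the book: the 40% "null hypothesis" acceptance rate is an invented assumption.

```python
# A rough sketch of why 1 yes out of 10 pitches is already telling:
# under an assumed null hypothesis that 40% of customers would buy,
# the chance of seeing at most 1 yes in 10 interviews is small.
# Pure stdlib; the 40% figure is a made-up assumption.

from math import comb

def binom_cdf(k, n, p):
    """P(X <= k) for X ~ Binomial(n, p), computed exactly."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

# If 40% of the market really wanted the product, how likely is it
# that at most 1 of the 10 people we pitched said yes?
p_value = binom_cdf(1, 10, 0.40)
print(round(p_value, 4))  # 0.0464: unlikely, so the 40% assumption looks wrong
```

With tiny samples no frequentist test is airtight, which is why the qualitative read (can we move those no's toward yeses?) carries the decision at this stage.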
And so we have decided to kill a feature because it didn't pass the statistical significance test. But it still gives you that quick feedback. In more experiments than not, I get to a point where I go and talk to those 10 people and it's not yet a go with them. So until I can get it to be a go with them, there's no point in trying to design for statistical significance, because you're probably gonna get the same results as you do with just the 10 people. Sure, sure, absolutely. I know we set this up for 30 minutes and we're almost 25 minutes into it. What's the flexibility on your time? Do you have like 10, 15 minutes more? We can keep going, sure. Okay, awesome, thank you so much. Moving on to the next one: I love the 14th principle that you have in your bootstrap manifesto, the one that focuses on traction. That's awesome. I think you highlighted in the bootstrap manifesto that the number of features, the size of your team, or how much money you have is not the right measure of progress. And a lot of startups, you see, focus too much on these numbers. The only real metric that matters is traction, which is basically the rate at which you can capture monetizable value from your customers. And a lot of times we see people asking friends or random customers, what do you think of my idea? But that's, again, not a good measure, right? I mean, you want to look at what customers do, what actions they perform. And that comes back to the traction part. And I know in your book, you have a whole section around traction and how you build traction. So can you give a quick gist of what's in the book for someone who wants to read this, around building traction? Yeah, yeah, so I think, just qualitatively, people look at traction as this hockey stick curve where good things are happening because things are starting to accelerate very quickly.
There's an inflection point, but I found that there's not a very good definition of traction out there. So I set out to define it, and the way you described it is the definition that's in the book. But I'll maybe start with how I came to that definition. So I look at traction as really the output of a business model. If the business model works, we should be able to measure something coming out of it. And funding and the size of your team and all those things are not the right types of measures, because we can show lots of examples of people with no funding creating amazing businesses, and vice versa, and the same with small teams and large teams. So I really gravitated to this definition of a business model as having three jobs. This is a definition put out by Saul Kaplan, where he talks about a business model's job being to describe how you create, deliver, and capture customer value. So creation of value is the unique value proposition, delivery is with your solution, and then the capturing is how you get paid out of the business model. So when I looked at that definition, I quickly realized that the common factor in all those three things is the customer. And so traction has to be a measure of customers, and it has to be the behaviors that cause those three things to happen. So that's how I came up with that definition. And one of the metaphors I put in the book when I hit that is, I began to realize that all businesses are the same. All businesses have customers, and all businesses look to take people on the left, these unaware visitors, and turn them into happy, passionate customers. So in the book I toy with this metaphor of the customer factory, and it sounds like a cute metaphor, but it has a lot of roots in manufacturing, so much so that we can apply a lot of lean thinking to the manufacturing process. I mentioned the theory of constraints early on.
If we can create a blueprint of traction, like a customer factory, we can similarly begin to identify where traction comes from and how we trigger it. So that's what the book goes into a lot. So I use this customer factory blueprint, which I state as being a universal blueprint made up of only five steps, that you can use to both measure traction and define what causes it to happen. And so, to answer your question more directly: when you're faced with an entrepreneur with an idea and they are trying to measure traction, it should be a measure of how that business model captures monetizable value. And so you have to ask those fundamental questions: who are you creating value for? So who is the user in this case? Sometimes that user is also the customer; they pay for it. So when we look at software apps or coffee shops or restaurants, the user and the customer are the same, but in other instances, the user and customer are not the same. So if we look at services like Twitter and Facebook, users consume a service and they get value, but the capturing of the value happens elsewhere: the advertisers pay for that. So they're buying data or they're buying attention. And so the book talks about how you take these fundamental business model types and figure out where those three jobs are happening and who is causing them, and then your job as the entrepreneur is to make those numbers go up and to the right. And there are only a few key metrics that really matter at that point. If you can make those numbers go up and to the right, the business model takes care of itself. Cool.
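The five-step customer factory can be sketched as a simple funnel computation. I'm assuming the commonly described acquisition/activation/retention/revenue/referral steps here, and all the counts are invented for illustration.

```python
# A sketch of a five-step customer-factory funnel: traction is the
# throughput of paying customers the factory produces from a batch of
# unaware visitors. Step names and counts are illustrative assumptions.

STEPS = ["acquisition", "activation", "retention", "revenue", "referral"]

def throughput(counts):
    """Step-to-step conversion rates plus the overall visitor->revenue rate."""
    rates = {
        f"{a}->{b}": counts[b] / counts[a]
        for a, b in zip(STEPS, STEPS[1:])
    }
    rates["overall"] = counts["revenue"] / counts["acquisition"]
    return rates

batch = {
    "acquisition": 1000,  # unaware visitors who showed up
    "activation": 400,    # had a good first experience
    "retention": 200,     # came back
    "revenue": 50,        # paid (monetizable value captured)
    "referral": 10,       # told others
}
rates = throughput(batch)
print(rates["overall"])  # 0.05: 5% of visitors became paying customers
```

Laying the factory out this way also connects back to the theory of constraints: the step with the worst conversion rate is the bottleneck holding the whole business model back.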
I think you brought up an interesting point, which is something we struggled a lot with in the early days of Agile. Agile had this notion of customers and focus on customers, but a lot of the original Agile projects were inside the enterprise world, where the customers were the people who had the money and were paying for it, but the users were different. Like, it was someone signing a check and buying enterprise software, but the users were different. And a lot of times, if you didn't solve the problem for the users, they would sabotage the product. So in terms of traction, I think it's equally important to keep both those sides in mind, else you could easily get sabotaged by one or the other. Yeah, absolutely. And I would almost always side with the users, being the lowest common denominator in those scenarios. Because again, if you don't deliver, if you don't create value, so creation of value is a prerequisite to being able to capture value back, you may have gotten that big check to move forward, but pretty soon the feedback comes back that this is not worth it, and then they cancel and don't renew their subscription or the purchase order. I look at the same thing with car companies; it's the reason why car companies don't just rely on dealers to go sell their cars. They have to go and do all the research to understand what drivers really want. Even though they don't sell directly to drivers, except, say, Tesla, for instance, which changed that relationship, most car companies sell through intermediaries, but they don't rely on the intermediaries for what to build. They go to the end customer, the end user. So similarly, here in that B2B context, or in a Facebook context, if you don't take care of that lowest common denominator, then everything topples over and that revenue also vaporizes over time. Absolutely, yeah, makes sense. Let's move to the last question. For most companies launching new products or features, the launch itself is a daunting task.
Because of the fear of simultaneously juggling several risks: every time you think of a launch, you're thinking about a product risk, a technology risk, customer risk, market risk, and a lot of other kinds of risks, and that basically becomes a huge inhibitor. It really gets in the way of someone wanting to launch something very fast, but then because of these risks, they hold back. And you had a blog post that you'd written earlier, and I think it's also featured in your book, which proposes using the 10x launch approach, a staged rollout, as a way to mitigate some of these risks. And you've again highlighted four stages. I don't know how you do this, but you have this art of simplifying things into three stages or four stages, which is amazing. So can you briefly explain what the 10x launch is, and then what are the four stages that you have in there? Yeah, yeah. So as you described, most people struggle with all the risks at once, and oftentimes people lose sleep over scaling risks. They worry about the scalability of their databases, of their code, of their marketing, the channels, and while all those things matter at scale, they don't really matter when you're starting out. So the 10x approach really gives you permission to embrace small scale and really go and rock it with a small scale first, because if you can't build something valuable for even 10 people or 20 people, why do you think it's gonna work for thousands of people or hundreds of thousands of people? And so the way this came about is that I began to observe that most products fail not because of technical risk but because of customer and market risk. And so that's how the 10x came about. And I said, instead of getting hundreds of customers and having to worry about scalability, what if I intentionally go out and tell people I'm only gonna get 10 customers, and internally we only go after the 10 right customers? And that's how this really started out.
Then I began looking at what other companies were doing, and a great example that's in the news now is Tesla. I realized they were actually employing the same strategy. Of course they didn't call it that, because Elon Musk came up with it and called it his secret master plan, and it wasn't so secret because he told everyone about it. But look at what Tesla was up against: building a mainstream, affordable electric car. Trying to do that is a very hard problem, but the way they embraced the 10x is they said, we'll build version one of the car, which was their stage one, and we're going to build an expensive car. They were intentionally going out and testing their riskiest assumption, which at that time was the battery technology. They weren't even going to build the car themselves, because that would require hiring a whole team to do it; they would license a car from someone else. So they licensed the Lotus sports car, took out its guts, put their electric battery in, and sold a handful of those, but not the millions that they wanted to sell at scale. By doing that in stage one, they were able to test some real risks around customer demand and around their ability to deliver on their promise of building an electric car that could go 200 or 300 miles on one charge, but they did not have to worry about scale. They did not have to worry about the infrastructure, the charging infrastructure, the factories, none of that stuff. They didn't have to build a car; they just had to license a car and use it for their purpose. Their stage two was to level up from there: now that we can prove the battery works, let's build the Model S, which was their stage two, and let's build a car from scratch. So let's hire an automotive team, let's get a factory, and let's build the next production-level version of this vision.
That was increasing the capacity of what they were going to deliver, but it was still not a mainstream car; it was still a very expensive car. So by playing with supply and demand, they were able to control how many people would actually be able to afford that car, and that was what was driving their stages. This is an example of using the staged rollout with price and positioning as a way to really limit their need to scale. Once they got the Model S out there, it got a lot of good coverage, they proved a lot of things, and their brand became more recognized. Just two or three weeks ago they announced their Model 3, which coincidentally aligns with my stage three. I think the story is that they were going for the Model E, but Ford already had that trademark, so instead of a letter they went with the number, since it was the third stage in their rollout. The Model 3 is now their affordable $35,000 car, but by now they know how to build cars, they've got factories, they've got the charging infrastructure, so it's a much easier path than it was several years ago. So that's an example of how 10X can work at a very massive scale, but I find it's just as easily applicable to anyone building anything: give yourself permission to do a very small rollout of what you're doing, and give yourself permission to go and find just those 10 customers. Now, once you can get the 10 customers, where the 10X comes in is that you're not going to just double or triple your customer production rate; you're going to 10X it. As an entrepreneur, that requires non-linear thinking. If you're in the services business, for instance, you might say, well, I can just hire more people; well, that's not going to get you to 10X. To be able to get to 10X, you have to start productizing or automating or doing other types of things, and that's where the entrepreneurial mindset comes in.
So you give yourself permission in the beginning to prove out the customer and market risk at small scale, but then you have to quickly level it up and scale it up, and that's where the 10X really comes from. Cool. I didn't know this is what it was called before I read your blog, but this is a technique that I've been trying to use in a few startups, and it's something Google also employs in some sense. People talk about this as creating artificial demand by creating artificial scarcity, and that gives you the focus to really solve the problem well for that set of users. And then, as you rightly pointed out, you think in those exponential 10X steps, and that requires non-linear thinking; it requires that the core loop is solid before you start expanding it out. I think that's what makes it very interesting: trying to find the core loop. But do you think this model applies to products which require a network of people to work? Would the same thing work for, let's say, a messaging app, for example?
Yeah, so in the book the other example I give is Facebook. Facebook, we can argue, is probably the biggest network app that we know of on the planet, and they didn't go into 10X intentionally; they went into it out of constraints. Mark Zuckerberg did not have money. He was a college student at Harvard, and his competitors had millions of dollars and millions of users. So he could only do a launch in his dorm and only do it at Harvard, but what he quickly showed was that there was so much traction within that audience. Sure, he could have gone out and maybe tried to raise money and open it up to everyone, but he was embracing this constraint of having no money. He was also, in his own words, kind of cheap, so he began to put Google ads on there, and everything was paid for. Because he was getting so much engagement from day one, they were already cash flow positive; he was not putting money out of his own pocket to serve Harvard. But then he went very methodically, and that's the part that's the 10Xing: he went to three other Ivy League schools, plus Stanford (which I guess technically isn't one), and rolled it out there. Again, the idea of 10Xing is that you're giving yourself permission to say, while everyone else's social network is being used publicly by the masses, we are not going to go for that kind of big show; we're going to get the product right first. By getting it right on those campuses, as you said, the artificial demand and scarcity kind of set in: all the colleges wanted Facebook. They kept hearing how good it was; there were lines forming; people were vying and competing to get in. So when they did go to those schools, they got very quick adoption. From a marketing perspective it also helped them roll out very methodically, so they didn't have to scale on day one, and they didn't have to build expensive servers. A story I often tell people is that to get those four schools, you'd think it would probably take millions of
dollars' worth of investment. But since they were students, they weren't paying themselves; their only cost was servers, and they were only paying $85 or something like that for servers at the time. So that's an example of how, by embracing this idea of small scale, they could prove the model out on just those college campuses. After they got a hundred thousand users at $85 a month, they then went to investors and got an amazing valuation, because they could show that internal customer factory working so well. They could show higher engagement, twice as much engagement as their closest competitor, and so their valuation was also twice as much as their closest competitor's. So again, to answer your question, in a network app it's a bit different; you can't just go and get 10 people. Facebook wouldn't work with 10 people. They had to get enough of a sample size, and a college was about that sample size, maybe 10,000 people, and then when they went to three or four colleges they got to a hundred thousand people. But again, they were 10Xing. In the book I have a 10X diagram of Facebook, and it's actually a textbook example of 10Xing: their initial launch was about 10,000 people, then 100,000, and then they went to a million, 10 million, a hundred million, and then a billion, and they did that like clockwork. So that's a great example of that. Sure, and I think the beauty of putting constraints on yourself, giving yourself the permission, as you say, to put constraints, actually gets you to be really creative in terms of how you're going to focus on the most important thing. If you went out and got funding, then it's very easy to start focusing on 20 things instead of focusing on that one core thing. And I know you're a big proponent of getting the core loop right, in some sense, before you go out for funding. Do you still believe that that's the advice you would give entrepreneurs?
Yeah, absolutely. I'm a big fan of what we call bootstrapping, but it doesn't have to be bootstrapped; you can still raise small amounts of money. Fundamentally, though, the conversation that you're going to have with investors when it comes to scaling shouldn't be about getting the core loop right; it shouldn't be about the fundamentals. It should be about growth, and that's a different conversation. By that point you should have something that works, and investors want to hear a story of how their money gets deployed toward accelerating something that's already working, so they can get their returns faster, not toward you figuring out the model. If you go and talk to investors, they'll tell you the same thing. They won't tell it to you as simply as I sometimes put it, because they're not in the business of saying no to people; they just politely give you lots of other things to go do, but it's fundamentally a no. Or they say "you're too early for us," which is an example of how they say no without saying no. But yeah, I would say that's fundamentally a core thing that I would still say others should embrace. Yeah, and I know you've been leading by example yourself in all your startups, so I appreciate that. It's been an awesome conversation, and I wish we could continue for another hour or so, but I know your time is precious and I don't want to take too much of it. Any last parting advice for budding entrepreneurs?
I think we covered a lot, but the one thing that I've been ending most of my talks with lately is this tagline that we came up with in the company: love the problem, not the solution. It goes back to everything we've already talked about, but the way to see it is this: as entrepreneurs, we pay a lot of lip service to passion and grit, but what I've found is that passion, grit, perseverance, determination, all those things only get you so far when you're trying to brute-force a solution that's in your mind. The analogy I like to draw is that it's like we build a key, but we don't know what door it's going to open, so we start searching for doors and we try the key in every door, and that's not a very optimal way of finding which door that key is going to fit. A much more effective way is to start with the door and understand what key needs to go in it. That's the idea of finding the customer and the problem first. As entrepreneurs, I'm convinced that we can build almost anything; it's just a question of understanding the problem, which to me is half the value. Absolutely, and I think that's brilliant: love the problem, not the solution, and focus on that. Awesome. All right, Ash, it was amazing talking to you, and I look forward to continuing this conversation. Good luck with your book; I'm sure people will really benefit from reading it, and I look forward to reading it myself in detail. Thanks so much for your time. Thank you, Naresh, it was a pleasure. Take care, bye.