Thanks for joining this next lecture, part of the Youth Investment Readiness Program. We're talking today with Furman, who's going to go over perceived challenges and the validation plan. Furman, over to you.

Thank you, Lauren. It's a great pleasure to be here and share my experience in this field with all of you. Every time an entrepreneur comes up with a new idea, works on it, pursues it hard, and tries to take it to market, there are so many questions: Is this going to sell? Do people really need the product I'm selling? Do I have a reasonable business model? The good news is that nowadays we're pretty much able to validate and answer all those questions well before we go to market. Years ago it was more difficult, so companies actually needed to run pilots to figure out whether the business model was going to be profitable, whether the product was going to get traction in the market, whether the need really existed. It was catastrophic if any of those questions came back negative, because they had to go back to the very beginning and start from scratch. Nowadays, thanks mainly to the internet, and also to all the developments in the world of innovation and the new methodologies we have all embraced, we can test most of our assumptions about our business model, our product, and the customer's need before we even have a go-to-market plan ready. So today we're going to talk precisely about what to do before you launch any product or any new initiative: experiment. And I'd like to begin with what I have called the declaration of experimentation. These are key ideas that I hope you keep in mind throughout your entrepreneurial process, and especially today through my session.
The first and most important one is that experimentation is not for confirming, it's for learning. This is a huge mistake that many startups make: when they do their testing, they think they are confirming what they have already taken for granted or assumed. You may get some confirmation, true, but experiments are mainly meant to provide you with insight about the need, the product, and the market, so you can learn and change: tweak your product, or eventually pivot your business model, if the experiments show that the direction you were taking was not the right one. So, number one: experimentation is not so much for confirming as it is for learning. Another very important thing about experimentation is that what we're doing is reducing risk and reducing uncertainty; we can never remove all the uncertainty, we just reduce some of it. Later on in the presentation, when we prioritize our tests and decide what we're going to experiment with, you'll see that one of the criteria is precisely this: how much uncertainty are you able to remove? And of course, you have to begin with the tests that will remove the most uncertainty. A pilot removes a lot of uncertainty, but it's very expensive. So the key message here is: try to remove as much uncertainty as possible at the lowest possible cost. Another thing that's quite relevant is that experimentation is not a step in a linear process.
Many times, when we talk about innovation methodology and the innovation process, we like to talk in phases: you have the discovery phase, where you get all this customer insight and understanding of the market; then the ideation phase, where you come up with new ideas and initiatives; then we do the testing, then we validate, then we go to market and roll out our business model. We probably describe it that way to make the different steps easier to understand, but this is not a linear process. It's an iterative process where you are ideating, prototyping, testing; ideating, prototyping, testing; continuously learning, improving, and refining your idea until you have an MVP that's good enough to go to market. We're going to review this iterative process in a few minutes, so I won't spend more time on it now. But there are some other things I'd like to highlight before we get started with the methodology and the examples I want to share with you. One is that you have to run these experiments before you develop a business plan. All this testing takes place before the business plan. The business plan is something you may need to show to investors so they believe in your idea and fund it, but the testing has to come first, because it is going to condition and change your business model, affect your value proposition, and help you decide what the right customer segment is. All these things should already be answered, and very clearly stated, in your business plan.
Something else that's very relevant throughout the experimentation process is that you have to be very clear about what is data and what is an assumption. We're all pretty optimistic when we're launching a new idea or a startup; in entrepreneurial mode, I think we all tend to be very optimistic, so sometimes we may confuse data with assumptions, and we should be very clear about that. In this process, what we are doing is verifying assumptions, not verifying data, because data is already there; we don't need to verify it if we trust the source. The final two key elements of the experimentation process: first, this is not about collecting data, it's about generating insight. We don't want just numbers; this is not a quantitative approach to a go-to-market or a business model. It's about understanding motivations and attitudes, about understanding how willing people will be to buy my product, not so much what percentage of people will buy it. If I were experimenting to figure out the percentage of people who would buy my product, I'd be looking for data, and that's great to have, but what we're looking for here is not so much that percentage as the why: Why are people going to buy my product? What is the motivation behind it? What is the job they want to get done when they buy it? That's deep knowledge about their motivations and their needs, even beyond what they could articulate if you were able to ask them. And finally, experimentation is meant to foster decision-making. All these experiments will provide a lot of information that should simplify your decision-making process. So I hope you have understood these seven key principles of experimentation and internalized them. Now we're ready to talk about the experimentation process.
And I'd like to spend two minutes explaining a bit more what I said before about this iterative process of discovery (or learning), ideation, and experimentation, just to make sure we're on the same page. Today we're talking about the experiments, the testing of all the assumptions necessary to validate your solution, using some tools that I'm going to present to you. Those tools will provide you with enough information to go back to step number two and change, improve, and refine your ideas so you can experiment with them again. So basically we're looking at a loop between steps two and three on this slide. What are we supposed to test? There are three things we can test. Today we're going to focus on the first two, but it's worth mentioning all three. The number one thing we need to test is whether there is a need. We need to gather evidence that shows that the jobs I want to get done for my customers matter to them. I'm sure you're all familiar with the jobs-to-be-done approach, which basically says that when people buy a product, they are using the product to get a job done. As entrepreneurs, we need to figure out what job people want us to do for them, what job people are trying to get done when they buy our product. So the first thing we need to verify is whether that job matters to them, whether it's important to them, whether there is a need. The second thing we want to test is whether we can deliver the product or service that we believe is going to satisfy that need and get that job done. We want to collect evidence that shows that the solution we are proposing, the solution we're bringing to the table, is not only effective, efficient, and relevant, but also that people like it.
Because it may very well be the case that the need exists and our solution can satisfy it, but people don't like it; they don't like getting the job done using the product or service we are offering them. That would be a disaster. That's why it's important, before we go to market, to verify not just that the need exists, but also that our product, our solution, is good for that need and that people like it and will be willing to use it. And finally, the third thing we can test is whether it's worth doing all this. Are we going to make money? Are we going to attain our goals, even if they are not financial ones? Can we create value for the customer, and can we keep some value for the company? I think this is the fundamental part of any business model: to create value for the customer and be able to retain some value for the company itself. So is it worth it? As I said before, we're going to focus on testing the first two, but obviously the third one is also very important. The reason we're not going to discuss it as much is that validating whether it's worth it, whether we can create value, has to do mainly with financial projections and sales projections, and then with actually testing in the market, and that's beyond the goal of this session. So let's go ahead and talk about the first question: is there a need? I like to borrow this approach from Alexander Osterwalder, who presents the need as a collection of three elements, and I think it's a very interesting way to look at it. When we are trying to figure out whether the need actually exists and is relevant for our clients, what we have to do is develop a clear understanding of the jobs our customers want us to do for them.
We also need to learn very well what painful situations they confront when they try to get the job done today with the existing solutions. The underlying message here is that people basically get their jobs done one way or another; the needs are there and they get them done. The thing is, they may be using a not-very-effective solution, and that's why they find it painful and confront difficult situations as they try to get the job done. But they also see some potential gains in the process: as they try to get the job done with whatever is available to them, they understand what they would like to see, some extra benefits they would appreciate. So connecting these three components (what job do people want to get done, what painful situations do they confront today as they try to get it done, and what gains or extra benefits would they like to see in the process) is a great starting point for experimentation. All we have to do is figure out the assumptions behind each of these three components. Once I have laid out the customer jobs, the current pains, and the desired gains very clearly, I have to ask myself: what are the assumptions underlying each of these statements? What am I taking for granted when I say that this is the job customers want to get done, these are the pains they have, and these are the gains they would appreciate? Then, with regard to "can we deliver?", what is the product? Again borrowing from Alexander Osterwalder, there is a very interesting way to present your value proposition, and again it has three components.
The first is the products and services, or the bundle of products and services, you're going to offer, which are meant to get those jobs done; there's a direct connection between the customer jobs on one slide and the products and services on the other. But you also have to include the pain relievers and gain creators that are going to, number one, remove the pains we have identified in existing solutions, and number two, create the benefits we know people would appreciate on top of existing solutions. So again, what I have here is a list of products and services, of how I'm going to create the gains, and of how I'm going to remove the pains. Then all I have to do is figure out the underlying assumptions beneath each of these components of the value proposition. So I am already sharing with you the first step of the experimentation process, which is to extract the assumptions. Of course, we have to extract assumptions from somewhere, and my suggestion is to use the value proposition canvas by Alexander Osterwalder: extract the assumptions about the need from the customer jobs, gains, and pains, and extract the assumptions about our solution from the gain creators, pain relievers, and products and services that compose the value proposition. Once I have the key assumptions that I think are relevant to the model, I need to prioritize them, because not all of them are equally relevant. I need to figure out which ones I need to test, which ones I must test before I can move forward. Once I have decided which hypotheses I'm going to test, the next step is obviously to design a test, then to run it, then to learn from it. You can see there's a typo in my slide: number five should not say "extract assumptions," it should say "learn from the test." And then we can go ahead and make progress.
So let's focus on one, two, and three: how do we extract assumptions, how do we prioritize the hypotheses, and how do we design the tests? As far as assumption extraction is concerned, I've pretty much explained already what I think is the best way to do it. With regard to the need, we need to figure out what needs to be true for the job to be done to be relevant for customers, so that they're willing to pay for it. When I state the job to be done that I think the client has, what needs to be true for that job to be relevant? We'll see a couple of examples later so you understand much better what I'm talking about. Why do we think the customers dislike the pains we've listed? Why do we think they will appreciate the gains? So it's not about asking what the gains, pains, and jobs are; you've done that already in the discovery phase, and you have them in your value proposition canvas if you've used that tool. What you have to ask yourself to identify the assumptions is: why is this true? Or, what needs to be true for this to happen? And it's exactly the same with the solution. You have it framed with your products and services, your gain creators, and your pain relievers, and now you have to ask what needs to be true for these products and services to actually deliver the job to be done, why these pain relievers are going to relieve the pains, and why these gain creators are going to create the gains. So it's a matter of asking yourself: why? You can also extract assumptions about the core value creation, but we said we were not going to focus on that.
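To make the extraction step concrete, here is a minimal sketch of how you might record canvas elements and the "what needs to be true" assumptions behind each one. The class structure, field names, and example content are my own illustration, not something from the session:

```python
from dataclasses import dataclass, field

@dataclass
class CanvasElement:
    """One item from the value proposition canvas: a customer job, a pain,
    a gain, a product/service, a pain reliever, or a gain creator."""
    kind: str                      # e.g. "job", "pain", "gain", "pain_reliever"
    statement: str                 # the element as written on the canvas
    assumptions: list = field(default_factory=list)  # "what needs to be true"

    def extract(self, what_needs_to_be_true: str):
        """Record an assumption underlying this element."""
        self.assumptions.append(what_needs_to_be_true)
        return self

# Illustrative example (hypothetical content):
job = CanvasElement("job", "Small businesses need short-term working capital")
job.extract("Owners cannot cover cash gaps from savings or relatives")
job.extract("Building a formal credit record matters to them")

all_assumptions = [(job.kind, a) for a in job.assumptions]
print(len(all_assumptions))  # 2 assumptions, ready to prioritize
```

The point of keeping each assumption attached to its canvas element is that when a test later disproves an assumption, you know exactly which part of the need or value proposition has to change.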
That is basically figuring out the assumptions in the financial model, to verify your cost structure and your revenue streams and determine whether the model is sustainable. Let's talk a little bit about prioritization before we describe a few examples. The situation is that you have some assumptions extracted from your value proposition and from the need, and you need to figure out which ones are worth experimenting with. Experimentation takes time and some money; it's not supposed to be expensive, but it does entail some cost. So it's important to prioritize and decide what you want to do first. Let me go to this slide directly and explain the three criteria we use to prioritize the hypotheses from the need and the value proposition. Basically, we look at the level of uncertainty the experiment will remove from the model, and at the impact the hypothesis has on the overall success of the business model. Some assumptions, if proven wrong, are a big game changer: they have a very high impact on your business model and you would have to change it dramatically. Other assumptions, even if proven wrong, have very little impact on the model. At the same time, some assumptions, if verified, remove a lot of uncertainty, while for others, unfortunately, even verification removes only a little. So here we clearly identify three zones. Zone one contains the assumptions you should test right away, where you have a lot of uncertainty and the assumptions have a high impact on the model. Then we have zone two: these are the assumptions you can test next, in a second round.
Here we have assumptions that have less impact and that, if verified, will reduce uncertainty to a lesser degree. And then in zone three we have the assumptions that we may never test unless we have a lot of time and money, because they have little impact on the model and the amount of uncertainty they remove is not a great deal. So we have two key criteria for prioritizing hypotheses, and we also like to introduce a third one: how easy it is to test the hypothesis. We think you should try first those hypotheses that are easier to test, that you can test faster or at a lower cost. So, looking at assumptions across zones one, two, and three: first of all, test an assumption in zone one, where you remove a lot of uncertainty on an assumption with high impact, but at a low cost or low difficulty of experimentation. The second assumption you should test is not another one in zone one; it's an assumption in zone two. Why? Because we would rather test something in zone two that is easy to test, something with little cost, time, and effort, than try to verify something that, although in zone one, is much more complicated to test. So we begin with the zone-one assumptions that are easy to test, continue with the zone-two assumptions that are easy to test, and then, instead of going to zone three, we go back to zone one for the ones that are not so easy to test, and then to zone two for the same. Most of the time we test just two key assumptions, so we usually end up testing one in zone one and one in zone two. I hope this is clear; it's a lot of information about experimentation in just 30 minutes. I'd just like to review here the iterative process we mentioned before.
So we're all clear on what we're talking about, and to sum up what we've said so far: you start with your customer insight, you figure out the need or the job to be done for the client, and you ideate a value proposition concept. The first thing you do is a concept test, and there are tools specially designed for this, which we're going to review in a second. Once you have the results from that concept test, you have an MVP. You build the MVP as a low-fidelity prototype and test it; that helps you create a medium-fidelity prototype, which you test to refine your value proposition and create a high-fidelity prototype, which you finally test to obtain your go-to-market minimum viable product. This is why we call the process iterative: you go back and forth between testing and a higher-fidelity version of your product. What's important in this slide is that, broadly speaking, there are two different types of tests: those better suited to the concept-test phase, where you don't yet have a minimum viable product, and those for when you already have something you can test in the market. Let's take a look at the different test tools we use. We won't have time today to describe all of them, but we'll touch on them in the examples; the description of all these tests is in the presentation you'll have access to. Some tests are better for concept testing, for example talking to customers. You don't need an MVP; a two-dimensional prototype is enough, and you can run any of these creative games: speedboat, product box, buy a feature.
These are what we call creative games, and they're easy to implement; their descriptions are in the presentation, so no need to spend time on them here. You can also do desk research, or run thought experiments: you talk to a couple of customers and have them walk you through the process of experiencing the need and trying to satisfy it, and as they describe their journey and how they do things, you can figure out whether your solution would actually make things easier for them or not. You can also talk to an expert, or simulate a transaction. These are good examples of concept tests. Once we have transformed the concept into an MVP, and the MVP into a low-fidelity, then medium-fidelity, then high-fidelity prototype, we can start using other types of experiments. For example, A/B testing, where you build two different prototypes and show them online: you run a Facebook campaign with different banners that take visitors to different landing pages, each presenting a slightly different product, and you ask people to sign up for a future release, to pre-order the product, or even to buy it. As long as you tell them afterwards that you are not able to deliver it, so you are not taking money from them, it should be okay. These are tests better suited to the MVP and low-, medium-, and high-fidelity prototype phases. We have a description here of each of the tests mentioned; I don't think we need to go through all of them right now. You will have them in the documents from this session, and they're pretty much self-explanatory. I'd rather explain a few case studies and describe the experiments we ran for those projects.
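The A/B test just described ultimately comes down to comparing sign-up rates between two landing pages. A minimal sketch of that comparison, using a standard two-proportion z-test; the visitor and conversion numbers are invented for illustration:

```python
from math import sqrt

def two_proportion_z(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """z statistic for the difference between two conversion rates,
    using the pooled standard error."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p = (conv_a + conv_b) / (n_a + n_b)            # pooled conversion rate
    se = sqrt(p * (1 - p) * (1 / n_a + 1 / n_b))   # pooled standard error
    return (p_a - p_b) / se

# Hypothetical campaign results: landing page A converted 48 of 400 visitors,
# landing page B converted 30 of 410.
z = two_proportion_z(48, 400, 30, 410)
print(round(z, 2))  # 2.26; |z| > 1.96 is significant at the 5% level
```

With these numbers, page A's lead over page B is unlikely to be noise, so the product concept behind page A is the one to iterate on. With much smaller samples the same difference in rates would not clear the threshold, which is why the campaign needs to run long enough to collect a few hundred visitors per variant.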
These are real projects we've done, with the actual experiments we ran for the client and their outcomes. The first is a financial institution. They offer term loans to the unbanked and wanted to expand their product portfolio. This is base-of-the-pyramid lending: they were offering just one type of loan and wanted to see what other types of loans people needed. So we extracted a couple of assumptions that we wanted to test with regard to the need. One thing we had said was that a reason for people at the base of the pyramid to ask for a loan at this institution, instead of going to friends, relatives, or loan sharks, was, on top of the financial terms and the interest rate, that the institution reports credit information to the credit bureaus here in the US. By getting a loan from this institution, these customers were actually beginning to build a credit record, and we weren't sure whether this mattered to them or not. So this was an assumption we wanted to test. The other need assumption was whether or not people would like more flexible loan facilities. The existing term loan was pretty rigid: three-month terms, a fixed rate, and fixed, limited amounts. Would people appreciate a more flexible loan facility? With regard to the value proposition, we were debating whether the new loans should be longer or shorter term, and whether we needed to create new lending products that did not yet exist. So we wanted to test whether people preferred long or short terms, regardless of the amount. And finally, when lending money to small businesses at the base of the pyramid, we wanted to figure out whether borrowing against invoices could be a possibility.
What we mean is that these small, informal businesses run by people at the base of the pyramid do get a lot of sales traffic, and they can prove how many sales they make on any given day. We were wondering whether we could tie the loans to their sales volume rather than to their final profits. So this was something we wanted to test too. We prioritized the hypotheses based on the uncertainty that would be removed and the impact: the "test now" hypotheses were 1B and 2B, and the "test next" was 2A. These are the tests we designed. For 2B, figuring out whether people were willing to borrow against invoices, we simulated an online application process for invoice factoring and merchant cash advances, the two products we wanted to test. We actually ran an online application on the company's website. It looked and sounded like you were applying for a product, but you were never asked for anything confidential. We were just asking something like: would you like to apply for a merchant cash advance? Or: would you like to apply for an invoice factoring solution? People clicked yes. We asked them the amounts, the terms, and what they wanted the money for. Then, when they clicked next, we would just say: we're happy to know what your needs are; we don't have this product available right now, but we'll let you know the minute we do. And we asked for their email address in case they wanted us to update them when the product became available. I don't think anybody felt cheated, because we never said it was an application, although it sounded like one.
Based on the number of clicks each of these two products received during the limited period we advertised them, we learned which of the two could get more traction in the market. For 2A, whether people prefer longer terms regardless of the amount, we ran an email campaign to existing clients; this company already had a large client base. We sent an email offering the possibility of extending their maturity dates if they met some undisclosed conditions. The reason the conditions were undisclosed was that if a client said, yes, I want to extend my maturity date, we wanted to be able to say: you don't meet the conditions. That's a little bit tricky; we were in gray waters there, maybe. But we got lucky, and the people who asked for an extension could actually get one, so the financial institution ended up offering extensions to the ones who wanted them. Overall, though, we didn't get many positive responses, so longer terms were not the solution. And finally, for need hypothesis 1B, we wanted to figure out whether or not people would like more flexible loan facilities. What we did was allow new applicants to request different terms and loan amounts on top of the standard offers. When we got a new application, alongside the standard amounts and terms we asked: is there any other specific term or amount that would serve you better? It was not binding for us; we didn't have to satisfy what they were asking for, but at least we were gathering information. A very simple test just to understand whether or not people preferred more flexible facilities.

I have two more examples. Should I go through the next two, or is this a good time to open the mic for Q&A? Lauren, how do you think we should do it?

I think we should do two more examples. I think that would be helpful.
Okay, let's go for it then. This is SunEdison, a company in India. This is not our project; we learned about it from a book, but I thought what they were doing was interesting. The assumptions, the prioritization, and the test designs are work we've done in-house, but the project itself was not one of ours. SunEdison sells solar panels to mid-size and large organizations so they can generate electricity for their own use. Basically, you have these big-box stores in India with huge roofs, and because the electricity supply is not very reliable in some regions, how about installing solar panels on the roof so companies can generate their own electricity and rely on their own production for their commercial needs? So, the assumptions in the model. The need: companies need a guaranteed electricity supply. It could also be argued that by generating its own energy, a company could be perceived as greener, more environmentally friendly; this could have been something in the value proposition, so we thought it would be worth testing that need as well. With regard to the value proposition, the main question was: is there a willingness among businesses to invest today to save tomorrow? Installing solar panels on your roof is great, but it takes a lot of money. It's a big investment, and you are not going to benefit from it in the short term; you benefit little by little with every monthly utility bill you don't pay. You may have to invest $20,000 today and get a utility bill reduction of only $20 in each of the coming months. So were businesses willing to do this?
And the other one: we were wondering if the solar panels would be easy to install on the big-box stores and warehouses without interfering with the commercial activity in those stores. Again, we prioritized the hypotheses, and 1A, 2A and 1B were the ones we assigned tests to, so let's go through them. Hypothesis 1A: guaranteed electricity supply is the need. What we suggested could be done — we didn't actually do this at the time — was to open a complaints hotline to report electricity outages, with the excuse of using it to push the authorities to take action. So if I'm a company thinking about whether I want to install solar panels or not — and I'm SunEdison, trying to sell those solar panels — what I can do is open a complaints hotline, just a number that I publish everywhere. Every time a company has a problem with the electricity supply, they can call me. I take the complaint, and then with all those complaints I'm supposed to go to the government and share that all these companies have complained about electricity outages over the course of the month. Collecting all the complaints and taking them to the government, that's great. But in the meantime, I'm also learning: I'm learning how many companies are complaining about electricity outages. So it's a sort of experiment that gives me the information I need while also doing some good for the community. This is great. And the parameter would be, of course, how many complaints we receive from local businesses. Hypothesis 2A: an overall disposition by businesses to invest today to save tomorrow. We could have done a direct marketing campaign offering businesses the chance to pay in advance for two years of electricity in exchange for a lower per-kilowatt rate. That's the same trade-off as the solar panel investment.
You invest in solar panels today and you get a reduction in your bill for the next 10 years. What we're saying here is: okay, we run a campaign offering people the chance to pay for two years of electricity today, as a lump sum based on average consumption, in exchange for a lower rate during those years. Conceptually there's no difference between the two: you pay more today to pay less in the future. And in terms of testing, this is much easier to do, because you just have to offer the possibility through the direct marketing campaign; you don't need to invest in the solar panels. It's a financial product, a contract, which is much easier to execute. So that's why we thought this could be a nice way to prove whether people are willing to pay more today to pay less in the future. And finally, 1B, whether companies want to be perceived as green: the test here would be something as simple as an online survey offering the opportunity to monetize large roof surfaces by installing solar panels. So actually what we're doing here is offering our service before we have it. You can call it pre-sales, you can call it mock sales: before you actually have the capacity to deliver, you run a kind of market test, offering the service to as many companies as you want and just checking how many accept your offer. So, some very simple tests to check the hypotheses we extracted. And finally, I love this one. This is something we did in Mexico a few years ago, for a petrol company, a gasoline company. They wanted to test whether a premium diesel could be a good product to sell.
They had created the premium diesel at our technology center, and the rationale behind it was that it was better for your engine, and your engine would perform better in terms of speed and power and all those things. So the assumptions we wanted to test: people are willing to pay more for a premium diesel; people are willing to pay more in exchange for higher performance and longer engine life. We also wanted to test whether people feel that regular fuels are not good enough, that they don't like what they have, so there's a need for a premium one. With regard to the value proposition, we wanted to test whether drivers value a longer, healthier engine life over a shorter one at a cheaper price, and also whether drivers think that premium fuels improve performance and power — we wanted to see if there was a connection there, if people connect the quality of the fuel with engine performance and engine life. There was also an assumption about drivers' environmental responsibility. But because of the prioritization we did, we ended up testing 1A, 2A and 1B. And 1A was my favorite. Basically, we installed a mock fuel dispenser at the pumps. We went to a selected number of gas stations — I think it was around 10 — and at each pump we installed a mock fuel dispenser that said "premium diesel," showed a price, and had a hose. And we counted how many times people took the hose and tried to pump this premium fuel into their cars, only to see a sign saying that the premium fuel was not available at that pump. It was a great experiment, because people were seeing the product there: they saw that there was a premium diesel, they were able to actually grab the hose and try to pump it into their cars, and it was only at the very end that they realized they couldn't.
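[Editor's note: the pickup counts from a fake-door test like this mock dispenser become more useful as an interval estimate of demand rather than a raw tally. A minimal sketch with invented counts, using the Wilson score interval, which behaves well for small proportions:]

```python
import math

def wilson_interval(successes, trials, z=1.96):
    """95% Wilson score interval for a proportion -- e.g. the share of
    drivers at the pump who reached for the premium-diesel hose."""
    p = successes / trials
    denom = 1 + z**2 / trials
    center = (p + z**2 / (2 * trials)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / trials + z**2 / (4 * trials**2))
    return center - half, center + half

# Illustrative numbers, not the real gas-station counts:
# 34 of 400 observed drivers tried the premium hose
low, high = wilson_interval(successes=34, trials=400)
```

The interval tells you how precisely a few weekends of observation pin down real demand, and whether it is worth extending the test to more stations.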
And people understand that sometimes you don't have the fuel you need at the gas station, so frustration levels were very, very low. People just put the hose back on the pump, took a new hose and added the standard diesel. But it was quite illustrative, and it was very good for testing these hypotheses. We also did a survey to learn how long people hold on to their cars on average — a very easy way to experiment. And we also ran a poll at gas stations with clients using the standard diesel: we talked to the customers as they were pumping diesel into their cars, to learn how they liked it and whether they wanted a better fuel to improve their engine performance and all the things we said before. So, very interesting tests, and a quite successful project that led the company to not launch the premium diesel: apparently the public was not going to appreciate it enough to pay the extra price. And I'm giving it back to you, Lauren. I think this is pretty much what I had for this session today. Thanks so much. That was a great and very clear presentation. I hope so — too many things in 30 or 40 minutes. Yes, it is a lot of content. I'm curious to hear — well, I guess I have two questions, and I think they're related. I love this last example you presented with the premium diesel product, and I particularly like it for a couple of reasons, one of them being that the assumptions they were making — that people would be willing to pay more for this and that it was a useful product — were not confirmed, and they ended up not marketing the product. I'm wondering if there's a more concrete example, or a different example, where perhaps some of the assumptions were confirmed and others were not — an example of the more iterative circle of testing and retesting that you presented. Do you have an example of that?
And then, maybe related to that, I'm curious if you have any famous examples of companies that either did a really great job validating their assumptions, or any famous failures of companies that put a product on the market that failed, perhaps because they didn't effectively follow this kind of iterative model. So I guess that's three parts: an example of the more iterative way this could work, and then maybe a best-case scenario and a worst-case scenario. Okay, so let me bring us back to this image — this is the one you're referring to, right? The more iterative process. And I'm happy you asked that, because we just finished a project a couple of weeks ago. It has nothing to do with oil — well, it has to do with olive oil, so it's not fuel, but it is a different kind of oil. It's very interesting. This company produces a top extra virgin olive oil, and we were creating a new olive oil specifically for frying. The problem with frying with olive oil is that sometimes you don't get the high temperatures you need for deep frying. Especially here in the US, if you want to make fried chicken and you want to deep-fry it and get the batter very crispy, you need very high temperatures, and extra virgin olive oil is not very good at high temperatures because you get too much smoke. So we were helping them create the perfect olive oil for frying, and we've done everything from the concept test to the high-fidelity prototype. At the concept-test level, we were trying to prove whether people really needed an olive oil for frying, or whether they were happy using canola oil or other types of oil. The way we did this was just by participating in different cooking forums.
So we had a team of people participating in different cooking forums, talking about frying and asking questions: hey, I need to fry this, I want to make this fried chicken, which oil should I use? And people were saying canola, people were saying avocado oil or grapeseed, I think it was. And then our secret blogger, our secret forum participant, would say: hey, how about olive oil? All these forum conversations online, plus a bit of social media scanning we also did, took us to an MVP. This MVP was not yet an oil that you could use, but we had figured out what the composition of the olive oil would be and what the packaging would be, and we made a low-fidelity prototype that we tested by showing it to people on the streets. We went to places where people buy cooking utensils and things like that, and we showed them a picture of this olive oil specially designed for frying, to gather insight. That helped us get to a medium-fidelity product that was actually an oil, and we were able to test it at a pop-up stand — not quite a pop-up store — in a farmers market. We went to the Upper West Side here in New York City, near 80th Street, and for three consecutive weekends we had a stand where, on top of selling other olive oils, we were testing this one, saying that it was good for frying and getting feedback from people. And that helped us create a high-fidelity olive oil product that is actually in production right now, including all the packaging, the naming, the value proposition and everything. So that's an interesting recent project where we've gone through the full process. I hope I've answered the first question. Yes, yes, absolutely. Okay, and with regard to the other two, it's easier to begin with the failures.
One of my favorites is a local brand of beer, Coors. Coors has always said that theirs is a Rocky Mountain water beer — their claim is that the beer is made with Rocky Mountain water. And they made the assumption that if people liked Coors beer because it was made with Rocky Mountain water, they would also love the Rocky Mountain water by itself, without the beer component — just the water. They didn't test it; they took for granted that the assumption was right, they launched the Rocky Mountain water, and it was a huge failure. The market did not connect the Coors brand with Rocky Mountain water, and they had to remove it from the market. That's an example of when it didn't work. As for when it worked: there are a few cameras on the market right now from Polaroid where we've done a lot of testing. I cannot talk too much about it because of the non-disclosure agreements we have, but there are a few Polaroid cameras on the market that we've tested extensively in terms of user experience, user interface and how clients interact with the camera. We went through the whole iterative process, from initial low-fidelity UX sketches all the way to high-fidelity apps that we had people interact with, and I think we got to the right version: the cameras are being sold and, as far as I understand, they're pretty successful. So that would be my example of something done well. Great, that's fantastic, that's very helpful, thank you. So I think we're at time, and unless there are any other burning questions, I would say we'll wrap up. All right, so thank you so much. We really appreciate the expertise you've shared with us today, Furman, and just a note to thank anybody who's watching this video online after we've posted it. Thank you so much for joining. Thank you, my pleasure.