I'm not used to talking on the microphone. So if I'm not audible or if I'm too loud, just wave your arms and we'll do something about it. All right? Okay, so let's get started. So what I'm gonna talk about today is something to do with data culture, right? And to say it in one line: without a data culture, agile is just a process, right? Before we jump in, I'm just gonna spend 30 seconds talking about who I am and what I do, right? Just to set context. I work at Hike Messenger, in case you haven't heard about Hike. Hike is a social app where you can connect with the people who matter most to you and have fun with them on the app. And also you can consume content and do tons of other stuff, right? So that's Hike. I am a data scientist and I work as part of the analytics and data org within Hike. My team is responsible for a few different things. Number one, it's responsible for making sure that the BI and reporting system is up and running and data is available all the time. And on the other side, we are responsible for making sure that we mine user insight out of all the data that we have about our users and about our product. And then we use that user insight to personalize the experience for our users, right? So that's pretty much what I do. That's my day job, right? Okay, so what are we gonna talk about today? Building a data culture. First and foremost, what is a data culture? What do we even mean by a data culture, right? And then once we have some common understanding of what a data culture is, what do we do? How do we structurally change an organization to get this data culture thing happening? And what I'm gonna talk about for the next 40 minutes or so are things that I have learned over the past year, year and a half, working at Hike. This is what I do at Hike, right? This is sort of my primary role at Hike, right?
And again, what we're gonna talk about today is not something I read in a book last week, right? This is something that I have worked on over the last year and a half, and we've made a lot of mistakes and we've learned a lot of things, right? So hopefully what I'm gonna do today is share at least some of those learnings with you folks, right? I'm gonna try my best at the very least, all right? So what is a data culture, right? In order to answer that question, I took the easy way out. What I did was just Google for organizational culture, right? And the first result that shows up says this: organizational culture is a system of shared assumptions, values and beliefs, which governs how people behave in organizations, right? These shared values have a strong influence on the people in the organization and dictate how they dress, act and perform their jobs. And then I did the obvious thing. I just did a Control-F and changed a few words around, and here's what I call a data culture. Data culture is a system of shared assumptions, values and beliefs, sound familiar, right? Which governs how people interact with and make use of data in organizations, right? These shared values have a strong influence on the people in the organization and dictate how they communicate, take decisions and perform every aspect of their job, right? So this is what, in my mind, is a data culture, right? And what hopefully we are gonna do over the next few minutes is talk about how we can get this going within a company, right? How can we make all this happen within the company? And Dipeshu, who spoke a few minutes before me, said it very well: you can't build culture, you can't make culture happen with force, right?
You have to foster it, and over time, it's a process, over time, you will hopefully see the right signs, the right things happen, right? And that is how you build a culture. So, three major points from the previous slide, and I wanna elaborate on them just a little bit more. Shared assumptions, right? When we say shared, everybody in the company has to have a common understanding of what we're talking about, right? Shared assumptions. Data should mean the same thing to everyone in the company. That's different from saying everybody should draw the same conclusions from the data. It's that everybody should have the same common understanding of what the data is, right? This is about creating a level playing field within the company, right? Usually there'll be a few folks in the company who are, you know, very data savvy. And, you know, they'll be able to do a lot more with data. And there are some other folks who are not that data savvy. And maybe their job doesn't even require them to be, but then they will not use data as much as the other folks do, right? Shared assumptions mean that regardless of who you are and what your job role is, you're all playing on a level playing field. You have the same access to data. You understand data the same way and you're able to use data just as freely, regardless of who you are, right? Shared values: everybody in the company places the same amount of importance on data, right? We're talking about a value system. We're talking about everybody in the company sharing a common understanding of what they should do with data, how they should use data, how they should communicate with data, and so on. Shared beliefs, right? Everybody has this common understanding, or common belief, in the power of data. And this sort of only comes if you have experienced it yourself over and over and over again, right? You've used data, you've seen the power, and you've done it many times, over and over again.
That's the only way you'll start believing that this thing that we call data is really powerful, right? Sure, yeah, sure. So I can give an example of what we mean by shared beliefs. If you go to someone in the company and say, we have a lot of data, why don't you start taking this data and making some use of it? There'll be resistance, right? People don't want to try new things. Let's say, for two or three times, you make them do it somehow or the other, right? If, when they use the data those two or three times, suddenly, magically, they start seeing results. They start seeing that data is making their jobs easier or making them much more effective at what they do. That's what I'm talking about, right? That is a belief, right? Then they will start believing that whatever they do, they'll be able to do much better if they start using data, right? So here's how we start a process which will hopefully end up creating a data culture within your company, or building a data culture within a company, right? I strongly believe that there are two fundamental pillars that we need to build within the company, and we're gonna talk more about what those pillars are and how we build them and so on. We have to then, once we've built them, focus relentlessly on adoption, and when we do both over a period of time, we get a vibrant data culture within the company, right? Build those two pillars, and we're gonna talk about how we build those pillars, and create levers so that you can take hold of those levers, use them like a crank, right? Turn them around over and over again, many times, hundreds of times, and if you do these two things, right, a few weeks or months down the line, one day you'll go into your company, into your organization, and you'll realize that something magical is happening, right? And that would be a data culture. So what are those two pillars?
The first pillar I'm gonna talk about is called data democratization, right? Making sure everybody in the company, one, has access to data, and two, is able to use the data in the right way, at the right times, to draw the right conclusions in the whole decision-making process, right? This is, at a very high level, what data democratization means. And the second pillar: experimentation at scale, right? What does experimentation do? Each step forward in the company's or the organization's evolution is taken by making small experiments on what the way forward should be, and these experiments should ideally help you move forward in the right direction. They should keep you oriented in the right direction all the time, right? And just a quick thought on how these things connect with agile, right? Like, how does this all add up? I believe that these two pillars are actually fundamental for a company to go truly agile, right? Both are needed to de-risk what you do, to be able to develop something iteratively, for different parts of the company to move independently without coming in each other's way, but at the same time move forward in the same direction and be in sync, right? So these are the two pillars, and we're gonna talk about them in much more detail now. Before we move forward, I'm gonna set up a small sort of case study, right? Let's assume that all of us work for this company. We're Widgets Inc, right? We are an e-commerce company. We sell widgets online. We have an app and we have a website, right? And it turns out that, you know, many people use widgets and they're vastly popular, and they come in different shapes, sizes, colors, whatever, what have you, right? And so, very competitive market, we're taking on all the e-commerce giants in the world, right? So that's who we are. Why are we doing this?
Many of the concepts that we're gonna talk about in the next few minutes are very hard to relate to in the abstract, right? Having an example to use when we talk about those concepts is hopefully gonna help us make sense of this much faster, right? So we are all part of this company, right? Now, as it turns out, Widgets Inc is an internet-age company, right? So they have, like, tons of data. They collect a ton of data about everything that you can imagine, right? And we are talking about two kinds of data: data about what happened in the past, historical data, and also what's happening right now, in real time, right? So you have things like inventory data, you have orders data, delivery data, product details data, you name it, we have it, right? And, you know, as it turns out, different components of these data sit in different sorts of data stores, right? Some of them sit in, you know, SQL databases, some in some distributed storage, somewhere in the cloud. Some of it, like, you know, app or web logs data, the volume is so huge it just goes straight into archived storage. Some of this, marketing data, for example, might be sitting around in a spreadsheet on someone's computer, right? So all of this data is available, is there within the company. But the problem is it doesn't talk to each other, right? It's different elements of data sitting around within the company. None of it works with each other, right? Now, getting data from one source, from one of these elements, might be possible if you know the right person, right? But getting access to data from across these elements is next to impossible, right? Because, the way we talked about just now, they just don't talk to each other at all, these different systems, right? And why don't these systems talk to each other? A couple of problems. One is technological, right? You have these different kinds of data stores, and, you know, they just don't talk to each other.
And the other could be organizational, right? What I mean by organizational problems is that as companies grow larger, they grow in silos, right? You have departments or, I don't know, fiefdoms, if I may call them that, who are very, very protective about what they have, data being one of those things, right? So if you go to your supply chain management department and say, hey, I want inventory data for the last six months, the first question they're gonna ask you is why? Why do you need it? We can't give that data out to everyone. It's very sensitive, right? Our competitors might find out how we stockpile certain inventories, right? You'll hear a lot of reasons. The basic fact being, organizations are siloed, right? And I saw a couple of smiles around the room, so I think people know what I'm talking about, right? So in the boardroom, right, in the company's leadership, everybody's talking about how we have tons of data and our people, our employees, should be making a lot more use of the data, right? That's the conversation always happening within our leadership team. But you know what? If people can't get their hands on the data, you can't really blame them for not being able to use it, right? So this is a key problem. Nobody in our company, Widgets Inc, is able to get their hands on data and use it effectively, right? So that's problem number one, and we need to solve it somehow. There are deeper problems. Let's look at a couple of facts about our users, about our customers, and see what happens, right? Customers who come in from clicking a Facebook ad prefer orange-colored widgets. Showing a discount coupon as soon as a product is added to cart leads to better conversions in tier one cities, right? Now, assume that these things are true about our customers. How would we, as employees of the company, realize that this is true, right? These are correlations, right?
People who come in through clicking on a Facebook ad are strongly correlated with people who order orange widgets, right? That's a correlation, but how do we figure that out? This is how we figure it out, right? We look at data across different elements, across different departments, right? And if we are able to get access to data from across silos, we have some chance of being able to figure these things out, right? Mine some insight out of it. Here's the second problem. Insights almost always come from overlaying different kinds of data on top of each other. Very, very rarely will you find insights only within your inventory database or within your marketing database. Sure, there's some stuff there, but more often than not, your insights, the most powerful ones, are gonna come from sharing data and looking at data across silos, right? So this is problem number two. So what do we do? How do we solve these two problems, right? The answer is building a single unified data platform, right? Now, I'm not gonna spend too much time talking about how we build this or what the right architecture for this should be, because this is just too specific to whatever organization you're talking about, right? But just a couple of points. Number one, this is an area of major engineering investment, right? This is not a project that two engineers are gonna sit over a weekend and finish, right? This is something that's gonna take months, maybe a year, right? So the leadership has to really get behind this, and as leaders, we have to get behind this and say that we really need to build a single unified data platform. The good news is that even though the whole process is gonna take a year, or I don't know how long, we can start small, right? We can build iteratively. We can start with a very, very simple version and scale up over time, right? That's the good news. Now, I'm saying this is something that we're gonna have to significantly invest in.
I've not yet talked about what value we're gonna get out of it, right? We're gonna talk about that in a few minutes. But are we done yet, right? Assume that we build a single unified data platform. You know, you build a tool where someone can go in, maybe fire up a web browser, and access any data within the organization, right? It's possible, right? Let's say we did that. What next? Are we done yet? Not really. What would you have to do to access data from these different data stores, right? This is an example SQL query. I don't even know if it's correct or if it works. The objective of showing this here is that to get data from diverse data sources, you're gonna have to do something like this. And regardless of what tool you use, you might use SQL, you might use something else, right? There might be a few data-savvy folks sitting in this room who might say, okay, big deal, we'll do it. But let me promise you, 90% of the people within the organization are going to faint if you tell them that this is what you need to do to get to data, right? Not gonna happen. They're gonna say, please try the next person. I'm not gonna do it. Is it because SQL is hard? No. I can promise you again that if we had to, we could train everybody within the organization to write stuff like this within a couple of weeks, a month at most. It's more to do with fear of the unknown, right? All of us as human beings are resistant to change when we are fearful of what the unknown holds for us. So the first reaction when you see something like this is, I don't want to have anything to do with this, right? So what next? We build this single unified data platform and data is now available freely to everyone, but nobody can access it, right? What if we build tooling which made this look like this? And again, folks who know SQL will tell you that this is very simple, right? It's not hard to do at all.
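To make the "tooling that hides the query" idea concrete, here is a minimal sketch in Python. Everything in it is invented for illustration: the table names, fields, and the in-memory "store" are stand-ins for a real query engine, and the helper function is a hypothetical example of what the platform team might pre-build so the last-mile user never writes the join themselves.

```python
# Hypothetical in-memory stand-ins for two silos: an orders store and a users store.
ORDERS = [
    {"order_id": 1, "user_id": "u1", "amount": 350},
    {"order_id": 2, "user_id": "u2", "amount": 500},
    {"order_id": 3, "user_id": "u1", "amount": 200},
]
USERS = {
    "u1": {"acquisition_channel": "facebook_ad", "city": "Delhi"},
    "u2": {"acquisition_channel": "organic", "city": "Pune"},
}

def orders_by_channel(channel):
    """One-line answer to 'orders from this marketing channel'.
    The join across the orders and users silos is hidden in here."""
    return [o for o in ORDERS
            if USERS[o["user_id"]]["acquisition_channel"] == channel]

# The end user types one line and never sees the underlying query:
rows = orders_by_channel("facebook_ad")
print(len(rows), sum(o["amount"] for o in rows))  # 2 orders, 550 rupees
```

In a real platform the body of `orders_by_channel` would dispatch a cross-store query, but the point stands: the user-facing surface is one friendly function call, not a multi-source join.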
The point being, we, the architects of the data platform, have to build these tools ourselves. We have to anticipate what the different things are that people in the company are going to need from the data angle. And we have to pre-build all those tools so that the last mile of actually fetching the data, for the user who fires up that tool and tries to get data, is absolutely simple, right? Zero friction. You want them to get to your tool and get at that data within seconds. They should not have to think about how to get the data at all. That should be taken out of the equation, right? So that's the other key learning here. While we're talking about tools, what about dashboards, right? So when you build that single unified data platform, you could also have, you know, dashboards sitting on top of that, right? Which are showing you fancy graphs of how your metrics are doing. Many a time, dashboards work only as reporting tools; they tell you what is happening, right? In this case, you know, you see a drop right at the very end. If you ask the question, why has that drop happened, the dashboard doesn't really help you, right? It doesn't tell you why exactly that drop happened, right? So this is a dashboard as a reporting tool. What we should ideally be gunning for is a dashboard as an analytics tool, right? Not only should you be able to see what's happened, you should be able to click on it, interact with it, ask questions and get answers, right? Drill-downs, filters, pivots, right? These are the different kinds of functionality that you would wish to have in a dashboard. So when we're talking about building all this tooling, be it dashboards, be it SQL, be it any other tool, the idea is to take away all friction possible, right? Someone who wants to get at the data should be able to do it very, very quickly, very, very easily, right? What about metrics, right?
Most organizations will have a host of different metrics with which they track performance. What about them? If everyone talks about the same metrics differently, they become that much harder to understand, right? These are just dummy graphs. I just put some dummy data in Excel and made different kinds of charts, right? It's the same data. Imagine you're sitting in a room with, I don't know, your senior leadership, and someone shows you a graph and then you have to make sense out of it, right? The first, I don't know, 45 seconds are sort of a mad rush of trying to make sense of what that data is even telling you, right? What is that blue line? What is that red line? What is the metric? What is moving up? What is moving down? So that's a lot of effort, right? And again, it's intimidating, right? You're sitting in a room full of smart people and you want to be the first person who actually looks at the data and makes an insightful comment, right? How do you make that happen? The way to make that happen is to standardize: one, all the important metrics for our organization, so we have standard definitions for them. And number two, you standardize the visual representations of them, how they look on a chart. The blue bar always talks about category one. The orange bar always talks about category two. Now, I mean, I'm guessing this sounds like a police state, right? You're making way too many rules and you're saying everything has to be standardized. Well, here's the deal, right? If you're talking about everybody in the company having a common understanding of the data, this is vitally important, right? Again, the idea is that people who are data savvy, or who are comfortable with data, should ideally not have an edge over other people in the company when it comes to understanding data. What they do with the data, of course, is a different matter, right? So that's the key point here.
Yeah, so build tooling that will make it easy for everyone in the company to access data, standardize everything, standardize all the metrics, and this is what will build shared assumptions, right? You know, a level playing field is what we talked about. Everyone has equal access to data, understands it in the same way, right? Nobody's at a disadvantage when you're talking about data. Now, you've built all this tooling, right? You've built, you know, this amazing dashboard tool, I don't know, a multi-layered SQL tool, you've built this unified data platform. How about adoption? Do you think that, just because you've built it, people are gonna suddenly start using it? Absolutely not, right? You've taken the first step, which is, you know, you've taken away people's excuse for not using it, but that doesn't mean people are gonna start using it suddenly, right? So imagine at Widgets Inc, we have this design review meeting going on. We're gonna release a brand new app next week and we're gonna review this, right? This is how your meeting has to start. Every day, our app sees 250,000 unique visitors. 48% of them buy widgets, on average 350 rupees each. Our hypothesis is that our new app design will increase the conversion ratio by 5%. This is the preamble; then you review whatever the new design is. And this is how you end. We feel that the checkout flow optimization and blah, blah, blah will contribute to giving us a 5% increase in conversion rates. So what have we done? We've taken this meeting, which was, you know, gonna show you a design for a new app, and you have converted it, you have laid a frame of data around it, right? You're making sure that everybody in the company is actually leading their conversations with data and ending their conversations with data, right? Every meeting, every document, every presentation should ideally go this way.
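As a quick back-of-envelope check on the numbers in that preamble, here is the arithmetic spelled out. One assumption is mine, not the speaker's: I'm reading "increase the conversion ratio by 5%" as a relative lift, i.e. 48% becoming 48% × 1.05.

```python
# Numbers from the design-review preamble above.
daily_visitors = 250_000
conversion = 0.48          # fraction of visitors who buy
avg_order_value = 350      # rupees

# Assumption: the 5% lift is relative (48% -> 48% * 1.05).
baseline_revenue = daily_visitors * conversion * avg_order_value
lifted_revenue = daily_visitors * conversion * 1.05 * avg_order_value

print(f"baseline: {baseline_revenue:,.0f} rupees/day")
print(f"uplift:   {lifted_revenue - baseline_revenue:,.0f} rupees/day")
```

So framing the meeting with data isn't decoration: the preamble implicitly puts a rupee value (about 2.1 million a day, under this reading) on the design change being reviewed.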
I talked about levers, right? I talked about how you build a pillar, then you take a lever and you crank it around over and over and over again. This is that lever, right? This is how you take that lever and crank it around again. Every conversation within the company has to lead with data and end with data, right? Now, I talked about how we build this whole data platform, the tooling and all that. What's the value, right? We haven't really shown what the true value of this tooling is, right? Now, this is a question which most companies would probably agree is very important for them, right? Who are our most valuable customers, right? And you can change that question to anything else, right? Who are our most reliable suppliers? What are our top-selling products, right? You can frame any question that you want to answer in this way, right? There are many ways of actually trying to figure this out. For example, everybody who transacts more than 5,000 rupees a month is your valuable customer. There are other methodologies as well. The problem with these is that they identify your valuable customers after the fact, right? Only after you finish the month are you gonna say, okay, last month, this is the list of my top thousand customers, right? What if we were able to do this before the fact? What if we were able to predict who is going to be our most valuable customers next month? Now, this is a very, very, very simple, basic machine learning model which will do just that. And the intent of showing this or talking about this today is not to show you that we can do machine learning. Of course you can. The idea is, this is how your tooling is gonna get used, right? Imagine this, I don't know, data scientist or analyst who's sitting in your company, who's gonna build this model. He or she needs access to all this data. Earlier, it used to take, I don't know, three weeks, a month to get their hands on this data. Now, it's gonna take 15 minutes, right?
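The talk doesn't specify which model was on the slide, so here is a toy sketch of the general idea: a plain logistic regression, written from scratch, that predicts "valuable next month" from past behavior. The features, thresholds, and the six-row dataset are all invented for illustration; a real model would be trained on the platform's historical data.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train_logistic(X, y, lr=0.1, epochs=2000):
    """Plain stochastic-gradient-descent logistic regression, no libraries."""
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            p = sigmoid(sum(wj * xj for wj, xj in zip(w, xi)) + b)
            err = p - yi
            w = [wj - lr * err * xj for wj, xj in zip(w, xi)]
            b -= lr * err
    return w, b

# Invented toy data. Features: [orders last month, avg basket in 100s of rupees].
X = [[0, 1], [1, 1], [1, 2], [4, 3], [6, 5], [8, 4]]
y = [0, 0, 0, 1, 1, 1]   # 1 = turned out to be a valuable customer next month

w, b = train_logistic(X, y)
predict = lambda x: sigmoid(sum(wj * xj for wj, xj in zip(w, x)) + b)
print(predict([7, 4]) > 0.5, predict([1, 1]) > 0.5)  # heavy user vs. light user
```

The model itself is deliberately boring; the speaker's point is about the input side. Assembling those feature columns across silos used to be the three-week part, and the unified platform makes it the 15-minute part.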
What are we doing here? We're freeing up our team, our people, to focus on business problems, not on data problems. They don't have to worry about how to get the data. Data is freely available. Now, they can get started on the business problems, right? We talked about shared values, right? Everybody in the company shares a common understanding of the importance of data. This is what building shared values means, right? So this was the first pillar, data democratization, right? And I hope I've been able to convey, at least in some small way, how powerful this pillar is, right? This is about getting everybody on the same page as far as data goes. Now, we're gonna talk about the second pillar, right? Which is experimentation. Experimentation, also known as A/B testing, has been around for a long time, right? I'm guessing people would have at least seen it or heard about it before, if not actually used it, right? One of the key philosophies for experimentation, and also agile, is build, measure, learn, right? You build something, you test it out, you learn something from how you test it out. And if it works, you pursue it, you iterate on it, you improve it and you put it into your product. If it doesn't work, you throw it away, right? Now, the question is, how do you make this a repeatable process? We talked about that lever and that crank; how can you turn this into a lever and a crank so that you can do this over and over again, right? This actually is sort of an expanded version of build, measure, learn. And again, there's a lot of literature on this online. I'm not talking about something that I've invented; this is just about how to operationalize it, right? The main criticism of build, measure, learn is deciding what you build, right? What should be that next idea that you should try out, right?
And since you are anyway gonna run a very, very, very quick and dirty experiment, many a time it turns out that people think it's okay to try out random stuff, right? You don't have to spend too much time researching, you know, what to build and what not to build. Anyway, you're gonna run an experiment for a week and throw it away if it doesn't work. So, you know, go ahead and do whatever you like. This is very dangerous thinking. So how do you change that, right? Here's the thing about ideas, right? Everybody has a gut, every gut has a feeling, and everybody with a gut feeling thinks they're right, right? And again, there is no way for us to distinguish my gut feeling from yours, right? There is just no way. Is the COO's gut always right? We don't know that, right? For example: Widgets Inc could sell twice as many widgets if they would advertise in colleges. Repeat buying would go through the roof if you sent flowers with every delivery. You can think of whatever you want here, right? How do we decide what are the right ideas to even test out, even to try out? Can we build some process around that? Here's one way to do that, right? And again, there are a lot of different ways of doing this; this is just one way which I find works very well. You can frame any question you like, or any important question for your organization, into this format, into this template, right? What are the actions that most widely separate valuable users from others? Rephrasing that for a different problem: what separates orders that get delivered on time from those that don't, right? What separates teams that retain people from teams that lose people, right? You can take any domain, any problem, and frame it within this template, right? What you do is figure out what the set of actions is, right? This circle here is some actions, and the other circle is who the valuable customers are.
And again, if you change the question, the labels will change, that's all. You find those actions where this intersection, this middle area, is the largest as a proportion of the entire set, right? And I'll give you some examples. You take this list of actions which comes out of this analysis, and again, this is a data-based, completely scientific way of doing this, right? You take each action at a time, figure out what that intersection is. You take the ones with the strongest correlations and rank them, right? And then your experimentation framework is just gonna take one idea at a time, try it out, turn the crank, and figure out if it works or not. If it does, great. Pursue it, iterate on it. If it doesn't, you throw it away and you start again with the next idea, right? Now, what we're talking about here is taking ideas and taking the discretion out of them, right? We are making it almost a formulaic thing, right? Many people would argue that you're taking creativity out of the system, right? You're saying that I can't even be creative about what ideas we try out. That's not actually what we're doing. All we're trying to do is focus creativity on the right parts of the problem, right? Creativity should ideally be focused on the solutions, right? This piece, the ideas-from-data piece, is just trying to identify what the right problems to solve are, right? Solutions are never gonna come from data. Solutions are always going to come from creativity, passion, energy, right? Data is only gonna tell you if your solutions have worked or not, right? So ideas as a process is effectively reorienting your creativity, or focusing it on the right part of the problem, which is coming up with solutions. And here's an example, right? Let's say did not see an out-of-stock in their first week came out to be the highest correlated idea, right? Correlating with valuable customers. How do we make this happen?
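The intersection-and-rank step the speaker describes can be sketched in a few lines. The action names, user IDs, and the scoring choice (overlap with valuable users as a fraction of everyone who did the action) are all invented stand-ins; the real version would pull both sets from the unified data platform and use a proper correlation measure.

```python
# Hypothetical cohorts: who turned out to be valuable, and who did what.
valuable = {"u1", "u2", "u3", "u4"}
all_users = valuable | {"u5", "u6", "u7", "u8", "u9", "u10"}

did_action = {
    "no_out_of_stock_first_week": {"u1", "u2", "u3", "u7"},
    "used_discount_coupon":       {"u1", "u4", "u5", "u6", "u8"},
    "opened_app_3_days_in_row":   {"u2", "u3", "u9"},
}

def score(users_who_did):
    """How concentrated is this action among valuable users?
    (The intersection as a proportion of everyone who did the action.)"""
    overlap = len(users_who_did & valuable)
    return overlap / len(users_who_did)

ranked = sorted(did_action, key=lambda a: score(did_action[a]), reverse=True)
print(ranked)  # top of the list becomes the first idea to run an experiment on
```

On this made-up data the top-ranked action is `no_out_of_stock_first_week`, which is exactly the kind of ranked list the experimentation framework then works through, one idea at a time.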
This is where the creativity should start flowing, right? This is where you should start thinking about how to make this happen, right? Maybe falsely show things as in stock, even when they are out of stock. Don't show the item at all if it's not in stock. I don't know. We could probably brainstorm and think of 20 more things, right? But this is where the creativity comes in. Then what do you do? You figure out what to build. The idea here is you build one thing. The idea, again, is to test out whether this way of solving the problem works, right? Whether this idea holds any potential for the future. What we're not trying to do is find the best possible solution. We can do that in the future, right? We can optimize what we've built as we go along. The first step is just to validate that idea, right? So build only one thing, test it out. I'm just following that ideas, build, code, measure, data, learn framework. What you do here is build a very, very simple MVP so that you can test the idea out. And again, this is very specific to what you're building, so we're not gonna spend too much time here. Now we've come to the major phase. This is sort of the most interesting phase when we talk about experimentation, right? It's very important to note that the success of that idea, or that hypothesis, is very different from the success of the experiment, right? The idea is successful if it actually works in practice. If customers don't see an out-of-stock in their first week, do they spend more on your app? Is that true or not? That's the success of the idea. Whether your experiment is actually able to discover what reality is, that's the success of the experiment. When we have made a process around what ideas we even prioritize, we have given ourselves the ability to be dispassionate about whether the idea works or not. It's not come from my gut or yours, right? So we don't care if it doesn't work.
We can always go to the next one; we have a long list, right? There is no pressure that, you know, the CEO really believes in this idea, so let's make sure the experiment tells us that, right? No pressure at all. You just take all of that out of the equation, right? So again, success of the idea and success of the experiment are two clearly distinct things. And this is something that we have to be very cautious about when we're running experiments. What metrics do we measure? How do we figure out whether the experiment proves that the idea works or not? Without going into too much detail, I would recommend having at least one or two people in the company who understand the statistics theory behind this and who are able to coach other people about it, who are able to lay the foundations of understanding, right? What you do ideally when you're running experiments is frame a very, very tight hypothesis, and then you launch the experiment to prove or disprove that hypothesis, right? For example: ensuring 100% availability during a new user's first week causes the percentage of new users placing at least three orders in their first two weeks to go up from 20% to 23%. Now, this reads like a lot is happening, so we can break it down. This is the metric we're gonna measure, the metric that's gonna tell us whether our idea is successful or not. This is the effect that our idea is going to have on that metric. And if we're able to achieve this, we say that the idea is successful, right? Sample size. Again, leaving aside the stats theory of exactly how you go about calculating sample sizes: there are free online calculators which you can use, right? It's not hard to do. What's the intuition, right? Across the top: is the hypothesis actually true in practice, in reality? We don't know that. There's no way for us to know that, right?
That's why we're running the experiment: to find out whether this thing actually works. On the left: what does the experiment tell us? Is the hypothesis true, yes or no? Ideally, you would want to land on one of those green check marks, right? If you do, your experiment was successful. You either discovered that the idea works or that it doesn't, and you found out what was actually true in reality, right? If you fall on one of those red crosses, it means your experiment has made a mistake. It either says the idea is great when the idea is actually a dud, or it says the idea is a dud when actually your idea is great, right? At a very, very intuitive level, sample size is the lever which is going to give you more confidence that you land on one of those green check marks. The higher your sample size, the more confident you will be of landing on a green check mark, right? Now, obviously, that doesn't mean we can use a huge sample. There are other constraints on how big a sample you can use. You ideally don't want to expose a new feature, a feature under trial, or an MVP product to a lot of people, right? So there is a natural tension there. Your experiment craves more users, a higher sample size, whereas there are other product-related constraints which are pushing the sample size down, right? Now, how do you run the experiment? It comes back to tooling. When we talked about data democratization, we talked about making it super simple, taking away the excuse of "I can't use this, I can't do this, it's too hard for me", right? Your tooling should ideally make it as simple as possible. You can either build this tooling in-house or you can buy it; there are options out there. But the idea is that you should have good, performant tooling in place so that running experiments is a breeze, right? Yeah? Also, the aspect of duration. Sure. How long would you run an experiment?
If you cut it too short, you could land on one of those red crosses. Absolutely. So an example of that would be that your metric says that sales will go up by 5%, right? And you start the experiment today and check the metric... Sorry. So the question was: what duration should you run an experiment for? It might just happen that if you run it for too short a time, you end up landing on one of those red crosses, right? And the example was: say the metric is sales, and the hypothesis is that sales will go up by 5% or something. You start the experiment today, you check the metric tomorrow, and it's up by 8% or something. You check back after a week, and maybe it's back down to 3%, right? In that case, you have made a mistake, right? How long should you run an experiment? There's no right answer. It's something you have to take a call on based on what the metric is and on how fast-changing your environment is. Do your metrics move on a daily basis, a weekly basis, a monthly basis? If you're talking about an established product, a small change in, say, sales or revenues will stick. It will change over time; it's not volatile, right? But if you're talking about a volatile metric or a volatile product, it could change. What you see today may not be true tomorrow, and might not be true the next week, right? So it's a little bit of an art rather than a science. But so long as you pick the right metric and you have enough knowledge about how that metric moves, based on your experience in the past, you can make some judgment about it. While you don't yet have that much experience with the metric, it's always a good idea to run an experiment for as long as possible. Obviously, you can't say "I'm running an experiment for six months", right? You can't. So start with at least a week or two, right?
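Going back to sample sizes for a moment: the hypothesis from earlier (moving the share of new users with three-plus orders from 20% to 23%) can be plugged into the standard two-proportion sample-size formula that those free online calculators implement. A sketch using only the Python standard library; the 5% significance level and 80% power are conventional defaults, not numbers from the talk.

```python
# Per-variant sample size needed to detect a move from p1 to p2,
# using the standard two-proportion formula.
from math import ceil, sqrt
from statistics import NormalDist

def sample_size(p1: float, p2: float, alpha: float = 0.05, power: float = 0.8) -> int:
    """Users needed in each of control and test."""
    z = NormalDist().inv_cdf
    z_alpha = z(1 - alpha / 2)   # two-sided significance level
    z_beta = z(power)            # desired power
    p_bar = (p1 + p2) / 2
    n = (z_alpha * sqrt(2 * p_bar * (1 - p_bar))
         + z_beta * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2 / (p1 - p2) ** 2
    return ceil(n)

print(sample_size(0.20, 0.23))  # about 2,900 users per variant
```

Note how the tension the talk describes shows up directly: the smaller the effect you want to detect, the more users the experiment demands.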
Until you get comfortable with what metrics you're using, start with a week or two, and then maybe you can run them much tighter than that, right? Does that answer your question? Right. So we talked about tooling. You can either build it or buy it, but it's important to have the right tooling. And again, once the experiment is done, this part is mechanical, right? You pull out the data and compare the metric across your control and test sets. You check for statistical significance. In a one-liner, what statistical significance means is: how confident are you that the change in the metric that you're seeing across these two sets is actually because of the product or feature that you've built, and not just due to random chance, right? And then, as the last step of your experimentation process, you do what you would do at the end of a sprint, right? You do a retro. You figure out how the experiment went and what you learned from it, not only from the feature or idea perspective, but in terms of the whole process of experimentation, right? When you run that next experiment, you want to do better, right? And again, we talked about that lever, that crank which you take and keep turning over and over again. So the key takeaway is: make it super simple to run experiments, and make the idea stage completely process-driven, right? Take discretion away from this stage. And again, just a tip: if you can, align experiment cycles to sprint cycles. Ideally, if you have a two-week sprint, while you're running an experiment in the current sprint, you're analyzing the experiment you ran in the previous sprint and planning the experiment for the next one, right? If you're able to align these two cycles, that makes it much more effective, right? Now, the key takeaway here was that lever, right? Experimentation is just those five or six steps, right?
The lever, you hang on to it, you turn the crank over and over again so that people start seeing value, right? The first time around, nobody will want to run an experiment. So for the first three, four, five times, adoption, more often than not, comes top-down, right? You have to tell them: you have to use it, trust me, you have to use it. You probably have to do that a few times. Over time, they're gonna start seeing value, right? You will reach a stage where people have experienced the magic, right? How experiments run, how the results pop back out and tell you something about reality. They'll experience that magic often enough that they'll start believing in it, right? And that's how you build shared belief. In conclusion, these are the two pillars: data democratization, making sure everybody's on a level playing field as far as data is concerned, and experimentation, right? Showing people, everybody in the organization, the power of data, right? And doing it over and over and over again. If you do this, right, over time, over maybe six months, maybe a year, you will find that you've built up your data culture. That's it. I've left my email ID at the end, so if anybody has feedback or any ideas around what we talked about, feel free to hit me up. Any questions before we end? Sure. So, I have not... Obviously, that's not practically possible. If some parts of your data need to be guarded, you can put access controls on those. But as far as possible, as far as practicable, the more access people have to data, the more trust you build in the system, and the better you're able to build those three things: the shared assumptions, values and beliefs. If your people get the idea that you're hiding something from them, they start trusting you less, right? I mean, that's just human nature. So yes, there are times when you have to keep certain sensitive data out of the public domain, out of everybody else's eyes.
But I would advise against doing that too much. Next question? Sure. Have I seen, in my experience, data show us something that actually turned out not to be true? Many a time. Many a time. Almost every other time, absolutely. And that's how you learn and get better. That's how you learn... which is experience, exactly. This is blood, sweat and tears. You make a mistake, you learn from it. That's what experience teaches you: how to read the data. So, the question is: are there any external influences on the data which are actually causing your experiment reads or your data reads to be wrong? Is it possible? Of course it's possible. How do you guard against it? You have to build different guard rails to make sure that doesn't happen. Your experimentation or data processes are not going to tell you that, unless there is some huge anomaly that you're seeing, maybe once in a while. More often than not, those safeguards are something that you have to build out completely separately. Possibly. Anything else? There are a few. I mean, I've evaluated a couple, and maybe one-on-one we can talk about what those are. One quick tip there: building something in-house is always better than whatever tool you can buy. But obviously you can't always do that; you might have other constraints on why you'd want to buy something. Sorry, just one last question... All right, we'll just talk offline. Thank you, folks.