When I moved to Condé Nast, I said, okay, we're going to rebuild the whole system. It's a publishing company, and they have Vogue, GQ, Wired, and so on. You need a new CMS, and you need a new front-end, and each individual Vogue, each individual GQ, each individual website is connected to a different CMS and has its own ecosystem, and you need to bring everything together. You have to think about: what is the core? Here, that means: what are the key templates or pages that people actually interact with? Is it the galleries? Is it an article page? Is it long-form pages? And by understanding that, you also need to understand the core, as in, what is the core, but also, what is the effect of the core? Is the core something that is completely functional, or is it something bigger? In this case you could say the core is a template, the article or the gallery. But actually, a template on its own doesn't work. It needs to be integrated with advertisement platforms. So how big is the core? Is it just one thing, or multiple things? The only way you can start is by realizing that you need to understand what the core is. Because if you just arrive on your first day, you're like, okay, where do I start? I mean, that's one approach. You can also start by just talking to people and understanding problems. That would be the other way I would do it. For example, if you join a company and they give you a product, and in the first week you realize you have a backlog with a lot of bugs, or suddenly you get pinged by a lot of people saying, hey, this is broken, or hey, when are you going to fix this?
I would go a step back and be like, okay, first of all, why do we have so many things that are broken? In my experience that's sometimes just a byproduct of communication, or of developers not necessarily understanding the problem that needed to be solved. Maybe someone said, do this this way, and they were like, okay, fine, I'll do it this way, but they actually broke something next to it on the other side because they didn't know it was connected. So if this is the case, if you join a team, also interview and talk to your engineers. Always ask: what would you like to fix? What would you like to improve? What are the things causing you the most pain as part of your development? By understanding that, you can then decide: if something is really, really broken, you need to assess whether you want to spend more time trying to fix it, or whether it's better to keep maintaining it low-touch and maybe build something in parallel that can address it even better. So having a backlog full of bugs doesn't necessarily mean you need to go and try to fix them all. You need to understand that there is something there you should tackle first, because what tends to happen is: okay, I'm going to somehow fix the bugs, but they're going to keep coming, and they're going to keep interrupting the development process. I know this sounds very obvious, and to me it's like, yeah, of course we need to fix the bugs, but really pay attention to the ones that are constantly popping up, not necessarily the same bug, but in the same area. In my case, for example, it could be problems with the ordering flow: when people want to order, maybe the order doesn't go through.
And sure, it's a critical thing, and maybe we put a patch in and continue with development. But if it's in a critical area and then you have another problem, maybe not in the same spot, but a problem with, I don't know, the settings of that page, yeah, I would try to be like, can we just look around at what's happening in this area? And probably either rethink it, or build something in the meantime that we can migrate and shift to, because what's the point of building features on top of something that is broken? I know that sometimes it's hard to have these conversations, especially if you are new to a company. You cannot be like, listen, it's my first day and I know this is broken, we need to fix it. Well, if you have enough context, do it. But I would definitely spend more time trying to understand the underlying causes of something like that. In my experience it has mostly been people, but not necessarily the people themselves; it's a misunderstanding of what the feature is, but also of the goal of that system, or that screen, or that product, or that specific flow. So yeah, that's one thing I would try to do if I get a lot of bugs.

Yeah, I have a particular view on agile. I have read almost all the books, I have seen a lot of videos, I really like to consume information, and I am not dogmatic about agile. For me it's very important to see the context, right? If I join a team, that's why I try to understand even the different styles of the people on the team, because in the end, people are the ones developing the product. If you don't see your team working very well, probably your product is not going to perform very well.
I'm not saying this is the standard for every product, but if you see that a product is broken, you can trace it back to, yeah, miscommunication or something like that. So, back to your question about agile: it is another tool. There are different methodologies. I would say, if you have a team that has always worked at startups, and no one on the team has worked in a big company, and you come from a big company, it's dangerous to go, okay, we're going to have all the ceremonies and daily stand-ups right away, because you don't know if that's needed. What is very interesting is that you can take the goal of each method, what it is trying to achieve, and ask, okay, how can this fit into this team? For example, if you have a team of developers who have always worked in silos, always fixing stuff, super reactive, and they might be very tired of that situation, you can say, okay, what about if we start working on one thing all together? It depends on the situation, but you can say, okay, this is one of the projects on which we are going to work together, and we're going to have a regular catch-up every two days or something like that. You keep it very light, because you can understand that the whole day-to-day can't be shifted right away to, okay, now everyone drops that and everyone does this. Another thing would be trying to understand what kind of developer or engineer works with you.
Sometimes you have people who like to think about the problem more, and other people who really like to execute, and they prefer to be told not only what to do, but with very clear acceptance criteria: okay, this is what we expect this feature to do, and this is the scenario in which this feature is used, or the user flows, something like that. Other people are really drawn into: just tell me the problem and tell me what success looks like. If you understand the balance, if you have a team full of people who want every single story spelled out, who need to know exactly how to implement something, you as a product manager need to understand this context so that you can either tailor or change their approach a bit if it doesn't match your way of working, or you can flex your style and be like, okay, I'm going to start writing the tickets in more detail, for example. In my experience, what was very tough for me was working with a team that was very mixed. Sometimes we had developers who were like, no, I don't care about the whole problem and the massive picture, I really want to know what is the thing that I need to implement. But they were working with a person who was more like, no, no, don't tell me exactly what to do, I want to know what the problem is, why we need to solve it, and what the metrics are. So what I tried to do in that situation was have a meeting every Friday in which I would try to please both audiences, in a way that was also effective for me to hear back what the issues would be if we went down that experimentation line.
So trying to hear that, but also explaining: okay, listen, I would go and drop all the stats, like, we saw this was happening in this part of the product, or this is what is important to solve. Another thing that I learned is to go with options. It depends on your team and also on the autonomy that you have, but I like to go to the teams with options: hey, we have these two or three big problems, which one do we think we need to tackle first, or which one would you like to tackle first? By giving them options, you're giving options where, even if they pick number three, that's going to be impactful for the product. A good product manager, and in my experience the people I really admire, are people who can bring all the context to the developers without bombarding them. And how do developers know if they're being successful with what they do? Every time you deploy, say, a new feature, an experiment, a product launch, something new, if you're able to feed back to them what the result was, that closes the loop, and that gets them excited, because it's kind of like, okay, I'm not just launching things live, I actually have an impact, a good one or a bad one, I don't know. But if you don't do that, then I think it's harder, as a product manager, to be like, hey, this is the most important thing. They're going to be like, yeah, I see this data, but is it really important? So a good practice is, when you launch something, to go back and say, hey, this is what happened, this is the effect. For example, last week there was a tiny thing that we needed to solve for the operations team. The operations team is a team that we have in-house, a small team. One developer made a change in the code just to make their life a bit easier.
Just to automate some stuff. So I went back to the operations team like, hey, now if you do this, it does all this stuff for you, and you will save maybe half an hour on each task. And they were like, oh my God, this is great, thank you very much. That was the reaction to a tiny change. And I went back to this developer and said, hey, listen, this is what they were saying. And he was like, oh, that's cool, that's quite good. So okay, it's a tiny feedback loop, but being able to say, this is what you did and this was the impact, also makes your relationship with them a bit stronger.

I found that when you have different projects or problems presented to the team, how does the team know what value each one is going to deliver? It depends how you frame it. If I say, okay, we have a problem with the sign-in, the login flow, and if we solve this problem we're going to affect 80% of the users, that's the data I can see, right? And problem B is upper funnel, so this is more about offers, and we want to work with, I don't know, the offers team to integrate their API so that we can show the most relevant offer for you. So you have two experiments. One experiment is, for example, changing the login flow, so the metric would be, say, logins, and that would affect 80% of users. However, if we work with the offers team, we can affect, say, 20% of the users, but we can also have this amount of impact on the business. So I just bring more data. I know both things are good for the product, and I know both will have a benefit, but I need to frame it in a way that they understand: if you work on solving problem A, this is going to be the impact; if you work on problem B, this is going to be the impact.
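That framing, reach times expected effect times value, can be sketched as a toy expected-impact comparison. Everything here is a made-up illustration: the function name, the lifts, and the per-user values are assumptions, not numbers from the conversation.

```python
# Toy expected-impact comparison for prioritizing two candidate problems.
# All figures are illustrative assumptions, not real data.

def expected_impact(affected_share: float, lift: float,
                    value_per_user: float, monthly_users: int) -> float:
    """Rough monthly value: users reached * expected lift * value per affected user."""
    return monthly_users * affected_share * lift * value_per_user

MONTHLY_USERS = 1_000_000

# Problem A: login flow. Touches 80% of users, small assumed per-user value.
impact_a = expected_impact(affected_share=0.80, lift=0.02,
                           value_per_user=1.0, monthly_users=MONTHLY_USERS)

# Problem B: offers integration. Touches only 20% of users,
# but with a higher assumed per-user value.
impact_b = expected_impact(affected_share=0.20, lift=0.05,
                           value_per_user=3.0, monthly_users=MONTHLY_USERS)

print(f"A: {impact_a:,.0f}  B: {impact_b:,.0f}")
```

The point of a sketch like this is not precision; it is that the smaller-reach problem can still win once you attach a value to each affected user, which is exactly the "20% of users but more business impact" framing above.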
Also, not everything is going to be quantitative data. You can also say, well, we did user research and we understood that, I don't know, the first order flow or the first booking flow is very painful, and within that flow, these are the areas that are really, really painful. Which ones do we think we can tackle now versus maybe in a month's time? Which ones have a logical order? Maybe you cannot tackle the last part if you don't tackle the first one. So you need to frame it. You need to tell them in advance what the impact is. Hopefully you know the impact, or you have an estimation of it. If you don't have a lot of data, then you need to either get it from somewhere else or have a proxy. So that would be one.

Well, the other thing was the approach. In general, what I would suggest is: when you have a bunch of developers, engineers, copywriters, even people from different roles in the company, I don't know, marketing, sales, and you're trying to talk about a problem that you want to solve or a feature that you need to implement, and you don't believe in that feature, use that more as a cue: okay, why am I having such a hard time justifying to my own team that we should build this? Why? Because that might be a signal that maybe we need to rethink this a bit, or maybe this is not the right thing to do. I got this especially at the very beginning of my career. I had a product in which, for the client, the product was more a marketing channel, but for the user it was their actual product to go and commute across London, so for the user the product was a bit broken. And I had a really, really hard time justifying to my team, like, no, let's not fix this.
Let's just change the UI, let's spend more time tweaking the way, say, the icon looks, rather than actually fixing another part of the code, which was going to take more time and for which I needed to go back to the person I was working for and ask for more budget. But I had a really hard time justifying to my team that we needed to spend more time on the icons rather than, well, we should work more on the API side of things. So that would be one approach: if you are in a meeting with your team and you're discussing a backlog, or stories, or however you like to work, and you're having a hard time justifying something, that's a signal for you to go back and rethink where that feeling comes from. And also because, depending on the company, not being able to justify your thinking can actually harm the progress of that feature. I remember at some point at Booking, I was somehow convinced that if I used certain APIs to offer more ways for people to log in, I would increase the chances of people logging in. So I was like, maybe we can integrate with a lot of APIs depending on the platform, say, VK in Russia or Line in Japan. They have a lot of APIs, so you can use them as a sign-in, a different kind of login flow. But I was not quite sure that was the right approach. For example, is our product very secure at the core? Is the data very secure? At that point I didn't feel like I had all those answers. So when I presented to the team, like, yeah, let's integrate with the VK API and the Line API and all these things, because I was not quite sure about it, they were like, why, though?
And I'm like, well, because we have all this traffic from all these countries, and this is the estimated impact if we do it. But because I was not really sure, they could see it, and they were like, okay, but what about the other problem? And I was like, okay, you know what, I cannot justify this. So I said, okay, let's park this for now, and after about two months we implemented it. So if you are in a situation in which you cannot completely justify something, it's not necessarily about your thought process; maybe you just don't feel that it's the right thing to do at that time. Just step back for a second, and then you can continue.

The other thing about the approach is, for example, how to deal with other people who might not know what product management is. This has been very, very common for me, except at Booking. When I joined Condé Nast, or when I was working for TfL or Ford, the concept of a product manager was so different or so distant from what they had in mind that people would think I was a project manager, or a marketing person; it was mostly those two roles. So people would come to me like, hey, can you tell me when things are going to be done? I mean, I can report on delivery, and I didn't want to set the wrong expectations, but I was like, hmm, I think you're not getting my job right. And the other thing was the marketing part: hey, what do you think about the social media strategy that we should have? And I'm like, I can comment on these things, but this is not necessarily what the product we are building is about.
What I would suggest in these cases, if you can do it, is to start bringing those people to your conversations with the team, or to sit with them while you look through the data, so that they understand that, okay, she's not just the person controlling the timelines. They get a sense of how to work with you as well. For example, with the sales team at Condé Nast, I worked with them a lot on: how do you sell Vogue? How do you sell GQ? Because in the end, I'm developing that product, so how do you sell it? And I learned a lot about how they promised things to advertisers, because they think, oh yeah, it's going to be done. By talking to them and working with them, you realize, okay, wait, so there's a whole process to get, say, the banners showing the client's ads. It's not that they are dumb or stupid, nothing like that. By bringing them in and talking to them on a regular basis, you see each other's worlds. That would be one. And also, try to sell your product. That's something that gives you a lot of insight into how people react. I know that maybe you are not in a position to go and try to sell your product, but if you have the opportunity, if it's something like a B2B product, go to a meeting with a potential client and try to sell that product. For example, at Condé Nast I talked a lot with product managers from Google who were there trying to sell new features or new integrations and things like that. To me, that felt like a good practice from their side as well, because they come away with even more knowledge. But yeah, the other one would be ops, the operations part.
If you can spend at least a week, or even a day, sitting next to someone doing customer success or customer support, you will get a better sense of what's going on with your product when people are using it. If you have tracking, and you have all the data structured in a way that you can analyze, that's good. But if you sit next to one operations person, you will see even how they solve the problems. So that's another way. And by doing that, they can also learn: okay, why are you even sitting next to me? And then you can educate them: well, because I'm developing the product, I need to know what's going on with the users. And they feel like, oh, okay, so if something comes up, can I feed it back to you? If you're in a startup, you could say, yeah, talk to me. If you're in a big company, well, maybe there are other ways of communicating. So yeah: try to sell your product, and try to support your product as a support person. That will give you a lot of feedback. And the other thing, in terms of approach to, say, new products: you can look at the competition, and that's fine. If you want to understand how other people are solving the problem, do so. But maybe there's an interesting angle you can take, because people tend to look at Facebook and Google across all verticals.
But you can also take the angle of: if we think this is a big problem for that user, what are the benefits I can show to them as a quick experiment? How can I measure this in a way that I can quickly get results, and then sit down with the team and start building towards that? I know it sounds a bit obvious, but you don't need to copy what Facebook and Google are doing. They do provide the standard experience, so you do need to understand what the standard is, but you don't necessarily need to follow it; you can be smart about it. For example, if I'm developing a product that is a chat app, yes, for sure, I will look at different chat apps in the market. But do I need to go and try to do the same just for the sake of, oh yeah, it's a chat app? Or can I understand how to leverage that standard for my own product? So that would be another one.

The other thing I wanted to ask you is: what has been the hardest or most complex situation you have had in terms of approaching a product?

Yeah, convincing everyone that the core is the same. You mentioned the onion, and asking questions, and you were trying to go through your presentation. It's just that a lot of people might perceive the core of the onion as something different, especially in B2B, where each person is responsible for a client that knows a lot more about a certain element of your product.

Okay, and how are you trying to solve that, for example?

We try; I'm not doing very well at the minute. It's ultimately bringing people back to the question of whether or not we all know what the true value of the product is, and if we have to keep asking that question for six months, we're going to keep asking that question.
Okay, and why do you think people are not agreeing on this? Why do you think that is the case?

In our example, it's because of the business model that we have and the fact that we have account managers for every customer. Each customer has a voice, each account manager is a voice for the customer, and across the team, not all the account managers see the same product.

Right. Completely, yeah. How was it at Booking.com? How was it convincing people that access was the core?

Yeah, so, at Booking, the thing is every team is responsible for something that can be measured, or they organize the teams in such a way that they're independent enough to work on something, even if they are part of a bigger department. So you could say, well, you have the conversion team, the team focusing only on conversion, but then they break it down in such a way that, okay, if you're focusing on the hotel page, that's your focus, and you can do whatever you want there. But then you need to convince people, so the conversation is more around what line of experimentation brings the most value, and what lines of experimentation other people might be really interested in following up, because that's how they spread their knowledge, I would say. For example, if you have a really successful experiment by changing the copy on the hotel page, another team will be like, hmm, wait a second, if changing the word here actually increases conversion, can I try this on my own product? So they will go and try that. The conversations are like that. Or you have the other part, which is the conflict: conflicting metrics. For example, if you're working on the conversion funnel, you care about people just booking, right?
I don't care about what happens after; I mean, you can care as a person, but in terms of metrics you are optimizing for getting more bookings. And then, when a person books a hotel, they enter another part of the business, the post-booking experience, or you could say retention or engagement. And those teams are trying to optimize for retention, or for a lower number of tickets created, things like that. Sometimes you would have teams trying to get more conversion, more bookings, and it doesn't matter how. Maybe one of the experiments is a copy change that says "book for free", and then in tiny copy, "pay when you stay". What happens is, as a team, you're like, oh my God, we're driving conversion up. But on the other side of the business, you get product managers going, what's going on with customer support? We're getting, I don't know, 10,000 tickets per day, even more than that, what's going on? And that's where you have the debate, yeah?

So how do you deal with so many competing metrics?

Yeah, good question. That's when you need something that sits a layer above those individual teams. That's where companies tend to have overarching goals. Some companies use OKRs as a framework to drive all these teams together. Or it could be KPIs, not necessarily objectives. But when you have this setup, individual teams doing things on their own, you do need to have either KPIs, goals, or joint metrics, and an understanding of why they are important.
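One concrete shape such a joint metric can take is a booking count that is debited for the downstream cost it creates, so the "book for free" trick stops looking like a win. This is a toy sketch; the metric name, the weights, and all the numbers are assumptions for illustration, not anything Booking.com actually uses.

```python
# Toy "joint metric" that a layer above individual teams might own:
# bookings only count as good if they don't generate downstream cost.
# All weights and figures are illustrative assumptions.

def quality_adjusted_bookings(bookings: int, cancellations: int,
                              support_tickets: int,
                              cancel_weight: float = 1.0,
                              ticket_weight: float = 0.5) -> float:
    """Bookings credited to a team, net of the downstream cost they create."""
    return bookings - cancel_weight * cancellations - ticket_weight * support_tickets

# A "book for free / pay when you stay" style experiment might look like this:
before = quality_adjusted_bookings(bookings=1000, cancellations=50, support_tickets=100)
after = quality_adjusted_bookings(bookings=1200, cancellations=300, support_tickets=600)

print(before, after)  # raw bookings went up, but the joint metric went down
```

Under these made-up weights, the conversion team's raw number improves while the shared metric declines, which is exactly the debate between the conversion side and the post-booking side described above.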
At Booking, for example, they introduced OKRs when I joined, and they were kind of experimenting with setting them, and it's hard to get the cadence right, because for a big company to do OKRs, they need to start, say, a quarter before, so it's always a game of catching up and planning ahead. But they were doing OKRs as a company: where are the company's priorities? Well, this year we want to do these three or four things. So they're really big, or not necessarily big, but wide. Say, we want to improve the booking experience, we want to go to China, we want to go to the US, for example. They have these massive goals, and then the following thing is, okay, how is each department going to contribute? Well, marketing is going to work on promoting Booking.com in the US by doing these things, and then tech is going to focus on this part of the priorities. So you start breaking the big goal down, and then you have different approaches there as well, because you have people saying, well, management only defines the big things, and each team goes and finds the best way to contribute to that. Or you have another approach, where management defines everything, and then the teams get organized accordingly. So, which one works best? Well, I really enjoyed working at Booking, and for example at Reki, I really enjoy having the freedom of understanding where we want to go and then defining the OKR for my team with my team, or my teams, depending on the setup. I think it's a good exercise. I would never underestimate the time it takes, because you do need to understand what you really want to do, and if you pick the wrong metrics, maybe halfway through the quarter you're like, why are we even doing this?
Like, why are we even optimizing for logins, when actually it's not logins, what is driving the most impact to get to this goal, so that would be one. Yeah. I understand that, like, this is not what we should do. You just cancel this project, change the test to do, and just change the KPIs for yourself, or what is the approach? For what? I mean, you just mentioned, like, okay, so you've been doing something, like optimizing logging, yeah, yeah, yeah, for like two months, and you understand, like, this is not the best way to hit your KPIs, and you find another way. So you just cancel logging work? Well, if you're gonna, so for example, if you're gonna completely cancel, I mean, sometimes you do need to be that aggressive and be like, stop this, just, but where you can just face out the work, so you could say, well, actually, I can give you a concrete example. I, at the beginning, in accounts, I was like, you know, signups was one of the KPIs, we needed to bring more signups, and it was the focus for, say, six months, we were trying to get people to sign up and in different parts of the products, and you know, you book your holiday, sorry, your holiday with your hotel, and it's like, create your account, manage your booking, and like, create your account, here, here, here, and the thing is, okay, fine, those experiments were positive in the sense that they were driving signups, but then we realized that the actual benefit of signing up in terms of metrics was not signing up, but actually confirming your account, confirming that you own that email address or that phone number, so the verification of that signup was the valuable thing, because if that was the case, then the hotel could talk to you, we could actually reach out if there were issues, you could actually go and change your primary email and have a secondary one, so, and also in terms of security, it was an extra layer to protect your account and recover access, so, you see, if we were just focusing on 
signups per se, it would be, well actually, at some point it was like, why, we were getting a lot of signups, but actually we're not solving the rest of issues that should be solved by having more signups, but to get to that stage, it's not like, oh yeah, we need to do verification, it's more like, wait, one second, so we have more signups, but still we have a lot of issues with people handling their bookings, why though, because we have more signups, and then, it's like, wait a second, but why, okay, we started to think about, well, to be able to manage your bookings, you need to be verified, because we're not gonna expose bookings to any, you know, if you are trying to login, maybe it will send you something, it will send something to your email address, but it was a fake email address, for example, you would never get that, so that was one, the other one was the security, like imagine that someone would take over your account on booking.com, we needed to alert you that, hey, there's a suspicious login here, but if it's an unverified email, you will never get it, or at least we would not, for sure, know that that was a communication, or the best way to communicate with you, so by having, so we were doing this experiment, and it was after six months, and we needed to say, you know what, the KPI is no sign-ups, so verify sign-ups, that had a negative effect in terms of the numbers, because of course, it's very easy to, I mean, you could even put a, you click a button, and you can create an account, I mean, you can do that, you can game fire, you can game the metric if you really want to, but verifying a sign-up now is a harder step, now we need to put even more attention to the flow, because it's not necessarily, okay, giving you the means to create an account, it's giving you the means to verify it, what if you lost access to that email address, can you verify with alternative methods, Facebook, Google, I don't know, that is connected to that email address, for 
example. So it shifted the approach that we had in the beginning, but it also shifted the focus and the iterations. It was not a hard stop; it was not like, drop everything, switch. It was more like, okay, let's finish that experimentation line, and then we started working on verified sign-ups. And me, as a program manager at Booking, I needed to communicate this to the directors and say: hey, from now on, we are not gonna measure ourselves in terms of sign-ups but verified sign-ups, and here's why, here's how it would affect your teams, and here's why you need to look at that metric when you run your experiments. And that's the other thing. At Booking, you look at your own metrics and your own experiments, but you also keep an eye on other people's, or other products', metrics. For example, I was also responsible for the navigation bar at Booking, the blue bar that you see on desktop. That bar had entry points to a lot of products owned by a lot of teams. There was one product which was a kind of discovery experience, and it had a very prominent link that, on smaller screens, took a lot of space, and no one was actually using it. So I was like, you know what, and here comes the rookie mistake.
I was like, let's run an experiment. The hypothesis was that if we removed that link, people would be able to navigate, to actually get to the settings page or sign in, more easily, because having more links and more options drains your brain; you have a higher cognitive load. If I present you with 10 links on a nav bar, you need to go and process each one. So the hypothesis was that by removing it, people would have an easier time finding what they need to find, which was the two main areas, like logging in, sign-ups and all that. So I removed it in an experiment, and at that time I didn't have the understanding of how removing something would affect other people's metrics. And then one day I did that, and after two or three days, the product owner for that team was like, Elena. I'm like, yeah. Did you remove the entry point for the product? And I'm like, oh yeah, we're running an experiment. And she's like, yeah, I saw a drop. I mean, of course, these experiments are never on 100% of the users, it's always contained; the experiment tool manages the traffic very well and distributes the experiment. But she was like, yeah, I just saw a drop. And then we were checking the experiment, and I'm like, hmm, yeah, I can see all her metrics were red on my experiment. So how do you solve this problem? Well, it depends what you want to achieve, but in the end I said: listen, this is our problem. The problem is that users on small screens, we believe, are having a hard time understanding where to go, especially because we have all these links. So our hypothesis is that by removing it, they will get access and all that.
So we agreed that, okay, we would not have the link there, but we would have it in the menu, and she was happy with this. So basically that entry point change hurt her metrics, but she was fine with the compromise, and then on the dashboard side we were gonna have a banner, so she was gonna experiment with that on the dashboard. And I learned from that experience that when I run experiments, I need to also check other people's main metrics. Of course, booking conversion is kind of the main one, because that's what's driving the business, and also what they know gives value to the customer: when someone books a room, it's exactly what they want to do, so for them it's a good metric. That's why that metric is so well understood by the whole company, and it's a way for everyone to get together around the metric. Is this the best approach, and will it work for everyone? I don't know, I don't think so, because what if that metric doesn't fit different areas of the company? But that metric was well understood, so everyone would run an experiment and then check: okay, it's not hurting conversion, maybe we can roll it out to everyone. Or, if it's hurting conversion, then you start having conversations about why and all that. So yeah, that's an example. How do you manage inaccurate time estimates when it comes to development? Right, right. The answer is: I don't. What I learned to do, and I don't know if this will work for you, but try it out. After all this time, I learned that trying to estimate stories is also time that you invest out of your development cycle. So if you have a team of, say, four people, then sitting down with them, trying to estimate the cards, that's time that you are investing in estimation.
So what I learned is, instead of obsessing about how accurate the estimation is, what I started to do is say: okay, this is a mobile app, so we have release cycles, so we are gonna start just having release windows. It's not an artificial deadline, I just created windows. So I said, if we start working on this, to solve this problem, do we believe that this could go in this release, or is it big enough to go in the following release? So what I do now is ask: is this problem something that you think can be released to users in two weeks, or in four weeks, or in six weeks? I give a timeframe that they can take back, digest, and be like, actually, I think this will go in the next release. So we have a different conversation, and this works for me because all these teams work with me and I don't need to coordinate with other people, which was a different case at Condé Nast. Condé Nast, for example, had three teams and three project managers, and I needed to coordinate with people around the world, and they had their own projects. So understanding when things were gonna be ready was super key for them. We had a different approach there, but I don't think it would work for me going forward. The approach was: we invested time trying to estimate how many cards, rather than story points, it was based on cards, and then you would have the program manager saying, okay, this is our velocity based on previous experience, and then it was just: okay, we can do 90 cards per sprint, therefore it will take, say, six months to deliver the whole thing.
I think what happens, and this is my point of view, is that when you are so obsessed with telling me exactly how big the task is, or how many story points, you are deviating from the actual feature or solution. You're having a conversation about when it's gonna be ready rather than whether this is the right approach for this problem. And I think some developers would see this as, oh, so annoying to have to estimate all these things. I don't know, because I don't have all the context, but if you do need to estimate, I would say keep it light, and do a re-estimation if you can after a week or so, after they've started to work on the problem or the feature, because estimation is very subjective. They may not have all the context, or you don't have all the context. You might think, yes, this is the thing that we need to do, and suddenly, oh my God, we find out that Instagram is launching this in a month's time, and we need to speed this up and actually add something on top. I don't know, I'm just making this up. But yeah, for me, what has worked very well is the launch windows for mobile development. I guess you could do something similar for web development, or anything in which you can release all the time. But give them the power to decide, or turn the conversation more into: okay, these are the chances, can we meet all this, what is the earliest that we can actually show this to users?
And I work with developers that are super pumped, and they really want to have things live, so it's also in their interest; I mean, they are motivated by launching things. So by giving them launch windows, it's kind of like a challenge, and they're like, I think we're gonna achieve this. And yeah, so far, from all the things I have tried, this is the one where I can actually release every two weeks, and I don't need to do sprints, because sprints are kind of artificial next to the release cycle. I do a bit of planning, but not necessarily the planning where everyone sits together. It's more like: this is what we want to achieve, this is the part of the flow that we want to solve. We work together with the designer, the designer works through a proposed solution, they also get some input, and then we start building. So yeah, that works very well. But I imagine that if you need to coordinate with other teams, you do need to have a sense of when, so if you can create those windows and communicate those windows to people, that might be a way to move forward. Other questions? Yeah? You said that you should at least try to sell your product, or do customer success, if you can. Yes. What are your thoughts on using your own products? Because as product managers, it's in our human nature to be very biased toward our own products and the relationships we've had in this space, so how can you be unbiased with product and users? Yeah, I always think to myself: I am not the user, I am not the user. And to be honest, I always have that question for people that work at Facebook or somewhere like that, because they might use Facebook and Instagram all the time, but we are not that kind of user anyway.
But yeah, I always think: I am not the user. So I try to find users that I can talk to, try to read how users talk, or how users are in their own forums. You can always find them. For example, for travelers, you can go to forums, websites, or blogs to see what they look for. And even as a reflex, every time I'm making a decision, or not even a decision, but when I'm like, oh, this might be interesting, oh, this is a great idea, I'm like: hmm, would it be a great idea, or a great thing to solve, for, say, a chef or an ops person or a salesperson? So yeah, I just say: I am not the user. And the other thing is, if I'm really, really passionate about a problem or a solution, what is the thing that is making me think this way? Is it something I saw in an interview? Is it something I saw in the data? Is it, I don't know, an idea the CEO had? I can be like, hmm, it sounds like a good idea, but is it really for the user, or is it a good idea for us, because we're the ones experiencing the pain of having to do this 10 times? This is very common for the operations team, for example. Right now, sometimes they're like, oh, it would be amazing if we could do all these things, and we put it in the product, and I'm like, yeah, but it seems that a real user would never even see that screen, and would never use the screen the way you use the screen. So yeah, it's just having that in my mind: you're not the user. For me, it works very well. And I'm never attached to, well, I am attached to satisfying the customer, but I'm never attached to the solution, because if you're really attached to the solution, then you might not even be willing to change it. When I work with designers, for example, I'm like: don't worry, this will evolve. Right now it seems like the perfect thing, but it will evolve, so don't get attached to it.
Yeah, so you are free to move. Yeah? What kind of tools do you use for experimentation? For experimentation, okay. At Booking, for example, they had their own in-house system, their own experiment tool. To get to that level, you do need a bunch of people in house who can do their own tracking and data ingestion and all that. Booking-level experimentation means you have an in-house team of, I don't remember how many people, more than 30, working on the experimentation tool. Apart from that, there are consumer tools that you can get on the market. I haven't used Optimizely, but I tried Optimize from Google. It was okay; it allowed me to change things like copy or some UI stuff on the screen. You can also do your own setup. For example, if you have something like Mixpanel, you can use it to track events, and you can also cluster users. And there are libraries like FunWithFlags that you can go and integrate, and they're open source. FunWithFlags was a cheap way of starting to experiment, because it's basically feature flags: you can define, to this group of users, show this feature, and to that group of users, show that feature, and then you can use an online calculator to measure things like statistical power and how many users you need for that cohort. There was another one that I used at some point, but I forget the name. So you can be very, very cheap, or you can be Booking-style, which is in-house. I never used Optimizely, so I don't know if it's a good tool; I have heard it's a bit expensive once you get past a certain threshold, I'm not sure.
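To make the "show this feature to this group, that feature to that group" idea concrete, here is a minimal sketch of deterministic cohort assignment, the core trick behind feature-flag experimentation. This is not the FunWithFlags API (that library is written in Elixir); the function name and split logic here are illustrative. Hashing the user id together with the experiment name gives each user a stable variant across sessions without storing any state.

```python
import hashlib

def variant_for(user_id: str, experiment: str,
                variants=("control", "treatment")) -> str:
    """Deterministically assign a user to a variant by hashing the
    user id together with the experiment name, so the same user always
    sees the same variant and different experiments split independently."""
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % len(variants)
    return variants[bucket]

# The assignment is stable: repeated calls return the same variant.
assert variant_for("user-42", "hide-discovery-link") == \
       variant_for("user-42", "hide-discovery-link")
```

Because the hash is uniform, large cohorts end up split roughly evenly between variants, which is what the traffic-distribution part of an experiment tool guarantees in a more sophisticated way.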
Or you can use things like Google Analytics, but for Google Analytics you need to make sure that you define the tracking correctly, so you can set up the dimensions and goals and see who's getting where in your funnel, for example. And Mixpanel has interesting things like properties per event, so you can do an integration with FunWithFlags and Mixpanel and track people that way. And usually, how much time do you allow for this experimentation? Do you have a specific deadline when you try to achieve a specific goal, or what's the timeframe? To launch the experiment? Yeah. And to leave the experiment running? Yeah. So to launch the experiment, well, that's the time it takes to build the experiment, so I don't know; for that, you need to talk to the developers. If there is something that is very time-sensitive, you might plan ahead. But for how long to leave the experiment running, that's why I'm saying there are tools like online calculators. To run an experiment, you need to make sure that you have enough traffic and enough people to have a good reading and make a real decision, so you need to have enough users exposed to that experiment for you to be able to say: oh, actually, the effect was real.
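The online calculators mentioned here typically implement the standard two-proportion sample-size formula: given a baseline conversion rate, the lift you want to detect, a significance level, and a desired power, they tell you how many users each arm needs. A rough Python sketch of that calculation, using the usual z-test approximation (the numbers in the example are illustrative, not from the talk):

```python
import math
from statistics import NormalDist

def sample_size_per_arm(p_control: float, p_variant: float,
                        alpha: float = 0.05, power: float = 0.8) -> int:
    """Approximate users needed per arm to detect the difference between
    two conversion rates with a two-sided two-proportion z-test."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)   # significance threshold
    z_beta = NormalDist().inv_cdf(power)            # power threshold
    p_bar = (p_control + p_variant) / 2             # pooled rate
    numerator = (z_alpha * math.sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * math.sqrt(p_control * (1 - p_control)
                                      + p_variant * (1 - p_variant))) ** 2
    return math.ceil(numerator / (p_control - p_variant) ** 2)

# Detecting a lift from 10% to 11% conversion needs roughly 15,000
# users per arm; a larger lift (10% to 15%) needs far fewer.
print(sample_size_per_arm(0.10, 0.11))
print(sample_size_per_arm(0.10, 0.15))
```

This is why you "leave the experiment running" until enough users have been exposed: with too little traffic, the test simply cannot distinguish a real effect from noise.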