Hi everyone. I'm excited to be here today, and I'm looking forward to sharing some insights from my experience. My name is Olga Kuritsina, and I'm a Product Manager at Farfetch. Before my current role, I spent considerable time launching and developing machine learning and data products in various roles. For example, I worked on personalized communication and optimization in marketing tech products. Before that, I held different product leadership roles at Mail.RU. For today's theme, I will draw on my experience creating data products for advertisers on an ad platform, which allowed me to work at the intersection of machine learning and advertising. Finally, I had an opportunity to delve into search and recommendations in e-commerce, also at Mail.RU. In our conversation today, I'll draw from these experiences to share some practical knowledge and tips that I hope can be applied to other products and organizations.

First, I'd like to present the key takeaways from this session. Each of these points represents a crucial aspect of the process and the learnings I took from my experience building machine learning-based products. The first one is about crafting the product story to guide product development and help the team understand the end goal. The next point is the critical task of aligning the team on core and related metrics. This alignment ensures that the team is focusing on what truly matters for the product's success, and it also helps define what success looks like and how you are going to measure it. Closely connected to that is the next topic, managing stakeholder expectations. Obviously this is a crucial aspect of product management in general, but here I'm going to focus on some distinctions and highlights specific to machine learning-based products. Number four is about data.
The value of the outcomes is directly linked to the quality and representativeness of the data. What I want to highlight are some thoughts and potential tips on how to think about data when you build this type of product. And finally, I will cover testing hypotheses in a cost-efficient way. That's important for any type of product, but here I want to highlight the ideas I think are most interesting for machine learning-based products. Obviously this is not a full guide to the theme, just some highlights on how to make product development more efficient.

So let's dive into the first milestone: crafting the product story. When we think about traditional products, we usually describe them through the user's interaction with the interface. A product manager can detail what the user can do with the interface, what will happen when they click a button, and so on. In other words, the interface serves as a map that helps us explain what is expected from the product. With machine learning products, however, this changes. Machine learning products often don't have an interface, or at least don't have one in the same way traditional products do. So a product manager can face the challenge of describing the feature in a different way. To close the gap, we need to craft a story: a story that explains how users will interact with the product, what the expectations are, and what results they should anticipate in different scenarios. The story helps synchronize understanding of the product across teams, and it also supports product managers in thinking through the solution in a comprehensive way.

Let's consider a familiar example: product recommendations in e-commerce. We usually see product recommendations on the product detail page, somewhere under the main product information. On the surface, the interface for this feature is quite simple: usually it's a preview of products.
But what the interface doesn't reveal is what happens behind the scenes, and I think that's the most interesting part. So how can we navigate this? What works, I think, is to delve into specific use cases and answer key questions about how users interact with the product. Some examples: What kind of customer is this? Would there be any difference for a new customer versus an existing customer? Should the recommendations be the same or different for them? Which product criteria are most important for customers? For example, are there any product parameters we think are necessary to keep the same? Are there any parameters that are more important for a customer based on, say, customer research? Which criteria are not that important and could be more flexible? We can think through these kinds of questions about customer expectations and describe the answers through different kinds of stories: how and what the customer will see in different situations. By addressing these questions, we essentially build up the story that helps us define and build the product in the end, and also align the team around it.

The next point is about key metrics. It goes without saying that we need to define key metrics for a product, right? These metrics should be high level, and they should be directly associated with company objectives. The caveat is that there can be an expectation that many different metrics can be improved simultaneously. Pretty often that's not the case; there can even be conflicts between different metrics. For example, taking e-commerce again, we might be focusing on increasing customer engagement, but without considering other metrics this could lead to a decline in profitability if we don't take product margins into account.
Understanding the trade-offs between the different key metrics for the company will help to prioritize the efforts, define the product strategy, and define those key metrics. Because metrics can conflict, and because it's pretty challenging to improve all of them simultaneously, we need to align on key metrics and also on counter-metrics: metrics we are not going to improve, but which we aim to maintain at a certain level while continuing to focus on the key metrics. The challenge here is to define the trade-offs and align on the key metric, agreeing that we need to choose one or maybe several of them, but not all at the same time.

What I think is worth mentioning here is that sometimes the key metric can't be directly measured, or it takes a long time to really move it. For example, lifetime value in some products is simply a metric that moves really slowly. In this case, proxy metrics can be especially helpful, as they offer a more immediate feedback loop, allow quick iterations, and let us learn from that feedback. For example, in one of the products I worked on, we wanted to increase the number of closed deals on the platform. We wanted to measure deals between buyer and seller, but the deals were happening offline, and we didn't always know that a deal had really happened. And in 100% of cases we had delays in the data, so it was really challenging to learn from it, because it took weeks before the data actually reached our platform. So we decided to use the number of contacts between buyer and seller as a proxy metric. In our particular case, it helped us learn faster and use this metric as the key metric for our algorithms, which helped the team move quicker with experiments.
While key metrics and proxy metrics give us insight into our product's direct impact, we should not neglect the metrics that assess how data is presented to users. These related metrics are essential, as they provide a comprehensive picture of the user experience and help us prevent potential missteps. Some examples, not the full list, of these metrics are on the slide. In essence, related metrics help us understand how to improve the performance of our products. They provide a more holistic view of product performance, helping us understand its impact and giving us guidance on where to invest across various areas of improvement. By focusing on these related metrics, we can ensure that our users are presented with accurate, diverse, and novel data, or at least we know where improvements are needed.

As a logical next step, I want to highlight managing stakeholder expectations. Previously we touched on the fact that improving core metrics simultaneously isn't always feasible. Different stakeholders across the organization can be concerned about different metrics, for example profitability, overall sales volume, customer engagement, et cetera. And I guess because of the buzz around machine learning, there can sometimes be an expectation that we can create a miracle and improve everything at the same time. I think this is why it's essential to manage stakeholder expectations and explain at a high level how the product works. Taking the time to explain how the solution works at a high level can significantly enhance understanding among all stakeholders. And this shared understanding can aid in improving solutions, ideating new features, and even producing better bug reports.
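To make the related metrics from a moment ago a bit more concrete, here is a minimal sketch of two of them for recommendations, diversity and novelty, using a hypothetical item-to-category mapping (none of these names come from a real catalog):

```python
# Hypothetical mapping from recommended items to their categories.
CATEGORY = {"p1": "shoes", "p2": "shoes", "p3": "bags", "p4": "dresses"}

def category_diversity(slate):
    """Share of distinct categories in a recommended slate (1.0 = all different)."""
    cats = [CATEGORY[item] for item in slate]
    return len(set(cats)) / len(cats)

def novelty(slate, already_seen):
    """Share of recommended items the user has not interacted with before."""
    unseen = [item for item in slate if item not in already_seen]
    return len(unseen) / len(slate)

slate = ["p1", "p2", "p3", "p4"]
print(category_diversity(slate))                   # 0.75: three categories across four items
print(novelty(slate, already_seen={"p1"}))         # 0.75: three of four items are new
```

Tracked over time alongside the key metric, simple ratios like these show whether an engagement win is coming at the cost of showing users the same narrow slice of the catalog.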
With this in mind, I think we should consider discussing the key metrics and related metrics we're using to measure success, the types of data we use, and also a high-level overview of the underlying algorithms and the principles behind them. Machine learning solutions can sometimes be perceived as a black box by those outside the team, and this lack of visibility can lead to two issues. On one hand, it can lead to a lack of belief in the efficiency of the solution, simply because it's not fully understood or because there is no visibility given to other teams. On the other hand, there can be overly optimistic expectations, such as expecting to move all the metrics at the same time without, let's say, corresponding effort from the teams. By breaking down the complexity and providing insight into how the product works, we can build trust and align expectations, ensuring that everyone is on the same page about the product's capabilities and limitations.

The next point is about data. Embracing real data is a crucial step in developing machine learning products. By using real-world data, something really close to what happens in production, we can significantly enhance performance and accuracy, especially in contrast with using datasets downloaded from the internet. Real data is rich, it's messy, it's often missing parts, but it is what truly reflects the environment where our product needs to function. For example, in one of my projects we were developing a solution to recognize objects in images for user-uploaded content on a platform. Our initial tests on high-quality images, which we took ourselves in the office with a white wall behind the object, performed really well. But it proved to be quite different in real life: our users were uploading pictures with multiple potential main objects, and the quality was quite different from our test images.
Our initial solution had a hard time identifying the main object and what was happening in the image, which led us to realize how crucial it was to base our solution on real data, and how time-efficient it is to start working with real data from the beginning. Another reason this is helpful is that using real data also gives an idea of the cost of running the solution in production: by doing this, we can anticipate challenges related to accessing and processing data. In short, using real data from the start allows us to create a more robust, realistic, cost-effective machine learning solution that is ready to operate in the real environment.

And finally, when we are working on machine learning products, it's crucial to remember that building models and building infrastructure to test ideas can be really expensive. So instead of going all in, I think we should aim to test ideas through a lean approach. Three points on this. First, I think it can sometimes be efficient to postpone implementing machine learning and start with simple rules and guidelines. Sometimes this approach can give us 80% of the result with 20% of the effort. Additionally, it helps us gather more data on the impact of investing in a particular area. One example to illustrate this idea: imagine the team has a hypothesis that, in e-commerce, cross-category recommendations could enhance the customer experience. For example, if the customer is looking at a phone, we could suggest cases for that type of phone. Sounds pretty logical, right? There are two approaches here. One is to invest a lot of time and resources in building infrastructure and algorithms to predict these complementary categories. Or we could hand-pick several category pairs, like phones and phone cases, implement them pretty quickly, and test them to validate the hypothesis. The second approach is much quicker.
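As a sketch of what the quick, hand-picked variant could look like (the category names and catalog shape are purely illustrative, not from a real system):

```python
# Hand-picked complementary category pairs instead of a trained model --
# just enough to test the cross-category hypothesis cheaply.
COMPLEMENTARY = {
    "phones": ["phone_cases", "screen_protectors"],
    "laptops": ["laptop_bags", "mice"],
}

def cross_category_recs(product_category, catalog):
    """Return catalog items from the hand-picked complementary categories."""
    targets = set(COMPLEMENTARY.get(product_category, []))
    return [item for item in catalog if item["category"] in targets]

catalog = [
    {"id": "c1", "category": "phone_cases"},
    {"id": "m1", "category": "mice"},
]
print(cross_category_recs("phones", catalog))  # -> only the phone case
```

If an A/B test on these few pairs shows impact, that both justifies investing in the full-scale model and reveals what data it would need.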
Of course, it won't have the same coverage. But it will give us an idea of the potential impact, the cost of a solution, and whether we have the data needed to build the full-scale solution.

Another approach is to start with an MVP of the product, which brings several advantages. First, an MVP helps us understand what data is needed for the final product, even if it means doing some part of the job manually at first. Second, it confirms that the solution is technically feasible and helps define what infrastructure is needed to support it, what it will cost, and what the quality will be. And third, it also helps us measure the impact, so we can understand whether we want and need to continue investing in this area.

The last point regarding cost-efficient hypothesis testing is more about tech. There is an idea floating around that once you have developed a technology, it can be applied to pretty much everything, right? In practice, it doesn't work that way. Let's use image recognition as an example. Imagine we have a team working on content moderation. If they want to use image recognition, their focus would be on identifying specific objects in an image, maybe some inappropriate content. But if we take the same technology and apply it to a different problem, say auto-filling product listings, the example I mentioned before, suddenly it's a whole new challenge, and the solution would be completely different from the first one, even if some parts of the technology are the same. So while it might be tempting to develop a technology and then search for a problem it can solve, it is far more efficient to first identify the problem and then tailor the technology to solve it. I think this applies to any technology we use.
Aligning tech development with specific product ideas can save us from spinning our wheels and maximize our impact. As we come to the close of this presentation, I want to recap the key ideas I shared today. First, a product story can guide product development and help the team understand the end goal better. Second, aligning on the core and related metrics ensures that the team is focusing on what truly matters for the product's success. Third, managing stakeholder expectations is crucial, especially when dealing with such complex products. Next, the value of our outcomes is highly dependent on the quality and representativeness of our data. And finally, testing our assumptions effectively and cost-efficiently can help us find a more successful way to achieve results. If you would like to explore this area a little more, feel free to reach out; I would be more than happy to share links to articles and materials to support this session. And thank you for your time today.