Hello everyone, and welcome to today's Product School Webinar. My name is Makram Mansour and I will be talking about Managing an Experimentation Platform. A little bit about myself: I'm a Product Manager at LinkedIn, and I would love to hear from you and connect with you on LinkedIn; here's my LinkedIn profile. I have a Master's and PhD in Electrical Engineering from the University of Illinois at Urbana-Champaign, and I also attended the Stanford LEAD Program at the Stanford Graduate School of Business. Before LinkedIn, I was a Product Manager at Texas Instruments driving the Online Design Tools Program for TI, and before that I was an IC designer at Intel working in the Intel Server Chipsets Group. One of the important values I hold is that I consider myself an out-of-the-box thinker who's not afraid of taking risks, and I actually get more committed when people tell me it cannot be done. I strongly believe in the saying, "where there's a will, there's a way." On today's agenda, I will start by giving an overview of experimentation at LinkedIn. Then I will discuss how I prioritize my backlog and the prioritization framework I use. Then I will talk about how we have optimized our workflow, and I will close with some final notes on how to develop your vision and strategy. As you know, our vision at LinkedIn is to create economic opportunity for every member of the global workforce. It's our true north and what drives our business, products, and all our decisions internally and externally. Creating opportunity for every member is why we all come to work every day. Our mission is how we operationalize our vision: to connect the world's professionals to make them more productive and successful. Whether it's a profile created, a connection made, or an open job that got filled, all these actions contribute to building the LinkedIn Economic Graph, a digital mapping of the global economy across these six dimensions.
The LinkedIn data behind the Economic Graph is actually mind-blowing. We have 96 million profile actions per day, 30 billion feed updates per month, and 180 million messages sent every day, to list just a few figures. All of this is happening as people connect with each other and talk over the platform. This complex growth engine is full of network effects. Just like the butterfly effect, we have the network effect on social platforms, and one thing we learned over the years is that even small localized changes can have massive impact. For example, an AI engineer working on the PYMK algorithm (People You May Know) could make some small changes, and this could cause unforeseen negative impact on sessions. So imagine now that we have so many different teams making different changes, customer-facing and infrastructure-facing, and all of these network effects could lead to different consequences. Maintaining and accelerating our growth requires a strong discipline around experimentation and data. Here's another example, typical not only for LinkedIn but for any online e-commerce website. Say we have an ad banner on the top, and a designer decides to reduce the ad banner by 5 pixels. Looking at the click-through-rate chart, as soon as this change got deployed into production, we started seeing a small dip in the click-through rate. So even a small cosmetic change can have a significant impact, and without A/B testing we would not be able to attribute that impact to the change. The moral here is that we test everything at LinkedIn, whether it's front-end, ranking algorithms, or back-end infra: every product team and every infra team is running these A/B tests and experiments.
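To make the click-through-rate example concrete, here is a minimal sketch of the kind of statistical check an A/B test performs: a two-proportion z-test on control versus treatment CTR. This is a generic textbook method, not LinkedIn's actual pipeline, and the numbers in the example are hypothetical.

```python
import math

def ctr_dip_significant(clicks_a, views_a, clicks_b, views_b):
    """Two-proportion z-test: is variant B's click-through rate
    significantly lower than control A's? (one-sided, alpha = 0.05)"""
    p_a = clicks_a / views_a
    p_b = clicks_b / views_b
    # Pooled proportion under the null hypothesis of no difference
    p = (clicks_a + clicks_b) / (views_a + views_b)
    se = math.sqrt(p * (1 - p) * (1 / views_a + 1 / views_b))
    z = (p_b - p_a) / se
    # One-sided test for a dip: reject if z is below the critical value
    return z < -1.645

# 2.00% CTR in control vs 1.90% in treatment, 1M views per arm:
# a 0.1-point dip that is clearly significant at this traffic volume
print(ctr_dip_significant(20_000, 1_000_000, 19_000, 1_000_000))
```

Note that the same dip measured on only 10,000 views per arm would not be significant, which is why gradual ramping to a large enough audience matters before drawing conclusions.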
That's the only way for us to make sure that any change we introduce on the platform is introduced in a controlled manner, where we see its impact gradually. We check our true north metrics, signpost metrics, and guardrail metrics, making sure they are all safe, our members are safe, and all of these important metrics stay in check as we deploy changes and introduce them to the outside world. This drives a huge amount of activity on our experimentation platform: we see more than 100 new tests being run every day and 400 tests or features being ramped, and there's a lot of data you can see here in terms of the activity on our platform. To support LinkedIn's goal and requirement of testing everything, we built a state-of-the-art, large-scale experimentation platform at LinkedIn. It handles tracking data events at more than 90,000 queries per second. We have an offline infrastructure with petabytes of data being computed. On top of that we compute the Unified Metrics Platform: 20,000 metrics computed every day, 8,000 of which are A/B-testable metrics. All of this feeds into the experimentation platform, our charting platform, as well as our anomaly detection and alerting infrastructure. We call the LinkedIn experimentation platform T-Rex, which stands for Targeting, Ramping, and Experimentation. Targeting helps teams run experiments on different audience groups, for example based on location, job title, company, or industry. Ramping allows them to safely introduce a feature, ramping it up or de-ramping it, independent of code deployment. So even though a new feature has been deployed and is available in production, through ramping and de-ramping teams are able to make it accessible to members or remove it. And obviously the biggest piece is experimentation.
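Percentage ramping that is independent of code deployment is commonly implemented by deterministically hashing each member into a bucket and comparing the bucket against the current ramp percentage, which lives in configuration rather than code. The sketch below shows that general pattern under those assumptions; it is not T-Rex's actual implementation, and the function and experiment names are hypothetical.

```python
import hashlib

def in_ramp(member_id: str, experiment: str, ramp_percent: float) -> bool:
    """Deterministically bucket a member into [0, 100) and compare the
    bucket to the current ramp percentage. The same member always lands
    in the same bucket, so raising the percentage only adds members and
    never moves anyone out of the treatment group."""
    key = f"{experiment}:{member_id}".encode()
    bucket = int(hashlib.md5(key).hexdigest(), 16) % 10_000 / 100.0
    return bucket < ramp_percent

# At a 10% ramp, roughly one in ten members sees the feature
treated = sum(in_ramp(f"member-{i}", "new-feed-ranker", 10.0)
              for i in range(100_000))
print(treated / 100_000)  # ≈ 0.10
```

Because the bucketing is a pure function of the member and experiment IDs, de-ramping a misbehaving feature is just a config change back to 0%, with no redeploy.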
It's an advanced experimentation infrastructure that handles large-scale data, with state-of-the-art features like multivariate testing, advanced randomization, reporting and alerting, variance reduction techniques, and even, from the metric owner's point of view, surfacing the most impactful experiments. This platform plays a critical role as checks and balances at LinkedIn, ensuring member safety and maintaining our guardrails as we push to advance our growth. As you have seen so far, there's a lot of demand on the LinkedIn experimentation platform. Every team, every individual, every engineer at LinkedIn touches the T-Rex experimentation platform, and obviously a lot of requests come our way. For us to be able to prioritize all of these requests, we developed an objective prioritization framework. Its purpose is to quantify business value so we can compare apples to apples between requests coming from different teams at LinkedIn. For example, how can we compare a feature request from the flagship team, who work on consumer-facing features, to a request from an infra team working on infrastructure capabilities? How can we compare them apples to apples so that we can prioritize? This makes our decision-making process data-driven and transparent, and it sets clear guidelines on what input we need in order to prioritize these requests. The prioritization framework we built is based on four key pillars. The first one is value, which measures the impact of the ask. To make it flexible enough to cover the different use cases of value for LinkedIn, it covers site-up, meaning infrastructure capabilities such as whether the site is down, for example.
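The talk mentions variance reduction techniques only in passing. One widely used technique in experimentation platforms is CUPED (Deng et al.), which uses each unit's pre-experiment value of the metric as a control covariate to shrink variance without biasing the mean. The sketch below is a generic illustration of that idea, not LinkedIn's actual implementation.

```python
def cuped_adjust(y, x):
    """CUPED adjustment: y is each user's metric during the experiment,
    x is the same metric for the same user before the experiment.
    Returns adjusted values with the same mean as y but lower variance
    whenever x and y are correlated."""
    n = len(y)
    mean_y = sum(y) / n
    mean_x = sum(x) / n
    cov = sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y)) / n
    var_x = sum((xi - mean_x) ** 2 for xi in x) / n
    theta = cov / var_x  # regression coefficient of y on x
    # Subtracting theta * (x - mean_x) removes the predictable part of y
    return [yi - theta * (xi - mean_x) for xi, yi in zip(x, y)]
```

Because the adjustment subtracts a mean-zero quantity, the treatment-effect estimate is unchanged while confidence intervals tighten, so experiments reach significance with less traffic or less time.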
It also covers product revenue, key metric lifts, and user satisfaction. The second pillar is leverage: how many users are going to touch this feature in the next 12 months? The third is urgency: we want to know how urgent the request is. Are you blocked and unable to proceed with your work without this feature, are you inconvenienced, or do you have a workaround? And lastly, we consider the cost of implementing the feature in terms of engineering quarters of effort. For the value rubric, we provide different options so that someone submitting a request can pick the top two most important factors that justify the impact. For example, a PM from LinkedIn Marketing Solutions who is working on advertiser budget split testing can pick revenue and metrics as the two most important justifications for their feature request. They could say this feature has more than 30 million dollars of revenue potential, so they pick that option, and it can also drive an important metric lift for LinkedIn Marketing Solutions, so they select that option as well. By contrast, someone on an infrastructure team submitting a request for infrastructure changes that the T-Rex platform needs to support would use engineering productivity as justification. So you can see how different teams are able to pick from these different value pillars and provide impact justifications that feed into our backlog prioritization formula. Similarly, for the other rubrics we have identified different mappings.
For example, for leverage we figure out how many users are potential users of the new feature and pick the corresponding number. For urgency, we try to understand whether users are blocked, whether there is a workaround, or whether it's a nice-to-have capability. And we use T-shirt-size rough cost estimation to identify the effort involved in implementing the feature. Finally, we compute the impact score, which is the multiplication of leverage times value times urgency, and we prioritize our backlog from the biggest impact score going down. But we do realize that some asks have a small cost, so we also compute the return on investment, which is the impact score divided by the cost. When we identify high-ROI items in our backlog, we are able to override the ranking and push those items higher. And this is an ideal mapping where we strive to keep a balance between quick wins, which are low effort but low impact; home runs, which are low effort and big impact; big bets, which are big effort and big impact; and the fallbacks, which are big effort but medium impact. That's a good balanced approach: we always look at the backlog and see how it is reflected there.
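The scoring described above can be sketched directly. This is a toy illustration of the formula from the talk (impact = leverage × value × urgency, ROI = impact ÷ cost); the backlog items and rubric numbers are hypothetical, and the real rubric mappings are LinkedIn's own.

```python
def impact_score(leverage: int, value: int, urgency: int) -> int:
    """Impact score as described in the talk: leverage x value x urgency."""
    return leverage * value * urgency

def roi(leverage: int, value: int, urgency: int, cost: int) -> float:
    """Return on investment: impact score divided by cost,
    where cost is a T-shirt-sized engineering-effort estimate."""
    return impact_score(leverage, value, urgency) / cost

# Hypothetical backlog items: (name, leverage, value, urgency, cost)
backlog = [
    ("budget split testing", 4, 5, 3, 8),
    ("infra migration",      3, 4, 2, 5),
    ("ui polish",            2, 2, 1, 1),
]

# Rank by impact score first; high-ROI cheap items can then be
# manually promoted, as the talk describes
ranked = sorted(backlog, key=lambda item: impact_score(*item[1:4]),
                reverse=True)
for name, l, v, u, c in ranked:
    print(name, impact_score(l, v, u), round(roi(l, v, u, c), 1))
```

The multiplicative form means a request that scores low on any single pillar (for example, zero urgency) cannot climb the backlog on the strength of the others, which is part of what keeps the comparison objective across teams.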
Next I'm going to talk about our workflow and how we have optimized it over the years. The idea behind it is that we want to apply a design-thinking approach with experimentation at the core of our purpose. What we have talked about so far is that, through the prioritization framework and backlog prioritization, we first focus on doing the right thing: the most important, high-impact work. Now that we have identified that work, we need to do it right, and that's the idea behind the workflow. How can we do high-impact, highly important work right? The workflow has two main phases. The first is designing with empathy: here we gain insights into what our customers really want and why, we build clarity on the before-and-after experience, and we define our OKRs. The second is deploying with confidence: this is where we gradually release and ramp, through percentage ramping, to efficiently address the unforeseen issues we did not even think of early on; then we measure the success metrics and our OKRs, the objectives and key results; and then we announce the impact as well. It's not only "we have these cool features," to which people would say "so what?" We want to address the "so what?" question by really focusing on the impact we made, together with testimonials, as well as engaging with our users to get that feedback and iterate if necessary. The first phase we have is Learn. In the learning phase we really understand our user stories, identify the core use cases, justify and prioritize the requests, and put together a product requirements document that clearly defines the requirements as well as the OKRs, the metrics, and the objectives we want to achieve. Then we move into the ideation phase, and that's where we do the design thinking
approach: we do flow diagrams, flowcharts, and wireframes, and really engage with our users so that we can quickly iterate and uncover unseen requirements we were not thinking about at a high level, and we identify the non-objectives and non-goals and are clear about them. After we have signed off on the PRD, we work on the engineering design. This is where we focus on the hi-fi design and the RFC: we put together an RFC, a request for change, and with our engineering teams involved, the infrastructure teams and dependency teams are all able to see the changes and the design, provide their input, and understand that there are dependencies and changes required from them. Especially in a multi-product, cross-dependent software and cloud infrastructure, it's not only one team involved in the development activity, so it's very important to follow these steps to make sure all of these things are uncovered in the design phase. This helps us have a smoother deployment and code development, and keeps our roadmap and project timeline clear. After the design sign-offs, we can start building. As part of the design we define the MVP phase, and then we work on code development, focusing on modular code design, unit testing, and acceptance testing, all incorporated into that. After the bug bash, we go beyond the bug bash and start ramping. We focus on the primary team who requested the feature: we ramp it to that particular team so that they get the chance to use it, and we get the chance to work with them on pilot projects so that we can see the impact, get early feedback, identify any early issues they might have seen, and quickly incorporate and iterate with them. This way we also get the chance
to have them use it, see the impact across that team's org, and collect testimonials. Then we build the documentation and training, and finally we announce and deploy to the company. For example, in the case of T-Rex, we announce the features as well as the impact they made, not only on the T-Rex metrics but on our org metrics and our company metrics as well, together with testimonials. Here's an example of a feature-request PRD template that we put together, and we found it very successful here at T-Rex. It helps everyone focus on the problem statement and core use cases, put together the prioritization details we talked about before, and focus on the OKRs, clearly defining the objectives and the key results we are looking at. Then it helps us go forward into the ideation phase, focusing on the problem area and the before and after states, and allowing the teams to brainstorm together. All of this helps us start putting together the milestones: what is pre-MVP, what features are in the MVP phase, as well as future phases if needed. And finally we have a sign-off section, which we found very important: all the key stakeholders get the chance to review, comment, and sign off on the document. One more note about putting together your product vision and strategy, which are very critical. You need to be very clear about them, because if you're not clear about your vision, strategy, and priorities, then you're just working with no clear objectives and no clear goals. One great practice we have at LinkedIn is to put together a vision-to-values statement. We start with the vision, which is the dream, the future; this is what inspires us about the product in terms of the impact it will make, so that's the focus of the vision statement. The mission describes the goals and the purpose of the product: why the product
exists today, what its goals are, and what's eventually going to get us to that future of the vision. Then we put together the target audience, and here we list our primary user personas: the primary users of this product. We need to be very clear on them and even try to prioritize these user personas. We should also be clear about the non-users, especially if you have a high-demand product with a big backlog of requests; that will really play a critical factor, so that you know which personas you are focusing your product on. If you are a platform product, then you most likely will have producers as well as consumers, and you will have personas grouped around them. Take Airbnb, for example: they have the hosts, as well as the consumers, the guests who stay in those places. For the hosts you will have admin dashboards and different capabilities focused on the producer persona, while different experiences are offered for the consumer persona. So be very clear on who your producers are, and your strategy should be very clear in terms of how you are going to build up your product features and your platform features. This extends to the strategy as well: list down your strategic objectives and roadmap initiatives. For example, you might initially focus on your producers, setting up the capabilities so that they can list their offerings, and then start opening up to your consumers and the different persona groups. Again, this falls down to your priorities: you will need to stack-rank all your critical initiatives, and one thing that will really help is if you can answer, "If we could only do one thing this quarter, what would that be?" Really try to be hyper-focused in terms of your priorities, on how you can achieve the initiatives that build on your strategy to reach your target audience and get
to your mission. As we do this work, we need to be very clear on our objectives, and the best way to do that is to express them through metrics. Be clear on what your true north metrics are. Obviously, true north metrics are not easy to move in a short period of time, but these are the things that define your key vision, the big moving metrics you are trying to lift. The signpost metrics, on the other hand, are the signposts you can measure in a more granular manner, to see whether you are moving in the direction of your true north metrics. And as you push forward on your true north and signpost metrics, you want to make sure your guardrail metrics are kept safe; you need to be very clear about which guardrail metrics you want to monitor and not hurt in your product. Finally, you need to be clear on your values: the guiding principles that are going to help you make your day-to-day decisions, whether it's member values, like keeping our members safe, or prioritization values, in terms of how you are going to prioritize your work, like the prioritization framework I showed you today. I hope you found this presentation helpful and insightful. I would love to hear from you; please reach out and connect with me on LinkedIn, and let's chat. And if you are interested in any of the templates I showed, let me know and I would be happy to share them with you. Thank you.