Hey, I'm Rajagopal. I started out as an engineer at Yahoo, worked with Zynga, and later worked with Myntra as a product manager for the shopping experience. What I'm going to share with you today is a few things we used to do at Zynga and Myntra: we always drove user-experience-related decisions with data. I'll give a brief overview of all of it, which you could take back and dive deep into. Okay, so the first thing: why do we have to make data-driven decisions when it comes to user experience? It has always been perceived as an art form. The image of a designer in most people's minds is someone who deals with art, but that's not what UX is all about. It's about the overall experience a user gets across various touch points, including the web product that you build. And the reason we should be driving most of these decisions using data is that if you don't have a standard process in place, you might just not be able to make the right decision most of the time. So the idea is to increase the probability of making the right decision by bringing process around how we make that decision. We want to be able to get the UX-related decisions we make right most of the time. That is one thing. The second is that we should be able to repeat it consistently. If we get it right once but can't get it right the second time, it doesn't make sense. The moment we have a predictable mechanism for how we make these decisions, it becomes repeatable and has a higher probability of being the right one. And the other thing is that when it comes to user experience, everyone thinks they are an expert. "Sir, let's turn around and see. See, this angle is correct. Much better, sir. Sir, you are so..." You know how frustrating this can become for a UX designer, or for anyone.
When sales and marketing folks get their hands on user experience, they think they know the users better. So everyone thinks they know what the right user experience for a given thing is going to be, but that's not true. Until you launch the product out into the wild and let the users tell you how the experience is, you won't know whether it's the right one. So we're going to take this mock website as the example. It's a mock of Zoomcar. Let's assume we're managing the user experience for this particular product. The product lets you book a self-drive car by specifying a pickup time and a return time and selecting a car model. The moment you look for available cars, you get a list from which you choose one, fill out your details, and rent it. This is the sample website we'll use throughout the talk. So the first rule of thumb is: track almost every piece of data that can be tracked on a website. Like the previous speaker said, a lot of times you'll want a piece of data from two months back, but you won't have it unless you tracked it well in advance, even though you might not have seen the need for it at the time you set up tracking. So the rule of thumb is: track every click, every hover, every keyboard usage, and the success or failure outcome of any action the user takes, as well as information about the user. This could mean the user's email address and other user-identifiable information. There are privacy concerns around that, but let's ignore them for a moment. The second thing would be page-level context, which is information on what went into the page when it was displayed to the user. Most of the pages we deal with are dynamic in nature, and the content could differ from one user to the next, so you should have tracked context about the page.
That is, if the page has, say, ten widgets of which a different set of eight shows dynamically for different users, you need to know which set of eight a particular user actually encountered. The other thing you need to track is session-level context, which could include anything that spans the browsing session, and we should be tracking all actions and events. So in the case of this example, let's look at what we'd track for this widget. If you notice, there's a sign-in/register button on top, and if the user wants to rent a car, he has to register or sign in here as well. So you'd probably want to track where the user started the sign-in or register process from. That's one thing: the source of the sign-in or register. And you track the sign-in/register form itself, every action the user takes on it. If you look at this book-a-car widget, there's a date picker, and the date picker has a text box as well. So the user could type there, or select a date from the picker. The moment the user selects or types in a date, you want to make sure you've recorded that event, and you also want to record whether the typed date is valid or not, and things like that, so that you can go back and figure out why something went wrong when the user tried to submit this form. You'd also track which car model the user searched for, and whether check-availability returned a success or error response; if it returns an error response, you'd better track the error code as well. Having this basic level of tracking in place for the entire website lets you answer a lot of questions about what is going on. For instance, you could figure out when users register. Do most users register when they are about to check out?
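As a minimal sketch of what this kind of event tracking might look like on the booking widget: every event name, field, and value below is illustrative, not taken from any real analytics SDK.

```python
# Sketch of client-side event tracking for the book-a-car widget.
# All event names, fields, and values here are hypothetical.

events = []  # in practice, events would be sent asynchronously to a collector

def track(event_name, **context):
    """Record an event along with its page- and session-level context."""
    events.append({"event": event_name, **context})

# The user types a date into the pickup-date text box; record how the
# date was entered and whether it is valid, so form failures can be traced.
track("pickup_date_entered",
      method="typed",            # "typed" vs "picked" from the calendar
      value="2016-13-45",
      valid=False,
      page="home", session_id="s123")

# The check-availability call fails; record the error code too.
track("check_availability_result",
      status="error", error_code="NO_CARS_IN_RANGE",
      page="home", session_id="s123")

print(len(events))          # 2 events recorded
print(events[0]["valid"])   # False
```

The point of carrying `page` and `session_id` on every event is that you can later join an individual failure back to the exact page state and browsing session in which it happened.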
That is, when they are about to book the rental car, or do they normally register as soon as they come onto the page? And if you look at it, there are these three links on top. You'd want to be able to quantify the importance of every item on a page. By tracking all the clicks and navigations that happen on the page, you can figure out whether Tariff is an important link or not. For instance, a lot of people might never read Policies, so you'd get a sense of how important Tariff is compared to Policies; if you ever need to clear up some space, you'd know which one to remove. You'd also find which field users struggle to fill out the most, because you'd see that one particular field throws up a lot of errors, which means it's the hardest field to fill. So essentially, there's a bunch of questions you can answer using all the data tracked with basic events and session context. These are questions that give you a sense of what is happening on the website, and what is important and what is not. And the moment you understand your website inside and out, you can start making improvements over it and give users a better experience. So I'm going to walk you through a bunch of techniques that let you assess what's going wrong on a website and what the users might be wanting. I'll walk through the tools first, and then talk about how we could use them to give a better experience. So a funnel is nothing but a series of numbers that gives you a sense of what percentage of users progress from one step to another. If there's a flow, you divide it up into multiple phases, each phase becomes one piece of the funnel, and you start tracking what percentage of users move from one phase to the next.
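The funnel idea above can be sketched as a simple computation over step counts. The step names and the numbers below are the illustrative ones from this talk, not real data.

```python
# Sketch of a conversion funnel: what fraction of users reach each step.
# Step names and counts are illustrative (matching the talk's example).
funnel = [
    ("landed", 100),
    ("searched", 72),
    ("tried_to_book", 54),
    ("booked", 18),
]

# Step-over-step conversion: each phase as a percentage of the previous one.
for (step, count), (_, prev) in zip(funnel[1:], funnel):
    pct = 100 * count / prev
    print(f"{step}: {pct:.0f}% of previous step")

# End-to-end conversion: the last phase as a percentage of the first.
overall = 100 * funnel[-1][1] / funnel[0][1]
print(f"overall conversion: {overall:.0f}%")  # 18%
```

Watching the step-over-step numbers, rather than only the end-to-end one, is what tells you which specific phase of the flow suddenly shrank.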
And in this case, say 100 people land on the page, 72 move to the search page, 54 end up trying to book one (that is, select a car and click the book button), and then 18 end up booking one. There's one more thing you could use, called cohort analysis. When I say you do a cohort analysis, what I mean is that you slice and dice users by various criteria: you segment users (you could even segment sessions) and look at the various metrics you care about for each of those segments. This enables you to isolate a problem, or isolate a user need that is being expressed. Let me walk you through an example so that it gives you a better sense. Imagine you wake up one day and find that this part of the funnel has suddenly shrunk down to, say, 30%. That is, the first phase is still 100%, the second is still 72%, but only about 30% of people end up trying to book a car at all. You start wondering what could be going wrong. One thing you could do is check whether something is radically wrong with the website, which you'd see by looking at the basic site metrics and looking for errors. Let's assume all that didn't give you any clue. The next thing you could do is split users up by various parameters: demographic information, what class of cars they use, the date they joined, whether they've already made a booking or not. You start comparing the various segments with each other and see if the problem is with one or two of those segments rather than with the entire population. When a number like this drops, it need not necessarily drop across the whole population. That is the point.
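The segmentation step can be sketched as grouping sessions by a user attribute and comparing the booking-attempt rate per group. The attribute name, segment labels, and data below are all made up for illustration.

```python
# Sketch of cohort analysis: segment sessions by how the user rents
# (hourly vs multi-day) and compare conversion per segment.
# All data below is fabricated for illustration.
from collections import defaultdict

sessions = [
    {"rental_type": "hourly", "tried_to_book": True},
    {"rental_type": "hourly", "tried_to_book": True},
    {"rental_type": "hourly", "tried_to_book": False},
    {"rental_type": "multi_day", "tried_to_book": False},
    {"rental_type": "multi_day", "tried_to_book": False},
    {"rental_type": "multi_day", "tried_to_book": True},
]

totals, converted = defaultdict(int), defaultdict(int)
for s in sessions:
    totals[s["rental_type"]] += 1
    converted[s["rental_type"]] += s["tried_to_book"]

# Booking-attempt rate per segment: a drop confined to one segment
# shows up here even when the aggregate number hides it.
rates = {seg: converted[seg] / totals[seg] for seg in totals}
print(rates)
```

In this toy data the hourly segment converts at 2/3 while the multi-day segment converts at 1/3, which is exactly the kind of asymmetry that tells you the drop is not happening across the whole population.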
So what you could do is this: let's assume we segmented by various dimensions, and we finally ended up segmenting by whether the user rented a car for a long term or a short term in the past, and you end up with two final comparisons. The first thing you're looking at is all the users who rent cars only for a few hours. The second set is users who rent cars for a few days. So there could be a set of users who use a car just to go to a movie or a restaurant and return; those users would be looking for a short time slot. In the first case, the users would be trying to pick a pickup time and return time just a few hours apart, whereas in the second case they'd be choosing pickup and return times a few days apart. And you realize that for the users who book on an hourly basis, there's not much difference, whereas for the users who normally book for the longer term, there is a lot of difference, and that is where the decline in conversion is happening. The moment you realize this, you can start looking at what could be going wrong for these users. It could well be that the next few days are fragmented into multiple pieces by all the people booking for just a few hours, so a person looking to book a car for multiple days is unable to find an available one. How do we come to realize that? You could follow a few techniques. One: if you're doing screen recording, you can see what the users are actually doing on the website. You segment users by the same criteria as in the previous analysis, pick a few users who fall into the bucket of booking a car for multiple days, and start watching the screen recordings. You get an idea of...
For instance, if these users were going back and forth quite a lot (say the user filled up this form with a certain date, came to the next page, then went back, changed the date, came back here again, and so on), you'd get a sense that the problem is availability, because you see that cars aren't available on that page. And you could get to know this even by looking at all the events you've tracked, since you would have tracked page context. So in page context, you could track something like the percentage of items available. In this example we're displaying, say, five different cars; if we track that three out of five were available when this user visited, we'd get to know whether there's a correlation between the number of vehicles available for a given user and the percentage of users who moved on to the next page. You could make use of these two and figure out whether that is the reason for the problem. And the moment you realize the reason, there are multiple ways to fix it: you start hypothesizing about what the user might be looking for. Maybe you show the user which date ranges around the one he specified might have availability, or you suggest an alternate car, or move the available cars to the top, or whatever. You come up with multiple solutions, and you don't know which one is going to work. You could do split testing, but before we get to split testing, let me give you a brief idea of how you could go about tracking all the data we saw earlier. Screen recording, heat maps, click maps and all that can be tracked using ClickTale. And all the events and everything else (page context, session context, and event context) can be tracked with Google Analytics, Mixpanel, and KISSmetrics.
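The correlation check described above can be sketched as grouping page views by how many cars were available (from the tracked page context) and comparing the proceed-to-next-page rate. The field names and data are fabricated for illustration.

```python
# Sketch: does the number of available cars shown on the results page
# correlate with whether the user proceeded? Page-context data is made up.
page_views = [
    {"cars_available": 5, "proceeded": True},
    {"cars_available": 4, "proceeded": True},
    {"cars_available": 3, "proceeded": True},
    {"cars_available": 1, "proceeded": False},
    {"cars_available": 0, "proceeded": False},
    {"cars_available": 0, "proceeded": False},
]

def rate(views):
    """Fraction of page views where the user moved on to the next page."""
    return sum(v["proceeded"] for v in views) / len(views)

# Split page views into "plenty available" vs "little available".
high = [v for v in page_views if v["cars_available"] >= 3]
low = [v for v in page_views if v["cars_available"] < 3]
print(rate(high), rate(low))  # 1.0 vs 0.0 in this toy data
```

A gap this stark between the two groups would be strong evidence that availability, not the page design itself, is what's blocking the multi-day renters.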
And the moment you mature beyond this point and get to where you have to slice and dice data freely, Tableau and QlikView might come in handy, and then you could graduate to specialized databases and whatnot. So what does split testing mean? You have a bunch of hypotheses in hand: you think X might work or Y might work, but you want to figure out which one is actually going to work. When this is the case, and you have an existing website in place, you use the existing website as the baseline, and you roll something out only if it's going to improve the experience over the baseline. And you could either roll out bigger chunks and test whether the hypothesis works in a holistic way, or make smaller changes. For instance, say you realize that users are struggling to find available cars among the cars being displayed. One thing you could do is just put up a filter right there that lets the user look only at the available cars. In the first example we looked at, the car is rentable from only one location. Imagine the company opens up a few more locations. You'd have a location selector, and then which one do you make the user choose first? Does the user always have to choose a car model and then figure out where that model is available, or would the user rather choose a location and then see which car models are available there? This is something that would vary between user groups as well. So when you're testing a bigger change on the website, you call it holistic testing; and when you do minor optimizations to make a funnel tighter, that is fine too. So let me walk you through how the whole thought process would go in this case. First, you need to identify a user need in order to make a user-experience improvement. And where does the user need come from?
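One practical detail of split testing is how users get bucketed into variants. A common approach (sketched below under assumed names; this is not how any particular A/B tool does it) is to hash the user id so that the same user always sees the same variant across sessions.

```python
# Sketch of split-test assignment: hash the user id so each user is
# deterministically and consistently bucketed into "baseline" (the
# existing site) or a treatment like "availability_filter".
# Variant names and user ids are hypothetical.
import hashlib

def assign_variant(user_id, variants=("baseline", "availability_filter")):
    """Deterministically map a user id to one of the variants."""
    digest = hashlib.md5(user_id.encode()).hexdigest()
    return variants[int(digest, 16) % len(variants)]

# The same user always lands in the same bucket across sessions:
print(assign_variant("user-42") == assign_variant("user-42"))  # True

# And the hash spreads users roughly evenly across the buckets:
buckets = [assign_variant(f"user-{i}") for i in range(1000)]
print(buckets.count("baseline"))  # roughly half of 1000
```

Once users are bucketed this way, you compare the funnel conversion of each bucket against the baseline and only ship the variant if it genuinely improves on it.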
The user might communicate it to you directly, through either explicit or implicit communication. In the example we saw, the user was going back and forth between these two pages; that is an implicit communication of his need for the ability to find available cars easily. In the case of explicit communication, the user might have sent you feedback, or you might have had a user-research session which tells you that the user is looking for a particular thing. Or it could be a hypothesis you start with: you come up with a bunch of hypotheses on what might be useful to users, and you go about validating them. Or it could be a business need that you try to fit to the customer's need. So the way the whole thing works is that you start analyzing the tracked data and figure out what is going wrong for the users. The moment you realize people are going back and forth between these two pages, you get to know there is a need for a better way to find availability. The other way to find it is to talk to users and ask them what they're struggling with. Even if you find a problem in the metrics but can't isolate the exact cause, you could just call up those users, talk to them, and identify user needs. Or you could hypothesize user needs, and then it's really important to validate whether people actually need it or not. Again, you do this either by talking to users or by going to your analytics tools and figuring out whether you have enough data in support of your hypothesis. And the moment you find the user need, you could address it by making a change to how the website is laid out; that is, at the very basic level, what is shown to the user first and what next, the way information is presented. Or it could just be about the content you present there.
So for instance, let's say you found that users booking a car for the first time weren't feeling as comfortable as users who had already used the service. We just stop there and try to think: what questions could be going through the minds of people booking for the first time that would already have been answered for people booking a subsequent time? You might come across things like: the user is probably worried that if you rent a car and crash it, who's going to pay for it? It could be that you just need to tell them, hey, don't worry, there's a good insurance policy, so you don't have to worry about it. Or it could be a bunch of other things. So the change could be in content as well. Or it could be in the flow: make the entire checkout flow as squeezed-in as possible so that there are fewer drop-off points, if drop-off is the problem you're facing. And it could be a combination of these that results in a feature. You would have seen this widget almost everywhere, on almost every e-commerce site: "People who bought this also bought". That's a feature that combines content and flow changes in how the user purchases an item. As soon as you roll out any change, track data again and feed it back into the whole process. And the key thing is: every user-experience improvement should start with an objective. If it's a flow optimization, start with an objective for what percentage improvement should come in conversion through that funnel. And if it's a page optimization, a page would ideally have one goal; every page has an objective. For certain pages, it could be that the page has to take the user to another class of page. For instance, in this case, the objective of the search page was to take the user to a booking page.
And the objective of the home page was to make the user search for a car. A page optimization should result in an improvement in the objective of that particular page. Similarly, for any content, feature, or call to action you put on the page, you need to assign an objective to it and see whether that objective was met. If it's not met, cut the feature out, or improve it so that it meets the objective sooner. Any questions? "Can you tell us again about the list of services and tools that you used?" I'm sorry? "The list of services and tools you used, and which one is useful in what scenarios." Sure, I'll walk you through that. ClickTale and Lucky Orange let you do user testing virtually. The typical analytics tools look at users in aggregate; ClickTale and Lucky Orange let you sample users and dive deep into them. They give you the ability to record the screens of a certain percentage of your users. So you could say: record the screens of 2% of all my users. Then you can go back and view how a user actually interacted with the website. When you see stops, pauses in mouse movement, or the mouse hovering over a specific area, you get to know that there is an area of confusion. You could then start thinking about what could be wrong there, come up with a whole bunch of hypotheses, and go about validating them. These two tools also give you the ability to look at heat maps, click maps and so on; that is what they specialize in. You probably wouldn't use them on a day-to-day basis; you'd use them periodically to get a sense of how users are interacting with the website. And they can be time-consuming: when you start looking at screen recordings, it might just suck up a lot of time, so they need to be handled judiciously. Google Analytics, Mixpanel, and KISSmetrics mostly do event-based analytics.
Google Analytics was doing page-based analytics to begin with, while Mixpanel and KISSmetrics were doing event-based analytics to begin with, but all three of them have converged to a certain point now. All three are good at event-based analytics: knowing at an aggregate level what percentage of users take which actions and what the sequence of those actions looks like. And you could also do cohort analysis, which lets you know how various sets of users use the website. The next set of tools, Tableau and QlikView, can connect to any kind of data source. It could be your database, an Excel sheet you have, anything, and they let you slice and dice data in a way that none of the other tools do. And the moment you go beyond a certain scale, a lot of these tools won't support you well enough, so you'd want to push all the events asynchronously into your DB so that it doesn't block your website and make the experience bad. So you use non-blocking writes to your DB, pull the data into data warehouses, and do all the upstream analysis there. And there could be specialized analytics tools for various industries; for instance, e-commerce sites have a tool called RJMetrics, which is tailored to e-commerce and gives you tons of things out of the box. Thank you.