So let's get started. My name is Dushanth. Like most Indian names it's a complicated one: JDA is my last name, Dushanth is my first name. I've been in Bangalore for almost 18 years. I did my graduation here, worked at a few companies, did something on my own, went back to corporate, and finally ended up in my current role. Throughout my career I was fortunate enough to get experience across engineering, design, product management, and business management, and currently I'd say I'm back to square one, learning something new here. So, what is UX and why does it matter? I know it's a pretty broad question. I'm not trying to prove that I know more than you; I just want to share my perspective on what I mean by UX. Let's take a pretty simple example. What do you think about this one? It's a nice UI, right? But it doesn't really tell you what the call to action is, all the information is presented at the same level, and the language is a little confusing as well. This second one is much clearer about what the interface is trying to communicate to the user. So making things obvious is a key part of UX. You may have a solution to a problem, but if the solution isn't obvious, users may never realize that it addresses their problem. Let's take a second example, one of my favorites. What's wrong here? Everything? Any one thing that comes to mind? The state isn't clear, and it's not clear which switch controls which appliance. So what's the ideal solution? Something like this. But how often do we actually see it?
What I'm trying to say is that there is a cognitive load in every decision a user makes. As kids, we learned that switches work a certain way. Even if that convention is wrong, trying to change that learned behavior just to make it "right" doesn't work. So it's extremely important to solve the problem the way the user wants it solved, not the way you would want to solve it. Another interesting example: I'm sure all of you have taken a flight and held a boarding pass. If you look at this particular boarding pass, all the information sits at the same level; there's no hierarchy of what's important and what isn't. And when you're trying to find one piece of information, your brain gets confused: what should I process here? So in very simple terms, good UX makes things really, really obvious. At Google we try really hard to do this in most of our products. Sometimes we succeed, sometimes we haven't, but that's the philosophy we try to implement. So I'm going to talk about the challenges in UX validation that we've seen while working with different teams at Google, walk through some case studies, go through one particular process in depth, and then share some resources. I want to be careful and make a disclaimer here: there is no single process that the entire company follows. Each team has the freedom and flexibility to use what works for it. The reason is that each team is solving a different problem, with a different set of users in mind, so no one process can cover everything.
What I'm going to talk about is specifically how our Android team does it, but it can be applied to other teams as well. So, what are the challenges in UX testing? Most of the time when I talk about UX research, people think of something like this. Anybody know what it is? It's a Japanese tea ceremony, and if you've ever been to one, it's painfully long. Making one cup of tea takes hours. Similarly, when I talk about UX research or UX validation, people say it demands a lot of time: first I need to do primary research, then secondary research, then hand the insights to somebody else, build the interfaces into the product, and only then do validation. But your typical development schedule runs in sprints; every two weeks you want to launch something. Now, how do you marry the two? If you overlay the development cycle on one side and the traditional UX research cycle on the other, there's clearly a gap. So we've tried something, and we've taken the liberty of moving away from some of the textbook principles, because that's what works for our product development lifecycle. What do teams want? They want feedback before people start coding: "Tell me I'm doing the right thing before I invest my time, energy, money, and resources." You want to minimize the chances of failure, you want feedback now, and you don't have time for everything. So here's an example of how we validate ideas: Google Glass. Hopefully all of you have at least heard about the product. A really great product, but it didn't do as well as it was intended to.
So how did we start validating it? The very first thing was to strap a phone onto a pair of spectacles and ask: does it do that basic functionality? And then we iterated on it over a period of time. A couple of weeks down the line, what Sergey was holding in his hand was a prototype that came out of that process. I'll talk about some of the processes we used to go through this validation. What we want is an espresso shot, not the Japanese tea ceremony: espresso shots are quick to make, pack a punch, and still give you a good enough insight into the solution you should be building. The way I decide which method to use: if you want to understand goals and attitudes, you look at the left-hand side of this diagram and say, okay, maybe I do in-depth interviews or focus groups. If you really want to understand behavior, you go for a usability study, A/B testing, and things like that. And then there's the other axis, qualitative versus quantitative. Again, this is not a standard I invented; you'll find this framework everywhere, and you can customize it based on your needs. So let's take some of the methods I mentioned. We'll focus mostly on the top right-hand quadrant and see how it helps you do validation in a really easy and short period of time. The first is heuristic evaluation. I'm sure all of you are aware of it, so I won't spend much time on it, but let me give you an example of how it helps identify issues. We launched an Android tablet five or six years back.
And one piece of feedback we got from a team was that the time to set up the tablet was really long compared to other standard tablets available in the market. So I said, okay, let's figure out how to quantify that. We put both tablets side by side, loaded them up, and chose our heuristics. One of the heuristics we chose was speed: the number of seconds or minutes it takes to set up the tablet. We measured it, summarized what was taking more time and what was taking less, and sent a report to the team. We got feedback within a week, had a new build, tested again on the new device, and we were in a slightly better place. Of course, it took around six months to roll out the improvement completely, but it was good to know early that we were on the right track. So this is a very easy method: if you want quantified evidence, heuristic evaluation is a great example. It doesn't take much time, maybe three or four hours, depending on the problem you're trying to solve and the heuristics you've selected. Next, the RITE method with prototypes. Has anybody heard of RITE? It stands for Rapid Iterative Testing and Evaluation. You can use it in multiple scenarios. Let's take Google Search as an example, and say you want feedback from users on a new design of Google Search. What you do is schedule interviews from morning until afternoon, with a designer and the team on standby. So let's say I have my first interview at ten o'clock. I present the design and ask, "What do you think about it?"
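Before going further: the tablet setup comparison I described a moment ago can be tallied in a few lines of code. This is only a minimal sketch; the step names and timings below are invented for illustration, not data from the actual study.

```python
# Hypothetical per-step setup timings, in seconds, from a side-by-side
# heuristic evaluation of two tablets. All numbers are made up.
our_tablet = {
    "unbox_and_power_on": 45,
    "language_and_region": 30,
    "wifi_setup": 60,
    "account_sign_in": 180,
    "restore_apps": 420,
}
competitor = {
    "unbox_and_power_on": 40,
    "language_and_region": 25,
    "wifi_setup": 55,
    "account_sign_in": 90,
    "restore_apps": 150,
}

def compare_setup(ours, theirs):
    """Per-step deltas (positive means we are slower) plus the totals."""
    deltas = {step: ours[step] - theirs[step] for step in ours}
    return deltas, sum(ours.values()), sum(theirs.values())

deltas, our_total, their_total = compare_setup(our_tablet, competitor)
# Report the worst offenders first; those become the team's action items.
for step, delta in sorted(deltas.items(), key=lambda kv: -kv[1]):
    print(f"{step}: {delta:+d}s")
print(f"total: {our_total}s vs {their_total}s")
```

In this made-up run, account sign-in and app restore dominate the gap, which is exactly the kind of quantified evidence the method is meant to surface for the team.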
Then I step back, listen to the user, and try to understand the feedback they're giving. Once I have all this feedback, I prioritize it, pick one item, and give it to my interface designer: "Can you create a quick mockup of this before the next interview comes in?" There's now a gap before the next interviewee arrives, and in that gap the design gets updated, and then I verify it again with the next user. If I do this over the course of the day, by the end I've gotten to a noticeably better place than where I started. We have done this multiple times. One case where we used it, again with the Android tablet: people were not able to find the SIM slot. "Where do I put the SIM in?" The design was effectively camouflaging the entire SIM slot. So we looked at it and considered what we could do. Of course, it was difficult for us to 3D-print a new tablet model between interviews, but we had a bunch of other mockup models ready: "What do you think about this one?" It was an interesting study. RITE works extremely well when you have one focused change you want to drive in your product or solution. You can do it in one or two days, depending on the time flexibility you have, and it can be done with one designer, one researcher, and around five users. Next, the café study. This is one of the very popular UX validation methods at Google. We have many employees working at Google, and they're also expected to be good advocates of our products. If you really want quick feedback on one thing, it's a good method to explore: just walk up and ask, "What do you think about it?"
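The RITE loop I just walked through (interview, prioritize, patch the prototype in the gap, verify with the next participant) can be sketched as a toy simulation. Everything here is invented for illustration; real sessions involve a human researcher and designer, not functions.

```python
# Toy simulation of a RITE day: after each session, the top open issue is
# fixed before the next participant arrives, so later participants see a
# progressively better prototype. Issue names are invented.

def run_session(participant, prototype):
    """Stand-in for an interview: returns the issues this participant still hits."""
    return [i for i in prototype["known_issues"] if i not in prototype["fixed"]]

def fix_top_issue(prototype, issues):
    """The designer patches the highest-priority open issue between sessions."""
    if issues:
        prototype["fixed"].add(issues[0])

prototype = {
    "known_issues": ["sim_slot_hidden", "label_confusing", "icon_too_small"],
    "fixed": set(),
}
for participant in ["P1", "P2", "P3", "P4", "P5"]:
    open_issues = run_session(participant, prototype)
    print(participant, "hit:", open_issues)
    fix_top_issue(prototype, open_issues)
# By the fourth participant the open-issue list is empty: the prototype
# improved within a single day of testing.
```

The point of the structure is the interleaving: you never wait for all five interviews before changing anything, so each participant validates the previous fix.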
It gets you a fresh perspective on what you're doing. Usually it starts with writing a small script and understanding the goals of the study, what you want to drive out of it. Then you load the solution onto devices and spend no more than five minutes talking to each user: "Hey, what do you think about it?" We've done this one too. If you know Android widgets: an Android phone has a bunch of widgets when you move right from the home screen, a weather widget, a calendar widget, and a bunch of others. And when you lock the screen, sometimes a widget comes on top of it, so users were not really sure whether they were unlocking the phone or tapping on a widget. That was a bad interface design on our part, and we wanted to see how many users were falling into the trap: were they able to easily navigate that lock screen? So we ran a café study, got good feedback, some of our assumptions got validated, and then we could work on it and move to the next stage. Again, a super interesting method if you really need a quick decision. Next, the pulse study, which matches your sprint cadence. What happens is that in the first couple of days of the week, you sit together and agree on what you want to validate. Then you screen the users, write a script, schedule everything, and keep your developer team involved as part of it, so that everyone is working together on building the solution.
It's flexible: depending on your sprint cycle, you may want to run it over one week or two weeks. And it's not disconnected, since the teams are co-located and doing it together, so the results come much faster. So how do we typically start? First, we try to understand what we are testing. What do we want to validate? What do we want to measure? Is it a software interface we're trying to get more information on, is it a hardware product, or is it just a concept we have? Second, who are we inviting? Who is the user? This is extremely important and depends on the type of product you're building. At an early stage of a product, you cannot really afford to have outsiders come in and give you feedback, because you're not sure whether something may get leaked. In that case we use internal users, Googlers. Or it could be external users, when the product is already in the market and you're validating a new feature for an upcoming release, or a specific target profile. So: identify what you want to validate, and identify who you want to invite, the target persona. And then, who is doing what? This is extremely important. I've seen, even at a large company like Google, sessions where six people surround one user, all asking, "What do you think about it?" What happens then? The user freezes up; it's psychological as well. You don't want to intimidate the individual by saying, "No, no, this is right, this is wrong." So having clear roles and responsibilities is very important.
Most of the time, we ensure that at most one or two people are part of the discussion, so that it doesn't become overwhelming for somebody who is taking time out to give us feedback. So we define clear roles and responsibilities. Who is the researcher? Who is the observer? You need somebody taking notes, and you need somebody watching non-verbal signals, because human beings are super complex: we think one thing, say another, and do a third. So we try to pay attention to all three aspects and see how to include them in the feedback. Once you have all those things decided, then you ask: where are we testing it? The environment also makes a really huge difference. We have multiple options. We have a UX lab, which is set up like a typical home. We understand not everybody has this kind of facility, but try to get an environment as close as possible to where the user is going to use the product. For example, if you were hypothetically validating a solution like Google Maps, in your place I would go to a bus stop or a railway station and ask people there, or onto a local train, somewhere the environment is very similar to where the user is going to use the solution. At Google we also have an observer room. The actual studies are recorded on a bunch of cameras, and the people who could not be part of the face-to-face discussion can sit back, observe the body language and answers, catch things that the researcher or the note-taker missed, and pass them on to the team.
If you can't do all of those things, and in most cases you can't (even we didn't have the lab for quite a long period of time), the easy option is an online meeting. If the user is not there in person, it doesn't matter, as long as you are focused, you're asking the right questions, the user is comfortable, and you walk them through the entire activity. You can then treat the environment aspect completely separately. The next step is to go and execute the study. There are a few do's and don'ts I want to highlight for user research. Be really, really nice and thankful, because somebody is taking the time. Thank them, and if they're coming in for the first time, make small talk: "How was your ride? Did you have a good trip? Have you had lunch?" Just make them feel comfortable before the actual study starts. Tell them that if something goes wrong, it's not about them, it's about the product. Almost every time, the user is right and the product is wrong. Don't tell them, "No, no, try this, try that"; it doesn't work most of the time, and it makes the user uncomfortable. Basic things, nothing like rocket science. As for what not to do: no matter who you are or what company you represent, never talk about the product itself; talk about the problem. Give them a scenario and see how things play out. And don't assume that the user knows what they're doing. So, a standard checklist; if you've done user research, you know what I'm talking about. Checkboxes. This is one thing we found really, really helpful, because it's actually quite difficult to really listen to a user.
You can easily get lost in the conversation, trying to stay in the same frame of mind and understand what the user is thinking, and you forget to take notes or miss things. With checkboxes, you pre-anticipate what users might say or do. When it happens, you just tick the box, or note that the body language was like this. That saves you a lot of writing and lets you stay involved in the moment and understand what's going on. Step four: once everything is done, debrief with the team and decide what to do about it. Usually that means going back to the drawing board and listing out priorities. Another thing we do at Google for prioritization is put a dollar value on each insight we got: if this is the feedback, how much is it worth? Say $5, $50, $100. Obviously the $100 items get priority over the $5 ones, and that helps us prioritize which features to focus on. Then we send out an email and move on to the next round of user research and validation. One last thing: does anybody know this equation? Very quickly, there was research done at IBM showing that if you invest $1 in research or validation, you save $10 in development time, and that $10 will hopefully help you earn $100 in revenue. So it's extremely important. I know it can feel boring when you're starting out, and initially it is time-consuming, but trust me, once you get the hang of it and it becomes second nature, it helps a long way in your product development lifecycle. With that, I have at least three or four minutes to take your questions.
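The dollar-value prioritization and the 1:10:100 figure described above can be put into a tiny sketch. The insights and their values below are invented; the ratio is the IBM figure as quoted in the talk.

```python
# Invented insights tagged with the dollar value a team might assign them.
insights = [
    ("tablet setup takes too long",      100),
    ("SIM slot is hard to find",          50),
    ("widget overlaps the lock screen",  100),
    ("icon color feedback",                5),
]

# Highest dollar value first: that is the order in which fixes get scheduled.
priorities = sorted(insights, key=lambda item: -item[1])
for name, value in priorities:
    print(f"${value:>3}  {name}")

# The 1:10:100 rule of thumb, as cited in the talk:
research_spend = 1
dev_time_saved = research_spend * 10    # $1 of research saves $10 of development
revenue_enabled = dev_time_saved * 10   # which hopefully supports $100 of revenue
```

The dollar amounts are deliberately coarse: the point is forcing the team to rank insights against each other, not to estimate real monetary impact.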
If you liked this talk, give your feedback; if something didn't work, we'll try to improve. In the meantime, if you have a question, I'm happy to take some. [In response to a question about external participants:] We do, we do. It depends on what we're trying to solve and what insight we want. To give an example: Google Search is a public product, and whenever we want to test and validate it, we often invite users who are not Googlers, to get a fresh, open perspective on what we could do. But sometimes you're building a product that hasn't launched, and you don't really want to go out with it. So either we run the study without telling participants what the product is, though when you hide a lot of information you often don't get the right insights, or we rely on internal employees, or associated employees where there's an NDA and a bunch of other protections in place. But whatever we do, the user is at the center of everything, and we do a lot of external user research studies. To answer your question on demographics: thankfully, we have teams spread across multiple geographies, countries, and regions, and we take their help. Somebody building in the US will reach out to us in India saying, "Hey, I need your help running this quick study, can you do it?" and all of us pitch in. There is an incentive as well, but I think most of the time it's interest: engineers who have never done user research are curious, and that's why people join. But yes, there is an incentive too.
I forgot to mention the incentive, but it is part of the entire process. We usually give vouchers, or sometimes the product itself, depending on the kind of insight you're getting. If the insight is very valuable, you want to be thankful and ensure that the incentive is correspondingly large. Sure, so I'll take a step back, and this will be the last question I take, since the next speaker is already here. I would look at it this way: I have fewer users, but what is the engagement level on my solution? Let me give an example, again with Google Search. Assume people are spending more time on search, but there are not many users. There could be two things wrong here: either the results we're showing are not really helpful, and that's why people are spending more time, or people keep coming back because they are finding something useful around it. So I have two hypotheses, and I'd dig deeper in one of those directions, or take a lead in both, and try to find more information. It's important to take a hook, a lead, on what's wrong with the current approach, and then go deep into it. It's almost an iterative process: at one point you hit a roadblock and realize, maybe this is the way we need to go. I don't know if that helps, but try to understand what's wrong: why are there not many people, and what is currently happening with your solution? I'm very happy to chat offline, as the next speaker is here. Let's chat more and see how it goes. Thank you.