I'm Roli. Hi. I'm Peter. We are from the team at Scripbox. Just a quick show of hands: how many of you were using Facebook back in 2011? Anyone here? Quite a few of you. If you remember, Facebook used to have the wall. You literally had to go to your friend's wall to see all their activity. Around 2011-12, Facebook shifted to the timeline, and if any of you remember, there was a huge outcry. People were putting up statuses, groups were being formed petitioning Facebook to go back to the wall, and you would see articles like this one, literally comparing it to the government asking you to link your PAN to Aadhaar. What is interesting about this phenomenon is that even though users were so loudly saying they did not like the feature, why did Facebook still keep it? Over ten years on, some form of the timeline is still there. Because it illustrates a classic problem: what users say and what they do are very different. The whole idea of the timeline was to increase the average time users spend on Facebook, the engagement. And the behavioral data was clearly going against the noise on social media. What was also interesting about this Facebook incident was that it was one of the early eras where, as users, our voice was being heard. We had a channel to really express what we like or dislike about a product. Now you can go to Twitter and you'll find people dissing every product possible, with loud complaints even to the minister to change the IRCTC app. But this was the beginning of that. And what is great about it is that as users, we have a lot of power and a lot of channels. You can literally email the founder of a company and say, hey, here is a detailed problem with your product. If you have an idea for a product, you can really reach out to them. You can give NPS feedback.
If you like something, you can go rate them well, et cetera, et cetera. So with all this great data, there is a lot of confusion for us problem solvers. You have so many channels of feedback. What do you do with it? Which one do you listen to? Which one do you ignore? Which one do you dig into further? We were facing something similar at Scripbox. Before I go deep into this problem, I want to set the context of the product we were working on. Scripbox is a wealth management app. When it started its journey in 2013, it was a very simple app with the objective of simplifying financial decision making for users. If you want to invest or grow your money, first you have to decide: how much can I save? How much of that can I invest? Then, which kind of product should I invest in? Should it be a bank deposit? An insurance policy? Stocks, et cetera? And even after that, you need to figure out which funds to choose, or which bank to go with. All these decisions can be very overwhelming for users. So we made it a very simple process: you decide the money you have and how long you want to invest for, and our algorithms select just two or three funds that are right for you. You didn't have to get into the jargon of which funds, which industries, which sectors. You just chose long-term, short-term, emergency, or tax saver. But as our users grew from 2013 to 2020, their age changed, their problems grew more complex, their financial needs changed. And we shifted with our users towards a complete wealth view, where you could not only transact in mutual funds but also track your overall wealth: your real estate, your stocks, fixed deposits, et cetera. So now that you're familiar with the product, let's go back to the problem, the user feedback.
When we released this wealth view, initially we got great traction. People loved the new features, saying, this is really helpful, I get a good insight into my products. But at the same time, somewhere in early 2021, negative feedback also started coming in. NPS scores were going down. Our Play Store ratings were going down. And from different avenues, we were getting repeated feedback on the UX and the UI. People were so emotional about it that they said things like, your UX sucks, which was a real ouch for the design team. We also got very broad and generic complaints, like, your app could be better, or, I love the old app. This was happening even in the user research we were doing, and in anecdotes. From different sources we kept hearing: the UI and UX is really bad, and we need to do something big about it. The problem was that the feedback was ambiguous. It was sometimes contradictory, because we also had promoters saying, we love your simplicity, we love the app you're making. Sometimes it was biased. Say we face a tech issue and the app crashes; for the user, that's the UX, right? So "your UX is bad", but it's not really a UX problem, it's a tech problem we need to solve. The emotion is, yes, the app sucks, and it comes out in the form of an experience complaint. It was also nonspecific, something like this bruise. It's red, purple; we know something is going wrong, but how did we get it? How did we reach here? We don't know. As a design team, when we got this problem, it almost felt like a bunch of tangled threads. You know how frustrating that is. You really need it at that moment, you want to untangle it, but you don't know where to start. If you have ever tried to untangle a bunch of threads, you would understand the feeling I was going through. And with so many frameworks from design education, sometimes you get overwhelmed: where do you even start?
Because there was a sentiment in the company saying, let's start from scratch. Let's rebuild the UX from scratch and go for a complete revamp. But that meant a lot of cost, a lot of effort, a lot of time. And it still did not guarantee positive results, right? What if we repeat the same mistakes? So I took inspiration from the tangled thread and said, let's look for the loose ends. What could be the loose end for us? Similarly, if you are trying to solve a problem like this, look at the known part of the problem, however minor, however small, and hold on to that. For us, it was the inbound calls. This is a theme you've heard other speakers talk about too: for businesses, once call center volume increases, cost and effort increase, and they urgently want to reduce it. What was great about starting with inbound calls was, one, if we solve for this, we generate a big business impact. It was also rich qualitative data, so I didn't need to start a fresh round of user research. Some of the suggestions were: why don't you call some of the people who gave bad ratings and do long interviews with them? That would mean a whole new research cycle, right? I would need a month, or at minimum two weeks, to bring out something. But the calls were long and very specific. They were also easily quantifiable, because we had a great volume of them, so we could really quantify which tasks or issues kept emerging. And they were readily available, so we could get started. So what did we do? We looked at inbound calls, like Roli said. We started analyzing the calls from the last six months, about 2,000 calls, including the emails and NPS feedback we had received.
To our surprise, most of these were of the form "how do I do something" or "I can't do something", which really made us stop and ask why. Digging in, we found that most of them stemmed from discoverability or navigation issues the customers were facing, which we were able to classify. There were about 25 top discoverability issues that were really critical, that had to be solved immediately, and that could improve the business. So suddenly, from this blurriness, things were getting clearer. We knew what to lead with; at least we had some 25 tasks we could solve for. Now, one option in our hands was to jump straight into solutions for these specific 25 tasks. The problem was that we were looking at the problem from only one angle. I would use the analogy of the blind men and the elephant: you only know the part of the problem you are touching. The nature of the calls that come to a call center is also of a specific category. So we wanted to triangulate, to look at it from a different angle. For that, what we needed to check was this: we knew the tasks they were not able to complete, but we did not know with what intention, or what journey they were taking. We knew the intentions with which the tasks were failing, but we did not know where the users were clicking or where their journeys were going. From the in-app behavioral data alone, you cannot really figure out that for this specific task, the user takes this journey. So to do that, we ran a tree testing exercise.
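The classification step they describe, tagging each call with a reason and counting by volume to surface the top issues, can be sketched roughly like this. The tags and counts below are invented for illustration; the talk does not describe the actual tooling.

```python
from collections import Counter

# Hypothetical call log: each entry is the reason a support agent tagged.
# The tag names and volumes are made up for illustration.
call_reasons = (
    ["how do I withdraw"] * 320
    + ["can't find my tax statement"] * 210
    + ["how do I start an SIP"] * 180
    + ["app crash"] * 40
)

# Count how often each reason occurs and rank by volume,
# so the most critical discoverability issues surface first.
top_issues = Counter(call_reasons).most_common(25)

for reason, count in top_issues:
    print(f"{count:4d}  {reason}")
```

The payoff of this approach is that "how do I" and "can't do" tags cluster at the top, which is exactly the signal that pointed the team at discoverability rather than visual design.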
The way we approached tree testing was that we wanted to validate both the existing design and the proposed design, the wireframe the stakeholders had put forward. So we tested both. What we figured out was that some of the ideas proposed in the wireframe could improve discoverability, but at the same time there were still issues we were not able to resolve. From the tree testing, what was clear was that in the home tab, customers were looking for invest and withdraw, and that Accounts as a tab was completely working. But the other three proposed tabs were still unclear. We didn't have a clear direction: like Roli said, the intention of the user and what they expected to do were not overlapping. For transaction-related tasks, we knew exactly what the problem was, same with accounts. But the other behavior we observed, in both IAs, was that people were going to Home, Wealth, and Transact, all for the same task. Which clearly told us there was a problem with how our tabs were structured and nested. It was not clear to users which tab they should go to for which task. Still, that was a good step, because we knew some things we could release, and we knew we needed to investigate further before jumping into final solutions. So as you can see on the screen, we did an experiment. We did not change the complete information architecture. What we did was bring Withdraw and Invest up front as buttons in the home tab. We also created a section which showed the monthly investment the user is making, with a Manage button. With that, around 80% of those 25 tasks could be resolved right there.
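A tree test of this kind is usually scored by first-click (or first-path) success per task: what share of participants went to the tab the team intended. A minimal sketch, with task names, tabs, and responses invented for illustration:

```python
# Hypothetical tree-test results: for each task, the tab each participant
# first navigated to, and the tab the design intended. All data invented.
results = {
    "Withdraw money": {"intended": "Home",
                       "first_clicks": ["Home", "Home", "Wealth", "Home", "Transact"]},
    "Add a nominee":  {"intended": "Accounts",
                       "first_clicks": ["Accounts", "Accounts", "Home", "Accounts", "Accounts"]},
}

def success_rate(intended: str, first_clicks: list[str]) -> float:
    """Share of participants whose first click landed on the intended tab."""
    return sum(click == intended for click in first_clicks) / len(first_clicks)

for task, r in results.items():
    rate = success_rate(r["intended"], r["first_clicks"])
    print(f"{task}: {rate:.0%} first-click success")
```

Running the same tasks against both trees (existing and proposed) and comparing these rates side by side is what lets you say "Accounts is working, the other tabs are not".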
We also did another thing: we changed the bottom tab from Invest to Transact. What was happening was that Invest expresses one intention; I know I can invest there, but you could also withdraw from the Invest tab, and a lot of people were not discovering that. So we ran an experiment to see how people behaved now. Where do they actually go to do these tasks? So we solved for invest and withdraw. Coming back to this, we still did not know what was happening with the Home, Invest, and Wealth tabs. Why were people still going everywhere for the same task? What was really the problem? We also needed to check something else: we had only looked at 25 tasks. What about all the other tasks? Is something missing there that people have not vocalized yet? So what we did next was a card sorting exercise with users. We tried a bunch of options for the tabs, because we wanted to figure out: if Home is not working and Transact is not working, what are the alternatives users would map the tasks to? We set out two different exercises, one with Home, Track, and Plan, and the other with Investment, Advice, and Goals. What came out was an overlap between Invest, Home, and Dashboard. So we had some clarity that the tasks they wanted to do were available in the Wealth tab. But what was still not clear was: what is Home going to do for the customer? Why do we need a Home tab at all? What we also realized was that the kinds of tasks people grouped under Home, Track, or Dashboard were pretty much the same; there was a lot of overlap. So we realized there was no clear distinction in users' minds: Home, Wealth, Track, Dashboard are all interchangeable terms for the user.
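The "overlap" finding from a card sort can be made concrete by counting, across participants, how often two category labels end up holding the same task cards. A small sketch, with labels and groupings invented for illustration:

```python
from collections import defaultdict
from itertools import combinations

# Hypothetical card sorts: each participant groups task cards under labels.
# Labels and groupings are made up for illustration.
sorts = [
    {"Home": {"check balance", "withdraw"}, "Track": {"check balance", "view returns"}},
    {"Home": {"check balance", "view returns"}, "Dashboard": {"check balance", "withdraw"}},
]

# For every pair of labels a participant used, count the shared cards.
# A high total for a pair means users treat those labels as interchangeable.
overlap = defaultdict(int)
for sort in sorts:
    for a, b in combinations(sorted(sort), 2):
        overlap[(a, b)] += len(sort[a] & sort[b])

for pair, shared in sorted(overlap.items(), key=lambda kv: -kv[1]):
    print(pair, shared)
```

When Home, Track, and Dashboard all score high against each other, that is the quantitative version of "there is no clear distinction in users' minds".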
Only internally did we have clarity on what these tabs were supposed to do. Like I mentioned earlier, we had changed to the Transact tab, which had got some validation in tree testing, but in the actual release, in the experiment behavior, we realized Transact was still not getting traction. Discoverability of features on Transact was still low. It also made us think: do we really need a Transact tab at all? If, with the Invest and Withdraw buttons and that small change we made up top, the majority of the tasks related to that tab were being solved on the homepage itself, do we need a separate tab? It made us really question: do we need so many tabs? At this point, we had five tabs. Are we complicating things for the user? We thought we were simplifying by bucketing tasks into these tabs; what was actually happening was that it complicated things for them. So we really didn't feel the need for Home and Transact. We thought Wealth and Accounts were enough, and the data was supporting it. We just needed these two tabs. Internally, as a design team, we had a lot of clarity. We were like, okay, we are ready. We know the information architecture needs to change, and these are the specific changes we want to make. But as usual, there are always stakeholders, and they have their views. It almost felt like minesweeper to me: even after knowing all the calculations, there were still bombs to be uncovered. The context here was that a lot of stakeholders said, but where do we do all our growth initiatives, the things we want to push to our users? Home is where we do that. All the nudges we have, all the to-dos we want to give our users, where do they go? If everything is functional, where does the marketing stuff go? And that was a big tussle internally for a long time.
So as a design team, we said, okay, let's do some prototype testing. Another hypothesis, which we did not have a counter to, and which was fair enough, was that our homepage was never engaging enough. We had never given priority to releasing engaging content that people would really look through. There wasn't much vertical scroll happening; most people would jump straight into Wealth to get to the functional aspects. So we did one prototype. That did not work, so we did another one. Then another one, another one, another one. It was a rigorous process: we kept trying different versions of what an engaging form could be. And, not so much to the surprise of the design team (sorry, I think one of the slides is missing), what we realized was that the maximum traffic was going directly to Wealth, because people's first intention, if they're investing, is to know where their investment stands, right? What is the status of my investment? The headspace to think, oh, these are the products I should explore, comes only after my top tasks are done; only then do I actually think about exploring or discovering new stuff. So what we decided was to give priority to Wealth itself, remove Home and Transact, accommodate the rest within the Wealth tab, and have just one tab for all the account-related, admin-related changes: adding a nominee, adding your bank, all of those things. Through all these stages of a broken-down, step-by-step process, we built up evidence for everything. And for all of these experiments, like I said, we also created funnels on Amplitude, a data-tracking app. We looked at very closed funnels and how users were behaving.
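A closed funnel of the kind they built in Amplitude only counts a session at a step if it has completed every earlier step in order. The mechanics can be sketched as follows; the event names and sessions are invented for illustration, not Scripbox's actual instrumentation:

```python
# Hypothetical event log per user session; event names are made up.
sessions = {
    "u1": ["open_app", "view_wealth", "tap_withdraw", "confirm_withdraw"],
    "u2": ["open_app", "view_wealth"],
    "u3": ["open_app", "tap_withdraw"],  # skipped view_wealth: drops out early
}

# A closed funnel: each step must occur after the previous one.
funnel = ["open_app", "view_wealth", "tap_withdraw", "confirm_withdraw"]

def reached_step(events: list[str], steps: list[str]) -> int:
    """How many funnel steps this session completed, in order."""
    i = 0
    for event in events:
        if i < len(steps) and event == steps[i]:
            i += 1
    return i

# Tally how many sessions reached each step of the funnel.
counts = [0] * len(funnel)
for events in sessions.values():
    for step in range(reached_step(events, funnel)):
        counts[step] += 1

for step, n in zip(funnel, counts):
    print(f"{step}: {n}/{len(sessions)} sessions")
```

The drop-off between adjacent steps is what tells you where a journey is breaking, which is how behavioral data can confirm or refute what a prototype test suggested.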
So for every experiment or prototype we tested, we were also tracking what the data was saying, and that supported the clarity we had. It would have been very easy to say, let's build things from scratch. But without this clarity, the new design would also have had the same tab structure, and we would have still struggled with all the same problems. So from the five tabs initially there, Home, Wealth, Invest, Wisdom, Profile, we first shifted to four, with the Transact change like I said. And then, finally, we had Wealth, Explore, and Accounts. What you're seeing now, Explore, is something I've not mentioned earlier. Like I was saying, users only want to check out new things after they're done with their top tasks. So we said Explore is a more intuitive way to discover what new things Scripbox is doing, and the intent is very clear. So we shifted to the final tabs: Wealth, Explore, and Accounts. While we knew all of this, Explore was still a last-minute entry to the design. There are still things to be tested, and we are still continuously monitoring and experimenting. No matter how much you know, there will still be parts of your design you don't know, and that's okay. It's always about figuring out how to move from not knowing anything to knowing a little more. So this was the final design. We revamped the UI as well and made it more vibrant, since we felt our UI was a little outdated. As you can see, I would especially highlight the first screen. This is the first screen the user opens, and within the first fold, a user can do pretty much everything our investing app allows. I can come here; if I want to invest, I can do that; if I want to withdraw, I can do that; if I want to check how my plans are doing, I can also switch to products, to see where the money is in each product.
And if I want to check by investor, where and what money is lying, I can do all of these tasks in one clear shot. There's a notification up top, so if there's any urgent message or something I need to do, it's right there. We really made the design as minimal as possible compared to the old UI you saw. So, just reiterating some of the principles we followed. Always start with what you know; that's the easiest and most intuitive way into your design problems. Balance your generative and evaluative research. What I mean is, if you notice, in all the exercises we did, tree testing, card sorting, we did not wait to only explore, or only test what was not working. We trusted our design intuition and said, if we have solutions, let's test them simultaneously. So we were able to get what was working released quickly, because in the end you have to keep the business objective in mind. We need to release things; we need to solve business problems too; we cannot take long release cycles. Always back your qualitative or user research data with app behavioral data. Work with your product managers to create funnels and customer journeys in whatever analytics tools you use, and check in with them regularly. And of course, the last one: there will always be some unknown. As designers, we tend to get very optimistic and say, this design is going to solve all the problems. Don't worry; not every problem you discover will be solved by every design. Always weigh what is relevant, not just for the users but for the business. That way, you can balance your work in both directions. Thank you. That's about it.