In Poland, when we talk about diapers, we use the word Pampers, after the Pampers brand. When we talk about an off-road car, we might call it a jeep, after the American car brand Jeep. And when we talk about sneakers, we call them Adidas. A sentence like "I've bought a new pair of Nike Adidas" would be nothing uncommon in Polish, although it would probably make both brand managers scream. These brands are examples of so-called generic trademarks. When a brand gets very popular, it might become a synonym for a whole group of products. But why are we telling you about this? Because Allegro, the company we work for, has played such a significant role in the development of Polish e-commerce that it has become a generic trademark itself. Even a few years ago, buying something on Allegro was a synonym for buying it online. A recent report on the Polish e-commerce market shows that when you ask Polish consumers about a place to buy online, over 7 out of 10 will spontaneously name Allegro. When it comes to market share, Allegro holds a significant 50%. Next year we will celebrate our 20th birthday. As we operate in the marketplace business model, like eBay or Flipkart, we are not only home to millions of customers every month, but we also host almost half a million merchants on our platform. This is great, but as Uncle Ben from Spider-Man says, with great power comes great responsibility. We are not talking at the same time, so I think we can use the same mic. Okay, we love our loyal customers, but we want to attract new ones as well, so we needed a fresh, modern look. And what is even more important, more and more people are buying on Allegro using their mobile devices, and not the whole platform was quite ready for that. So two years ago we decided to redesign our platform so it would be fully mobile-friendly and modern. Our graphic designers prepared a set of fully responsive components from which we could completely rebuild the front end of Allegro.
And the idea was to introduce it all along the critical path of our users: starting from the home page, through the offer listing and the offer page, ending with the checkout process and then the thank-you page. And obviously it was not the first redesign we've had in Allegro's history. The previous one happened in 2012. A lot of people were involved, we put a lot of work into it, and we had great expectations. However, back then UX research was not that popular in our organization, so the redesign wasn't widely tested and consulted with our consumers. And it went wrong. It went very wrong. We got a lot of negative feedback on our forums and on social media. Our clients were not ready for radical changes. Some people were buying less on Allegro, and some even just left us. So we were basically losing money. We had two options: we could roll back the change, or we could investigate what went wrong. We went with the second option, the more difficult one. We started research and fine-tuning of the new layout, and after some time we got back on track. But of course it was a bit too late. Being this reactive ultimately cost us a lot of money. So this time, almost two years ago, we wanted to do things right. We started by asking ourselves the most important question: how to make the transition painless for our users, for our merchants and for our business. So we started off by creating a project team, and it was quite a diverse one. It consisted of project managers, UX designers, developers, UX researchers and data analysts. The team was supposed to control the rollout and react if necessary. And actually we've both been part of that team, so I think it's the perfect time for us to introduce ourselves. My name is Alicja Antkowiak and I'm a UX researcher. While we were implementing the new layout, I was responsible for, surprise, conducting UX research. So I was basically gathering feedback from our users about the new layout.
And my name is Przemek Rosniak, and I'm a data analyst. I'm responsible for conducting quantitative analysis using our traffic data and our transactional data. In the project I was responsible for verifying hypotheses, for monitoring our users' critical path, and for exploratory and discovery research in case we needed any. So first, before the rollout, we needed metrics. We needed to know how our product, in this case our new layout, was doing. Is everything all right? We decided to use very high-level metrics, the ones that were already very important to our organization. The first of them was visit conversion: basically, the percentage of all visits on Allegro that end up with a purchase. Another one was GMV, Gross Merchandise Value: the total sum of all purchases made on Allegro. And the third one was NPS, the Net Promoter Score. This metric is based on users' declarations: after the purchase, we ask them how likely they are to recommend Allegro to others. We decided to start small, really small. At first we wanted to show the new layout to just one and a half percent of our traffic. Why did we do that? For two reasons. First, we wanted to minimize the losses in case something went really wrong. Second, besides the test group that had the new layout, we wanted to keep a base group: the users that had the old layout. This way we could always compare the metrics for both layouts and see how our new product was doing. So in November we wanted to do just small testing and small adjustments. Then in December we wanted to increase the group, but only slightly, because what you need to know is: whatever controversial changes you want to roll out, don't do it right before Christmas. At least in Poland, it's basically a gold mine for all the retailers.
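The three metrics described above can be sketched in a few lines of code. This is a minimal illustration with made-up records and field names (`purchased`, `order_value`, 0-10 NPS answers); these are our assumptions for the example, not Allegro's actual data schema:

```python
# Sketch of the three high-level rollout metrics, computed from toy data.

def visit_conversion(visits):
    """Share of visits that ended with a purchase."""
    return sum(1 for v in visits if v["purchased"]) / len(visits)

def gmv(orders):
    """Gross Merchandise Value: total value of all purchases."""
    return sum(o["order_value"] for o in orders)

def nps(scores):
    """Net Promoter Score from 0-10 'would you recommend us?' answers:
    % promoters (9-10) minus % detractors (0-6)."""
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100 * (promoters - detractors) / len(scores)

visits = [{"purchased": True}, {"purchased": False},
          {"purchased": False}, {"purchased": True}]
orders = [{"order_value": 120.0}, {"order_value": 80.0}]
scores = [10, 9, 7, 3, 8]

print(visit_conversion(visits))  # 0.5
print(gmv(orders))               # 200.0
print(nps(scores))               # 20.0
```

In the rollout, the point was never the absolute values: each metric was computed separately for the test group and the base group, and it was the difference between the two that mattered.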
So after Christmas, in January, we wanted to fully implement the new layout and show it to all our customers. And how did it go? Well, it didn't go quite as we planned, obviously, but let's not get ahead of ourselves. After we showed the new layout to one and a half percent of our users, we looked at the metrics, because that's what they are for. And unfortunately, all of those metrics were lower for the test group, who saw the new layout, than for the base group, who saw the old layout. The drop was only slight, and thanks to the fact that only a small percentage of our users were seeing the new layout, the losses for our business were not huge. But at full scale they would have been noticeable, so we knew we had to react and make some adjustments. The situation was not great, but we knew we had to move forward somehow. Something was wrong, and the most important question at this point was: why is something wrong? So we started off by creating hypotheses that would explain the cause of the drops in our metrics. Then we used both UX research and big data analysis to check those hypotheses. And those methods did not only answer the questions about our hypotheses, but also provided us with new ones. Looking back, we've used quite a lot of methods. Examples of UX research were surveys, multiple surveys, diary studies and usability testing. Examples of big data analysis were the analysis of traffic, where we basically track user behavior on our site, and the analysis of transactions, in which we check how much people buy and how much they spend. This may seem like a lot of methods for one project, but actually all of them were necessary, not only because we had different hypotheses to check, but also because we believe that UX research and big data analysis findings are very good at complementing each other.
To illustrate this, we'll show you a few examples from this rollout and how it worked in our case. One of the first things we noticed was that people who had the new layout were spending less: the GMV in this group had dropped. We analyzed it and saw that they were not buying less often, they were just buying less per purchase. We had a UX survey running on the offer page, where we asked the question: can you find all the necessary information? It was an open-ended question, so we got a lot of answers, but just a few of them were really useful in this case. Some users answered things like: "I cannot find more items from the same seller." We got similar answers from our diary studies, so we knew we had to check this one. Compared to our old offer page, where the seller section was quite large and exposed, on our new offer page the seller section was compressed to just the two most important pieces of information: the name of the seller and its rating. The link that navigated to the full list of items from that seller had been moved below the fold. But could it be a problem for a larger number of our users? We decided to check the average number of items from the same seller sold within the same transaction. We ran a quantitative analysis of transactions and found out that this metric was really dropping compared to the old layout. So now we knew that our users had trouble finding more items from the same seller, and therefore they were buying less. So how did UX research and quantitative research complement each other in this case? First, our monitoring alarmed us that something was wrong: the metrics showed us that the GMV was dropping. But we didn't know why. And here came the UX research with the answers: it gave us hints, so we knew what to check. We could then verify these hypotheses using the analysis of our transactions.
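The check described above can be sketched as a small transaction-log aggregation. How exactly the metric was computed at Allegro isn't specified in the talk; one plausible reading, assumed here, is averaging the per-seller item counts across transactions, with made-up transaction records:

```python
# Average number of items from the same seller within one transaction,
# compared between the base group (old layout) and test group (new layout).
from collections import Counter

def avg_items_per_seller(transactions):
    """For each transaction, count items per seller, then average those
    per-seller counts over all transactions in the group."""
    counts = []
    for tx in transactions:
        per_seller = Counter(item["seller_id"] for item in tx["items"])
        counts.extend(per_seller.values())
    return sum(counts) / len(counts)

# Toy data: one base-group transaction, one test-group transaction.
base = [{"items": [{"seller_id": "A"}, {"seller_id": "A"}, {"seller_id": "B"}]}]
test = [{"items": [{"seller_id": "A"}, {"seller_id": "B"}]}]

print(avg_items_per_seller(base))  # (2 + 1) / 2 = 1.5
print(avg_items_per_seller(test))  # (1 + 1) / 2 = 1.0 -> the metric dropped
```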
The hypothesis proved right, and we knew the scale of the problem. Okay, the other issue we encountered was the lower conversion of our checkout process. Throughout the whole project we had been monitoring the performance of every step of our users' critical path. And basically, offer page conversion was going up, so people were more eager to add items to the cart. But then, once they entered the checkout process, they were quitting much more often. Our first, quite obvious conclusion was: there must be something wrong with our checkout process. But that actually got us quite suspicious, because the checkout process barely changed during the redesign, and we didn't understand why there would be something wrong with it. So we ran a survey once again, but this time addressed to a very specific group: people who entered the checkout process and then gave up right at the next page. Why did they do that? The answer that kept coming back, and was really interesting for us, was that they just wanted to check the delivery options in the checkout. So they didn't want to buy the item in the first place. And that was alarming, because information about delivery shouldn't have to be looked for in the checkout process; it should be found on the offer page. So we compared the offer page before and after the redesign once again. The delivery information, which was quite prominent in the old layout, was now compressed to only the cheapest option. Of course people could click on it and all the options would be displayed with prices, but apparently this navigation was not intuitive enough for all of our users. So we formed a hypothesis that people were adding items to the cart only to check the delivery prices, and to verify it, to verify its scale, we ran data analysis again, this time on on-site behavior. And it proved that the hypothesis was true.
People in the test group, who could not see all delivery options on the offer page, were clicking the "all delivery options" button in the checkout process much more often. So what was the scheme here between big data analysis and UX research? First of all, big data analysis alarmed us once again, but then UX research completely redefined the problem we were facing and put us on a new track. And then big data analysis showed us the scale and confirmed that the track we were on was right. Last but not least, our third example of the interaction between quantitative and qualitative analysis. We noticed that the conversion rate on our offers listing was dropping, which means that fewer people who viewed the items on the listing went on to display the details of an item on the offer page. As before, we asked the users if they missed anything on the listing, and this time we didn't get any valuable results, because apparently everybody liked the new listing; they found it clear and comfortable. But we knew there was something to it and that we had to dig deeper. So we took a look ourselves at the old and the new listing, and we found out that the new listing had more white space, but on the other hand fewer items fit into a single screen. So we asked ourselves: could this be a problem for our users? The users didn't tell us that they had a problem with the new listing, but the quantitative analysis of user traffic showed us a strong positive correlation between the number of items users see on the offers listing and the conversion rate to the offer page. The more items they see, the more likely they are to convert to the offer page. We also knew they were seeing fewer items, because bigger items mean more scrolling, and apparently they didn't want to scroll that much, so they viewed fewer items.
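The listing analysis above boils down to a correlation between how many items a user sees and the listing-to-offer-page conversion rate. A minimal sketch with made-up session aggregates, using a plain-stdlib Pearson correlation so the example stays self-contained (the talk doesn't say which correlation measure was used; Pearson is an assumption):

```python
# Correlation between items visible per listing view and conversion rate.
from statistics import mean

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

items_seen = [4, 6, 8, 10, 12]                # items visible per listing view
conversion = [0.10, 0.13, 0.15, 0.18, 0.21]   # listing -> offer page rate

r = pearson_r(items_seen, conversion)
print(round(r, 3))  # close to 1.0: more visible items, higher conversion
```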
Knowing that, and without any hints from UX research, we decided to just prepare alternative listings with the same set of information but with smaller items. And bingo: when the items were smaller and users could see more of them on the offers listing, they were converting better to the offer page. So we thought, okay, let's go one step further and play a little bit with the set of information on the offers listing. We tried to remove information from the items to make them even smaller. But this time we went one step too far: the users didn't like it, so they didn't convert to the offer page that well. The conversion rate dropped. So apparently, making items smaller on the listing was good, but only as long as you didn't take any critical information away from them. So what was the scheme here? What was the interaction between these types of analysis? It was a little bit different than in the two cases before. At first we were alarmed by the monitoring of the conversion funnel that something was wrong. Then quantitative analysis provided us with a hypothesis: we knew about the correlation, and that was our hypothesis. And this time the hypothesis was also proven by big data analysis. So all three steps came from quantitative methods. UX research here could not deliver any valuable input. We think that's because the changes in the items listing were so slight that they were just below the users' perception threshold; they simply weren't aware that anything was wrong with the listing. Okay, so now you know what our action plan looked like. After noticing that something was wrong, we were creating hypotheses, checking them with UX research and big data analysis, then creating potential solutions for the problems we discovered, and then testing the potential solutions with A/B tests or, in most cases, multivariate tests.
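The final step of that loop, the A/B test, ultimately comes down to comparing two conversion rates. One standard way to do that, sketched here with made-up traffic numbers, is a two-proportion z-test; the talk doesn't name the statistical test Allegro used, so this particular test is our assumption:

```python
# Two-proportion z-test: is variant B's conversion significantly
# different from variant A's? (stdlib only; numbers are illustrative)
from math import sqrt, erf

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """z statistic and two-sided p-value for the difference between two
    conversion rates, using the pooled standard error."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value via the normal CDF: Phi(x) = 0.5*(1 + erf(x/sqrt(2)))
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Variant B (e.g. smaller listing items) vs variant A (the original layout).
z, p = two_proportion_z(conv_a=900, n_a=10_000, conv_b=1_000, n_b=10_000)
print(round(z, 2), round(p, 4))
```

With these toy numbers the difference (9.0% vs 10.0% on 10,000 visits each) comes out significant at the usual 5% level; with much smaller samples the same difference would not.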
So we've just told you about a pretty huge project in 20 minutes, which is a lot. But we would like you to remember one thing from our presentation: when you have a big project, a big implementation, always use both big data analysis and UX research. Big data analysis can often show us that something is happening; it can show us the scale, how often this thing occurs. But it's not very good at telling us why this thing is happening, and therefore you don't really know how to make your product better. UX research, on the other hand, is good at answering the questions: what do people feel, what do they think of your product, and why? It can also give you a list of usability problems to solve. So it provides you with a lot of hints on how to make your product better. But it will never show you the scale of a problem. And it can prove useless when people are simply unaware of the change, like on the listing, where the changes were really slight. So both approaches have their limitations separately, but they go very well together: they can give you the full picture of your product's condition and help you move it to the next level. And for sure you're wondering what happened to the new layout. Well, we managed to implement it, but it took not two months, rather eight. It turned out to be a huge project, with lots of hypotheses created and verified with dozens of A/B or MVT tests, and a lot of fine-tuning. But in the end, thanks to integrating different analysis approaches and different data sources, we managed to release a product without a loss for our business, and a product that satisfied our customers. And that's what really matters at the end of the day. So, any questions? Okay, so the question is why we did the redesign in the first place: what problem were we solving?
Okay, so it was a strategic decision, because not the whole of Allegro was ready to answer the needs of mobile customers, of people using mobile devices; we weren't fully responsive. Also, Allegro was kind of inconsistent in some places, because we were building from different components; we didn't have a well-defined front-end design system before. So we needed a set of components, like bricks, from which we could build our platform. And we wanted to refresh it, too. The redesign was based both on users' needs and on business needs. It was also for new customers: we have a lot of loyal customers who had their expectations, but younger users use different shops, and we wanted to look and act like the other retailers in Poland.