All right, good morning. Today I want to talk about how to design user research so that it actually has an impact on your project. I am Bojhan Somers, a user experience designer from the Netherlands, where the next DrupalCon will be. In my free time I work on Drupal 7 and Drupal 8; I am the UX maintainer for both, and I wrote a chapter on user experience in The Definitive Guide to Drupal 7. So today I want to share my experiences on how to set up your project in such a way that the research actually has impact.

I work at a small user experience consultancy in the Netherlands called User Intelligence, and we do research on all kinds of projects: small projects with nonprofits where we do field studies, but also large quantitative studies where you run usability testing in a dozen countries at the same time. So it varies a lot.

In this talk I want to go over how to avoid getting into discussions like this one: "No, the participants weren't representative. We need a larger sample size." Have you ever done some kind of user research and ended up in that kind of discussion? Or something like this: "We don't really need user research. Department X really knows the user." And you're going, "I'm not sure about that." You followed some kind of design process where you did your research, either talking to stakeholders or actually talking to users, you did your design and implementation, and perhaps you did some optimization after that.

This session won't cover how to do usability testing or interviewing or anything like that.
It's really for that one time that you did manage to sell user research of some form in your project, and you're struggling to make sure that this user research actually has an impact. I think quite often projects get 90% of the way there when it comes to user experience, but that last 10% makes things difficult. This happens with a lot of projects: the user experience is kind of okay, but at the end of the day it's just not good enough to compete, or to give your users a good experience. I don't think this is a matter of priorities. They have the right priority, that it should be accessible and usable; it's in the execution where the problem is. Many organizations already realize how important user experience is for their products. Quite often they are themselves using a lot of products that have a great user experience. Yet when they go out and build products, those often end up being pretty bad, or just okay, from a user experience perspective. So a lot of clients see that their website or product doesn't really offer that great an experience, and we can help them get there, because there's always a part of the business that is hurting because of that bad user experience. And in a lot of markets today, it's not really just about competing on features anymore.
It's also about competing on the overall user experience. Even the iPhone, when it was introduced, wasn't a great competitor on features; it lacked a lot of things that were very competitive in the market at the time, like video chatting. But it did compete on user experience.

The company I work for does a lot of research on apps, on responsive websites, and on the common desktop experience, and what we find is that people often prefer the mobile experience, be it an app or a responsive website, over the desktop experience. It doesn't matter what the context is: even when they're at home and the laptop is only two steps away, they prefer the experience on mobile. I think this is because a lot of those websites and apps simply offer a better user experience around the core functionality. Users are more confident that they can successfully complete tasks on those platforms, and I think that's partly because mobile platforms have a lot more focus. Stakeholders, users, everyone who's involved knows that when you're designing for mobile,
it has to be more focused, because there's less screen real estate.

So I want to talk about the parts you will encounter whenever you're doing some kind of user research: how to pick the right stakeholders to be involved in the process, how to choose and communicate your method, your recruitment, and your test material, then how to report in a way that actually has an impact on the organization, and finally how to figure out what impact all of those changes have on the actual product.

I think a big part of a successful research process, whether you're doing surveys or interviews or usability testing, is finding the people within the organization who are most affected by the bad user experience. It's often not the development team you're engaged with that experiences all of those bad user experience moments. It's the sales manager, or the support team: all the people who are in direct contact with the user, who get all of the negative feedback when the experience isn't good.

For example, if you have an interface like this and you're a sales guy, you're going to have trouble; it's a difficult proposition. Whenever the product itself, on first look, is hard to explain to customers, the sales team is going to struggle. The same goes for repeat sales: when they go back to the client to sell them an upgrade, they will hear that the end users are having trouble using the system. And it's also support, where support has to explain inexplicable user interfaces, adding guidelines and a lot of documentation to explain how it works, or has to respond to the same questions over and over again, because the user experience isn't good.

So, one story. Last year I was working on a project for a large telecom provider in the Netherlands.
What we did was investigate how their online channels were performing at getting customers to the right support channel. A live chat support channel, for example, is about 50 to 80 percent cheaper than having someone actually call in, and having someone email is even cheaper than the other options. While we were working on this project with the product team, we found that they really had no clue what the top 10 questions were in each channel, and this really impacted the project. So we reached out to the support team that was handling all of these questions, and we got a lot of feedback from them: what the actual top questions were, but also how you could get people into those support channels more effectively. One example would be chat. What they found is that in the chat support channel, people had to get the feeling that they were talking to a real person. So some of the design tweaks we made were showing an actual name, and making sure that the first response in the support chat wasn't a bot-like response, so that it felt more natural. And what we saw is that conversion on those channels got a lot higher, because we made a lot of small tweaks rather than big ones. The support team were the people who were actually hurting, and involving them helped create a much better user experience. It also freed up a lot of budget for us to continue improving that particular part of the experience.

This is probably a comic that you all know. It shows how each stakeholder has a different picture of what the end product should be like. It's not uncommon on projects where you do user research that you're kind of the mediator between all of these different departments, understanding which one is most connected to the actual end user, and helping make
sure that their voice is heard in the process. Sometimes that's just repeating their knowledge; sometimes it's helping them improve their understanding of the end user so that they can provide better services, better sales, better support.

The first big thing I want to talk about is method. Whenever we do user research, we're often bound by a lot of restrictions. Usually there's very limited budget for doing user research, and there's limited time in the development process for it to take place. So we have to be creative in choosing our methods, and make sure that the one we do choose actually has an impact.

We have a lot of clients that come up to us and say, "I want a usability test," or "I want a survey," and this is actually troublesome, because no one goes up to their dentist and says, "I need a cap on my tooth number 16." That's an expert question, right? Whenever a client comes to you saying "I want a usability test" or naming a particular user research method, you missed the part of the discussion where they decided and defined the actual problem they're trying to solve, and you missed the part where they chose the best method to resolve it. Frankly, it's always good to be part of that discussion, because it's very important for user researchers to get a really clear understanding of what they're being asked. Sometimes the question is too broad, and it's hard to pick the right method; sometimes it's too narrow, and you're not looking at all of the connecting parts.

So we often use the following graph to understand what clients are looking to answer and to explain what each method brings. The first question we ask is: is the client trying to understand the beliefs and opinions of users, what they say? This is where the client needs to understand the attitudinal side
of the user when they're using the product or website. The other thing we look at is whether the client is trying to solve a problem where they need to understand what their users do with their product, what kind of behavior they express. And then there's the other axis. Is it a qualitative question, where the client needs to understand why people are having trouble with their product? Or is it a quantitative question, where they really need a number to explain certain behavior or opinions, or just a number to convince other departments that they're making the right decision? We're a little careful with the quantitative questions, because it's not uncommon that people just like numbers: they like to see numbers that confirm their thoughts, but they don't actually need them to move the project forward. And doing user research that is very quantitative in nature can be expensive. Not always, but especially when you're doing usability testing or those kinds of things and you're looking for numbers.
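As a rough sketch, those two axes can be expressed as a small lookup that suggests candidate methods for a research question. The quadrant-to-method mapping below is illustrative and simplified, not an exhaustive or authoritative version of the framework:

```python
# Illustrative sketch: map a research question's position on the two axes
# (attitudinal vs. behavioral, qualitative vs. quantitative) to candidate
# methods. The mapping is a simplification for demonstration, not a rule.

METHODS = {
    ("attitudinal", "qualitative"): ["interviews", "focus groups"],
    ("attitudinal", "quantitative"): ["surveys"],
    ("behavioral", "qualitative"): ["usability testing", "field studies"],
    ("behavioral", "quantitative"): ["A/B testing", "data mining", "intercept studies"],
}

def suggest_methods(axis: str, approach: str) -> list:
    """Return candidate methods for a question on the two axes."""
    return METHODS.get((axis, approach), [])

print(suggest_methods("behavioral", "quantitative"))
# ['A/B testing', 'data mining', 'intercept studies']
```

The point of a lookup like this is not automation; it's that naming the quadrant first forces the "what are we actually asking?" conversation before a method gets picked.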
Getting those numbers right is quite hard. So, using this schema, we try to explain to clients what each method will bring them: whether it will answer a behavioral question, a quantitative question, or a qualitative question. Most importantly, we can tell the client what each method won't bring them, what kind of data they shouldn't be expecting. It's very tempting, when you do a survey or a similar method, to try to get all of the questions in and ask everything of the user, because you sold user research that one time and you really want to do as much as possible. But this often leads to very inconclusive answers, where you get part of the picture but not the complete picture. So we try to focus on fewer questions, but at least on questions that we know this method can actually answer.

If we look at a couple of the methods: usability testing is, I think, the most popular one next to surveys, because usability testing is a great mix of attitudinal data, how people perceive things, and behavioral data, how they actually interact with the website or product. Then there are focus groups. Focus groups are often very useful for understanding how a concept is perceived, especially in that first stage when you're really just trying out different ideas. On the other axis, the quantitative part, there's A/B testing and data mining, very useful tools to get insight into what people are actually doing on your website. Another one is intercept studies, where you show the user a pop-up when they exhibit certain behavior on your website, for example moving through the cart. At the end of the day, pretty much any method in the user research spectrum could be added to this
graph. It's a schema created by Christian Rohrer, and I think he maps a dozen more methods onto it, but above all it's a frame of mind to help the client understand what a method will bring.

I think it's also interesting to show this one. There are a lot of online tools now; this is really a market that came to life in the last few years, where doing user research online with online tools has gotten a lot easier, and I'm quite excited about them. I use many of these. For example, we use Treejack to find out whether navigation changes work, where we benchmark two different navigation trees against each other. We use Loop11 to test workflows in a prototype: whether the flow going through the application or the website makes sense. And we use UserTesting.com whenever we're doing usability testing in, say, 10 different countries and we want the results fast. Some of these tools, like Treejack and Usabilla, are really easy, can quickly be put to use, and are very inexpensive. Other tools, like UserZoom, are a lot more expensive, but can answer much more complex user research questions.

I think the takeaway when it comes to methods is: show why you chose a certain method, to take away that part of the discussion where the client or part of the team goes, "Why didn't you do a survey? That's less expensive." Then you can go back and say:
"Yes, but we're trying to answer a different question." It also helps you scope your user research: you focus on the questions that you can answer with a specific method, and you show what data they should and shouldn't be expecting. But it also leaves a lot more room for answering all of those other questions that tend to come up. You can say: this is not the method that's going to answer that question, but if you want, we can do more research to answer those. So that's method.

Whenever you do user research, at some point you're always going to do some recruiting, whether that's recruiting stakeholders at the client, users, or anyone else; with each method you will generally get users involved. What we found is that it's not uncommon that after a day of usability testing, you get the feedback that the users didn't quite fit the audience, because what the client is really looking for is someone who's just like them, who knows the product just as well as them, and who is just as smart as they are, right?
And what they know is that their audience isn't like that, but that's what they're faced with. Even when you've recruited straight from their user base, it's not uncommon to get this feedback, because I think it's often tempting to blame the person in front of the user interface rather than the user interface itself. So what we do is heavily involve the client in the recruitment process. We develop a screener with them, which is essentially the set of criteria that a participant must meet, and we try to define as exactly as possible what they're really looking for. We also involve them in the actual recruiting of these participants: we either ask them to help find participants, or, whenever we're using a recruitment firm, we ask them to validate the people we found. Sometimes it's hard to find the people they're looking for. When you're trying to find that middle-aged man who's tech savvy, living in Austin, with pink hair and two degrees, you're going to have trouble finding him.
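A screener is really just a checklist applied to every candidate, and it can be sketched as such. The fields and thresholds below are entirely hypothetical, made up for illustration; a real screener would use whatever criteria you agreed on with the client:

```python
# Illustrative screener: the candidate fields and criteria here are
# hypothetical, but the idea matches a written screener - a participant
# is only booked if they meet every agreed criterion.

def passes_screener(candidate: dict) -> bool:
    """Return True only if the candidate meets all screener criteria."""
    criteria = [
        30 <= candidate["age"] <= 49,                     # target age bracket
        candidate["uses_product_weekly"],                 # actual users only
        candidate["role"] in {"editor", "site builder"},  # roles in scope
    ]
    return all(criteria)

candidates = [
    {"age": 35, "uses_product_weekly": True, "role": "editor"},
    {"age": 62, "uses_product_weekly": True, "role": "editor"},
]
print([passes_screener(c) for c in candidates])  # [True, False]
```

Writing the criteria down this explicitly, even on paper, is what makes it possible to have the client sign off on them before recruitment starts, and to point back at them when the "that wasn't our user" discussion comes up afterwards.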
So having them be part of the process also makes sure that you're reaching the right people, and whenever you can't, at least they will have vetted the people who don't quite meet the criteria but are fine anyway. So that's recruitment.

The other thing I want to talk about is test material. When I talk about test material, I'm thinking of sketches, mock-ups, paper prototypes, HTML prototypes, or the real thing, if you've actually already built it. Having done a lot of usability testing, what we find is that no matter what you're testing, whether it's a screenshot or a prototype, users have a lot of difficulty understanding the difference between low fidelity and high fidelity. Even when your design looks very sketchy, if it still has a number of elements that look like the real thing, people will believe it's the real thing. So it's very important to either shoot for low fidelity, meaning paper prototypes or real sketches, or to go somewhere in the middle and explain to the user that it's not the real thing, but it's close.

Another thing we learned about test materials: when you're developing them, especially with larger teams, there's always some discussion afterwards that you didn't test the right thing. Either you tested part of the flow that was broken, or the participants didn't carry out the right tasks; you see this a lot with usability testing. What we find is that having the client proof the scenarios or the questions you developed is a big part of making sure that they're involved, and making sure that you're not missing anything. It helps them validate the right scenarios, and it makes them catch any bugs, or things that could be improved, really quickly, before you do your user research. But another thing, and I think this is the most important part of having them do a test run of
whatever user research approach you developed, is that it's a practice run. When they see users carry out the same tasks a few days later, they can see how different the user experience looks from their perspective versus the user's. That realization, that the user comes from a different base level of understanding, is key in making sure that your user research lands, and it's a fundamental realization that will also help you a lot in the future whenever you're discussing your results.

So a key takeaway here: ask your stakeholders, or the team you're working with, to test run your approach, from answering questions to trying out scenarios. Involving the whole team of potential stakeholders in this process is a very effective way of validating your approach. Sometimes this means you get a lot of feedback from a lot of different people, and it's always important to keep in mind how many questions you can ask; more than 30 questions in a survey, say, is going to be bad. You can't ask everything, and you can explain that to them using the graph I showed earlier: this is the method we're using to answer these questions, and not the other ones. It also helps with conflicting priorities, especially when you're talking to a lot of different departments that are all involved. You can always refer to the project brief or the creative brief and say: no, this is the part we're touching upon with this user research.

The other thing I want to talk about is reporting: communicating whatever the user research found to the client. What we've found over and over again is that most clients don't really need a report. They may ask for one, but they don't really need it. What
they actually need is something that can be easily communicated to the larger team they're working with. A trick we use is to ask the client how the results will actually be communicated internally around the project. Sometimes that means they're just sending an email to the whole team, but a lot of the time they're using some kind of project management tool to turn the results into actionable, workable to-dos.

An example is Drupal. Whenever we do user research for Drupal, whether that was Drupal 7 or Drupal 8, we rarely created full reports. Instead, we defined the problems and then wrote them up in the issue queue, which is Drupal's ticketing system, because at the end of the day, this is where everyone lives and where all the work gets done. Being part of that system is key in making sure that the results actually get translated into something that has impact on the project. It's a great way of making a living deliverable: something that's directly tied to the environment where changes happen.

There are a lot of other ways, though, to engage stakeholders with deliverables that are different from your ordinary report. Users often have a mental model of how a user interface is structured. Designers or programmers look at the left side and go, "Yeah, it's nicely structured." Users feel like it's magic: there's lots of stuff happening, I'm not sure what's happening, but it works. Especially with more complex user interfaces, like booking sites, it's very common for it to feel like magic, and this can be for various reasons.
It might be that the actual concept you're trying to communicate is just hard to capture. For example, when you're building a UI for neurosurgeons, it's going to be difficult, and it requires a high base level of understanding to know what you're doing with that user interface. But the more likely scenario is that the user interface is unable to actually communicate all of its different parts. So we make use of concept models to show what the parts are and where users have a disconnect. This is an example: it tries to capture the bigger picture of the Views user interface, and it's one that Roy Scholten created. It captures all of the different parts of that user interface, and by using a model like this, you can show what part of the interface has the disconnect, where people are struggling, because it's often not the whole interface but just certain concepts that they're not getting. You can show how the system is put together, and then how the user thinks the system is put together. It's a great mechanism for capturing the more complex user interfaces, where there are a lot of moving parts and you're trying to make sure people understand what you're talking about.

Another thing we use a lot is what's called a web traffic map. What we often see when we do user research and involve something like web analytics is that the client has trouble understanding and grasping what the web analytics data says, what it means, and how it should drive priorities. So one of the things we do more and more is, instead of giving them the good-looking Google Analytics graphs that are really hard to understand, we give them a more custom-tailored report, something like this that you could actually hang up on the wall, where we handpick certain indicators that we think are the
most important and that can easily drive priorities or understanding.

Another deliverable is the customer journey map. Customer journey maps are a great way of showing the client, in a rather explanatory format, the complete journey a user takes with a product, because the product or the website is often only a part of that process. This is a map we made for patients who have prostate cancer. We showed what the whole process was like, and then we picked particular parts of that experience where their website was helping or not helping the patient. What's also interesting is that a customer journey map can help, for example, sales teams understand that they're part of a larger picture, a larger experience that the user has. In a lot of processes now, sales has to go beyond the notion that the sale is made in one particular part of one channel, like just the desktop, or just when the patient is getting their treatment. It's often won and lost across many parts of the experience, wherever the user touches the product. Making sure you show that is a very big part of conveying that bigger user experience.

No matter what your deliverable is, I think it's always good to tell the whole story. We use a pyramid model, a top-down approach where we start off by introducing the client to the overarching findings: the navigation that was troublesome, the terminology that didn't match expectations, or perhaps the features that were offered but weren't actually required by the user. From that holistic, big picture, we work down to the detailed findings of how certain interactions performed and how they influence that larger picture, because, especially for a lot of clients,
it's very easy to get bogged down in the details and to worry about whether a button should be on the left or the right, when the actual concern is that the user can't even find the page where that button lives. Making sure you capture that bigger, holistic picture of the process is very important; that's your added value there.

The other thing is: be brutally honest about the results. Whenever you do user research and ask people's opinions, they're going to be honest with you. They might say things the client doesn't like, or things the client has been debating forever, pointing in a new direction or a direction the client doesn't want. But at the end of the day, you have the role of the observer, and I don't think you should be judging whether a finding should be included or not. The client can decide for themselves to ignore certain parts, but those parts will still influence the product.

An example: as we keep improving interfaces, a lot of changes won't be noticed by the larger audience. Take vertical tabs. In Drupal 7 we introduced these on the content creation screen as a way to group all of the settings, and it was a very good change when we introduced it, because it reduced a lot of clutter. But in the research we did after that, not one user mentioned it was a good thing, because people felt: it works the way it should, so why would I comment on it?
I think it's a mark of good development when the user doesn't notice a change anymore, but it is something to report on when you're presenting findings: make sure you also communicate all of the things that are actually working and usable.

So: pick the right deliverables, and don't labor over a large report. It might be fun, but it's not the most useful deliverable for having an impact on an organization. Try to pick deliverables that can live in the organization: that can be put up on the wall, or that can live in whatever project management system they're using. And try to make sure you're telling the whole story, not just the details.

The last thing I want to talk about is impact. A way to have impact on a project with the user research you did manage to sell is to understand the monetary impact of any suggestion you make. At the end of the day, whatever suggestion you make is going to cost money. So our added value is often in balancing the impact on the user experience, making it better, against the costs involved in making it better: building the business case and all of that.

A few months ago I worked on a project, an app, that had undergone rigorous testing; I think we were in the fourth round of usability testing. They had a landscape mode in their app, and although it was better than in the last three usability testing rounds, it still wasn't offering the good user experience that end users expected. Instead of focusing on the details and figuring out how to make the landscape mode better, we asked the users: what are you expecting from a landscape mode for this particular app? What do you require, what should absolutely be in there?
And as it turns out, people didn't really care about the landscape mode, because this was an app that was going to be used far more in portrait mode than in landscape mode. People had no strong feelings about it, and that was interesting, because that's not the angle the team had taken. So instead of suggesting that they continue improving landscape mode, what we actually said was: users don't find it very important, so let's not keep improving it. Let's scrap it from the project and keep improving portrait mode, because that's where 99% of the people will want to engage with it. It's about understanding what the basic needs are, and sometimes that maps onto what the team is building, and sometimes it doesn't.

For this we use the Kano model, created by Noriaki Kano in, I think, the 1980s, to explain to clients why their product wasn't creating a great or satisfactory user experience. There are two axes. On the X axis is the degree of achievement: how well a given feature is executed in your product, from very poorly on the left to very well on the right. On the Y axis is customer satisfaction: how satisfied people are with the product, from very dissatisfied at the bottom to very satisfied at the top.

In this model we have basic attributes. Basic attributes represent the features that are so fundamental to the product that people just expect them to work, expect them to be there; these are the features people take for granted. An example: in Drupal 8 we now have a WYSIWYG editor in core. Pretty exciting for the people here at DrupalCon, but most people outside the Drupal world would go, "Wait, Drupal didn't have a WYSIWYG editor?" It's a basic need, right?
That's what they expect from a content management system. Another example would be toilet rolls in your hotel room: you definitely expect them to be there. You're not necessarily happier when there are three or four rolls, but you're definitely unhappy when there are none. When it comes to these basic attributes, people aren't more satisfied when you meet the need; but when you leave it out, when you don't have a WYSIWYG editor, or you don't have toilet rolls in the hotel room, it doesn't matter how great the product is otherwise. People will feel that it's broken, that it's not meeting their needs. So you should be on the lookout for these basic attributes and notice whenever you're missing them. For a school site, that might be a map of the campus: a basic attribute that people expect on a school website.

The second kind of attribute I want to talk about is performance attributes. Here there's a direct correlation between satisfaction and degree of achievement: roughly, the more, the better. These are the attributes that users often express when you're interviewing or talking to them. For example, the amount of bandwidth and storage you get with your web hosting provider: generally, more is better. The same goes for waiting at the airport: less time waiting is generally better. These are attributes you can easily compete on, and companies often do, like cameras with ever more megapixels. They're the most common attributes to compete on.

Then there are delight attributes. They represent the unexpected: you delight the customer by over-delivering or by giving them something they didn't expect. And I
think for a short time that was internet on an airplane, where you go, "Well, that's pretty cool." The interesting thing about these delight attributes is that whenever a user is actually delighted, it tends to result in real enthusiasm: people feel the product is great, and it's a very effective driver of word-of-mouth marketing.

So that's the Kano model. It's important to note, though, that these delight attributes, these really great interactions, become basic over time. As a company starts to compete on a delightful feature, customers get accustomed to it. Take Motorola, for example: in the last few advertisements I saw from them, they don't advertise anymore that you don't need a bulky battery and transmitter to carry around. That's not a selling point anymore; it's now just part of the market, not a delightful feature. But when it was introduced, it was pretty cool that you didn't have to carry around a bulky battery and transmitter.

If you look at content management systems, and Drupal in particular, there are basic attributes that people expect: being able to create content, categorize content, and add media to content. These are part of the basic attributes, and I think there are at least a dozen more that people simply expect when they're using a content management system. There are certain performance attributes as well. Speed is one: the faster your website, generally the better. I don't think I've ever met anyone who likes slow websites. Supported platforms is another: does it run on my particular technology stack,
database or otherwise? This obviously depends on the complexity it introduces, but generally, the more platforms we support, the better. And then there are things that delight users. I think in-line editing in Drupal 8 is one of those features that could actually be considered delightful. It's not something people expect from a content management system, especially an open source one, and I think it's implemented in such a way that it can actually be delightful, actually be helpful for content editing.

The same goes for WAI-ARIA. It's a technology that helps make highly interactive interfaces, like the interfaces for Views, Fields, or Blocks, accessible to people who use assistive technologies such as screen readers. It's genuinely groundbreaking: it's used somewhat in other applications and other content management systems, but not as extensively as we're using it.

Whenever we're working with a client, we try to evaluate how well their attributes are performing on this model. This gives them a picture of where they should focus their efforts: if they're not meeting a basic need, that's where they should focus first. If we look at Drupal 8, I think we can see two features that are underperforming right now. Speed: Drupal 8 isn't particularly fast. It scales better, but it's not fast. And media handling: we've done a lot of work to get that in there, but it's still not quite there; the user interfaces aren't there yet.

So what the user research does is help you find out: are we meeting the basic expectations? Okay, great. Are we actually competing on the performance attributes? Great, that's something that's easy to compete on. And are we providing delightful features?
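The talk doesn't go into how features actually get placed on the model, but a common way to do it is the standard Kano questionnaire: for each feature you ask a functional question ("How would you feel if the product had X?") and a dysfunctional one ("How would you feel if it did not have X?"), then look the answer pair up in Kano's evaluation table. Here is a minimal sketch in Python; the feature responses at the bottom are invented purely for illustration:

```python
from collections import Counter

# Kano evaluation table. Rows: answer to the functional question
# ("How would you feel if the product had X?"); columns: answer to the
# dysfunctional question ("...did NOT have X?").
# A = attractive (delight), O = one-dimensional (performance),
# M = must-be (basic), I = indifferent, R = reverse, Q = questionable.
ANSWERS = ["like", "expect", "neutral", "tolerate", "dislike"]
TABLE = [
    ["Q", "A", "A", "A", "O"],  # functional answer: like
    ["R", "I", "I", "I", "M"],  # functional answer: expect
    ["R", "I", "I", "I", "M"],  # functional answer: neutral
    ["R", "I", "I", "I", "M"],  # functional answer: tolerate
    ["R", "R", "R", "R", "Q"],  # functional answer: dislike
]
NAMES = {"A": "delight", "O": "performance", "M": "basic",
         "I": "indifferent", "R": "reverse", "Q": "questionable"}

def classify(functional: str, dysfunctional: str) -> str:
    """Map one respondent's answer pair to a Kano category."""
    code = TABLE[ANSWERS.index(functional)][ANSWERS.index(dysfunctional)]
    return NAMES[code]

# Tally hypothetical responses for one feature, e.g. "in-line editing":
responses = [("like", "neutral"), ("like", "tolerate"), ("expect", "dislike")]
print(Counter(classify(f, d) for f, d in responses))
# Counter({'delight': 2, 'basic': 1})
```

In practice you would tally many respondents per feature and assign the feature to its most frequent category, which is exactly the kind of quantitative backing discussed later for settling stakeholder disagreements.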
Perhaps. So, in the research we do with clients, we tend to see this extreme focus on adding delightful features. But it often happens that these delightful features take the focus away from the basic features, the things that should always be there and working well. This is where users start to feel: you're adding this fancy feature, but you haven't got the basics right. They feel this fluffy, cute feature is adding unnecessary clutter, and that the app or website or product is missing the point.

Most of the projects we work on tend to be feature-driven. We operate in a world of more features, new version numbers, and so on. Even user experience is often marketed like that: "We're now releasing version 1.8 of our product, and it will have even more user-friendliness." Really? Because at the end of the day, you want to be in a world where it's not enough just to build products that are understandable, usable, and satisfactory; you want to be building products that bring joy, excitement, pleasure, and fun. And I think Dries, in his keynote, clearly showed there's quite a road ahead of us before we're actually delivering products that offer that kind of seamless user experience.

So, to recap what my presentation was about: try to find the right stakeholders, the people that are hurting. Try to involve them in every step, from choosing the method and explaining why a method is or isn't a good fit, to recruiting participants and having them proof all of your test material. Then make a deliverable that can actually live in your organization, not just a report that gets stuck in some drawer. And finally, impact: know what the impact of your suggestions is on the whole of the project. That's what I came to talk about today. Thank you. Are there any questions? Yeah.
Yeah, I can repeat it if you want to. Thank you.

Q: So the big conundrum I run into is the vast difference between low-fidelity and high-fidelity user testing. Obviously, if you work with a good designer, there's going to be a vast difference between a grayscale wireframe or prototype, set in one font with no color or contrast, and what's actually being built. So if you can only choose one point in the project to do user testing, when do you do it?

A: Generally my advice would be: as early as possible. A lot of times, when you have the finished product and you're testing it, there's less time and budget left to actually improve it. What happens early in the process is that you can find out whether you have the right focus in your product, because sometimes it's not about where the button is or how it's visualized, but whether it should be a button in the first place. So when it comes to fidelity and usability testing, I generally say earlier is better, because a lot of designers know what they're doing, and you shouldn't worry too much about usability details if it's a good designer.

Q: I just want to follow up, sure.
Have you done any research on the difference between a wireframe and what is actually built, the actual comps? There could be a vast difference for the user. I would imagine that with a good designer, the usability of the actual site would be a lot better, because there's a big red button that they want you to click on that says "donate."

A: Yes. When you're really just testing usability, the real website, the real thing, is a lot better to test on. But when we test both early and late in the process, what we find is that early on, especially with sketches, you get a lot more feedback on the concept, on whether the concept is correct. When people have the real website, they don't focus as much on the concept as on whether the buttons should be blue or red, or whether something is positioned correctly. Yeah, thank you. All right.

Q: Hi, I had a question about the matrix, when you're trying to determine where your efforts should go. Assuming you have covered the basics, because you're not going to ship a product that doesn't do the basics, it seems like the features that delight, if we know they will eventually become basics, or many of them will, then to get the most impact from them you want to be first to market. The iPhone is a great example: they were first to market on a lot of things that are standard features on virtually every phone now. So how do you make the decision between a performance-based feature and a delight feature, in terms of, for lack of a better term, return on investment? Because the costs and the metrics for their success are so different: one is qualitative and one is quantitative, right?

A: Yeah, I think there are different strategies at play.
When you're in a market where there's heavy competition on the performance attributes, it makes a lot of sense to invest your time in delightful attributes. And when you're in a market like Uber's, for example, where the focus is very much on the delightful attributes, it makes sense to focus your efforts on performance attributes. So it depends. The interesting thing about delightful attributes is that they have a very strong return on investment, because they help you sell your product: they get people excited, and they tend to help with word of mouth. So generally I would advise looking at the market space and going where there's more margin. Yeah, exactly. Thank you. Other questions?

Q: So you've done some focus groups and some user testing, and the team agrees that you want to get rid of some pieces of the website, but the senior executive sponsor wants to hold on to those pieces like a sacred cow. Can you give me some tactics, some ways to talk to them to try to make them understand that that may not be the best decision?

A: So was that senior sponsor involved in the user research?

Q: She was, yes. She has been deeply involved in a lot of focus groups and still is not listening.

A: Okay, that's tricky. We've had that a couple of times, and yeah, sometimes the focus is not in the right place. They're trying to protect their baby, the thing they've maybe been working on for five years before you came in, so it's hard to get them to let it go. Another thing that we, sadly, do more and more is provide them with a lot of data. Whenever there's a lot of conflict between stakeholders, we mediate with data.
We give them a quantitative number and say: I know you really think this is important, but 800 users said it wasn't. That's a way to break the pattern where someone is still really anchored to their own opinion. Does that answer your question? All right. Other questions? All right. Thank you.