All right, so you're in the session on design validation, a.k.a. how to suck less. That's what's printed in the schedule, but it was really a working title; "meaningful design feedback" is more the approach I'm going for. Anyway, you're here because you're interested in collecting design feedback, maybe using it for your site, and just ensuring it happens in some form. So in this session we'll define what design feedback is, how to collect it, how to use it, that sort of thing. I'd also like to thank these people; they really helped with my session. I did a lot of chatting with them, and I also did a run-through of the session and they had some great feedback. And I'm a big fan of feedback. I asked them to focus on the content and the flow, because I don't get terribly excited by fonts. So: great feedback. OK, my name is Lisa Rex. I'm a user experience researcher at Acquia. Formerly a designer of sorts: I've done front ends, I've done some Drupal site building. I was also a genealogy researcher, which has been an interesting juxtaposition with my current role. But some of my biggest failures in my design career have been because I didn't get the right design feedback. The story that springs to mind is from my first web design role, where I was tasked with building the intranet. And that's hard. So my friend and I got together, ran down to the kitchen, made toast, got all excited, and built the whole thing. Then we showed it to people, and we had missed the mark on content and visuals, and the whole thing was scrapped. Good lesson there. So this is the design team at Acquia. We're definitely focused on products: things like Acquia Cloud, Acquia Network, Commons, Spark, Drupal Gardens. And this is basically the story of our process, which is definitely still a work in progress.
But we're growing, we're learning, we're adapting, and it's actually quite fun. These are our UX interaction designers. Their focus is largely on the visual and interactive side of things. They do a lot of the design briefs and style guides, and they also do the visuals and the prototypes: everything from wireframes to clickable prototypes. We have lots of approaches to the way we do things, and we decide based on the needs of the project and the timelines. Then there's the UX research team; there are three of us at the moment. We do the research planning and conduct the studies. We do a lot of fairly basic studies, like click tests, when that seems appropriate, and we talk a lot with our customer-facing teams: support and the technical account managers. So, taken together, our feedback process is pretty broad. We're mostly focused on the qualitative, not the quantitative. It's great to use Google Analytics or ClickTale or anything like that to see where people are clicking, but it doesn't tell you the why of things. That's why we're more focused on actually talking to people directly. Design feedback is anything from written emails to tweets, but we actually prefer real conversations with our stakeholders, users, and potential users. So yeah, everything from gut reactions to carefully planned studies. The reason for this is that we recognize that we are not necessarily our users, and that's the same for everybody: you can't know what your users need until you talk to them. There are lots of things you can learn from feedback. One I find really useful is figuring out the actual needs of your users. So your brand new user comes to your site, and things might be a little difficult to figure out, but maybe it's learnable over time; the next time he comes, he's whizzing through, great. And then there's the power user. She comes along,
and she's been using your site or your product for a long time, and she doesn't need all that introductory stuff. So it's working out how their needs vary. Here's a fun example: it's the drupal.org admin user permissions page. It's not public facing, but I think it's fun because most people don't get to see it. Obviously, what works for a few doesn't scale, and that's what I mean by scalability: scalability in the UI. You see this occasionally on forums and other social sites. As you probably know, the Devel module will let you generate reams of content, nodes and comments and users, and stuff it into your theme to make sure nothing breaks. The other thing you can learn is whether your content is actually working. This is a big deal; it's the main reason people are coming to your site or your project. It's figuring out what's the right amount, whether it's what they need, whether the messaging is right. Is the tone working for them? Maybe you're talking down to them. There are all kinds of things you can learn. And then the other thing is the UI text: things like your forms, where the labeling can be confusing unless you check with people. So I approached this talk like I would a research project. I talked to nine different people. Seven of them were actual designers from around the Drupal community and at Acquia, and I also talked to two product managers because I thought they would have an interesting take on things. So yeah, I did interviews with them and I looked for common themes. But first, a quote from my colleague, who is just super smart. He points out that feedback is the difference between design and art. With that in mind, I was like, yep. OK, so here are the common themes from my interviews with these people. Number one is that design feedback is absolutely essential.
And what's interesting is that not all of them started off thinking that way. At some point in your career as a designer, you might think you've got it all figured out, but as one of them said, "as often as I think I'm right, I'm usually not." So it happens. The main reason feedback is essential is that it just saves time: the faster you can uncover problems, before other people get involved and stuff gets put into code, the better. Also, talk is cheap; it's not always what people say, it's what they do. This major airline decided to survey their passengers to find out how they could improve their long-haul flights. One of the survey questions was: what sort of snack would you like mid-flight? And the overwhelming response was fruit. So the researchers thought, well, let's ask the flight attendants. And the flight attendants said, no, no, no, they want cookies, warm cookies. The researchers thought about this, and I think they realized that the passengers who said they wanted fruit were answering at home, surrounded by family, thinking about being healthy. The takeaway is that you want to observe your users, not just ask them. The other thing we learned from these talks is that during design feedback sessions, people don't quite know how to behave. They don't know what to say, or they say too much. I'll talk about how we can deal with that. OK, so you may be familiar with this meme: people see inspirational photos on Pinterest of baking projects that professionals have done, and it turns out it's harder than it looks. In the same way, people tend to think of themselves as designers, and it turns out design is really quite hard. So if you're gathering feedback, here's a little tip on reining in the designer in everybody. There are a few reasons. It's wonderful that they have ideas, and definitely take them all down.
But at the same time, they probably don't have all the information they need, such as whether there's existing research or design patterns or anything like that. Also, a flat design, like a drawing, probably doesn't capture all the interactions, so the designer still has work to do even if someone hands over a drawing. And the main thing is that they could have spent that time finding other problems instead. One of the people I talked to used the phrase "monkey hands," as in: don't reduce your designer to a pair of design monkey hands just executing someone else's drawing. Let your designer be the designer. OK, so let's talk about actually getting meaningful design feedback. Let's say your client or your stakeholder says, hey, let's make a thing, and for the purposes of this we'll say it's a home page redesign. As I mentioned before, everyone wants to feel like they're part of a successful design, and that's great. Definitely, definitely get your team involved. We get a representative from support and from QA. Definitely the designers, not just the lead designer but their peers too, plus tech leads and developers. And marketing too. Get everyone involved. One of the first things we do is start with a creative brief. I don't want to spend a lot of time on this because you've probably seen these before, but these are the headings in our creative brief. We talk about what the problems are, what it is we're trying to solve, and then the goals of the project. That's super important: get everyone aligned on those. Then success criteria and dependencies. Dependencies are things like one of the developers not being available until a certain time; anything that could introduce risk. And then we break down the aspects of the design, or new feature, or whatever it is.
The must-haves, the should-haves, and so on. The must-haves are interesting because the story or the project can't be considered ready to ship until all the must-haves are met. The should-haves are things you could release without, but you won't be done until you've done them. There are also must-not-haves: there are probably cases where the design just shouldn't include something, so be sure to declare that up front. We also talk about who our competitors are, which is just an awareness thing, plus any deadlines, who you're targeting, and inspiration, which I'll talk about in a second. And then notes, so people can take notes as you have these reviews. For example, we recently released these notifications, and for inspiration we looked at things like Facebook, LinkedIn, Jira, and Twitter; it's a well-established design pattern now, so it's helpful to look at those. And I mentioned success criteria; here are some examples. Starting with something vague, like "a higher conversion rate," we like to turn them into SMART goals: specific, measurable, attainable, realistic, and timely. So here it becomes something like "from 2 to 5 by quarter 4." I've just made these numbers up, but that's the idea. Now, let's say you're working on your design brief. At the same time, you want to start thinking about your feedback and research plan. Don't be tempted to skip this step; it's really, really important. Again, these are the headings from our research planning document. We first talk about the goals, which will hopefully match the goals of the design brief. Then we take input from the stakeholders; your stakeholders are whoever the product owner is, or the client. That input could be things they want to learn, or something in particular they want validated. The next thing is your methodology.
That can be anything from usability testing to one-on-one chats; or, if you know you don't have a lot of time, you just do something really fast, quick-and-dirty testing. Absolutely fine. Then, depending on those top three things, you decide who your participants are and how you'll recruit them. And lastly, what are you asking? If you're talking to a series of people, it's really helpful to have those questions written down: start with a briefing for the participant, introducing the study to them, and then all your questions and that sort of thing. Now, I don't know if you've ever had a situation where people couldn't agree, but possibly. One of the great techniques I just learned from my colleague Christine is the KJ method, and it's really great for getting a group to reach consensus. Let's say you want to work out your research goals, or how to focus the research on something. With the KJ method, you get everybody together in a group and you ask them the focus question. For example, we asked: what would you like to learn about customers in the next quarter? Everyone participating writes down all their answers on sticky notes and puts them up randomly on the wall. Then, as a group, they sort the sticky notes into categories that make sense, grouping anything related. The next step is that they label these groupings, and then they vote on them. Each person gets three votes: three X's for their number one, and so on. At the end, you just add it up and see which groups won. And there you have it: everyone worked together, and they had opportunities to change groups and mix groups up. You can read more about it; there's a link here, and Jared Spool wrote a great article. Also, as part of your recruiting methodology, you want to think about who you need to make happy.
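The vote-counting step of a KJ session can be sketched in a few lines. This is just an illustration, not any particular tool: the group names and ballots are invented, and I've assumed a common weighting where a participant's first choice gets three points, their second two, and their third one.

```python
# Tallying KJ-method votes. Each ballot is a participant's ranked list
# of labeled groupings; 3 points go to their top choice, 2 to the
# second, 1 to the third. Names here are made up for illustration.
from collections import Counter

ballots = [
    ["onboarding", "billing", "search"],
    ["onboarding", "search", "billing"],
    ["billing", "onboarding", "docs"],
]

scores = Counter()
for ballot in ballots:
    for points, group in zip((3, 2, 1), ballot):
        scores[group] += points

# The grouping with the highest total becomes the research focus.
for group, points in scores.most_common():
    print(group, points)
```

Running this shows "onboarding" winning, which is the whole point of the exercise: the priority falls out of the arithmetic instead of out of whoever argues loudest.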
For people in an agency setting, it's probably going to be heavily client-centric: your clients are the ones paying the bills, and that's who you need to make happy; they have their business objectives and so on. Paul Boag is another designer who has written, podcasted, and published an e-book about this client-centric approach, so check that out if you get a chance and it's of interest. Then obviously there's the user-centric approach, and I think for most people it's going to be a mix of the two; you decide what that mix is. At Acquia we definitely do a mix. OK. Now let's say the design brief has been signed off and design begins. Again, get everyone involved. In this example, we started off with, I think, 12 people, divided up into three groups, and started drawing and making sketches over a couple of sessions. It took several hours, but we got there, and we distilled it down into two quite robust drawings. And I noticed in this photo that one of my colleagues put a bird on it; if you watch Portlandia, the joke is that in Portland, if you put a bird on it, you can just call it art. So I left that in. Oh, and there's putabirdonit.com in case you need that emergency bird. So now let's talk about the research execution. Your designer has something to show you and to show others. What we usually do is start with the stakeholders. The stakeholders are the people with specific deadlines and goals, so they're going to care about everything in a design change that will impact timelines and budgets. We tend to do three different things: design reviews; design check-ins, which are shorter and more frequent; and commenting tools. But first let me talk about design reviews. This is where things can go awry.
The best thing, if you're the one leading the session, is to set expectations up front. If you have an hour, explain: we're going to spend 10 minutes going through the design brief, then 30 minutes actually getting your feedback, and the last part thinking about next steps. I always think it's a great idea to go through the design brief together, just to remind everybody what the goals are, that sort of thing. As the person collecting the design feedback, you're going to facilitate and take lots of notes, and let the designer present his or her vision. Another quote from Matt, who is really quotable: you'll probably get a lot of talk coming at you in the session, and as a designer and as the facilitator you'll learn to focus on what's important. So, actionable feedback. Sometimes there can be opinion wars, but what you want to do is ask people to focus on the problems and the goals, and try not to jump to solutions yet, because that comes later and it's really the designer's job. A good example: if you were doing a car design, you'd want the feedback to focus on things like the brakes and the steering wheel, not the headrest. If the design review isn't working, you can have shorter, more frequent check-ins, maybe every other day, and then one mandatory session once a week, just to avoid any surprises. The other thing we use is a tool called InVision, and I think there are other similar products out there, but it lets you upload a bunch of static screens and link them together.
So you have a sort of clickable prototype, and people can add comments directly on the screen, nicely in context. This is the back-end view, I guess you could say. We used it for Commons, because we also needed public community comments, and it worked out really, really well. So now I want to talk a little bit about actually talking to users. Christine says the best way to understand people's goals and attitudes is to people-watch; by that I mean watch people using your products. Getting a developer in the room to watch and take notes is really, really powerful; they'll definitely come away understanding what people need. I think of user feedback as that harsh, glorious feedback from the only people who can really make your design better. They're in a really good position to tell you what needs improvement; users always come somewhere with a purpose, and that's what you're going to uncover. The other thing is that most people are really happy to provide feedback. I always ask in person, and I rarely get a no; people are motivated to help, they like to help, and if it's something they have to use frequently, they'll probably be really happy to do so. The Drupal community, I know, is just really passionate, and it comes through in a lot of different ways. Also, customer hugs are awesome, I have to say. So you're going to need to find these people to talk to. It can be people off the street, but really it comes back to thinking about what your goals are; in your planning you're also thinking about who you need to talk to and how you're going to recruit them, which is why I mentioned asking in person.
Think about whether you need to talk to designers or developers or whatever type of person it is, and also offer incentives: we do Amazon gift cards, but maybe you just buy them lunch, or coffee, or beer; they all work. And make it a conversation, make it fun. So you have your users, or your customers, or whoever it is you want to talk to who isn't the stakeholder, and the next thing is to think about whether you want to do it in person or remote. I prefer in person, just because it's easier to establish a rapport and it's a little more fun. It's easier to communicate as well; there's a lot of non-verbal communication that goes on when you're face to face with people. It's also a little easier to deal with interruptions: I had the doorbell ring once during a session, and it startled me, but the participant didn't know what was going on and just kept talking. The cons of doing it in person are the logistics: if you're not in the same place, there's travel and that sort of thing. So, remote. We do a lot of remote, I have to say. We use things like Skype or WebEx, and it's just convenient; all you really have to do is schedule, and hope everyone has a good wifi connection, though sometimes there are issues with the audio and things like that. So what can you do with customers? Aside from usability studies and just interviewing them and getting their feedback, you can also do things like validating their priorities. We did this recently: we showed them some roadmap items in a spreadsheet and had them record their responses. We said, pretend you have $100; how would you allocate it across these items? We hid the previous participants' columns, and it worked really, really well. I think we will definitely do that again.
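Tallying that "$100 test" is simple enough to sketch. This is a toy illustration, not the actual spreadsheet we used: the participant names, feature names, and dollar amounts are all invented.

```python
# The "$100 test" for validating roadmap priorities: each participant
# splits an imaginary $100 across the roadmap items, and the totals
# give a rough priority ranking. All data below is made up.
budgets = {
    "alice": {"faster backups": 50, "SSL support": 30, "new dashboard": 20},
    "bob":   {"SSL support": 70, "faster backups": 30},
}

totals = {}
for person, allocation in budgets.items():
    # Sanity check: everyone spends exactly their $100.
    assert sum(allocation.values()) == 100
    for feature, dollars in allocation.items():
        totals[feature] = totals.get(feature, 0) + dollars

# Highest total dollars = highest perceived priority.
ranked = sorted(totals.items(), key=lambda kv: kv[1], reverse=True)
print(ranked)
```

The nice property of the forced budget is that participants can't say "everything is important": every dollar given to one feature is a dollar taken from another.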
A word on usability studies, which is probably the area I know the most about. A usability study is roughly this: you sit people down, they use your site or your product, and you ask them questions. It can be really open-ended. You could start off with "do whatever you would normally do when you first encounter a site," and then you might have them do a few specific tasks. But what I think is great is to give them a goal and just see where they go and how they get there, recording all the deviations and hesitations along the way. Definitely watch what they're doing: I've observed people repeatedly clicking in one spot, and even though they weren't saying anything, I saw that clicking happening and thought, OK, that could be a problem. The other thing is to use measurable metrics. At Acquia we've been rating everything on a one-to-five scale, where one is poor and five is excellent. As I observe, I come up with a rating from an effectiveness point of view, and then I'll often ask the participant how their experience was. What's funny is that it's pretty consistent: if I observe something to be a three or a four, they'll say, oh, that was a five. And I think, oh, that's nice, but you don't have to protect my feelings. The other thing is to communicate your findings visually. Reports are really boring, and your stakeholders are likely not going to read them, so I like to do annotated screenshots, or if you've been recording, you can take video clips, and that's awesome. If you are doing usability studies, it's really great to start setting up repeatable ones. If something changes in the design, you can run the exact same study again and directly compare the ratings; hopefully they went up.
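Comparing two runs of the same repeatable study can look something like this. It's a toy example: the task names and scores are invented, and the ratings are the facilitator's observed one-to-five effectiveness scores described above, not the participants' self-reports.

```python
# Comparing observed task ratings (1 = poor, 5 = excellent) across two
# runs of the same repeatable usability study. Data is invented.
from statistics import mean

round_1 = {"create article": [2, 3, 2], "change theme": [4, 4, 5]}
round_2 = {"create article": [4, 3, 4], "change theme": [4, 5, 5]}

for task in round_1:
    before, after = mean(round_1[task]), mean(round_2[task])
    direction = "improved" if after > before else "regressed or flat"
    print(f"{task}: {before:.1f} -> {after:.1f} ({direction})")
```

Because the tasks and protocol are identical between rounds, a movement in the mean is attributable to the design change rather than to a different study.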
In some of our studies we also have people actually editing content on a Drupal site, just to test things out. If your Drupal site is on a multi-environment hosting provider, you can use that to revert the database so it resets to where it was; that's a big time saver. And if you have studies where participants are actually altering settings on modules, you can use Drush to revert back to the defaults. A couple of pro tips. Always, always keep your stakeholders involved, and don't go it alone: it's great if you can get someone else on your team to come in, observe, and take notes, and then afterwards you can hash it all out together, what happened and so on. And always record your conversations with your participants. WebEx is what we usually use, but if you have to, just use the recorder on your phone or any kind of voice recorder. The reason is that your note taker can't capture everything, and you're trying to ask questions, listen, and come up with additional questions; you can't also take notes at the same time. It's hard, and I wouldn't recommend it, so definitely record. A couple of things if you're interested in learning more about usability testing and UX: Undercover UX is another great one, and there are a couple of books listed there. And this video by Squareweave, called The User Is Drunk, shows how pretending your user is drunk helps a little with the design; I think it's funny. And then Adam Connor and Aaron Irizarry talk about design critique a lot. They're actually presenting today at WebVisions, and I saw them last night. There's a link there, which I think is Adam talking about critique.
But yeah, they've done a lot on that, so definitely look at it, because it feeds in quite nicely to the whole design reviews topic. OK, so let's say you've got all your research, you're happy with it, you've talked to enough people. Now gather it all up and start synthesizing it: look for the high-level themes and then show those to the stakeholders. What you can do is group everything by the original questions people had, and then as a group you decide what to act on. This process does take a bit of time, but focus on the big things: forget about the things that maybe one or two people said, but if five people mentioned something, it's definitely something you want to focus on. If you can't decide, you can always run a KJ session again to decide what to act on; that works too. As you gather this information, you can do things like develop sketch personas, which really only come from a deep understanding of who your users are. The difference from a regular persona is that these are in flight; we're always learning, so they're never final. We usually start with the person's name, what they do, a little biography, a bit of their day-to-day life, and their technical profile: how well they know Drupal, whether they use version control, whether they're comfortable on the command line, that sort of thing. Then a task profile: the things they actually do on your site or product. I've broken those down into primary and secondary; another way to break it down is into the things that are crucial but less frequent versus the ones they do all the time. And then, generally, their concerns and challenges to do with their job.
Security comes up a lot, that sort of thing. What's great, though, is that once your team starts recognizing these names and profiles, they really absorb them, and they become able to defend that person. I've seen it, and it's really fun: "well, no, Dave would never do that, because Dave..." And I mentioned communicating visually. This is an example of a journey map; it could map to one of your personas. Imagine all the tasks on your site: after you've done usability testing, you know that certain tasks are frustrating and certain ones are happy and usable, so break them out. On the vertical axis we have the one-to-five rating, and across the bottom we have the phases. The phases will depend on your project, but phase one might be the brand new user, the first time they encounter it, and by phase three they're there every day. So now you've got your design feedback and you've identified the problems, and you have to act on it. I think the hardest thing is to actually commit a percentage of the development team's time to incorporating the feedback. They're probably really excited to work on new stuff, but it's going to be important that they also work on the things that make the site better. Translate that feedback into chunks of work, prioritize them, and get them on the development schedule. So, it might be a little early here, but: your turn. I want to see if people have done any design validation. Has anyone done this before? Or feedback, usability testing? A few? Yeah, okay, good. So just think about a project, the parts you're not sure about, and what you can do.
I'd love it if people came back to me some time later and said they ran some feedback sessions and got some good information; that would be great. We're also doing ongoing usability studies on our products and on Drupal, where there's a greater Drupal UX team. If you're interested in doing anything like that on the Acquia side, there's a URL there, so definitely sign up or come talk to me. And on the Drupal side, we're actually having a BoF today at 2:15. I forget the room, but it's on the board, and you can learn how to become part of the team that's improving and growing Drupal's UX; that would be great. We've identified a lot of the problems with the current process of contributing to Drupal UX, and now we're looking for people to help come up with solutions. I think the answer is going to be doing more work at the local level: if we can empower people who want to do UX, so they could, say, run a usability study at a camp, that would be amazing. Anyway, we'll get together and brainstorm some ideas there. So that's all I have, but if anyone has any questions, you can come up to the microphone. Can you explain the process that you go through when it comes to accessibility and people with visual impairments, when you're trying to decide what your client's priorities are and how to bring them down to earth so that it's more accessible? Okay, so the question was how we do accessibility feedback and balance the client's priorities. What are your client's priorities like? Are they concerned with accessibility or not? Yes, of course they're concerned with it. Yeah, it's a pretty big deal at the university where I work, especially for the visually impaired. Okay.
Yeah, that's not something I work on at the moment, but in previous roles we would just build in time to do accessibility testing, and that was things like making sure it could run in screen readers. Actually, it started off with just using the many online tools that check it. Sure. Yeah, I would say just try to make sure there's time, and get the team involved; I would imagine that would be part of the design brief. Did you ever have actual blind people, or say older folks, doing any testing in that regard? Well, for Drupal we have, yeah, definitely. Did you find that valuable? Yes, yeah; they uncovered a lot of problems. I know there's a Views issue right now, I think about the tables and the descriptions. Yes, yes there is. Yeah, okay. Thanks. Sure. So what would your advice be for dealing with a client that says, we already do design validation: we have ForeSee, or one of those other pop-up things that comes up on your site and says "rate the site experience from one to five"? How do you explain to them that there might be more to get from design validation than just a one-to-five rating? So the question is: what if the client says they already do design validation because they have something popping up on the screen asking users to rate the experience? Well, the first question is, what kind of responses are they getting? What percentage of users are actually clicking, and why would they be motivated to do so? And then the other thing to ask is: what are you learning from that? They might learn that it's getting a low rating, but they don't know why. The whole point of this sort of qualitative design feedback is that you can get at the why.
Why things are the way they are, why people feel that blue doesn't work, or whatever. So try that, and if they say, "no, I don't want to know why," well, then that's a problem. I don't know, yeah.

Hello, I have two questions. The first is, what percentage of user testing do you do on the admin side versus the non-admin side? And also, how do you incorporate feedback you get from social media, like Twitter, into the design process?

Okay, so, don't run away, I'm just going to double-check. The first question was public-facing versus admin testing. For something like Commons, we definitely need to test the admin interface, because the people who are going to download and evaluate Commons are the people who will have to set up the site. We haven't gotten into a ton of that yet, but I'd say for something like Commons it'll be maybe 60/40: 40% admin, 60% public, something like that. And then for something like Acquia Cloud, whatever interface the lead developer or whoever is using, we consider that public-facing.

So what I'm hearing is that it all depends on the project, basically.

Yeah, yeah. And sorry, what was your second question?

Social media.

Okay, so if someone tweets something, we just keep track of it. We've certainly had feedback come through in different formats, comments and tweets and things like that, but we want to make sure it comes into this greater pool of feedback, because if we start hearing the same thing over and over again, then we know it's a priority, and if it's just a one-off, well, we can't do everything.

Okay, thank you.
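As an aside for anyone wanting to try this, the "greater pool of feedback" idea can be sketched in a few lines: tag each piece of feedback with a theme, whatever channel it came through, and let recurring themes surface as priorities. This is a hypothetical minimal sketch, not any tool we actually use; the channel and theme names are made up for illustration.

```python
from collections import Counter

# Hypothetical pooled feedback: (channel, theme) pairs gathered from
# tweets, support tickets, comments, usability sessions, and so on.
feedback = [
    ("twitter", "confusing admin menu"),
    ("support", "confusing admin menu"),
    ("usability test", "confusing admin menu"),
    ("twitter", "slow page load"),
    ("comment", "missing export button"),
]

def prioritize(feedback):
    """Rank themes by how often they recur across all channels."""
    return Counter(theme for _channel, theme in feedback).most_common()

for theme, count in prioritize(feedback):
    print(f"{count}x {theme}")  # most-repeated themes rise to the top
```

Here "confusing admin menu" comes out on top because it was heard three times across three different channels, which is exactly the repeated-signal-equals-priority rule described above; a one-off like the export button sits at the bottom.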
How would you handle a situation where a stakeholder, probably a manager or your CEO or someone like that in your organization, really wants things a certain way, really likes a certain feature or wants it to work a certain way, and your usability testing either goes contrary to that or doesn't really indicate one way or the other? How do you balance that out?

Very carefully. So the question is, what happens if someone higher up in the organization, or a stakeholder, wants things a certain way, but your user testing and user feedback don't exactly match that? Well, you can go back to the goals of the project. Think about what your success criteria are, and if the user testing doesn't play out against the success criteria, then there's that. The other thing is, they're not the end user, so you can gently remind them of that. There's probably more you can do, but that's where I would start: just explaining, "well, we think it'll work better for these users because..." Does that answer your question?

Yeah, as well as it can be answered, I think.

Okay, if you have specifics, we can talk afterwards. There's also the point that sometimes they're just going to say no, and you're going to have to accept that.

Sometimes they'll say no. Yeah, definitely. Dictators.

Hi. I just wanted to ask, how does all this fit into an agile process? When you do a design, does the whole thing have to be completely finished by the time you go into the first sprint, or do you do parts of it and then implement some of it, say, in the first sprint? And then the second question, because I think I know the answer to the first one: how do you communicate that to the client, if they're used to working with more old-fashioned waterfall methods? How do you give them the, what's the word?
How do you give them the certainty that the whole thing's going to be finished by the end of the project without them actually seeing pretty much the whole finished project in Photoshop or something?

Okay, so working with agile. Well, I'm glad you asked; that's a good point. I think you've got to set up some milestones where you start showing them designs early, so that, again, there are no nasty surprises. For a lot of stakeholders, it's hard to envision things at an early stage, so I guess it depends on your stakeholders, but I would definitely start showing them things earlier if possible, because then if it's starting to go in the wrong direction, you can make that correction. And in terms of hitting those milestones, that's why, in our design briefs, we have the must-haves, the should-haves, and the nice-to-haves. Focus on the must-haves first; that way, if you run out of time, it's not going to be a disaster. There will just be something you work on in the future.

Do you work at a relatively early stage with Photoshop designs, or do you go to the customer with wireframes? How do you do that? And then how do you communicate to them (I'm thinking of a bit of an issue from personal experience), if they do have fully laid-out Photoshop designs, which bit is finished and which bit is still open? I find it difficult myself, working with designers, when you think, hang on, which bit of this is actually settled and which bit is still open?

Yeah. Well, designs are always in flight. Our stakeholders are all internal for the most part, and we'll start off with, well, like this recently: we actually had them drawing with us. So that was pretty cool.
And then from there, it gets translated. We started working in parallel with two designers: one person doing the really detailed Photoshop, or Fireworks, I should say, and one person doing the more interactive code prototype at the same time, and then they come together. That was tricky, because people would ask, "well, which part should I pay attention to?" So, yeah.

Oh, thanks, that's really interesting.

Anybody else?

A quick question that might be related to the discussion we just had. How ready are customers and clients to pay for usability? How do you put that in your offers, and how do you price it, et cetera?

Sure. That isn't something we do at Acquia, but in my previous job we would strongly encourage it, because this project will be more successful, you have a better chance of success, if we can take the time to show it to users, take their feedback, and incorporate it. It's a balance. Sometimes it gets cut from the budget, but I almost don't see how any sane person would want to cut it. You might scale it back, but I don't think it's a great idea to cut it entirely.

And at Acquia, do you do it differently there?

Yeah, at Acquia it's part of the process. It just is.

Because it's the product. Do you develop the products, or also work for clients?

We're only internal. We don't do usability for clients, so yeah, everyone's pretty accepting of that.

Sure.