All right, hello everyone. I guess we'll go ahead and get started — it's five after, and we are going to be recording this meeting. Just before we start, I wanted to make sure you are all aware that the Norwegian national warning system will be going off sometime between 11:55 and 12:10. There will be a big siren outside, and also an alert on any phone connected to a Norwegian telecom network, so if you'd like, you can go ahead and put your phone on airplane mode or silent if that's convenient for you. We want to welcome everyone here today to our session. Our session is on usability and design, but our focus is why listening to users is important and how we do more of it. So, just to give you a little background, DHIS2 has a foundation of working with users. That foundation is built upon participatory action research and design. This is a Scandinavian research approach that involves collaborative activities carried out between users, developers, and stakeholders to enable system design, development, capacity strengthening, and system testing. This is a picture of Lars and the team in Kenya in 2010, and it really highlights the core foundation that DHIS2 is built upon: going into the field as a developer, visiting different clinics, finding out what users want, then coming back to the hotel and coding at night based on the feedback they've gotten, and putting it into production the next day. However, with the expansion of DHIS2, we are not able to follow that same process anymore. We are trying to be a little more professional — maybe using Jira, maybe using sprints, maybe having a testing instance. So the way forward with DHIS2 and design is to keep some of our core principles of working with users, but in a more sustainable manner, so that we can scale our design practices.
Over the last year and a half, UiO, the University of Oslo, has had a dedicated part of the software team, in addition to the researchers, to make sure we stay close to the users. Maybe I can have the UI design team stand up — Caroline, Artie, Joe, and Marcus over here. So yes, this is our UI design team, and hopefully it will continue to grow, as design and user experience is a high priority for DHIS2. But when I say design, many things come to mind. I think this is the hardest part: when we say design, we sometimes miss each other, we talk past each other. To an implementer, design might mean configuration in the software — how can you make your program usable, how can you turn the functionality in DHIS2 into something the users want. To a program manager, the word design might mean: I want my form to look a certain way, I want the workflow of the design to mimic the work process. To a National Ministry of Health, design means indicators — how can we get those indicators in, and how do we make decisions based on them. To a developer, it could mean a feature, software design. This is kind of the ideal idea of what design is: I want this button here. However, as you all know, working with DHIS2 as a generic platform, that might not be the role of design you are able to have. What you are able to do, though, is this: whatever design you are doing, whatever you call design, the source of the information you need is found with the person who will be using whatever you make. This is why it is so important to listen to users whenever we mention the word design, in whatever form that means to you. Since we are designers talking about users, let's begin with a Mentimeter and hear from you all. So we'll go ahead and scan this and ask a few questions. I'll give you a few minutes to get logged in and answer the question — take your phones off airplane mode, we have until almost 12.
I'm going to go ahead and change the Mentimeter, if that's okay. Oops. Okay, now I want to ask you this question: how do we get to this? Okay. Okay, this is nice. Look, in this room we have quite a wide variety of DHIS2 users. We have an app developer — this is nice, you get to choose where the button goes and how it looks. Are you talking to your users to know where that button goes? We have an engineer, a director of design — well, lots of designers. We have a developer, a solutions architect, a product owner, a product manager, a research manager, a digital health specialist, a superstar — you are all superstars in my eyes. That is actually a difficult job. As you can see, there is a wide variety in this room. You all come to the table looking at design from your own perspective and through a particular lens. With this session, what we want to do is give you, in your particular role, another tool in your toolbox, another way to add a dimension to your role, another way to start thinking about how to design with users in mind, in whatever form design means to you. Okay, we'll go to the next Mentimeter question: what apps do you think are easy to use? Google Maps. It doesn't have to be a DHIS2 app — if you have one in DHIS2, feel free, but general apps are also nice. Data entry — okay, this is nice. Tinder, swipe right. Google Maps. Oh, Instagram. WhatsApp. Slack. Wow, I don't know what that is. Oh, Ruter — look at you Norwegians. Cache cleaner. All good. Uber. Dashboards. Oh, Apple News. Trello. The camera app. So we were talking — oh yeah, I'll mention this while people are still getting their answers in. I was talking with Artie the other day about visual memory and your camera app. Visual memory is basically — you've used it enough, and she can correct me if I misuse the term, but basically you have a visual memory of where an app on your phone is.
So in your mind you can go right to the left-hand corner and swipe, and you know that that is where your camera app is. And that is part of design. Instagram is the big winner. WhatsApp, Google Maps, Uber, Slack. Okay, data entry — Joe should be happy about that. Okay, what makes these apps easy to use? Good workflow and friendly design. Consistency. Good data flow. Simplicity. Intuitive. Good UI/UX. Friendly to use. Not overcomplicated, clean — I like that. It's easy to find what you're trying to do. They're simple. I really like this. So here's the question to think about: do you think, based on the apps you just mentioned and the responses you gave, that this was done on the first try? Or do you think that what you like about these apps took an iterative process — talking with users, being able to tweak something, test it out? This is what it takes to make a good app; you're not going to get it on day one. So the next question is: how do you, in your implementations, in your job with DHIS2, make time for this opportunity to do iterative design, in whatever capacity you work? Is this something where you need to start talking to your Ministry of Health about the importance of design and what it takes? Is this an opportunity to talk to the donors and say, maybe we should have a deliverable line saying listening to users is important, because this is how you get to a good app? Last question: why is it important to listen to users? To drive adoption — nice. To keep them using it. To make sure we address their needs. To understand a lot about what they need — I love this. We had a nice conversation with one of our HIS users, and they said, we know our users, we talk to them a lot. However, it's easy to start assuming, too. So you need to go down and really talk with them, even though you know them as your own personal users, to know how you can satisfy their needs.
They know it a little better than us — ding ding ding. There are lots of workarounds that they make. For relevance and ease. To ensure you truly understand their needs and challenges. The system is for them — I like that, the system is for them. How do we make it so that they want to use it? Better use of system data — I like that. If you don't listen to users, if they don't trust you, if they don't understand what you've designed, why would they use it? Are you going to get the correct data output that you want? I think we have some more. They're the ones that have to use your system — yeah, this is all great. Users need to be comfortable, to encourage them — I love this. I really liked the impact session yesterday; a couple of the presentations were focused on this district-level concept — an excellent concept. I really liked how they were focused on going down and talking with the users, creating indicators that mattered at the district level. Instead of just pulling national-level indicators and pushing those down to the districts, what they were doing was creating district-level indicators that were meaningful at that level. So I like that people are already doing this. But what we want to focus on — oops, that was a surprise, just wait. Our focus today is, of course, why listening to users is important. We all know this is true. The hard part is: how do we do it better? How do we get output from listening to users? It's nice to have a conversation, but do you all know how to turn it into some sort of output that can be pushed into development, or pushed into conversations with the Ministry of Health? So I'm going to go ahead and turn the time over to my colleague Artie, and she will continue with the presentation. Thank you. Hi everyone, I'm Artie, and I work in DHIS2 as a UX researcher. I somewhat recently joined the team.
I think Kim did an excellent job of summarizing why our entire organization is slowly shifting towards paying more attention to the user experience, because when I joined and first used the application, I had a bit of a hard time myself, right? And I think some of us may echo that. But I found it very nice that a lot of people here were open to the idea of making this not just a functional app, because everybody in this room knows that DHIS2 is probably the most used app in so many domains for collecting data. Taking it to the next level is where we are moving now, and it's going to be an exciting journey. So let me begin and walk you through what the process of design looks like, and what it could look like for anybody in the room, regardless of what they do — whether they are a developer, whether they are a manager. For this example, let's say I'm an implementer and I need to configure a program. What are the steps I may want to take in order to get to my ultimate goal of configuring this program? My first step would probably be to find the problem that needs to be solved. As we go through this, some of it may resonate with you and some of it may be new, so feel free to make a note of it, and we can have a discussion towards the end as well. But finding the problem to be solved: let's say I have figured out that manual entry of data can be tedious and time-consuming. So the problem I'm trying to solve is improving speed as well as reducing redundancy, because we know that when data is entered manually, there are more chances of data inaccuracy. Digitization could be one of the solutions for that. So my first step would be to identify the problem that needs to be solved. Secondly, I would then want to start building more context around this problem.
Right — trying to figure out what exactly the causes are, or understanding the environment in which this healthcare worker works. What kind of devices are they using? Are they using Android apps? Are they using iPhones? Are they using tablets? Do they have internet connectivity? What kind of rooms are they working in? What time of day would they be entering this data into the apps? These are all things that help me with my next step, which is to design the solution for this problem. We'll spend some more time here as we go ahead. But designing the solution could take the form of, say, building a new form that is easy to use — one that perhaps does not have as many data fields, so that people can do this quickly. Because at every point of this journey, I need to keep coming back to the first thing: what is the problem we are trying to solve? If we are trying to solve for speed, then I need to ensure that the design I have created does that — it makes it easier for people, so that the tech does not get in the way of the user but rather supports them. This can go wrong very often, and it's a difficult, complex problem to solve, which is why the next step is extremely helpful: testing the solution. Let me just pause here and ask all of you — actually, you know what, you can close your eyes, because I don't want you to worry about what you are saying and what others are saying. So take a minute, close your eyes, and with a show of hands tell me whether you think you are a user-centered designer, given how we have discussed what design means. Do you think you are user-centered? Yeah. Okay. I see the response, and that's great. Keep your eyes closed again. Of the people who said that they are user-centered:
Did you speak to a user in the last three months? Okay. All right. Great. So I see that there's a mixed group here. There's a mixed group of people who are well aware that they are user-centered and make an effort to keep speaking with users. There are also people who are aware that they may not have that sort of user-centered approach. So I think this session should be good to help you get started. And for people who have been doing this: that's great, that's amazing, and you should be doing more of it. I think my role here is to help you understand that this honestly isn't rocket science, right? Listening to users shouldn't be as difficult as we may anticipate it to be — but it is a skill. And in order to learn that skill, and in order to perfect it, we need to do more of it. So if you haven't been doing it already, I urge you to start, and I'll tell you why. And if you've already been doing it, then I urge you to do more of it — and again, I'll tell you why. We'll move on. The CEO of Land Rover had something very interesting to say: if you think good design is expensive, you should look at the cost of bad design. This one resonates really well with me, because a lot of times good design becomes less of a priority — the system already gives us the data that we want. But does it really? That's again something we will be looking at. All right, coming back to this: we are going to focus mostly on designing a solution and then testing the solution, because this is where the loop of design happens. And while it might be difficult to keep testing and redesigning again and again, even once your program or your implementation is out and released, let me try to convince you. All right, so let's look at several Venn diagrams.
So let's look at what we think users want versus what users actually want. This would be a great, strong overlap, where what we are designing and what they are using more or less entirely overlap. There are some things they may want more of, and then it's a back and forth, but this is what we want: a strong overlap. But a lot of times, what ends up happening is that what we think the users want is a very small section, and what the users actually want is something more. What's interesting is that the reverse can also happen: we think the users want A, B, C, D, E, but what they are looking for is a very simple solution. And then this may also happen, where we are worlds apart. What happens when scenarios two, three and four occur? Think of it like a domino effect. It begins with poor assumptions being made. When there's a disconnect between what we think the users want and what they actually want, poor assumptions are made, and so poor questions are asked, and so now we have poor answers. Then poor decisions are made, poor actions are taken, and eventually you're collecting poor data — which is not what we want. So this is probably the first reason why you should be listening to users more: to avoid that disconnect between what we think the users want and what users actually want. How many of you are able to relate to this photo — a show of hands, maybe? Okay. All right, that was also me. Can you give us some examples of when you have felt like this? It doesn't have to be related to DHIS2. Does anyone want to? Oh, that's really annoying. Yeah, I know what you mean. Oh, accessibility. Yeah.
Yeah, I struggle with that every day. And I think this contributes again to the kind of work we are doing, right? Frustration can be a very big problem, and often an underrated one, because users are not in our close vicinity. But when people are frustrated or confused or just annoyed, a lot of repercussions can follow. Let me just walk you through some of these. Slow uptake of the app: people may not want to use it as it is intended to be used. A lot of data inaccuracy can happen. Maybe let's stop there and think about that one, because data is extremely important — that's what DHIS2 is essentially trying to collect. But when somebody is frustrated and is expected to enter, let's say, 25 fields because they have been told to do so, people find ways around it. They may enter false data, they may enter zeros, they may find their way around it. Of course we have ways to validate our data as well, but falsification does happen; we can't look past that. And that's something that can be avoided if users are listened to in the first place, and if we are able to create design that is user-friendly. Sharing poor reviews with fellow colleagues: what that essentially does is create a sense of mistrust around the application, which makes them hesitate to use it another time, to give it another shot. I've faced this many times — I'm not a big fan of, let's say, Microsoft Teams, and that's because my first time using it was terrible, so I've never had the courage to go back. And the second-to-last point, I think, is an important one: bare-minimum usage. They do just enough of what is expected of them, and that stops them from exploring more.
So let's say we are pushing features which might be amazing and which might really help them do their job better — but since... is that the... okay, okay, just to be sure. Yeah. So they find other ways of doing their work, and they don't end up exploring as we may want them to, through no fault of theirs. So in order to create designs which are effective, efficient and easily understandable, listening to users becomes a very, very important part. Because if you are trying to create intuitive design, users need to be able to use it as intended, they need to be able to use the product quickly and efficiently, and they need to be able to understand what's written on the screen. I think a lot of us face this challenge, and DHIS2 is actually a very unique product, because we are doing half the design and the implementers are doing the other half of the design. So there's a great back and forth between the design team here and them. But I think that makes it even more important for all of us to value the importance of design and of the users in the field. So here's my attempt at mapping this out. This is not an exhaustive list, but these are the complex problems that DHIS2 users are working on — the big, lofty goals that we are all eventually working towards: improving access to education, making policy-level decisions, being able to control an outbreak, COVID, all of that. These are very big, difficult, nasty problems that all of us are trying to work on, and getting people aligned on these goals is already a mammoth task. Training is one of the ways we do it — getting people aligned on a problem together. But a lot of times, what happens when design is not intuitive is that a lot of our focus ends up going towards how you click a button, what your first step and second step and third step would be.
And this takes away from the very valuable time that trainers have, which could instead be used to train people on the large, complex problems they would like to solve. So, and I'll just leave you with this one for now: intuitive design basically helps reduce training effort. This is something I know we all wish we could reduce, and I think it is possible. It takes a bit of time, and it takes a little bit of training to be able to create designs which are intuitive, but it's definitely an achievable goal in my opinion. So this is something we saw previously. If I were to broadly divide how the process of design works, we can split it into two: you have ideation, and then you have release. In ideation you are thinking about what problems need to be solved, designing, and all of that. Testing is part of ideation and part of release as well, and then you have developing. And there are lots of tools you can use to listen to users more. Some of these may just be words to some of us — they may not make too much sense. So I would suggest you don't worry about what exactly is meant by, say, usability bug reviews, or task analysis, or competitor analysis. What's more important is to see the variety of ways we have to listen to users. Quite simply, if you are just speaking to them more, that is the first step to start building that communication, so that we understand the problems a little bit better. So today, let's look at these two methods of speaking to users, of listening to users. User interviews are a very easy way to do that.
Usability testing — and I'll speak a little more about this, because I'm not sure all of us here truly know it — but long story short, what we are trying to do in usability tests is to observe more than ask questions. So if we have an idea — and this idea could be half-baked, a fully functioning prototype, or a design that's already out and released — we want to see how users take the tasks that we give them and how they complete those tasks. This can be done remotely, in person, sitting in a lab, or in the field. It's a very flexible technique. And I'm going to show you what we did very recently when we were in Sri Lanka, working in collaboration with the HISP team in Sri Lanka, where we were able to make progress on the Android app. Let's look at that. These screens might be familiar to some of you, and even if they're not, that's totally fine — but this is what the Android app looked like previously, and some of it continues to date. Still looks like that, yeah — nothing has changed yet. But these are some work-in-progress ideas and explorations we have been doing, and you can see there are subtle changes here and there. Most of it seems more or less the same, but making these subtle changes across the whole app is what the Android design team, together with partners, has been doing. This is the TEI dashboard, which is slowly shaping up to look somewhat different. I want to keep stressing that these are works in progress, because one of the takeaways I hope you have is that this is not a one-time job. We have to keep going back, testing again and again, to make sure people really, truly understand what this is. What exactly does it mean to have an event stage, right?
Is it clear that the top half of this is static information? Is it clear that these blue circles are in fact buttons? Are they able to understand — or not necessarily understand, but maybe guess — what they might find in the family tab or the graphs tab at the bottom? So when we were in Sri Lanka, we were basically testing for these four indicators. Learnability: can the design we have created be learned easily? Discoverability: are things discoverable? Do users know where they will find the button they are looking for? Will they know how to refer a particular patient without being trained to do so? Can this be made more intuitive? Comprehensibility: are things comprehensible? This was very interesting for us to test, because a lot of times users of DHIS2 may not be English speakers, so the language we use becomes extremely, extremely important. And finally, memorability: if people are given a particular training and then don't use the app for, let's say, three or four months, will they remember what they had learned when they start using it again? So let me walk you through what a usability test session looks like. If I were sitting in a usability test session for about 45 minutes with a participant, I would first start by priming them, letting them know what this session is going to be. I'm going to tell them that we are not testing them; we are simply testing the application. These are certain things you have to do in order to make them feel comfortable. And once the tasks begin, we also let them know that if they share their honest feedback, it is only going to go back into the loop. It's very important to create that sort of comfort with them.
And I know that a lot of people here have been building that sort of rapport with users. What also ends up happening — and this is purely from experience — is that I have gone into the field with very important people, right? Doctors. And you see how people change their personas; they may always want to present their best work to you. I mean, I would have done that too if I were in their position. But we want to meet them where they are. So it's very important to remind them, constantly, again and again, that this is for their benefit as well as ours. It's very nice to be able to create a mentor-apprentice sort of relationship with them, where they understand that they are the experts and not us. The minute you achieve that, it becomes a far easier process to get honest feedback, and not just what I want to hear. So while I'm speaking with them, I may give them a situation. Maybe I might say: a woman has come here with her child, and you want to be able to find this child in the app again. Something I want to point out here is that I said you want to find the child — I didn't use the word search, even though that's the thing I'm testing for. It's interesting how quickly people latch on to what you're saying and then try to find exactly that. This happened to me yesterday when I was creating some of these slides: somebody said, I think you should hide this slide. And while I was looking for options, I was like, how do I do this, where's hide? And I couldn't find it anywhere. Then I realized there's another word they use for it, but my focus was so much on finding the word hide. So that's just a personal experience, but I think some of you may have experienced this yourselves as well.
Yeah, I think creating these scenarios and making the task sound realistic is one of the ways I do it. There might be other ways as well, but this has helped me. This one has become a bit of a joke: being able to ask questions back, especially when participants are not sure what to do. A lot of times people will stumble when you give them tasks, especially difficult tasks. So if I've asked them to find somebody, and this is my first interaction with them, they may take forever, and then they get scared and think that we are testing them. And so they ask us: how am I supposed to do this? Is this right? But the true value of usability testing is being able to see how they stumble. As uncomfortable as that might make you feel, it's very useful to take a step back and just allow them to go through those mistakes, because those mistakes are what will feed back into the design, and then we will know exactly where they were stumbling. A good example of this, and a small tip here, is to ask the question back. If somebody asks me, how do you do this? — you just go back and ask them: how would you do this? What do you think about this? Just reverse it back to them. Avoid over-explaining the design. This is a tendency that designers in the field especially may have — and I'm using the word designers very broadly here; if you're an implementer, you may want users to understand what you have created before you start testing with them. I think a lot of us are guilty of having done that; I have been as well. But when you hand over this additional information to them, you're unfortunately taking away from the beauty of usability testing, which is that it helps you find all the mistakes that have been made.
But if you help them understand the design first, you will never be able to discover this for yourself. Asking open-ended questions: I think this is another thing that is so easy to mess up. For example, if I were to ask you whether you enjoy shopping — let's say testing for Amazon or Flipkart — I have already made the assumption that you like shopping. What I should be doing is taking a step back and asking whether you like shopping to begin with. A very common one that I've seen people do, and have done myself, is asking whether they like what we have created. I mean, you're not really giving them much of an opportunity to say no there. So instead, ask them what their opinion is — it's a matter of rephrasing. These are very small things, and like I said initially, this is not rocket science. It's a skill that you can pick up, and it can be done by a lot more people than you would think. So this was us in Sri Lanka. We spoke to a lot of primary health midwives, to nurses, to people who work in public health facilities. We were also observing how these sessions happen. Here you have Carolina and Marcus, who were sitting in a different room; they were able to see whatever the user was doing and make really useful notes for us, which we could then relate back to our team. These are just two examples — maybe let's focus on search first, and then I'll show you what happened when we were trying to test filters. So we noticed that search was actually working pretty well; the new design we had created was working pretty well, and as you can see, 13 out of 14 people were able to discover this button. But we also noticed that when they came to this screen, there was a lot of confusion, and the reason was that they thought this was the TEI dashboard — especially people who had not used it previously.
They thought this was the final screen they had to work on, and that was only because we had not made the result look clickable. They didn't realize it was just an intermediary step, and that they needed to click here in order to go to the next screen. So this was a big learning, and we are working on how search results can be shown so that even users without training can go ahead and enter data where it needs to be entered. We also noticed something else I should share. If the task was to add a new event to a particular patient, what they might end up doing is clicking on this, especially non-English speakers. Even though we thought this was very clear, it says "Enroll new patient"; how could that be missed? But when they saw this button, they thought it would take them to the next step. So they would click here and add what they thought was an event, but in effect they were creating duplicate patients. That was a very useful insight towards the end of the rounds of testing. So, if you look at search results, there are three kinds: there might be zero results, a single result like this, or multiple results, which is more or less a list. We noticed that when there were multiple results on this screen, there was absolutely no issue, because this is a familiar pattern they had already seen in DHIS2 and in other applications. It was also very evident what they needed to do when there were zero results. But a single search result was where the problem was biggest, so now we have to go back to the drawing board and see how we can make that case a little better.
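The three result cases above can be sketched as a simple display-state decision. This is a minimal illustration of the idea only, not actual DHIS2 Capture code; the state names are hypothetical:

```python
def search_result_state(results):
    """Pick a display state for a tracker search, based on result count.

    Illustrative only: the real app logic differs, and these state
    names are made up for this sketch.
    """
    if not results:
        # Zero results: make the "enroll new" action the clear call to action.
        return "empty_with_enroll_cta"
    if len(results) == 1:
        # One result: the card must look clickable, or users treat it as the
        # final dashboard (the confusion observed in the Sri Lanka tests).
        return "single_result_clickable_card"
    # Many results: a familiar list worked well in testing.
    return "result_list"

print(search_result_state([]))            # empty_with_enroll_cta
print(search_result_state(["p1"]))        # single_result_clickable_card
print(search_result_state(["p1", "p2"]))  # result_list
```

The point of the sketch is that the single-result case deserves its own treatment rather than reusing the list layout.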
Filters and sort: a feature we think can be extremely useful, but is it really used? That is what we were trying to figure out. On the right side you see the final design we have settled on, and I think it's working very well. But initially, on this screen, people were not able to find the filter chips. We thought it was pretty simple, because you see these chips up front, which say things like "starred patients with high BP"; that's exactly what the filters look like. We thought that because it was upfront, people would click it, but it didn't work as intended. The reason they were not clicking is that they didn't understand it to begin with. The same thing happened with sort. We had sort as one of the options there, and the first point of interaction, being able to understand what you're looking at, was missing. We realized this was going to be a problem everywhere: whether you're in Sri Lanka, India, or Bangladesh, a lot of countries are not going to have English speakers, and if this hasn't been translated, it's going to be an issue. So one way to get users onto the feature is to push them into discovering it, which is why the final button you now see has been made blue. It looks slightly different from every other chip, and when we made it like this, people clicked on it far more than we expected. Another thing we did is that the sort button was initially on top, but we moved it closer to the list so users could associate sorting with the list they see at the bottom, while filters is a different feature altogether.
Daniel Burka, who is sitting right here, once told me that a poorly performing design, especially when you're iterating, can feel like a very bad thing. It's easy to fall into the trap of having created something and really, really wanting it to work, and when it doesn't, it feels terrible. But it's also great to realize that early in the process, so you can make the changes right then; it turns into a very powerful thing to do at the beginning rather than later. This reminds me of the Land Rover quote we saw at the start: changes can be expensive, so it's good to find these failures early instead of having to change things later. Once we were done with the testing in Sri Lanka, we were able to create results that looked something like this. We did several rounds of testing, and at the end of round one you can see that, for example, search results did not test as we expected, inputting values for enrollment was a little difficult for users, and filters and sort were basically not discovered by users. But over time, as we made iterations, a red became a yellow, and the hope is that a yellow becomes a green, so we are confident enough to ship it to production. This is a very useful sheet, and the reason I'm showing you some of this is that if you stop by the expert lounge later and are interested, these are templates we can hand over to you, and you can start doing this yourself. These kinds of matrices help you decide and prioritize work: what are the things we should take up for design, without becoming too technical?
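The round-by-round results sheet described above can be sketched as a tiny scoring helper. This is a hypothetical illustration of the idea, not the team's actual template; the tasks, success rates, and red/yellow/green thresholds below are made up:

```python
def status(success_rate):
    """Map a task's success rate in one round to a traffic-light status.

    Thresholds are illustrative, not taken from the DHIS2 team's template.
    """
    if success_rate < 0.5:
        return "red"
    if success_rate < 0.8:
        return "yellow"
    return "green"

# Hypothetical per-round success rates for three tasks from the talk.
rounds = {
    "find search button":    [13 / 14],
    "enroll a new patient":  [0.45, 0.70],
    "discover filter chips": [0.20, 0.60],
}

for task, rates in rounds.items():
    trail = " -> ".join(status(r) for r in rates)
    print(f"{task}: {trail}")
```

Tracking each task as a red-to-yellow-to-green trail across rounds makes it easy to see which iterations are converging and which tasks still block a release.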
Yeah, I think priority is the key thing here: it helps you decide what to focus on, because time is of the essence and a lot of people don't have enough time to do much testing. One of the most useful things for us in Sri Lanka was that we had very strong buy-in from the Ministry of Health and from the Family Health Bureau and its team, and that is critical for anybody. It gives you access to participants, it means you don't have to worry about logistics because they are there with you, and they get to see the whole process take shape. So, that's Pramod. He is also here with us right now, and he became an integral part of our whole exercise because he was essentially our translator during all of these sessions. He was amazing at his actual job, which is outside of translation, but even as a translator there are lots of things one must keep in mind, and he was able to take a neutral approach, not push his own agenda through the users to us, and instead step back and really see what we were trying to do. So maybe I'd like to invite him on stage to share some experiences, if he'd like. Okay, not putting him on the spot. Thank you so much. So let me start by saying it was a great field trip, and there are a few reasons why I would say so. Firstly, most of us have been implementing DHIS2 in health, and now we are implementing DHIS2 in education and so many other sectors. In health, we have implemented DHIS2 for some ten years, and we have talked to these different types of users for so many years. When we talk to them, even when we want to implement DHIS2 for a new program, we feel we already understand what they say.
And without even thinking, we tend to assume their requirements. When it comes to requirements identification, that is partly true, mainly because health data workflows and the information collection process don't change much. But when it comes to design, it's not the same. Some of us, just by looking at DHIS2, know which version it is, and it's not just us: the users too, because we keep upgrading instances and they go through this journey of different versions. But the thing we don't really do is get their feedback: what do you feel about these new interfaces? So while I was with this team, joining as the translator, which I consider a privilege, and I will mention why, I came to realize that things are a bit different, things I had not understood or seen that way before. There was also the process, as was mentioned before: the placement of each of us in different rooms. As Arthi mentioned, two of us were joining for the interview, but it was more about observing. We had three people simultaneously observing each click, each movement of a finger, all of these things. It was a major task, and so was the process of interviewing one user for more than an hour. It was a great experience, even that. And finally, the self-reflection. As mentioned, I was a translator, and it was a very difficult role to start with, because even though I was not involved with this particular implementation, this was a Ministry of Health implementation and we did not actually customize DHIS2 for it.
Me working as an implementer and now trying to play the role of a translator was very difficult initially. When a user is struggling and things are very obvious to me, and I'm doing the translation, I was tempted to help the user. It was difficult. But that's the thing: when you work with a team for a period of almost two weeks, you get used to it and you understand the team dynamics. So it was a really good experience, an agile experience, with all these team meetings and debriefings in the night to prepare for the next sessions. Overall a really good experience. Thank you. Yeah. So I was just saying that I'm going to let Carolina take over. This was the first section, where I focused mostly on making a case for why you should be doing this in the first place, and Carolina is going to help us see how we can do more of it. Yes, thank you. Okay, so you can hopefully hear me. Once again, we're going to go into Mentimeter. We had two purposes for this session: we really wanted to tell you what we've been doing, our work and experience so far, but the other purpose was really to hear from you, both about your experience and about your needs and what you might like to work on with us. So we can go into the Menti; I think the link will also be there. I can hopefully just move on from this one. Yes, great. First, we just want to hear from all of you in this room. In addition to the earlier close-your-eyes-and-raise-your-hands exercise, we wanted to get some numbers on how often you feel you engage with end users when designing or implementing your DHIS2 solutions. We see some are doing it more than once a month, and that's really exciting to hear; we would love to hear more from you for sure.
In the interest of time, I'm going to move on to the next question. But we also see some potential here, and that's what we will get into: we feel there is so much potential and so much insight to gain even from just a couple of small user testing sessions or user research activities. So we would really like to hear from you. You saw some of the techniques Arthi mentioned earlier, and we wanted to hear from you as well: what kinds of techniques have you used? We decided to keep it open-ended so you weren't swayed by the ones we showed earlier, but if you remember them, that's fine. Yeah, lots of good ones there: user interviews, user stories. We would say that we as a design team have until now really focused on the first few steps of the ideation process. We've worked on gathering requirements, collecting user stories, building context, and understanding the problem. Now we want to continue into the later parts as well: designing, testing, and iterating on the usability of the solutions. There are a lot of great options here. It seems like some of you have been doing usability testing; again, we'd love to hear from anyone who has been doing these different things. There are a lot of interesting techniques here, but it seems like interviews are a good and, you know, low-barrier approach. Okay. And then, like I said, we would really like to hear whether there are any resources, support, or help from the design team at UiO, at DHIS2, that would be useful to you: if you were conducting user research, if you wanted to gather user feedback, if you wanted to run usability tests. We will go into some of the things we're thinking of helping with, but we'd also really like to hear suggestions we have not thought of.
So yeah, it seems like templates, guidelines, just getting some help with how to start. You don't have to start from scratch; there are lots of great templates and starting points out there that even we are using and then adapting to the DHIS2 context. This is great. We will of course go through these answers later and use them to guide our future work. All right. Now, to lead into the next section: what do you anticipate would be your main challenge if you were to conduct usability tests, or user research in general? Again, we've been focusing a lot on our usability testing work and what we've done so far there. So yeah: language. We saw that in Sri Lanka too, but we luckily had Pramod as an amazing translator. Access to users; avoiding bias, like the leading questions we were talking about; time. Time is always an issue: maybe you have a very short-term project and you need to get it going, get it set up as soon as possible. So yeah, these are great. Time, costs, language, a lot of similar things. I think we are good on time, so I will move on. Spoiler: there is always going to be a barrier. So we want to talk about how we can reduce these barriers so you can get the kind of user feedback you would like. Let's start with number one. A lot of people mentioned budget and costs. So, what if you don't have enough budget? Here are some suggestions, but we would also love to hear yours; you can throw them out. One thing you can do is test remotely. Recruiting and visiting users is one of the biggest costs here, so if it's possible from where you are, you can get on a Zoom call with the user.
You saw how in Sri Lanka some of us were on a Zoom call and could hear what the user was talking through and see what the user was testing. You can also consider a testing approach called guerrilla testing, which is what Arthi did a couple of rounds of in India before we went to Sri Lanka. She found some people who were not DHIS2 users but had similar work backgrounds, healthcare workers, pharmacists and so on, and did a lighter-weight test with them, and those iterations were built into the prototype. So we already had a few smaller rounds of testing before we went to Sri Lanka. You could also recruit from a known pool of participants: if you start doing usability testing, hopefully you collect a pool of people you can reach out to, and there are even recruiting agencies for these types of tests. Let's go to the next one: what if I don't have access to the actual users, which we also saw some people mention. Again, going back to guerrilla testing, you can test with similar user profiles that you have closer access to. And we love our community: maybe there's a way to reach out to the larger community, and someone with more access to end users can help you reach them. As we said during the tests in Sri Lanka, you are helping make software that is used in more than 70, maybe now almost a hundred, countries. You can contribute to something really impactful and greater than yourself, and hopefully we can build on that in the community. And here is: what if I don't have the technical capacity? This is the place where we hopefully can work together and help you more.
So first of all, the ideal would be to hire a UX researcher in your organization who can help you with this. We are lucky to have a UX researcher, which is super nice, but again that runs into the budget question. You could also look into building capacity within your organization: hold seminars and workshops and in general build that capacity among your staff. And then reach out to us to see if we can work together, because this is where we hope we can help reduce the barrier, contribute resources, and collaborate on templates and so on. What if you don't have enough time? That also came up. We would say: don't do too much at once. Do smaller tests, test rapidly, and be flexible. Rely on templates so you don't have to start from a blank page, and focus on the area where testing would probably make the most impact, where you most need to know that it's efficient, effective, and easy to use. That way you reduce the scope a bit. Just a couple more sections of advice for usability testing, and for user feedback generally. Try to build a regular cadence of conducting usability testing with smaller tests. Don't do just one huge test or user feedback session once a year; depending on budget and time constraints, try to do smaller ones throughout the year. It supports the iterative processes we really like here at DHIS2, instead of an all-out effort. Also, if you narrow the scope you're testing each time, you get deeper insight into that particular part of the scope.
We also suggest that you share progress with stakeholders, to really show the impact of the process. That will motivate them to let you do more of it: increase access to users, maybe increase budgets, because they see they're contributing to a bigger thing; they see what their feedback did and proof of a positive change from it. And then, of course, what we want to do in this room, at this conference, and in the community in general, is collaborate and learn from each other. So collaborate with HISP groups, NGOs, other implementation partners, donors, people with different types of experience. We would love for you to share in the Community of Practice, and we'll try to think of more avenues in the Community of Practice where we will reach out more. And I think we simply need to lower the barrier to feedback: almost any insight can be useful, any insight can help us make DHIS2 more useful. So, looking back at what you all wanted in the Mentimeter, that will inform our decisions here, but we really do want to develop guidelines and run webinars that go into even more detail on how you can do usability testing and the other parts of the design process. And we want to take our own advice: we are not yet keeping this kind of regular usability testing cadence for everything we build. We're doing quite a lot with Android now; we had a really good post-release test of the line-listing app in Rwanda. But beyond that, we also need to establish this routine ourselves. And we want to scale; going back to the purpose of this session, we want to help you do the same.
We want to scale with user feedback, but we also need to scale how we take action on that feedback. And then, of course, we have the expert lounge later today at 5, I think; feel free to come and share your experiences with usability testing, and to ask how to get started with it and all of this. And we have our very simple design-at-DHIS2 email if you have questions, want to reach out, or want updates on anything else we're up to. So I think we're at the question part. How are we doing on time, before the alarm goes off? Before we open for questions, I have asked Enzo to share an experience. Remember how I asked you, Enzo? Okay, come on! Yeah, so we were talking with Enzo, and he gave a very nice use case; I'll remind you what it is: an implementation you were writing a contract for, with a little extra money, but not enough for piloting, so you decided to pass. Yeah, okay. At some point we had a contract where we were going to do an initial design phase, but we decided to actually have some direct interaction with the users first. So with Stefano, we went for a very quick and dirty prototype first so that we could test it with the users. Is that what we're talking about? Okay, good. The first meetings were going to be just for design, where we would decide what the design was going to be. But instead, while we were getting ready for that trip, we just made a prototype with what we thought made sense. It turns out it didn't, and that was very useful to get out of the way very quickly. A lot of the assumptions we had made, assumptions that were also verified by the partner we were talking to, were simply wrong: it was not what the users were going to be doing.
We were thinking, to make it more contextual, that a lot of the interaction with the clients was going to happen in the streets; we hadn't even considered that aspect, and the order in which things were going to be done was not what we envisioned. So that initial interaction with the users was really valuable. If we had waited until the end to do all this, it would have been a lot of wasted time and effort. So, you know, failing early is failing cheap, essentially, and that's something we should focus on, I guess. Thank you, Enzo; I appreciate you sharing that on the spot. So: you were given all the papers, the reports, the paper forms, the information from the Ministry of Health. You were basically taking the information you had and making some assumptions early on, but by going to the users you saw a different picture and were able to make a quick change. And you were not developing; this was configuration of a program: how do the data elements work, how do you ask questions. Yes, please, and just so the online people get to hear. Thank you so much for the presentation. I am a scientist in the NCD department at WHO headquarters. I noticed that we repeated the word "user" a thousand times, and I think it is important to consider that this is a heterogeneous word, not a homogeneous one. It could be a nurse, a patient, policymakers, authorities at the local level. When that is the case, the interests of the users may well conflict with each other, and I wanted to get your opinion on how you deal with this. For example, people working at the facility level want less data, but people working at the national level want more data. How do we strike this balance between users when deciding the shape of the system? Thank you. I'm
Joe, the designer for DHIS2, and I mean, that is one of the big problems of designing for DHIS2: that overlap of users with different needs. The best way we can attempt to handle it is to provide sensible defaults for each user type that we know about, but then also provide the tools for users to configure for experiences we don't know about. If we attempt to second-guess, kind of like what Enzo was saying, what a policymaker needs, we can get halfway there, but then their needs might evolve over time, and the needs of the nurse might evolve over time. So I suppose what we try to do in the interface is capacity building, in a way: the capacity to customize your own experience. That's where I feel we are taking the apps more and more now: we have that solid generic experience, but then the tools within the apps themselves to make locally relevant apps and experiences. That's a lofty goal; that's the ambition. I'm going to add something. I think this is also a question of implementation: are you getting all the users in the same room, are you having these conversations, are you communicating why this is important? Do the end users, the people putting the data into the system, know why it's important? Is each data element, the way you're configuring your program, meaningful to the user, and if it's not, why isn't it? So it's about questioning, taking the time to design with all the stakeholders, and "stakeholder" includes the user: having the opportunity to reframe, to ask whether this indicator is really useful. It's an opportunity to take the time to redesign your indicators so they are meaningful from top to bottom, and to get buy-in, letting the user know why it's important: because of this indicator, you get malaria medication. From my point of view of design, that's something. Thank you. I have two questions, actually. The first one is somewhat related, and it's about these processes: if people want to apply this, what sort of
sample sizes should people be thinking about in terms of users? Is one enough to get good insights, or should you try to get some variation and reasonable sample sizes? And I'll throw in my second question: to what extent do you think there's value in other types of tools where you do this kind of user testing by proxy, tracking clicks and things like that, tools that might be easier to run without much effort but gain you analytics and so on? I can take that. Is this working? Yeah, about sample sizes: there's this really interesting website some of you can look up, the Nielsen Norman Group, kind of the pioneers of user experience. They've been around for a very long time, and they've done enough research to suggest that testing with five users will surface around 80% of the usability issues that crop up for at least one in three participants. So it's okay to test with five people, but the five people have to be of one particular persona profile. If I want to test whether something can be used by DHIS2 users as well as non-DHIS2 users, I would ideally want to test with five of each, not a mix. Again, there are always going to be barriers: what if you're not able to reach, say, ten people, five of each? I think it's okay to be flexible. A lot of times, the usability issues you see very early on are going to be the large ones. For example, when we were doing the guerrilla tests, these were not even DHIS2 users, but we were able to spot very critical design issues that we might otherwise have first encountered in Sri Lanka. Because we got those out of the way early, with five users, we were then able to get to the meatier stuff rather than having to focus on larger problems that could have been solved by a
prototype. What was your next question? Oh yes, what other techniques could you be using? Usability testing is fairly flexible in itself, but I personally prefer a hybrid approach: I mix methodologies. That might mean doing interviews alongside it. If you're working remotely, some sort of diary study could be helpful: you find dedicated people who are willing to do this, and giving them an honorarium of sorts at the start is very useful so that you have the commitment; then you give them certain very simple tasks to do every day and have them journal it in some way. That could be an easy, low-hanging fruit. Right, so the question is more about whether you should be using things like eye-tracking devices, and so on? Yes, things that are more passive, where you don't really do anything; you just gather analytics, building some sort of usability analytics. Yeah, analytics is super valuable, and it has to be a mixed approach of quantitative as well as qualitative data. Quantitative data helps us understand what is happening, which qualitative data may not capture very well because we are talking to smaller groups of people. But qualitative data, speaking to people directly, helps us understand the why behind the problems. We can see from analytics that people didn't click somewhere, but why did they do that, what was their thought process? Those are answers you will only get from the qualitative method, so a mix of the two is a great idea. You have a device that you use to follow the clicks? Awesome. So, are there — you mean websites and tools already out
there? There are so many tools. If you do a simple search for UX research tools and websites, you will come across so many that you'll be spoiled for choice. My advice, though, especially if you are just starting out, is to keep it extremely simple: use tools you are already familiar with, because again, we want to avoid tech getting in the way of good feedback. If you're comfortable with Zoom, it would be great to use something like that. If you're able to meet users in person, that would actually be the best thing to do. I can give you some examples: there's a website called Lookback, there's Looppanel, there are lots of them. We were using Looppanel; it's Indian, so I was trying to support them. ProtoPie and Figma are great tools for prototyping; some require more effort and some less, and we saw that in the field. ProtoPie is a really cool tool that lets you create very realistic scenarios: if you want to see how somebody searches for someone, it will pop up a keypad and they can enter things. All of that is great, but it requires a bit more effort, so use something like that for a higher-effort study. But what would be the simplest way to do it? Taking screenshots. Making printouts, maybe even giving them paper prototypes; you could even draw things out. So there are very low-fidelity ways of doing it as well. Thank you; first of all, great presentation, a really well-run session. I just wanted to ask: you presented the case of Sri Lanka and how that whole process went, and we saw the questions on Menti as well. Do you already have tested
questionnaires maybe we could find them on the community forum somewhere tested questionnaires specifically for DHIS too and if they have proven and deserved or any guidebooks on let's say if you are doing any FGDs focus group discussions if you could just follow them and that could help out a lot of us in you know conducting usability test and gain from your experiences so if you are able to come to our expert lounge we have created a toolkit basically essentially a google drive link which has some very basic templates that you can use right one of them is going to be in a sample questionnaire for conduct the interviews so you will be able to see what icebreaker questions should essentially look like right if you are doing certain tasks what they could potentially look like these are all samples and you can switch them and make it more relevant for yourself we are also planning to include another excel sheet which was very similar to this so that you are able to kind of maybe create notes in that so there are three steps to like any kind of usability tests or research or interviews or any of that there is a part where you're preparing for it and then you're conducting it and eventually you're analyzing it actually there's a fourth part as well which is then being able to take the recommendations that you are giving and feeding it back into the data into the design that's the most important actually so we have a template that allows you to be able to communicate that to your team in a crisp concise manner because honestly speaking not everybody goes through these reports and that's also because they're tedious nobody has the time so it's got to be small and concise so we do have it long answer sure we want to send us an email to designerdhi2.org we can't believe it wasn't taken yeah you can just send us an email and we can send you the packet and hopefully we'll also have send out something on it if you can do it oh we have one minute left are there any other 
questions yeah there's plenty but I'm just thinking of time thank you my name is Anakre from Burundi head of HMIS I have been very interested by exciting presentation about usability the HHS too so every time we manage a problem with our users they ask how to manage how to do this in the HHS too so I want to ask if there is a possibility to have a solution about the HHS to you user manual if we can have a link for someone who ask and you give him one it's the alarm just one second yeah they're doing usability testing right now yeah sorry you didn't finish I was going to finish I want that if those tools exist it would maybe I ask it to the HHS team also develop it yes that's a great suggestion and I think I mean through our cross product meetings and other things we've been getting a sense of what exactly is it that the users of DHIS do and require and you suggested a manual or a method to be able to document this I think we can think about that and I think it's important to then realize what is the real need the need is then to be able to train so I'm going to give this a thought if manuals are the best way to do this or maybe there might be something even more effective but thank you so much for the suggestion yeah also last questions okay so thank you so much for the great presentation and the session today so my loud thoughts and my question may not be relevant directly relevant to the DHIS too but to my curiosity I just need to ask this that whenever you get to have a like complex requirement from the user for the software development or the app development and whatever the approach which you follow let's say what fall cycle which is happening in the software development how much do you see a well balanced approach for instance like you know aesthetically well designed application to the core user functionalities which you have to make so how do you see that and how do you focus on this thing to be done first I don't want to speak for others but for myself at 
least again there's this quote I remember like make what's important and make what's truly functional but if you can make it really beautiful that would be great this is not me I'm quoting something else entirely but I think personally I think function over form especially in things like this but it kind of matters and depends on the context always if the purpose of an application or a website is to delight and that could be the case with let's say video games it could be the case with like mental health stress busters and things like that it's again something that's coming up in like financial technology so that it doesn't feel like such a difficult thing anymore so people are trying to add those elements of delight and things like that so aesthetics matter a lot here however functionality kind of trumps all of that a user should be able to complete their task as expected without doing any errors so here I would then prioritize this yeah okay yeah I think we can close it maybe yeah again we'll let you all go to lunch thanks for being here where it was such a great meetup we're very happy to see you all so we'll seek around for a bit if someone wants to