This is the first time I've given this presentation, so it might be a little crunchy around the edges. And, you know, the topic, designing UX for specialized workflows, the point really is, how do you design UX for something that you do not understand? And really, nobody could expect you to understand, right? The other thing is that I have a bit of a framework that I outline in here. It's been my personal approach to working with projects like this. There's no book or anything, so I might be using terms that don't exist. Sorry. I will also make these slides available. I'll post them to thesketch.org because they are chock-full, so I'm going to kind of skim over the top of them. If you miss anything, don't worry. They will be posted. Okay. So why do I have anything useful to say on this topic, right? Like, you might want to know that before you actually pay attention to what I have to say. I've been working at Red Hat my entire career, basically since I graduated from school. And in that time period, I've worked on a lot of projects that have very specialized workflows. Like, you know, I worked on the Red Hat Satellite product. That is a system that people can use to manage 10,000 systems. That's quite a scale, right? But I'm not a sysadmin, so how would I design for that? Or, you know, virt-manager is another thing. I actually worked on that a really long time ago, when virtualization was just becoming a thing. I think the first version used Xen instead of KVM. And I didn't really know anything about virtual machines at the time. Anaconda, the Linux installer for RHEL and Fedora, uses a lot of enterprise storage technology that's pretty complex, that the average user has maybe never heard of. So how do you design for that when you have no clue? Or the Chris project, which is a project I work on right now; it's a partnership that we have with Boston University and Children's Hospital in Boston. It's a medical imaging platform.
It's for neuroscientists. I'm not a neuroscientist, so how would I design for a thing like that? We'll talk about sort of my secrets of how you do this. Okay, so what do I mean by specialized workflow? These are just maybe four properties that might help define it. They're very precise; I'm assuming neuroscience is pretty precise. At scale: so, like, for Satellite, you may be able to manage your own system, you have your own personal laptop, but are you managing tens of thousands of systems at all different locations? That's definitely at scale. It's complex, so it's not something that can be done casually. And by unusual, I don't mean strange or weird. I just mean it's not typical, right? Maybe just to really hammer the point: cooking dinner is not a specialized workflow, right? Driving a car, first aid, those are not. But you could take each of them to a specialized level. Cooking steak sous vide at 135, yeah. Flying a commercial jet, maybe it's just like driving a car, but certainly it's unusual. It's not something that you could expect a general population of people to understand. Okay, yeah. I like to make hot takes on Twitter. Yeah, so the other thing about this is, if you're designing a user experience, you really need to understand what the user's workflow is. If you don't, and you're just sort of mocking things up to try to have something on the screen, you're a visual designer. And there's nothing wrong with that, but if you want to do UX design, you really need to dig into workflows. And just to hammer it in again; I think I'll probably delete this slide because I think I made the point. Okay, so this is just a basic outline of what we're going to talk about. You're going to start with research, right? And this is special, because you're working with specialized workflows, you're talking to specialists.
There are some user research techniques that I found are just more effective, whether because they produce the best results or just because you can do them logistically in this sort of space. So we'll talk about those. We'll talk about UX models. What is a UX model? What do I mean by that? And how does that help? How to actually create a UX model. And then once you have a UX model, or at least a rough draft of it, how do you actually represent that in the UI itself, on the surface? And by that, I'm going to frame it in terms of interface affordances. If you've read Don Norman's The Design of Everyday Things, it's basically that, although maybe I've extended it a bit far from what he says. Okay, so: ethnography. Has anybody heard of this term? Okay. The thing that I really don't like about UX design, and just the field in general, is they like to come up with terms. They like to change them. They like to kind of navel-gaze at them and make things complicated. It's not complicated. Ethnography comes from anthropology. It's just the study of people and what they do, okay? So that's what you're doing. Contextual inquiry: another one of those terms that kind of sounds fancy, but it's just talking to somebody. You're talking to somebody for two hours and you're just learning about what they do. Contextual inquiry is something that is best done in person. You can do it remotely; I've done it remotely before with screen sharing. It gets tedious, so I wouldn't do it for two hours. I might do it for one hour. But basically, you come kind of preloaded, like, oh, okay, I'm going to design this software for neuroscientists. I have this really busy neuroscientist I got some time with. Let me kind of preload, try to figure out: what do I want to learn? So, you know, the Chris project is something that neuroscientists use to analyze brain image data. Okay. So let me start thinking about, well, what do I want to know about what this person does? Okay.
How do you figure out what images you want to look at, and what kind of analysis are you going to do on them? Can you show me how you go about analyzing an image? You can kind of cue up questions, you know, show up prepared, but then once you follow how they're doing the task, it's basically like a work-study thing, right? Like they're teaching you. You're like their apprentice. That's really all it is. Just take notes, and you'll get an idea. Job shadowing is sort of the same thing, but you do it for maybe a longer period of time, like maybe a full day. Just tag along, be their shadow, pretend you're a first-day intern, tell them to treat you that way, and just get a feel for the environments that they're moving through. You know, maybe they're just sitting at a desk and taking a lunch break and a snack break every now and then. Maybe they're out in the field. They're walking around the hospital. What kind of devices are they using? Who are they interfacing with? What different departments? Take notes of all that stuff. And then an interview. You can't do a job shadow remotely, but an interview you can do remotely; you can even do it asynchronously. That's a good fallback if you have a tough time getting access. And in this space it can be really difficult, because these are the kinds of specialists it's just hard to get access to. So you can't always follow the commandments of user research; it's not easy to get the sort of access you need to perform your research. So you just have to kind of wing it and do what works. So interviewing can work. You can just send questions through email and have them answer in email. You can do it over the phone. You don't need to be present, but being present always helps, because you can see the work environment. This is just a sample of interview prompts that I wrote up for the Chris project. Mentorship is the one that I rely on most heavily for Chris.
And that's basically: buddy up with someone who is a specialist in the field and, well, not use them exactly, but ask them lots of questions on an ongoing, continual basis. So it's not a one-shot research deal; you have an existing, ongoing relationship with them. You can have them sanity-check ideas. You can ask them questions, that sort of thing. This works really well for the Chris project. It's kind of hard to get access to neuroscientists for the more rigorous stuff, so this is very helpful. And then basic training. This is sort of boning up on your own, basically. And you might even want to start with this, because every specialized field has its own jargon, right? Like, one that comes up in sysadminning a lot that I see people getting confused about is PXE booting, right? I see people spell it "pixie," like the little fairy. So just bone up on what the basic terms are that are used in the field, so you're not wasting the precious time that you have with your research subjects asking about basic terms. There's a bunch of ideas here. When I worked on Red Hat Satellite, which is a sysadmin tool, I got an RHCT certification. That was sort of part of the process: I wanted to understand, well, what are the users trying to do? You could do short online courses. They tend to be cheap or free. TED Talks: if you find someone who's a specialist in the field who's given a TED Talk, that can be a great introduction. It's bite-sized; they're usually like five minutes or something like that. And they're at a high level, but you can understand. And entry-level textbooks at the library: if you find a syllabus online, you can figure out, well, what is the basic textbook in this field? Maybe I'll skim through it. This is a class that I'm actually taking right now on Coursera on medical imaging.
They do other image processing too, but they talk about medical image processing, which has been helpful for me in designing for Chris. Obsession helps too. You can see I'm pretty pregnant right now; there have been a few visits to the ultrasound. And ultrasound techs are medical imaging specialists, so I've picked their brains quite a bit and maybe annoyed them. But obsession always helps. And there are different ways you can engage with the field. One thing that I recommend is, if you can find people in the field who are sort of notable, follow them on social media. You'll get little bite-sized snippets of information about the field on an ongoing basis. Sometimes that can be more helpful than a huge brain dump in one session. So yeah, just try to obsess about it as much as you can. Have a genuine interest, because your design will be better for it. I'm just going to skip this because it's not super relevant. Yeah, so basically: talk to people and read anything you can on the subject. It's simple, but maybe sometimes it seems really hard, especially when you have a hard time getting access to people. I mean, even if you have, for example, using the Chris example, a cousin who's not a radiologist, but they work in a hospital and they know radiologists: pick their brain, get their take on it. Anything you can do to get as close to the subject as possible. Okay, so now this is maybe sort of the magical part, I don't know. You've learned all this information about the specialty. What are you going to do with it? How does that translate into a usable interface for people in that specialty? So "UX model" is maybe a term I've made up, because if you Google it, what you get back is nothing like what I'm talking about. It's just a way... like, when you're building software, software is not a physical-world thing, so it involves abstractions.
Abstractions can be complicated, because they don't exist in space, and maybe the hierarchies of how things relate to each other aren't clear. The UX model is how you define how all of these things relate to each other: how the artifacts in the interface relate to other artifacts, what kinds of tasks you can perform on them. If there's any hierarchy or relationships between them, you define that stuff. And then once you have the UX model and you're trying to add a new feature, you can ask, how does it relate to the interface? You have a working model that's defined, so you can figure out, based on the model that you created, how it should probably work. And we'll go into more detail here. So, has anybody here used Twitter? Okay, I know the cool kids aren't using it anymore. I'm older, I guess, so yeah, maybe I'm too old. SMS, texting on your phone, yeah, okay. And IRC? Okay, I'm happy to see IRC users. These are all chat apps. That's really what they are. They're all the same: they involve people chatting with each other. But because their UX models are so different, they feel so different. People interact and behave on them differently, because the UX model drives their behavior. So with Twitter, it's like a global public broadcast. I can go on Twitter and, you know, I can address Beyonce. I mean, it's not like she's going to like it or follow it or read it, but you can; it's a global system. So when you're talking on Twitter, by default, it's like a performance. People are sort of in a performative mode. It kind of puts you in a certain space, and you say things that, you know, you kind of act out a bit, right? With SMS, by default (and I'm going to say "by default" for all of these points), it's point to point. You're talking to one person. It's assumed to be a private conversation. I don't know about the NSA, but... and it's not performative.
It's in chronological order. Twitter sort of mucks with your timeline, so you might be seeing tweets that people made last week based on different, you know, algorithms. But with SMS, you're receiving every message, and you're receiving them in chronological order. With IRC, you can do point to point, but by default it's based on rooms, which might have 100, 200 people in them. When you talk on IRC, by default you're addressing a crowd. You're not addressing globally; you're not addressing every single user in the system. And those properties of these UX models for these apps change people's behavior. It changes their perception of how the platform works. Okay. So here are some things I made up that make sense to me, that I hope make sense to you. The first thing that I start with for a UX model is coming up with a concept map, which is just a mind map. It'll involve artifacts. And I say artifacts because I'm trying to avoid the word "object," because you have object-oriented programming, and then people think, oh, it's like programming objects. It's not, no. "Artifact" sounds fancy; it's not. I just mean, for example, in Chris, you have data that comes in. You have what we call feeds. And I'll talk about that a little bit. You have plugins, and plugins operate on the feeds. So stuff like that: your basic terminology, and the things you're actually working with in the interface. That's what I mean by artifacts. You map them, do a bit of mind mapping, to figure out how these things relate, what actions you can perform, and on which ones, that kind of thing. I'll show you an example. And then artifact models. So for each artifact, what are the properties of it? For example, in Chris, when you're importing data into Chris, what is the lifetime of that object? Are you pulling it in, working with it real quick, and then you can delete it because you're not going to need it anymore?
Or is it something that you're going to need to reference over a long period of time? What are the properties of it? How big is it? What image format is it in? That kind of stuff. You have to think about those things. And then the collaboration model. Are there multiple users on the system? Or do I just log in as a single user, I have my own files, that's it? How am I interacting with the system? Am I logging into a website? Is it a client-side application? That sort of stuff. Ordering defaults: that sort of comes from how you're defining the artifact models. So if it's something that's time-based, like a newspaper website, it's not going to give you news from two months ago, because it's very important that the latest stories are at the front. But for other things, maybe something that is the most frequently used would be best to put up front. So you have to think about how things should be ordered. Workflow and integration. This is an interesting one: you have to think outside of your application. With Chris, you have to think about, at some point, an MRI machine was involved in this entire workflow. How do I get the things from the MRI machine into this platform? You know, there's something called a PACS server at a lot of medical institutions. That's sort of a central server where all the medical images go after they come off the devices that read them. And we have to interface with that. And we have to build things in such a way that users who are accustomed to working with PACS servers to retrieve data can keep working that way. So we have to make sure that we're following the same patterns and the same methods that other tools they use to interface with PACS might use. Access footprint: are you sitting in an office on a laptop? Do you have a mobile device? Do you have a more specialized device? Those kinds of things. These should all be built into the model, because they drive decisions.
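To make the artifact-model idea a bit more concrete, here is a minimal sketch of what writing down an artifact's properties might look like. All the field names and example values here are my own illustration, not the actual Chris schema; the point is just that the research questions (lifetime, size, format, ordering default) become explicit, recorded properties.

```python
from dataclasses import dataclass

# A rough artifact model: the research questions written down as fields.
@dataclass
class ArtifactModel:
    name: str              # what the specialists themselves call it
    lifetime: str          # "scratch" (delete after use) vs. "reference" (keep long-term)
    typical_size_mb: int   # how big is it, roughly?
    file_format: str       # what format does it arrive in?
    ordering_default: str  # e.g. "newest-first" or "most-used-first"

# Hypothetical example: imported study data pulled from a PACS server.
imported_study = ArtifactModel(
    name="imported study",
    lifetime="reference",
    typical_size_mb=500,
    file_format="DICOM",
    ordering_default="newest-first",
)

print(imported_study.lifetime)  # → reference
```

Even a throwaway record like this forces the questions to get answered per artifact, which is the real value of the exercise.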
Like, you know, I used Twitter as an example earlier. When Twitter was originally built, it had that 140-character limit, which is oh-so-wonderful, whatever. People talk about that being so innovative. The reason they had that was because they built it primarily as an SMS system, and SMS has a character limit. So that's why they started with the 140 characters. Later on, different narratives came around: oh, we wanted people to keep things precise and speak very concisely. But it was a technical limitation that drove that. And you have to be aware of those things when you're designing something. Oh, yeah. This is kind of important, actually. I could rant about this for hours. User motivation. Why are people even logging into this thing, right? A lot of social media in particular uses what are called dark UX patterns to get people kind of addicted, to get them coming back, dopamine hits, all that stuff. You've probably heard this rant before from someone. When you're designing for specialized workflows, people are coming to the tool because they're trying to do their job. So you basically just make sure that the tool fits their workflow and performs what they need, and they'll come back to it to get their job done. Don't worry about employing any of those weird tricks or whatever. Psychological warfare. Yeah, so just a quick review of what we just talked about. UX models come from user research. You shouldn't be making them up out of thin air. Any kind of decision that you make about how things are constructed in your model should directly relate to something you learned in your research, talking directly to the specialists.
An important one, which I think relates to something the last talk covered with optimistic UI: when you do an optimistic UI, where maybe the message hasn't been sent to the server and successfully submitted but you show it client-side as if it had, you're not representing exactly what happened in the back end. That's okay. The UX model does not have to be the same as the MVC model of the application. It does not have to be exactly the same as how things are modeled in the back end. It's usually a very good thing that they're not the same. You just have to be careful when you're talking about it. How am I doing for time? Creating a UX model: I start with a mind map, just like what I talked about before. Then maybe a competitive UX model analysis. For the Chris project, which is a medical imaging platform, we looked at tools like Blender, which is a 3D editing suite. I looked at Adobe Lightroom and the open source equivalent, Darktable. Again, these are not medical tools, but they have something to do with image processing. I wanted to see what patterns they have in common that we could glom onto to make this experience make sense. Once you do your competitive analysis and map out your concepts, you figure out, well, how am I going to adapt these to my application? You're not going to copy exactly how something else works, but what are the things you can learn from them and take into it to make your interface more intuitive? I'll give solid examples, because I think they'll help. Just along the way, review your research, sanity-check with any mentors you might have engaged with. And this is sort of the struggle: if you're working with a preexisting code base, check your model against the implementation.
It's really nice, and there are ponies and unicorns flying around, when you can actually implement your UX model as it was originally designed, but you're probably going to have to make compromises based on how the backend is implemented, and that's just life. But there's lots of opportunity to actually change the backend to be able to support your design, too. This is an example of a concept map, which I'd call a component of a UX model. This shows different components of Chris, and you can see where boxes are inside other boxes; that shows a bit of a hierarchy. Some of these are a little out of date, so don't pay too much attention to them, but the most important piece is: you get data, you put it into a feed. Once the data has been pushed into a feed, you can chain together plugins and pipelines. Pipelines are basically compositions of plugins, and these manipulate the data and you get output out of it. That's the base model of the Chris project. Then the competitive analysis: I mentioned we looked at Blender and Lightroom, so I'm just going to talk about those. Blender has something, I think it's new in the 2.5 series, called the node editor. Is anybody here familiar with Blender at all? Okay, cool. Basically, each one of these boxes corresponds to a plugin in Chris. Every box takes an action. It has config variables. You can chain them together in a different order. The thing that we took from this is: you chain them together in a certain order, you run the data through it, and it's called a pipeline. This term "pipeline" is very common in the image processing world, so we adopted it and we used the term pipeline too. The other thing is that it's a graph-based interface. Blender is not the only tool that does this sort of image manipulation with a graph-based interface.
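The feed/plugin/pipeline relationship above can be sketched in a few lines of Python. This is purely illustrative (the function names and the toy "plugins" are hypothetical, not the real Chris API): a plugin is just something that transforms data, and a pipeline is an ordered composition of plugins.

```python
from typing import Callable, List

# A "plugin" here is just a function that transforms a list of data;
# a "pipeline" is an ordered composition of plugins.
Plugin = Callable[[list], list]

def make_pipeline(plugins: List[Plugin]) -> Plugin:
    """Compose plugins into one pipeline: data flows through each in order."""
    def pipeline(data: list) -> list:
        for plugin in plugins:
            data = plugin(data)
        return data
    return pipeline

# Two toy plugins standing in for image-processing steps.
def denoise(data: list) -> list:
    return [x for x in data if x is not None]

def normalize(data: list) -> list:
    peak = max(data)
    return [x / peak for x in data]

analysis = make_pipeline([denoise, normalize])
print(analysis([4, None, 2, 1]))  # → [1.0, 0.5, 0.25]
```

The order matters (denoise has to run before normalize here), which is exactly why the chained, graph-based representation that Blender's node editor uses maps so naturally onto this model.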
I will say, as a designer, there is no really common graph widget that exists, so I was very frightened to represent it literally as a graph in the interface. I fought against it a bit, but after surveying all the different tools and seeing that, no, they use a graph-based interface (like Max/MSP for audio manipulation, and that's not even images), I said, okay, we'll use graphs. So this is how it ended up. We're doing a top-down graph rather than left-to-right, just because in a web browser, top-down gives you a lot more space, and it just made sense. And then what you can configure for each node is on the right-hand panel. That's how we took it and adapted it to fit our model. And these are just some tricks I tried with cards to try to fake a graph, but it didn't work, so we did an actual graph. Okay, and then Lightroom and Darktable. They're both tools that process raw images out of a camera. And the thing that we took from them is that they separate the process into stages. This shows the stage where you're sort of hunting and gathering for images to process. We have to do something similar with Chris, right? Because you have the PACS server, or you might have images you're uploading directly, or you might have images that you're getting from an image set that was created for certain types of studies. You basically have to hunt and gather: what images am I actually going to run analysis on? And it's a distinct stage; there are different tools you need at that stage. Darktable calls it the light table. I believe Lightroom calls it the library. So we adapted that term, we call it the library, and we split it out into stages. This is actually a mock-up. It doesn't exist yet, but it's in the planning stages. But this is where you can gather: what is the data I'm actually going to be inputting into the process? And it's got all different sorts of layouts to make it clear.
Here are all the different ways you can find images. All right, so just a quick review of how to create a UX model. You mind-map the concepts and the tasks that you're working with. Look around and see what other tools either work with objects that have similar relationships with each other, or work in a similar space, and just see what you can learn from them. Because, again, something is going to be easier for the user to use if it's not novel. If they recognize what they're supposed to do, they recognize, oh, okay, I've seen this type of object before; if it's called a pipeline, it's things being chained together. You want them to have that recognition, so there's less explaining necessary. And then adapt what you find to your software's needs. Okay. Now, how much time do we have? 12 or 6. Okay, all right, I'm going to move really fast. Okay. This is the last section. So we talked through the high-level, abstract stuff: what am I actually designing for? What are my users doing? What is the domain space that they're working in? We pushed that into building out a UX model to support it. Once you have your model, how do you actually enforce that on the screen? So this is surface-level detail: interface affordances is what I'm calling it. So these are called Don Norman doors. There are quite a few of them at Red Hat's office in Boston. I don't know about this building; I haven't explored it too much. If you look at the door, you can't even tell it's a door. And then somebody has to make signs to tell you, oh yeah, you push on this and it opens. The door inherently doesn't have properties that give you a hint as to what you're supposed to actually do with it. So you don't want to build Don Norman doors in your interface.
You want to enforce your model by having the objects in the interface have properties that make it clear what is meant to be done with them. I like this example. Has anybody seen Star Trek IV? Maybe, no? Okay. So Scotty is from the future, and he's in the 1980s in San Francisco, and he's trying to interact with the computer, and he thinks the mouse is a microphone. You want to avoid situations like that, right? You want someone to know right away: oh, okay, I pick this up and I move it with my hand. Yeah, so affordances are basically this: the properties of objects in your system make it obvious what's meant to be done with them. You can enforce your UX model through affordances. Now, a lot of times when people talk about interface affordances, they talk about basic stuff, like radio buttons (you can pick one), checkboxes (you can pick multiple), stuff like that. But you can enforce a UX model at a higher level than just widget selection. Visual models and terminology: I talked about how we adopted the term pipeline in Chris, and how we adopted graphs as a layout method for pipelines in Chris. That's how you're enforcing your model. You're creating an affordance, because you're using a standard pattern that people use for that type of thing. So people see the graph, people hear the term pipeline, and they immediately get more into a space of understanding what is meant by these things: what am I supposed to do with them? Motion or change to represent actions can help too. So in Chris, if you submit a plugin, it runs right away, and to reinforce that, we change the status right away, so you see movement on the screen. You see that something's changed. You know it's working. Following widely used patterns and standards is obviously a good affordance. Buttons want to be clicked, so think about actions.
When you're presenting actions to the user, make sure that the actions that have the best real estate are the actions that make the most sense for the user to be doing with that object. I know it sounds obvious, but it's not always something interfaces do in practice. And don't forget about scale. So it might make sense in testing to do a drop-down for plugin selection, but if you think about Chris as a platform, it wants to enable all sorts of different plugins and pipelines to be used for image processing. I personally think a drop-down selection box maxes out at about seven items, and that's optimistic. So if there are going to be hundreds or thousands of options there, you have to build a system around actually enabling the user to navigate that scale. And that sets up an affordance where the user understands. Okay, cool. Yeah, so basically, affordances let you know what you're meant to do in the system. They help enforce your UX model, because they give users a sense of what each object is meant to do, what you can do with it. Visuals, terminology, motion, patterns, placement of actions, appropriate scaling: those are all different types of affordances that you can put in your user interface, on the surface, to enforce the UX model that's driving it underneath. Okay, and just a quick review: you don't need to be a rocket scientist to design for one, you just need to talk to one. If you understand your users' specialized workflows, then you can build a UX model based on your research. Once you have the UX model, you can enforce it on the surface of the UI with interface affordances. Yes, and I think that's enough review. Okay. Does anybody have any questions? I'm just glad I didn't run over. All right, that works. [Audience question about how to share the UX model when working long-term with a team.] Sure, yeah, that's a really good question. See, I'm going to give you an answer that's going to sound very lazy: narrative is really the best way.
If you start... I mean, the thing is, I have so many things in my head about how Chris's UX model works, and there isn't really a standard way to document it. I wish there was; I could probably write a book and make money that way. But honestly, it's just how you end up talking about the thing. It's the stories that you tell about how a thing works. It's how, when you're talking to developers, you just naturally start talking about things once the terminology is ironed out. Sometimes you'd be talking to a developer about something, and you'd both be talking about the same thing using completely different language. So even just establishing that terminology helps. I actually do have a cheat-sheet doc that I wrote up that has all the different terms. So that can be a way: having the concept model documented. I mean, I shared it with people, but it's not like they're referring to it constantly, right? So that's where you kind of don't get into the rigor of keeping updated documentation. So that's what I think is really the best way to do it. Right? [Audience:] My question was, could it be correct to say that contextual inquiry is a subset of interviewing? What can you do in an interview that you can't do in a contextual inquiry? So, a contextual inquiry is really driven more by the user completing the task and your observation of the task, whereas an interview is really more driven by the questioner. And you can go off script with an interview; I honestly recommend that you do. But sometimes, especially if it's asynchronous and you're doing it over email, you don't have the high-bandwidth back and forth where you can quickly switch gears with your questioning. So that's the main difference: who's driving the content. Is it the person being interviewed, the specialist, or is it the person doing the questioning?
The next question was whether a narrative maybe doesn't scale so well, if each question has to come to me to learn about what's going on. Well, a narrative doesn't have to be, "oh, I am the master of all, I have the narrative, and you must speak to me to learn it." It's sort of a cultural thing; it pervades the entire team. So if you're really persistent, any time someone talks about this particular type of artifact in the interface, I will call it a pipeline, I will be a jerk about it, I will really push the point, it gets to the point that people just start using that term, and it propagates. A narrative can propagate the same way if you're very consistent and persistent about it. Another way I, as a person designing in the world, propagate narratives (this sounds very evil, I don't know) is that I like to document my design process through blog posts. Developers find that helpful too, because then they can go back and refer to them without bugging me personally, or just get a refresher. It's a good way to share that narrative of what the software is actually meant to do.
There are so many ways to tell stories. It becomes more of a collaborative, group-owned and group-maintained thing rather than just living in the head of somebody. I've interacted with designers in the past (not to go into a huge rant) where it was all in their head. I'm sure it made sense to them, but to get it out? Man, I don't know, it's hard to work with that, because you don't know. I think it's an important job of the designer creating the UX model to publicize it, to market it, to share it, to retell the story over and over. And sometimes when you tell the story, if there are holes in the plot, they'll come out, and then you can go back and adjust. So, I don't know, I hope that helps.

Have I ever created a model collaboratively with the team I'm working with? Oh, absolutely, you have to. That was part of the Chris model: hours and hours of discussion, and sort of "well, what if this, well, what if that, does this relate to this or that, how do you do it?" You really have to. I mean, you want to get developers involved too, but maybe I'll be a bit of a jerk and just say: the end users, the specialists, are the ones who get the veto on anything. A developer might come in and complain, "well, the back end can't do that," blah blah blah. That's okay: the back end can't do that now, but in the future maybe it should be able to, if the users are saying that's what they need. But yeah, definitely, definitely collaboration.

The next question was about revising your assumptions as a tool, especially where you don't understand the domain area, and whether that's something I practice as part of UX model development. Sure, yeah, maybe that's an assumption I should have defined up front. You definitely can't just dig in and stick to something, especially when you're sharing and collaborating with others who are reviewing the research, and it shows that maybe this isn't the way to go. You have to be able to adapt; you have to be able to revise. And especially if it's a large project, you have to do it iteratively. You can't design the world in one go and then start implementing it; you have to build it out in iterations.

Do I rely on wireframes? Well, early on in the process, especially with the Chris project, it was a lot of diagrams, a lot of talking through things. And this is the other thing that's kind of interesting: when you're talking through things, you're working through a lot of concepts. A lot of our communication is through chat, which is handy because you have a text log, but it's so meandering. So one of the things I started doing is just writing summaries of conversations: "oh yeah, yesterday we had this big long discussion about how users will share pipelines," just writing up a summary of that and having it available. And then once everybody agrees on your summary, because sometimes people walk away from a conversation with totally different ideas about how it went, you have a summary that everybody agrees with: yep, that's how things went, and that's what we decided. It's not really a contract, but at least you know that you understood what happened. Then you start maybe doing diagrams to show it, and vet those with everybody involved: does this diagram accurately represent what we talked about? Because when you start out, when you're making a UX model, you're not working at the surface level of the interface yet; you're working on how all these concepts relate to each other and what the user is doing with them. Once you get beyond the initial UX model creation stage, you can move out of diagram space and into actual wireframing, which is basically the process we followed. Sometimes, if you're doing a new feature or revising something, you might step back into diagram space, or back into research space. It's completely
fluid. There's no strict order beyond the main sequence of research, model, mock-up, and there's definitely fluidity between them. Any other questions? Anybody want to tell me where I got stuck? Because this is the first time I've given this, what would you revise about my talk? I definitely had too much material, and I tried to go through it too fast. Is there anything you wish I'd talked about, or wish I hadn't said?

Yeah, so here's the thing. I have a ThinkPad Yoga, which has an HDMI port I was assuming I wouldn't be able to hook up, which I didn't realize until I got here today; I didn't think about it. I was going to go through my Git folder of all the design work, and I can't do that because this isn't my laptop, but I can show you some of it. I work in open source, I always work with open source projects, so I try not to dictate tools. If you're working with a public upstream, it's "oh, here's the designer from Red Hat coming to dictate how we should do our job." No. So I try to work where the team works. For example, for Chris we work in GitHub, so I adapt my design process: I share assets and resources through GitHub. Hopefully this will load. So we have, for example, this is the UI repo. Okay. Right now, for the summer, we're organizing a project using a kanban board in GitHub; I think it's this one. So we have design tickets and development tickets. The design ticket is the first one for the feature: you come up with the designs, and we actually use the ticket as the specification for the feature, which, I don't know, maybe that's novel, maybe it's not. There's a little bit of discussion: what is this, what are we actually doing here, what is this design for, what's some of the background on it? And you can see here, I wrote something and decided it was crap, so I struck it out, but I left it there just for historical purposes. And then there are all the different versions: this was the first cut of the mock-ups, and then this kind of drives meetings or discussions about it. Look at this: what sucks, what's good, what are we going to do to iterate on it? As you get to the bottom, you'll probably see... no, there are no changes on this one, but usually there are changes.

And then there's a repo that the designers work in. We use a tool called SparkleShare, which is kind of like a Dropbox-style front end to Git for designers, and we have all our assets stored in a Git repo. The tool we use on most of the projects I work on is Inkscape for mock-ups, so the source files are in SVG format, but we also have PNG output of the mock-ups. And you see I have a UX model folder, and it has some stuff in it. We were talking about organizing assets in the Chris UI using tags, and as we were having the discussion, I was coming up with these different diagrams to show how the tag model might work. For example, this one is for a piece of data; it's kind of hard to see with the projector, but the piece of data is labeled 1.1.2.1. It has ancestors, the pieces of data that came before, because every time you take an image and run a plugin on it, you get another piece of data. It's the grandson of, you know, whatever. So data 1, 1.1, and 1.1.2 are the ancestors of that, data 1.1.2.1.1 is the descendant, and these are the plugins that were involved in producing it. Basically, it just visually laid out what we were discussing, to drive the conversation.

This is how something might work in medical image processing. Say you have an MRI image. One of the first things you might need to do is called registration, so you run a registration plugin on the image, and that aligns the image. Say someone in the MRI machine was itchy and moved their head, so it skews the perspective; that means it's harder to compare that image to other brain images, because the perspective is a little wonky. So you run a registration process on the image, and that squishes the image so it conforms to the standard, and then it can be compared to other images. That might be plugin 1. Plugin 2 might be segmentation, where you go through the image of the brain and split it out into different brain structures; you map out the different spaces. Then, once you have it segmented, you could run another plugin that calculates volume on a specific segment: say, process the volume of segment 3, something like that. So we were just talking about how these things relate to each other. That's maybe too in-depth on a very specific conversation, but you can see how it drove some of the assets we were producing. As we built out the UX model, we took that conversation and built it into a series of diagrams showing how different image data within the system could relate to each other, and tried to think about what tag model would support that.

I think we have one minute... zero minutes. All right, thanks for coming. I hope that was helpful.
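As an aside on that ancestry example: the lineage idea described in the talk (every plugin run on a piece of data produces a new piece of data, which keeps a link back to its parent) can be sketched in a few lines of Python. This is purely illustrative; the class and field names are my own inventions, not the actual Chris data model.

```python
# Hypothetical sketch of data lineage in a plugin pipeline.
# Not the real Chris data model; names are made up for illustration.
from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class DataNode:
    """A piece of image data, e.g. "data 1.1.2.1" from the talk."""
    name: str
    produced_by: Optional[str] = None      # plugin that created this data
    parent: Optional["DataNode"] = None    # data the plugin was run on
    children: List["DataNode"] = field(default_factory=list)

    def run_plugin(self, plugin: str, out_name: str) -> "DataNode":
        """Running a plugin on data always yields a *new* piece of data."""
        child = DataNode(name=out_name, produced_by=plugin, parent=self)
        self.children.append(child)
        return child

    def ancestors(self) -> List["DataNode"]:
        """Walk up the parent chain: the data this data came from."""
        node, out = self.parent, []
        while node is not None:
            out.append(node)
            node = node.parent
        return out

    def descendants(self) -> List["DataNode"]:
        """Walk down: everything derived from this data."""
        out: List["DataNode"] = []
        for child in self.children:
            out.append(child)
            out.extend(child.descendants())
        return out


# The example pipeline from the talk:
# MRI -> registration -> segmentation -> volume of one segment.
mri = DataNode("data 1")
registered = mri.run_plugin("registration", "data 1.1")
segmented = registered.run_plugin("segmentation", "data 1.1.1")
volume = segmented.run_plugin("volume", "data 1.1.1.1")

print([a.name for a in volume.ancestors()])
# → ['data 1.1.1', 'data 1.1', 'data 1']
print([d.name for d in mri.descendants()])
# → ['data 1.1', 'data 1.1.1', 'data 1.1.1.1']
```

A structure like this is what a tag model would have to coexist with: any tag-based navigation of the assets still has to be able to answer "where did this image come from?" by walking the parent links.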