All right, I think we can get started. So just a quick introduction to this talk and the speakers. I'm Liz Blanchard. I'm going to go through some of the user experience work that we've done over the past six months to a year, giving you some examples of what we do as user experience professionals, what we've done upstream, and how we have influenced some specific components. Then I'll hand off to Ju and Pete here; they're going to highlight some of the persona work that we've done in the persona working group over the last six months. And then Jackie is going to get into some of the future work we plan on doing with respect to user experience.

So, jumping right into some of the components that UX is helping to shape, and quickly going through an overview of our work life cycle, how we work as user experience professionals upstream. It's pretty basic. What we try to do is really understand the user and make sure we gather the right requirements based on user needs. This is where the persona work is helping a lot, and you'll hear a lot more about that later. Then we'll jump into a design phase where we might do things like wireframe sketches. We'll take those, review them with developers, and make sure that everything is actually able to be developed. Then we'll potentially do some updates, and then some usability testing and validation of the wireframes, to make sure they're along the right lines for what our users are looking for. And then, after implementation, we'll do even more usability testing and validation with the actual running code. Any feedback we get from that feeds back into more requirements, so it's really an iterative life cycle.

Jumping into some of the specific components (you can hit all the bullets there), Horizon is really one of the big ones. It's the face of OpenStack, the dashboard. UX and UI do go hand in hand, but we're not limiting ourselves to just the UI. We're actually going to start to branch out to other things like the CLI and API and try to help out with all components. But Horizon has been a main focus for us so far, and we'll definitely continue to focus there. As new features are needed, we'll help drive use cases, design, and usability testing, that exact life cycle I just talked about.

And here's an example of that understand-the-user-requirements phase, for a feature where we wanted to improve the Horizon overview page. These are hopefully improvements you'll start seeing in the Juno release of Horizon. The first thing that we wanted to do is tell the story of who the consumer is, who's the end user of these overview pages? That's some of the text that you see here. If you go to the next one, we then transitioned into specific requirements: what are the metrics that these users will want to see on the page? This gets into more specific things like, okay, I want to see the number of instances, and so on down the list. There's actually a long list of these that wouldn't fit on the slide. And if you go to the next slide, this is when we get into the interaction design phase. This is an example of a wireframe. It's not the exact visual look and feel that you would see in Horizon, but it's a quick way that designers can put something together and get it out there visually.
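Just to make those metrics a bit more concrete: the overview page pulls together per-project numbers like instance counts and usage against quota. Here is a minimal, hypothetical Python sketch of where that kind of data can come from, using python-novaclient with a keystoneauth1 session. The auth URL, credentials, and project name are placeholders, the exact set of limits varies by deployment, and this is not the Horizon code itself, just an illustration of the data behind numbers like "instances used."

```python
from keystoneauth1 import session
from keystoneauth1.identity import v3
from novaclient import client as nova_client

# Placeholder credentials; in Horizon these come from the logged-in session.
auth = v3.Password(auth_url="http://controller:5000/v3",
                   username="demo", password="secret",
                   project_name="demo",
                   user_domain_name="Default",
                   project_domain_name="Default")
sess = session.Session(auth=auth)
nova = nova_client.Client("2", session=sess)

# One of the overview metrics: number of instances in the current project.
print("Instances: %d" % len(nova.servers.list()))

# Absolute limits pair usage with quota (cores, RAM, instances, and so on),
# the kind of data an overview page's usage summaries are built from.
for limit in nova.limits.get().absolute:
    print("%s = %s" % (limit.name, limit.value))
```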
People can start talking about a wireframe like that, developers and end users, and say, hey, I don't think we can pull this information yet, we might need to drop that out. Or, this is missing: if a user looks at it and says, where's my hypervisor information? That's missing on the screen, and I'd really like it. It's a lot easier to update these designs than to implement something and then have to go back and reimplement it after it's been tested. It's just a lot easier to do this in the design phase and then get it to developers when it's more finalized.

So, Tuskar and TripleO, if you want to go to the next slide... one too many. Just a little bit of background: I know there have been a fair amount of talks around TripleO and Tuskar. Tuskar has combined efforts with TripleO recently, and what Tuskar does is really bring a management UI for cloud administrators into the mix so that they can manage their infrastructure. It's built using the Horizon code, so it has that same look and feel as Horizon. We've been doing a lot of work upstream; I know Jaromir has done a lot of work in this space, so even though there are four people up here, there are more people in the UX community helping out. Here's an example within Tuskar. This is a list of user stories. We took a little bit different of an approach for Tuskar, where we tried to phrase these to say: as an infrastructure administrator, this is what Anna would want to do with our undercloud in Tuskar. It helped us all talk about these things as a team and make sure we understood the requirements together before we jumped into the design phase. And then this is actually showing another example of interaction design. This one's a little bit more fleshed out, so you see there's a little bit more color and look and feel to it. Still not finalized, but we're trying to get a good look at what type of metrics we need and how we could show them; for example, if things are really going up over 90%, we show threshold information for some of these metrics. So that's just another example of interaction design phase work.

And then the last one I wanted to talk through is Heat. Heat is the orchestration engine for OpenStack. It enables users to create their stacks, quickly get their resources up and ready, and do this repeatedly within OpenStack. Once support for Heat came into Horizon, it drastically improved the user experience of Heat. If you go to the next page, you can see an example where we did a really early sketch mockup for the Heat team, where we didn't even take the time to go through the user requirements phase, because they needed something really quick: hey, what do you guys think we could do? And that visual representation started to drive some of the conversation to get to where we are today. So this is a really early version, and it doesn't look a lot like this today, but it's another example of how we can work differently with teams and come up with a way to communicate some ideas that we have from an end user experience point of view, start to drive some of the requirements, and help form some of the visuals that could go into Horizon.

So one of the big things we've been working on over the last six months, like I said, is personas. So I'm going to hand off to you to start talking about some of the stuff we've been doing there. So actually, you can click on all of this.
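Picking up the Heat example from above for a moment: a stack is described by a template. The following is a minimal, hypothetical sketch of a HOT template and one plausible way to submit it with python-heatclient; the image and flavor names, credentials, and auth URL are placeholders, and this is not the specific workflow the Heat team or Horizon uses.

```python
import yaml

from keystoneauth1 import session
from keystoneauth1.identity import v3
from heatclient import client as heat_client

# A minimal HOT template: one Nova server. Image and flavor are placeholders.
TEMPLATE = """
heat_template_version: 2013-05-23
description: Minimal single-server stack
resources:
  server:
    type: OS::Nova::Server
    properties:
      image: cirros-0.3.2-x86_64
      flavor: m1.tiny
"""

# Placeholder credentials; a real deployment's auth URL and project will differ.
auth = v3.Password(auth_url="http://controller:5000/v3",
                   username="demo", password="secret",
                   project_name="demo",
                   user_domain_name="Default",
                   project_domain_name="Default")
sess = session.Session(auth=auth)
heat = heat_client.Client("1", session=sess)

# Creating the stack asks Heat to bring up every resource in the template;
# re-running it with a new name repeats the same setup.
heat.stacks.create(stack_name="ux-demo", template=yaml.safe_load(TEMPLATE))
```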
How many people in the room here are familiar with the concept of personas? Can you raise your hands? Okay. So, because not everyone in the room is familiar with what personas are, I'll just give a little background. Basically, a persona is a representation of a real user consuming a particular application or product. In the context of OpenStack, when I first got started, what I noticed is that there seemed to be a gap in knowledge of how we perceive the consumers of OpenStack. And what happens is, in order to get everyone on the same page, we needed to start understanding our users better. These are the super users, our self-service end users, our project administrators, and so on. So this is a very early start of it. To do that, we use personas, and they're used to help guide decision making in the planning, design, and development of any product. One of the things that you do with a persona, once you've actually developed it based on real user data from interviews and surveys and so on, is you get everyone working from the same page, asking: how does, say, John or Jane actually do something? What are some of the challenges and pain points they face? What are some of their characteristics and behaviors? What things concern them? Is disaster recovery really a concern, and what does that really mean? So that's just an example.

This is a little more about why you use personas. For those of you who may or may not use OpenStack today, what you'll find is that there are some challenges. It's still fairly early in the development and adoption stages, at least within production environments, but it's getting more and more common. That being said, in order to help drive it so that more enterprise customers start to use it for their private clouds, we need a way to contribute back to the community and help developers understand: hey, how do enterprises use it? In fact, I just came from a session called Win the Enterprise, and in there there's a big effort or initiative underway to help solve some of these things. Personas, and a lot of the ops meetups that you see on Monday and Friday and sprinkled throughout the conference, are really there to help answer these questions, because we need to improve the overall user experience. One of the ways to get there is through personas, to drive the user stories and to drive thinking in the mindset and mental model of the user. But more importantly, can we actually develop a product that is what the consumer actually wants?

So as part of this effort, about late last year we formed a persona working group. This is leveraging a lot of the efforts of Dave Neary, who I believe is sitting in the audience here today, and who helped start this about a year ago in Portland, and then also in Hong Kong. And what we did was get a whole group of companies involved: Red Hat, HP, Rackspace, IBM, Mirantis, Puppet Labs, and so on. So what you're going to see is some of the work that we've done to date. Obviously, there's more that we can do, and what we'd like to do is share with you some of our efforts in understanding our cloud users. So you can go ahead and click all the slides. As mentioned, a lot of companies have joined in the effort, and we have a mailing list for it. If you go to the wiki, you'll actually see a whole lot of landing pages related to what we've done.
And there's more going on: we've had a lot of interviews with people willing to share their roles as well as their experiences with using cloud technologies, be it OpenStack or other public or private cloud technologies. And the whole goal, like I said, is really to get the community to understand what product they're building and to serve the needs of the users themselves. So without further ado, I'm going to pass on to Pete, who's from HP.

So, like I said, everybody here is from a different company. I'm not sure if I need this or not. Can you guys hear me back there? Do it for the recording? Okay, so I'll go ahead and use this thing. So what have we done? Nope, back, back. It's all right. And we're missing a slide? So, what have we done so far? Let me talk a little bit about that. One of the things I want to let you guys know is that you've really already been part of this process of creating the personas, because you may have noticed that if you've gone into, for example, LinkedIn and been part of one of the OpenStack groups, you've already seen one of the recruitment screeners that we've used to solicit people to participate in these persona development efforts. And one of the things I really want to emphasize to the group is, it's interesting, because I attended another presentation, and the whole idea behind that presentation was: as an operator, how can you contribute back to the community? It was interesting because, the way I'd describe it, it was all code based: how can you get code back to the community as an operator? And one of the things that occurred to me during that presentation was that a great way for an operator, for an end user, to really contribute to the community is by informing the process, informing us about how you intend to use the product. So, again, I'd really like to emphasize that when you see those recruitment screeners or surveys, fill them out. And if you don't fill them out, find somebody at your company to fill them out for you.

So what we did is create a recruitment screener. The trick with recruitment screeners is that you don't want to be so narrow that you exclude somebody, but you don't want to cast such a wide net that you're getting people who aren't relevant to your persona effort. You don't want to go in there and do an interview and have somebody say they're a car mechanic, and realize you're just not talking to the right person. You want these people to be focused. So we went through the screener, and afterwards we decided to go ahead and start doing the interviews. The thing about the interviews is that they're largely a qualitative effort, and as a result the questions are things that aren't easily encapsulated inside of a survey. They're really things you have to talk to people about. So you ultimately end up asking questions like: what does your typical day at work look like? Who are your team members? Who do you work with, who do you collaborate with, and how does that work exactly? And particularly: what's something that you've been challenged with at work, and how do you feel about that? Give us some thoughts about that.
And I think sometimes the most important question really is: what haven't we asked you that you want us to know? That's usually the last question that we ask. So we went through this whole survey and interview process. We interviewed about, I think, 18 people. Did it get to be a little bit more? Yeah, it's right around 18 to 20. So we did all these interviews. The way you do these interviews typically is with two people. You have one person who has some experience doing interviews, who knows to ask questions but not talk a whole lot, which is challenging for some people. And then you have a second person who's the note taker. Typically at HP, what I try to do is make sure the second person doing the note taking also has some technical expertise, so if we get to a point where there's a concept I can't quite grasp, I can always go to the note taker and get some clarification. The other thing, too, is we went for an hour, and at some point you get slowed down or you run out of questions, and the note taker's job is to step in, take over, and ask a few other questions.

Once we got done with that process, what we did is take all of that data and put it into a spreadsheet online. At that point, the question really is: what do you do with the data? It's interesting, because if you look at usability studies, they always have this really rigid process until they get to the data, I don't know if you've noticed this, and then they kind of say, and then we did some magic, and here are the results. So one of the things I wanted out of this presentation is to let you know that the process we used is actually fairly rigid; it's something that's been written about. We decided, with all the data that we had, it made more sense to converge in Austin as a group, and it's for the same reason we're meeting here today: virtual is great, but sometimes you just have to meet people in person.

And what we did is follow Kim Goodwin's process. You may not know who Kim Goodwin is, but I'm hoping you know who Alan Cooper is. Anyone? Okay, good, somebody in the back. Alan Cooper is the gentleman who wrote The Inmates Are Running the Asylum, and he has a consulting group called Cooper. Kim Goodwin worked there, and she put together a fairly rigid process around how you do personas. Part of that process is that you have to identify your behavioral and demographic variables, and what that really means is, as a team, and it was actually part of our homework... how are we doing on time, by the way? We okay? So as a team, what we did was get together and converge on Austin, because, number one, we needed to meet in person, and number two, because Austin's really cool. Yeah, exactly. Go Austin. We actually met at Rackspace's facility; they were kind enough to feed us and let us use a conference room for a couple of days. So the first day, what we did is identify: what are those variables that are really important to us? Actually, not important to us, that's wrong: what are the variables that emerged from the data? What are the themes that emerged from the data?
And some of the things that we ran into were, for example, distinctions between people about how they wanted to learn about OpenStack. If you have kids, you've probably seen this before: there are kids who are active learners, and there are kids who are bookish. We got the same thing when we interviewed people. There were some people who wanted to look at the documentation and read through all of it, and then we got these other people at the other end of the spectrum who said, the way I learn is by doing, so I generally just jump right into the code and start looking around. So that was one distinction. And I think the other one was what we referred to as organizational change culture, which is a little deceiving, because everybody here is all about OpenStack, but we definitely ran into a few people in the interviews who were maybe a little more hesitant, because they had an organizational or change culture that was a little hesitant about the community, maybe a little hesitant about changing too quickly. A perfect example would be somebody in the financial sector; they don't necessarily want to jump into OpenStack right away.

So what you do then is take all of these variables identified by the team and create continuums, typically. They can be discrete as well, but what you're seeing up here on the board is all of these variables with their continuums. So for example, with the desired approach to learning, one end might be structured, and the other end was more of a learn-by-doing approach. We took each of the people that we interviewed and put them on that continuum according to where we felt they fell. We were also very careful about not referring to them by name at that point, but keeping it a little more abstract. Then the fun happens, because now you have all these variables, you're putting these people on these continuums, and you start to identify patterns. So a pattern might be: here are two people, they're co-travelers, they're always right next to each other, no matter what continuum we put them on. Or, more interesting, they're co-travelers and then all of a sudden they end up on one variable completely different. And then you start finding, well, these two people are always together, but these other people on the continuum are completely the opposite. And from there is where your stories start to emerge. At that point you're asking, why do these people end up over here while those people end up over there? And frankly, that's where your personas start to emerge, or, like a friend of mine likes to say, that's the point where your personas introduce themselves to you.

And based on those variables, we identified three personas. So go ahead and go to the next slide. These are fresh, fresh personas, so I don't want to jump into them too much, but we have two personas right now. We have Ben, yeah, so fresh I haven't memorized them yet, Ben and, well, how do we pronounce it? Daishi.
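To make the co-traveler idea a bit more concrete: the team did this by eye on a board, but purely as a toy illustration, a sketch like the following captures the pattern they were looking for. Each interviewee gets a position between 0 and 1 on each behavioral variable, and pairs whose positions stay close across all the variables are candidate co-travelers. The variable names, interviewee labels, and numbers here are all made up.

```python
from itertools import combinations

# Made-up data: each interviewee's position (0.0 to 1.0) on each behavioral
# variable, as judged by the team during the workshop.
ratings = {
    "approach_to_learning": {"P1": 0.1, "P2": 0.2, "P3": 0.9, "P4": 0.8},
    "appetite_for_change":  {"P1": 0.2, "P2": 0.3, "P3": 0.7, "P4": 0.2},
}

def distance(a, b):
    """Average gap between two interviewees across all variables."""
    return sum(abs(pos[a] - pos[b]) for pos in ratings.values()) / len(ratings)

people = sorted(next(iter(ratings.values())))
for a, b in combinations(people, 2):
    # A small distance means the pair travels together across the continuums,
    # which is the kind of pattern a persona can grow out of.
    print(a, b, round(distance(a, b), 2))
```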
And some of the things that make Ben and Daishi a little bit different from one another are things like, for example, one may work for a larger company than the other. I think one works for a university and uses OpenStack for research activities, and the other one works for a Fortune 500 company. But pay attention, because we're going to work on these personas even more, and we anticipate releasing them formally, I would anticipate in the next couple of weeks. So who is next? Awesome, Jackie.

Five minutes, oh my. Okay, I'm going to talk about our next steps and the work we're probably going to do between now and Juno, over the next six months or so. So we talked about some of the research we've done, and we have to put that to work now. We did usability testing on Horizon a couple of months ago, and we got some really good results, a lot of ideas for things that we can improve and make even better. So we're going to work on implementing some of those changes, and I think you'll see things like a better launch instance workflow, error notifications that are much improved and more detailed, maybe better defaults, and things like that. And then, like Pete said, we have a lot to do on the persona interviews. We have to make them a lot easier for you to consume. Personas should be really glanceable: you can take a look at a persona and just spend a couple of minutes with it. Maybe you've got some charts, some visual things where you can compare the different ways that people behave and their motivations. So we're going to work on that too. And then we'll continue usability testing, whether it's on new components, and there's plenty to do on Horizon as well. We also want to be able to take requests from people as things come up. So for instance, if someone has a new UI that they want to talk about at the next summit and they say, hey, can I get a UX person to take a look and maybe make it that much better, we want to be able to do that too.

Liz talked about this: we know that our work isn't limited to Horizon, or even to the UI. Research that we've done, like findings from interviews, can really guide the use cases for any project. It doesn't have to have anything to do with the UI. So it could be that the developers for Neutron have 20 features they really want, and the research, hopefully, if we do it right, would be very helpful in prioritizing those things and figuring out what's really most important to the users. And then again, with the CLIs and APIs, we can help with making them more consistent, better documented, things like that.

Real quick here: there have been some conversations, and some of you might have been a part of them, about how UX really works in OpenStack. Are we like documentation, where we span all projects? Or do components have their own dedicated contributors, where, say, Liz just works on Horizon and somebody else works on TripleO? I think the answers to that will come out as we figure out our process, and I'm going to talk about our process on the next slide. So what do we need to do? One thing is that we need to identify the tools that are going to work for us. What works for developers and people working on documentation does not always work for people doing design. A good example is that we're trying to figure out how designers will review other designers' work. Where do we put those designs?
And how do we tell the community that those designs are out there to be reviewed in the first place? And how do we decide when we're done reviewing something and it's ready to be built? We met yesterday for a while. It was really productive, and we talked a lot about tools and process. So we've got some things that we're going to try, and some of them I'm sure will work and some won't. But our goal is really to fit our tools in with the community as much as possible. And then, if there's something we really just can't use an existing tool for (IRC is one that we've talked about), if we really need a new tool, say for reviewing designs, we'll go ahead and think about doing that.

We'd also really like to become a lot more proactive, and Liz talked about that a bit, a little less reactive. Right now we're doing a lot of testing on things that are already implemented. We're all people, we all make lots of mistakes, and we want to be able to make all of those mistakes before we build something and release it to users. So we'd much rather work on prototypes and mockups and have our users tell us what's wrong with them at that point.

So we really need your help. We talked about having community members on our research teams, and that's really important. In my career, I've found that when stakeholders and developers get involved, they find it really valuable. An example of this is a developer sitting in on a usability test and actually observing, so they get to watch users use the stuff that they've built. It also closes that communication gap, because the designer doesn't have to write some big usability report and send it to the developer; the developer can be like, whoa, I need to fix that, and just go fix it. And then we need people to be involved in interviews and usability tests as subjects. We're working on recruiting, we're working on different ways to do that, but that's definitely a challenge that we have. And we need designers doing interaction design and visual work, and we need people who really love to do user research. We're really hoping that some more companies start hiring these people and giving them bandwidth to work on OpenStack. So you guys can get involved. There's a Horizon IRC meeting, and pretty soon there's going to be a UX IRC meeting; I think in the next few weeks we'll be announcing what the schedule will be for that. And you can always reach out to somebody up here, or there are some other UX team members in the room. And if you just use Google and find our personas group or user experience pages on the OpenStack wiki, there's all kinds of information there too.

So, I think we have about five minutes for questions. One question here: the designs that are coming out, the screen designs or the mockups, will they be treated as a blueprint for the actual implementers? Like, do you cover every scenario, or do you just give a kind of prototype that the implementers should expand into the actual implementation? Let me repeat it and see if I understand your question. Is your question around whether we only prototype one specific component at a time, or whether we span all of OpenStack? No, my question is, when you design the screens using interaction design, will it be a blueprint of the actual implementation, or will it just be a kind of simple prototype where an actual implementer needs to expand it and figure out all the scenarios? I understand, yeah.
So we actually started a UX Launchpad site where we create blueprints for the design work that we want to do. It might be a feature that we see in Horizon, for example, that needs to happen. I'll give an example: there was one where we wanted to allow users to create accounts right from the login screen in Horizon. So we created a UX blueprint so that we could do the design, and then once that was done, we added it to the Horizon implementation blueprint. The UX work that's done, when it's in a design, say a PDF of what we think users would ultimately want it to look like, doesn't mean it has to be exactly that. It can still morph and change as it's implemented. Maybe it's implemented like that, and then we do some usability testing and it needs to change a little bit. I think your question was around whether it's a final stamp, that it has to be exactly that and it's always that, and the answer really is no. I think if we get into some of the prototyping that Jackie was talking about, we might be able to have an active prototype that's close to what the actual state is right then and there. But things always get a little bit out of sync as changes get made, little things here and there, and it is a fair amount of work to keep a prototype all the way up to date. If we use a prototype for user testing, though, we'd want it to be very up to date. For our usability testing, we used an actual running version of Icehouse, so we had the latest and greatest code running on a machine that users could hit. We didn't use a prototype for that. But it's a good question.

Anyone else have any other questions? Comments? Cool. We'll let you guys get to your next sessions, and feel free to come up and... oh, I'm sorry, sure. The personas. Yeah, so up on our user experience wiki, we have a personas working group section, and we'll be posting those up there in the next couple of... probably we'll post them on the personas wiki. So stay tuned and keep looking for it. Thank you. We'll also send a message out to the mailing list. We've decided that for UX work we're going to start using the tag UX, so if you want to add that to your mail filters so those messages get through, we'll be announcing things like the persona work that we've done on the mailing list as well. Any other questions? Great. Thanks everyone for your time. Appreciate it.