Welcome to Agile Roots 2010, sponsored by VersionOne, Rally Software, Virio, Amirisys, Agile Alliance, and XMission Internet: creating big picture design without big design, by Desirée Sy.

Hi everybody, good morning. My name is Desirée Sy. I'm going to be talking about big picture design and how to create big picture designs without big design up front. That doesn't show at all, does it? That's a big green circle around "big picture design," which you can't see. So what do I mean by big picture design? This is something I saw on Twitter recently. One of the things that has been happening, as I've been talking to people, is people have figured out how to do some user experience activities at a sprint level, right? But they don't really know what to do with things that have to look past a sprint or a cycle.

So these are some characteristics of Agile projects, and these are some great things that come from the fact that we've got development happening in small increments, we're doing it in time-boxed ways, and it's iterative and collaborative. It means that our product development is highly flexible and that we are basically not wasting things. We're making just enough of a product and no more. It means, in my case, that the quality of the product that you develop is extremely high. For example, on products that I've worked on, I have been able to usability test, in the field, the cut of the day, which prior to Agile development practices was unimaginable, okay? It's very change-friendly. When you get new data, you can do something with it; you can react to things, right? Obviously, we're faster to product, and when you have user experience involvement, it means that some of the work that I do as a user experience person is incorporated in the product right away, sometimes in the next sprint's working version. Forget about the old days, when it was in the next release if I was lucky. And it's continuous.
It doesn't happen just up front; it's not front-loaded. It happens all the way through.

However, some of the not-so-great things that result from working in small increments, time-boxed, iterative, and collaborative, is that you tend to develop what I think of as sprint-sized tunnel vision, okay? You tend to choose tools to do your work that work well within a sprint, or you tend to adapt the techniques and the tools that you use so that they can be done inside a month or two weeks. And yeah, you can get faster to the product, but you can get faster to the wrong product. Something I heard a lot when I was talking to people is, they've got the process down. They can do usability testing. They can collect requirements for the next sprint, and it's built, and they're designing a little bit ahead, just enough, and there are no blockages; they're feeding the stuff to development. But at the end of the day, sometimes their users still can't do stuff. They're five sprints in and people still can't solve a problem that they actually need to solve.

And also, it is iterative, but because of this kind of tunnel vision, what tends to happen is sometimes teams just kind of react instead of stepping back, thinking about it, and going, wait, yeah, we can make this change, but is this the best possible change we should make right now, right? So in general, it's continuous, but you can get into this crazy treadmill thing where you just keep working and working and you don't understand whether you're going anywhere, right?

So what you need to break away from this problem is context, and that, to me, is what big picture design can give you. Now, what you can't see now is a big green circle around "big design up front," okay? What I mean by big design up front is, we have a bunch of methods inside the user experience community.
We have tools that help us develop a big picture of where we want to go with a product, but the problem is they're often done at the very beginning of a product, so they're front-loaded. We tend to only do them once per release of a product. So to me, that means they're non-iterative and they're just insanely inflexible. So yes, a.k.a. big design up front. So what I'm saying is... why didn't I pick red for everything? Damn it. Anyway. Who knew? And so what I'm trying to say is, hey, you know what? We can do this. It just doesn't have to be big and it doesn't have to be up front. Yes. We too have the "but."

Okay. There's now an invisible circle around my name. Who am I? If anybody has heard of me in this group, it's probably because of a paper I wrote for the Journal of Usability Studies, where I described how we adapted some standard usability toolkit methods so that they can actually work inside an agile method with the team. And other than this, here are five things you might want to know about me. No, no, don't worry. Honest.

Okay, first, if you want to contact me afterwards, this is my email, and this is my Twitter account. Jot down just the Twitter account for now, because any time you see a link in these slides, later today I will tweet links to all of the stuff I've got links to inside my slides. And obviously the slides are going to go up eventually too, but I've been kind of fooling with them.

Second, I'm like the majority of people at this conference: I really can't code. In fact, and I'm older than I look, the last time I did anything that was remotely like that was in high school, and I was doing very simple Fortran programs on, yeah, mark-sense cards. So I'd frequently have sixes instead of commas. What I am, however, is what I would describe as an agile user experience practitioner, and I'm pretty good at it. What does that mean? I'm a generalist. I do all of these activities.
I do ethnography, behavior prototyping, and user experience validation, which is just a fancy way of saying I do research. I create what I describe as minimum-fidelity prototypes: I go with the lowest and the fastest possible to capture the behavior that I want to investigate. And then I validate those prototypes and iterate on them. And I do it inside this framework. I'm going to say up front, I'm not going to talk very much about these activities, because I have quite a bit of stuff to cover. But if anybody's interested in this, as I said, I'm going to give you a link to the paper, and come talk to me; I will try my best to explain some of this stuff.

Okay. Because I'm a user experience practitioner, an interaction designer, usually when I'm talking about this, I'm talking at a Usability Professionals' Association conference, or SIGCHI. And generally speaking, quite honestly, what I'm doing there is trying to persuade them that if they go agile, they won't have to sign away their firstborn at some point. So when I first presented to the agile community, it was very strange to me, because there is not that same fear or defensiveness. Even people who had never done agile before didn't seem to have the same kind of crazy fear about this as other people in my profession do. But I only mention this because... actually, can I ask, who here are interaction designers and do the things that I describe doing? Yeah, that's actually not too bad. That's more than I thought. And how many people here would describe themselves as product owners, business analysts, something like that? Okay. And how many people are developers? So I will say to the people who were not in the first group that raised their hands: if I say something that is complete jargon, just call me out on it and I'll try and explain it. However, the irony is I, myself, love this stuff.
I think it's fantastic for these reasons, which I'm not going to go into in too much detail, and today's talk is mostly going to focus on these two things.

So, I worked at a company that is now called Autodesk. I've worked at the same company for 13 or 14 years, but it keeps changing owners and it keeps changing names. A few years ago we were called Alias, before we were acquired by Autodesk. And we make 3D computer graphics software. So, this is Maya; this is Showcase, which was developed using Agile; and this is Sketchbook Pro, which I'm going to talk about in my examples today, and which is the product that I worked on. The things you need to know are that we make shrink-wrapped software and our audience is people that we describe as creative professionals. They use our products to make things like movies or video games or car designs. And they have very particular user experience thresholds that we need to hit in order to be credible. So, for example, our products can't look ugly. They cannot look hideous.

The user experience team at Alias, at the time of the case study I'm going to describe, consisted of one manager, Lynn Miller; four interaction designers like myself, and we were all generalists; two full-time graphic designers who work on nothing except things like icons and widgets and so forth; and a developer who works on user experience prototypes full time. These are not production-ready code; these are throwaway prototypes for the purpose of design exploration. We also have a standing role for a student intern, somebody who comes basically from the University of Waterloo for what they call co-op terms. And I mention this just so you can put some of what I'm going to talk about in context and say to yourself, hey, wait a minute, we're not quite like this, but now that I know that what Desirée is saying is based on this, maybe I can make modifications so that this stuff can work for me too.
The other thing is, I have to say, we were a well-established, mature user experience team. Teams liked working with us. In fact, we have a set of criteria that you have to fit to get us to work on your product. And so, when we went Agile, there was a strong impetus in the company that the things that we did when we were working waterfall had to continue and not be dropped when we moved to Agile, okay?

The other thing I want to mention is that Alias moved to Agile for very specific reasons. Our process was actually not broken. It was pretty good, but there were buts; there were things that were not as good, that we wanted to improve on. We were trying to move to Agile for all V1 product development, because there's a very high requirements risk for a first-generation product, right? We had at that time someone named Kevin Tate, who was a developer. He was researching all sorts of improvements in terms of how to do these things better, and he felt that we could mitigate some of that requirements risk by going to Agile. Sometimes when I'm talking with other teams, they have other reasons why they're going to try Agile development. Very often, people are already on what I would describe as a dysfunctional team, right? And if you're a dysfunctional team and you're changing your process, you may not get a good result.

So, anyway, the Agile team that worked on the product in the example I'm going to describe consisted of a product manager; a developer who was the lead developer; what I call three-and-a-half developers, and I'm saying the half because the dev lead only really coded half the time, and the other half of the time they were doing non-cody things; and, similarly, one-and-a-half interaction designers, because my manager, Lynn, was the half on this, and the other half of the time she was doing managing things.
We had a graphic designer; our student intern, as I was saying, who was doing the design prototypes; a QA person; and a documentation person. That team I described, all of us, were on this product full time, and we were co-located. When I say full time, what I mean is, very often on interaction design teams, if you're in a large company, you get assigned to lots of different projects or lots of different products. I'm going to say that to do what I'm describing is basically impossible unless you're embedded inside the Agile team, because what's replacing stuff like the documents and so forth is constant conversation and daily interaction. If you are on more than one product, that's really, really hard to do.

In terms of the role of the product owner, I would say that there's participation from the product manager, who contributes the business perspective; the dev lead, who's basically describing the technical constraints; and then an interaction designer, who's providing the use perspective. They're describing what's going to happen from the user's point of view. And a really big distinction that I want to make: oftentimes we hear somebody described as the customer or the client. Well, I don't tend to work so much with a client or a customer. I actually work with end users, people who will actually be using the software or product that you're developing, because that's the person who's going to have the experience that you're creating. So that's the behavior you need to research, prototype, and then iterate on.

I'm going to make an argument, though, that on a really good Agile team, every team member is going to step up. In other words, I don't remember what it is, chickens, pigs, whatever. Everybody's committed. Get on the bandwagon. The other thing is that we used Sprint, or Cycle, Zero. I'm going to apologize: I use "cycle," so if I say that, I mean "sprint." And iteration planning.
So, all the people that I described, at minimum, are involved in both of these activities. There are a lot of details about how we integrate user experience planning into this that I'm not going to talk about today, because I don't have enough time, but I'm going to be talking about it at Agile. So, if anybody wants to hear that, I will be there.

So, here's the big picture of where I'm going. I'm going to talk a little bit more in detail about what the problem is: why do we even need a big picture? What would we get out of a big picture design? And then the framework that our team used to help deal with that; I'm going to talk about how you do big picture work inside Agile. Then I'm going to talk about some design chunking and mini releases, at kind of a high level.

Okay. Jeff was kind enough to send me these slides. This is kind of awesome. Oh, wait, this is a... oh, okay, phew. Okay. So, everybody remember this? Jeff was saying we tend to focus on implementation, on how we get to the code, instead of thinking about, hey, what is the outcome of people having that code? Another way of saying this is, I think we have to look way more closely, as a group, at solving problems rather than building solutions. Okay. And, oh, wow, you can't see that at all. That is just "discovery." And then there's a little arrow here, and it's going to "delivery." We want to pay equal attention to the discovery practice as to the delivery practice. I'm going to describe that discovery phase as problem definition, right?

Here's the thing. Some problems are going to take more than two weeks to solve. Sorry, that's just the way it works, okay? But here's the but: you can still solve them two weeks at a time. Does everybody see the distinction between those two things, right? There is one thing I'm going to take issue with. Jeff was saying we have a lot of methods to measure output, and Jeff was also saying we don't have a lot of methods to measure outcome.
And I'm going to say, actually, no. Outcomes are definitely measurable, because that is what I do for a living. I'm going to point you, high level, to a really interesting talk that I didn't get to go to, but I saw the slides, that Josh Porter gave at UX London on metrics-driven design. And this is Dana Chisnell with a really... I don't know what that stuff is in the middle, but a really good resource on how to deconstruct big problems when you're doing research.

But, okay, here's the but. A lot of these discovery tools that I'm talking about, that we use to measure outcomes, are still done at the front, and they're done in way too much detail sometimes. Sometimes you hear about these crazy things, like people being embedded for six months at a company doing research, and it's, come on.

Okay, so here's the thing. We want to do big picture design, but we want to keep the qualities on the left and dump the qualities on the right. Does that make sense? What I mean by a big picture is that it's a true north, right? What are we doing with this product that we're building? We are heading in that direction, okay? But we're heading that way two weeks at a time, and we can do it in these little jumps, and with new information coming in, we can make course corrections, right? This stuff happens as you're working, but if you know that's where you're going, then you know what to do.

When you don't have a big picture, these are the sorts of things that happen. You get incomplete user workflows. That's what I was talking about, where you've been working for five sprints and, oddly enough, people still can't do something that they really need to do, right? You get feature creep, but in smaller units. And this is happening because you're getting incoming data, but at the sprint level, and then people are going, oh God, let's fix that.
They're not thinking, wait a minute, should we really be fixing that, right? And then, what I describe as value thresholds are not defined, right? It's sort of like, what's the minimum stuff you have to have together to have something that is of value to your users? And this is kind of the same as the feature creep thing. What's so great about Agile is you have all this feedback coming in continuously, but if you don't know where you're going, what the heck do you do with that feedback, right? And most important, especially for designers, quite frankly, is you don't know what done means. Did anybody here go to Ian's talk on enough design? I did, and I thought it was great, but one of the ways you know what enough means is by having a big picture and then deconstructing things downward.

Okay, so to me, in an Agile big picture, big does not mean detailed; big means high level. This is really, really important. An Agile big picture is not just up front. It is continuous gathering, or continuous validation, that's a better word, continuous validation of your big picture assumptions as you're going through the course of your product, and this is so huge. It's shared and it's visible, because the other thing that happens when we're so focused on the implementation and the delivery is everybody's trying to get their cards done, right? Everybody's trying to get their stuff done, and they may not know why. Where does this fit inside a bigger scheme that exists outside of two weeks? So when the whole team knows what's going on, people can not only make decisions for themselves, but they can help with decisions for other team members.

When I was saying high level does not mean detailed, that doesn't mean that we're not going to get to the detail. But that stuff is done at a sprint level, right? So what we use to think about this is levels of detail. There's a product level, and I think of this as being the bedrock.
These are the characteristics that define what your product is. Did anybody go to the brand presentation? Okay, this is a little bit like the brand of your product, right? These are things that probably aren't going to change too much. Then you want to think about things at the release level. This level tends to be concerned with what I think of as the business goals for your product, right? What are you actually trying to achieve in terms of selling something, or making revenue, or putting some piece of revenue strategy in place, okay? And within those, there are what I describe as capabilities. This is where I tend to think as an interaction designer. These are basic workflow chunks: what are people trying to get done? And these are grouped within releases. Then inside capabilities, there are smaller pieces that are thought of at the feature level. And then there's a sprint level, where you can take these little things that get grouped into capabilities and break them down. To me, the key thing is that anything at a sprint level has to be something that can be estimated with a high degree of confidence.

So we use goals, and we use goals at all of these levels of detail. Our goals are applied to the backlog so that we can discard, sort, and rank things. They focus our user experience investigations, and as requirements, they can define done.

So, my example: Sketchbook Pro is a 2D sketching application that is designed for use with the Tablet PC and with Wacom tablets. So, okay, that was there. Here's the thing, and I should have asked this before: how many people have actually read that article I put up, the JUS article? Okay, wow, okay, shoot. Now I almost wish I had stuck those slides into the deck.
The principle behind that is basically that you deconstruct things into sprint-level pieces, and that design is working at least one sprint, but probably two sprints, ahead of development. We are passing validated designs to development, and the chief benefit of doing all of this is that we make things that are essentially unestimatable, estimatable, right? But before we do that, as designers, we're thinking about product and release goals.

So what are product goals? Well, we have something called a product vision, and it describes who a product is for and who it's not for, what the product is and what it isn't. Then we take a look at that product vision, and those three groups that I described before define principles. I'm going to talk mostly about the design principles, but I'm also going to talk a little bit about the interaction with developers, and with business sometimes.

So, for example, in the product vision for Sketchbook Pro, we state that it's for creative professionals, and also that it's a sketching application, and based on that, it has to be responsive, quick, loose, okay? One of the things that gets mentioned over and over and over again once users have been using Sketchbook for a while is that they want image processing features, right? They want a bunch of things that you would find in an application like Photoshop. I'll just leave it at that. We don't do those, because guess what? We're not that app. We are a sketching app. Also, it means that some of the features inside Sketchbook are core features. They're the things that are going to make or break that application. One of those things is certainly brushes, right? It's hugely important that the brush quality is absolutely matchless. And also, because it's got to be quick and responsive and loose, anything we can do to make it faster and more responsive, or to promote flow, is going to be a good thing.
So based on the product vision, the design team created design principles. These are some examples, but this is not a full set. Sketchbook has the idea of elegant simplicity. In other words, just because you can do something, or you can expose something, you don't have to. We're trying to get away with the minimum amount that we possibly can in the application. Because it's designed for Wacom tablets and tablet PCs, it has to be stylus-friendly. This was actually our joke: this is our prime directive. Everything in Sketchbook has to be able to be done with just a stylus, and a stylus with no button, because when we first developed this, styluses on tablet PCs didn't actually have those toggles, so we couldn't use them as clutch combinations. Also, everything in Sketchbook has to be self-revealing, because we did not consider our competitor to be other sketching applications; our competitor, for creative professionals, was actually pen and paper. You don't need instructions on how to use pen and paper. To do the big things, the core things in Sketchbook, you have to be able to figure it out by just fooling around in it. Also, because our competitor was paper, we always wanted to maximize the work area, the canvas. Anything we do that clutters the canvas is a very bad thing.

Again, we were very, very careful about adding features to Sketchbook. We didn't just put things in because we could. There were a lot of things that we could have put in, and we refused some of them. As I said, because of stylus-friendly, no feature is done in Sketchbook until you can access it without a keyboard. Self-revealing also meant we always had to, as user experience people, think about investigating discoverability issues. And we had to constantly be on the lookout for evidence of clutter and user irritation with UI clutter. The engineering team had also developed some engineering principles.
Sometimes what would happen is we would have to have these conversations. For example, we had a design that we had developed because we wanted it to be self-revealing, but then we were talking to the engineering team. One engineering principle they called "optimized." By the way, all of our things were called things like "elegant simplicity," and theirs were called things like "optimized." What they meant by that is everything has to be really fast and has to have a very small code footprint. Our design was adding to the code footprint and it was slowing down the load-up. So we had a discussion: how can we still hit self-revealing, but also hit the engineering principle? These are the kinds of conversations we're constantly having.

Business principles were things like this: prior to this, the majority of Alias applications were what I would describe as niche products. Here we were entering a broader market. We developed a trial version for Sketchbook, because you couldn't really understand why it was better until you actually tried it. That meant that we had to design and code the trial version; we had to make room for that.

Another place that has things like this is TiVo. They had this great presentation where they said one of their design principles is actually "it's entertainment, stupid." They also say that they constantly are evaluating their designs because they want people to have what they call a lean-back experience instead of a lean-forward experience. When you're watching TV, you're kind of slumped this way; they avoid anything that would cause people to lean forward. One of the requests they are always getting is for keyboards; some people want keyboards for TiVos. It's like, no, we're not going to do that. We could do it, but that's a lean-forward thing, not a lean-back one.

Also, notoriously, Apple waited until version three of the iPhone OS, I guess I could call this now iOS, to add cut and paste. Why?
Because they couldn't put it in and still maintain what they thought of as the user experience that they wanted. I don't have inside knowledge, but I think it's a good guess that they were working on cut and paste in v1. Come on. But they had the guts to not release it until v3. So I'm going to say: have courage. Sometimes you have to hit a minimum threshold to make it your product.

When do you do this? Well, some projects don't have these at all. We actually had some of these in place before we went agile, so this is one of the things we wanted to maintain. I would say that some of this work, the product-level work, actually happens in a pre-sprint-zero phase. At most companies, there's what I would call a green-light situation, where you decide, hey, is that product a go or a no-go? I'm not going to discuss what all those activities are, but at Autodesk and at Alias, we had market validation activities that we would do before they would decide. For release goals, we tend to do that during sprint zero of the first agile release. This also means that it's got to be the kind of activity that fits within that framework. The only way you can do this is to make sure that you're not doing this stuff in detail.

That's just a recap. Release goals are essentially a pragmatic distillation of the business goals, so that they can be understood by the team in terms of their impact and help them make decisions. This is the thing that helps people align, so that we all know we're going the same way, and they give us help so that we know what happens if we make course corrections. The big thing is: it's not the backlog. Sometimes when I talk to people, I say, okay, what was the release goal of the last thing that you put out? And they start listing all the items that they have in their backlog. That's no good, because what a release goal is good for is to help you prioritize your backlog. It can't be the backlog.
It's got to be something higher level than that. For V2, the release goal was to remove barriers to purchase. As I said, we had a trial version, and it was very successful in that we got great reviews and people said they loved the product, but oddly enough, there were not that many conversions. People were downloading it and using the trial, but they weren't purchasing from the trial version. So we were going to remove the barriers to purchase. One of the things we did was a survey focused on people who had downloaded but not bought; we got that information from sales. That allowed us to rank and drop about 200 high-level features down to about 25, and to identify within those the 10 absolute must-hits, and the top five within those 10.

This is something I want to say just to the couple of designers who are here. Sometimes this means you can't do something that you just know, deep in your gut, you really should do. It's crazy. At this point, there was no way to save a customized color in Sketchbook, which to me, for this app, was madness. But guess what? It turns out that feature, even though it absolutely is something that should be done, based on the data wasn't one of the main reasons why people who were downloading were not buying. Another thing: one of the items that actually was in the top five was to add the ability to rotate the canvas. A discussion with development revealed that, with the technology that existed at that time, if we did rotate canvas, we wouldn't do anything else. It would basically be a one-feature release, so we dropped that.

Very, very rarely, you can actually redefine your release goal. The V2 release goal was not actually originally the one I described, remove barriers to purchase. It was something else, which I'm not sure I can talk about.
But we started work on that, and then it turned out that we got business data that indicated we should do a Mac port instead, and that that would make us more money than what we were actually planning to do. This framework allowed us to press the reset button. But importantly, it reset for the whole team. It made us understand: hey, what do we have to throw out now? What do we have to add now? What are the new considerations, and what work can we save? Because of the target market, keyboard shortcuts in something like this were very important, so we spent some time doing that work.

You do this during sprint zero of the upcoming release. Sometimes I would get the question, well, how many sprints or cycles do you have in a release? And I'm going to say, well, I don't know, you tell me. It's at your company, right? At Alias, for Sketchbook, we released once a year. When I say we released, what I mean is we got money; people could buy it once a year. We were actually putting out interim releases in the middle that people weren't paying for. The key thing is that what you're doing here is defining only enough detail so that you can do that thing I was talking about: you have a compass point. The hugely important thing, and the big mistake I think a lot of user experience people make, is that they start doing design at this point. And it's like, no. No designs come out of this. You're only defining your problems, not your solutions.

Once you know what your product is and what you're trying to get done in the release, this is where I tend to start thinking. We start setting design goals at the capability level. These articulate the problems that you described at the release level in such a way that you can actually solve a particular workflow or user problem. And the reason why I've got the slash is that sprint goals are just a subset of your capability goals.
So with your capability goals, you're saying: these are all the things we must solve and hit in order for this capability to be considered done. Your sprint goals are: what can I do inside this cycle, this sprint? I'm going to talk about this a little bit more. This is so important: these things are not defined through opinions, but through observation and through research. And when I say chunked, what that means is that we're basically doing research in two-week chunks, actually simultaneous with usability testing. We use these capability goals to chunk the designs, and then we also use them to chunk what I call mini releases. Those are the points at which you want to expose your work to end users, for example, if you have a beta group; we had an internal group that we used. And of course, they're used to define done for design. Okay, so this is V1 of Sketchbook. What we saw was that people could immediately start working, but more or less within five minutes of working with Sketchbook, they didn't like the size of their brush. They wanted to change the size of their brush. The only way they could actually do that inside V1 was to open this enormous custom brush editor, which had all of these controls, which were a little bit crazy. Sometimes, for some types of brushes, to change the size of the brush, you would actually have to adjust five controls. And frequently what would happen is people would break the brush; it would not look like a marker anymore by the time they had tried to resize it. And of course this big honking dialog is covering up their work, and they all hated that. Okay, so here's a subset, not all of them, of the capability goals for brush resize, which was defined as one of the big pieces we had to hit. In the first five minutes, you have to be able to resize your brush without documentation. And the reason we say that is it's going back to self-revealing.
Even if users want to look at documentation, they never want to look at it in the first five minutes. Ever. And we saw by watching people that they always wanted to resize their brush within five minutes. So we wanted to find a way to allow people to resize without opening that brush editor. We wanted one control that ganged together all the things that were happening, instead of two or five of them. We wanted to allow people to keep the focus in the canvas so that they could maximize the work area, which goes back to the design principles. We wanted fewer dialogs, and of course the whole thing had to work with just a stylus. So: this capability can't be done in two weeks, end of story. Also, the design research for it cannot be done inside two weeks. So you know how before I was saying we were working at least one sprint ahead? Well, we were going to have to work more than one sprint ahead, right? But here's the thing: by and large, anything that takes more than one sprint to design is going to take more than one sprint to implement. So you end up staggering things. This is what I'm calling design chunking: you're dividing design investigations into pieces that fit inside a sprint. And the reason you can get more than one of those in a row is that the last set of finished designs that you passed off is being implemented over more than one sprint. Does that make sense? Okay. Oh, go ahead. Actually, I'm going to give an example of this, and if I haven't answered your question at the end of the example, ask me again. Okay. So before, in the old days, we'd have this humongous piece of upfront research that we did at the very beginning of the project. Then we'd guess and write some crazy long specification, and then implementation would start, except that's not really what happened.
Implementation started while you were actually doing the other stuff, but I'll leave that for now. Anyway, what we're doing in Agile is breaking things down into these smaller, sprint-sized pieces. We're doing a little piece of mini-research, a little usability task that fits inside a two-week chunk, and we're iterating on a prototype that you can build inside of two weeks. And these things all build and accumulate until you've hit all your capability goals. The way we handle this in terms of planning, and I'm not going to talk about this too much, as I said, but I will be talking about it at Agile, is that fundamentally, these capabilities can't be estimated, right? Because development doesn't know yet what we're asking them to do. So we tend to use color-coded cards. Design capabilities go on blue cards, and they never have estimates on them. A blue card is basically a high-level description, plus an indication of about when people should expect that the card will be broken down, we call it breaking, into white cards, which are feature cards. Development owns the feature cards, okay? And they all have estimates on them. What we're saying is: during these sets of cycles, we're doing design investigations for this capability, and on this date, it's going to turn into a whole bunch of these white cards. And the person who's going to turn the blue cards into the white cards is the developer. So at some point, the designer is going to have to sit down with the developer and break the cards. Okay, so a caveat with doing this is that you need a buffer. Some of the ways you get that buffer I've described in the paper, but essentially what you tend to do is front-load into the initial sprints things that require very little design input, right?
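The two-card scheme described here, unestimated blue capability cards that later break into estimated white feature cards, can be sketched as a small data structure. All names, dates, and estimates below are invented for illustration; the talk describes physical cards on a planning board, not software.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class CapabilityCard:
    """A 'blue card': owned by design, never carries an estimate."""
    name: str
    break_date: str                      # roughly when it will be broken into feature cards
    goals: List[str] = field(default_factory=list)

@dataclass
class FeatureCard:
    """A 'white card': owned by development, always carries an estimate."""
    name: str
    estimate_days: float
    capability: str                      # back-link so the designer keeps the big-picture translation

def break_capability(cap: CapabilityCard,
                     pieces: List[Tuple[str, float]]) -> List[FeatureCard]:
    """Designer and developer sit down together and break one blue card
    into estimated white cards. The pieces (names and estimates) come
    out of that conversation; these are hypothetical."""
    return [FeatureCard(name, estimate, cap.name) for name, estimate in pieces]

brush_resize = CapabilityCard("Brush resize", break_date="cycle 3",
                              goals=["resize without brush editor", "one control"])
features = break_capability(brush_resize,
                            [("HUD display", 3.0), ("resize widget", 5.0)])
```

The back-link from each feature card to its capability is the point: even when developers rename features into something technical, the designer can still translate each card back to the capability goal it serves.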
The example of that for Sketchbook Pro was Photoshop export, which from a design point of view just meant adding something to the Save As drop-down. Now, it's still big design if you break it down into too many sprints. So don't do something crazy, like make this stretch out to six months; that's completely nutty. I'd say three to four sprints is a good guideline. So how do you actually do this? Here's a recap of some of the capability goals that we were looking at for brush resize. We chunked these roughly into four pieces of investigation: brush resize with a hotkey; brush resize with a stylus, in terms of how it actually acted; brush resize in terms of how it actually looked; and then what I call a workflow prototype, where you put a lot more of these pieces together, where they overlap with other design chunks. So, for example, for brush resize with a hotkey, we created a series of disposable code prototypes. We were addressing this subset of the capability goals: that you could resize without the brush editor, that there was one control for the size instead of two to five, and that we were keeping the focus in canvas and removing dialogs. I'm not going to describe all the different versions of this, but you can see from some of the characteristics that this is very early. We've got things like the infamous ugly black dots, which basically led to menus and hidden things. This kind of prototype is disposable, and if you're going to usability-test it, it requires an incredible amount of intervention. You have to explain a bunch of stuff, like "just ignore the black dots," and so on and so forth. So we tested these very rapidly. I think we did probably six or seven versions of this with our student intern, and the usability testing was very simple. We used internal user proxies.
They had to be people with the correct user characteristics who were not on the Sketchbook team, and we asked them to do tasks that we knew were relevant for brush resize, and we learned a lot. I'm not going to describe this because of time, but the interaction design for the stylus we did with whiteboard prototyping. Meanwhile, our graphic designers were simultaneously working on the look, and then we put a whole bunch of these things together. One of the capability goals I talked about was that in the first five minutes, you have to be able to learn how to do this without documentation. Well, it turned out that it doesn't make sense to look at discoverability issues until you can do more complete workflows. So we did not investigate that particular capability goal until we built what we called the workflow prototype, which combined brush resize with the brush palette chunk and also custom brushes. Because learning how to do this was very organic; you had to have all three of those pieces together at the same time to correctly understand whether or not people were getting it. At each one of these points we learned something very valuable, and we made a lot of design refinements. And it was possible to do this in two-week sprints. Then, once we had finished the series of investigations, we had an actual design. So then we would try to figure out how to put these together into mini releases, right? Here you're also trying to break things down into sprint-sized pieces, but the questions you're asking yourself are slightly different. With design chunking, what you're asking yourself is: inside the sprint, what can I actually investigate reasonably? With the implementation breakdown, you're trying to figure out which users are going to see the next cuts, and when. And you also want to layer the implementation so that you understand the order in which you are delivering capability goals in each sprint.
And here you're always thinking: well, okay, what would actually happen if, God forbid, we ran out of time, or we had to drop the last pieces of this? If we didn't do the last pieces of this implementation, would we still have something shippable, right? So here you're really evaluating; you're basing this on value, okay? How do we put this together so that people can get something of value done as we're putting each of these pieces in place? So even though we did the investigations in a completely different order, the way we did the brush resize implementation was to put things together in two pieces: the per-brush property editor, and the brush resize widget. The property editor is a property sheet, and there's one for each type of brush; one of its controls sets the size of the brush, and the others control something else for each different brush. There's an overlap here with the custom brushes design, and it hits these main goals: we knew that people would be able to learn this without using documentation; we knew that it was a much smaller piece, instead of the brush editor, which was huge, it was a much smaller property sheet; there was only one control, so people were not breaking their brushes; and you could do this with a stylus. But here's the thing: we knew this was insufficient, right? Technically it's like, hey, they can resize brushes, what's the problem, right? But this is not Sketchbook, and we knew it. Actually, it's kind of funny; I almost wish we had done this: we wanted to put in sealed envelopes, okay, this is what our users are going to say, and this is what they're going to complain about, when we release this. And this is the final implementation. There's this widget; you can move it around. It's a kind of movable, spatial mode indicator: when you press inside of it and drag, you're resizing; if you're outside of it, you're painting. It keeps the focus in canvas, and there are way fewer dialogs.
People really love this, and it's very easy to use with a tablet PC. I mean, we got feedback that people thought this was a complete knock out of the park. You have to compare it to the V1 of this, too. So, back to the breaking issue, right? All the time we were doing the design work, on the planning board this design capability was represented as a blue card with no estimate on it. When we were pretty confident about the design, then we'd turn that blue card into a series of feature cards with estimates on them. As I said, the developers own those, so what I would tend to do is take a guess at it myself. I would try to break things down so that, to the best of my knowledge, they would be done in an order that delivered the best value at any given point. I would make up names that made sense to me, and then I would explain the design to the developer. And by the way, I have to really emphasize this: this is not the first time the developer has seen this. They've been here all along. They were there during the initial ideation. They were there during the prototyping. They were describing technical constraints that would inform our designs. And it's mandatory for a developer who's working on a feature to attend at least one usability test for that feature. So this design is not a surprise, right? But they'd take my guesses at how to break these things down into features, and then they would name the features, and to be perfectly honest, more often than not, they would name them something that I didn't really understand. There would be some kind of technical thing that made sense to the developer. But here's the thing: because we were doing it at the same time, I understood what that meant.
I had the translation for what that feature actually was and where it fit in terms of the big picture, in terms of the capability: what does that feature represent? So this would actually allow me to put usability acceptance criteria on the back of each feature card. And this is something where working with your testers is fantastic, because very often they can think of ways to make unit-driven tests and things like that for some of these criteria. An example: for brush resize, there's this heads-up display that shows a number. One of the things we found was that as you were resizing, if the number of digits changed and the decimal place moved over, there was this little pop that would happen, right? And that basically made the whole application seem stuttery; it made it seem slower and less responsive in terms of its user experience. We obviously never fixed this inside the prototypes, why bother, we knew about it. But this was an acceptance criterion for the heads-up display feature card in brush resize. The other thing is that because you're having these conversations, it forms a kind of informal pattern, because we have heads-up displays for a whole bunch of other features as well. And now there's an understanding that for all heads-up displays with decimal points in Sketchbook, the decimal point should be fixed in place. This is really important, because very often development wants to help you create a good user experience, but they need something concrete, specific, and that makes sense inside a sprint. So to interaction designers, I have to say: think about how you can predigest some of these higher-level things into much lower-level, concrete things. So, the recap of this: you want to do product and release goals together; that's your big picture. That's how you understand, hey, this is what we want to do with this product.
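The heads-up-display criterion above, the decimal point must not jump as the digit count changes, is concrete enough to sketch as code. A minimal, hypothetical version: format the number into a fixed-width string so every label is the same length, which is exactly the kind of check a tester could automate against the feature card. The function name and field width are assumptions, not anything from Sketchbook itself.

```python
def hud_label(size_px: float, width: int = 6) -> str:
    """Format a brush size for a heads-up display with a fixed field width,
    so the decimal point sits in the same column whether the value is 1.0
    or 100.0. This avoids the visual 'pop' described in the talk.
    The width of 6 characters is an arbitrary illustrative choice."""
    return f"{size_px:>{width}.1f}"

# Acceptance-criterion-style check: labels across the whole size range
# are all the same length, so nothing shifts as the user drags.
labels = [hud_label(s) for s in (1.0, 9.9, 10.0, 99.9, 100.0)]
```

A criterion phrased this way ("all HUD labels for sizes 1.0 to 100.0 are equal width") is concrete, testable inside a sprint, and reusable as a pattern for every other heads-up display in the product.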
And that's going to help you; it's going to drive the capabilities that you need to meet your release goals while still keeping it your product. Then, when you have something that is going to take more than two weeks to do, you want to look at your whole list of goals and figure out: okay, over the next series of iterations, what is it possible for me to investigate, and how? That piece of the design is done when you meet your subset of goals. And when you've done all the investigation necessary for your whole capability, then you have enough to put together a design. You have an understanding of how to measure the outcome, which is what I was saying earlier: you can actually measure whether or not you have the right outcome if you have an understanding, in this way, of what problems you're trying to solve. Then you take your whole design, and you again digest it into these little mini releases that fit inside sprints. Here what you're asking yourself is: what order do I put these in so that I'm delivering some value that our users can actually understand? Even if at any given point it's not the whole story, at least you're thinking about it in that way; you're grouping things so people can actually get something done. And then, when you're breaking these capabilities down to the feature level, pair with the programmer so that you understand what's happening, and make sure you can make your user experience requirements fit at the granularity of the feature card, because that's how it's being implemented. So I'm going to make a call for expanding what we do as agile user experience people. Traditionally, these are the kinds of things we do. Think of the whole team as trying to cut a path through a forest, say.
The kinds of things that user experience people do: we create prototypes, we do rapid usability testing, we identify and fix problems as we go, we define these mini specifications and check them. These are all things I think of as things that help the team clear obstacles so that you can cut that path a little bit faster or better. But I think as user experience people, we have to start thinking about activities that help the team refine their intent or check their direction. In essence, this is what I meant when I said that the product owner consists of the product manager, the business person, the tech lead, and the interaction designer. The product owner activities that we bring to the table are these sorts of things. Just as we figured out how to do usability testing rapidly, in such a way that it fits inside two-week chunks, we figured out a way to do rapid longitudinal research that fits inside two-week chunks. That's a whole other talk, sorry. And we want to be able to set design goals so that we understand where we're going and when we're done. And we want to help group and prioritize the backlog as we go along, plan the product exposure, and recognize done. And I'm sorry, there is one thing I forgot. We're participating in iteration planning, looking at that planning board, and that's what I was saying: we're getting all this feedback as we go along. We're going out into the field. We get feature requests or bug reports or things like that. And because we've got the product goals and the release goals, we can evaluate the incoming feedback and contribute to the iteration planning, right? At any given point, what you can basically ask is: hey, is what we were planning to work on in the next cycle more important than these incoming pieces?
And if that's not the case, then we put something in and we take something out, right? So that's how we feed continuous feedback, from the user research point of view, into this process. And it also helps us just stop, or say no. And that's it. I wanted to leave time for questions. And the first thing I want to ask is: did I answer your question? Very well. Are there other questions? [Audience question about teams that are not co-located.] I'm experimenting with a bunch of things. Now, okay, first, here's how we report back our findings. We don't write reports of site visits anymore. We do write light usability test reports, and we send them out to the team, but we don't expect anyone to read them, frankly. What we do instead: during the daily Scrum or the stand-up meeting, we basically say, hey, we just came back from a customer site and we've got some stories to share, does anybody want to hear? Now, whereas before no one read our usability reports, frankly, everybody stays to hear the stories from the field. So what you do is take whatever prototype you went out with, at whatever fidelity it is, and you demonstrate what that user did. Generally speaking, we do it with their data, too. We grab data from the customer site and we feed it through the prototype, whether it's a paper prototype that's behavioral or, in the case of Sketchbook, almost always these coded prototypes. And we say, hey, this is what they did, and this is the problem they had, and everybody goes, oh, well, yeah, I've got to fix that. Sometimes it would be a design issue, so it would come back on us. Sometimes it would be a technical issue, and we could decide whether or not to put that in. But the key thing is that it's this immediate, continuous, emotion-driven, visceral kind of story that we use to try to motivate people, because quite honestly, people aren't going to do the work unless they think there's a really good reason.
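The put-something-in, take-something-out move at the start of this answer can be framed as a tiny value comparison. This is a sketch of the reasoning only: the talk never assigns numeric values to backlog items, so the items, scores, and the greedy swap rule below are all hypothetical.

```python
def replan(planned, incoming):
    """One way to frame the iteration-planning question from the talk:
    is anything incoming more valuable than the weakest planned item?
    Items are (name, value) pairs. Capacity is fixed, so for every
    item that comes in, one goes out. All values are invented."""
    backlog = sorted(planned, key=lambda item: item[1], reverse=True)
    for candidate in sorted(incoming, key=lambda item: item[1], reverse=True):
        if candidate[1] > backlog[-1][1]:   # beats the weakest planned item
            backlog[-1] = candidate         # put something in, take something out
            backlog.sort(key=lambda item: item[1], reverse=True)
    return [name for name, _ in backlog]

planned = [("HUD display", 8), ("color swatch", 3), ("splash screen", 1)]
incoming = [("crash on save", 9), ("typo in menu", 2)]
next_cycle = replan(planned, incoming)
```

In practice the "values" here are the judgment calls the product owner group makes against the product and release goals; the point is only that having those goals written down makes the comparison possible at all.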
So here's the problem with not being co-located, right? It's so much harder to do this kind of thing. So what we're trying to figure out is how to videotape this stuff, which is not so bad, actually. And when I say videotape, we couldn't very often videotape Sketchbook in the field, for a whole bunch of different reasons, but we can videotape the reenactments. Do you see what I'm saying? And we're experimenting with a bunch of techniques for how you can videotape paper prototypes as well. I don't know if that answers your question. But the key thing: everybody's got to be talking at the same time when you're doing the product definition stuff, because everybody has to have the same understanding of that. Similarly, everybody has to have the same understanding of the release goals, okay? Per feature, or excuse me, per capability, you're not going to get that same daily richness if you're on the other side of the world from someone. But we try to handle these discussions as video discussions rather than email when possible. The big thing is, when you're getting to the point where you're doing that breakdown, turning a capability, this unestimated thing, into the estimates, you have to have that conversation with the developer, even if it's on the phone. Don't do that by email; that's just a short recipe for disaster. The other thing I have to say, and I hate to say this, is that the onus is, frankly, on the designer to go to the developer initially. And do you know why? It's because very often, culturally, a developer is not used to being able to ask a designer about this. It won't occur to them. They'll come to a point where they're looking at your design, and there are going to be two ways to implement it, and one of them you will prefer, trust me. But they'll just choose one, you know? So you have to go and do that walk-around.
You have to go to the developer initially, and then it'll be okay, because that pattern will be established. [Audience: You lay out a very organized process, and it seems very refined. Can you explain a bit about how you arrived at it?] I have to go back to Jeff's keynote. Okay, you remember he was talking about Jared Spool's little continuum of theory and tricks? I'm going to be honest: all of this is tricks. But then I had to write this paper for the JUS, and they don't publish tricks. So I had to think, okay, how do I explain this, or formulate it, to make it into something that doesn't just seem like one-offs, okay? So, man, that whole little diagram that looks so beautiful. Sometimes I just want to say to you people: here's the rule. You have to have a validated design ready for the next sprint. And a story. Okay? Actually, I used to take this dance class with this teacher who was hilarious. He looked like a football player, not a dance guy. He'd run through this little set of steps, right? And he'd step back, and we'd all be going blink-blink, I have no idea how to do that. And he'd say: just do what you have to do to get there by the count of eight. So that is what I'm saying to you as designers. Just do what you have to do to get there by the count of eight. You know that you have to validate certain things. You have to hit certain goals. Figure out what you have to do to get that done, right? [Audience: In terms of defining release goals and capability goals, you have a planning board and all of that. Do the developers complain about going through the motions, having to set those goals, or using those tools?] Okay. Developers don't tend to be involved in the very initial high-level work, where you're looking at things like the blue cards I was describing.
And there are some other colors of cards that I'm not going to talk about here, because developers don't care about the stuff until it gets to the feature cards. They own the feature cards. So their scope of work is to figure out the pieces and put estimates on them for the first two cycles. That's not crazy, is it? Right? And from our point of view, it's less than we used to do, right? Because they used to ask us to write specs with no data, like madness. So this was less work for us, and it's less for the developers because it's just in time. And it's not rocket science, right? This is stuff on cards. And by the way, that was the other thing with co-location: you can't use cards if you're not co-located. You've got to figure out a way to put them up in some shared space, generally an online space. I don't know, does that answer your question? [Audience: It definitely did. I'm sort of a manager, so I'm very interested in these kinds of practices.] And I also have to say: take the aspects of it that make sense for you, and then find something that works for you. I don't know if that answers your question or not. Any other questions or comments? [Audience: You've got a couple of slides that outline the design team and the development team. If I did my math right, you had about 16 people. If you had resources that limited your team size, what's the minimum team size for you to be successful at this?] Okay, everybody hates when I say this, but it depends on what you're building. Right? To build this product, this was the minimum team size. To be honest, it was probably one fewer developer than we actually needed, but I digress. To hit it, okay? And I have to say that this is a very, very small team size by both Alias and Autodesk standards.
But an interesting question came up at this fantastic retreat that Anders, back there, organized a few months ago, where we got together a whole bunch of people who wanted to figure out how to work in an agile, user-experience-driven way. It was very cross-functional; it wasn't just interaction designers, there were developers and a tester and product owners and so forth. And one of the questions that came out of that was: how do you scale agile development? I think it's pretty clear that you want to scale it by cloning. Right? You want to find these smaller groups and replicate that whole structure. Because at a certain point, when there are too many people, this starts getting really crazy, because you're replacing documentation with communication. So if you can't communicate with everybody reasonably, then there's a problem. And that's the reason why I was saying before that the QA person, the documentation person, the interaction designers, they're full-time per product, because this is not going to work any other way. That's the way we work. Anders? [Anders: No, I had a comment about the question of how you handle being distributed. The way I think of being distributed is that everything is the same, except it takes much more time. Communication is much more noisy, and it's a riskier approach in general.] I should also say this: almost all of the opinions I've just given are not Autodesk's, they're mine. I described how we worked, but any time I've said "I think," that's me, not necessarily Autodesk. But I think if you're going to have to work with non-co-located teams, it would make way more sense to have completely self-contained Agile teams that are each co-located. Do you know what I mean?
And break the functionality into pieces so that things can happen in parallel, and each team has its own volition and its own drive and can make decisions immediately. But I don't know, that's just me. You had a question. [Audience: I have a question on the scaling of the Agile process. If you scale down far enough, would it make sense to have some individuals own more than one role?] People shouldn't be concerned so much with roles; they should be concerned with skill sets. That's part of the reason why I said it up front: I can't code. So it would be a waste of time, quite honestly, it would not be the best use of me as a resource, to put me on non-design activities. Similarly, on our team, it would be a waste of time for the developers, who have deep knowledge of how to make kick-ass brushes, to not be coding; that's nuts. That may not be the case in your situation. It may be that the way your skills are distributed, different people do different things, and the product owner might be different people on your team. You might work in a place where you haven't got usability generalists; you've got a separate user research team and a separate set of designers. Then you have to think: okay, in my situation, what do I need to make this all work? I don't know, again. Does that answer your question? [Audience: When you do sprint zero at the release level, do you, the user experience people, have your own sprint zero?] The whole team has one sprint zero. One cycle zero, actually. Okay, so the whole team has sprint zero, and it takes one sprint. That's where we're doing the release-level stuff. We're trying to figure out, at a very high level, what do we need to make something that, in our case, people are going to buy. Then, in the first cycle, the development guys are working on things that deliver high value but have very, very low design requirements. That was the Photoshop export example. While they're doing that, we are doing the design activities that I described.
Let's put it this way: there are a bunch of features that we're investigating, and there's also a blue card, a capability card, here somewhere. To hit that, we're doing design here. As we're designing, what we get back is data, and that feeds our designs. We're doing some contextual analysis here. Again, I apologize, but seriously, how to gather user feedback? I've got, like, an hour on that. [Audience: Where can we see that talk, besides waiting for the edited video?] The funny thing is, okay, Alias, as I said, had moved to Agile for all new product development, and then we were acquired by Autodesk. Autodesk is principally a waterfall shop, but there are pockets of Agile rebellion happening throughout Autodesk. Because of this work that we've done, Lynn Miller and my colleague John Schrag and I put together this whole colloquium. It's on Agile design, and it's, I don't know, eight hours of stuff. Inside that, there's a course on how to do Agile user feedback. But I've tried to be way more disciplined, because the feedback I get when I make presentations is always "too much stuff, too much stuff." So I've tried to really take out anything that didn't have to do with the big picture stuff for this talk. So I'm really sorry about that. [Audience: Is the material you've put together for Autodesk, the training, available online?] No, I'm sorry, that's internal right now. We're taking it out in bits and pieces, like this talk. But if you have individual questions, come and ask me; in case you haven't noticed, I love this stuff. Okay. There's that quote from Dorothy Parker: I've been rich and I've been poor, and rich is better. I have designed this way, and I have designed without doing this. This is better.