Alright, so since I'm being filmed, I'm going to try to stay right here, which will be hard for me because I like to walk around and interact. Here's the plan for today. I'm going to give a quick introduction to Adobe Systems. Most people are probably familiar with some Adobe products, but I'll give a little background that's pertinent to how we develop software. I'll talk about the team I worked on, the first Scrum team at Adobe, which is the audio team that delivered the audio products. And then I'll talk a little bit about how that story started to spread Scrum across Adobe, and about my current role.

So first, Adobe's product development lifecycle. Adobe, as many of you might know, is a company that's really built on acquisitions. We acquired many, many different companies to build the current Adobe. And because of that, we're very conscious of being respectful of the teams we acquire, of their own culture and their own development methodologies. To this day, Adobe has never said, when you become part of Adobe, here's the lifecycle you need to follow. So what we have at Adobe is this diverse set of groups, all developing software however they think is best. Of course there's a little bit of cross-pollination that happens over time, but different groups can use different terminology and different lifecycles. It's an interesting problem. It has its benefits, of course, because teams can come in and be immediately productive, and they don't feel like big Adobe is telling them how to do things. Adobe is about 7,000 employees right now, making about $3 billion a year, doing primarily desktop software but also software services as well: products like Photoshop, Acrobat, and Flash, all pretty well-known.

Where I came into this was on the audio products. My background was all in audio. So I'm going to talk a little bit about this team, which is the team that did the first Scrum project. The history of that team goes all the way back to 1990, with the original founder of Syntrillium Software, who was at the time working for Microsoft, doing coding, but had a background in audio acoustics and wanted to write a PC audio editor. So he wrote this little shareware app that he called Cool Edit and started asking for $5 donations. He started getting all these checks in, and he said it was annoying to take all those checks to the bank. So he upped the price to $25, thinking the check flow would slow down, but the number of checks stayed about the same, and each one was just five times as much. So he decided he'd better incorporate, and he formed this company called Syntrillium. Throughout the years there were various releases of Cool Edit, and eventually Adobe, for their video business, was interested in acquiring an audio application to be a companion to their video editors. I was working for Syntrillium at the time Adobe acquired it, so I was one of Adobe's acquisitions. We had a couple releases of what became Adobe Audition. It used to be called Cool Edit Pro, and it became Adobe Audition as soon as Adobe bought it. I worked on the product as a tester. Around the Audition 2.0 cycle, I became the program manager for Audition. And shortly after, I heard about Scrum.
Since that time, that audio team has developed three major releases, all using Scrum, as well as a few updates, and they're still using Scrum today. So let me talk a little bit about Audition 2.0. This was the cycle where I became the program manager; I came in in the middle of it. There were a few issues we were dealing with on this cycle. It was a major rewrite of the back end, the mixing engine that mixes all the audio streams together and applies effects, all the fun stuff you can do with an audio application. We were also reskinning the whole front end to make it look like the other applications in the video suite. Until then, we still had kind of the old user interface that we had developed as a little startup company, and we were making it look like the other Adobe applications.

One of the issues we had was the fundamental technology we were relying on to make this new mix engine work really fast. Anyone who has turned a knob on a stereo expects it to get louder immediately, not two seconds later, and we were experiencing problems like that. So we had to develop a new engine. The guy who was working on that had been with the company for a long time, and quite frankly, he was pretty burned out on the product. All of our developers at that time were working from their homes, and all of the testers were working together in an office, so it was an interesting environment. I'll call this developer John. The really key feature John was developing was the ASIO driver, the thing that was going to allow us to do real-time mixing. John had worked on all of our driver development up until that time.

We had a milestone that we called Feature Complete. It works like a beta milestone: we stop new feature development and just start fixing all the bugs we built up along the way, a very waterfall-ish model at the time. John delivered that audio driver about two weeks before that Feature Complete date, and it was pretty full of bugs. I remember, as a tester, when it came in, I was pretty freaked out that we were going to have to get all those bugs fixed in the new engine, because we got very little testing on it prior to that Feature Complete milestone.

You can see here the actual bug curve, the count of open, to-be-fixed bugs over the cycle. This is the actual shape of the curve, and you can guess where Feature Complete happened, right? Right at the top of that mountain. Because of that big mountain, it was a pretty rough end game, and we could barely meet our schedule. As a newly acquired Adobe application, we had to ship at the same time as all the other products, which was a new experience for us. So we barely made it. This bottom picture down here: we found out after the fact, in one of those moments where somebody accidentally hits Reply All when they meant to reply just to the sender, that John had been spending quite a bit of his time building this great new home theater when he was supposed to be developing the ASIO driver. So there were a few personality issues we were dealing with as well.
And of course John's manager was working with him to do a better job on those things, without a lot of success. So all this was happening, and I was a new program manager at Adobe. I remember going to a brown-bag, hour-long lunchtime presentation from Jeff Sutherland. And Jeff was saying things like, in Scrum, we develop things in priority order: feature priority order, based on customer value, or based on risk. I remember thinking, oh, if we had done that, the ASIO driver would have been done in the first couple of months. That would have been great. And in Scrum, all the people on the team work together and meet once a day for the daily Scrum, where they answer questions like, what did you work on yesterday? And I'm like, wow. If John had to answer what he worked on yesterday, what's he going to say? Well, I was installing my new home theater receiver? So I was pretty excited about hearing about Scrum, and it really resonated with me as a better way to do things.

So I read what I could, got online, and learned all I could about Scrum. Then I wrote up a little presentation to the other managers on my team saying, I think this is a good way to do things, and I think on our next release we should try this new development approach, this Scrum thing. With a few bumps along the way, everybody finally agreed that, yeah, we're going to give this thing a shot. We got buy-in from the executives directly over us to try this on the product. So myself and the engineering manager went to Ken Schwaber's two-day Scrum training back in August of 2006. Then we brought in a local trainer, Alan Shalloway, from a company called Net Objectives, to train the team, with varying levels of success depending on who you talk to. And we kicked it off.

So I'm going to talk about some of the things that happened over the course of that cycle, and about the results we had. There were four areas that really changed when we moved to Scrum. The first is code quality. The second was our ability to focus on what customers really want. The third is that kind of buzzword, productivity, which I'll talk about. And then teamwork.

So here are the results we got over time. The green line you might remember: that first curve, the Audition 2.0 cycle. The next curve is the cycle where we used Scrum. You can see we had about a third the number of bugs to fix over the cycle, with a very similar release time. These are 18-month releases; the Creative Suite all releases on these 18-month to two-year cycles, and we were fitting into that time box. And you can see we still had a little bit of a hill to climb. It was a bit of a self-fulfilling prophecy, because we said, well, we're trying this new thing, but we're not going to modify these milestones like Feature Complete. What if this thing blows up? We don't want feature development running up until the last month, and then it blows up and we don't have good quality. So we kept this built-in period we call an end game, where we're just fixing bugs. And in that sprint right before Feature Complete, we were saying, wow, we'd really like to get more features in.
And we were saying, our bug count is one third of what it's been with the same team size, but gosh, maybe we should let quality slip a little bit and get a few more features in this time, because what are we going to do in those five months? So it was a bit of a self-fulfilling prophecy that we still had a bit of a bug hill. But still, if you look at cycle-over-cycle improvement: way better.

The customer focus area: because at the end of every sprint we had complete features that we could show to customers, we were able to go out to private beta, you know, NDA-signed beta testers who were able to use these features and give us feedback very early on about the direction of the product. But then we were also able to do a public beta. In fact, we had talked about doing a public beta release of this new application called Soundbooth, which is another audio product, targeted at producers. Originally, in the planning phase of that project, we said, you know what, it's too risky. We don't know what the product is going to look like by that point; we don't know what the value is. So we had decided, both as a team and with executive buy-in, not to do it.

Well, we got eight or nine months into it, and Adobe, for various reasons, which I can go into if you want to ask later, was pressuring us a little bit. Adobe said, you know what, we want you to do a public beta. In fact, we want you to have it ready for the Max conference, which is in three weeks. And I said, well, we had no plans to do a public beta. And they said, yeah, I know, but we really need you to do one. Luckily, we were at the end of a sprint. In fact, to go back to this bug curve, see that little peak right above 250 days? Those are calendar days, by the way. That little peak right there is where they asked us to do it. We were at a very, very low bug count, and we had just finished a sprint. So we said, well, gosh, should we do this? I think maybe we could. In this next sprint, we could say the goal for the sprint is to do a public beta. That involves all kinds of things, like legal reviews, and removing codecs: for the MP3 codec, Fraunhofer wants a lot of money anytime somebody downloads it, and for a free product we said, that's probably not a good idea, paying Fraunhofer for a free download. So we removed the Fraunhofer codecs, things like that. But we were able, within three weeks, to turn that around and say, OK, here's your public beta build, and get it up on Adobe Labs.

If we had gotten a request like that during the Audition 2.0 cycle, there's no way we could have responded, because of the bug count, number one. But number two, the features we had were all these kind of half-developed things, right? It all culminated at the Feature Complete date, and only then did everything work. There's no way we would have had a cohesive set of features we could get feedback on. So we were able to go out and get that public feedback, which was extremely valuable to the team as well.

I mentioned productivity. So this term "end game" is what we call the period where we stop adding value, you know, new functionality, and just fix all the bugs we created. Over an 18-month cycle, the standard for us was five months of that: just fixing bugs.
And I mentioned that for the CS3 cycle, we said we're not going to alter that; we're going to keep five months, just in case this thing doesn't work. Well, we got to that Feature Complete date you saw, with one third the bug count. So as a management team we were sitting at a table saying, what are we going to do? Should we just fix bugs for five months even though we don't have that many? We'll get done earlier; we'll start working on the next release, which could be good. Most of us said, you know, we want to get more features into this product. Let's do that. But our very conservative QA manager on the team said, ah, I think that because we've been doing these sprints where we're laser-focused on individual features, we haven't really been focusing on the whole thing. He said, I think when we get to Feature Complete and start these big workflow tests, we're going to turn up all kinds of bugs that we didn't see before. Now, most of the people on the team and in management had been playing with the application, and it felt pretty solid. And we were saying, really? I don't know if I agree. But we said, all right, here's our compromise. Let's do one sprint, and in that sprint we'll all focus on regression testing and workflow testing, and let's see what happens. Maybe you're right, and the bug counts will skyrocket once we start doing lots of focused testing. Or maybe we're right, and this thing's really stable, and the count will keep going down. If we rewind back to this graph, you can see that we hit our Feature Complete date, we did that first sprint of regression testing, and the count actually kept going down, as most of us expected. So what we agreed to was to add one more sprint of feature development. We were doing four-week sprints, so we were able to squeeze one more month of feature development out of that cycle beyond what we originally planned. For CS4, we were able to reduce the end game down to three months. And for the next version we're currently working on, the plan is a two-month end game. So cycle over cycle, we're getting more and more time of actual feature development.

So I mentioned I wanted to tell a little bit about how the adoption went for us. I have three sprint burndown charts here that tell a bit of a story, and then I'll talk about retrospectives a little bit, and how retrospectives were extremely powerful for our team in making this work. On this first sprint burndown, you can see some pretty tell-tale signs. First of all, the thing flat-lines for the first two or three weeks of the sprint. At that point the team was still saying, I think we can do it, I think we can do it. You know, if we just really buckle down, we're going to get these tasks marked off and we'll make it. And then finally, like a week before the review, reality sets in: gosh, I guess we're not going to do it. So we pulled a user story out of that sprint and put it back on the product backlog. We're not going to get to that one this time; let's try to get the rest of them done. And you can see that even doing that, deferring those things back onto the product backlog, we still didn't quite make it.
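To make the mechanics of those charts concrete: a sprint burndown is nothing fancier than summing the team's remaining task estimates each day and comparing that sum to a straight line down to zero. Here is a minimal sketch of that arithmetic, with hypothetical task names and numbers; it's an illustration, not the tooling the Adobe teams actually used:

```python
# Minimal sketch of sprint burndown arithmetic (hypothetical numbers,
# not Adobe's actual tooling). Remaining task-hours are re-estimated
# daily; the chart plots them against an ideal straight line to zero.

SPRINT_DAYS = 20  # a four-week sprint, as on the audio team

# Remaining hours per task, re-estimated at the end of each day.
daily_estimates = [
    {"mix-engine task": 40, "UI checkbox": 8, "codec test": 16},  # day 1
    {"mix-engine task": 40, "UI checkbox": 6, "codec test": 16},  # day 2: flat-lining
    {"mix-engine task": 24, "UI checkbox": 0, "codec test": 12},  # day 3
]

def burndown(estimates, sprint_days):
    """Return (day, remaining, ideal) triples for plotting."""
    total = sum(estimates[0].values())
    points = []
    for day, tasks in enumerate(estimates, start=1):
        remaining = sum(tasks.values())
        ideal = total * (sprint_days - day) / sprint_days
        points.append((day, remaining, ideal))
    return points

for day, remaining, ideal in burndown(daily_estimates, SPRINT_DAYS):
    print(f"day {day:2d}: {remaining:3d}h remaining (ideal {ideal:5.1f}h)")
```

A flat "remaining" column while the ideal line keeps falling is exactly the tell-tale shape on that first chart.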
Looking at that chart, you can see there's still a little bit of work left at the end. And almost certainly that little bit of work was "test this, fix this bug", right? The last little pieces of work. So we had a retrospective at the end of this sprint. And what came out of that retrospective was this idea that if we're over-committed for a sprint, number one, let's be more careful about what we commit to; let's maybe reduce our expectations a little bit. But number two, if we know three days into it, four days, a week into it, that we're not going to make it, let's put it back on the product backlog right then. Let's go to the product owner and say, this isn't going to make it, put it back on the product backlog, and really focus on getting the rest of the stuff done so we can get down to zero in our sprint. So that was our plan for sprint number two. And you can see there were a few points in there where the team said, as we committed, put that thing back on the product backlog; a couple of times they did that. And yet they still didn't quite make it down to zero at the end of the sprint.

The team was pretty frustrated at this retrospective, the sprint-two retrospective. They were saying, you know, we tried this thing, we thought it was going to work, and it didn't. What do we do differently? And we started talking about how we do things on our team, and a few issues came up. Sprints seven and eight are represented by these little arcs. Now, any piece of feature development is going to go through a natural arc. The specifics vary from feature to feature, of course, but there's going to be some analysis, some coding, some testing, whatever the phases are, maybe testing and coding interleaved, depending on how it works. On our team we were doing things like the design, then the coding, and then black-box testing at the end. And there was this reluctance on the part of some of the developers to check things in until the whole feature for that sprint was done. "Then I'll check it in," and the reason given was, "I don't want QA writing bugs against stuff that's not done." So you can see right away there's a little bit of a trust issue, a communication issue. What we talked about was, hey, maybe what we should do is check in every day, whatever progress you've made, and then talk to your tester counterpart and say, here's what I've done, here's what's ready to be tested. We're talking every day at the daily Scrum anyway. And the testing guys said, hey, we're not going to write bugs against stuff you haven't finished yet. Don't worry about that. That's in the past; we're working together here.

So instead of doing these big sets of features, we broke things down into very small pieces of functionality. From a UI standpoint, it might be something like this little checkbox; from a back-end standpoint, it might be adding support for this sample rate for an audio process. Very small, tiny increments of functionality. And those could get checked in and tested, so there was a much faster feedback loop. Also, as you do that, you have many more opportunities to say: is this the right amount of scope for this user story, or is it this amount? Which one actually solves the problem?
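Here's a hedged sketch of what one of those tiny test-first increments might look like, in the spirit of "and how are we going to test that?". Everything in it is hypothetical: the resample() function and the supported-rate set are stand-ins for illustration, not Audition's or Soundbooth's actual engine code:

```python
import unittest

# Hypothetical illustration of one tiny, test-first increment:
# "add support for this sample rate" as a single testable slice.

SUPPORTED_RATES = {44100, 48000}

def resample(samples, src_rate, dst_rate):
    """Naive nearest-sample resampler: just enough to pass the tests."""
    if src_rate not in SUPPORTED_RATES or dst_rate not in SUPPORTED_RATES:
        raise ValueError(f"unsupported rate: {src_rate} -> {dst_rate}")
    if src_rate == dst_rate:
        return list(samples)
    ratio = src_rate / dst_rate
    n_out = int(len(samples) * dst_rate / src_rate)
    return [samples[min(int(i * ratio), len(samples) - 1)] for i in range(n_out)]

class TestResample(unittest.TestCase):
    # The test is written first: the answer to "how are we going
    # to test that?" from sprint planning, captured as code.
    def test_48k_to_44k_changes_length(self):
        out = resample([0.0] * 48000, 48000, 44100)
        self.assertEqual(len(out), 44100)

    def test_unsupported_rate_rejected(self):
        with self.assertRaises(ValueError):
            resample([0.0], 22050, 44100)

if __name__ == "__main__":
    unittest.main()
```

The point isn't the signal processing (a real resampler would filter rather than pick nearest samples); it's that the increment is small enough that the test can be named in the planning meeting before the code exists.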
And as soon as we solve the problem, let's not do all those other details. I don't know if you've ever used Adobe products, but you probably know they're way over-engineered. That's an Adobe guy standing here saying: yes, our products are way over-engineered. They do many more things than almost anybody needs. We're not good at the 80/20 rule at Adobe. And so we said, let's just stop. Let's stop adding those last couple of little extras if this thing already solves the user story. Especially at the end of sprints, we could use that and say, you know what, we've got three days left in the sprint. I could add support for this new thing, or I could stop, and we could actually get everything tested and all the bugs fixed in these last couple of days.

So this was the discovery from that sprint-eight retrospective. The mantra that came out of it was that in the sprint planning meeting, for every little coding task, the question was going to be: and how are we going to test that? We had one white-box tester out of ten testers; the rest were all black-box. They didn't have a lot of code expertise. These were people who had a lot of customer expertise: they knew what audio people wanted, and they were verifying from that standpoint. So they had to ask, how am I going to test this? And our developers would have to say, oh, good question. And that drove talking through how to test each little subset of functionality. So what really happened is the team kind of self-discovered test-driven development, because we had all those discussions in the sprint planning meeting and then throughout the sprint. Every time some new thing was going to come in, the thought popped up: OK, how are we going to test this? So we broke things down into much smaller increments and made them test-driven. And you can see that the burndown for sprint nine was much better. It's a much more natural curve, and it actually got down to zero.

So this is just a story of how retrospectives were really valuable to our team. If we had hired somebody to come in and say, I see that you're not getting done within sprints, and I think test-driven development is the solution to your problem, and I'm going to train you, it never would have stuck, right? Everybody would have said, oh, yeah, sure, and gone back to what they were doing. The fact that the team self-discovered it and came up with it on their own made it very powerful for the team.

So that's kind of the story of Soundbooth; that was the first release of Soundbooth. We certainly didn't do everything right. The next burndown after sprint nine probably had more problems, you know, if I were to show all of them. But that was one small success on that team. At the end of that cycle we had the kind of data I was just showing you: we reduced our regression testing time, we reduced our bug count significantly, and we were able to respond very quickly to what our executives wanted. Those stories started making their way around the company. As I mentioned before, development practices at Adobe are a pretty grassroots kind of thing; there's nobody telling the teams how to do it. And so those stories started getting around, you know: hey, here's a team that did it this way. In fact, I was sitting in a major review of the Creative Suite, because after I did Soundbooth, I got a promotion to be the Group Program Manager for Creative Suite.
So I went from this small agile team to the Creative Suite, which is 1,200 developers developing 14 major point products, and trying to manage that process. It was like going from sprinting to running in molasses. It was an interesting experience personally. But right after that, I was at this review, and the CEO was asking somebody, hey, can you go to public beta? Can you go to public beta in such-and-such number of months? And the executive said, well, I don't think we can get there in that time. And the CEO said, well, Soundbooth did it. What were they using? And it kind of went down the chain: yeah, what were they using? Down to the director: what was it called, Peter? Scrum. "Scrum." And it went back up the chain, and Bruce Chizen, the CEO, kind of said, yeah, they're using Scrum. Why don't you look into that? So those stories started making their way around. And because I had been the program manager on that team, people started asking me, hey, can you tell us more about this Scrum thing? So I put together a little PowerPoint presentation and started talking about what we had done and what Scrum is. Over time, in fact over that CS4 release, I had more and more people asking about it. There was more and more interest in the company in what Scrum could do for these teams.

So we started bringing Ken Schwaber in to do two-day ScrumMaster trainings, and he did five of those over the course of a year. It was really a pull situation: if I could get enough people who were willing to pay Ken's fee, I'd organize it and we'd kick it off. And then I'd help Ken train, because what we found was that Ken would come in and say, here's Scrum, here's the philosophy behind it, and teams would be very skeptical and say, I want to see how this could work at my company. And then if I stood up and said, and now here's how we did it at this company, then everybody would say, oh, I guess we could do it here. And then we'd kind of loosen the shackles a little bit and say, yeah, let's figure out how we could make this thing work for your team. So we brought Ken in, and at the end of the cycle I moved into a full-time role of just coaching teams, because there was so much demand. Outside trainers are very expensive; I'm very cheap, relatively, for Adobe to hire. And so I've been able to see the roll-out of Scrum at the company. And it's been completely a pull; there's been no push.

I work for a group called the Adobe Quality Initiative, and there are different best-practice people in it. There's somebody who's a big peer-review advocate, who can go and talk about peer reviews and help teams. That's all they do: they give examples like, here's a team that did these peer reviews and was able to reduce their defects by 80 percent. And then people get interested, and they might bring them in and do some training. So there are things like peer reviews, and there's a big TDD guy involved in that as well. And so it's really just a pull: are you interested in this stuff? Cool, we'll come and train on it. And over time our goal is to spread these best practices, the things that work well, through the company.
So since that time I've trained a lot of teams in Scrum. We're definitely following the standard technology adoption curve that Everett Rogers described. Today we're kind of in the early-majority stage at Adobe, where about 25 percent of product development is using Scrum. And I think that number's probably bigger. To get that number, I sent out a big email blast saying, hey, is your team using Scrum? What's your team size? I was trying to gather some data, and I'm about to do that again. I'm pretty sure the number will go up.

As for team sizes: the LiveCycle team, which builds our enterprise software, is probably the biggest Scrum team in Adobe. They have about 300 developers across about 50 Scrum teams. The Audition team that I worked on is about 20 developers. So you can see there are various team sizes, both co-located and distributed. For example, one was a distributed team where we had a home group of about 60 people and 14 in Seattle, and we gave the Seattle group their own chunk of stuff to work on. We're also in the process of looking at different tools, because I always get that question: oh, what tools do you use? Adobe has actually built some of its own internal tooling, and lots of teams use the various tools that are out there.

I mentioned that earlier this year I sent out a Scrum survey, and I had two goals for that survey. The first one is this big question being asked in the community: are you actually doing agile? Are you actually doing Scrum? And what does that mean? So I wanted to get a feel for, when I hear that such-and-such team uses Scrum at Adobe, what they actually mean by that. For some teams, it just means they're having stand-ups every day, and as far as they're concerned, that means they're doing Scrum. Other teams say they're doing it but aren't doing anything like that, and still have a lot of command-and-control structures in place. So I wanted to get some real data: are you actually doing this? And then number two, what effect has it actually had on your team, as far as bug counts, productivity, features you're able to deliver to customers, how the team feels about it, whether this is a good way to work. So there were two parts to the survey that went out. I sent it out with no incentives. You know, sometimes we'll send a survey out with incentives, get entered in a drawing for a prize, that kind of thing; this one went out with no incentives. We based the "are you actually doing it" part on the Nokia test, which we modified a little bit, if you're familiar with that. We had about 30 people respond, across 21 product teams. Almost all of them were in their first year of Scrum adoption, so it's a pretty skewed viewpoint of Scrum. But here's some of the data we found. These are the questions about how this has affected your team. So I would ask a question like, has the product quality improved since implementing Scrum, on a scale of zero to 10, 10 being the most improved. These are the initial results from the first pass of the survey; I'll be sending it out again later this month.
I'll be very curious to see, once these teams have another four months of development, more sprints under their belts, whether things are getting better on some teams or worse on some teams, how things are going. The Nokia-test questions, the "are you actually doing it?" part, got some pretty poor numbers, on a scale from zero to 10. If you look up the Nokia test, you can figure out what these scores actually mean, but there are some pretty poor responses here about whether we're actually doing it. Again, these teams are in their first year. Out of 100, the average Adobe score is about 58. But interestingly, I asked this question: if it were up to you, would you keep doing Scrum on your team? And 80 percent of people said yes. So even though there are some issues with the implementation and with how things are going, the great majority of people still thought it was a better way to work than what they were doing before.

If I filter those results down to teams who have been using it for more than a year, it's a smaller sample size, so take it with a grain of salt, but it's much more positive: 100 percent of those respondents said, yeah, keep doing it. Interestingly enough, the Nokia test scores are almost identical if you look at the overall number, but the individual areas they scored well in were much different. The teams that had been doing it longer than a year had really high scores on things like self-management and the product owner, those types of things. So they had the vision and were able to deliver on it. Maybe their sprint lengths were too long, maybe other things were wrong, not following by-the-book Scrum, but overall those teams that have been doing it a long time were really positive. Of course, there's some self-selection there as well, because if a team is having a really bad experience with Scrum, they're going to stop doing it and drop out of that sample. But these are the results for how those teams felt Scrum had impacted these areas. Teams doing it longer than a year felt that quality was much higher, that communication was better, that the overall product was better: we deliver a better product for our customers. And "is my work-life balance better?" scored significantly higher too. So we'll wrap up there. Any questions?

In your stand-ups, do you have the development manager participate? If so, how do you get past the programmers feeling like they're being watched every day?

So the question is, do we have managers participating in the daily Scrums, in the stand-ups, and how do we get away from the command-and-control implications of that? I've seen both sides of that, since I worked on one team as a Scrum master for an eight-month cycle and then coached many other teams. On the audio team, our managers were just members of the Scrum team. They were actually delivering code, testing code, whatever their role was, admittedly with a much smaller contribution than the others. And so they felt like they were invested. Those were also pretty good managers already, as far as not being command-and-control people. It's a personality thing, and they were very successful doing that.
Our engineering manager was very egalitarian, very open to ideas about how to do things, but also not afraid to say, hmm, what about this? Not afraid to give leadership where it was appropriate. So he was really good at balancing that. I have seen some teams that moved to Scrum where the managers didn't want to be on the team like that, but they still wanted to be in the daily Scrum. And on some teams we had issues where things would come up at the retrospectives, like, how come the QA manager is always telling me what to do at the end of the daily Scrum? Luckily those managers have been very open to that feedback in the retrospectives. I think it's helped to have me there as kind of an outsider they can ask: hey, is this the right way to do it, or should we do it a different way? So having an outside perspective has been valuable there, so that people felt like they could go to somebody and say, why are they asking these questions? And I can say, hey, managers, remember when we said this is a self-managing team? You may be overstepping that a little bit. Yeah?

You mentioned your team kind of discovered test-driven development along the way. Could you explain what that was? Was it more along the lines of acceptance testing or unit testing?

It was acceptance-test driven, and that's what actually dictated the development order of everything, as opposed to unit testing. We were developing unit tests at the same time, but that wasn't the main driver.

You have the three roles of Scrum: Scrum master, team, and product owner. Which ones do you think were the best defined, and which were the most problematic? I'm particularly interested in the product owner role and how that worked at Adobe.

OK, the question is about the various Scrum roles, the three roles, and how those worked at Adobe. The first two Scrum projects, my project and another one that grew up in the same organization after they'd heard about ours, both had just fabulous product owners. So we had a really good start. As it grew to many teams, what I saw was that not everybody had these great product managers who were really visionary and really good at communicating to the team what they wanted. One of the things that's been mentioned in some of the other sessions is that the long-term vision can get lost easily. So I would see teams saying, well, we only need to have user stories defined for the next sprint; that's all we need, and we can start doing Scrum. And I would say, wait a minute, then you don't know what your release is going to look like. And they couldn't answer executive questions like, when are you going to ship that? And the executives would say, is this because you're using Scrum? And I'd have to, you know, put out that fire and explain that, no, we actually do do product planning and portfolio planning in Scrum, and here's how it works: you can use Mike Cohn's planning onion, right, with the various levels of planning. So that was an area where there were some bumps in the road once we scaled up to a broad set of product owners, instead of the lucky few we started with, where we just happened to have the right people in place. Other questions? Great. Thank you very much.