Sorry, that was some technical glitches. My name is Rupa. I work for IBM, in the Rational division, and I'm going to talk about our experiences. We started as an agile team with annual releases, and we are trying to move more and more into a continuous delivery mindset. We are still not continuous the way Facebook or Twitter are, but we have our own version of this whole journey, and that's what I'm going to talk about. So, who we are and what we do: we have a big team, distributed across the world. The reason I bring that up is because agile, as I'm sure everybody knows, started in small collocated teams. We practice distributed agile, and that's how we started as a team, from Portland to Tokyo, with a whole gamut of time zones thrown in between. The product we work on is a team collaboration tool called Rational Team Concert. I'm not going to be talking about the product here, but it's interesting because our learnings from this experience are helping us develop a better product, since the product itself is all about agile delivery, DevOps, and the ALM life cycle. So this is how we started: with disciplined agile. The focus comes from the manifesto: we wanted to produce software that worked, was delivered on time, and responded to change. We started on this about six years ago, and six years ago agile was still fairly new in the market. There were a lot of misconceptions and myths floating around agile: that agile requires no design, no documentation, no process. Basically the idea was that you sit and code and automatically produce brilliant software. There was this myth that somehow some order would emerge out of the madness of agile. And that's not really true, right?
So what we discovered in our agile journey is that agile does require design, but just enough design. It doesn't require you to produce a humongous design document that nobody is going to read. It doesn't require you to produce humongous help documents that your customer is never going to open. It's a balanced thing: everything in moderation, as your doctor tells you. It does need a process, but the best thing would be if you were not even aware of the process, if the tools and technologies you're using take care of it for you. You design a process and you tell your agile tool, here is what I want: when I'm in sprint one, I want my developers to behave in a certain way, check in code in a certain way, go through a certain number of reviews; when I'm in the final sprint, I want those rules to change. And your tool actually takes care of implementing that process. That's what agile should be all about. Then the rest of the things you all know. Test-driven development: I don't know how many people actually practice TDD? Okay. I'm embarrassed to say that we really don't. I mean, we try, but it doesn't always work. Then stakeholder collaboration throughout the life cycle. That's interesting. We always keep in touch with our customers. We have lots of design-partner and customer calls, even while the release is ongoing, and we take their feedback and try to incorporate it into the product. And then end-of-iteration demos, again a very common agile practice, and one we found to be especially useful as an internal demo before demoing to the customer. I've found most developers to be very blunt and honest, so they'll be very openly critical of your product.
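The idea of per-iteration process rules enforced by the tool could be sketched roughly like this. This is a hedged illustration of the concept, not Rational Team Concert's actual process configuration; the rule names, iteration names, and values are all made up.

```python
# Illustrative sketch: the tool, not convention, decides whether a
# delivery is allowed under the rules configured for the current sprint.
# All names and numbers here are hypothetical.

PROCESS_RULES = {
    "sprint 1":     {"reviews_required": 1, "approvers": []},
    "final sprint": {"reviews_required": 2, "approvers": ["component lead"]},
}

def can_deliver(iteration, reviews_done, approvals=()):
    """Gate a code delivery on the rules configured for this iteration."""
    rules = PROCESS_RULES[iteration]
    return (reviews_done >= rules["reviews_required"]
            and all(a in approvals for a in rules["approvers"]))

print(can_deliver("sprint 1", reviews_done=1))                       # True
print(can_deliver("final sprint", reviews_done=2))                   # False
print(can_deliver("final sprint", 2, approvals=["component lead"]))  # True
```

The point of the design is that tightening the rules for the final sprint is a configuration change, not a new habit the team has to remember.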
Earlier on, you get emotionally attached to your product and you kind of feel bad about it, but later you realize that's probably the best feedback anybody is ever going to give you. And then retrospectives: you improve as you go along. And how do you improve? You improve by having a bunch of people, at the end of every sprint, come together in a room and just rant it out. That has really helped us. People really get angry; they don't shout obscenities exactly, but they get close. In the end, they vent. Because, and I don't know if people agree with this, I used to do waterfall before, and agile is very stressful. At least I find it stressful. Your productivity improves, but you're always out to prove something: I have to develop this software and deliver it in three weeks. You can't really say, oh, I can take this week off now, I have the whole year to tide me along. So retrospectives, I think, are really important, especially because they help you improve in the next iteration. So this was how we started. And then we started moving towards continuous delivery. This was because our customers told us that annual releases were not working for them. They would want a feature in our product and we would tell them, you know what, you just missed the boat this year; you can have it maybe June of next year. It really pissed them off. So they wanted something that came more continuously. They basically wanted the product to be a pipeline of features that would come to them, first maybe every three months, then every month, every week, maybe every day. Continuous delivery can take you as far as you want to go, I guess. So what is continuous delivery? I got this from the net.
You repeatedly push out bug fixes and new features to customers. You code, build, test, and deliver continuously. And right now, with all the emphasis on DevOps in various companies, I think continuous delivery is actually becoming a reality in a lot of places. The goal is low overheads and high quality. That's really a general goal of software development, not specifically a CD goal, but apparently CD can help you get there. So the chart here is what we started with: code, build, test, deliver. And then we realized, and I'll go through this in the other slides, that that doesn't quite cut it. As somebody was mentioning today, you cannot have completely no design, completely no prototyping. So we realized that code, build, test, deliver was fine if you were just implementing a product, but you need to invest some off-cycle time to investigate features, especially the large ones, and to design and prototype. So that is our continuous delivery cycle: code, build, test, deliver, and in parallel, teams investigating and designing for the next cycle. We found that to work better than just code, build, test, deliver. And how frequent makes it continuous? I remember when I first submitted my experience report to this conference, people almost laughed politely at me. They said, you have a quarterly release, that's hardly continuous. And that's true. With all these Facebook posts about how they deliver fixes several times a day so that his mom is happy, I've read posts like that. But really, "continuous" depends on the kind of product you're delivering.
If you have a purely web-based tool, then maybe you can deliver multiple times a week. But the tool I work on has a desktop version as well as a web version, and nobody's going to keep changing their desktop tools; you're not going to be continuously reinstalling and uninstalling software. So for that, in the best case, we'll probably get to at most once a week or once every couple of weeks. We haven't reached there yet, but that's the goal. So how did we change our rhythm? When we started with our annual release, we had several backlogs. In my tool, we have a source control system, a change management system, and a tracking and planning system, and each one of those capabilities, as we call them, had its own backlog. They each had their own priorities from customers and they were all working on those. So a release was almost like several features which were going to come together at the end of a year, and that would make up the final product. We realized that if we had to move to quarterly releases, or fortnightly releases or whatever, we couldn't go that way. Everybody had to have a common plan, a common goal. So that was the first thing we started with: a common backlog for our product, and right-sizing the features. You cannot say, here's a feature a team of five can deliver in a year, so I'll give you a team of 50, can you deliver it in a month? Software doesn't work like that; it's not really multiplicative. So how do you organize a team to deliver these features? How do you right-size them? These were things we built up our own experiences around.
Then, how do you flow these features? When you have a year to develop, flowing features is fairly easy, because nobody is really using your product in the middle of the year. Once you're sure your unit tests have passed, you keep delivering to the mainstream. But that doesn't work as well for, say, a monthly release. You have to make sure that nothing breaks your feature, nothing breaks your product, because if it's broken, it's harder to get back to normalcy. Then the deployment pipeline. How do you actually deploy your product and make sure it runs across various platforms? It might support a whole bunch of operating systems or a whole bunch of hardware. How do you make sure it gets deployed to all those platforms, gets tested, and reaches your customers on time? Continuous testing, automation, and quality. These were some of our focus points as we changed our rhythm. So the first one was planning a release. We started this about a year and a half ago: we took a look at our existing backlog, took each feature or plan item from it, and broke it up into things we could release in eight weeks. We call them eight-week chunks, and those eight-week chunks actually created our new backlog. That's right-sizing a release. We didn't go with the story point approach, because we'd already found that story points don't work for us. Instead we went with what we call monster features. As a developer, you say to yourself, this feature is small, this feature is medium, this feature is a monster, I know it's really big. And it's actually worked for us.
We've seen that when we classify features in terms of their size like that, and not in terms of story points, people probably feel less stressed: you know the estimates are not being questioned, and people tend to get them right. So we roughly know which features will fit in an eight-week cycle. Initially it was really hard, because we had never planned to fit anything into eight weeks; you have a feature, and you're going to finish it in a year. It was a paradigm shift for us. But later we found that smaller features are actually better scoped. It's easier to scope them, because they're small enough that you can figure out whether they're actually deliverable in an eight-week cycle or not. That's what we found from experience. So this is how we designed our backlog, which obviously comes from customers: stakeholder collaboration. At the bottom of the backlog are features that we don't really understand that well yet, that will need some kind of investigation or the prototypes I talked about. At the top of the backlog are features that are well understood, that we've already investigated, for which we maybe already have some kind of prototype or design, and that we've talked to customers about. So right now we're in some release, and for the next release, the planned items from the top of the backlog will flow into the release plan, because they're ready for that code-build-deliver cycle; their initial design and investigation is already done. So that's planning a release.
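The sizing-and-ordering idea above could be sketched in a few lines. This is only an illustration under assumptions of my own: the feature names, the size labels, and the rule that investigated, right-sized features sort to the top are stand-ins for the real backlog grooming described in the talk.

```python
# Hypothetical sketch: coarse t-shirt sizes instead of story points, with
# well-understood, right-sized features sorting to the top of the backlog
# so they can flow into the next release plan. All data is made up.

SIZES_THAT_FIT_EIGHT_WEEKS = {"small", "medium"}

features = [
    {"name": "offline mode",   "size": "monster", "investigated": False},
    {"name": "quick search",   "size": "small",   "investigated": True},
    {"name": "dashboard tile", "size": "medium",  "investigated": True},
]

# Sort key: investigated features first, then those that fit eight weeks.
backlog = sorted(
    features,
    key=lambda f: (not f["investigated"],
                   f["size"] not in SIZES_THAT_FIT_EIGHT_WEEKS),
)

print([f["name"] for f in backlog])
# Investigated, right-sized items come first; the monster needing
# investigation stays at the bottom until an off-cycle team looks at it.
```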
The next thing: there was an interesting lightning talk this morning where somebody talked about how context switching is difficult, and we found that too. We found that context switching was really difficult and was actually killing our productivity, especially with short releases. When you know you have a year to work on your product, you do some features, then you fix defects, you work with customers, you go and look at your forum postings and all that, and you can still manage to fit it all into one year, because you have this sense of security. Sometimes it's false, but you do have it. But we found that in a shorter span, it is absolutely impossible for people to context switch continuously and still deliver a quality product. So we came up with this concept, and I don't know if it's new, probably people use it already, of feature teams and run teams. What are feature teams? Feature teams are cross-functional teams. They contain your normal developers, testers, translators, your documentation person, and so on. Once a feature team is built, it focuses on one feature, and that feature can come from any domain that sits on your backlog. Because they're focusing on that feature and nothing else, we have seen that they do a better job with quality and a better job of scoping the feature, and it has actually improved the productivity of the team. But obviously, if everybody focused on features, nobody would look at the defect backlog; there would be nobody left to work with customers, do the builds, and so on. There's a lot of so-called grunt work in the team that has to be done. So then we came up with this thing called run teams.
The run teams are the folks responsible for, how can I put this, making sure that your team stays sane and your product runs. They work on some of your defect backlog, keep your customers happy, talk to them, and so on. Somebody talked about Scrum Masters earlier; the run team has its own Scrum Master, and he's a developer as well. They do the builds, they fix the APARs, and they talk to customers. The way we do feature and run teams is that at the beginning of every release, we look at our defect backlog and set a quality goal: say, 70% of normal defects and 100% of critical defects. We say that for that, we need this many people in the run team. And once we have the run team in place, we say, okay, now we have 10 developers left, so we'll be able to do maybe three features in eight-week cycles. It's the reverse: first you make sure your product is high quality, and then you see how many features you can do. And we've seen that when you have these numbers, it's easy to explain to execs why you cannot take on the zillionth feature for an important customer: you have the data that tells them we absolutely need this minimum number of people in the run team, and we have only five people to spare, so we can only do two features. If you want us to do more, then we have to trade off quality, and nobody likes trading off quality. Then the next thing we did was getting a feature to done-done. Typically, as a developer, I feel that once I've completed coding and run my unit tests, my feature is done. But it is not so. For it to actually make it into a product, it has to go through a whole bunch of things. For example, you have to make sure it's secure.
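The "run team first" arithmetic above is simple enough to write down. This is a sketch under illustrative assumptions; the headcounts and the fixed team-size-per-feature are made-up numbers, not the actual figures from the talk.

```python
# Hypothetical sketch of run-team-first capacity planning: reserve the
# run team needed to hit the quality goal, then count how many feature
# teams the remaining developers can staff. All numbers are illustrative.

def plan_release(total_devs, run_team_size, devs_per_feature):
    """Staff the run team first; commit only to features that fit."""
    feature_devs = total_devs - run_team_size
    if feature_devs < 0:
        raise ValueError("not enough developers to staff the run team")
    return feature_devs // devs_per_feature

# e.g. 15 developers, 5 reserved for the run team, 5 per feature team:
print(plan_release(total_devs=15, run_team_size=5, devs_per_feature=5))
# -> 2 features this cycle; a zillionth feature means trading off quality
```

Crude as it is, this is the kind of number that makes the conversation with execs a data conversation rather than a negotiation.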
It goes through a security review. It has to go to translations, performance testing, migration testing, a whole bunch of things. So we designed this, and I'm sure people all over the world follow this kind of pattern: we said that for a feature to get to done-done, these are the things you have to make sure of. Each feature team has a feature team lead, who is also the Scrum Master, and for that particular feature, he or she is responsible for making sure that everything listed in this checklist gets to done-done. Then weekly, we take stock of our execution and figure out whether we are on track or not. This is a sample of the execution notes of an iteration we are doing right now. Then feature delivery. When you're delivering multiple features at the same time, there are chances that they'll break each other. So what we do is go off into different streams. Only when your feature is done-done are you allowed to deliver it to the integration stream; until then, you develop in your own feature stream. That's the rule, and we find it quite successful: you get to done-done, you're allowed to deliver. Right now there are several fixed windows in which you can deliver features: we have three-month release cycles, and every three months there's a feature delivery window. But our hope, as we move to continuous delivery, is that these windows will become flexible. Feature teams will take features off the backlog, work on them, and deliver them, and we'll have a delivery and release vehicle that comes into action any time you're ready with a feature. So my team can deliver a feature today.
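The done-done gate described above amounts to a simple rule: a feature may enter the integration stream only when every checklist item is complete. Here is a minimal sketch; the checklist items are examples drawn from the talk, and real checklists would differ per team.

```python
# Sketch of the done-done gate on the integration stream. Checklist
# items are illustrative examples, not an official list.

DONE_DONE_CHECKLIST = [
    "code complete",
    "unit tests passing",
    "security review",
    "translations",
    "performance testing",
    "migration testing",
]

def may_deliver_to_integration(completed_items):
    """A feature stream may deliver only when every item is checked off."""
    return all(item in completed_items for item in DONE_DONE_CHECKLIST)

print(may_deliver_to_integration({"code complete", "unit tests passing"}))
# -> False: stay in your feature stream until the whole checklist is done
print(may_deliver_to_integration(set(DONE_DONE_CHECKLIST)))
# -> True: done-done, allowed to deliver
```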
And then maybe two days later, some other team delivers another feature, and that goes into the software as well. This kind of development scenario, in which you're in a silo of your own until you're ready to deliver, helps, because you don't really impact other people. Build, deployment, and testing is probably the most important thing in continuous delivery. Everybody has test automation these days, but the more important question is what kind of environments you test in. Like I said, ours is a desktop product, so it has to be tested across several operating systems: we run on z, Linux, Windows, and whatnot. It has to be tested across several browsers, different kinds of hardware, different kinds of databases, you name it. And it's not going to be possible for a testing team, or for your development team, to test your product on all those combinations of hardware and software unless you have automatic deployment. So that's really important for us. We have virtualized deployment, a thing called the deployment pipeline. What happens is that as part of your build, your tests are run. And before your tests are run, we have Opscode Chef recipes, written in Ruby, that take your finished software, deploy it on virtualized environments of your pick, run the tests, and then give you a report. We found it to be extremely helpful, especially going forward with continuous delivery. So we have test and deployment pairs: each test set can be deployed to various software and hardware configurations, we get them provisioned on the cloud, and we have a continuously running deployment pipeline. All kinds of tests run here: personal tests, smoke tests, performance tests, weekly tests.
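The test-and-deployment-pair idea is essentially a matrix expansion: each test set, crossed with the platform combinations it must run on, becomes one deployment job. A rough sketch, with hypothetical platform names and suite pairings:

```python
# Illustrative sketch of a deployment matrix: each test suite is paired
# with the OS and database combinations it must be deployed to and run
# against. Platform names and pairings are made up for illustration.
import itertools

operating_systems = ["Linux", "Windows", "z/OS"]
databases = ["DB2", "Oracle", "SQL Server"]

def build_matrix(suites):
    """Expand each suite into one job per platform combination."""
    jobs = []
    for suite, (oses, dbs) in suites.items():
        for os_name, db in itertools.product(oses, dbs):
            jobs.append({"suite": suite, "os": os_name, "db": db})
    return jobs

jobs = build_matrix({
    "smoke":       (operating_systems, ["DB2"]),   # every OS, one database
    "performance": (["Linux"], databases),         # one OS, every database
})
print(len(jobs))  # -> 6 jobs to provision on virtualized environments
```

A pipeline then only has to provision each job's environment on the cloud, deploy the build, run the suite, and collect the report.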
Sometimes they take one hour, sometimes six hours, but all of them get deployed on various virtualized environments on the cloud and tested. I think I'm almost done here. This is just a snapshot of our deployment pipeline: these are the kinds of builds that we run on it. And then going forward, these are some of the things that we want to do. We have lots of unit and functional tests which are automated, but we don't really have UI-driven automated tests; we want to get there, because we know it's going to be important for CD. We want parallel test execution in builds: you can design your test cases so that a lot of them run in parallel, which speeds up your builds. And we have already, I think as we speak, this week probably, released a hosted version of our software. For the hosted version, obviously, it will be a completely different delivery cycle and release pattern altogether, so we have to take care of that. Those are our goals going forward. So, I'm done, this is my last slide, but I wanted to bring this to your attention, maybe as a topic of discussion. We've seen that with continuous delivery and with agile, we've actually attained higher quality. Our defect backlogs are slimmer now, and those are actual numbers for my product. We have better predictability on our commitments, and the customers are happier than they were before. We have this done-done concept, so we are more predictable, and we actually are done when we say we are done. So all those things are good. But what we've also found is that people have become more selfish. Because you're completely focused on your own feature, if you're in a feature team, you kind of lose the product focus.
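The parallel test execution goal mentioned above is worth a tiny sketch: if test cases are designed to be independent, they can be fanned out concurrently. The runner and test names here are hypothetical stand-ins; a real build would shell out to the actual test harness.

```python
# Sketch of parallel test execution in a build: independent test cases
# run concurrently to shorten build times. Test names are made up, and
# a real runner would invoke the test harness instead of this stub.
from concurrent.futures import ThreadPoolExecutor

def run_test(name):
    # Stand-in for launching one independent test case.
    return (name, "pass")

tests = ["test_scm", "test_planning", "test_builds", "test_reports"]

# Fan the independent cases out across workers and collect results.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = dict(pool.map(run_test, tests))

print(results["test_scm"])  # -> pass
```

The speedup only holds if the cases share no state, which is exactly why the talk frames this as a test design problem, not just an infrastructure one.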
That's my personal experience; I don't know if you have had a different one. To counter some of this, we now have a free-floating population of senior people who are involved in features as well as in the run team, in a kind of mentoring role. So we are trying to counter it. I don't know if anybody else has the same experience: that when you're agile, you become too concerned about what you are doing and not enough about what the other person is doing. Another topic is a slowdown of innovation. Do you feel that agile slows down innovation because you don't design enough? You're always rushing, rushing, rushing for the next release. When I was doing waterfall, I actually had time to read books that I like to read: books on design, technical stuff. Now I have no time at all; it's come to that. I don't know if people agree with this, and I don't know how to counter it. That's why we have those off-cycle design and prototyping cycles, but they may not be enough. And then, no room to breathe. That's how I feel sometimes; I don't know if other people feel the same way when they do agile. You just don't have any breathing space left, because you're always running. So thank you all.