Welcome back. My name is Rohit Kumar, and I'm the team manager for the Oxford Institute of Design. Today we are extremely privileged to have our keynote speaker amongst us to start off the proceedings for the day with the keynote. I'll come back after the keynote to share with you a few of the logistics and key information about the conference, and then, after a break of about 10 minutes, we will start the sessions.

[The keynote speaker takes the stage.]

Alright, thanks very much for the introduction. How's the sound? Sound okay? You hear me alright? Alright, excellent. Well, thanks, and welcome this morning. Glad to be here. I've just joined a new company, IHS Global, as the Vice President of Development, having come from Landmark Graphics and Halliburton, where I'd had a long career in product development. Part of the opportunity here was to join with our local office in Bangalore and see some of the excitement going on. I've had a lot of experience with offshore development, working with teams in India and working with teams around the globe.

First of all, since this is India, I figured I'd have to have at least some sort of pseudo-Bollywood reference, so we'll start with this. [A Bollywood clip plays, the audience chanting a mangled version of the speaker's name: "Mr. Toad! Toad! Toad!"] So sorry for the confusion. I have a secret desire to be a Bollywood actor, so I'd like all of you to join in and help me and say, "Good morning, Mr. Toad." Alright? [Audience: "Good morning!"] Okay, thank you. Thank you. So not only have I fulfilled my desire to be a Bollywood actor, but since you've bastardized my name, that gives me permission to mispronounce all of your names. It's not out of malice; it's just very difficult for us.

Anyway, very good. Thanks. As I've said, I've had a lot of experience with global teams. From my time back at Halliburton: Halliburton Landmark grew through acquisitions.
And I had a lot of experience with teams scattered predominantly around North America, through the energy capitals of Houston, Denver, and Calgary, as well as internationally in Europe. Over time we got very active in expanding that through outsourcing providers throughout the globe. Over about 15 years I've worked with a number of different offshore vendors in India, Ukraine, and Romania, as well as Vietnam. Now that I've joined IHS, it has a similar footprint; my focus is in the energy sector within IHS. So again we naturally have Houston, Denver, and Calgary, offices internationally in Europe, a local center here in Bangalore, and a number of partners in Chennai and Hyderabad. So, varied experience dealing with globally distributed teams. I'm going to give you one particular case study that I worked with, and then go into some of the insights that I've had over the years dealing with globally distributed teams.

This first case study is one where I had a particular passion. My background is chemical engineering; I started my career at Exxon and migrated into petroleum engineering. I worked for Exxon, then went to work for a very small startup company, J.S. Nolan Associates, where I was the fifth employee. I was really passionate about reservoir simulation and doing development in that area: love the engineering, love the software, love the mathematics associated with it. What we do in reservoir simulation is collect all the historical data from reservoirs, the production data, and feed it into a big mathematical model of what the earth looks like. Then we simulate the future production and behavior of the reservoir going forward 10, 15, 30, maybe even 100 years, to figure out what production is going to look like: what is the future.
This is very valuable information for oil companies in order to do future planning. Here's a visual representation of an oil reservoir: the water is on the bottom, the oil is in the middle, and we have gas at the top. Over time the gas will expand, the water will come in, and the oil will be depleted. That's a high-level view of what we're doing in this situation.

From a system workflow perspective, it's a very numerically intensive operation; that's why we utilize a high-performance cluster on the back end. Loosely coupled on top of that cluster we've got a graphical pre-processing user interface, as well as a graphical post-processing 3D visualization, as you saw in the movie there.

So there are some computing challenges and software challenges associated with this problem. First of all, a few people think it looks a little complicated; this is actually the simplified version, but that's the type of thing we're solving. And we're solving that equation not just once, but over millions of cells: we break the reservoir up into millions of cells, which creates essentially a million-by-million matrix. And then we solve that problem thousands of times, because we're predicting into the future. So it is very computationally intensive. As you see, that's the sort of large-scale matrix we're solving. Even though we're on high-performance computing, and the good thing about high-performance computing is it does a lot of computing really fast, the problem with engineers is they like to make bigger and bigger problems. So simulations often take hours, sometimes days. I even had a customer who said he measured his simulations in haircuts: when the haircut was done, that's when the simulation job was done.

Another challenge we have is that our testers are petroleum engineers.
We need them to be petroleum engineers; we need people who understand the domain. But they aren't test automation specialists, and that was one of the challenges we had with most of our team. Another challenge is that numerical simulation involves approximations. We're doing lots and lots of approximations, and due to that we're subject to a lot of round-off problems and perturbation differences. That means unit testing isn't necessarily sufficient. In fact, sometimes unit testing gives you false information, and the real challenge is the integrated solution.

If you look at some of the problems we were facing at the time: we were finding too many defects during beta testing; we were relying too heavily on back-end testing and not getting sufficient coverage early; our tests were taking a long time; and there was a global shortage of petroleum engineers. Ironically, the case study I'm going through is one where, because of that global shortage of petroleum engineers, we tried working with an offshore vendor in India and it just didn't work. The reason it didn't work was that we couldn't find petroleum engineers who were available to help, and that domain knowledge was so critical. So we ended up pulling it back from the Indian vendor and going to two other vendors, which I'll talk about in a little bit. Again, as I said, our testers didn't have the automation skills. We didn't have a lot of unit tests, but we did have a very good collection of developer-level tests. And the last thing: approximation breaks abstraction. This is a case where unit tests aren't sufficient. Until you put the whole series of linear equations together and solve the big problem, you haven't really exercised the challenges. The mathematics may be pure, but by the time you pull it into a computational environment, you've got challenges that need to be dealt with.
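Since the talk leans on this point, here is a small illustrative sketch of why approximation breaks exact-equality testing. This is not the actual simulator code; the toy one-dimensional solver and the tolerances are invented. The idea it shows is general: an iterative solver only ever returns an answer to within its convergence tolerance, so a bitwise-equality check fails even when the result is correct to engineering accuracy.

```python
import math

def solve_pressure(cells, tol=1e-8, max_iter=10_000):
    """Toy Jacobi iteration on a 1-D grid with fixed end values: a stand-in
    for the kind of iterative linear solve a simulator runs each timestep.
    (Hypothetical illustration, not real reservoir-simulator code.)"""
    n = len(cells)
    p = list(cells)
    for _ in range(max_iter):
        new = [p[0]] + [(p[i - 1] + p[i + 1]) / 2 for i in range(1, n - 1)] + [p[-1]]
        if max(abs(a - b) for a, b in zip(new, p)) < tol:
            return new
        p = new
    return p

# The iterative answer is only correct to the solver tolerance, so an
# exact-equality unit test is the wrong check:
exact = [0.0, 0.25, 0.5, 0.75, 1.0]        # analytic solution for these ends
approx = solve_pressure([0.0, 0.9, 0.1, 0.2, 1.0])
assert approx != exact                      # bitwise equality fails...
assert all(math.isclose(a, b, abs_tol=1e-6)
           for a, b in zip(approx, exact))  # ...but engineering accuracy holds
```

The same effect, multiplied across millions of cells and thousands of timesteps, is why the team needed tolerance-aware integration testing rather than exact-match unit tests.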
Some of our testing problems are similar to problems elsewhere, so here's a little scenario. Before Bollywood there was Hollywood, and one of Hollywood's classics is The Wizard of Oz. So this is the software story told through The Wizard of Oz. We start with the project kickoff, and we have little Dorothy asking, "When will we get the requirements?" "All in good time. But I guess it doesn't matter anyway; just give me your estimates by this afternoon." The team bands together: "Not so fast! We'd have to give them after a little pause." "No, we need something today." Nobody's ever heard this, have they? "Okay, then it'll take two years." "No, we need it sooner. I already promised the customer it'll be out in six months." "You're a very bad man." "We're not in Kansas anymore." The developer gets together and says, "I may not come out alive, but I'm going in there," and he's just about got it under control. And then what happens? Reorganizations: "The Great and Powerful Oz has got matters well in hand." "People come and go so quickly here!" And then they finally persevere; they're almost done. And then what happens? They push it off to testing: "Going so soon? I wouldn't hear of it. Why, my little party's just beginning." So that's the type of problem we were hitting: back-end testing taking too much time, getting us into too much trouble at the end.

Then I came across this paper by Lan Cao, "Estimating Agile Software Project Effort: An Empirical Study," and I love this paper for three reasons. First, she references some of my research work. So, I mean, that's great; awesome on its own, right? Second, she basically confirms what I'd found in my research, so that's even more awesome.
But the third reason, the reason I really like this paper, is that she took it a little bit further and compared the estimation accuracy for features with the estimation accuracy for defects and bugs. And the conclusion, which is not too surprising, though I don't think we usually think about it this way, is that there's twice as much uncertainty in trying to estimate defects. So what does that tell us when we put too much testing at the back end? We've pushed so much uncertainty to the end; we've created much more uncertainty at the end than we want. So it's all the more reason we've got to find defects, and preferably not introduce them in the first place; but to the extent that we are introducing defects, we've got to find them as early as possible to remove that uncertainty.

Coming back from these problems, we had a lot of strengths too. We had an experienced and committed team. They were passionate about the domain; a lot of them were PhDs in petroleum engineering and had been working in the industry for a long time. They really knew the product and they really knew the domain. They had a good collection of lightweight integration tests: the developers had great discipline about building new tests, lightweight integration tests that would cover the integration at a very light level. That was really good, and it kept them from introducing too many defects; the challenge was that when we dealt with the really complicated customer data, that's when we were hitting problems. We also had a genuinely collaborative relationship with our customers, meeting with them regularly, weekly in many cases. We had a great product manager and management team working with the team. And because of our relationships with customers, we had access to challenging data, the really gnarly problems. Now, those problems were hard and they took a long time to run.
But we had access, which was good; we could find the issues. We also had a willingness to invest. One of the challenges with a lot of outsourcing and offshoring activity is that it's done specifically as a cost-saving approach, where we're reducing the team. In this case we were keeping the core team whole and looking to invest more to improve our quality. That's a big, big shift, and I think it helped our success quite a bit.

So, our proposed solution. First, we wanted to understand what our challenges were, understand our current testing strategy, and look to fill some gaps. We wanted to augment the team with global skills, with global teams, and we wanted to find specific partners that had the skills we needed. As I said, we started by working with an Indian vendor, and it didn't work because we just couldn't find the talent we needed. We happened to latch on to a Romanian partner: he was a petroleum engineering professor who had a team of software engineers working with him, so he could provide the mix of petroleum engineers and software developers that we needed. That was a nice thing; we didn't need a lot of them, but we needed that skill. We also worked with a company in Vietnam that had a very specific specialty. They did only one thing, and that was test automation. We really liked their approach, their methodology, and their tools for test automation, and it was such that they didn't need reservoir engineering domain knowledge in order to do the automation: we could provide that, and they could provide the automation specialty.

If we look back at our system workflow, the other nice thing we had going for us was that we could decouple. We had a loosely coupled environment, so we could spend a lot of effort on the high-performance cluster. This was the critical place: if something was wrong there, we needed to know about it.
It was also the hardest part to test in one sense; well, it was the most complex to test, but this is where the business really mattered. If this got it wrong, it was a big issue. Errors in the other places weren't necessarily as critical; we had to get the simulations right. But we also wanted to improve our testing of the overall system, usually at the GUI level, for the user interface and the graphical pre- and post-processing.

At a high level, our test automation workflow was: we start with some inputs, we simulate, we get some outputs, and we compare those against a baseline. The key part for us was the difference engine. Because of the nature of the approximations we're doing, we had to develop a very intelligent difference engine that didn't just tell us whether we got identical results; it told us whether we got a result that agreed within engineering accuracy. There was time spent over a number of years developing this difference engine, which we happened to have available to us. We just needed to be able to run more and more data through it in order to increase our coverage.

When we looked at what we had in play, we mapped it out against complexity of tests and breadth of coverage. We had the good discipline that the developers tested every check-in; they were running through their test suite very quickly, with good breadth of coverage, but the tests weren't very complex. They were lightweight, a little bit more than unit tests, but not sufficient to test the types of really complex problems that expose the challenges of simulation. We also had a really good collection of smoke tests. These were being run manually: initially this was done with our team in India, then we transferred it over to the Romanian team and they were running it manually.
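As a sketch of the idea behind that difference engine, here is what a tolerance-aware comparison might look like. The quantity names and tolerance values below are invented for illustration; they are not from the actual product. The point is that the comparison asks "does this agree within engineering accuracy?" rather than "is this byte-identical to the baseline?".

```python
import math

def diff_report(baseline, result, rel_tol=1e-3, abs_tol=1e-6):
    """Sketch of an 'intelligent difference engine': instead of demanding
    identical output, flag only quantities that disagree beyond engineering
    accuracy. Tolerances here are made-up illustrative values."""
    failures = []
    for name in baseline:
        b, r = baseline[name], result[name]
        if not math.isclose(b, r, rel_tol=rel_tol, abs_tol=abs_tol):
            failures.append((name, b, r))
    return failures

# Hypothetical per-well summary quantities from a simulation run:
baseline = {"oil_rate_bbl_d": 1520.0, "water_cut": 0.31, "bhp_psi": 2450.0}
result   = {"oil_rate_bbl_d": 1520.9, "water_cut": 0.31, "bhp_psi": 2600.0}

# The 0.06% drift in oil rate is within engineering accuracy and passes;
# the 6% shift in bottom-hole pressure is flagged as a real difference.
assert diff_report(baseline, result) == [("bhp_psi", 2450.0, 2600.0)]
```

In the real system the comparison also has to handle fields over millions of cells and time series over thousands of steps, but the accept-within-tolerance principle is the same.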
And then we had the customer models, the really complicated ones, which even though we were running on a high-performance cluster were taking upwards of a week to run. They were finding the really nasty problems, but they were taking a long time to do it. When we analyzed this, we concluded that we had a gap between the customer models, which were finding some of the complicated problems, and the developer tests, which were catching the simple problems and keeping us from introducing too many new ones, but weren't catching the really nasty ones. We also had smoke tests that were manual, and we were looking at what we could do to automate them, to possibly run them faster as well as increase our coverage.

So here's where we moved to. We developed a new set of synthetic data that captured some of the complexity of the customer data but kept some of the breadth of the developer tests, something we could make sure to run on a nightly basis. We wanted nightly validation that was a little bit more complex than the developer tests. It was an investment in building those data sets, and they had to be synthetic because we were working with remote teams that wouldn't have access to some of the customer data. And then we spent time automating the GUI tests: first automating the existing smoke tests, which then enabled us to build more and more smoke tests to expand the coverage. So over time, between 2009 and 2010, we were continually adding to the overall coverage that we were able to achieve within our team.

If we look at the global distribution, it was about 40 people, I think, overall that we had on the team.
Most of them were at our core center in Houston, but we had three developers and three petroleum engineer testers in Bucharest, and then we had four automation testers at LogiGear in Ho Chi Minh City. How this worked was that our team in Houston would do a lot of the work. Because we had very strong domain experience in Romania, the Bucharest team could do both the smoke tests and develop new tests. It was key that they were able to develop new tests. The engineers in Houston and those in Bucharest would communicate through Camtasia videos, and then the automation specialists in Vietnam would review those videos, create the automation tests, and decompose them using action-based testing, which is a keyword and abstraction model that keeps the automation tests from being too brittle. It worked very well.

Working with the partners went really well. With our Romanian team, the owner of the company was a professor of petroleum simulation, so it was a perfect fit for us. I also joked with him that he was a benevolent dictator: his team loved him, but he was really keen on delivery. He was focused on delivery and would drill in that delivery was important, and so it worked very well. I think the other thing that worked very well here was that we communicated in the language of reservoir simulation. Having that rich domain knowledge meant we could communicate at a very high level, and they got it immediately. That was key; its absence was one of the huge problems we had dealing with some other vendors. Our Vietnam team was specialized in action-based testing and automation testing. The communication, like I said, was through Camtasia videos, and that communication was essentially in the language of automation testing. They would turn back the automation tests and the results. We had really good communication across the board.
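The action-based (keyword-driven) idea can be sketched as follows. This is a minimal illustration, not LogiGear's actual tooling: tests are written as rows of a keyword plus arguments, and only a thin interpreter layer touches the GUI driver, which is what keeps the tests themselves from being brittle when the GUI changes. The fake driver, the action names, and the model reference below are all invented for the example.

```python
class FakeGuiDriver:
    """Stand-in for the real GUI automation layer (invented for illustration).
    In a real setup this is the only code that knows about widgets."""
    def __init__(self):
        self.state = {}
    def open_model(self, name):        self.state["model"] = name
    def set_field(self, field, value): self.state[field] = value
    def run_simulation(self):          self.state["status"] = "finished"

def check(gui, field, expected):
    assert gui.state[field] == expected, f"{field}: {gui.state[field]!r}"

# The keyword vocabulary: one place maps action names onto automation code.
ACTIONS = {
    "open model":     FakeGuiDriver.open_model,
    "set":            FakeGuiDriver.set_field,
    "run simulation": FakeGuiDriver.run_simulation,
    "check":          check,
}

def run_test(rows, gui):
    # Each row is a keyword plus its arguments; the interpreter dispatches it.
    for action, *args in rows:
        ACTIONS[action](gui, *args)

# A smoke test written purely in domain keywords, no GUI details:
smoke_test = [
    ("open model", "SPE9"),
    ("set", "timestep_days", "30"),
    ("run simulation",),
    ("check", "status", "finished"),
]

gui = FakeGuiDriver()
run_test(smoke_test, gui)
```

The separation matters for distributed work: domain experts can author and review the keyword rows without touching automation code, and automation specialists can maintain the driver without reservoir knowledge.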
Let's compare 2009 versus 2010. In 2009 we were finding a lot of defects in beta testing, and as a result of finding so many issues in beta, we could only fix so many of them. We ended up shipping with around 100 known defects. They weren't serious defects; they were things that didn't really impact the business problem we were solving, but they were defects nonetheless, and we really didn't want them in there. It was good enough to ship, good enough to provide value to our customers. The shift, when we put this additional effort in, was significantly lower defect finds in beta, and simply by having fewer of them in beta, we could fix almost all of them. We ended up fixing all but three, and none of those was very high risk; some were of the type where we weren't actually sure they really were defects, but we classified them as such nonetheless. The big difference: we had a 97% reduction in known defects at the time of ship. So I felt this was a huge improvement, achieved by really focusing and then leveraging the talent that we had.

What I really want to go into now, after the results of that case, is this: given my experience dealing with distributed offshore teams, what are some of the overall lessons learned, both from this case and from a number of other projects over time? I think it all comes back to the manifesto, and I love this version of the manifesto by Alexey Krivitsky: "blah blah blah blah blah blah blah individuals and interactions over processes and tools blah blah blah blah blah." I think Martin brought this out yesterday: the key part is that software is produced by people. It's an individual creative activity. Teams and people create software, and that's what this is all about.
I think that's where a lot of organizations get into problems when they move to distributed teams and to outsourcing and offshore work: they think it's all about replaceable cogs. It's not about replaceable cogs. It's about empowered teams and individuals who can get their job done. So let's look long ago and far away at a couple of examples of how distributed teams could work. Here we have a management project review: management comes in from outside to see how the project is going at the distributed team. [Star Wars clip plays.] "You may dispense with the pleasantries, Commander. I'm here to put you back on schedule." "I assure you, Lord Vader, my men are working as fast as they can." "Perhaps I can find new ways to motivate them." "I tell you, this station will be operational as planned." "The Emperor does not share your optimistic appraisal of the situation." "But he asks the impossible!" "Then perhaps you can tell him when he arrives." "The Emperor's coming here?" "That is correct, Commander, and he is most displeased with your apparent lack of progress." "We shall double our efforts!" "I hope so, Commander, for your sake. The Emperor is not as forgiving as I am." Now let's take a remote team and see what happened with them. [Second clip plays.] "Back door, huh? Good idea." "It's only a few guards. This shouldn't be too much trouble." "Well, it only takes one to sound the alarm." "Then we'll do it real quiet-like." "Oh my, the prince is there. I'm afraid our furry companion has gone and done something rather rash." "Oh no. There goes our surprise attack." "You stay here; we'll take care of this." "...have decided that we shall stay here." So you see a couple of things. The Ewoks worked in pairs and arranged things; they took initiative; they took ownership; and, most important, they made the manager decide that it was his idea. Building off what Martin had to say yesterday, I realized that what this is all about is moving from coding monkeys to empowered Ewoks.
Teams need to be empowered, so how do we get to the point where teams are empowered? Here's a model that a friend of mine, Paul Gibson, has put together, and I really love it. The idea is to look at a team and how they're working: how is their ownership? Is the team owning the problem? Do they have high ownership or low ownership? And how are leadership and management working with this? Are they controlling the team, or are they trusting the team? In a controlling environment with low ownership, we're effectively in command-and-control mode: leaders take control and the team can't take ownership. When the team tries to take ownership but the leader still tries to take control, they generate conflict between the leader and the team. And when the leader says, "Okay, I trust you to take this on," but the team doesn't take ownership, the leader has effectively abdicated, and the team has effectively abdicated ownership. Where we really want to get to is where the team has ownership and the leaders trust them to get their job done. Then they can be empowered; then they have energy and innovation. So the question is, how do we get there?

Again, it all comes back to people. First you've got to get the right people: people who have the passion, the ability, and the organizational fit, the right material for developing empowered teams. You might have the passion, like maybe I have the passion to be a Bollywood actor, but I don't have the ability, so it's not going to work very well. Or I may not have the organizational fit, and it doesn't work. But when all of those come together, then it works. Recently, in fact, I left Halliburton Landmark, and it was largely because I had been really focused on reservoir simulation and had a lot of passion around that, but my passions have shifted over time to be a little more around general software engineering.
My abilities have stayed about the same; I've probably gotten less able to do some of the detailed work. And on organizational fit, I found that over time Halliburton had drifted toward an environment that wasn't quite the right fit for me. It's a great, wonderful company to work for, but the organizational fit wasn't there for me anymore. IHS is a great company, and here's a quick promo for IHS: IHS is a company that delivers data, information, and analytics to a broad cross-section of the world. We deal with almost all of the Fortune 500 companies, we're global, and we're supporting analytics and decisions dealing with everything. I've got a great organizational fit, and we have an office here in Bangalore with a job fair.

The other thing we need to do is get alignment on purpose. Only if the team is aligned with the purpose can the team actually own it. We need to know what the purpose of the organization is, and we need to have the team own that purpose. The types of powerful questions are: What are we building? What business are we in? Too often, the question I see us asking is "What building are we in?" Not as useful.

So here's a model that my book partner Niel Nickolaisen has come up with. I really love it because it's a really simple model that clarifies some issues very quickly. We look, from the organizational perspective, at where our market differentiation is: where do we have high market differentiation versus low market differentiation? And where are the places that are mission critical, both to us as an organization and to our customers? When we have low market differentiation and low mission criticality, we're in the "who cares" category; at that point we're trying to minimize or eliminate what we're doing. On the other hand, when we're in the differentiating category, high market differentiation and high mission criticality, that's where we're creating sustainable competitive advantage; we want to innovate and create. Then there are some cases where we have low market differentiation
but it's still mission critical: for example, a payroll system. We're not going to go to our customers and say, "Come buy our widgets, because we have the world's best payroll system." Do they care? They probably don't care whether you pay your people or whether you have the best accounts receivable package. But your employees care, so it's mission critical to your employees; it's just not differentiating in the marketplace. In that case we're trying to achieve and maintain parity. We try to minimize and simplify; we look for off-the-shelf solutions; we look for open source solutions; we're trying to mimic what others do and do just about the same thing. And in the last case, where we've got something market differentiating but not mission critical, we look to see what partnerships we might create.

What I love about this model is that it's applicable at the corporate strategy level, at the product strategy level, even down to the feature level. Development teams can look at their products and their features and ask where they fit. If a feature is a parity-type feature, I don't want to be gold-plating it; I want to do the least possible and move on. If it's mission critical and differentiating, I want to ask how it can make a difference to our customers, how it can be even more market differentiating.

We can apply this to Apple, for example: how might Apple fit in these categories? What do they have in the differentiating category? New product design, user experience, content distribution; these are the types of things Apple is really focused on. Now, we'll see whether new product design is sustainable for them; they haven't come out with anything really creative and innovative for a few years, and some other people have moved past them, so we'll see whether they're able to push on to a new level. In the parity category, they made deals with Microsoft to make sure they had
Microsoft Office on the Mac; they switched over to Intel hardware; those were parity decisions, and a lot of other software is down in the parity area. As for partnership: when they came out with the iPhone, at least in the US, they made a strategic partnership with AT&T at first in order to capture the market. They could capture value through that partnership: they didn't have to go create a network, they could use an existing communication network, create a differentiated partnership, and make a lot of money on it. Over time that moved down into parity, and they started working with all the other carriers: something to exploit for a short period of time. And then "who cares": they were once very big in the printer space, and they got out of that. So that's a quick example of how this can be used. I think it's a simple model that can create a lot of clarity within organizations and teams.

Once you've got that purpose, you look at what else you need in order to get ownership. What's the key? Let me step back and give an example here, beyond just purpose: the idea of how we use feedback and how feedback can help us get there. This is a video by John Cleese, John Cleese of Monty Python fame, or, for the younger generation, Nearly Headless Nick in the Harry Potter series. Back in the 80s John Cleese did some management videos, and this one is, to me, one of the best views of what agile development is really about. [Video plays.] "Gordon the guided missile sets off in pursuit of its target. It immediately sends out signals to discover if it's on course to hit that target, and the signals come back: 'No, you are not on course. Change it: up a bit and slightly to the left.' And Gordon changes course as instructed, and then, rational little creature that he is, he sends out another signal: 'Am I on course now?' And back comes the answer: 'No, but if you adjust your present course a little bit further up and a little bit further to the left,
then you will be.' So he adjusts his course again and sends out another request for information, and back comes the answer: 'No, Gordon, you've still got it wrong. You must come down a bit and a foot to the right.' And the guided missile, its rationality and persistence a lesson to us all, goes on and on making mistakes, and on and on listening to the feedback, and on and on correcting its behavior in the light of that feedback, until it blows up the nasty enemy thing. Then we applaud the missile for its skill, and if some critic says, 'Well, it made a lot of mistakes on the way,' we reply, 'Yes, but that didn't matter, did it? It got there in the end.' All its mistakes were little ones, in the sense that they could be immediately corrected, and as a result of making hundreds of mistakes, eventually the missile succeeded in avoiding the one mistake that would really have mattered: missing the target. Were you thinking about your software project? Are you hitting the release? Are you hitting what your customers really need? Not hitting what your customers really need would be missing the target." That's the whole point about feedback. Feedback is so critical to the agile development process, and feedback is how we get that ownership.

Let me go through this as a chemical engineer. When I saw that video I said, wow, that's really control systems. Chemical engineers deal a lot with control systems and process controls, and one of the things we learned in the chemical engineering world, in chemical processing plants, is that there are two different approaches to control systems. The basic model is the same: we've got inputs, we've got processes, and we've got outputs. One way to focus is to look largely at the inputs and the processes, and when we're looking at inputs and processes, we're predominantly in command-and-control mode. In the chemical engineering world, controlling based on inputs and processes is called feed-forward control, and it turns out
that's a very, very unstable mode for chemical processing plants, as we learned in the chemical engineering world; it's just unstable. Much more stable is instead a focus on outputs: looking at outputs, relying on feedback, and adjusting. And to me this is what agile leadership is really about. The shift is moving away from command and control, away from a focus on inputs and processes, which auditors love. Auditors love being able to look at inputs and processes, but they never look at outputs. Why don't they look at outputs? Because outputs aren't easy to checkbox. But as software engineers, as a team, we really want to focus on outputs. Are we delivering? Are we getting close to what the end users want? Are we meeting our customer objectives? To the extent we're focusing on outputs, and to the extent we're bringing feedback into the system and adjusting the system accordingly, that's what makes the difference.

So this is the real shift, and what I see in this shift, if we look back at this trust and ownership model, is that what often has to happen is we take shifts in an incremental mode. If we're looking at how to get there, it turns out, from a control system perspective, the only stable zones of control are along the diagonal. Because if we're over in a high-conflict area, it's not sustainable; eventually the team will burn out, and they'll say, okay, I give up. I tried to take ownership, but the leaders aren't letting me take ownership, therefore I'm going to let the leader take over, because it's just burning me out. And it reverts to command and control. On the other side, where the leader trusts them but the team doesn't own anything, they don't deliver anything; eventually somebody finds out, and when they find out, they say, well, nothing's getting done here, they're just playing. So the trust goes away, and that's not a stable model either. But along the zone of stable control, we can move up and down that diagonal, and how we move up and down that diagonal, what I see is that oftentimes what
has to happen is we work in an incremental mode. The team can take some action, or the leader can provide some of the action. In this case the team might take some action: I'll own a little bit, I'll demonstrate results. By demonstrating results, sometimes that can lead to the leader providing more trust, and then over time we can move up that line. Alternatively, we can take small steps by leaders providing additional trust, providing more focus on purpose and ownership, and the team can start taking on more. If you try to take too big of a jump on this, I think oftentimes it becomes unstable. But eventually you can get up there, up into that energy and innovation and that ownership. You've got to have the ownership; you've got to have accountability.

When I look back at when I've had really successful, empowered distributed teams, it's always been when that team has autonomy. They've got local leadership; they've got the local domain knowledge that is critical to be able to understand the business problem; they've got complete teams: the developer team, the test team, everything necessary to own a particular sub-piece of the problem. So they own it, they deliver it, they can take ownership, and we trust them to deliver. When we move away from that, the teams are very much less effective.

So then let's look at some of the challenges that I've experienced in dealing with outsourcing and distributed teams. One of the big ones, particularly because of the industry I'm in, is proprietary data. Our partners are very large oil companies; their properties are worth a lot, and their data is worth a lot, because they're in a competitive environment. They're very reluctant to have that proprietary data outside of their hands; they're reluctant to give it to us in the first place, and when they do, they want it under very strict control. So oftentimes they're not willing to let it go to an outsourcing partner; they might not even let it outside of our immediate office. That's a big challenge, particularly in our area.
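The feed-forward versus feedback contrast from the control-systems discussion above can be sketched in a few lines of code. This is purely an illustrative toy, not anything from the talk's systems; the process model, the constant disturbance, and the 0.5 correction gain are all assumptions:

```python
# Toy comparison of feed-forward (open-loop) vs feedback (closed-loop) control.
# An unmeasured disturbance drifts the output each step; the feed-forward
# controller trusts its original plan, while the feedback controller corrects
# based on the measured output.

def run(steps, disturbance, controller):
    output, history = 0.0, []
    for _ in range(steps):
        output += controller(output) + disturbance  # process responds, plus drift
        history.append(output)
    return history

target = 10.0
# Feed-forward: a fixed per-step move computed from the plan alone.
feed_forward = run(20, 0.3, lambda _measured: target / 20)
# Feedback: close half of the currently observed gap on every step.
feedback = run(20, 0.3, lambda measured: 0.5 * (target - measured))

print(abs(target - feed_forward[-1]))  # open-loop error keeps accumulating
print(abs(target - feedback[-1]))      # closed-loop error stays small
```

The feedback run still carries a small steady-state offset (a known property of pure proportional control), but it never drifts away; the feed-forward run accumulates the disturbance with no way to notice it, which is the weakness of focusing only on inputs and processes.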
I've talked to a few people from other industries, and they say they have the same problem, because that type of data is very sensitive data. Our approach to that has been one of trying to synthesize data. It works okay; it's not great. It's a limiting factor in how much we can enable outsourcing teams to do, but it's just one of those challenges; you know about it.

Another challenge, of course, is time shift. We had a 12-hour time shift between Vietnam and Houston, and 8 hours between Romania and Houston. In this case we actually looked at the whole system and said: yes, we have a time shift, distance, and yes, it creates issues with communication; what can we do about it? So one of the things we did is we said, well, what if we change some of our Houston build times? We made it so the Vietnam team would have a build ready that they could run the automation tests on during their day. Once they were done with the automation tests, they didn't have the knowledge or detailed engineering to figure out whether failing tests were false failures or real issues, so we had to have additional analysis work done, which was done by our petroleum engineers in Romania. When the automation finished, the Romania team picked it up and did the engineering analysis, and by the time the Houston morning was getting going, we would have the results from the Romanian team, and we would know whether we had a build that was successful, something that could take even deeper testing. So we actually turned things around and said: yes, we've got a problem, but how can we utilize it? We also took advantage of the fact that the Romanian team was at an 8-hour difference; 8 hours makes it very easy to have at least a couple of hours of overlap, and they had overlap as well with the Vietnamese team, so they became an intermediary in some cases, in some of the communication. So we were able to turn it around, and it worked out quite well for us.
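The follow-the-sun hand-off described above comes down to simple time-zone arithmetic. Here's a minimal sketch; the UTC offsets and the 9:00-17:00 workday are illustrative assumptions, not the actual office hours from the talk:

```python
# Hypothetical follow-the-sun arithmetic: daily overlap in hours between two
# offices that each work the same local hours, given their UTC offsets.
def overlap_hours(offset_a, offset_b, start=9, end=17):
    # Express each office's workday as an interval in UTC, then intersect
    # the two intervals (ignoring midnight wrap-around for simplicity).
    a = (start - offset_a, end - offset_a)
    b = (start - offset_b, end - offset_b)
    return max(0, min(a[1], b[1]) - max(a[0], b[0]))

HOUSTON, ROMANIA, VIETNAM = -5, 3, 7  # assumed summer-time UTC offsets

print(overlap_hours(HOUSTON, VIETNAM))         # 12 hours apart: no shared hours
print(overlap_hours(ROMANIA, VIETNAM))         # 4 hours apart: solid overlap
print(overlap_hours(HOUSTON, ROMANIA, 8, 18))  # stretch the day: a small window
```

With a 12-hour split there is simply no shared window, which is why the hand-off had to run as a build-and-test relay; the 8-hour split to Romania yields a usable overlap once people flex their day slightly, which is why Romania could act as the intermediary.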
So yes, it's a challenge. Yes, it's a pain in the ass trying to deal with meetings during the night, during the day, during the early morning. But eventually you can make it work out.

Another challenge: xenophobia. This is one of my Houston developers, pickup truck and gun. I tell people all the time: a distributed, outsourced, offshore team is a real challenge. It's challenging to make it work, and it's really easy to make it fail. If you want to make it work, you've got to put the work into it, and you're going to encounter people that just don't want to make it work. They have fear about something; they have fear about losing their job; or they're just persnickety and don't want to have to deal with the challenges of dealing with it. It will happen. You have to identify that, and you have to figure out how to get around it, but it's reality.

And here's one that's a particular issue culturally within India: transparency and honesty. There are cultural issues dealing in India; it's often difficult to get all the team members to feel like they can communicate as a team, as individual members, with the remote side. Everything has to go through the leader, and the hierarchy that introduces creates issues with transparency and honesty. Sometimes it feels like people need to tell what others want to hear rather than what is truly happening. It's all about honesty; teams can only be effective if there's transparency and honesty. I have a big issue: after dealing with so many Indian vendors over the years, I have finally come to one realization. I've figured out how to tell when the Indian vendor sales rep is lying to me: you know, their lips are moving. It's not quite that bad, but sometimes it seems that way. We can't get straight answers, and it's so frustrating trying to deal with it, because we're all trying to be in the same game, we're all trying to do the right thing, and we're not getting the right feedback. Once you break that feedback loop, the whole premise of what we're doing breaks down. It has to be honest; it has to be transparent. Sometimes
I'll tell my teams the feedback has to be, I say, ruthless. I don't necessarily mean as ruthless as my picture is going to be: your code, it sucks. You can still be nice, you can still have respect, but if we're not getting what we need from the remote teams, we need to let them know; we need to give the feedback so they get better. And we need to get to the point where our remote teams are telling us when code isn't up to par. That's the level we need to get to: the point where there's real partnership, real ownership across the board. That transparency and honesty and respect is key, and I think that's one of the biggest challenges I see from a cultural perspective dealing with groups in India. Great talent, great talent all around, but that transparency is a key issue. And it's different dealing with Eastern Europe, for example: they're very transparent; they'll come and get in our face and tell us no, you've got to do things differently, and it's a really healthy exchange. So I think that's one of the shifts that will make a big difference; it really matters in being effective.

So, some of the key takeaways. From a software quality perspective, always try to build quality in. You can't test quality in; you've got to build it in. You've got to find and correct the defects early to reduce uncertainty, rather than deferring it all to the end. Look at what your testing strategies are: how are you testing? So many teams that I work with don't even know what their testing strategy is; they just sort of make it up on the fly. What really helped for us was analyzing our situation, using our brains a little bit, and figuring out where our challenge was and what we could do about it. You won't necessarily have the same problem, but if you look and analyze it, you can see what you could do about it; don't just accept that it's a problem. Automate to maintain velocity, but test automation does not replace exploratory testing. What we did, because we were able to do more test automation, it freed
up our key petroleum engineers to be able to do the really hard problems with exploratory testing, and exploratory testing is where we find most of the problems. The automation tests are just to make sure we're not introducing new things; exploratory testing is really the key. So you still have a fair amount of manual testing, but those people are doing really hard testing; they're using their brains a lot, rather than just doing rote manual activity. And the thing we worked with, action-based testing, did help us with reducing test brittleness. Having an automation tool that provided that abstraction was really good, along with knowing how to use it.

A couple of takeaways on distributed teams. Sure, distributed teams can be very effective. I've worked with a large number of them, and most of my teams have been effective, but as I say, it's really easy to make it not work if your teams don't want it to work. The key was autonomy and feedback; that's critical to building trust and ownership. I tell a lot of people: treat your outsourcer as a partner and you'll get much better results. If we're all in the same game, it makes a big difference. Distributed teams are a reality. Having everyone in one room is great, but distributed teams are a reality; they're a way to leverage global talent. The key is being able to make those distributed teams all act as independent units, as loosely coupled as possible, and then come together for the global good.

One more thing: think globally and optimize the whole, like what we did with our timeline. We had to change some of our practices once we looked at the overall global situation, so we changed our build times in order to take advantage of the situation. Many teams I've had were just stuck: well, this doesn't work because we have this time shift. Have you thought about changing it globally? If you're thinking about the whole situation, ask the question: what could we do to improve, what would we be able to change? If you're stuck on
it has to be this way, you won't get there.

So, my contact information. This is my book, Stand Back and Deliver, a couple of emails, and... they're so cute and they're so powerful; may the force be with you. I'd be happy to take any questions.

Yeah. What kind of investment would that be? So, the investment in this case was pretty much the incremental team: the six developer-testers in Romania, as well as the four automation testers in Vietnam. Certainly that was a lot cheaper than trying to add a similar number of people to our Houston team, and in fact we might not even have been able to find the talent in the Houston area. As a relative investment, I don't know exactly; we were adding about 10 people to 30, but it was probably well less than half the equivalent cost. So it was a relatively small incremental investment. We also took people off of some things that they were doing in Houston and had them refocus as well. So I think overall we felt that it was a clear win. The huge reduction was something we felt we had to do in order to get our velocity up. So yes, an investment, but not a huge investment, and the fact that we were able to leverage global talent made it much less of one.

Yeah. The tools were something that came because we were using the vendor in Vietnam, and because it was part of a larger engagement that we had, they provided the tool for free. That was actually part of their engagement, and it was one of the reasons we worked with that vendor.

Yeah. So, we were a product company, and as a product company we have a limited number of test environments that we use, and then we ship it out to the world, and customers have any number of production environments. We don't have control over what production is; all we can do is ask, can we simulate it, and is it good enough? And you can't afford a large number of high-performance computing clusters; they're pretty expensive. We had one high-performance computing cluster that was fairly standard. Actually, we had two: a Linux version and a Windows version. At that time Linux was the predominant high-performance computing platform we were working with, but we did have a smaller Windows cluster as well. So we had two, and we felt that, from a hardware perspective, that was a sufficient set of environments to test on. The bigger challenge was the collection of data that we ran through it, because that's where our challenges were, and that's where we had all the customer data: a collection of probably 30 to 40 really, really hard customer problems, and, you know, 500 to a thousand simple models that we had the developers working with. It was the creation of that middle area, where we created about a hundred or so intermediate problems, all with synthetic data. Those models could be run anywhere; they didn't need a high-performance computing cluster, because they were small enough that they could be run remotely on a small machine, say an eight-processor cluster, rather than needing a full 64-node cluster. It was a matter of finding what we could do: our Romania team could buy reasonable hardware that wasn't too expensive, and they were able to simulate some of the environment. As for Vietnam, the tests going on there were more at the user-interface level; the computational part was very minor, it was more hitting the user-interface pieces. We asked, what are our challenges and what could we do about them? That was mostly our Houston team: my development team got together with the QA manager and with the product team, and we asked where the challenges were, and we came to the conclusion that we needed to look
at what our strategy was and where our gaps were. When we found where our gaps were, we asked: what could we invest in? How much do we take out of our existing team in Houston to put into that effort? How much can we leverage our talent, and how much can we use the team in Vietnam to do the GUI automation? That was where we came together with that combination. There was also thinking about what we wanted to do, looking at opportunities, finding the right vendors, and filling in those gaps.

Yes? I was having trouble hearing that. So: I presented the ownership model as a 2x2 grid, and the question is how we went about determining where the organization sits in the respective quadrants, and whether there was any method to determine a strategy to move across the grid. The interesting thing there is that I didn't know about the ownership model when these teams were formed, so it was sort of a retrospective afterwards. This is something that I've learned about more or less in the last year, but it explained so much to me: where I've seen success, it has been exactly that. For us, with our two remote teams, I think our Romania team was really successful because they knew our domain, they knew exactly what they were responsible for, and they were able to deliver on it. And the good thing there was that we were probably willing to cut more slack than the owner of the Romanian company was. It was a new partnership, but he was on top of it; he wanted them to be successful from the first day, and so he really created an environment where output was important. He made sure that they delivered, and when they delivered, we gave them more, and the trust built, and they got more efficient. So that worked really well. I think the same thing happened with the Vietnamese team: they had a very specific piece that they owned, and they had the talent and experience to be able to own it. By doing that, they could own it, they could produce the outputs, and we had regular feedback with them, on a daily basis as necessary. So it was the feedback loops, and taking the ownership and partitioning it so that they could in fact own it, trusting them to do it, and getting them to own it.

Oh, yes: so it was a global team with varying requirements, and how did we distribute the work? The varying requirements? Sure, absolutely. The key part there was that regular communication with both teams, and to the extent possible we would keep what they owned as compartmentalized and structured as we could, so that what we gave them we didn't expect to change. It was a small timeline that they were working within, so that it didn't change during that time, and if it was going to change, we'd get right back to them and say: okay, you're working on that, but we don't want you to do that anymore. Typically that didn't happen, because we knew what we needed the teams to do, and most of the churn in this case was dealt with at the local level. We were never in an environment where the churn was that fast, but we did need to be looking ahead: the management team needed to have the foresight to say, yes, we agree that there's a change coming, but we're not going to have to deal with this change today. As a product company, we're typically looking at a release cycle, and within that release cycle we've usually got some time to navigate through it. So yes, there's some change, but we don't have to have that change happen immediately, unless it's something where a team has to be pulled into some maintenance activity or some emergency demo or something like that, which does always happen in a product company, but typically we dealt with that with the local team and tried to isolate it. Any other questions?