To me, the core element of really being agile is feedback and the learning we get from feedback. And this is nothing new; feedback is something we've known about for a long time, and lots of different approaches are based on it. This one, Plan-Do-Study-Act. Does anyone know where that comes from? The Deming cycle, the Shewhart cycle? Yeah, absolutely. This one, Build-Measure-Learn, has anyone seen that one? Lean Startup, absolutely. How about the OODA loop? What does that stand for? That's Observe-Orient-Decide-Act. It comes out of the U.S. Air Force; John Boyd developed it, and it's very much the same thing. And then this type of feedback loop, the control loop. All of these are really derived from the scientific method, the granddaddy of them all. It's all about what we observe: make a hypothesis, test and collect some data, analyze the results, and accept or reject the hypothesis. How many of you were at Dave Farley's talk yesterday evening? Okay, a couple. You're going to see a lot of very similar things. He brought up the fact that the field of chemical engineering has particular ways of looking at things. Well, that resonated with me. I am a chemical engineer by training, petroleum engineering. Most of my career has been spent developing software for the chemical world, mostly petroleum engineering. I developed software and managed teams of people who were learning how to explore for and collect oil and gas. My background: I've held executive roles, VP of Product Development and Director of Software and Technology at Landmark Graphics, which was a division of Halliburton. We were doing some really cool things: high-performance computing, 3D visualization, a lot of those things. I went to work for IHS, which is a data company, so I got exposed to data for oil and gas. And I'm currently the CEO of Lean Kanban.
My path to Lean Kanban came through my accidental, perhaps, connection to the agile community. I had a chance meeting with Alistair Cockburn, and he told me that he was starting a conference, which was the Agile Development Conference, the first one in 2003. I had had some conference planning experience at my company and showed him some of the ideas we had used to create a dynamic conference experience. He really liked it, and we brought a lot of that to the first one. So I co-created the Agile Development Conference with him, went on to run the Agile Conference for a couple of years, was part of the Agile Alliance, and was also a founding member and president of the Agile Leadership Network. I've been in the community for a long time. This is my fifth Agile India conference. I really enjoy it, the opportunity to share what I have learned over the years and engage with the community here. So what is business agility? What does business agility mean to you? This is business agility day. What does it mean to you? What's that? Responding. Responding to change, and particularly to changing business conditions, right? What's that? Okay, that's potentially part of it as well. All in the vein of trying to respond as a business. Part of the reason I got pulled into the agile community was that I had been working in a particular way, the way I had been trained as a chemical engineer, which was very much iterative and based on feedback. And I was working in a very dynamic organization that was really well connected to its customers. So I had exposure to what I felt was real business agility back in the 1990s, before Agile even existed. And then when I saw the agile community and what was happening there, I finally found my home.
Yet looking back at the 90s in retrospect, the thing that really drove us, and I look at these things in terms of people, process, and technology enabling the business, what we really had right was a great connection to the business, and we created an environment where people were empowered to do the right thing. We had great connections to our customers. Our customers loved us. We had great feedback loops with them. So if I look at what's happened in a lot of Agile transformations, what I see is they start down here: they start with process and technology. Yet I think real business agility is up here, in the people and the business. And so, in the spirit of the Agile Manifesto, I have come to value business agility over Agile transformations. An Agile transformation in and of itself is pretty useless unless it enables business agility. That's not to say that Agile transformations can't provide the leverage to get there, but you've got to have the goal. You've got to have the feedback loops that tell you: how am I connecting to my customer, how am I engaging with the customers? One of my first exposures here was the very first Scrum book, by Ken Schwaber and the late Mike Beedle, who was a fantastic individual and a tragic loss; he was murdered last year in Chicago, which is just so sad. But this very first Scrum book, I got a little chuckle out of it, because here again I'm coming from a chemical engineering world, and Ken Schwaber talks about how he discovered, to a certain extent, elements of Scrum, or why Scrum worked. He talks about how he was trying to sell a very heavy waterfall process to some chemical engineers at DuPont. So he laid out the whole thing and they started laughing at him. And he said, why are you laughing? This is really wonderful stuff. It's great.
And they said, it might look wonderful, but it'll never work. It can't possibly work, because what you're doing is applying a defined process model to something which cannot be controlled in a defined-process way. We've discovered this in chemical engineering over and over. When you have processes which cannot be defined in advance, you need empirical process control. Empirical process control relies heavily on feedback loops. The chemical engineering industry has taken this to the point where really complicated plants are automated. There's a joke running around in the process control world that all future chemical plants, the complicated ones, will be run by a man and a dog, because they'll be so simple to operate. The purpose of the man is to feed the dog, and the purpose of the dog is to bite the man's hand if he touches the valves, because the systems are so automated. Now that's a joke, but the point is that if you have process control feedback on automation, then you can actually control some very, very complex situations. And now we take that to the business world. So, control systems: the top one here is an open loop, an open loop or defined process. We focus on the inputs and the process with the idea that we'll have very predictable results. The closed loop, our feedback system, is based on feeding back into the process, and we look more at the outputs, right? And this is the part that makes it closed: the output feeds back into our input and we can control it. In the open loop we're looking at things where the process is so well defined in advance that we have predictable, reproducible results all the time. So if we look at this as a schematic diagram, we have some input, a controller, a process, and we get some output.
A very simple example, not so popular here, but certainly in the western world people love their toast, right? A toaster has a dial, and that dial sets what color you want the toast. Now, there's no feedback loop; what that dial is really doing is just setting a timer. The timer runs, we heat the bread up, and out comes this wonderful brown toast. Another example: I'm talking into a mic, it goes through the amplifier, and I get a sound. That's an open loop system. Now what happens if I connect this back? If I start talking and the sound gets louder and louder as I get closer and closer to the speaker, what's going to happen? Any idea? A screeching sound. That is feedback, a reinforcing feedback loop. And what happens in a reinforcing feedback loop is that the system gets out of control. That's not to say these are bad; they can actually be quite good. If you want an economic engine that's feeding itself and growing, a growth engine, this is perfect, but it's not a control system. When we're trying to control work, the flow of work, we're mostly looking for control systems. Control systems, rather than having a positive feedback, have a negative, corrective action. We start with some desired state, we take some corrective action to try to meet that desired state, then a process happens which gives us some actual output, and through a sensor we feed it back. So let's look at a couple of very simple examples of these that you can relate to. Here we've got a thermostat, and the thermostat just tells us to turn on or turn off the air conditioner or heater, whatever it may be set to do. It's very simple: it sends a signal, the heater comes on or off, then we get a new temperature, and based on that new temperature the feedback loop comes back to the thermostat, and we keep the thing very much in control. It's a control loop.
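The two loops just described, the runaway microphone and the self-correcting thermostat, can be sketched in a few lines of code. This is a toy illustration, not from the talk; the gain, temperatures, and step sizes are all made-up numbers.

```python
# Toy sketch: a reinforcing feedback loop versus a corrective one.
# All constants here are invented for illustration.

def reinforcing(signal, gain=1.2, steps=10):
    """Microphone-near-speaker loop: each pass re-amplifies the output,
    so the signal grows without bound -- a runaway, not control."""
    history = []
    for _ in range(steps):
        signal *= gain          # output feeds straight back into input
        history.append(signal)
    return history

def thermostat(temp, setpoint=22.0, steps=20):
    """Bang-bang control: heater on below the setpoint, off above it.
    The corrective action opposes the deviation, so temp stays bounded."""
    history = []
    for _ in range(steps):
        heater_on = temp < setpoint          # sensor reading -> decision
        temp += 0.5 if heater_on else -0.3   # heating vs ambient loss
        history.append(temp)
    return history

runaway = reinforcing(1.0)     # keeps growing every step
controlled = thermostat(18.0)  # climbs to the setpoint, then hovers near it
```

The same shape covers the air conditioner case: only the sign of the corrective action changes, which is exactly why the talk calls both "negative feedback."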
Sometimes I feel like in the agile community we hear this word control and say, I don't want control, don't control me. But control is exactly what we want for projects. We don't want to control people; it's what you control that matters. We want to be within control. The opposite of control is chaos, and we don't want chaos or anarchy. We want systems which are under control, but control the system, not the workers. Now, I travel a lot, and every time I come across a new shower I've got to relearn it and figure out what's going on. So what happens here? A couple of interesting things. I've got a deviation and I've got a shower adjustment: I adjust it, turn it on, water comes out, I get a temperature. But I don't actually use a thermometer here, I use my hand, right? I use my hand and then I adjust the knob based on what temperature I want. And then there's the flow rate as well, so I've got two adjustments: adjust the temperature, adjust the flow rate. But I can do all this without any calculation; there are no numbers in it. And the reason this works is because the controller is a human. When the controller is a human, a relative, subjective assessment is perfectly fine. When you're in a chemical plant, automating entirely with computers, you have to have set points and numbers. But here we don't. And that's good, because projects have humans involved in them. So this tells us something pretty important about our feedback loops. Sometimes we also have more complicated valves. If I adjust one of these valves, I adjust the temperature, but my flow rate changes too. So oftentimes we have two valves, two things to adjust, and they interact. There are multiple things in our feedback loop.
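The coupling between the two shower valves can be made concrete with a small simulation. This is an assumed mixing model, not from the talk: hot and cold supply temperatures, the targets, and the nudge sizes are all invented, and the "controller" is deliberately crude, just small relative nudges the way a human would do it.

```python
# Toy sketch of a two-valve shower: both outputs (temperature, flow)
# depend on both valves, so turning one knob disturbs the other output.
# All numbers are invented for illustration.

def shower(hot, cold, hot_temp=60.0, cold_temp=15.0):
    """Mix model: flow is the sum of the valves; temperature is the
    flow-weighted average of the two supply temperatures."""
    flow = hot + cold
    temp = (hot * hot_temp + cold * cold_temp) / flow
    return temp, flow

def adjust(hot, cold, want_temp=38.0, want_flow=8.0, steps=300):
    """Human-style control: no equations solved, just repeated small
    'too cold -> more hot' / 'too weak -> more cold' corrections."""
    for _ in range(steps):
        temp, flow = shower(hot, cold)
        hot += 0.02 * (want_temp - temp)   # temperature error nudges hot valve
        cold += 0.10 * (want_flow - flow)  # flow error nudges cold valve
        hot = max(hot, 0.1)                # valves can't close past zero
        cold = max(cold, 0.1)
    return shower(hot, cold)

# Opening only the hot valve changes BOTH outputs -- the coupling:
t1, f1 = shower(2.0, 2.0)
t2, f2 = shower(3.0, 2.0)   # temperature rises AND flow rises

# Yet iterated feedback still settles near both targets at once:
temp, flow = adjust(2.0, 2.0)
```

The design point is the one from the talk: each correction is subjective and local, and no step computes the answer, yet because the loop keeps running, the coupled system still converges.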
We need to understand what those things are as things get more complicated. So back to the basic situation. This is very simple; the reality is much more complicated. For example, we have multiple inputs and multiple outputs, and we're measuring multiple things. This is called a multi-input, multi-output system. The point is you've got multiple things you can try to control, and oftentimes you can't adjust them all independently; they're interdependent. You change one, you mess with the other, just like in the two-valve situation: I adjust one valve and more things change. This really relates to the types of complexity that we have in the real world. Now we can look at this for, say, software projects. What are some of the things we have there? Well, the output variables are things like value, schedule, cost, and quality. There may be many more than that, but these are the types of things we frequently want to be measuring. And then we want to be making adjustments on an ongoing basis. This is how we get business agility. This is how we see: are we meeting the target? Are we meeting our customer needs? If not, maybe we should be doing something about it. So what are some of the knobs we could work with? I'll just list a few. We can adjust scope, we can adjust the date, we can adjust the team: add members, shuffle people around, move them around. We can adjust quality, make sure our quality controls are better. We can adjust our processes so that we get better behavior through the system. These are the types of corrective actions available to us, and these are the types of feedback loops we need to be looking towards. What happens when we really focus in on just one of those things?
We have multiple things, but what happens, and this is one I see frequently, when schedule rules everything? Well, schedule becomes the big controller, and everything else becomes subservient to it. So here's a case study: the Ford Taurus, which came out in 1986. At the time, Ford was nearly bankrupt. The company was doing terribly; Chrysler had just gone through its own near-bankruptcy. So Ford was really struggling, and this car came out. We look at it now and say, well, that's a pretty ugly car, but in 1986 it was pretty cool. It was actually a really, really effective car. The 1986 release was just awesome; it saved the company. The project manager did all sorts of user group studies and customer surveys to really hone in and make sure he met the market. Great job, right? Problem is, he was six months late. So what do you think happened? He was demoted. He didn't meet the schedule. Now fast forward to the second revision of the Ford Taurus. What do you think was the one thing the new project manager made sure of? Being on time. He skipped all the special things the first project manager did. No, we don't need any customer group feedback. We don't need more prototypes. We don't need to check things out. We're just going to make that date. He made the date. Horrible response. Way, way, way less successful than the original rollout. So that's what happened there. Let me give you an example from my own company, the company I told you was doing so great in the 1990s. Later on, a new executive came in. At the time, our customers just loved us. We were delivering, but we were not on time with much of anything.
We were always missing our commitments, the guesses that we called commitments. But our customers really didn't care. They loved us because we were solving their problems, the problems that they had. This new executive came in and said, well, I'm going to reinvent this company. I'm going to create what I call the calendar of innovation. The calendar of innovation means that every month we are going to deliver something, it's going to be hugely innovative, and it's going to be delivered on time in that month. So what do you think happened? What behavior did that motivate from the employees of the company? We made every single delivery on time. But in order to make every single delivery, we cut corners, mostly on scope, a little bit on quality. We weren't really that connected to the customer anymore. We were connected to one goal, which was meeting a schedule. After a while of this, we had a customer meeting, and someone mentioned the calendar of innovation. A customer said, you mean crap on time. That's not the type of ringing endorsement you want from your customers. Crap on time is just that: crap. They didn't like us so much anymore. That was a mistake, and it happened because everything else had become subservient to that one thing, schedule. The reality is, if we're trying for business agility, yes, schedule and time are important, but the whole picture is what matters. Are we meeting our customers' needs? We were a product company, and what our real customers cared about was solving their problems. We solved their problems. If we were six months late on a delivery, that wasn't the issue, because of how much we were saving them with the problems we were able to solve.
What they cared about was whether we were eliminating the problem entirely. If we shipped a partial solution that didn't eliminate it entirely, they didn't care; they didn't even use it. So you have to really get in tune with your customers, and in tune with all the parameters in the feedback loop. Let's look at the Cynefin framework; this is from Dave Snowden. I really like it as a tool for understanding the types of complexity and discovery in the knowledge work that we do, ranging from obvious to complicated to complex and chaotic. What I'm going to do is go through each one of these and what the parameters look like. Starting with the obvious domain. If we look at our flow, we've got some set of inputs, a very well understood process we go through, and then outputs. The distributions are very narrow: pretty much known inputs, known process, very predictable outputs. Cause and effect is pretty much obvious to all. It's the home of the known knowns. What we do there is sense, categorize, respond, and apply best practices. Very simple approaches. Typically manufacturing would fit into this: it's a repeatable process, we know it, and now we're just trying to hone it and optimize it. In the complicated domain, we have inputs and we have outputs, but at the very beginning it doesn't look obvious; it's not obvious to everyone what's happening. Discovering cause and effect requires doing some analysis. Once we do the analysis, we can start to see what's happening. So we have the experts come in to do the analysis, and then it becomes clear. This is the known unknowns. We know they're unknown; that's why we bring the experts in. The experts are the ones who are able to take it from the unknown into the more known space.
In this case, what we do is sense, analyze, and then respond and apply good practices. I think fairly often incremental product development fits into this category. We're making some enhancements, and the experts who already know the system are able to make those fairly easily by doing the additional analysis. Although sometimes incremental product development does move into the complex domain. In the complex domain, we've got inputs, we've got this black box, and we've got outputs. And no matter how much analysis we do, we still can't discover the system; we can't discover the cause and effect. Cause and effect are only perceived in retrospect. So how do we proceed? We put a probe through the system. We try something, we learn something, and then we provide feedback mechanisms by which we continue to upgrade our learning. In retrospect we discover something, feed it back, and continue to learn more. This is the complex domain: we probe, we sense, and then respond. The practices are emergent, because this is not something that we've necessarily done before. New product development is largely here; it's the home of the unknown unknowns. And distributions tend to be a bit wider. Lastly, we get to the chaotic domain: inputs, black boxes, outputs, distributions all over the place. We can't even tell what's there. There's no relationship between cause and effect. It's largely the home of the unknowables. In this space, what do we have to do? We have to act, do something, because we've got chaos, and then do something which helps us get out of chaos. So: act, sense, respond. An example is a medical emergency. We've got someone bleeding, right? We don't go check and analyze why they're bleeding; we stop the bleeding, right?
Stopping the bleeding buys you time. You might think of this also with, say, a security breach. What's the first thing we do? We shut down any way to get to the system so that there are no more breaches. You take action in order to buy time to do the other work you need to do, which, once you've bought time, takes it out of the chaotic domain and pulls it into either the complex or potentially the complicated domain. Another way of looking at things is the value chain. I come from a product world. In the product world, we have a market we're trying to meet; we do our product development and sales, and these are pretty well tied together. We also have contract models, and we have internal IT. Now, how do these look different from a feedback mechanism perspective? Well, the good thing in a product company is we know whether we're successful based on the sales of our products. We have a very tight feedback loop that tells us whether we were successful or not. What we don't necessarily have is direct contact with all our customers. But we can send probes out to our customers, and we'll get the feedback because the sales numbers come back. Now, it's a lagging indicator, but it's still a fairly direct feedback loop. When we have a contract model, we have this specification. We become decoupled. We start throwing things over the fence. This is the traditional contract, waterfall-ish mode: specification, then development, then delivery. In fact, the system is designed not to have feedback loops in it. If you have feedback, well, that's a change request, and change requests have to go through a process. You create serious process obstacles to getting the feedback in, and a lot of friction in the system. So this creates a lot of problems, because we don't really have good feedback.
And with internal IT, it can actually go either direction. In places where IT and the business have tried to work closely together, you can get that feedback loop. But in a lot of places I've seen, there's a huge separation between IT and the business, and this feedback loop gets broken as well. So it's something you need to look at in your organizations: how can we make sure we're getting proper feedback? Because feedback is the thing that enables us to really get to business agility. Feedback is all about learning. Here's another view of it. I live in Houston, Texas, and one of the problems we have in Houston, in the Gulf Coast area of the US, is hurricanes. Hurricanes are pretty nasty things if they hit directly, because they're huge: huge amounts of wind and a lot of water with them. It's a really serious problem. Now, this chart shows 15 different weather services projecting the path of Hurricane Rita. What do we know about every one of these projections? What's that? They're estimates? What else do we know about them, since they're estimates? Probability. Well, we don't actually know the probability, because they haven't told us; they've only given us a single path each. But we do know one thing about the probability: it's minuscule. Essentially, they're all wrong. None of them can be exactly right, because they're projecting out in detail. But collectively they're useful, because they're coming from different perspectives and helping us see: New Orleans over here is probably not going to get hit, and over in Mexico or Corpus Christi it's not serious either, but this area here where Houston is, that's pretty serious. And the other thing we know is that the current position is pretty accurate; it gets measured. We also know the wind speed and the other things we measure about a hurricane.
It was a category five, which is the highest category, so a pretty serious threat to the area. Only after it's over do we actually know where it actually went. Just like with any project: the only way we can actually know when it's going to ship is when it's actually shipped. In the weather forecasting business, though, they make heavy use of forecasts. What they have is what they call the cone of uncertainty, projecting where they think the storm is going to be over time. By iterating through time, we learn more, and as we learn more, we know better where we're going to be. Eventually here, we're getting to the point where Houston might be spared, and eventually it comes ashore to the east of Houston, which was huge, because the west side of a hurricane is the dry side. So it was fairly uneventful in Houston, where I was at the time, but it was still a big concern. This is how we take in new information and use feedback to help us make decisions. This next one I borrow from Jeff Patton: the distinction between incrementing and iterating. Incrementing: we take an idea and build it a bit at a time. If we're going to paint the Mona Lisa, we could start by painting the head, then a piece of the torso, and we finally have a masterpiece at the end. The problem with this approach is that it requires a fully formed idea in advance. We can build it precisely, piece by piece, but only if it's fully formed in advance. Iterating: we start by building a rough version, we validate it, and then we slowly build up quality. Here we don't have a fully formed idea, we just have an idea: a woman in a pastoral setting. We start with a sketch, we build more detail and quality into it, and eventually we come up with the masterpiece.
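The cone of uncertainty idea, individual forecasts being wrong but the ensemble being useful and the cone narrowing as lead time shrinks, can be sketched with a toy random-walk model. This is not a real weather model; the drift, noise level, and ensemble size are invented.

```python
# Toy ensemble forecast: each member's track accumulates random error,
# so every individual track is almost surely "wrong", but the spread
# across members (the cone of uncertainty) is still informative, and
# it shrinks as the lead time shrinks. All numbers are invented.
import random

random.seed(7)  # fixed seed so the sketch is repeatable

def forecast_track(start, steps, drift=1.0, noise=0.5):
    """One ensemble member: true drift plus random error at each step."""
    pos, track = start, []
    for _ in range(steps):
        pos += drift + random.gauss(0, noise)
        track.append(pos)
    return track

def cone_width(start, steps, members=15):
    """Spread of the ensemble's final positions at a given lead time."""
    finals = [forecast_track(start, steps)[-1] for _ in range(members)]
    return max(finals) - min(finals)

# Re-forecasting closer to landfall is the feedback loop: the same
# ensemble run at a shorter lead time produces a narrower cone.
wide = cone_width(0.0, steps=40)   # long lead time, wide cone
narrow = cone_width(0.0, steps=5)  # short lead time, narrow cone
```

The project analogy from the talk maps directly: each "member" is a single-point estimate, and the useful information is in the spread and in how it tightens as new measurements feed back in.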
So that's the difference, and how we do it: through feedback and learning. It's iterative. Iterating allows you to move from a vague idea to a realization, and this is the key point about feedback: the objective is not to get it right at the beginning, the objective is to get it right at the end. Because that's when you deliver, that's when it's actually in the hands of the customer. Get it right in the end; that's the key. So this is the Lean Startup feedback circle. We start with some hypothesis, which really should be coming from some observation. So: observation, hypothesis, then we design and build some experiments, baseline and measure, analyze, and learn. The example I'm going to show next is not specifically from Lean Startup, but it's an idea of how you could design experiments to help you learn. It's an interesting perspective, and we'll see. [Video clip:] "Do you want to continue talking about me, or should we discuss what the liver damage tells us?" "I was born in a log cabin in Illinois." "Hemolytic anemia doesn't cause liver damage. Add the fact he's coughing blood, and you've got three of the indicators of organ-threatening lupus." "It's moving too fast. Could be hepatitis E." "There's only been one case of hep E originating in the US in history." "He's been in and out of the country four times in the last year." "You really think he's got hep E?" "No, I think lupus is way more likely." "All right, then let's start him on IV Cytoxan and plasmapheresis." "No, we should rule out hep E." "You just said it wasn't hep E." "I said lupus is way more likely. But if we treat for lupus and it is hep E?" "Then he's toast." "Exactly." "But there isn't a treatment for hepatitis E. Either he'll get better on his own, or he'll continue to deteriorate." "Yeah, I went to medical school too. Start him on Solu-Medrol." "If he's got hep E, that's only going to make him worse." "Not as much. It's Goldilocks, people."
"We don't want to hurt him so much that it kills him, or so little that we can't tell. I want to hurt him just right." "And if it does nothing?" "We'll know it's not hep E and start treating for lupus. Now watch me do it while drinking a glass of water." "What do we tell the dad? We think your kid has lupus, so we're going to treat him for hepatitis E. And oh yeah, if it really is hep E, we're not actually giving him hep E medication, so it's going to make him worse, not better? He's not going to go for that." "So you want us to lie?" "No. I want you to lie." "Why me?" "Because he trusts you." [End of clip.] So the point I want to get across from this is that when you're designing experiments, it's actually okay to design negative experiments. They could have taken the most logical path, what seemed like the right path, yet that wouldn't actually have gotten them the information they needed. When you're experimenting, it's all about learning. It's not necessarily about getting things into the hands of customers; it's about how we can learn something new. That's what feedback is all about: getting the learning, doing small, rapid learning cycles through the feedback process. So how does Kanban deal with this? Since I'm in the Kanban world. We have ideas, we've got things flowing through the system. It's all about flow, and we can see the flow through the system; we understand the system. But then the feedback loops come in. We get our data, and we do incremental process improvement. A key part of Kanban is building in incremental process improvement; it's an evolutionary approach of continual improvement. And likewise, we're bringing cycles back to help us understand the product. Now, you can apply those two things to any approach you want; you can apply them to Scrum. But this is how we do it in Kanban.
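The "negative experiment" idea from the clip, picking the test that best tells the hypotheses apart rather than the treatment you believe in, can be sketched as an expected-information calculation. This is a toy model, not from the talk: the hypotheses, priors, and outcome probabilities are all invented for illustration.

```python
# Toy sketch: choose the experiment with the highest expected
# information gain. Hypotheses and probabilities are invented.
import math

def entropy(probs):
    """Shannon entropy in bits of a discrete distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

def expected_information_gain(prior, likelihoods):
    """Expected reduction in uncertainty from running an experiment.
    prior[h] = P(hypothesis h); likelihoods[h][o] = P(outcome o | h)."""
    h_prior = entropy(prior)
    gain = 0.0
    for o in range(len(likelihoods[0])):
        p_o = sum(prior[h] * likelihoods[h][o] for h in range(len(prior)))
        if p_o == 0:
            continue
        posterior = [prior[h] * likelihoods[h][o] / p_o
                     for h in range(len(prior))]
        gain += p_o * (h_prior - entropy(posterior))
    return gain

prior = [0.8, 0.2]  # P(lupus), P(hep E) -- invented numbers

# Experiment A: treat for the likely diagnosis; early outcomes look
# much the same under either hypothesis, so it teaches us little.
treat_likely = [[0.6, 0.4], [0.5, 0.5]]

# Experiment B: the "negative" test whose outcomes sharply differ
# depending on which hypothesis is true.
negative_test = [[0.9, 0.1], [0.1, 0.9]]

gain_a = expected_information_gain(prior, treat_likely)
gain_b = expected_information_gain(prior, negative_test)
# The discriminating (even harmful-looking) experiment yields more learning.
```

The design choice mirrors the scene: the most plausible treatment is not automatically the best experiment; the best experiment is the one whose outcome actually discriminates.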
And what we do in Kanban is have a full set of cadences where we try to reinforce feedback all the way through the organization, because we believe that we have services that are interconnected throughout the organization. So we're looking at how we enable feedback to go across the entire organization and pull it together. We have a particular service, and we're doing feedback within the service or the product line. But then we're rolling that up and having operations reviews and strategy reviews to make sure we're still on track with what we're trying to deliver. Feedback is a really important part, and you've got to be thinking: whenever feedback is broken, that is creating an opportunity for problems in the organization. Feedback and transparency often go hand in hand, and when you don't have feedback and transparency, that's the opportunity for politics to show up. Something to think about there. So this last one is a video from John Cleese, well known for being a member of the Monty Python troupe as well as Fawlty Towers, and in more recent times Nearly Headless Nick in the Harry Potter series. He did a series of management training videos back in the 1980s, and I came across this particular talk, called The Importance of Mistakes. It's a really brilliant talk. He talks about how he learned about this in his childhood from a book called Gordon the Guided Missile. It's about the importance of mistakes and the importance of feedback. So let me play this. [Video:] "Gordon the Guided Missile sets off in pursuit of its target. It immediately sends out signals to discover if it's on course to hit that target, and the signals come back: no, you are not on course. Change course, up a bit and slightly to the left. And Gordon changes course as instructed and then, rational little creature that he is, he sends out another signal: am I on course now? And back comes the answer:"
"No, but if you adjust your present course a little bit, a little bit further up and a little bit further to the left, then you will be. So he adjusts his course again and sends out another request for information. And back comes the answer: no, Gordon, you've still got it wrong. You must come down a bit and a foot to the right. And the guided missile, its rationality and persistence a lesson to us all, goes on and on making mistakes, and on and on listening to feedback, and on and on correcting its behavior in the light of that feedback, until it blows up the nasty enemy thing. Then we applaud the missile for its skill. And if some critic says, well, it made a lot of mistakes on the way, we reply, yes, but that didn't matter, did it? It got there in the end. All its mistakes were little ones, in the sense that they could be immediately corrected, and as a result of making hundreds of mistakes, eventually the missile succeeded in avoiding the one mistake which would really have mattered: missing the target." So that's the essence of feedback. It's about getting the feedback, making the small incremental changes, learning, and then getting it right in the end. Making sure you're connecting to your customer. That's what business agility, to me, is all about. So this is my contact information; my book is Stand Back and Deliver. Happy to connect on LinkedIn. And I think we have about five minutes left for questions. Maybe a little more. Seven minutes, all right, more time. Who would like to start with a question? No questions? Somebody must have a question. Okay, very good. I'll be around. If you're too shy to ask a public question, you can ask me in private. All right, thank you.