I'm ready. Welcome back to yet another OpenShift Commons briefing. Today, as we like to do on Fridays, we're going to talk about some transformational issues with folks from the Global Transformation Office at Red Hat. One of my favorite guests, Jabe Bloom, is here with us today to talk about developing anticipatory awareness and common ground. I'm going to let Jabe introduce himself and the topic, and we'll have live Q&A at the end. It's always an interesting conversation, so stick around for the last 15 minutes or so, and hopefully we can have a good, in-depth conversation on the topic. So, Jabe, take it away, and thanks again for coming.

Thank you for having me. It's always fun. So, the last time we talked, we were talking about social practice, and one of the things we discussed is the way in which social practice orchestrates a sense of shared behavior — people have expectations about what's going to happen, and things like that. I wanted to open that box up and play with what's inside this idea of interpredictability, or anticipatory awareness, and the relationship between some of these ideas and transformation. So, I'm Jabe. I work at Red Hat in the Global Transformation Office with a bunch of yahoos — Andrew Clay Shafer, John Willis, Kevin Behr — and we work with Red Hat clients to help them with their transformations, to improve their outcomes through socio-technical change. Super fun, interesting stuff. If you want to find me or chat with me, you can find me on Twitter; I'm happy to talk about these topics and ideas there. I enjoy it.

So, I want to start really quickly with clocks. My dissertation is on time, so I love clocks. But one of the reasons I want to talk about clocks is this idea of joint action, or interpredictability — working together, cooperating, all these ideas. In a socio-technical system, a clock is probably literally the first machine intended to create the conditions for coordination between humans, right? The whole point of a clock is to create another coordinate system that allows you to say, "I will meet you at the office at noon." And the "at noon" part is enabled by this machine, this device called a clock. Initially these clocks would have been on a tower in the middle of town, and therefore they would have been a shared common resource — something we can talk about later. Eventually they become watches, things people can carry around with them. But just because the machine becomes personal — the watch is a personal clock — time is still a shared construct. It's still an intersubjective thing that we share in order to coordinate. Just because I have a watch doesn't mean I have the time. The time is something that's given to me by other people or by other systems. In an interesting way, in the early days of watchmaking in London, the wife (and later the daughter) of an assistant at the Greenwich Observatory literally sold the time to watchmakers. She would set a watch by the observatory's clock and walk around to watch vendors — people who were selling watches — and sell them the correct time. She would let them see her highly accurate watch, set according to the official time.
And the interesting thing about that is two things. One is this idea of a distributed system of machines being synchronized in some way. So you shouldn't only think about distributed systems and the problems of distributed socio-technical systems; you should also think about the problems of synchronicity and things like that. Then you get this weird thing, which is that all of the clocks are in a state of what we would call continuous partial failure. All the watches in the watch shops can kind of keep time, but the reason the vendors are paying this woman to come and give them the time is that the watches constantly need to be corrected, and they need to be corrected by a human exchange of information. And this plays out over a long period of time, right up to the current day. If you go to NIST and try to find out the official time — what's the official time right now? what is universal standard time? — the answer, while NIST can give you an official time, raises the question of how they calculate that time. The interesting thing is that the way they calculate it is that there is a distribution of atomic clocks all over the planet, and they all submit their current time — I believe it's monthly — to a central server. There's obviously some correction for the submission cycles and things like that. But then there's a formula by which those clocks are averaged together in order to create the official time. The average time determines how much any one of those atomic clocks is incorrect, and therefore a correction is sent to the owner of that clock: you need to move your clock forward or backward slightly. So what you get is this very Bayesian update of time, but the official time doesn't exist in any one clock. In fact, all of the clocks are assumed to be incorrect. At the same time, all of the clocks are assumed to stay predictably within a range of correctness, and that range of correctness is what allows the averaging to happen, so that you get this concept of a universal time. No one clock can hold the time, but the average of the predictable variance of all the clocks allows us to calculate a standard time. So in this distributed system, all of the components are reliable — in other words, they operate within a predictable range of performance — but they produce a resilient time, a time that can be thought of as standard, only by being combined. No one thing can hold that official time. So this is what I want to play with today: these ideas of continuous partial failure, communication, the creation of some common thing, and how that works in socio-technical systems.
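To make that ensemble idea concrete, here's a minimal sketch — not NIST's actual algorithm, just the shape of the idea Jabe describes, with invented readings and weights: no single clock holds the time; a weighted average becomes the reference, and each clock is told how far off it is.

```python
def ensemble_time(readings, weights):
    """Combine per-clock readings (seconds) into a single reference time."""
    total = sum(weights)
    reference = sum(r * w for r, w in zip(readings, weights)) / total
    # Every clock is assumed to be slightly wrong; the correction is the
    # nudge each owner is told to apply toward the ensemble.
    corrections = [reference - r for r in readings]
    return reference, corrections

readings = [1000.0021, 999.9988, 1000.0005]  # three clocks, all near 1000 s
weights  = [1.0, 1.0, 2.0]                   # a more stable clock counts more
reference, corrections = ensemble_time(readings, weights)
print(reference)    # the "official" time exists only as this average
print(corrections)  # per-clock nudges, sent back to each clock's owner
```

The design point is the one in the talk: every input is assumed wrong within a predictable range, and the official time exists only as the output of the combination.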
So one of the things we need to talk about in order to get there is this idea of expectations. We'll explore why we need expectations in a second, but here is another opening concept. When I talk about expectations, I often like to talk about music. Music isn't a note, right? There's no sense in which one note is what we would normally consider music. Music is in relation to other notes — in relation to a set of potential notes that one might hear. And even in a coordinate system where we have the amount of time a note is played and its relative pitch in relation to other potential notes, this still isn't music. Music has to do with relationships between notes — not only simultaneous relationships, like a chord, but the way music is always about the relationship between the notes you're currently hearing, the notes that preceded them, and the expectations you have about the next set of notes you expect to hear. In this way, music is like a form of entrainment, and you get pleasure from two things when you listen to it. One is pleasure from the entrainment itself: a sense of pleasure from being able to predict what's going to happen. It's pleasurable to predict the next set of notes and have the musicians magically produce the sounds you predicted in your mind — predictions based on what you've already heard, what you're currently hearing, and what you might expect to hear. But you also get pleasure from the occasional — it can't be all the time, but occasional — way in which your prediction is not quite right, and the music produces a hook or a shift that causes you to wake up and notice that your predictions were incorrect at that particular moment, before things tend to resolve back into predictable patterns again. So what we can see, I think, is that music — and temporal experience generally — has to do with this relationship between making predictions, the pleasure of those predictions coming true, and the surprise, and the pleasure in the surprise, of those things not playing out. In phenomenology, we call this the difference between retention, protention, and the immediate present. Retention is the idea that the way previous notes inform your expectations — your protentions — is not direct. You don't mathematically calculate the next notes; you use those notes, and your sense of them, to form an idea about what you expect to happen. And protentions work the same way: your hearing of what is currently being played is colored, let's say, by what you expect to hear next. So the future and the past color, or shape, the present, and as Merleau-Ponty points out, our experience of time is not like a slideshow. We don't see one slide at a time; we have this smeared, or extended, experience of things through time. Over longer periods, this has to do with things like your sense of identity, your sense of how you got to work — these are all related to the way you knit together what you've done (the retentions you have) and what you expect to do, and the way those things are related and cause action to occur in the present. So one thing we can say is that human experiences, including joy and meaning, emerge from this intersection between what I expect and either its resolution or its novel, surprising result. That's what gives us pleasure in the world. It's why we like things: either things become meaningful or they become surprising.
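As a toy illustration of retention and protention — mine, not Jabe's, with an invented note sequence and a deliberately crude first-order model — you can predict the next note from what has been heard so far and register surprise when the prediction misses:

```python
from collections import Counter, defaultdict

heard = ["C", "E", "G", "C", "E", "G", "C", "E"]

# Retention: tally which note has tended to follow which.
following = defaultdict(Counter)
for prev, nxt in zip(heard, heard[1:]):
    following[prev][nxt] += 1

# Protention: the note we expect next, given what we're hearing now.
current = heard[-1]
expected = following[current].most_common(1)[0][0]  # "G", on this corpus

actual = "A"  # the musicians play a hook instead
if actual != expected:
    print(f"expected {expected}, heard {actual}: surprise, attention rises")
```

Real musical expectation is nothing this mechanical, of course — the point is only the loop: the past shapes a prediction, and a violated prediction wakes you up.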
So when we think about creating meaning, about sharing meaning with others, these expectations about what's happening — and about how our performances, if we were the musicians, produce pleasure in others — have to do with this interaction: whether we can reproduce the results that were expected, in an entrainment kind of way, or whether we can produce pleasant surprises, surprises that let people see that there are interesting, novel ways of moving from whatever system they're currently in to another one. So that's music. One more thing before we get into the meat of it: I want to talk about uncertainty and complexity a little, because those words are hyper-overloaded, especially complexity, and I want to point at some specific versions of them in this particular case, so we can grasp onto them and think about them a little better. First, we have a concept called the problem of future knowledge. One of the ways in which the systems we work in are complex is that there is future knowledge — knowledge that will only occur in the future, because if it occurred in the present, it wouldn't be future knowledge. If we had that knowledge now, we would do things differently. But because of the way systems evolve and co-evolve and change, we will come to know things in the future that we cannot know now. This is the source of things like technical debt, right? We make the best decisions we can in the present, and then in the future those decisions don't look like good decisions anymore. But there was no way to make the future's good decisions in the present — we can't, because we don't have the knowledge that the future will create. So there's no stability to some of the knowledge that we have. The knowledge we have is emergent from the system itself, and if the system changes significantly, or evolves in any way, we will have new knowledge about the system, and then we will have different expectations about what that system should or should not be able to do. The second thing is the way in which longer, extended periods of time expose components to two forces that short periods don't. The first is novel configurations: the longer a component is around, the more likely it is that someone will try to use it with another component in a new or novel way. The second is that components can be exposed to novel stimuli — the environment can develop new expectations about what the component should or shouldn't be doing. So there's a complexity here that isn't just the stability of a complex network at any one moment; the network is modifying itself over time, and that adds an extra layer of complexity. I call it temporal complexity, but the idea is simply that over time, things change, and that creates complexity. We get environmental change creating complexity, creating frictions between the current components and the way they're supposed to work. And we get the idea, in modern socio-technical systems theories, that the actors are part of the system they are modifying.
So every action modifies not only the system but the actor themselves; there's a feedback loop that feeds back on the actors. This has in the past been called ontological design. The idea is that the actors are not stable in relation to the system, because they are being modified by the system as well — the actors are enmeshed in this temporal complexity themselves. And finally, we get this idea of contingent contingencies. What contingent contingencies means, roughly, is this: physics and the correlational sciences have to do with understanding the contingencies between physical, natural objects — atoms, rocks, things like that — and with creating sets of rules, like Newtonian physics, that determine what we should expect out of those interactions. But when we look at socio-technical systems, we are looking at systems where at least some of the decisions were not driven by physical or mathematical laws. The decisions were driven by context, by an attempt to respond to an event or a moment in time, or they were made through a satisficing rather than a complete analysis. In other words, the people designing the system had a certain amount of time in which to make those decisions, so they made the best decisions they could in that time. If they had been given an infinite period of time with an infinite amount of information — if we could literally freeze time — they might have made a completely satisfactory decision, a complete analysis of the situation. That almost certainly doesn't happen in most systems we work with, because of temporal compression and the competitive nature of markets. So almost all of these systems embody significant amounts of satisficing, and since that satisficing has to do with the individuals involved, the context, the timing, and so on, the decision criteria often can't be reproduced. Therefore you can't use normal scientific thinking, which considers only a single contingency, because you're dealing with contingent contingencies: decisions that were made about physical contingencies in a contingent manner — based on what was happening in that context. You get a double kind of complexity. Dave Snowden would refer to this as a form of anthro-complexity, where the complexity is created not simply by physical laws — which create the normal complexities we might think about — but by another level on top, in which humans are negotiating decisions based on policies and opinions and things like that. So you get a double level of complexity. And the result is that the systems we're mainly working with these days are not really based on solid, stable, atemporal decisions — decisions that don't change over time. Because of evolution, feedback loops, and the actor being enmeshed in the system, what we get, instead of a fixed form, is a constant dynamic rebalancing of forces, where every time we do something, we probably overdo it or underdo it a little bit.
And therefore we're constantly having to re-engage with the system in order to rebalance the forces. And of course, because of competitive environments, and because nature likes to do things like throw COVID viruses at us, the forces that stabilize the system in its environment are also changing all the time. So we're not only constantly rebalancing the internal relationships; we're constantly rebalancing the environmental, ecological relationships as well. Finally, we get this last thing, which I started the talk with: even simple machines like clocks don't actually operate for long without human intervention. It's not just that complex systems rely on human interventions to stabilize themselves through this dynamic rebalancing; any mechanism requires humans to interact with it in order to stabilize its performance over a period of time. So what we get is the idea that all of the systems we work with are always in a state of continuous partial failure. And I want to point out what failure means here, because failure is often thought of as a normative statement — a good/bad, ethical statement. Part of the reason I wanted to play with the idea of expectation at the beginning is that continuous partial failure means the system fails to do what it's expected to do. Mechanistic systems can't fail in and of themselves; they can only fail in relation to what we expect them to be doing. So systems are always, in some way, not quite doing what we expect. And there are some interesting tensions to play with there, because when we're trying to engage with complexity — with the management of complexity, the wrangling of complexity — the idea is often stated as: we should move from fail-safe to safe-to-fail. I actually think that's not quite the right frame. The right frame is that we need to recognize the difference between fail-safe and safe-to-fail. Fail-safe covers the decisions we can make about an architecture or a system using physical laws. You should not expect to transmit packets faster than the speed of light; if you design a system that expects faster-than-light performance, you're going to have issues. There is a set of load-bearing architectural decisions — a minimum set of criteria that define the physical limitations of the system — and recognizing and understanding what those are is important. I like to think of these as decisions that are made, and refined over time, based on calculations and things we can know using scientific theory. On the other hand, we have safe-to-fail architectures, and these, for me, are more about what I call architecture-in-use. What I mean by architecture-in-use is that failure to meet expectation is something that can only be evaluated in use. It can't be evaluated prior to use, before the system is operational. You get the safe-to-fail aspects of a system from it actually being operated.
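Here's a small, invented simulation of that theme, going back to the clock: a component that is reliable — it drifts only a little, within tolerance — but only keeps meeting expectations because a human periodically resets it against a shared reference. Drift rate and reset cadence are illustrative, not measured.

```python
import random

true_time = 0.0
clock = 0.0
worst_drift = 0.0
for hour in range(1, 169):                  # one week, hour by hour
    true_time += 1.0
    clock += 1.0 + random.gauss(0, 0.001)   # reliable, but never exact
    worst_drift = max(worst_drift, abs(clock - true_time))
    if hour % 24 == 0:                      # the daily human intervention
        clock = true_time                   # reset against the shared reference
print(f"worst drift between interventions: {worst_drift:.4f} hours")
```

Stop the daily reset and the error simply accumulates: the mechanism hasn't changed, but it stops meeting the expectation — which is the sense in which the failure lives in the relationship, not in the machine.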
And so what we see here is the difference between two forms of planning: planning about what we might call governing constraints, or immutable constraints, and planning about how to constrain the system so that we can see whether it's meeting our expectations, react appropriately when it isn't — so as to stabilize it by intervening in an appropriate way — and know that it has been stabilized. This idea of humans needing to constantly interact with the systems they operate, to keep those systems appearing operational, is what we might call skillful coping. So in any socio-technical system, what we're actually observing is a mechanistic, technical system that is enabled to perform according to expectations by humans intervening to nudge it toward the expected outcomes. That is a continuous process, and the moment humans stopped interacting with the system, it would essentially collapse. We can see things like this all the time. If you have a clock anywhere in your house and you don't intervene and reset the time, it won't work well. Likewise, look at open source projects: the moment the community around an open source project collapses, the technology itself starts to deteriorate. We all know this from experience. So I think that's interesting. So why are we talking about all this? What we're trying to get at, when we talk about transformations in particular, is how we increase joint activity — this ability to act together. And a quick step back: one of the reactions to complexity is sometimes that you can't do anything about it, that you can't manage it. Yes, it's true that you can't manage complexity away, but complexity is not an excuse not to attempt to manage the systems you're building — no matter how complex — to make sure they meet the expectations you have of them. Joint activity and interpredictability, in Klein's framing (we'll get back to this in a second), is not just between humans; it's between humans and the technical systems they interact with, so that the system performs according to expectations, or predictions, and the humans perform in relationship to each other as well. So what we're trying to talk about is what happens during a transformation, in which we increase the rate of change: how does this interpredictability and joint activity either contribute to, or become detrimental to, the transformation of an organization — the change inside an organization? Let's first talk about anticipatory awareness, or what I'm pointing at here, which is a different kind of planning about the world: if we assume our systems are complex, we need to think differently about how we might work with them into the future, over time. (And I'm going the wrong direction with my slides.) So the first thing to say is that a lot of traditional planning engages experts to make predictions, and those predictions are chained together in a way where early predictions have a lot of impact on later predictions. The experts, meanwhile, are valued for their predictions and their ability to produce predictable results.
And therefore they become preoccupied with making sure that the conclusions they reached earlier come true later, come hell or high water. This results in the suppression of new information and potentials that individuals and teams could interact with to produce better results than could be envisioned at the beginning. That's one of those problem-of-future-knowledge pieces: the future knowledge is ignored exactly where it could be used to create better results. So what we're talking about with anticipatory awareness is changing the organization's focus away from what will happen — and also away from what should happen, which is the alignment argument. Both of these, by the way, are not bad, but they're not sufficient for a transformation. It's not defining the exact course, and it's not defining the kind of alignment that allows for the changes we're looking for; it's exploring, as an organization, what might happen. And the reason we want to explore what might happen is that if we want to predict the next note in the transformation song, it's better to know what all the potential next notes might be, rather than focusing on one particular note and declaring that it will be the next note played. So we're moving away from linear causal chains — which belong more to that architecture-as-load-bearing analysis — and away from linear normative expectations, chains of expectations based on opinions, where the chain becomes long enough that if one of those opinions is incorrect, we get problems, and toward abductive, parallel plausibilities. What I mean is that the organization is constantly re-examining what's plausible from where they currently are in the transformation, as opposed to from some preconceived chain of events. The result is a form of planning that does not lower cognition in the organization. I don't know if you've had this experience, but as a manager I had it all the time: if you give someone a plan, there seems to be a lower likelihood that people will question the plan, because they assume someone has thought it through, so they just adopt it as their own. If instead you use this anticipatory awareness version, what you're saying to the organization is: as we move forward, it's equally important at every step that we're scanning for the possibilities of what could happen. And that connects to the expectation piece. Gary Klein, who wrote about anticipatory awareness with Dave Snowden, pointed at the idea that there are two ways to use mental models. A mental model, really quickly, is a way of thinking about what might happen — not just in a linear way, but as a set of expectations and their interrelationships. So, from Gary Klein's studies: as a firefighter, you might have a mental model of how a house burns. What he says is that a mental model basically links previous experiences with what's happening right in front of you and with action — what should I do, based on what's happening and what I've seen in the past?
And then there's a second link, which has to do with the result of the action: what should I expect as a result of this action? One of the ways we can differentiate high-performing teams from low-performing teams — teams more or less likely to be successful in transformation — is that low-performing teams use mental models to lower their cognition. They use only the first part of the model: my previous experience, the current context, what action should I take? Then they take the action, rinse, and repeat. High-performing teams add the second piece. They say: based on my previous experience, this is the action I should take, and this is the result I should expect. Therefore, if the system doesn't perform the way I expect it to, there's something wrong with my mental model, and I need to heighten my cognition — slow down, be more careful, and try to figure out what's happening, because something isn't working according to my expectations. That's an important idea: anticipatory awareness is about increasing awareness of what we might anticipate happening, as opposed to declaring what will happen. And the result is an engagement with the complexity of the system: the human, social aspects of the system become engaged in the temporal relationship between what I expected to happen and what did happen. (My slides, I notice, are not doing what I expect them to.) We can think through this quickly using the futures cone model. It says organizations can think through the future in terms of what's probable, what's preferable, what's plausible — what can we think of that might happen — and finally what's possible. There's a real set of distinctions here, because one important thing to recognize is that the difference between what's plausible and what's possible is significant: what's possible is almost always greater than what we can conceptualize. And when we talk about common ground in a minute — common ground is the shared sense of plausibility, the shared sense of where we might arrive in the future. Multiple divergent mental models and ways of thinking about the world will point out possibilities to some people that aren't plausible to others, so we get a span of potential contentions and ways of behaving and acting that seem difficult to mediate or engage with. But what we want to talk about, in creating interpredictability and common ground, is shifting from what's probable — what we expect if we don't intervene in the system — to what's preferable: how we want to nudge the system, so that when we talk about changing it to a more preferable state, all the participants can understand where the system might likely arrive, and it becomes more predictable. Woods, in his paper on these ideas, outlines the example of two people driving in cars: if you are following someone, there's a set of expectations created about the driver in front, who needs to make sure they are clearly predictable in their actions so that the driver behind can actually follow them.
They can't do unpredictable things — run red lights, make sudden U-turns — because that breaks the joint action in a way that makes it impossible for the person to follow. So the leader-follower relationship isn't just the follower somehow subordinating themselves to the direction of the leader; it's also the leader subordinating their decisions to enable the follower to follow them. There's an interaction happening there that I think is interesting. So then we have this idea of common ground, and we want to explore it. Anticipatory awareness is about increasing the scanning — and maybe the diversity of scanning — inside the organization; so what do we mean by common ground? The first thing is to point back to those satisficing comments from the beginning, and to bounded cognition. We can't think about everything; we can't make our decisions based on a complete analysis. We can only pay attention to a certain amount of information in our environment, and the result is that the mental models we create are basically about focusing our attention on particular parts of the environment, and on the relationship between those parts and our expectations about what's happening. This means we're intentionally putting blinders on ourselves, to focus on particular parts of the environment in order to make sense of the world. So when we talk about common ground and anticipatory awareness, one idea is that all the actors in your system have these blinders on, but they could be pointing their flashlights — to extend the visual metaphor — in slightly different directions, thereby increasing the scanning of the future and its possibilities, so you get a diversity of thinking about what might happen. The anticipatory awareness increases. What we can say, then, is that we have requisite variety: a minimum amount of variety of mental models required to stay stable in a system that's in this continuous-partial-failure state. If we over-homogenize our mental models — if we try to get everybody to think exactly the same way about what's happening — we risk missing stimuli in the environment that could indicate the expectations of the system aren't being met. Requisite variety is a way of increasing the scanning. But on the other hand, we have requisite coherence. The idea of requisite coherence is just this: if everybody thinks completely differently about what's happening, and everybody has a completely different relationship between stimulus, action, and expectation, then we have no interpredictability. And the result of a complete lack of interpredictability is that we can't trust each other, we can't follow each other, we can't take coordinated action. We get a compression of what kind of planning we can do; things become very real-time, and it becomes difficult to follow each other, to understand each other, and to do planning of any form.
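To tie Gary Klein's two-link mental model back to this interpredictability point, here's a toy sketch — the firefighting scenario and the lookup table are invented for illustration. Both versions of the loop pick an action from experience; only the high-performing version records an expected result and heightens cognition on a mismatch.

```python
# Link 1: previous experience + current context -> action.
# Link 2: the result the model predicts, checked against what's observed.
experience = {
    "smoke in basement": ("vent the stairwell", "smoke thins"),
}

context = "smoke in basement"
action, expected = experience[context]   # link 1
observed = "smoke thickens"              # the system fails expectations

# Low-performing loop: take the action, rinse, repeat.
# High-performing loop: compare expected vs. observed (link 2).
if observed != expected:
    print(f"took '{action}', expected '{expected}', observed '{observed}'")
    print("expectation violated: slow down, revise the model before acting")
```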
So we get this idea that we have common ground — a shared set of practices, vocabulary, performances, resources, and so on — and that shared set of things has two pressures on it. One is requisite variety, which pushes to minimize the common ground and increase the cognitive differences between people. Requisite coherence pushes to increase the common ground, to the point where, when coordination is required, coordination is accessible via those shared practices, vocabularies, and so on. This requires a careful dance between advocacy and inquiry: we want to advocate for our particular mental models and the observations we're making within them, and we want to be inquiring into, and open to, new models from other people — so that we can converge on proposition and experimentation, be able to challenge each other about whether the system is performing according to expectation, and thereby negotiate a movement into the future. All right, I need to go faster; I've only got a couple left. So then we get this idea of adaptive capacity. What is adaptive capacity, and why do we want to talk about it? I will almost certainly get in trouble for this, but it's a simple metaphor: adaptive capacity is kind of like a bank account. You can imagine there's some amount of adaptivity in your account, and when change is forced on you, or you want to create change through some transformational activity, you spend it. If you spend all of your adaptive capacity — all your adaptivity credits go to zero — the organization kind of stops being able to change; it locks up. One way to think about it is that when the organization's adaptive capacity is exceeded, people become afraid; they won't know what to do or how to act, and as a result they stop acting. You can think of this in relation to requisite variety: increasing requisite variety, increasing the number of mental models, increases the organization's ability to come up with novel interventions, to attempt adaptive behaviors. If you narrow the requisite variety too much, you narrow the organization's ability to adapt. And this is particularly critical in relation to the continuous-partial-failure idea, because adaptive capacity isn't something held in reserve — it's constantly being exercised, by people skillfully coping with the micro-failures that are constantly happening in your system. That's not good or bad, and you cannot fully eliminate it from the system: all socio-technical systems are always in continuous partial failure.
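A toy rendering of the bank-account metaphor — with Jabe's own caveat that the metaphor is a simplification, and with all the numbers invented: change draws on a budget of adaptive capacity, coping with micro-failures drains it continuously, and at zero the organization locks up.

```python
capacity = 100.0           # total adaptive capacity in the "account"
toil_per_week = 8.0        # constantly exercised on micro-failures
transformation_cost = 60.0 # a deliberate change initiative

for week in range(1, 9):
    capacity -= toil_per_week           # skillful coping spends it weekly
    if week == 4:                       # a transformation push mid-quarter
        capacity -= transformation_cost
    if capacity <= 0:
        print(f"week {week}: capacity exhausted; people freeze, change stalls")
        break
else:
    print(f"capacity remaining: {capacity}")
```

On these numbers the account runs dry in week five: the transformation itself wasn't unaffordable, but it landed on top of a standing drain of toil — which is exactly why the first lever below is to reduce operational load.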
So there's a question, then, about what we do with this adaptive capacity, and how we might think about it in a transformation. There are two ways I like to think about increasing adaptive capacity — there are others, and there are certainly people who will comment on this — but two that I'd like to propose right now. The first is to recognize operational load and toil. By which I mean: there is a minimal amount of continuous partial failure in any system — change that is endemic to the system you're working in — but there's also a lot of other stuff that happens regularly that could be removed, stuff you're deploying skillful coping against right now. A lot of that can be described as toil, and much of the rest as high operational load: systems with unnecessary redundancy — especially if you take a more contemporary view of how to create a resilient system, in which case redundancy is more of a liability than it used to be seen as — and systems that have been poorly designed for operation. Think of issues like poor observability, where it's hard to see when the system is performing poorly because the expectations of the system are not encoded in a way that's readily available in operation, in use. So if you think of adaptive capacity as something you want to be able to deploy — either because of emergency or emergent change, or because of a desire for transformational change — then eliminating these things doesn't increase the total adaptive capacity in the organization, but it does increase the amount of adaptive capacity available to be deployed elsewhere, toward those ends. The second is to say that actively understanding how to create, modify, and recreate common ground becomes a form of adaptive capacity itself. Common ground, as we've described it, links previous experience, current context, and expectations. When environments change, those linkages change, and the common ground itself needs to be modified. So another way of increasing adaptive capacity in an organization is increasing the flexibility, the ability to modify common ground, so that people can constantly renew it and make it more valuable. Really quickly: most of this talk has been grounded in a linguistic or conceptual version of common ground, but I think it's important to point out that common ground is also encoded, or held, in material systems — the technical systems themselves hold forms of common ground. Take the low- and high-observability systems I described: one of the things a high-observability system does is encode within itself the reproduction of a set of common ground, so that you can see what the expectations are. These are not always just conceptual, mental-model pieces; sometimes they are shared common resources, and the organization's ability to negotiate and adapt those shared resources becomes a critical part of creating ongoing common ground.
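As a small sketch of what encoding an expectation into the material system can look like — the latency budget, the threshold, and the names are all invented — a high-observability service carries its expectation as code, so that failing to meet it is visible in use:

```python
LATENCY_BUDGET_MS = 300   # the shared expectation, written into the system

def budget_compliance(samples_ms):
    """Fraction of requests that met the encoded expectation."""
    met = sum(1 for s in samples_ms if s <= LATENCY_BUDGET_MS)
    return met / len(samples_ms)

samples = [120, 180, 450, 210, 330, 95]   # observed request latencies
ratio = budget_compliance(samples)
print(f"{ratio:.0%} of requests within budget")
if ratio < 0.95:
    print("expectation not met: raise a hand and investigate")
```

The expectation here is a shared resource in exactly the sense above: anyone operating the system can see what "meeting expectations" means, and renegotiating the common ground is a visible code change rather than a shift in someone's head.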
So I have one last rant, and then I'll stop. Why do we care about these ideas? Well, one of the things we're trying to move toward in contemporary systems is resilience, and resilience is the ability of a system to adjust prior to, during, or following disruptions or changes — frankly, resilience is the ability to undergo transformation. Organizations with low resilience lock up when they try to transform; they get brittle, and they don't transform well, because of the low levels of resilience in the system. And the thing we've been trying to link together, when we talk about socio-technical systems, is that this resilience is enabled by human coping — skillful coping — and by the availability of that resource, that capability, to the system. Technical systems can only ever be reliable, within performance tolerances; they can't be resilient. The human part of the system is what creates resilience. So think about how this plays out. Events happen in the world, right? Market events, COVID, all sorts of things. To the extent that those events are reasonably predictable, anticipatory awareness of an event allows us to start planning while costs are low and options are high — we have lots of options for dealing with the event, and the cost of making changes is low. As the event horizon gets closer, the number of options goes down, primarily because options take time to exercise, and the cost of those options often goes up — shorter, burstier interventions tend to cost more money. The result is that we can no longer modify the system to avoid negative reactions to the event, and we end up inside the event. So we move from anticipatory awareness to the need for adaptive capacity inside the event. What determines the system's performance inside an event — its ability to absorb or adopt change — is the organization's adaptive capacity: organizations with high adaptive capacity can absorb larger, more disruptive events than organizations with low adaptive capacity. So that's the relationship between the two — as opposed to the last piece, stabilization, which is about how, if we survive on our adaptive capacity, we do retrospectives and post-incident reviews to improve future performance. So when we think about this, why do we care? Well, we know organizations tend to follow this disruption pattern: dominant market players tend to overperform the market, because they get into feature races with each other, and by overperforming the market they open up a market below themselves, where disruptive, smaller companies can come in and provide less functionality for significantly less cost. That creates a spinning flywheel, in which the disruptive organizations eventually grow to the size of the dominant players — and then there's a conflict, which occurs when the dominant players' markets are significantly encroached on by the disruptive players, and we get friction in the marketplace. So we've got an incumbent going through these diffusion curves — the way crossing the chasm works, where a technology distributes itself from early adopters to late adopters, and so on — and the incumbent is reaching the end of its life just where the disruptor comes in, because of this market encroachment. And what we can see here is why that happens: as the system accelerates up the diffusion curve, the organizations become — we'll say — fat and lazy.
What we mean by that in particular is that they become increasingly insensitive to ecological pressures, because they're riding a wave up the curve. And as they become less sensitive to ecological pressures, they tend to shed the diversity they needed back when they were trying to understand the environment, because they think they can create efficiencies and maximize profits. The result is that just when they need that critical diversity — when the disruptive event happens — they have less diversity in their system. Meanwhile, the disruptor has a high level of diversity in their system, because they are still trying to understand how the market works, how the ecology works. So you have a competition between a dominant player who is insensitive to market change and a new player who is highly sensitive to it. This creates a potential moment in time in which, if the dominant player can't recreate the diversity, common ground, and adaptive capacity required to interact with the market change, they get what we call competence-induced failure and begin declining — and the decline is marked by the adoption of the new vendor. And really quickly before I stop: this plays out not once but, in Wardley's terms, at multiple stages in the life cycle of a technology. These diffusion curves don't occur once in the adoption of a technology but multiple times, and therefore these moments of high complexity — caused by the interaction of a disruptor and a dominant player — happen multiple times in the adoption of a technology. So the adaptive capacity we're looking for is important for surviving things like COVID, and also for surviving competitive markets. The development of, and focus on creating, adaptive capacity, common ground, and anticipatory awareness in socio-technical systems is a way to increase the survivability of your organization through radical change — either by choice, a.k.a. transformation, or by force, a.k.a. transformation inflicted by something like COVID. Thank you.

Wow. All right — did you take a breath anywhere in there? That was a super awesome talk, and there are so many questions. If you have time, we can go a little longer and have a conversation, and if any of the folks in the chat want to pop in and ask questions, please do. I'm reminded that I need to introduce you to a gentleman I met back in 2018, when I shared a stage with Danny Hillis from the Long Now Foundation — the Applied Minds, Thinking Machines kind of guy. Way back in about '86 he wrote an essay, raised an alarm, about society having a mental barrier in looking at the year 2000 as the limit of the future, and he went on to create a foundation to build a 10,000-year clock, so that people would start thinking about things in much longer terms. Listening to you do all the clock stuff, I'm reminded of all the conversations I had with Danny as we walked around South by Southwest, stunned and amazed at all the diversity there. It was a lot of fun. I think there are a lot of people who would really get this talk and the need for this anticipatory-thinking point of view. And there were a number of things that resonated for me
in what you were saying. A lot of it felt like playing three-dimensional chess: in technology, when we're planning projects, trying to avoid feature creep, and doing all the things we do when we become entrenched in a market space as leaders, we're trying to figure out how to create the diversity that drives innovation into our projects. Sometimes, like Red Hat, we've acquired companies — like 3scale and CoreOS — that have really helped us continue to innovate and drive that diversity into our culture and into our thinking about the technology. But it's an interesting concept, trying to keep that common ground open. And I'm trying to recall all the words — you had so many in there — the top level coming in and the bottom level coming in, the bounded cognition, the blinders that we wear; there was just a lot in that. As Jeff has mentioned in the chat, your last point is quite dire, given the motivation of companies to lean toward market control for sustained revenue — because we all know we want that sustained revenue. And as Jeff says, diversity is pruned, since it's inefficient to maintain and inconvenient to manage. I somewhat disagree with that: it's not inconvenient to manage; management just has to learn behaviors that allow us to have common ground and move things, so that we can have the resiliency and the diversity we need to take this on. So there's quite a lot going on here. Jeff, if you want to jump in, I'll unmute you if you want to follow up on that — and anybody else who's chatting here. But boy, a stunning tour de force again, and a lot of things we need to think about as a company; socio-technical resiliency is something here. Jeff is arguing in the chat that recent history does not demonstrate that management learns, and I wonder if you can talk a little about the idea of acquiring those startups — you see Apple and Google, and IBM acquiring Red Hat — whether that actually is helpful. How does that move things forward, from your point of view?

Yeah. Without getting into too much trouble, I think the Red Hat-IBM story is interesting, in that IBM has been very careful to honor Red Hat's culture, because they want to learn from Red Hat. They're giving time and space for the development of a set of common ground between the two companies, and they're not doing the death-embrace version, which many companies are known for. I think it's interesting to look at something like the IBM-Red Hat acquisition as one where there was a recognition that they weren't just buying technology: they were buying a way of working, a culture, a way of existing. IBM was interested in having access to those insights for itself, but also thought it could offer those insights to its customers as a new way of working. And I think Red Hat has a great story around profitability and the way it has been able to sustain this culture while producing above-average results quarterly — for whatever it is, 22 quarters in a row or
something like that. So there's that, and there's this careful question of what you are buying in a merger or acquisition — what is it that you're getting? The way I usually talk about it is that there are two different acquisition mindsets. One is that you own a book of business, a portfolio; the other is that you are trying to create a whole system. Organizations that think of themselves as portfolio owners will often do acquisitions because they see an opportunity to eliminate the social aspects of the system in order to create savings — normalizing the interactions, reducing cost, making the system "behave better" — but eliminating the culture. The shock-the-pool-with-chlorine version. On the other side, look at a company like Google and the amount of acqui-hiring they do, where they're effectively saying: I don't care about the technology you have; what I want is your well-functioning team, because I have projects I think are more valuable for your well-functioning team to work on than the project you're currently working on. Let me hire your entire team, because we value the social network you've built more than the technology you've built. So there are these different ways of looking at it. And I've said this here before, but I'll say it again: most of the companies getting the greatest value these days are getting it from social capital — from relationships, from the way humans relate to each other. That is the primary driver of value in a lot of organizations. It's not the same as saying "hire a bunch of experts," because experts are individuals; I literally mean the set of interactions you create between experts. That is the most valuable thing for an organization to have access to right now. Social capital is that important.

Yeah, and I think that's one of the things — and we can go a little longer here — that I see in the open source communities I work, live, and breathe in, which are all technology-based, all companies collaborating with each other: it is those relationships and the social capital that we have and that we give to the projects and ecosystems we work on. Understanding those relationships and nurturing them is probably one of the most valuable things you can do as a commercial organization participating in them. And a lot of the education process in those acquisitions — the acquisitions, not the acqui-hires — is moving proprietary products into being open source projects, as we are wont to do at Red Hat. There are a couple of other things that came in on the chat. Tim is saying: to the point of using mental models in different ways, you mentioned that high-performing orgs add in expectations of results — that is, noticing the mismatch between outcomes and expectations within a complex system. Can you talk a bit about the risks of over-instrumenting, of attempting to measure those through — oh, I love it — KPIs, metrics, and our favorites, OKRs? So I think one of the critical things about most metrics, KPIs, and OKRs is that
they tend to be vertically aligned — they go up and down the hierarchy. And the problem with that, I think, is that as you aggregate metrics from multiple teams, if those teams are not managing the same complex system, you are averaging information about teams doing completely different things, and therefore you're creating mud: you're not helping the organization see what's happening, because you're conflating the performance of multiple different systems in a way where the average doesn't indicate average performance — it indicates some middle point that doesn't really exist. So there's a distinction to draw. In management, or at the executive level, there is a certain set of metrics you want in place that I call hunting metrics. What I mean by a hunting metric is that it only tells you where to look for problems; it doesn't tell you what the problem is. It's a way for the system to raise its hand and say "there's something wrong over here," without attempting to diagnose what's wrong — just pointing at it. That requires things like candlestick metrics, which show you the variation in performance across teams rather than the average performance across teams, because variation in performance can indicate skewing and drifting away, making it harder for that common ground to exist. You want a certain amount of variation, but too much variation in performance can indicate that people are having trouble staying together. The second thing I'd say is the difference between that type of measurement — the OKR/KPI, up-and-down-the-hierarchy negotiation — and something like SLOs and SLIs, which are horizontal negotiations about expectations between teams: metrics where we agree on minimum performance criteria. That's the load-bearing stuff I was talking about earlier, right? In order to perform well, our system has to respond within three milliseconds — do we agree that that's the expectation? Can we agree that there's a point where, if we're approaching three-millisecond response times, we'll get together and try to prevent exceeding that measure? These are metrics used to increase the observability of the system, to increase the ability to make predictions and correlate results with those predictions, and they're oriented toward the work surface, not toward the management system. So I think the difficulty — the easiest version of the problem is story points, where you take something designed to help a team, or a set of teams, make sense of something, and then you try to generalize it across a broad set of teams and average it as a management system, and you just get nonsense. Anyone who's had a manager ask, "The team down the hall is doing 20 story points a day — why are you only doing five?" and watched the team respond, "You just totally don't understand the problem," has seen exactly the kind of confusion that type of metricking creates in a lot of organizations. So I do think there are specific metrics appropriate to each level of the organization, and I'd be glad to come back and talk about that at some point — I think it's an interesting discussion.
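A toy illustration of the hunting-metric point — the team names and numbers are invented: the blended average manufactures a midpoint that describes no real team, while per-team spread (candlestick-style) shows where to look without diagnosing what's wrong.

```python
from statistics import mean, quantiles

team_latencies_ms = {
    "checkout": [110, 120, 115, 130, 118],
    "search":   [300, 310, 295, 305, 320],
    "payments": [100, 900, 150, 850, 120],   # wide spread: look here
}

blended = mean(v for vals in team_latencies_ms.values() for v in vals)
print(f"blended average: {blended:.0f} ms (describes no real team)")

for team, vals in team_latencies_ms.items():
    q1, median, q3 = quantiles(vals, n=4)    # quartiles, candlestick-style
    print(f"{team}: median {median:.0f} ms, spread (IQR) {q3 - q1:.0f} ms")
```

The blended number sits between teams that are each individually consistent; the spread line is the raised hand — payments' interquartile range is far wider than the others', which says where to look, not what's wrong.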
Well, I think we do have to wrap up here, though I could probably talk all day. I think what you've done today is give us common ground — and a vocabulary — to talk about this, and I really appreciate that. It goes back to "increasing the scanning," I think that was the phrase you used: the diversity of these conversations we have on Fridays. The ideas you're presenting really help us understand the need for these systems to grow, and I think they give us some of the requisite coherence to have these conversations, and build up our ability to trust each other in them, so that we have a common language. I'm going to be watching this again later tonight, because I've got to edit it and get it into a format I can share — I will definitely be sharing this internally at Red Hat and everywhere else in the universe. So, as always, Jabe, thank you so much for taking the time and for the thoughtful conversation. This applies at so many levels, to so many different kinds of organizations — technical, political, social — and at so many levels inside our own organization, Red Hat and IBM, as well as in the open source communities we all live and work in together and collaborate in. So again, thank you very much for taking the time.

Thank you for having me — I always enjoy it. Thank you.