Let me do the honors of quickly introducing Andy. I won't take too much time, but it's worth saying a thing or two about Andy. Most of you are probably familiar with his work, or at least one or two of these things. Andy was one of the original Agile Manifesto signatories. He has been a big influence in the space of programming for me personally. I read your Pragmatic Programmer book early in my career, Andy, and it has been a huge influence in how I look at programming. So thank you for that. It was a wonderful piece, and I believe it's a top recommended book now, which is pretty cool. I also happen to be good friends with Venkat Subramaniam, and you wrote a book with Venkat, Practices of an Agile Developer. A beautiful, small book with very crisp advice. I strongly recommend it if folks have not read it; after all these years it's still relevant, a classic book in my opinion. And you wrote Pragmatic Thinking and Learning: Refactor Your Wetware, which I thought was brilliant, especially the water-cooler analogy you have, where you walk away and suddenly things hit you, and why that happens. There are some very interesting pieces that we kind of take for granted, and you pop the lid on them and explain what's going on. So I really appreciate that. And of course, last but not least, the Pragmatic Bookshelf that you and Dave set up, and all the fantastic books you've been publishing, from the Ruby books to everything else. It's amazing; greatly appreciate that. So it's such an honor to have you. And now, without taking too much time, I want to get into your really interesting topic, refactoring the legacy. Thank you for that great introduction. That sounds so good, I think my mom could have done it. That was wonderful.
Let me start off the slide deck here, which starts with some of the very books you just mentioned: The Pragmatic Programmer, Pragmatic Thinking and Learning, the book I did with Venkat. Those are all pretty well known. What is maybe less known is that I write other books too. I write some science fiction, supernatural thriller, horror; I record music. And I think it's very important. We talk a lot about STEM education: science, technology, engineering, and math. But I think we really need to talk about STEAM education. You need that A in there; you need the arts. Because in doing the arts as a hobby, doing them as something fun, you learn a lot more about humans, how we work together and how we don't, topics like that. So I'm a big advocate of STEAM instead of STEM. But I'm not going to talk about that today; that's a subject for another time. Today is, of course, Agile India 2022. This talk is split into three sections. I'm going to talk about bugs in legacy systems, talk a little bit about the skills I think you really need today, and then we'll end with the skills I think you're going to need tomorrow, which should be interesting. I see in the chat window someone is unable to see the share. "I think it's fine, Andy, we can see." Okay. Technology: when it works, it's great. So that's a good segue. Let's talk about legacy systems. As soon as you write a piece of code, it becomes legacy. It really doesn't take long at all, unfortunately. So what really is the problem with legacy systems? Well, it boils down to the fact that reality has moved on, and the system you've written is stuck in amber, stuck back in time. It can be hard because the models, the understanding embedded in the code, haven't been updated. Reality has moved on; things have changed. It can be hard to understand the older design and the algorithms used.
So you find yourself asking something like, well, why did this behavior happen? And it's not necessarily clear from looking at the code why it's doing what it's doing. Even the hardware might not be up to modern tasks and requirements. You see this in the news quite a lot, actually: some ATM system, some battleship or destroyer system, something fairly important is running Windows 95 or XP on some poor ancient 486 processor or something like that. It's just not up to snuff; it's not useful for current problems. And unfortunately, the same thing happens with people. We are legacy systems as well. We have models that are no longer correct. Reality has moved on. Our brains and related systems evolved in an era when we were grassland hunters and gatherers. Well, reality has moved on. We don't do that anymore. We sit, professionally. That's our job. And some of that design is unhelpful at this point. It can be hard to understand the older design and algorithms used. Why did this behavior happen? Why did somebody act that way? Why was this decision made? It's not always clear. It's almost always emotional, but it's not always clear how that happened. And our hardware is, frankly, a little bit sketchy. It's not necessarily sufficient for the things we would like to be able to do in this day and age. Mental models: we'll start with that. This is how we make sense of the world. We build internal models of how we expect the world to work, how we expect people to function, and how things are. The problem is, it takes time and commitment to build these mental models of the world. Later models rely on earlier ones. And because of the cognitive effort involved, you don't want to discard these models lightly. You learned how to do something this way.
You're really powerfully incented by your brain mechanisms to stick with it and not to do something new, not to try something new. And a lot of times, and I certainly think we've seen this with agile adoption, when you try to learn and absorb a new model, what ends up happening is it gets distorted and twisted and ends up looking exactly like the old model you're trying to replace, with new words, but no real change has actually happened. Learning, as it turns out, actually requires unlearning. And that's a hard thing, because our brains are trying to conserve energy, and they don't want to discard models and do something different. So as a result, this makes us creatures of habit. But if you do the same routine every day, then you become predictable, as here, where this officer apparently always hides behind that same sign to look for speeders, so much so that they put on the sign, "Slow down, the cop hides here." We work on autopilot as much as we can. We don't really want to have to think about every little step of every little task we do all day long. We use hotkeys, we use shortcuts, we have automation; we don't want to think about every tiny detail. And the problem is, when you're on autopilot, which is really what our brain favors as a default mode, you don't notice small mistakes. In fact, you might not even notice large mistakes. I'm pretty sure that's not how you spell "school" for a school zone crossing, but I guess nobody noticed until the paint was dry. We don't like being corrected. In fact, in the more general sense, we don't like being wrong. It's an almost physically painful thing in your brain chemistry. It's a threat, and your brain reacts to that. One way of looking at brain architecture is the triune idea. Now, this is not exactly 100% correct; modern research is a little better than this, but it's close enough for our purposes today.
You have three levels of brain construct. You've got the reptilian complex, with the basal ganglia and low-level vital functions. That's what keeps you breathing, keeps your heart pumping, these kinds of things. There's the paleomammalian complex, where the amygdala is, which is a bit of a problem. The hypothalamus and especially the amygdala regulate emotion and motivation, and we often make the joke that when threatened, the amygdala kind of hijacks your thought processes. So you've got a hijacked amygdala, which I really want to reserve as a band name one of these days. We'll get to that later. And then up top, you've got the neomammalian complex, the neocortex: language, abstraction, our higher-order thinking, the stuff we're trying to do every day as part of our job. Now, most of this is not under deliberate or conscious control. In fact, these lower, older, emotional levels quickly respond to a perceived threat. In an evolutionary sense, that was very important, because an actual threat, when we were out in the jungle, out in the grasslands, wherever, really could be a tiger or some large predator approaching. But these days, these very same mechanisms get triggered, and it doesn't have to be a tiger hiding in the grass. It could be a coworker who outperforms you at some very visible task. The project deadline is approaching; everyone loves that one. It's a threat to your job, your territory, or your identity. And unfortunately, our brain can't really tell the difference between one of these soft threats and an actual predator about to jump out and kill you. It reacts the same way either way. In fact, one line of research suggests that your brain is really optimized for two actions: minimize threats and danger, and maximize reward. Kind of makes sense. You figure you want to go for the stuff that will be rewarding and avoid anything that will kill you. I can believe that.
The problem is that we get wrong signals about what a threat really is, but we react just as badly. So what does this have to do with programming? Well, that's a good question. What is programming all about in the first place? I posit, and I've said this for many years now, that programming is basically about two things: communication and learning. Those are our two top skills, the two things we do the most of, the two elements that comprise most of the job. Some of this is obvious: you're communicating with the language, the compiler, the tech stack, and you're learning it and learning from it as you go along. Makes perfect sense. But you're also communicating with team members and sponsors and users, obviously, and you're learning from them as well. You're learning how the team works together and how it doesn't, what problems might come up there. You're continually discovering needs from the users and the sponsors, however that may go. And maybe more importantly, you're learning from the evolving system itself as it's being built, as it starts to exhibit perhaps emergent behavior that wasn't anticipated. It's like, oh, that's interesting, or, that's a problem. So up and down this whole stack, we're learning and we're communicating the whole way. It's a tangly mess. Mostly, though, I think programming is about mindset. It's a mental activity, so how we think about it really affects what it is. In that book I wrote with Venkat, back in 2005 or 2007, somewhere in there, we made the definition that effective software development uses fast, real-time feedback to make constant adjustments in a highly collaborative environment. Every portion of this statement is important. You can't make adjustments unless you have feedback. The feedback needs to come quickly. You need to be in a collaborative environment.
You can't get feedback and then just ignore it. You need all of these aspects together. And what I find interesting is that modern research efforts demonstrate this. They prove it. This isn't just us making a wild assertion; this is indeed how you get high-performing software development teams. On a related note, it was sort of funny: back in The Pragmatic Programmer, the original edition at the turn of the century, I made an assertion about the broken windows theory and how it applied to technical debt and software projects. That was not founded on any research. It was anecdotal evidence, something I had observed, and I posited that it was an important factor and described how to deal with it. There was a research paper, out of Cornell I believe, within this last month, where they actually studied that and said, yes, Andy and Dave were right, this is exactly how this works. So it's nice that eventually the research does catch up with us. So this is what it takes. These are the important things. The problem is, we avoid all of these aspects as hard as we can sometimes. We avoid communication and learning. We use things like pull requests, JIRA tickets, long-running feature branches. These are all largely terrible ideas in the context of corporate commercial development, but they're widely used. Working alone limits collaboration and requires messy merging, instead of doing something like pair programming or ensemble programming. Tickets and tools constrain you to one-way communication; they limit bandwidth. Pull requests? Great if you're running a large distributed open source project. Most of us aren't. In the context of corporate development, it's just a stupid idea. It introduces queues, delays, and waste. Anything that was learned in developing the code, now you've got to duplicate that learning when someone's accepting the pull request. It's wasteful. It's time consuming.
It's a bad idea, but these are all widely used. Somewhat worse than that: when things do start to go bad, our default setting, our default programming maybe, is to add more rules. Add more process. Add more rules. Put in a change control board to try to limit the changes coming in; stupidest idea I can think of, right there. Limit variation. Double down on getting it right the first time. Again, this isn't people being evil or particularly stupid. These are our brains trying to minimize threats, which in this case also means trying to minimize uncertainty, because that's something we perceive as threatening. The problem with this whole approach, obviously, is that it is 100% wrong. It's 180 degrees away from what you should do when things start to go bad. When things do start to go bad, what you should do is have fewer rules. Have better general principles, things such as seeking feedback and removing proxies. You want less process and more individuals and interactions. Hey, that sounds familiar. What a good idea. More variation gets you more innovation. Any effort to make something like an assembly line, to stamp out variation, means you'll have less creativity, less innovation. And that's kind of what we're here for, so trying to stifle it is a very poor idea. It's important to recast this, again thinking of mindset: we want to get it right the last time. I think it was GeePaw Hill who said on Twitter, we discover the work by doing it. That's a very profound and important observation, because we can look at a spec, a requirement, something a user said, and that's all nice, but we don't really understand the implications of it or the additional requirements it brings until we actually get into it and start developing it. Then we understand. As a consequence, that means we really need to be comfortable with uncertainty. That's a problem, because our brains, by default, aren't.
That's a very uncomfortable situation, but we have to find a way to soldier through it. When I look around and see how many corporations are trying to develop software these days, it strikes me that they're trying to eat soup with a fork. It kind of works, and it kind of works just enough that they keep going: they do their scrum and they do their tickets and they plan, and then they miss their targets, and it's late, and the user's not happy and threatening lawsuits, and then they fire the VP, bring in a whole new set, and start the project over under a new name, and nothing changes. Does this all sound familiar? Sadly, it happens. But there's a better way, and we should try it. Let's take a look. This is one of my latest ambitious diagrams, trying to encapsulate what a modern software development project should really look like. We'll go into this in detail in a few minutes, but I want to point out the three colored regions first. At the heart is this idea of generating continuous value and having continuous learning. That's absolutely crucial. Without that, you really can't have a high-performing team. The green hexagons surrounding it are the workflow: this is how you actually do work, proceed with it, and get it out the door, in a general sort of way. But then, more important, you've got these yellow hexagons around the outside, and those are the supporting structures you have to have to make those inner workflow items actually work. This is where I think almost every effort at agile adoption, digital transformation, any of these sorts of efforts, fails: they change one or two things in the middle and don't have the necessary supporting structures on the outside to make it work. Then it fails, and they go back and institute stricter rules and heavier process, and the whole thing collapses under its own weight.
They cancel the project and take a $20 million charge against earnings. Sad, but common. These, then, are the skills I think we need to look at today, for today's development: the inner workflow here. I hope that's big enough that you can see it; let me try to zoom in a little. Let's start with that hexagon in the upper right-hand corner. You've got some tasks from a list. You've got some requirements. You've talked to users, you've interviewed them, you've got an idea of where to start. You've got things you need to do. Great. So you break it into small steps. No, smaller steps. No, smaller still. Whatever steps you're breaking it into, I guarantee you need to break it into smaller steps, and a lot more of them. You want tasks that you can do within an hour or two, a couple of hours tops. You don't want to estimate things that are going to take days or weeks or worse; that doesn't help us. There's a lovely theory circulating that says, when you're trying to estimate story sizes and guess how long something's going to take, there are really only three answers. One: it's a story size of one, something that can be done in a few hours. Two: it's too big; refactor it into smaller tasks that can each be done in a couple of hours. Or three, which is perfectly acceptable: we don't know. We have no idea. Okay, so then you put that in as a very small task to go get more information: run experiments, get some feedback. We start with that. Great. Then we go to develop what we've sliced into these tiny tasks. And we take an example-driven, test-driven, outside-in approach: tracer bullet development, as we discussed in The Pragmatic Programmer. This is where you have a thin thread of execution that could be completely stubbed out with dummy data in the middle. It could be skeletal, a few lines of code each. But it goes all the way through the entire system, whatever that might entail.
From the database, through the servers, out to the UI, whatever all the bits and pieces are, it hits all the pieces at once, created by one team, hopefully. And it just goes end to end, and you flesh it out as you go along. That's how tracer bullet development works. One of the virtues of doing that is that it's always deployable. You're committing into the mainline branch all day long with very small changes. You can back out if something horrible happens, which it shouldn't; you've got tests and everything. But it's always in a deployable state. You don't have any of these big merge events that have to happen at some point because this team worked on that and this team worked on the other thing, or a long-running feature branch that has to be integrated. Terrible idea. You've reinvented waterfall in that case. So we're developing our thin thread from our small stories. That goes through the continuous-flow pipeline: continuous integration, continuous deployment, continuous testing, the whole thing, the pipeline and the clouds. Wonderful. It does its business. Now we've got build artifacts. We can get real-time feedback from working, tested features. We can show them to users. Maybe it's not a general release to the entire user population; that might have to be slower for any number of very valid reasons. But we can show it to some subject matter experts, our user community representatives, whoever we've got, sponsors that care about this stuff. We can get real-time feedback from them. And that continues this process of shared learning, shared discovery. And it is an ongoing process of discovery. The old-fashioned notion that requirements get dumped on you and you build them is not how high-performing teams work. It's a dialogue. It's a back-and-forth. It's a discussion. And that leads us to a couple of things the team has to do now.
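As an aside, the tracer-bullet idea can be sketched in a few lines of Python. This is a hypothetical illustration, not code from the talk: the layer names and dummy data are invented. The point is that every layer exists from day one, the middle is stubbed out, and the thin thread already runs end to end, so it stays deployable while you flesh it out.

```python
# Hypothetical tracer-bullet skeleton: each layer is only a few lines,
# the data layer returns dummy data, but the call path goes end to end.

def fetch_orders_from_db(user_id: int) -> list[dict]:
    # Data layer stub: a real query replaces this later;
    # dummy data keeps the end-to-end thread alive meanwhile.
    return [{"id": 1, "item": "placeholder", "user": user_id}]

def order_service(user_id: int) -> list[dict]:
    # Service layer: today just a pass-through, fleshed out incrementally.
    return fetch_orders_from_db(user_id)

def render_orders(user_id: int) -> str:
    # "UI" layer: plain text for now; a real view replaces it later.
    rows = order_service(user_id)
    return "\n".join(f"Order {r['id']}: {r['item']}" for r in rows)

# The thread already runs through every layer, so it can ship and gather feedback.
print(render_orders(42))
```

Because the skeleton touches every layer at once, each small commit replaces one stub with real behavior without any big merge event.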
Overall, I like to recommend what I call the three-track attack. The team has three goals at the same time: deliver, discover, and refine. Deliver value: software that actually works, that the user can do something with, that gives them an edge, something useful to them. But there's also the ongoing discovery. As everyone knows, it's like the Heisenberg principle: as soon as you give a user a feature, the world changes. They might want that feature to work a little differently. It might suggest to them that they need a different feature that had never been discussed before. This is all perfectly normal and natural, and the way software development should happen. So there's this aspect of discovery. And then the third one, which you can't forget, is to refine this whole process right here, this workflow. There are better ways to do this, I'm sure, and we keep discovering them, integrating them, and finding better ways to do it. Because at this juncture, these practices are dynamic. They're not static. Dr. Patricia Benner, in her book From Novice to Expert, talking about the nursing and medical profession, pointed out that practices can never be completely objectified or formalized, because they must be ever worked out anew in particular relationships and in real time. That's true of programming as well, and of a number of other industries. And this is where you need to look at something like the Cynefin sense-making framework, to realize that depending on the class of problem you're working on, and in most of the cases we're exposed to, there is no best practice. There may be a set of good practices. There may not even be that. So that's something I commend you to look at if you're not familiar with it yet. So, okay, this is our workflow. This is our middle. Then we've got the supporting environment, the things around the outside edge, and I'll go over these in detail in a second.
But these are the important things that many organizations are just not doing. And again, the reason for that, I think, is the known bugs in our thinking, our legacy systems: the common cognitive biases, things like the fundamental attribution error, the need for closure, the fact that we cling to 19th-century management models, our outdated old habits. We just can't seem to get away from them. We're on autopilot. We don't like being wrong. We don't like uncertainty. Wikipedia lists over 90 common cognitive biases, and I'm fairly certain I've met people who've got way more than that. Myself included, right? We are just buggy in the old brain box. Let's look at a couple of these real quick. The fundamental attribution error: people react in a context. They don't react because of who they are. And this is a mistake we make all the time. Oh, they said that because they're a junior staff person. They said that because they're a tester, not a developer. They said that because they're a business person. We look at legacy code and say, wow, these people were really stupid. No, not necessarily. They were working under constraints you probably aren't aware of. They may have been maximizing for a different goal than the one you're trying to reach. So you always have to consider the context, not the person. The need for closure is a huge cognitive bias: we need an answer right now, even if it's wrong. And we see this an awful lot with things like estimates. I need a number right now. I need to know how long this project is going to take, and we haven't even talked to the users yet, but it's next year's budget; tell me how long it's going to take, because I have to put a number in the spreadsheet. "I don't know" is a great answer. "I don't know yet" is a better answer. "I don't know yet, but I'll find out; we'll find out" is probably the best answer of all.
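The three-answer sizing rule from a moment ago pairs naturally with these honest answers about estimates. Here is a toy Python sketch of that rule; the function name and the two-hour cutoff are my own illustrative choices (the talk says "a couple of hours tops"), and passing `None` stands in for the honest "we don't know yet."

```python
from typing import Optional

def triage_story(estimated_hours: Optional[float]) -> str:
    """Toy sketch of the three-answer story-sizing rule.
    Pass None when the honest answer is 'we don't know yet'."""
    if estimated_hours is None:
        return "spike"     # answer 3: we don't know; schedule a tiny experiment
    if estimated_hours <= 2:
        return "do it"     # answer 1: story size of one, a couple of hours tops
    return "split it"      # answer 2: too big; refactor into smaller tasks

print(triage_story(1.5))   # do it
print(triage_story(40))    # split it
print(triage_story(None))  # spike
```

The useful part is the third branch: "we don't know" is a legitimate output, not a failure, and it routes work toward gathering feedback instead of inventing a number for the spreadsheet.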
So those are the kind of personal fixes we can make, but more important, I think, are the environmental issues, the environmental fixes that we need. We start with the team itself. It should be small and self-contained, and able to develop that thin thread of execution from one end to the other. So you don't have a separate UI team and a separate database team and a separate whatever. You have one small team that works on this set of features end to end, and another small team that works on that set of features end to end. The team should know each other. There should be trust. There's lower communication friction, no queues as you hand off between teams, that sort of thing. We need psychological safety and support. Your job, your position, your reputation: it needs to be safe to propose new ideas. There's no threat. You're not going to get disbarred or thrown out or anything by suggesting that it'd be a good idea to have sharks in a tornado. I use that as an example because I think that was the stupidest idea I've ever heard, and they've made a billion dollars on movies with sharks in tornadoes. So there's no such thing as a dumb idea. You need to be able to converse freely without any kind of threat there. You get what you reward, and this gets tricky, because if you want teams but you're giving bonuses and raises to individuals, that causes a problem. That's something we need to look at. Teams need quiet time to think, where they aren't interrupted, but teams also need to be able to respond to late-breaking bugs, to production problems, to whatever. So you need to balance that out. Some teams say, we can't be interrupted, no matter what's on fire, all day Tuesday and Thursday. Some teams say, we can't be interrupted between noon and 4 p.m. every day. It doesn't particularly matter, as long as the team agrees to whatever the protocol is and publishes it: this is when it's okay to interrupt us.
This is when it is not okay to interrupt us. So we have quiet time to think and work. We need fast feedback: many more, much smaller steps. We need to experiment, try things, and get feedback. If you're not getting feedback, you're guessing. You have no idea what's really going to happen. We need free information flow, and this is interesting if you want to read up on the Westrum continuum. Some pathological organizations devolve into information hoarding. It's like, well, I know this, and I'm not going to share it, because it gives me power. Again, you get that little brain reward there. And that's not what we need. We need people to mentor junior, more novice practitioners. We need to share what we've learned, and just have this free flow of information, not just on the development team, but with the executives, the C-suite, the user community. Information has to flow freely. That's how you get high-performing companies. If you don't use them yet, you should look into OODA loops: observe, orient, decide, and act. I find this personally interesting because it emphasizes the observe part first, and that has its place in the Cynefin quadrants as well. But this is one of those things we really recommend with debugging: actually observe the bug first. See all the ways you can make it happen. Find all the places you can make it happen. Don't just jump in and fix what you think is immediately the problem, because then it doesn't fix it, and now you've got two problems, and that can increase geometrically pretty quickly. So: observe; orient yourself, where does this happen and where else does this happen, what else can we say about it; decide what to do about it, write a test; and then act, write the code. Sense-making with Cynefin, and Wardley Mapping. Wardley Mapping takes this kind of idea and looks at what happens over time.
Things that we're custom developing now are going to be commodity sooner than you think. There's this sort of progression, and it's very interesting to look at what decisions you're making and how they might be different a year or two, or five, from now. The Dreyfus model of skill acquisition looks at how we learn, because that's so fundamental, and appreciates that beginners, intermediates, and experts have different needs; how you communicate with them, and how you learn, is different at different stages and changes over time. These are things we need to be aware of. And things that you wouldn't think make a difference make a huge difference, like corporate accounting policies. We need to look at introducing non-batch funding. These annual budgets and quarterly targets are stupid in the modern day. There's a whole Beyond Budgeting movement that folks in the C-suite are starting to gravitate towards, which takes a much more incremental and iterative approach to governance, one that makes much more sense for the kind of environments we're dealing with today. And then finally, there's possibly the grandmother of all environmental fixes: learning to adopt systems thinking tools, being able to look at something as a system rather than trying to isolate individual parts. And this, I think, is pretty critical, because if you've worked on any kind of team or any kind of large software, you realize everything affects everything else, all at once, all the time. You can get vicious cycles: this set of events makes the next cycle worse, and that makes things worse, and you spiral down. You see that with things like hyperinflation and poverty. You see it with development teams that are slow to deliver, and then they do things that make the next round even slower, and that causes more pressure, and that increases the failure demand and increases the cost, and you just go downhill really quickly.
You can also have a virtuous cycle, which is the same mechanism but with a positive net effect. So for example, continuous delivery leads to faster feedback, which leads to quicker resolution of errors and better alignment with user needs, which leads to faster delivery, which leads to faster feedback, and so on: a virtuous cycle. So these are important concepts that, if you're not familiar with them, you should be. You want to be. That's today's world. Okay, great, Andy. What about the future? Well, that's a good question. I get asked this fairly often. I started writing code commercially 40 years ago, so I've seen four decades of what we've been doing, and they've been pretty monumental decades. A lot has happened in that time. So what can I say about the future? Well, the first thing I want to say about the future is that we are terrible fortune tellers. We suck at it. As a species, it's not a core competency. And what do we try to predict, you ask? Well, everything. We try to predict the schedule. We try to predict task effort. We try to predict user reactions, software longevity. How can I make the software maintainable? How can I make it extensible? Rubbish. You shouldn't even be asking those questions, because you don't know. What are the future use cases? No idea. What did you predict was going to happen, say, in the fall of 2019 or early 2020? How did those project plans, your personal plans, your vacation plans work out? Not at all, because then the pandemic hit and all of that got thrown out the window. The future itself is really hard, and we suck at fortune telling. So one thing, at least in the short term: stop fortune telling. Instead of saying we must get better at estimates, say, let's become less dependent on fortune telling. And part of the reason we're bad at prediction, and this is again a cognitive bias, is that we tend to focus on the wrong things.
So a great example of this: in the 90s, I guess it was, there were huge amounts of press and articles and blog posts and discussion on the leading topics of the day. Who would win the desktop wars? Would it be Open Look or Motif? Who would win the middleware wars? Would it be CORBA or RMI? And anyone under a certain age in the audience is shaking their head like, what are you even talking about? These were the desktop systems at the time, two leading contenders for the windowing system. And it was a huge deal: would it be Motif or Open Look? And what happened? None of it, because the web came in and all those questions became irrelevant. It didn't matter who won; neither won. Neither one was even on the board anymore as a game piece. This is why predictions fail. You get what's called a black swan event, from Taleb's book of the same name. A black swan is a large-impact, hard-to-predict, and rare event beyond the realm of normal expectations. The pandemic. It's not that nobody knew it was going to happen. Epidemiologists knew conditions were ripe and that this was something that could happen. We, the general population, didn't; governments really didn't. And all consequential events in history come from these unexpected events, which makes our efforts to predict generally useless. Having said that, I'm going to go out on a limb and ask: what is our next black swan? And I claim it's going to be prompt engineering. Andy, what are you talking about now? Well, glad you asked. Are you familiar with systems such as Stable Diffusion, DALL·E 2, or Midjourney? These are machine learning text-to-image models that generate images from natural language descriptions. So you type in a prompt and the AI generates an image for you. The example on Wikipedia, if you look it up, has the prompt "photograph of an astronaut riding a horse," and the AI produced exactly that image: an astronaut riding a horse. And I come across this and I think, wow, this is pretty cool.
This is something worth looking at. Let me give it a try. So I put in the prompt "ancient wise man presenting at a conference on a barren moon, photorealistic." I got these four images. Interesting, different styles. I kind of like that one on the bottom right. But let me try to improve the prompt a little bit. So I type in "ancient wise man presenting at a conference on a barren moon, ringed planets in the background, meteor shooting across the sky, blue hues, photorealistic." And these were the four images I got. Interesting: I see something that looks like a meteor in that upper-left one. I do see a blue theme, blue hues, in all four images. I only got the ringed planet in one, though, and I actually wanted that as a feature in all the images. So it didn't quite do what I wanted, because as an end user I don't have a lot of experience feeding prompts into the AI and seeing what the results are. But there are people who do, and people who will. And so we have a burgeoning new job category of prompt engineer. Currently we're just talking about images, but is it that much of a stretch to think that you could ask the AI to build you a banking app, an image editor, a timing app, whatever? So we might see jobs for prompt engineers. Is that science fiction? Well, I don't think so, because let's take a look at the progression here a little bit. When I started programming, everything was 100% custom code. You would start off by editing main.c, and if there was a feature that you wanted, you had to write it. You had operating system calls, you had some low-level library calls, but they were pretty low level. Anything of any significance, if you wanted it, you had to make it. And then over the years, things got better. We got libraries that we could call into. Now we had database access, we had communications, packet handling, what have you. And we got more and more libraries.
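The prompt refinement Andy walks through is essentially structured composition: a subject plus detail clauses plus a style keyword. A minimal sketch of that idea, where the helper and its parameters are hypothetical for illustration, not any real prompt-engineering API:

```python
def build_prompt(subject, details=(), style=None):
    """Compose a text-to-image prompt from a subject, optional detail
    clauses, and an optional style keyword, comma-separated in the way
    these models are typically prompted."""
    parts = [subject, *details]
    if style:
        parts.append(style)
    return ", ".join(parts)

# Recreating the second prompt from the talk:
prompt = build_prompt(
    "ancient wise man presenting at a conference on a barren moon",
    details=("ringed planets in the background",
             "meteor shooting across the sky",
             "blue hues"),
    style="photorealistic",
)
print(prompt)
```

The hard part, of course, is not assembling the string; it's knowing which clauses a given model will actually honor, which is exactly the experience gap between a casual user and a prompt engineer that Andy describes.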
And we started to connect these into larger and larger frameworks, which handled a lot of the common details of an application so you didn't have to. And, you know, we're slowly raising the level of abstraction along the way here. And so finally, starting, I don't know, maybe 10 or 15 years ago, plus or minus, you start to notice the industry, most corporate programmers, really moving more towards systems integration instead of custom code. And don't get me wrong, there's still tons of custom code, but a lot of it is knitting together this library, this framework, this system, this data service, this bit of analysis, this other thing over here. So it's slowly but steadily becoming more and more systems integration and less and less raw code. Which leads us to the no-code movement, which I think is interesting. I don't think it's there yet, but it's coming along. And it makes perfect sense, because for common things, why should you have to go through all this effort? These are known features, known quantities. We just want to glue them together. Which makes the next step even simpler: AI-assembled code via a prompt. You say, I want a banking app, I want whatever it might be, and the AI goes off and assembles it for you. And you're starting to see this even now with GitHub Copilot. I thought that was going to be kind of like Clippy: "So, you want to build a banking app?" But actually they've done studies showing programmers are something like 50 to 60% more efficient when using Copilot. So I don't think this is particularly science fiction. And it's not really all that different, because we're still in the same position: we've got the user's needs over here and a working system over there.
And we're telling the computer how to do it, except instead of worrying about threads and microservices and library calls and what have you, we're creating a text prompt that says, well, I'd like a system with these sorts of components. Skills you need in this future: still talking with users, having a dialogue with users. Maybe look at what journalism majors are taught about interview techniques. How do we talk to users and get information out of them better? Same with anthropology. We should look at that a bit more, because when we're talking to users, they exist in a context. They exist in a culture. They have ethics. We have ethics. What we do matters, and what we do could be weaponized. It could be used against us. It could be used against a vulnerable population. That becomes much more important in our current age and in the next age. And systems thinking again: we can't really look at individual pieces. We have to look at pieces and events in context, because it's a system, not a bunch of parts. And at this level, that system includes culture and ethics and these other sorts of squishy topics that we've been able to ignore so far, but really can't for much longer. So that's what I think the future might hold. Thank you all for being with me this evening. You can find me on Twitter at PragmaticAndy. My homepage on the web, where I've got a bunch of articles and things, is toolshed.com. If you like this sort of methodology thinking, head over to growsmethod.com. Our tech books, of course, at the Pragmatic Bookshelf, are at pragprog.com. My science fiction and thriller books are at konglamora.com, with two Rs. And my music recording hobby is over at strangespecial.com, because STEAM is better than STEM. Thank you all again. All right. That's awesome, Andy. Thank you so much. And you're bang on time. You've taken out the time for us, and we greatly appreciate that. If there are any questions for Andy, please use the Q&A section and post any of your questions.
I had one question, Andy, for you. You talked about the OODA loop, right? I think I was more familiar with it as the Boyd loop. That's what Jim... That is the same thing, yes. And I've seen this used in a very interesting context, basically in terms of strategy: if you get inside the decision-making loop of your competitor or your opponent, then you control how they react to things, because you're now inside their decision-making loop. And I think this was originally from John Boyd in air combat, where, even though the planes they were flying were not as fast, they were able to maneuver more quickly, force the opponent to mispredict where they were, and hence get inside their decision-making loop. And if you maneuver quickly enough, you can take them by surprise. That I thought was a very interesting perspective. But what you've done now is you've taken that and put it in the context of software development and how one could use it there, which I thought was very interesting. So it's not really a question, but I just wanted to kind of... And let me just add on to that a little bit, because what I didn't mention on that slide, because I had too many slides already, is that the order of OODA, I think, can change depending on which Cynefin quadrant you're in. In one quadrant you want to probe, sense, and respond, but in the chaotic quadrant you act first and then sense what reaction that had, and so on. So in each of the four quadrants, the order is a little bit different, because it's a different style, a different class of problem that you're trying to address. And I find that really interesting, because that's a sort of second-order thinking that we rarely bother with. We see something like this and it's like, oh, okay, here's the technique. It's OODA, you do it this way, you always do it that way.
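The domain-dependent orderings Andy refers to can be laid out side by side. These are the standard Cynefin decision models for the four domains, sketched here for illustration rather than taken from the slides:

```python
# Standard Cynefin decision models: the order of moves differs by domain.
CYNEFIN_MOVES = {
    "clear":       ("sense", "categorize", "respond"),  # best practice
    "complicated": ("sense", "analyze", "respond"),     # good practice
    "complex":     ("probe", "sense", "respond"),       # emergent practice
    "chaotic":     ("act", "sense", "respond"),         # novel practice
}

def first_move(domain):
    """In chaos you act first to stabilize; in complexity you probe first."""
    return CYNEFIN_MOVES[domain][0]

print(first_move("complex"))  # probe
print(first_move("chaotic"))  # act
```

The table is the whole point: the same three-beat rhythm, but the opening move changes with the kind of system you are in, which is the second-order thinking Andy is describing.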
Now I'm going to get an OODA certification and I'm going to be an OODA Master, and it's always this way. And you know how that song and dance ends, and it's ridiculous, because all those efforts to commoditize our techniques and our thinking dilute and destroy the intent of the original author. And again, it just comes back to: it's not always the same thing. You can't always just go on autopilot. We really want to, but context matters. Whether you're in a linear system, a complex adaptive system, or an emergency situation in a chaotic environment, it makes a difference. Absolutely. I think very well said. In fact, I think the biggest takeaway for me from all of Dave Snowden's work on Cynefin is not so much the four domains as it is actually figuring out which domain you are in. That's equally important. Sometimes people are so obsessed with the domains, the four quadrants as you were calling them, that they miss it. To me the equally important part is to first figure out which domain you belong to. 100%. Yeah, exactly. And I shouldn't call them quadrants; that's the wrong word. That's just laziness on my part, because there's nothing mathematical about it. It's not a Cartesian plane. These are just four fluffy domains. They're in a four-dimensional Hilbert space? Sure, whatever. But yeah, you're exactly right: identifying what kind of system you're dealing with is 90% of the battle. The reason that most of these adoption and transformation efforts fail, and management systems fail, is that they're using linear tools in a complex adaptive system. And it doesn't work. We know that doesn't work. And yet, because of habit, because of resistance to change, this is what so many organizations just do. And like I said in the middle of the talk, if you're eating soup with a fork, it works just well enough that they keep doing it.
It's like, my man, we have a really big spoon here. This could be so much better for you. It's like, but we've always used forks. Okay. And there you have it. That's a great analogy. All right. I think we have one question here. I'll read that out. So, okay: when Agile is adaptive, I see people talk about predictive Agile and prediction about effort, schedule, future techniques, etc. What is your view on this? I've not heard it put in those words. I think it's rubbish, for obvious reasons. We, as a species, simply do not predict well. We think we do; that's a cognitive bias. If we predicted well, I'd be playing the stock market and I'd be coming to you live from Fiji right now. I'm not. I don't. We're just not good at that, but we think we are. So I don't think that's a valuable line of reasoning or questioning to go down. I think it's much better to be able to react quickly, because then you're guaranteed that whatever happens, you're going to win. It's like selling supplies to miners instead of mining for gold. You're going to win. You don't actually have to find the gold, because you're supplying the miners who are coming through. There are ways to win no matter which way something goes, and that's where you want to position yourself instead of making a bet. Actually, you should think in bets in management anyhow. You shouldn't think, well, this is our plan and this is what we're going to do. It's a bet. You're hoping it works out, and this is how much investment you're putting behind your bet. That's perfectly valid. That's great, but it's a bet. There's no guarantee it's going to go that way. As for this notion of trying to predict effort in particular: we are not at a point of maturity in our field where we can really do that in general terms with much certainty. Now, there are exceptions, of course. If you are building the exact same application that you've built 50 times before, sure.
You can optimize that, but now, in Cynefin terms, you're in a linear space, and okay, fine. That's great, but that's not where most of the action happens. All right. Thanks, Andy. The next question is from Tom Gilb, our very own Tom Gilb. He's asking: do you believe that systems engineers have to start with quantified critical qualities like security, usability, adaptability? Yes, for the most part, because there are a number of aspects of a system that you really can't bolt on later. Security is a great example. Internationalization. There are these kinds of cross-cutting, fractal aspects that, yes, you need to address and have in mind from the get-go. It gets a little fuzzier when you get to things like usability and adaptability, because that's one of those areas where no plan survives contact with the enemy. You can be going down a road, and as you start getting real-time feedback from real users, you have to adjust, sometimes a lot. I found this really interesting: I'm not a gamer for the most part, but about a year ago I got a Valve Index VR setup on Steam and I was playing Half-Life: Alyx, where you're running around shooting headcrab zombies and that kind of thing. They released developer commentary on how they had developed the game over the course of however many years it took them. What I found really interesting was that, as the developers tell it, they were intensely focused on play testing and user feedback, because this was brand-new territory. It's like, well, how do you represent this kind of gameplay in VR? This really hadn't been done before. They were very, very upfront about it: let's get this in front of people. Hey, this was a great idea. Oh, it's causing motion sickness. This was a great idea. Nobody understood it. Whatever it might be. But they had a very, very focused effort on getting that feedback as fast as possible and adjusting to it. So yes, you have to do that. Back to Tom's question.
Yes, I do agree: things at the engineering level, even software hard real-time requirements, things like that, yes, you have to get that in from day one. You certainly have to at least think about it from day one. All right. Thanks, Andy. I think we're just running out of time, so I'll take one last question if that's okay with you. Absolutely. Cool. So we have a question: where do you think Scrum Master or Agile lead kinds of roles are heading in the future? I'm going to get in trouble for this, I'm sure, and probably widely quoted on Twitter, because that's how that goes. I think those roles are a bad idea. And I'll explain; I think Scrum is a bad idea, and I'll explain why. I have a friend who was in the military, high-end, think elite-strike-force, SEAL-team kind of elite environment. And one thing that I appreciated from their take on management was that the person leading the mission, the team leader on the mission, is leading it because they picked the short straw today. It's not that God tapped them on the shoulder and said, you are the Scrum Master, you are the lead; it was just their turn. And, especially in a military situation, when that person gets picked off, the next person can step right into the role and take over in the same heartbeat, and the mission keeps going. So there's nothing magic about the leadership role, because it's a team. And I think that's the most important concept. The team is the most important concept in software development. If you look back, especially at extreme programming, that had an almost anti-management tilt to it when it came out, and in fairness, it was developers trying to seize back control of the means of production, because non-software managers trying to manage a software project is generally a recipe for failure; they don't understand all the aspects that I just talked about in this talk.
That's not something they were exposed to in business school or in their own professional careers. So that's a problem. The idea of having some form of process police, be that a Scrum Master or whatever, is, again, maybe not a great idea. There are some things, obviously, that you want to enforce and be very strict about: things like version control hygiene, checking in every asset, making sure that the pipeline build is clean, that kind of DevOps and engineering and version control discipline. Yes, you want that to be very strict. You don't want to say, well, it's okay, you don't have to check in some of your files because we're going to do an experiment. No, that's stupid. That's like saying, I'm going to stop brushing my teeth and see what happens. No, you don't do that. So yes, there is a level of things that we do need to be disciplined about, and we do need, if necessary, someone to make sure we wash our hands. But beyond that, the problem with Scrum is this: Scrum was a great idea 20-some years ago, because it was a stepping stone to get people from thinking in months- and years-long deployments into thinking in two-week or four-week iterations. And that was a bold and powerful move. But that was 20 years ago. The emphasis now isn't on a two-week or four-week sprint; it's on continuous. You check in, it builds. If you need it to, you can get it into users' hands that afternoon. And if it's the middle of an attack, you're getting DDoSed or whatever, the chaotic domain in Cynefin, you do it now. Scrum really isn't aligned for that. I think because so many corporations adopted Scrum, that put a freeze on innovation in the space. We don't have a lot of new techniques or practices coming out of the space. I mean, a couple have come out over the years, but nothing like what could have happened, because everyone's like, oh, well, we do Scrum. And they don't; they do half of it badly and they use a ticketing system.
But yeah, that's a topic for another time. So I think that's hurt innovation, and I think we need to kind of get away from that. My friend Dan North says you really want to avoid any kind of branded methodology and simply do what works. And I like Dan; I think that's a very pragmatic answer. Test-driven design, outside-in testing as a design technique, is very powerful. Some people get all hung up, like, well, but this is testing, and they do it wrong and it's painful and they stop doing it. That's a whole separate conversation. But we need to get back to the idea of searching for better ways of developing software, learning and discovering better ways of creating software, like it says at the top of the manifesto, which nobody reads, but that top part I think is the most important. We are discovering new ways, and we should still continue to discover new ways. Absolutely. Amen to that. And I think that's a great way to wrap up this session: that we're still discovering, we're still uncovering, and there's a lot more for us to figure out. Don't let the training wheels become permanent fixtures that you're just riding with. Exactly that. All right. Well, thank you all. And thank you so much for being with us. And thanks, everyone, for hanging in so late in the evening to listen to Andy. Thanks again, everyone. We'll see you tomorrow.