Whoever's going to work on the story, whoever provides input to it, whoever is involved in it in any kind of way: put them in a room, spend 15 minutes talking through the story, and come up with some examples that describe it. When you do that, some amazing things will happen. First, you get the examples themselves. The next thing you'll discover is that, with these examples, the business rules around the story become clearer. Maybe some of them are already defined, but you will discover new ones, and you will realize that the ones you already knew about were ambiguous, unclear, or contradictory. As soon as you start talking about concrete examples, people can visualize it; people can see where it's inconsistent. The other thing that comes out of these meetings is questions. Every now and then the developer will ask the product owner about something, and the product owner goes: you know, that's a really good question. I don't know. Let me try and find out. And then you can just park it. What you've just done is promote an unknown unknown to a known unknown, which the BA or the product owner or the domain expert, whoever it is, can go back and analyze and find out about before you start writing the defective code. I said 15 minutes because it's important to keep these meetings really short and really frequent; you want a 15-minute meeting like this for every story. So don't try to solve the problem or get an answer to the question in the meeting; just write it down. Another thing that happens is that you can create smaller stories. Tomorrow in my workshop I'll show you some techniques for coming up with more examples for a story. But one thing that happens is that for some stories you'll get a lot of examples, or you'll realize that there are many different rules.
And when you see that there are lots of examples and lots of rules, that's an indication that this is a big story, and you can break it up into smaller chunks. It's much easier for the product owner to realize the magnitude of what they're asking for when they can experience firsthand, through a conversation, that there are so many different paths, so many different examples: we have to break this up. So stop breaking your stories up by tasks, you know, user interface, domain layer, persistence. Break them up by examples, which are more like vertical slices: this is one path through the story, and this is another one. But perhaps the most important thing you get out of this meeting is not even any of these. It's that you get a shared understanding between everyone on the team. Some people have told me that this has some amazing effects: it helps people be more empathetic. When things go wrong, which they always do, even if you do Three Amigos meetings, it's harder to blame one another, because everybody was in the same room and gave it their best shot. So if you have a problem with blame in your organization, this is a great tool to get rid of that toxic culture. There's absolutely no excuse not to do this. Just try it out. Don't say it's BDD. Just grab some people and say: can we have a quick chat and talk through some examples, just to see what happens? You'll be amazed, because everybody has a different perspective. So, talking about perspectives: what I see a lot of people do when they're trying to adopt BDD is this. We have this syntax called given-when-then, at least in Cucumber, which is a way to formalize those examples.
And one of the things I see is that the BAs write the given-when-then in the JIRA ticket and then hand it off to a developer. Now, I'm not going to ask how many of you are doing it this way, because I don't want to embarrass you, but this is missing the important trick I just talked about: you want to have the conversation around these things. You don't want someone to sit there and do this in a vacuum on their own. So if you're putting given-when-then or Cucumber scenarios in JIRA and handing them off, please stop doing that. It doesn't help. It's just the old way in new clothes. So, I've been talking about concrete examples. Do you know what I mean by a concrete example? Do I need to get meta and give you an example of a concrete example? Fair enough. Think about the last time you used a piece of software, maybe on your phone or on your laptop. Who wants to give an example? How about you? What was the last piece of software you used on your phone? Topping up your prepaid phone; excellent. So that's an example, but can you tell me what you did? You entered your phone number. Where did you enter it? So you went to a website, you entered your phone number, and then what happened? You chose how much money you wanted to top up with. And then you expect an SMS sent to your phone confirming that you've topped it up. So that's a concrete example. You can make it even more concrete: you can ask, what's your number? Because that might be relevant. I've got a UK number; what happens if I put my number in? Maybe the amount is relevant; maybe something different happens if you use a different amount. So concrete examples are just little stories like this. And when I say story, I don't mean story in the Scrum sense. I mean story in the sense of telling a story.
With real people, real values, and real places; that helps people visualize it. And the great thing about a concrete example is that everybody understands it. Everybody. It's not technical. It's about how users interact with your software. Everybody can provide feedback, and everybody has a different perspective, so you talk about it until you agree on what it is. And the great thing about this is that it's rooted in the problem domain. Now, what does that mean? Another buzzword: problem domain. Raise your hand if you've read Eric Evans' Domain-Driven Design book. Well, I haven't, because it's too long, but I've been flicking through it for 10 years. And there's this picture in there; it looks like this. I love this picture; it's such a great illustration. When you write software, there are two domains. There's the problem domain, which is where the users are, and the businesses, and the patients, and the contracts, and the futures, and the options: all these real-world people and things, and a problem that, hopefully, we're going to solve with the software. And then there's the solution domain, which is where people like me operate most of the time, where we have our MongoDBs, and our JavaScripts, and our database tables, and our classes, and methods, and so on. But one big problem on a project, and I'm sure you've all experienced this, is the communication problem, because we don't speak the same language. We have completely different vocabularies. So there is a translation cost whenever somebody in the solution domain, which is typically a tester or a developer, has a conversation with somebody in the problem domain: they're not going to understand one another. And if we can increase this area in the middle, where people talk the same language, we'll have fewer misunderstandings.
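The phone top-up conversation above could be formalized in Cucumber's given-when-then syntax. This is just a sketch; the numbers, the wording, and the parked UK-number question are all hypothetical:

```gherkin
Feature: Prepaid phone top-up

  Scenario: Top up with a local number
    Given a customer with the prepaid number "99999 00001"
    When she tops up 100 on the website
    Then she receives an SMS confirming the top-up of 100

  Scenario: Top up with a UK number
    Given a customer with the UK number "+44 7700 900123"
    When she tries to top up 100
    # What should happen here? Nobody in the room knows yet.
    # This is exactly the kind of question you park for the product owner.
```

The point is not the syntax; it's that scenarios like these come out of a conversation, not out of someone writing them alone in a JIRA ticket.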
And this is mostly about the people in the solution domain reaching out to the problem domain and trying to understand that domain, more than the other way around. I have seen teams where the technical people are fanatic about trying to educate people in the problem domain about the constraints of relational databases and whatnot, but that doesn't really help. If the people in the solution domain understand the problem domain well enough, the solution will take care of itself, because with software, all we're doing is creating a model of the real world. And how can you create a good model, or good software, unless you understand the problem domain? So everybody who's a programmer needs to understand this. In my previous job I was working in finance, with options and futures. I didn't have a clue, so I had to learn about the whole financial domain before I could even start writing software. I suppose I could have started writing software on the first day, but it would just have been a mess, right? So if you model the problem well enough, the solution will take care of itself, but you have to have this ubiquitous language. Yet another buzzword. Ubiquitous language is another thing from Domain-Driven Design, and it's basically a language that is everywhere; that's what it means. And by everywhere, I mean that the words the business people or the users use have to be used in the solution domain as well, meaning in the code. So if it's called a customer, if people are talking about the customers in the shops, it's a big mistake to model them as users or use some other word in the software, because that is going to create misunderstanding. Ubiquitous language is the idea that you take the vocabulary from the problem domain, you use those words, and you let them shape the software very, very deeply.
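To make the ubiquitous-language idea concrete, here is a minimal sketch in code. All the names are hypothetical; the point is that the code uses the business's words, "customer" and "top up", rather than inventing its own:

```python
from dataclasses import dataclass


# The business talks about customers, not "users", so the class is Customer.
@dataclass
class Customer:
    phone_number: str


# The business says "top up", so the method is top_up, not add_credit.
@dataclass
class PrepaidAccount:
    customer: Customer
    balance: int = 0

    def top_up(self, amount: int) -> None:
        self.balance += amount


account = PrepaidAccount(Customer("+44 7700 900123"))
account.top_up(100)
print(account.balance)  # 100
```

When a domain expert reads "PrepaidAccount", "Customer", and "top_up", nothing needs translating; that is the whole idea.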
And talking through concrete examples helps you build this ubiquitous language, because you're putting people from both the problem domain and the solution domain in the same room. You'll probably never get rid of the misunderstandings entirely, but you'll eliminate a lot of them, and you'll converge towards a more common language. Another thing is that these concrete examples can provide a shared source of truth. I remember when I joined ThoughtWorks, I was introduced to extreme programming, coming from waterfall, and it was quite liberating, because in extreme programming there's no documentation. If you wanted to seek out the truth, you had to go to the code and read the unit tests, because being good extreme programmers, we would do TDD and pair program, and that's where we would define the truth. And if you didn't know how to read code, well, tough, right? That's so unfair; that's mean. What about all the non-technical people on the project? Where is their source of truth? They can't read the code. I find this attitude a bit ridiculous: you go to the code to find the truth, at the cost of excluding the most important people from it. Where do real people go and look for the truth about software? Do they go to the requirement specification or the documentation? No, that's out of date. They go to the software itself: they open the app and start poking around. So we have two different truths: the external description of the thing, and the way it actually is on the inside. But with examples, you can converge these into one thing. The examples are one source of truth, because, as I'll show you tomorrow if you come to my workshop, we can turn them into automated tests. Concrete examples also help you split stories into small pieces, which is really, really important for predictability.
In order to have any level of predictability, you need to work in small chunks, and if you create examples, you will quickly see whether a story is too big and needs to be split up, or whether you can just work on it as is. In most cases, you will find that you can split it up. So that's concrete examples. The other thing behind behavior-driven development is test-driven development. Why do we even have tests for our software? Well, imagine Homer Simpson. He's sitting in his nuclear power plant, and there are all these yellow warning lights, and something goes wrong and they all start flashing. That's what automated tests are: warning lights that you attach to your code so that when something goes wrong, they go off. If they all go off at once, that's really annoying, and you don't really know what to do. But if you do this well, and TDD helps with that, they will tell you exactly where the problem is so that you can fix it. That's one of the reasons we do TDD. If you try to write all the tests afterwards, typically they don't tell you where the problem is; they just tell you that there is a problem. And then there's this other part of TDD that everyone forgets: refactoring. With TDD, the cycle is: you write a failing test, you watch it fail, you write just enough code to make it pass, and then you write another test, right? No. You refactor, and then you write another test. But people don't do refactoring. At least a lot of the teams that I visit don't do it, for many different reasons. Some people need permission to do refactoring; they have to go and ask the manager if they can do some refactoring. Some teams do refactoring sprints: two weeks of refactoring, planned ahead. Or they just don't do it at all. And I think one of the reasons people don't refactor is that they don't know what it is. They don't see the benefit.
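The red-green-refactor cycle just described can be sketched like this. It's a toy example with hypothetical names, not code from any real project:

```python
# Step 1 (red): write the test first. It fails the first time you run it,
# because top_up doesn't exist yet.
def test_top_up_increases_balance():
    account = {"balance": 50}
    top_up(account, 100)
    assert account["balance"] == 150


# Step 2 (green): write just enough code to make the test pass.
def top_up(account, amount):
    account["balance"] = account["balance"] + amount


# Step 3 (refactor): clean up, with the test as your warning light.
# Then go back to step 1 and write the next failing test.
test_top_up_increases_balance()
print("green")
```

The step everyone forgets is the third one; the cycle is red, green, refactor, not red, green, red, green.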
If you go back to this problem domain and solution domain thing: this word, refactoring, where does it belong? It's in the solution domain. It's technical lingo. But does that mean it has no value to the people in the problem domain? Of course it does. This is what's going to keep the product alive as time goes by. This is what's going to make it possible to actually maintain it, fix bugs, and evolve the product. If you don't do refactoring, it's going to die a horrible, slow death. So it has value to the people in the problem domain, but people don't do it, because it's not well understood what it is. So I think we need a new word for it, and I have a metaphor. Imagine you're in a kitchen, a kitchen like this. The programmers and testers are the chefs. The BAs are the waiters taking the orders from the customers sitting in the restaurant. And the kitchen is the code base. So imagine what this kitchen looks like. Does it look like this? Or does it look like this? This is the kitchen that most programmers work in. It's messy. Making food here is slow. The food we make here is bad; it can make you sick. The chefs walk in here, trip over, and hurt themselves. You get dirty. If you want to be agile, you can't have a kitchen like this. You need to clean up. So a much better word for refactoring is cleaning up. If you go to a sushi restaurant, you will observe that the sushi chef makes a little dish and then cleans up immediately. That's how you do refactoring. So you can't stay agile unless you have a clean kitchen, or clean code. And you can't have clean code unless you do refactoring. And you can't refactor unless you have good automated tests: you need those warning lights to go off when you do a refactoring that destroys something. Now, those tests will eventually slow you down, because you build lots of them, and you have to start listening to them.
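Cleaning up in this sense means changing the internal structure without changing the external behavior, with the tests watching. A minimal sketch, using a hypothetical order-total function:

```python
# Before cleaning up: a manual accumulator loop.
def total_price_messy(items):
    total = 0
    for item in items:
        total = total + item["price"] * item["qty"]
    return total


# After cleaning up: same external behavior, clearer structure.
def total_price(items):
    return sum(item["price"] * item["qty"] for item in items)


# The warning light: this goes off if cleaning up changed the behavior.
order = [{"price": 2, "qty": 3}, {"price": 5, "qty": 2}]
assert total_price(order) == total_price_messy(order) == 16
```

Without that assertion, you could not tell whether the cleaned-up version still cooks the same dish.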
When the tests get slow, that's not a clue that you should parallelize them and run them on lots of different machines. It's a clue that you need to change your architecture: to make it more modular, to migrate from a monolith to microservices, perhaps. You have to listen to those tests, because you run them all the time, and you have to keep them fast. So, to sum up: if you want to succeed with agile, there are no shortcuts. You have to have excellent communication in your team. You have to talk to one another; try those discovery workshops for stories. And you have to have excellent code. If you don't have those, you can forget about agile. That's my talk. Any questions?

What I feel, at least in what we are doing, is that automation code mainly focuses on the functionality, and refactoring is something that won't impact the functionality. So the functionality is fine, but some refactoring work sits in the backlog, and we often say that we will take it up in the next release. As long as the functionality is not broken, refactoring is not a high-priority case. In such cases, how would automation be helpful?

Sorry, I didn't catch it. What was the question?

I said that if we have good automation tests, then refactoring the actual code becomes easier; that's what I understood from your talk. But what I think is that automation code focuses more on the functionality, and if there is no break in functionality, how would automation code be helpful for taking up a refactoring story?

Okay, so your question is: how do automated tests help with refactoring? Yeah. Refactoring just means changing the internal structure of a program without changing its external behavior. And an automated test makes assumptions about the behavior.
It observes the behavior, and if the behavior is not what it's expected to be, the test fails, and one of those warning lamps goes off. Right? So how does it help? Well, if you do a refactoring and accidentally change the behavior while you're cleaning up, the test will tell you, and then you can fix it. But if you don't have those tests, you can't refactor safely, because you don't know if you broke anything. And then people stop refactoring, and you end up with a code base like this. Not immediately; it will take you six months to end up like this, but it happens over time. It's like boiling a frog. Have you ever tried to boil a frog? I haven't, but I've heard it said that if you drop it in cold water and just turn up the heat, it will just die; it doesn't notice. If you drop it in boiling water, it will jump out. It's the same here: if you stop refactoring, you won't notice that your code is deteriorating. You need the tests to be able to clean up.

Yeah, actually, that's what is happening in our case. Whenever we talk about a refactoring story, the next question that comes up is: is it creating any problem now? Has an existing feature broken? So why should we take it up, when there are other, higher-priority stories? So more often than not, it goes back to the backlog. Thank you.

So, in our company, we are writing a lot of specifications using Gherkin, tons of them, and it's really good. It's creating conversations, we have very good specifications and examples, and all the good things are happening. But when it comes to creating the automation that brings those specifications to life as an automation suite, that's when things start breaking up. Maybe 20% of those specifications actually become live, or turn green; most of them are still red. Is it a good idea to keep them that way, or do we just labor through each and every one to make them green?

No.
I don't know your particular situation well, but fix them or get rid of them. Tests are a liability, right? Because you have to maintain them. The value of a test is not constant over time; it is most valuable when you are modifying the code it tests, which is typically in the beginning, though it can be valuable later as well. So if you have tests that never pass, they provide no value, and you can throw them away.

It's not that they could never pass. If you ran those tests manually, they would pass; it's the automation behind them that is not yet done. So do we labor through automating each and every one? Because there is so much specification that we are way behind in terms of automation.

Yeah. If your tests are never all green, if you have two kinds of tests, the important ones that are green and the ones you have given up worrying about that are red, how are you going to notice when one of the important ones starts failing? It's just another red among the 120 other red ones. You are not going to notice, so they are no longer warning lights; they will not cause anyone to react. If your tests are always green and one goes off, you can be sure that people will actually do something about it. But I would wager that in your case, when one of the tests goes red, nobody cares. So they don't really serve much of a purpose, I think. I would stop running the red ones and try to introduce a culture where tests should always be green, and you stop everything if something goes red. From now on, you can turn the page and start over. Maybe keep those old red ones just for documentation, but they are only dragging you down at this point.

Do you need a microphone?

Actually, we have automated tests; I'm not sure how many people here know the fixtures, the JTF tests.
We have some 7,000 tests running as part of the daily build process. They cover the existing code, so if any new check-in breaks the existing functionality, a test immediately fails. So we do see new check-ins breaking the existing code, like a refactoring gone wrong, and we come to know immediately. That's the advantage of having automated tests as warning signs: they catch new check-ins that break the build or break existing functionality. Through the JTF tests we see the code coverage as well. So I'm just stressing that having automated tests as warning signs works, exactly as you said.

Yeah? Last question.

Sorry, was that a question, or was it just an observation? Yeah, no, I agree.

Hi. Regarding the source of truth: we also have a similar kind of situation, where the product doesn't have good requirements or a specification, and when we have to do a major refactoring, we have to look into the code, so obviously it's the development team, who are well aware of the code, who go and dig into it. Touching on the topic of the source of truth, you were saying that the requirement specification is obsolete. Is it the trend now to use BDD to document the source of truth, so that even a product manager or a product owner can go and read it? Is that the trend?

I don't know if it's a trend, but it's what people in the BDD community have been doing for years. There are many ways you can write executable specifications.
You can write them as test scripts that are not very readable for a non-technical person, or, if you do it together, you can write them in a way that is very expressive: they illustrate a business rule, they don't have a lot of UI details, they're just about the business rule. And if you do it that way, and if you make sure that your tests are always green, with no red ones lying around, then those tests are not just an executable specification; they're also the truth, because they describe what the software does. If there were a mismatch, a test would be red, but they're not, because you keep them green. So it's this combination: making sure your tests are always green, and making sure they're readable by the business. And then you can get rid of a lot of extra documentation that you would otherwise have to keep in sync, because now you have it in one place.

I think I'm way over time. Sorry.

One question from my side; can you please take it offline? We need to make sure we don't slip.

I'll be over here; I'll talk about it in a break. Sorry for going over time. Thanks.

What you're stressing upon is: you can forget about agile if you don't focus on those two aspects?

Yeah. You have to bring the people who have different perspectives and different backgrounds into the same room and talk a lot about your user stories before you even think about implementing them or writing tests around them.