So have you guys had a good conference so far? Cool. All right. I didn't expect so many people. This is being served to you freshly made, five minutes ago. That's how most of my presentations are. And there'll still be a few slides that I'll create while you guys do some activities. All right. You have just about one minute, ten seconds to go. We'll get started when it's 1:30. Looks like there's a lot of interest in scaling the engineering practices, or XP practices, inside your organization. Or are you here to see me? I'd like to believe the second one, right? But I know I'm not that handsome that people come to see me. So I'm assuming that most people are trying to scale some of the engineering practices inside their organizations. From what I have seen in many organizations, the transition to agile is pretty stupid and simple, right? I'm going to make that statement. Having done that for 10 years: pretty stupid and simple. The challenge is when you actually hit the technical practices. Because everything else before that, that we generally talk about, is, in my opinion, a relatively easier change to go through, until you hit the point of really changing how the technical folks in the organization work. And that's where, at least in India, from my experience, a lot of companies are struggling, if I can use that word. And this workshop, so this is a workshop. This is not a presentation. I don't have all the answers. I have some approaches that I have figured out that work for different companies. I see a few people who have been in my workshops before. So we've done work at some companies where we've been able to scale some of their engineering practices inside their organization. I certainly don't have all the answers. So I'm going to present a little bit of what I have learned over the years. But this is going to be structured as a workshop, which means together we're going to try and come up with some more ideas.
We're going to get ideas out of you guys, and then I'll take credit for it. Cool with everyone? Well, we'll open source it, or we will put it under Creative Commons if that concerns you. But let's first hope to get something really good out of this workshop, where we can actually take back something meaningful, saying these are things that possibly we could be looking at in terms of why scaling some of the engineering practices at work has been more difficult than we thought it would be. So let's get started. I always joke that whenever I'm not prepared, I start with a group exercise, which means that gives me time to create the next slide. So here's the first group exercise. That's the second one. I see, that's the first one. All right, what I want you to do is think a little bit about what success criteria you have in mind for a successful training, right? Let's say I'm going for a test-driven development training or a continuous integration training. Disclaimer: I'm going to be talking more about XP practices, not general agile practices or Scrum practices. So whenever we talk about training from now onwards, it's more specific to some of the technical practices. Be it TDD, be it CI, evolutionary design. What other practices do you guys have? Continuous deployment, BDD, TDD, same old wine, new bottle, different things. Pair programming, who's my friend over there? Wonderful. So pair programming, has anyone actually done a training on pair programming? Yep, there we have Eunice. He does trainings on pair programming. So yeah, there are people who actually help you learn pair programming through a training, through a workshop. And I've seen varied degrees of success. So I do want to include pair programming in this. We want to talk about some tips while doing that. So let's think of some of these practices. And I want you to discuss at your tables: what is a success criterion for you?
What do you think is a successful training? What criteria would you have for a successful training? And how would you measure each of them, right? Let's say you had someone do a training for you on test-driven development. How would you measure whether that training was actually effective or successful, right? So I want you guys to quickly discuss at your tables. You get five minutes, and after that, we'll go around each table and quickly ask people what their success criteria were and how they would measure a training to be successful. This is all warm-up, because I want to really get into some of the techniques, but we first want to get out what it is that we care about and how we measure that, right? So go ahead. XP trainings, only XP trainings, yeah. So it'd be good if a person brings up, saying, you know, this is a success criterion for me from the training. And then someone else can quickly ask, okay, how would you measure whether it was successful or not, put it down, and then go to the next point. That's how you can capture a few. After five minutes, we'll have a bigger discussion to make sure we share what we've learned from others. Let me create the next slide here, all right? Let's do it live. So we'll start with the group over there. Tell me one top criterion that you had, and how would you measure it? All right, so your success criterion is that there is awareness in the participants who attended, awareness about these XP practices. And how would you measure that? Awareness, and evaluation through Q and A. So it's like a question paper that people get that they have to go through. Okay, and who evaluates that? The trainer evaluates that. That's skewed, right? Because if I'm the trainer, I would say everyone got everything. Why would I say, no, they don't understand the basics after attending three days of my training? That would be killing my own business.
Now I want to be pragmatic, right? Because I want to come up with a model that works for us, that you can go back and, you know, try and think about. And I want to actually talk through this. So I don't have many slides, because I think this conversation is important, right? So, okay, step one: a Q and A evaluation, and application of the learnings from the training, okay? How do you measure something like this? Like, I have only two people now. So after three sprints I should be able to see more than two, it could be six or more, applying it. I should be able to measure that the learnings have been applied as part of the user stories, and that more people are able to work on those user stories. And through that, the velocity you can get. Okay, I don't even want to go there. More people being able to acquire the skill. Okay, that's interesting. I actually want to take a segue. What's the difference between skill and knowledge? So are trainings geared towards knowledge or towards skill? I mean, I was looking at something along those lines: whether the knowledge levels of people have gone up, or the skill levels of people have gone up. Training is all associated with knowledge. And that makes me very uncomfortable. So just after training, you want to have some knowledge, right? And then maybe in the longer term, you want to improve the skills. Okay, let's complete your thoughts then before we move on. The way I'm doing it is probably what works best for me, I mean, inspect and adapt, basically, after a point. I lost you a little bit, because that feels like you're saying: after training, I'm going to do coaching and mentoring, and you apply it. But how do you evaluate? My whole question is to focus just on the training. How do you evaluate the training itself? I'm not going to count the coaching, because you're still adapting if you're still not there.
Maybe when the organization is mature, you will not look back and say, what practice are we doing? Okay. That's the point where you don't need to measure. But until then, we still need to. So the way you can measure this could be surveys, customer feedback, any of those. You don't go to hard metrics at this level; you go a little more qualitative. All right. So hang on to that. I want to move on. I have a lot more tables to quickly cover. So let's come here. Is there anything that stands out, other than what they've already talked about? So you're measuring whether they have really improved or not. That's where the skill comes in. That is where you're checking whether someone is really skilled or not, and that's another measure. So, just so everyone can hear you: you said that you would start with a pre-assessment, like a programming assessment, before the training. Then you would go through the training, and at the end of the training, or after a little while, you would do a post-assessment, and then you would compare the pre- and post-assessments, both on the knowledge level and, to some extent, the skill level, because we can do both of those. And that would give you a measure of whether the training you did was effective or not, whether it helped you achieve the kind of things that you wanted to achieve. So you've stolen my next point. That's good. Let's go to this table. Sorry? You have some more, but we have so many people, and we want to make sure everyone feels accommodated. So let's go. That's a good point, and that's good enough. So I'm gonna put the pre-assessment here real quick, pre- and post-assessment. I've got the board, so I could use that, but this will help me the next time I do this. Yeah, according to me, training is... Is this from you or the table?
Because I want to hear the whole table. What's the point of the group exercise? Not me creating the next slide. Okay, so let's move to that table then. Anything new to add? Anything other than this to add? Common language, and then you can see if everyone's now talking the same language, right? So same language or vocabulary, and that is something that's visible. People are talking the same language. You go to a design patterns course and now suddenly everyone's only talking in terms of patterns, right? So you're kind of getting into the next topic, which is: these are the objectives we want, now how do you structure something that'll help us? So practicals, having practicals, will be one aspect of that, right? So that's among the attributes of a good training, right? Where you want a healthy balance between knowledge and skill, and you want to measure, and that's done through some practical experience, some situations, and we'll get into those details. All right, which group has anything new to add? Yes, please. Confidence, right? Having confidence after the training that I can apply what I learned. Confidence that I can apply what I learned is one success criterion, and you can measure whether people are confident or not, right? They can rate themselves, saying, you know, after this training, I'm quite confident that I'll be able to apply this at work, or I'll require less coaching and fewer consultants to do this stuff, right? So that's good. Anything else? Yes, please. We'll come to you. Sustainability, beautiful word, right? Tell me more about sustainability. So maybe on the evaluation, as soon as I complete the training, I'll be 100 on 100, and on the post-assessment too. In the enthusiasm of learning all these new things, I'll go and perform well for a month or so. Looks like someone's talking from a lot of experience. Right, because most trainings tend to be like that, unfortunately.
First few weeks after the training, lots of enthusiasm, lots of excitement. This is the new thing that we wanna do, and then it slowly tapers off when you experience reality. It goes: take me back, I shift back to the older processes, because people around me will be using those. So maybe for two or three months we'll be using the new skills, but we should be able to sustain the learnings from the training for a year or so. Measure that, and that's something you could see: whether people are still doing those practices, or whether they're falling back to old habits. Cool. I think this is a good enough list. There's a lot more stuff. I don't have slides yet, but there's a lot more stuff that we need to cover. So what I wanna do is keep these things in mind. These are success criteria, or key things that we want out of trainings. Does everyone agree to these? Yes? So let's next talk about what challenges you actually face when you try to achieve this. So let's spend five minutes really focusing on the key challenges. Why is this possible, or not possible? What are the key challenges you guys face? And then we can actually come to some answers: how do I address this? So, five minutes. Let's talk about the key challenges that you face in achieving some of these success criteria. Group discussion again, and then we will do a recap, and this time we'll start from the back. So you guys are in the spotlight next. On to the challenges. All right, we have a show of hands over there, so we'll start there. What are some of the key challenges? In a typical scenario, what happens is, I'm sent for the training even if I'm not interested and I don't see the need for it, because once I've attended, they think everything's gonna be okay. You cannot laugh and clap when someone talks about reality. What we need to make sure is that people understand the need for it.
People see the need for it. I think the point is that it's not that the management is unnecessarily spending money, but the people on the ground need to understand: why, what is important? Why do I need to go through this training? Or why do I need this skill? It looks like the work I'm doing is already awesome. Most people think that way. So you have way too many good points: helping people see the need for it, and then selecting the right set of people. What kind of challenges do you face while selecting the right kind of people? For all these projects, for specific projects. If I do it for a specific project, for what purpose? For getting a new project? So I can show on the resumes that I have so many people with XP, right? What business benefit do I want to achieve out of this? Everything has to be aligned. If you make it aligned, things will fall into place. Okay, so your selection of people should be based on what objectives you're trying to achieve, and then raising awareness in them that this is the objective you're trying to achieve, and then look at possibly doing a training. So highlight your objectives beforehand. This is all stuff that we want, but we face challenges because this does not happen. Okay, cool. So we'll go to another table. Right, opportunities and expectations. I think this kind of touches upon that, but maybe opportunities is a brighter way to look at it. What is the opportunity? What extra would I be able to do if I had the skill? Okay, so it should not be driven by what everybody else is doing in the market, but it should be based on the real need that you have at hand. Yeah, so that basically was the first point: that people see the need for this thing, and there's actually a real need for something, for a training. We'll go back there, yeah. With theoretical, classroom-style training, it would be hard to practice that in your real scenario. It might take time.
The biggest challenge I see with XP practices is: you take me into a classroom and you show me the XP practices, and they look awesome. Right, but the moment I go back to my project, the reality of my project is very different from what it is in the classroom. Right, so I might be convinced that some of these practices are perfect and they are the right thing to do, but when I'm faced with the reality, you know, that's the real challenge people face: they're not able to apply some of these skills. So the challenge of basically being able to apply, post the training, right? Post-training, being able to apply what you learn is where a lot of people hit a roadblock, right? They don't get the opportunity to apply it, but that kind of goes back to actually having a need, right? So let's say I'm on a project which is suffering from very heavy technical debt, and that's really slowing us down. So that's a clear need, right? Now I need a specific training that could help me, that could help the team understand how we can address that particular problem that we are facing. So there's a clear need, there's a clear objective; you need to identify the right set of people who will be able to do that. Maybe you wanna train everybody, maybe you wanna select a smaller set of people. How do you go about that? We'll get into more details of that. And then, being able to apply what you've learned post the training, right? Those are important things. Conflicting priorities. That's agile, right? Being agile about it. So for sustainability, the environment is the blocker. Maybe the sustainability is not happening because the management is not putting it across the company, because of other concerns like priorities or something like that. So there is management support from the environment point of view, and then there is team support from an environment point of view. Both elements are equally important, right?
Because I've been in many organizations where the management is all for it, but the team does not want to do it, because they have not bought into it, right? And someone here was actually saying that mindset is the biggest challenge that you hit when you go into any of these kinds of XP trainings: changing the mindset. That's the harder part, compared to helping someone understand how to do TDD. It's more about changing that mindset of why it is important, or what it will really help me do, right? The realism of the expectation, right, on the individual's part is, I think, equally important. So the management can set the expectation, but I, as an individual who's gonna go through the training, shouldn't assume this is gonna solve all of our technical problems right away. Okay, so I think that's an interesting new point: that the expectations from the management side should be realistic, and that's generally a big challenge, because, you know, at a lot of companies, you go to a two-day training and they expect you to come back an expert, right? So I think these are good challenges that we face. Now, what solutions do we have? I'm sure there are a lot more challenges, but I'm more keen to actually get something concrete that we can take out of this. So let's jump ahead from challenges. When you do the group activities, include those challenges and try to come up with a model that you think will work, right? Getting a good trainer is a bigger challenge. I have all of these, but I can't find a trainer, right? Probably scalability, right? Let's put this down also. So I can do all of this with a small group, but, you know, you ask me to scale this across my organization, because I need to do that to be effective. I would argue training is sufficient, provided the training is good, right? We don't know yet. I don't think it's only up to the trainer.
I think the trainer can do maybe 20%. 80% is actually up to the people attending the training, right? And that goes back to all the previous points, and the environment aspect, and stuff like that. But scaling, I want to put that down, because that was somewhere there in my title. I just learned this technique: you put the word scaling in anything and you fill the room, right? It works. If you're here on Saturday, I would strongly recommend you attend Dave Thomas' keynote. After writing the Agile Manifesto 11 years ago, 12 years ago, or 13 years ago, actually, sorry. This is the first Agile conference that Dave Thomas will be at, all right? He wrote the Agile Manifesto 13 years ago, and he's not attended a single Agile conference. This is the first Agile conference, and you must listen to Dave Thomas, because he's gonna hammer the word scaling out of your head. Let it out. Let it out. There's a lot more stuff that he's gonna do. I just put a little cookie in there. All right, so these are the challenges; I want to come to a solution, right? There are two approaches we can take. I can talk through an approach that's worked for me, that I have had success with at companies. It's not perfect, but it seems to do better than anything else I know, and we can talk through that. Or we give you guys a chance to come up with a model individually at each of your tables, and then we discuss: if that model is already covered, fine; otherwise you present that model. What would you guys like? You hear me first, and then I basically completely do the inception in your brain that this is exactly how it should be. I'd actually prefer that we spend some time with you guys coming up with a model at your table that you think will help solve this, and then I prepare the slide in the meantime. To present that, actually, as you can see, I do have that slide; that's the only slide I had.
So why don't you guys take the next five minutes and try to come up with a model that you think will be able to address these challenges. We will see whether it's practical and realistic or not, but let's at least give it a shot in terms of what an ideal training would be which can address some of these challenges. And then, like this gentleman was saying, we can go into the attributes of what a good training is and try to make sure those attributes are met. For example, one of the most important things for me in a training is that there is learning, but it has to be edutainment, right? Education with entertainment, because if people are not entertained, if they're not having fun when they're learning, the retention suffers. And I have some data because, I have not done my intro, but I run a startup which builds e-learning software for kids. Before that I did e-learning for programmers, very frustrating; doing it for kids is a lot more fun. And what we found is, if it's actually a game, or if more fun elements are built into it, the retention of what you're teaching is actually much higher, and there are enough studies out there that talk about that, right? So I want you to include some of these elements in what your model will be in terms of training, right? Everyone's gonna talk about hands-on; it needs to be hands-on, right? Everyone's gonna talk about how it should be fun. So plan, build some of these elements into your model, and try to come up with what an ideal training situation would be, right? Also try to build in some of the assessment part that we talked about, because that will give you a more realistic measure of whether what you did was meaningful or not. So why don't you guys take a shot at coming up with a training model that you think will address some of these challenges, and then I'll present a couple of options. I have the poster paper, the big sheets, so if you want to draw it out big, use that.
Don't have enough markers, but I have these markers in case. Scale it across a thousand-person organization. Question here from Mohan, right? Mohan has a great idea. If you guys have phones or you're on Twitter, then maybe you can just tweet some of these things so they stay afterwards, so you can see what other people are doing as well. So, like, a hashtag for challenges, and, you know, we'll put in the conference, that is Agile India 2014, and then challenge; keep it short to fit in 140 characters. Add a random number with the challenge. So how about 26F, that's 26 February. So, for example, what he's suggesting is you can say challenges, or challenge, keep it short, and then 26, 26 February. That's unique enough, all right? How about 26A? I don't know. Let's not waste too much time, but something. If you can agree on something, if you want to tweet, then use that as a hashtag, but focus on building the model, right? Someone talked about the challenge of finding a trainer. So let's say I have a trainer in the United States, right? What can I do? Can you do remote trainings? Have you guys explored that, right? Try and build some of these things into your model. If you feel that this is generally what we do anyway, then probably scrap it and think of something quite radically different. They have a new product idea over here. Look at a different model. Should training be an event, right? A one-time event? Or the different kinds of things that people are doing today? Not generic trainings. There's a group or two saying: we are lost, we don't know what you're doing, what the heck is going on. So I apologize for that, but all workshops that I run are like that. My suggestion is, if you feel lost, that might be because you're trying to solve every single problem listed over here in one shot, right? That's hard to do. Pick a particular problem that most people agree is there today in their organization.
Take a challenge and say how you're gonna address it, right? What can we do to address the challenge? So I was giving an example: let's take this one, people seeing a need, right? People seeing the need for learning clean code, for example, right? I need to learn about code smells. I need to learn about refactoring, automated refactoring, all of that stuff. That's all good stuff that I need to learn, but people don't see the need for it, right? So how do you help people see the need for it? You can get some big shot to give a big lecture; a few people might get excited by it, but the majority will not even care, right? So that model does not work. You can try it, but the success is very little, right? So think of an alternative model. One example that we actually have working is: as you're programming, right? Remember back in the days, I don't know if anyone remembers, when you would program in vi, and you would write the stuff, and then you would go and run the compiler, and then it would say, you missed a semicolon, you stupid guy, right? And then we came up with IDEs, where as you type, they tell you, hey, you missed a semicolon. Some IDEs are smart enough that they automatically insert the semicolon for you, right? And that helped people learn languages faster. There's enough data to show that IDEs giving these suggestions, or doing autocorrections for you, actually help people learn things faster, or inline completion, or things like that, right? So that's a way people were learning newer languages faster, right? Instead of writing in vi, now I have something that is giving me feedback, that is giving me these suggestions and stuff like that. People have taken that to the next level, right? There are Eclipse plugins, there are a bunch of other things that people have done, where as you're programming, it tells you that you've introduced a code smell. And I might not know what that code smell is.
There's a link that takes me to either an internal repository or an external repository, where it explains what the code smell is, with a few examples of the code smell, right? That's in-context learning. While I'm doing something, I'm getting introduced to a concept, and I'm going and learning about it. And then I realize that I should actually read a little bit more about how to get rid of this code smell, right? And then maybe there are a few remedies, in terms of refactorings associated with that code smell, and I can go through those. That's one example of what I'm talking about: helping people see the need for something. So I want you to think more along these lines, because that's really what I think works better, in my opinion, right? This is pull-based learning. This is in-context learning. This is not: sit through a training for three days, get a dump of all the code smells, and then go back and figure it out yourself, right? We're talking about breaking it down. We're talking about pull-based learning. Those are models that actually work much better, in my opinion. Can you scale something like this? It's scalable by nature, right? Because I could have 500 people or 5,000 people. You never mandate anything. If you have to mandate, then learning is not gonna happen. That's my opinion. I'm not an expert. But if you mandate, learning does not happen. You motivate, and that's where learning happens, right? You motivate either through giving people in-context feedback, or you motivate through having them compare themselves with others. So I'm gonna come to gamification, gamification of learning, right? There's a lot of stuff that people have done, and I've spent the last two and a half years doing gamification of learning. What can I do in terms of gamification that will actually motivate people to learn, right? Because that's one big challenge. I can mandate, and I can put all kinds of processes around it, and people are still smart enough to game the system.
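The in-context feedback described above, a tool flagging a code smell as you work and linking you to learning material, can be sketched in a few lines. This is a toy illustration, not any real Eclipse plugin: the wiki URL, the threshold, and the single "long method" rule are all made-up assumptions.

```python
# Toy sketch of in-context learning: detect a "long method" code smell
# in Python source and point the developer at an explanation.
# SMELL_DOCS and MAX_LINES are illustrative, not from any real tool.
import ast

SMELL_DOCS = "https://wiki.example.internal/code-smells/long-method"  # hypothetical link
MAX_LINES = 15  # arbitrary threshold for the "long method" smell

def find_long_methods(source: str, max_lines: int = MAX_LINES):
    """Return (name, length) for every function longer than max_lines."""
    tree = ast.parse(source)
    smells = []
    for node in ast.walk(tree):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            length = node.end_lineno - node.lineno + 1  # def line through last body line
            if length > max_lines:
                smells.append((node.name, length))
    return smells

# A short function and a deliberately long one to trigger the rule.
code = "def short():\n    return 1\n\ndef long_one():\n" + "\n".join(
    f"    x{i} = {i}" for i in range(20)
)
for name, length in find_long_methods(code):
    print(f"{name}: {length} lines - possible Long Method smell, see {SMELL_DOCS}")
```

A real plugin would run a rule set like this on every save and render the link inline in the editor; the point is that the explanation arrives exactly when the smell is introduced, which is the pull-based moment the speaker is describing.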
They'll still find ways to game the system, right? So in my opinion, that's a dead end. Don't even go there, right? Look at alternatives, look at gamification. Has anyone watched the TED Talk where a lady talks about how kids these days are spending about as much time playing games as they spend at school? Her talk, it's a TED Talk, is titled along the lines that games are the only thing that will save the world, right? And it's brilliant, because she gives you this very different perspective on how games can really help you learn certain skills that are very essential: social skills, problem-solving skills, other kinds of skills. So that's again something, I'm just putting in some food for thought in terms of the directions I want you to think in, and to take away from this session as things we could try. So my request is: don't mandate. It doesn't work that way. Perfect, so we are not saying these are mutually exclusive strategies, right? Pull-based learning is one avenue, right? You might also need to do some proctoring. What I mean by proctoring is that basically you get someone in for a day, either external or internal, who's gonna walk through some of this stuff and be there while you do, you know, a basic workshop or a course or something like that. They're not actually teaching; they're just there proctoring. You're doing the training: you could be using e-learning, you could be watching screencasts, you could be doing something else, and they're just proctoring. And then they're out: you've raised the curiosity, and then you leave, right? So that's another model that we've done, especially at a big internet search company that some of you might have used. That was the model: we proctor for a day, and then we give them e-learning to go through the rest of the stuff, right?
And then we go back at a later point in time to do Q and A sessions, like code clinics or things like that, where we sit down and people come up with a problem, saying: here, this is my problem, I can't solve it, I went through all this stuff, but I can't figure out how to do this. All right, let's sit down, pair on this, and let's figure it out, right? So you start with proctoring, give e-learning, screencasts, what have you, right? And then go back, have regular clinics. That's one model to look at. How do you gamify this? Right, you give points to people, you give badges to people. You'd think that doesn't work, but at the big internet search company, which is filled with geeks, this has actually worked wonderfully. People are actually going back and retrying labs because they want to score higher, right? And it's beautiful, because what you want is to build skill, and skill comes with practice. And putting some points in front of them, as a way to see the dial land on 100, is a great motivation for people to keep retrying a lab till they actually hit 100, right? And there, I didn't have to mandate, I didn't have to do anything, but people actually went in and practiced enough till they hit 100. I'm gonna show a quick demo a little later, but these are again some ideas that have worked well. I'm just sensing the room, taking a goldfish moment to see how people are digesting what I just mentioned, right? Is this making any sense? I don't want to just tell you this is what you need to do, because that's not going to help. I want you to really think about what is helping. I just want to throw in some suggestions here and there and guide you to coming up with something, because when you leave with something that you have come up with collectively, probably you'll have a greater sense of ownership.
That's one of the things that we do in trainings: make sure people come up with the content in the training so they feel they have a sense of ownership of it. Even if it's lousy, they'll still rate it very high, right? It goes back to confidence, and confidence is very important. So if you guys feel lost, I can jump ahead, but I want you to think along some of these lines, and I also want to talk a little bit about scaling in a different way. So this is one model in which you can scale pretty well. The model I was proposing is: you have one day of proctoring. Let's say I want to teach people test-driven development. So I do one day of proctoring. The proctoring starts with a quick video of, this is what we're trying to do and here is one way to do it. Another way is to give people an actual problem statement, let them try to solve it in whatever way they normally would, and then do a live demo showing what an alternative way of solving the same problem would look like. So you've raised the curiosity enough that they want to play the game now, right? And then you basically tell them: here are a bunch of labs that you need to solve, and there's inline help in the labs which will guide you through. There's scoring built into the labs that will tell you, you only scored 60%, you missed four cookies, you missed four things in this, right? Get them to go through that, and then you go back and do the clinics. So that's one model that we talked about. Now, I want to build more champions inside the organization who will be able to do the proctoring. That's where the scaling comes in. So let me quickly jump in and show you one model to do that.
So, as we talked about, they attend a session: you have identified a couple of people who could potentially do the proctoring, the potential change agents inside the organization, right? You get them to attend the training. They basically go through it; it could be a day long, it could be two days long. After that, they are responsible for presenting a small portion, like one or two topics out of the course. Why am I talking about this? Because I want to talk about another aspect of learning: you only really learn when you teach somebody else, right? So our way of building skill is to make training someone else a mandatory part of the training. For the next batch, if you have a larger group, one of the people identified as a pilot or as a change agent would teach a portion of the topic, to feel confident that they can actually do it. Then there is a remote pairing session or a live pairing session with that person on their own project, to help them carry some of these skills back into their work, so they actually build examples. Then, when they go back into their training, they'll be able to use those as first-hand examples to explain some of the concepts, be it test-driven development, evolutionary design, applying some of this. And they co-teach the next course, and then they're on their own to take over from there. This is one model which is generally referred to as the train-the-trainer model. This is nothing new, no rocket science, a pretty old, well-beaten model, right? Now you throw some of the pull-based learning into it: you tweak this by asking, okay, where can I put some e-learning into this? Should it all be in person? What options do I have to reduce the in-person time so people can do the training at their leisure?
So you throw in some e-learning options. What portions could actually be e-learning in this, if you will? We tried different models. One is starting with the e-learning: we tell people, here's the e-learning, go through this and then we will show up. Only about 21% of the people did the e-learning, right? And most of them just skipped through the labs and skipped through some of the important sessions that we had. So it didn't really serve the purpose. That's when we changed the model to start with proctoring instead. And the proctoring created some motivation, but that was not good enough. Then we came up with something I think we were talking about earlier: doing an assessment beforehand, a pre- and post-assessment. So before you go in, you send a problem statement and ask the person to work on that problem. They send back code, there is a system which evaluates it, puts in some basic comments, and then rates them in terms of where they stand, right? That's a good enough motivator for a lot of people to come into the training with more of the mindset of: I want to get better at these things, or at least I want to know what the heck all the gibberish this guy mentioned over here means. And when they come in with that mindset, they're able to actually learn some of these things and take them back. Then, generally two weeks after my training, I do a post-assessment, because I want to see if after two weeks the retention is still there. Two weeks or a month, depending on the organization. And then we compare, right? Before and after, like what you were talking about earlier. So we see in this particular graph over there that the top right corner is basically the testing skills. They had zero testing skills to start with, or at least they didn't know how to write unit tests or any of that stuff.
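The before/after comparison could be sketched like this; a hypothetical helper, not the actual assessment system the talk refers to, assuming each assessment produces a per-skill rating:

```python
# Hypothetical sketch: compare pre- and post-training skill ratings
# to see where the training actually moved the needle.
def skill_delta(pre, post):
    """Return per-skill change between a pre- and a post-assessment."""
    return {skill: post.get(skill, 0) - pre.get(skill, 0) for skill in post}

pre_assessment = {"unit testing": 0, "refactoring": 2}
post_assessment = {"unit testing": 6, "refactoring": 5}
print(skill_delta(pre_assessment, post_assessment))
# -> {'unit testing': 6, 'refactoring': 3}
# Unit testing went from zero to six: they can now write tests at all,
# which matches the "top right corner" observation in the talk.
```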
And then we see that post-training they actually have some tests that they've written. So that's a concrete way to say, okay, they have learned a particular skill. Whether they're going to apply it at work is something we still have to evaluate, right? But at least I know they now have the skill: they know how to write tests. The quality of those tests is again going to get rated over here. Do you still have room for improvement in your skill of writing tests, unit tests, things like that? So, stepping back, what I was trying to explain is this train-the-trainer model, which is a pretty traditional model. Take that, add proctoring before it, add assessment, and then run through the whole thing and look for the places where you can add e-learning elements. That's one model that seems to have worked well in my experience. We've done this with a lot of different companies: they go through this for about six months, maybe a thousand people in the company, and they actually see results on the project, where people are able to reduce the technical debt and are able to do at least the new development they're starting with test-driven development. During these trainings, you actually get them to do pair programming for the labs, so that gives them a flavor of what pair programming is really about, and some of the things they need to be aware of when they're doing it. It's not just about throwing two people together and it magically working. There are certain skills they need to learn, and these safe environments in which you can help people understand create a great setting for them to learn. So this is one model that I have used, starting from the train-the-trainer model and adding some more elements to it, tweaking it.
I'll pause at that. I have another model, which is the model we use for helping kids learn mental mathematics, and I'm gonna talk about that and how it can be used for building more of the programming skills. Any questions so far? Yeah, absolutely. Absolutely, I think that's a very good question, right? Because I can take a dummy example of legacy code and walk you through how I would refactor it, but that will not really convince people that this can actually be done in their own code base, right? So there are two things that generally we do. One is that you need a set of ready-to-go examples which get people into the mindset of how this can be done. So first there's just learning the basic skills; you need to get people up and running on those. Okay, the refactoring support in the IDE will help me do this stuff, and I don't need my real project code base to understand some of those things. So you go through that for about a day. After that, you would have already spent some time taking some examples from their code base and giving those as labs, saying: here is a code base, can you identify three code smells in it? If you can identify three code smells, can you refactor them? And then there is a recorder that we have built, which records in Eclipse or in Visual Studio, and it'll actually tell you what paths you took in terms of refactoring. Did you do it manually? Did you do it in an automated fashion? Were you failing compilation throughout while you were doing this, or did you actually have tests running frequently while you were doing the refactoring? So all of these things can be measured and then shown to people.
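As a rough illustration of what such a recorder's output might look like (the event names here are made up; the real Eclipse/Visual Studio recorder described in the talk is not public), you can model a session as a stream of events tagged with the red/green test state, and then ask how often refactoring happened in each state:

```python
# Hypothetical sketch of an IDE event log: (event, test_state) pairs.
events = [
    ("test_run", "red"),    # wrote a failing test
    ("test_run", "green"),  # made it pass
    ("refactor", "green"),  # refactored while green: safe
    ("refactor", "red"),    # refactored while red: risky
]

def refactorings_by_state(log):
    """Count refactoring events by the test state they happened in."""
    counts = {"green": 0, "red": 0}
    for event, state in log:
        if event == "refactor":
            counts[state] += 1
    return counts

print(refactorings_by_state(events))  # -> {'green': 1, 'red': 1}
```

Once the session is data like this, "did you refactor while broken?" becomes a query you can run and show back to the learner.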
But some of these things are harder to do on a live code base, which is where you need some of these dummy examples to help people visualize how to do the work. Then you go in and spend about a day, either in the form of clinics or in the form of extended training, where they actually work on their own code base and apply this. Generally, what I do is keep one day at the end of the training as time I spend with four people. Each gets a two-hour pairing session, and each person comes with a specific problem they have. We pair for two hours, we get some direction on how it can be solved, and then they still have the rest of the day to continue working on it. At the end of the day they come and present: this was before, this was after, this is the approach we took, right? It's something that they present, because then the ownership is again with them. So that's another good model, specifically for legacy code. All right, any other models that you guys have used? Some of you have done a lot of work; do you want to chip in something you've done, anything anyone else has tried that's worked well for them? So, some gamification in terms of a competition with a winner and so on. But do they pick their own project, or do you assign a project? So, I want to build the next Twitter, right? And what's the duration you've given: three days, five days? Three days. I can build the next Twitter in three days: go apply what you learned in this workshop, and at the end there will be a competition, with someone evaluating who won and who didn't. The challenge I see is that the managers need to know these skills really well, because if they're going to be evaluating this stuff, they really need to understand it, and they need insight not only into the end result but into the approach that was taken, right?
The path that was taken, which is kind of what I was trying to talk about a little earlier: I can do an evaluation and give you a score, but that's not as interesting as actually showing you the progression you had. Let me quickly jump to what I'm talking about here. So these are images from back when I was a partner at Industrial Logic; this is something that we did. This is a graph we show of how a person progressed while doing TDD. They started at this time, and basically got going. There's a bunch of events that happened, in terms of automated refactorings, over here. They got a failing test. They got it to pass. Then they did another refactoring. Did they do the refactoring while they were in red or while they were in green? Because that's important. One of the skills we talk about is that you don't want to refactor while you have compilation errors or a broken, failing test, right? So you're again trying to visualize this, and for each right thing they do, as you can see, there is a score that we assign, and we show these as points you have earned along the way. This helps people understand how they got from here to there, and whether they did the right things along the way. Here, for example, someone did something wrong, so you ding them, you put the points down, and that makes them curious about why the score went down. So they'll go and try to figure out why. This is a form of visualizing how you got from the start to the end, and it helps people learn that some of these things are important. Now I don't need to lecture about how we do test-driven development, step by step. They can see this, and they want to score more points, so they'll be curious about why they got dinged, right?
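A hypothetical scoring rule over that same kind of event stream, rewarding the right moves and "dinging" refactoring on red, might look like the following. The point values are invented; the talk only says points go up for good behavior and down for bad:

```python
# Hypothetical sketch of a running TDD score over a session's event log:
# reward refactoring on green and red-to-green transitions, and ding
# (subtract points for) refactoring while the build is red.
def tdd_score(log):
    score = 0
    for event, state in log:
        if event == "refactor" and state == "green":
            score += 10   # refactored with passing tests: good
        elif event == "refactor" and state == "red":
            score -= 5    # ding: refactored on a broken build
        elif event == "pass" and state == "green":
            score += 5    # got a failing test to pass
    return score

session = [("fail", "red"), ("pass", "green"),
           ("refactor", "green"), ("refactor", "red")]
print(tdd_score(session))  # -> 10 (5 + 10 - 5)
```

A session that is "all white" (no test events at all) scores zero here, which mirrors the talk's point that never writing a test leaves you with a very low score.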
What can I do to get a better score over there? And generally, because you're dealing with programmers or testers, this kind of thing works really well, because they want to score more; they want to do stuff like this. The things you see there are points you're getting for doing the right thing along the way; that's your TDD score. No, you're right, sorry: these are the events you generated along the way, so you can see what you actually did, and these over here are the scores. Three at this point. That's not the most important part. What we are looking at is, when you got to a certain point, where you wanted to do a particular refactoring or add a particular new feature, whether you did things in the right manner, and helping you visualize that. I can show you some other graphs where this whole big portion is just red, which means I had a failing test and kept doing a lot of work while the tests were failing. That's not a good thing. Or, for example, I never failed at any time, so I don't see these red-green bands at all; I just see white, which means I've not written a single test, right? And then your score will be pretty low if you do that. It's per individual; it's not a competition between people. The thing is that you give a score at the end for how you did. And then, as I was saying, people get motivated to go all the way to 100, and they retry the labs to make sure their score keeps increasing. This is what I was talking about. We can even show you which refactorings you did at which point and highlight that: did you do a change method signature, an extract, whatever.
And this is an example where you started, you wrote a test, and you had the failing test till the very end, and then you finally got it to work, but with no automated refactoring, nothing in terms of events along the way. So the score would basically be one at this point. Again, this is just an example of how you can build some visualization and gamification into the learning, to get people more excited and more motivated about learning some of these technical skills. Quick time check: we have about 10 minutes left, and I wanna make sure that if there are any questions, we take those up. We didn't get to a single model at the end; I was hoping we'd come up with a model that each of you would walk away with, but maybe you've got some ideas you can walk away with and think about applying back at work. Any questions at this point about what we've discussed so far? There is no open source tool that I'm aware of. There are a bunch of things that people have done in Eclipse, like plugins that tell you that you have a code smell, or that you could do this or that refactoring, but this is, I think, something that could be built. What a lot of people have done is build stuff in-house for their company, or there is the Industrial Logic e-learning, which is publicly available, for a charge, of course. But that's what I'm aware of; no real public open source tool. It's a great idea if you wanna run with it. All right, any other questions? What sort of exercises during the training? So there is a combination: something that is easy enough for them to get done and get that initial win, and then something which they just cannot do, which is really hard unless they have mastered a particular skill, and then I do a demo of how, if they had this skill, they would be able to do it this way.
And that combination of just-hard-enough works. For example, with refactoring: I start with a simple example where they spot duplication and they need to get rid of it. That's simple enough; most people get it, and they see an initial success. Then I give them an example where there is basically a parallel change they need to make: they need to consolidate a change from two different streams into one, without breaking the code, without having compile-time errors. And that's something a lot of people don't understand, in terms of the different strategies you can use for refactoring. So that's the second lab I give them: they get about 20 or 30 minutes to try it on their own, and then I do a live demo afterwards where I show how I would approach it. Correct, so in a lot of my trainings there are ready-to-go exercises, and some of them are like six-step workshops: you do six different things, and at the end of each there is a solution available, so if you didn't get to that end, you can take the solution and get started on the next one. But I like to build on top of what was already done, and do it in baby steps. Your focus is only to do this one thing at this point; get that done. If you didn't get it, take the solution and let's go to the next one and build on top of that. This is something I picked up from Industrial Logic, and I think it works really well, because if people get lost in between, it helps them jump back on, take the next bit, and move on; yet they still see the whole progression in baby steps, and that really helps people understand the concept. All right, I'm officially done. If there are any other questions, I'll be around. Thank you, guys.
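The parallel-change lab mentioned above follows the expand/migrate/contract pattern. A minimal sketch with an invented pricing function (not from the talk's actual lab material), where no step ever breaks existing callers or the compile:

```python
# Hypothetical parallel-change sketch: migrate callers to a new function
# signature while the code stays working at every intermediate step.

# Step 1 (expand): add the new implementation alongside the old one.
def price_with_tax(amount, tax_percent=18):
    """New entry point: integer amount in, taxed integer amount out."""
    return amount * (100 + tax_percent) // 100

def price(amount):
    # Step 2 (migrate): the old entry point delegates to the new one,
    # so old and new callers both keep compiling and passing tests.
    return price_with_tax(amount, tax_percent=18)

# Step 3 (contract): once every caller uses price_with_tax directly,
# delete price(). Nothing was ever red along the way.
print(price(100))                 # -> 118, via the old entry point
print(price_with_tax(100, 5))     # -> 105, via the new one
```

This is the "consolidate two streams into one without compile errors" idea in miniature: both versions coexist until the migration is complete, then the old one is retired.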