So we have Matthew, who's a founder at Zen Ex Machina, and Mia, who's a founding partner of the organisation. So without further delay, I will give it to you, Matthew and Mia. We're very excited to join you virtually for Agile India this time around. Hopefully we can see you all in person next time. But essentially we want to talk about how metrics are really important to drive business agility, and we've looked at a lot of the metrics used in psychology to inform how we think about business agility and behavioural change. The metrics approach we're going to talk about today is one we've actually used with a number of our customers. We've got a wide range of different customers that we're working with at the moment, a lot of commercial ones as well as government agencies. So to start off, we'd really like to understand how you currently measure agility. If you go, using your smartphones, to www.menti.com and key in the code there, there's a free-text field for you to give us an idea of the sort of things you're using to measure agility now. So that Mentimeter is now open, and it'd be really great if everyone could use their smartphones and give us some feedback. We've got the usual ones: on budget, on time. We've also got velocity. Let's see what else comes up. Okay. Counting stories. Starting to see some value delivered, performance, key results, outcomes, lots and lots of things. Lead time. Some really great metrics coming through now. That's fantastic. Now it's really interesting that outcomes is coming up as quite a big one, as well as velocity and stories. Because what we want to talk about today is very much about measuring not just the activities we're doing when measuring agility, but the outcomes. And the best way we can look at that is to look at the behaviours.
So a lot of the things you've got there about outcomes and value, team happiness, those are the sort of things we'll be looking at. Because traditionally, when people think about how to measure agility, they think about velocity, how many stories did we do, and is it on budget? Those have been our traditional activity metrics in project management. As we move into agile product management, we want to look at how we measure our agility in a different way. This was the question posed to us quite recently by a client involved with the Great Barrier Reef in Australia. They looked at some of the outcome impact metrics from our other clients, including moving one organisation from a 220-day lead time to as little as four weeks. And they said, well, how is it that we can track whether or not we're on track to achieve that kind of outcome if we go down this path? So we had some conversations around what kind of measures we can put in place, how we can predict, how we can use behaviour to predict those kinds of outcomes. And importantly, as we started to see them, how can we encourage those behaviours so that we could actually help our client get to this kind of outcome? Because most of the time, as you've seen, a lot of our metrics don't come from behaviour. They come from descriptive analytics, and those are things like activity, velocity, and efficiency metrics. This tells us what has happened, but it won't tell us things like: why are some teams more successful than others? What actually makes those teams agile? And importantly, the antecedents that create agility: how do we understand and then measure them to actually make them repeatable? To answer these questions, we need a different set of metrics. We need data analytics. Importantly, we need things like diagnostic and predictive analytics, as well as prescriptive analytics.
Key to this is understanding behaviour from a statistical perspective, having statistical models, not a simple correlation of "if we have improvements in velocity, this creates improvements in agility". Because human behaviour is really complex. To explain our approach, I'm going to take you back nearly two decades, when I was doing my postgraduate work at the John Hunter Hospital, just north of Sydney. The team I was working with had psychologists, occupational therapists, and speech pathologists, and each of us would look at behaviour from our specific discipline. We looked at verbal comprehension, working memory, perception, processing speed. We looked at the behavioural traits attached to those, and from that we were able to predict developmental issues, ADHD, autism, things like that. This is exactly the same kind of approach the FBI has used for nearly 50 years. They take experts, they look at behaviours, they classify those behaviours, and they use those behaviours to reconstruct crime scenes in order to start to identify what they think are the signature motivators. That helps create a profile that helps predict the behaviour of criminals in the future. It's exactly the same kind of approach that any kind of behavioural measure uses, based on particular types of influences. When people see a particular motivator, they've got different options. Choosing a particular option creates an experience, and the experience itself influences behaviour. So part of our question, as we started to approach things from a psychological perspective and understand behavioural analytics and data analytics, was how do we take this model in order to understand business agility? So nearly 10 years ago now, this was the basis of the model that we took. We didn't choose self-assessments. We didn't ask teams themselves to self-report, because that's prone to cognitive bias, including Dunning-Kruger effects.
So instead we took experts, and they did things like naturalistic observation, diary studies, and contextual inquiries. We took a whole range of potential behaviours from the Agile Manifesto, the Scrum Guide, Kanban, even Management 3.0, systems thinking, and Extreme Programming. We started with 85 questions, and we looked at the strength of our observed behaviours against those areas, and looked at them in the context of enterprise outcomes in relation to productivity, cost savings, both capital and operational, as well as risk profile and delivery. For those of you interested: 10 years of data collection, over 30 organisations, large and small, over 500 teams, some of which we actually looked at across time. So not just a single snapshot, but continually observed, with data collected every month, some teams every quarter. We looked at software and non-software teams, so not just software development but HR, marketing, finance, change management, even executive and middle management. And for those of you who are data nerds: on the longitudinal data we ran a principal components analysis with a varimax rotation. Importantly, we looked at inter-rater reliability against the traits. We also did some post hoc analysis on the questions to see which questions weren't contributing anything statistically significant to the data model, and we started to remove them in order to come up with a final data model, which is this. The data model saw four main behaviours emerge, and these provided the most significant insight into enterprise outcomes for business agility. The model explains 85% of the variance we saw in behaviour. The first of these four key behaviours is self-organisation; it made up 50% of the model. Self-organisation breaks down into things like managing products with agility, clear structure, goal clarity, working in small batches, and dependability, in relation to people being able to rely on each other in a team context to deliver.
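For the data nerds, the factor-extraction step described above can be sketched in a few lines. This is an illustrative reconstruction, not the speakers' actual pipeline: synthetic data stands in for the real observations, and the 500 x 85 shape simply mirrors the team and question counts quoted in the talk.

```python
# Sketch of factor extraction with a varimax rotation: 85 observed behaviour
# questions reduced to 4 latent factors. Data here is synthetic, generated
# from 4 hidden factors plus noise so the structure is recoverable.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)

latent = rng.normal(size=(500, 4))          # 500 team observations
mixing = rng.normal(size=(4, 85))           # 85 behaviour questions
X = latent @ mixing + rng.normal(scale=0.5, size=(500, 85))

fa = FactorAnalysis(n_components=4, rotation="varimax", random_state=0)
fa.fit(X)

loadings = fa.components_                   # shape: (4 factors, 85 questions)

# Rough share of total variance each rotated factor accounts for.
factor_var = (loadings ** 2).sum(axis=1)
explained = factor_var / (factor_var.sum() + fa.noise_variance_.sum())
print(loadings.shape, explained.round(2))
```

In practice you'd also inspect which questions load strongly on which factor, which is how clusters like "self-organisation" emerge, and drop questions with weak loadings, as the post hoc analysis in the talk did.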
We saw agile values come out as the second strongest behaviour, things like empiricism, self-improvement, decentralised decision making, shared purpose, leadership connectedness, etc.: 20% of the model. And this surprised a number of our colleagues: only 10% of the model is about sprinting, the structure of sprint planning, daily scrum, etc. That only makes up 10% of the model, because self-organisation and agile values comprise quite a significant amount. Another 5% is around continuous learning culture. As for the last 15% of the model, we've been doing some analysis since then, over the last two years, and we've found that a number of those things in relation to agility are unique to the organisational context and the culture itself: things like the symbols, behaviours, attitudes, and rituals, essentially the organisational culture. This particular model we call Agile IQ. How does it work? Well, basically, we take a team profile. We record a couple of behavioural observations against the team; we've got about half a dozen archetypal behaviours that help us, including the agile framework or the scaling framework that they use. We record that. We also look at the assessor's profile, essentially what kinds of certifications they have and, importantly, how long they've had those certifications. Because we found some assessors only had, for example, a CSM that's over 10 years old, and the Scrum Guide's changed a lot since then. So that formed part of our model. And the third element was the questions themselves. Those get fed into the data model. The data model then isn't just about collecting an individual team's data over time and counting points, because you can compare the team's data immediately to the strength of those behaviours that I showed you: their agile maturity stage, which I'll show you in just a sec, a comparison across your team or your teams, and your team's results compared to teams of a similar age.
If your team's only been doing this for a year, you wouldn't want to compare it to teams that have been at it much longer, so we compare teams of a similar age. And lastly, a forecast model, including cost savings, which we're able to model, risk profile, capability to pivot, psychological safety, and other kinds of outcomes. So I'm just going to talk you through some of the longitudinal studies that we've done on some of the teams. As Matthew mentioned, we've been following these teams for about four to five years as we've been developing the model, and we started to see some very interesting data. What we found was that over time, as their Agile IQ increased, which is their maturity, the amount of overtime they needed to do decreased. So they were actually getting more efficient at what they were doing, and didn't need that rush at the end before a release, doing overtime to get it over the line, or more overtime at the end of a sprint to finish everything up. When we looked at the predictive analytics, we also saw that in addition to that decrease in overtime, we were seeing a decrease in defects and rework as their Agile IQ maturity increased. So you can see at about an Agile IQ of 80 we've got quite a lot of defects, and we've got teams that are quite mature, with an Agile IQ of 140 to 180, almost getting down to zero defects in any of their releases. And Agile IQ, very similar to normal IQ, is on a scale of one to 200. So you can see those teams that are performing really well have an IQ from 160 to 180; they're the very high performing teams that we could see. So we were seeing a decrease in overtime and a decrease in defects. What that translated to was cost savings per team per month. So we did the modelling based on a team of 10 people.
And we were able to show those particular organisations that we were working with that, just looking at those decreases and that increase in Agile IQ maturity, they were saving themselves up to $40,000 to $50,000 per month. And that meant they were able to be more efficient at what they were doing and get more features out the door. As I mentioned, we looked at these teams over a number of years, and for some of them we were able to see when we were getting that significant improvement. When I'm an agile coach and I go in and work with an organisation, I'm often asked, well, how long is it going to take for this agile transformation, or how long before my teams are up and running? Most of you working in agile will know that within about three to six months you're seeing quite good improvement, and that's what we saw in our data too. At about that six-month mark, teams had quite significant improvements and got to stage two in their Agile IQ. We found that they continued to increase over time. There was a point, around the three-year mark, where we saw a bit of a dip. So we looked into this to see what was causing that slight decrease. What we found was that we had pretty stable teams, but around that three-year mark was when people either moved to new roles or we had some changes. We also saw that a bit of complacency in their practice had set in. And we found that around the four-to-five-year mark it started to recover again. We also found that at that three-year mark, a lot of the scrum masters we'd been working with from the initial stages had been promoted; about 30% of our original scrum masters got promoted. So it soon became the thing to get on the agile train, because that way you get promoted. So that was a different offshoot of the data that we weren't expecting either.
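A back-of-envelope version of that cost model is easy to sketch. The hourly rate and the hours saved per person below are illustrative assumptions, not the figures from the speakers' actual model; only the team size of 10 and the $40-50k/month band come from the talk.

```python
# Illustrative cost-savings model: savings for a 10-person team from
# reduced overtime and reduced defect rework as maturity increases.
TEAM_SIZE = 10
HOURLY_COST = 100  # assumed blended hourly cost per person (currency units)

def monthly_savings(overtime_hrs_saved_pp, rework_hrs_saved_pp):
    """Whole-team monthly savings, given hours saved per person per month."""
    hours_saved = TEAM_SIZE * (overtime_hrs_saved_pp + rework_hrs_saved_pp)
    return hours_saved * HOURLY_COST

# e.g. each person saves 25 h of overtime and 20 h of defect rework a month:
print(monthly_savings(25, 20))  # 45000, inside the $40-50k band cited
```

The point of even a toy model like this is that the savings scale linearly with the behavioural improvements (less overtime, fewer defects), which is what makes the Agile IQ trend translatable into a dollar forecast.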
So that dip at three years was also part of our success: people got promoted and moved to other teams. We found teams at an Agile IQ of about 80 at that three-month mark, and when you get up to an Agile IQ of about 130, which we saw around the one-year mark, that's really where teams have that true ability to pivot really quickly if there's a big change. We've all gone through COVID and the pandemic, and we're still going through it. We did a lot of this analysis, particularly when the pandemic hit, to see which teams we felt would be okay. And we were able to use this data to talk to the executive and management level about which teams were at an Agile IQ where we felt they would be okay and maintain their effectiveness, and which ones were a little bit lower and might need some more support from management when we went to remote working. And we found that these findings were replicated in industry research. So we were very interested to see that the more self-managing teams are, the more effective they are: faster decision making, increased productivity, higher quality; they achieve their goals, feel more useful, feel more challenged, and there's a lot more trust within the team and between team members. We also looked at the industry research, particularly in the behavioural area, and we saw really strong alignment to it, particularly Google's Project Aristotle, which was all about team effectiveness; Amy Edmondson's work on psychological safety; Scrum.org's competency model, which we also compared against; as well as some of the Bain and Co research. So we found there was quite a lot of robustness in our findings compared with what the industry was finding in their research as well. And recently, as COVID hit in Australia, we went into lockdown and then came out of lockdown, in and out, and we found that Agile IQ was a really great predictor of organisational resilience.
We saw organisations whose managers were more prone to going into crisis management: their Agile IQ would go down, and then when people came back into the office they'd think, oh good, I can get back to self-organisation and self-management, and their Agile IQ would go up. And that up and down actually decreased their ability to pivot. Whereas some organisations that had a much higher Agile IQ were much more resilient. We saw managers less prone to crisis management, who simply continued supporting their staff as they started to work from home and do their work. We saw their Agile IQ pretty much maintain a steady rate across the three, six, twelve months they were in lockdown. So you've got the results. What now? Well, what we've noticed is that there are five discrete sets of behaviours that represent certain types of agile mindset as well as certain types of agile behaviours. And for each of those we then looked at the kinds of practices that help teams get from one stage to the next, and there we started to find good, strong correlations. So for example, for a stage two team, if they work on self-management, we find that they move into stage three: their ability to pivot improves, their risk decreases, their effectiveness as a team improves, etc. And so for each of these stages, we compared the Agile IQ to teams of a similar age, and based on that we're able to provide, through AI, recommendations for that team, in that stage, at that time: behaviours to encourage, and help regarding what kinds of principles and practices to adopt. So for a stage one team, the kinds of behaviours we saw were that teams in stage one are typically at the start of the journey. They're thinking about agile; they're kind of tweaking a few things.
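At its simplest, the stage-based recommendation idea is a lookup from maturity stage to practices to encourage next. The sketch below paraphrases the stage themes described in the talk; the exact recommendation wording is illustrative, and the real tool feeds team age and behaviour strengths into an AI rather than a static table.

```python
# Minimal stage -> recommendation lookup, paraphrasing the talk's stages.
RECOMMENDATIONS = {
    1: ["Set expectations about a framework to adopt (the guardrails)",
        "Leaders model the change by example",
        "Treat agile as behavioural change, not a methodology to implement"],
    2: ["Move from water-scrum-fall/hybrid to agile product management",
        "Let go of the project-management mindset"],
    3: ["Strengthen customer mindset and empiricism"],
    4: ["Adopt lean/flow metrics and agile OKRs on top of Scrum",
        "Practise systems thinking: people, processes, and tools"],
    5: ["Provide leadership opportunities to sustain growth"],
}

def recommend(stage: int) -> list[str]:
    """Return practices to encourage for a team at the given maturity stage."""
    return RECOMMENDATIONS.get(stage, ["Stage out of range (1-5)"])

print(recommend(2)[0])
```

A real implementation would condition on more than the stage (team age, sub-factor strengths, framework in use), but the shape is the same: observed behaviour in, targeted next practices out.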
And the patterns we saw with teams at this particular stage: managers tended towards micromanagement, and there was some cultural resistance, phrases like, "oh, but we deliver, so why should we change?" Key to growth at stage one, we found, was for managers to set expectations about a framework to adopt, to set the guardrails in essence, to lead by example, and to treat agile as behavioural change, not as a methodology to implement. We found that this also aligned with Hofstede's model of organisational culture: the symbols and surface behaviours are the easiest things to change. And teams that were stuck in stage one were really strong in relation to these, but nothing further. They didn't have, for example, scalability, minimising waste, great transparency, systems thinking, etc., because these, we found, were the types of behavioural traits attached to stages two, three, four and onwards. So for stage two teams, we found good baseline practices and a commitment to actually changing the way they were working, not just tweaking around the edges. The patterns for teams in stage two: typically water-scrum-fall or hybrid; they're still trapped in a project management mindset. And it wasn't until teams actually gave up that way of thinking and moved to agile product management that we started to see them shift out of stage two. Realistically, this aligns really strongly with the Dunning-Kruger effect, where we see teams grow in competence and experience through stages one and two, get to a certain point, and their confidence peaks. They go, wow, we're agile teams.
We've got lots of symbols of agile, but realistically, unless they actually made the behavioural shift to self-organisation and agile product management, and started to use impact and outcome metrics, particularly from frameworks like evidence-based management, they got stuck in stage two. Teams that actually could make it through to stage five sort of got out the other side of the Dunning-Kruger curve and had really, really strong agile outcomes. So for stage three, the strongest behaviours here are agile product management, a customer mindset, and strong empiricism. For stage four, we saw teams start to adopt lean and flow metrics on top of, typically, Scrum. And as they started to implement agile OKRs or evidence-based management, these were teams that started to thrive and become hyper-productive. Key to stage four teams growing was systems thinking: actually understanding that they're not the be-all and end-all, but part of a system of people, processes, and tools, particularly at scale. As they started to support other teams and take a leadership perspective, we started to see those teams exemplify really strong leadership behaviour. These were teams, and these were scrum masters and really strong engineers, that became leaders for communities of practice around agility. And of course, for them, key to sustaining that growth was for management to provide them leadership opportunities. So today, with our client that asked this question, well, how do I measure it? This is what we're doing. We ended up developing an app, because for me, spending all this time doing manual statistical analysis in Excel takes a long time. So we've put it in the hands of our consultants and our clients. They enter in those snapshots: who's doing the analysis and the assessment, what the profile looks like, etc. They can do a quick or full baseline assessment and answer those questions.
And for the 85-question baseline, typically you do that first; that can take you about 20-30 minutes. After that, 15 questions is all you need in order to adjust the baseline. That then gives you a result of roughly where you are, compares you to teams of a similar age, and breaks it down by those four behavioural elements, self-organisation, agile values and so on, as well as deconstructing them into their sub-factors. From that, we've been able to set goals in order to improve those things, and then, based on the stage, attach recommendations to teams at that stage to help them move through to the following stage. And we've got a dashboard now that helps us provide a much bigger picture view of how the teams themselves are performing, because we're comparing behaviour; it's much easier, as a result, to compare the strength of those behaviours. We can do that across an agile release train or programme. We can also compare the average of that programme or release train to the average of other release trains, because we've got longitudinal data, in some instances spanning multiple years. We can feed those behavioural strengths into modelling to show forecast modelling of cost savings, as well as elements of delivery effectiveness. So, some final thoughts. The traditional metrics are great. They're a really good way to understand what has occurred: burn-up, burn-down, defects, stories completed, cumulative flow, and velocity. But if we want to measure behaviour, what we need is some kind of model to answer questions like: why are some teams more agile than others? Why are they more successful? What can help make teams more agile? What recommended actions are going to provide teams with the ability to make their outcomes and their impacts repeatable and scalable?
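To make the scoring step concrete: a hedged sketch of how Likert answers could roll up into a 1-200 composite, using the four factor weights quoted earlier in the talk (self-organisation 50%, agile values 20%, sprinting 10%, continuous learning 5%, with the remaining 15% organisation-specific and ignored here). The question-to-factor mapping, the renormalisation, and the linear scaling are all illustrative assumptions, not the actual Agile IQ algorithm, which also weighs the assessor's profile.

```python
# Hypothetical roll-up of per-factor Likert means into a 1-200 score.
WEIGHTS = {  # factor shares quoted in the talk (sums to 0.85; 0.15 is
             # organisation-specific and not measured here)
    "self_organisation": 0.50,
    "agile_values": 0.20,
    "sprinting": 0.10,
    "continuous_learning": 0.05,
}

def composite_score(factor_scores: dict[str, float]) -> float:
    """factor_scores: mean Likert response per factor on a 1-5 scale."""
    total_w = sum(WEIGHTS.values())
    # Weighted mean, renormalised so the unmeasured 15% doesn't deflate it.
    weighted = sum(WEIGHTS[f] * factor_scores[f] for f in WEIGHTS) / total_w
    # Map the 1-5 Likert range linearly onto a 1-200 scale.
    return round(1 + (weighted - 1) / 4 * 199, 1)

print(composite_score({"self_organisation": 4, "agile_values": 4,
                       "sprinting": 3, "continuous_learning": 3}))
```

Note how the weighting makes the score dominated by self-organisation, matching the talk's point that self-organisation, not sprint mechanics, carries most of the model.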
We need data models, and that's where we've gone with our Agile IQ tool, which we've been using quite extensively now with a lot of our clients. We're able to collect data, put it into the model, and provide some really deep insights into the strengths of some of these behaviours. And the implications for leadership and executives are really quite interesting. If you want these kinds of outcomes, if you want to deliver true value in terms of improved experiences, improved products, improved share price, improved internal capability, whether that's developing new products or supporting existing products to be even better, then to get those kinds of enterprise outcomes, managers have to change, the data model shows, as reinforced by literature and science that's nearly three, four, five decades old. We know that traditional manager-led teams are less effective in terms of productivity than self-managing product teams, so that shift increases agility. We know that promoting agile values, and managers leading by example, is an important aspect of that. Being value-driven, prioritising work by value, decentralising decision making: these things contribute to agile values. As for sprinting, the model showed that the structure of your agility contributes only 10% to agile outcomes, but it's still a significant part. It means short work cycles, sprinting essentially, with inspection and adaptation, applying empiricism, particularly from Scrum, works really well. Lastly, long-lived teams: not project teams, not assembling teams and disassembling them, but forming them and keeping them together. Some of our data shows it takes about three months for a team to learn how to work together. Why would you then get to the end of the project and disassemble them and put them on something else?
So keep the team together and, as we often say with our clients, move the work to the team, as opposed to giving individuals tasks and putting individuals on projects. Remember, agile product management was one of the key things, based on the model, to accelerate business agility from stage three onwards. And of course, a commitment to a continuous learning culture. Continuous learning helps teams get out of that Dunning-Kruger effect of "oh, we're awesome". As soon as they start to open up and learn that being able to sprint isn't the be-all and end-all of agile, or that Kanban isn't just visualisation, it's limiting work in progress, it's putting in flow metrics, those kinds of learnings help teams understand that just implementing Scrum or visualising work is not the end point. And management supporting that helps to push it a bit further. The strength of those four combined, the model shows, as does the research literature, is able to predict improvements in productivity, lower costs, and lower risk in terms of delivery. So for leaders, Scrum Masters, Product Owners, Release Train Engineers, all of these elements mean: don't treat agile ways of working as set-and-forget, an implemented methodology and we're done. And stay away from measuring just activity; we want to move beyond that. What our model shows is: do treat agile as a change in the way people work, the way they think about work, and the way they behave at work. And measure the behaviour. The strength of the behaviour is going to be the best predictor of capability, and of growing capability, in business agility, and of getting those strong outcomes of faster pivots, lower costs, and decreased risks.
For those of you who are interested in the white paper that we co-authored with Scrum.org, if you go to agileiq.com and click on "how it works", you'll find the white paper. If you search Scrum.org for Agile IQ, you'll also find it on their website. We've made this publicly available now: you can get a 45-day free trial where you can start to add in teams, do assessments, and see the comparisons of your teams versus others based on the strength of their behaviour. And you can download it from the Android app store as well as the iOS App Store, the Apple one, so you can pop it on your mobile and have a play with it if you'd like. We found that for our coaches in particular, having it on your mobile phone, being able to go to a team and take your assessment tool with you instead of being stuck at a computer, was one of the easiest ways to carry out assessments. For some Scrum Masters, for example, it meant they could do it themselves with just a phone, either screencasting the questions up onto a TV or reading the questions out themselves. So if you're interested in a subscription, there's 20% off if you use this code, and of course we're happy to do a demo and walk you through these things. Send an email to support at Agile IQ and we'd love to have a chat. I think we're done; we're at about 36 minutes. Time for questions, I guess. Yeah. Thanks, Mia. Thanks, Matthew. That was very impressive, and you can see the comments in the chat: people are very impressed. Mia has answered a couple of questions already and there are still four more. Do you want to take them right now? We have some. Yeah, I'll just summarise a couple of things. So Matt said that in the algorithm, to prevent Dunning-Kruger, we actually ask the person who's doing the assessment a few questions. We do look at certifications, and he did mention the CSM.
He wasn't saying that the CSM isn't a good cert, because many of our colleagues do those certifications. What we're saying is that the AI looks at whether you've also got a practitioner-level cert: so you've progressed and you've got lifelong learning. We were looking at whether people went beyond just that first cert to the more advanced certs, advanced scrum master and so forth; the AI gives you more weighting as being more experienced. We also looked at your level of experience and knowledge, not just certifications. So there are a few questions that we ask at the beginning. The other question people are asking is that it's not working when they go to agileIQ.com. If you put in www.agileIQ.com it will work; you've just got to put the www in front of it. Sorry about that, but yes, you can get there, or you can get to it from the Zen Ex Machina website. People took lots of screenshots, that's great. We will make the presentation available to you. We're data nerds, so we'd love to talk about data all day; we will hang around in the hangouts, and we'd love you to come and join us. But happy to take any others; there are lots of things in the Q&A, so we probably should address those, Matt. So: sustainable work for teams is another measure; was that considered? Yes. We looked at sustainable pace, in the decrease in overtime, as one measure of whether people were working sustainably rather than doing hero work. Did you have any other comments on that, Matt? Well, remember some of the questions we took out of the Agile Manifesto, and certainly working at a sustainable pace is one of those key questions. Once we put all those questions into the data model, what we were looking for was clusters of questions that showed us there was an emerging factor.
Sorry, things like optimising flow, managing products with agility, smaller work batches, dependability: a number of questions manifested themselves in these kinds of factors. Okay. So it's not a survey where teams self-assess; how do you gather the data? You literally go on your phone, and it's based on your profile, because now we've got a data model. Remember, the data model is based on expert assessment, which meant that the reliability, repeatability, and inter-rater reliability is very, very high. So when you first do an assessment, it's not the whole team filling out a survey; one person does the assessment. You could do that in a retrospective and simply read the questions off your phone, or you can also do it via the web portal: read the questions out, have a conversation, and come up with a collective response. It's based on statements, from strongly agree to strongly disagree with a particular statement. All of that then goes into the data model, and it looks at your team profile, the assessor's profile, as well as the questions; those are the inputs into the model. And then it will show you the strength of the behaviour against those four factors, and there are 23 sub-factors, I think, in total, for choosing improvements. And the input from the team, if you want to gather that? We ask the scrum master or the agile coach, with the team, to do that first initial baseline assessment. But then, once you've done that baseline, the next round only needs 15 questions, so sometimes you can use that in a retrospective: you can share a screen and the team can put it together. I've seen teams use that as a retrospective pattern. We also have scrum masters using it quite regularly as well. Recent case studies during this time?
Yes, we've continued to work with a lot of teams, gathering data during COVID and getting some really interesting statistics on team effectiveness that leverage off the Google Aristotle work. So maybe that's our white paper part two, where we talk about the more recent case studies, but we will try and share those as we've got them. Yeah, we're continuing to collect and analyse the data and see what it tells us.

Someone has mentioned the expert fallacy: how did we compensate for that? That's actually quite simple: we looked at inter-rater reliability. We had the same teams assessed by different assessors and then compared the results. What we found pretty early on was that by adjusting some of the questions, and putting in a lie scale as well, which is hidden inside the data model, we could compensate for many of these effects, and the AI helps too in assessing how an individual actually responds: is there a halo effect going on, for example? We looked at correlations for people who have things like a PST, because we had several PSTs doing assessments. That's a peer-reviewed certification, so their level of knowledge and experience has been peer reviewed, and we compared different raters against that level. As a result, the data model is able to adequately compensate for a number of those cognitive biases. We also had a lot of teams at scale, so if an assessor was an SPC, an SPC trainer or a CST, they were weighted higher as well; we've got a number of CSTs that have helped us test this. So we were looking at all of those things.

In the Q&A, just going back to that: how many organisations are using Agile IQ at this point? Look, I know it's over 100, but I don't have the exact figure. But we're getting subscriptions.
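Inter-rater reliability, as mentioned, means comparing different assessors rating the same team. A standard statistic for this is Cohen's kappa, which corrects raw agreement for chance; we haven't seen the statistic used inside Agile IQ itself, but a minimal stdlib-only version looks like:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Agreement between two raters' labels, corrected for chance agreement."""
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    # Observed agreement: fraction of items where the raters match.
    po = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected agreement if each rater labelled independently at their base rates.
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    pe = sum(counts_a[c] * counts_b[c] for c in counts_a) / (n * n)
    return (po - pe) / (1 - pe)
```

A kappa of 1.0 means perfect agreement, 0.0 means no better than chance; high kappa across assessor pairs is what justifies the "very high inter-rater reliability" claim.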
We only went live on the app store about two months ago; before that it was used just by some people who were helping us test it over the last couple of years. But yes, we've got over 100 organisations actively using Agile IQ at the moment, and most of them are quite large, because they're the ones trying to save on the cost of agile coaching. I guess we're doing ourselves out of a job with Agile IQ: they want coaches in there initially, but then they want the teams to be self-sustaining, and Agile IQ is almost their coach in a pocket that they can use for those tips and tricks. Then the coach comes back and does pulse checks and health checks further down the track to see how they're going.

Chandan asked in the Q&A about changing managers' beliefs as well: how do we change managers' thinking beyond just velocity, bugs and so on? That is working with the leadership; it's training, mentoring and coaching that leadership team. We always make sure that any of the transformations we're doing, and I know a lot of my colleagues do this as well, start with that leadership, because they are so critical to understanding that metrics need to be about outcomes. And outcomes are the language they talk all the time. We don't talk to them about velocity. What they want to know is: am I going to get a return on my investment? Is my market share going to go up? Have I got customers using more of this stuff? They naturally understand outcome metrics rather than activity metrics; it's just about having the discussion with them.

I know we've got four minutes to go and I'm trying to get through as many questions as I can. How will it help with lowering costs? Maybe that's one for you, Matt. Yeah, what we found was that as Agile IQ got higher, defects went down. That meant teams weren't spending so much time addressing issues with their product, and so they finished early.
Essentially, they were delivering more with less: the amount it cost to deliver the same features went down. Instead of doing feature development and then bug fixes, the time to deliver decreased, which meant the invested time to deliver a feature went down, hence cost savings. And the mean time to repair went down as well, so we looked at some of those other metrics.

Can we show some sample reports? No, because that's commercially confidential with our clients, and I've done my best to de-identify what I have shown you: those graphs are actual data from Agile IQ. Why is Agile IQ not... oh, sorry, just on the reports: once you do your team assessment, if you want to download the app and get the free trial, when you press finish assessment it says print to PDF and you get your reports straight away, in real time. You'll get a PDF of the assessment you've done, and it'll show you what your stage is, how you compare to other teams of a similar age, and what particular areas your team could work on. You can also set yourself some improvement goals in the app and track them, so it's almost like a Fitbit for your agile team; you can look at it every day if you want to.

Here we have questions related to technical agility. What we're after is the behaviours, not specific types of practices, although there are those kinds of questions in the full 85. For my friends who are programmers: we know that TDD and BDD are great practices; what we're after is the mindset behind them and what it leads to. That's manifested in one of the sub-factors in particular, though off the top of my head I can't remember which one.
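The cost argument above is simple unit economics: if fewer sprints go to rework, the same features ship in less invested time, so the cost per feature falls. With purely illustrative figures (not client data):

```python
def cost_per_feature(features_delivered, team_cost_per_sprint, sprints):
    """Unit economics: total team spend divided by features shipped."""
    return team_cost_per_sprint * sprints / features_delivered

# Same feature count, same sprint cost; fewer sprints lost to defect rework
# means fewer sprints overall, so the unit cost drops (figures illustrative).
before = cost_per_feature(10, 50_000, 6)  # higher defects, more rework sprints
after = cost_per_feature(10, 50_000, 4)   # fewer defects, fewer sprints
```

Here the hypothetical team's cost per feature falls by a third simply because two sprints of rework disappear, which is the mechanism behind "delivering more with less".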
And the final question: how do teams get up to speed again when a new team member joins, people leave and so forth? In our longitudinal data, and also in the project board that Matthew showed in the slides, when there was any change in a team it took at least three sprints to recover, and for some teams it took three to six months. That's why long-lived teams, and being very wary of making those changes too frequently, are so important to consider: there is that time to recover. I think we've covered as many as we could. Yes, Mia. Thanks, Matthew. Thanks, Mia. It was a very, very interesting session, impressive, and it gave a different perspective to all of us.