Welcome to today's Postgres Conference webinar, Episode 2, Accelerating Build Feedback. We're joined by Justin Reock, Chief Evangelist and Field CTO at Gradle Enterprise, who will discuss the first major productivity bottleneck that DPE addresses: the amount of time it takes for developers to get feedback about their build and the way that feedback is delivered. He'll cover why slow feedback is toxic to development, the acceleration and observation tools provided by Gradle Build Tool and Gradle Enterprise, and speeding up build times by using a combination of build caching and build scans. This is the second in a series of three presentations, and if you'd like to brush up on the first one, I'll make sure to include it in my follow-up email.

My name is Lindsay Hooper. I'm one of the Postgres Conference organizers, and I'll be your moderator for this webinar. A little bit about your speaker: Justin is an outspoken blogger, speaker, and free software evangelist. He has over 20 years of experience working in various software roles and has delivered enterprise solutions, technical leadership, and community education on a range of topics. Welcome. And with that, I'm going to hand it off to Justin. Take it away.

Thanks so much, Lindsay, I appreciate it. And thanks, everybody, for joining again for this second episode of our talk about developer productivity engineering, where we're discussing specifically how this practice uses data to improve the experience of developers. We're going to do a very quick recap of the first episode just to reintroduce the practice, and then we'll get into the nuts and bolts of what we're discussing today around build feedback times.

To set the stage for this practice, I like to come back to this quote from Eric Pearson, CIO of InterContinental Hotels Group: "It's no longer the big beating the small, but the fast beating the slow." That's the state of the industry right now. It's no longer the huge mega software corporations that are disrupting the industry; it's the agile organizations that can add features quickly and respond to market feedback fastest that are really winning.

The essence of this process, developer productivity engineering, is the ethic that software development is a creative process. Not a purely creative process; it's also a scientific process, but there's a flow involved. The scientific part of that flow is that we want to be able to run experiments against the computer: write down some code, get feedback about the code we've written, and let that feedback fuel our decision-making process. And we want to be able to do that quickly. But in enterprise software, project success has a tendency to make it harder and harder to maintain this creative flow, because in successful projects, a number of metrics will grow in ways that make it harder for developers to quickly get feedback about the work they're doing: the number of lines of code in the project, the number of developers contributing to it, the number of individual repositories where the code lives, the overall diversity of the tech stack.
So as we start moving from a single application to multiple layers of architecture, all of these things contribute to longer build times, longer test cycle times, and more and more impediments to developer creative flow, to the point that nowadays this is what the average developer's calendar might look like: we spend some time in the zone, fresh, starting our work, we're coding, and then we're waiting. We're waiting for a local build, and the build failed, so we're debugging the failure. Okay, we go to lunch, we come back, we code some more, we wait for our local build. Maybe we debugged it, now it works, but now we push to a CI pipeline, and maybe that build fails, and now we're debugging a CI build. So the essence of what we're trying to do with productivity engineering is to give developers hours and hours back in their week to work on valuable solutions.

And we know this isn't just our speculation. This is a very recent survey we ran looking for specific pain points among organizations we know are taking steps with developer productivity engineering and putting into practice some of what we're going to talk about today. These are the common pains they were feeling before they really started delving into the practice. Number one here: 90% reported spending too much time waiting on build and test feedback. Ninety percent; an overwhelming response. And it matches a lot of other industry data. That time spent waiting on builds is exactly what we're diving into today. There are several pillars to productivity engineering, and we'll revisit all of them, but today we're diving into this problem of build feedback, because we know it's a really painful one.

Next up is the inability to take a look at how builds are performing over time. Build feedback at one moment in time is one metric, but how does that feedback change over time? We already said that it's easy to introduce performance regressions into the developer experience. So what can we do in terms of observing trends and looking for outliers after we've taken some corrective actions and tried to make things better?

So let's talk about build feedback. Build feedback is the answer to the hypothesis we pose to our code while we're doing the scientific part of software development: I think I want the code to do this thing, so I'm going to write a piece of code to see if it makes it do what I want, build the code, and get feedback from the build as to whether it did what I thought it should. When this happens quickly, when we get feedback very fast, software development can be fun. It's still a creative process for us. When we get that feedback quickly and can work in a creative flow, acceleration technologies, by speeding up that build feedback, can make development fun. And data can help keep it fun: data and tracking the performance of those builds, tracking the time it takes to get feedback to each developer locally.
Not necessarily aggregated up, but watching the experience of individual developers: how much time are they wasting waiting for feedback from their builds, sitting at their workstations? Those are all data points that we can aggregate as part of this practice. By keeping track of that data, looking for trends in it, and keeping it within certain thresholds, we can keep software development fun, which is the essence of developer productivity engineering.

To tie this into broader trends in the industry, this is still just a constraints-based practice; it's just one that I think is a little more practical than some other process-based approaches. In DPE, developer productivity engineering, the "engineering" is definitely a verb: how are we engineering solutions to make developers happier, and keep them happier, by making sure they get feedback quickly, along with the many other constraints they face? This is a constraints-based approach, just like the other approaches you see here. We've identified bottlenecks to throughput in the process; in this case, bottlenecks that sit further left in the process than what you've typically seen with more recent practices like DevOps. There's no question about the origins of DevOps when you trace it back to theory-of-constraints work, things like The Goal, and forward to DevOps and Agile. You can take those theory-of-constraints-based processes and look at what they're trying to do: eliminate bottlenecks from the process to convert code into money, or code into features, faster. That's the essence of DevOps. DPE does the same thing; it just looks for bottlenecks in a different part of the SDLC than those typically targeted by DevOps initiatives. When we say that DPE is the next thing, we mean that literally: we can use acceleration technologies and data to address the next set of bottlenecks in software productivity, the ones caused by unreliable or slow build infrastructure. A build tool chain that is slow, unreliable, or both introduces a ton of productivity black holes that are simply not in scope for DevOps.

Today we're going to talk about the leftmost pillars of the solution. If you recall from DPE, there are a number of pillars to the solution, but today's talk is about slowdowns in build feedback: the time it takes for a developer to get feedback from the build itself. Test distribution is another feedback acceleration technology, but it's there to improve the time it takes developers to get feedback from the test cycles they're running; we'll dive into that specifically in the next episode, episode three. We're focused on build feedback in this one. We're also going to talk about build scans, so not just how to speed up the build, but how to speed up troubleshooting of that build. If you recall the calendar from earlier, that developer wasn't just dealing with slow builds; they were also dealing with slow troubleshooting of problems with the build.
Without the practice of developer productivity engineering, without something that makes it really easy to share the kinds of details that matter for troubleshooting a build, you can probably empathize with, or even recognize, certain troubleshooting rituals: copying and pasting console log output from Jenkins or some other build into Slack, trying to highlight the part of the log you're actually calling attention to, and hoping the build engineer can glean the problem from that console data. And what do they inevitably want after that? Architecture details. They want to know about the environment. So there's a lot of back and forth, question and answer, and discovery before debugging can even begin. Part of improving the build feedback experience for developers is speeding up the troubleshooting process as well, and we'll talk about build scans in that context. We'll also talk a bit about analytics, and about what kind of outliers to look for in the build performance metrics we're diving into today.

So again, these are all of the pillars of developer productivity engineering, but we're going to focus on parts of the leftmost ones and talk about improving build feedback times. Because fast build feedback means more developer joy. To us it's a very straightforward equation: the less time a developer spends waiting to learn whether what they've done is correct, the less their creative flow is impeded, the more they're able to produce, and the happier they are with their craft.

We can think about this in terms of teams as well, and about what practices that improve build times can do for the productivity of an entire team. It's a little striking when you start looking at the numbers. Think about team one: team one has 11 developers, a build time of four minutes, and, let's say, about 850 builds in a certain unit of time. The build time is the impediment to running a larger number of local builds. The first thing I want to point out is that four minutes doesn't seem like a build time that would be killing anybody, right? If you ask any Java development team, any Java EE team, how long their build takes, first of all, they probably don't know. That's the first answer you usually get: "We're not actually tracking that." Which is part of what we're talking about today; we should be tracking it. Those who are tracking it, if they give an answer like "it takes about four minutes to complete our build," will probably say it's fine, that it's pretty typical. And they're not wrong; by the status quo, that's pretty good. We even know people whose builds take hours. In the most extreme case, we've seen 20 hours for a clean build of a single project. So four minutes may not look too bad. But then look at what a team of six can do with a one-minute build: they're able to run, in that same unit of time, almost 200 more builds.
And we already know that the more often a developer team is able to build, the faster they're able to ship features. The more often they can cycle through hypothesis and experimentation, the sooner they arrive at the best code, the right code. So if this team of six can build significantly more just because of its build time, then in all likelihood you're looking at a team with fewer people that can ship features faster. But you're also looking at a team that doesn't know it has a problem. Team one, with the four-minute build, maybe hasn't asked the question we like to ask in developer productivity engineering, which is not "how fast is your build?" but "how fast could your build be?" When we roll this up to very large teams, say 100 developers running maybe 12,000 local builds per week, you really start to see significant time savings across the entire team. To make that concrete: if each of those 12,000 weekly builds could be three minutes faster (the four-minute build brought down to one), that's 36,000 minutes, or 600 hours, returned to the team every week, about six hours per developer. And you have to start asking some real questions, like what could these developers be doing with their time if they weren't spending so much of it waiting on the build process?

So build caching is a tool for very fast build feedback. The practice of developer productivity engineering prescribes pragmatic, technology-based solutions for the problems we suss out. We don't want to reach for process management or productivity management, which generally focuses on making sure developers crank out the right number of lines of code, or that teams match their claimed story point capacity. This is really about what types of technologies we can introduce to improve the tool chain so that developers don't face as many bottlenecks. Build caching is a tool for very fast feedback, and if it's a tool for very fast feedback, then it's a tool that brings us more developer joy. It was introduced to the Java world by Gradle in 2017.

Gradle already supported what we call incremental building. Acceleration has been important to the open source Gradle Build Tool since the beginning, and it was designed with incremental building in mind. What this means is that at any time, the inputs to an overall Gradle build, a set of tasks, are examined. When a bit of code changes, Gradle can see which task inputs that change affects, and any task whose inputs haven't changed doesn't need to run. You can go with the output of the previous run; the task is already up to date. That increment of the build has already been fulfilled, nothing has changed, and so there's no reason to run that increment again.

That's all well and good within Gradle builds. But what about clean builds, where you have to start over? Build caching takes it a step further and caches the output from those builds, so that even when we're running a clean build, if no code has changed that would change the actual bytecode output for that part of the run, we can still pull the output from cache rather than rerunning it. This is even more striking for Maven builds, which our solution supports for caching as well.
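To make that input/output idea concrete, here is a minimal sketch of a cacheable Gradle task in the Kotlin DSL. It's illustrative only; the task and property names are invented, not from the talk. Because the inputs and outputs are declared, Gradle can skip the task when nothing relevant changed, and with @CacheableTask it can reuse the output across clean builds and across machines.

```kotlin
import org.gradle.api.DefaultTask
import org.gradle.api.file.RegularFileProperty
import org.gradle.api.provider.Property
import org.gradle.api.tasks.CacheableTask
import org.gradle.api.tasks.Input
import org.gradle.api.tasks.OutputFile
import org.gradle.api.tasks.TaskAction

// Declared inputs and outputs are what drive both up-to-date checks and caching.
@CacheableTask
abstract class GenerateGreeting : DefaultTask() {

    @get:Input                      // part of the cache key: change it, and the key changes
    abstract val greeting: Property<String>

    @get:OutputFile                 // what gets stored in (and restored from) the cache
    abstract val outputFile: RegularFileProperty

    @TaskAction
    fun generate() {
        outputFile.get().asFile.writeText(greeting.get())
    }
}
```

If `greeting` hasn't changed between runs, Gradle can mark the task UP-TO-DATE within a workspace (incremental building), or pull its output FROM-CACHE during a clean build.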
This also gives us a framework for building a scaled build caching infrastructure that integrates with distributed CI and a lot of users, where hundreds or even thousands of developers contribute to the cache; we'll look at a picture of that in a moment. It's available for both Maven and Gradle and supports both local and remote caching. When we look at that diagram, we'll see what that buys us, but it's effectively what it sounds like: multiple developers can contribute to a shared, remotely distributed cache, which is very efficient.

Build caches are complementary to dependency caches, not mutually exclusive. So if you're looking at this and thinking, "We already do this, we've got Artifactory or Sonatype Nexus or something like that," no, we're talking about something different here. A dependency cache takes your dependency binaries, the jar files you rely on, like SLF4J or whatever you use with your app, and puts those jars in a repository that can be pulled locally, or through a content distribution network, or whatever is necessary depending on how distributed your team is. A build cache, by contrast, caches increments of the build: in the case of a Maven build, output from goals; in the case of a Gradle build, output from tasks. A build cache accelerates a single build building off of a single source repository. And these can be used in tandem. You definitely want a dependency cache; it's a good DevOps pattern to have an artifact repository living local to the rest of your build infrastructure. But what we're talking about here is per build, not for all dependencies.

The mechanism is pretty straightforward. Gradle tasks and Maven goals each have a set of inputs. From those inputs we generate a cryptographic key that is unique to those inputs; if the inputs were to change, the key would change. We take the output from a build and store it in the cache under that key. Whenever we run the build again, we create the same key from the inputs and do a lookup. If the key exists, in other words, if the inputs are cryptographically proven to be the same, we pull from cache. If they've changed, it's not a cache hit; we rerun and move on. In this way we get an extremely accurate cache that uses hashing to ensure the inputs are fully identical, so that if anything has changed at all, we know it's safe not to pull from cache, and vice versa: when the inputs have not changed, the outputs can be reused from a previous run.
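That lookup boils down to a content-addressable map. Here is a toy sketch of the idea (my own illustration, far simpler than the real implementation, which hashes file contents, classpaths, tool versions, and more):

```kotlin
import java.security.MessageDigest

// A toy content-addressable build cache: the key is a hash of all the inputs,
// the value is the increment's output. Purely illustrative.
class ToyBuildCache {
    private val store = mutableMapOf<String, ByteArray>()

    // Hash every input into a single SHA-256 key; any change to any input
    // produces a different key.
    private fun keyFor(inputs: List<ByteArray>): String {
        val digest = MessageDigest.getInstance("SHA-256")
        inputs.forEach { digest.update(it) }
        return digest.digest().joinToString("") { "%02x".format(it) }
    }

    // Run `work` only on a cache miss; otherwise reuse the stored output.
    fun loadOrExecute(inputs: List<ByteArray>, work: () -> ByteArray): ByteArray {
        val key = keyFor(inputs)
        return store.getOrPut(key) { work() }
    }
}
```

A cache hit means the inputs hashed to the same key, so the stored output is provably reusable; any changed input forces a real run, which then repopulates the cache.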
We also support the idea of a remote cache, which is really good for widely distributed teams, though it's a pattern you want to be thoughtful about. Imagine hundreds, maybe even thousands, of developers contributing to a remote cache. Let's think about the benefit first: anyone in the development organization can contribute cached objects that can then be used by everyone else in the organization, including CI. That means that if one developer over here has just run a new build of some branch, another developer over there who has never run that build before, who would normally have to populate the cache on that first build before benefiting from it, doesn't have to. They can pull cached results produced by the user who just ran the build.

Or, more properly, the best architectural pattern is this: when any developer checks code into CI, and CI runs a new build as a result of that commit or merge, the CI server populates the remote cache, and everyone in the organization reads from that cache. No one in the organization writes directly to the remote cache. That way we're not pummeling it; an overwhelmed remote cache, with things getting stored by too many users from too many branches, can actually slow things down. But if you have one governing entity pushing into the remote cache, the CI server, and everyone else is just pushing code into the CI server, it's effectively the same thing: the CI server builds the code just as it would be built locally and writes the results to the remote cache.

If that's a lot, we're going to demonstrate exactly this. I just want to point out that when we think about caching for build acceleration, we shouldn't think only about a single local build. We should look for opportunities to reuse cached objects and speed up builds widely across the entire organization. And then we should roll those metrics up, local builds, remote builds, CI builds, all of it, into a central aggregated source of data, so that we can look for outliers and figure out whether some people are experiencing longer build times than they should be, relative to the rest of the organization. We should also make sure the acceleration tools have as much opportunity as possible to accelerate builds, and this distributed remote cache pattern is one way to do that (there's a small configuration sketch after these numbers).

To give you an idea of how much this can speed things up, here are a few open source projects using the tooling and what they've experienced. These are Maven builds using Maven caching. The Apache Commons IO project: about one minute 23 seconds uncached, down to about four seconds fully cached. The Guava library: about seven minutes, down to about one minute fully cached. Spring Boot: 21 minutes down to six, and I believe that has actually gotten lower since. The TomEE build, which contains a lot of subprojects, for instance ActiveMQ for JMS and, I believe, Hibernate for ORM, among a number of other things that don't usually ship with Tomcat: down from an hour and 27 minutes to a 20-minute build using caching. These are pretty striking results. If you want to see Spring's results, they keep everything they do public on their DPE dashboard, and you can browse through it. One of the demos we'll look at later in this talk is our own DPE dashboard for the open source Gradle Build Tool.
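In Gradle, the "CI populates, everyone else reads" pattern from that diagram can be wired up in a few lines of settings configuration. A sketch under assumptions: the cache URL and the CI environment variable are placeholders for whatever your infrastructure actually uses.

```kotlin
// settings.gradle.kts
buildCache {
    local {
        isEnabled = true                           // every developer keeps a local cache
    }
    remote<HttpBuildCache> {
        url = uri("https://ge.example.com/cache/")  // placeholder URL
        // Only CI agents write to the shared cache; developer machines read from
        // it but never push, so the cache isn't pummeled from every branch.
        isPush = System.getenv("CI") != null
    }
}
```

With this in place, a first-time local build can still get cache hits, because CI has already populated the remote cache for everyone.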
We're going to look through some of the metrics we pay attention to in order to ensure, as I mentioned before, that outliers don't creep up and that regressions to build time don't sneak back in. But you can see that without build caching, versus with just out-of-the-box caching turned on, we saw a 6x decrease in the time it took to do a build. And of course there are ways to optimize this further. The out-of-the-box savings from caching are almost never the final answer: you can make the code more modular, you can improve or normalize the way you declare the inputs to your tasks, and so on. There are a number of ways to keep improving the acceleration even further through optimization. Again, the mindset is not "how fast is it going to be when we first hook up caching?" but "how fast can this build possibly be, widespread across the whole organization?"

All right, that's a lot of theory. Let's look at a demo. I'm going to demonstrate local Maven build caching running on my Mac. We'll then look at the code deployed to a CI server and trigger a build on that CI server, with the CI server feeding a remote cache. That will let me demonstrate the diagram I showed before, where remote users of the cache benefit even if they've never run the build before, even if they have no local cache set up, because CI has populated the remote cache for them.

Here I have a really simple Maven application, built from the Camel Spring Boot archetype. If you want to do this yourself, you can create a new Maven application from that archetype: create a new Java project from an archetype, search for "camel spring boot," and find it in the remote catalog. That's the archetype I started with, so this is just an open source project anybody could start from. What I've done is modify it to add the Gradle Enterprise Maven extension. This is a freely redistributable extension; you can add it to your project right now to start publishing build scans, which we'll look at shortly, and which are a free feature for Gradle builds too. We add that extension, and I have some specific configuration to hook up to the Gradle Enterprise server I'm using for this demo, so that we can look at the scans and so on. Other than that, the code is untouched.

First, I'll make sure my cache is clear locally. The cache lives in the .m2 directory, in a Gradle Enterprise directory with a build-cache folder, so I'm going to delete that folder. Then I'll run a Maven clean verify on this project, and this should execute a full run. Now, I picked this archetype because it builds quickly, but notice it pulled from cache anyway, because it's still reading from the remote cache on this server. So bear with me a moment while I clear the remote cache as well. It's a good example of how badly this thing wants to help you. Okay, I'm purging that now, and I'll have to clear the local cache again. Now we'll run this. Okay, now you can see it's running a full run.
It's executing all of our tests and everything. So even with very little effort, I'd had the remote cache hooked up and not purged. Now we've executed every single goal in the build, and it took about 10 seconds. I'm not going to show the scan yet; we'll look at scans in a moment. For now, let's just look at the caching behavior: eight goals in total, eight executed. No surprise there. When we run this again, it will be even faster, because it pulls from the local cache rather than the remote cache. Great: we only had to execute five goals, and we were able to pull three from cache. We'll look at which ones those were when we look at the build scan.

What I want to do now is change a little bit of the code. I'm going to come into this router code and change this logging tag to a different identifier, save, and rerun the build. I want to demonstrate that this is looking at a full cryptographic hash of the inputs. It noticed that I changed code, so it knows it needs to run the test harness again. But it was still able to pull one thing from cache, and when we look at the scan, we'll see that what it pulled from cache was the build of the test harness itself: even though we changed some code, we didn't change the test code, the harness that actually runs the tests, so that at least could come from cache.

Going back to a full build like this (and if this whole build scan thing is new to you, don't worry, we have a demo coming that looks at just that), we now see that we had three avoided goals, saving almost seven seconds in total. Against our overall goal execution time, that's about an 80% avoidance savings. This is a fast project, only about 10 seconds to build, but I want to point out that's roughly 80% savings out of the box. If you think about a very complex project with a lot of modules, you can see a lot of potential for savings here.

The last thing I want to show is how this translates to CI. I have a Jenkins server, and this project has been published to a Git server and set up in Jenkins. Let's clear the caches again: clear the remote cache, clear my local cache. Now back to Jenkins; let's run our build and look at the console output. We should see something very similar to the terminal build, running our tests just like we did in the terminal. Great: eight goals executed, nothing unexpected. Now, I cleared my local cache, so nothing will come from cache if I do a Maven clean verify locally, because we didn't push to the remote on that last build. I didn't quite do that in order, I apologize; let's do it the right way. We run the local build and run the Jenkins build, and we run the Jenkins build a second time so it has the opportunity to push to our remote cache; it had to populate its local cache first. So when it builds a second time, first of all it takes advantage of its local cache, which it did: you can see it pulled three goals from cache. But it should also have populated our remote cache, which I had just cleared, and it did, see? So we nuked the local cache here, right?
After the remote CI build ran, we were still able to pull from cache, the remote cache that CI had populated. Just to revisit what happened architecturally, it was this picture again: the CI build pushed into a remote cache; our local cache was cleared, but the remote cache had the build available for us, so we still got the time savings by pulling from the remote cache. So really, we find opportunities to make the build as fast as possible in every area we can. Even if for some reason the local build can't be fast that first time, because nothing is cached locally yet, there's still the possibility that the remote cache has been populated, and it doesn't have to be another developer who did it; it can just be a CI build.

All right. So faster builds and build acceleration are one part of more developer joy and less waiting on feedback cycles. But faster troubleshooting also eases developer focus. Context switching is toxic to the creative flow: having to switch from the zone where you're writing code into a troubleshooting flow, then into a local debugging flow or a waiting-on-build-feedback flow. All of these things are toxic to overall productivity. Being able to troubleshoot quickly really eases developer focus, and that's where build scans come in.

We just glimpsed a build scan; now we're going to dive deeper into it. Again, it's a feature that's available for free for Gradle builds and Maven builds. To revisit the configuration for Maven: all you need to do is add the Gradle Enterprise Maven extension to a file called extensions.xml in your project's .mvn directory, or configure it in your local user directory. That's it; once you add this (we'll see a sample file in a second), it just starts publishing build scans, by default to our public scans.gradle.com server. It's a free feature.

Now, what do build scans do? They provide a comprehensive and, I think more importantly, shareable summary of the entire build. We'll go through specific parts of it, but it breaks down a whole bunch of areas of the build: a nice searchable console log; individual failures in the build, broken down one by one; an overall timeline of what took place during the build and where it spent most of its time. It makes it really easy to spot bugs and failures, and it's fully self-service for the developer. They don't have to wait on a build engineer to go and pull log output from Jenkins to figure out why their build failed. They can just get the build scan data, start diving right in, create a URL from it, and share it with people. Again, we'll look at this in a demo. It's about sharing the build detail, but with context. Maybe there's a failure you're trying to diagnose or troubleshoot, but the build engineers need a whole bunch of other things to do that effectively: the console log, the dependencies involved in the build, which switches were turned on in the build tool chain. All of that is encapsulated in this one place, with all the details together.
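For reference, a minimal .mvn/extensions.xml registering that extension looks something like this (the version number is a placeholder; use the current release):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<extensions>
  <extension>
    <groupId>com.gradle</groupId>
    <artifactId>gradle-enterprise-maven-extension</artifactId>
    <version>1.18</version><!-- placeholder version -->
  </extension>
</extensions>
```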
To enable it, as we've already gone over, you just add that extensions.xml file. If you're using Gradle, you can simply pass --scan on the command line, and any Gradle build will publish a scan to our public build scan server. That's the scans.gradle.com site; if you need a reference, go to scans.gradle.com and there's a landing page that walks you through turning it on for Maven and for Gradle. And of course, you can always reach out.

So let's take a look at a build scan in a little more depth. We'll look at the build we just ran first, and then I'll demonstrate a Gradle one as well. Let's start with the familiar Maven application we just looked at and pull up the build scan we just ran. To recap, we put the extension in here. I'm going to show what this looks like if you just use the public server. This is our little event server; these credentials are no longer valid, so don't get any funny ideas, this is from ApacheCon. But I want to give you an idea of what this looks like when you're using our public build scan server, just so the details don't throw you. You can see here that I've pulled out our Gradle Enterprise server, so now this publishes the scan to our public scans.gradle.com server, which will ask you to accept the terms of service and then create a build scan for you on the public server. It's going to prompt you to activate, and that's fine. But instead of doing that, let's just take a look at the scan running on the event server. By the way, if you're a Jenkins user, the Gradle plugin does a really nice job of pulling the build scan link out for you, which I like.

So this is the scan from the build we ran previously, and we have a lot of information available in here: a full summary of everything that took place in the build. Any time you see this little link icon pop up, it means you can create a URL to that part of the scan. If, for some reason, I wanted to link to this particular part of the build scan output, this creates a unique URL that I can share with somebody else in my organization, or on Stack Overflow, or wherever; pop it in and it takes you right to that part of the summary. The entire console log from the build is represented here, and you can search it and narrow down by a particular goal. Maybe I just want to look at the test-compile section, or search by goal name instead and see only the Surefire tests that were executed. I can easily search through and filter down to parts of the log.

Let's look at a longer build, one that didn't pull everything from cache, so we can see more of that console log output. Here's a longer one. Say we just wanted to look at some of the test execution, the Surefire tests that ran; we can drill down to that part. Say we wanted to link to the assertion test output; again, I can create a unique link from here, give it to somebody else, and it takes them right to the part of the log I was trying to call out.

Now, the overall performance of the build. We saw in this case that we got a nice cache savings. What actually happened? Three of our goals came from cache. That's great. And what wasn't cacheable in this one?
We can see what didn't get cached and use that to figure out why. In this case, the goal wasn't supported, but maybe there are other things we could change to make the build more cacheable. So we can use this to ask: are there areas of the build that could be even more performant? We can look at which tests executed; in this one, the tests were pulled from cache, so they didn't actually execute. We can see which projects and which dependencies were used, even cascading, transitive dependencies. We can see which plugins were used in the build, with details on each plugin, and we can link to these as well. We can see which switches were set in the build tool chain itself, and if you want to learn more about a switch, this links you directly to its documentation; here, the Maven documentation for that switch. So, bottom line: lots of information contained here that can be shared with somebody else in the organization to make troubleshooting faster, and to make it easier to get to the parts of the build they're trying to look at.

Let's take a quick look at a Gradle scan as well, and then we'll move back into the presentation and talk a little about insights. This is the Spring Kafka project, and I've already got a lot of stuff cached here, so in the interest of time, we'll just run a build with a scan. And we had a failure; that's okay, I want to investigate that failure as part of this scan. So it gives us our build scan, and here we are. A few things are going on. First of all, I want to point out these custom values, which are really cool. Inside the build script, you can add any number of tags, links, and custom values, anything you want, to the build scan output itself, in the summary; I'll show a configuration sketch for this in a moment. What's useful is that any of these can become tags. I can make an external link in my custom values directly to the Git commit this build was related to; if I want to find out which Git commit ID this build included, I can do that here as well; and if I want to indicate something about the environment where this ran, I can add that too. All of these are very handy for attaching additional data.

In this case we had a failure, and we can see that the failure was related to this Asciidoctor plugin. We can look at the failure history across the whole organization and see how many users were affected by this failure; you can see it was me, several times today, while I was setting up this demo and running into this Asciidoctor failure. I can take the failure details from the scan and, again, link directly to them. If anything deprecated in the build might be contributing to the failure, that would come up here too. Notice that the switches shown are now specific to Gradle rather than Maven: for instance, I can see that the build cache switch is turned on, and if I want the Gradle documentation for that switch, I can link to it right from here. So this is the same idea we saw in the Maven build scan, just with the details you'd see in a Gradle build scan instead: everything organized by project, plugins, and so on.
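Here's that configuration sketch for the custom tags, values, and links shown in the scan, in the Gradle Kotlin DSL. The plugin version, commit value, and URL are illustrative assumptions, not the Spring Kafka project's actual setup.

```kotlin
// settings.gradle.kts
plugins {
    id("com.gradle.enterprise") version "3.13"   // placeholder version
}

gradleEnterprise {
    buildScan {
        tag("LOCAL")                                         // shows up as a tag in the summary
        value("Git commit id", "abc1234")                    // arbitrary key/value pairs
        link("Git commit", "https://github.com/example/repo/commit/abc1234")
    }
}
```

Because these values land in the aggregated scan data, they also become things you can search and filter on later, which is what makes the failure and trend views possible.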
Being able to link back to that failure history is a good segue into the next part of the presentation as we start to wrap things up, or wrap up this section, I should say. Build scans are really handy for giving us clues, details, and data that help us accelerate troubleshooting a single build. But we want to aggregate that data up and look for trends in it so that we can proactively solve problems for developers, maybe even help them avoid failures altogether. If we can look at a full failure history and see that lots of people have been affected by a failure, then we know that if we spend some time on that failure, we can ensure other developers never waste time running into it in the first place. We can also track test failures over time to determine whether certain tests are flaky, and we'll see that in our last demo.

All right, let's wrap things up. To close out this part of the presentation, here's a quote that I think is funny but also really insightful, from Yogi Berra, catcher and philosopher: "You can observe a lot just by watching." It seems like common sense, but we know from the data that a lot of people aren't watching. I mentioned earlier that if you ask most teams about their build time, the usual answer is that they don't even know. They're not watching, not paying attention; they don't know how long it takes a developer to build locally. And maybe they do find out, and it's four minutes, and they think that's not a problem, until they ask what we've learned to ask today: how fast could it be, and what would that mean for the team's overall productivity?

The other thing is that fixing these performance problems can't be a one-shot deal. We said before that project success can have a negative impact on project performance. That means that what works really well today may not hold three months from now, as we introduce more code, dependencies, repositories, and developers. Infrastructure changes, caching, CI agents, new annotation processors, compiler settings, code refactors, new office locations that change bandwidth considerations: all of these will change the time it takes local builds to run. So we have to keep tracking this data. Data is also essential to keeping builds fast.

So we'll discuss a couple more of these pillars here. We covered the build cache as an acceleration technology to speed up overall feedback time. We talked about build scans as an enabling technology to speed up the troubleshooting process. Now we'll look at failure analytics, trends, and insights as a way to make sure we keep fast builds fast. Our recommended practice is that the build metrics you just saw coming out of those build scans, like the time it took to complete a build, should be centralized so that we can trend them. That allows a centralized developer productivity engineering function to continually improve productivity for the business. Build feedback is automatically provided to the DPE team for each build, whether local or remote, across the entire organization.
So when those build scans run, they aggregate data up to a central server that we can then trend against. And again, we're tracking the local experience for developers, the remote experience, and the CI experience. We also want to run analytics on all of the failures. We want to look at the results of tests, which we'll dive into much more in the next episode, but also at build results and failures. Where we can, we want to eliminate failures before developers even encounter them. What's the absolute best way to fix that type of bottleneck? Make sure the developer never even sees the failure in the first place, because it's been dealt with proactively.

So we'll close this out by taking a quick look at how the open source Gradle Build Tool team uses this technology, the trends and analytics, to identify things like failures in tests, and flaky tests. Right now, this is aggregating all of the build failures, local or remote for that matter, encountered by the build team across the organization over, in this case, the last seven days. So what are we doing? We're aggregating the individual failures encountered across multiple builds. We can see, for instance, that this one failure has affected 366 builds just in the last week. And if we expand this timeline out to 28 days and give it a moment to aggregate, we can see that this failure has affected almost 1,340 builds. We can click into these to see who specifically was affected and when it happened. We can see this is a TeamCity agent that's been running into this failure. We can then drill into the specific build scan for one particular occurrence and start looking at all the failure details. The bottom line is that we can aggregate these common failures across the entire organization and see what's sinking time for our developers, because when they run a build and it fails, they have to troubleshoot it, figure out what failed, and start over again. Make no mistake: each one of these failures is a productivity black hole. We can now see them trending across the entire organization.

And we can do the same thing with tests. We can look at test results, again across the entire organization, here 28 days' worth. We can look at individual tests, see the ones that fail most often, and dig into them. We can see how they've been trending recently right there in the dropdown list, which is very handy compared with diving into each individual test to see how it has performed over the last month. I'll back out of that for now. We can also see how many of these tests were flaky. A flaky test is a test that executed one way for one developer, maybe passed for one developer and failed for another, without a relevant code change. We can see a lot of flakiness here, and these are huge productivity black holes. We won't dig into tests now; this is a nice little preview for our next episode, where these are the types of analytics we'll be diving into.
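To make "flaky" concrete, here is the classic shape of such a test, invented purely for illustration (it's not from the Gradle build): its outcome depends on timing rather than on the code under test, so it can pass on one machine and fail on another with no code change.

```kotlin
import org.junit.jupiter.api.Assertions.assertTrue
import org.junit.jupiter.api.Test

// A deliberately flaky test: the assertion depends on wall-clock timing,
// so it passes on a fast, idle machine and fails on a loaded CI agent.
class PaymentTimeoutTest {
    @Test
    fun `completes within its time budget`() {
        val start = System.currentTimeMillis()
        Thread.sleep((10L..80L).random())             // stand-in for work with variable latency
        val elapsed = System.currentTimeMillis() - start
        assertTrue(elapsed < 50, "took ${elapsed}ms") // sometimes true, sometimes not
    }
}
```

Aggregated scan data surfaces exactly this pattern: the same test, same code, different outcomes across builds.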
For now, just to go back here: we have the ability to use analytics to eliminate failures that developers would otherwise encounter in their builds, failures that force them to start over, run the build again, and dig into what happened. We can find the common ones across the organization and eliminate them before developers even encounter them. So that is one of many ways we can use data to make developers happier.

To wrap things up, a reminder: be a hero to your organization. You can introduce this practice to your business today. It is a vendor-agnostic practice, as you'll see looking through the e-book, and the enabling technologies we looked at today, the Gradle caching and the build scans, are free for you to use right now. So try one: try a free Maven or Gradle build scan. Also check out our docs if you want, and look at build comparisons; we didn't get into those today, but they're another good way to use data to accelerate the troubleshooting process. And of course, always feel free to reach out to us at gradle.com. Thanks so much for today.