Thanks, everybody, for coming to this session. Looks like I'm your gate to lunch, so we just have to get through this together. What I'm looking to do here is start with a few slides — I only have about 20 to get through — and then we're going to hop into a kind of live demo scenario for the bulk of our time today. So I'm not expecting to just show you slides; hopefully everybody appreciates that. Just to get started: the next couple of slides will probably not be news to any of you, but they set the table for the genesis of some of the things I'm going to be talking about here. There's a little bit of an intro into the impetus behind digital transformation. Obviously, everything we're doing in the so-called digital world is software defined. We have application code, we have pipelines as code, and now the network infrastructure is all software defined too. Everything is truly software driven, right? And because all of this software is driving our business, it needs to work perfectly, right? It needs to work more perfectly than the text on this slide — I'm just now noticing a typo in the phrase "work perfectly" that wasn't there when I looked at the slide earlier. Just another example of where we actually need things to work right. Now, some of the things I'll be walking you through here are techniques that we've actually employed inside of Dynatrace. They're techniques we've taken on the road for the last couple of years, pulling one piece of technology out and putting another piece in as the implementation details change.
But a lot of this is based on things we found internally as we've undergone our own digital transformation. I think that's a little bit interesting, because you might ask: how does a vendor that's already a software company digitally transform? Shouldn't they be digital to begin with? And you'll find that, for some vendors that have been around for a while, maybe they didn't actually start off that way. So, a little bit of background on who Dynatrace is. I actually think the first bullet point here is out of date now, since the Gartner MQ came out just a couple of days ago — I do believe we're now at a nine-time presence in the Gartner Magic Quadrant. And if you look at the analyst reports, we're the market leader in terms of market share for APM. The new technology I'm showing you today, the new Dynatrace platform, has seen some pretty explosive growth. So this is a great opportunity for some audience participation. How many people actually know the name Dynatrace? Right, a great number of hands raised. How many of you knew the name Dynatrace before you came here? Oh good, good, okay. And how many people were familiar with AppMon versus the new Dynatrace? All right, cool. So let me ask that a slightly different way: were you folks more familiar with AppMon than the new Dynatrace? You might have been AppMon users, is that correct? Yeah, okay. A lot of what I'm talking about here is the transformation that has happened to us — powered by, or maybe even necessitated by, our transition from the AppMon product base and the R&D processes there to the new Dynatrace, right?
With the new Dynatrace, we're delivering software via processes that are probably very similar to the processes your organizations are trying to get to. One of the key figures there: with the new Dynatrace, we have 26 major releases a year, right? In the past, with the AppMon products, we would have four. That's a pretty big shift, and it's actually pretty hard to do. We began this journey about six and a half years ago. Dynatrace likes to call our current state "NoOps." And I know everybody's got a little bit of buzzword overload from appending "ops" to one thing or another. The way we define our NoOps journey is that the folks who might have traditionally been ops people, and then transitioned into a DevOps role, have now transitioned into a pure dev role, right? So we don't really have people who fulfill the former ops type of role anymore. The remaining ops-ish folks are predominantly responsible for flipping feature flags, right? These are folks where we basically have a Slack channel and we say, hey, I need this feature flag enabled on a certain Dynatrace tenant, and they go do that for us. And the folks who might have been responsible for that more DevOps-y type of role — building pipelines and things like that — those folks are now essentially 100% part of the sprint teams, right?
And where we're trying to go is this vision of self-driving IT. I'm not actually super well versed in how they define the levels of automation in cars, but I believe what we're trying to get to is that fully hands-off level where, when a developer pushes code and that code is good, it's going to get to production automatically, right? One of the metrics we're actually trying to get to at some point — both ourselves and a lot of our customers — is we'd like to see code make it into production an hour after it's been committed, right? That's a big, aspirational goal we're working toward. And the idea is that you remove humans from this process and automate everything with pipelines, like what I'm going to show you in a little bit. If we eliminate humans from the process as much as possible, we can eliminate many of the mistakes, right? It's all about detecting problems and healing them automatically, which allows us to deploy faster and maybe take a little bit more risk in production. One of the reasons we started taking this story on the road is that these types of methodologies have actually been pretty successful for us. If you look at that little bar there about the number of production bugs — mind you, the date on the slide is perhaps a little out of date, as it represents 2017 — in the past, before we adopted a lot of these processes internally, a lot of production bugs were actually reported by our customers, right?
That's not really the greatest place to be. Ideally we'd like to find those bugs ourselves, right? It makes things easier on the support people, and developers don't have to get on a call at midnight because somebody found a P1 defect, right? The sooner you find these things, the less time it takes to fix them — it costs less to fix problems earlier in the life cycle. And obviously this EC2 number is probably woefully out of date, because we've had something like 750 to 1700% growth in the SaaS platform; I think the number of EC2 instances now might actually be somewhere in the 10,000 range. So what is the challenge about getting to this point? What we're trying to do is deliver better software faster, right? That sounds like a pretty noble goal. But one of the things that we've found — and perhaps a lot of you have found this as well — is that just taking your app and putting it in a container, or just taking your app and putting it on PCF or what have you, doesn't necessarily mean that you're cloud native, right? It means you have a cloud native application or a cloud native delivery platform, but it doesn't mean your processes are cloud native as well. And as we've been building out an offering we call ACM, Autonomous Cloud Management, we started surveying our customer base, right? We found that the vast majority of folks are actually not truly cloud native yet in terms of how we've defined that. The 95th percentile of our customer base can actually get a piece of code into prod two days after it's been committed. But at the median, it takes two and a half weeks, right?
How many business-impacting deployments happen? That's issues actually detected in prod — something that negatively impacted revenue or the like. Our 95th-percentile customer has only seen one out of ten deployments negatively impact the business. But the median is three out of ten, right? And when we look at how many times we have to patch something we've deployed to production — how many times do we have to deploy hotfixes? At the 95th percentile, that's a nice zero. That's a pretty good number. But the median customer is more like three out of ten production deployments needing a hotfix, right? And then when we talk about how much time it takes from identifying a problem to actually fixing it: the 95th-percentile customer can find a problem and fix it within four hours, but for the median customer you're talking almost five days, right? So when we start talking about the measured results of implementing some of the techniques I'm going to cover here, we're looking at a 75% reduction in production incidents, right? Because we're going to have a higher level of quality before we get to prod, by detecting problems earlier and addressing them earlier. And one of the really cool big numbers is a 97% reduction in deployment time, right? By taking advantage of some of these capabilities, you're able to deliver software into production more rapidly. And obviously, with that more rapid deployment process, you can significantly speed up the release cadence. And when we say this 26 number, that's 26 major releases, right?
A lot of folks who are doing continuous delivery or deployment really well will count a release as a single CSS change or something like that — something that doesn't necessarily constitute a major change. When we talk about this, we're talking about a major release. Maybe not necessarily major in the strict semantic-versioning sense, but we're talking about incrementing that middle number in your semantic version. We're not talking about patch releases, right? You might have hundreds of those a day. We're talking about being able to deliver a major release on a more frequent basis, right? So I'm going to try to go through some of these a little bit quicker because they're a little bit dry. When we take a look at the unbreakable continuous delivery pipeline, what we found is there's kind of a pyramid of capabilities here, right? And as I start to talk about some of these things, my personal take is to reorder this pyramid just a little bit. This pyramid was created by some of my colleagues, and I actually think some of these things are a little more impactful, or a little easier to do, than my colleagues do. The base of the pyramid — the thing best described as "you must be this tall to ride this roller coaster" — is having a consistent monitoring strategy across all of your different deployment methodologies, right? Having the same tooling across all of the environments — dev, stage, and prod; having the same tooling across on-prem and public cloud; having the same tooling across your bare-VM strategy as well as your container strategy. That's the base of everything.
This particular slide places shift-left as number two; I actually think shift-left is number three and shift-right is number two, right? I'm not sure how familiar everybody is with those concepts, but with shift-right, the thing that I feel is really important is taking metadata about your build — metadata about what it is you're actually trying to release — and bringing that into your monitoring tooling as an event associated with the environment you're monitoring, right? That's really important for triaging issues, to understand the context of what was deployed into the environment. And shift-left is taking some of those metrics and techniques that you might have been using in production monitoring and shifting them earlier in the life cycle, right? So when I start talking about what we like to call quality gates, that's the concept of shifting left. And the reason I personally reverse these is that getting build metadata and events into your monitoring is really low risk and really high reward. I feel like that's something everybody should be doing one way or another, and it's not that hard to do. The automated quality gates, as I walk you through them, are a little bit tougher, right? They can be a little riskier for people, maybe a little scary, right? Because automated quality gates are the kinds of things standing between continuous delivery and continuous deployment. The automated quality gates are one of the things that can help you get there — but continuous deployment is actually super scary, right? So that's why I invert this pyramid a little bit.
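To make the shift-right idea concrete, here's a rough sketch of what pushing build metadata from a pipeline into the monitoring tool might look like as a Concourse step. This is illustrative: the resource name and parameter names are assumptions standing in for whatever your monitoring integration actually expects.

```yaml
# Hypothetical job step: after deploying, send the release context to the
# monitoring tool so the event lands on the monitored entities.
- put: dynatrace-resource          # custom Concourse resource (assumed name)
  params:
    deployversion: "1.2.3"         # the version we just shipped
    APP_REPO: spring-music         # path to the app repo inside the container
    # anything else known at build time works as event metadata:
    # commit SHA, CI build number, who triggered the build, a link back to it
```

The value is that when someone triages an incident later, the deployment event sits right on the timeline next to the anomaly, with the context of exactly what changed.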
So I've got a nice slide here that helps illustrate this — and one of my marketing guys just left the room, which is funny, because we were talking about another way to visualize it, and the example I came up with is the Concourse pipeline I'm going to walk you through. So it's very possible this slide may be replaced in the future with a visualization of a Concourse pipeline, but it does a pretty good job. Basically, the stages we're talking about here: the first thing you do is take your deployment into your staging environment and give the context of that deployment to your monitoring tool, right? The second stage is validating what it was that you released into stage, and you're going to do that with an automated quality gate — I'll talk in a little bit about what's actually driving that quality gate. And the third step is, once that quality gate has been validated, we're going to push to prod, right? My example is going to push to prod with a blue-green deployment. That's going to be a little bit scary, but we're going to push to prod automatically. Once we've actually validated prod, then we flip the switch on the route for the blue-green deployment. Hopefully, our customers are happy. If they're not happy, the idea — what we're working towards now here at Dynatrace — is automated remediation actions, right? If we detect a problem, and your monitoring solution understands the root cause of that problem, then we can take action to remediate it automatically. We're near the end of the slides now, so hopefully everybody's as excited as I am. One of the things I want to mention — my colleague John is in the back of the room there.
He did a little bit of a hands-on with what we're calling our Keptn project, or Keptn initiative. The whole idea is that as we started to deliver autonomous cloud management with our customer base, we found there was a need for an opinionated approach to creating the pipelines for continuous delivery, right? The Keptn project will basically take your deliverable artifact, whatever that might be, and automatically create some of the pipelines that I'm going to show you today. It's going to automatically create those, automatically create the quality gates, and then automatically create deployment models based on what you put in some YAML guide files, right? So maybe you want to do canary for this particular release and blue-green for this other one — you provide those guidelines in some YAML, and Keptn goes out and builds all of that for you. And I would be remiss if I didn't include this super fun slide about the Dynatrace UFO. The reason I included it is that the UFO was actually one of the ways I got started working with Concourse myself. The UFO is this crazy little thing here. One of the crazy developers in our Linz office found that we didn't have enough information radiators — for example, there wasn't an information radiator above the coffee machine. So developers would commit code, break everything, and nobody would know until they got back to their desks. So we started putting these information radiators around the office to continuously emit the status of our pipelines. I thought that was a really cool idea. It's basically a little 3D-printed device with a Raspberry Pi inside, and the whole design and firmware is open source, available via this URL here.
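The "deployment models in YAML" idea could look something like this sketch. To be clear, this is purely illustrative — it captures the spirit of declaring a strategy per stage, not the actual Keptn file format:

```yaml
# Illustrative only: declaring deployment strategy per stage as YAML,
# in the spirit of what Keptn generates its pipelines from.
stages:
  - name: staging
    deployment_strategy: blue_green   # validate against a quality gate first
  - name: production
    deployment_strategy: canary       # shift a slice of traffic, then promote
```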
And when I saw that device, and I saw the pluggable resource functionality inside of Concourse, I thought: I'm going to make a Concourse resource to talk to the UFO, right? And once I saw how cool and easy that was, I started diving into things in a little more depth. This next one is a pure marketing slide with a little bit of fancy animation, talking about some of the benefits around Dynatrace, but I don't really want to dive into that very much. And there are a couple of nice logos up here of folks that have started to implement some of the techniques I'm talking about today. One of the reasons I'm doing this talk is that I'm personally a really big fan of Concourse — and Pat and Jane know this; they're sitting in the back of the room there. I really like the way Concourse functions and the way Concourse handles deployments, and I really wish more people were using it for apps, right? Well, I guess I can't say all of us, but a lot of us are already using Concourse to deploy our PAS and BOSH foundations. I'd really like to see more people using it for their applications as well. So the example I'm walking everybody through today is deploying Spring Music with Concourse — not sure if anybody's familiar with Spring Music. I see the darn thing while I'm sleeping, because I've worked with Spring Music way too much. All right, first things first: we've got this nice pipeline, and it's super small because my resolution here is bad, so I've zoomed in a little bit, all right? So how many people here have actually seen a Concourse pipeline? Oh, awesome, lots of hands.
So are any of you actually using Concourse? Yeah — and I suspect the vast majority of you are using Concourse for the platform stuff, right? And is anybody using Concourse for their apps? Great, that's awesome. I'm sure folks are happy to hear that. Once again, I'm a big fan and I really wish more people were doing this. When we take a look at the pipeline here, we get this nice visualization of what's actually happening, right? The glue that binds everything together — the main solid line that goes through everything — is the repo containing my code base. When we look at how that's configured, it acts as a trigger: when I make a change to my code base on GitHub, it automatically triggers everything that goes through the pipeline. We start with unit tests, pretty standard practice. If the unit tests pass, we're going to build our binary — in this case it's Spring Music, so we're building a Spring Boot jar. And once that's built, we deploy it. Crazy, right? Actually deploying the artifact we built. This first step is deploying into the stage environment of my PAS foundation. After that deploy, we're going to automatically execute a load test, and then automatically validate that load test — this is the quality gate I talked about, right? One of the cool things about Concourse and how these visualizations help: this solid line here shows that once I've done my deployment, I'm also letting Dynatrace know that I've done the deployment. That's happening automatically with that Concourse resource.
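The shape of the stage half of the pipeline just described could be sketched like this — the resource and task names are mine for illustration, not necessarily the ones in the real repo:

```yaml
resources:
  - name: spring-music-repo
    type: git
    source:
      uri: https://github.com/...   # the app repo (URL elided)
      branch: master

jobs:
  - name: unit-tests
    plan:
      - get: spring-music-repo
        trigger: true                # a push to the repo kicks off the pipeline
      - task: run-unit-tests
        file: spring-music-repo/ci/tasks/unit-tests.yml

  - name: build-binary
    plan:
      - get: spring-music-repo
        trigger: true
        passed: [unit-tests]         # only build commits whose tests passed
      - task: build-jar
        file: spring-music-repo/ci/tasks/build.yml
```

The `passed:` constraints are what chain the jobs into the solid line you see in the visualization: each downstream job only runs against versions of the repo that made it through the jobs before it.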
And in this validation step, we're executing a comparison via something that we here at Dynatrace call Monspec. It will feel very familiar to those of you who might have been talking to Pivotal about Indicator Protocol — it just so happened that both of us were working on almost the same thing at the same time. Monspec is JSON based; Indicator Protocol is YAML based. But in both cases, what we're doing is defining the guidelines for our application as code, and checking them into our repo alongside our code, right? As developers building the application, this is a way for you to document your NFRs, your non-functional requirements — document them, and then validate them automatically. And you might be thinking: okay, response time, things like that — I'm already validating that with my performance tests. What's really neat here is you can actually start to validate some of your architectural requirements as well, right? When I show you what Monspec looks like, we'll be validating a value called "runs on" — how many instances do I have? If I've asked for two instances, am I really getting two? If I've asked for four, am I really getting four, or am I only getting one? And if I don't get four, I should probably fail. And then we're also going to validate what the service talks to, right? Hey, this particular service should be talking to four other services. I think that's really important, because we do see a lot of regressions where somebody created a service and maybe forgot to use a standardized configuration, right?
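Monspec is JSON, so a stripped-down sketch of the kinds of checks just described might look like the following. The field names here are illustrative — treat them as my shorthand rather than the exact Monspec schema:

```json
{
  "SpringMusic": {
    "comparisons": {
      "StagingToProduction": { "scale_percent": 20 }
    },
    "perfsignature": [
      { "metric": "response_time_p90", "upper_limit_ms": 250 },
      { "metric": "failure_rate", "upper_limit_percent": 1 }
    ],
    "architecture": {
      "runs_on": 4,
      "talks_to": ["catalog-service", "auth-service", "db", "cache"]
    }
  }
}
```

The interesting part is the `architecture` section: the performance signature is what a load test would already give you, but instance counts and dependency lists are facts the monitoring tool knows that a load test can't check.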
And they might have one service that, instead of talking to four services, is somehow talking to six or twelve or forty, right? It's important to validate some of your architectural NFRs as well, and that's where including this APM data in the mix can be really helpful. As we scroll to the right of the pipeline, you see that we've basically repeated that process for prod, right? The cool thing about Monspec is we can define different comparisons to make, and different data elements to track, depending on the environment. If, for example, we weren't as concerned with "runs on" in stage, then we don't have to validate it there — you can define those things on a per-environment basis. But once again, there's this validate-prod step. And when everything's done, I'm going to promote the new release in prod, which is basically automatically flipping the route, right? So hopefully that's pretty clear to everybody. Now — assuming I can actually swipe the right way — we're going to take a look at what that pipeline actually looks like. Being that this is a deep dive, we're going to be walking through the line items of the pipeline here, so hopefully that's okay with everybody. I'm not going to start with the groups, because that's not really as important. One of the things we define here at the top of our pipeline is our resource types, right? Those are the things we're going to call out to later to do some work for us. And, assuming I don't ramble on too much, I'd actually like to show you what a Concourse resource looks like. They're really fun to build — maybe I'm weird, I don't know, but I had a really good time with that. You can see the two resources here.
Sadly, I didn't bring my UFO with me, but I do have the UFO resource defined here, and then I have the Dynatrace resource. The Dynatrace resource is basically what enables that shift-right functionality. Resources are Docker images that take an expected payload and do something with it. My resources are not necessarily taking advantage of all the available functionality — I have some ideas for improving them in the future — but mostly they just take some information and do something with it. We can see here, obviously, we've got a Git repo with the information in it. A cool thing here: if anybody gets a picture of this and takes a look at the GitHub repo for spring-music here, I have all the pipelines there, so you can take a look — the pipelines are all there, along with all the tasks that make this up. All right, we've got a semver resource here, which is hopefully pretty familiar to everybody; this is how we handle our versioning. A cool thing about Concourse is that every time you run a task, it's in a completely new container. That's actually super useful. I'm not sure how many of you have run into weird issues with the file system on your Jenkins workers having unexpected things on it, or your Jenkins workers having unexpected processes running, so you don't get a nice repeatable result. One of the really cool things here is that every time a task fires, it's in a new container. That's probably the thing I like the most, but it can sometimes be a little difficult for folks to wrap their heads around: any time you need something, you need to explicitly define it so that it comes along; otherwise it's not there, right?
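For anyone who hasn't used one, a typical semver resource definition looks like this — the standard Concourse semver resource backed by a file on S3, with placeholder bucket and key names:

```yaml
resources:
  - name: version
    type: semver
    source:
      driver: s3
      bucket: my-pipeline-artifacts     # placeholder bucket name
      key: spring-music/version         # file holding the current version
      access_key_id: ((aws-access-key))
      secret_access_key: ((aws-secret-key))
      initial_version: 1.0.0            # used if the file doesn't exist yet
```

Jobs then `get` this resource (optionally bumping a component) and `put` it back when they've consumed a number, which is exactly what this pipeline does around its build step.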
My pipeline is actually building the release artifact and putting it out on S3 — you can see that defined here, with a little bit of pattern matching going on. This pipeline was originally based on a reference example. And then you can see that I've got my stage and prod foundations defined for Cloud Foundry. What's nice is that those can be really different things — they could be entirely separate foundations, or, in my case, they're just different orgs, right? Then I need to define a couple of elements that the Dynatrace resource needs. When you're interacting with Dynatrace's APIs, you need an API token, and you obviously need to know which Dynatrace tenant you're going to interact with. The cool thing there is that, based on the URL, the resource automatically detects whether you're on Managed — which is our on-premises solution — versus the SaaS solution. It detects some of that for you based on the formatting of the URL, right? One of the things this pipeline does is take a snapshot of performance data associated with your release and post it up to Dynatrace. Dynatrace has this concept of a custom device — functionality we originally created to store data about things like F5 load balancers: things we want to monitor, but that don't have an agent, because an agent doesn't make sense there. So we're going to take that snapshot and upload the data there, so we can chart all of those things in the same place later if we want to. Another nice thing about Concourse is the groups. This next piece is a step that's not actually run every time as part of the pipeline — it ran once.
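The resource configuration being described — an API token plus a tenant URL — would be declared roughly like this. The `source` field names belong to my custom resource, so treat them as an assumption rather than a documented schema:

```yaml
resources:
  - name: dynatrace-resource
    type: dynatrace-resource        # custom resource type registered earlier
    source:
      apitoken: ((dynatrace-api-token))        # credential pulled from the
                                               # pipeline's secret store
      tenanthost: mytenant.live.dynatrace.com  # SaaS-style URL; a Managed URL
                                               # has a different shape, which is
                                               # how the resource tells them apart
```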
But the cool thing is, since it's part of the pipeline, if something happens to my environment, I can just fire this task off again, and it will always run in a nice repeatable way. So now we start to look at the nitty-gritty tasks of the jobs, right? Obviously we start with the unit test. It's going to get the contents of our repo, which is the source code. You can see here, as I mentioned before, this is kind of key — this is what makes sure the pipeline runs every time we make a change; that's the trigger. And then here is where we define which tasks we're going to run in this step. I'm probably not going to walk through all of them; I want to show the contents of the tasks when we get to some of the more interesting ones. Cool. Now we've moved on to our build steps. We've passed our unit tests, so now we're going to build our binary. If I showed you the contents of that task, you'd see that it's basically just running the Gradle build, with some timeouts in case the build takes too long for some reason. But here's the neat part. We're getting the contents of the repo again — as I mentioned before, the cool thing about Concourse is that every time one of these tasks runs, you have to explicitly define what's going to be there when it runs. So we need to go back and get the contents of our repo again. It does cache it, so it's nice and fast, but in order for that stuff to be in the container — in order for us to depend on it being there — you need to explicitly define that you're getting it. We're also going to get that semver file: there's a file sitting on S3 that has the version number in it. We're going to get that, and we're also going to increment the patch version, because we're building a patch right now.
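The gets just described — the repo plus the version file, with a patch bump — are expressed in the job plan like this (standard semver resource params; task names are mine):

```yaml
plan:
  - get: spring-music-repo
    trigger: true
    passed: [unit-tests]        # only commits that survived the unit tests
  - get: version
    params: { bump: patch }     # fetch the version, incremented as a patch bump;
                                # nothing is written back to S3 until we 'put'
  - task: build-jar
    file: spring-music-repo/ci/tasks/build.yml
```

Note that `bump: patch` on a `get` only affects what this job sees; the bumped number becomes official only when the job later puts the version resource.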
And now is when we start to actually do some puts. So everything else has been getting data; now we're actually going to do something with that data. So we're going to take that artifact that we built — the jar — and we're going to shove that up on S3. We don't have to define everything about that again; it's already defined at the top of the pipeline. And then we're also going to put a tag up on GitHub. So this release, the source code at that point in time, is actually going to be tagged on GitHub as a release. And then obviously, since we incremented the version number — we incremented the patch — we need to go back and put that file back up on S3. I think that should make sense to everybody. All right. So now we've got a build artifact. So now we're really starting to cook with gas, or magnets, induction, whatever. So now that we've actually got that artifact, we want to start doing something with it. If we just have artifacts, we're not really going to have much of a fun time, and there's going to be nothing to validate. So we've obviously got to go back and get the semver number. We've got to get that release artifact. And then I have a step here to do my blue-green prep and prepare some of the tags that Dynatrace needs to automatically identify things. So I'm going to dynamically manipulate my manifest. And then I'm going to tell Dynatrace, hey, there's a new release. And here is where I define some of the information that Dynatrace needs to represent that release, that deployment. So there's a couple of different things that happen here, and I'll flip over and show you the Monspec file in a moment. As I mentioned before, Concourse doesn't have anything there unless you explicitly add it.
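That "hey, there's a new release" notification is worth a quick sketch before going on. This is my own hedged illustration of a deployment event attached by tag, assuming the v1 events API's `CUSTOM_DEPLOYMENT` shape; the deployment name, tag key, and values are placeholders, not what the actual resource sends:

```python
import json

def build_deployment_event(version, tag_key, tag_value):
    """Build an assumed CUSTOM_DEPLOYMENT event body, attached to any
    SERVICE entity carrying the given environment tag."""
    return json.dumps({
        "eventType": "CUSTOM_DEPLOYMENT",
        "deploymentName": "spring-music deploy",   # placeholder name
        "deploymentVersion": version,
        "source": "Concourse",
        "attachRules": {
            "tagRule": [{
                "meTypes": ["SERVICE"],
                "tags": [{"context": "ENVIRONMENT",
                          "key": tag_key, "value": tag_value}],
            }]
        },
    })

event_body = build_deployment_event("1.4.3", "app", "spring-music")
```

The attach-by-tag part is the piece that matters here: the event lands on whatever service entities carry the tag, rather than on a hard-coded entity ID.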
So since I am including some of the metadata around my pipeline, as well as the metadata around my Monspec, in my app repo, I need to tell this task where that will be in the container file system. Because obviously, for Spring Music, it's in the Spring Music folder — we have all the contents of that repo in there. But if I took this same concept and implemented it for another piece of technology or another application, that path would be different. So I do need to actually give that path to the resource. Because we do some funny things with something called a pipeline definition JSON, so that Dynatrace can pre-populate some of that information for us. And then this step here is basically doing the cf push for us. So we don't have to shell out and do the cf push ourselves; it happens automatically. So now we start to look at the load tests. So here we've actually got the load tests that are running. Notice here, nothing in the pipeline is defining how the load test is actually executed — that's in the task. So I think now is probably the time to show some tasks, so you can actually see what a task looks like. So we look at this load test task, which is where we define what it is that we're actually executing. Once again, you'll see that I've got a definition of a Docker image. And you'll see that this is expecting some inputs. Because, once again, nothing is there unless we tell it to be there — if I didn't have any of this stuff here, all I would have would be whatever the vanilla contents of that Docker image are. And this is asking for a couple of different parameters, because otherwise it wouldn't know what it needs to execute a load test against. And then you can see here, this is the script that we're actually calling. Cool. So now we look at: what does the task actually look like?
So the cool thing here is it kind of looks like whatever you want it to look like. So this one is a shell script. It's actually going to look for a flag to tell it whether it's production or not. Because in production, I'm doing a blue-green deployment, so I need to figure out whether I want to talk to blue or whether I want to talk to green. Otherwise, I'm just going to run a load test against this particular URL. And then here it's basically just a call out to Artillery to go run our load test. Artillery is actually the tooling that was part of the sample. But this could be JMeter or whatever you might want to have here. I don't know if anybody actually works with Artillery on a regular basis, but I was actually reasonably impressed with what I was able to do with it. And one of the things inside of the Artillery config that actually helps Dynatrace make sense of it all is that we add some headers. This is really all you need to make Dynatrace aware that those are load test requests — actually adding these headers to those requests — which is a nice, concise way to do this. It's a pretty simple test, but you can see it's only like 16 lines. So let's go back to my editor, close my other pipeline, and go down a little bit. So now we're going to start to look at some of the more interesting tasks. So now I've got a build that has been deployed to my staging environment. I've executed some load tests against that environment. Now I'm going to pull some data down and take a look at executing some comparisons. So you can see here, once again, it's got to do that get-current-app-color task, because that determines which color is live in the prod environment. But when we look at this one, this is kind of the nitty-gritty of the quality gate. This is validate via Monspec. So what I need to do here is define what comparison I'm making.
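The header trick is small enough to sketch. Dynatrace recognizes load-test traffic via an `x-dynatrace-test` header of semicolon-separated key=value pairs; the exact key set I'm using below (SI, TSN, LSN, LTN) follows the documented convention but treat it, and all the example values, as assumptions rather than what the talk's Artillery config contains:

```python
def load_test_headers(script_name, step_name, run_id):
    """Build the x-dynatrace-test header that flags requests as load-test
    traffic. SI = source, TSN = test script name, LSN = load script step,
    LTN = individual test run (assumed key meanings)."""
    pairs = {"SI": "Artillery", "TSN": script_name,
             "LSN": step_name, "LTN": run_id}
    return {"x-dynatrace-test": ";".join(f"{k}={v}" for k, v in pairs.items())}

headers = load_test_headers("spring-music-load", "browse-albums", "build-42")
```

In Artillery itself this is just a static `headers:` block on the scenario, which is why the whole test stays so short.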
And this is actually giving it a path to the contents of the JSON file. And then I need to give it some information on how to select the entries. So when we look at what's actually happening in this task, the nice thing here is you'll see that we don't have any curl in there. There's no raw request out to the APIs. One of my colleagues, Andy Grabner, created a Dynatrace CLI in Python. So one of the cool things here is, once again, these tasks can be any arbitrary body of work that we want to execute. So in this case, I'm actually just running this Python-based CLI. I'm telling it to configure itself with the information on how to connect to Dynatrace. And then I'm going to tell it what it means to validate. So I'm going to tell the CLI that it needs to do a Monspec pull-compare. Here is our Monspec file — I'm going to show you that momentarily. Here is the pipeline information. And then it's going to take this variable that was defined back in the pipeline, and it's going to say, OK, I want to do this comparison, and I'm going to do it for the last five minutes. I'm going to get that output and redirect it to a file. And just because I like to look at pretty things, I'm pretty-printing it. Because then when we go back and look at this in Concourse, it's actually going to give us that pretty-printed output of that step, along with the color coding and everything. It's actually pretty slick. And then this right here is really just our validation. Because the actual Python CLI is not going to give us an exit status if it's detected a violation, we need to create that exit status ourselves. But it's a simple matter of evaluating whether or not that particular value in that JSON body is greater than 0. So let's take a look at the Monspec file. There's a couple of different things that we do here.
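That "create the exit status ourselves" step comes down to something like this. In the actual task it's a shell one-liner over the CLI's JSON output; here's the same logic as a Python sketch, where the `totalViolations` field name is my assumption about what the comparison output contains:

```python
import json

def violations_exit_code(result_json: str) -> int:
    """Return a non-zero exit code when the comparison result reports any
    violations, so the pipeline step fails and blocks promotion."""
    result = json.loads(result_json)
    return 1 if result.get("totalViolations", 0) > 0 else 0
```

A task script would end with `sys.exit(violations_exit_code(output))` (or the shell equivalent), which is all Concourse needs to mark the step red and stop the downstream jobs.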
So one of the things is that, rather than hard-coding a link to a specific service entity in Dynatrace — with entity IDs that tend to be kind of fluid — we're actually just giving it a key-value pair in tags that we want to look at. And Dynatrace has this ability to dynamically create tags based on environment variables. So these are just environment variables that happen to be in the app manifest. We're going to take those, and we're going to turn those into the tags that we'll then be able to query Dynatrace with, to figure out which entities we want to fetch data from. So you can see here I've obviously got an entry for stage. I've got an entry for production in its entirety. And then I've got tags that are color-based for the blue and green production deployments. The next thing that we do in this JSON is start to define the comparisons that we want to make. So this is actually pretty freeform. You can compare staging to production, or production with staging. And you can say, hey, I know that production is faster than staging — you can build what we used to call a fudge factor right into the Monspec definition and include that type of criteria when you're validating. So you can see I have a whole bunch of stuff in here. I don't necessarily use all of these, but they're there for reference if I ever want to change something. So when we talk about the guardrails that we're actually looking to validate via the Monspec comparison, that's the performance signature. So here's the performance signature in my definition. And these timeseries values are elements of Dynatrace API responses. So service response time — that's pretty self-explanatory. We have the ability to collect the different metrics that represent the service response time. So you can see here we're actually looking at the average. Here we can look at the 90th percentile.
It's almost entirely arbitrary — I could change that to the 99th percentile. One of the things that I do when I'm building these pipelines, or when I'm helping my customers build pipelines, is I will actually define upper and lower limits. Normally, in production with a tool like Dynatrace, you let Dynatrace detect your baselines for alerting. But your stage environment usually doesn't have enough traffic for anomalies to be detected automatically, because there's no steady-state traffic in that environment. So what we do is we define some of that criteria right here in the Monspec. So this is, once again, where I talk about codifying those non-functional requirements — documenting them as part of your pipeline, as part of your repo, so they just sit there along with the code. And then rather than interacting with your performance engineering team or something like that, this is something that you, as a developer, define and check in as part of your source. Then we start to look at things like failure rate and requests per minute. These are all pretty standard things. But this is where I think some of this starts to get really interesting. So these two relationship calls are things that query what Dynatrace calls the Smartscape API. Smartscape is our topology map of the environment. That's where we understand who's talking to whom and how many times they're talking, and so on and so forth. So this is one of the things that I really, really like about this concept: validating some of the architecture of the application. That's who's talking to it, how many instances of it are running, and who it is talking to. So the "to" relationships are who's calling it; the "from" relationships are who it's talking to. I think these are some really interesting things to validate. If we just sat here and talked about requests per minute and response time, you get that already out of your performance testing tools.
It's some of these architectural validations that can be really impactful for us and really helpful. We also have the ability to define key requests, but I'm not actually using that functionality. So — the push-to-prod looks exactly like the push-to-stage. We're gonna go through this whole process of notifying Dynatrace. We're gonna do a load test again. We're gonna validate that again. And then we're gonna remap the routes. So I'm going to flip back over to my browser, because I wanna show you how that validation output can actually be kind of interesting, even in a super small pipeline, right? Like I mentioned before, Concourse is just gonna give us the standard out of whatever it is that we're running inside of our task. So what's really neat here is, by pretty-printing that, it's actually gonna take all the nice pretty colors coming out of jq and let us see this, right? And you can see that as we scroll through here, we have no Monspec violations, right? But I think I actually have enough time that I should break it for you. It takes a few minutes to actually run, so I just pushed a change up to my repo. I have done very bad things to Spring Music: I have a configurable thread sleep in there, because it's nice and easy to demo with. And what's gonna happen here — it takes a minute or two, but, well, yeah, you can see that it's already been triggered. It detected that I made a change to the code base; in this case, I just altered the manifest to adjust the environment variable that defines how long that thread is gonna sleep for. And it's gonna go through this build process, then it's gonna be load tested, and then this validation is gonna fail, right? But it's gonna take us about five minutes or so to get there.
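While that build runs: the whole Monspec idea — static thresholds for stage plus architectural checks from the topology — can be condensed into a toy sketch. The field names below mirror the concepts from the talk, not the real Monspec schema, and the numbers are the demo's values (note the response time limit is in microseconds, matching how the CLI reports it):

```python
def validate(snapshot, spec):
    """Toy Monspec-style gate: check metrics against static upper limits,
    and check the to/from relationship counts against the expected
    architecture. Returns a list of violation messages (empty = pass)."""
    violations = []
    for metric, upper in spec["upper_limits"].items():
        if snapshot["metrics"][metric] > upper:
            violations.append(f"{metric} above {upper}")
    # "to" relationships: who calls this service; "from": who it calls.
    if len(snapshot["to_relationships"]) != spec["expected_callers"]:
        violations.append("unexpected number of callers")
    if len(snapshot["from_relationships"]) != spec["expected_callees"]:
        violations.append("unexpected number of downstream calls")
    return violations

spec = {"upper_limits": {"response_time_avg_us": 400_000},  # 400 ms
        "expected_callers": 1, "expected_callees": 2}
snapshot = {"metrics": {"response_time_avg_us": 3_000_000},  # ~3 s, bad
            "to_relationships": ["gateway"],
            "from_relationships": ["db", "cache"]}
```

Here the response time blows past the limit while the architecture stays intact, so the gate fails on exactly one count — the same shape of result the demo is about to produce.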
So I think while this is happening, it's actually a really great time to pause and ask some questions, because this is the point where we're just waiting for a box to turn colors. So are there any questions? I didn't lose all of you — lost a few of you, but I didn't lose everybody. So are there any questions here? Does anybody see the validity of the workflow? All right, somebody's gotta break the ice here. Yeah, so it's actually really funny you asked that, because the concepts that you see here were originally developed by my colleagues against Jenkins, right? And against whatever the hell the Amazon pipeline tool is. So what I really personally like about Concourse is that whole concept of: every time you do something, what's there is only what you said should be there, right? Because literally every single time my colleagues have created a workshop against Jenkins, there is always some crap that's there that they didn't expect to be there, right? It does weird things with renaming workspace directories and stuff like that. You end up having odd, inexplicable permissions issues and you're like, what is this about? And it's just because some directory was left hanging around. And the way you fix that is you go and rename the project, which creates a new directory for it. Here you don't need to do that, because every single time you run something, it's in a net-new container. One of the things that was really hard for me when I was getting started with Concourse was understanding that there really wasn't a lot there by default, right? It's a very abstracted thing. But then once I realized, wow, this is really abstracted and I can literally do anything I want in a task, I'm like, this is amazing, right? I love automation, and I would like to build Concourse pipelines that just automate my entire life away so I can kick back all day.
And theoretically you can do that, because this is not limited to software delivery — it's not limited to delivering your applications; you can do so many different things with it. This is my real Concourse environment, right? So I have a pipeline here that's actually deploying PAS. I've got a pipeline here that's actually deploying my PKS environment, right? And then the other thing that I needed to do is: my colleagues manning the booth need to have something to demo. So I have a service that I've actually been demoing this afternoon, where I have a front end and a Java application in PAS that are talking to two services that are in PKS, right? And we've broken that service. So — this was just the way that I chose to do it for being here — I have one pipeline that actually deploys the good version, and I have another pipeline that deploys the bad version. So what I can do is deploy the bad version and let it sit there for a couple of minutes so Dynatrace will lose its mind. And then a couple of minutes later, I can deploy the good version again. And then what will happen is the Dynatrace problem will resolve itself, because it's no longer occurring, right? And this pipeline — one of the things that I wanted to drive home for everybody is, you'll notice this pipeline actually doesn't do the quality gates, but it does use the Dynatrace resource to tell Dynatrace that the build took place, because Dynatrace will take that information and include it in the root cause analysis, right? And this is, like, if everybody leaves here with one idea, right? One thing that you can take back with you and do: whatever your monitoring solution is, assuming it can do something with that data, make it aware of what's actually happening in the environment — give it more context, right?
Because one of the cool things about Dynatrace is we use all of that context to help you find out what the root cause of the issue is. So if you have that context available to you, just push it into your monitoring tool. So those are a couple of the things that I've found. You know, when I actually looked into building against the resource itself, there's a predefined payload of what that looks like. But it's pretty easy to reason about, because it's just JSON. Everything that I'm doing is shell scripts — but it doesn't have to be shell scripts. You saw one of my shell scripts was actually calling a Python CLI, right? You could interact with the Dynatrace CLI directly — just a one-liner — and put that in the task. So that flexibility is what I really like, and why I have a lot of fun working with this stuff. Sometimes it's interesting when things go wrong, but the cool thing about that is everything's a container, right? So if something did go wrong, I can actually get into that container. So there's the fly intercept command — well, hijack, I guess, is actually the alternate name for it; intercept is the official one, but hijack is more fun to type. So I like being able to go in there and explore the container to find out more information about what it is that happened. Like, you noticed that I have a lot of different places where I was actually specifying the repo name, and how I got there is I found out things weren't exactly where I thought they would be. So I had to go in there, and having that capability let me make sure that things were where I expected them to be. I think that whole idea of "nothing is there unless you tell it to be there" is probably the biggest stumbling block, right?
So that's still running, and it'll run for another minute or so. And then it does take about two minutes to get to the next step. So we've still got another couple of minutes — and I still have all you folks for another five minutes, which I'd love to use unless you say otherwise. But yeah, another question. Yeah, so the question was: the Python CLI that I'm using for the quality gates — is that generally available? The answer is yes. That Python DT CLI is a fully open source project that my colleague Andy Grabner created. That's github.com slash Dynatrace slash dynatrace dash cli. The original use case behind the DT CLI was actually to be a reference implementation of Dynatrace API calls, right? Nowadays people create these kinds of things in Go, but Andy was a little bit more familiar with Python. The only bad thing about the DT CLI, at least that I ran into — for anybody that deals with Python on a regular basis — is that it's based on Python 3. So when you're on a Mac, you don't really have Python 3 out of the box, and you have to deal with pyenv or... you know what I did? And this is probably the best thing that I've ever read on Twitter, ever: I think it was Jessie Frazelle who was talking about using Docker containers for CLI commands. So when I'm working with the DT CLI locally, it's always in a Docker container. I have a Docker container that I've created that sets up all the prereqs for the DT CLI, and I just run it from there. And then I don't have to worry about pyenv or any of that. That's another fun takeaway for everybody, I guess. Yeah, so this is gonna wait a little bit — it'll wait a second or two just to make sure that all the data has flowed into Dynatrace. All right, let's see where we're at.
So as we scroll down here, you'll see that we had two violations, and if we scroll up, we have a failure in response time. So we were expecting 400,000 — for whatever reason, the Dynatrace CLI actually uses microseconds, not milliseconds, so you'll just have to pretend to do the math with me. So it was supposed to be 400 milliseconds, and it was actually more like three seconds, which is kind of bad. And the same thing happened to our average as well. So both of these are violations, right? We encountered two Monspec violations, we're failing our build, and then you can see here that push-to-prod didn't happen because we failed, which aborts our pipeline. But that same validation process could also have been used to catch architectural violations and things like that — it's just a little bit tougher to demo. All right, well, we are actually about done — it's 12:08. So if there aren't any other questions, I thank everybody for your time. Anything else? Thank you.